"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Electrocardiogram (ECG) signal classification plays a critical role in the automatic diagnosis of heart abnormalities. While most ECG signal patterns cannot be recognized by human interpreter, can be detected with precision using artificial intelligence approaches, making the ECG a powerful non-invasive biomarker. However, performing rapid and accurate ECG signal classification is difficult due to the low amplitude, complexity, and non-linearity. The widely available and proposed deep learning (DL) method has explored and presented an opportunity to substantially improve the accuracy of automated ECG classification analysis using rhythm or beat feature. Unfortunately, a comprehensive and general evaluation of the specific DL architecture for ECG analysis across a wide variety of rhythm and beat features has not been previously reported. Some previous studies have been concerned with detecting ECG class abnormalities only through rhythm or beat feature separately.</ns0:p><ns0:p>Methods. This study proposes a single architecture based on DL method with one-dimensional convolutional neural network (1D-CNN) architecture, to automatically classify 24 patterns of ECG signals through both features rhythm and beat. To validate the proposed model, five databases which consisted of nine-class of ECG-base rhythm and 15-class of ECG-based beat are utilized in this study. The proposed DL network applied and experimented with varying datasets with different frequency samplings in intra and inter-patient scheme.</ns0:p><ns0:p>Results. Using a 10-fold cross-validation scheme, the performance results had an accuracy of 99.98%, a sensitivity of 99.90%, a specificity of 99.89%, a precision of 99.90%, and an F1-score of 99.99% for ECG rhythm classification. Also, for ECG beat classification, the model obtained an accuracy of 99.87%, a sensitivity of 96.97%, a specificity of 99.89%, a precision of 92.23%, and an F1-score of 94.39%. In conclusion, this study provides clinicians with an advanced methodology for detecting and discriminating heart abnormalities between different ECG rhythm and beat assessments by using one outstanding proposed DL architecture.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Globally, heart abnormality deaths are projected to increase to 23.4 million, comprising 35% of all deaths in 2030 (World Health Organization 2016). The clinical symptoms, electrocardiogram (ECG) pattern analysis, and measurement of important cardiac biomarkers are the current heart diagnostic cornerstone <ns0:ref type='bibr' target='#b34'>(O'Gara et al. 2013)</ns0:ref>. However, such diagnosis is based on the invasive laboratory test and requires specific tools, cost, and infrastructures, such as trained clinical staff for inspecting blood and performing assays and a hematology analyser with biochemical reagents <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020)</ns0:ref>. For this reason, such assessments are difficult to use in remote healthcare monitoring or developing countries <ns0:ref type='bibr' target='#b29'>(Makimoto et al. 2020)</ns0:ref>. Analysis of ECG patterns could help with early detection of life-threatening heart abnormalities and is considered for diagnosing patients' health conditions into specific grades, which can assist clinicians with proper treatment <ns0:ref type='bibr' target='#b39'>(Siontis et al. 2021)</ns0:ref>. 
ECG measures the electricity of the heart, by analyzing each of electrical signal, it is possible to detect some abnormalities. In such conditions, ECG should allow continuous and remote monitoring.</ns0:p><ns0:p>Although the acquisition of ECG recordings is well standardized, human interpretations of ECG recordings vary widely. This is due to differences in the level of experience and expertise. To minimize these constraints, computer-generated interpretations have been used for various years. However, because this interpretation is based on predetermined rules and the limitations of the feature recognition algorithm, the interpretation results do not always capture the complexities and nuances contained in the ECG <ns0:ref type='bibr' target='#b39'>(Siontis et al. 2021</ns0:ref>). Based on this, the ECG by itself is often insufficient to diagnose several heart abnormalities. In myocardial infarction (MI), for example, is due to ST-segment deviation and may be occurred in other conditions such as acute pericarditis, left ventricular hypertrophy, left bundle-branch block, Brugada syndrome, and early repolarizations <ns0:ref type='bibr' target='#b45'>(Wang, Asinger, and Marriott 2003)</ns0:ref>. Due of this, automatically diagnosing MI using a ruled based inference system from conventional ECG machine has a low reliability and by practice cardiologists are unable to diagnose it only from ECG record <ns0:ref type='bibr' target='#b9'>(Daly et al. 2012)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020)</ns0:ref>. Furthermore, the traditional methods for diagnosing heart abnormalities specifically from a 12-lead ECG are difficult to apply in wearable devices <ns0:ref type='bibr' target='#b42'>(Walsh, Topol, and Steinhubl 2014)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Cho et al. 2020</ns0:ref>) and a wide variability in ECG morphology between patients causes major challenges.</ns0:p><ns0:p>The heart abnormalities analysis by using ECG signal processing can be conducted by using rhythm and beat feature. In the previous studies, such research has been proposed with several method. Deep learning (DL) is one type of artificial intelligence approach that can learn and extract meaningful patterns from complex raw data and recently has begun to widely used to analyze ECG signals for diagnosing an arrhythmia, heart failure, myocardial infarction, left ventricular hypertrophy, valvular heart disease, age, and sex with ECG alone and produce good result <ns0:ref type='bibr'>(Darmawahyuni et al. 2021)</ns0:ref> <ns0:ref type='bibr' target='#b29'>(Makimoto et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b3'>(Attia et al. 2019)</ns0:ref> <ns0:ref type='bibr' target='#b15'>(Hannun et al. 2019</ns0:ref>) <ns0:ref type='bibr' target='#b8'>(Kwon et al. 2020(a)</ns0:ref>) <ns0:ref type='bibr'>(Kwon et al. 2020(b)</ns0:ref>) <ns0:ref type='bibr' target='#b49'>(Yildirim, 2018)</ns0:ref> <ns0:ref type='bibr' target='#b22'>(LeCun, Bengio, and Hinton 2015)</ns0:ref>. DL performs excellently over a relatively short period of time. The sophistication of DL Manuscript to be reviewed Computer Science has a much better ability to feature representation at an abstract level compared to general machine learning. 
The DL model can extract a hierarchical representation of the raw data automatically and then utilize the last stacking layers to gain knowledge from complex features to the simpler ones <ns0:ref type='bibr' target='#b17'>(Khan and Yairi 2018)</ns0:ref>.</ns0:p><ns0:p>In the previous study, the ECG signal classification based on heart rhythm can be conducted with several features morphology of ECG signal like presenting ST-elevation and depression, T-wave abnormalities, and pathological Q-waves <ns0:ref type='bibr' target='#b2'>(Ansari et al. 2017)</ns0:ref>. Moreover, a variety of ECG rhythm features, such as the R-R interval, S-T interval, P-R interval, and Q-T interval have been implemented to automatically detect heart abnormalities over the past decade <ns0:ref type='bibr'>(Gopika et al. 2020)</ns0:ref>. Unlike an ECG rhythm, the efficiency classification of the irregular heartbeat , either faster or slower than normal, or even waveform malformation can be improve by using beat feature. <ns0:ref type='bibr' target='#b16'>(Khalaf, Owis, and Yassine 2015)</ns0:ref>. For heartbeat classification, ECG pattern may be similar for different patients who have different heartbeats and may be different for the same patient at different times. ECG-based heartbeat classification is virtually a problem of temporal pattern recognition and classification <ns0:ref type='bibr' target='#b53'>(Zubair, Kim, and Yoon 2016)</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Dong, Wang, and Si 2017)</ns0:ref>. Based on the aforementioned instances, the variety of ECG signals with abnormalities must be handled specifically, either as an ECG rhythm or beat features.</ns0:p><ns0:p>Unfortunately, the challenge in analyzing the pattern of ECG signal is not limited to this. ECG signals have small amplitudes and short durations, measured in millivolts and milliseconds, respectively, and large inter-and intra-observer variability that influences the perceptibility of these signals <ns0:ref type='bibr' target='#b24'>(Lih et al. 2020</ns0:ref>). The analysis of thousands of ECG signals is time-consuming, and the possibility of misreading vital information is high. Automated diagnostic systems can utilize computerized recognition of heart abnormalities based on rhythm or beat to overcome such limitations. This could become the standard procedure by clinicians classifying ECG recordings.</ns0:p><ns0:p>Hence, the present study proposes a single DL architecture for classifying ECG patterns by using both rhythm and heartbeat features. Rather than treating ECG heartbeat and rhythm separately, we process both of them in the same framework. Hence, we only need a single DL architecture to classify the ECG signal with high accuracy. DL-based frameworks mainly include a stacked autoencoder (SAE), long short-term memory (LSTM), a deep belief network (DBN), convolutional neural networks (CNN), and so on. Among DL algorithms, we have generated a one-dimensional CNN (1D-CNN) model and showed promising results in our previous works <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref> <ns0:ref type='bibr'>(Tutuko et al. 2021)</ns0:ref>. In other works, 1D-CNN has also performed well for ECG classification, with overall performances ranging from 93.53% to 97.4% accuracy using rhythm <ns0:ref type='bibr' target='#b1'>(Acharya et al. 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b44'>(Wang 2020)</ns0:ref> and with overall 92.7% to 96.4% accuracy using beat <ns0:ref type='bibr' target='#b53'>(Zubair et al. 
2016)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Kiranyaz, Ince, and Gabbouj 2015)</ns0:ref>. For the pattern recognition technique, 1D-CNN is well known, as it integrates feature extraction, dimensionality reduction, and classification techniques utilizing several convolution layers, pooling layers, and a fully connected layer. Convoluted optimum features are derived and classified using feed-forward artificial neural networks using a fully connected layer with a learning framework for back propagation <ns0:ref type='bibr' target='#b23'>(Li et al. 2019</ns0:ref>). This study and the proposed approach make the following novel The rest of this paper is organized as follows. Section 2 presents the materials and methods, which comprise ECG raw data and the proposed methodology for ECG rhythm and beat classification using 1D-CNN architecture. Section 3 presents the results and discussion. Finally, the conclusions are presented in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials and Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preparation</ns0:head><ns0:p>In this study, we use the public data set from PhysioNet <ns0:ref type='bibr' target='#b14'>(Goldberger et al. 2000)</ns0:ref>. The ECGs data in the Physionet were collected from healthy volunteers and patients with different heart diseases. This database has already been published online by a third party and unrelated to the study. Consequently, there should be no concerns regarding the ethical disclosure of the information. To process the ECG signal pattern recognition, we utilize two segmentation processes, rhythm and beat. Therefore the experimental databases is divided into two cases, (i) for ECG rhythm classification utilize the PTB Diagnostic ECG (PTB DB) <ns0:ref type='bibr' target='#b5'>(Bousseljot, Kreiseler, and Schnabel 1995)</ns0:ref>, the BIDMC Congestive Heart Failure (CHF) <ns0:ref type='bibr' target='#b4'>(Baim et al. 1986</ns0:ref>), the China Physiological Signal Challenge 2018 <ns0:ref type='bibr' target='#b26'>(Liu et al. 2018)</ns0:ref>, the MIT-BIH Normal Sinus Rhythm <ns0:ref type='bibr' target='#b14'>(Goldberger et al. 2000)</ns0:ref>; and (ii) for ECG beat classification utilize the MIT-BIH Arrhythmia Database <ns0:ref type='bibr' target='#b30'>(Moody and Mark 2001)</ns0:ref>.</ns0:p><ns0:p>A summary of each database is provided as follows:</ns0:p><ns0:p>&#61623; The PTB DB contains 549 records from 290 patients (209 men and 81 women). ECG signals were sampled at 1000 Hz. Each ECG record includes 15 signals measured simultaneously: 12 conventional leads (I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, V6) along with three ECG Frank leads <ns0:ref type='bibr'>(vx, vy, vz)</ns0:ref> in the .xyz file. For this study, only a single lead (lead II) was used. The database provides ECG normal and nine heart abnormalities, such as myocardial infarction, cardiomyopathy, bundle branch block, dysrhythmia, hypertrophy, myocarditis, and valvular heart disease.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Methodology of 1D-CNN</ns0:head><ns0:p>In this study, we present a methodology using single DL architecture to classify 25 classes of ECG signal pattern of based on rhythm and beat feature. Unlike others methodologies that treat beat and rhythm separately, our approach enables both forms to proceed on a single DL architecture. The methodology includes ECG signal denoising, beat and rhythm segmentation and classification. 
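As an illustration of the database-selection stage described above, the following sketch shows how a single PhysioNet record can be read and restricted to lead II. It assumes the wfdb Python package and a locally downloaded copy of the PTB database; the paper does not state which loading tools were used, and the record path shown is only an example.

```python
# Illustrative only: read one PhysioNet record and keep lead II.
# Assumes the `wfdb` package and a local copy of the PTB database;
# the record path below is a placeholder, not taken from the paper.
import wfdb

record = wfdb.rdrecord("ptbdb/patient001/s0010_re")   # hypothetical local path
idx = record.sig_name.index("ii")                     # PTB lead names are 'i', 'ii', 'iii', ...
lead_ii = record.p_signal[:, idx]                     # single-lead amplitude values (mV)
fs = record.fs                                        # 1000 Hz for PTB records
print(lead_ii.shape, fs)
```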
Using this approach, pattern abnormalities that occurred in the ECG, both in beats and rhythms, can be detected only using single architecture. We generalized the 1D-CNN architecture that was published in previous work <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref> <ns0:ref type='bibr'>(Tutuko et al. 2021)</ns0:ref>. The proposed methodology of the 1D-CNN generalized architecture is presented in Figure <ns0:ref type='figure'>3</ns0:ref>. The general process in the methodology with standardize the evaluation process considering a clinical point of view. This standardization and defined the workflow to perform the evaluation to make sure the experiments are reproducible and comparable. Aiming to standardize, the general methodology including five main stages are, (i) data base selection from the ECG public database; (i) the pre-processing stage of ECG signals by eliminating various kinds of noise and artifacts using discrete wavelet transforms; (iii) the segmentation stage for ECG signal based on rhythm and beat. The ECG signal segmented to 2700 and 252 nodes, respectively; (iv) 1D-CNN, as the feature extraction and classifier, learn the characteristics of each rhythm and beat episodes PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2021:07:64082:2:0:NEW 22 Nov 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for ECG signal classification. and (v) the evaluation stage of the proposed model based on validation and testing data with accuracy, sensitivity, specificity, precision and F1-score.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>Database Selection</ns0:head><ns0:p>We have total of 168,472 rhythm episodes and 110,082 beat episodes as ECG features were used for training, validation, and an testing (as unseen data). The 1D-CNN architecture was used to classify the nine-class by using rhythms feature segmentation and 15-class of beats feature segmentation of the ECG signal. The information available from the single-lead ECG standard recordings included different signal lengths and frequency samplings (128, 250, 500, and 1000 Hz).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Pre-processing</ns0:head><ns0:p>The ECG signal can become corrupted during acquisition due to different types of artifacts and interference, such as muscle contraction, baseline drift, electrode contact noise, and power line interference <ns0:ref type='bibr' target='#b38'>(Sameni et al. 2007</ns0:ref>) <ns0:ref type='bibr' target='#b40'>(Tracey and Miller 2012</ns0:ref>) <ns0:ref type='bibr' target='#b43'>(Wang et al. 2015)</ns0:ref>. To achieve an accurate analysis and diagnosis, undesirable noise and signals should be removed or deleted from the ECG by eliminating various kinds of noise and artifacts. This study implemented DWT as a frequently used denoising technique that offers a useful option for denoising ECG signals. This study also implemented some wavelet families for ECG signals, such as symlets, daubechies, haar, bior, and coiflet, to analyze which type of wavelet would obtain the best signal denoising result. Among them, based on the highest the signal noise to ratio (SNR) results, the symlet wavelet was the best DWT parameter and was chosen for ECG signal denoising.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>ECG Signal Segmentation</ns0:head><ns0:p>The aim of ECG segmentation is to divide a signal into many parts with similar statistical properties, such as amplitude, nodes, and frequency. 
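Before continuing with segmentation, the pre-processing stage described above can be made concrete. The sketch below applies discrete wavelet denoising with a symlet mother wavelet, assuming the PyWavelets (pywt) package; the symlet order, the decomposition level, and the universal soft threshold are illustrative choices, since the paper reports only that a symlet wavelet gave the best SNR.

```python
# Minimal DWT denoising sketch, assuming PyWavelets (pywt).
# 'sym8', level=6 and the universal soft threshold are illustrative
# assumptions; the paper states only that a symlet wavelet was chosen.
import numpy as np
import pywt

def denoise_ecg(signal, wavelet="sym8", level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail coefficients (MAD / 0.6745).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    # Soft-threshold the detail coefficients, keep the approximation untouched.
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```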
The presence, time, and length of each segment of an ECG signal have diagnostic and biophysical significance, and the various sections of an ECG signal have distinctive physiological meaning <ns0:ref type='bibr' target='#b47'>(Yadav and Ray 2016)</ns0:ref>. ECG signal segmentation may also be accurately analyzed. The process of ECG feature segmentation for rhythm and beat classification can be described as follows:</ns0:p><ns0:p>&#61623; ECG rhythm segmentation is the process to produce the features for the entire ECG signal recordings at 2700 nodes without considering the different frequency sampling (128, 250, 500, and 1000 Hz) for ECG rhythm classification. In our previous work <ns0:ref type='bibr'>(Nurmaini et al. 2020)</ns0:ref>, we successfully segmented the length of AF episodes to 2700 nodes. Therefore, for this study, we generated the features for nine-class of normalabnormal ECG rhythm. The length of 2700 nodes contained at least two R-R intervals between one and the next beat with different frequency samplings in all records. Furthermore, the 2700-node segmentations might show more than two R-R intervals with a minimum frequency sampling of 128 Hz for the training, validation, and unseen set. As a result, the best ECG episodes were chosen from 2700 nodes for segmentation. The process of ECG rhythm classification is illustrated in Figure <ns0:ref type='figure'>4</ns0:ref>(a). Figure <ns0:ref type='figure'>4</ns0:ref>(a) shows that all lengths of the ECG recordings have been segmented to each episode of 2700 nodes. If the total nodes were less than 2700 nodes, we added the zero-padding technique, which involved extending a signal with zeros. &#61623; ECG beat segmentation is the step of intercepting numerous nodes in a signal to discern not only subsequent heart beats but also the waveforms included in each beat <ns0:ref type='bibr' target='#b36'>(Qin et al. 2017)</ns0:ref>. The former refers to the characteristics retrieved from a single beat, which typically only contains one R-peak. The latter, however, refers to features that are dependent on at least two beats. These features include more information than a single R-peak. The waveforms of beat segmentation are presented in Figure <ns0:ref type='figure'>4(b)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref>(b) shows the positions of the P-wave, QRS-complex, and T-wave, which are all intimately connected to the location of the R-peak. According to <ns0:ref type='bibr' target='#b36'>(Qin et al. 2017</ns0:ref>) <ns0:ref type='bibr' target='#b6'>(Chang et al. 2012</ns0:ref>) <ns0:ref type='bibr' target='#b33'>(Nurmaini et al. 2019)</ns0:ref>, the average ECG rhythm frequency is between 60 and 80 beats per minute, the t1 duration is 0.25 seconds before R-peak, and the t2 duration is 0.45 seconds after R-peak, which results in a total length of 0.7 seconds. A total of 0.7 seconds contains 252 nodes, with a sampling frequency of 360 Hz, which covers the P-wave, QRS-complex, and T-wave (one beat).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Feature Extraction and Classification</ns0:head><ns0:p>The 1D-CNN classifier was proposed by <ns0:ref type='bibr'>(Nurmaini et al. 2020</ns0:ref>) for AF detection. By using the architecture, we generalized the model for abnormal-normal rhythm and beat classification. The rectified linear unit (ReLU) function was adopted with 13 convolution layers (64, 128, 256, and 512 filters) and also consisted of five max pooling layers. 
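Returning briefly to the two segmentation rules just described, the sketch below cuts a denoised recording into 2700-node rhythm episodes (zero-padded at the tail, as in the paper) and into 252-node beat windows taken 0.25 s before and 0.45 s after each R-peak at 360 Hz. The R-peak positions are assumed to be available, for example from the MIT-BIH annotations; the handling of beats near the record edges is an assumption.

```python
import numpy as np

def rhythm_episodes(signal, episode_len=2700):
    """Consecutive 2700-node episodes; the last one is zero-padded if shorter."""
    episodes = []
    for start in range(0, len(signal), episode_len):
        ep = signal[start:start + episode_len]
        if len(ep) < episode_len:
            ep = np.pad(ep, (0, episode_len - len(ep)))   # zero-padding
        episodes.append(ep)
    return np.stack(episodes)

def beat_segments(signal, r_peaks, fs=360, before=0.25, after=0.45):
    """One window per R-peak: 0.25 s before and 0.45 s after (90 + 162 = 252 samples)."""
    pre, post = int(before * fs), int(after * fs)
    beats = []
    for r in r_peaks:
        if r - pre >= 0 and r + post <= len(signal):      # skip beats too close to the edges
            beats.append(signal[r - pre:r + post])
    return np.stack(beats)
```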
The 1D-CNN model comprised two fully connected layers with 1000 nodes for each layer and one node for the output layer. The 1D-CNN required a three-dimensional input, which consists of n samples, n features, and timesteps. The detailed process of 1D-CNN architecture for both ECG rhythm and beat classification was as follows:</ns0:p><ns0:p>&#61623; For ECG based on rhythm classification, the input timesteps with the dimension 2700 x 1 were fed into the convolution layer equipped with the ReLU activation function. The first and second convolutional layers produced an output length of 64 with a kernel size of 3. The output of the first and second convolutional layers through the max pooling layer had a kernel size of 2 for the feature reduction. The output of the first max pooling layer as the input for the third and fourth convolutional layers produced 128 feature maps. The convolutional layers were passed onto the fifth and last convolutional layers and produced output lengths of 256 and 512, respectively, with a kernel size of 3. The output of the last convolutional layer was then passed onto two fully connected layers with a total of 1000 nodes. This architecture produced an output of a nine-class ECG rhythm classification. &#61623; Unlike ECG rhythm classification, none of the processes differed from the features interpretation for ECG based on beat classification. The main differences were the input timesteps value of (252, 1) and products of the output size of the 15-class ECG beat classification. The architecture also implemented the ReLU activation function with 64, 128, 256, and 512 filters, with a kernel size of 3. For each max pooling layer, a kernel size of 2 was also used for the feature interpretation of the 1D-CNN for ECG beat classification.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Model Evaluation</ns0:head><ns0:p>Classification ECG signal based on rhythm and beat feature is evaluated by using intra and inter patient scheme. Such schemes are conducted to resemble a clinical environment and to ensure the robustness of the proposed model. Five commons metrics used in this study are accuracy, sensitivity, specificity, precision and F1-score. Moreover, two measures are usually considered for evaluating the classification performance, specifically for imbalance data, are receiver-operating characteristic (ROC) and Precision-Recall (P-R) curves. These two-evaluation metrics were added because the overall accuracy was distorted by the majority class results, since the beat type classes are extremely imbalanced in the available dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>For ECG rhythm classification, the proposed 1D-CNN model was tested on an unseen set, but ECG beat classification was only tested on the validation set in this study. All experimentation in the training processes used a 10-fold cross-validation scheme. This scheme divides the collection of observations into groups, or folds, of roughly similar size at random. k The scheme is fitted on the remaining folds, with the initial fold serving as a validation set. 
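A sketch of the 1D-CNN described in the Feature Extraction and Classification subsection is given below, using tf.keras. The paper specifies 13 convolution layers with 64, 128, 256, and 512 filters (kernel size 3), five max-pooling layers (kernel size 2), two 1000-node fully connected layers, and a nine- or 15-class output. The 2-2-3-3-3 grouping of convolution layers per block, the 'same' padding, and the softmax output layer are assumptions made to fill in details the text leaves open; this is not the authors' exact implementation.

```python
# Sketch of the described 1D-CNN (13 conv layers, five max-pooling layers,
# two 1000-node dense layers). The per-block layer counts are an assumption.
from tensorflow.keras import layers, models

def build_1dcnn(input_len, n_classes):
    blocks = [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]   # (filters, n_conv) per block
    model = models.Sequential()
    model.add(layers.Input(shape=(input_len, 1)))
    for filters, n_conv in blocks:
        for _ in range(n_conv):
            model.add(layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(1000, activation="relu"))
    model.add(layers.Dense(1000, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

rhythm_model = build_1dcnn(2700, 9)   # nine-class ECG rhythm classifier
beat_model = build_1dcnn(252, 15)     # 15-class ECG beat classifier
```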
1 k &#61485; For the selected model, the parameters that provided the best cross-validation accuracy, sensitivity, specificity, precision, and F1 score were chosen.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG Rhythm Classification in Validation Model</ns0:head><ns0:p>A total of 2445 records consisted of rhythm episodes of 138,415 training sets, 15,373 validation sets, and 14,684 unseen sets after being segmented by each 2700 nodes (refer to Table <ns0:ref type='table'>3</ns0:ref>). A total of 168,472 episodes were analyzed for the ECG rhythm classification task. As can be seen, all PTB Diagnostics ECG records were used for the training and validation sets. The rest of the datasets were used for the training, validation, and unseen sets. Table <ns0:ref type='table'>3</ns0:ref> shows the large different ratio between one class and another (imbalanced) class, for example, a total number of MI, HF, and HC classes to H, M, and VHD classes. However, we did not implement the oversampling techniques to overcome such a case in this study.</ns0:p><ns0:p>Without considering the data ratio (total number) of episodes for each class, we validated the proposed 1D-CNN model to the 10-fold cross-validation scheme. Figure <ns0:ref type='figure'>5</ns0:ref> shows the performance results of folds 1 through 10, which were evaluated for accuracy, sensitivity, specificity, precision, and F1 score. The performance results obtained above 99% for accuracy and specificity and ranged from 93 to 99% for sensitivity, precision, and F1 score. The model with the highest accuracy was chosen as the best model for this study out of all the models analyzed. The model had an accuracy of 99.98%, a sensitivity of 98.53%, a specificity of 99.99%, a precision of 99.81%, and an F1 score of 99.15% (fold 6). The results showed that the proposed 1D-CNN was the most accurate predictor, with an accuracy of 99.98%.</ns0:p><ns0:p>Confusion Matrix (CM) for rhythm evaluation result is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. This metric is used to capture information about the predicted results from the model respected to the actual label. It can be seen that the BBB class has four prediction errors (predicted as healthy-control class) and two healthy-control rhythms that are predicted as BBB. The prediction error between those arises due to the morphology of these types of rhythms is almost similar. Even so, the overall predictive results of the proposed approach provide a satisfactory evaluation performance. Based on the CM, the classification result of the ECG signal with rhythm feature produces good performance, due to only two class of ECG pattern have misclassified (HC and HF). However, overall result can be state that the classification is close to 100%.</ns0:p><ns0:p>Using the performance value in CM, we can observe the classification result with other views in terms of classification model at all classification thresholds named receiver-operating characteristic (ROC) and precision-recall (PR) curves. This curve plots two parameters true positive rate (sensitivity) and false positive rate (specificity) provide a graphical representation of a classifier's performance across many thresholds, rather than a single value. It is important to understand the trade-off in performance for different threshold values. As shown in Figures <ns0:ref type='figure'>7(a</ns0:ref>) and 7(b). Figure <ns0:ref type='figure'>7</ns0:ref>(a) shows the resulting ROC curve, which compares the nine-class of ECG rhythm characteristics. 
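The per-class metrics reported in this section can be derived from the confusion matrix in a one-vs-rest fashion. The sketch below assumes scikit-learn only for building the matrix; the formulas are the standard definitions, and how the per-class values are averaged (for example, macro-averaging) is not specified by the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, n_classes):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    # Classes absent from a fold give 0/0 here and show up as nan (or 0% sensitivity).
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy    = (tp + tn) / cm.sum()      # one-vs-rest accuracy per class
    return accuracy, sensitivity, specificity, precision, f1
```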
The comparable value is sensitivity versus specificity. Sensitivity is the ability to correctly identify the true positive class of ECG rhythm, whereas specificity is the ability to correctly identify the true negative rate of ECG rhythm. Therefore, if used in medical data, it will produce a precise and accurate diagnosis. Misclassification between positive class and negative class of ECG rhythm can be dangerous, and the consequences can be as serious as death.</ns0:p><ns0:p>The area under the curve (AUC) is the value analyzed in the ROC by looking at how far the middle value is and whether the area below the curve approaches the value of 1. The lower left point of the graph (0,0) is a value that does not contain errors (no false positives) and does not detect any true positives. On the upper right side of the graph (1,1), the opposite point defines all true positives but with a 100% error rate (rates of false positives). The upper left point (0,1) is the ideal classification that defines all true positives without any mistakes (no false positives or 0 cost). The lower right point (1,0) is the worst classification, where all subjects labeled as positive are simply false positives, without knowing true positives. As shown in Figure <ns0:ref type='figure'>7</ns0:ref>(a), the ROCs of the nine-class normal-abnormal ECG rhythm show excellent performance, as the value of the ROC for the nine-class classification is 1, or the AUC is about 100%. This means that the proposed 1D-CNN can categorize all classes with higher accuracy and precision. However, the ROC cannot be trusted with imbalanced data, and it remains unchanged even after the performance changes. Therefore, the P-R curve is used to describe the classifier performance on imbalanced data (Figure <ns0:ref type='figure'>7(b)</ns0:ref>). The overall performances are also good, as the P-R value is 1.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> lists the performance results for the nine-class of ECG based on rhythm feature. As can be seen, the C, D, H, MI, M, and VHD classes obtained 100% for accuracy, sensitivity, specificity, precision, and F1 score. The proposed 1D-CNN model was proven to be robust and had no effect on the imbalanced class problem. For the nine-class classification, the average of all performance metrics achieved above 99% accuracy.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG Beat Classification in Validation Model</ns0:head><ns0:p>For the 15-class of ECG beats, a total of 110,082 beats were trained, validated, and tested (unseen) in this study. The large different ratio between one class and another (imbalanced) class, however in this study we can't conducted the augmentation data. All ECG beats data divided into a ratio of 8: 2 or 80% is used for training data and the remaining for testing. The process of training with 10-fold is selected with randomly. Therefore, about 88,065 beats are used as training data and about 22,017 beats as testing data.</ns0:p><ns0:p>The performance results of folds 1 through 10 using the 10-fold cross-validation scheme are shown in Figure <ns0:ref type='figure'>8</ns0:ref>. As can be seen, the results vary from 0% as the lowest and around 99% as the highest result. Accuracy and specificity achieved above 99%, sensitivity and precision ranged from above 0% to 94%, and the F1 score ranged from 0% to 94%. The model had an accuracy of 99.88%, a sensitivity of 96.98%, a specificity of 99.90%, a precision of 92.24%, and an F1 score of 94.39% (fold 6). 
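As a companion to the ROC and precision-recall analysis above, the sketch below computes one-vs-rest curves and AUC values per class, assuming scikit-learn and that y_score holds the model's softmax outputs; the plotting itself is omitted.

```python
# One-vs-rest ROC and P-R curves per class, assuming scikit-learn.
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, precision_recall_curve

def ovr_curves(y_true, y_score, n_classes):
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    curves = {}
    for c in range(n_classes):
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        prec, rec, _ = precision_recall_curve(y_bin[:, c], y_score[:, c])
        curves[c] = {"roc_auc": auc(fpr, tpr), "roc": (fpr, tpr), "pr": (rec, prec)}
    return curves
```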
Unlike the ECG rhythm results, the performance of the 10-fold was not good enough. There was an outlier of sensitivity, which had a 0 (zero) value in the initial fold. The massive difference between the total number of the normal beat (N) class and the other abnormal beats could be an imbalanced class problem.</ns0:p><ns0:p>To analyze the performance of the 15-class of ECG beats, we also presented the confusion matrix evaluation in Figure <ns0:ref type='figure'>9</ns0:ref>. It can be seen that normal beats have the highest number of true positives (with 7248 data). However, this beat also has the most false-negative and false positive values with 46 and 21 data, respectively compared to other classes. Furthermore, both classes J and e have neither false positive nor false negative errors. Even though the ratio of number data used in this study was imbalance, the atrial escape beat (e), which is proven to be a minority class, was able to classify by the model correctly. However, an imbalanced data problem still requires a particular concern to avoid the model simply predicting the majority class rather than the minority.</ns0:p><ns0:p>To analyze the performance of the 15-class of ECG beats, we also presented the ROC and P-R curve (refer to Figures <ns0:ref type='figure'>9(a</ns0:ref>) and 9(b)). Figure <ns0:ref type='figure'>9</ns0:ref>(a) shows that the perfect classification can be presented in the R, L, j, and P beat classes. However, the other beat classes obtained an AUC value above 75%. Also, Figure <ns0:ref type='figure'>9</ns0:ref>(b) shows the worst classification as the e beat class, with an AUC value above 50%. According to the ratio of number data that was used in this study, the atrial escape beat (e) is proven to be a minority class, as it has limited dataset representation. Due to the large imbalanced class, the model tends to perform poorly and requires some modifications to avoid simply predicting the majority class in all cases.</ns0:p><ns0:p>The results for the 15-class ECG beat classification are listed in Table <ns0:ref type='table'>5</ns0:ref>. As can be seen, the results show an above 99% accuracy and specificity for all 15-class of ECG beats. The results presentation is quite good for the ECG beat classification task, although some beats' results (A, a, and j) are not. The A and a beats are related to an atrial premature beat, causing aberrant ventricular conduction. An unexpected beat discharged by an ectopic focus in the atria is termed a premature atrial beat. While some fibers are still refractory, the impulse from the premature beat reaches the His-Purkinje system early. Due to abnormal ventricular conduction, the resultant QRS complex exhibits a right BBB pattern. Also, the j beat is a delayed heartbeat originating from an ectopic focus in the atrioventricular junction. The classification of ECG beats tends to be more challenging because the results are related to the heart beat segmentation process, which will be close to optimal with the QRS detection.</ns0:p></ns0:div> <ns0:div><ns0:head>ECG signal Classification with Inter-patient Data</ns0:head><ns0:p>Tables <ns0:ref type='table'>4 and 5</ns0:ref> list the proposed model result with dataset based on intra-patient scenario. Such conditions where the ECG data from the same patients probably appear in the training and validation set. In this study, we took the precaution to construct and evaluate the classification using rhythm and beat features also from different patients (inter-patient). 
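An inter-patient evaluation of the kind described above can be set up by splitting on patient identifiers rather than on individual segments, so that no patient contributes episodes to both the training and the testing side. The sketch below assumes a patient_ids array aligned with the segmented episodes and uses scikit-learn's GroupShuffleSplit; it illustrates the idea rather than the authors' exact protocol.

```python
# Inter-patient split: all episodes of a patient fall on one side only.
# X, y and patient_ids are assumed to be aligned NumPy arrays.
from sklearn.model_selection import GroupShuffleSplit

def inter_patient_split(X, y, patient_ids, test_size=0.2, seed=0):
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])
```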
To test the robustness of the proposed 1D-CNN model, we tested the model on an unseen set (refer to Table <ns0:ref type='table'>6</ns0:ref>). The unseen set sample consisted of five of the 24-class of ECG-based rhythms and beat feature-BBB, HC, HF, V and L class. From the experiment, the performance still achieved outstanding results.</ns0:p></ns0:div> <ns0:div><ns0:head>Benchmarking of The Proposed Model</ns0:head><ns0:p>The comparison results of our proposed 1D-CNN architecture with the state-of-the-art model are listed in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>. In order to make a fair benchmarking, we compare our proposed model with several previous studies. These studies focus on the ECG signal classification using DL architecture, especially the use of 1D-CNN architecture <ns0:ref type='bibr' target='#b49'>(Yildirim et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Rajkumar et al. 2019</ns0:ref>) <ns0:ref type='bibr' target='#b31'>(Nannavecchia et al. 2021)</ns0:ref>, LSTM architecture <ns0:ref type='bibr' target='#b50'>(Yildirim et al. 2019)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Gao et al. 2019)</ns0:ref>, and combination architecture of 1D-CNN as a feature extraction and LSTM as a classifier <ns0:ref type='bibr' target='#b27'>(Lui, and Chow 2018)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Oh et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b52'>(Yildirim et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Chen et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b28'>(Luo et al. 2021)</ns0:ref>. However, all the classification methodologies are developed by treating beat and rhythm separately. In contrast, our study utilizes a single architecture based on 1D-CNN architecture through both features, rhythm and beat, to classify 24 patterns of ECG signals. In the ECG signal interpretation, the abnormalities can be analyzed using heart beat or heart rhythm feature. Therefore, such process is more efficient to integrate two features for classifying ECG signals in one architecture. To our knowledge, no studies developed such combination scenario with one architecture. Thus, in this study, we will analyze and compare the classification results for beat and rhythm features separately.</ns0:p><ns0:p>It can be seen in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>, the previous studies show that the ECG signal classification based on rhythm feature propose 1D-CNN architecture for 17 classes with 91.30% accuracy <ns0:ref type='bibr' target='#b49'>(Yildirim et al. 2018)</ns0:ref>, and propose LSTM architecture for five classes with 99.23% accuracy <ns0:ref type='bibr' target='#b50'>(Yildirim et al. 2019)</ns0:ref>. From these two studies, the classification performance is improved; however, the number of classes are reduced from 17 to 5-class. Other study for ECG signal classification based on beat feature utilize combination between 1D convolutional layers and LSTM architecture with 10.000 subject and seven-class abnormalities <ns0:ref type='bibr' target='#b52'>(Yildirim et al. 2020)</ns0:ref>. By using the proposed model, the classification accuracy around 92.24%, unfortunately, the sensitivity only reaches 80.15 %. It means that the smallest change in the ECG signal can't be detected by the network. They use convolutional layers to produce a low-and high-level feature, however, the LSTM classifiers lack to recognize the dynamic of ECG feature extracted from CNN. 
The sensitivity is important value in medical analysis, its relation to the number of false-negative result. The small sensitivity value indicates that many ECG signals are misclassified. Such case also occurred in <ns0:ref type='bibr' target='#b27'>(Lui and Chow 2018)</ns0:ref>, they use 1D-CNN-LSTM architecture, however the sensitivity only reaches 92.40%.</ns0:p><ns0:p>Combination of 1D Convolutional layers with other DL architecture can actually produce quite impressive results <ns0:ref type='bibr' target='#b27'>( Lui, and Chow 2018)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Oh et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Chen et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b52'>(Yildirim et al. 2020</ns0:ref>) <ns0:ref type='bibr' target='#b28'>(Luo et al. 2021</ns0:ref>) rivalling a model which uses individual 1D-CNN and LSTM architecture <ns0:ref type='bibr' target='#b49'>(Yildirim et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Gao et al. 2019)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Rajkumar et al. 2019)</ns0:ref> <ns0:ref type='bibr' target='#b31'>(Nannavecchia et al. 2021)</ns0:ref>. Even in other study, in order to obtain satisfactory results in ECG signal classification, three different DL architecture, 1D-CNN, LSTM and GRU are combined <ns0:ref type='bibr' target='#b28'>(Luo et al. 2021)</ns0:ref>. Unfortunately, they used SMOTE algorithm to eliminate the imbalanced problems. By using resampling methods can increase the overlapping between classes and can introduce additional noise. The beat classification is still challenging, when the number of classes is increased it will decline the performance. In <ns0:ref type='bibr' target='#b31'>(Nannavecchia et al. 2021)</ns0:ref>, they classify ECG signal for 21 classes abnormalities. However, the classification result is unsatisfactory with 89.51% accuracy and 87.79% sensitivity, which means a large of number classes are misclassified.</ns0:p><ns0:p>In our proposed model only utilize 1D-CNN architecture with 13 convolutional layers and five max-pooling layers, we produce a satisfactory result using both beat and rhythm features. Our model performance outperforms other studies with a large number of classes. All the performance of nine-class classification value reach over 99% by using beats feature, but the classification sensitivity decreases to 96.67% by using rhythm feature. It happened because we use 15 classes of ECG signal abnormalities with an unbalanced number of classes. Such condition causes the classifiers tend to make biased learning model that has a poorer predictive accuracy over the minority classes compared to the majority classes. Even though, our model still maintains the classification performance with imbalanced data, the sensitivity value is decreased only 3%, it does not affect the performance significantly. One dimensional convolutional learning methods are more efficient to learn local patterns than a recurrent neural network. It contains a sliding filter, which may be regarded as moving across the input by sharing weights over a local patch function. 
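The "sliding filter with shared weights" intuition can be seen in a few lines of NumPy: one three-tap kernel is reused at every position of a toy signal and responds strongly wherever the local pattern it encodes appears.

```python
# Toy illustration of a 1-D convolution: one shared kernel slides over the input.
import numpy as np

x = np.array([0.0, 0.1, 1.0, 0.2, 0.0, -0.1, 0.9, 0.1])   # toy signal with two peaks
w = np.array([-1.0, 2.0, -1.0])                            # one shared 3-tap kernel
feature_map = np.array([np.dot(x[i:i + 3], w) for i in range(len(x) - 2)])
print(feature_map)   # large responses where the local peak-like pattern occurs
```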
It concludes that 1D-CNN performs better in areas where local patterns are important for classification task.</ns0:p><ns0:p>Although the results look promising for ECG rhythm and beat classification, there are some limitations to our study:</ns0:p><ns0:p>&#61623; The pre-processing stage of the ECG signal still needs improvement, specifically in the case of ECG signals that have different sampling frequencies, leads, and various noises.; &#61623; The segmentation of the P, QRS, and T-waves and the HRV measurement before the classification process were not carried out; and &#61623; The proposed model was not validated against the hospital patient data. We only used the available public dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Deep learning has gained a central position in recent years for ECG rhythm and beat classification. It was built on a foundation of significant algorithmic details and generally can be understood in the construction and training of DL architectures. A DL approach based on one 1D-CNN architecture has been presented to automatically learn and classify the nine-class of ECG pattern with rhythms feature and 15-class of ECG pattern with beats feature, which is important for classifying the abnormalities pattern. In this study, the proposed 1D-CNN model, which consisted of 13 convolutional layers and five max-pooling layers, was used. The 1D-CNN has low computational requirements. Thus, it is well-suited for real-time and low-cost applications for ECG devices.</ns0:p><ns0:p>Using the 10-fold cross-validation scheme, the performance results had an accuracy of 99.98%, a sensitivity of 99.90%, a specificity of 99.89%, a precision of 99.90%, and an F1 score of 99.99% for ECG rhythm classification. Also, for ECG beat classification, the model obtained an accuracy of 99.87%, a sensitivity of 96.97%, a specificity of 99.89%, a precision of 92.23%, and an F1 score of 94.39%. We realize the performance results of the ECG rhythm are better than the ECG beat classification. The selection of an appropriate preprocessing step for QRS detection to accurately find the R-peak to achieve the best model for ECG beat classification is needed to achieve high performance results. In the future, the challenges regarding ECG signals are still many, such as the precision segmentation of P, QRS, and T-waves before the process of rhythm and beat classification. 
Table 1. ECG rhythm data description

Dataset                                     Class                            Label/Abbreviation   Records
PTB Diagnostic ECG                          Bundle branch block              BBB                  17
                                            Cardiomyopathy                   C                    17
                                            Dysrhythmia                      D                    16
                                            Healthy control                  HC                   80
                                            Myocardial hypertrophy           H                    7
                                            Myocardial infarction            MI                   368
                                            Myocarditis                      M                    4
                                            Valvular heart disease           VHD                  6
BIDMC Congestive Heart Failure              Congestive heart failure         HF                   10
China Physiological Signal Challenge 2018   Left bundle branch block         BBB                  207
                                            Right bundle branch block        BBB                  1,695
MIT-BIH Normal Sinus Rhythm                 Normal sinus (healthy control)   HC                   18

Table 2. ECG beat data description

Dataset              Class                                       Total Beats
MIT-BIH Arrhythmia   Normal beat (N)                             75,022
                     Atrial premature beat (A)                   2,546
                     Premature ventricular contraction (V)       7,129
                     Right bundle branch block beat (R)          7,255
                     Left bundle branch block beat (L)           8,072
                     Aberrated atrial premature beat (a)         150
                     Ventricular flutter wave (!)                472
                     Fusion of ventricular and normal beat (F)   802
                     Fusion of paced and normal beat (f)         982
                     Nodal (junctional) escape beat (j)          229
                     Nodal (junctional) premature beat (J)       83
                     Paced beat (/)                              7,025
                     Ventricular escape beat (E)                 106
                     Non-conducted P-wave (x)                    193
                     Atrial escape beat (e)                      16

Table 7. Comparison results with the state of the art (performance results in %)

Authors                    Class   Feature   Method            Acc.    Sens.   Spec.   Pre.
Rajkumar et al. 2019       8       rhythm    1D-CNN            93.60   -       -       -
Yildirim et al. 2018       17      rhythm    1D-CNN            91.30   83.90   -       85.40
Nannavecchia et al. 2021   21      beat      1D-CNN            89.51   87.79   -       86.78
Yildirim et al. 2019       5       rhythm    LSTM              99.23   -       -       99.00
Gao et al. 2019            8       rhythm    LSTM              99.26   -       99.26   99.14
Lui and Chow 2018          4       beat      1D-CNN-LSTM       -       92.40   97.70   -
Yildirim et al. 2020       7       beat      1D-CNN-LSTM       92.24   80.15   98.72   80.31
Oh et al. 2018             5       rhythm    1D-CNN-LSTM       98.10   -       -       97.50
Chen et al. 2020           6       rhythm    1D-CNN-LSTM       99.32   97.75   -       -
Luo et al. 2021            9       rhythm    1D-CNN-LSTM-GRU   99.01   99.58   -       99.44
Our work                   9       rhythm    1D-CNN            99.98   99.90   99.89   99.90
                           15      beat      1D-CNN            99.87   96.97   99.89   92.23

*Acc. (accuracy); Sens. (sensitivity); Spec. (specificity); Pre. (precision)
" Intelligent System Research Group Universitas Sriwijaya November 22rd. 2021 Dear Editors Thank you for giving us the opportunity to submit a revised draft of our manuscript to PeerJ Computer Science. We would like to thank the reviewers for encouraging comments. It helps us make this work clearer to understand, and the constructive comments helped us improve the quality of the manuscript. We appreciate the time and effort that you have dedicated to providing your valuable feedback on our manuscript. We are grateful to you for your insightful comments on our paper. We have been able to incorporate changes to reflect most of the suggestions provided by the reviewers. We have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewer’ comments and concerns. We are uploading (a) our point-by-point response to the comments (below) (response to Editor and reviewers), (b) an updated manuscript with track changes enabled, and (c) a clean updated manuscript without highlights. Prof. Siti Nurmaini Corresponding Author On behalf of all authors. Editor As the reviewer noticed, a direct comparison with the state-of-the-art is missing. The authors must include it if they want to reach the acceptance of this article. Author response: Thank you for the concern. We have updated the benchmarking section and compare our result with several state-of-the-art study. Author action: We change the benchmarking section with the following statement (line 415-470): The comparison results of our proposed 1D-CNN architecture with the state-of-the-art model are listed in Table 7. In order to make a fair benchmarking, we compare our proposed model with several previous studies. These studies focus on the ECG signal classification using DL architecture, especially the use of 1D-CNN architecture (Yildirim et al. 2018) (Rajkumar et al. 2019) (Nannavecchia et al. 2021), LSTM architecture (Yildirim et al. 2019) (Gao et al. 2019), and combination architecture of 1D-CNN as a feature extraction and LSTM as a classifier (Lui, and Chow 2018) (Oh et al. 2018) (Yildirim et al. 2020) (Chen et al. 2020) (Luo et al. 2021). However, all the classification methodologies are developed by treating beat and rhythm separately. In contrast, our study utilizes a single architecture based on 1D-CNN architecture through both features, rhythm and beat, to classify 24 patterns of ECG signals. In the ECG signal interpretation, the abnormalities can be analyzed using heart beat or heart rhythm feature. Therefore, such process is more efficient to integrate two features for classifying ECG signals in one architecture. To our knowledge, no studies developed such combination scenario with one architecture. Thus, in this study, we will analyze and compare the classification results for beat and rhythm features separately. It can be seen in Table 7, the previous studies show that the ECG signal classification based on rhythm feature propose 1D-CNN architecture for 17 classes with 91.30% accuracy (Yildirim et al. 2018), and propose LSTM architecture for five classes with 99.23% accuracy (Yildirim et al. 2019). From these two studies, the classification performance is improved; however, the number of classes are reduced from 17 to 5-class. Other study for ECG signal classification based on beat feature utilize combination between 1D convolutional layers and LSTM architecture with 10.000 subject and seven-class abnormalities (Yildirim et al. 2020). 
By using the proposed model, the classification accuracy around 92.24%, unfortunately, the sensitivity only reaches 80.15 %. It means that the smallest change in the ECG signal can’t be detected by the network. They use convolutional layers to produce a low- and high-level feature, however, the LSTM classifiers lack to recognize the dynamic of ECG feature extracted from CNN. The sensitivity is important value in medical analysis, its relation to the number of false-negative result. The small sensitivity value indicates that many ECG signals are misclassified. Such case also occurred in (Lui and Chow 2018), they use 1D-CNN-LSTM architecture, however the sensitivity only reaches 92.40%. Combination of 1D Convolutional layers with other DL architecture can actually produce quite impressive results ( Lui, and Chow 2018) (Oh et al. 2018) (Chen et al. 2020) (Yildirim et al. 2020) (Luo et al. 2021) rivalling a model which uses individual 1D-CNN and LSTM architecture (Yildirim et al. 2018) (Gao et al. 2019) (Rajkumar et al. 2019) (Nannavecchia et al. 2021). Even in other study, in order to obtain satisfactory results in ECG signal classification, three different DL architecture, 1D-CNN, LSTM and GRU are combined (Luo et al. 2021). Unfortunately, they used SMOTE algorithm to eliminate the imbalanced problems. By using resampling methods can increase the overlapping between classes and can introduce additional noise. The beat classification is still challenging, when the number of classes is increased it will decline the performance. In (Nannavecchia et al. 2021), they classify ECG signal for 21 classes abnormalities. However, the classification result is unsatisfactory with 89.51% accuracy and 87.79% sensitivity, which means a large of number classes are misclassified. Table 7. Comparison results with the state of the art Authors Class Feature Method Performance Results (%) Acc. Sens. Spec. Pre. Rajkumar et al. 2019 8 rhythm 1D-CNN 93.60 - - - Yildirim et al. 2018 17 rhythm 1D-CNN 91.30 83.90 - 85.4 Nannavecchia et al. 2021 21 beat 1D-CNN 89.51 87.79 - 86.78 Yildirim et al. 2019 5 rhythm LSTM 99.23 - - 99.00 Gao et al. 2019 8 rhythm LSTM 99.26 - 99.26 99.14 Lui and Chow 2018 4 beat 1D-CNN-LSTM - 92.40 97.70 - Yildirim et al. 2020 7 beat 1D-CNN-LSTM 92.24 80.15 98.72 80.31 Oh et al. 2018 5 rhythm 1D-CNN-LSTM 98.10 - - 97.50 Chen et al. 2020 6 rhythm 1D-CNN-LSTM 99.32 97.75 - - Luo et al. 2021 9 rhythm 1D-CNN-LSTM-GRU 99.01 99.58 - 99.44 Our work 9 beat 1D-CNN 99.98 99.90 99.89 99.90 15 rhythm 1D-CNN 99.87 96.97 99.89 92.23 *Acc. (accuracy); Sen. (sensitivity); Spec. (specificity); Pre, precision In our proposed model only utilize 1D-CNN architecture with 13 convolutional layers and five max-pooling layers, we produce a satisfactory result using both beat and rhythm features. Our model performance outperforms other studies with a large number of classes. All the performance of nine-class classification value reach over 99% by using beats feature, but the classification sensitivity decreases to 96.67% by using rhythm feature. It happened because we use 15 classes of ECG signal abnormalities with an unbalanced number of classes. Such condition causes the classifiers tend to make biased learning model that has a poorer predictive accuracy over the minority classes compared to the majority classes. Even though, our model still maintains the classification performance with imbalanced data, the sensitivity value is decreased only 3%, it does not affect the performance significantly. 
One-dimensional convolutional learning methods are more efficient at learning local patterns than recurrent neural networks. A 1D-CNN contains a sliding filter that moves across the input while sharing weights over local patches, so it performs better in tasks where local patterns are important for classification.

Reviewer 1

Basic reporting
The manuscript is well-written, has a good presentation, and thoroughly covers the literature.
Author response: Thank you for your appreciation.

Experimental design
The experiments have been correctly and thoroughly defined and conducted.
Author response: Thank you for your appreciation.

Validity of the findings
Results seem good, and the experiments use plenty of databases. The comparison with the state-of-the-art remains flawed and does not suffice to assess the relative quality of the proposed method. The most promising literature approaches should be tested in the exact same conditions as the proposed method to directly compare their performance results.
Author response: Thank you for the concern and suggestion. We have updated the benchmarking section and compared our results with several state-of-the-art studies.
Author action: We have updated the benchmarking section and added more state-of-the-art studies. Please refer to pages 11-13, lines 415-470.

Additional comments
I thank the authors for their response to my comments. I believe most have been addressed, although one key concern has been avoided (and the manuscript's quality has suffered because of that). Direct comparison with the state-of-the-art is fundamental. If you have more classes than the literature, or other things are different (random seeds, etc.), then implement state-of-the-art methods and test them in your exact scenario. If there are many competing works in the literature, select at least the 1-2 best to have this direct benchmarking. This is essential to have a real fair comparison and to really understand if, where, and how your method is best.
Author response: Thank you for your appreciation and comments. We have updated the benchmarking section and compared our results with several state-of-the-art studies.
Author action: We have revised the benchmarking section (pages 11-13, lines 415-470) with the same statement quoted in full in our response to the Editor above.
"
Here is a paper. Please give your review comments after reading it.
322
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. On January 8, 2020, the Centers for Disease Control and Prevention officially announced a new virus in Wuhan, China. The first novel coronavirus (COVID-19) case was discovered on December 1, 2019, implying that the disease was spreading quietly and quickly in the community before reaching the rest of the world. To deal with the virus' wide spread, countries have deployed contact tracing mobile applications to control viral transmission. Such applications collect users' information and inform them if they were in contact with an individual diagnosed with COVID-19. However, these applications might have affected human rights by breaching users' privacy.</ns0:p><ns0:p>Methodology. This systematic literature review followed a comprehensive methodology to highlight current research discussing such privacy issues. First, it used a search strategy to obtain 808 relevant papers published in 2020 from well-established digital libraries. Second, inclusion/exclusion criteria and the snowballing technique were applied to produce more comprehensive results. Finally, by the application of a quality assessment procedure, 40 studies were chosen.</ns0:p><ns0:p>Results. This review highlights privacy issues, discusses centralized and decentralized models and the different technologies affecting users' privacy, and identifies solutions to improve data privacy from three perspectives: public, law, and health considerations.</ns0:p><ns0:p>Conclusions. Governments need to address the privacy issues related to contact tracing apps. This can be done through enforcing special policies to guarantee users privacy. Additionally, it is important to be transparent and let users know what data is being collected and how it is being used.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>At the end of December 2019, a new COVID-19virus appeared in Wuhan, China. The novel virus severe acute respiratory syndrome (SARS-CoV-2), a COVID-19virus family member, produces an infectious disease known as COVID-19, causing illnesses that vary from the common cold to more severe diseases <ns0:ref type='bibr' target='#b52'>(Sharma et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b54'>Skoll, Miller &amp; Saxon, 2020)</ns0:ref>.</ns0:p><ns0:p>The rapid global spread of COVID-19virus overwhelmed health sectors and caused a significant worldwide public health crisis, prompting the World Health Organization (WHO) to declare COVID-19 a global pandemic <ns0:ref type='bibr' target='#b67'>(Whaiduzzaman et al., 2020)</ns0:ref>. The virus has affected human life in many different aspects: travelling, education, business, entertainment, and others <ns0:ref type='bibr' target='#b38'>(Mbunge, 2020)</ns0:ref>.</ns0:p><ns0:p>Due to the virus' spread, many countries have taken measures and imposed restrictions like lockdowns, limiting activities requiring human gathering and interactions <ns0:ref type='bibr' target='#b10'>(De, Pandey &amp; Pal, 2020)</ns0:ref>. Consequently, national governments seek solutions to minimize infected cases, which results in employing digital surveillance technologies to contain the virus. A few countries have managed and controlled the pandemic using such technologies and developing smartphone apps and information system technologies to control the virus' spread <ns0:ref type='bibr' target='#b11'>(Dwivedi;</ns0:ref><ns0:ref type='bibr'>2020)</ns0:ref>. 
Specifically, when an individual is diagnosed with COVID-19, anyone close to them during the contagious period must quarantine for two weeks <ns0:ref type='bibr' target='#b8'>(Cho, Ippolito &amp; Yu, 2020)</ns0:ref>. This solution requires individuals to download the app and grant access to their sensitive data; therefore, it raises significant privacy issues.</ns0:p><ns0:p>Contact-tracing apps <ns0:ref type='bibr' target='#b64'>(Vitak &amp; Zimmer, 2020)</ns0:ref> collect a wide range of data, including personal, location, and health and fitness data. In many countries, individuals avoid using these apps due to privacy concerns. Debates about the ethics of COVID-19 apps have been largely preoccupied with privacy concerns, as these data are shared with governments, health ministries, and organizations <ns0:ref type='bibr' target='#b22'>(Hendl, Chung &amp; Wild, 2020)</ns0:ref>. It is an individual's right to understand what data these apps have access to, who has privileges to obtain them, and how their data is used <ns0:ref type='bibr' target='#b64'>(Vitak &amp; Zimmer, 2020)</ns0:ref>. Digital solutions must comply with confidentiality requirements (privacy and security) to ensure personal data protection. Examples of these actions could be obtaining users' consent, transparency, voluntary self-reporting, and anonymization. Moreover, clear and ethical principles must be stated <ns0:ref type='bibr' target='#b22'>(Hendl, Chung &amp; Wild, 2020)</ns0:ref>. When designing and deploying these apps and any other solutions, it is necessary to mitigate privacy concerns.</ns0:p><ns0:p>There are a few literature reviews focused on apps developed for contact-tracing, prevention, surveillance measures, and mapping disease spread. In <ns0:ref type='bibr' target='#b26'>(Jalabneh et al., 2020)</ns0:ref>, 17 primary studies were identified whose applications are used to monitor and diagnose infected individuals. However, the authors only share their view concerning data privacy, noting that users' information is not necessarily accurate, which affects data analysis. <ns0:ref type='bibr'>Golinelli et al. identified 52</ns0:ref> articles classified into seven categories covering public health needs addressed by recognized digital solutions <ns0:ref type='bibr' target='#b19'>(Golinelli et al., 2020)</ns0:ref> in the COVID-19 pandemic context. The authors address privacy in two of their identified categories, mentioning only that it would be preferable for the applications not to collect personal data <ns0:ref type='bibr' target='#b19'>(Golinelli et al., 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b62'>(Verma &amp; Mishra, 2020)</ns0:ref> provides a systematic review of smartphone technology applied in the fight against COVID-19.</ns0:p><ns0:p>Regarding the collection of users' information, there is only mention of an app developed in the United Kingdom, where the data are stored in various places. <ns0:ref type='bibr'>Zimmermann et al. provided</ns0:ref> a set of population perceptions about contact-tracing apps regarding trust in authorities and individual privacy, among other factors <ns0:ref type='bibr' target='#b71'>(Zimmermann, 2021)</ns0:ref>. However, the context of such work only considers the three main German-speaking countries: Germany, Austria, and Switzerland. 
Finally, <ns0:ref type='bibr' target='#b23'>(Hussein, 2020)</ns0:ref> provides a review of several digital health surveillance systems where regulations and data protection are only approached from the perspective of the pressure on users to share their personal information to access such systems and apps.</ns0:p><ns0:p>As evident, several studies have been conducted focusing on contact-tracing apps developed during the COVID-19 pandemic. Some of those studies discussed such apps in relation to privacy and ethical concerns. Nevertheless, there is an important window of opportunity for how such information technologies can help health ministries and other parties contain the virus' spread. Our overarching goal is to provide a better study of privacy concerns in the context of COVID-19 apps. Toward this goal, we examined and analyzed the existing studies on COVID-19 apps and privacy concerns and their findings, and summarized this research's efforts.</ns0:p><ns0:p>The remainder of this paper is as follows: Section 2 introduces our methodology for paper selection and data collection, Section 3 presents our analysis of the collected studies, Section 4 highlights implications, Section 5 discusses future work and limitations, and Section 6 concludes.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Methodology</ns0:head><ns0:p>To ensure the accuracy of our systematic literature review results, we adapted and modified Liao's methodology, proposed in <ns0:ref type='bibr' target='#b35'>(Liao, 2020)</ns0:ref>. The methodology goes through six stages as shown in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESEARCH QUESTIONS</ns0:head><ns0:p>We aim to answer, at the end of this systematic literature review, three research questions:</ns0:p><ns0:p>1-What techniques are proposed to protect users' privacy in digital surveillance? 2-How does the law protect users' privacy in COVID-19 applications? 3-How do different entities contribute to preserving individuals' health privacy?</ns0:p></ns0:div> <ns0:div><ns0:head>SEARCH STRATEGY</ns0:head><ns0:p>The search strategy consists of three steps, as defined below.</ns0:p></ns0:div> <ns0:div><ns0:head>Finding Keywords</ns0:head><ns0:p>To find keywords related to our topic, we used the Nails Project <ns0:ref type='bibr' target='#b31'>(Knutas, 2015;</ns0:ref><ns0:ref type='bibr' target='#b49'>Salminen, 2020)</ns0:ref>. We then chose the words that we deemed to be the most relevant to the systematic literature review topic and those most accurate from among the candidate words.</ns0:p></ns0:div> <ns0:div><ns0:head>Forming Search Strings</ns0:head><ns0:p>From the selected words, we formalized two strings to use in the search process: 2.1 'Privacy' AND ('mobile application' OR 'apps') AND ('COVID-19' OR 'coronavirus' OR 'COVID-19 pandemic').</ns0:p><ns0:p>2.2 'Privacy' AND ('mobile application' OR 'apps') AND ('COVID-19' OR 'coronavirus' OR 'COVID-19 pandemic') AND ('contact-tracing' OR 'location privacy' OR 'data protection' OR 'privacy protection').</ns0:p><ns0:p>This systematic literature review focuses on three subjects: 'COVID-19' and 'privacy' in 'mobile applications.' To ensure coverage of all papers across the field, we chose synonyms for each term; the sketch below illustrates how the strings are assembled from these keyword groups. In the first string, the results will be general, including any papers mentioning the three main subjects. 
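As a small illustration of how the two Boolean strings above can be assembled from their keyword groups, consider the sketch below. The helper function and the cleaned-up synonym lists are assumptions made for the example, not part of the review protocol itself.

```python
# Illustrative sketch: assembling the two Boolean search strings used in this
# review from keyword groups. The helper name and the exact synonym lists are
# assumptions for the example only.
def any_of(terms):
    """Join alternative terms into a parenthesised OR group."""
    quoted = [f"'{t}'" for t in terms]
    return "(" + " OR ".join(quoted) + ")" if len(quoted) > 1 else quoted[0]

privacy = ["Privacy"]
apps = ["mobile application", "apps"]
covid = ["COVID-19", "coronavirus", "COVID-19 pandemic"]
tracing = ["contact-tracing", "location privacy", "data protection", "privacy protection"]

string_1 = " AND ".join([any_of(privacy), any_of(apps), any_of(covid)])
string_2 = string_1 + " AND " + any_of(tracing)

print(string_1)  # general query covering the three main subjects
print(string_2)  # narrowed query adding the contact-tracing related terms
```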
In the second string, we narrowed the results to papers related to 'contact-tracing' and combined them with keywords related to this term.</ns0:p></ns0:div> <ns0:div><ns0:head>Selecting Sources</ns0:head><ns0:p>For the research process, we selected ten libraries. These are among the most reliable sources and provide high-quality research: Microsoft Academic (https://academic.microsoft.com/home), Wiley (https://onlinelibrary.wiley.com/), IEEE (https://ieeexplore.ieee.org/Xplore/home.jsp), Sage Journals (http://us.sagepub.com), Taylor &amp; Francis (https://taylorandfrancis.com/online/), Springer Link (https://link.springer.com/), Science Direct (https://www.sciencedirect.com/), Scopus (https://www.scopus.com/home.uri), ACM (https://dl.acm.org/), and Web of Science (https://www.webofknowledge.com/).</ns0:p></ns0:div> <ns0:div><ns0:head>INCLUSION AND EXCLUSION CRITERIA</ns0:head><ns0:p>After exploring all the libraries' engines, we obtained 808 papers, which decreased to 565 papers after duplication removal. We then filtered the papers by reading titles, keywords, and abstracts. This process yielded 60 papers. We used the inclusion and exclusion criteria to filter the papers as shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>QUALITY ASSESSMENT CRITERIA</ns0:head><ns0:p>The 60 papers from the previous step were read thoroughly and checked against quality assessment questions as in Table <ns0:ref type='table'>2</ns0:ref>. Each question was weighted as 1 point for Yes, 0 for No, and 0.5 for Partial. Papers that scored 2 points or more were included in the final collection as shown in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>. Papers scoring less than 2 points were reviewed by a team member and re-scored; papers still scoring less than 2 were excluded.</ns0:p></ns0:div> <ns0:div><ns0:head>DOCUMENT SEARCH STRATEGY</ns0:head><ns0:p>After assembling the final collection of papers, we assigned each paper to its origin library. As shown in Table <ns0:ref type='table'>3</ns0:ref>, some libraries have no papers in the final collection.</ns0:p></ns0:div> <ns0:div><ns0:head>SNOWBALLING</ns0:head><ns0:p>To achieve better comprehensiveness, we conducted forward snowballing to cover all the papers related to our topic. After reading the titles of the papers' references, we obtained 13 papers in the first stage. After reading the abstracts, the number decreased to 6 papers. We then read the papers in their entirety and performed the quality assessment, yielding 5 papers. This process was repeated until no more results were produced, resulting in a final collection of 40 papers. Figure <ns0:ref type='figure'>3</ns0:ref> shows the total number of papers from each data source and Figure <ns0:ref type='figure'>4</ns0:ref> illustrates the selection process.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In March 2020, WHO declared the disease caused by the newly identified coronavirus, COVID-19, a global pandemic (Whaiduzzaman, 2020). When a COVID-19 patient has face-to-face contact with another person for 15 minutes or more and the distance between them is less than 1.5 meters, there is a high possibility the other person will become infected with the virus <ns0:ref type='bibr' target='#b17'>(Garg, 2020)</ns0:ref>. Therefore, when an individual is in close contact with a person diagnosed with COVID-19, the individual is advised to quarantine themselves for approximately two weeks <ns0:ref type='bibr' target='#b8'>(Cho, Ippolito &amp; Yu, 2020)</ns0:ref>. 
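To make the contact rule just described concrete, the following minimal Python sketch encodes the close-contact check (15 minutes or more at a distance of less than 1.5 meters). The thresholds come from the text above; the data structure and field names are illustrative assumptions rather than part of any specific app.

```python
# Minimal sketch of the close-contact rule described above: an encounter counts
# as a potential exposure when two people stay within 1.5 metres of each other
# for 15 minutes or more. The Encounter structure is illustrative only.
from dataclasses import dataclass

DISTANCE_THRESHOLD_M = 1.5   # metres, from the definition above
DURATION_THRESHOLD_MIN = 15  # minutes, from the definition above

@dataclass
class Encounter:
    other_id: str        # pseudonymous identifier of the other person
    distance_m: float    # estimated distance during the encounter
    duration_min: float  # how long the two devices stayed in proximity

def is_exposure(e: Encounter) -> bool:
    return e.distance_m < DISTANCE_THRESHOLD_M and e.duration_min >= DURATION_THRESHOLD_MIN

encounters = [
    Encounter("user-a", distance_m=1.0, duration_min=20),  # exposure
    Encounter("user-b", distance_m=3.0, duration_min=45),  # too far away
    Encounter("user-c", distance_m=0.8, duration_min=5),   # too short
]
contacts_to_notify = [e.other_id for e in encounters if is_exposure(e)]
print(contacts_to_notify)  # ['user-a']
```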
This process was easy to manage at the pandemic's beginning. However, as the virus spread widely, it became difficult and time-consuming (Whaiduzzaman, 2020), necessitating the tracing of infected, suspected, and contact persons in relation to COVID-19 patients.</ns0:p><ns0:p>Unlike vaccines, which require time for development and approval <ns0:ref type='bibr' target='#b28'>(Joo &amp; Shin, 2020)</ns0:ref>, population-wide contact-tracing applications can more immediately control viral spread and enable the successful containment of COVID-19 or any future infectious disease <ns0:ref type='bibr' target='#b54'>(Skoll, Miller &amp; Saxon, 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dwivedi, 2020;</ns0:ref><ns0:ref type='bibr' target='#b47'>Riemer et al., 2020)</ns0:ref>. Many COVID-19-affected countries have turned to these technology-based solutions to facilitate and automate the limitation of infection and to minimize viral spread. These solutions can be deployed following different approaches and adopting multiple technologies, such as the global positioning system (GPS), Wireless Fidelity (Wi-Fi) technology, and Bluetooth <ns0:ref type='bibr' target='#b38'>(Mbunge, 2020)</ns0:ref>.</ns0:p><ns0:p>Contact-tracing refers to identifying an individual and their contacts <ns0:ref type='bibr' target='#b64'>(Vitak &amp; Zimmer, 2020)</ns0:ref>. In addition to administering infected cases, contact-tracing apps trace the infection route from the diagnosed individual to those with whom they have been in close contact. Traditional contact-tracing is a strategy proposed more than 80 years ago. It was used as a part of the response to any disease outbreak and has been implemented to control infectious diseases like the severe acute respiratory syndrome (SARS) epidemic, since it is easy to adopt at any time <ns0:ref type='bibr' target='#b39'>(McLachlan et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fahey &amp; Hino, 2020;</ns0:ref><ns0:ref type='bibr' target='#b59'>Trang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Different approaches exist to develop contact-tracing. The first-order app identifies only the individuals in direct contact with the patient. The single-step app is an enhanced version of the first-order approach: it identifies any individual in contact with an infected patient and any who became infected, along with their contacts, and so on; the sketch below contrasts these two strategies. Other app types, such as iterative and retrospective tracing, have many limitations and have failed to achieve their purpose. To bridge the gap, modern contact-tracing apps were proposed, which rely on technologies like wireless and Bluetooth. These apps have many features: live maps of confirmed cases, location-based tracking, and quarantine and isolation monitoring. Even with these applications, from a public health viewpoint, this tactic might not be ethically effective <ns0:ref type='bibr' target='#b48'>(Rowe, 2020)</ns0:ref>.</ns0:p><ns0:p>Countries worldwide have taken different approaches and applied different technologies and models to develop and roll out contact-tracing applications. Table <ns0:ref type='table'>4</ns0:ref> summarizes contact tracing applications developed in different countries during the COVID-19 pandemic along with their privacy concerns. 
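The difference between the first-order and the single-step (multi-step) strategies mentioned above can be sketched on a toy contact graph as follows; the graph, names, and two-hop depth are assumptions made purely for illustration.

```python
# Sketch contrasting the tracing strategies described above on a toy contact
# graph: "first order" returns only direct contacts of a confirmed case, while
# the multi-step variant also follows contacts of contacts (breadth-first).
from collections import deque

contact_graph = {              # who has been in close contact with whom
    "case-0": {"p1", "p2"},
    "p1": {"case-0", "p3"},
    "p2": {"case-0"},
    "p3": {"p1", "p4"},
    "p4": {"p3"},
}

def first_order_contacts(graph, case):
    return set(graph.get(case, set()))

def multi_step_contacts(graph, case, max_depth=2):
    """Breadth-first traversal up to max_depth hops from the confirmed case."""
    seen, frontier = {case}, deque([(case, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for contact in graph.get(person, set()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {case}

print(first_order_contacts(contact_graph, "case-0"))    # {'p1', 'p2'}
print(multi_step_contacts(contact_graph, "case-0", 2))  # {'p1', 'p2', 'p3'}
```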
A distribution of contact-tracing applications around the world is illustrated in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>Contact-tracing apps collect sensitive personal data like phone numbers, MAC addresses, and GPS location data <ns0:ref type='bibr' target='#b39'>(McLachlan, 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>Cao, 2020)</ns0:ref>. Individuals' perceptions of these applications vary; however, users' primary concern while using these apps is privacy, which is the key factor motivating many to refrain from downloading them <ns0:ref type='bibr' target='#b52'>(Sharma, 2020;</ns0:ref><ns0:ref type='bibr' target='#b18'>Goggin, 2020;</ns0:ref><ns0:ref type='bibr' target='#b46'>O'Leary, 2020)</ns0:ref>.</ns0:p><ns0:p>Contact-tracing raises significant concerns and questions about user privacy <ns0:ref type='bibr' target='#b67'>(Whaiduzzaman, 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>Cao et al., 2020)</ns0:ref>. Some considerations to bear in mind are: Are individuals willing to share their contacts and locations with governments and health authorities? What will happen to these data once the pandemic ends? What is this data's lifespan? For what purpose will this data be used? <ns0:ref type='bibr' target='#b11'>(Dwivedi, 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vitak &amp; Zimmer, 2020;</ns0:ref><ns0:ref type='bibr' target='#b46'>O'Leary, 2020)</ns0:ref>. Privacy risks can be mitigated by obtaining consent and safeguarding individuals' privacy by giving them control over how their data are collected and used, encouraging the continued voluntary use of these apps <ns0:ref type='bibr' target='#b38'>(Mbunge, 2020;</ns0:ref><ns0:ref type='bibr' target='#b59'>Trang, 2020;</ns0:ref><ns0:ref type='bibr' target='#b44'>Nanni et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wang &amp; Liu, 2020;</ns0:ref><ns0:ref type='bibr' target='#b29'>Klar &amp; Lanzerath, 2020)</ns0:ref>. Countries worldwide have developed and deployed contact-tracing apps and adopted different approaches to mitigate privacy concerns <ns0:ref type='bibr' target='#b66'>(Wang &amp; Liu, 2020)</ns0:ref>. Two technology-based solutions, namely Bluetooth and GPS, are used to implement these apps. Bluetooth-based apps identify whether two individuals are in the same place and whether they are within 1.5 meters of each other. They do not collect exact location information but, ironically, are more accurate. Users may feel they have a greater degree of privacy and may be less concerned about being monitored 24/7. In contrast, GPS-based apps collect individuals' data on a 24/7 basis.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed techniques to protect users' privacy in digital surveillance</ns0:head><ns0:p>Based on how applications collect and share information, app architecture is classified into two different models: centralized and decentralized. These models differ in their approaches to protecting users' privacy and in the degree of anonymity they provide. In the centralized model, health authorities and governments collect data from individuals regardless of whether they are healthy or diagnosed with COVID-19, mapping the collected information uniquely to each person in a central server. This approach effectively controls cases if it is widely used, as it provides a comprehensive view. 
However, the centralized model does not ensure users' privacy because there is no control over data sharing. In contrast, the decentralized model does not offer public control, as it does not have a central server for data storage. Instead, individuals who tested negative or did not test at all store their data locally on their devices and can check whether they were in touch with infected people through public platforms that have already gone through a data anonymization cycle. The sections below discuss the models in detail <ns0:ref type='bibr' target='#b66'>(Wang &amp; Liu, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Centralized Contact-tracing</ns0:head><ns0:p>As mentioned above, two different technologies are used to collect individuals' information in both the centralized and decentralized models. This section discusses the differences between using GPS and Bluetooth technologies in the centralized model from a privacy perspective. In centralized GPS-based applications, the locations a user visits are collected and shared with the authorities' servers <ns0:ref type='bibr' target='#b28'>(Joo &amp; Shin, 2020)</ns0:ref>. In centralized Bluetooth-based applications, data are collected by creating random tokens at different times and exchanging them with other users who come within 6 feet. Later, each user's phone number and tokens are sent to the health authorities; if a user is found to be infected, the people that person has encountered within the past two weeks are informed. A centralized application using both GPS and Bluetooth technologies to collect users' information has been deployed in India <ns0:ref type='bibr' target='#b66'>(Wang &amp; Liu, 2020)</ns0:ref>.</ns0:p><ns0:p>The main examples of centralized contact-tracing applications are Alipay Health Code used in China and Self-Quarantine Safety Protection used in South Korea <ns0:ref type='bibr' target='#b28'>(Joo &amp; Shin, 2020)</ns0:ref>, both of which are GPS-based, and TraceTogether, a Bluetooth-based app used in Singapore <ns0:ref type='bibr' target='#b66'>(Wang &amp; Liu, 2020)</ns0:ref>. These apps have helped governments limit the spread of COVID-19. In the centralized model, control is vested in governments and authorities, since they can trace all users' health status and the number of infected people; this makes such apps more accurate and efficient in quickly understanding the situation. Thus, they allow for better control of the virus <ns0:ref type='bibr' target='#b47'>(Riemer, 2020)</ns0:ref>.</ns0:p><ns0:p>When comparing Asian countries that adopted centralized contact-tracing apps with some European countries, Asia has shown better control over the virus' spread than Europe. The reason can be attributed to Asian citizens' willingness to sacrifice privacy in the interest of public health <ns0:ref type='bibr' target='#b4'>(Cha, 2020)</ns0:ref>, whereas in some European countries, such as France with the Stop-COVID app, citizens refused to use contact-tracing, triggering a massive spread of the virus and a loss of control <ns0:ref type='bibr' target='#b48'>(Rowe, 2020)</ns0:ref>. 
In other words, using the most effective approach does not guarantee the best results, because civic readiness for, and acceptance of, the app are crucial influencing factors.</ns0:p><ns0:p>Centralized applications have shown great potential to limit viral spread; however, users' concerns about privacy and about how their data are exposed and controlled by government and health authorities have caused stress and motivated them to avoid using contact-tracing apps. This stress is known as technostress: technology-caused anxiety and negative emotions. The Alipay Health Code used in China is a clear example of how collecting users' information increases stress and anxiety. Considering its different information collection methods, like drones, GPS, QR codes, and CCTV <ns0:ref type='bibr' target='#b28'>(Joo &amp; Shin, 2020)</ns0:ref>, it is understandably difficult to trust the government and health authorities when they do not state how they protect and process the collected information <ns0:ref type='bibr' target='#b64'>(Vitak &amp; Zimmer, 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Nabity-Grover, Cheung &amp; Thatcher, 2020)</ns0:ref>.</ns0:p><ns0:p>Moreover, using centralized techniques has raised some privacy risks associated with each technology. For GPS-based applications, the authority collects information from all users regardless of infection status and broadcasts all the locations a user has visited recently when that user tests positive, making it hard to maintain infected users' confidentiality. Bluetooth-based applications are also vulnerable to these risks. Besides, there is a risk the authorities will connect with another database via users' phone numbers and access sensitive information if deemed necessary <ns0:ref type='bibr' target='#b66'>(Wang &amp; Liu, 2020)</ns0:ref>. Furthermore, some people think the centralized contact-tracing approach infringes on freedom and carries long-term risks, such as retaining the records collected during the pandemic even after it ends <ns0:ref type='bibr' target='#b48'>(Rowe, 2020)</ns0:ref>.</ns0:p><ns0:p>However, one study suggests that requiring citizens to use contact-tracing apps regardless of privacy concerns, within a framework of rules and policies penalizing non-users, would enhance control and limit the virus' spread <ns0:ref type='bibr' target='#b47'>(Riemer, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Decentralized Contact-tracing</ns0:head><ns0:p>Based on the aforementioned problems with centralized contact-tracing apps, the importance of using decentralized platforms offering a higher degree of privacy regarding individuals' data has emerged <ns0:ref type='bibr' target='#b8'>(Cho et al., 2020)</ns0:ref>. Decentralized contact-tracing provides more privacy because, unlike the centralized model, it has mechanisms to verify privacy and uses public and private keys and digital signatures <ns0:ref type='bibr' target='#b54'>(Skoll, Miller &amp; Saxon, 2020)</ns0:ref>. The solutions provided in the distributed approach can utilize either GPS or Bluetooth, both offering a degree of privacy. With Bluetooth, data are transferred in a phone-to-phone transaction: location data are sent directly without an HTTP connection. Compared to GPS, Bluetooth is more private, as GPS applications share data via the HTTP protocol. However, both can offer more privacy when used with blockchain <ns0:ref type='bibr' target='#b17'>(Garg et al., 2020)</ns0:ref>.
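As a rough sketch of the decentralized, on-device matching idea described in this subsection, the snippet below keeps observed tokens on the user's device, lets the health authority publish only the tokens uploaded by diagnosed users, and performs the comparison locally. The token generation and variable names are simplified assumptions and do not follow the DP-3T specification.

```python
# Sketch of decentralized exposure matching as described in this section:
# each device stores the random tokens it has heard over Bluetooth locally,
# the health authority publishes only the tokens of users who reported a
# positive test, and the comparison happens on the user's own device.
import secrets

def new_token() -> str:
    """Random, short-lived identifier broadcast over Bluetooth (simplified)."""
    return secrets.token_hex(16)

# Tokens broadcast by an infected user's phone during the contagious period.
infected_user_tokens = [new_token() for _ in range(3)]

# Tokens my phone observed from nearby devices, stored only locally.
tokens_i_observed = {infected_user_tokens[0], new_token(), new_token()}

# After diagnosis, the infected user uploads their own tokens and the health
# authority publishes them; no contact lists or locations are uploaded.
published_tokens = set(infected_user_tokens)

# The exposure check runs on my device, so my contact history never leaves it.
possibly_exposed = bool(tokens_i_observed & published_tokens)
print("Possible exposure:", possibly_exposed)  # True in this toy example
```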
BayesCOVID is a GPS-based approach combining contact-tracing, symptom tracking, and a Bayesian network. This approach gives the user a choice, since it provides a user contract that enhances the application's utility <ns0:ref type='bibr' target='#b39'>(McLachlan, 2020)</ns0:ref>.</ns0:p><ns0:p>One important solution is the Apple/Google Bluetooth approach, which prioritizes individuals' privacy by using Decentralized Privacy-Preserving Proximity Tracing (DP-3T) and draws on experience with both the SARS and the Middle East Respiratory Syndrome (MERS) outbreaks. This method provides a high degree of privacy, since it uses Bluetooth, which offers more location privacy than GPS <ns0:ref type='bibr' target='#b13'>(Fahey &amp; Hino, 2020;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wang &amp; Liu, 2020)</ns0:ref>, and avoids storing data (location, users' identities, contacts) in archives. This approach is very secure and provides a high degree of privacy. Unfortunately, it taxes smartphones' battery life, which is a drawback for users <ns0:ref type='bibr' target='#b13'>(Fahey &amp; Hino, 2020)</ns0:ref>.</ns0:p><ns0:p>However, offering users control is encouraging. Specifically, users can control which data are being transferred and which are not. This approach uses two models: simple data transfer and distributed computation. It also ensures self-awareness, which will encourage users to take the risk of sharing data <ns0:ref type='bibr' target='#b44'>(Nanni et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Furthermore, the decentralized technique provides a high degree of privacy because it uses two-factor authentication and blockchain. It also uses personal data without storing it in a database, accomplishing this with encoded data storage that can be accessed only with users' consent. This approach cryptographically signs user data and stores it. When the data are deleted, the hash will redirect to a null reference called an 'orphan hash.' This approach has the advantage of encouraging users to trust contact-tracing applications, which will help restore normal life sooner <ns0:ref type='bibr' target='#b12'>(Eisenstadt et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Based on the outcome of this SLR and our findings, we present the following future considerations and directions for contact tracing apps and related technologies in the fight against COVID-19 and future pandemic outbreaks, which are worth investigating and implementing to encourage adoption by the wider population:</ns0:p><ns0:p>Utilizing privacy-protecting technologies such as Artificial Intelligence (AI) and Machine Learning (ML) is suggested to help analyze the level of infection by viruses through identification of infected areas and tracing and monitoring infected people <ns0:ref type='bibr' target='#b60'>(Vaishya, et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b1'>Ahmed et al., 2020)</ns0:ref>. Other researchers <ns0:ref type='bibr' target='#b69'>(Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b58'>Ting et al., 2020)</ns0:ref> propose the use of the Internet of Things (IoT) and thermal imaging devices <ns0:ref type='bibr'>(Chamberlain et al., 2020;</ns0:ref><ns0:ref type='bibr'>Mohammed et al., 2019)</ns0:ref> to track positive cases and control the wide spread of the COVID-19 virus. 
Additionally, some studies proposed the use of a privacy-preserving contact-tracing scheme through blockchain-based medical applications <ns0:ref type='bibr'>(Zhang et al., 2020;</ns0:ref><ns0:ref type='bibr'>Chang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Governments, decision makers, and public health authorities must implement a proper feedback system throughout the deployment phases of contact tracing apps to gain public trust and increase adoption levels, and the public should also be involved in the design and development phases of the apps before their actual implementation. Authorities could implement multiple models and theories, such as the technology acceptance model, diffusion of innovation model, and motivation theory, to study the acceptance and usage level of future contact tracing technologies <ns0:ref type='bibr' target='#b37'>(Lucivero et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b20'>He et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b52'>Sharma et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Lastly, it is necessary to focus on privacy and data transformation while minimizing data collection and access to reduce contact-tracing privacy concerns <ns0:ref type='bibr' target='#b13'>(Fahey &amp; Hino, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Privacy protection laws for COVID-19 applications</ns0:head><ns0:p>Data privacy, also known as information privacy, is a subset of data security that focuses on data management while complying with data security guidelines. The essence of data privacy is how data should be collected, stored, managed, and shared. Practical data privacy issues often revolve around whether or not data is shared with third parties, how it is shared, and how data is lawfully collected and preserved. With the rise of the digital economy, one of the most difficult issues for organizations to address is data privacy. As a result, adhering to a data privacy policy and managing only the data that is required are crucial in order to gain users' confidence.</ns0:p><ns0:p>With the individual as the central figure, data privacy involves not only the proper handling of data but also the public's expectations about privacy. Individuals are entitled to privacy and control over their personal information. Procedures for safely and securely keeping, processing, acquiring, and sharing personal data must be implemented at all times.</ns0:p><ns0:p>Consumers' data protection and privacy are vital in today's technology era; therefore, governments, healthcare providers, and business groups have been using digital tracking to keep COVID-19 outbreaks under control. Although this method has the potential to minimize pandemic transmission, it has significant privacy implications <ns0:ref type='bibr' target='#b52'>(Sharma, 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Callie, 2021)</ns0:ref>.</ns0:p><ns0:p>There is tension between privacy and information disclosure, and the privacy of the data managed by digital health apps must be maintained while limiting the virus' spread. There are two types of COVID-19 data. The first type consists of cases and special medical data, such as disease statistics, medical sources, and the history of cases in contact with the disease. 
The second type is data related to government-imposed containment policies and health measures, such as social distancing and quarantine.</ns0:p><ns0:p>A problem has emerged in the data managed in COVID-19 applications and on social media: data and information are released from various sources, confusing the public about what information is true and which sources are credible. The main sources of COVID-19 data and information are government agencies, local governments, health authorities, and international organizations such as the World Health Organization (WHO). The data and information administered by COVID-19 digital health apps have helped limit the epidemic's spread through disclosure, specifically by facilitating people's understanding of the containment measures. This has been effective because people are more willing to comply with containment measures when they understand the issues raised around the virus. Additionally, this raises awareness and acceptance of policies and provides an enhanced sense of safety.</ns0:p><ns0:p>Finally, information disclosure reshapes people's perceptions of the epidemic and containment measures, enabling the public to overcome difficulties and limit the virus' spread. However, disclosure should be performed only with individual consent <ns0:ref type='bibr' target='#b15'>(Fu et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>Protection Law:</ns0:head><ns0:p>COVID-19 is the most recent danger posing a threat to the world's health and economic sectors. Tracing the main and secondary contacts of confirmed COVID-19 cases using contact-tracing technologies and devices is one of the most effective approaches to reduce the spread of the virus. The European Union (EU) emphasized the importance of data protection and privacy in digital measures, stating that data must be used exclusively for the intended purpose, that is, to prevent the spread of disease. Strategies to contain the pandemic include the use of technology to contain the virus and warn those who have been in contact with infected people.</ns0:p><ns0:p>The benefits gained by tracking people are considered to outweigh the potential loss of users' privacy, given the desired goal of eliminating virus outbreaks <ns0:ref type='bibr'>(Gallaway, 2020;</ns0:ref><ns0:ref type='bibr' target='#b50'>Schneble, 2020)</ns0:ref>. The EU, which has a robust data protection system, requires that all states share personal data that has been collected through contact-tracing applications.</ns0:p><ns0:p>The US Government and the public sought to develop consumer data privacy protection laws. The value of privacy was stressed by multiple entities, including elected leaders, members of Congress, and others, by emphasizing a myriad of possible harms connected with its violation. Privacy protection also helps guard against fraudulent or economic damage caused by identity theft, fraud, extortion, or other acts of crime. Guaranteeing information privacy is also important to reduce public fears of divulging details of their private lives, such as their personal contacts or behavioral habits, in the context of health-related data.</ns0:p><ns0:p>Most mobile phone applications that track symptoms or trace contacts require widespread use among the population, which is only achieved if users trust these apps. 
Some of the common privacy protection criteria for smartphone apps are transparency, purpose, anonymity, informed consent, time limits, and data management.</ns0:p><ns0:p>Transparency requires straightforward software reporting policies, which help to create public confidence in governments and health organizations, all of which is important to promote informed and voluntary use of COVID-19-related services.</ns0:p><ns0:p>Anonymization is the use of a series of mechanisms to prevent data from being associated with a specific person. Informed consent ensures that consumers have the information they need to make informed decisions about willingly releasing confidential personal data in order to respond to public health requirements. Time limits mean ensuring that the data obtained about contacts, location tracking, and mobile device proximity can only be used within the scope of the crisis. Data management means applying all protection measures throughout the life cycle of data-collection systems for contact-tracing mobile apps <ns0:ref type='bibr' target='#b2'>(Boudreaux, 2020)</ns0:ref>.</ns0:p><ns0:p>COVID-19 apps should be free from security and privacy problems because these aspects are important to users. Legal protection must also be provided, because a lack of privacy will lead to application failure if users shy away from these applications due to a lack of trust <ns0:ref type='bibr' target='#b22'>(Hendl, Chung &amp; Wild, 2020;</ns0:ref><ns0:ref type='bibr' target='#b9'>Culnane, Leins &amp; Rubinstein, 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Islam et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Ethical</ns0:head><ns0:p>In addition to privacy concerns, there are many ethical issues related to the data collection processes and algorithms of contact-tracing apps. During the COVID-19 pandemic, governments and healthcare organizations rely on location and health data to assess infection rates, the effectiveness of social distancing measures, and disease transmission rates. As COVID-19 spread, several COVID-19 tracking applications were developed to aid in the containment of the pandemic. A framework has been established to validate COVID-19 apps' ethics. It is intended to assist designers and publishers of contact-tracing apps in determining the application's ethical justification. If used properly, the apps should be a major component of disease management, proportional to the severity of the public health threat, scientifically sound, time-bound, and ethically designed. Following the COVID-19 outbreak, these groups must utilize this data in an ethical, robust, and transparent manner to prevent widespread skepticism and any breaches of privacy rights <ns0:ref type='bibr' target='#b29'>(Klar &amp; Lanzerath, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Values</ns0:head><ns0:p>When viruses start to cause safety issues, the balance between the need to fight the virus and the obligation to uphold individual rights often changes. COVID-19 tracing apps illustrate how close monitoring and contact-tracing can compromise privacy rights around the world. In response to the epidemic, the Australian government created COVIDSafe, a mobile phone app for contact-tracing. 
Governments around the world have also been using technology to help maintain social distancing, isolation, and contact-tracing.</ns0:p><ns0:p>Moreover, with privacy as a major concern, COVID-19 apps must be built explicitly on values including fairness, equality, solidarity, and user benefit. Due to the high privacy impact, data sharing is only possible when there are serious health conditions. There are cases in which the governments receiving personal data must be identified. All population groups should be able to use COVID-19 surveillance technology that respects their privacy <ns0:ref type='bibr' target='#b36'>(Lodders &amp; Paterson, 2020;</ns0:ref><ns0:ref type='bibr' target='#b61'>van Kolfschooten &amp; de Ruijter, 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Lee &amp; Lee, 2020)</ns0:ref>. The use of the contact-tracing process is unprecedented and could have serious consequences for public health. It is necessary to implement public-interest digital technology practices that are in line with these values <ns0:ref type='bibr'>(Lodders et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b61'>Kolfschooten &amp; Ruijter, 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Lee &amp; Lee, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Issues</ns0:head><ns0:p>As mentioned before, employing digital surveillance technologies to contain the virus raised several privacy issues related to the use, storage, and manipulation of collected personal information. Individuals' concerns about their privacy prevented them from using such apps, which obviously affected the tracing process considerably. Those individuals questioned the integrity of the data collection and utilization processes and whether the data would be anonymous, temporarily stored, or open to public use. Figure <ns0:ref type='figure'>6</ns0:ref> illustrates the most important privacy issues of contact-tracing apps.</ns0:p><ns0:p>To fulfill privacy guidelines to the highest degree, tracing apps should clearly reveal for which purpose the data will be used, and who will have access to it and control it. In addition, the data should only be stored by authorized agencies. Health care agencies must clarify what will happen to the data in the future, after the pandemic is over. All tracing apps must comply with international privacy standards to minimize public privacy concerns.</ns0:p><ns0:p>On the other hand, these issues can be mitigated from a technological perspective by applying more secure and private models. Two popular models have been used in designing tracing apps: centralized and decentralized. After comparing these two models, the decentralized model proved to be more secure and reliable, especially when used with Bluetooth technology.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Entities contributing to preserving privacy of healthcare applications</ns0:head><ns0:p>In 1996, a law obliged the health authorities to protect patients' privacy, disclosing their data only in high-risk cases and guaranteeing information privacy even during data exchange. Furthermore, integration between citizens' data and the health authorities will help identify infected patients and restrict their interaction with healthy people. Patients' privacy is one of the most important topics raised during the COVID-19 pandemic.</ns0:p><ns0:p>COVID-19's proliferation has created unique circumstances that have prompted a shift toward telemedicine infrastructure adoption. 
Telemedicine has become an important part of clinical care delivery, and many medical institutions report a significant increase in its use. For example, one medical center in New York City witnessed a major increase in urgent care virtual visits, from 102 per day pre-COVID-19 to 802 per day post-COVID-19. As the transition to telemedicine progresses, new problems and dangers have emerged, especially in the areas of information security and privacy.</ns0:p><ns0:p>The Privacy Rule, issued in December 2000 by the US Department of Health and Human Services (HHS), protects the privacy of individually identifiable health information. In addition, the European Union passed data privacy legislation to protect patients' personal health information and has one of the most effective data protection and privacy policies in the world. However, government agencies around the world have warned that the risk of cyberattacks against healthcare departments and institutions researching COVID-19 has been increasing since the pandemic started <ns0:ref type='bibr' target='#b27'>(Jalali et al., 2021)</ns0:ref>. There are many applied methodologies when it comes to the type of data health officials share with the public. In the US, the local government in Los Angeles County provides an estimated age distribution of patients and a breakdown of the number of cases in more than 140 cities and communities. However, residents in Florida are given much more information, including the cities affected, the number of people tested, the age distribution of cases, and the number of cases in nursing homes.</ns0:p><ns0:p>In response to the COVID-19 pandemic, the Indian Ministry of Health and Family Welfare issued guidelines for the mandatory notification of information for COVID-19 patients, allowing the government to enact any regulations it deems necessary to prevent the outbreak or spread of such epidemics. The Indian government requires doctors to report COVID-19 cases and suspected cases to designated government agencies, and the government agencies can then respond appropriately to limit the disease's spread based on the information provided by the health care professionals <ns0:ref type='bibr' target='#b15'>(Fu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Shekhawat et al., 2020)</ns0:ref>. Digital health apps are available in the Apple App Store and the Google Play Store, which contain more than 318,000 health applications that are updated daily to comply with the latest policies announced by the health authorities <ns0:ref type='bibr'>[42]</ns0:ref>.</ns0:p><ns0:p>The Federal Data Protection Act (FADP) provides a comprehensive framework dealing with data protection using defined principles. It insists on securing individuals' privacy and provides protection measures. According to the FADP, patient data are considered sensitive and require additional privacy. Each user has to give consent for the health authorities to use their data for health purposes. Therefore, the applications have to grant the user the right to revoke, update, and remove the data <ns0:ref type='bibr'>(Vokinger et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The COVID-19 pandemic has shed light on digital health applications but has largely ignored their data privacy. 
Blockchain technology appears to be an ideal solution to secure and authenticate certificates, health and medical records, and prescriptions while preserving privacy <ns0:ref type='bibr' target='#b10'>(De, Pandey &amp; Pal, 2020)</ns0:ref>. Doctors may disclose patients' information to competent authorities under specific circumstances for society's greater interest. Quarantine and social isolation measures imposed by the health authorities have effectively limited viral spread. Thus, it is important to collect individuals' information via contact-tracing apps <ns0:ref type='bibr' target='#b33'>(Labs &amp; Terry, 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Shekhawat et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In conclusion, tracing applications help to control the spread of the COVID-19 pandemic. Governments and health authorities developed these apps based on many technologies and different models. The aim of these apps is to monitor infected individuals and keep track of the public's status. Some countries followed international laws and local regulations to protect users' privacy while designing and developing these apps. However, other countries did not comply with these requirements, which resulted in privacy breaches. Nevertheless, there were no clear guidelines regarding the disclosure of how personal data are used in tracing apps. Even so, despite the privacy limitations of tracing apps, WHO stated that they effectively helped in containing the pandemic and slowing down its spread.</ns0:p><ns0:p>COVID-19 has put forward privacy concerns in many fields, and this systematic literature review has discussed them from three perspectives: the public's privacy in using contact-tracing apps, the laws and policies that should be followed to protect users' privacy, and digital authorities' strategies for dealing with data privacy. The classifications of the included literature are illustrated in Figure <ns0:ref type='figure'>7</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>IMPLICATIONS</ns0:head><ns0:p>Many people avoid using COVID-19 apps due to privacy concerns about location and health data <ns0:ref type='bibr' target='#b10'>(De, Pandey &amp; Pal, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fahey &amp; Hino, 2020)</ns0:ref>. To prevent this issue, governments should develop policies ensuring individual data privacy rights, encouraging people to trust these apps and provide their information voluntarily <ns0:ref type='bibr' target='#b34'>(Lee &amp; Lee, 2020)</ns0:ref>.</ns0:p><ns0:p>Another implication is that not all citizens have internet access. The solution is providing universal internet access so that everyone can use COVID-19 apps. Moreover, some individuals do not have their own devices, raising the issue that these apps may not cover all citizens. 
This challenge can be surmounted by assigning one account per family to reach the highest number of users <ns0:ref type='bibr' target='#b10'>(De, Pandey &amp; Pal, 2020;</ns0:ref><ns0:ref type='bibr' target='#b48'>Rowe, 2020)</ns0:ref>.</ns0:p><ns0:p>Lastly, COVID-19 problems are widespread since it is a new virus, but researchers are trying to address each concern to reduce the virus' effects and minimize its spread <ns0:ref type='bibr' target='#b10'>(De, Pandey &amp; Pal, 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hendl, Chung &amp; Wild, 2020)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>LIMITATIONS AND FUTURE WORK</ns0:head><ns0:p>COVID-19 has revealed many limitations in many areas such as information management and data privacy; therefore, digital surveillance has become more important than ever. Artificial intelligence, big data, the internet of things, and GPS have been recognized as paramount technologies in developing COVID-19 contact-tracing apps <ns0:ref type='bibr' target='#b38'>(Mbunge, 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Joo &amp; Shin, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fahey &amp; Hino, 2020)</ns0:ref>.</ns0:p><ns0:p>Privacy protection is an important issue. In the context of digital health and the COVID-19 epidemic, within a framework for evaluating applications from epidemiological and legal perspectives, solutions are designed to obtain useful information with several limitations, including preventing sharing sensitive personal information <ns0:ref type='bibr' target='#b52'>(Sharma, 2020</ns0:ref><ns0:ref type='bibr' target='#b44'>, Nanni et al., 2020;</ns0:ref><ns0:ref type='bibr'>Vokinger, 2020)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The studies did not cover all classifications of COVID-19 research's keywords, and they are also restricted to specific countries' cultures and policies. Moreover, the studies did not consider the differences in values and cultural and political aspects of the countries using tracking applications. Additionally, no study has discussed contact-tracing applications developed after April 30, 2020 <ns0:ref type='bibr' target='#b17'>(Garg at al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b59'>Trang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Islam et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Kumar, Shahrabani &amp; Das, 2020)</ns0:ref>. For the future, Garg expects to develop an RFID solution and aims to reduce the cost of scaling the RFID range <ns0:ref type='bibr' target='#b17'>(Garg et al., 2020)</ns0:ref> The COVID-19 pandemic is a recent one, hence applications in this field are limited. The study found that a limited number of mobile applications were developed, which will be used as a benchmark for future applications' specifications and learn more about users' interactions on different national app platforms. That result makes a significant contribution to health institutes and health practitioners <ns0:ref type='bibr' target='#b59'>(Trang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Islam et al., 2020)</ns0:ref>.</ns0:p><ns0:p>During epidemics, it has been suggested it is prudent to allow for some loss of privacy and place trust in smart technologies to help fight deadly, invisible creatures. 
With the continuing spread of COVID-19, Singapore will continue to deploy technological tools and interventions <ns0:ref type='bibr' target='#b34'>(Lee &amp; Lee, 2020)</ns0:ref>.</ns0:p><ns0:p>A limitation of future research on an ultimate dependent variable is the adoption of COVID-19 applications recommendations for pre-and post-testing in future studies. Emphasis should be placed on collecting data about infectious diseases, ensuring public health and that epidemiological surveillance technology features are ethical and reflective of fair values, and reducing the vulnerability of at-risk individuals <ns0:ref type='bibr' target='#b52'>(Sharma et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b54'>Skoll, Miller &amp; Saxon, 2020;</ns0:ref><ns0:ref type='bibr' target='#b67'>Whaiduzzaman et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hendl, Chung &amp; Wild, 2020)</ns0:ref>.</ns0:p><ns0:p>Moreover, developing and improving the efficiency and effectiveness of information systems and technology in organizations as well as monitoring people's safety and privacy in the fight against <ns0:ref type='bibr'>COVID-19 (Mbunge, 2020;</ns0:ref><ns0:ref type='bibr' target='#b46'>O'Leary, 2020;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wang &amp; Liu, 2020)</ns0:ref> are essential. Additionally, expanding anti-snooper privacy safeguards, imposing usage restrictions in contacttracing, and adding a private messaging system will enhance overall privacy. There have been discussions about creating an application to track contacts directly using Bluetooth <ns0:ref type='bibr' target='#b8'>(Cho, ;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vitak &amp; Zimmer, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>The effect of the COVID-19 pandemic represents an enormous challenge to public health authorities and governments around the world. The pandemic put major pressure on health systems and resulted in fundamental changes to everyday life for individuals and organizations. Public health authorities introduced contact-tracing systems which include the use of digital contact-tracing mobile apps. Contact-tracing apps are promising technologies for rapid tracing and tracking of infected persons, and they can support manual contact-tracing and tracking methods to control the COVID-19 virus. However, some people avoid using digital surveillance Manuscript to be reviewed Computer Science apps altogether since they are concerned about their privacy. Governments and health authorities should address this issue and try to preserve the rights of those who do not wish to waive their privacy.</ns0:p><ns0:p>In most countries, the use of these apps is not mandatory, which makes it challenging to predict their acceptance and participation levels. It is significant for governments and health officials to gain the trust of their citizens and show suitable transparency by clarifying what personal data is collected and how it is being used. The efficiency of contact-tracing apps is highly dependent on how authorities address all related privacy challenges and concerns. Their efforts will surely determine the role of digital contact-tracing technologies in future pandemic occurrences and lessons learned from similar errors.</ns0:p><ns0:p>The challenges facing contact-tracing apps include, in addition to privacy, technical, usability, and addressing additional requirements reported by some users. 
A considerable number of contact-tracing apps were not welcomed by the public and suffered low acceptance levels, which dramatically affected their efficiency. As an example, only the Singaporean app had a penetration level of a little over 30%, the Australian and Swiss apps had a penetration level below 20%, and the penetration values for the majority of other apps around the world were less than 5%.</ns0:p><ns0:p>The volume of personal data contact-tracing apps collected varied considerably, some apps collected absolutely no data while others collected a significant amount of highly private personal data. The majority of the surveyed apps did not give users an option to deactivate the app, such as logging out, without uninstalling them. Additionally, the lack of standardization for contact-tracing technologies resulted in fragmented non-interoperable apps. As countries are coming out of lockdown and reopening borders, there is an essential need for a unified and interoperable contact-tracing app that can easily be implemented globally without compromising users' privacy.</ns0:p><ns0:p>A possible solution to the privacy issues and concerns can be implemented through a comprehensive government-mandated data privacy policy in the context of digital health applications. Another option is for governments to deploy fully decentralized and highly accurate applications, which do not keep any records of sensitive personal information and provide the same level of accuracy as the centralized approach. One suggestion for a decentralized approach is to use a blockchain-based app algorithm balancing users' privacy and public health requirements. Moreover, internet intermediaries must work with governments and civil society to address privacy and surveillance issues to improve new contact-tracing technology adoption levels in the future. </ns0:p><ns0:note type='other'>Figure Legends</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>It must be very clear to users what data is being collected, who is accessing the data, and how it is being used. It is key to study and understand human behavior throughout the PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64175:1:1:NEW 16 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64175:1:1:NEW 16 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64175:1:1:NEW 16 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The Methodology's Main Stages.Figure 2. Quality Assessment Flow Chart Figure 3. Total Number of Papers from Each Data Source Figure 4. Paper Selection Process Figure 5 Map to Visualize Contact-tracing Applications around the World Figure 6 Privacy Issues of Contact-tracing Applications Figure 7. Classification of Privacy during the the COVID-19 Pandemic</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 1. The Methodology's Main Stages.Figure 2. Quality Assessment Flow Chart Figure 3. Total Number of Papers from Each Data Source Figure 4. Paper Selection Process Figure 5 Map to Visualize Contact-tracing Applications around the World Figure 6 Privacy Issues of Contact-tracing Applications Figure 7. 
Classification of Privacy during the COVID-19 Pandemic</ns0:figDesc></ns0:figure> </ns0:body> "
"Editor comments (Sedat Akleylek) According to the reviewers' comments, the paper needs improvements before publishing. Recent and important papers should be discussed. Conclusions should be extended. Sections are not given in detail. Thank you for your comment. We included 14 additional recent and important papers. Part of those are added to the analysis and another part is added to the discussion on how to improve privacy of contact-tracing apps immediately after (Decentralized contact-tracing) section. We have also expanded the conclusions of multiple sections and the paper’s conclusion at the end. Reviewer 1 (Z Zhang) Basic reporting This literature review has followed a comprehensive methodology to study different digital technologies such as CTAs which have been used to control the COVID-19 spread. The manuscript has been written clearly using professional English which is easy to understand. Thank you for your comment. Experimental design The article content has fallen in the Aims and Scope of PeerJ Computer Science. The study has done a thorough investigation with a high ethical standard on all kinds of contact tracing applications (CTAs) and reviewed 800+ papers published in some prestigious journals such as Science Direct, IEEE, Scopus, etc. Thank you for your comment. Validity of the findings Conclusions are well stated, and valid. The result and recommendations made are beneficial to governments and countries all over the world. Thank you for your comment. Additional comments Figure 1 is a bit ambiguous. I would suggest changing it as in the attachment. Thank you for your suggestion and for including the attachment. We agree with the suggestion and have updated the figure accordingly to make it clear for the readers to understand. Reviewer 2 (Anonymous) Basic reporting - The paper is very well written and in simple words. It is thus easy to understand which is a very important characteristic of any document. Thank you for your comment. - The sections are titled as questions. While this is innovative and gets the point across, I prefer a more conventional approach to naming sections Thank you for your comment and we totally agree with your suggestion. We have changed all titles to follow the conventional approach without using any questions for naming the sections. - Each section and each point has to be discussed in much more detail. The authors superficially discuss the ideas of various works and do so in a rather heterogeneous manner with separate paragraphs dedicated to separate papers. I would look forward to a more homogeneous discussion of ideas with disparate work smoothly falling into the discussion. Agreed. We have updated the following sections in a homogenous manner and discussed ideas in a more holistic approach: Privacy protection laws for COVID-19 applications, Protection Law, Values, Issues, Entities contributing to preserving privacy of healthcare applications, and the Conclusion. Experimental design - Very little discussion is dedicated to the users’ privacy and its breach. The authors discuss laws that are prevalent around the world, “ethical” issues, and “values”. I would prefer a much more detailed analysis of the issues around privacy and its breach first in a general context and subsequently specifically with respect to the pandemic and the tracing apps. Thank you for your comment and we totally agree with it. 
Under the section (Issues) we have included a much more detailed analysis of the privacy and its breach, first in a general context and then with respect to the pandemic. We have also included a specific diagram for the same purpose (Figure 6) - The authors discuss the attempts made by health authorities to protect and preserve privacy. I feel this discussion is limited. The authors should also include the efforts made by governments, health authorities, world bodies, and also those made at individual levels. Agreed. As suggested, we have added a dedicated section titled (Entities contributing to preserving privacy of healthcare applications) which include efforts by governments, health authorities, and world bodies. Validity of the findings - Conclusions on various aspects of the study are not drawn. The authors talk about the issues based on the contributions of various endeavors but do not draw appropriate conclusions based on these. Thank you for your comment. We have drawn appropriate conclusions in multiple sections and improved the paper’s conclusion at the end. Additional comments - A few figures, block diagrams etc. would help the reader more effectively grasp the ideas being discussed. Agreed. We have added two diagrams and a table as follows: 1. A diagram classifying privacy issues is added to the paper (Figure 6) 2. A diagram showing the geographical locations of the contact tracing apps around the world is added (Figure 5) 3. Table 3 is added summarizing all contact tracing apps and their privacy concerns. -The authors may want to justify the content and both ends for a better appearance. Thank you for your comment. We have justified the content and both ends as requested for better appearance. "
Here is a paper. Please give your review comments after reading it.
323
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. The side-channel cryptanalysis method based on convolutional neural network (CNNSCA) can effectively carry out cryptographic attacks. The CNNSCA network models that achieve cryptanalysis mainly include CNNSCA based on the VGG variant (VGG-CNNSCA) and CNNSCA based on the Alexnet variant (Alex-CNNSCA). The learning ability and cryptanalysis performance of these CNNSCA models are not optimal, and the trained model has low accuracy, too long training time, and takes up more computing resources. In order to improve the overall performance of CNNSCA, the paper will improve CNNSCA model design and hyperparameter optimization.</ns0:p><ns0:p>Methods. The paper first studied the CNN architecture composition in the SCA application scenario, and derives the calculation process of the CNN core algorithm for side-channel leakage of one-dimensional data. Secondly, a new basic model of CNNSCA was designed by comprehensively using the advantages of VGG-CNNSCA model classification and fitting efficiency and Alex-CNNSCA model occupying less computing resources, in order to better reduce the gradient dispersion problem of error back propagation in deep networks , the SE (Squeeze-and-Excitation) module is newly embedded in this basic model , this module is used for the first time in the CNNSCA model, which forms a new idea for the design of the CNNSCA model.Then apply this basic model to a known first-order masked dataset from the side-channel leak public database (ASCAD). In this application scenario, according to the model design rules and actual experimental results, exclude non-essential experimental parameters. Optimize the various hyperparameters of the basic model in the most objective experimental parameter interval to improve its cryptanalysis performance, which results in a hyper-parameter optimization scheme and a final benchmark for the determination of hyper-parameters.</ns0:p><ns0:p>Results. Finally, a new CNNSCA model optimized architecture for attacking unprotected encryption devices is obtained--CNNSCAnew. Through comparative experiments, CNNSCAnew's guessing entropy evaluation results converged to 61. From model training to successful recovery of the key, the total time spent was shortened to about 30 minutes, and we obtained better performance than other CNNSCA models.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Side Channel Analysis <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> (SCA) refers to bypassing the tedious analysis of encryption algorithms, by using the information (such as execution time, power consumption, electromagnetic radiation , etc.)leaked by the hardware device embedded in the encryption algorithm during the calculation process , combined with statistical analysis methods to attack cryptographic systems. 
The side-channel cryptanalysis methods are divided into profiling methods and non-profiling methods. Non-profiling methods include the differential power attack <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> (DPA), the correlation power attack <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> (CPA) and the mutual information attack <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref> (MIA); profiling methods include the template attack <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> (TA), side-channel cryptographic attacks based on the multi-layer perceptron (MLPSCA), and side-channel cryptographic attacks based on convolutional neural networks (CNNSCA). Although the non-profiling methods are simple and direct, a weak side-channel signal or excessive environmental noise can cause the attack to fail. A profiling method can effectively analyze the characteristics of the side-channel signal when the encryption knowledge of the attacked device is obtained in advance, so it is easier to crack the cryptogramme. When a copy of the encryption implementation is available, the best cryptanalysis attack among the traditional SCA methods is TA <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref><ns0:ref type='bibr' target='#b6'>[6]</ns0:ref><ns0:ref type='bibr' target='#b7'>[7]</ns0:ref><ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> , but TA has difficulties in statistical analysis when processing high-dimensional side-channel signals and cannot attack protected encryption implementations. With the rapid development of supervised machine learning algorithms, which can effectively analyze one-dimensional data similar to power consumption traces in other fields, side-channel cryptanalysis based on machine learning (MLSCA) <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref><ns0:ref type='bibr' target='#b10'>[10]</ns0:ref><ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> has begun to emerge. The new profiling method MLPSCA surpasses the traditional profiling methods in attack performance <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref><ns0:ref type='bibr' target='#b12'>[12]</ns0:ref><ns0:ref type='bibr' target='#b14'>[13]</ns0:ref> and overcomes the shortcoming of template attacks that cannot handle high-dimensional side-channel signals, but it also loses effectiveness when attacking protected encryption. Nowadays, with the development of machine learning, deep learning techniques with excellent performance in image classification and target recognition have become popular. Studies have shown that applying convolutional neural network algorithms under deep learning can produce better cryptanalysis performance in side-channel analysis <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref><ns0:ref type='bibr' target='#b14'>[13]</ns0:ref><ns0:ref type='bibr' target='#b15'>[14]</ns0:ref><ns0:ref type='bibr' target='#b16'>[15]</ns0:ref><ns0:ref type='bibr' target='#b17'>[16]</ns0:ref> . Deep networks help to mine the deep features in the data, which gives the neural network more powerful performance and enables CNNSCA to attack protected encryption implementations as well. In the side-channel analysis application scenario, deep learning eliminates the step of manually extracting features from the workflow of model construction.
For example, in the traditional bypass attack method, the TA with better attack effect only selects 5 strong feature points, while the deep learning model can select hundreds to thousands of feature points, select more features to construct a template, it is extremely beneficial to the generalization and robustness of the sidechannel analysis model.</ns0:p><ns0:p>Analyze the above domestic and foreign documents, there are two main types of CNN structures that have successfully used CNNSCA to achieve cryptanalysis, which are based on two variants of Alexnet and VGGnet network structures <ns0:ref type='bibr' target='#b12'>[12,</ns0:ref><ns0:ref type='bibr' target='#b17'>[16]</ns0:ref><ns0:ref type='bibr' target='#b18'>[17]</ns0:ref><ns0:ref type='bibr' target='#b19'>[18]</ns0:ref> . Among them, the 2012 ILSVRC(ImageNet Large Scale Visual Recognition Challenge) champion structure Alexnet <ns0:ref type='bibr' target='#b21'>[19]</ns0:ref> , although successful in the SCA application, but in fact, the training accuracy of CNNSCA based on this network variant is not high, moreover, the Alex-CNNSCA network model in the literature [16] has a large amount of training parameters and a long calculation time, which means that there is still room for optimization of this network structure. The 2013 ILSVRC champion network ZFNet <ns0:ref type='bibr' target='#b22'>[20]</ns0:ref> has not changed much from the 2012 first ILSVRC champion network Alexnet. The 2014 ILSVRC runner-up structure VGGnet <ns0:ref type='bibr' target='#b23'>[21]</ns0:ref> also succeeded in breaking secrets in the SCA application. In the literature [12,17-18], VGG-CNNSCA models with different parameters were proposed. Among them, the best cryptanalysis performance is in the literature [12] proposed VGG-CNNSCA, but its training accuracy is still not high. Obviously, there is still room for improvement in the cryptanalysis performance. The 2014 ILSVRC champion network GoogLeNet <ns0:ref type='bibr' target='#b24'>[22]</ns0:ref> and the 2015 ILSVRC champion network ResNet <ns0:ref type='bibr' target='#b25'>[23]</ns0:ref> have also been used in SCA, but the effect is average. This conclusion has been confirmed in the literature [12]. The last ILSVRC champion network in 2017 was the SEnet <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> proposed by Momenta and Oxford University. There is currently little literature on applying this network to SCA scenarios.</ns0:p><ns0:p>Although CNNSCA overcomes the shortcomings of the previous profiling methods and improves the cryptanalysis performance, the existing CNNSCA model learning ability and cryptanalysis performance are not optimal. The disadvantages of these models are: low training accuracy and excessive training time long, taking up too much computing resources, etc. The reason is mainly affected by CNNSCA model design and hyperparameter optimization. In order to improve the overall performance of CNNSCA, the paper will improve CNNSCA model design and hyperparameter optimization, and has done the following work:</ns0:p><ns0:p>1. The composition of the CNN architecture in the SCA application scenario is studied, and the calculation process of the CNN core algorithm for side-channel leakage of onedimensional data is deduced. 
The algorithms involved in the paper experiments are all programmed in the Python language, and use the deep learning architecture Keras library <ns0:ref type='bibr' target='#b28'>[25]</ns0:ref> (version 2.4.3) or directly use the GPU version of the Tensorflow library <ns0:ref type='bibr' target='#b29'>[26]</ns0:ref> (version 2.2.0). The experiment was carried out on an ordinary computer equipped with 16 GB RAM and 8G GPU (Nvidia GF RTX 2060). All experiments use side-channel leaking public data sets-known first-order mask data sets in the ASCAD database, use 50,000 pieces of data from its training set to train the model, and randomly select 1,000 pieces of data from its test set for testing. When testing the cryptanalysis performance of the CNNSCA model, the guessing entropy index is used to evaluate the cryptanalysis performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods Materials</ns0:head><ns0:p>1 CNN Convolutional Neural Network (CNN) is one of the most successful algorithms of artificial intelligence, and it is a multi-layer neural network with a new structure. Its design is inspired by the research on the optic nerve receptive field <ns0:ref type='bibr' target='#b30'>[27]</ns0:ref><ns0:ref type='bibr' target='#b32'>[28]</ns0:ref> . The core component of CNN, the convolution kernel, is the structural embodiment of the local receptive field. It belongs to the deep network of back propagation training. It uses the two-dimensional spatial relationship of the data to reduce the number of parameters that need to be learned, and improves the training performance of the BP algorithm(Error Back Propagation, which is used to calculate the gradient of the loss function with respect to the parameters of the neural network) to a certain extent. The main difference between CNN and MLP is the addition of the convolution block structure. In the convolution block, a small part of the input data is used as the original input of the network structure, and the data information is forwarded layer by layer in the network, and each layer uses several convolution cores to extract features of the input data. Convolutional neural networks have been successfully applied in computer vision, natural language processing, disaster climate prediction and other fields, especially shine on ILSVRC <ns0:ref type='bibr' target='#b33'>[29]</ns0:ref> . ILSVRC is one of the most popular and authoritative academic competitions in the field of machine vision in recent years, representing the highest level in the field of imaging. The introduction of outstanding CNNs in the image classification and target positioning projects of the ILSVRC competition over the years is shown in Table <ns0:ref type='table'>1</ns0:ref>(CNN with outstanding performance in previous ILSVRC competitions).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> sorts out the champion networks and individual runner-up networks of the last ILSVRC classification task from 2012 to 2017, and briefly introduces their names, rankings, classification results under the top1 and top5 indicators, and some remarks. Top1 refers to the largest the training effect of the network model-the learning rate (also called the step size), aims to promote the gradient (ie, the error gradient) drop during the training process.</ns0:p><ns0:p>The number of iterations and the amount of batch learning affect the degree of model training, and the optimizer and learning rate are used to control the gradient of the error. 
These parameters all have an important impact on CNNSCA's cryptanalysis performance and need to be adjusted according to specific attack scenarios. (the so-called depth), making the CNN resistant to time-domain distortion Vector features <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref> . The convolutional layer usually needs to set the padding mode , one is valid padding , so that the dimension of the feature vector after convolution is smaller than the original vector; the other is the same padding , so that the convolutional The feature vector dimension is the same as the original vector.</ns0:p><ns0:p>b) Batch Normalization layers <ns0:ref type='bibr' target='#b36'>[32]</ns0:ref> (BN for short), whose role is to reduce the deviation of covariates in the two stages of training and prediction, which is conducive to the use of a higher learning rate for the network model <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref> . c) Activation layers (ACT for short) are non-linear layers and consist of a single real function, which acts on each coordinate of the input vector. The ReLU function is currently the first choice in deep learning.</ns0:p></ns0:div> <ns0:div><ns0:head>d)</ns0:head><ns0:p>Pooling layers (POOL for short) are non-linear layers. Use the pooling window to slide on the input vector to extract salient feature points to reduce the feature dimension. There is no weight in the pooling layer, which will not cause distortion of the input signal.</ns0:p><ns0:p>e) Fully-Connected layers (FC for short), the neurons between the layers are completely connected, and these layers need to train a lot of weights. This layer is expressed by an affine function as: D-dimensional x vector is the input, and Ax+B is the output.</ns0:p><ns0:p>Among them, A&#8712;RC&#215;D is the weight matrix and B&#8712;RC is the deviation vector.</ns0:p><ns0:p>These weights and deviations are the training parameters of the FC layer. <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> . The structure of the SE module is shown in Figure <ns0:ref type='figure'>1</ns0:ref> (SE module).</ns0:p><ns0:p>In Figure <ns0:ref type='figure'>1</ns0:ref>, the SE module uses global pooling as a squeeze operation, and then uses two FC layers to form an excitation structure to profile the correlation between channels, and output and input the same number of feature channels weights. The advantages of this are: 1) it has more nonlinearity and can better fit the complex correlation between channels; 2) the amount of parameters and the amount of calculation are greatly reduced. Then obtain the normalized weight between 0 and 1 through a sigmoid function, and then use a scale operation to weight the normalized weight to the features of each channel <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> . Finally, the output of scale is superimposed on the input x before the SE module to generate a new vector .</ns0:p><ns0:p>x % 3.2 Core algorithm of CNN for SCA 1) Convolution calculation Usually convolution operations in the field of computer vision are numerical operations on twodimensional image data. In the SCA application scenario, the dimensionality of the convolution operation is adjusted, which is to slide the convolution kernel on the one-dimensional energy trace data. The number of steps moved each time is called the step length, and the convolution calculation is performed on each sliding to obtain a value. After one round of calculation is completed, a feature vector representing the vector feature is obtained. 
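To make the SE (Squeeze-and-Excitation) structure described above concrete for one-dimensional side-channel traces, the following is a minimal Keras sketch rather than the implementation used in this paper; the function name, the default reduction ratio, and the exact wiring are assumptions, while the squeeze, excitation, scale, and final superposition-on-input steps follow the description above.

```python
from tensorflow.keras import layers


def se_block_1d(x, ratio=16):
    """Squeeze-and-Excitation block for 1D feature maps (sketch).

    x has shape (batch, length, channels). Squeeze: global average pooling
    produces one descriptor per channel. Excitation: two fully connected
    layers (ReLU, then sigmoid) output one normalized weight per channel.
    Scale: the channel weights rescale the features, and the result is
    superimposed on the block input, as described in the text.
    """
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                  # squeeze
    s = layers.Dense(channels // ratio, activation='relu')(s)  # excitation (bottleneck)
    s = layers.Dense(channels, activation='sigmoid')(s)        # per-channel weights in [0, 1]
    s = layers.Reshape((1, channels))(s)                     # broadcast over the time axis
    scaled = layers.Multiply()([x, s])                       # scale
    return layers.Add()([x, scaled])                         # superimpose on the input
```

A ratio of 16 mirrors the conventional SE setting mentioned later for the first fully connected layer of the module; the ratio actually used is tuned in the hyperparameter experiments below.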
The rule of numerical operation is to multiply a one-dimensional convolution kernel with a value at the corresponding position of a one-dimensional vector, and then sum. For example, there is a 1x3 convolution kernel, which convolves a 1x6 one-dimensional vector with a step size of 1. The calculation process is shown in Figure <ns0:ref type='figure'>2</ns0:ref> (Convolution calculation process).</ns0:p><ns0:p>In Figure <ns0:ref type='figure'>2</ns0:ref>(a), the convolution kernel slides from the left side of the input vector. The first numerical calculation is: 1x1+0x0+1x1=2, and the first value 2 of the new feature vector is obtained. Then, the convolution kernel slides one step to the right to continue the numerical calculation: 1x0+0x1+1x0=0, to get the second value 0 of the new feature vector, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>(b). Repeat this process until the convolution kernel slides to the far right of the input vector, and the convolution calculation is complete.</ns0:p><ns0:p>2) Pooling calculation There are three ways of pooling: Max-Pooling, Mean-Pooling and Stochastic Pooling. Maximum pooling is to extract the maximum value of the value in the pooling window, average pooling is to extract the average value of the value in the pooling window, and random pooling is to randomly extract the value in the pooling window. The original pooling operation of CNN is also a numerical operation on two-dimensional image data. In the SCA application scenario, the pooling calculation has also been dimensionally adjusted, and a pooling mode is selected for calculation on the one-dimensional energy trace data. For example, the pooling window size is 1x2, and the maximum or average pooling operation is performed on a 1x6 one-dimensional vector with a step size of 2. The pooling calculation is shown in Figure <ns0:ref type='figure'>3</ns0:ref> (Pooling calculation process).</ns0:p><ns0:p>In Figure <ns0:ref type='figure'>3</ns0:ref>(a), the maximum pooling starts from the left side of the input vector. Every two steps of the pooling window, the maximum value of the two values in the window is selected as a value of the new feature vector. The average pooling is shown in Figure <ns0:ref type='figure'>3</ns0:ref>(b). For every two sliding steps of the pooling window, the average of the two values of the window class is calculated as a value of the new feature vector. The pooling window slides to the right until the rightmost of the input vector, and the pooling calculation is complete.</ns0:p></ns0:div> <ns0:div><ns0:head>3) softmax function</ns0:head><ns0:p>This function normalizes the output value and converts all output values into probabilities. The sum of the probabilities is 1. The formula of softmax is:</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#119904;&#119900;&#119891;&#119905;&#119898;&#119886;&#119909; (&#119909; &#119894; ) = &#119890;&#119909;&#119901; (&#119909; &#119894; ) &#120564; &#119895; &#119890;&#119909;&#119901; (&#119909; &#119895; )</ns0:formula><ns0:p>Here represents the input of the i-th neuron in the softmax layer, represents the input of i x j x the j-th neurons in the softmax layer, and is the sum of calculations for . 
The result of the softmax function is used as the fitted probability of the label of the i-th neuron.</ns0:p></ns0:div> <ns0:div><ns0:head>4) Principle of weight adjustment</ns0:head><ns0:p>Using the cost function and the gradient descent algorithm <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref> , the weights are automatically adjusted in the direction of error reduction each time the network model is trained; this is repeated until all iterations are over and the weight adjustment is complete.</ns0:p></ns0:div> <ns0:div><ns0:head>5) Evaluation of Cryptanalysis Performance</ns0:head><ns0:p>Generally, security evaluators consider two indicators when evaluating CNNSCA's cryptanalysis performance: one is the training accuracy of the neural network model during modeling, the Acc indicator <ns0:ref type='bibr' target='#b39'>[35]</ns0:ref> , and the other is the security indicator of the key obtained in the attack phase, the guessing entropy <ns0:ref type='bibr' target='#b40'>[36]</ns0:ref><ns0:ref type='bibr' target='#b41'>[37]</ns0:ref> . The guessing entropy index is commonly used to evaluate SCA cryptanalysis performance and measures the efficiency of key recovery. Guessing Entropy (GE) is obtained through a custom rank function Rank(&#8901;), which is defined as:</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>\mathrm{Rank}(g, D_{train}, D_{test}, n) = \left|\{\, k \in K \mid d_{n}[k] \ge d_{n}[k^{*}] \,\}\right|</ns0:formula><ns0:p>The adversary uses the modeling data set D train to establish a side-channel analysis model g, and uses n energy trace samples in the attack data set D test to perform n attacks during the attack phase.</ns0:p><ns0:p>After each attack, the logarithms of the distribution probabilities of the 256 types of hypothetical cryptogrammes are obtained and composed into a vector d_i = [d_i[1], d_i[2], ..., d_i[|K|]], whose indexes are arranged in the positive order of the hypothetical cryptogramme's key space (the index counts from zero), where i&#8712;n, k&#8712;K, and K is the key space of the hypothetical cryptogramme. The results of each attack are accumulated. Then, the rank function Rank(&#8901;) sorts all the elements of the vector d i in descending order by value, keeping the index associated with each element consistent before and after sorting, and obtains a new ranking vector D_i = [D_i[1], D_i[2], ..., D_i[|K|]], where each element D i [k] contains the two values k and d[k]. Finally, the position of the log-probability element of the known cryptogramme k * in D i is output; this is the guessing entropy GE(d[k * ]). At the i-th attack, the better the energy trace matches the model of the real cryptogramme, the higher the ranking of its GE(d[k * ]). The guessing entropy is the GE(d[k * ]) ranking output by each attack--the rank. Over n attacks, the better the performance and efficiency of the cryptanalysis method, the faster the ranking of GE(d[k * ]) converges to zero. When the guessing entropy converges to zero at the i-th attack and continues to converge in subsequent attacks, the attack is considered successful.
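As a concrete illustration of the rank computation just defined, the following numpy sketch (a simplified illustration, not the code used in this paper) accumulates the per-attack log-probabilities over the 256 key hypotheses and reports the rank of the known cryptogramme k * after each attack; the function name and the toy data are assumptions.

```python
import numpy as np


def guessing_entropy_ranks(log_probs, true_key):
    """Rank of the known key hypothesis after each accumulated attack (sketch).

    log_probs: array of shape (n_attacks, 256); entry [i, k] is the log
    probability the profiling model assigns to key hypothesis k for the i-th
    attack trace. true_key: index k* of the known key byte. Returns one rank
    per attack; the attack is considered successful once the rank stays at 0.
    """
    d = np.cumsum(log_probs, axis=0)  # accumulate scores over successive attacks
    ranks = []
    for d_i in d:
        # Count hypotheses scored at least as high as the true key, then
        # subtract 1 so that rank 0 means the true key is ranked first,
        # matching the convergence-to-zero convention used in the text.
        ranks.append(int(np.sum(d_i >= d_i[true_key]) - 1))
    return np.array(ranks)


# Toy usage with random scores (replace with the model's predicted log-probabilities).
rng = np.random.default_rng(0)
toy_scores = np.log(rng.dirichlet(np.ones(256), size=1000))
print(guessing_entropy_ranks(toy_scores, true_key=42)[:5])
```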
The adversary only needs i attacks to crack the cryptogramme, that is, only i power consumption traces are needed to break the secret. Equation ( <ns0:ref type='formula'>2</ns0:ref>) can be rewritten as (3):</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_4'>GE &#119899; (&#119892;) = Every[&#119877;ank(&#119892;,&#119863; &#119905;&#119903;&#119886;&#119894;&#119899; ,&#119863; &#119905;&#119890;&#119904;&#119905; ,&#119899;)]</ns0:formula><ns0:p>4 Side-channel leaking public data sets The newly published ASCAD database <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> aims to achieve AES-128 with first-order mask protection, namely 8-bit AVR microcontroller (ATmega8515), in which the energy trace is the data signal converted by the collected electromagnetic radiation. The adversary outputs the collected signal for the third S-box of the first round of AES encryption, and launches an attack against the first AES key byte. The database follows the MNIST database <ns0:ref type='bibr'>[38]</ns0:ref> rules and provides a total of four data sets, each with 60,000 entries power consumption traces, of which 50,000 power consumption traces are used for analysis/training, and 10,000 power consumption traces are used for testing/attack. The first three ASCAD data sets respectively represent the encryption realization leakage with three different random delay protection countermeasures. The signal offsets desync=0, desync=50, and desync=100 are used to represent these three data sets with two strategies of mask and delay. All power consumption traces in the first three types of data sets contain 700 feature points. These feature points are selected from the original energy trace containing 100,000 feature points, and the selection basis is the position of the largest signal peak. When the mask is known, the maximum signal-to-noise ratio of the data set can reach 0.8, but it is almost 0 when the mask is unknown. The last ASCAD data set stores the original energy trace. The initial configuration basis and selection of CNNSCAbase are as follows: Find out the two prototypes of Alex-CNNSCA and VGG-CNNSCA to set the same parameters, set these parameters in CNNSCAbase in the same way, these parameters are as follows: 5 convolutional blocks , 3 full connections , the padding modes of the convolutional layers are SAME, and the activation functions of all layers before the last layer of the network select ReLU. In addition, in most classification tasks, convolutional networks use softmax, crossentropy, and RMSprop as the model's classification function, loss function, and optimizer <ns0:ref type='bibr' target='#b21'>[19]</ns0:ref><ns0:ref type='bibr' target='#b22'>[20]</ns0:ref><ns0:ref type='bibr' target='#b23'>[21]</ns0:ref><ns0:ref type='bibr' target='#b24'>[22]</ns0:ref><ns0:ref type='bibr' target='#b25'>[23]</ns0:ref> . Here, CNNSCAbase also chooses to use these three activation functions . Since the side-channel leakage data belongs to one-dimensional data, the processing complexity is less than that of two-dimensional data. Here, the convolution layer of each convolution block is initialized to 1, and the number of convolution kernels in the first convolution layer is 64 (choose the smaller number of Alexnet or VGGnet), the size of the convolution kernel is 3x3, the step size is 1, the pooling mode is tentatively averaged pooling mode, the pooling window size is 2, and the step size is 2. 
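As a minimal Keras illustration of these initial convolution-block settings (one convolutional layer with 64 kernels of size 3 and stride 1, SAME padding, ReLU activation, and average pooling with window size 2 and stride 2), the sketch below is an assumption-based rendering rather than the authors' code; the SE module discussed next is omitted here.

```python
from tensorflow.keras import layers


def cnnsca_base_block(x, n_kernels=64):
    """One initial CNNSCAbase convolution block (sketch).

    A single Conv1D layer with SAME padding and ReLU activation, followed by
    average pooling with a window of 2 and stride 2, matching the initial
    configuration described above (64 kernels of size 3 in the first block,
    doubling in later blocks according to Rule 3 below).
    """
    x = layers.Conv1D(n_kernels, kernel_size=3, strides=1,
                      padding='same', activation='relu')(x)
    x = layers.AveragePooling1D(pool_size=2, strides=2)(x)
    return x
```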
In addition, in the initial setting of CNNSCAbase, a new SE module is embedded in the last four convolution blocks. All initial structure parameters of CNNSCAbase are shown in Table <ns0:ref type='table'>2</ns0:ref>(CNNSCAbase Configuration).</ns0:p></ns0:div> <ns0:div><ns0:head>Methods</ns0:head><ns0:p>Here Next, we discuss the hyperparameter optimization of the CNNSCA model. Model structure parameters and training parameters are hyperparameters and need to be set in advance. Later, we will design a set of experimental procedures to optimize these hyperparameters in specific application scenarios. For example, we choose to determine the model parameters first rather than the global training parameters, first determine the number of Conv layers, rather than first determine the kernel size or the number of filters . The reason for this design is: currently, Python's deep learning architecture library <ns0:ref type='bibr' target='#b28'>[25]</ns0:ref><ns0:ref type='bibr' target='#b29'>[26]</ns0:ref> is mainly used to program the CNN network. When using these library methods, the CNN network structure is usually programmed first. The order in which these parameters appear in the program code will be affected by the library methods, and then the training parameters are designed according to the size of the network and the size of the training set. It is precisely in consideration of the order in which the parameters appear during programming, we have designed the order of the following experimental procedures.</ns0:p><ns0:p>2 Selection and optimization of CNN structure parameters for side-channel cryptanalysis 2.1 Structural parameter selection rules In section Methods 1, the base model CNNSCAbase is set, and the best model after parameter optimization will be named CNNSCAnew later. In CNNSCAbase, in addition to the specific set of structural parameters, the remaining structural parameters need to be customized. These structural parameters include classification function, loss function, optimizer, the number of convolutional layers in each convolution block, the number of convolution kernels in the convolution layer, convolution kernel size, pooling layer pooling mode. When choosing these custom structure parameters, you need to follow the classic rules of building a deep learning network structure <ns0:ref type='bibr' target='#b22'>[20,</ns0:ref><ns0:ref type='bibr' target='#b24'>22]</ns0:ref> , which can reduce the number of unnecessary test parameters. The rules are as follows:</ns0:p><ns0:p>Rule 1: Set the same parameters for the convolutional layers in the same convolutional block to keep the amount of data generated by different layers unchanged. Rule 2: The dimensionality of each pooling window is 2, and the window sliding step is also 2, each operation reduces the dimensionality of the input data to half. Rule 3: In the convolutional layer of the i-th block (starting from i=1), the number of convolution kernels is n: , i&#8805;2. This rule keeps the amount of data processed by different</ns0:p><ns0:formula xml:id='formula_5'>1 1 2 i i n n &#61485; &#61501; &#61620;</ns0:formula><ns0:p>convolution blocks as constant as possible. The network structure characteristics of VGG-16 in this reference [21] are formulated. 
Rule 4: The size of the convolution kernel of all convolution layers is the same.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Structural parameter optimization</ns0:head><ns0:p>Among the custom structure parameters, the structure parameters that need to be further adjusted through experimental analysis are: the number of convolution layers in each convolution block, the number of convolution kernels in the convolution layer, the size of the convolution kernel, the pooling mode of the pooling layer, SE module. The experimental process of structural parameter optimization is as follows:</ns0:p><ns0:p>1) Number of convolutional layers In Section Methods 1, in the initial CNNSCAbase structure, the number of convolutional layers for each convolution block is 1, and the convolutional structure is named Cnov1. Refer to the number of convolutional layers of different convolutional blocks of the Alexnet and VGGnet16 prototypes. It is found that the minimum number is 1 and the maximum is 3, and the small number is distributed in the front convolution block, and the large number is at the back. This is also to build deep learning The common habit of the Internet. Therefore, the upper limit of the number of convolutional layers of the CNNSCAbase convolution block is set to 3, and the baseline is Cnov1, and a certain convolutional layer parameter configuration can be obtained through two sets of necessary experiments. When training the CNNSCAbase model, the training iteration and batch parameters of the current optimal CNNSCA model <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> are used, which are 75 and 200 (in all experiments in section Methods 2.2, unless otherwise specified, the iteration and batch parameters are used. experiment).</ns0:p><ns0:p>Experiment 1: Set up a model in which the number of convolutional layers in 5 convolutional blocks is 2, and other parameters are consistent with CNNSCAbase, and the structure is named Cnov2. Then set the number of convolutional layers of the first 4 convolutional blocks to 2, and the convolutional layer of the last convolutional block to 3. Other parameters are consistent with CNNSCAbase, and the structure is named Cnov3. The specific settings of the number of convolutional layers of each convolution block of Cnov1~3 are shown in Table <ns0:ref type='table'>3</ns0:ref>(CNNSCAbase.Conv1-7 Configuration). The three structures constructed are trained and tested, and the results of experiment 1 are shown in Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>(Convergence of guessing entropy of Cnov1~3).</ns0:p><ns0:p>From the results in Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>, it is found that when Cnov2 and Cnov1 attack the 750th energy trace, their guessing entropy basically converges to 0, while Cnov3 cannot converge in a finite number of (1000) attacks. When doing further analysis, if you set two or more convolutional blocks with 3 convolutional layers in the 5 convolutional blocks of the model, the calculation amount and parameter amount of model training will increase by several times, and the 8G GPU memory used in the experiment will be directly exhausted , unable to run the code, then this parameter setting method will have no practical significance. 
Therefore, the upper limit of the number of convolutional layers for each convolutional block is determined to be 2.</ns0:p><ns0:p>Experiment 2: On the basis of the conclusion of Experiment 1, the convolutional layer parameter setting of each convolution block is further accurate. As shown in Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>, the convergence of the orange line representing the entropy of Cnov2's guess is more stable than that of Cnov1, but it is obvious that there are more convolutional layers, which means that the amount of model calculations and parameters are relatively large, which affects the overall performance of the model. Therefore, four structures of Cnov4~7 are set, and each structure sequentially reduces the number of convolutional layers in each convolution block of Cnov2 by one. The specific settings of the number of convolutional layers of each convolution block of Cnov4~7 are shown in Table <ns0:ref type='table'>3</ns0:ref>(CNNSCAbase.Conv1-7 Configuration). Train and test these constructed structures, and the results of experiment 2 are shown in Figure <ns0:ref type='figure'>7</ns0:ref>(Convergence of guessing entropy of Cnov1~2,4~7).</ns0:p><ns0:p>The red curve representing the entropy of Cnov5 guessing in Figure <ns0:ref type='figure'>7</ns0:ref> converges optimally. Finally, the number of convolutional layers of the Cnov5 structure is determined, and the benchmark is set for the number of convolutional layers of CNNSCAnew.</ns0:p><ns0:p>2) The number of convolution kernels in the convolution layer It is known that the number of convolution kernels of each convolutional layer of CNNSCAbase is initially set according to Rule 3. The number of convolution kernels of the first convolutional layer is 64. Usually increasing the number of convolution kernels means that more dimensional feature extraction is performed on the input data, thereby improving the classification efficiency of the convolutional network. But it will inevitably lead to an increase in the amount of calculation and storage of the attack device, which will lead to an increase in the training time of the model. Therefore, under the condition that the efficiency loss of the guarantee model is not large, the model training time can be reduced by reducing the number of convolution kernels.</ns0:p><ns0:p>Since the number of convolution kernels in the later layer increases by a factor of 2 of the number of convolution kernels in the first convolution layer, to determine the number of convolution kernels as a benchmark, it is only necessary to test the number of convolution kernels in the first convolution layer. At the same time, the CNNSCA model in reference [12], the upper limit of the number of convolution kernels in the convolution layer is 512, which can achieve the effect of breaking the density, so the paper also adjusts the upper limit of the number of convolution kernels to 512.</ns0:p><ns0:p>Experiment 3: Name the four structures tested as filter1, filter2, filter3, and filter4. The convolution kernel values of the first convolution layer are 8, 16, 32, 64, and the number of convolution kernels of the remaining four convolution blocks is also increased by a factor of 2 respectively. The upper limit of the number of convolution kernels is always 512. Other structural parameters are the parameters of the current CNNSCAnew. 
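Before looking at the results, the per-block kernel counts implied by Rule 3 (each block doubling the previous count, capped at 512) for the four candidate structures can be listed with a short script; this is only an illustration of the tested parameter grid, not code from the paper.

```python
# Per-block convolution-kernel counts under Rule 3 (n_i = 2 * n_{i-1}),
# capped at 512, for the five convolution blocks of filter1~filter4.
FIRST_LAYER_KERNELS = {'filter1': 8, 'filter2': 16, 'filter3': 32, 'filter4': 64}

for name, n1 in FIRST_LAYER_KERNELS.items():
    counts = [min(n1 * 2 ** i, 512) for i in range(5)]
    print(name, counts)
# filter3, for example, uses 32, 64, 128, 256, 512 kernels in blocks 1..5.
```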
Train and test the filter1~4 structure, and the result of experiment 3 is shown in Figure <ns0:ref type='figure'>8</ns0:ref>(Convergence of guessing entropy of filter1~4 (epochs=75)).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> shows that after 75 iterations of training, the guessing entropy of the filter4 structure cannot converge. Although the guessing entropy of the filter1~3 structure converges, it fluctuates all the time. When checking the training accuracy of the filter1~3 structure, it is found that the accuracy of the three structures has reached more than 99%, or even reached 1. Obviously, the model has an overfitting phenomenon, which is the most common problem in neural networks. Therefore, the number of training iterations of the filter1~3 structure is reduced to 40, the three structures are retrained, and then the test set is attacked again, and the result shown in Figure <ns0:ref type='figure'>9</ns0:ref>(Convergence of guessing entropy of filter1~3 (epochs=40)) is obtained. The guessing entropy of the filter1~3 structure in Figure <ns0:ref type='figure'>9</ns0:ref> all converge to rank 0, and filter3 converges to the position of rank 0 earliest. In summary, the benchmark for selecting convolution kernel parameters is the filter3 structure, and the convolution kernel parameters of the CNNSCAnew structure are updated synchronously.</ns0:p></ns0:div> <ns0:div><ns0:head>3) Pooling mode of pooling layer</ns0:head><ns0:p>It is known that the initial setting mode of the pooling layer of CNNSCAbase is AveragePool, and another common pooling mode is MaxPooling. According to rule 3, both the pooling window and the pooling step size are still selected here.</ns0:p><ns0:p>Experiment 4: Will test the impact of two pooling modes AveragePool and MaxPooling on the current CNNSCAnew structure. The result of experiment 4 is shown in Figure <ns0:ref type='figure' target='#fig_3'>10</ns0:ref>(Convergence of guessing entropy of AveragePool and MaxPool structure).</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_3'>10</ns0:ref>, it is obvious that the guessing entropy convergence of the average pooling structure is better than the maximum pooling structure, so the benchmark of the pooling layer pooling mode is average pooling, and the pooling mode of the CNNSCAnew structure is set to average pooling.</ns0:p></ns0:div> <ns0:div><ns0:head>4) Convolution kernel size</ns0:head><ns0:p>The size of the convolution kernel of each convolution layer in CNNSCAbase is initially set to 1x3, or 3 for short. In deep learning, people often reduce the size of the convolution kernel by increasing the depth of the network, thereby reducing the computational complexity of the network. In the VGG-CNNSCA structure, the convolution kernel uses a larger size of 11, and in the Alex-CNNSCA structure, a small size of 3 is used.</ns0:p><ns0:p>Experiment 5: Test the attack effects of the models with the convolution kernel sizes of 3, 5, 7, 9, and 11, and name these five structures as kernel3, kernel5, kernel7, kernel9, and kernel11. The other parameters of these structures are compared with The current CNNSCAnew is the same. 
The result of experiment 5 is shown in Figure <ns0:ref type='figure' target='#fig_4'>11</ns0:ref>(Convergence of guessing entropy of different convolution kernel size structures).</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_4'>11</ns0:ref>, the convolution kernel size of the structure kernel3 is 3, which guesses that the entropy convergence is better than other structures, so the size 3 is used as the setting reference for the convolution kernel size. At the same time, the size of the convolution kernel of the CNNSCAnew structure is set to 3.</ns0:p></ns0:div> <ns0:div><ns0:head>5) SE module</ns0:head><ns0:p>The attention mechanism in deep learning is essentially similar to the selective visual attention mechanism of humans, and the core role is to select information that is more critical to the current task goal from a large number of information <ns0:ref type='bibr' target='#b26'>[24]</ns0:ref> . The paper has initially added an SE fixed module to the last four convolution blocks of CNNSCAbase. The initial setting of the dimensional change ratio of the first full link layer of the SE module is 1/16, but this conventional setting is in the SCA scene The suitability of the medium requires further verification.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed As shown in Figure <ns0:ref type='figure' target='#fig_5'>12</ns0:ref>, when the dimensional ratio of the SE module is 1/8, the guessing entropy convergence of the overall structure of the CNN is the best. On the basis of this dimensional ratio, the result of Experiment 7 is shown in Figure <ns0:ref type='figure'>13</ns0:ref>(Convergence of guessing entropy for different number of SE cycles). It is found that when the last four convolution blocks of CNNSCAnew use the SE module twice, the guessing entropy converges fastest. Therefore, the dimensional ratio of the SE module is 1/8 and the SE module is looped twice as a new benchmark for the parameters of the SE module in the CNNSCAnew structure.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>6) Number of channels at the full link layer</ns0:head><ns0:p>The CNNSCA model in literature [12,16] uses 4096 channels in the fully connected layer, which is similar to the number of channels in the fully connected layer of the original VGGnet and Alexnet structures. Considering that the classification task of the ImageNet competition is 1000 classifications, and only 256 classifications are needed in the SCA scene, the number of channels can be adjusted appropriately to reduce the training complexity of the model.</ns0:p><ns0:p>Experiment 8: Will test the model attack effect of the four cases where the number of channels in the fully connected layer is 4096, 3072, 2048, and 1024. The other parameters of these test models are the same as CNNSCAnew. The reason why the number of channels is not set lower than 1024 is that from the convolutional layer to the fully connected layer, if the vector dimension changes sharply, the feature points of the vector are greatly reduced, which will affect the training effect of the model. 
The result of experiment 8 is shown in Figure <ns0:ref type='figure'>14</ns0:ref>(Convergence of guessing entropy of the four channel number structure of FC layer).</ns0:p><ns0:p>It is found from Figure <ns0:ref type='figure'>14</ns0:ref> that when the number of channels of the FC layer is 1024, the guessing entropy of its structure converges fastest and continues to be stable. Therefore, 1024 is selected as the reference for the number of channels in the FC layer of the CNNSCAnew structure.</ns0:p><ns0:p>In summary, the parameter benchmark of the CNNSCAnew structure has been optimized. The new structure parameters are shown in Table <ns0:ref type='table'>4</ns0:ref> (CNNSCAnew Configuration).</ns0:p><ns0:p>Almost all experiments in Section Methods 2.2 use the three parameters of 75 iterations, 200 batches of learning, and 1x10-4 learning rate for experiments. These training parameters have little effect on the experimental effects of optimizing various structural parameters, but It is not the optimal setting. The convolutional network of deep learning is applied to the side-channel attack, and these training parameters should also be tuned according to the actual processed sidechannel signal data. The order of training parameter tuning is usually learning rate, batch learning amount, and number of iterations <ns0:ref type='bibr'>[39]</ns0:ref><ns0:ref type='bibr'>[40]</ns0:ref> . In the experiment of training parameter optimization, the current CNNSCAnew structure is used. The training parameter optimization experiment process is as follows:</ns0:p><ns0:p>1) Learning rate The learning rate is a hyperparameter that is artificially set. The learning rate is used to adjust the size of the weight change, thereby adjusting the speed of model training. The learning rate is generally between 0-1. The learning rate is too large, and the learning will be accelerated in the early stage of model training, making it easier for the model to approach the local or global optimal solution, but there will be large fluctuations in the later stage of the training, and even the value of the loss function may hover around the minimum value, which is always difficult to reach Optimal solution; the learning rate is too small, the model weight adjustment is too slow, and the number of iterations is too much.</ns0:p><ns0:p>Experiment 9: Will test the impact of five commonly used learning rates on the model's cryptanalysis effect, namely lr1=1x10 -2 , lr2=1x10 -3 , lr3=1x10 -4 , lr4=1x10 -5 , lr5=1x10 -6 . The result of experiment 9 is shown in Figure <ns0:ref type='figure'>15</ns0:ref>(Convergence of model guessing entropy under five learning rates).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>15</ns0:ref> reflects that when the learning rate is lr2, the guessing entropy of CNNSCAnew converges fastest and is the most stable. Therefore, 1x10 -3 is selected as the learning rate benchmark of the CNNSCAnew structure.</ns0:p></ns0:div> <ns0:div><ns0:head>2) Batch size</ns0:head><ns0:p>The appropriate batch size is more important for the optimization of the model. This parameter does not need to be fine-tuned, just take a rough number, usually 2 n (GPU can play a better performance for batches of the power of 2). A batch size that is too large will be limited by the GPU memory, the calculation speed will be slow, and it cannot increase indefinitely (the training set has 50,000 data); it cannot be too small, which may cause the algorithm to fail to converge. 
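The training-parameter sweeps that follow (learning rate in Experiment 9, batch size in Experiment 10) only change how an otherwise fixed model is compiled and fitted. A minimal sketch, assuming the Keras API; X_profiling and y_profiling are placeholder names for the ASCAD profiling traces and their one-hot key-byte labels, and build_model is any model constructor such as the build_variant sketch earlier in this section.

```python
# Illustrative sketch of the training-parameter sweeps in Experiments 9-10.
from tensorflow.keras.optimizers import RMSprop

def train_once(build_model, x, y, lr=1e-3, batch_size=128, epochs=70):
    """Compile and fit one model with a given learning rate, batch size and epoch count."""
    model = build_model()
    model.compile(optimizer=RMSprop(learning_rate=lr),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, batch_size=batch_size, epochs=epochs, verbose=0)
    return model

# Experiment 9: sweep the learning rate with the other settings fixed.
# for lr in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
#     train_once(build_model, X_profiling, y_profiling, lr=lr)
# Experiment 10: with lr fixed at 1e-3, sweep batch sizes 32, 64, 128 and 256.
```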
It can be seen from Figure <ns0:ref type='figure' target='#fig_2'>16</ns0:ref> that when the batch learning amount is 128, the guessing entropy of CNNSCAnew converges fastest and is the most stable. Therefore, 128 is selected as the batch size benchmark of CNNSCAnew structure.</ns0:p></ns0:div> <ns0:div><ns0:head>3) Number of iterations (epoch)</ns0:head><ns0:p>The number of iterations is related to the fitting performance of the CNNSCA model. The model has been fitted (the accuracy rate reaches 1), and there is no need to continue training; on the contrary, if all epochs have been calculated, but the loss value of the model is still declining, and the model is still optimizing, then the epoch is too small. Should increase. At the same time, the number of iterations of model training also refers to the actual cryptanalysis effect of the model, which is measured by guessing entropy. From the results in Figure <ns0:ref type='figure'>17</ns0:ref>, it is found that the model of epoch1-4 guesses that the entropy does not converge. Separately recalculate the graph of epoch5-8 model. The result is shown in Figure <ns0:ref type='figure'>18</ns0:ref>(Convergence of model guessing entropy under four epochs). It can be clearly seen that the epoch5-8 model has a convergence trend. Among them, the epoch5 curve is closest to the position of ranking 0, and the epoch8 curve first converges to ranking 0, but afterwards it fluctuates more widely, and it is obviously over-fitting. The convergence of epoch6 and 7 is similar, the curve begins to fluctuate greatly, and it is close to ranking 0 in the later period. Figure <ns0:ref type='figure'>19</ns0:ref> shows that the guessed entropy of the epoch70 model converges best, and its guessed entropy converges fastest and is the most stable. Therefore, 70 is selected as the training iteration benchmark of the CNNSCAnew structure.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>1 </ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Comparative analysis of CNNSCAnew and other profiling side-channel attack methods 1 Comparative analysis of CNNSCAnew, classic template attack and MLPSCA Experiment 13: Compare the cryptanalysis's performance of CNNSCAnew with the HW-based TA <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> and MLPSCA method proposed by Benadjila et al <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> . TA and MLPSCA are the profiling methods that performed better in the early traditional profiling methods and the later new profiling methods, respectively. Experiment 13 carried out an attack on the ASCAD data set with a known mask, which represents the realization of the encryption in an unprotected state. The result of experiment 13 is shown in Figure <ns0:ref type='figure'>20</ns0:ref>(TA, MLPSCA, CNNSCAnew guessing entropy convergence).</ns0:p><ns0:p>It can be seen from Figure <ns0:ref type='figure'>20</ns0:ref> that CNNSCAnew's guessing entropy convergence is significantly better than TA and MLPSCA.</ns0:p><ns0:p>2 Comparative analysis with other existing CNNSCA Experiment 14: Compare the breaking performance of CNNSCAnew model with VGG-CNNSCA <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> and Alex-CNNSCA <ns0:ref type='bibr' target='#b17'>[16]</ns0:ref> . The latter two methods are the profiling methods with better performance among the latest profiling methods. Among them, VGG-CNNSCA in [12] uses the ASCAD public data set, and Alex-CNNSCA in [16] uses a self-collected data set. 
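All of these comparisons are scored with guessing entropy, i.e. the average rank of the correct key byte as attack traces are accumulated. The sketch below shows one rank curve; it assumes the usual ASCAD labelling Sbox(plaintext XOR key byte), and model, traces, plaintexts and sbox are placeholders rather than the paper's variables.

```python
# Illustrative sketch of a key-rank curve, the quantity behind the guessing-entropy plots.
import numpy as np

def rank_curve(model, traces, plaintexts, true_key_byte, sbox):
    """Rank of the correct key byte as attack traces accumulate (rank 0 = key recovered)."""
    log_probs = np.log(model.predict(traces) + 1e-40)   # shape (n_traces, 256)
    scores = np.zeros(256)
    ranks = []
    for i, p in enumerate(plaintexts):
        for k in range(256):                            # fold each prediction onto key hypotheses
            scores[k] += log_probs[i, sbox[p ^ k]]
        order = np.argsort(scores)[::-1]                # hypotheses sorted by accumulated score
        ranks.append(int(np.where(order == true_key_byte)[0][0]))
    return ranks

# Guessing entropy is this curve averaged over many attacks on shuffled trace subsets.
```

Averaging such curves over many shuffled subsets of the attack traces yields guessing-entropy plots of the kind compared in the figures referenced above.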
Experiment 14 carried out an attack on the ASCAD data set with a known mask, which represents the realization of the encryption in an unprotected state. The result of experiment 14 is shown in Figure <ns0:ref type='figure'>21</ns0:ref>(CNNSCAnew, VGG-CNNSCA, Alex-CNNSCA guessing entropy convergence).</ns0:p><ns0:p>It can be seen from Figure <ns0:ref type='figure'>21</ns0:ref> that the CNNSCAnew proposed in this paper has a better guessing entropy convergence than other CNNSCAs. In [12] Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Among the profiling side-channel cryptography attack methods, the most popular one is CNNSCA, a side-channel attack method combined with deep learning convolutional neural network algorithms. Its cryptanalysis performance is significantly better than traditional profiling methods. Among the existing CNNSCA methods, the CNNSCA network models that achieve cryptanalysis mainly include CNNSCA based on the VGG variant (VGG-CNNSCA) and CNNSCA based on the Alexnet variant (Alex-CNNSCA). The learning capabilities and cryptanalysis performance of these CNNSCA models it is not optimal. The paper aims to explore effective methods to obtain the performance gains of the new side-channel attack method CNNSCA.</ns0:p><ns0:p>After studying the related knowledge, necessary structure and core algorithm of CNNSCA, the paper found that CNNSCA model design and hyperparameter optimization can be used to improve the overall performance of CNNSCA. In terms of CNNSCA model design, the advantages of VGG-CNNSCA model classification and fitting efficiency and the Alex-CNNSCA model occupying less computing resources can be used to design a new CNNSCA basic model. In order to better reduce the gradient dispersion problem of error back propagation in the deep network, it is a very effective method to embed the SE module in this basic model; in terms of the hyperparameter optimization of the CNNSCA model, the above basic model is applied to side-channel leakage A known first-order mask data set in the public database (ASCAD). In this specific application scenario, according to the model design rules and actual experimental results, unnecessary experimental parameters can be excluded to the greatest extent. Various hyperparameters of the model are optimized within the parameter interval to improve the performance of the new CNNSCA, and the final determination benchmark for each hyperparameter is given. Finally, a new CNNSCA model optimized architecture for attacking unprotected encryption devices is obtained--CNNSCAnew. The paper also verified through experimental comparison that CNNSCAnew's cryptanalysis effect is completely superior to traditional profiling methods and the new profiling methods in literature [12,16]. In the literature [12,16], the results of CNNSCA's guessing entropy are: convergence to 650 and oscillation. The result of CNNSCAnew's guessing entropy proposed in this paper is to converge to a minimum of 61. Under the same experimental environment and experimental equipment conditions, literature [12] took 40 minutes from model training to attacking the key, while the total calculation time of CNNSCAnew was shortened to 30 minutes.</ns0:p><ns0:p>It should be noted that in practice, the results of each training of the CNNSCAnew model will have a slight deviation. This is a normal phenomenon during neural network training and will not affect the average performance of the model. 
While proposing the new CNNSCA method, the paper also provides a more comprehensive and detailed design plan and optimization method for the side-channel cryptanalysis researchers who need to design the CNNSCA model. In the future, we can use these design schemes and optimization methods to continue to explore the CNNSCA model that is more suitable for attacking protected equipment to achieve efficient attacks on encrypted equipment with protection, which is of great significance to information security and encryption protection. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 11</ns0:note><ns0:note type='other'>Computer Science Figure 12</ns0:note><ns0:note type='other'>Computer Science Figure 13</ns0:note><ns0:note type='other'>Computer Science Figure 14</ns0:note><ns0:note type='other'>Computer Science Figure 15</ns0:note><ns0:note type='other'>Computer Science Figure 16</ns0:note><ns0:note type='other'>Computer Science Figure 17</ns0:note><ns0:note type='other'>Computer Science Figure 18</ns0:note><ns0:note type='other'>Computer Science Figure 19</ns0:note><ns0:note type='other'>Computer Science Figure 20</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>2 . 4 .</ns0:head><ns0:label>24</ns0:label><ns0:figDesc>Taking advantage of the high efficiency of classification and fitting of the VGG-CNNSCA model and the advantages of the Alex-CNNSCA model occupying less computing resources, a new basic model of CNNSCA is designed to better reduce the gradient dispersion of error back propagation in the deep network. The problem is that the SE module is newly embedded in this basic model, so that the model basically achieves the purpose of breaking the secrets, thereby solving the problem of constructing the CNNSCA model.3. Apply the above basic model to a known first-order mask data set of the side-channel leak public database (ASCAD). In this application scenario, according to the model design rules and actual experimental results, unnecessary experiments are maximized parameter, optimize the various hyperparameters of the model in the most objective experimental parameter interval to improve the breaking performance of the new CNNSCA, which solves the problem of hyperparameter optimization, and gives the final determination benchmark for hyperparameters. Finally, a new CNNSCA model PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021) Manuscript to be reviewed Computer Science optimized architecture for attacking unprotected encryption devices-CNNSCAnew is obtained. The performance verified on public data sets exceeds other profiling SCA methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>we first verify and analyze the model training effect of CNNSCAbase with and without SE module. Remove the SE module from the CNNSCAbase model, all other parameters remain unchanged, and name this model CNNSCAnoSE. Train the models CNNSCAnoSE and CNNSCAbase on the training set of the ASCAD dataset with known masks. 
The training results of the two models are shown in Figure 5(Training effect of CNNSCAnoSE model and CNNSCAbase model): As shown in Figure 5, when the training iteration reaches 28 times, the accuracy of CNNSCAbase is significantly higher than that of CNNSCAnoSE, which is about 96%. Continue to train the CNNSCAnoSE model, and when the training iteration reaches 70 times, its accuracy rate rises to about 90%. In addition, when the training accuracy of the two models is close, the training time of the 28-iteration CNNSCAbase model is about 1393 seconds, which is significantly less than the training time of the 70-iteration CNNSCAnoSE model, the training time of the latter is about 2240 seconds. This proves that the SE module can promote the improvement of the classification performance of the CNNSCAbase model and can reduce the model training time.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Experiment 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Test the SE module of the model. The rate of the dimensional change of the first full link layer is 1/4, 1/8, 1/16, and 1/32 respectively. The other parameters of the test model are the same as CNNSCAnew. . Experiment 7: Test the attack effect of the SE module used 1, 2, and 3 times for the last four convolutional blocks in the current CNNSCAnew structure. The results of experiment 6 are shown in Figure 12(Convergence of guessing entropy for different SE dimension ratios).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Experiment 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>According to the size of the ASCAD data set in Section Materials 4, this experiment selects the batch size values: 32, 64, 128, and 256 for the experiment. The result of experiment 10 is shown in Figure 16(Convergence of model guessing entropy under four batches).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Experiment 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>In experiments 1-10, almost all use iteration number 75 to train the CNNSCA model. This experiment will center on iteration number 75, and train the CNNSCAnew model at 10 intervals in the upper and lower intervals to further optimize the model parameter epoch. The interval number of 10 is chosen because the step interval is too small, and the error loss of model training is not much different, so the setting is meaningless; the interval is too large, and repeated experiments may be required to determine an appropriate number of iterations. Therefore, Experiment 11 will test 8 iteration parameters epoch1=15, epoch2=25, epoch3=35, epoch4=45, epoch5=55, epoch6=65, epoch7=75, epoch8=85. The current CNNSCAnew structure has achieved higher training accuracy and breaking performance, in order to reduce model calculation pressure and calculation time, lower iteration parameters are usually selected when the model performs better. Therefore, the upper limit of the epoch test parameter is set to 85. The result of experiment 11 is shown in Figure17(Convergence of model guessing entropy under eight epochs).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Experiment 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Continue to debug the epoch parameters in a smaller range, and test the other two iterations with an interval of only 5: epoch60=60, epoch70=70. The trained epoch60 and epoch70 models and the previously trained epoch5, epoch6, and epoch7 models are simultaneously attacked on the target set. 
The results of Experiment 12 are shown in Figure 19(Convergence of model guessing entropy under five epochs).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 10 Convergence_of_guessing_entropy_of_AveragePool_and_MaxPool_structure</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Convergence_of_guessing_entropy_of_different_convolution_kernel_size_structures</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of 5 convolution kernel sizes structures. The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Convergence_of_guessing_entropy_for_different_SE_dimension_ratios</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of the four SE dimension ratios . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Convergence_of_guessing_entropy_for_different_number_of_SE_cycles</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of the three SE cycles . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Convergence_of_guessing_entropy_of_the_four_channel_number_structure_of_FC_layer</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of the four FC layer structures. The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Convergence_of_model_guessing_entropy_under_five_learning_rates</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of five learning rates . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Convergence_of_model_guessing_entropy_under_four_batches</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of the four training batches. The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Convergence_of_model_guessing_entropy_under_eight_epochs</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of eight training epochs . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Convergence_of_model_guessing_entropy_under_four_epochs</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of four training epochs . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Convergence_of_model_guessing_entropy_under_five_epochs</ns0:head><ns0:label /><ns0:figDesc>Each curve represents the model guessing entropy of five training epochs . The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>TA,_MLPSCA,_CNNSCAnew_guessing_entropy_convergenceEach curve represents the model guessing entropy of TA, MLPSCA, and CNNSCAnew respectively. The abscissa represents the number of energy trajectories used in the attack, and the ordinate represents the order of guessing entropy.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,179.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,342.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,231.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,112.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,255.37,525.00,254.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,255.37,525.00,254.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,255.37,525.00,254.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,255.37,525.00,254.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,255.37,525.00,390.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>SOFT for short). In multi-classification tasks, softmax is usually used as the activation function of the output layer. Here, softmax is used to represent the output layer. This layer classifies the input, obtains the predicted value of each label, and takes the label corresponding to the maximum value as the global classification</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>f)</ns0:cell><ns0:cell>Softmax layer (result.</ns0:cell></ns0:row><ns0:row><ns0:cell>g)</ns0:cell><ns0:cell cols='2'>SE module, SEnet is a classic attention model structure, and it is also a required basic</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>network structure for fine-grained classification tasks. 
SEnet proposed the Squeeze-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>and-Excitation (SE) module, which did not introduce a new spatial dimension, and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>improved the representation ability of the model by displaying the channel correlation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>between the features of the convolutional layer. The feature recalibration mechanism:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>by using global information to selectively enhance informatized features and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>compress those useless features at the same time. In deep network training, this</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>mechanism can effectively overcome the gradient dispersion problem in error back</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>propagation. The SE module is universal. Even if it is embedded in an existing</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>model, its parameters do not increase significantly. It is a relatively successful</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>attention module</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Get a new model CNNSCAnew for attacking ASCAD data set with known first-order mask protection. According to the 12 sets of experiments in Section Methods 2 and Section Methods 3, the best benchmarks for CNNSCAnew structure parameters and training parameters are demonstrated. The CNNSCAnew model contains 5 convolutional blocks, 8 convolutional layers, and 3 fully connected layers. The size of the convolution kernel of each convolution layer is 3, the activation function is ReLU, and the padding is Same. Each convolutional block is equipped with a pooling layer, the pooling layer selects the average pooling mode, and the pooling window is(2, 2). The number of output channels of the convolution layer in the convolution block 1-5 starts from 32 and increases by a multiple of 2 in turn. Two SE modules are added after the convolution layer of each convolution block in the convolution block 2-4, and the dimension ratio of the SE is set to 1/8. In the first two fully connected layers, set the number of output channels to 1024 and the activation function to ReLU. The output channel number of the third fully connected layer is the target classification number 256, and the classification function is Soft-max. The global configuration loss function is crossentropy, the optimization method is RMSprop, the number of training iterations is 70, the learning rate is 1x10 -3 , and the batch learning volume is 128. All parameters of the newly obtained CNNSCAnew are shown in Table5(CNNSCAnew Configuration).2 The CNNSCA model design method and the convolutional network hyperparameter optimization scheme for side-channel attack are refined. 
The CNNSCA model design method is refined: comprehensively utilize the advantages of VGG-CNNSCA model classification and fitting efficiency and Alex-CNNSCA model occupy less computing resources, while using SEnet's SE module to reduce the gradient dispersion problem of error back propagation in deep neural networks to save calculation time, a new basic model of CNNSCA was designed, named CNNSCAbase.At the same time, the hyperparameter optimization scheme of the convolutional network used for side-channel attacks is refined: design the structural parameter optimization experiment and the training parameter optimization experiment, and use CNNSCAbase to implement the attack training. According to parameter selection rules, common sense of parameter optimization of CNN model, and data characteristics of actual application scenarios, the test parameters of each experiment are designed, and unnecessary test parameters are excluded. Each time, according to the cryptanalysis results of the experiment, the parameters that make CNNSCAbase's cryptanalysis effect better are selected. Relying on two sets of experimental processes, a hyperparameter optimization scheme is formed, and the hyperparameters finally determined by the experiment are used as the parameters of the new model CNNSCAnew.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>, the guessing entropy of VGG-CNNSCA requires at least 650 power consumption traces to converge to rank zero, and the model training time takes 37 minutes. The CNNSCAnew method constructed in this paper only requires 61 power consumption traces, and the model training time only needs about 28 minutes. The training time of CNNSCAnew and VGG-CNNSCA in this paper are shown in Figure 22(CNNSCAnew and VGG-CNNSCA training time).After comparing CNNSCAnew with VGG-CNNSCA and Alex-CNNSCA, the model comparison analysis and the cryptanalysis performance comparison analysis, the results are summarized in Table6(Comparative analysis of CNNSCAnew, VGG-CNNSCA and Alex-CNNSCA) to show.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='3'>Selection and optimization of CNN training parameters for side-channel cryptanalysis PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65519:1:1:NEW 20 Nov 2021)</ns0:note> </ns0:body> "
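To make the final configuration easier to picture, the benchmarks collected above (kernel size 3, average pooling, SE ratio 1/8 looped twice in blocks 2–4, 1024-unit fully connected layers, a 256-way softmax, RMSprop at 1x10-3, batch size 128, 70 epochs) can be assembled into a single model. The sketch below is an illustration only: it assumes a Keras-style implementation and an ASCAD trace length of 700, and the split of the eight convolution layers across the five blocks is an assumption, since the excerpt does not spell it out.

```python
# Illustrative end-to-end sketch of a CNNSCAnew-style model built from the benchmarks above.
# The split of the 8 convolution layers over the 5 blocks is an assumption; the SE helper is
# repeated from the earlier sketch so this block runs on its own.
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import RMSprop

def se_block(x, ratio=8):
    channels = int(x.shape[-1])
    s = layers.GlobalAveragePooling1D()(x)
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])

def build_cnnsca_new(trace_len=700):
    inp = layers.Input(shape=(trace_len, 1))
    x = inp
    block_filters = (32, 64, 128, 256, 512)          # channels double from 32, as described
    convs_per_block = (1, 1, 2, 2, 2)                # assumed split of the 8 Conv layers
    for b, (filters, n_conv) in enumerate(zip(block_filters, convs_per_block), start=1):
        for _ in range(n_conv):
            x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        if 2 <= b <= 4:                              # SE ratio 1/8, looped twice (Experiments 6-7)
            x = se_block(x, ratio=8)
            x = se_block(x, ratio=8)
        x = layers.AveragePooling1D(pool_size=2, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dense(1024, activation="relu")(x)
    out = layers.Dense(256, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer=RMSprop(learning_rate=1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training would then use batch_size=128 and epochs=70, matching the tuned benchmarks.
```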
"Shijiazhuang Campus PLA Army Engineering University Shijiazhuang, 050003, China Tel:15271388465 E-mail:llyun324@163.com November 19th, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have have edited the manuscript to address their concerns. We believe that the manuscript is now suitable for publication in PeerJ. Miss.Yun Lin. Liu Master of Software Engineering On behalf of all authors. Reviewer 1 (Anonymous) Basic reporting The overall passage is well structured with adequate deep learning background information, which I think provides basic intuitions for researchers who want to study deep learning based side channel attack. It should be better if more background information about side channel attack is included, e.g. the meaning of the rank, how it is calculated, the relationship between rank and classification results, and so on. We have added background information about rank, including the meaning of rank, how to calculate it, the relationship between rank and classification results, and so on.( lines 322-347) The language is good, easy to understand. Although there are some typos, which I point below. No problem to use the public database ASCAD, I am not sure it is necessary to publish the author’s training and testing codes, but they share the model structure, which I think is enough. We agree with the reviewer's opinion that training or testing code can be removed when the paper is published. As for the figures, please rename the figures to make the indexes consistent with the figure titles. Now they are confusing. We have renamed the figures to make the indexes consistent with the figure titles. Typos: In line 53, secret.key. ->secret key. In line 101, deep network The problem-> deep network. The problem In line 108, Parameter -> parameter In line 122, please rephrase your sentence In lines 282-283, x_j is not the inputs of all neurons but one neuron In line 395, please rephrase your sentence Please check the name ‘Conv’, there are many ‘Cnov’ in the passage We have corrected these spelling and grammatical errors (line 53, 108, 115, 129, 309, 451). The name 'Conv' is the abbreviation for the convolutional layer. The original name 'CONV' has been changed to 'Conv' (line 210, 375), and the convolutional layer is uniformly denoted by 'Conv' in the manuscript. Experimental design However, I think the authors miss one important thing. They should explain why the whole experiments flow is designed like this. For example, why they choose to determine the model parameters first instead of the global training parameters? and in section 2.2, why they choose to determine the number of Conv layers first instead of kernel size or the number of filters first? We have explained the reason for the design of the whole experiments flow in the manuscript . (line 422-434) In addition, I know there are lots of parameters to be optimized and it is tough to go through all the combinations. So I am thinking that whether the impacts of different parameters on the model performance are independent, if yes, there is no problem to optimize the model parameter by parameter, if no, then the whole experiment should be carefully addressed in order to find the optimal model. I will appreciate it if the authors can give more details about this. Our parameter optimization scheme is proposed on the basis of previous research contributions in the field of deep learning and side channel analysis, avoiding many tedious original experiments. 
In this regard, we are very grateful for the contributions of previous researchers. The CNNSCAnew model obtained after optimizing the model parameters one by one did produce a good cryptanalysis effect and proved to be a feasible method. We used this scheme to generate another model for more complex objects. The relevant research content has not been published yet, so it will not be announced here. The CNNSCAnew we proposed is definitely not the optimal model. We acknowledge that the effects of different parameters on model performance are not independent. From this perspective, it is a complicated project to study the optimization scheme to obtain the optimal model, which requires a lot of experiments, but we will pay attention to and study this aspect in the future. Finally, please explain why choose the epoch number 75 for most of the experiments ? Have you tried other epoch numbers? And I think the authors should also add comparison results between models with SEnet and without SEnet if they want to show SEnet can reduce the gradient dispersion. The use of epoch 75 refers to the epoch used in the literature [12]. This epoch will not produce overfitting in the literature [12]. Therefore, epoch 75 was used in the previous experiment to facilitate the verification of the influence of other parameters on the performance of the model. In the experiment of optimizing epoch parameters, the cryptanalysis effect of the model in other epochs is also verified, which is introduced in the manuscript. We have added to the manuscript the comparative results of the cryptanalysis of the basic model with and without the SE module. (line 405-420) Validity of the findings Based on the conclusions, the authors provide us with a new CNNSCA structure that has better classification results and lower computational time than previous model structures (Benadj, Alex VGG). This part is strongly related to the experiment design part, other than the updated model structure and short training time, I expect more conclusions here. We have added more conclusions. (line 737-754) Additional comments Please gives more background information about why it is important to use deep learning in side channel attacks and comparison between deep learning performance and traditional side channel attack analysis methods. We have added background information about the importance of using deep learning in side channel attacks and the performance comparison of deep learning with traditional side channel attack analysis methods. (line 66-74) Reviewer 2 (Anonymous) Basic reporting - This paper proposes a convolutional neural network-based side-channel attack model design and hyperparameter optimization to improve side-channel attack performance. The paper is mostly clear but there are numerous grammatical errors. For example, the sentences in lines 90, 124, 319-323, 339, and 639 need revision. We have corrected these grammatical errors. (line 96-98, 129-131, 369-374, 389-390, 718-719) - The paper contains sufficient references. I can only recommend the authors add a reference for the MNIST dataset in line 306 of the paper. We added a reference to the MNIST dataset on line 354 of the paper. (line 354, 950) - Table 1 is not clearly explained in the text. The authors can mention one or two sentences about remarks in table 1. We have added an explanation of Table 1 in the text. (line 149-151, 156-163) - Figure 1 is not explained in detail. We have added a detailed description of Figure 1 in the manuscript. 
(line 255-263) - For figure 2, the authors can use labels to show which data is input, kernel, and output. We have used labels in Figure 2 to show which data is input, kernel, and output. At the same time, the same modification was made to Figure 3. - In line 135, 'BP algorithm' is mentioned but readers may not know the BP. What does BP stand for? We explained the 'BP algorithm' in the manuscript. (line 142-143) Reviewer 3 (Anonymous) Basic reporting This paper aims to improve the state of art of deep learning based side channel-attacks by proposing a new model, named CNNSCAnew. The architecture of this model relies on a new kind of block of layers, the SE module, whose effectiveness has already been established in the domain of computer vision. A fine tunning of the hyperparameters is performed by studying the impact of each parameters independently in several experiences. The performance of the final model is compared with the state of the art models on the open dataset ASCAD. From the editorial side, this paper is not well written. There is a lot of approximations, confusions and contradictions, mostly due to the fact that some english technical terms are not correctly used. Some sentences are not grammatically correct. The overall meaning of the text can be understood (or somehow guessed), but it is very tedious and some typos or formulations can be easily corrected with a more careful review of the paper. Here are some examples found in the abstract and introduction (the list is not exhaustive) l12. decryption -> cryptanalysis l13. decryption -> cryptanalysis l13. Alex -> AlexNet l14. there is Model training -> the trained model l23. the SEnet module : the 'SEnet' abbreviation is not yet defined l24. unnecessary parameters are maximized: do you mean 'optimized' instead of maximized? l28. Optimize ... CNSSCA : the meaning of this sentence is not clear, do you mean : 'We optimized the various ... CNNSCA'? l26. first-order mask data set of -> first-order masked dataset from l34. guess entropy -> guessing entropy l35. successful attack key -> successful recovery of the key l36. the performance was better -> we obtained better performance l42. and using cryptographic algorithms to leak ... harware encryption: the formulation is not correct, I would suggest something like : 'by using the leakage of cryptographic algorithms during the computation of data (...) on hardware devices' l43. radiation Etc -> radiation, etc. l44. to crack : the word is a little bit slang l50: the weak -> weak l53: the secret. key: typo l54. the best decryption effect -> the best cryptanalysis attack l58. energy traces -> power consumption traces l69. At present, at home and abroad : ? l79. the best perfomance of breaking secrets -> the best cryptanalysis performance l123. declassification -> classification In the manuscript, we corrected the incorrect use of English terminology and some grammatical errors. (line 12, 13, 14, 23, 24, 26, 28-30, 34, 35, 36, 38, 42-44, 50, 53, 54, 58, 76, 87, 89, 96, 97, 129, 131, 203, 318, 319, 323, 324, 339, 342, 343, 345, 355, 356, 360, 486, 490, 536, 539, 546, 576, 621, 654, 682, 759, 782, 784, 789, 796, 798, 799, 819) Validity of the findings Nevertheless, due to the poor level of english and typos, it is difficult to undestand the description of the experiments and sometimes the results are not clear. 
By example, in the section 'Structural parameter optimization', experiment 1, the authors have tested three deeplearning architectures with different number of convolutional layers for each block (namely 1, 2 and 2/3). At line 416, they claim that 'if [the number of layers] exceeds 2 or more, ..., the 8G[bytes] GPU memory ... will be exhausted, unable to run code'. But they computed the guessing entropy for some of their architecture where the number of layer was 2 (cnov2 and cnov3), which is in contradiction with the fact that they were unable to run the code. We have revised the ambiguous sentence in this part of the manuscript. (line 491-492) Moreover some claims are vague or not correct. By example l62. 'but it also loses effectiveness when attacking encryption with protection.' : this statement is not true : in the ASCAD paper, a MLP is successfuly applied to a protected AES encrytion. Another example at l539, the authors claim that if the number of channels of the dense layer is low, then the computational complexity increase. However if n is the input dimension and c the number of channel, then the number of computation is equal to c*n and it increases with the number of channel. We have revised 539 lines of ambiguous sentences in the original manuscript and updated them in the new manuscript. (line 619-620) We propose 'but when the protected encryption is attacked, it will also lose its effectiveness.' this conclusion is verified based on actual experimental results. Moreover, in the ASCAD paper, MLP is only successfully applied to AES encryption with a known first-order mask, and when the first-order mask is known, the signal-to-noise ratio of AES encryption leakage is very high, which is similar to an unprotected state. Please review the following two documents to resolve the misunderstanding. [1] Liu Linyun, Chen Kaiyan, Li Xiongwei, Zhang Yang, Liu Junyan. Research progress and analysis on modeling and analysis methods of bypass cryptographic attacks[J]. Radio Engineering, 2021, 51(07): 655-662. [2] Benadjila R, Prouff E, Strullu R, et al. Study of deep learning techniques for side-channel analysis and introduction to ASCAD database[J]. ANSSI, France & CEA,LETI, MINATEC Campus, France.2018, 22: 2018. Finally, the comparaison with the state of the art models lacks some robustness. I recommand to perform a 10-fold crossvalidation or a validation method with a high number of different training/testing steps: the results obtained with these well-known methods will be more significant than a single training/testing evaluation and will reduce bias. First of all thank the reviewers for their suggestions. What we want to explain is that the new model proposed in the paper is verified on the basis of a large number of experiments, and its cryptanalysis performance is relatively stable. In addition, the guessing entropy evaluation method is a commonly used method in other existing model evaluations. For example, this evaluation method is also used in ASCAD papers. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This report shares the experience during selection, implementation and maintenance phases of an electronic laboratory notebook (ELN) in a public private partnership project and comment on user feedback. In particular, we address which time constraints for rollout of an ELN exist in granted projects and which benefits and/or restrictions come with out-of-the-box solutions. We discuss several options for the implementation of support functions and potential advantages of open access solutions. Connected to that, we identified willingness and a vivid culture of data sharing as the major item leading to success or failure of collaborative research activities. The feedback from users turned out to be the only angle for driving technical improvements, but also exhibited high efficiency.</ns0:p><ns0:p>Based on these experiences, we describe best practices for future projects on implementation and support of an ELN supporting a diverse, multidisciplinary user group based in academia, NGOs, and/or for-profit corporations located in multiple time zones.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Laboratory notebooks (LNs) are vital documents of laboratory work in all fields of experimental research. The LN is used to document experimental plans, procedures, results and considerations based on these outcomes. The proper documentation establishes the precedence of results and in particularly for inventions of intellectual property (IP). The LN provides the main evidence in the event of disputes relating to scientific publications or patent application. A well-established routine for documentation discourages data falsification by ensuring the integrity of the entries in terms of time, authorship, and content <ns0:ref type='bibr' target='#b21'>(Myers 2014)</ns0:ref>. LNs must be complete, clear, unambiguous and secure. A remarkable example is Alexander Fleming's documentation, leading to the discovery of penicillin <ns0:ref type='bibr' target='#b3'>(Bennett &amp; Chung, 2001)</ns0:ref>.</ns0:p><ns0:p>The recent development of many novel technologies brought up new platforms in life sciences requiring specialized knowledge. As an example, next-generation sequencing and protein structure determination are generating datasets, which are becoming increasingly prevalent especially in molecular life sciences <ns0:ref type='bibr' target='#b9'>(Du &amp; Kofman, 2007)</ns0:ref>. The combination and interpretation of these data requires experts from different research areas <ns0:ref type='bibr'>(Ioannidis et al., 2014)</ns0:ref>, leading to large research consortia.</ns0:p><ns0:p>In consortia involving multidisciplinary research, the classical paper-based version of a LN is an impediment to efficient data sharing and information exchange. Most of the data from these largescale collaborative research efforts will never exist in a hard copy format, but will be generated in a digitized version. An analysis of this data can be performed by specialized software and dedicated hardware. The classical application of a LN fails in these environments. It is commonly replaced by digital reporting procedures, which can be standardized (Handbook: Quality practices in basic biomedical research, 2006) <ns0:ref type='bibr' target='#b7'>(Bos et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b32'>(Schnell, 2015)</ns0:ref>. 
Besides the advantages for daily operational activities, an electronic laboratory notebook (ELN) yields long-term benefits regarding data maintenance. These include, but are not limited to, items listed in Table 1 <ns0:ref type='bibr' target='#b23'>(Nussbeck et al., 2014)</ns0:ref>. The order of mentioned points is not expressing any ordering. Beside general tasks, especially in the field of drug discovery some specific tasks have to be facilitated. One of that is functionality allowing searches for chemical structures and substructures in a virtual library of chemical structures and compounds (see table 1, last item in column 'Potentially'). Such a function in an ELN hosting reports about wet-lab work dealing with known drugs and/or compounds to be evaluated, would allow dedicated information retrieval for the chemical compounds or (sub-) structures of interest.</ns0:p><ns0:p>Interestingly, although essential for the success of research activities in collaborative settings, the above mentioned advantages are rarely realized by users during daily documentation activities and institutional awareness in academic environment is often lacking. Since funding agencies and stakeholders are becoming aware of the importance of transparency and reproducibility in both experimental and computational research <ns0:ref type='bibr' target='#b27'>(Sandve, Nekrutenko, Taylor, &amp; Hovig, 2013</ns0:ref>) <ns0:ref type='bibr'>(Bechhofer et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b34'>(White et al., 2015)</ns0:ref>, the use of digitalized documentation, reproducible analyses and archiving will be a common requirement for funding applications on national and international levels <ns0:ref type='bibr' target='#b35'>(Woelfle, Olliaro &amp; Todd, 2011</ns0:ref>) (DFG, 2013) (Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020, 2013).</ns0:p><ns0:p>A typical example for a large private-public partnership is the Innovative Medicines Initiative (IMI) New Drugs for Bad Bugs (ND4BB) program <ns0:ref type='bibr' target='#b24'>(Payne, Miller, Findlay, Anderson &amp; Marks, 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kostyanev et al., 2016)</ns0:ref> (see Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> for details). The program's objective is to identify new ways of delivering antibiotics into Gram-negative bacteria. The TRANSLOCATION consortium focus on (i) improving the understanding of the overall permeability of Gram-negative bacteria, and (ii) enhancing the efficiency of antibiotic research and development through knowledge sharing, data sharing and integrated analysis. To meet such complex needs, the TRANSLOCATION consortium was established as a multinational and multisite public private partnership (PPP) with 15 academic partners, 5 pharmaceutical companies and 7 small and medium sized enterprises (SMEs) <ns0:ref type='bibr' target='#b25'>(Rex, 2014)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Stavenger &amp; Winterhalter, 2014</ns0:ref><ns0:ref type='bibr'>) (ND4BB -TRANSLOCATION, 2015)</ns0:ref>.</ns0:p><ns0:p>In this article we describe the process of selecting and implementing an ELN in the context of the multisite PPP project TRANSLOCATION, comprising about 90 bench scientists in total. Furthermore we present the results from a survey evaluating the users' experiences and the benefit for the project two years post implementation. Based on our experiences, the specific needs in a PPP setting are summarized and lessons learned will be reviewed. 
As a result, we propose recommendations to assist future users avoiding pitfalls when selecting and implementing ELN software.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Selection and implementation of an ELN solution</ns0:head><ns0:p>The IMI project call requested a high level of transparency enabling the sharing of data to serve as an example for future projects. The selected consortium TRANSLOCATION had a special demand for an ELN due to its structure -various labs and partners spread widely across Europe needed to report into one common repository -and due to the final goal -data was required to be stored and integrated into one central information hub, the ND4BB Information Centre. Fortunately, no legacy data had to be migrated into the ELN.</ns0:p><ns0:p>The standard process for the introduction of new software follows a highly structured multiphase procedure <ns0:ref type='bibr' target='#b22'>(Nehme &amp; Scoffin, 2006)</ns0:ref> with its details outlined in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Step1</ns0:head><ns0:p>&#8226; User Requirement Specification &#8226; Collect user requirements Step2 For the first step, we had to manage a large and highly heterogeneous user group (</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>) that would be using the ELN scheduled for roll out within 6 months after project launch. All personnel of the academic partners were requested to enter data into the same ELN potentially leading to unmet individual user requirements, especially for novices and inexperienced users.</ns0:p><ns0:p>As a compromise for step 1 (Error! Reference source not found.), we assembled a collection of user requirements (URS) based on the experiences of one laboratory that had already implemented an ELN. We further selected a small group of super users based on their expertise in documentation processes, representing different wet laboratories and in silico environments. The resulting URS was reviewed by IT and business experts from academic as well as private organisations of the consortium. The final version of the URS is available as a supplement (Article S1).</ns0:p><ns0:p>In parallel, based on literature <ns0:ref type='bibr' target='#b26'>(Rubacha, Rattan &amp; Hossel, 2011)</ns0:ref> and Internet searches, presentations of widely used ELNs were evaluated to gain insight into state-of-the-art ELNs. This revealed a wide variety of functional and graphical user interface (GUI) implementations differing in complexity and costs. The continuum between simple out-of-the-box solutions and highly sophisticated and configurable ELNs with interfaces to state-of-the-art analytical tools were covered by the presentations. Notably, the requirements specified by super users also ranged from 'easy to use' to 'highly individually configurable'. Based on this information it was clear that the ELN selected for this consortium would never ideally fit all user expectations. Furthermore, the exact number of users and configuration of user groups were unknown at the onset of the project. The most frequently or highest prioritized items of the collected user requirements are listed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. We divided the gathered requirements into 'core' meaning essential and 'non-core' standing for 'nice to have, but not indispensable'. 
Further, we list here only the items, which were mentioned by more than two super users from different groups. The full list of URS is available as a supplement (Article S1). <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>) reduced the number of appropriate vendors. Their response provided their offerings aligned to the proposed specifications.</ns0:p><ns0:p>Key highlights and drawbacks of the proposed solutions were collected as well as approximations for the number of required licenses and maintenance costs. The cost estimates for licenses were not comparable because some systems require individual licenses whereas others used bulk licenses. At the time of selection, the exact number of users was not available.</ns0:p><ns0:p>Interestingly, the number of user specifications available out of the box differed by less than 10% between systems with the lowest (67) and highest (73) number of proper features. Thus, highlights and drawbacks became a more prominent issue in the selection process.</ns0:p><ns0:p>For the third step, (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>), the two vendors meeting all the core user requirements and the highest number of non-core user requirements were selected to provide a more detailed online demonstration of their ELN solution to a representative group of users from different project partners. In addition, a quote for 50 academic and 10 commercial licenses was requested. The direct comparison did not result in a clear winner -both systems included features that were instantly required, but each lacked some of the essential functionalities.</ns0:p><ns0:p>For the final decision only features which were different in both systems and supported the PPP were ranked between 1=low (e.g. academic user licenses are cheap) and 5= high (e.g. cloud hosting) as important for the project. The decision between the two tested systems was then based on the higher number of positively ranked features, which revealed most important after presentation and internal discussions of super users. The main drivers for the final decision are listed in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. In total, the chosen system got 36 positive votes on listed features meeting all high ranked demands listed in table 3, while the runner up had 24 positive votes on features. However, if the system had to be set up in the envisaged consortium it turned out to be too expensive and complex in maintenance. 
Main drivers for the final decision &#61623; Many end users were unfamiliar with the use of ELNs, therefore the selected solution should be intuitive &#61623; Easy to install and maintain &#61623; Minimal user training required &#61623; Basic functionalities available out-of-the-box (import of text, spreadsheet, pdf, images and drawing of chemical structures), with as few configurations as possible &#61623; ELN does not apply highly sophisticated checking procedures, which would require a high level of configuration, restricting users to apply their preferred data format (users should take responsibility for the correct data and the correct format of the data stored in the ELN instead) &#61623; Web interface available to support all operating systems to avoid deploying and managing multiple site instances &#61623; Vendor track record -experience of the vendor with a hosted solution as an international provider &#61623; Sustainability:</ns0:p><ns0:p>&#903; affordable for academic partners also after the five year funding duration of the project (based on the maturity of the vendor, number of installations/users, the state-ofthe-art user interface and finally also the costs) Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Main drivers for the final decision &#61623; Proposed installation timeframe &#61623; Per user per year costs for academics and commercial users</ns0:p><ns0:p>The complete process, from the initial collection of URS data until the final selection of the preferred solution, took less than 5 months.</ns0:p><ns0:p>Following selection, the product was tested before specific training was offered to the user community <ns0:ref type='bibr' target='#b16'>(Iyer &amp; Kudrle, 2012)</ns0:ref>. Parallel support frameworks were rolled out at this time, including a help desk as a single point of contact (SPoC) for end users.</ns0:p><ns0:p>The fourth step (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>) was the implementation of the selected platform. This deployment was simple and straightforward because it was available as Software as a Service (SaaS) hosted as a cloud solution. Less than one week after signing the contract, the administrative account for the software was created and the online training of key administrators commenced. The duration of training was typically less than 2 hours including tasks such as user and project administration.</ns0:p><ns0:p>To accomplish step 5 (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>), internal training material was produced based on the experiences made during the initial introduction of the ELN to the administrative group. This guaranteed that all users would receive applicable training. During this initial learning period, the system was also tested for the requested user functionalities. Workarounds were defined for missing features detected during the testing period and integrated into the training material.</ns0:p><ns0:p>One central feature of an ELN in its final stage is the standardization of minimally required information. These standardizations include but are not limited to: &#61623; Define required metadata fields per experiment (e.g. 
name of the author, date of creation, experiment type, keywords, aim of the experiment, executive summary, introduction) • Define agreed naming conventions for projects • Define agreed naming conventions for titles of experiments • Prepare (super-user-agreed) lists of values for selection lists • Define the types of data which should be placed in the ELN (e.g. raw data, curated data)</ns0:p><ns0:p>We did not define specific data formats since we could not predict all the different types of data sets that would be utilized during the lifetime of the ELN. Instead, we gave some best-practice advice on arranging data (Table <ns0:ref type='table' target='#tab_4'>S3</ns0:ref> and Table <ns0:ref type='table' target='#tab_7'>S4</ns0:ref>) to facilitate its reuse by other researchers. The initially predefined templates, however, were only rarely adopted, although some groups use nearly the same structure to document their experiments. More support, especially for creating templates, might help users to document their results more easily.</ns0:p><ns0:p>For the final phase, the go-live, high user acceptance was the major objective. A detailed plan was created to support users during their daily work with the ELN. This comprised the setup of a support team (the project-specific ELN-Helpdesk) as a single point of contact (SPoC) and detailed project-specific online training sessions, including documentation for self-training. As part of the governance process, we created working instructions describing all necessary administrative processes provided by the ELN-Helpdesk.</ns0:p><ns0:p>Parallel to the implementation of the ELN-Helpdesk, a quarterly electronic newsletter was rolled out. This was used to remind potential users that the ELN is an obligatory central repository for the project. The newsletter also provided a forum for users to access information and news, reminding them of the value of this collaborative project. Documents containing training slide sets, frequently asked questions (FAQs) and best-practice spreadsheet templates (Table <ns0:ref type='table' target='#tab_7'>S4</ns0:ref>) were made available directly within the ELN to give users rapid access to these documents while working with the ELN. In addition, the newsletter informed all users about project-specific updates or news about the ELN.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Operation of the ELN</ns0:head><ns0:p>Software operation can generally be split into technical component/cost issues and end-user experiences. The technical component considers stability, performance and maintenance, whereas the end-user experience is based on the capability and usability of the software.</ns0:p></ns0:div> <ns0:div><ns0:head>Technical solution</ns0:head><ns0:p>The selected ELN, hosted as a SaaS solution in a cloud-based service centre, provided a stable environment with acceptable performance, e.g. login &lt; 15 sec, opening an experiment with 5 pages &lt; 20 sec (for further technical details please see Supplementary file S1). During the evaluation period of two years, two major issues emerged.
The first involved denial of access to the ELN for more than three hours due to an external server problem, which was quickly and professionally solved after contacting the technical support; the other was related to the introduction of a new user interface (see below).</ns0:p><ns0:p>The administration was simple and straightforward, comprising mainly minor configurations at the project start and user management during the runtime. One issue was the gap in communication regarding the number of active users, causing a steady increase in the number of licences.</ns0:p><ns0:p>A particular disadvantage of using the selected SaaS solution concerned system upgrades. There was little notice of upcoming changes, and user warnings were hidden in a weekly mailing. To keep users updated, weekly or biweekly mails about the ELN were sent to the user community by the vendor. Although these messages were read by users initially, interest diminished over time. Consequently, users were confused when they accessed the system after an upgrade and the functionality or appearance of the ELN had changed. On the other hand, system upgrades were performed over weekends to minimize system downtime.</ns0:p><ns0:p>The costs per user were reasonable, especially for the academic partners, for whom the long-term availability of the system, even after project completion, could be assured. This seemed to be an effect of the competitive market, which caused a substantial drop in prices during recent years.</ns0:p></ns0:div> <ns0:div><ns0:head>User experience</ns0:head><ns0:p>In total, more than 100 users were registered during the first two years of runtime, whereas the maximum number of parallel user accounts was 87, i.e. 13 users left the project for different reasons. The 87 users comprised admins (n=3), external reviewers (n=4), project owners (n=26), who review and countersign as Principal Investigators (PIs), and normal users (n=41). Depending on the period, the number of newly entered experiments per month ranged between 20 and 200 (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, blue bars). The size of the uploaded or entered data was heterogeneous and comprised experiments with less than 1 MB, i.e. data from small experimental assays, but also contained data objects much larger than 100 MB, e.g. raw data from mass spectrometry. Interestingly, users structured similar experiments in different ways. Some users created single experiments for each test set, while others combined data from different test sets into one experiment.</ns0:p><ns0:p>In an initial analysis, we evaluated user experience by the number of help desk tickets created during the runtime of the system (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, orange line).</ns0:p><ns0:p>During the initial phase (2013.06 -2013.10) most of the tickets were associated with user access to the ELN. However, after six months (2013.12 -2014.01), many requests were related to functionality, especially those from infrequent ELN users. In Feb 2015, the vendor released an updated graphical user interface (GUI), resulting in a higher number of tickets referring to modified or missing functions and the slow response of the system. The higher number of tickets in Aug/Sept 2015 was related to a call for refreshing assigned user accounts.
However, overall the number of tickets was within the expected range (&lt;10 per month).</ns0:p><ns0:p>There was a clear decline in frequent usage during the project runtime. The ELN was officially introduced in June 2013 and the number of experiments increased during the following months as expected (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, blue bars). The small decrease in Nov 2013 was attributed to a new release, which caused some issues on specific operating system/browser combinations and language settings. Support for Windows XP also ended at this time. Some of the issues were resolved with the new release (Dec 2013). The increase in new experiments in Oct 2014 and the subsequent decline in Nov 2014 are correlated with a reminder sent out to the members of the project to record all activities in the ELN. The same is true for Sept/Oct 2015. The last quarter of each year illustrates year-end activities, including the conclusion of projects and the completion of corresponding paperwork, as reflected in the chart (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, orange line).</ns0:p><ns0:p>Overall, the regular documentation of experiments in the ELN appeared to be unappealing to researchers. This infrequent usage prompted us to carry out a survey of user acceptance in May/June 2015 (a detailed description of the analysis methods, including the KNIME workflow and the raw data, is provided in Article S2). The primary aim was to evaluate user experiences compared to expectations in more detail. In addition, the administrative team wanted to get feedback to determine what could be done to the existing support structure to increase usage or simplify the routine documentation of laboratory work. Overall, 77 users (see Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>) were invited to participate in the survey. Two users left the project during the runtime of the survey. We received feedback from 60 (80%) of the remaining 75 users. Two questionnaires were rejected because fewer than 20% of the questions were answered, so 58 questionnaires were evaluated (see Supplementary file S2). There are also some limitations of the survey which should be discussed. The number of invited active ELN users was low (n=77); we therefore refrained from collecting detailed demographic data in order to ensure the full anonymity of the participants, and had consequently expected a higher participation rate and, in particular, more detailed answers to the free-text questions. In addition, some interesting analyses could not be carried out because of the low number of returned forms (n=58). For example, only six users had previous experience with ELNs. Three of these six users found that the ELN changed their way of personal documentation positively, while the others did not answer the question or gave a neutral answer. We therefore did not report these results, as they are not representative. It should also be mentioned that this survey reflects the situation of this specific PPP project. The results cannot be easily transferred to other projects. It would be of interest whether the same survey would give different or the same results i) in other projects or ii) at different times during the course of this project.</ns0:p><ns0:p>A summary of the most important results of the survey is presented in Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> below.</ns0:p><ns0:p>• Helpdesk support in general is O.K.
(36%), but some users (10%) seem not to be satisfied, especially with training (47%) • Most users demand higher speed (n=15) and/or a better user interface (n=14)</ns0:p><ns0:p>Despite the perceived advantages of an ELN compared to traditional paper-based LNs as described above, users encountered several drawbacks during their usage of this ELN. Users criticized the provision of templates and cloned experiments, which were considered to impede the accurate documentation of procedural deviations. The standardized documentation of experimental procedures made it difficult to detect deviations or variations because they are not highlighted. Careful review of the complete documentation was required in order to check for missing or falsified information.</ns0:p><ns0:p>For many users (44 out of 58 = 76%), a paper-based LN was still the primary documentation system. They established a habit of copying the documentation into the ELN only after the completion of successful experiments rather than using the ELN online in real time. This extra work is also a major source of dissatisfaction and could create difficult situations in the case of discrepancies between the paper and the electronic version when intellectual property needs to be demonstrated. In these cases, failed experiments were not documented in the ELN, although comprehensive documentation is available offline. For failed experiments, the effort to document the information in a digital form was not considered worthwhile by the users. In other cases, usage of the ELN for the documentation of experimental work was hindered by performance issues caused by technically outdated lab equipment.</ns0:p><ns0:p>The survey helped us to understand which factors were contributing to the low usage of the ELN. Additional results of the survey grouped by operating system or usage are shown in Table <ns0:ref type='table' target='#tab_9'>6</ns0:ref> and Table <ns0:ref type='table'>7</ns0:ref> below. For a more detailed analysis, see Supplementary file S2. </ns0:p></ns0:div> <ns0:div><ns0:head>Summarized results based on the answers by OS:</ns0:head><ns0:p>• Windows users mainly conduct wet-lab work while Linux and Mac users perform in silico work • Windows users find the software too slow and too labor-intensive • Windows users know the functionality of the ELN better, as they are using the system more frequently • Mac OS and Linux users are more comfortable with the speed of the software, but they would not use or recommend it again.
This may be related to the specific in silico work, which might not be supported sufficiently by the ELN.</ns0:p></ns0:div> <ns0:div><ns0:head>Self-assessed frequency of the ELN</ns0:head></ns0:div> <ns0:div><ns0:head>Self-assessed length of usage time of the ELN</ns0:head></ns0:div> <ns0:div><ns0:head>Summarized results based on frequency of usage:</ns0:head><ns0:p>• In silico users enter data into the ELN less frequently, which is not unexpected as computational experiments generally run for a longer period of time than wet-lab experiments • More frequent users operate the ELN online during their lab work • Frequent users would like to have higher performance (this might be related to Windows) • Better-quality documentation was associated with more frequent use • Frequent users are not disrupted by documenting their work in the ELN; they like the software and would use an ELN in the future • Frequent users of an ELN report a positive effect on the way documentation is prepared • More frequent users like the software and feel comfortable using it, while infrequent users find the ELN complex and are frustrated about its functionality • Infrequent users are disappointed about the quality of search results</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions from the survey</ns0:head><ns0:p>About 40% of the users did not find the selected solution appropriate for their specific requirements. Either the solution did not support specific data sets or experiment types, or the solution did not respond fast enough to be used adequately. This indicates that the solution was not fit for purpose. More individual user demands would have to be considered to improve the outcome. This would require more resources in time and manpower than can be accommodated in a publicly funded project. Time would be a key factor, as experimental work begins within 6 months after project kick-off and the documentation of the experiments starts in parallel with the experimental work. Keeping in mind that every user needs training time to get acquainted with a new system and that there are always initial 'pitfalls' to any newly introduced system, an electronic laboratory notebook must be available within 4-5 months after project start. About one month should be allocated for the vendor negotiation process. Another month or two are required for writing and launching the tender process. This reduces the time frame for a systematic user requirement evaluation process to less than 1 month after the kick-off meeting. It should be acknowledged that not all types of experiments will be fully defined, nor will all users be identified. Thus the selection process will always be based on assumptions, as described above.</ns0:p><ns0:p>The slow response of the selected system might have occurred due to many potential issues. It could be related to the bandwidth available at the location, but more frequently, we believe, it is based on the hardware available. We tested the ELN on modern hardware with low and high bandwidth (Windows: Core i7 CPU @ 2.6 GHz, 6 GB RAM tested on ADSL: 25 kbit/s download, 5 kbit/s upload; and iMac Core i5 CPU @ 3.2 GHz, 4 MB RAM tested on ADSL: 2 kbit/s download and 400 bit/s upload) without a major impact on performance, but we did not test old hardware. During this project we learned that in certain labs computers run Windows XP, MS Office 2003 and Internet Explorer 8.
Using outdated software and hardware can be a contributing factor to the slow response of the ELN. Another potential issue could arise from uploading huge data sets over slow Asymmetric Digital Subscriber Lines (ADSL). Users working on local file servers and downloading data from the Internet face unexpectedly low performance when uploading data to a web resource over ADSL, which is due to the low upload bandwidth and the correspondingly high latency. This is true for all centralized server infrastructures accessed via Internet lines, including SaaS, and should be taken into account when considering hosting in the cloud.</ns0:p><ns0:p>Finally, users demand functionalities similar to those of their daily working platform. This is an unsolved challenge due to the heterogeneity of software used in the life sciences, ranging from interactive graphical user interface (GUI) based office packages to highly sophisticated batch processing systems. The evolution of new ELNs should provide more closely aligned capabilities to meet the users' requirements.</ns0:p><ns0:p>For the ongoing PPP project, a more individualized user support capability might have helped to overcome some of the issues mentioned above. Individual on-site training parallel to the experimental work could offer insight into users' issues and provide advice on solutions or workarounds. This activity would require either additional travel for a small group of super users or the creation of a larger, widely spread group of well-trained super users that is kept informed about ongoing issues and solutions.</ns0:p><ns0:p>The issues discussed above also constitute a social or scientific community problem. Being first, which is often considered as being best, is the dictum scientists strive to achieve, especially when performance is reduced to the number of publications and the frequency of citation rather than to the quality of documentation and reproducibility, which account for the quality of the results. This culture needs to be replaced by 'presenting full sets of high quality results including all metadata' for additional benefit to the scientific community <ns0:ref type='bibr' target='#b18'>(Macarelly, 2014)</ns0:ref>. ELNs could contribute significantly to this goal.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>We successfully selected and implemented an ELN solution within an ambitiously short timeframe for mostly first-time electronic lab book users. The implementation included the creation and arrangement of internal training material and the establishment of an ELN helpdesk as a SPoC.</ns0:p></ns0:div> <ns0:div><ns0:head>Lessons learned from the selection and implementation of an ELN solution</ns0:head><ns0:p>Generally accepted strategies for software selection and implementation are recommended because they provide a structured and well-defined process for decision-making. Typically, any selection process involves balancing several contrasting features. In our case, the functionality of the system was balanced against long-term maintenance options and costs. The choice was a compromise between these three aspects.</ns0:p><ns0:p>Performance, ease of use and functionality are the most important aspects for end users of newly launched software. The general expectation is that software should make work quicker, easier and more precise.
Thus, users anticipate that software should simplify typical tasks and support their current workflow without adaptation. However, an optimal use of the potential capacity of an ELN requires a change in the working process. Replacing the paper-based LN by a permanently installed lab PC will not lower the entry barrier. The benefits for the end user, namely improved data handling and flexibility in data reuse (off-site use and rapid communication among partners), must be communicated.</ns0:p><ns0:p>For an ELN in a PPP, the documentation of daily work is the key issue. In particular, in a project with widespread activities ranging from fundamental chemical wet-laboratory and in silico work to biological in vitro and in vivo studies, the intercommunication between the sites requires certain data structures. The selected ELN must support many different types of documentation (e.g. flat text files, unstructured images and multidimensional data containers) and in parallel must be at least as flexible as a paper-based notebook (e.g. portable, accessible, ready for instant use and suitable for use when the researcher is wearing laboratory protective clothing). The less visible features of an ELN, such as the comprehensive filing of all information for an experiment in one place, the standardised structuring of experiments, and long-term global accessibility, are not as important for end users while they are working with the system. End users require fast access to their latest experiments and expect support of their specific workflow during documentation. For some users, documenting laboratory work in an electronic system also requires more time and attention than writing entries in a paper notebook.</ns0:p><ns0:p>No ELN can fully reproduce the flexibility of a blank piece of paper. The first draft of the experimental documentation in a classical paper-based LN can be rough and incomplete because only the documenter needs to understand it. Later on, the user can improve the notes and add additional information. In contrast, the documentation process in an ELN is more structured and guided by mandatory fields, which appears more time-consuming to the user. However, the overall process could be faster once the user becomes familiar with the system. The risk of losing necessary information would also be lowered by using the system online during the practical work. There are also other advantages of the ELN, e.g. linking experiments to regular protocols.</ns0:p><ns0:p>The main value of an ELN is that it makes the data more sharable because the data are • constantly accessible • more complete • easier to follow</ns0:p></ns0:div> <ns0:div><ns0:head>Lessons learned from the technical solution</ns0:head><ns0:p>Operating system or configuration dependent issues were also encountered frequently. Supporting different systems and browsers, and interfacing with other tools such as office software packages, is complex and requires testing in different environments. Only the most common combinations of systems and tools are recommended by vendors. One possible source of difficulty was the laboratory computers in academia. For the purpose of data recording, older software versions running on outdated hardware are often used, with restrictions imposed by the instrument software installed on them (Article S2). Many incompatibilities are based on atypical configurations.
However, it is impossible to drive the computer upgrade path for laboratory computers from the requirements of a single system, particularly for academic partners. Furthermore, in large academic institutions, many systems are not updated due to frequently changing personnel structures.</ns0:p></ns0:div> <ns0:div><ns0:head>Data sharing</ns0:head><ns0:p>Science, by definition, should be a discipline sharing 'knowledge' with the scientific community and the general public (DFG, 2013), particularly if funded by public organizations. Creating knowledge only for self-interest makes little sense, because knowledge can be verified and extended only through disputation with other researchers. Why do we struggle to share and discuss our data with colleagues?</ns0:p><ns0:p>Scientists often display a strong unwillingness to share their data. They frequently believe they are the data owner, i.e. the entity that can authorize or deny access to the data. Nevertheless, they are responsible for data accuracy, integrity and completeness as the representative of the data owner.</ns0:p><ns0:p>The data generator should be granted primary use (i.e. publication) of the data (DFG, 2013), but the true owner is the organization that financially supports the project.</ns0:p><ns0:p>Within a PPP project it is necessary to establish a documentation policy that is suitable for all. Agreement must be reached on standards for the responsibility, content and mechanisms of documentation, particularly in international collaborations where country-specific and cultural differences need to be addressed (Elliott, What are the benefits of ELN?, 2010). Furthermore, no official, widely accepted standards are pre-defined. As long as the justification for ELNs is more about control over performance than about fostering the willingness to cooperate and share data and knowledge in the early phase of experiments, user acceptance will remain low <ns0:ref type='bibr' target='#b21'>(Myers, 2014)</ns0:ref>.</ns0:p><ns0:p>Without user acceptance, the quality of the documented work will not improve <ns0:ref type='bibr' target='#b0'>(Asif, Ahsan &amp; Aslam, 2011</ns0:ref>) <ns0:ref type='bibr' target='#b36'>(Zeng, Hillman &amp; Arnold, 2011)</ns0:ref>.</ns0:p><ns0:p>In addition to these social and community-based challenges <ns0:ref type='bibr'>(Sarich, ELN Incorporation Into Research, 2013</ns0:ref>) (Sarich, Choosing an ELN, 2014) <ns0:ref type='bibr'>(Sarich, ELN Presentation, 2014)</ns0:ref>, there are technical aspects and security concerns that remain to be addressed. A simple 'copy and paste' functionality for any type of text and data, including special characters and symbols, was high on the list of user demands. Another issue is the speed and convenience of access to the ELN, especially for technicians in the laboratory. Although the first enhancement needs some improvements from vendors, the responsibility for the second feature lies with the policies of the research organizations and their IT departments. Typically, out-of-date hardware and slow Internet connections are installed in laboratories. This impedes the adoption of ELNs because slow hardware and slow network connections both have a negative impact on the usability of software.</ns0:p>
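As a purely illustrative aside, and not part of the project's actual tooling, the following minimal sketch (Python standard library only; the URL is a placeholder, not the address of the system used in this project) shows how a site could roughly quantify the round-trip latency towards a hosted ELN before rollout, in line with the recommendation in the next paragraph.

# Rough pre-rollout latency check against a hosted ELN (hypothetical URL).
import time
import urllib.request

ELN_URL = "https://eln.example.org/"   # placeholder, not the system used in this project
SAMPLES = 5

def measure_round_trips(url, samples):
    """Return the durations (in seconds) of a few simple GET requests."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read(1024)            # read only a small chunk
        durations.append(time.perf_counter() - start)
    return durations

if __name__ == "__main__":
    times = measure_round_trips(ELN_URL, SAMPLES)
    print("round trips [s]:", [round(t, 2) for t in times])
    print("median [s]:", round(sorted(times)[len(times) // 2], 2))

Comparable checks of the upload path (e.g. timing the transfer of a representative raw data file) would reveal the asymmetric bandwidth issues discussed above.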
<ns0:p>Vendors should ensure that the minimal specifications for network access and hardware required to run their systems without excessive latency are clearly defined, also from a long-term perspective. Before implementation, hardware/network configurations should also be checked by the responsible persons in the research organizations and, where necessary, replaced by adequate equipment. Finally, a generalized standard export/import format for the migration of data between different ELN solutions would be beneficial, because this would provide independence from one selected product and would also support data archiving <ns0:ref type='bibr'>(Elliott, Thinking beyond ELN, 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The adoption and use of ELNs in PPP projects require a careful selection and implementation process with a change management activity in parallel. Without the willingness of users to document their experimental work in a constructive, cooperative way, it will be difficult to acquire all the necessary information in time. Mandating users to record all activities, which could easily be enforced in a company by creating a company policy, will not work in a PPP project with independent organisations. Forcing users to record all activities suggests control of the users and will result in minimal, low-quality documentation.</ns0:p><ns0:p>Although user buy-in is a key requirement, basic technical requirements must also be addressed. Up-to-date hardware and high-speed network connections for accessing the ELN by laboratory personnel are important for user acceptance. Finally, the selected ELN solution should support the daily work of each user by simplifying their documentation processes and adding value primarily to the user. Vendors need to complement the heterogeneous workflows that are common in life science research, particularly by adding drag-and-drop functionality to streamline the usage of an ELN. Another option to support users would be an easier transition from paper notebook pages, e.g. by printed ELN templates and a dedicated scanning and import procedure with optical character recognition (OCR). However, this would require well-prepared templates and accurate records by the user.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Structural outline of the New Drugs for Bad Bugs (ND4BB) framework.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Schematic outline of stepwise procedure for the implementation of a new system (Nehme &amp; Scoffin, 2006).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Technical and organizational challenge: schematic overview of paths for sharing research activity results within a public-private-partnership on antimicrobial research.</ns0:figDesc><ns0:graphic coords='7,221.33,286.02,54.86,51.17' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>No. of experiments (2342), No. of tickets (309) (legend items belonging to Figure 4).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Overview of usage of the ELN and workload for the helpdesk over time.
Y-axis on the left shows number of experiments created per month (blue bars) overlaid by number of help desk tickets created per month (orange line) scaled on the y-axis on the right.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Overview of self-assessed frequency and usage-time of the ELN given the used OS plus summarized results of the survey for the different user-groups.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 : Long-term benefits of an electronic laboratory notebook (ELN) compared to a paper based LN.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Definitely</ns0:cell><ns0:cell>Potentially</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Create (standard) protocols for</ns0:cell><ns0:cell>&#61623; Exchange protocols/standard operating</ns0:cell></ns0:row><ns0:row><ns0:cell>experiments</ns0:cell><ns0:cell>procedures (SOPs)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Create and share templates for</ns0:cell><ns0:cell>&#61623; Remote access of results/data from other</ns0:cell></ns0:row><ns0:row><ns0:cell>experimental documentation</ns0:cell><ns0:cell>working groups</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Share results within working groups &#61623; Amend/extend individual protocols &#61623; Full complement of data/information</ns0:cell><ns0:cell>&#61623; Ensure transparency within projects &#61623; Discuss results online &#61623; Control of overall activity by timely</ns0:cell></ns0:row><ns0:row><ns0:cell>from one experiment is stored in one</ns0:cell><ns0:cell>planning of new experiments based on</ns0:cell></ns0:row><ns0:row><ns0:cell>place (in an ideal world)</ns0:cell><ns0:cell>former results</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Storage of data from all experiments in a</ns0:cell><ns0:cell>&#61623; Search for chemical (sub)structures</ns0:cell></ns0:row><ns0:row><ns0:cell>dedicated location</ns0:cell><ns0:cell>within all chemical drawings in</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Protect intellectual property (IP) by &#61623; Search functionality (keywords, full text)</ns0:cell><ns0:cell>experiments</ns0:cell></ns0:row><ns0:row><ns0:cell>timely updating of experimental data with</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>date/time stamps</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 : Overview of user requirements organized as 'core user requirements' for essential items, and 'non-core user requirements' representing desirable features.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Core user requirements</ns0:cell><ns0:cell>Non-core user requirements</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; System set-up and implementation should be fast and simple &#61623; Access from different platforms should be</ns0:cell><ns0:cell>&#61623; Conform with Good Laboratory Practise &#61623; User management with dedicated access permissions (expectation: all users working</ns0:cell></ns0:row><ns0:row><ns0:cell>possible: Windows, Linux, Mac OS</ns0:cell><ns0:cell>on the same project, but in different work</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Low training requirements (for high level</ns0:cell><ns0:cell>packages)</ns0:cell></ns0:row><ns0:row><ns0:cell>of acceptance)</ns0:cell><ns0:cell>&#61623; Workflow management</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Hosted system with state-of-the-art 
security settings &#61623; Simple user management (only limited</ns0:cell><ns0:cell>&#61623; Order management &#61623; Chemical structure handling &#61623; Dedicated tree structure for storing</ns0:cell></ns0:row><ns0:row><ns0:cell>support by project members possible) &#61623; Suitable for both chemical (including e.g. drawings of molecules) and biological (including e.g. capture fluorescent images) experiments &#61623; Low costs, especially for long-term usage</ns0:cell><ns0:cell>experiments &#61623; Legally-binding procedures (signatures) &#61623; Modular expandability &#61623; Appropriate integrated analytical features &#61623; Social networking and collaborative (chat)</ns0:cell></ns0:row><ns0:row><ns0:cell>in the academic area</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>features &#61623; Storage for large sets of 'raw' data for reanalysisBased on the user URS, a tender process (Step 2, Figure2) was initiated in which vendors were invited to respond via a Request for Proposal (RFP) process. The requirement of the proposed ELN to support both chemical and biological research combined with the need to access the ELN by different operating systems (Windows, Linux, Mac OS) (see Figure</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 : List of drivers for final decision about which ELN-solution to be set up and run in the project consortium.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 Number of users of the ELN and participants in the survey</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Description</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10416:1:0:CHECK 28 Jul 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 : Overview about most important results from survey, which was sent out to 77 users from 18 academic and SME organisations. 
A set of 58 comprehensively answered questionnaires was considered for evaluation, Major results from survey</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>While 52% of the Mac and Linux users are satisfied about the performance 35% of the Windows users are unhappy compared to 27% which are satisfied about the performance of the system</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#61623; Most user never used an ELN before (51 out of 58 users replying to the survey stated 'I</ns0:cell></ns0:row><ns0:row><ns0:cell>never used an ELN before this project'; no info from remaining 19 invited users)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Most users (76%) are using a paper notebook in addition to the ELN &#61623; Many users (n=23) would not recommend using an ELN again &#61623; No of Operating systems: Linux=7, Mac OS=14, Windows=37 &#61623; ELN typically used</ns0:cell></ns0:row><ns0:row><ns0:cell>o Rarely or sometimes with &lt; 1 h per session (53%)</ns0:cell></ns0:row><ns0:row><ns0:cell>o Frequently &lt; 1 h per session (16%)</ns0:cell></ns0:row><ns0:row><ns0:cell>o Sometimes or frequently 1-2 h per session (9%) &#61623; Frequent users (n=13) realized an increase in quality of documentation (46%) and 38%</ns0:cell></ns0:row><ns0:row><ns0:cell>would recommend this software to colleagues while even three out the 13 frequent users</ns0:cell></ns0:row><ns0:row><ns0:cell>would not use an ELN again if they could decide</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61623; Rarely users (n=19) are skeptical about ELN functionality (42%) &#61623;</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10416:1:0:CHECK 28 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 : Summary of acceptance of ELN and user experience given the mainly used OS plus an overview of the main result from survey for the different user-groups.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Frequently used platform</ns0:cell><ns0:cell>No of</ns0:cell><ns0:cell>Percentage of</ns0:cell><ns0:cell>No. of wet-lab</ns0:cell><ns0:cell>No. of in-silico</ns0:cell></ns0:row><ns0:row><ns0:cell>to access the ELN</ns0:cell><ns0:cell>users</ns0:cell><ns0:cell>all users</ns0:cell><ns0:cell>researchers*</ns0:cell><ns0:cell>researchers*</ns0:cell></ns0:row><ns0:row><ns0:cell>Mac</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>24%</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Windows</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>64%</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>Linux</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>12%</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell></ns0:row></ns0:table><ns0:note>* Those columns do not add up to 58 because some participants stated that they are both wet-lab and in-silico researchers.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Frequently used platform to access the ELN No. 
of users Percentage* Percentage/ OS</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Rarely</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Linux</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3%</ns0:cell><ns0:cell>29%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sometimes</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Linux</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>71%</ns0:cell></ns0:row><ns0:row><ns0:cell>Rarely</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Mac</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5%</ns0:cell><ns0:cell>21%</ns0:cell></ns0:row><ns0:row><ns0:cell>Rarely</ns0:cell><ns0:cell>1-2h</ns0:cell><ns0:cell>Mac</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>7%</ns0:cell><ns0:cell>29%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sometimes</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Mac</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>9%</ns0:cell><ns0:cell>36%</ns0:cell></ns0:row><ns0:row><ns0:cell>Frequently</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Mac</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3%</ns0:cell><ns0:cell>14%</ns0:cell></ns0:row><ns0:row><ns0:cell>Rarely</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>16%</ns0:cell><ns0:cell>24%</ns0:cell></ns0:row><ns0:row><ns0:cell>Rarely</ns0:cell><ns0:cell>1-2h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sometimes</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>12%</ns0:cell><ns0:cell>19%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sometimes</ns0:cell><ns0:cell>1-2h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>14%</ns0:cell><ns0:cell>22%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sometimes</ns0:cell><ns0:cell>&gt;2h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2%</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>Frequently</ns0:cell><ns0:cell>&lt;1h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>12%</ns0:cell><ns0:cell>19%</ns0:cell></ns0:row><ns0:row><ns0:cell>Frequently</ns0:cell><ns0:cell>1-2h</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>7%</ns0:cell><ns0:cell>11%</ns0:cell></ns0:row></ns0:table><ns0:note>* This column does not add up to 100% due to rounding errors.</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10416:1:0:CHECK 28 Jul 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Rebuttal letter manuskript „Electronic laboratory notebooks in a public-private-partnership“ Dear Editor, Dear Reviewers, We thank you for your constructive comments and recommendations regarding both language quality and content wise. We gave the text to one of our internal native speaking scientists for proofreading and highlighted the improvements in track-change modus. Further, we have edited the minor revisions as following: • Abstract was revised and transformed from bullet-points into running text • Typos were corrected • Position of captions for tables and figures now consistent according to conventions (for tables above, for figures below) • Numbers of users are mentioned in tables/figures, where appropriate • Figure 4 was revised according to suggestions by editor and both reviewers 1 and 2, please see also below for further details ◦ “No” changed into “No.” ◦ Both charts are now overlaid • The text-layout was adapted in order to bring each tables fully on one page • Section-headers for “Methods” (page 4 line 118) and “Results” (page 10 line 244) have been inserted • The formatting of the references in the text is fixed • The position of figure 2 and 3 have been exchanged and captions have been improved for all figures and tables • Extension of Supplementary file S2 by table 10 “Overview of all questions from survey with both numbers and relative figures regarding given answer.” We are happy to get your opinion regarding the point of digital preservation and upcoming problems connected to that. We definitely agree that paper notebooks are the most widely used and major tool for documentation and reporting of experimental work. The lack of accepted and widely used standards for electronic laboratory notebook both in terms of software and file format is a critical point. We have inserted a paragraph on page 19 lines 531 – 533 as following: “Finally, a generalized standard export/import format for the migration of data between different ELN solutions would be beneficial, because this would provide independence from one selected product and would also support data archiving (Elliott, Thinking beyond ELN, 2009).” Further, we have prepared a point-by-point reply to the reviewers comments below and hope that we could answer all comments and questions comprehensively and satisfyingly. With kind regards, Manfred Kohler and Lea Vaas On behalf of all authors Reviewer 1: […] However, the study could be improved by having a more sophisticated ELN system to test. Particular features of the ENL interface could be isolated and the user experience tested. For example, drawing pictures or version control (e.g. per git model). However, the reviewer understands that this is a lot of work and may be outside of the resources available to the authors. This should be considered in the future. Comments for the Author Plots in Fig. 4 should be overlayed and on the same axis. We thank Reviewer 1 for the helpful comments and the suggestion about testing a more sophisticated ELN system. We definitely agree that the next step for improvement of the system would cover the mentioned aspects of improving the interface and test the specific issues in the user experience. However, the manuscript in the current version covers the process of selection and rollout of an electronic laboratory notebook (ELN) system used in a collaborative research program. Our first aim was not to improve running systems, but to have a system up and running quickly. 
However, as mentioned above, the next step would be to have a more sophisticated ELN system plus an engaged user group willing to participate in controlled testing experiments with isolated features exploring user experiences. We have revised the figure 4 as recommended. Reviewer 2: - (Table 1) What does 'Chemical (sub)structure search' mean? Expand on that little. We agree with the reviewer, that this is a specific type of data retrieval, which needs some additional explanation. We have changed the item into “Search for chemical (sub)structures within all chemical drawings in experiments” (Table 1, page 3 last item in the right column) and added a text-block expanding this in the text on page 2-3 lines 52 - 82 “The order of mentioned points is not expressing any ordering. Beside general tasks, especially in the field of drug discovery some specific tasks have to be facilitated. One of that is functionality allowing searches for chemical structures and substructures in a virtual library of chemical structures and compounds (see table 1, last item in column “Potentially”). Such a function in an ELN hosting reports about wet-lab work dealing with known drugs and/or compounds to be evaluated, would allow dedicated information retrieval for the chemical compounds or (sub-) structures of interest.” - (Line 122) 'The most important items of the collected user requirement are listed in Table 2'. How was the importance of an item determined? We have added a short paragraph explaining the determination of importance on page 7 lines 155 - 158 “We divided the gathered requirements into ‘core’ meaning essential and ‘non-core’ standing for ‘nice to have, but not indispensable’. Further, we list here only the items, which were mentioned by more than two super users from different groups. The full list of URS is available as a supplement (Article S1).” - (Table 2) What does GLP stand for? Good Laboratory Practice? Yes, it is the abbreviation for Good Laboratory Practise. It has been changed accordingly. - (Line 128) What does ‘chemical and biological search’ mean? This might be obvious to people in biological field, but not to me (computer science background). We are very sorry, but we are unable to find this phrase in line 128 or in the surrounding text. - (Table 2) How were requirements classified as core vs. non-core? This is now explained in the introduced short paragraph referring to the reviewer’s comment on line 122, please see above. - (Table 2) Explain 'chemical and Biological notebook'. Do you just mean a notebook for chemical and biological experiments? If so, rephrase. Yes, we have rephrased it to “Suitable for both chemical (including e.g. drawings of molecules) and biological (including e.g. capture fluorescent images) experiments” in the table. - (Line 139) 'two vendors were selected'. Based on what? Thanks for pointing out, that this information is missing. We have rephrased the sentence on page 8 lines 176 – 179 as following: “For the third step, (Figure 2), the two vendors meeting all the core user requirements and the highest number of non-core user requirements were selected to provide a more detailed online demonstration of their ELN solution to a representative group of users from different project partners.” - (Line 144) 'The final decision was based on the higher number of positive features in the chosen system'. Just the higher number of features does not seem like a good criteria. Were all features equally important? Perhaps explain little more on the choice here. 
It would be useful to know the key features that led to choosing one vendor over the other. We agree with the reviewer that this sentence was too short sighted and sloppy in wording. We have rephrased the paragraph on page 8 lines 182 – 190 into “For the final decision only features which were different in both systems and supported the PPP were ranked between 1=low (e.g. academic user licenses are cheap) and 5= high (e.g. cloud hosting) as important for the project. The decision between the two tested systems was then based on the higher number of positively ranked features, which revealed most important after presentation and internal discussions of super users. The main drivers for the final decision are listed in Table 3. In total, the chosen system got 36 positive votes on listed features meeting all high ranked demands listed in table 3, while the runner up had 24 positive votes on features. However, if the system had to be set up in the envisaged consortium it turned out to be too expensive and complex in maintenance.” - (Table 3) The fifth point ('Users should take responsibility ...') is not clear. It does not read like a feature either. We agree with the reviewer, that the statement in this point was unclear. However, for the selection of a system, it was an important feature. We have rephrased the point to • “ELN does not apply highly sophisticated checking procedures, which would require a high level of configuration, restricting users to apply their preferred data format (users should take responsibility for the correct data and the correct format of the data stored in the ELN instead)” - (Table 3) 'The potential to continually use the ELN after five year ...' How was this evaluated? Was it just based on cost? Thanks for the feedback on this point. Indeed, we considered the potential of affordability for the academic partners also after end of the funding period for this project. We have adapted the text accordingly: “affordable for academic partners also after the five year funding duration of the project (based on the maturity of the vendor, number of installations/users, the state-of-the-art user interface and finally also the costs)” - (Line 174) 'The initially predefined templates were rarely adopted.' I can see that happening. Was there any common format/pattern among users that could be used to revise the default templates? If so, maybe the templates could be revised based on ELN usage for first few months? We thank the reviewer for stating these evident questions. The team was trying to find a solution to this as soon as we became aware of it. Unfortunately, the basic characteristics of the experimental work in academic environments and especially in the consortium at hand is high variability of experimental set-ups. We tried to find common formats and patterns, however, in basic research every experiment is unique in terms of set-up. Although we thought, that at least certain types or measurements could be grouped and ‘starting-templates’ would be helpful, it turned out to be not feasible for the researchers. We feel that these experiences, however, should not be over-emphasized. Thus, we tried to address this issue in a more general way in the discussion on page 17 lines 462-474, which is: “For an ELN in a PPP, the documentation of daily work is the key issue. 
In particular, a project with widespread activities ranging from fundamental chemical wet laboratory and in silico work to biological in vitro and in vivo studies, the intercommunication between the sites requires certain data structures. The selected ELN must support many different types of documentation (e.g. flat text files, unstructured images and multidimensional data containers) and in parallel must be at least as flexible as a paper-based notebook (e.g. portable, accessible, ready for instant use and suitable for use when the researcher is wearing laboratory protective clothing). The concealed features of an ELN, such as comprehensive filing of all information for an experiment in one place, standardised structuring of experiments, and long-term global accessibility are not as important for the end user when they are working with the system. The end user requires fast access to his latest experiments and expects support of his specific workflow during documentation. For some users, documenting laboratory work in an electronic system also requires more time and attention than writing entries in a paper notebook.” - (Line 195) What was defined as acceptable performance? We have expanded the description of the technical solution on page 10 line 252 – 254: “The selected ELN, hosted as a SaaS solution on a cloud-based service centre, provided a stable environment with acceptable performance, e.g. login < 15 sec, opening an experiment with 5 pages < 20 sec (for further technical details please see Supplementary file S1).” - (Line 196) Expand on the 'denial of access' case. Was it due to just heavy unexpected usage of ELN cloud or was it due to an external factor? We have inserted a short explanation about the three-hours down-time, because it was a simple technical problem on external servers and the help-desk reacted quickly and professional (page 10 line 255 - 258). “The first involved denial of access to the ELN for more than three hours, due to an external server-problem, which was quickly and professionally solved after contacting the technical support, and the other was related to the introduction of a new user interface (see below).” - (Line 213) Here you say 100 users overall but in Supplement Article S2 you mention the number of overall users was less than 80. In introduction you say 90 bench scientist. Having a para on study participants explaining study participant demographics (age, gender, title, and previous familiarity with ELN) would help. In that para you can explain the overall number of users and number of parallel users, number of users who took the survey, and so forth. We thank the reviewer for pointing on this sloppy description of user numbers. We see that this needed some refinement and therefore revised the first paragraph on page 11 lines 278 – 283, which now is: “In total, more than 100 users were registered during the first two years runtime, whereas the maximum number of parallel user accounts was 87, i.e. 13 users left the project for different reasons. The number of 87 users is composed of admins (n=3), external reviewers (n=4), project owners (n= 26) which are reviewing and countersigning as Principal Investigators (PIs), and normal users (n=41). 
Depending on the period, the number of newly entered experiments per month ranged between 20 and 200 (Figure 4 blue bars).” This paragraph comes with a new table 4 (see page 12), depicting the available characteristics of ELN users and their participation in the survey: Table 4 Number of users of the ELN and participants in the survey Description n Remarks Maximum number of parallel users of the ELN 87 Number of parallel users at the time point of the survey 84 Administrators 3 Not invited to participate in survey External Reviewers 4 Not invited to participate in survey Project Owners = Principal Investigators 26 Invited to participate in survey Normal User 51 Invited to participate in survey Deactivated users during the runtime of the questionnaire 2 Questionnaires returned 60 Questionnaires rejected due to insufficient (less than 20%) number of answered questions 2 Evaluated questionnaires 58 We further added a paragraph on page 12/13 line 323 – 345 explaining that we refused to collect detailed demographic data in order to ensure full anonymity of the participants at any time. The new paragraph is: “Overall, 77 users (Table 4.) were invited to participate in the survey. Two users left the project during the runtime of the survey. We received feedback from 60 (=80%) out of the remaining 75 users. 2 questionnaires were rejected due to less than 20% answered questions. The number of evaluated questionnaires is 58. see Supplementary file S2. There are also some limitations of the survey which should be discussed. The number of invited active ELN users was low (n=77), thus we refused to collect detailed demographic data in order to ensure full anonymity of the participants, so we expected a higher participation and especially more detailed answers to the free text questions. In addition, some interesting analysis could not be answered by of the questionnaire due to the low number of returned forms (n=58). E.g. only six users had some experience with ELNs. Three out of the six users found the ELN is changing the way of personal documentation positively, while the others didn’t answered the question or gave a neutral answer. Thus we didn’t reported these results as not representative. It should also be mentioned that this survey reflects the situation of this specific PPP project. The results cannot be easily transferred to other projects. It would be of interest if the same survey would give different or the same results i) either in other projects ii) during the time course of this project.” - (Table 4) Are survey results different for users who were familiar with ELN vs. who weren't? If the results are different, it is worth pointing out. This table is now Table 5. All comprehensively answered surveys considered for evaluation were provided by people who did not used an ELN before. Thus, we cannot discuss this point. - (Table 4) Mention number of users in parenthesis when saying 'Most user' and 'Many users'. E.g., 'Most user (n=?)'. We thank the reviewer for this suggestion. The respective numbers and short explanations were introduced in the table, which is now table 5. Further, we have expanded the paragraphs around table 4 (page 12/13 line 323 – 345, see also above). Table 5: Overview about most important results from survey, which was sent out to 77 users from 18 academic and SME organisations. 
A set of 58 comprehensively answered questionnaires was considered for evaluation, Major results from survey • Most user never used an ELN before (51 out of 58 users replying to the survey stated “I never used an ELN before this project”; no info from remaining 19 invited users) • Most users (76%) are using a paper notebook in addition to the ELN • Many users (n=23) would not recommend using an ELN again • No of Operating systems: Linux=7, Mac OS=14, Windows=37 • ELN typically used ◦ Rarely or sometimes with < 1 h per session (53%) ◦ Frequently < 1 h per session (16%) ◦ Sometimes or frequently 1-2 h per session (9%) • Frequent users (n=13) realized an increase in quality of documentation (46%) and 38% would recommend this software to colleagues while even three out the 13 frequent users would not use an ELN again if they could decide • Rarely users (n=19) are skeptical about ELN functionality (42%) • While 52% of the Mac and Linux users are satisfied about the performance 35% of the Windows users are unhappy compared to 27% which are satisfied about the performance of the system • Helpdesk support in general is O.K. (36%), but some users (10%) seem to be not satisfied, especially with training (47%) • Most users demand higher speed (n=15) and/or better user interface (n=14) - (Table 4) Just curious, did the frequent users who realize an increase in quality of documentation said they would recommend ELN? We have revised and extended the newTable 5 by the following points about • Frequent users (n=13) realized an increase in quality of documentation (46%) and 38% would recommend this software to colleagues while even three out the 13 frequent users would not use an ELN again if they could decide • Rarely users (n=19) are skeptical about ELN functionality (42%) - (Table 4) Among users who were not satisfied (seventh point), was there any common theme/reason for their dissatisfaction? Interestingly, none of the six users stating that the helpdesk support was unsatisfying (see second last bullet point in table 5), stated a comment on that in the field asking for “What do you think needs most improvement, and why?” nor in the field asking for “Do you have any other suggestions?”. However, they rated down the training. - (Line 262) 'For many users'. How many exactly? Give number, e.g., 'For many users (n=?)' We inserted the number “(44 out of 58 = 76%),” in the text (page 14, line 365) - (Line 321) '... not to the quality of results.' Not just results. Isn't it also about quality of documentation/reproducibility? It was good to see this paragraph. We thank the reviewer for this specification. The second sentence in last paragraph on page 16 (lines 438 – 441) was extended to: “Being the first, which often is considered as being the best, is the dictum scientists strive to achieve especially when performance is reduced to the number of publications and frequency of citation, not to the quality of documentation/reproducibility accounting for determination of quality of results.” - Suggestion for a discussion point: In this experiment it seems, to me, the users were forced to i) change the way they document their experiments, and ii) document their experiments on a computer instead of a paper notebook. Would it have been an easier transition if a paper notebook was printed with a template (from the ELN) and the users were first asked to use the paper notebook (with template from ELN), and then later transition to ELN. This of course would be a speculation (permitted by PeerJ). 
It might be interesting to add a small para on this from the author's perspective based on their experience. This reviewer raises a critical point the authors were indeed internally discussing intensively from beginning of the project on. We were very aware that users might feel uncomfortable (up to refusing the use at all) when the usage of an ELN becomes mandatory within the project. Unfortunately the time-constraints for implementation and roll-out to project’s consortium members were very tight. Thus we decided to have a one-step change from paper notebooks to the ELN, but designed the training exercises and materials (slide-sets, info brochures, newsletter, etc.) more comprehensively. For getting a reliable and representative insight, if the proposed smoother transfer using paper notebooks with pre-printed ELN templates would improve users’ compliance, we would need a randomized controlled trial with a reasonable number of participants. Because we cannot provide such, we feel that we should not extensively elaborate on it. We inserted therefore only a short extension of the last paragraph on page 19 (line 550 - 553) “Another option to support users would be an easier transition of paper notebook pages e.g. by printed ELN templates and a dedicated scanning and importing procedure with optical character recognition (OCR). However, this would require well prepared templates and accurate recordings by the user.” - (Line 458) Very appropriate quote from one of the users. Incorporate more of such quotes from users throughout the paper wherever applicable. You do have many quotes from users (Survey supplement). We thank the reviewer for this suggestion. Indeed, the survey comprised three possibilities for free text statements. Unfortunately, most of those statements do not seem to be appropriate for insertion into the text, because they are either simply not constructive (What do you think is the best aspect of this software, and why?: “nothing, just annoying”) or unconstructive and rude criticisms (What do you think needs most improvement, and why?: “speed, GUI, accessibility ... in 2015, this software is kind of a no-go ... it feels like being back into the 90s, where you have your first Windows installation and the sandclock turning around and you keep waiting and waiting and waiting until something happens .. haven't seen such a bad program in years”). We therefore decided not to include more statements directly quoting, however, all phrases are provided in the Supplementary file S2. - Show number of users (n) in table/figure captions. E.g., Table 4 (how many users responded to the survey?), Figure 4 (number of users might have varied over time, but you can show median or minimum number of users). We thank the reviewer for this suggestion. We agree that figure captions had be improved and added number of users and other specifics where appropriate. We have added a paragraph about the composition of the user group (page 11 line 278 - 282): “In total, more than 100 users were registered during the first two years runtime, whereas the maximum number of parallel user accounts was 87, i.e. 13 users left the project for different reasons. The number of 87 users is composed of admins (n=3), external reviewers (n=4), project owners (n= 26) which are reviewing and countersigning as Principal Investigators (PIs), and normal users (n=41).” We further contacted the vendor for statistics regarding number of registered and active users over time. 
Although they tried different things to gather those information, they could not deliver it. Thus we have to apologize that these information are not available and cannot be integrated into Figure 4. Reviewer 3 The introduction provides good motivation as to why an ELN is important; however, there is no related work section that compares and contrasts the design choices made in the implementation of this ELN compared to others. If there are other examples of similar systems, it would extremely helpful to compare and contrast your design choices to the choices of prior work. The authors thank the reviewer for this suggestion. When starting the project and having the first discussions about the ELN, the team did a literature review for similar examples, especially seeking for “lessons learned”. To the best of our knowledge, the process of software implementation and roll-out within a life-science focussed research environment does not seem to be considered as research topic. At that early time point, we found the cited articles by Asif, Ahsan & Aslam (2011), Du & Kofman (2007), Elliott (2009, 2010) and others dealing with the technical aspects and benefits of an ELN. The only report about the actual implementation which we could find was Iyer & Kudrle (2012). Unfortunately, a literature search done for this rebuttal did not result in new findings. Although we would be happy to provide such a comparison, it seems not feasible at the moment. It would be helpful to have another table that describes some statistics about the user studies – how many users, distribution of PC vs Mac vs Linux users. Some of this is provide in lines 213 – 222 on page 13, but the numbers aren’t precise – it would be better to pull these out into a table when possible. Finally, some analytics on the survey results could provide some more meaningful insights into precisely why the users felt the system was frustrating to use. This would help to better justify the findings in tables 5 and 6. We thank the reviewer for this helpful comment. We have extended the paragraph describing the survey (number of invitations send out, number of replies, etc.) on page 12 lines 323 – 326 accordingly and expanded table 5 by information about distribution of which OS was used and table 6 by information about frequency and duration of usage of the ELN given the used OS. Was data actually entered into spreadsheets such as the ones presented in tables S3 and S4? It might be really helpful to include a couple of screenshots from the actual ELN being used in a browser to give the reader a better understanding as to what the users were dealing with. Currently, I’m not sure how the spreadsheets fit within the web-based GUI used for experiment reporting. The authors thank the reviewer for this constructive comment. We clearly see that it is difficult to assess the usability of a software without having access to a demo-version or similar. However, this publication does not want to advertise the chosen solution, but rather discuss critical points during the selection, implementation, roll-out and runtime of an ELN software solution. Nevertheless, we checked possibilities to show screenshots of at least details or specific features of the chosen software, but had to realize that the product would be identifiable in all cases. 
Thus, we would like to refer to the vendor’s website (http://accelrys.com/products/unified-lab-management/biovia-electronic-lab-notebooks/notebook/index.html) for product specifics and the collection of webinars http://accelrys.com/events/webinars/eln/ for having a look at the user interface. Further, screenshots of user interfaces always retain the risk of system updates, as it happened during the runtime of this project. Due to this, users were faced with different user interfaces, which were similar in provided functionality but dramatically different regarding their look. We hope that the reviewer can understand our hesitation to provide those screenshots within the manuscript. As I mentioned in my comments above, I think you could do a better job quantifying the results from the user surveys. Where quantified results are available, you should present them. When you present the high level results in tables 4 – 6, it would be good to have numbers to bring out how important the findings are. For example, “Windows users were unhappy about the performance of the system” – if there were a table that showed how many users used which platform, we could better understand how big of an impact this really is. Ideally, you should present the data in tables 4 – 6 in histogram format rather than in the currently presented qualitative manner. We thank the reviewer for these helpful suggestions. We agree that presentation of more quantitative results makes it easier for the reader to understand the survey results. Therefore we have integrated some data from the Supplementary material S1 into the tables. In Table 4 we have included the absolute number of users, where possible and appropriate. This table should give an overview without diving too much into detail. Thus, we feel that we should not give to many specific numbers here. We extended Table 5 now showing the distribution of users over their preferred OS (both absolute and relative figures) and how many assess themselves as wet-lab researcher or in-silico researcher, respectively. We feel that these three lines readers easily get an impression of distributions and a histogram is not necessary in this case. In Table 6 we have inserted figures for the following self-assessed items: frequency of the ELN, length of usage-time of the ELN, frequently used platform to access the ELN. Further, we state the number of users for each combination both as Percentage and Percentage/OS. Due to the high number of groups and the self-assessment character of the questions different assignments of “length of usage” to the same category of “frequency” happened. This results in a very large number of different groups and a visualization does not seem to be meaningful. Therefore, we prefer the table format for presentation. "
Here is a paper. Please give your review comments after reading it.
325
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Text classification is a fundamental task in many applications such as topic labeling, sentiment analysis, and spam detection. The text syntactic relationship and word sequence are important and useful for text classification. How to model and incorporate them to improve performance is one key challenge. Inspired by human behavior in understanding text. In this paper, we combine the syntactic relationship, sequence structure, and semantics for text representation, and propose an attention-enhanced capsule network-based text classification model. Specifically, we use graph convolutional neural networks to encode syntactic dependency trees, build multi-head attention to encode dependencies relationship in text sequence, merge with semantic information by capsule network at last. Extensive experiments on five datasets demonstrate that our approach can effectively improve the performance of text classification compared with state-of-the-art methods. The result also shows capsule network, graph convolutional neural network, and multi-headed attention has integration effects on text classification tasks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Text classification is the basic task of text analysis with broad applications in topic labeling, sentiment analysis, and spam detection. Text representation is one important way for classification. From the analysis of text structure, the text is a sequence of words by certain rules. Sequences of different word orders show different meanings. The sequence structure of a text contains important information about the semantics of the text. Moreover, words in a text are contextual, the distance dependence between words in the sequence structure will also affect the meaning of the text. From the analysis of text composition, the text is composed of words or phrases with different syntactic functions according to certain syntactic relations. For humans, syntactic relations are the basis for editing and reading texts. According to the syntactic relation, we can understand the subject, predicate, and object of the text to promptly understand the semantics of the text. Whether it is text representation or text classification by text representation, it is the key and important question for our research to extract information from the sequence structure and syntactic relations of text.</ns0:p><ns0:p>Traditional methods represent text with handcrafted features, such as bag-of-words <ns0:ref type='bibr' target='#b0'>(Joachims 1998;</ns0:ref><ns0:ref type='bibr' target='#b2'>Mccallum and Nigam 1998)</ns0:ref>, N-grams <ns0:ref type='bibr' target='#b3'>(Lin and Hovy 2003)</ns0:ref>, and TF-IDF <ns0:ref type='bibr' target='#b4'>(Zhang, Yoshida, and Tang 2008)</ns0:ref>, and then adopted machine learning algorithms such as Naive Bayes <ns0:ref type='bibr' target='#b2'>(Mccallum and Nigam 1998)</ns0:ref>, logistic regression <ns0:ref type='bibr' target='#b5'>(Genkin and Madigan 2007)</ns0:ref>, support vector machine <ns0:ref type='bibr' target='#b0'>(Joachims 1998)</ns0:ref>, etc. for classification. These methods ignore the word order and semantic information, which has an important role in understanding the semantics of the text <ns0:ref type='bibr' target='#b6'>(Pang, Lee, and Vaithyanathan 2002)</ns0:ref>. 
On the contrary, other methods use word2vec and glove <ns0:ref type='bibr' target='#b7'>(Mikolov and Sutskever 2013;</ns0:ref><ns0:ref type='bibr'>Penning-ton, Socher, and Manning 2014)</ns0:ref> to represent text that can represent the semantic information of words, and then adopted convolutional neural network (CNN) <ns0:ref type='bibr' target='#b9'>(Kim 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Zhang, Zhao, and Lecun 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Conneau et al. 2017)</ns0:ref>, long short-term memory networks (LSTM) <ns0:ref type='bibr' target='#b12'>(Mousa and Schuller 2017)</ns0:ref>, Capsule networks (CapsNet) <ns0:ref type='bibr' target='#b13'>(Zhao et al. 2018</ns0:ref>) and other deep neural networks for text classification. These methods can effectively encode the word order and semantic information. <ns0:ref type='bibr' target='#b9'>Kim et al. (Kim 2014</ns0:ref>) proposed a CNN model that adopted convolution filters to extract local text semantic features for text classification, leading to the model loss of a part of location information <ns0:ref type='bibr' target='#b13'>(Zhao et al. 2018</ns0:ref>). Capsule networks with vector neural units and dynamic routing algorithms can effectively overcome the disadvantages of CNN <ns0:ref type='bibr' target='#b14'>(Sabour, Frosst, and Hinton 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Xi, Bing, and Jin 2017)</ns0:ref>. Zhao et al. <ns0:ref type='bibr' target='#b13'>(Zhao et al. 2018</ns0:ref>) optimized the capsule networks for text classification (Capsule-A), and further experiments show the effectiveness of capsule networks for text classification. The characteristics of capsule networks are our starting point. However, this model has limitations in recognizing text with semantic transitions since it cannot encode long-distance dependencies and the global topology, which affects the effectiveness of complex text classification. The attentionbased approach is effective in overcoming the problem of distance dependencies of sequence structure. Both Transformer <ns0:ref type='bibr' target='#b16'>(Vaswani, Shazeer, and Parmar 2017)</ns0:ref> and Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b17'>(Alaparthi, and Mishra 2020)</ns0:ref> use multi-head attention as a basic unit to extract text features. It's part of our motivation that it can extract the distance dependencies information of the text from different subspaces.</ns0:p><ns0:p>Syntactic information is a different attribute for text. It describes the syntactic dependency relationship between words in a sentence. It can be represented as the syntactic dependency tree and show the global topology. Figure <ns0:ref type='figure'>1</ns0:ref> is a syntactic dependency tree, where 'monkey' is the subject of the predicate 'eats', and 'apple' is its object. GCN is a graph convolutional network that operates on graphs and induces embeddings of nodes based on the properties of their neighborhoods. It can capture the information of the immediate neighbors of nodes at most K hops away <ns0:ref type='bibr' target='#b18'>(Marcheggiani and Titov 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Duvenaud and Maclaurin 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b18'>Marcheggiani et al. (Marcheggiani and Titov 2017)</ns0:ref> used GCN to encode syntactic dependency trees to generate word representations, and combined with long short-term memory networks for semantic role labeling tasks. 
These works show that GCN can effectively extract syntactic information in syntactic dependency trees. This is another motivation for us.</ns0:p><ns0:p>The text syntactic relationship and word sequence are important and useful for text classification. How to model and incorporate them to improve performance is one key challenge. In this paper, we combine the syntactic relationship, sequence structure, and semantics for text representation, and propose a novel model that utilizes GCN for syntactic relationship, multihead attention for words, and corporate them in capsule network for text classification.</ns0:p><ns0:p>The contributions of this paper can be summarized as follows:</ns0:p><ns0:p>&#8226; We incorporate syntactic relationship, sequence structure, and semantics for text representation.</ns0:p><ns0:p>&#8226; We introduce GCN to extract syntactic information for dependencies relationship representation.</ns0:p><ns0:p>&#8226; We build multi-head attention to encode the different influences of words to enhance the effect of capsule networks on text classification.</ns0:p><ns0:p>&#8226; We show that CapsNet, GCN, and multi-head attention have an integration effect for text classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Related work</ns0:head></ns0:div> <ns0:div><ns0:head>Machine learning-based methods</ns0:head><ns0:p>Early methods adopted the typical features such as bag-of-words <ns0:ref type='bibr' target='#b0'>(Joachims 1998;</ns0:ref><ns0:ref type='bibr' target='#b2'>Mccallum and Nigam 1998)</ns0:ref>, N-grams <ns0:ref type='bibr' target='#b3'>(Lin and Hovy 2003)</ns0:ref>, and TF-IDF <ns0:ref type='bibr' target='#b4'>(Zhang, Yoshida, and Tang 2008)</ns0:ref> features as input and utilized machine learning algorithms such as support vector achine (SVM) <ns0:ref type='bibr' target='#b0'>(Joachims 1998)</ns0:ref>, logistic regression <ns0:ref type='bibr' target='#b5'>(Genkin and Madigan 2007)</ns0:ref>, naive Bayes (NB) <ns0:ref type='bibr' target='#b2'>(Mccallum and Nigam 1998)</ns0:ref> for classification. However, these methods ignore the text word order and semantic information, and usually heavily rely on laborious feature engineering.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep learning-based methods</ns0:head><ns0:p>With the introduction of distributed word vector representation (Word Embedding) <ns0:ref type='bibr' target='#b7'>(Mikolov and Sutskever 2013;</ns0:ref><ns0:ref type='bibr' target='#b8'>Pennington, Socher, and Manning 2014)</ns0:ref>, neural networks-based methods have substantially improved the performance of text classification tasks by encoding text semantics.</ns0:p><ns0:p>CNN was first applied to image processing. <ns0:ref type='bibr' target='#b9'>Kim et al. (Kim 2014)</ns0:ref> proposed the CNNbased text classification model (TextCNN). The model uses convolution filters to extract local semantic features and improved upon the state of the art on 4 out of 7 tasks. Zhang X et al. <ns0:ref type='bibr' target='#b10'>(Zhang, Zhao, and Lecun 2015)</ns0:ref> proposed the character-level CNN model, which extracts semantic information from character-level original signals for text classification tasks. <ns0:ref type='bibr' target='#b11'>Conneau et al. (Conneau et al. 2017)</ns0:ref> proposed very deep convolutional networks to learn the hierarchical representation for text classification. Being a spatially sensitive model, CNN pays a price for the inefficiency of replicating feature detectors on a grid.</ns0:p><ns0:p>Recently, Hinton et al. 
<ns0:ref type='bibr' target='#b14'>(Sabour, Frosst, and Hinton 2017)</ns0:ref> proposed the CapsNet model, which uses vector neural units and dynamic routing update mechanisms, and verified its superiority in image classification. <ns0:ref type='bibr' target='#b13'>Zhao et al. (Zhao et al. 2018)</ns0:ref> proposed the text classification model based on CapsNet (Capsule-A), which adopted CapsNet to encode text semantics, and proved that its classification effect is superior to CNN and LSTM. In CapsNet, the feature is represented by a capsule vector instead of a scalar (activation value output by neuron). Different dimensions in a vector can represent different properties of a feature. For a text feature, it often means different meanings in different semantic relations. We use capsules to represent text features to learn the semantic information of different dimensions of text features. On the other hand, the similarity between features at different levels is different. For building a high-level feature, lower levels with high similarity have a higher weight. CapsNet can learn this similarity relationship through a dynamic routing algorithm. Although CapsNet can effectively improve coding efficiency, it still has limitations in recognizing text with semantic transitions.</ns0:p><ns0:p>Attention mechanisms are widely used in tasks such as machine translation <ns0:ref type='bibr' target='#b16'>(Vaswani, Shazeer, and Parmar 2017)</ns0:ref> and speech recognition <ns0:ref type='bibr' target='#b22'>(Chorowski et al. 2014)</ns0:ref>. <ns0:ref type='bibr' target='#b23'>Lin et al. (Lin et al. 2017)</ns0:ref> proposed the self-attention mechanism that can encode long-range dependencies. Vaswani et al. <ns0:ref type='bibr' target='#b16'>(Vaswani, Shazeer, and Parmar 2017)</ns0:ref> proposed a machine translation model (Transformer) based on multi-head attention. Alaparthi et al. <ns0:ref type='bibr' target='#b17'>(Alaparthi, and Mishra 2020)</ns0:ref> proposed a pretrained model of language representation (BERT) that also takes multi-head attention as its basic component. The basic unit of multi-head attention is scaled dot-product attention. Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. Attention is to extract the long-distance dependencies in the text by calculating the similarities between words in the text. The words in the text can express different meanings in different semantic scenarios. The representation of words is different in different semantic spaces. Similar to ensemble learning, multi-head attention can put text in different semantic spaces to calculate attention and get integrated attention. The location information cannot be obtained by relying solely on the attention mechanism, and location information also has an important influence on understanding text semantics. Kim et al. <ns0:ref type='bibr' target='#b24'>(Kim, Lee, and Jung 2018)</ns0:ref> proposed a text sentiment classification model combining attention and CNN, but it is still limited by the disadvantages of CNN. Although BERT has been particularly effective on many tasks. But it requires a lot of data and computing resources for pre-training. Therefore, our research is still valuable. Syntactic information-based Methods CNN, RNN, and most deep learning-based methods always utilized word local topology to represent text. Word order and semantic, text syntactic information all have important influences on text classification. 
Some researchers have done some work on text syntactic information for different tasks. A text is made up of words that represent different syntactic elements, such as subject, predicate, object, and so on. Different syntactic elements are interdependent. A syntax dependency tree is a kind of tree structure, which describes the dependency relationship between words. Figure <ns0:ref type='figure'>1</ns0:ref> shows an example of a syntax dependency tree. There are many tools (like StanfordNLP) to generate syntactic dependency trees by analyzing syntactic dependency relations.</ns0:p><ns0:p>Eriguchi et al. <ns0:ref type='bibr'>(Eriguchi, Tsuruoka, and Cho 2017)</ns0:ref> adopted RNN to integrate syntactic information for machine translation. <ns0:ref type='bibr' target='#b26'>Le et al. (Le and Zuidema 2014)</ns0:ref> introduced RNN to model syntax dependency tree. Miwa et al. <ns0:ref type='bibr'>(Eriguchi, Tsuruoka, and Cho 2017)</ns0:ref> used sequential LSTM and tree-LSTM to extract syntactic relations.</ns0:p><ns0:p>The syntactic dependency tree is also a kind of graph data. <ns0:ref type='bibr' target='#b21'>Bastings et al. (Bastings et al. 2017</ns0:ref>) used GCN to encode the syntactic dependency tree and combined it with the CNN for machine translation. <ns0:ref type='bibr' target='#b18'>Marcheggiani et al. (Marcheggiani and Titov 2017)</ns0:ref> used GCN to encode syntactic dependency trees to generate word representations and combined it with LSTM for the role labeling task. These works show that GCN can effectively extract syntactic information in PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_11'>2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:ref> Manuscript to be reviewed Computer Science syntactic dependency trees. The syntactic relationship is presented as a tree structure. A tree is also a form of a graph. Secondly, the sequence structure-based model represents the syntactic tree as a sequence according to some rules of nodes. But the sequence is directional, and nodes in a tree are not sequential. GCN can directly consider the relationship between nodes in a syntax tree.</ns0:p><ns0:p>In summary, we aim to propose a novel model named Syntax-AT-CapsNet that uses multihead attention to extract long-distance dependencies information and that uses GCN to encode syntactic dependency trees to extract syntactic information, which enhances the effect of capsule networks on text classification tasks.</ns0:p></ns0:div> <ns0:div><ns0:head>Syntax-AT-CapsNet model</ns0:head><ns0:p>Our Syntax-AT-CapsNet model consists of the following three modules as depicted in (Fig. <ns0:ref type='figure'>2</ns0:ref>).</ns0:p><ns0:p>&#8226; Attention module. It is composed of an attention layer that adopts multi-head attention. It </ns0:p></ns0:div> <ns0:div><ns0:head>Input</ns0:head><ns0:p>The input of the Syntax-AT-CapsNet model is defined as the sentence matrix : &#119935;</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#119935; = [&#119961; &#120783; , &#119961; &#120784; ,&#8230;, &#119961; &#119923; ] &#8712; &#8477; &#119923; &#215; &#119941; ,</ns0:formula><ns0:p>where is the word vector of the -th word in the sentence, is the length of the sentence, &#119961; &#119946; &#8712; &#8477; &#119889; &#119894; &#119871; and is the embedding vector size of words. &#119889;</ns0:p></ns0:div> <ns0:div><ns0:head>Attention module</ns0:head><ns0:p>The Attention module is shown in (Fig. <ns0:ref type='figure'>3</ns0:ref>). 
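Before the five computation steps are detailed below, a minimal numpy sketch of the whole attention module may help fix ideas; the head count h = 4 and the random weight matrices are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_module(X, h=4, seed=0):
    """Sketch of the module: Q/K/V split, h-head attention, concat, residual connection."""
    rng = np.random.default_rng(seed)
    L, d = X.shape
    dh = d // h
    Q, K, V = np.split(X @ rng.normal(size=(d, 3 * d)), 3, axis=-1)        # step 1: split
    heads = []
    for _ in range(h):                                                      # steps 2-3: per-subspace attention
        Qi, Ki, Vi = (M @ rng.normal(size=(d, dh)) for M in (Q, K, V))
        heads.append(softmax(Qi @ Ki.T / np.sqrt(d)) @ Vi)
    multi_head = np.concatenate(heads, axis=-1) @ rng.normal(size=(d, d))   # step 4: concat + transform
    return X + multi_head                                                   # step 5: residual connection

X = np.random.default_rng(1).normal(size=(5, 300))   # sentence matrix of Eq. (1), L=5, d=300
print(attention_module(X).shape)                      # (5, 300)
```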
The calculation of attention in this module can be divided into five steps.</ns0:p><ns0:p>First step, linearly transform the input sentence matrix $\mathbf{X}$ and divide it into three matrices $\mathbf{Q},\mathbf{K},\mathbf{V} \in \mathbb{R}^{L \times d}$:</ns0:p><ns0:formula xml:id='formula_1'>[\mathbf{Q},\mathbf{K},\mathbf{V}] = \mathrm{split}(\mathbf{X}\mathbf{W}), <ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where $\mathbf{W} \in \mathbb{R}^{d \times 3d}$ is the transform matrix and $\mathrm{split}$ denotes the division operation.</ns0:p><ns0:p>Second step, linearly project the matrices $\mathbf{Q},\mathbf{K},\mathbf{V}$ onto $h$ different linear subspaces:</ns0:p><ns0:formula xml:id='formula_2'>[\mathbf{Q}_1,\dots,\mathbf{Q}_h] = [\mathbf{Q}\mathbf{W}^{Q}_{1},\dots,\mathbf{Q}\mathbf{W}^{Q}_{h}], \quad [\mathbf{K}_1,\dots,\mathbf{K}_h] = [\mathbf{K}\mathbf{W}^{K}_{1},\dots,\mathbf{K}\mathbf{W}^{K}_{h}], \quad [\mathbf{V}_1,\dots,\mathbf{V}_h] = [\mathbf{V}\mathbf{W}^{V}_{1},\dots,\mathbf{V}\mathbf{W}^{V}_{h}], <ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where $\mathbf{Q}_i,\mathbf{K}_i,\mathbf{V}_i \in \mathbb{R}^{L \times d_h}$ are the mappings of $\mathbf{Q},\mathbf{K},\mathbf{V}$ on the $i$-th subspace, $\mathbf{W}^{Q}_{i},\mathbf{W}^{K}_{i},\mathbf{W}^{V}_{i} \in \mathbb{R}^{d \times d_h}$ are the corresponding transform matrices, and $i = 1,\dots,h$. The purpose of this step is to compute multiple attention values in parallel; at the same time, the dimension of the input matrix is reduced, which lowers the computational cost of the repeated calculations.</ns0:p><ns0:p>Third step, calculate the attention on each subspace in parallel:</ns0:p><ns0:formula xml:id='formula_5'>\mathbf{head}_i = \mathrm{softmax}\!\left(\frac{\mathbf{Q}_i \mathbf{K}_i^{T}}{\sqrt{d}}\right)\mathbf{V}_i, <ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where $\mathbf{head}_i$ is the attention value on the $i$-th subspace and $\mathrm{softmax}$ denotes the softmax function <ns0:ref type='bibr' target='#b16'>(Vaswani, Shazeer, and Parmar 2017)</ns0:ref>. In fact, $\mathbf{Q}_i$ and $\mathbf{K}_i$ represent the sentence matrix on the $i$-th subspace; their dot product is divided by $\sqrt{d}$ in case it gets too big, and the weights applied to $\mathbf{V}_i$ are obtained by passing this scaled dot product through the softmax function.</ns0:p><ns0:p>Fourth step, concatenate the attention values on the subspaces and obtain the attention value of the entire sentence through a linear transformation:</ns0:p><ns0:formula xml:id='formula_6'>\mathbf{Multi\_head} = \mathrm{concat}(\mathbf{head}_1,\dots,\mathbf{head}_h)\,\mathbf{W}_M, <ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where $\mathbf{W}_M \in \mathbb{R}^{d \times d}$ is the transform matrix, $\mathbf{Multi\_head}$ is the attention value of the entire sentence, and $\mathrm{concat}$ denotes the concatenation operation.</ns0:p><ns0:p>Final step, connect the attention value $\mathbf{Multi\_head}$ to the original sentence matrix $\mathbf{X}$ to obtain the sentence matrix output by the module:</ns0:p><ns0:formula xml:id='formula_7'>\mathbf{X}_1 = \mathrm{residual\_Connect}(\mathbf{X},\mathbf{Multi\_head}), <ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where $\mathbf{X}_1 \in \mathbb{R}^{L \times d}$ is the output of the attention module and $\mathrm{residual\_Connect}$ denotes the residual connection operation.</ns0:p></ns0:div> <ns0:div><ns0:head>Syntax module</ns0:head><ns0:p>The Syntax module is shown in (Fig. <ns0:ref type='figure'>4</ns0:ref>). It uses GCN to encode the syntactic dependency tree, so that the syntactic relationships between the words of a text are encoded into their word vectors. The module first uses a natural language processing tool (we adopted StanfordNLP) to generate the syntactic dependency tree of the input sentence and to construct its adjacency matrix. The adjacency matrix construction algorithm in this paper is shown in Algorithm 1. Since the syntactic relationship between word nodes in the syntactic dependency tree has a direction, the tree is treated as a directed graph when constructing the adjacency matrix. In addition, in order not to be disturbed by its own word vector, a node is not given a self-loop. As shown in (Fig. <ns0:ref type='figure'>5</ns0:ref>), the adjacency matrix corresponding to the example sentence 'The Monkey eats an apple' of (Fig. <ns0:ref type='figure'>1</ns0:ref>) is generated by our method. The input sentence matrix and adjacency matrix are further passed through the GCN to obtain a text representation containing syntactic information.
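As an illustration of the construction just described and of the graph convolution formalized in Eqs. (7)–(8) below, here is a minimal numpy sketch; the dependency edges for the Fig. 1 sentence are hand-written (in practice they come from the StanfordNLP parse), the weights are random, and the adjacency matrix is applied on the left purely so that the matrix shapes are consistent.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Directed adjacency matrix for the Fig. 1 sentence, no self-loops (cf. Algorithm 1).
tokens = ["The", "monkey", "eats", "an", "apple"]
edges = [(2, 1), (1, 0), (2, 4), (4, 3)]          # head -> dependent pairs, hand-written here
L = len(tokens)
adj = np.zeros((L, L), dtype=np.float32)
for i, j in edges:
    adj[i, j] = 1.0

# Two-layer graph convolution over the dependency graph (cf. Eqs. (7)-(8)).
rng = np.random.default_rng(0)
d = 300
X = rng.normal(size=(L, d))                        # sentence matrix of Eq. (1)
Wt1, Wt2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
G = relu((adj @ X) @ Wt1)                          # first layer: directly connected neighbours
X2 = softmax((adj @ G) @ Wt2, axis=-1)             # second layer: indirectly connected nodes
print(X2.shape)                                    # (5, 300)
```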
</ns0:p><ns0:formula xml:id='formula_8'>&#119918; = &#119955;&#119942;&#119949;&#119958;( ( &#119935; &#8226; &#119938;&#119941;&#119947; ) &#119934; &#119957;&#120783; ),<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>&#119935; &#120784; = &#119956;&#119952;&#119943;&#119957;&#119950;&#119938;&#119961;( ( &#119918; &#8226; &#119938;&#119941;&#119947; ) &#119934; &#119957;&#120784; ),</ns0:formula><ns0:p>where is the output of the first layer graph convolution operation, is the</ns0:p><ns0:formula xml:id='formula_10'>&#119918; &#8712; &#8477; &#119871; &#215; &#119889; &#119935; &#120784; &#8712; &#8477; &#119871; &#215; &#119889;</ns0:formula><ns0:p>output of the second layer graph convolution operation, and is adjacent matrix, &#119938;&#119941;&#119947; &#8712; &#8477; &#119871; &#215; &#119871; &#119882; &#119905;1</ns0:p><ns0:p>are parameter matrices. In operation (7), the adjacency matrix and sentence &#12289;&#119882; &#119905;2 &#8712; &#8477; &#119889; &#215; &#119889; matrix go through the first level graph-convolution operation. The relationship between nodes directly connected is obtained. Then, through operation (8), the relationship between nodes indirectly connected nodes is calculated.</ns0:p></ns0:div> <ns0:div><ns0:head>Capsule network module</ns0:head><ns0:p>The capsule network module is composed of a fusion layer, a convolution layer, a primary capsule layer, a convolution capsule layer, and a fully connected capsule layer. It uses the text representation output by the attention module and the syntax module as input to further extract features. Each layer in the module can extract different levels of features. By further combining low-level features to obtain higher-level features, and finally form a feature representation of the entire text for classification.</ns0:p><ns0:p>The first layer is the fusion layer, put the text representations and output by the &#119935; &#120783; &#119935; &#120784; syntax module and the attention module into a single layer network: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The second layer is the convolutional layer, which extracts N-gram phrase features at different positions in the text. This layer uses convolution filters to perform convolution on &#119896; 1 the sentence matrix to obtain the N-gram feature matrix : &#119935; &#120785; &#119924; (10</ns0:p><ns0:formula xml:id='formula_11'>) &#119924; = [&#119924; &#120783; , &#119924; &#120784; ,&#8230;, &#119924; &#119948; &#120783; ] &#8712; &#8477; (&#119923; -&#119925; + &#120783;) &#215; &#119948; &#120783; ,</ns0:formula><ns0:p>where is the -th column vector in , each element The original features are extracted by convolution in this step.</ns0:p><ns0:formula xml:id='formula_12'>&#119924; &#119948; &#120783; = [&#119898; 1 , &#119898; 2 ,&#8230;, &#119898; &#119871; -&#119873; + 1 ] &#8712; &#8477; &#119871; -&#119873; + 1 &#119896; 1 &#119924; in this</ns0:formula><ns0:p>The third layer is the primary capsule layer, which combines the N-gram phrase features extracted at the same location as capsules. 
This layer uses transformation matrices to &#119896; 2 transform the feature matrix into the primary capsule matrix : &#119872; &#120562;</ns0:p><ns0:formula xml:id='formula_13'>(12) &#120620; = [&#119927; &#120783; , &#119927; &#120784; ,&#8230;, &#119927; &#119948; &#120784; ] &#8712; &#8477; (&#119923; -&#119925; + &#120783;) &#215; &#119948; &#120784; &#215; &#119949;</ns0:formula><ns0:p>, where is the -th column capsule in , each capsule</ns0:p><ns0:formula xml:id='formula_14'>&#119927; &#119948; &#120784; = [&#119953; &#120783; , &#119953; &#120784; ,&#8230;, &#119953; &#119923; -&#119925; + &#120783; ] &#8712; &#8477; (&#119871; -&#119873; + 1) &#215; &#119897; &#119896; 2 &#120562;</ns0:formula><ns0:p>is obtained by operation ( <ns0:ref type='formula'>13</ns0:ref>):</ns0:p><ns0:p>(13)</ns0:p><ns0:formula xml:id='formula_15'>&#119953; &#119946; = &#119944;(&#119934; &#119940;&#120784; &#8226; &#119924; &#119946; + &#119939; &#120784; ),</ns0:formula><ns0:p>where denotes the nonlinear compression function, is the th transformation</ns0:p><ns0:formula xml:id='formula_16'>&#119892; &#119882; &#119888;2 &#8712; &#8477; &#119896; 2 &#215; 1 &#215; &#119897; &#119896; 1</ns0:formula><ns0:p>matrix, is the -th row vector in , and is bias item. The primary capsules are constructed &#119924; &#119946; &#119894; &#119924; &#119887; 2 by linearly transforming the original features of the same location.</ns0:p><ns0:p>The fourth layer is the convolution capsule layer, which uses a shared transformation matrix to extract local capsules, similar to the convolution layer. This layer uses transformation &#119896; 3 matrices to perform capsule convolution operation on to obtain the capsule matrix : &#119875; &#119880;</ns0:p><ns0:formula xml:id='formula_17'>(14) &#119932; = [&#119932; &#120783; , &#119932; &#120784; ,&#8230;, &#119932; &#119948; &#120785; ] &#8712; &#8477; (&#119923; -&#119925; -&#119925; &#120783; + &#120784;) &#215; &#119948; &#120785; &#215; &#119949;</ns0:formula><ns0:p>, where is the -th column capsule in , each Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_18'>&#119932; &#119948; &#120785; = [&#119958; &#120783; , &#119958; &#120784; ,&#8230;, &#119958; &#119923; -&#119925; -&#119925; &#120783; + &#120784; ] &#8712; &#8477; (&#119871; -&#119899; 2 + 1) &#215; &#119897; &#119896; 3 &#119880; capsule is</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(17)</ns0:p><ns0:formula xml:id='formula_19'>&#119854; &#119955; = &#119944;(&#120506; &#119946; &#119836; &#119946; &#8226; &#119953; &#119946; ),</ns0:formula><ns0:p>where is a nonlinear activation function, is the coupling coefficient, which is updated with a &#119892; c &#119894; dynamic routing algorithm <ns0:ref type='bibr' target='#b13'>(Zhao et al. 2018</ns0:ref>). The similarity between the primary capsules and the generated convolutional capsules is different within a window. A primary capsule with a high similarity should be given a higher weight. Equation ( <ns0:ref type='formula'>17</ns0:ref>) is based on this principle.</ns0:p><ns0:p>The last layer is the fully connected capsule layer, which is used to form the capsule Y representing the category:</ns0:p><ns0:formula xml:id='formula_20'>(18) &#119936; = [&#119962; &#120783; , &#119962; &#120784; ,&#8230;, &#119962; &#119947; ] &#8712; &#8477; &#119947; &#215; &#119949; ,</ns0:formula><ns0:p>where denotes the capsule of the -th category. 
The capsules in are linearly &#119962; &#119947; &#8712; &#8477; &#119897; &#119895; &#119880; transformed ( <ns0:ref type='formula'>16</ns0:ref>) to obtain the prediction vector , and the operation ( <ns0:ref type='formula'>17</ns0:ref>) is performed to &#119958; &#119947;|&#119955; obtain . The fully connected capsule layer is shown in (Fig. <ns0:ref type='figure'>6</ns0:ref>), it can also represent a &#119962; &#119947; convolution window operation at the fourth layer.</ns0:p><ns0:p>Finally, the modulus of the capsule vector representing the category in the fully connected capsule layer is taken as the probability of belonging to the category.</ns0:p></ns0:div> <ns0:div><ns0:head>Syntax-AT-CapsNet learning algorithm</ns0:head><ns0:p>The learning algorithm of Syntax-AT-CapsNet is shown in Algorithm 2. When the model is learning, the coupling coefficients are updated by the dynamic routing algorithm, and the global parameters of the model are updated by the back propagation algorithm. Then the trained Syntax-AT-CapsNet model parameters can be obtained. During prediction, the classification results can be obtained through the sequential calculation of each module in the model. </ns0:p></ns0:div> <ns0:div><ns0:head>Experimental details</ns0:head><ns0:p>We did extensive experiments to verify the effect of Syntax-AT-CapsNet model on the single label and multi-label text classification tasks and designed more ablation experiments to demonstrate the role of each module.</ns0:p></ns0:div> <ns0:div><ns0:head>Data sets</ns0:head><ns0:p>We choose the following five datasets in our experiments: movie reviews (MR) <ns0:ref type='bibr' target='#b28'>(Miwa and Bansal 2016)</ns0:ref>, subjectivity dataset (Subj) <ns0:ref type='bibr' target='#b29'>(Pang and Lee 2004)</ns0:ref>, customer review (CR) <ns0:ref type='bibr' target='#b30'>(Hu and Liu 2004)</ns0:ref>, Reuters-21578 <ns0:ref type='bibr' target='#b31'>(Lewis 1992)</ns0:ref>.</ns0:p><ns0:p>The data in MR, Subj, and CR have two categories and are used for the single-label classification tasks, where MR and Subj are composed of movie sentiment review data, CR is composed of product reviews from Amazon and Cnet. The Reuters-21578 test set consists of Reuters news documents. We selected 10788 news documents under the 8 category labels related to economic and financial topics in Reuters-21578 and further divided them into two subdatasets (Reuters-Full and Reuters-Multi). In Reuters-Full, all texts are kept as the test set, and in Reuters-Multi, only multi-label texts are kept as the test set. The experimental data description is shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation index</ns0:head><ns0:p>Exact Match Ratio (ER), Micro Averaged Precision (Precision), Micro Averaged Recall (Recall), and Micro Averaged F1 (F1) were used as evaluation indexes in the experiment. Accuracy is used instead of ER in the single-label classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Parameter setting</ns0:head><ns0:p>The experimental parameters of our work are as follows. In the model, input a 300-dimensional word2vec word vector During the model test, for the single label classification task, the category label corresponding to the capsule vector with the largest module length is taken. For the multi-label classification tasks, the category labels corresponding to capsule vectors with a modulus length greater than 0.5 are taken. 
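To make the decision rule concrete, the sketch below applies the capsule-length reading just described: the squash function shown is the standard non-linear compression of Sabour et al. (2017), which we assume is the g(·) intended in Eqs. (13)–(17), and the class capsules are random placeholders rather than outputs of the trained model.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Standard capsule squashing (Sabour et al. 2017); assumed to be the g(.) of Eqs. (13)-(17)."""
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def predict(class_capsules, multi_label=False, threshold=0.5):
    """Capsule length = class probability: argmax for single-label, >0.5 for multi-label."""
    lengths = np.linalg.norm(class_capsules, axis=-1)      # modulus of each class capsule
    if multi_label:
        return np.where(lengths > threshold)[0]            # all classes whose capsule is long enough
    return int(np.argmax(lengths))                          # the single longest capsule

Y = squash(0.3 * np.random.default_rng(0).normal(size=(9, 16)))  # j = 9 classes, 16-dim capsules
print(predict(Y), predict(Y, multi_label=True))
```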
</ns0:p></ns0:div> <ns0:div><ns0:head>Benchmark model</ns0:head></ns0:div> <ns0:div><ns0:head>Experimental results and discussion</ns0:head></ns0:div> <ns0:div><ns0:head>Performance on single-label and multi-label classification tasks</ns0:head><ns0:p>The experimental results of single-label classification are shown in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref>, and the results of multi-label classification are shown in Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref>.</ns0:p><ns0:p>The following can be observed from the experimental results:</ns0:p><ns0:p>&#8226; Compared with the benchmark models (Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref>), our model achieved the best accuracy, Recall, and F1 on the three binary classification data sets MR, Subj, and CR.</ns0:p><ns0:p>Compared with AT-CapsNet (a multi-headed attention capsule network that does not introduce syntax), a clear improvement is observed on all three data sets, which demonstrates the value of introducing syntactic information in this article.</ns0:p><ns0:p>&#8226; Compared with the benchmark models (Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref>), on the two multi-label data sets Reuters-Full and Reuters-Multi, our model achieved competitive results on the four evaluation indicators and the best results in ER, Recall, and F1, which indicates its effectiveness for multi-label text classification. That is, our model performs better than the benchmark methods on both multi-label and single-label classification tasks. As described in the introduction, the limitations of the baseline models are the key issues addressed by our research; these results show that our model overcomes those shortcomings to a certain extent, which is the purpose of our work.</ns0:p></ns0:div> <ns0:div><ns0:head>Syntax module verification experiment</ns0:head><ns0:p>To show the effect of the syntax module, we conducted the following experiments; the results are shown in Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>.</ns0:p><ns0:p>We can see that when the syntactic module is added to the benchmark models, all four evaluation indicators improve noticeably, which shows that our syntactic module can effectively improve the performance of text classification. It also demonstrates the feasibility and value of extracting syntactic information with graph convolutional neural networks, and supports our choice of learning from human reading behavior.</ns0:p></ns0:div> <ns0:div><ns0:head>Module ablation experiment</ns0:head><ns0:p>The results of the module ablation experiment are shown in Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>. From the above experimental results, we can draw the following conclusions:</ns0:p><ns0:p>&#8226; When a single module is controlled (Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref>), the ablation of each module causes the classification performance to decrease to varying degrees, which shows that each module in our model contributes to the text classification effect. In addition, by comparing the size of the decreases, it was found that the capsule network has the greatest influence, followed by attention, while the syntax module has a smaller influence, which indicates the correctness and value of taking the capsule network as the core module in this paper.</ns0:p><ns0:p>&#8226; When two modules are controlled (Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>), the ablation of any two modules causes the classification performance to decrease to varying degrees.
Among them, the ablation syntax and capsule network modules have the greatest influence, and the ablation syntax and attention module have the second influence, and the ablation attention and the capsule network module has the least influence, which shows that the syntactic module can function to the greatest extent when combined with other modules. It also shows that the motivation of using graph neural networks to encode syntactic information and other models is correct.</ns0:p><ns0:p>&#8226; It can be seen from the above that when the attention module, the syntax module, or the capsule network module in the model of this article is removed or partially removed, the effectiveness of the model has declined to vary degrees. Since the syntactic module uses graph convolutional neural networks, the above experiments also prove that graph convolutional neural networks, capsule networks, and multi-head attention have an integrated effect on text classification tasks. This also shows that we are on the right track in building the model.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper proposes an enhanced capsule network text classification model Syntax-AT-CapsNet for text classification tasks. The model first uses graph convolutional neural networks as submodules to encode syntactic dependency trees, extract syntactic information in text, and further integrate with sequence information and dependency relationships, thereby improving the effect of text classification. Through model classification effect verification experiment, syntax module verification experiment, and module ablation experiment, the effect of the model in this paper on text classification and multi-label text classification task is verified, the function of syntax module is demonstrated, and the integrated effect of graph convolutional neural network, capsule network, and multi-head attention is proved. Future work will further optimize the model for other downstream tasks of text classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The example of Syntactic dependency tree, where 'monkey' is the subject of the predicate 'eats', and 'apple' is its object. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The experimental result of control two module ablation. Remove two module from our model to verify the effect of the module. There is no corresponding module for the first model in each control group.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>encodes the dependency relationship between words in the text sequence and important word information to form a text representation. &#8226; Syntax module. It is composed of GCN. It encodes the syntax dependency tree, extracts the syntactic information in the text to for a text representation. &#8226; Capsule network module. It is a capsule network with 5-layer. 
Based on the text representation output by the Attention module and the Syntax module, it further extracts text semantic and structural information to classify the text.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>&#119935; &#120785; = &#119934; &#119943;&#120783; &#8226; &#119935; &#120783; + &#119934; &#119943;&#120784; &#119935; &#120784; , where . Two sentence matrices are combined by linear transformation through this &#119882; &#119891;1 ,&#119882; &#119891;2 &#8712; &#8477; &#119871; step. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>The first layer of the capsule network module uses 32 convolution filters , the window (&#119896; 1 = 32) size is 3 (N=3). The second layer uses 32 transformation matrices and 16-dimensional (&#119896; 2 = 32) capsule vectors . The third layer uses 16 conversion matrices , the window (&#119897; = 16) (&#119896; 3 = 16) size is 3 .The last layer uses 9 capsule vectors to represent 9 classes. (&#119873; 1 = 3) (&#119895; = 9) PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science In model training, mini-batch with a size of 25 are used, the training batch is (&#119887;&#119886;&#119905;&#119888;&#8462;_&#119904;&#119894;&#119911;&#119890;) controlled to 20 , and the learning rate is set to 0.001 . (&#119864;&#119901;&#119900;&#119888;&#8462; = 20) (&#119897;&#119890;&#119886;&#119903;&#119899;&#119894;&#119899;&#119892;_&#119903;&#119886;&#119905;&#119890; = 0.001)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>In this paper, TextCNN, Capsule-A, and AT-CapsNet are used as benchmark models for comparative experiments. TextCNN is a classic model of text classification based on CNN, which is representative. Capsule-A is a text classification model based on capsule networks. AT-CapsNet is a multi-headed attention capsule network text classification model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Full</ns0:head><ns0:label /><ns0:figDesc>and Reuters-Multi, our model has achieved competitive results in four evaluation indicators. And achieves the best results in ER, Recall, and F1, which indicates the effectiveness of the model in the classification of multi-tag texts.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Fourth step, concat the attention values on each subspace and get the attention value of the entire sentence through linear transformation:</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#119928; &#119946;</ns0:cell><ns0:cell>&#119922; &#119946;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>on the subspace. It's divided by</ns0:cell><ns0:cell>&#119889;</ns0:cell><ns0:cell>in case the dot product gets too big. 
The weight of the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>sentence matrix on the subspace</ns0:cell><ns0:cell>&#119829; &#119946;</ns0:cell><ns0:cell>is obtained by calculating the dot product of and and &#119928; &#119946; &#119922; &#119946;</ns0:cell></ns0:row><ns0:row><ns0:cell>using a</ns0:cell><ns0:cell>&#119904;&#119900;&#119891;&#119905;&#119898;&#119886;&#119909;</ns0:cell><ns0:cell>function.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The calculation in the Syntax module can be divided into two steps.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>3 for each edge &lt;i, j&gt; in</ns0:cell><ns0:cell>&#119879;&#119903;&#119890;&#119890;</ns0:cell><ns0:cell>do:</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>4 Put the element</ns0:cell><ns0:cell cols='3'>&#119938;&#119941;&#119947;[&#119946;,&#119947;]</ns0:cell><ns0:cell>in the matrix</ns0:cell><ns0:cell>&#119938;&#119941;&#119947;</ns0:cell><ns0:cell>to 1;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>5 end for;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='6'>6 return the adjacency matrix</ns0:cell><ns0:cell>&#119938;&#119941;&#119947;</ns0:cell><ns0:cell>;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>7 end.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>First step, use StanfordNLP tool to generate syntactic dependency tree and construct</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>adjacency matrix</ns0:cell><ns0:cell>&#119938;&#119941;&#119947;</ns0:cell><ns0:cell>.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Second step, perform a two-layer graph convolution operation on the input sentence matrix</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119883;</ns0:cell><ns0:cell cols='3'>and adjacency matrix</ns0:cell><ns0:cell cols='2'>&#119938;&#119941;&#119947;</ns0:cell><ns0:cell>:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(7)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science Algorithm 1: Adjacency matrix construction algorithm Input: sentence matrix &#119935; Output: adjacency matrix &#119938;&#119941;&#119947; 1 define zero matrix &#119938;&#119941;&#119947; 2 Use the StanfordNLP tool to generate the syntax dependency tree ; &#119879;&#119903;&#119890;&#119890;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>&#119950; &#119946; = &#119943;(&#119934; &#119940;&#120783; &#8226; &#119961; &#119946;:&#119946; + &#119925; -&#120783; + &#119939; &#120783; ),</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#119898; &#119894;</ns0:cell><ns0:cell>vector is obtained by operation (11):</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(11)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>where denotes the nonlinear activation function, &#119891;</ns0:cell><ns0:cell>&#119882; &#119888;1 &#8712; &#8477; &#119873; &#215; &#119889;</ns0:cell><ns0:cell>is the -th convolution filter, &#119896; 1</ns0:cell></ns0:row><ns0:row><ns0:cell>&#119961; &#119946;:&#119946; + &#119925; -&#120783;</ns0:cell><ns0:cell cols='3'>denotes that -word vectors in the sentence are connected in series, is bias item. 
&#119873; &#119887; 1</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The experimental result of single label classification. Our model is compared with the benchmark model on three single-label datasets. The evaluation index is accuracy.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Evaluation index</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell><ns0:cell>Precision(%)</ns0:cell><ns0:cell>Recall(%)</ns0:cell><ns0:cell>F1(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Data set</ns0:cell><ns0:cell /><ns0:cell>MR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>TextCNN</ns0:cell><ns0:cell>73.0</ns0:cell><ns0:cell>92.3</ns0:cell><ns0:cell>87.6</ns0:cell><ns0:cell>89.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Capsule-A</ns0:cell><ns0:cell>74.0</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>89.8</ns0:cell><ns0:cell>90.0</ns0:cell></ns0:row><ns0:row><ns0:cell>AT-CapsNet</ns0:cell><ns0:cell>74.8</ns0:cell><ns0:cell>91.5</ns0:cell><ns0:cell>90.0</ns0:cell><ns0:cell>90.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-AT-Capsule(ours)</ns0:cell><ns0:cell>75.3</ns0:cell><ns0:cell>92.0</ns0:cell><ns0:cell>90.8</ns0:cell><ns0:cell>91.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Data set</ns0:cell><ns0:cell /><ns0:cell>Subj</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>TextCNN</ns0:cell><ns0:cell>81.6</ns0:cell><ns0:cell>96.2</ns0:cell><ns0:cell>88.5</ns0:cell><ns0:cell>92.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Capsule-A</ns0:cell><ns0:cell>79.5</ns0:cell><ns0:cell>92.6</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>91.4</ns0:cell></ns0:row><ns0:row><ns0:cell>AT-CapsNet</ns0:cell><ns0:cell>82.4</ns0:cell><ns0:cell>92.2</ns0:cell><ns0:cell>91.0</ns0:cell><ns0:cell>91.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-AT-Capsule(ours)</ns0:cell><ns0:cell>82.9</ns0:cell><ns0:cell>93.9</ns0:cell><ns0:cell>92.6</ns0:cell><ns0:cell>93.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Data set</ns0:cell><ns0:cell /><ns0:cell>CR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>TextCNN</ns0:cell><ns0:cell>89.5</ns0:cell><ns0:cell>98.2</ns0:cell><ns0:cell>90.6</ns0:cell><ns0:cell>94.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Capsule-A</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>94.0</ns0:cell><ns0:cell>92.0</ns0:cell><ns0:cell>93.0</ns0:cell></ns0:row><ns0:row><ns0:cell>AT-CapsNet</ns0:cell><ns0:cell>89.9</ns0:cell><ns0:cell>96.8</ns0:cell><ns0:cell>93.2</ns0:cell><ns0:cell>95.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-AT-Capsule(ours)</ns0:cell><ns0:cell>90.6</ns0:cell><ns0:cell>97.3</ns0:cell><ns0:cell>95.3</ns0:cell><ns0:cell>96.3</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The experimental result of multi label classification. Our model is compared with the benchmark model on two multi-label datasets. The evaluation index is ER, Precision,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Recall and F1.</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The experimental result of Syntax module verification. Our syntactic module was added to the benchmark model and experimented on Reuters-Full datasets. The evaluation index is ER, Precision, Recall and F1. In each control group, the first model has no syn</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data set</ns0:cell><ns0:cell /><ns0:cell cols='2'>Reuters-Full</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Evaluation index</ns0:cell><ns0:cell>ER(%)</ns0:cell><ns0:cell>Precision(%)</ns0:cell><ns0:cell>Recall(%)</ns0:cell><ns0:cell>F1(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>TextCNN</ns0:cell><ns0:cell>85.0</ns0:cell><ns0:cell>97.0</ns0:cell><ns0:cell>86.5</ns0:cell><ns0:cell>91.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-CNN</ns0:cell><ns0:cell>85.9</ns0:cell><ns0:cell>97.1</ns0:cell><ns0:cell>87.7</ns0:cell><ns0:cell>92.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Capsule-A</ns0:cell><ns0:cell>84.5</ns0:cell><ns0:cell>92.5</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>91.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-CapsNet</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>93.1</ns0:cell><ns0:cell>91.7</ns0:cell><ns0:cell>92.4</ns0:cell></ns0:row><ns0:row><ns0:cell>AT-CapsNet</ns0:cell><ns0:cell>86.6</ns0:cell><ns0:cell>94.9</ns0:cell><ns0:cell>90.6</ns0:cell><ns0:cell>92.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Syntax-AT-CapsNet(ours)</ns0:cell><ns0:cell>88.1</ns0:cell><ns0:cell>95.0</ns0:cell><ns0:cell>91.8</ns0:cell><ns0:cell>93.4</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The experimental result of control single module ablation. Remove one module from our model to verify the effect of the module. There is no corresponding module for the first model in each control group.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data set</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 (on next page)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='1'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66681:1:0:NEW 23 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
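As a concrete reference for Algorithm 1 in the paper above (constructing the adjacency matrix from the syntactic dependency tree), a minimal Python sketch follows. It assumes the dependency edges have already been produced by a parser (the paper uses StanfordNLP; the parser call itself is omitted here), and the symmetrization and added self-loops are common GCN conventions rather than steps stated in the algorithm.

import numpy as np

def build_adjacency(num_tokens, edges, self_loops=True):
    """Algorithm 1 sketch: build a 0/1 adjacency matrix from dependency edges.

    edges: iterable of (i, j) index pairs for each edge <i, j> in the
    dependency tree, produced by any parser (e.g., StanfordNLP).
    """
    adj = np.zeros((num_tokens, num_tokens), dtype=np.float32)  # step 1: zero matrix
    for i, j in edges:                                          # steps 3-5: mark each edge <i, j>
        adj[i, j] = 1.0
        adj[j, i] = 1.0      # assumption: treat the dependency tree as undirected
    if self_loops:
        adj += np.eye(num_tokens, dtype=np.float32)             # assumption: usual GCN self-loops
    return adj

# toy usage: a 4-token sentence with three dependency edges
print(build_adjacency(4, [(1, 0), (1, 2), (2, 3)]))

The resulting matrix is the adj input that the two-layer graph convolution of Eq. (7) consumes together with the sentence matrix X.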
"Dear Editors We thank the reviewers for their recognition of our work. We have corrected the suggestions of reviewers in the manuscript. We improved our English writing and checked our mistakes. Formatting problems are caused by the conversion between the review PDF and the original manuscript. To avoid this, I've modified the formatting of the manuscript. We have also explained the reviewer's concerns and added them to the related work. PeerJ Computer Science is a quality journal, we believe that the manuscript is now suitable for publication in PeerJ Computer Science. Mr. Xudong Jia On behalf of all authors. Reviewer: Peng Bao Basic reporting This paper makes a focused study on the text classification problem from the text syntactic relationship, sequence structure, and semantics. Specifically, the authors first comprehensively summarize the shortcomings of existing models and conclude the challenges for text classification. Later, the author introduced GCN to extract syntactic information for dependencies relationship representation on the one hand and built multi-head attention to encoding the different influence of words to enhance the effect of capsule networks on text classification on the other hand. In the end, the author proved that CapsNet, GCN, and multi-head attention have an integration effect for text classification through experiments. In general, the author states the research problems very clearly, and presents a very interesting and effective model named Syntax-AT-CapsNet. The paper is organized logically. Thus, I strongly suggest this work should be accepted. Experimental design The experiment designed by the author is very sufficient, and the effectiveness of the model (Syntax-AT-CapsNet) proposed by the author can be well proved through the experiment. However, the evaluation metrics could be further enhanced. More diverse evaluation indicators could be added to evaluate the effectiveness of the model in a single-label classification task. The addition of other evaluation metrics can show the effect of our model more comprehensively. Therefore, as shown in Table 1, Precision, Recall, and F1 evaluation indexes also has been added in the single-label classification task. Validity of the findings The author verified the validity of the conclusion through theoretical description and experimental verification, and it has strong persuasiveness and credibility. Additional comments Firstly, several typos should be corrected, such as: 1. At lines 160-161, “... (BERT)also also takes multi-head attention as its basic component.” -> (also) “... (BERT) also takes multi-head attention as its basic component.” Agreed. I have deleted the word “also”. 2. At lines 281, “... denotes that Nword vectors in the sentence are connected in series...”-> (N-word) “... denotes that N-word vectors in the sentence are connected in series...”. Agreed. I have corrected this. In addition, it is suggested that Fig.4 and Fig.6 should be more clear. Maybe it would be better to increase the resolution of the picture. This is caused by the conversion between the review PDF and original Figure Files. These original figure files are vector images. But I have increased the resolution of these pictures to make sure it's clearer. Finally, the format of the formula in the paper should be unified. This is caused by the conversion between the review PDF and the original manuscript. But I have unified the format of these formulas in the paper. Also, to avoid this, I modified the layout of these formulas. 
Reviewer 2 Basic reporting The paper is logically organized, the research content is systematic and the structure is reasonable. The paper considers that the text syntactic relationship and word sequence are important and useful for text classification. So it analyzes the sequence structure and syntactic relations of text, combines the syntactic relationship, sequence structure, and semantics for text representation. And proposes a novel model that utilizes GCN for syntactic relationship, multi-head attention for words and corporate them in capsule network for text classification. The experimental results prove the effectiveness of the proposed method. Experimental design The problem researched in this paper is clearly defined. The experimental part is related to the research problem, which is designed reasonably. Experiments on single classification and multi-classification tasks, Syntax module and module ablation experiment were carried out in the experimental part. Experiments prove that graph convolutional neural networks, capsule networks, and multi-head attention have an integrated effect on text classification tasks. And the module ablation experiment shows that each module has a certain role in improving the text classification effect. Methods described with sufficient detail and analyzed the results accordingly. Validity of the findings The paper analyzes the background and research status of the problem, proposes an attention enhanced capsule network-based text classification model, which is quite innovative. The technical introduction of the model is clear. The analysis of the experimental results is reasonable, and the data charts are clear. The results of the experiment also verify the effectiveness of the proposed model. Additional comments 1. It is better to briefly introduce the formulas in section 3 and separately add a new section to describe the details of the algorithm by pseudo-code overall. I agree with your advice that briefly introduces the formulas in section 3. I have further described the important formulas in section 3. For the details of the algorithm, in section 4, I have the pseudo-code description of the whole algorithm. In addition, I have added some details and comments to the pseudo-code. 2. It's better to give some insight into each module. Not just simply put the conclusions of others’ papers in “related work”. For example, what is the characteristics of CapsNet? Why it is better? I really want to see your own understanding with the formulas. The same as to GCN and multi-head attention. The introduction of those modules is not deep enough. This is a good suggestion. I have further described these three modules and added them to the Related work, as detailed descriptions follow. In CapsNet, the feature is represented by a capsule vector instead of a scalar (activation value output by neuron). Different dimensions in a vector can represent different properties of a feature. For a text feature, it often means different meanings in different semantic relations. We use capsules to represent text features to learn the semantic information of different dimensions of text features. On the other hand, the similarity between features at different levels is different. For building a high-level feature, lower levels with high similarity have a higher weight. CapsNet can learn this similarity relationship through a dynamic routing algorithm. First of all, GCN has a good performance in processing graph data. 
The syntactic relationship is presented as a tree structure. A tree is also a form of a graph. Secondly, the sequence structure-based model represents the syntactic tree as a sequence according to some rules of nodes. But the sequence is directional, and nodes in a tree are not sequential. GCN can directly consider the relationship between nodes in a syntax tree. Attention is to extract the long-distance dependencies in the text by calculating the similarities between words in the text. The Words in the text can express different meanings in different semantic scenarios. The representation of words is different in different semantic spaces. Similar to ensemble learning, multi-head attention can put text in different semantic spaces to calculate attention and get integrated attention. 3. Elaborate on the principle of Syntactic Dependency Tree. There is a brief introduction to the syntactic dependency tree in the introduction section, which I have expanded on. A text is made up of words that represent different syntactic elements, such as subject, predicate, object, and so on. Different syntactic elements are interdependent. A syntactic dependency tree is a kind of tree structure, which describes the dependency relationship between words. Figure 1 shows an example of a syntactic dependency tree. There are many tools to generate syntactic dependency trees by analyzing syntactic dependency relations. The StanfordNLP is used in this paper. 4. There are several mistakes about the format. Such as the label “Figure” in section 3.3 paragraph 2, misses number 3; Section 4.1 paragraph 1, the label “Table” misses number 1, section 4.2 (1), the label “Table 33” repeats number 3, etc. 6.In section 3, the authors should add citations of the formulas. Agreed. I have corrected these and added citations of the formulas. The description of Equation 17 needs to be clearer. Agreed. I have added a more detailed description of Equation 17. ( Alaparthi, and Mishra 2020) proposed a pre-trained model of language representation (BERT)also also takes multi-head attention as its basic component. 'n-grams' should be 'N-grams'. Agreed. I have corrected these. 5. Why Table 6 can show that the motivation of using graph neural network to encode syntactic information and other models is correct? I apologize for the ambiguous statement. Table 6 shows that the syntax module has the greatest influence on the model when it is combined with other modules. The main function of the syntax module is to encode syntactic information through GCN. This also shows that our motivation to use GCN to encode syntactic information and combine it with other modules is correct. I have revised the statement in the manuscript. Reviewer 3 Basic reporting This article proposes a new network architecture for the text classification task. In the feature extraction phase, it uses multi-head attention to encode words' embeddings to extract long-range dependency information, uses GCN to encode Syntactic Dependency Tree to extract syntactic info, and subsequently concatenate them by add operation. Then pass it through a CapsNet for classification. Experimental design Generally, the motivation is clear and the experimental work is solid and comprehensive. Validity of the findings It's a combination of existed works but results in better performance than some of the previous works. Additional comments Follows are some comments to further improve the paper: 1. (1) The English writing of the thesis needs to be further improved and the tenses be unified. 
Some contents are in the past tense and some are in the present tense. Agreed. I have corrected the problems in the English writing of the paper. (2) There are a lot of errors in the cross-reference of tables and charts in the paper. There are no corresponding labels behind many tables and figures. I have checked the cross-reference of the tables and charts and corrected them. And I treat Algorithms 1 and 2 as algorithmic parts, not tables. There are also some problems caused by the conversion between the review PDF and the original manuscript. (3) The writing of formulas also needs to be more standardized. The corresponding ',' needs to be added after most formulas, and where should be lowercase without uppercase. Agreed. I have standardized the formulas in the paper. (4) The data set used needs to be added to the corresponding reference instead of a footnote. In fact, I have added the corresponding references for the data set used in the references section. 2. There are many kinds of methods that can extract syntactic information. Why do the authors choose the GCN instead of other models? I mean what is the benefit of GCN? First of all, GCN has a good performance in processing graph data. The syntactic relationship is presented as a tree structure. A tree is also a form of a graph. Secondly, the sequence structure-based model represents the syntactic tree as a sequence according to some rules of nodes. But the sequence is directional, and nodes in a tree are not sequential. GCN can directly consider the relationship between nodes in a syntax tree. 3. In addition to the experimental comparison with several baselines, it is also necessary to add the experimental results compared with some other existing algorithms. I have selected some of the most representative models so far. My experiments are in the same experimental environment and parameter Settings. This ensures that my model is fairly comparable to the baseline model. Our experiments have been able to confirm our conclusions. Our model is basic, and compared with other practical algorithms, it is difficult to control the same experimental environment. The application of our model to practical tasks and its comparison with other practical models is our next task. Please look forward to it. "
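To make the multi-head attention discussion in the responses above more concrete, the following NumPy sketch mirrors the steps summarized in Table 1 of the paper: project the sentence matrix into several subspaces, compute the dot product of Q_i and K_i scaled by the square root of d, apply softmax, weight V_i, then concatenate the subspaces and apply a linear transformation. The random projection matrices and the head count here are illustrative stand-ins, not the paper's learned parameters or settings.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads=4, rng=np.random.default_rng(0)):
    """Scaled dot-product attention over several subspaces, then concat + linear."""
    L, D = X.shape
    d = D // num_heads                               # per-head (subspace) dimension
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((D, d)) for _ in range(3))  # stand-in projections
        Q, K, V = X @ Wq, X @ Wk, X @ Wv             # project the sentence matrix into one subspace
        weights = softmax(Q @ K.T / np.sqrt(d))      # dot product divided by sqrt(d), then softmax
        heads.append(weights @ V)                    # attention value on this subspace
    Wo = rng.standard_normal((num_heads * d, D))     # stand-in output transformation
    return np.concatenate(heads, axis=-1) @ Wo       # concat subspaces, then linear transformation

# toy sentence matrix: 6 tokens with 32-dimensional embeddings
print(multi_head_attention(np.ones((6, 32))).shape)  # -> (6, 32)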
Here is a paper. Please give your review comments after reading it.
326
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Background.</ns0:head><ns0:p>Ultrasound imaging has been recognized as a powerful tool in clinical diagnosis. Nonetheless, the presence of speckle noise degrades the signal-to-noise of ultrasound images. Various denoising algorithms cannot fully reduce speckle noise and retain image features well for ultrasound image. With the development of deep learning, the application of deep learning in ultrasound image denoising has attracted more and more attention in recent years.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods.</ns0:head><ns0:p>In the article, we propose a generative adversarial network with residual dense connectivity and weighted joint loss (GAN-RW) to avoid the limitations of traditional image denoising algorithms and surpass the most advanced performance of ultrasound image denoising. The denoising network is based on U-Net architecture which includes four encoder and four decoder modules.</ns0:p><ns0:p>Each of the encoder and decoder is replaced with residual dense connectivity and BN to remove speckle noise. The discriminator network applies a series of convolutional layers to identify differences between the translated images and the desired modality. In the training processes, we introduce a joint loss function consisting of a weighted sum of the L1 loss function, binary crossentropy with a logit loss function and perceptual loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Results.</ns0:head><ns0:p>We split experiments into two parts. First, experiments were performed on Berkeley segmentation (BSD68) datasets corrupted by simulated speckle. Compared with the eight existing denoising algorithms, the GAN-RW achieved the most advanced despeckling performance in terms of the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and subjective visual effect. When the noise level was 15, the average value of the GAN-RW</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Background.</ns0:head><ns0:p>Ultrasound imaging has been recognized as a powerful tool in clinical diagnosis. Nonetheless, the presence of speckle noise degrades the signal-to-noise of ultrasound images. Various denoising algorithms cannot fully reduce speckle noise and retain image features well for ultrasound image. With the development of deep learning, the application of deep learning in ultrasound image denoising has attracted more and more attention in recent years.</ns0:p></ns0:div> <ns0:div><ns0:head>Methods.</ns0:head><ns0:p>In the article, we propose a generative adversarial network with residual dense connectivity and weighted joint loss (GAN-RW) to avoid the limitations of traditional image denoising algorithms and surpass the most advanced performance of ultrasound image denoising. The denoising network is based on U-Net architecture which includes four encoder and four decoder modules. Each of the encoder and decoder is replaced with residual dense connectivity and BN to remove speckle noise. The discriminator network applies a series of convolutional layers to identify differences between the translated images and the desired modality. In the training processes, we introduce a joint loss function consisting of a weighted sum of the L1 loss function, binary cross-entropy with a logit loss function and perceptual loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Results.</ns0:head><ns0:p>We split experiments into two parts. 
First, experiments were performed on Berkeley segmentation (BSD68) datasets corrupted by simulated speckle. Compared with the eight existing denoising algorithms, the GAN-RW achieved the most advanced despeckling performance in terms of the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and subjective visual effect. When the noise level was 15, the average value of the GAN-RW increased by approximately 3.58% and 1.23% for PSNR and SSIM, respectively. When the noise level was 25, the average value of the GAN-RW increased by approximately 3.08% and 1.84% for PSNR and SSIM, respectively. When the noise level was 50, the average value of the GAN-RW increased by approximately 1.32% and 1.98% for PSNR and SSIM, respectively. Second, experiments were performed on the ultrasound images of lymph nodes, the foetal head and the brachial plexus. The proposed method shows higher subjective visual effect when verifying on the ultrasound images. In the end, through statistical analysis, the GAN-RW achieved the highest mean rank in the Friedman test.</ns0:p></ns0:div> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Ultrasound has been widely used in clinical diagnosis. Compared with CT and MRI, it has the advantages of cost-effectiveness and non-ionizing radiation. However, due to the coherent nature, speckle noise is inherent in ultrasound images <ns0:ref type='bibr' target='#b29'>(Singh et al., 2017)</ns0:ref>. The speckle noise is the primary cause of low contrast resolution and the signal-to-noise ratio. It makes image processing and analysis more challenging, such as image classification and segmentation.</ns0:p><ns0:p>Therefore, eliminating speckle noise is of great significance for improving the ultrasound images signal-to-noise ratio and diagnosing disease accurately.</ns0:p><ns0:p>There are various traditional methods for image denoising, which include frequency domain, time-domain and joint time-domain/frequency-domain methods. Among the traditional methods, the most widely used denoising method is based on wavelets <ns0:ref type='bibr' target='#b8'>(Jaiswal et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>Srivastava et al., 2016;</ns0:ref><ns0:ref type='bibr'>Gupta et al., 2004)</ns0:ref>. <ns0:ref type='bibr'>Shih et al. proposed</ns0:ref> an iterated two-band filtering method to solve the selective image smoothing problem <ns0:ref type='bibr' target='#b28'>(Shih et al., 2003)</ns0:ref>. <ns0:ref type='bibr'>Yue et al. introduced</ns0:ref> a novel nonlinear multiscale wavelet diffusion for speckle noise removal and edge enhancement, which proved that this method is better than wavelet-transform alone in removing speckle noise <ns0:ref type='bibr' target='#b43'>(Yue et al., 2006)</ns0:ref>. Among the above methods, speckle noise is transformed into additive noise and removed. Because speckle noise is not purely multiplicative noise, the selection of wavelets is based on experience, which creates artefacts. Traditional methods based on the spatial domain include the Kuan filter <ns0:ref type='bibr' target='#b11'>(Kuan et al.,1987)</ns0:ref>, speckle reducing anisotropic diffusion filter <ns0:ref type='bibr' target='#b42'>(Yu et al., 2002)</ns0:ref> and Frost filter <ns0:ref type='bibr'>(Frost et al., 1982)</ns0:ref>. These methods mainly use local pixel comparison. The nonlocal means (NLM) method was proposed which is based on a nonlocal averaging of all pixels in the image <ns0:ref type='bibr' target='#b1'>(Buades et al., 2005)</ns0:ref>. 
However, the NLM filter cannot preserve the fine transform-domain collaborative filtering (BM3D) method, which reduced the computing time and effectively suppressed noise by grouping 3D data arrays of similar 2D image fragments <ns0:ref type='bibr' target='#b4'>(Dabov et al., 2007)</ns0:ref>. However, the disadvantage of these methods is that they cannot maintain a balance between noise suppression and image detail preservation.</ns0:p><ns0:p>The development of deep learning provides a perfectly feasible solution for image denoising.</ns0:p><ns0:p>Zhang et al. <ns0:ref type='bibr' target='#b45'>(Zhang et al., 2017)</ns0:ref> introduced feed-forward denoising convolutional neural network (DnCNN), where residual learning <ns0:ref type='bibr'>(He et al., 2016</ns0:ref>) was adopted to separate noise from noisy image and batch normalization (BN) <ns0:ref type='bibr' target='#b24'>(Salimans et al., 2016)</ns0:ref> was integrated to speed up the training process and boost the denoising performance. Using the small medical image datasets, Jifara et al. designed a denoising convolutional neural network with residual learning and BN for medical image denoising (DnCNN-Enhanced) <ns0:ref type='bibr' target='#b9'>(Jifara et al., 2019)</ns0:ref>. More specifically, they used residual learning by multiplying a very small constant and added it to better approximate the residual to improve performance. Tian et al. proposed a novel algorithm called a batchrenormalization denoising network (BRDNet) for image denoising <ns0:ref type='bibr' target='#b35'>(Tian et al., 2020)</ns0:ref>. This network combines two networks to expand the width to capture more feature information.</ns0:p><ns0:p>Meanwhile, BRDNet adopted BN to address small mini-batch problems and dilated convolution to enlarge the receptive field to extract more feature information. In addition to feed-forward denoising algorithm, there are some algorithms based on the encoder-decoder network. The U-Net is the most widely used encoder-decoder network which used the segmentation of biomedical images <ns0:ref type='bibr' target='#b22'>(Ronneberger et al., 2015)</ns0:ref>. These are various algorithms based on U-Net for image processing, such as U-Net++ <ns0:ref type='bibr'>(Zhou et al., 2019)</ns0:ref>, residual-dilated-attention-gate network (RDAU-Net) <ns0:ref type='bibr'>(Zhuang et al., 2019)</ns0:ref>, Wasserstein GAN algorithm (RDA-UNET-GAN) <ns0:ref type='bibr'>(Negi et</ns0:ref> PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science al., 2020), Attention Gate-Dense Network-Improved Dilation Convolution-U-Net (ADID-UNET) <ns0:ref type='bibr' target='#b20'>(Raj et al., 2021)</ns0:ref>, <ns0:ref type='bibr'>VGG-UNet (Fawakherji et al., 2019)</ns0:ref>, Ens4B-UNet <ns0:ref type='bibr' target='#b0'>(Abedalla et al., 2021)</ns0:ref> and so on. Park et al. designed a densely connected hierarchical image denoising network (DHDN) for removing additive white Gaussian noise of natural images <ns0:ref type='bibr' target='#b19'>(Park et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Based on the U-Net, it applied the hierarchical architecture of the encoder-decoder module with dense connectivity and residual learning to solve the vanishing-gradient problem. Guo et al.</ns0:p><ns0:p>suggested training a convolutional blind denoising network (CBDNet) using noisy-clean image pairs and realistic noise model <ns0:ref type='bibr'>(Guo et al., 2019)</ns0:ref>. 
To further provide an interactive strategy to conveniently correct the denoising results, the noise estimation subnetwork with asymmetric learning was embedded in CBDNet to suppress the underestimation of the noise level. Couturier et al. applied the deep encoder-decoder network (EDNet) to address additive white Gaussian and multiplicative speckle noises <ns0:ref type='bibr' target='#b3'>(Couturier et al., 2018)</ns0:ref>. The encoder module used to extract features and remove the noise, whereas the decoder module recovered a clean image. To yield a performance improvement, there are some methods using generative adversarial network (GAN) in the training phase. Lsaiari et al. performed image denoising using generative adversarial network (GAN) <ns0:ref type='bibr' target='#b7'>(Lsaiari et al. 2019)</ns0:ref> <ns0:ref type='table' target='#tab_2'>2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>BN to remove speckle noise <ns0:ref type='bibr' target='#b36'>(Wang et al., 2017)</ns0:ref>. However, such a method cannot deal with the speckle noise of ultrasound images well.</ns0:p><ns0:p>In this thesis, we proposed a generative adversarial network with residual dense connectivity and weighted joint loss (GAN-RW) to overcome the limitations of traditional image denoising methods and surpass the most advanced performance of ultrasound image denoising. The proposed network consists of a denoising network and a discriminator network. The denoising network is based on U-Net architecture which includes four encoder and four decoder modules.</ns0:p><ns0:p>Each block of the encoder and decoder is replaced with residual dense connectivity and BN to remove speckle noise. The discriminator network applies a series of convolutional layers to identify differences between the translated images and the desired modality. In the training processes, we introduced a joint loss function consisting of a weighted sum of the L1 loss, the perceptual loss function and the binary cross-entropy with logit loss (BCEWithLogitsLoss) function. Experiments on natural images and ultrasound images illustrate that the proposed algorithm surpasses the deep learning-based algorithms and conventional denoising algorithms.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 provides the proposed method and implementation details. Extensive experiments are conducted to evaluate our proposed methods in Section 3. We discuss these results in Section 4 and conclude in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>An overview of the proposed network framework for ultrasound image denoising is shown in Figure <ns0:ref type='figure'>1</ns0:ref>. In this section, the network architecture is introduced in detail.</ns0:p></ns0:div> <ns0:div><ns0:head>Speckle Noise Model</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Due to speckle noise, ultrasound image processing is a very challenging task. Speckle noise is an interference mode generated by the coherent accumulation of random scattering in the ultrasonic beam resolution element, so it has the characteristics of asymmetrical intensity distribution and significant spatial correlation <ns0:ref type='bibr' target='#b30'>(Slabaugh et al., 2006)</ns0:ref>. 
This characteristic has an adverse effect on the image quality and interpretability. Because these characteristics are difficult to model, many methods of ultrasound image processing only assume that speckle noise is Gaussian noise, resulting in these speckle noise models are more suitable for X-ray and MRI image than ultrasound image. The gamma distribution <ns0:ref type='bibr' target='#b25'>(Sarti et al., 2005)</ns0:ref> and Fisher-Tippett distribution <ns0:ref type='bibr' target='#b17'>(Michailovich et al., 2003)</ns0:ref> have been proposed to approximate speckle noise. <ns0:ref type='bibr'>Slabaugh G et al.</ns0:ref> argued that Fisher-Tippett distribution was suitable for fully formed speckle noise in the ultrasound image <ns0:ref type='bibr' target='#b30'>(Slabaugh et al., 2006)</ns0:ref>. In this article, the speckle noise model of the ultrasound image is given as:</ns0:p><ns0:p>(1) &#119907;(&#119909;,&#119910;) = &#119906;(&#119909;,&#119910;) + &#119906;(&#119909;,&#119910;) &#119903; &#120579;(&#119909;,&#119910;)</ns0:p><ns0:p>where v(x, y) is the pixel location of the speckle noise image, u(x, y) is the pixel location of the noise-free ultrasound image, &#61553;(x, y) is additive white Gaussian noise (AWGN) with zero-mean and variance &#61555; 2 , and r is associated with ultrasonic equipment. A large number of studies have shown that r=0.5 is the best value that can be used to simulate speckle noise in ultrasonic images <ns0:ref type='bibr' target='#b40'>(Yu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Lan et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Denoising Network</ns0:head><ns0:p>The architecture of denoising network is shown in Figure <ns0:ref type='figure'>2</ns0:ref> Manuscript to be reviewed Computer Science contracting function of the encoder module is to gradually reduce the spatial dimensions and capture high-level feature information. Nevertheless, these successive convolutions and pooling layers cause the loss of spatial information. Additionally, the problem of vanishing gradient is a key point that hinders the networks from training as the networks deepen. Some densely connected methods capture more information and avoid the appearance of vanishing-gradient problem <ns0:ref type='bibr' target='#b6'>(Huang et al.,2017;</ns0:ref><ns0:ref type='bibr' target='#b47'>Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Park et al.,2019)</ns0:ref>. Inspired by these methods, we applied two residual dense connectivity blocks (RDCBs) to each module of the encoder and decoder modules. The architecture of RDCBs is shown in Figure <ns0:ref type='figure'>2 (b)</ns0:ref>. The RDCBs is composed of three convolutional layers followed by BN and ReLU. Each module applies the previous feature map through dense connectivity. We adopt dense connectivity and local residual learning to improve the information flow so that the proposed algorithm can avoid the vanishing gradient problem and accurately remove speckle noise. Meanwhile, RDCBs can capture more features to improve denoising performances.</ns0:p><ns0:p>The network architecture of the encoder module is shown in Figure <ns0:ref type='figure'>2 (c</ns0:ref>). The encoder module is composed of two RDCBs, a downsampling module and a convolution module. The downsampling module is a 2&#61620;2 max-pooling layer. The convolutional module is a 1&#61620;1 convolution layer followed by BN and ReLU. 
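Before continuing with the encoder, the speckle model of Eq. (1) above can be simulated directly; a minimal sketch follows. The intensity scaling is an assumption of this sketch: the paper specifies the noise levels sigma = 15, 25 and 50 but not the pixel range used with Eq. (1), so images are normalized here to [0, 1] and sigma is divided by 255.

import numpy as np

def add_speckle(u, sigma, r=0.5, rng=np.random.default_rng(0)):
    """Eq. (1): v = u + u**r * theta, with theta ~ N(0, sigma^2) and r = 0.5.

    Assumption: u is normalized to [0, 1] and sigma is given on the 0-255
    scale, so it is rescaled before sampling the Gaussian term.
    """
    theta = rng.normal(0.0, sigma / 255.0, size=u.shape)
    return np.clip(u + np.power(u, r) * theta, 0.0, 1.0)

# the three training noise levels used in the paper, applied to a flat 40x40 patch
clean = np.full((40, 40), 0.5)
for sigma in (15, 25, 50):
    print(sigma, round(float(add_speckle(clean, sigma).std()), 3))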
The feature map is fed into two RDCBs to preserve more feature information and avoid vanishing gradient. Subsequently, the feature map is fed into 2&#61620;2 max-pooling layers decreasing the size of feature map. Finally, the feature map is fed into a 1&#61620;1 convolution layer followed by BN and ReLU. The size of the output feature maps of the encoder module is half the size of the input feature maps.</ns0:p><ns0:p>The architecture of the decoder module is shown in Figure <ns0:ref type='figure'>2</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Discriminator</ns0:head><ns0:p>The discriminator is trained to distinguish the difference between the denoising image and the standard image, where the denoising attempts to fool the discriminator. It uses a set of convolutional layers to build a discriminative network. It consists of an input convolutional layer and nine convolutional layers followed by BN and ReLU. The output channels of consecutive convolutional layers are 64, 128, 256, 512 and 1. Therefore, when the input image is passed through each convolution block, the spatial dimension is decreased by a factor of two. The architecture of the discriminator network framework for ultrasound image denoising is shown in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Loss Function</ns0:head><ns0:p>Traditionally, learning-based image restoration uses the per-pixel loss between the restored image and ground truth as the optimization target, and excellent quantitative scores can be obtained. Nevertheless, in recent studies, relying only on low-level pixels to minimize pixelwise errors has proven that it can lead to the loss of details and smooth the results <ns0:ref type='bibr' target='#b10'>(Johnson et al., 2016)</ns0:ref>. In this paper, we use a weighted sum of the loss function. It consists of the denoising loss, the perceptual loss of the feature extractor and the discriminator loss.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The denoising network loss is the L1 loss function, which minimizes the pixelwise differences between the standard image and the denoising image. The L1 loss is used and calculated as follows:</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_0'>&#119871;1 = &#8721; &#119899; &#119894; = 1 |&#119909; -&#119910;|</ns0:formula><ns0:p>where x is the denoising image and y is the corresponding ground truth.</ns0:p><ns0:p>Recent studies have shown that the target image and the output image have similar feature representations, not just every low-level pixel that matches them <ns0:ref type='bibr' target='#b10'>(Johnson et al., 2016)</ns0:ref>. The critical point is that the pretrained convolutional neural model used for image semantic segmentation or classification has learned to encode image features, and these features can be directly used for perceptual loss.</ns0:p><ns0:p>To preserve image details more effectively in removing noise, we use perceptual loss as one of the loss functions, which is calculated by:</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_1'>&#119871; &#119901;&#119890;&#119903; = &#8214; &#61553; (&#119910; &#119905;&#119903;&#119906;&#119890; ) -&#61553; (&#119910; &#119900;&#119906;&#119905; )&#8214; 2 2</ns0:formula><ns0:p>Where &#61553; represents the feature extraction operator of the pretrained network. 
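The following PyTorch sketch shows one way these terms can be combined with the discriminator term into the weighted joint loss of Eq. (4) below (lambda1 = 1, lambda2 = 0.1, lambda3 = 1). Mapping "the output before the first pooling layer" of VGG-19 to torchvision's features[:4], using a mean-squared form for Eq. (3), repeating grayscale inputs to three channels, and giving the generator all-ones adversarial targets are assumptions of this sketch, not details stated in the paper.

import torch
import torch.nn as nn
from torchvision import models

class JointLoss(nn.Module):
    """Weighted joint loss sketch for Eq. (4): l1*L1 + l2*Lper + l3*LBCE."""
    def __init__(self, l1=1.0, l2=0.1, l3=1.0):
        super().__init__()
        # pretrained ImageNet weights (the argument name varies across torchvision versions)
        vgg = models.vgg19(pretrained=True).features[:4]   # output before the first pooling layer
        for p in vgg.parameters():
            p.requires_grad = False                        # frozen feature extractor
        self.vgg = vgg
        self.l1w, self.l2w, self.l3w = l1, l2, l3
        self.l1 = nn.L1Loss()                              # Eq. (2)
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, denoised, target, disc_logits):
        # assumption: grayscale inputs are repeated to 3 channels for VGG-19
        f_out = self.vgg(denoised.repeat(1, 3, 1, 1))
        f_true = self.vgg(target.repeat(1, 3, 1, 1))
        l_per = torch.mean((f_true - f_out) ** 2)          # Eq. (3), mean-squared form (assumption)
        l_adv = self.bce(disc_logits, torch.ones_like(disc_logits))  # generator targets "real" (assumption)
        return self.l1w * self.l1(denoised, target) + self.l2w * l_per + self.l3w * l_adv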
The convolution neural network pre-trained for image classification which has already learned to capture features. These features can be used as perceptual loss. In our proposed method, we adopt the output before the first pooling layer from the pretrained VGG-19 network to extract features as perceptual loss <ns0:ref type='bibr'>(Gong et al., 2018)</ns0:ref>. To the discriminator network, we use BCEWithLogitsLoss to discern the output image quality from the denoising network and the standard image. Then, we obtain the weighted joint loss function, which consists of L1 loss(L1), perceptual loss (Lper) and BCEWithLogitsLoss (L BCE ). &#955; 1 , &#955; 2 , &#955; 3 are scalar weights for L loss .</ns0:p><ns0:p>(4) <ns0:ref type='table' target='#tab_2'>2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>&#119871; &#119897;&#119900;&#119904;&#119904; = &#61548; 1 L 1 + &#61548; 2 L per + &#61548; 3 L BCE , , &#61548; 1 = 1 &#61548; 2 = 0.1 &#61548; 3 = 1 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Training and Testing details</ns0:head><ns0:p>To train our network, we use the Berkeley segmentation dataset (BSD400) composed of 400 images of size 180 &#215; 180 for training <ns0:ref type='bibr' target='#b15'>(Martin et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b45'>Zhang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Chen et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Then, according to Equation (1), speckle noise is added to the datasets and the noisy images are generated. For training data that have three noise levels, we train the model for speckle denoising with noise levels &#61555; =15, 25 and 50 independently. We set the patch sizes to 40&#61620;40 to train our model. To avoid overfitting, we apply data augmentation by randomly rotating and flipping. The initial learning rate is set to 1e-4 and halved every 2000 epochs. We use Adam optimizer and a batch size of 32 during training.</ns0:p><ns0:p>For the test images, we adopt Berkeley segmentation (BSD68) <ns0:ref type='bibr' target='#b15'>(Martin et al,.2001;</ns0:ref><ns0:ref type='bibr' target='#b23'>Roth et al., 2009)</ns0:ref> datasets for grey synthetic noisy images, which include 68 natural images, 321&#61620;481 or 481&#61620;321 in size. To further verify the practicality of the proposed GAN-RW method, we also illustrate the results of our method as well as eight existing denoising methods for ultrasound images from the Kaggle Challenge <ns0:ref type='bibr' target='#b21'>(Rebetez et al., 2016)</ns0:ref>, the Grand Challenge <ns0:ref type='bibr'>(Thomas et al., 2018)</ns0:ref> and lymph node datasets <ns0:ref type='bibr' target='#b44'>(Zhang et al., 2009)</ns0:ref>. We applied PyTorch (version 1.7.0) as the framework to implement our network. Training takes place on a workstation equipped with an NVIDIA 2080Ti graphic card with 11 GB of memory.</ns0:p><ns0:p>There is a phenomenon that deep learning-based networks with the same training data and seed points will get different results. Therefore, we repeated each training three times with the same parameters and seeds, and then used the results of three experiments on test datasets to obtain the mean value and standard deviation.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>In order to test the performance of the proposed method, the peak signal-to-noise ratio (PSNR) <ns0:ref type='bibr' target='#b2'>(Chan et al., 1983)</ns0:ref> and the structural similarity (SSIM) <ns0:ref type='bibr' target='#b37'>(Wang et al., 2004)</ns0:ref> are used to verify quantitative metrics. Meanwhile, the denoising results are used to show the visual quality of denoising images. If the denoising method has higher the PSNR and SSIM results on the test datasets, the denoising network shows better performance. In addition, to clarify the visual effect on the denoised images, we zoom in on the area of the denoising image for display. If the magnified area is clearer, it shows that the denoising method is more effective than others.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>To demonstrate the superiority of our proposed method in despeckling effect, we compared our proposed network with deep learning-based methods and traditional denoising algorithms.</ns0:p><ns0:p>The methods for these comparisons were as follows: BM3D <ns0:ref type='bibr' target='#b4'>(Dabov et al., 2007)</ns0:ref>, DnCNN <ns0:ref type='bibr' target='#b45'>(Zhang et al., 2017)</ns0:ref>, DnCNN_Enhanced <ns0:ref type='bibr' target='#b9'>(Jifara et al., 2019)</ns0:ref>, BRDNet <ns0:ref type='bibr' target='#b35'>(Tian et al., 2020)</ns0:ref>, DHDN <ns0:ref type='bibr' target='#b19'>(Park et al., 2019)</ns0:ref>, <ns0:ref type='bibr'>CBDNet (Guo et al., 2019)</ns0:ref>, MuNet <ns0:ref type='bibr' target='#b14'>(Lee et al., 2020)</ns0:ref> and EDNet <ns0:ref type='bibr' target='#b3'>(Couturier et al., 2018)</ns0:ref>. Two performance metrics are used, namely, PSNR and SSIM, which are expressed in terms of average value and standard deviation. Statistical analysis was performed with SPSS statistics software (version 26.0; IBM Inc., Armonk, NY, USA). All deep-learning based methods were trained three times and BM3D used three different parameters to obtain the average value and standard deviation. Experiments were performed on the BSD68 and ultrasound images.</ns0:p></ns0:div> <ns0:div><ns0:head>The BSD68</ns0:head><ns0:p>Table &#8544; shows the mean, standard deviation of the proposed methods and the compared methods for the BSD68 test datasets. In Table <ns0:ref type='table' target='#tab_0'>I</ns0:ref>, the best result is highlighted in bold. When the PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science noise level was 15, the average PSNR and SSIM of our proposed method improved by 1.21dB and 0.0113, which were better than those of the compared method. The average performance of the GAN-RW increased by approximately 3.58% and 1.23% for PSNR and SSIM, respectively.</ns0:p><ns0:p>When the noise level was 25, the average PSNR and SSIM of this method improved by 0.96dB and 0.0160, and increased by approximately 3.08% and 1.84% for PSNR and SSIM, respectively. When the noise level was 50, the average PSNR and SSIM of this method improved by 0.36dB and 0.0156 and increased by approximately 1.32% and 1.98% for PSNR and SSIM, respectively. 
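For reference, the two quantitative metrics reported here (PSNR and SSIM, introduced in the Evaluation Metrics subsection above) have standard open implementations; the paper does not state which implementation it uses, so the following sketch assumes scikit-image and 8-bit images.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised, data_range=255):
    """PSNR/SSIM pair as used for the quantitative comparison in Table I."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)
    ssim = structural_similarity(clean, denoised, data_range=data_range)
    return psnr, ssim

def psnr_manual(clean, denoised, max_val=255.0):
    """Equivalent closed form: PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)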
As shown in Table <ns0:ref type='table' target='#tab_0'>I</ns0:ref>, the proposed method is superior to the traditional methods for three noise levels.</ns0:p><ns0:p>To compare subjective performance, we compared the denoising images for different methods. Figure <ns0:ref type='figure'>4</ns0:ref>, 5 and 6 show the grey scale denoising image of the proposed methods and the compared method at different noise levels. To easily observe the performance of GAN-RW and other methods, we zoomed in on an area from denoising images obtained using the compared methods. In Figure <ns0:ref type='figure'>4</ns0:ref>, the proposed method accurately restored the pattern, while the compared methods achieved blurred denoising image. As shown in Figure <ns0:ref type='figure'>5</ns0:ref>, the compared methods failed to exactly restore the windows or achieved blurred denoising image. However, the proposed method restored the windows accurately. Similarly, unlike the compared methods, the details of the zebra stripes could not be restored. The proposed method restored the details in Figure <ns0:ref type='figure'>6</ns0:ref>. As shown in these images under different noise levels, the traditional methods produced blurred results and could not restore the details of the patterns, while the proposed method accurately restored the patterns.</ns0:p></ns0:div> <ns0:div><ns0:head>Ultrasound Images</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We used the ultrasound images of lymph nodes, the foetal head and the brachial plexus with a noise level of 25 to verify the practicality of the proposed GAN-RW. To observe the performance of GAN-RW and other eight existing algorithms, we marked the fine details with red box in the figure. Figure <ns0:ref type='figure'>7</ns0:ref> shows the despeckling images of different methods on the lymph node ultrasound image. Compared methods either failed to removed noise effectively or produced blurry and artifact results. Obviously, the results showed that the proposed method effectively removed speckle noise while better retaining image details and improving the subjective visual effect.</ns0:p><ns0:p>In addition, other foetal ultrasound images were applied to visually compare the despeckling performance of all evaluated methods. In Figure <ns0:ref type='figure'>8</ns0:ref>, it is easy to observe that our proposed algorithm produced a smoother outline and retained the image details better than the other methods.</ns0:p><ns0:p>In the end, we compare the different methods on the brachial plexus ultrasound images. In Figure <ns0:ref type='figure'>9</ns0:ref>, the proposed GAN-RW can smoother background regions and preserve image hierarchy structure information better than the other methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Ablation Study</ns0:head><ns0:p>To justify the effectiveness of the RDCBs, we conducted the following experiments on BSD68. In section of denoising network, RDCBs is composed of three convolutional layers followed by BN and ReLU and each module applied the previous feature map through dense connectivity. We used two successive convolutional layers followed by BN and ReLU without dense connectivity (GAN-RW-WD) to replace two RDCBs. The experimental results compared with GAN-RW are shown in Table <ns0:ref type='table' target='#tab_0'>I</ns0:ref>. When the noise level was 15, RDCBs can enhance the PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science average PSNR by approximately 0.44% and the average SSIM by approximately 0.16% for the BSD68, respectively. When the noise level was 25, RDCBs can enhance the average PSNR by approximately 0.53% and the average SSIM by approximately 0.57% for the BSD68, respectively. When the noise level was 50, RDCBs can enhance the average PSNR by approximately 0.37% and the average SSIM by approximately 0.61% for the BSD68, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistical Analysis</ns0:head><ns0:p>Statistical analysis is necessary to verify the superiority of the proposed method. Due to the PSNR and SSIM values were not Gaussian distribution, we used the nonparametric Friedman test <ns0:ref type='bibr'>(Friedman., 1937)</ns0:ref> to assess the performance of different denoising algorithms. The mean rank and p-Value of PSNR and SSIM of all algorithms are shown in Table <ns0:ref type='table' target='#tab_0'>II</ns0:ref>. Usually, a p-value of less than 0.05 is deemed the significant difference. The mean rank presents the performance of different algorithms, and the higher value of mean rank has the better performance. It can be seen from Table <ns0:ref type='table' target='#tab_0'>II</ns0:ref> that GAN-RW has a significant improvement over other algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this paper, we proposed a generative adversarial network for ultrasound image despeckling.</ns0:p><ns0:p>The GAN-RW is based on U-Net with residual dense connectivity, BN and a joint loss function to remove speckle noise. We used natural images and ultrasound images to verify our method.</ns0:p><ns0:p>For the BSD68 test datasets, when the noise level was 15, our method achieved 35.28dB and 0.9404 for PSNR and SSIM. Compared with the original noise image, the average values of the GAN-RW increased by approximately 10.05% and 12.26% for PSNR and SSIM, respectively.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>When the noise level was 25, our method achieved 32.15dB and 0.8969 for PSNR and SSIM, respectively. Compared with the original noise image, the average performance of the GAN-RW increased by approximately 16.01% and 28.43% for PSNR and SSIM. When the noise level was 50, our method achieved 27.74dB and 0.8064 for PSNR and SSIM, respectively. Compared with the original noise image, the average performance of the GAN-RW increased by approximately 26.88% and 75.35% for PSNR and SSIM, respectively. In Figure <ns0:ref type='figure'>10</ns0:ref>, boxplots show the comparison of PSNR and SSIM under different noise levels for BSD68. In the end, we used the ultrasound images of lymph nodes, the brachial plexus and the foetal head to verify the practicality of the proposed GAN-RW. In contrast, GAN-RW can effectively eliminate speckle noise while retaining image details better and improving the visual effect.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In conclusion, we developed and verified a new ultrasound image despeckling method. GAN-RW is based on U-Net and uses residual dense connectivity, BN and joint loss functions to remove speckle noise. 
Compared with BM3D, DnCNN, DnCNN-Enhanced, BRDNet, DHDN, CBDNet, MuNet, EDNet and GAN-RW achieves better despeckling performance on three fixed noise levels of BSD68. We also effectively verified the proposed method on ultrasound images of lymph nodes, the brachial plexus and the foetal head. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The mean rank (Friedman test) of the PSNR (dB) and SSIM values of the different methods for denoising BSD68 gray images.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022) Manuscript to be reviewed Computer Science details and edge features in the image. Dabov et al. proposed a block-matching and 3D</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>.</ns0:head><ns0:label /><ns0:figDesc>Yang et al. introduced a new CT image denoising method based on GAN with Wasserstein distance and perceptual similarity (Yang et al. 2018). Dong et al. developed a custom GAN to denoise optical coherence tomography (Dong et al. 2020). Lee et al. proposed a model consisting of multiple U-Nets (MuNet) for three-dimensional neural image denoising (Lee et al., 2020). It consisted of multiple U-Nets and using GAN in the training phase. These methods perform well in removing Gaussian noise, but they cannot accurately suppress speckle noise. Wang et al. proposed a set of convolutional layers along with a componentwise division residual layer and a rectified linear unit (ReLU) activation function and PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>(a). The denoising network is based on U-Net, which consists of a contracting path and an expanding path. The expanding function of the decoder module is to gradually restore the spatial and boundary information. The PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>(d). It is the inverse process of the encoder module. It consists of three modules: two RDCBs, a 1&#61620;1 convolution layer followed PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)Manuscript to be reviewed Computer Science by BN and ReLU and a subpixel interpolation layer. We use a 1&#61620;1 convolution layer to refine the feature maps. Compared with the 2&#61620;2 deconvolution layer, subpixel interpolation can expand the feature maps size more accurately and efficiently. 
Therefore, the size of the output feature map of the upsampling block is twice the size of the input feature map, and the number of channels of the input feature map is one second.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,280.87,525.00,324.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,280.87,525.00,192.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,234.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,272.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,199.12,525.00,181.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,199.12,525.00,230.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,199.12,525.00,210.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,199.12,525.00,234.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,199.12,525.00,237.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table I :</ns0:head><ns0:label>I</ns0:label><ns0:figDesc>The mean and standard deviation of the PSNR (dB) and SSIM values of different methods for denoising BSD68 gray images. The best result is highlighted with bold. Enhanced 33.87&#177;0.0036 0.9332&#177;0.0004 31.00&#177;0.0057 0.8862&#177;0.0006 27.22&#177;0.0066 0.7907&#177;0.0007 BRDNet 33.79&#177;0.0202 0.9319&#177;0.0006 30.95&#177;0.0283 0.8843&#177;0.0015 27.20&#177;0.0140 0.7888&#177;0.0018 DHDN 35.18&#177;0.0203 0.9393&#177;0.0003 32.03&#177;0.0388 0.8938&#177;0.0013 27.62&#177;0.0374 0.8035&#177;0.0009 CBDNet 33.83&#177;0.0206 0.9334&#177;0.0004 31.01&#177;0.0145 0.8875&#177;0.0006 27.24&#177;0.0161 0.7926&#177;0.0015 MuNet 34.67&#177;0.1099 0.9296&#177;0.0024 31.51&#177;0.0956 0.8780&#177;0.0022 27.53&#177;0.1049 0.7929&#177;0.0035 EDNet 34.07&#177;0.3938 0.9277&#177;0.0039 31.42&#177;0.1916 0.8799&#177;0.0049 27.53&#177;0.0261 0.7993&#177;0.0022 GAN-RW-WD 35.13&#177;0.0334 0.9389&#177;0.0003 31.98&#177;0.0378 0.8919&#177;0.0009 27.63&#177;0.0537 0.8015&#177;0.0028 GAN_RW 35.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>BSD68</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>&#120590; = 15</ns0:cell><ns0:cell /><ns0:cell>&#120590; = 25</ns0:cell><ns0:cell /><ns0:cell>&#120590; = 50</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>PSNR</ns0:cell><ns0:cell>SSIM</ns0:cell><ns0:cell>PSNR</ns0:cell><ns0:cell>SSIM</ns0:cell><ns0:cell>PSNR</ns0:cell><ns0:cell>SSIM</ns0:cell></ns0:row><ns0:row><ns0:cell>Noisy</ns0:cell><ns0:cell>32.06</ns0:cell><ns0:cell>0.8377</ns0:cell><ns0:cell>27.71</ns0:cell><ns0:cell>0.6984</ns0:cell><ns0:cell>21.86</ns0:cell><ns0:cell>0.4599</ns0:cell></ns0:row><ns0:row><ns0:cell>BM3D</ns0:cell><ns0:cell cols='6'>33.30&#177;0.2178 0.9045&#177;0.0045 30.62&#177;0.1747 0.8512&#177;0.0055 27.43&#177;0.1737 0.7683&#177;0.0049</ns0:cell></ns0:row><ns0:row><ns0:cell>DnCNN</ns0:cell><ns0:cell cols='6'>33.86&#177;0.0043 0.9332&#177;0.0001 30.98&#177;0.0091 0.8862&#177;0.0002 
27.24&#177;0.0124 0.7907&#177;0.0005</ns0:cell></ns0:row><ns0:row><ns0:cell>DnCNN-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>28&#177;0.0193 0.9404&#177;0.0004 32.15&#177;0.0243 0.8969&#177;0.0010 27.74&#177;0.0151 0.8064&#177;0.0003</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67114:2:0:NEW 5 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> </ns0:body> "
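To make the decoder description in the figure captions above more concrete, the following is a minimal sketch of the upsampling block (1×1 convolution followed by BN and ReLU, then a subpixel layer). It is written in PyTorch, which the paper does not specify, it omits the RDCBs, and the class name and channel arithmetic are our assumptions, chosen only so that the output has twice the spatial size and half the channels, as stated in the caption.

```python
# Sketch (not the authors' released code) of a subpixel upsampling block:
# 1x1 conv + BN + ReLU, then PixelShuffle(2), which doubles the spatial size
# and halves the channel count of the block's input.
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # The 1x1 conv refines features and produces 2*C channels so that
        # PixelShuffle(2) (which divides channels by 4) outputs C/2 channels.
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, 2 * in_channels, kernel_size=1),
            nn.BatchNorm2d(2 * in_channels),
            nn.ReLU(inplace=True),
        )
        self.shuffle = nn.PixelShuffle(upscale_factor=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.refine(x))

# Example: a 64-channel 32x32 feature map becomes 32 channels at 64x64.
x = torch.randn(1, 64, 32, 32)
print(UpsampleBlock(64)(x).shape)  # torch.Size([1, 32, 64, 64])
```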
"Dear Editors and Reviewers: Thank you for your comments on our manuscript entitled” Ultrasound Image Denoising Using Generative Adversarial Networks with residual dense connectivity and weighted joint loss”. Those comments are valuable and very helpful for revising and improving our paper, as well as the important guiding significance to our research. We have revised manuscript which we hope meet with approval. Revised portions are marked in red in the manuscript. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Dr. Lun Zhang On behalf of all authors Editor comments (Yilun Shang) A minor revision is needed before further processing. I look forward to receiving your revised version. Thanks for the Editor’s comments, we have revised the paper according to suggestions. I addressed the reviewer’s comments in a rebuttal letter. Meanwhile, any edits or clarifications mentioned in the letter are also inserted into the revised manuscript with highlight. Reviewer 1 (Luka Posilović) Basic reporting References are improved. Professional English is used. Thanks for the Reviewer’s comments, we employed an English-language editing service to polish our paper. In the version of minor revision, we have added some descriptions of the experimental times at line 271-274. Experimental design Research question is now well defined. Methods description is improved as well as figures. However, authors do not state how many times were experiments done to prove the robustnes of the methods, at least I do not see it. Following the Reviewer’s suggestions, I have added some descriptions of the experimental times at line 271-274. Validity of the findings Conclusions are well stated, and all underlying data has been provided. Thanks to the Reviewer’s comments, your suggestions are valuable and very helpful for improving our paper. Additional comments If these small modifications are made the article is suitable for publishing, in my opinion. According to the Reviewer’s suggestions, we have revised manuscript and revised portions are marked in red in the manuscript. "
Here is a paper. Please give your review comments after reading it.
327
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opinion mining for app reviews aims to analyze people's comments from app stores to support data-driven requirements engineering activities, such as bug report classification, new feature requests, and usage experience. However, due to a large amount of textual data, manually analyzing these comments is challenging, and machine-learning-based methods have been used to automate opinion mining. Although recent methods have obtained promising results for extracting and categorizing requirements from users' opinions, the main focus of existing studies is to help software engineers to explore historical user behavior regarding software requirements. Thus, existing models are used to support corrective maintenance from app reviews, while we argue that this valuable user knowledge can be used for preventive software maintenance. This paper introduces the temporal dynamics of requirements analysis to answer the following question: how to predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation? We present the MAPP-Reviews (Monitoring App Reviews) method, which (i) extracts requirements with negative evaluation from app reviews, (ii) generates time series based on the frequency of negative evaluation, and (iii) trains predictive models to identify requirements with higher trends of negative evaluation. The experimental results from approximately 85,000 reviews show that opinions extracted from user reviews provide information about the future behavior of an app requirement, thereby allowing software engineers to anticipate the identification of requirements that may affect the future app's ratings.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opinion mining for app reviews aims to analyze people's comments from app stores to support data-driven requirements engineering activities, such as bug report classification, new feature requests, and usage experience. However, due to a large amount of textual data, manually analyzing these comments is challenging, and machine-learning-based methods have been used to automate opinion mining. Although recent methods have obtained promising results for extracting and categorizing requirements from users' opinions, the main focus of existing studies is to help software engineers to explore historical user behavior regarding software requirements. Thus, existing models are used to support corrective maintenance from app reviews, while we argue that this valuable user knowledge can be used for preventive software maintenance. This paper introduces the temporal dynamics of requirements analysis to answer the following question: how to predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation? We present the MAPP-Reviews (Monitoring App Reviews) method, which (i) extracts requirements with negative evaluation from app reviews, (ii) generates time series based on the frequency of negative evaluation, and (iii) trains predictive models to identify requirements with higher trends of negative evaluation. 
The experimental results from approximately 85,000 reviews show that opinions extracted from user reviews provide information about the future behavior of an app requirement, thereby allowing software engineers to anticipate the identification of requirements that may affect the future app's ratings.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Opinions extracted from app reviews provide a wide range of user feedback to support requirements engineering activities, such as bug report classification, new feature requests, and usage experience <ns0:ref type='bibr' target='#b11'>(Dabrowski et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Martin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>AlSubaihin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Araujo and Marcacini, 2021)</ns0:ref>.</ns0:p><ns0:p>However, manually analyzing a reviews dataset to extract useful knowledge from the opinions is challenging because of the large amount of data and the high frequency of new reviews published by users <ns0:ref type='bibr' target='#b32'>(Johanssen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Martin et al., 2016)</ns0:ref>. To deal with these challenges, opinion mining has been increasingly used for computational analysis of the people's opinions from free texts (B. <ns0:ref type='bibr' target='#b37'>Liu, 2012)</ns0:ref>.</ns0:p><ns0:p>In the context of app reviews, opinion mining allows extracting excerpts from comments and mapping them to software requirements, as well as classifying the positive, negative or neutral polarity of these requirements according to the users' experience <ns0:ref type='bibr' target='#b11'>(Dabrowski et al., 2020)</ns0:ref>.</ns0:p><ns0:p>One of the main challenges for software quality maintenance is identifying emerging issues, e.g., bugs, in a timely manner <ns0:ref type='bibr' target='#b1'>(April and Abran, 2012)</ns0:ref>. These issues can generate huge losses, as users can fail to perform important tasks or generate dissatisfaction that leads the user to uninstall the app. A recent survey showed that 78.3% of developers consider removing unnecessary and defective requirements to be equally or more important than adding new requirements <ns0:ref type='bibr' target='#b48'>(Nayebi, Kuznetsov, et al., 2018)</ns0:ref>. According to <ns0:ref type='bibr' target='#b36'>Lientz and Swanson (1980)</ns0:ref>, maintenance activities are categorized into four classes: i) adaptive -changes PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science in the software environment; ii) perfective -new user requirements; iii) corrective -fixing errors; and iv) preventive -prevent problems in the future. The authors showed that around 21% of the maintenance effort was on the last two types <ns0:ref type='bibr' target='#b4'>(Bennett and Rajlich, 2000)</ns0:ref>. Specifically, in the context of mobile apps <ns0:ref type='bibr'>Mcilroy, Ali, and Hassan (2016)</ns0:ref> found that rationale for the update most frequently communicated task in app stores is bug fixing which occurs in 63% of the updates. Thus, approaches that automate the analysis of potentially defective software requirements from app reviews are important to make strategic updates, as well as prioritization and planning of new releases <ns0:ref type='bibr' target='#b35'>(Licorish, Savarimuthu, and Keertipati, 2017)</ns0:ref>. 
In addition, the app stores offer a more dynamic way of distributing the software directly to users, with shorter release times than traditional software systems, i.e., continuous update releases are performed every few weeks or even days <ns0:ref type='bibr' target='#b46'>(Nayebi, Adams, and Ruhe, 2016)</ns0:ref>. Therefore, app reviews provide quick feedback from the crowd about software misbehavior that may not necessarily be reproducible during regular development/testing activities, e.g., device combinations, screen sizes, operating systems and network conditions <ns0:ref type='bibr' target='#b56'>(Palomba, Linares-V&#225;squez, Bavota, Oliveto, Penta, et al., 2018)</ns0:ref>. This continuous crowd feedback can be used by developers in the development and preventive maintenance process.</ns0:p><ns0:p>Using an opinion mining approach, we argue that software engineers can investigate bugs and misbehavior early when an app receives negative reviews. Opinion mining techniques can organize reviews based on the identified software requirements and their associated user's sentiment <ns0:ref type='bibr' target='#b11'>(Dabrowski et al., 2020)</ns0:ref>. Consequently, developers can examine negative reviews about a specific feature to understand the user's concerns about a defective requirement and potentially fix it more quickly, i.e., before impacting many users and negatively affecting the app's ratings.</ns0:p><ns0:p>Different strategies have recently been proposed to discover these emerging issues <ns0:ref type='bibr' target='#b77'>(Zhao et al., 2020)</ns0:ref>, such as issues categorization <ns0:ref type='bibr' target='#b69'>(Tudor and Walter, 2006;</ns0:ref><ns0:ref type='bibr' target='#b30'>Iacob and Harrison, 2013;</ns0:ref><ns0:ref type='bibr' target='#b16'>Galvis Carre&#241;o and Winbladh, 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Pagano and W. Maalej, 2013;</ns0:ref><ns0:ref type='bibr' target='#b44'>Mcilroy, Ali, Khalid, et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Khalid et al., 2015;</ns0:ref><ns0:ref type='bibr'>Panichella et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b59'>Panichella et al., 2016)</ns0:ref>, sentiment analysis of the software requirements to identify certain levels of dissatisfaction <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>, and analyze the degree of utility of a requirement <ns0:ref type='bibr' target='#b26'>(Guzman and Walid Maalej, 2014)</ns0:ref>. These approaches are concerned only with past reviews and acting in a corrective way, i.e., these approaches do not have preventive strategies to anticipate problems that can become frequent and impact more users in the coming days or weeks. Analyzing the temporal dynamics of a requirement from app reviews provides information about a requirement's future behavior. In this sense, we raise the following research question: how do we predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation?</ns0:p><ns0:p>In this paper, we present the MAPP-Reviews (Monitoring App Reviews) method. MAPP-Reviews explores the temporal dynamics of software requirements extracted from app reviews. First, we collect, pre-process and extract software requirements from large review datasets. Then, the software requirements associated with negative reviews are organized into groups according to their content similarity by using clustering technique. 
The temporal dynamics of each requirement group is modeled using a time series, which indicates the time frequency of a software requirement from negative reviews. Finally, we train predictive models on historical time series to forecast future points. Forecasting is interpreted as signals to identify which requirements may negatively impact the app in the future, e.g., identify signs of app misbehavior before impacting many users and prevent the low app ratings. Our main contributions are briefly summarized below:</ns0:p><ns0:p>&#8226; Although there are promising methods for extracting candidate software requirements from application reviews, such methods do not consider that users describe the same software requirement in different ways with non-technical and informal language. Our MAPP-Reviews method introduces software requirements clustering to standardize different software requirement writing variations.</ns0:p><ns0:p>In this case, we explore contextual word embeddings for software requirements representation, which have recently been proposed to support natural language processing. When considering the clustering structure, we can more accurately quantify the number of negative user mentions of a software requirement over time.</ns0:p><ns0:p>&#8226; We present a method to generate the temporal dynamics of negative ratings of a software requirements cluster by using time series. Our method uses equal-interval segmentation to calculate the frequency of software requirements mentions in each time interval. Thus, a time series is obtained and used to analyze and visualize the temporal dynamics of the cluster, where we are especially interested in intervals where sudden changes happen.</ns0:p></ns0:div> <ns0:div><ns0:head>2/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Time series forecasting is useful to identify in advance an upward trend of negative reviews for a given software requirement. However, most existing forecasting models do not consider domainspecific information that affects user behavior, such as holidays, new app releases and updates, marketing campaigns, and other external events. In the MAPP-Reviews method, we investigate the incorporation of software domain-specific information through trend changepoints. We explore both automatic and manual changepoint estimation.</ns0:p><ns0:p>We carried out an experimental evaluation involving approximately 85,000 reviews over 2.5 years for three food delivery apps. The experimental results show that it is possible to find significant points in the time series that can provide information about the future behavior of the requirement through app reviews.</ns0:p><ns0:p>Our method can provide important information to software engineers regarding software development and maintenance. Moreover, software engineers can act preventively through the proposed MAPP-Reviews approach and reduce the impacts of a defective requirement. This paper is structured as follows. Section 'Background and Related Work' presents the literature review and related work about mining user opinions to support requirement engineering and emerging issue detection. In 'MAPP-Reviews method' section, we present the architecture of the proposed method.</ns0:p><ns0:p>We present the main results in 'Results' section. 
Thereafter, we evaluate and discuss the main findings of the research in 'Discussion' section. Finally, in 'Conclusions' section, we present the final considerations and future work.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND AND RELATED WORK</ns0:head><ns0:p>The opinion mining of app reviews can involve several steps, such as software requirements organization from reviews <ns0:ref type='bibr' target='#b3'>(Araujo and Marcacini, 2021)</ns0:ref>, grouping similar apps using textual features <ns0:ref type='bibr' target='#b66'>(Al-Subaihin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b27'>Harman, Jia, and Yuanyuan Zhang, 2012)</ns0:ref>, reviews classification in categories of interest to developers (e.g., Bug and New Features) <ns0:ref type='bibr' target='#b2'>(Araujo, Golo, et al., 2020)</ns0:ref>, sentiment analysis of the users' opinion about the requirements <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019;</ns0:ref><ns0:ref type='bibr' target='#b41'>Malik, Shakshuki, and Yoo, 2020)</ns0:ref>, and the prediction of the review utility score (Ying <ns0:ref type='bibr' target='#b76'>Zhang and Lin, 2018)</ns0:ref>. The requirements extraction has an essential role in these steps since the failure in this task directly affects the performance of the other steps. <ns0:ref type='bibr' target='#b11'>Dabrowski et al. (2020)</ns0:ref> evaluated the performance of the three state-of-the-art requirements extraction approaches: SAFE <ns0:ref type='bibr' target='#b31'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref>, ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> and GuMa <ns0:ref type='bibr' target='#b26'>(Guzman and Walid Maalej, 2014)</ns0:ref>. These approaches explore rule-based information extraction from linguistic features. GuMa <ns0:ref type='bibr' target='#b26'>(Guzman and Walid Maalej, 2014</ns0:ref>) used a co-location algorithm, thereby identifying expressions of two or more words that correspond to a conventional way of referring to things. SAFE <ns0:ref type='bibr' target='#b31'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref> and ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> defined linguistic rules based on grammatical classes and semantic dependence. The experimental evaluation of <ns0:ref type='bibr' target='#b11'>(Dabrowski et al., 2020)</ns0:ref> revealed that the low accuracy presented by the rule-based approaches could hinder its use in practice. <ns0:ref type='bibr' target='#b3'>Araujo and Marcacini (2021)</ns0:ref> After extracting requirements from app reviews, there is a step to identify more relevant requirements and organize them into groups of similar requirements. Traditionally, requirements obtained from user interviews are prioritized with manual analysis techniques, such as the MoSCoW <ns0:ref type='bibr' target='#b69'>(Tudor and Walter, 2006)</ns0:ref> method that categorizes each requirement into groups, and applies the AHP (Analytical Hierarchy Process) decision-making <ns0:ref type='bibr' target='#b64'>(Saaty, 1980)</ns0:ref>. These techniques are not suitable for prioritizing large numbers of software requirements because they require domain experts to categorize each requirement. Therefore, recent studies have applied data mining approaches and statistical techniques <ns0:ref type='bibr' target='#b53'>(Pagano and W. 
Maalej, 2013)</ns0:ref>.</ns0:p><ns0:p>The statistical techniques have been used to find issues such as to examine how app features predict an app's popularity (M. <ns0:ref type='bibr'>Chen and X. Liu, 2011)</ns0:ref>, to analyze the correlations between the textual size of Manuscript to be reviewed Computer Science the reviews and users' dissatisfaction <ns0:ref type='bibr' target='#b70'>(Vasa et al., 2012)</ns0:ref>, lower rating and negative sentiments <ns0:ref type='bibr' target='#b29'>(Hoon et al., 2012)</ns0:ref>, correlations between the rating assigned by users and the number of app downloads <ns0:ref type='bibr' target='#b27'>(Harman, Jia, and Yuanyuan Zhang, 2012)</ns0:ref>, to the word usage patterns in reviews <ns0:ref type='bibr' target='#b22'>(G&#243;mez et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Licorish, Savarimuthu, and Keertipati, 2017)</ns0:ref>, to detect traceability links between app reviews and code changes addressing them <ns0:ref type='bibr' target='#b56'>(Palomba, Linares-V&#225;squez, Bavota, Oliveto, Penta, et al., 2018)</ns0:ref>, and explore the feature lifecycles in app stores <ns0:ref type='bibr' target='#b65'>(Sarro et al., 2015)</ns0:ref>. There also exists some work focus on defining taxonomies of reviews to assist mobile app developers with planning maintenance and evolution activities <ns0:ref type='bibr' target='#b14'>(Di Sorbo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b10'>Ciurumelea et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b48'>Nayebi, Kuznetsov, et al., 2018)</ns0:ref>. In addition to user reviews, previous works <ns0:ref type='bibr' target='#b24'>(Guzman, Alkadhi, and Seyff, 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Guzman, Alkadhi, and Seyff, 2017;</ns0:ref><ns0:ref type='bibr' target='#b47'>Nayebi, Cho, and Ruhe, 2018)</ns0:ref> explored how a dataset of tweets can provide complementary information to support mobile app development.</ns0:p><ns0:p>From a labeling perspective, previous works classified and grouped software reviews into classes and categories <ns0:ref type='bibr' target='#b30'>(Iacob and Harrison, 2013;</ns0:ref><ns0:ref type='bibr' target='#b16'>Galvis Carre&#241;o and Winbladh, 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Pagano and W. Maalej, 2013;</ns0:ref><ns0:ref type='bibr' target='#b44'>Mcilroy, Ali, Khalid, et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Khalid et al., 2015;</ns0:ref><ns0:ref type='bibr'>N. Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b22'>G&#243;mez et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Gu and Kim, 2015;</ns0:ref><ns0:ref type='bibr' target='#b38'>Walid Maalej and Nabil, 2015;</ns0:ref><ns0:ref type='bibr' target='#b72'>Villarroel et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b50'>Nayebi, Marbouti, et al., 2017)</ns0:ref>, such as feature requests, requests for improvements, requests for bug fixes, and usage experience. Noei, F. <ns0:ref type='bibr' target='#b52'>Zhang, and Zou (2021)</ns0:ref> used topic modeling to determine the key topics of user reviews for different app categories.</ns0:p><ns0:p>Regarding analyzing emerging issues from app reviews, existing studies are usually based on topic modeling or clustering techniques. 
For example, LDA (Latent Dirichlet Allocation) <ns0:ref type='bibr' target='#b5'>(Blei, Ng, and Jordan, 2003)</ns0:ref>, DIVER (iDentifying emerging app Issues Via usER feedback) <ns0:ref type='bibr' target='#b20'>(Gao, Zheng, et al., 2019)</ns0:ref> and IDEA <ns0:ref type='bibr' target='#b17'>(Gao, Zeng, Lyu, et al., 2018)</ns0:ref> approaches were used for app reviews. The LDA approach is a topic modeling method used to determine patterns of textual topics, i.e., to capture the pattern in a document that produces a topic. LDA is a probabilistic distribution algorithm for assigning topics to documents. A topic is a probabilistic distribution over words, and each document represents a mixture of latent topics <ns0:ref type='bibr' target='#b26'>(Guzman and Walid Maalej, 2014)</ns0:ref>. In the context of mining user opinions in app reviews, especially to detect emerging issues, the documents in the LDA are app reviews, and the extracted topics are used to detect emerging issues. The IDEA approach improves LDA by considering topic distributions in a context window when detecting emerging topics by tracking topic variations over versions <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>. In addition, the IDEA approach implements an automatic topic interpretation method to label each topic with the most representative sentences and phrases <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In the same direction, the DIVER approach was proposed to detect emerging app issues, but mainly in beta test periods <ns0:ref type='bibr' target='#b20'>(Gao, Zheng, et al., 2019)</ns0:ref>. The IDEA, DIVER and LDA approaches have not been considered sentiment of user reviews. Recently, the MERIT (iMproved EmeRging Issue deTection) <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref> approach was proposed and explore word embedding techniques to prioritize phrases/sentences of each positive and negative topic. <ns0:ref type='bibr' target='#b61'>Phong et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b73'>Vu et al. (2016)</ns0:ref> grouped the keywords and phrases using clustering algorithms and then determine and monitor over time the emergent clusters based on the occurrence frequencies of the keywords and phrases in each cluster. <ns0:ref type='bibr' target='#b54'>Palomba, Linares-V&#225;squez, Bavota, Oliveto, Di Penta, et al. (2015)</ns0:ref> proposes an approach to tracking informative user reviews of source code changes and to monitor the extent to which developers addressing user reviews. These approaches are descriptive models, i.e., they analyze historical data to interpret and understand the behavior of past reviews. In our paper, we are interested in predictive models that aim to anticipate the growth of negative reviews that can impact the app's evaluation.</ns0:p><ns0:p>In short, app reviews formed the basis for many studies and decisions ranging from feature extraction to release planning of mobile apps. However, previous related works do not explore the temporal dynamics with a predictive model of requirements in reviews, as shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. Related works that incorporate temporal dynamics cover only descriptive models. In addition, existing studies focus on only a few steps of the opinion mining process from app reviews, which hinders its use in real-world applications. 
Our proposal instantiates a complete opinion mining process and incorporates temporal dynamics of software requirements extracted from app reviews into forecasting models to address these drawbacks.</ns0:p></ns0:div> <ns0:div><ns0:head>THE MAPP-REVIEWS METHOD</ns0:head><ns0:p>In order to analyze the temporal dynamics of software requirements, we present the MAPP-Reviews approach with five stages, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. First, we collect mobile app reviews in app stores through a web crawler. Second, we group the similar extracted requirements by using clustering methods. Third,</ns0:p></ns0:div> <ns0:div><ns0:head>4/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_11'>2021:08:64857:1:1:NEW 17 Nov 2021)</ns0:ref> Manuscript to be reviewed Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the most relevant clusters are identified to generate time series from negative reviews. Finally, we train the predictive model from time series to forecast software requirements involved with negative reviews, which will potentially impact the app's rating.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Overview of the proposed method for analyzing temporal dynamics of requirements engineering from mobile app reviews.</ns0:p></ns0:div> <ns0:div><ns0:head>App Reviews</ns0:head><ns0:p>The app stores provide the textual content of the reviews, the publication date, and the rating stars of user-reported reviews. In the first stage of MAPP-Reviews, raw reviews are collected from the app stores using a web crawler tool through a RESTful API. At this stage, there is no pre-processing in the textual content of reviews. Data is organized in the appropriate data structure and automatically batched to be processed by the requirements extraction stage of MAPP-Reviews. In the experimental evaluation presented in this article, we used reviews collected from three food delivery apps: Uber Eats, Foodpanda, and Zomato.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements Extraction</ns0:head><ns0:p>This section describes stages 2 of the MAPP-Reviews method, where there is the software requirements extraction from app reviews and text pre-processing using contextual word embeddings.</ns0:p><ns0:p>MAPP-Reviews uses the pre-trained RE-BERT <ns0:ref type='bibr' target='#b3'>(Araujo and Marcacini, 2021)</ns0:ref> model to extract software requirements from app reviews. RE-BERT is an extractor developed from our previous research.</ns0:p><ns0:p>We trained the RE-BERT model using a labeled reviews dataset generated with a manual annotation process, as described by <ns0:ref type='bibr' target='#b11'>Dabrowski et al. (2020)</ns0:ref>. The reviews are from 8 apps of different categories as showed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. RE-BERT uses a cross-domain training strategy, where the model was trained in 7 apps and tested in one unknown app for the test step. RE-BERT software requirements extraction performance was compared to SAFE <ns0:ref type='bibr' target='#b31'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref>, ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> and GuMa <ns0:ref type='bibr' target='#b26'>(Guzman and Walid Maalej, 2014)</ns0:ref>. Since RE-BERT uses pre-trained models for semantic representation of texts, the extraction performance is significantly superior to the rule-based methods. 
Given this scenario, we selected RE-BERT for the requirement extraction stage. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows an example of review and extracted software requirements. In the raw review 'I am ordering with delivery but it is automatically placing order with pick-up', four software requirements were extracted ('ordering', 'delivery', 'placing order', and 'pick-up'). Note that 'placing order' and 'ordering' are the same requirement in practice. In the clustering step of the MAPP-Reviews method, these requirements are grouped in the same cluster, as they refer to the same feature. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science more tokens. We filter reviews that are more associated with negative comments through user feedback.</ns0:p><ns0:p>Consider that the user gives a star rating when submitting a review for an app. Generally, the star rating ranges from 1 to 5. This rating can be considered as the level of user satisfaction. In particular, we are interested in defective software requirements, and only reviews with 1 or 2 rating stars were considered.</ns0:p><ns0:p>Thus, we use RE-BERT to extract only software requirements mentioned in reviews that may involve complaints, bad usage experience, or malfunction of app features.</ns0:p><ns0:p>RE-BERT extracts software requirements directly from the document reviews and we have to deal with the drawback that the same requirement can be written in different ways by users. Thus, we propose a software requirement semantic clustering, in which different writing variations of the same requirement must be standardized. However, the clustering step requires that the texts be pre-processed and structured in a format that allows the calculation of similarity measures between requirements.</ns0:p><ns0:p>We represent each software requirement through contextual word embedding. Word embeddings are vector representations for textual data in an embedding space, where we can compare two texts semantically using similarity measures. Different models of word embeddings have been proposed, such as Word2vec <ns0:ref type='bibr' target='#b45'>(Mikolov et al., 2013)</ns0:ref>, Glove <ns0:ref type='bibr' target='#b60'>(Pennington, Socher, and Manning, 2014)</ns0:ref>, FastText <ns0:ref type='bibr' target='#b6'>(Bojanowski et al., 2017)</ns0:ref> and BERT <ns0:ref type='bibr' target='#b12'>(Devlin et al., 2018)</ns0:ref>. We use the BERT Sentence-Transformers model <ns0:ref type='bibr' target='#b62'>(Reimers and Gurevych, 2019)</ns0:ref> to maintain an neural network architecture similar to RE-BERT.</ns0:p><ns0:p>BERT is a contextual neural language model, where for a given sequence of tokens, we can learn a word embedding representation for a token. Word embeddings can calculate the semantic proximity between tokens and entire sentences, and the embeddings can be used as input to train the classifier. BERT-based models are promising to learn contextual word embeddings from long-term dependencies between tokens in sentences and sentences <ns0:ref type='bibr' target='#b3'>(Araujo and Marcacini, 2021)</ns0:ref>. However, we highlight that a local context more impacts the extraction of software requirements from reviews, i.e., tokens closer to those of software requirements are more significant <ns0:ref type='bibr' target='#b3'>(Araujo and Marcacini, 2021)</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science a special token called [MASK] <ns0:ref type='bibr' target='#b3'>(Araujo and Marcacini, 2021)</ns0:ref>. 
One of the training objectives is the noisy reconstruction defined in Equation <ns0:ref type='formula'>1</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_0'>p(r|r) &#8793; k &#8721; j&#8793;1 m j exp(h &#8890; c j w t j ) &#8721; t &#8242; exp(h &#8890; c j w t &#8242; ) (1)</ns0:formula><ns0:p>where r is a corrupted token sequence of requirement r, r is the masked tokens, m t is equal to 1 when t j is masked and 0 otherwise. The c t represents context information for the token t j , usually the neighboring tokens. We extract token embeddings from the pre-trained BERT model, where h c j is a context embedding and w t j is a word embedding of the token t j . The term &#8721; t &#8242; exp(h &#8890; c w t &#8242; ) is a normalization factor using all tokens t &#8242; from a context c. BERT uses the Transformer deep neural network to solve p(r|r) of the Equation <ns0:ref type='formula'>1</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> illustrates a set of software requirements in a two-dimensional space obtained from contextual word embeddings. Note that the vector space of embeddings preserves the proximity of similar requirements, but written in different ways by users such as 'search items', 'find items', 'handles my searches' and 'find special items'. </ns0:p></ns0:div> <ns0:div><ns0:head>Requirements Clustering</ns0:head><ns0:p>After mapping the software requirements into word embeddings, MAPP-Reviews uses the k-means algorithm <ns0:ref type='bibr' target='#b39'>(MacQueen et al., 1967)</ns0:ref> to obtain a clustering model of semantically similar software requirements.</ns0:p><ns0:p>Formally, let R &#8793; {r 1 ,r 2 ,...,r n } a set of extracted software requirements, where each requirement r is a m-dimensional real vector from an word embedding space. The k-means clustering aims to partition the</ns0:p><ns0:formula xml:id='formula_1'>n requirements into k (2 &#8804; k &#8804; n) clusters C &#8793; {C 1 ,C 2 ,.</ns0:formula><ns0:p>..,C k }, thereby minimizing the within-cluster sum of squares as defined in Equation <ns0:ref type='formula'>2</ns0:ref>, where &#181; i is the mean vector of all requirements in C i . Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>&#8721; C i &#8712;C &#8721; r&#8712;C i &#8741;r &#8722; &#181; i &#8741; 2 (2)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We observe that not all software requirements cluster represents a functional requirement in practice.</ns0:p><ns0:p>Then, we evaluated the clustering model using a statistical measure called silhouette <ns0:ref type='bibr' target='#b63'>(Rousseeuw, 1987)</ns0:ref> to discard clusters with many different terms and irrelevant requirements. The silhouette value of a data instance is a measure of how similar a software requirement is to its own cluster compared to other clusters.</ns0:p><ns0:p>The silhouette measure ranges from &#8722;1 to +1, where values close to +1 indicate that the requirement is well allocated to its own cluster <ns0:ref type='bibr' target='#b71'>(Vendramin, Campello, and Hruschka, 2010)</ns0:ref>. Finally, we use the requirements with higher silhouette values to support the cluster labeling, i.e., to determine the software requirement's cluster name. For example, Table <ns0:ref type='table'>3</ns0:ref> shows the software requirement cluster 'Payment' and some tokens allocated in the cluster with their respective silhouette values.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. 
Example of software requirement cluster 'Payment' and some tokens allocated in the cluster with their respective silhouette values.</ns0:p></ns0:div> <ns0:div><ns0:head>Cluster Label</ns0:head><ns0:p>Tokens with Silhouette (s)</ns0:p><ns0:p>Payment 'payment getting' (s = 0.2618), 'payment get' (s = 0.2547), 'getting payment' (s = 0.2530), 'take payment'(s = 0.2504), 'payment taking' (s = 0.2471), 'payment' (s = 0.2401)</ns0:p><ns0:p>To calculate the silhouette measure, let r i &#8712; C i a requirement r i in the cluster C i . Equation 3 compute the mean distance between r i and all other software requirements in the same cluster, where d(r i ,r j ) is the distance between requirements r i and r j in the cluster C i . In the equation, the expression 1</ns0:p><ns0:formula xml:id='formula_3'>|C i |&#8722;1 means the distance d(r i ,r i )</ns0:formula><ns0:p>is not added to the sum. A smaller value of the silhouette measure a(i) indicates that the requirement i is far from neighboring clusters and better assigned to its cluster.</ns0:p><ns0:formula xml:id='formula_4'>a(r i ) &#8793; 1 |C i | &#8722; 1 &#8721; r j &#8712;C i ,r i &#8800;r j d(r i ,r j )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Analogously, the mean distance from requirement r i to another cluster C k is the mean distance from r i to all requirements in C k , where C k &#8800; C i . For each requirement r i &#8712; C i , Equation 4 defines the minimum mean distance of r i for all requirements in any other cluster, of which r i is not a member. The cluster with this minimum mean distance is the neighbor cluster of r i . So this is the next best-assigned cluster for the r i requirement. The silhouette (value) of the software requirement r i is defined by Equation <ns0:ref type='formula'>5</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_5'>b(r i ) &#8793; min k&#8800;i 1 |C k | &#8721; r j &#8712;C k d(r i ,r j ) (4) s(r i ) &#8793; b(r i ) &#8722; a(r i ) max{a(r i ),b(r i )} , if |C i | &gt; 1 (5)</ns0:formula><ns0:p>At this point in the MAPP-Reviews method, we have software requirements pre-processed and represented through contextual word embeddings, as well as an organization of software requirements into k clusters. In addition, each cluster has a representative text (cluster label) obtained according to the requirements with higher silhouette values.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> shows a two-dimensional projection of clustered software requirements from approximately 86,000 food delivery app reviews, which were used in the experimental evaluation of this work. Highdensity regions represent clusters of similar requirements that must be mapped to the same software requirement during the analysis of temporal dynamics. In the next section, techniques for generating the time series from software requirements clusters are presented, as well as the predictive models to infer future trends.</ns0:p></ns0:div> <ns0:div><ns0:head>9/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>Time Series Generation</ns0:head><ns0:p>Time series can be described as an ordered sequence of observations <ns0:ref type='bibr' target='#b7'>(Chatfield and Xing, 2019)</ns0:ref>. 
A time series of size s is defined as X &#8793; (x 1 ,x 2 ,...,x s ) in which x t &#8712; R represents an observation at time t.</ns0:p><ns0:p>MAPP-Reviews generates time series for each software requirements cluster, where the observations represent how many times each requirement occurred in a period. Consequently, we know how many times a specific requirement was mentioned in the app reviews for each period. Each series models the temporal dynamics of a software requirement, i.e., the temporal evolution considering occurrences in negative reviews.</ns0:p><ns0:p>Some software requirements are naturally more frequent than others, as well as the tokens used to describe these requirements. For the time series analysis to be compared uniformly, we generate a normalized series for each requirement. Each observation in the time series is normalized according to Equation <ns0:ref type='formula'>6</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_6'>x normalized &#8793; x z p (6)</ns0:formula><ns0:p>where x normalized is the result of the normalization, where x is the frequency of cluster (time series observation) C in the period p, z p is the total frequency of the period. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science thereby indicating that users have negatively evaluated the app for that requirement. Predicting the occurrence of these periods for software maintenance, aiming to minimize the number of future negative reviews is the objective of the MAPP-Reviews predictive model discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Predictive Models</ns0:head><ns0:p>Predictive models for time series are very useful to support an organization in its planning and decisionmaking. Such models explore past observations to estimate observations in future horizons, given a confidence interval. In our MAPP-Reviews method, we aim to detect the negative reviews of a software requirement that are starting to happen and make a forecast to see if they will become serious in the subsequent periods, i.e., high frequency in negative reviews. The general idea is to use p points from the time series to estimate the next p + h points, where h is the prediction horizon.</ns0:p><ns0:p>MAPP-Reviews uses the Prophet Forecasting Model <ns0:ref type='bibr' target='#b68'>(Taylor and Letham, 2018)</ns0:ref>. Prophet is a model from Facebook researchers for forecasting time series data considering non-linear trends at different time intervals, such as yearly, weekly, and daily seasonality. We chose the Prophet model for the MAPP-Reviews method due to the ability to incorporate domain knowledge into the predictive model. The Prophet model consists of three main components, as defined in Equation <ns0:ref type='formula'>7</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_7'>y(t) &#8793; g(t) + s(t) + h(t) +t&#949; (7)</ns0:formula><ns0:p>where g(t) represents the trend, s(t) represents the time series seasonality, h(t) represents significant events that impacts time series observations, and the error term t represents noisy data.</ns0:p><ns0:p>During model training, a time series can be divided into training and testing. 
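As a minimal illustration of these two steps, the sketch below builds the weekly normalized series of Equation 6 for one requirement cluster and fits the additive model of Equation 7 on a training split. It is not the authors' released implementation: the prophet and pandas packages, the ds/y column names, and the toy data are assumptions made only for the example.

```python
# Sketch: weekly normalized frequency series (Eq. 6) for one requirement
# cluster, then a Prophet fit of the additive model (Eq. 7).
import pandas as pd
from prophet import Prophet

# Toy data: synthetic daily negative reviews tagged with a requirement cluster.
dates = pd.date_range('2019-01-01', '2021-01-01', freq='D')
reviews = pd.DataFrame({
    'date': dates,
    'cluster': ['Arriving time' if i % 3 == 0 else 'Payment' for i in range(len(dates))],
})

def weekly_series(reviews: pd.DataFrame, cluster_label: str) -> pd.DataFrame:
    weekly_total = reviews.resample('W', on='date').size()            # z_p in Eq. 6
    weekly_cluster = (reviews[reviews['cluster'] == cluster_label]
                      .resample('W', on='date').size())               # x in Eq. 6
    normalized = (weekly_cluster / weekly_total).fillna(0.0)          # x_normalized
    return (normalized.rename('y').reset_index()
            .rename(columns={'date': 'ds'}))                          # Prophet input format

series = weekly_series(reviews, 'Arriving time')
train, test = series.iloc[:-4], series.iloc[-4:]     # hold out a 4-week horizon

model = Prophet()                                    # trend and seasonality inferred from data
model.fit(train)
future = model.make_future_dataframe(periods=4, freq='W')
forecast = model.predict(future)                     # includes yhat, yhat_lower, yhat_upper
```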
The terms g(t), s(t) and h(t) can be automatically inferred by classical statistical methods in the area of time series analysis, such as the Generalized Additive Model (GAM) <ns0:ref type='bibr' target='#b28'>(Hastie and Tibshirani, 1987)</ns0:ref> In the experimental evaluation, we show the MAPP-Reviews ability to predict perceptually important points in the software requirements time series, allowing the identification of initial trends in defective requirements to support preventive strategies in software maintenance.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> shows an emerging issue being predicted 6 weeks in advance in the period from October 2020 to January 2021. The table presents a timeline represented by the horizon (h) in weeks, with the volume of negative raw reviews (Vol.</ns0:p><ns0:p>). An example of a negative review is shown for each week until reaching the critical week (peak), with h &#8793; 16. The table row with h &#8793; 10 highlighted in bold shows when MAPP-Reviews identified the uptrend. In this case, we show the MAPP-Reviews alert for the 'Time of arrival' requirement of the Uber Eats app. In particular, the emerging issue identified in the negative reviews is the low accuracy of the estimated delivery time in the app. The text of the user review samples has been entered in its entirety, without any pre-treatment. A graphical representation of this prediction is shown in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The proposed approach is validated through an experimental evaluation with popular food delivery apps.</ns0:p><ns0:p>These apps represent a dynamic and complex environment consisting of restaurants, food consumers, and drivers operating in highly competitive conditions <ns0:ref type='bibr' target='#b75'>(Williams et al., 2020)</ns0:ref>. In addition, this environment means a real scenario of commercial limitations, technological restrictions, and different user experience contexts, which makes detecting emerging issues early an essential task. For this experimental evaluation, we used a dataset with 86,610 reviews of three food delivery apps: Uber Eats, Foodpanda, and Zomato. The dataset was obtained in the first stage (App Reviews) of MAPP-Reviews and is available at https://github.com/vitormesaque/mapp-reviews. The choice of these apps was based on their popularity and the number of reviews available. The reviews are from September 2018 to January 2021.</ns0:p><ns0:p>After the software requirements extraction and clustering stage (with k &#8793; 300 clusters), the six most popular (frequent) requirements clusters were considered for time series prediction. The following software requirements clusters were selected: 'Ordering', 'Go pick up', 'Delivery', 'Arriving time', 'Advertising', and 'Payment'. The requirements clusters are shown in Table <ns0:ref type='table'>5</ns0:ref> with the associated words ordered by silhouette.</ns0:p><ns0:p>In the MAPP-Reviews prediction stage, we evaluated two scenarios using Prophet. The first scenario is the baseline, where we use the automatic parameters fitting of the Prophet. By default, Prophet will automatically detect the changepoints. In the second scenario, we specify the potential changepoints, thereby providing domain knowledge for software requirements rather than automatic changepoint detection. 
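Continuing the previous sketch, the two scenarios differ only in how the Prophet model is configured. The snippet below is a sketch under the same assumptions; the above-average heuristic follows the description in the text, but this exact code is ours, not the authors'.

```python
# Sketch of the two forecasting scenarios, reusing the `train` data frame
# (columns ds, y) from the earlier sketch.
from prophet import Prophet

# Scenario 1 (baseline): Prophet places potential changepoints automatically.
baseline = Prophet()

# Scenario 2: changepoint dates supplied as domain knowledge -- here, the most
# recent training weeks whose normalized frequency exceeds the series mean.
above_mean = train.loc[train['y'] > train['y'].mean(), 'ds']
informed = Prophet(changepoints=list(above_mean.tail(10)))   # '10' is illustrative

for model in (baseline, informed):
    model.fit(train)
    forecast = model.predict(model.make_future_dataframe(periods=4, freq='W'))
```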
Therefore, the changepoint parameters are used when we provide the dates of the changepoints instead of the Prophet determining them. In this case, we use the most recent observations that have a value greater than the average of observations, i.e., critical periods with high frequencies of negative reviews in the past.</ns0:p><ns0:p>We used the MAPE (Mean Absolute Percentage Error) metric to evaluate the forecasting performance <ns0:ref type='bibr' target='#b40'>(Makridakis, 1993)</ns0:ref>, as defined in Equation <ns0:ref type='formula'>8</ns0:ref>, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Software requirements clusters for food delivery apps used in the experimental evaluation. Tokens well allocated in each cluster (silhoutte measure) were selected to support the cluster labeling.</ns0:p><ns0:p>Cluster Label Tokens with Silhouette values (s)</ns0:p><ns0:p>Ordering 'ordering' (s = 0.1337), 'order's' (s = 0.1250), 'order from' (s = 0.1243), 'order will' (s = 0.1221), 'order' (s = 0.1116), 'the order', (s = 0.1111)</ns0:p><ns0:p>Go pick up 'go pick up'(s = 0.1382)', 'pick up the' (s = 0.1289)', 'pick up at', (s = 0.1261), 'to take' (s = 0.1176), 'go get' (s = 0.1159) Delivery 'delivering parcels' (s = 0.1705), 'delivery options' (s = 0.1590), 'waive delivery' (s = 0.1566), 'delivery charges' (s = 0.1501), 'accept delivery' (s = 0.1492) Arriving time 'arrival time' (s=0.3303), 'waisting time' (s = 0.3046), 'arriving time' (s = 0.3042), 'estimate time' (s = 0.2877), 'delievery time' (s = 0.2743)</ns0:p><ns0:p>Advertising 'anoyning ads' (s = 0.3464), 'pop-up ads' (s = 0.3440), 'ads pop up' (s = 0.3388), 'commercials advertise' (s = 0.3272), 'advertising' (s = 0.3241)</ns0:p><ns0:p>Payment 'payment getting' (s = 0.2618), 'payment get' (s = 0.2547), 'getting payment' (s = 0.2530), 'take payment'(s = 0.2504), 'payment taking' (s = 0.2471), 'payment' (s = 0.2401)</ns0:p><ns0:formula xml:id='formula_8'>MAPE &#8793; 1 h &#8721; h t&#8793;1 |real t &#8722; pred t | real t (8)</ns0:formula><ns0:p>where real t is the real value and pred t is the predicted value by the method, and h is the number of forecast observations in the estimation period (prediction horizon). In practical terms, MAPE is a measure of the percentage error that, in a simulation, indicates how close the prediction was made to the known values of the time series. We consider a prediction horizon (h) ranging from 1 to 4, with weekly seasonality. In particular, we are interested in the peaks of the series since our hypothesis is that the peaks represent potential problems in a given software requirement. Thus, Table <ns0:ref type='table' target='#tab_11'>7</ns0:ref> shows MAPE calculated only for time series peaks during forecasting. In this case, predictions with the custom changepotins locations (scenario 2) obtained better results than the automatic detection for all prediction horizons (h &#8793; 1 to h &#8793; 4), obtaining 3.82% of forecasting improvement. These results provide evidence that domain knowledge can improve the detection of potential software requirements to be analyzed for preventive maintenance.</ns0:p><ns0:p>In particular, analyzing the prediction horizon, the results show that the best predictions were obtained with h &#8793; 1 (1 week). In practical terms, this means the initial trend of a defective requirement can be identified one week in advance. 
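The error measure of Equation 8 is straightforward to compute over a forecast horizon; the sketch below reuses the 1,000-versus-800 example discussed in the text (the helper name is ours, and in MAPP-Reviews the same function would be applied only to the time-series peaks of interest).

```python
import numpy as np

def mape(real, pred) -> float:
    """Mean Absolute Percentage Error (Eq. 8) over a forecast horizon."""
    real, pred = np.asarray(real, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.abs(real - pred) / real))

# 1,000 observed negative reviews at a peak against a forecast of 800
# yields a 20% error, as in the example discussed in the text.
print(f'MAPE = {mape([1000], [800]):.0%}')   # MAPE = 20%
```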
We can note that a prediction error rate (MAPE) of up to 20% is acceptable. For example, consider that the prediction at a given point is 1000 negative reviews for a Manuscript to be reviewed</ns0:p><ns0:p>Computer Science For reproducibility purposes, we provide a GitHub repository at https://github.com/vitormesaque/mappreviews containing the source code and details of each stage of the method, as well as the raw data and all the results obtained. </ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>An issue related to a software requirement reported in user reviews is defined as an emerging issue when there is an upward trend in these requirements in negative reviews. The MAPP-Reviews train predictive models to identify requirements with higher negative evaluation trends, but inevitably a negative review will impact the rating. However, our objective is to mitigate this negative impact.</ns0:p><ns0:p>The results show that the best prediction horizon (h) is one week (h &#8793; 1). In practical terms, this means the initial trend of a defective requirement can be identified one week in advance. We can note that a prediction error rate (MAPE) of up to 20% is acceptable. For example, consider that the prediction is 1000 Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>negative reviews for a specific requirement at a given point, but the model predicts 800 negative reviews.</ns0:p><ns0:p>Even with 20% of MAPE, we can identify a significant increase in negative reviews for a requirement and trigger alerts for preventive software maintenance, i.e., when MAPP-Reviews predicts an uptrend, the software development team should receive an alert. In the time series forecast shown in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>, we observe that the model would be able to predict the peaks of negative reviews for the software requirement one week in advance. We show that MAPP-Reviews provides software engineers with tools to perform software maintenance activities, particularly preventive maintenance, by automatically monitoring the temporal dynamics of software requirements.</ns0:p><ns0:p>The forecast presented in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows that the model was able to predict the peak of negative reviews for the 'Arriving time' requirement. An emerging issue detection system based only on the frequency of a topic could trigger many false detections, i.e., it would not detect defective functionality but issues related to the quality of services offered. Analyzing user reviews, we found that some complaints are about service issues rather than defective requirements. For example, the user may complain about the delay in the delivery service and negatively rate the app, but in reality, they are complaining about the restaurant, i.e., a problem with the establishment service. We've seen that this pattern of user complaints is repeated across other app domains, not just the food delivery service. In delivery food apps, these complaints about service are constant, uniform, and distributed among all restaurants available in the app. In Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, it is clear that the emerging issue refers to the deficient implementation of the estimated delivery time prediction functionality. 
Our experiment showed that when there is a problem in the app related to a defective software requirement, there are increasing complaints associated with negative reviews regarding that requirement.</ns0:p><ns0:p>We intend to explore further our method to deeply determine the input variables that most contribute to the output behavior and the non-influential inputs or to determine some interaction effects within the model. In addition, sensitivity analysis can help us reduce the uncertainties found more effectively and calibrate the model.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The results of our research show there are new promising prospects for the future. However, in the scope of our experimental evaluation, we just investigate the incorporation of software domain-specific information through trend changepoints. Company-sensitive information and the development team's domain knowledge were not considered in the predictive model. Domain knowledge information provided by software engineers can potentially be exploited in the future to improve the predictive model. For this purpose, we depend on sensitive company data related to the software development and management process, e.g., release planning, server failures, and marketing campaigns. In particular, we can investigate the relationship between the release dates of app updates and the textual content of the update publication with the upward trend in negative evaluations of a software requirement. In a real-world scenario in the industry, software engineers using MAPP-Reviews will be able to provide domain-specific information.</ns0:p><ns0:p>In the future, we intend to evaluate our proposed method in the industry and explore more specifics of the domain knowledge to improve the predictive model.</ns0:p><ns0:p>Another issue that is important to highlight is the sentiment analysis in app reviews. We assume that it is possible to improve the classification of negative reviews by incorporating sentiment analysis techniques. We can incorporate a polarity classification stage (positive, negative, and neutral) of the extracted requirement, allowing a software requirements-based sentiment analysis. In the current state of our research, we only consider negative reviews with low ratings and associate them with the software requirements mentioned in the review.</ns0:p><ns0:p>Finally, to use MAPP-Reviews in a real scenario, there must be already a sufficient amount of reviews distributed over time, i.e., a minimum number of time-series observations available for the predictive model to work properly. Therefore, in practical terms, our method is more suitable when large volumes of app reviews are available to be analyzed.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Opinion mining for app reviews can provide useful user feedback to support software engineering activities. We introduced the temporal dynamics of requirements analysis to predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation. 
We presented the MAPP-Reviews (Monitoring App Reviews) approach to (1) extract and cluster software requirements, (2) generate time series with the time dynamics of requirements, and (3) identify requirements with higher trends of negative evaluation.</ns0:p><ns0:p>The experimental results show that our method is able to find significant points in the time series that provide information about the future behavior of a requirement through app reviews, thereby allowing software engineers to anticipate the identification of requirements that may affect the app's evaluation.</ns0:p><ns0:p>In addition, we show that it is beneficial to incorporate changepoints into the predictive model by using domain knowledge, i.e., defining points over time with significant impacts on the app's evaluation.</ns0:p><ns0:p>We compared MAPP-Reviews in two scenarios: the first using automatic changepoint detection and the second specifying the changepoint locations. In particular, automatic changepoint detection had better MAPE results in most evaluations. On the other hand, the best predictions at the time series peaks (where there is greater interest in identifying potential defective requirement trends) were obtained by specifying the changepoints.</ns0:p><ns0:p>Future work involves evaluating MAPP-Reviews in other scenarios, incorporating and comparing other types of domain knowledge in the predictive model, such as new app releases, marketing campaigns, server failures, and competing apps, among other information that may impact the evaluation of apps. Another direction for future work is to implement a dashboard tool for monitoring app reviews, thus allowing the dispatching of alerts and reports.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Example of a review and extracted requirements.</ns0:figDesc><ns0:graphic coords='8,203.77,160.86,289.51,192.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Set of software requirements in a two-dimensional space obtained from contextual word embeddings.</ns0:figDesc><ns0:graphic coords='9,183.09,263.99,330.91,285.35' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Two-dimensional projection of clustered software requirements from approximately 86,000 food delivery app reviews.</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.58,358.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 shows an example of one of the generated time series for a software requirement. The time dynamics represented in the time series indicate the behavior of the software requirement concerning negative reviews. Note that in some periods there are large increases in the mention of the requirement,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5.
Time series with the normalized frequency of 'Arriving time' requirement from Zomato App in negative reviews.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.58,202.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>used in Prophet. In the training step, the terms are adjusted to find an additive model that best fits the known observations in the training time series. Next, we evaluated the model in new data, i.e., the testing time series.In the case of the temporal dynamics of the software requirements, domain knowledge is represented by specific points (e.g. changepoints) in the time series that indicate potential growth of the requirement in negative reviews. Figure6shows the forecasting for a software requirement. Original observations are the black dots and the blue line represents the forecast model. The light blue area is the confidence interval of the predictions. The vertical dashed lines are the time series changepoints. Changepoints play an important role in forecasting models, as they represent abrupt changes in the trend. Changepoints can be estimated automatically during model training, but domain knowledge, such as the date of app releases, marketing campaigns, and server failures, are changepoints that can be added 11/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021)Manuscript to be reviewed Computer Science manually by software engineers. Therefore, the changepoints could be specified by the analyst using known dates of product launches and other growth-altering events or may be automatically selected given a set of candidates. In MAPP-Reviews, we have two possible options for selecting changepoints in the predictive model. The first option is automatic changepoint selection, where the Prophet specifies 25 potential changepoints which are uniformly placed in the first 80% of the time series. The second option is the manual specification which has a set of dates provided by a domain analyst. In this case, the changepoints could be entirely limited to a small set of dates. If no known dates are provided, by default we use the most recent observations which have a value greater than the average of the observations, i.e., we want to emphasize the highest peaks of the time series, as they indicate critical periods of negative revisions from the past.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Prophet forecasting with automatic changepoints of a requirement.</ns0:figDesc><ns0:graphic coords='13,141.73,197.59,413.58,229.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Finally</ns0:head><ns0:label /><ns0:figDesc>, to exemplify MAPP-Reviews forecasting, Figure 7 shows the training data (Arriving time software requirement) represented as black dots and the forecast as a blue line, with upper and lower bounds in a blue shaded area. At the end of the time series, the darkest line is the real values plotted over the predicted values in blue. The lines plotted vertically represent the changepoints.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. 
Forecasting for software requirement cluster (Arriving time) from Uber Eats App reviews.</ns0:figDesc><ns0:graphic coords='16,141.73,349.25,413.58,227.78' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='7,141.73,110.44,413.58,220.94' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overview of related works.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Reference</ns0:cell><ns0:cell /><ns0:cell cols='2'>Data Representa-</ns0:cell><ns0:cell>Pre-processing and</ns0:cell><ns0:cell>Requirements/Topics Cluster-</ns0:cell><ns0:cell>Temporal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell><ns0:cell /><ns0:cell>Extraction of Re-</ns0:cell><ns0:cell>ing and Labeling</ns0:cell><ns0:cell>Dynam-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>quirements</ns0:cell><ns0:cell>ics</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Araujo and Mar-</ns0:cell><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Token Classification.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>cacini, 2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zeng, Wen, et</ns0:cell><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. It combines word embed-</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>dings with topic distributions as</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>the semantic representations of</ns0:cell><ns0:cell>Model.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>words.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Malik, Shakshuki,</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and Yoo, 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zheng, et</ns0:cell><ns0:cell>Vector space.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. Anomaly Clustering Algo-</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>rithm.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Dragoni, Federici,</ns0:cell><ns0:cell cols='2'>Dependency tree.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and Rexha, 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zeng, Lyu, et</ns0:cell><ns0:cell cols='2'>Probability vector.</ns0:cell><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. AOLDA -Adaptively On-</ns0:cell><ns0:cell>Yes. 
De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>line Latent Dirichlet Allocation.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>The topic labeling method con-</ns0:cell><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>siders the semantic similarity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>between the candidates and the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>topics.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Johann,</ns0:cell><ns0:cell cols='2'>Stanik,</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Walid Maalej, et</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>al., 2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>(Vu et al., 2016)</ns0:cell><ns0:cell /><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Pre-defined.</ns0:cell><ns0:cell>Yes. Soft Clustering algorithm</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>that uses vector representation</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>of words from Word2vec.</ns0:cell><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Villarroel</ns0:cell><ns0:cell>et</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. DBSCAN clustering algo-</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2016)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>rithm. Each cluster has a label</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>composed of the five most fre-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>quent terms.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Gu and Kim, 2015)</ns0:cell><ns0:cell>Semantic</ns0:cell><ns0:cell>Depen-</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. Clustering aspect-opinion</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>dence Graph.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>pairs with the same aspects.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Phong et al., 2015)</ns0:cell><ns0:cell>Vector space.</ns0:cell><ns0:cell /><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. Word2vec and K-means.</ns0:cell><ns0:cell>Yes. 
De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Guzman and Walid</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. LDA approach.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Maalej, 2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(N. Chen et al., 2014)</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>Yes. LDA and ASUM approach</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>with labeling.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Iacob and Harri-</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. LDA approach.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>son, 2013)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Galvis Carre&#241;o and</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>Yes. Aspect and Sentiment</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Winbladh, 2013)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Unification Model (ASUM) ap-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>proach.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Harman,</ns0:cell><ns0:cell cols='2'>Jia,</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Pre-defined.</ns0:cell><ns0:cell>Yes. Greedy-based clustering</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>and</ns0:cell><ns0:cell cols='2'>Yuanyuan</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>algorithm.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Zhang, 2012)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Palomba, Linares-</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic-modeling.</ns0:cell><ns0:cell>Yes. AR-Miner approach with</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>V&#225;squez,</ns0:cell><ns0:cell cols='2'>Bavota,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>labeling.</ns0:cell></ns0:row><ns0:row><ns0:cell>Oliveto,</ns0:cell><ns0:cell cols='2'>Penta,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>et al., 2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>5/21PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistics about the datasets from 8 apps of different categories used to train the RE-BERT model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>eBay</ns0:cell><ns0:cell>Evernote</ns0:cell><ns0:cell>Facebook</ns0:cell><ns0:cell>Netflix</ns0:cell><ns0:cell>Photo editor</ns0:cell><ns0:cell>Spotify</ns0:cell><ns0:cell>Twitter</ns0:cell><ns0:cell>WhatsApp</ns0:cell></ns0:row><ns0:row><ns0:cell>Reviews</ns0:cell><ns0:cell>1,962</ns0:cell><ns0:cell>4,832</ns0:cell><ns0:cell>8,293</ns0:cell><ns0:cell>14,310</ns0:cell><ns0:cell>7,690</ns0:cell><ns0:cell>14,487</ns0:cell><ns0:cell>63,628</ns0:cell><ns0:cell>248,641</ns0:cell></ns0:row><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Shopping</ns0:cell><ns0:cell>Productivity</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Entertainment</ns0:cell><ns0:cell>Photography</ns0:cell><ns0:cell>Music and Audio</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Communication</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Example of emerging issue prediction alert for the 'Time of arrival' requirement of the Uber Eats app reviews triggered by MAPP-Reviews.Listed delivery times are inaccurate majority of the time.Everyone cancels and it ends up taking twice the estimated time to get the food delivered. You dont get updated on delays unless you actively monitor. Uber has failed at food delivery. Not easy to cancel. Also one restaurant that looked available said I was too far away after I had filled my basket. Other than that the app is easy to use.Use door dash or post mates, uber eats has definitely gone down in quality. Extremely inaccurate time estimates and they ignore your support requests until its to late to cancel an order and get a refund.App is good but this needs to be more reliable on its service. the estimated arrival time needs to be matched or there should be a option to cancel the order if they couldnt deliver on estimated time. Continuesly changing the estimated delivery time after the initial order confirmation is inappropriate.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell>Vol.</ns0:cell><ns0:cell>Token</ns0:cell><ns0:cell>Review</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>768</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>849</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>This app consistently gives incorrect, shorter delivery time frame to get you to order, but the deliveries</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>frame</ns0:cell><ns0:cell>are always late. The algorithm to predict the delivery time should be fixed so that you'll stop lying to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>your customers.</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>896</ns0:cell><ns0:cell>Arrival</ns0:cell><ns0:cell>Ordered food and they told me it was coming. The wait time was supposed to be 45 minutes. 
They</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>kept pushing back the arrival time, and we waited an hour and 45 minutes for food, only to have</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>them CANCEL the order and tell us it wasn't coming. If an order is unable to be placed you need to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>tell customers BEFORE they've waited almost 2 HOURS for their food.</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>1247</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>The app was easy to navigate but the estimated delivery time kept changing and it took almost 2hrs</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>to receive food and I live less than 4 blocks away pure ridiculousness if I would of know that I would</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>of just walked there and got it.</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>1056</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>997</ns0:cell><ns0:cell>More</ns0:cell><ns0:cell>Uber Eats lies. Several occasions showed delays because 'the restaurant requested more time' but</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>really it was Uber Eats unable to find a driver. I called the restaurants and they said the food has</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>been ready for over an hour!</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>939</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Your app is unintuitive. Delivery times are wildly inaccurate and orders are canceled with no</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>explanation, information or help.</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>854</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell>This service is terrible. Delivery people never arrive during the estimated time.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>994</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>Delivery times increase significantly once your order is accepted. 25-45 mins went up to almost</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>2hours! 10 1257 Time</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>esti-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mate</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>1443</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Delivery times are constantly updated, what was estimated at 25-35 minutes takes more than two</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>hours. 
I understand it's just an estimate, but 4X that is ridiculous.</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>1478</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Inaccurate delivery time</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>1376</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>1446</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell>Terrible, the estimated time of arrival is never accurate and has regularly been up to 45 MINUTES</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>LATE with no refund. Doordash is infinitely better, install that instead, it also has more restaurants</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>1354</ns0:cell><ns0:cell>Estimed</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>1627</ns0:cell><ns0:cell>Estimed</ns0:cell><ns0:cell>I use this app a lot and recently my order are always late at least double the time im originally quoted.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>Every time my food is cold. Maybe the estimated time should be adjusted to reflect what the actual</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>time may be.</ns0:cell></ns0:row></ns0:table><ns0:note>Used to use this app a lot. Ever since they made it so you have to pay for your delivery to come on time the app is useless. You will be stuck waiting for food for an hour most of the time. The estimated time of arrival is never accurate. Have had my food brought to wrong addresses or not brought at all. I will just take the extra time out of my day to pick up the food myself rather than use this app.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>summarizes the main experimental results. The first scenario (1) with the default parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>obtains superior results compared to the second scenario (2) for all forecast horizons. In general, automatic</ns0:cell></ns0:row><ns0:row><ns0:cell>changepoints obtains 9.33% of model improvement, considering the average of MAPE values from all</ns0:cell></ns0:row><ns0:row><ns0:cell>horizons (h &#8793; 1 to h &#8793; 4).</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of MAPE in General. 
</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell cols='2'>MAPE (Mean &#177; SD)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1) Automatic changepoint</ns0:cell><ns0:cell>(2) Specifying the changepoints</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>13.82 &#177; 16.42</ns0:cell><ns0:cell>15.47 &#177; 14.42</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>15.58 &#177; 19.09</ns0:cell><ns0:cell>16.94 &#177; 17.20</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>16.26 &#177; 20.18</ns0:cell><ns0:cell>17.60 &#177; 18.71</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>16.09 &#177; 19.24</ns0:cell><ns0:cell>17.47 &#177; 18.37</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>MAPE analysis (at the peaks of the time series) of each scenario considering the software requirements.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell cols='2'>MAPE (Mean &#177; SD)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1) Automatic changepoint</ns0:cell><ns0:cell>(2) Specifying the changepoints</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>10.65 &#177; 8.41</ns0:cell><ns0:cell>10.30 &#177; 8.06</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>11.61 &#177; 8.80</ns0:cell><ns0:cell>11.00 &#177; 8.71</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>11.81 &#177; 8.86</ns0:cell><ns0:cell>11.42 &#177; 8.52</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>11.49 &#177; 8.71</ns0:cell><ns0:cell>11.19 &#177; 8.34</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>specific requirement, but the model predicts 800 negative reviews. Even with 20% of MAPE, we can</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>identify a significant increase in negative reviews for a requirement and trigger alerts for preventive</ns0:cell></ns0:row><ns0:row><ns0:cell>software maintenance.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='12'>/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:1:1:NEW 17 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
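To make the MAPE comparison in Tables 6 and 7 concrete, the sketch below (in Python) shows how a Prophet model could be fitted on the weekly negative-review series of one requirement cluster, once with automatic changepoints and once with manually specified changepoints, and then scored with MAPE on a one-week horizon. This is an illustrative sketch, not the authors' implementation: the input file, column names, and changepoint dates are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' implementation). Assumed inputs:
# 'weekly_counts.csv' with columns 'ds' (week) and 'y' (negative-review
# frequency of one requirement cluster); the changepoint dates are invented.
import pandas as pd
from prophet import Prophet  # packaged as 'fbprophet' in older releases

df = pd.read_csv("weekly_counts.csv", parse_dates=["ds"])

h = 1  # forecast horizon in weeks; the paper reports h = 1 as the best horizon
train, test = df.iloc[:-h], df.iloc[-h:]

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float((abs(y_true - y_pred) / abs(y_true)).mean() * 100)

# Scenario 1: automatic changepoints. By default Prophet places 25 potential
# changepoints uniformly over the first 80% of the training series.
m_auto = Prophet()
m_auto.fit(train)

# Scenario 2: changepoints specified from domain knowledge (e.g., release
# dates or past peaks of negative reviews); these dates are placeholders.
m_spec = Prophet(changepoints=["2020-06-01", "2020-11-15"])
m_spec.fit(train)

for name, model in [("automatic", m_auto), ("specified", m_spec)]:
    future = model.make_future_dataframe(periods=h, freq="W")
    forecast = model.predict(future)
    y_pred = forecast["yhat"].iloc[-h:].to_numpy()
    print(name, round(mape(test["y"].to_numpy(), y_pred), 2))
    # A MAPE around 20% is still actionable: predicting 800 negative reviews
    # when 1000 actually occur is enough to expose the upward trend and
    # trigger a preventive-maintenance alert.
```

Under this setup, the trade-off reported in the paper is easy to read off: automatic changepoints give the lower MAPE overall (Table 6), while specifying changepoints at known peaks gives better predictions at the peaks of the series (Table 7).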
"Authors’ Response to Reviews Temporal dynamics of Requirements Engineering from mobile app reviews Vitor Mesaque Alves de Lima, Adailton Ferreira de Araújo, and Ricardo Marcondes Marcacini PeerJ Computer Science RC: Reviewer Comment, AR: Author Response,  Manuscript text The authors would like to thank the area editor and the reviewers for their precious time and invaluable comments. We have carefully addressed all the comments. We provided a point-by-point response to the reviewer’s comments. We’ve added an extra version of the article with a highlight in the parts of the text that have been modified (attached to this document). The corresponding changes and refinements made in the revised article are summarized in our response below. 1. Reviewer 1 1.1. Basic reporting RC: Line 73: reviews. First, we collect, pre-process and extract software requirements from large review datasets. The authors might need to the name of the apps and explain the process in details in section 3. AR: We thank the reviewer for pointing this out. We’ve improved this Section by adding a new subsection titled “App Review” to better clarify how reviews are collected and structured in the first stage of MAPP-Reviews. THE MAPP-REVIEWS METHOD (...) The app stores provide the textual content of the reviews, the publication date, and the rating stars of user-reported reviews. In the first stage of MAPP-Reviews, raw reviews are collected from the app stores using a web crawler tool through a RESTful API. At this stage, there is no pre-processing in the textual content of reviews. Data is organized in the appropriate data structure and automatically batched to be processed by the requirements extraction stage of MAPP-Reviews. In the experimental evaluation presented in this article, we used reviews collected from three food delivery apps: Uber Eats, Foodpanda, and Zomato. (...) RC: Line 74: Then, the software requirements associated with negative reviews are organized into groups according to their content similarity. The authors might need to explain how. AR: We thank the reviewer for pointing this out. We’ve added more details in this section about what technique we use for grouping reviews. 1 In “INTRODUCTION” section: (...) Then, the software requirements associated with negative reviews are organized into groups according to their content similarity by using clustering technique. (...) In Section “THE MAPP-REVIEWS METHOD”, we provide more details about the process of grouping requirements extracted from reviews. MAPP-Reviews uses the k-means algorithm to obtain a clustering model of semantically similar software requirements. RC: Figure 1. : It is very high level. Lack of analytical details need to show the steps in more explicit way. AR: We very much appreciate your observation. We’ve improved the figure by adding more detail about what is summarily done in each step. In “THE MAPP-REVIEWS METHOD” section: (...) (...) RC: Line 201: The reviews are from 8 apps of different categories. What are the name of these apps and their categories? AR: We thank the reviewer for pointing this out. We’ve added a new table to the text with detailed information about the datasets used for model training. In “THE MAPP-REVIEWS METHOD” section: 2 (...) (...) RC: Line 318-9: It would be better to move it up. After you introduce the model. AR: Thank you so much for the suggestion. We’ve moved the text explaining why we use Prophet to the beginning of the section when we introduced Prophet. In “THE MAPP-REVIEWS METHOD” section: (...) 
MAPP-Reviews uses the Prophet Forecasting Model (Taylor and Letham, 2018). Prophet is a model from Facebook researchers for forecasting time series data considering non-linear trends at different time intervals, such as yearly, weekly, and daily seasonality. We chose the Prophet model for the MAPP-Reviews method due to the ability to incorporate domain knowledge into the predictive model. The Prophet model consists of three main components, as defined in Equation 7, (...) RC: Line 321: Does it mean that you consider the negative review as the change-point?. This part is lack of a clear explanation. Moreover, it needs more supporting example (e.g., why did the author consider the negative review as the change-point). AR: In our research, time series presents the evolution of a software requirement over time observing negative reviews for this requirement. Time series frequently have abrupt changes in their trajectories. By default, our model will automatically detect these changepoints. Changepoints represent abrupt changes in the trend that may be associated with domain knowledge factors. Therefore, changepoints are not just negative reviews, but represent the trend of the time series. In summary, a changepoints is a date that represents a trend change in the time series that can also be associated with domain knowledge factors. In “THE MAPP-REVIEWS METHOD” section: (...) Changepoints play an important role in forecasting models, as they represent abrupt changes in the trend. Changepoints can be estimated automatically during model training, but domain knowledge, such as the date of app releases, marketing campaigns, and server failures, are changepoints that can be added manually by software engineers. Therefore, the changepoints could be specified by the analyst using known dates of product launches and other growth-altering events or may be automatically selected given a set of candidates. (...) 3 1.2. Experimental design RC: Line 76 and 217: software requirement from negative reviews: The authors need to explain how they classify the negative reviews in more details (not only by considering the low rate as negative), as well as The authors, might need to read the reviews to check if it can be considered as negative or positive reviews, even if the rate is low/high (1/5). Example, the reviewers might rate the app 3 out of 5 and it might be considered as negative review? AR: Thank you very much for your consideration, we completely agree with you. We assume that low rating stars are negative reviews according to research supported by the literature. One of the next important steps in our research is to incorporate sentiment analysis to classify the review polarity (positive, negative, and neutral). We’ve added more details in the article about this in the Limitations Subsection. In “DISCUSSION” section: (...) Another issue that is important to highlight is the sentiment analysis in app reviews. We assume that it is possible to improve the classification of negative reviews by incorporating sentiment analysis techniques. We can incorporate a polarity classification stage (positive, negative, and neutral) of the extracted requirement, allowing a software requirements-based sentiment analysis. In the current state of our research, we only consider negative reviews with low ratings and associate them with the software requirements mentioned in the review. (...) RC: Line 219: Thus, we use RE-BERT to extract... It is not clear how this step is preformed? 
How did the authors classify them into complaints, bad usage experience, or malfunction. AR: Thank you for noting this point. We do not classify complaints, bad usage experience, or malfunctions in separate groups. We only consider that the identified software requirements may be associated with these groups of issues. We’ve improved the text to clarify that we don’t do this separation of issues into subgroups yet. In “THE MAPP-REVIEWS METHOD” section: (...) Thus, we use RE-BERT to extract only software requirements mentioned in reviews that may involve complaints, bad usage experience, or malfunction of app features. (...) RC: Line 193: Does the training data contain negative review? How would the model be able to predict them ? The authors need to explain these points. AR: Predictive model training is performed with negative reviews. 4 In “THE MAPP-REVIEWS METHOD” section: (...) During model training, a time series can be divided into training and testing. The terms g(t), s(t) and h(t) can be automatically inferred by classical statistical methods in the area of time series analysis, such as the Generalized Additive Model (GAM) (Hastie and Tibshirani, 1987) used in Prophet. In the training step, the terms are adjusted to find an additive model that best fits the known observations in the training time series. Next, we evaluated the model in new data, i.e., the testing time series. (...) RC: Line 233: It is not clear how BERT would help your model? why do you need it at the first place? AR: We thank the reviewer for pointing this out. We’ve improved the text to better clarify the reason for using a BERT model-based strategy for requirements extraction. In “THE MAPP-REVIEWS METHOD” section: (...) We represent each software requirement through contextual word embedding. Word embeddings are vector representations for textual data in an embedding space, where we can compare two texts semantically using similarity measures. Different models of word embeddings have been proposed, such as Word2vec Word2vec (Mikolov et al., 2013), Glove (Pennington, Socher, and Manning, 2014), FastText236(Bojanowski et al., 2017) and BERT (Devlin et al., 2018). We use the BERT SentenceTransformers237model (Reimers and Gurevych, 2019) to maintain an neural network architecture similar to RE-BERT. BERT is a contextual neural language model, where for a given sequence of tokens, we can learn a word embedding representation for a token. The semantic proximity between tokens and entire sentences can be calculated by their word embeddings and the embeddings can be used as input to train the classifier. BERT is a contextual neural language model, where for a given sequence of tokens, we can learn a word embedding representation for a token. Word embeddings can calculate the semantic proximity between tokens and entire sentences, and the embeddings can be used as input to train the classifier. BERT-based models are promising to learn contextual word embeddings from long-term dependencies between tokens in sentences and sentences (Araujo and Marcacini, 2021). However, we highlight that a local context more impacts the extraction of software requirements from reviews, i.e., tokens closer to those of software requirements are more significant (Araujo and Marcacini, 2021). Therefore, RE-BERT explores local contexts to identify relevant candidates for software requirements. 
Formally, let E = {r1 , r2 , ..., rn } be a set of n extracted software requirements, where ri = (t1 , ..., tk ) are a sequence of k tokens of the requirement ri . BERT explore a masked language modeling procedure, i.e., BERT model first generates a corrupted x̂ version of the sequence, where approximately 15% of the words are randomly selected to be replaced by a special token called [MASK] (Araujo and Marcacini, 2021). One of the training objectives is the noisy reconstruction defined in Equation 1, (...) RC: Line 234: It is not clear how this objective is related to the proposed model. AR: We appreciate the reviewer’s comments. We’ve improved the text to provide more details about the aforemen- 5 tioned objective. In “THE MAPP-REVIEWS METHOD” section: (...) BERT explore a masked language modeling procedure, i.e., BERT model first generates a corrupted x̂ version of the sequence, where approximately 15% of the words are randomly selected to be replaced by a special token called [MASK]. One of the training objectives is the noisy reconstruction defined in Equation 1, (...) 1.3. Validity of the findings RC: In general: Lack of a comprehensive experiment. The model should be evaluated with some of the existing solutions to see how its performance. AR: Our proposal is new compared to existing works in the literature. As we demonstrated in Section 2, there is no previous research that explores temporal dynamics using a predictive model. Therefore, we don’t have a baseline to compare our proposal with others. RC: Line 355: we used a dataset with 86,610 reviews of: Did they collect the datasets or is it publicly available? if it is publicly available where is the reference and why did they choose it? What is the dimensionality of the dataset? What is the name of these apps. AR: Thank you for helping us with this question. We collected the data in stage 1 (App Reviews) of the MAPP-Reviews, described in detail in section 3. The raw data is available at https://github.com/ vitormesaque/mapp-reviews along with the application code. Raw reviews used are from the apps described in Section 4 of the text (Uber Eats, Zomato, and Foodpanda). We make it clearer in the paper that there is a GitHub repository with source code and datasets for reproducibility purposes. However, we better organize direct access to results. We share the app reviews dataset at https://github. com/vitormesaque/mapp-reviews/blob/main/App_Reviews_Dataset.zip and forecast results at https://github.com/vitormesaque/mapp-reviews/blob/main/results-all-forecast. xlsx. Additionally, more details of all results are available on the GitHub repository. In “RESULTS” section: (...) For this experimental evaluation, we used a dataset with 86,610 reviews of three food delivery apps:Uber Eats, Foodpanda, and Zomato. The dataset was obtained in the first stage (App Reviews) of MAPP-Reviews and is available at https://github.com/vitormesaque/mapp-reviews. The choice of these apps was based on their popularity and the number of reviews available. The reviews are from September 2018 to January 2021. (...) RC: Line 358: clustering stage (with k = 300 clusters) why k=300. AR: We use the Silhouette measure to find the optimal number of clusters. If too many points have a negative Silhouette value, it could indicate that we have created too many or too few clusters. The Silhouette score reaches its global maximum at the optimal k. 6 2. Reviewer 2 2.1. Basic reporting RC: The paper does not follow the standard sections for the journal. 
Furthermore, Acknowledgments acknowledge funders as additional contradiction to the guidelines. AR: We thank the reviewer for pointing this out. We reviewed the structure of the article in the following order: (1) Introduction, (2) Background and Related Work, (3) The MAPP-Reviews Method, (4) Results, (5) Discussions, and (6) Conclusions. We removed the acknowledgments section from the article. In “INTRODUCTION” section: (...) This paper is structured as follows. Section “Background and Related Work” presents the literature review and related work about mining user opinions to support requirement engineering and emerging issue detection. In “MAPP-Reviews method” section, we present the architecture of the proposed method. We present the main results in “Results” section. Thereafter, we evaluate and discuss the main findings of the research in “Discussion” section. Finally, in “Conclusions” section, we present the final considerations and future work. (...) RC: The authors do not share any raw data on their evaluation. Neither the app review nor the results are shared. Therefore, none of their results can be judged. AR: Thank you for helping us clarify this issue. We share raw data, source code and results in a repository. The link to the repository was provided in the article submission process. We make it clearer in the paper that there is a GitHub repository with source code and datasets for reproducibility purposes. Furthermore, we better organize direct access to results. We share the app reviews dataset at https://github. com/vitormesaque/mapp-reviews/blob/main/App_Reviews_Dataset.zip and forecast results at https://github.com/vitormesaque/mapp-reviews/blob/main/results-all-forecast. xlsx. Additionally, more details of all results are available on the GitHub repository. In “RESULTS” section: (...) For reproducibility purposes, we provide a GitHub repository at https://github.com/ vitormesaque/mapp-reviews containing the source code and details of each stage of the method, as well as the raw data and all the results obtained. (...) 2.2. Experimental design RC: No comment. 7 2.3. Validity of the findings RC: The data used for the evaluation was not made available. Therefore the validity cannot be judged. Explaining the data set as well as sharing it online would improve the paper a lot. Furthermore, the authors could also publish the results on the data when having applied the MAPP method. AR: Thanks again for helping us in this regard. We share the source code, datasets, and results in the experimental evaluation of our method at https://github.com/vitormesaque/mapp-reviews. 2.4. Additional comments RC: The paper covers an interesting and promising topic. It is written in a clear and well understandable language. The two mentioned problems can easily be addressed with a revision of the paper. AR: : We are grateful for the time spent reviewing the article. RC: As minor addition I would also suggest to extend the motivation with references that support the goal of continuous evaluation of user feedback side by side to the development process like T Palomba et al. [1] and Hassan et al. [2] as well as Scherr [3] et al. do. 1. Palomba, F., Linares-Vásquez, M., Bavota, G., Oliveto, R., Di Penta, , Poshyvanyk, D., De Lucia, A.: Crowdsourcing User Reviews to Support the Evolution of Mobile Apps. Journal of Systems and Software(March 2018), 143-162 DOI:10.1016/j.jss.2017.11.043 (137) 2. 
Hassan, S., Tantithamthavorn, C., Bezemer, C.-P., Hassan, A.: Studying the dialogue between users and developers of free apps in the Google Play Store. Empirical Software Engineering 23(3), 1275–1312 https://doi.org/10.1007/s10664-017-9538-9 (2018) 3. Scherr, S., Hupp, S., Elberzhager, F.: Establishing Continuous App Improvement by Considering Heterogenous Data Sources. International Journal of Interactive Mobile Technologies (iJIM) 15(10), 66-86 (2021). AR: We greatly appreciate your suggestions. At your suggestion, we decided to add the work (Palomba, LinaresVasquez, Bavota, Oliveto, Penta, et al., 2018) in the overview table of related works. Table 1: Overview of related works. Reference Data tion (...) (Palomba, LinaresVasquez, Bavota, Oliveto, Penta,et al., 2018) Representa- Pre-processing and Extraction of Requirements Requirements/Topics Clustering and Labeling Temporal Dynamics (...) (...) (...) (...) Bag-of-words. Topic-modeling. Yes. AR-Miner approach with labeling. No. In addition, following your suggestion we extend the motivation by mentioning the importance of continuously evaluating user feedback alongside the development process. 8 In “INTRODUCTION” section: (...) One of the main challenges for software quality maintenance is identifying emerging issues, e.g., bugs, in a timely manner (April and Abran, 2012). These issues can generate huge losses, as users can fail to perform important tasks or generate dissatisfaction that leads the user to uninstall the app. A recent survey showed that 78.3% of developers consider removing unnecessary and defective requirements to be equally or more important than adding new requirements (Nayebi, Kuznetsov, et al., 2018). According to Lientz and Swanson (1980), maintenance activities are categorized into four classes: i) adaptive - changes in the software environment; ii) perfective - new user requirements; iii) corrective - fixing errors; and iv) preventive - prevent problems in the future. The authors showed that around 21% of the maintenance effort was on the last two types (Bennett and Rajlich, 2000). Specifically, in the context of mobile apps Mcilroy, Ali, and Hassan (2016) found that rationale for the update most frequently communicated task in app stores is bug fixing which occurs in 63% of the updates. Thus, approaches that automate the analysis of potentially defective software requirements from app reviews are important to make strategic updates, as well as prioritization and planning of new releases (Licorish, Savarimuthu, and Keertipati, 2017). In addition, the app stores offer a more dynamic way of distributing the software directly to users, with shorter release times than traditional software systems, i.e., continuous update releases are performed every few weeks or even days (Nayebi, Adams, and Ruhe, 2016). Therefore, app reviews provide quick feedback from the crowd about software misbehavior that may not necessarily be reproducible during regular development/testing activities, e.g., device combinations, screen sizes, operating systems and network conditions (Palomba, LinaresVa ´squez, Bavota, Oliveto, Penta, et al., 2018). This continuous crowd feedback can be used by developers in the development and preventive maintenance process. (...) In “BACKGROUND AND RELATED WORK” section: (...) The statistical techniques have been used to find issues such as to examine how app features predict an app’s popularity (M. Chen and X. 
Liu, 2011), to analyze the correlations between the textual size of the reviews and users’ dissatisfaction (Vasa et al., 2012), lower rating and negative sentiments (Hoon 155 et al., 2012), correlations between the rating assigned by users and the number of app downloads (Harman, 156 Jia, and Yuanyuan Zhang, 2012), to the word usage patterns in reviews (Go ´mez et al., 2015; Licorish, 157 Savarimuthu, and Keertipati, 2017), to detect traceability links between app reviews and code changes addressing them (Palomba, Linares-Va ´squez, Bavota, Oliveto, Penta, et al., 2018), and explore the feature lifecycles in app stores (Sarro et al., 2015). There also exists some work focus on defining taxonomies of reviews to assist mobile app developers with planning maintenance and evolution activities (Di Sorbo et al., 2016; Ciurumelea et al., 2017; Nayebi, Kuznetsov, et al., 2018). In addition to user reviews, previous works (Guzman, Alkadhi, and Seyff, 2016; Guzman, Alkadhi, and Seyff, 2017; Nayebi, Cho, and Ruhe, 2018) explored how a dataset of tweets can provide complementary information to support mobile app development. 9 3. Reviewer 3 3.1. Basic reporting RC: - Missing the threats to validity and/or limitation section. AR: Thank you for pointing this out. We’ve added the limitations section to the article. In “DISCUSSION” section: 3.2. Limitations The results of our research show there are new promising prospects for the future. However, in the scope of our experimental evaluation, we just investigate the incorporation of software domain-specific information through trend changepoints. Company-sensitive information and the development team’s domain knowledge were not considered in the predictive model. Domain knowledge information provided by software engineers can potentially be exploited in the future to improve the predictive model. For this purpose, we depend on sensitive company data related to the software development and management process, e.g., release planning, server failures, and marketing campaigns. In particular, we can investigate the relationship between the release dates of app updates and the textual content of the update publication with the upward trend in negative evaluations of a software requirement. In a real-world scenario in the industry, software engineers using MAPP-Reviews will be able to provide domain-specific information. In the future, we intend to evaluate our proposed method in the industry and explore more specifics of the domain knowledge to improve the predictive model. Another issue that is important to highlight is the sentiment analysis in app reviews. We assume that it is possible to improve the classification of negative reviews by incorporating sentiment analysis techniques. We can incorporate a polarity classification stage (positive, negative, and neutral) of the extracted requirement, allowing a software requirements-based sentiment analysis. In the current state of our research, we only consider negative reviews with low ratings and associate them with the software requirements mentioned in the review. Finally, to use MAPP-Reviews in a real scenario, there must be already a sufficient amount of reviews distributed over time, i.e., a minimum number of time-series observations available for the predictive model to work properly. Therefore, in practical terms, our method is more suitable when large volumes of app reviews are available to be analyzed. RC: - Missing the discussions and/or implications section. 
AR: We very much appreciate your observation. Discussions and implications of the results are within the general section of Experimental Evaluation. For better understanding and organization of the text, We have relocated and improved the content of this section into two new sections (Results and Discussions). RC: - It still isn’t clear to me how this can be used concretely as this is not detecting emerging issues, but rather predicting the future frequency of a requirement in negative reviews. Could you elaborate more on why this prediction is needed as the requirements from negative reviews should have already impacted the app’s rating? Additionally, as a developer looking at the results, when would I say that the problem/requirement is becoming major? (e.g., the slope of change/the change in the frequency is greater than some particular threshold?). These could be addressed in the discussions and-or implications section. 10 AR: Thanks for your observation. We add a paragraph in the Results and Discussion section to summarize how MAPP-Reviews can be used concretely and we revisit the research question. In “DISCUSSION” section: An issue related to a software requirement reported in user reviews is defined as an emerging issue when there is an upward trend in these requirements in negative reviews. The MAPP-Reviews train predictive models to identify requirements with higher negative evaluation trends, but inevitably a negative review will impact the rating. However, our objective is to mitigate this negative impact. The results show that the best prediction horizon (h) is one week (h = 1). In practical terms, this means the initial trend of a defective requirement can be identified one week in advance. We can note that a prediction error rate (MAPE) of up to 20% is acceptable. For example, consider that the prediction is 1000 negative reviews for a specific requirement at a given point, but the model predicts 800 negative reviews. Even with 20% of MAPE, we can identify a significant increase in negative reviews for a requirement and trigger alerts for preventive software maintenance, i.e., when MAPPReviews predicts an uptrend, the software development team should receive an alert. In the time series forecast shown in Figure 7, we observe that the model would be able to predict the peaks of negative reviews for the software requirement one week in advance. We show that MAPP-Reviews provides software engineers with tools to perform software maintenance activities, particularly preventive maintenance, by automatically monitoring the temporal dynamics of software requirements. (...) 3.3. Experimental Design RC: - Some methodological decisions need more details (e.g., I’m not quite sure how you decide the starting number of k? does selecting a different starting number k affect the result in any way? what’s the rationale behind selecting six clusters in your experiment? Can a requirement belong to more than one cluster?). AR: We use the Silhouette measure to find the optimal number of clusters. If too many points have a negative Silhouette value, it could indicate that we have created too many or too few clusters. The Silhouette score reaches its global maximum at the optimal k. For the experimental evaluation we used the six most frequent clusters in negative reviews. A requirement can belong to two clusters, however we consider that it belongs to the cluster in which it is best allocated by the Silhouette measure. 
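For illustration, the short Python sketch below shows one way the Silhouette-based choice of k described above could be implemented: requirement mentions are embedded with a SentenceTransformers BERT model and k-means is run for several values of k, keeping the k whose Silhouette score is highest. This is an assumed sketch, not the authors' code; the encoder name and the example requirement texts are placeholders.

```python
# Illustrative sketch only (not the authors' code). The encoder name and the
# example requirement mentions below are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

requirements = [
    "delivery time", "estimated time", "arrival time",
    "push notifications", "payment method", "order tracking",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT sentence encoder
embeddings = encoder.encode(requirements)

best_k, best_score = None, -1.0
# In the paper the search covers a few hundred clusters (k = 300 was selected
# for roughly 86,000 reviews); the small range here is only for illustration.
for k in range(2, len(requirements)):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    score = silhouette_score(embeddings, labels)  # global maximum marks the best k
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)
```

If many points end up with a negative Silhouette value, that is the sign mentioned above that the chosen k is too large or too small.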
RC: - Your experiment should also include using app update dates as changepoints because prior research suggested that naturally, you will see a spike of user reviews shortly after app updates. AR: We agree with this suggestion. The next steps in our research is to input domain knowledge into the predictive model, as well as app update release dates. We have added a comment on this issue in the Limitations section. 11 In “DISCUSSION” section: (...) 3.4. Limitations The results of our research show there are new promising prospects for the future. However, in the scope of our experimental evaluation, we just investigate the incorporation of software domain-specific information through trend changepoints. Company-sensitive information and the development team’s domain knowledge were not considered in the predictive model. Domain knowledge information provided by software engineers can potentially be exploited in the future to improve the predictive model. For this purpose, we depend on sensitive company data related to the software development and management process, e.g., release planning, server failures, and marketing campaigns. In particular, we can investigate the relationship between the release dates of app updates and the textual content of the update publication with the upward trend in negative evaluations of a software requirement. In a real-world scenario in the industry, software engineers using MAPP-Reviews will be able to provide domain-specific information. In the future, we intend to evaluate our proposed method in the industry and explore more specifics of the domain knowledge to improve the predictive model. (...) 3.5. Validity of the findings RC: - For a journal paper, a further extensive evaluation is needed. I would argue that reviews from food ordering apps did not contain rich data regarding the problems/requirements with the app itself but contain a great number of information/complaints on the services. AR: Thank you very much for your observation. We’ve improved the text by explaining the pattern of user complaints about bad services and functionality issues. In “DISCUSSION” section: (...) The forecast presented in Figure Figure 7 shows that the model was able to predict the peak of negative reviews for the 'Arriving time' requirement. An emerging issue detection system based only on the frequency of a topic could trigger many false detections, i.e., it would not detect defective functionality but issues related to the quality of services offered. Analyzing user reviews, we found that some complaints are about service issues rather than defective requirements. For example, the user may complain about the delay in the delivery service and negatively rate the app, but in reality, they are complaining about the restaurant, i.e., a problem with the establishment service. We’ve seen that this pattern of user complaints is repeated across other app domains, not just the food delivery service. In delivery food apps, these complaints about service are constant, uniform, and distributed among all restaurants available in the app. In Table Table 4, it is clear that the emerging issue refers to the deficient implementation of the estimated delivery time prediction functionality. Our experiment showed that when there is a problem in the app related to a defective software requirement, there are increasing complaints associated with negative reviews regarding that requirement. (...) 12 RC: - How applicable can this be used for apps in other categories? 
Your experimentation is done only on one type of apps and only popular ones (please discuss this in the threats to validity and/or limitation section). AR: Thanks for pointing this out. Although only one category of apps was explored in our experiment, it is important to note that the method can be applied to apps in different categories because our requirements extractor uses a cross-domain training strategy, where the model was trained in different apps. In “THE MAPP-REVIEWS METHOD” section: (...) MAPP-Reviews uses the pre-trained RE-BERT (Araujo and Marcacini, 2021) model to extract software requirements from app reviews. RE-BERT is an extractor developed from our previous research. We trained the RE-BERT model using a labeled reviews dataset generated with a manual annotation process, as described by (Dabrowski et al., 2020). The reviews are from 8 apps of different categories as showed in Table 2. RE-BERT uses a cross-domain training strategy, where the model was trained in 7 apps and tested in one unknown app for the test step. (...) Additionally, we have placed restrictions on using our method in less popular apps in the Limitations section. In “DISCUSSION” section: (...) Finally, to use MAPP-Reviews in a real scenario, there must be already a sufficient amount of reviews distributed over time, i.e., a minimum number of time-series observations available for the predictive model to work properly. Therefore, in practical terms, our method is more suitable when large volumes of app reviews are available to be analyzed. (...) RC: - Sensitivity analysis may be needed. AR: Thank you for giving us this direction. We have added a paragraph in the Discussions section about a future direction of our research on sensitivity analysis. In “DISCUSSION” section: (...) We intend to explore further our method to deeply determine the input variables that most contribute to the output behavior and the non-influential inputs or to determine some interaction effects within the model. In addition, sensitivity analysis can help us reduce the uncertainties found more effectively and calibrate the model. 3.6. Additional comments RC: - Interesting research, relevant to the journal. AR: We are grateful for the time spent reviewing the article. 13 RC: - It is well written, structured, and presented. AR: Thank you so much for reviewing our paper. RC: - Extensive literature review. AR: Thanks for appreciating this. RC: - The authors provided a code to examine. AR: Thank you for reviewing this point. 14 "
Here is a paper. Please give your review comments after reading it.
328
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opinion mining for app reviews aims to analyze people's comments from app stores to support data-driven requirements engineering activities, such as bug report classification, new feature requests, and usage experience. However, due to a large amount of textual data, manually analyzing these comments is challenging, and machine-learning-based methods have been used to automate opinion mining. Although recent methods have obtained promising results for extracting and categorizing requirements from users' opinions, the main focus of existing studies is to help software engineers to explore historical user behavior regarding software requirements. Thus, existing models are used to support corrective maintenance from app reviews, while we argue that this valuable user knowledge can be used for preventive software maintenance. This paper introduces the temporal dynamics of requirements analysis to answer the following question: how to predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation? We present the MAPP-Reviews (Monitoring App Reviews) method, which (i) extracts requirements with negative evaluation from app reviews, (ii) generates time series based on the frequency of negative evaluation, and (iii) trains predictive models to identify requirements with higher trends of negative evaluation. The experimental results from approximately 85,000 reviews show that opinions extracted from user reviews provide information about the future behavior of an app requirement, thereby allowing software engineers to anticipate the identification of requirements that may affect the future app's ratings.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opinion mining for app reviews aims to analyze people's comments from app stores to support data-driven requirements engineering activities, such as bug report classification, new feature requests, and usage experience. However, due to a large amount of textual data, manually analyzing these comments is challenging, and machine-learning-based methods have been used to automate opinion mining. Although recent methods have obtained promising results for extracting and categorizing requirements from users' opinions, the main focus of existing studies is to help software engineers to explore historical user behavior regarding software requirements. Thus, existing models are used to support corrective maintenance from app reviews, while we argue that this valuable user knowledge can be used for preventive software maintenance. This paper introduces the temporal dynamics of requirements analysis to answer the following question: how to predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation? We present the MAPP-Reviews (Monitoring App Reviews) method, which (i) extracts requirements with negative evaluation from app reviews, (ii) generates time series based on the frequency of negative evaluation, and (iii) trains predictive models to identify requirements with higher trends of negative evaluation. 
The experimental results from approximately 85,000 reviews show that opinions extracted from user reviews provide information about the future behavior of an app requirement, thereby allowing software engineers to anticipate the identification of requirements that may affect the future app's ratings.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Opinions extracted from app reviews provide a wide range of user feedback to support requirements engineering activities, such as bug report classification, new feature requests, and usage experience <ns0:ref type='bibr' target='#b12'>(Dabrowski et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Martin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>AlSubaihin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Araujo and Marcacini, 2021)</ns0:ref>.</ns0:p><ns0:p>However, manually analyzing a reviews dataset to extract useful knowledge from the opinions is challenging because of the large amount of data and the high frequency of new reviews published by users <ns0:ref type='bibr' target='#b33'>(Johanssen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Martin et al., 2016)</ns0:ref>. To deal with these challenges, opinion mining has been increasingly used for computational analysis of the people's opinions from free texts (B. <ns0:ref type='bibr' target='#b37'>Liu, 2012)</ns0:ref>.</ns0:p><ns0:p>In the context of app reviews, opinion mining allows extracting excerpts from comments and mapping them to software requirements, as well as classifying the positive, negative or neutral polarity of these requirements according to the users' experience <ns0:ref type='bibr' target='#b12'>(Dabrowski et al., 2020)</ns0:ref>.</ns0:p><ns0:p>One of the main challenges for software quality maintenance is identifying emerging issues, e.g., bugs, in a timely manner <ns0:ref type='bibr' target='#b1'>(April and Abran, 2012)</ns0:ref>. These issues can generate huge losses, as users can fail to perform important tasks or generate dissatisfaction that leads the user to uninstall the app. A recent survey showed that 78.3% of developers consider removing unnecessary and defective requirements to be equally or more important than adding new requirements <ns0:ref type='bibr' target='#b49'>(Nayebi, Kuznetsov, et al., 2018)</ns0:ref>. According to <ns0:ref type='bibr' target='#b36'>Lientz and Swanson (1980)</ns0:ref>, maintenance activities are categorized into four classes: i) adaptive -changes PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science in the software environment; ii) perfective -new user requirements; iii) corrective -fixing errors; and iv) preventive -prevent problems in the future. The authors showed that around 21% of the maintenance effort was on the last two types <ns0:ref type='bibr' target='#b5'>(Bennett and Rajlich, 2000)</ns0:ref>. Specifically, in the context of mobile apps <ns0:ref type='bibr'>Mcilroy, Ali, and Hassan (2016)</ns0:ref> found that rationale for the update most frequently communicated task in app stores is bug fixing which occurs in 63% of the updates. Thus, approaches that automate the analysis of potentially defective software requirements from app reviews are important to make strategic updates, as well as prioritization and planning of new releases <ns0:ref type='bibr' target='#b35'>(Licorish, Savarimuthu, and Keertipati, 2017)</ns0:ref>. 
In addition, the app stores offer a more dynamic way of distributing the software directly to users, with shorter release times than traditional software systems, i.e., continuous update releases are performed every few weeks or even days <ns0:ref type='bibr' target='#b47'>(Nayebi, Adams, and Ruhe, 2016)</ns0:ref>. Therefore, app reviews provide quick feedback from the crowd about software misbehavior that may not necessarily be reproducible during regular development/testing activities, e.g., device combinations, screen sizes, operating systems and network conditions <ns0:ref type='bibr' target='#b56'>(Palomba, Linares-V&#225;squez, Bavota, Oliveto, Penta, et al., 2018)</ns0:ref>. This continuous crowd feedback can be used by developers in the development and preventive maintenance process.</ns0:p><ns0:p>Using an opinion mining approach, we argue that software engineers can investigate bugs and misbehavior early when an app receives negative reviews. Opinion mining techniques can organize reviews based on the identified software requirements and their associated user's sentiment <ns0:ref type='bibr' target='#b12'>(Dabrowski et al., 2020)</ns0:ref>. Consequently, developers can examine negative reviews about a specific feature to understand the user's concerns about a defective requirement and potentially fix it more quickly, i.e., before impacting many users and negatively affecting the app's ratings.</ns0:p><ns0:p>Different strategies have recently been proposed to discover these emerging issues <ns0:ref type='bibr' target='#b77'>(Zhao et al., 2020)</ns0:ref>, such as issues categorization <ns0:ref type='bibr' target='#b70'>(Tudor and Walter, 2006;</ns0:ref><ns0:ref type='bibr' target='#b31'>Iacob and Harrison, 2013;</ns0:ref><ns0:ref type='bibr' target='#b16'>Galvis Carre&#241;o and Winbladh, 2013;</ns0:ref><ns0:ref type='bibr' target='#b54'>Pagano and W. Maalej, 2013;</ns0:ref><ns0:ref type='bibr' target='#b45'>Mcilroy, Ali, Khalid, et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Khalid et al., 2015;</ns0:ref><ns0:ref type='bibr'>Panichella et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b59'>Panichella et al., 2016)</ns0:ref>, sentiment analysis of the software requirements to identify certain levels of dissatisfaction <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>, and analyze the degree of utility of a requirement <ns0:ref type='bibr' target='#b27'>(Guzman and Walid Maalej, 2014)</ns0:ref>. These approaches are concerned only with past reviews and acting in a corrective way, i.e., these approaches do not have preventive strategies to anticipate problems that can become frequent and impact more users in the coming days or weeks. Analyzing the temporal dynamics of a requirement from app reviews provides information about a requirement's future behavior. In this sense, we raise the following research question: how do we predict initial trends on defective requirements from users' opinions before negatively impacting the overall app's evaluation?</ns0:p><ns0:p>In this paper, we present the MAPP-Reviews (Monitoring App Reviews) method. MAPP-Reviews explores the temporal dynamics of software requirements extracted from app reviews. First, we collect, pre-process and extract software requirements from large review datasets. Then, the software requirements associated with negative reviews are organized into groups according to their content similarity by using clustering technique. 
The temporal dynamics of each requirement group is modeled using a time series, which indicates the time frequency of a software requirement from negative reviews. Finally, we train predictive models on historical time series to forecast future points. Forecasting is interpreted as signals to identify which requirements may negatively impact the app in the future, e.g., identify signs of app misbehavior before impacting many users and prevent the low app ratings. Our main contributions are briefly summarized below:</ns0:p><ns0:p>&#8226; Although there are promising methods for extracting candidate software requirements from application reviews, such methods do not consider that users describe the same software requirement in different ways with non-technical and informal language. Our MAPP-Reviews method introduces software requirements clustering to standardize different software requirement writing variations.</ns0:p><ns0:p>In this case, we explore contextual word embeddings for software requirements representation, which have recently been proposed to support natural language processing. When considering the clustering structure, we can more accurately quantify the number of negative user mentions of a software requirement over time.</ns0:p><ns0:p>&#8226; We present a method to generate the temporal dynamics of negative ratings of a software requirements cluster by using time series. Our method uses equal-interval segmentation to calculate the frequency of software requirements mentions in each time interval. Thus, a time series is obtained and used to analyze and visualize the temporal dynamics of the cluster, where we are especially interested in intervals where sudden changes happen.</ns0:p></ns0:div> <ns0:div><ns0:head>2/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Time series forecasting is useful to identify in advance an upward trend of negative reviews for a given software requirement. However, most existing forecasting models do not consider domainspecific information that affects user behavior, such as holidays, new app releases and updates, marketing campaigns, and other external events. In the MAPP-Reviews method, we investigate the incorporation of software domain-specific information through trend changepoints. We explore both automatic and manual changepoint estimation.</ns0:p><ns0:p>We carried out an experimental evaluation involving approximately 85,000 reviews over 2.5 years for three food delivery apps. The experimental results show that it is possible to find significant points in the time series that can provide information about the future behavior of the requirement through app reviews.</ns0:p><ns0:p>Our method can provide important information to software engineers regarding software development and maintenance. Moreover, software engineers can act preventively through the proposed MAPP-Reviews approach and reduce the impacts of a defective requirement. This paper is structured as follows. Section 'Background and Related Work' presents the literature review and related work about mining user opinions to support requirement engineering and emerging issue detection. In 'MAPP-Reviews method' section, we present the architecture of the proposed method.</ns0:p><ns0:p>We present the main results in 'Results' section. 
Thereafter, we evaluate and discuss the main findings of the research in 'Discussion' section. Finally, in 'Conclusions' section, we present the final considerations and future work.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND AND RELATED WORK</ns0:head><ns0:p>The opinion mining of app reviews can involve several steps, such as software requirements organization from reviews <ns0:ref type='bibr' target='#b4'>(Araujo and Marcacini, 2021)</ns0:ref>, grouping similar apps using textual features <ns0:ref type='bibr' target='#b67'>(Al-Subaihin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Harman, Jia, and Yuanyuan Zhang, 2012)</ns0:ref>, reviews classification in categories of interest to developers (e.g., Bug and New Features) <ns0:ref type='bibr' target='#b2'>(Araujo, Golo, et al., 2020)</ns0:ref>, sentiment analysis of the users' opinion about the requirements <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019;</ns0:ref><ns0:ref type='bibr' target='#b41'>Malik, Shakshuki, and Yoo, 2020)</ns0:ref>, and the prediction of the review utility score (Ying <ns0:ref type='bibr' target='#b76'>Zhang and Lin, 2018)</ns0:ref>. The requirements extraction has an essential role in these steps since the failure in this task directly affects the performance of the other steps. <ns0:ref type='bibr' target='#b12'>Dabrowski et al. (2020)</ns0:ref> evaluated the performance of the three state-of-the-art requirements extraction approaches: SAFE <ns0:ref type='bibr' target='#b32'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref>, ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> and GuMa <ns0:ref type='bibr' target='#b27'>(Guzman and Walid Maalej, 2014)</ns0:ref>. These approaches explore rule-based information extraction from linguistic features. GuMa <ns0:ref type='bibr' target='#b27'>(Guzman and Walid Maalej, 2014</ns0:ref>) used a co-location algorithm, thereby identifying expressions of two or more words that correspond to a conventional way of referring to things. SAFE <ns0:ref type='bibr' target='#b32'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref> and ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> defined linguistic rules based on grammatical classes and semantic dependence. The experimental evaluation of <ns0:ref type='bibr' target='#b12'>(Dabrowski et al., 2020)</ns0:ref> revealed that the low accuracy presented by the rule-based approaches could hinder its use in practice. <ns0:ref type='bibr' target='#b4'>Araujo and Marcacini (2021)</ns0:ref> After extracting requirements from app reviews, there is a step to identify more relevant requirements and organize them into groups of similar requirements. Traditionally, requirements obtained from user interviews are prioritized with manual analysis techniques, such as the MoSCoW <ns0:ref type='bibr' target='#b70'>(Tudor and Walter, 2006)</ns0:ref> method that categorizes each requirement into groups, and applies the AHP (Analytical Hierarchy Process) decision-making <ns0:ref type='bibr' target='#b65'>(Saaty, 1980)</ns0:ref>. These techniques are not suitable for prioritizing large numbers of software requirements because they require domain experts to categorize each requirement. Therefore, recent studies have applied data mining approaches and statistical techniques <ns0:ref type='bibr' target='#b54'>(Pagano and W. 
Maalej, 2013)</ns0:ref>.</ns0:p><ns0:p>The statistical techniques have been used to find issues such as to examine how app features predict an app's popularity (M. <ns0:ref type='bibr'>Chen and X. Liu, 2011)</ns0:ref>, to analyze the correlations between the textual size of Manuscript to be reviewed Computer Science the reviews and users' dissatisfaction <ns0:ref type='bibr' target='#b71'>(Vasa et al., 2012)</ns0:ref>, lower rating and negative sentiments <ns0:ref type='bibr' target='#b30'>(Hoon et al., 2012)</ns0:ref>, correlations between the rating assigned by users and the number of app downloads <ns0:ref type='bibr' target='#b28'>(Harman, Jia, and Yuanyuan Zhang, 2012)</ns0:ref>, to the word usage patterns in reviews <ns0:ref type='bibr' target='#b22'>(G&#243;mez et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Licorish, Savarimuthu, and Keertipati, 2017)</ns0:ref>, to detect traceability links between app reviews and code changes addressing them <ns0:ref type='bibr' target='#b56'>(Palomba, Linares-V&#225;squez, Bavota, Oliveto, Penta, et al., 2018)</ns0:ref>, and explore the feature lifecycles in app stores <ns0:ref type='bibr' target='#b66'>(Sarro et al., 2015)</ns0:ref>. There also exists some work focus on defining taxonomies of reviews to assist mobile app developers with planning maintenance and evolution activities <ns0:ref type='bibr' target='#b14'>(Di Sorbo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b11'>Ciurumelea et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b49'>Nayebi, Kuznetsov, et al., 2018)</ns0:ref>. In addition to user reviews, previous works <ns0:ref type='bibr' target='#b25'>(Guzman, Alkadhi, and Seyff, 2016;</ns0:ref><ns0:ref type='bibr' target='#b26'>Guzman, Alkadhi, and Seyff, 2017;</ns0:ref><ns0:ref type='bibr' target='#b48'>Nayebi, Cho, and Ruhe, 2018)</ns0:ref> explored how a dataset of tweets can provide complementary information to support mobile app development.</ns0:p><ns0:p>From a labeling perspective, previous works classified and grouped software reviews into classes and categories <ns0:ref type='bibr' target='#b31'>(Iacob and Harrison, 2013;</ns0:ref><ns0:ref type='bibr' target='#b16'>Galvis Carre&#241;o and Winbladh, 2013;</ns0:ref><ns0:ref type='bibr' target='#b54'>Pagano and W. Maalej, 2013;</ns0:ref><ns0:ref type='bibr' target='#b45'>Mcilroy, Ali, Khalid, et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Khalid et al., 2015;</ns0:ref><ns0:ref type='bibr'>N. Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b22'>G&#243;mez et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b24'>Gu and Kim, 2015;</ns0:ref><ns0:ref type='bibr' target='#b38'>Walid Maalej and Nabil, 2015;</ns0:ref><ns0:ref type='bibr' target='#b73'>Villarroel et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Nayebi, Marbouti, et al., 2017)</ns0:ref>, such as feature requests, requests for improvements, requests for bug fixes, and usage experience. Noei, F. <ns0:ref type='bibr' target='#b53'>Zhang, and Zou (2021)</ns0:ref> used topic modeling to determine the key topics of user reviews for different app categories.</ns0:p><ns0:p>Regarding analyzing emerging issues from app reviews, existing studies are usually based on topic modeling or clustering techniques. 
For example, LDA (Latent Dirichlet Allocation) <ns0:ref type='bibr' target='#b6'>(Blei, Ng, and Jordan, 2003)</ns0:ref>, DIVER (iDentifying emerging app Issues Via usER feedback) <ns0:ref type='bibr' target='#b20'>(Gao, Zheng, et al., 2019)</ns0:ref> and IDEA <ns0:ref type='bibr' target='#b17'>(Gao, Zeng, Lyu, et al., 2018)</ns0:ref> approaches were used for app reviews. The LDA approach is a topic modeling method used to determine patterns of textual topics, i.e., to capture the pattern in a document that produces a topic. LDA is a probabilistic distribution algorithm for assigning topics to documents. A topic is a probabilistic distribution over words, and each document represents a mixture of latent topics <ns0:ref type='bibr' target='#b27'>(Guzman and Walid Maalej, 2014)</ns0:ref>. In the context of mining user opinions in app reviews, especially to detect emerging issues, the documents in the LDA are app reviews, and the extracted topics are used to detect emerging issues. The IDEA approach improves LDA by considering topic distributions in a context window when detecting emerging topics by tracking topic variations over versions <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>. In addition, the IDEA approach implements an automatic topic interpretation method to label each topic with the most representative sentences and phrases <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In the same direction, the DIVER approach was proposed to detect emerging app issues, but mainly in beta test periods <ns0:ref type='bibr' target='#b20'>(Gao, Zheng, et al., 2019)</ns0:ref>. The IDEA, DIVER and LDA approaches have not been considered sentiment of user reviews. Recently, the MERIT (iMproved EmeRging Issue deTection) <ns0:ref type='bibr' target='#b18'>(Gao, Zeng, Wen, et al., 2020)</ns0:ref> approach was proposed and explore word embedding techniques to prioritize phrases/sentences of each positive and negative topic. <ns0:ref type='bibr' target='#b61'>Phong et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b74'>Vu et al. (2016)</ns0:ref> grouped the keywords and phrases using clustering algorithms and then determine and monitor over time the emergent clusters based on the occurrence frequencies of the keywords and phrases in each cluster. <ns0:ref type='bibr' target='#b55'>Palomba, Linares-V&#225;squez, Bavota, Oliveto, Di Penta, et al. (2015)</ns0:ref> proposes an approach to tracking informative user reviews of source code changes and to monitor the extent to which developers addressing user reviews. These approaches are descriptive models, i.e., they analyze historical data to interpret and understand the behavior of past reviews. In our paper, we are interested in predictive models that aim to anticipate the growth of negative reviews that can impact the app's evaluation.</ns0:p><ns0:p>In short, app reviews formed the basis for many studies and decisions ranging from feature extraction to release planning of mobile apps. However, previous related works do not explore the temporal dynamics with a predictive model of requirements in reviews, as shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. Related works that incorporate temporal dynamics cover only descriptive models. In addition, existing studies focus on only a few steps of the opinion mining process from app reviews, which hinders its use in real-world applications. 
Our proposal instantiates a complete opinion mining process and incorporates temporal dynamics of software requirements extracted from app reviews into forecasting models to address these drawbacks.</ns0:p></ns0:div> <ns0:div><ns0:head>THE MAPP-REVIEWS METHOD</ns0:head><ns0:p>In order to analyze the temporal dynamics of software requirements, we present the MAPP-Reviews approach with five stages, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. First, we collect mobile app reviews in app stores through a web crawler. Second, we group the similar extracted requirements by using clustering methods. Third,</ns0:p></ns0:div> <ns0:div><ns0:head>4/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_9'>2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:ref> Manuscript to be reviewed Computer Science No.</ns0:p></ns0:div> <ns0:div><ns0:head>5/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the most relevant clusters are identified to generate time series from negative reviews. Finally, we train the predictive model from time series to forecast software requirements involved with negative reviews, which will potentially impact the app's rating.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Overview of the proposed method for analyzing temporal dynamics of requirements engineering from mobile app reviews.</ns0:p></ns0:div> <ns0:div><ns0:head>App Reviews</ns0:head><ns0:p>The app stores provide the textual content of the reviews, the publication date, and the rating stars of user-reported reviews. In the first stage of MAPP-Reviews, raw reviews are collected from the app stores using a web crawler tool through a RESTful API. At this stage, there is no pre-processing in the textual content of reviews. Data is organized in the appropriate data structure and automatically batched to be processed by the requirements extraction stage of MAPP-Reviews. In the experimental evaluation presented in this article, we used reviews collected from three food delivery apps: Uber Eats, Foodpanda, and Zomato.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements Extraction</ns0:head><ns0:p>This section describes stages 2 of the MAPP-Reviews method, where there is the software requirements extraction from app reviews and text pre-processing using contextual word embeddings.</ns0:p><ns0:p>MAPP-Reviews uses the pre-trained RE-BERT <ns0:ref type='bibr' target='#b4'>(Araujo and Marcacini, 2021)</ns0:ref> model to extract software requirements from app reviews. RE-BERT is an extractor developed from our previous research.</ns0:p><ns0:p>We trained the RE-BERT model using a labeled reviews dataset generated with a manual annotation process, as described by <ns0:ref type='bibr' target='#b12'>Dabrowski et al. (2020)</ns0:ref>. The reviews are from 8 apps of different categories as showed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. RE-BERT uses a cross-domain training strategy, where the model was trained in 7 apps and tested in one unknown app for the test step. RE-BERT software requirements extraction performance was compared to SAFE <ns0:ref type='bibr' target='#b32'>(Johann, Stanik, Walid Maalej, et al., 2017)</ns0:ref>, ReUS <ns0:ref type='bibr' target='#b15'>(Dragoni, Federici, and Rexha, 2019)</ns0:ref> and GuMa <ns0:ref type='bibr' target='#b27'>(Guzman and Walid Maalej, 2014)</ns0:ref>. 
Since RE-BERT uses pre-trained models for semantic representation of texts, the extraction performance is significantly superior to the rule-based methods. Given this scenario, we selected RE-BERT for the requirement extraction stage. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows an example of review and extracted software requirements. In the raw review 'I am ordering with delivery but it is automatically placing order with pick-up', four software requirements were extracted ('ordering', 'delivery', 'placing order', and 'pick-up'). Note that 'placing order' and 'ordering' are the same requirement in practice. In the clustering step of the MAPP-Reviews method, these requirements are grouped in the same cluster, as they refer to the same feature. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science more tokens. We filter reviews that are more associated with negative comments through user feedback.</ns0:p><ns0:p>Consider that the user gives a star rating when submitting a review for an app. Generally, the star rating ranges from 1 to 5. This rating can be considered as the level of user satisfaction. In particular, we are interested in defective software requirements, and only reviews with 1 or 2 rating stars were considered.</ns0:p><ns0:p>Thus, we use RE-BERT to extract only software requirements mentioned in reviews that may involve complaints, bad usage experience, or malfunction of app features.</ns0:p><ns0:p>RE-BERT extracts software requirements directly from the document reviews and we have to deal with the drawback that the same requirement can be written in different ways by users. Thus, we propose a software requirement semantic clustering, in which different writing variations of the same requirement must be standardized. However, the clustering step requires that the texts be pre-processed and structured in a format that allows the calculation of similarity measures between requirements.</ns0:p><ns0:p>We represent each software requirement through contextual word embedding. Word embeddings are vector representations for textual data in an embedding space, where we can compare two texts semantically using similarity measures. Different models of word embeddings have been proposed, such as Word2vec <ns0:ref type='bibr' target='#b46'>(Mikolov et al., 2013)</ns0:ref>, Glove <ns0:ref type='bibr' target='#b60'>(Pennington, Socher, and Manning, 2014)</ns0:ref>, FastText <ns0:ref type='bibr' target='#b7'>(Bojanowski et al., 2017)</ns0:ref> and BERT <ns0:ref type='bibr' target='#b13'>(Devlin et al., 2018)</ns0:ref>. We use the BERT Sentence-Transformers model <ns0:ref type='bibr' target='#b62'>(Reimers and Gurevych, 2019)</ns0:ref> to maintain an neural network architecture similar to RE-BERT.</ns0:p><ns0:p>BERT is a contextual neural language model, where for a given sequence of tokens, we can learn a word embedding representation for a token. Word embeddings can calculate the semantic proximity between tokens and entire sentences, and the embeddings can be used as input to train the classifier. BERT-based models are promising to learn contextual word embeddings from long-term dependencies between tokens in sentences and sentences <ns0:ref type='bibr' target='#b4'>(Araujo and Marcacini, 2021)</ns0:ref>. However, we highlight that a local context more impacts the extraction of software requirements from reviews, i.e., tokens closer to those of software requirements are more significant <ns0:ref type='bibr' target='#b4'>(Araujo and Marcacini, 2021)</ns0:ref>. 
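As a rough illustration of this representation step, the extracted requirement expressions can be encoded into contextual embeddings with a Sentence-Transformers model. This is a sketch, not the authors' exact implementation: the specific pre-trained model name below is an assumption, since the paper only states that a BERT Sentence-Transformers model (Reimers and Gurevych, 2019) is used.

```python
from sentence_transformers import SentenceTransformer

# Requirement expressions extracted by RE-BERT from one review (example from Figure 2)
requirements = ["ordering", "delivery", "placing order", "pick-up"]

# 'all-MiniLM-L6-v2' is an illustrative choice of pre-trained model
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(requirements)

print(embeddings.shape)  # one dense vector per requirement expression, e.g., (4, 384)
```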
</ns0:p><ns0:p>During pre-training, BERT replaces part of the input tokens with a special token called [MASK] <ns0:ref type='bibr' target='#b4'>(Araujo and Marcacini, 2021)</ns0:ref>. One of the training objectives is the noisy reconstruction defined in Equation <ns0:ref type='formula'>1</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_0'>p(\bar{r} \mid \hat{r}) \approx \sum_{j=1}^{k} m_j \frac{\exp(h_{c_j}^{\top} w_{t_j})}{\sum_{t'} \exp(h_{c_j}^{\top} w_{t'})} \qquad (1)</ns0:formula><ns0:p>where \hat{r} is a corrupted token sequence of the requirement r, \bar{r} denotes the masked tokens, and m_j is equal to 1 when t_j is masked and 0 otherwise. The term c_j represents context information for the token t_j, usually the neighboring tokens. We extract token embeddings from the pre-trained BERT model, where h_{c_j} is a context embedding and w_{t_j} is a word embedding of the token t_j. The term \sum_{t'} \exp(h_{c}^{\top} w_{t'}) is a normalization factor using all tokens t' from a context c. BERT uses the Transformer deep neural network to solve p(\bar{r} \mid \hat{r}) of Equation <ns0:ref type='formula'>1</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> illustrates a set of software requirements in a two-dimensional space obtained from contextual word embeddings. Note that the vector space of embeddings preserves the proximity of similar requirements that users write in different ways, such as 'search items', 'find items', 'handles my searches' and 'find special items'.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements Clustering</ns0:head><ns0:p>After mapping the software requirements into word embeddings, MAPP-Reviews uses the k-means algorithm <ns0:ref type='bibr' target='#b39'>(MacQueen et al., 1967)</ns0:ref> to obtain a clustering model of semantically similar software requirements.</ns0:p><ns0:p>Formally, let R = \{r_1, r_2, \dots, r_n\} be a set of extracted software requirements, where each requirement r is an m-dimensional real vector from a word embedding space. The k-means clustering aims to partition the n requirements into k (2 \le k \le n) clusters C = \{C_1, C_2, \dots, C_k\}, thereby minimizing the within-cluster sum of squares defined in Equation <ns0:ref type='formula'>2</ns0:ref>, where \mu_i is the mean vector of all requirements in C_i.</ns0:p><ns0:formula xml:id='formula_2'>\sum_{C_i \in C} \sum_{r \in C_i} \lVert r - \mu_i \rVert^2 \qquad (2)</ns0:formula><ns0:p>We observe that not every software requirements cluster represents a functional requirement in practice. Then, we evaluated the clustering model using a statistical measure called silhouette <ns0:ref type='bibr' target='#b64'>(Rousseeuw, 1987)</ns0:ref> to discard clusters with many different terms and irrelevant requirements. The silhouette value of a data instance is a measure of how similar a software requirement is to its own cluster compared to other clusters. The silhouette measure ranges from -1 to +1, where values close to +1 indicate that the requirement is well allocated to its own cluster <ns0:ref type='bibr' target='#b72'>(Vendramin, Campello, and Hruschka, 2010)</ns0:ref>. Finally, we use the requirements with higher silhouette values to support the cluster labeling, i.e., to determine the software requirement's cluster name. For example, Table <ns0:ref type='table'>3</ns0:ref> shows the software requirement cluster 'Payment' and some tokens allocated in the cluster with their respective silhouette values.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Example of the software requirement cluster 'Payment' and some tokens allocated in the cluster with their respective silhouette values.</ns0:p></ns0:div> <ns0:div><ns0:head>Cluster Label</ns0:head><ns0:p>Tokens with Silhouette (s)</ns0:p><ns0:p>Payment: 'payment getting' (s = 0.2618), 'payment get' (s = 0.2547), 'getting payment' (s = 0.2530), 'take payment' (s = 0.2504), 'payment taking' (s = 0.2471), 'payment' (s = 0.2401)</ns0:p><ns0:p>To calculate the silhouette measure, let r_i \in C_i be a requirement in the cluster C_i. Equation <ns0:ref type='formula'>3</ns0:ref> computes the mean distance between r_i and all other software requirements in the same cluster, where d(r_i, r_j) is the distance between requirements r_i and r_j in the cluster C_i. In the equation, the factor \frac{1}{|C_i| - 1} reflects that the distance d(r_i, r_i) is not added to the sum. A smaller value of a(r_i) indicates that the requirement r_i is, on average, close to the other requirements of its cluster, i.e., better assigned to it.</ns0:p><ns0:formula xml:id='formula_4'>a(r_i) = \frac{1}{|C_i| - 1} \sum_{r_j \in C_i,\; r_j \neq r_i} d(r_i, r_j) \qquad (3)</ns0:formula><ns0:p>Analogously, the mean distance from requirement r_i to another cluster C_k (C_k \neq C_i) is the mean distance from r_i to all requirements in C_k. For each requirement r_i \in C_i, Equation <ns0:ref type='formula'>4</ns0:ref> defines the minimum of these mean distances over all other clusters of which r_i is not a member. The cluster with this minimum mean distance is the neighbor cluster of r_i, i.e., the next best-assigned cluster for the requirement r_i. The silhouette value of the software requirement r_i is then defined by Equation <ns0:ref type='formula'>5</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_5'>b(r_i) = \min_{k \neq i} \frac{1}{|C_k|} \sum_{r_j \in C_k} d(r_i, r_j) \qquad (4) \qquad\quad s(r_i) = \frac{b(r_i) - a(r_i)}{\max\{a(r_i), b(r_i)\}}, \ \text{if } |C_i| > 1 \qquad (5)</ns0:formula><ns0:p>At this point in the MAPP-Reviews method, we have software requirements pre-processed and represented through contextual word embeddings, as well as an organization of the software requirements into k clusters. In addition, each cluster has a representative text (cluster label) obtained from the requirements with higher silhouette values.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> shows a two-dimensional projection of clustered software requirements from approximately 85,000 food delivery app reviews, which were used in the experimental evaluation of this work. High-density regions represent clusters of similar requirements that must be mapped to the same software requirement during the analysis of temporal dynamics. In the next section, techniques for generating the time series from software requirements clusters are presented, as well as the predictive models used to infer future trends.</ns0:p></ns0:div>
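Before moving to the time series generation, a minimal sketch of the clustering and cluster-labeling steps described in this section (scikit-learn k-means plus per-sample silhouette values; variable names and the top-n choice are assumptions of this sketch, not the authors' implementation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

# `embeddings` and `requirements` are the full set of extracted requirement
# expressions and their vectors (see the previous sketch); k = 300 as in the evaluation
k = 300
labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(embeddings)
sil = silhouette_samples(embeddings, labels)

def cluster_label(cluster_id, top_n=5):
    """Return the top-n best-allocated expressions of a cluster (highest silhouette)."""
    idx = np.where(labels == cluster_id)[0]
    best = idx[np.argsort(sil[idx])[::-1][:top_n]]
    return [requirements[i] for i in best]
```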
<ns0:div><ns0:head>Time Series Generation</ns0:head><ns0:p>Time series can be described as an ordered sequence of observations <ns0:ref type='bibr' target='#b8'>(Chatfield and Xing, 2019)</ns0:ref>. A time series of size s is defined as X = (x_1, x_2, \dots, x_s), in which x_t \in \mathbb{R} represents an observation at time t.</ns0:p><ns0:p>MAPP-Reviews generates a time series for each software requirements cluster, where the observations represent how many times each requirement occurred in a period. Consequently, we know how many times a specific requirement was mentioned in the app reviews in each period. Each series models the temporal dynamics of a software requirement, i.e., its temporal evolution considering occurrences in negative reviews.</ns0:p><ns0:p>Some software requirements are naturally more frequent than others, as are the tokens used to describe these requirements. For the time series to be compared uniformly, we generate a normalized series for each requirement. Each observation in the time series is normalized according to Equation <ns0:ref type='formula'>6</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_6'>x_{normalized} = \frac{x}{z_p} \qquad (6)</ns0:formula><ns0:p>where x_{normalized} is the result of the normalization, x is the frequency of cluster C (the time series observation) in the period p, and z_p is the total frequency of the period. Figure 5 shows an example of a generated time series for a software requirement: in some periods there are large increases in the mentions of the requirement, thereby indicating that users have negatively evaluated the app for that requirement. Predicting the occurrence of these periods for software maintenance, aiming to minimize the number of future negative reviews, is the objective of the MAPP-Reviews predictive model discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Predictive Models</ns0:head><ns0:p>Predictive models for time series are very useful to support an organization in its planning and decision-making. Such models explore past observations to estimate observations in future horizons, given a confidence interval. In our MAPP-Reviews method, we aim to detect negative reviews of a software requirement that are starting to happen and forecast whether they will become serious in the subsequent periods, i.e., reach a high frequency in negative reviews. The general idea is to use the first p points of the time series to estimate the points p + 1, \dots, p + h, where h is the prediction horizon.</ns0:p><ns0:p>MAPP-Reviews uses the Prophet Forecasting Model <ns0:ref type='bibr' target='#b69'>(Taylor and Letham, 2018)</ns0:ref>. Prophet is a model from Facebook researchers for forecasting time series data considering non-linear trends at different time intervals, such as yearly, weekly, and daily seasonality. We chose the Prophet model for the MAPP-Reviews method due to its ability to incorporate domain knowledge into the predictive model. The Prophet model consists of three main components, as defined in Equation <ns0:ref type='formula'>7</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_7'>y(t) = g(t) + s(t) + h(t) + \varepsilon_t \qquad (7)</ns0:formula><ns0:p>where g(t) represents the trend, s(t) represents the time series seasonality, h(t) represents significant events that impact the time series observations, and the error term \varepsilon_t represents noisy data.</ns0:p><ns0:p>During model training, a time series can be divided into training and testing.
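As a rough illustration of the time series generation step (Equation 6) and the split into training and testing, consider the following sketch; the column names and the toy data are assumptions, not the exact format used in the MAPP-Reviews repository:

```python
import pandas as pd

# Toy data: each row is one negative (1-2 star) review with its requirement cluster
reviews = pd.DataFrame({
    "date": pd.to_datetime(["2020-10-05", "2020-10-06", "2020-10-13", "2020-10-14"]),
    "cluster": ["Arriving time", "Payment", "Arriving time", "Arriving time"],
})

# Weekly frequency of each requirement cluster in negative reviews
weekly = (reviews
          .groupby([pd.Grouper(key="date", freq="W"), "cluster"])
          .size()
          .rename("freq")
          .reset_index())

# Equation 6: each observation divided by the total frequency of its period
weekly["normalized"] = weekly["freq"] / weekly.groupby("date")["freq"].transform("sum")

# Hold out the last h observations of one cluster's series as the test horizon
series = weekly[weekly["cluster"] == "Arriving time"].sort_values("date")
h = 1
train, test = series.iloc[:-h], series.iloc[-h:]
```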
The terms g(t), s(t) and h(t) can be automatically inferred by classical statistical methods in the area of time series analysis, such as the Generalized Additive Model (GAM) <ns0:ref type='bibr' target='#b29'>(Hastie and Tibshirani, 1987)</ns0:ref> In the experimental evaluation, we show the MAPP-Reviews ability to predict perceptually important points in the software requirements time series, allowing the identification of initial trends in defective requirements to support preventive strategies in software maintenance.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> shows an emerging issue being predicted 6 weeks in advance in the period from October 2020 to January 2021. The table presents a timeline represented by the horizon (h) in weeks, with the volume of negative raw reviews (Vol.</ns0:p><ns0:p>). An example of a negative review is shown for each week until reaching the critical week (peak), with h &#8793; 16. The table row with h &#8793; 10 highlighted in bold shows when MAPP-Reviews identified the uptrend. In this case, we show the MAPP-Reviews alert for the 'Time of arrival' requirement of the Uber Eats app. In particular, the emerging issue identified in the negative reviews is the low accuracy of the estimated delivery time in the app. The text of the user review samples has been entered in its entirety, without any pre-treatment. A graphical representation of this prediction is shown in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The proposed approach is validated through an experimental evaluation with popular food delivery apps.</ns0:p><ns0:p>These apps represent a dynamic and complex environment consisting of restaurants, food consumers, and drivers operating in highly competitive conditions <ns0:ref type='bibr' target='#b75'>(Williams et al., 2020)</ns0:ref>. In addition, this environment means a real scenario of commercial limitations, technological restrictions, and different user experience contexts, which makes detecting emerging issues early an essential task. For this experimental evaluation, we used a dataset with 86,610 reviews of three food delivery apps: Uber Eats, Foodpanda, and Zomato. The dataset was obtained in the first stage (App Reviews) of MAPP-Reviews and is available at https://github.com/vitormesaque/mapp-reviews. The choice of these apps was based on their popularity and the number of reviews available. The reviews are from September 2018 to January 2021.</ns0:p><ns0:p>After the software requirements extraction and clustering stage (with k &#8793; 300 clusters), the six most popular (frequent) requirements clusters were considered for time series prediction. The following software requirements clusters were selected: 'Ordering', 'Go pick up', 'Delivery', 'Arriving time', 'Advertising', and 'Payment'. The requirements clusters are shown in Table <ns0:ref type='table'>5</ns0:ref> with the associated words ordered by silhouette.</ns0:p><ns0:p>In the MAPP-Reviews prediction stage, we evaluated two scenarios using Prophet. The first scenario is the baseline, where we use the automatic parameters fitting of the Prophet. By default, Prophet will automatically detect the changepoints. In the second scenario, we specify the potential changepoints, thereby providing domain knowledge for software requirements rather than automatic changepoint detection. 
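A minimal sketch of these two scenarios with the prophet Python package is shown below; the column names, the example changepoint dates, and the single-cluster series are illustrative assumptions (a real series needs many more weekly observations than the toy example above):

```python
from prophet import Prophet  # pip install prophet

# Prophet expects a dataframe with columns 'ds' (date) and 'y' (observed value);
# here y is the normalized weekly frequency of negative reviews for one cluster
df = series.rename(columns={"date": "ds", "normalized": "y"})[["ds", "y"]]

# Scenario 1 (baseline): Prophet detects trend changepoints automatically
m_auto = Prophet(weekly_seasonality=True)
m_auto.fit(df)

# Scenario 2: domain-informed changepoints, e.g., past critical weeks whose
# frequency exceeded the series mean (dates below are purely illustrative)
critical_weeks = ["2020-10-11", "2020-11-22"]
m_custom = Prophet(changepoints=critical_weeks, weekly_seasonality=True)
m_custom.fit(df)

# Forecast the next week (prediction horizon h = 1)
future = m_custom.make_future_dataframe(periods=1, freq="W")
forecast = m_custom.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]
```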
<ns0:p>Therefore, the changepoint parameters are used when we provide the dates of the changepoints instead of letting Prophet determine them. In this case, we use the most recent observations whose value is greater than the average of the observations, i.e., critical periods with high frequencies of negative reviews in the past.</ns0:p><ns0:p>We used the MAPE (Mean Absolute Percentage Error) metric to evaluate the forecasting performance <ns0:ref type='bibr' target='#b40'>(Makridakis, 1993)</ns0:ref>, as defined in Equation <ns0:ref type='formula'>8</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_8'>\mathrm{MAPE} = \frac{1}{h} \sum_{t=1}^{h} \frac{|real_t - pred_t|}{real_t} \qquad (8)</ns0:formula><ns0:p>where real_t is the real value, pred_t is the value predicted by the method, and h is the number of forecast observations in the estimation period (prediction horizon). In practical terms, MAPE is a percentage error measure that, in a simulation, indicates how close the prediction is to the known values of the time series.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Software requirements clusters for food delivery apps used in the experimental evaluation. Tokens well allocated in each cluster (silhouette measure) were selected to support the cluster labeling.</ns0:p><ns0:p>Cluster Label: Tokens with Silhouette values (s)</ns0:p><ns0:p>Ordering: 'ordering' (s = 0.1337), 'order's' (s = 0.1250), 'order from' (s = 0.1243), 'order will' (s = 0.1221), 'order' (s = 0.1116), 'the order' (s = 0.1111)</ns0:p><ns0:p>Go pick up: 'go pick up' (s = 0.1382), 'pick up the' (s = 0.1289), 'pick up at' (s = 0.1261), 'to take' (s = 0.1176), 'go get' (s = 0.1159)</ns0:p><ns0:p>Delivery: 'delivering parcels' (s = 0.1705), 'delivery options' (s = 0.1590), 'waive delivery' (s = 0.1566), 'delivery charges' (s = 0.1501), 'accept delivery' (s = 0.1492)</ns0:p><ns0:p>Arriving time: 'arrival time' (s = 0.3303), 'waisting time' (s = 0.3046), 'arriving time' (s = 0.3042), 'estimate time' (s = 0.2877), 'delievery time' (s = 0.2743)</ns0:p><ns0:p>Advertising: 'anoyning ads' (s = 0.3464), 'pop-up ads' (s = 0.3440), 'ads pop up' (s = 0.3388), 'commercials advertise' (s = 0.3272), 'advertising' (s = 0.3241)</ns0:p><ns0:p>Payment: 'payment getting' (s = 0.2618), 'payment get' (s = 0.2547), 'getting payment' (s = 0.2530), 'take payment' (s = 0.2504), 'payment taking' (s = 0.2471), 'payment' (s = 0.2401)</ns0:p><ns0:p>We consider a prediction horizon (h) ranging from 1 to 4, with weekly seasonality. In particular, we are interested in the peaks of the series, since our hypothesis is that the peaks represent potential problems in a given software requirement. Thus, Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref> shows the MAPE calculated only for the time series peaks during forecasting. In this case, predictions with the custom changepoint locations (scenario 2) obtained better results than the automatic detection for all prediction horizons (h = 1 to h = 4), obtaining a 3.82% forecasting improvement. These results provide evidence that domain knowledge can improve the detection of potential software requirements to be analyzed for preventive maintenance.</ns0:p><ns0:p>In particular, analyzing the prediction horizon, the results show that the best predictions were obtained with h = 1 (1 week).
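For reference, Equation 8 and the peak-only evaluation reported in Table 7 can be sketched as follows; defining a peak as an observation above the series mean is an assumption here, mirroring the description of critical periods used for the custom changepoints:

```python
import numpy as np

def mape(real, pred):
    """Equation 8: mean absolute percentage error over the forecast horizon."""
    real, pred = np.asarray(real, dtype=float), np.asarray(pred, dtype=float)
    return np.mean(np.abs(real - pred) / real)

def peak_mape(real, pred):
    """MAPE restricted to peak observations (here: values above the series mean)."""
    real, pred = np.asarray(real, dtype=float), np.asarray(pred, dtype=float)
    peaks = real > real.mean()
    return mape(real[peaks], pred[peaks])

print(mape([1000], [800]))  # 0.2 -> the 20% error rate discussed in the paper
```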
In practical terms, obtaining the best predictions with h = 1 means that the initial trend of a defective requirement can be identified one week in advance.</ns0:p><ns0:p>For reproducibility purposes, we provide a GitHub repository at https://github.com/vitormesaque/mapp-reviews containing the source code and details of each stage of the method, as well as the raw data and all the results obtained.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Timely and effective detection of software requirements issues is crucial for app developers. The results show that MAPP-Reviews can detect significant points in the time series that provide information about the future behavior of a software requirement, allowing software engineers to anticipate the identification of emerging issues that may affect the app evaluation. An issue related to a software requirement reported in user reviews is defined as an emerging issue when there is an upward trend for that requirement in negative reviews. Our method trains predictive models to identify requirements with higher negative evaluation trends, but a negative review will inevitably impact the rating. However, our objective is to mitigate this negative impact.</ns0:p><ns0:p>The prediction horizon (h) is an essential factor in detecting emerging issues to mitigate negative impacts. Software engineers and the entire development team need to know as soon as possible about software problems in order to anticipate them. In this context, it would not be feasible to make predictions for the following months about bug reports. Therefore, MAPP-Reviews forecasts at the week level. This strategy allows us to identify the issues that are starting to happen and predict whether they will worsen in the coming weeks.</ns0:p><ns0:p>Even at the week level, the best forecast should be obtained with the shortest forecast horizon, i.e., one week (h = 1). A longer horizon, i.e., three (h = 3) or four weeks (h = 4), could be too late to prevent an issue from becoming severe and having more impact on the overall app rating. The experimental evaluation shows that our method obtains the best predictions with the shortest horizon (h = 1). In practical terms, this means that MAPP-Reviews identifies the initial trend of a defective requirement a week in advance.</ns0:p><ns0:p>In addition, we can note that a prediction error rate (MAPE) of up to 20% is acceptable. For example, consider that the real value is 1,000 negative reviews for a specific requirement at a given point, but the model predicts 800 negative reviews. Even with a MAPE of 20%, we can identify a significant increase in negative reviews for a requirement and trigger alerts for preventive software maintenance, i.e., when MAPP-Reviews predicts an uptrend, the software development team should receive an alert. In the time series forecast shown in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>, we observe that the model would be able to predict the peaks of negative reviews for the software requirement one week in advance.</ns0:p><ns0:p>The forecast presented in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows that the model was able to predict the peak of negative reviews for the 'Arriving time' requirement. An emerging issue detection system based only on the frequency of a topic could trigger many false detections, i.e., it would not detect defective functionality but issues related to the quality of services offered. 
Analyzing user reviews, we found that some complaints are about service issues rather than defective requirements. For example, the user may complain about the delay in the delivery service and negatively rate the app, but in reality, they are complaining about the restaurant, i.e., a problem with the establishment service. We've seen that this pattern of user complaints is repeated across other app domains, not just the food delivery service. In delivery food apps, these complaints about service are constant, uniform, and distributed among all restaurants available in the app. In Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, it is clear that the emerging issue refers to the deficient implementation of the estimated delivery time prediction functionality. Our results show that when there is a problem in the app related to a defective software requirement, there are increasing complaints associated with negative reviews regarding that requirement.</ns0:p><ns0:p>An essential feature in MAPP-Reviews is changepoints. Assume that a time series represents the evolution of a software requirement over time, observing negative reviews for this requirement. Also, consider that time series frequently have abrupt changes in their trajectories. Given this, the changepoints describe abrupt changes in the time series trend, i.e., means a specific date that indicates a trend change.</ns0:p><ns0:p>Therefore, specifying custom changepoints becomes significantly important for the predictive model because the uptrend in time series can also be associated with domain knowledge factors. By default, our model will automatically detect these changepoints. However, we have found that specifying custom changepoints improves prediction significantly in critical situations for the emerging issue detection problem. In general, the automatic detection of changepoints had better MAPE results in most evaluations.</ns0:p><ns0:p>However, the custom changepoints obtained the best predictions at the time series peaks for all horizons (h &#8793; 1 to h &#8793; 4) of experiment simulations. Our experiment suggests a greater interest in identifying potential defective requirements trends in the time series peaks. As a result, we conclude that specifying custom changepoints in the predictive model is the best strategy to identify potential emerging issues.</ns0:p><ns0:p>Furthermore, the results indicate the potential impact of incorporating changepoints into the predictive model using the information of app developers, i.e., defining specific points over time with a meaningful influence on app evaluation. In addition, software engineers can provide sensitive company data and domain knowledge to explore and improve the predictive model potentially. For this purpose, we depend on sensitive company data related to the software development and management process, e.g., release planning, server failures, and marketing campaigns. In particular, we can investigate the relationship between the release dates of app updates and the textual content of the update publication with the upward trend in negative evaluations of a software requirement. 
In a real-world scenario in the industry, software engineers using MAPP-Reviews will provide domain-specific information.</ns0:p><ns0:p>We show that MAPP-Reviews provides software engineers with tools to perform software maintenance activities, particularly preventive maintenance, by automatically monitoring the temporal dynamics of software requirements.</ns0:p><ns0:p>The results of our research point to promising prospects, and new possibilities for research and innovation in this area emerge from our results so far. We intend to further explore our method to determine the input variables that contribute most to the output behavior and the non-influential inputs, or to determine some interaction effects within the model. In addition, sensitivity analysis can help us reduce the uncertainties found more effectively and calibrate the model.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>Despite the significant results obtained, we can still improve the predictive model. In the scope of our experimental evaluation, we only investigated the incorporation of software domain-specific information through trend changepoints. Company-sensitive information and the development team's domain knowledge were not considered in the predictive model because we do not have access to this information.</ns0:p><ns0:p>Therefore, we intend to evaluate our proposed method in the industry and explore more domain-specific knowledge to improve the predictive model.</ns0:p><ns0:p>Another important issue to highlight is sentiment analysis in app reviews. We assume that it is possible to improve the classification of negative reviews by incorporating sentiment analysis techniques. We can incorporate a polarity classification stage (positive, negative, and neutral) for the extracted requirement, allowing a software requirements-based sentiment analysis. In the current state of our research, we only consider negative reviews with low ratings and associate them with the software requirements mentioned in the review.</ns0:p><ns0:p>Finally, to use MAPP-Reviews in a real scenario, there must already be a sufficient number of reviews distributed over time, i.e., a minimum number of time-series observations available for the predictive model to work properly. Therefore, in practical terms, our method is more suitable when large volumes of app reviews are available to be analyzed.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Opinion mining for app reviews can provide useful user feedback to support software engineering activities. We introduced the temporal dynamics of requirements analysis to predict initial trends of defective requirements from users' opinions before they negatively impact the overall app's evaluation.
We presented the MAPP-Reviews (Monitoring App Reviews) approach to (1) extract and cluster software requirements, (2) generate time series with the time dynamics of requirements, (3) identify requirements with higher trends of negative evaluation.</ns0:p><ns0:p>The experimental results show that our method is able to find significant points in the time series that provide information about the future behavior of a requirement through app reviews, thereby allowing software engineers to anticipate the identification of requirements that may affect the app's evaluation.</ns0:p><ns0:p>In addition, we show that it's beneficial to incorporate changepoints into the predictive model by using domain knowledge, i.e., defining points over time with significant impacts on the app's evaluation.</ns0:p><ns0:p>We compared the MAPP-Reviews in two scenarios: first using automatic changepoint detection and second specifying the changepoint locations. In particular, the automatic detection of points of change had better MAPE results in most evaluations. On the other hand, the best predictions at the time series peaks (where there is a greater interest in identifying potential defective requirements trends) were obtained by specifying changepoints.</ns0:p><ns0:p>Future work directions involve evaluating MAPP-Reviews in other scenarios to incorporate and compare several other types of domain knowledge into the predictive model, such as new app releases, marketing campaigns, server failures, competing apps, among other information that may impact the evaluation of apps. Another direction for future work is to implement a dashboard tool for monitoring app reviews, thus allowing the dispatching of alerts and reports.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Example of a review and extracted requirements.</ns0:figDesc><ns0:graphic coords='8,203.77,160.86,289.51,192.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Set of software requirements in a two-dimensional space obtained from contextual word embeddings.</ns0:figDesc><ns0:graphic coords='9,183.09,263.99,330.91,285.35' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Two-dimensional projection of clustered software requirements from approximately 85,000 food delivery app reviews.</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.58,358.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5shows an example of one of the generated time series for a software requirement. The time dynamics represented in the time series indicate the behavior of the software requirement concerning negative reviews. Note that in some periods there are large increases in the mention of the requirement,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. 
Time series with the normalized frequency of 'Arriving time' requirement from Zomato App in negative reviews.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.58,202.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>used in Prophet. In the training step, the terms are adjusted to find an additive model that best fits the known observations in the training time series. Next, we evaluated the model in new data, i.e., the testing time series.In the case of the temporal dynamics of the software requirements, domain knowledge is represented by specific points (e.g. changepoints) in the time series that indicate potential growth of the requirement in negative reviews. Figure6shows the forecasting for a software requirement. Original observations are the black dots and the blue line represents the forecast model. The light blue area is the confidence interval of the predictions. The vertical dashed lines are the time series changepoints. Changepoints play an important role in forecasting models, as they represent abrupt changes in the trend. Changepoints can be estimated automatically during model training, but domain knowledge, such as the date of app releases, marketing campaigns, and server failures, are changepoints that can be added 11/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)Manuscript to be reviewed Computer Science manually by software engineers. Therefore, the changepoints could be specified by the analyst using known dates of product launches and other growth-altering events or may be automatically selected given a set of candidates. In MAPP-Reviews, we have two possible options for selecting changepoints in the predictive model. The first option is automatic changepoint selection, where the Prophet specifies 25 potential changepoints which are uniformly placed in the first 80% of the time series. The second option is the manual specification which has a set of dates provided by a domain analyst. In this case, the changepoints could be entirely limited to a small set of dates. If no known dates are provided, by default we use the most recent observations which have a value greater than the average of the observations, i.e., we want to emphasize the highest peaks of the time series, as they indicate critical periods of negative revisions from the past.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Prophet forecasting with automatic changepoints of a requirement.</ns0:figDesc><ns0:graphic coords='13,141.73,197.59,413.58,229.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Finally</ns0:head><ns0:label /><ns0:figDesc>, to exemplify MAPP-Reviews forecasting, Figure 7 shows the training data (Arriving time software requirement) represented as black dots and the forecast as a blue line, with upper and lower 14/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Forecasting for software requirement cluster (Arriving time) from Uber Eats App reviews.</ns0:figDesc><ns0:graphic coords='16,141.73,289.37,413.58,227.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>as it is tough to find a correlation between what happens today and what will happen in the next few 15/21 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='7,141.73,110.44,413.58,220.94' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overview of related works.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Reference</ns0:cell><ns0:cell /><ns0:cell cols='2'>Data Representa-</ns0:cell><ns0:cell>Pre-processing and</ns0:cell><ns0:cell>Requirements/Topics Cluster-</ns0:cell><ns0:cell>Temporal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell><ns0:cell /><ns0:cell>Extraction of Re-</ns0:cell><ns0:cell>ing and Labeling</ns0:cell><ns0:cell>Dynam-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>quirements</ns0:cell><ns0:cell>ics</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Araujo and Mar-</ns0:cell><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Token Classification.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>cacini, 2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zeng, Wen, et</ns0:cell><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. It combines word embed-</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>dings with topic distributions as</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>the semantic representations of</ns0:cell><ns0:cell>Model.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>words.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Malik, Shakshuki,</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and Yoo, 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zheng, et</ns0:cell><ns0:cell>Vector space.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. Anomaly Clustering Algo-</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>rithm.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Dragoni, Federici,</ns0:cell><ns0:cell cols='2'>Dependency tree.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and Rexha, 2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Gao, Zeng, Lyu, et</ns0:cell><ns0:cell cols='2'>Probability vector.</ns0:cell><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. AOLDA -Adaptively On-</ns0:cell><ns0:cell>Yes. 
De-</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>line Latent Dirichlet Allocation.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>The topic labeling method con-</ns0:cell><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>siders the semantic similarity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>between the candidates and the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>topics.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Johann,</ns0:cell><ns0:cell cols='2'>Stanik,</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based.</ns0:cell><ns0:cell>No.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Walid Maalej, et</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>al., 2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>(Vu et al., 2016)</ns0:cell><ns0:cell /><ns0:cell cols='2'>Word embeddings.</ns0:cell><ns0:cell>Pre-defined.</ns0:cell><ns0:cell>Yes. Soft Clustering algorithm</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>that uses vector representation</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>of words from Word2vec.</ns0:cell><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Villarroel</ns0:cell><ns0:cell>et</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. DBSCAN clustering algo-</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>al., 2016)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>rithm. Each cluster has a label</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>composed of the five most fre-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>quent terms.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Gu and Kim, 2015)</ns0:cell><ns0:cell>Semantic</ns0:cell><ns0:cell>Depen-</ns0:cell><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. Clustering aspect-opinion</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>dence Graph.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>pairs with the same aspects.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Phong et al., 2015)</ns0:cell><ns0:cell>Vector space.</ns0:cell><ns0:cell /><ns0:cell>Rule-based.</ns0:cell><ns0:cell>Yes. Word2vec and K-means.</ns0:cell><ns0:cell>Yes. 
De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Guzman and Walid</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. LDA approach.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Maalej, 2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(N. Chen et al., 2014)</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>Yes. LDA and ASUM approach</ns0:cell><ns0:cell>Yes. De-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>with labeling.</ns0:cell><ns0:cell>scriptive</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Iacob and Harri-</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Rule-based and</ns0:cell><ns0:cell>Yes. LDA approach.</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>son, 2013)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Topic modeling.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Galvis Carre&#241;o and</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic modeling.</ns0:cell><ns0:cell>Yes. Aspect and Sentiment</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Winbladh, 2013)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Unification Model (ASUM) ap-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>proach.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Harman,</ns0:cell><ns0:cell cols='2'>Jia,</ns0:cell><ns0:cell>Keywords.</ns0:cell><ns0:cell /><ns0:cell>Pre-defined.</ns0:cell><ns0:cell>Yes. Greedy-based clustering</ns0:cell><ns0:cell>No.</ns0:cell></ns0:row><ns0:row><ns0:cell>and</ns0:cell><ns0:cell cols='2'>Yuanyuan</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>algorithm.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Zhang, 2012)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(Palomba, Linares-</ns0:cell><ns0:cell cols='2'>Bag-of-words.</ns0:cell><ns0:cell>Topic-modeling.</ns0:cell><ns0:cell>Yes. 
AR-Miner approach with</ns0:cell></ns0:row><ns0:row><ns0:cell>V&#225;squez,</ns0:cell><ns0:cell cols='2'>Bavota,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>labeling.</ns0:cell></ns0:row><ns0:row><ns0:cell>Oliveto,</ns0:cell><ns0:cell cols='2'>Penta,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>et al., 2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistics about the datasets from 8 apps of different categories used to train the RE-BERT model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>eBay</ns0:cell><ns0:cell>Evernote</ns0:cell><ns0:cell>Facebook</ns0:cell><ns0:cell>Netflix</ns0:cell><ns0:cell>Photo editor</ns0:cell><ns0:cell>Spotify</ns0:cell><ns0:cell>Twitter</ns0:cell><ns0:cell>WhatsApp</ns0:cell></ns0:row><ns0:row><ns0:cell>Reviews</ns0:cell><ns0:cell>1,962</ns0:cell><ns0:cell>4,832</ns0:cell><ns0:cell>8,293</ns0:cell><ns0:cell>14,310</ns0:cell><ns0:cell>7,690</ns0:cell><ns0:cell>14,487</ns0:cell><ns0:cell>63,628</ns0:cell><ns0:cell>248,641</ns0:cell></ns0:row><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Shopping</ns0:cell><ns0:cell>Productivity</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Entertainment</ns0:cell><ns0:cell>Photography</ns0:cell><ns0:cell>Music and Audio</ns0:cell><ns0:cell>Social</ns0:cell><ns0:cell>Communication</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Example of emerging issue prediction alert for the 'Time of arrival' requirement of the Uber Eats app reviews triggered by MAPP-Reviews.Listed delivery times are inaccurate majority of the time.Everyone cancels and it ends up taking twice the estimated time to get the food delivered. You dont get updated on delays unless you actively monitor. Uber has failed at food delivery. Not easy to cancel. Also one restaurant that looked available said I was too far away after I had filled my basket. Other than that the app is easy to use.Use door dash or post mates, uber eats has definitely gone down in quality. Extremely inaccurate time estimates and they ignore your support requests until its to late to cancel an order and get a refund.App is good but this needs to be more reliable on its service. the estimated arrival time needs to be matched or there should be a option to cancel the order if they couldnt deliver on estimated time. Continuesly changing the estimated delivery time after the initial order confirmation is inappropriate.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell>Vol.</ns0:cell><ns0:cell>Token</ns0:cell><ns0:cell>Review</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>768</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>849</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>This app consistently gives incorrect, shorter delivery time frame to get you to order, but the deliveries</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>frame</ns0:cell><ns0:cell>are always late. 
The algorithm to predict the delivery time should be fixed so that you'll stop lying to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>your customers.</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>896</ns0:cell><ns0:cell>Arrival</ns0:cell><ns0:cell>Ordered food and they told me it was coming. The wait time was supposed to be 45 minutes. They</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>kept pushing back the arrival time, and we waited an hour and 45 minutes for food, only to have</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>them CANCEL the order and tell us it wasn't coming. If an order is unable to be placed you need to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>tell customers BEFORE they've waited almost 2 HOURS for their food.</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>1247</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>The app was easy to navigate but the estimated delivery time kept changing and it took almost 2hrs</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>to receive food and I live less than 4 blocks away pure ridiculousness if I would of know that I would</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>of just walked there and got it.</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>1056</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>997</ns0:cell><ns0:cell>More</ns0:cell><ns0:cell>Uber Eats lies. Several occasions showed delays because 'the restaurant requested more time' but</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>really it was Uber Eats unable to find a driver. I called the restaurants and they said the food has</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>been ready for over an hour!</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>939</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Your app is unintuitive. Delivery times are wildly inaccurate and orders are canceled with no</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>explanation, information or help.</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>854</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell>This service is terrible. Delivery people never arrive during the estimated time.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>994</ns0:cell><ns0:cell>Time</ns0:cell><ns0:cell>Delivery times increase significantly once your order is accepted. 25-45 mins went up to almost</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>2hours! 10 1257 Time</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>esti-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mate</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>1443</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Delivery times are constantly updated, what was estimated at 25-35 minutes takes more than two</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>hours. 
I understand it's just an estimate, but 4X that is ridiculous.</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>1478</ns0:cell><ns0:cell>Delivery</ns0:cell><ns0:cell>Inaccurate delivery time</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>1376</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>1446</ns0:cell><ns0:cell>Estimated</ns0:cell><ns0:cell>Terrible, the estimated time of arrival is never accurate and has regularly been up to 45 MINUTES</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell>LATE with no refund. Doordash is infinitely better, install that instead, it also has more restaurants</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>1354</ns0:cell><ns0:cell>Estimed</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>1627</ns0:cell><ns0:cell>Estimed</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>time</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>Used to use this app a lot. Ever since they made it so you have to pay for your delivery to come on time the app is useless. You will be stuck waiting for food for an hour most of the time. The estimated time of arrival is never accurate. Have had my food brought to wrong addresses or not brought at all. I will just take the extra time out of my day to pick up the food myself rather than use this app.I use this app a lot and recently my order are always late at least double the time im originally quoted. Every time my food is cold. Maybe the estimated time should be adjusted to reflect what the actual time may be.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>summarizes the main experimental results. The first scenario (1) with the default parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>obtains superior results compared to the second scenario (2) for all forecast horizons. In general, automatic</ns0:cell></ns0:row><ns0:row><ns0:cell>changepoints obtains 9.33% of model improvement, considering the average of MAPE values from all</ns0:cell></ns0:row><ns0:row><ns0:cell>horizons (h &#8793; 1 to h &#8793; 4).</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of MAPE in General. 
</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell cols='2'>MAPE (Mean &#177; SD)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1) Automatic changepoint</ns0:cell><ns0:cell>(2) Specifying the changepoints</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>13.82 &#177; 16.42</ns0:cell><ns0:cell>15.47 &#177; 14.42</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>15.58 &#177; 19.09</ns0:cell><ns0:cell>16.94 &#177; 17.20</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>16.26 &#177; 20.18</ns0:cell><ns0:cell>17.60 &#177; 18.71</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>16.09 &#177; 19.24</ns0:cell><ns0:cell>17.47 &#177; 18.37</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>MAPE analysis (at the peaks of the time series) of each scenario considering the software requirements. in a blue shaded area. At the end of the time series, the darkest line is the real values plotted over the predicted values in blue. The lines plotted vertically represent the changepoints.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>h</ns0:cell><ns0:cell cols='2'>MAPE (Mean &#177; SD)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1) Automatic changepoint</ns0:cell><ns0:cell>(2) Specifying the changepoints</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>10.65 &#177; 8.41</ns0:cell><ns0:cell>10.30 &#177; 8.06</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>11.61 &#177; 8.80</ns0:cell><ns0:cell>11.00 &#177; 8.71</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>11.81 &#177; 8.86</ns0:cell><ns0:cell>11.42 &#177; 8.52</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>11.49 &#177; 8.71</ns0:cell><ns0:cell>11.19 &#177; 8.34</ns0:cell></ns0:row></ns0:table><ns0:note>bounds</ns0:note></ns0:figure> <ns0:note place='foot' n='12'>/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64857:2:0:NEW 12 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
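A note on the improvement figure quoted with Table 6: one plausible derivation of the reported 9.33% gain of automatic changepoints is from the horizon-averaged MAPE values in the table (the value obtained here differs slightly only because the table entries are rounded):

\overline{\mathrm{MAPE}}_{\mathrm{automatic}} = \frac{13.82 + 15.58 + 16.26 + 16.09}{4} = 15.44, \qquad
\overline{\mathrm{MAPE}}_{\mathrm{specified}} = \frac{15.47 + 16.94 + 17.60 + 17.47}{4} = 16.87,

\frac{16.87 - 15.44}{15.44} \approx 0.093 \;(\approx 9.3\%).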
"Authors’ Response to Reviews Temporal dynamics of Requirements Engineering from mobile app reviews Vitor Mesaque Alves de Lima, Adailton Ferreira de Araújo, and Ricardo Marcondes Marcacini PeerJ Computer Science RC: Reviewer Comment, AR: Author Response, □ Manuscript text The authors would like to thank the area editor and the reviewers for their precious time and invaluable comments. We have carefully addressed all the comments. We provided a point-by-point response to the reviewer’s comments. We’ve added an extra version of the article with a highlight in the parts of the text that have been modified (attached to this document). The corresponding changes and refinements made in the revised article are summarized in our response below. 1. Editor RC: [ PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ ] AR: We thank the editor for the considerations, and we have tried to attend to each one of them. We have included all the answers to the questions we provided to the reviewers in the manuscript. We also considered the directions given to prepare the rebuttal letter. 2. Reviewer 1 2.1. Basic reporting RC: All good. AR: We are grateful for the time spent reviewing the article. 2.2. Experimental design RC: All good. AR: We are grateful for the time spent reviewing the article. 1 2.3. Validity of the findings RC: All good. AR: We are grateful for the time spent reviewing the article. 2.4. Additional comments RC: All good. AR: We are grateful for the time spent reviewing the article. 3. Reviewer 2 3.1. Basic reporting RC: No comment. 3.2. Experimental design RC: No comment. 3.3. Validity of the findings RC: The findings are explained in a more solid way due to the introduction of a discussion section. Nevertheless half of this rather short section is spent on the limitations and not on discussion the findings. I would like to see a deeper discussions of the interesting findings. AR: Thank you very much for your considerations. We fully agree. We rewrote the “Discution” section, including more details about the main findings in the experimental evaluation results. In addition, we concluded that the “Discution” section should cover some information from the “Limations” section. In this way, the Discussion section is more detailed, and the Limitations section is more objective. The full text in PDF with the tracking of all changes is attached to the submission form. At Rebuttal Letter, we prefer to place the modified text in full to facilitate its analysis. In “DISCUSSION” section (lines 432 to 501): Timely and effective detection of software requirements issues is crucial for app developers. The results show that MAPP-Reviews can detect significant points in the time series that provide information about the future behavior of a software requirement, allowing software engineers to anticipate the identification of emerging issues that may affect app evaluation. 
An issue related to a software requirement reported in user reviews is defined as an emerging issue when there is an upward trend for that requirement in negative reviews. Our method trains predictive models to identify requirements with higher negative evaluation trends, but a negative review will inevitably impact the rating. However, our objective is to mitigate this negative impact. The prediction horizon (h) is an essential factor in detecting emerging issues to mitigate negative impacts. Software engineers and the entire development team need to know as soon as possible about software problems 2 to anticipate them. In this context would not be feasible to predict the following months as it is tough to find a correlation between what happens today and what will happen in the next few months about bug reports. Therefore, MAPP-Reviews forecasts at the week level. This strategy allows us to identify the issues that are starting to happen and predict whether they will worsen in the coming weeks. Even at the week level, the best forecast should be with the shortest forecast horizon, i.e., one week (h = 1). A longer horizon, i.e., three (h = 3) or four weeks (h = 4), could be too late to prevent an issue from becoming severe and having more impact on the overall app rating. The experimental evaluation shows that our method obtains the best predictions with the shortest horizon (h = 1). In practical terms, this means that MAPP-Reviews identifies the initial trend of a defective requirement a week in advance. In addition, we can note that a prediction error rate (MAPE) of up to 20% is acceptable. For example, consider that the prediction is 1000 negative reviews for a specific requirement at a given point, but the model predicts 800 negative reviews. Even with 20% of MAPE, we can identify a significant increase in negative reviews for a requirement and trigger alerts for preventive software maintenance, i.e., when MAPP-Reviews predicts an uptrend, the software development team should receive an alert. In the time series forecast shown in Figure 7, we observe that the model would be able to predict the peaks of negative reviews for the software requirement one week in advance. The forecast presented in Figure 7 shows that the model was able to predict the peak of negative reviews for the “Arriving time’ requirement. An emerging issue detection system based only on the frequency of a topic could trigger many false detections, i.e., it would not detect defective functionality but issues related to the quality of services offered. Analyzing user reviews, we found that some complaints are about service issues rather than defective requirements. For example, the user may complain about the delay in the delivery service and negatively rate the app, but in reality, they are complaining about the restaurant, i.e., a problem with the establishment service. We’ve seen that this pattern of user complaints is repeated across other app domains, not just the food delivery service. In delivery food apps, these complaints about service are constant, uniform, and distributed among all restaurants available in the app. In Table 4, it is clear that the emerging issue refers to the deficient implementation of the estimated delivery time prediction functionality. Our results show that when there is a problem in the app related to a defective software requirement, there are increasing complaints associated with negative reviews regarding that requirement. An essential feature in MAPP-Reviews is changepoints. 
Assume that a time series represents the evolution of a software requirement over time, observing negative reviews for this requirement. Also, consider that time series frequently have abrupt changes in their trajectories. Given this, the changepoints describe abrupt changes in the time series trend, i.e., means a specific date that indicates a trend change. Therefore, specifying custom changepoints becomes significantly important for the predictive model because the uptrend in time series can also be associated with domain knowledge factors. By default, our model will automatically detect these changepoints. However, we have found that specifying custom changepoints improves prediction significantly in critical situations for the emerging issue detection problem. In general, the automatic detection of changepoints had better MAPE results in most evaluations. However, the custom changepoints obtained the best predictions at the time series peaks for all horizons (h = 1 to h = 4) of experiment simulations. Our experiment suggests a greater interest in identifying potential defective requirements trends in the time series peaks. As a result, we conclude that specifying custom changepoints in the predictive model is the best strategy to identify potential emerging issues. Furthermore, the results indicate the potential impact of incorporating changepoints into the predictive model using the information of app developers, i.e., defining specific points over time with a meaningful influence on app evaluation. In addition, software engineers can provide sensitive company data and domain knowledge to explore and improve the predictive model potentially. For this purpose, we depend on sensitive company data related to the software development and management process, e.g., release planning, server failures, and marketing campaigns. In particular, we can investigate the relationship between the release dates of app 3 updates and the textual content of the update publication with the upward trend in negative evaluations of a software requirement. In a real-world scenario in the industry, software engineers using MAPP-Reviews will provide domain-specific information. We show that MAPP-Reviews provides software engineers with tools to perform software maintenance activities, particularly preventive maintenance, by automatically monitoring the temporal dynamics of software requirements. The results of our research show there are new promising prospects for the future, and new possibilities for innovation research in this area emerge with our results so far. We intend to explore further our method to deeply determine the input variables that most contribute to the output behavior and the non-influential inputs or to determine some interaction effects within the model. In addition, sensitivity analysis can help us reduce the uncertainties found more effectively and calibrate the model. In “Limitations” subsection (lines 502 to 518): Despite the significant results obtained, we can still improve the predictive model. In the scope of our experimental evaluation, we only investigate the incorporation of software domain-specific information through trend changepoints. Company-sensitive information and the development team’s domain knowledge were not considered in the predictive model because we don’t have access to this information. Therefore, we intend to evaluate our proposed method in the industry and explore more specifics of the domain knowledge to improve the predictive model. 
Another issue that is important to highlight is the sentiment analysis in app reviews. We assume that it is possible to improve the classification of negative reviews by incorporating sentiment analysis techniques. We can incorporate a polarity classification stage (positive, negative, and neutral) of the extracted requirement, allowing a software requirements-based sentiment analysis. In the current state of our research, we only consider negative reviews with low ratings and associate them with the software requirements mentioned in the review. Finally, to use MAPP-Reviews in a real scenario, there must be already a sufficient amount of reviews distributed over time, i.e., a minimum number of time-series observations available for the predictive model to work properly. Therefore, in practical terms, our method is more suitable when large volumes of app reviews are available to be analyzed. 3.4. Additional comments RC: The paper covers an interesting and promising topic. It is written in a clear and well understandable language. The authors provided a significant improvement over the first version. AR: We are grateful for the time spent reviewing the article and the essential contributions to our research. 4 "
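Regarding the polarity-classification stage discussed in the Limitations above, a minimal sketch of what such a stage could look like is given below. It is not part of MAPP-Reviews; the chosen checkpoint (cardiffnlp/twitter-roberta-base-sentiment-latest, a three-class negative/neutral/positive model) and the example review are assumptions for illustration only.

from transformers import pipeline

# Hypothetical three-class polarity classifier; any comparable checkpoint could be used.
polarity = pipeline("text-classification",
                    model="cardiffnlp/twitter-roberta-base-sentiment-latest")

# In practice this would be applied to the review sentence containing an extracted
# requirement (e.g., "delivery time"), yielding a requirement-level polarity.
review = "The estimated delivery time keeps changing and my order arrived cold."
print(polarity(review))  # e.g., [{'label': 'negative', 'score': 0.9...}]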
Here is a paper. Please give your review comments after reading it.
329
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The plaque assay is a standard quantification system in virology for verifying infectious particles. One of the complex steps of plaque assay is the counting of the number of viral plaques in multiwell plates to study and evaluate viruses. Manual counting plaques are time-consuming and subjective. There is a need to reduce the workload in plaque counting and for a machine to read virus plaque assay; thus, herein, we developed a machinelearning (ML)-based automated quantification machine for viral plaque counting. The machine consists of two major systems: hardware for image acquisition and ML-based software for image viral plaque counting. The hardware is relatively simple to set up, affordable, portable, and automatically acquires a single image or multiple images from a multiwell plate for users. For a 96-well plate, the machine could capture and display all images in less than 1 min. The software is implemented by K-mean clustering using ML and unsupervised learning algorithms to help users and reduce the number of setup parameters for counting and is evaluated using 96-well plates of dengue virus.</ns0:p><ns0:p>Bland-Altman analysis indicates that more than 95% of the measurement error is in the upper and lower boundaries [&#177;2 standard deviation]. Also, gage repeatability and reproducibility analysis showed that the machine is capable of applications. Moreover, the average correct measurements by the machine are 85.8%. The ML-based automated quantification machine effectively quantifies the number of viral plaques.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>A plaque assay is one of the standard measurements <ns0:ref type='bibr' target='#b0'>(Dulbecco, 1952)</ns0:ref> for viable viruses in virology laboratories. Applications of this method range from basic research through drug discovery to vaccine development <ns0:ref type='bibr' target='#b1'>(Premsattham et al., 2019)</ns0:ref>. In principle, viruses infect the cell monolayer in a semisolid medium, thus limiting only the horizontal spread. The area of cell death (i.e. plaque) is visualized under a microscope before staining with neutral red or crystal violet (BioTek Instruments, 2021; <ns0:ref type='bibr'>Cacciabue et al., 2019)</ns0:ref>. The plate is manually counted and calculated to quantify the virus. The manual counting of plaques is tedious and requires welltrained personnel for verification. The plaque assay can be performed in low to medium throughput formats, such as 6-, 12-, or 24-well plates. This method requires more reagents and highly skilled operators to appropriately perform the assay. Recently, a simplified microwell plaque titration assay was developed with Herpes simplex <ns0:ref type='bibr' target='#b4'>(Bhattarakosol, Yoosook &amp; Cross, 1990</ns0:ref>) and dengue viruses <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016)</ns0:ref>. Automated data acquisition and quantification methods have been developed under a flatbed scanner <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Sullivan et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. 
For example, Cellular Technology Limited developed a commercial Elispot and viral plaque-counting machine called ImmunoSpot CLT Analyzers (e.g. Cellular Technology Limited, 2021). The machines can automatically acquire high-resolution images for each well plate with a high speed and automatically count plaque using software. Even though the software focuses on counting Elispot assays the company can optimize the setup, such as plaque intensity and size to develop programs for counting viral plaques for each well plate <ns0:ref type='bibr' target='#b8'>(Smith et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Sukupolvi-Petty et al., 2013)</ns0:ref>. However, the program has limitations: it cannot be easily optimized and standardized for various viral plaques. Moreover, commercial machines and their services are often expensive and proprietary.</ns0:p><ns0:p>For plaque counting on a personal computer, general image-processing tools, such as ImageJ, OpenCV, Labview, R, have been employed <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016;</ns0:ref><ns0:ref type='bibr'>Rasband, 2015</ns0:ref> <ns0:ref type='formula'>2016</ns0:ref>) developed an ImageJ program and employed it for a modified 96-well plaque assay for the dengue virus. A flatbed scanner was used in the assay to acquire the 96-well plate image before cropping it into each well image. However, the plate must be contrast-enhanced by adding nontransparent liquid, such as milk, before scanning, which increases the workload. Work in several studies <ns0:ref type='bibr' target='#b12'>(Cai Z et al., 2011</ns0:ref><ns0:ref type='bibr'>, Cacciabue, Curr&#225;, &amp; Gismondi, 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Geissmann, 2013</ns0:ref><ns0:ref type='bibr' target='#b16'>, Katzelnick et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Moorman &amp; Dong, 2012)</ns0:ref> have developed programs to count viral plaque based on image segmentation, morphological analysis, and image threshold, but the programs require imageprocessing knowledge to implement and may not be suitable for some assays. To date, no machine learning (ML)-based program has been developed for plaque counting.</ns0:p><ns0:p>Due to the need of reducing the number of staff that conduct viral plaque assay, we developed an automated quantification machine based on ML. The machine comprises the hardware system for image acquisition and ML-based software for image viral plaque counting. The hardware is relatively simple to set up, affordable, portable, and automatically acquires a single image or multiple images from a multiwell plate for users. The software is implemented using K-mean clustering, which is a ML algorithm and unsupervised learning algorithm to help users. The algorithm helps by reducing the number of setup parameters for counting.</ns0:p><ns0:p>The automated quantification machine was built and evaluated with 96-well plates of dengue virus. The processes for standardizing the manual and software counting algorithm and evaluating the performance of the system on several plaque images are described in the next section. The counting results from the machine are consistent with those from manual counting.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>A. Cell and Viruses</ns0:head><ns0:p>Simplified dengue microwell plaque assays were performed following the procedure reported in <ns0:ref type='bibr' target='#b5'>Boonyasuppayakorn et al. (2016)</ns0:ref>. 
In summary, the10-fold serially diluted viruses in a maintenance medium to 10 &#8722;6 were used as the reference. LLC/MK2 (ATCC&#174;CCL-7) cells at 1 x 10 4 cells per well (50 &#956;l) of a 96-well plate were mixed with all dilutions and undiluted samples. Then, the dilution was prepared and the cells were incubated as explained in <ns0:ref type='bibr' target='#b5'>Boonyasuppayakorn et al. (2016)</ns0:ref>. Finally, the dengue virus cells were fixed and then stained as the standard assay. The number of plaque-forming units (p.f.u.) per ml was determined manually and using the automated quantification machine for evaluation.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Automated Quantification Machine</ns0:head><ns0:p>The automated quantification machine was developed to acquire viral plaque images of each well plate and automatically display the counting results in an easy-to-use manner. The major components of the machine are the hardware for image acquisition and ML-based software for image viral plaque counting.</ns0:p></ns0:div> <ns0:div><ns0:head>1) Hardware for Image Acquisition</ns0:head><ns0:p>The automated quantification machine was developed to reduce workload and enable personal use in laboratories. Thus, the machine is relatively simple to set up, affordable, portable, and automatically acquires single or multiple images from multiwell plates for users. Then, the hardware components of the machine include an IAI Tabletop, Model TT-A3-I-2020-10B-SP (IAI Robot, 2021), a USB microscope camera, Dino-Lite AM4113T (AnMo Electronics Corporation, 2021), and an adjustable light source panel, 150 mm &#215; 150 mm surface LED illumination source. A photo and a schematic of the automated quantification machine are shown in Figs. <ns0:ref type='figure' target='#fig_15'>1A and 1B</ns0:ref>, respectively. The IAI Tabletop is a 3-axis Cartesian robot with a working space of 200 mm &#215; 200 mm &#215; 100 mm (x-, y-, and z-axis). Its repeating positioning accuracy is less than &#177;0.02 mm. The x-axis of the IAI Tabletop is installed with the multiwell plate fixture and the light source panel and the z-axis is installed with a camera holder and a USB microscope camera. With the Cartesian robot configuration, the IAI Tabletop can move a multiwell plate along the x-y plane for automatic and accurate positioning to capture the well images with the USB microscope camera and move the USB microscope camera along the z-axis to automatically focus the image.</ns0:p><ns0:p>The USB microscope camera is a color camera with a resolution of 1280 &#215; 1024 pixels and 10&#215;-50&#215;, 220&#215; magnification range. It is equipped with an LED coaxial light source. With low magnification, the camera can capture a full-size well image of a six-well plate, and with medium magnification, the camera can capture a full-size well image of a ninety-six-well plate. To change the magnification of the camera, the magnification ring is manually adjusted.</ns0:p><ns0:p>The adjustable light source panel is a square surface LED illumination source. It is installed on the x-axis of the IAI Tabletop and underneath the multiwell plate fixture. The light source panel has high parallelism of light. The high parallelism of the light source panel renders the object edges clearer and sharper in an image. For viral plaque counting, the back-light technique is employed since it gives better image quality than the bright-and dark-field techniques. 
The back-light technique can be performed by turning off the LED coaxial light source and turning on the light source panel.</ns0:p><ns0:p>To control the machine hardware, the hardware is connected to a personal computer via USB cables and is controlled by a window application developed using C# language. The window application communicates to the IAI Tabletop by PSEL protocol to control the position of the multiwell plate. It also communicates to the USB microscope camera by DNVIT SDK to control the coaxial light source and image acquisition (AnMo Electronics Corporation, 2021). Images are saved in PNG format. The user interface and user experience (UI/UX) design was considered in creating the window application. The window application consists of 1. calibration routine 2. single image and multiwell plate image acquisition 3. Firebase database (Google, 2021).</ns0:p><ns0:p>The calibration routine is used to calibrate the movement of the IAI Tabletop. A screenshot of the calibration control is shown in Fig. <ns0:ref type='figure' target='#fig_16'>2A</ns0:ref>. Due to the installation process, the coordinate of the multiwell plate fixture is not the same as the coordinate of the IAI Tabletop. This causes the misalignment of the well center position when capturing an image. To apply the calibration routine, the user can work in two steps. The first step is to control the IAI Tabletop by UI control to locate the A1, A12, and H1 well center positions, as shown in Fig. <ns0:ref type='figure' target='#fig_16'>2A</ns0:ref>. The second step is to click the calibration routine button. Then, the algorithm automatically recalculates the new coordinate of the IAI Tabletop and makes the coordinate of the multiwell plate fixture and IAI Tabletop the same.</ns0:p><ns0:p>The single and multiwell plate image acquisition allows the user to select how to acquire well images. A screenshot of the image acquisition control is shown in Fig. <ns0:ref type='figure' target='#fig_16'>2B</ns0:ref>. With the single image acquisition, the user only selects or keys in the desired well number, such as A1, B2, and H12. Then, the machine moves, captures, and displays the desired well image. Moreover, with the multiwell plate image acquisition, the user only clicks one button to make the machine automatically capture and display all multiwell plates. For the 96-well plate, the machine can capture and display all images within less than 1 min. Once images are captured, the user may use the ML-based software for image viral plaque counting.</ns0:p><ns0:p>The Firebase database is used to collect all results of the machine for backup. The data in the Firebase database allows new features of the machine to be developed in the future, such as the data storage and exploration features, data analysis tools, data dashboard, web applications, and remote operation. These will provide greater convenience to users. </ns0:p></ns0:div> <ns0:div><ns0:head>2) Machine Learning Software for Image Viral Plaque Counting</ns0:head><ns0:p>The algorithm for image viral plaque counting is ML and K-mean clustering <ns0:ref type='bibr' target='#b19'>(Arora &amp; Varshney, 2016)</ns0:ref>. It helps users and reduces the number of setup parameters for counting. 
The algorithm is also implemented in a window application developed in Visual Studio with the C# language and the image-processing library MVTec Halcon (MVTec Software GmbH, 2021a).</ns0:p><ns0:p>The viral plaque is surrounded by noninfected cells and is identified using a counterstain, typically a neutral red or crystal violet solution. The white areas indicate the virus plaques, which cause the solution around them to degrade. To count the viral plaques, three major steps are performed: locating the region of interest (ROI) of the well, enhancing the image and converting it to a binary image, and counting the viral plaque regions.

The region of interest (ROI) is the focus region, which is important for counting viral plaques. The ROI is determined first, and the subsequent algorithms are then applied to the correct location to prevent algorithm errors. To locate the well position, a shape-based matching algorithm is employed to find the best matches of a shape model in an image (MVTec Software GmbH, 2021b). It does not use the gray values of pixels and their neighborhood as a template; instead, it defines the model by the shape of contours. In this case, the circular shape of the well is used as the template. Then, the algorithm finds the best-matched instance of the shape model. The position and rotation of the found instance of the model are returned as pixel coordinates and an angle, which are used to create the ROI image.</ns0:p><ns0:p>There are many techniques for filtering color images for image enhancement, including converting the RGB color space to the HSV color space or to a binary image and applying a mean filter. Herein, the CIELAB color space is employed (MVTec Software GmbH, 2021a; Ly, 2020). It presents a quantitative relationship of color on three axes: L* represents the lightness, a* represents the red-green component of a color, and b* represents the yellow-blue component of a color. The CIELAB color space decouples the lightness and the color of an image; thus, image brightness has less influence on the image processing. Only a* and b* are used to convert the image to a binary image. The a* and b* values of the image pixels in the ROI are clustered into two groups, a white group and a dark group, by K-mean clustering, as shown in Fig. <ns0:ref type='figure' target='#fig_17'>3A</ns0:ref>. The yellow area represents the white group or white area, and the blue area represents the dark group or dark area. The red dot represents the centroid of each area. The resulting binary image is shown in Fig. <ns0:ref type='figure' target='#fig_17'>3B</ns0:ref>. After this process, a mean filter is applied to the binary image. To count the number of viral plaques, a simple image threshold is employed for region selection. The suspected viral plaque regions are selected and analyzed. An example of suspect regions is shown in Fig. <ns0:ref type='figure' target='#fig_18'>4A</ns0:ref>. If the size, length, area, and roundness of a region are appropriate, the region is counted as a viral plaque. An example is shown in Fig. <ns0:ref type='figure' target='#fig_18'>4B</ns0:ref>. However, if the size and area of a region are too large because plaques overlap, K-mean clustering is employed within the region. To do that, grid points are generated inside the large region along the x-y coordinates, as shown in Fig. <ns0:ref type='figure' target='#fig_18'>4C</ns0:ref>. Then, K-mean clustering is employed to cluster the grid points. Moreover, to determine the optimal value of k, i.e., the appropriate number of clusters, the Silhouette algorithm (Ogbuabor &amp; Ugwoke, 2018) is implemented. The grid points are clustered into k clusters by K-mean clustering.
Then, the average Silhouette coefficients for each k cluster are calculated. The Silhouette plot is shown in Fig. <ns0:ref type='figure' target='#fig_18'>4D</ns0:ref>. The maximum value of the average Silhouette coefficients indicates the optimal number of clusters. Finally, the number of viral plaques is the sum of the optimal number of clusters and the previously counted number of viral plaques (Fig. <ns0:ref type='figure' target='#fig_18'>4E</ns0:ref>). A flowchart of the ML-based software for image viral plaque counting is shown in Fig. <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, and an example of the machine counting results is shown in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Viral Plaque Reading Differences by Humans</ns0:head><ns0:p>Plaque assays were created at the Medical Virology Research Center, Chulalongkorn University, Thailand. More than 25 96-well plates of dengue virus were used to evaluate differences in viral plaque readings. The number of viral plaques in the 96-well plates was read by six experts; 1,777 wells could be read by all experts. The remaining wells could not be read by the experts, either because they were unclear or because no viral plaques were present; hence, they were not included in the evaluation. The number of viral plaques read by the experts ranged between 0 and 31. To evaluate variations in expert readings, a boxplot of expert reading results for a given number of viral plaques is presented in Fig. <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>. Since the actual number of viral plaques is unknown, the mode value of the six expert readings was used as the reference in this case. Data that had no mode were not included in the evaluation. Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows that the expert reading distributions in the range of 0-3 are small, but the distribution of expert readings widens as the number of viral plaques increases. When the number of viral plaques exceeds 12, the range of possible readings by an expert varies by more than a factor of 5.</ns0:p><ns0:p>To understand the variation in expert readings, a gage repeatability and reproducibility (R&amp;R) analysis was employed (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005). A random sample of 300 wells (part) with and without viral plaques was used, in which the number of viral plaques varied from 0-25. Each well was subsequently read by six experts/operators. The gage R&amp;R results were evaluated using a nested model (The MathWorks Inc., 2021), and the expert data were analyzed in MATLAB using the MATLAB Toolbox (The MathWorks Inc., 2021).</ns0:p><ns0:p>The gage R&amp;R analysis was performed for various numbers of viral plaques in six ranges (0-1, 2-3, 4-8, 9-10, 11-15, and 16-25) as well as for the overall range of 0-25. The results are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> indicates that in the range of 0-1, the variation or sigma of expert readings according to the R&amp;R process is 0.9406. The variation or sigma increases as the range of viral plaque numbers increases, reaching as high as 4.6900 in the range of 16-25. For the overall range of 0-25, the variation or sigma is 2.8063.
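As noted above, the reference count for each well is the mode of the six expert readings, and wells without a unique mode are discarded. A minimal sketch of that rule follows; the class and method names are hypothetical.

```csharp
using System;
using System.Linq;

// Hypothetical sketch of the reference rule: the mode of the expert readings,
// discarding wells for which no unique mode exists.
public static class ExpertReference
{
    public static bool TryGetReferenceCount(int[] expertCounts, out int reference)
    {
        var groups = expertCounts
            .GroupBy(c => c)
            .Select(g => new { Count = g.Key, Votes = g.Count() })
            .OrderByDescending(g => g.Votes)
            .ToList();

        if (groups.Count == 0) { reference = 0; return false; }

        // A unique mode requires a single most-frequent value.
        bool unique = groups.Count == 1 || groups[0].Votes > groups[1].Votes;
        reference = unique ? groups[0].Count : 0;
        return unique;
    }
}
```

For example, readings of {5, 5, 6, 5, 4, 5} yield a reference of 5, whereas {3, 3, 4, 4, 2, 5} has no unique mode and would be excluded.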
The reading distribution and reading variation values were used to design the experiment setup described in the following section. </ns0:p></ns0:div> <ns0:div><ns0:head>Experiment</ns0:head><ns0:p>More than 25 96-well plates of dengue virus were used to evaluate the automated quantification machine. The 96-well plates were put in the machine, and their images were captured. Then, the images were screened for evaluation. In total, 1,777 images qualified for evaluation. Next, the number of viral plaques in each image was manually determined by six experts from the research center and by the automated quantification machine software (note that 721 images showed 0 viral plaques). The number of viral plaques was evaluated based on the mode of the expert readings as well as by the machine.</ns0:p><ns0:p>When an image contained many viral plaques, the image was ambiguous, which made it difficult for the experts to read. Therefore, an experimental counting criterion was defined based on the reading distribution in Fig. <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>, the variation in expert readings in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, and the criterion of the Medical Virology Research Center. If the difference between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>), the number of viral plaques counted by the machine is considered correct. This counting criterion was employed in the evaluation, and the results are presented in the Results section.</ns0:p></ns0:div> <ns0:div><ns0:head>TABLE 2. Maximum number of errors for the software evaluation</ns0:head><ns0:p>The gage R&amp;R analysis was employed to evaluate the repeatability and reproducibility of the machine (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005). For the analysis, 240 wells (part) with and without viral plaques were used. For this sample, the number of viral plaques in the 240 wells varied from 0 to 18. Then, each well was put into the machine, which captured the image and counted the number of viral plaques. This procedure was repeated three times to represent three operators using the machine to count the number of viral plaques. Finally, each well had three measurement results from the machine. The gage R&amp;R using the nested model was employed to evaluate the results (The MathWorks Inc., 2021).</ns0:p><ns0:p>MATLAB and the MATLAB Toolbox were used to perform the analysis, including the gage R&amp;R analysis, of the machine data.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>A. Distribution Software Counting for Each Number of Viral Plaques</ns0:head><ns0:p>The measurement of the number of viral plaques is depicted in Fig. <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>. There are 1,777 measurements in the range of 0 to 29 viral plaques. For each number of viral plaques, a boxplot is used to show the mean and distribution of the measurements by the machine. The red + marks represent the outliers of the measurements, which comprise less than 5% of the measurements. The boxplot shows that in the range of 0 to 9 viral plaques, the standard deviation (SD) of the measurement is less than 1, and the SD increases as the number of viral plaques increases.</ns0:p><ns0:p>The blue line having a slope of 1 represents the measurement reference values.
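For clarity, the acceptance rule from the Experiment section (Table 2) can be written as a small check such as the sketch below; it simply encodes the maximum number of errors for each reference count and is not tied to any particular implementation.

```csharp
using System;

// Hypothetical sketch of the acceptance rule from Table 2: a machine count is
// treated as correct when it differs from the expert reading mode by no more
// than the maximum number of errors allowed for that reference count.
public static class CountingCriterion
{
    public static int MaxAllowedError(int referenceCount)
    {
        if (referenceCount <= 1) return 0;   // 0-1 plaques   -> +/-0
        if (referenceCount <= 3) return 1;   // 2-3 plaques   -> +/-1
        if (referenceCount <= 8) return 2;   // 4-8 plaques   -> +/-2
        if (referenceCount <= 10) return 3;  // 9-10 plaques  -> +/-3
        if (referenceCount <= 15) return 4;  // 11-15 plaques -> +/-4
        return 5;                            // >=16 plaques  -> +/-5
    }

    public static bool IsCorrect(int referenceCount, int machineCount) =>
        Math.Abs(machineCount - referenceCount) <= MaxAllowedError(referenceCount);
}
```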
The red dashed line represents the trend line of the measurement results of the machine. The number of viral plaques obtained by the machine ranges from 0 to 29, as calculated from Fig. <ns0:ref type='figure' target='#fig_22'>8A</ns0:ref> using Eq. (<ns0:ref type='formula'>1</ns0:ref>). The goodness-of-fit gave an R² of 0.8408. y = 0.8054 &#215; x, (1a) Goodness-of-fit: R² = 0.8408, (1b) where y is the number of viral plaques by the machine, and x is the number of viral plaques by the expert. The values range from 0 to 29. For Fig. <ns0:ref type='figure' target='#fig_22'>8B</ns0:ref>, the trend line of the measurement results of the machine covers the number of viral plaques ranging from 0 to 12, as calculated using Eq. (<ns0:ref type='formula'>2</ns0:ref>). The goodness-of-fit gave an R² of 0.8870. y = 0.9539 &#215; x, (2a) Goodness-of-fit: R² = 0.8870. (2b)</ns0:p><ns0:p>In the range of 0 to 29 viral plaques, the slope of the trend line of the machine is 0.8054, whereas, in the range of 0 to 12 viral plaques, the slope is 0.9539. For a number of viral plaques in the range of 0-12, the line has a slope close to 1 with an R² of 0.8870. Therefore, the machine and expert counting results agree best in this range. Moreover, to evaluate the correlation between the counting by the expert and by the machine, Pearson's coefficient was used to measure the strength of the association between the machine and manual counting <ns0:ref type='bibr'>(Mukaka, 2021)</ns0:ref>. The Pearson coefficient r of these variables is 0.9221 with a p-value of less than 0.0001. Since the p-value is less than 0.05, the correlation is statistically significant.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Bland-Altman</ns0:head><ns0:p>To evaluate the performance of the machine, the Bland-Altman plot was used <ns0:ref type='bibr' target='#b28'>(Myles, 2007)</ns0:ref>. The measurements from the expert and the machine were considered. The difference between the measurement results from the expert and the machine, the error mean bias, and the error SD were determined. The Bland-Altman plot is shown in Fig. <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>. The results indicate that 95.01% of the measurement errors are within the upper and lower boundaries (&#177;2 SD). Therefore, the error from the machine is within the boundary limit. The machine effectively and efficiently quantified the number of viral plaques. </ns0:p></ns0:div> <ns0:div><ns0:head>C. Performance of the Machine</ns0:head><ns0:p>Counting viral plaques is ambiguous, even for an expert. Thus, counting criteria were defined, as shown in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. If the error between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors, as shown in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, the number of viral plaques counted by the machine is considered correct.</ns0:p><ns0:p>The counting error between the expert mode and the machine is presented using a boxplot (Fig. <ns0:ref type='figure' target='#fig_13'>10</ns0:ref>). The magenta line represents the maximum number of errors for each reference number of viral plaques. Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref> shows that in the range of 0-18 viral plaques, the mean measurements of the machine are within the error boundary, and the error is acceptable.
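The summary statistics reported in the preceding subsections (the through-origin trend line, the Pearson correlation, and the Bland-Altman bias and limits) can be reproduced from the paired expert/machine counts with a short routine such as the following sketch; it is illustrative only and is not the MATLAB analysis used in this study.

```csharp
using System;
using System.Linq;

// Hypothetical sketch: summary statistics for paired expert/machine counts.
public static class AgreementStats
{
    // Least-squares slope of a trend line forced through the origin (y = b * x).
    public static double SlopeThroughOrigin(double[] x, double[] y) =>
        x.Zip(y, (xi, yi) => xi * yi).Sum() / x.Sum(xi => xi * xi);

    // Pearson correlation coefficient r.
    public static double Pearson(double[] x, double[] y)
    {
        double mx = x.Average(), my = y.Average();
        double cov = x.Zip(y, (xi, yi) => (xi - mx) * (yi - my)).Sum();
        double sx = Math.Sqrt(x.Sum(xi => (xi - mx) * (xi - mx)));
        double sy = Math.Sqrt(y.Sum(yi => (yi - my) * (yi - my)));
        return cov / (sx * sy);
    }

    // Bland-Altman bias and limits of agreement for (machine - expert) differences.
    public static (double bias, double lower, double upper) BlandAltman(double[] expert, double[] machine)
    {
        double[] diff = expert.Zip(machine, (e, m) => m - e).ToArray();
        double bias = diff.Average();
        double sd = Math.Sqrt(diff.Sum(d => (d - bias) * (d - bias)) / (diff.Length - 1));
        return (bias, bias - 2 * sd, bias + 2 * sd);
    }
}
```

Here the limits of agreement are taken as the bias ±2 SD of the differences, matching the boundaries used in Fig. 9.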
For the range of 19-29 viral plaques, the error is large and falls outside the boundary. Consequently, when the number of viral plaques is larger, it is more difficult to measure the number of viral plaques. However, the reading accuracy at such a large number of plaques is not critical for quantifying viral infection or for research. Moreover, only 55 of the 1,777 images are in the range of 19-29 viral plaques. Based on the criteria in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, the percentage of correct and incorrect measurements by the machine is shown in Fig. <ns0:ref type='figure' target='#fig_14'>11</ns0:ref>. The light blue bar represents the percentage of correct measurements in which the machine measurement and the expert reading mode are exactly the same. The dark blue bar represents the percentage of correct measurements in which the machine measurement is within the defined range of the expert reading mode, and the red bar represents the percentage of incorrect measurements by the machine. The percentage of correct measurements is high when the number of viral plaques is low. In the range of 0-29 viral plaques, the percentage of correct measurements is more than 80%. In addition, the average correct measurement by the machine is 85.8%, comprising exact matches (60.0%) and within-range matches (25.8%). </ns0:p></ns0:div> <ns0:div><ns0:head>D. R&amp;R of the Machine</ns0:head><ns0:p>The R&amp;R of the machine was evaluated by gage R&amp;R analysis according to Automotive Industry Action Group (2010) and <ns0:ref type='bibr' target='#b25'>Burdick, Borror &amp; Montgomer (2005)</ns0:ref>. The gage R&amp;R is shown in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. Previous authors (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005) suggest that the measurement system is acceptable if the number of distinct categories (NDC) is greater than or equal to 5. Moreover, for the percentage of gage R&amp;R of total variations (PRR), the measurement system is capable if the PRR is less than 10% and not capable if the PRR is more than 30%; otherwise, the measurement system is acceptable. The NDC of the machine is 6, and the PRR is 21.72%, which lies between the 10% and 30% criteria (Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>). Considering the cost of the machine and the preceding results and evaluations, the R&amp;R of the machine is acceptable, and the machine is suitable for the application. TABLE <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. Gage R&amp;R analysis parameters for the machine.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Counting the number of viral plaques is ambiguous, even for experts. Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>) based on the distribution and variation of expert readings. If the number of viral plaques is small, the maximum number of errors is small, and as the number of viral plaques increases, the maximum number of errors increases. However, in the case of a large number of viral plaques, the reading accuracy is not sufficient for quantifying viral infection or for viral research.</ns0:p><ns0:p>The automated quantification machine was evaluated using 96-well plates of dengue virus.
According to the analysis, the measurement results obtained using the machine correlate well with the manual counting results. For 0-29 viral plaques, the trend line of the machine results has a slope of 0.8054 with an R² of 0.8408, and in the range of 0-12 viral plaques, the slope is 0.9539 with an R² of 0.8870. The correlation in the range of 0-12 viral plaques is better than that in the range of 0-29 viral plaques. The Pearson coefficient r of the data is 0.9221 with a p-value of less than 0.0001. This shows that the machine counting results correlate with the manual counting results. Further, the Bland-Altman plot shows that more than 95% of the measurement errors are within the upper and lower boundaries (&#177;2 SD). Thus, the manual and machine counting results are consistent.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_22'>8B</ns0:ref>, for 0-12 viral plaques, the boxplot is close to the reference line, indicating that the manual and machine counting results are in good agreement. However, for 13-29 viral plaques, the boxplot mean is below the reference line, indicating that the measurement results of the machine are lower than the manual measurement results. This information can be used to improve the algorithm for measurement in the range of 13-29 viral plaques in future studies.</ns0:p><ns0:p>The large error in the results of the machine in the range of 13-29 viral plaques could be attributed to the criteria of the Silhouette algorithm. The Silhouette algorithm uses only the location information of the grid points to calculate the Silhouette score for selecting the suitable number of clusters. To improve the algorithm, other criteria, such as the number of grid points and the region areas, could be included in the calculation of a new Silhouette score.</ns0:p><ns0:p>Based on the gage R&amp;R analysis, the major variation in the PRR results from reproducibility, which arises when the user places the sample in the machine and measures the number of viral plaques. If the brightness of the ambient light changes, the image color may change, which affects the algorithm. This may be overcome by creating a cover for the machine to reduce the effect of ambient light. The repeatability of the machine is 0 since the counting algorithm has no variation.</ns0:p><ns0:p>According to the performance analysis, the average correct measurement of the machine is 85.8% in the range of 0-29 viral plaques. However, most of the errors occur when the number of viral plaques is high. If only small and medium numbers of viral plaques are evaluated, the average correct measurement of the machine would be more than 85.8%.</ns0:p><ns0:p>Even though automated imaging-based counters for plaque assays exist, the current commercial versions have multiple challenges, especially their complexity and cost. They may be unsuitable for personal use in the laboratory. Thus, in situations of limited resources, the developed machine may be more suitable for a personal laboratory. Additionally, the developed machine can be applied to Elispot counting <ns0:ref type='bibr' target='#b29'>[31]</ns0:ref>.
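For reference, the gage R&amp;R acceptance criteria used above (NDC of at least 5; PRR below 10% capable, above 30% not capable, and acceptable in between) can be expressed as a small check like the sketch below. The names are hypothetical, and the constant 1.41 follows the usual AIAG definition of the number of distinct categories.

```csharp
using System;

// Hypothetical sketch of the acceptance checks applied to the gage R&R output.
// sigmaGage, sigmaPart, and sigmaTotal are the standard deviations of the
// gage (R&R), part-to-part, and total variation, e.g. from Table 3.
public static class GageRnRCriteria
{
    public static int NumberOfDistinctCategories(double sigmaPart, double sigmaGage) =>
        (int)Math.Floor(1.41 * sigmaPart / sigmaGage);

    public static double PercentGageRnR(double sigmaGage, double sigmaTotal) =>
        100.0 * sigmaGage / sigmaTotal;

    public static string Classify(double sigmaPart, double sigmaGage, double sigmaTotal)
    {
        int ndc = NumberOfDistinctCategories(sigmaPart, sigmaGage);
        double prr = PercentGageRnR(sigmaGage, sigmaTotal);

        if (ndc < 5) return "not acceptable (NDC below 5)";
        if (prr < 10.0) return "capable";
        if (prr > 30.0) return "not capable";
        return "acceptable";
    }
}
```

With the values in Table 3 (gage sigma 0.9413, part sigma 4.2296, total sigma 4.3330), this check reproduces NDC = 6 and a PRR of about 21.7%, i.e., "acceptable".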
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:note type='other'>Computer Science Figure 11</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>; R Core Team, 2016; Cai Z et al., 2011, Cacciabue, Curr&#225;, &amp; Gismondi, 2019; Geissmann, 2013, Katzelnick et al., 2018; Moorman &amp; Dong, 2012). Boonyasuppayakorn et al. (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Automated quantification machine. (A) Prototype of automated quantification machine. (B) Schematic of automated quantification machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Screenshot of calibration control and image acquisition control. (A) Calibration control. (B) Image acquisition control.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021) Manuscript to be reviewed Computer Science taken to process each image: locating the well position, filtering the color image for image enhancement, and counting the number of viral plaques with K-mean clustering.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Converting RGB image to binary image. (A) a* and b* values of image pixels. (B) Binary image by K-mean clustering</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Viral plaque counting with K-mean clustering algorithm. (A) Suspect viral plaque regions. (B) Appropriated region. (C) Grid points inside the larger region. (D) Silhouette plot. (E) Final result.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Machine learning (ML)-based software for image viral plaque counting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Example of machine counting. (A) Counting results by an expert. 
(B) Counting by machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>(A) shows the distribution of readings by all experts for each number of viral plaques and Figure7(B) shows an example reading distribution by an expert for different numbers of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Expert reading distributions. (A) Reading distribution of all experts for each number of viral plaques. (B) Reading distribution examples of an expert for each number of viral plaques. The red + marks represent the outliners of the measurement. The blue line having the slope of 1 represents the measurement reference values. The red dashed line represents the trend line of the measurement results of experts.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Distribution software counting for each number of viral plaques. (A) Distribution of software counting for each number of viral plaques. (B) Distribution of software counting for the range [0-12] of number of viral plaques. The red + marks represent the outliners of the measurement. The blue line having the slope of 1 represents the measurement reference values. The red dashed line represents the trend line of the measurement results of the machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure: 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure: 9 Bland-Altman plot of the machine measurement results.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Distribution software error for each number of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Percentage of correct measurement by the machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Automated quantification machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Screenshot of calibration control and image acquisition control.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Converting RGB image to binary image.</ns0:figDesc><ns0:graphic coords='26,42.52,204.37,525.00,355.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Viral plaque counting with K-mean clustering algorithm.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Machine learning (ML)-based software for image viral plaque counting.</ns0:figDesc><ns0:graphic coords='28,42.52,178.87,525.00,409.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Example of machine counting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Expert reading 
distributions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Distribution software counting for each number of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure: 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure: 9 Bland-Altman plot of the machine measurement results.</ns0:figDesc><ns0:graphic coords='34,42.52,178.87,525.00,426.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Fig ure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Fig ure 10: Distribution software error for each number of viral plaques.</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,408.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Percentage of correct measurement by the machine.</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021)Manuscript to be reviewedComputer Science1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Number of Viral Plaques</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>TABLE 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Maximum number of errors for the software evaluation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>TABLE 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Maximum number of errors for the software evaluation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Number of</ns0:cell><ns0:cell cols='2'>Maximum Number</ns0:cell></ns0:row><ns0:row><ns0:cell>Viral Plaques</ns0:cell><ns0:cell cols='2'>of Errors</ns0:cell></ns0:row><ns0:row><ns0:cell>0-1</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>2-3</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>4-8</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>9-10</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>11-15</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>&gt;16</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>TABLE 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Gage R&amp;R analysis parameters for the machine. Gage R&amp;R of total variations (PRR): 21.72 Note: The last column of the above table does not to necessarily sum to 100%</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Source</ns0:cell><ns0:cell>Variance</ns0:cell><ns0:cell>%Variance</ns0:cell><ns0:cell>Sigma</ns0:cell><ns0:cell cols='2'>5.15 x sigma % 5.15 x sigma</ns0:cell></ns0:row><ns0:row><ns0:cell>Gage R&amp;R</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Repeatability</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>Reproducibility</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Operator</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Part</ns0:cell><ns0:cell>17.8891</ns0:cell><ns0:cell>95.2804</ns0:cell><ns0:cell>4.2296</ns0:cell><ns0:cell>21.7822</ns0:cell><ns0:cell>97.6117</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>18.7752</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>4.3330</ns0:cell><ns0:cell>22.3151</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Number of distinct categories (NDC): 6</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>% of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:1:2:NEW 13 Dec 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Equation Chapter 1 Section 1Changes Made in Response to the Comments of Editor and Reviewer We would like to thank the editor and reviewers for their comments, which certainly improved the quality of our paper. The following changes have been made to the paper in response to the reviewers’ comments regarding the relevance of the paper: The revised portions and language corrections are indicated by red text in the revised manuscript. (in .docx format). Editor comments (Shawn Gomez) While the reviewers were generally positive, they did bring up a number of, primarily methodological, concerns. Please address these as part of your revision. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] We sincerely appreciate for the referees’ suggestions and comments and our response to the reviewers’ comments is given below. REVIEWERS' COMMENTS AND OUR REPLIES: Reviewer 1 (Marco Cacciabue): Our reply to comment of reviewer (Marco Cacciabue): Basic reporting • Language used is NOT clear throughout the article, several ambiguities are present (please see general comments). • Literature is well referenced & relevant. • Structure conforms to PeerJ standards. • Figures are high quality and described, but too many (please see general comments). • Complete raw data is supplied. We agree with the reviewer comments and the revised manuscript has been edited for proper English language usage by native English-speaking editors at Enago. We have included an editorial certificate (please see the attached file). We have addressed the excess number of figures by removing some figures and merging images in others, according to your suggestions. Validity of the findings • Data is robust but critical controls, e.g. different forms of plaques (from different virus) is not present. • Conclusions are well stated, linked to original research question & limited to supporting results. Thank you for the suggestions. It is true that different viruses could exhibit various forms of plaques. However, our lab has optimized the conditions (e.g., the semisolid concentrations, the incubation period, strain selection, etc.) to yield 96-well countable plaques. For example, the plaques of enterovirus A71 (EV-A71) and coxsackievirus A16 (CV-A16) are shown (supplementary figure). Moreover, the algorithm distinguished between plaques and artifacts were manually incorporated by a user (diameter range). Supplementary figure 96-well plaque titration of A) EV-A71 and B) CV-A16 Additional comments The manuscript by Phanomchoeng et al. presents an automated quantification machine for viral plaque counting. This machine is a convenient way to reduce the workload in plaque counting. The authors show the performance of the machine with an example of a Dengue virus dataset that they have produced, comparing it to manual (expert) counting. I commend the authors for their extensive work, tutorial videos and found their proposed method interesting and useful. 
However, I believe there are several concerns that should be addressed before Acceptance. Major Concerns: 1. As the contribution of the authors is a bioinformatics software tool aimed to virologists with no prior experience in image analysis, the Computer Science category of the PeerJ journal seems inappropriate. We received a comment from a prior reviewer suggesting that we should submit the paper to PeerJ Computer Science. We also verified the scope of the journal, which is described as follows: “PeerJ Computer Science is an Open Access, peer-reviewed multidisciplinary journal for the computer sciences. PeerJ Computer Science considers Research Articles, Literature Reviews and Application Notes in over 42 subjects in Computer Science. PeerJ Computer Science evaluates articles on an objective determination of scientific and methodological soundness, not on subjective determinations of 'impact,' 'novelty' or 'interest'. PeerJ Computer Science applies the highest standards to everything it does - specifically, the publication places an emphasis on research integrity; high ethical standards; constructive and developmental peer-review; exemplary production quality; and leading-edge online functionality.”. Thus, we believe that the paper is very useful to publish to PeerJ Computer Science. 2. The English language is not clear enough and should be improved. Some examples where the language could be improved include lines 57, 58, 81, 82, 85, 100, 140, 170, 171, 191, 192, 260, 261 282, 308, 370, 373 and 407. The current phrasing makes comprehension difficult. I suggest you have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or contact a professional editing service. We agree with the reviewer comments and have accordingly had the revised manuscript corrected and edited for English language usage by native English-speaking editors at Enago. We have also included an editorial certificate for this work. Line 57: Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. Line 58 Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. For example, Cellular Technology Limited developed a commercial Elispot and viral plaque-counting machine called ImmunoSpot CLT Analyzers (e.g. Cellular Technology Limited, 2021). Line 81 Due to the need of reducing the number of staff that conduct viral plaque assay, we developed an automated quantification machine based on ML. Line 82 The machine comprises the hardware system for image acquisition and ML-based software for image viral plaque counting. Line 85 The software is implemented using K-mean clustering, which is a ML algorithm and unsupervised learning algorithm to help users. The algorithm helps by reducing the number of setup parameters for counting. Line 100 Cells were incubated at 37°C under 5% CO2 for 3 h before being added to a 100-μl overlay medium. Line 140 The high parallelism of the light source panel renders the object edges clearer and sharper in an image. Line 170 The data in the Firebase database allows new features of the machine to be developed in the future, such as the data storage and exploration features, data analysis tools, data dashboard, web applications, and remote operation. These will provide greater convenience to users. 
Line 191-192 The region of interest (ROI) is the focus region, which is important for counting viral plaques. The ROI is determined first, followed by the algorithms applied to the corrected location to prevent algorithm error. Line 260-261 For this reason, the number of viral plaques complicated efforts by the experts to read them. Therefore, an experimental counting criteria was defined based on the reading distribution in Fig. 7, the variation in expert readings in Table 1, and the criterion of the Medical Virology Research Center. If the difference between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors (Table 2), the number of viral plaques counted by the machine is correct. This counting criterion was employed in the evaluation, and the results are presented in the Results section. Line 282 MATLAB and MATLAB Toolbox were used to perform the analysis and the gage R&R analysis of the machine data. Line 308 For a number of viral plagues in the range of 0–12, the line has a slope close to 1 with R2 of 0.8870. Therefore, the counting results by the machine and the expert both correspond to the highest evaluation quality. Line 370 Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table 2) based on the distribution and variation of expert readings. Line 373 Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table 2) based on the distribution and variation of expert readings. If the number of viral plaques is small, the maximum number of errors is small, and as the number of viral plaques increases, the maximum number of errors increases. However, in the case of a large number of viral plaques, the reading accuracy is not sufficient for quantifying viral infection or in viral research. Line 407 Thus, for situations of limited resources, the developed machine may be more suitable for a personal laboratory. Additionally, the developed machine can be applied to ELISpot counting [31]. 3. The authors propose an automated quantification machine and the software to operate it. Although the authors claim that the hardware is relatively simple to set up, I could not test it accordingly due to not having access to the specific instruments. We understand that it is not easy to visualize how the hardware works. Thus, we recorded video clips to demonstrate the operation of the hardware. Please see the accompanying video clips in the supplemental files of Peer J Computer Science or the below link. https://github.com/SuphanutP/Machine-learning-based-automated-quantification-machine-for-virus-plaque-assay-counting 4. It is not clear how the machine would perform if presented with plaque assays from different viruses, or with differently shaped plaques. In fact, the software was only evaluated using Dengue virus. The author should discuss it or change the scope of the Main title accordingly. To apply the machine to different sizes of multiwell plates, the user must manually adjust the magnification of the USB microscope camera as the camera does not have an in-built auto-zoom feature. After the users run the calibration routine of the machine, as shown in the calibration screen, the machine can collect images of different multiwell plate sizes. Calibration Screen The software program provides four parameters, as shown on the program screen, to set up different counting procedures. 1. 
Radius Threshold is used to classify the plaque and the red or violet crystal solution area. 2. Grid Size is used to generate grid points for the K-mean algorithm. Large and small plaque areas should be used with large and small grid points, respectively. 3. Radius Silhouette is the distance between plaque areas. 4. Radius Kmean is the minimum plaque area that will be counted. With these parameters, the program can be adjusted for different viruses. Program Screen We have also changed the scope of the paper as per your suggestion. We specifically mention that the software was evaluated using 96-well plates of dengue virus. 5. The authors should consider reducing the total number of main figures. Perhaps some figures could be supplementary? For example Figure 5, 6, 10, 12. Also, Figures 13 and 14 could be merged into a single image. We agree with the reviewer. We have removed Figures 5, 6, 10, and 12 and merged Figures 13 and 14 as per your suggestions. Please see the revised paper. 6. How were the “Maximum Number of Errors” for the expert defined (Table I)? This relevant point seems too arbitrary. We agree with the reviewer that Table 1 is not clearly defined. We have collected additional data on the distributions of expert readings and have added a section titled Viral Plaque Reading Differences by Humans in the paper. This section includes a gage R&R analysis to present the variation or sigma values of the expert reading process in terms of its repeatability and reproducibility. Please see the paper (Viral Plaque Reading Differences by Humans Section). Minor points: 1. The phrase “The viral plaques appear in the image as white circled areas (Fig. 3) since the viruses eat the cell around themselves.” should be rewritten. This sentence has been corrected. The white areas indicate the virus plaques, which cause the solution around them to degrade. 2. The phrase “Thus, when the number of viral plaques is large, it is more difficult to justify the number of viral plaques.” should be rewritten, having in mind that the authors should not justify their results, rather than present them. The sentence has been corrected. Consequently, when the number of viral plaques is larger, it is more difficult to measure the number of viral plaques. 3. Typo: in line 98 the number 4 should be superindexed (“cells at 1 x 104 cells”). The sentence has been corrected. In summary, the reference viruses were 10-fold serially diluted in a maintenance medium to 10−6. All dilutions (50 μl), including undiluted, 10−1, 10−2, 10−3, 10−4, 10−5, 10−6 samples, were instantaneously mixed with LLC/MK2 (ATCC®CCL-7) cells at 1 x 104 cells per well (50 μl) of a 96-well plate. 4. Typo: in line 310 there seems to be an extra period(“the counting by the expert and machine. Pearson's”) The sentence has been corrected. Moreover, to evaluate the correlation between the counting by the expert and machine, Pearson’s coefficient method was used to measure the strength of the association between the machine and manual counting (Mukaka, 2021). 5. The authors should be more explicit about the future developments regarding the Firebase database. We have provided the feature that we plan to do in the paper. The data in the Firebase database allows new features of the machine to be developed in the future, such as data storage and exploration features, data analysis tools, a data dashboard, web applications, and remote operation. 6. The authors should be more explicit about which filters are applied to the binary image (line 216). 
We have provided the filter in the paper After this process, the mean filtering is applied to the binary image. Reviewer 2: Our reply to comment of reviewer 2: Basic reporting 1. Is an automated quantification machine really needed in counting plaques? The authors should provide more application scenarios of automatic quantification for viral plaques. It is easy to count 1-10 plaques in one well, in an appropriate dilution of viral stocks. The authors are focusing on statistic and algorithm of image recognition, ignoring the repeatability of viral plaque assay itself. 1. The Medical Virology Research Center, Chulalongkorn University is a virology research laboratory that conducts a large amount of research on viruses. Some experiments, such as drug screening or vaccine trials, require at least 50–100 plates. An automated system can provide accurate measurements and data storage. Thus, there is high demand for an automated quantification machine. 2. The machine is designed to analyze not only 96-well plates but also 6- or 16-well plates. The following step describes how to change the setup. To apply the machine to different sizes of multiwell plates, the user must manually adjust the magnification of the USB microscope camera as the camera does not have an in-built auto-zoom feature. After the users run the calibration routine of the machine, as shown in the calibration screen, the machine can collect images of different multiwell plate sizes. Calibration Screen The software program provides four parameters, as shown on the program screen, to set up different counting procedures. 1. Radius Threshold is used to classify the plaque and the red or violet crystal solution area. 2. Grid Size is used to generate grid points for the K-mean algorithm. Large and small plaque areas should be used with large and small grid points, respectively. 3. Radius Silhouette is the distance between plaque areas. 4. Radius Kmean is the minimum plaque area that will be counted. With these parameters, the program can be adjusted for different viruses. Program Screen Moreover, the machine can be applied to ELISpot counting. Results of this are shown in Kukiattikoon C., Vongsoasup N., Ajavakom N., and Phanomchoeng G. 2021. Automate platform for Capturing and Counting ELISpot on 96-Well Plate. Proc. of the 7th International Conference on Engineering and Emerging Technologies (ICEET) 27-28 October 2021, Istanbul, Turkey 3. We have evaluated the repeatability and reproducibility of the machine by conducting a gage repeatability and reproducibility (R&R) analysis. The result is shown in Table. 1. 2. On the other hand, the shapes of plaques are variable in some viruses. How to distinguish overlapped plaques(two or more)from a single enhanced big plaque caused by viral mutations. What about those smaller plaques caused by attenuated viruses in the picture of plaque assay? It is true that different viruses could exhibit various forms of plaques. However, our lab has optimized the conditions (e.g., the semisolid concentrations, the incubation period, strain selection, etc.) to yield 96-well countable plaques. For example, the plaques of enterovirus A71 (EV-A71) and coxsackievirus A16 (CV-A16) are shown (supplementary figure). Moreover, the algorithm distinguished between plaques and artifacts that were manually incorporated by a user (diameter range). 
Supplementary figure 96-well plaque titration of A) EV-A71 and B) CV-A16 With traditional imaging technique, color is the main property used to distinguish overlapping plaques. This method is ineffective because the information about the shape of the plaques is not used to distinguish overlapping plaques. Thus, the developed K-mean is applied in this case. First the grid points are generated and applied to all plaque areas and then the developed K-mean is applied. The algorithm classifies the plaques area based on the grid points, which includes information about the plaque shape. Thus, this technique collects plaque shape information; this information can be optimized by tuning the grid size parameters that control the grid point generation. Experimental design The experiments about image recognition are well designed within the scope of the journal. We thank you for checking our data and paper organization. Validity of the findings I have no questions about this part. We thank you for checking our data and paper organization. Additional comments Citation 1 (Delbruck, 1940) is inappropriate for the first sentence of the introduction section. It is well accepted that the first viral plaque assay on eukaryotic cell lines was described by Dulbecco et al. (Dulbecco, R., 1952. Production of Plaques in Monolayer Tissue Cultures by Single Particles of an Animal Virus. Proc. Natl. Acad. Sci. U. S. A., 38 (8), 747-752.) We thank you for providing this information as well as the citation. We have corrected the citation as you recommended. Reviewer 3: Our reply to comment of reviewer 3: Basic reporting From figure 5 onwards, the numbers do not match the description, and it becomes very difficult to follow. Some sentences in the text are not finished (e.g. line 58: 'Recently, automated imaging-based counters for plaque assays have been employed, but the current versions.'). There are also parts that are vague regarding the description of the algorithm. For instance line 194, the authors write 'a shape-based matching algorithm is employed'. We agree with the reviewer comments and the revised manuscript has been corrected and edited for proper English language usage by native English-speaking editors at Enago. We have included an editorial certificate (please see the attached file). Line 57: Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. Line 58 Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. For example, Cellular Technology Limited developed a commercial Elispot and viral plaque-counting machine called ImmunoSpot CLT Analyzers (e.g. Cellular Technology Limited, 2021). Line 81 Due to the need of reducing the number of staff that conduct viral plaque assay, we developed an automated quantification machine based on ML. Line 82 The machine comprises the hardware system for image acquisition and ML-based software for image viral plaque counting. Line 85 The software is implemented using K-mean clustering, which is a ML algorithm and unsupervised learning algorithm to help users. The algorithm helps by reducing the number of setup parameters for counting. Line 100 Cells were incubated at 37°C under 5% CO2 for 3 h before being added to a 100-μl overlay medium. Line 140 The high parallelism of the light source panel renders the object edges clearer and sharper in an image. 
Line 170 The data in the Firebase database allows new features of the machine to be developed in the future, such as the data storage and exploration features, data analysis tools, data dashboard, web applications, and remote operation. These will provide greater convenience to users. Line 191-192 The region of interest (ROI) is the focus region, which is important for counting viral plaques. The ROI is determined first, followed by the algorithms applied to the corrected location to prevent algorithm error. Line 260-261 For this reason, the number of viral plaques complicated efforts by the experts to read them. Therefore, an experimental counting criteria was defined based on the reading distribution in Fig. 7, the variation in expert readings in Table 1, and the criterion of the Medical Virology Research Center. If the difference between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors (Table 2), the number of viral plaques counted by the machine is correct. This counting criterion was employed in the evaluation, and the results are presented in the Results section. Line 282 MATLAB and MATLAB Toolbox were used to perform the analysis and the gage R&R analysis of the machine data. Line 308 For a number of viral plagues in the range of 0–12, the line has a slope close to 1 with R2 of 0.8870. Therefore, the counting results by the machine and the expert both correspond to the highest evaluation quality. Line 370 Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table 2) based on the distribution and variation of expert readings. Line 373 Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table 2) based on the distribution and variation of expert readings. If the number of viral plaques is small, the maximum number of errors is small, and as the number of viral plaques increases, the maximum number of errors increases. However, in the case of a large number of viral plaques, the reading accuracy is not sufficient for quantifying viral infection or in viral research. Line 407 Thus, for situations of limited resources, the developed machine may be more suitable for a personal laboratory. Additionally, the developed machine can be applied to ELISpot counting [31]. Experimental design see next section We thank you for checking our data and paper organization. Validity of the findings Phanomchoeng et al. describe a new method to enumerate viral plaques in well plates. Their work is original in that it combines hardware and software into a single solution. There were, however, several points I found would need to be addressed. Accessibility -- A publication such as this one the software described should be published as well. I could not find in the manuscript a link to a repository and a set of data/images. The PeerJ editorial policy states: 'For software papers, 'materials' are taken to mean the source code and/or relevant software components required to run the software and reproduce the reported results. The software should be open source, made available under an appropriate license, and deposited in an appropriate archive. Data used to validate a software tool is subject to the same sharing requirements as any data in PeerJ publications.' Furthermore, I advise the author to name their method and to provide online documentation (otherwise, I anticipate low adoption). 
In several instances, the authors describe their method as cost-effective. It would be necessary to give an estimate of the overall price. 1. We have submitted our raw data, software, and a video clip file to the supplemental files of Peer J Computer Science to show how to install and run the program. To help you easily find these features, we have stored them on the Github website. Please see the link to access the data. https://github.com/SuphanutP/Machine-learning-based-automated-quantification-machine-for-virus-plaque-assay-counting 2. We have published our algorithms and codes in the Github website where they can be publically accessed. However, to run the codes, you must use an MVtec Halcon evaluation license, which we have provided in the PeerJ Computer Science system, or you can contact MVtec for an evaluation license. https://github.com/SuphanutP/Machine-learning-based-automated-quantification-machine-for-virus-plaque-assay-counting 3. We used a commercial library to develop the machine because we wanted the machine to be useful to others working outside of research. The software must be easy to use without needing to write any code. The commercial library provides many tools for creating a GUI application, which saves time in developing a window application with a GUI for users. For research purposes, we believe that other researchers can learn and develop their own research methods based on our published algorithms and the codes on the Github website. 4. Based on information from the Chulalongkorn University Lab, which bought ImmunoSpot CLT Analyzers from Cellular Technology Limited 5 years ago, the cost of this system was 155,000 USD/machine. The cost of our automated quantification machine consists of the hardware and the software. The hardware cost is 4,500 USD/machine and the commercial library software is 1,600 USD/license. Thus, the total cost is 6,100 USD/machine. Performance evaluation -- The number of plaques is the variable of interest, and the authors find a good correlation between expert and machine. It is difficult for the reader to understand why errors are negligible for large (>12) plaques. Could the authors explain or provide references? In several instances, the authors describe their correlation between expert and machine as significant, which is trivial. A fairer assessment would be to compare the machine vs human error with human vs human error (in other words, is the error/bias) due to automation close to the error between experimenters. What is the justification for table 1. It seems extremely arbitrary and unnecessary. We agree with the reviewer that we may not have adequately explained these points. Thus, we have collected more data on reading distributions of experts and have added a section titled: Viral Plaque Reading Differences by Humans, in the paper. This section includes a gage R&R analysis to present the variation or sigma of expert reading in terms of its repeatability and reproducibility. With the new data, we can see that for a number of viral plaques over 12, the range read by an expert may varies by more than a factor of ±5. Moreover, we have modified the comparison between the machine and experts to contain a comparison between the machine result with the mode of six expert readings. Please see the paper sections Viral Plaque Reading Differences by Humans and Experiment for more detail. "
Here is a paper. Please give your review comments after reading it.
330
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The plaque assay is a standard quantification system in virology for verifying infectious particles. One of the complex steps of plaque assay is the counting of the number of viral plaques in multiwell plates to study and evaluate viruses. Manual counting plaques are time-consuming and subjective. There is a need to reduce the workload in plaque counting and for a machine to read virus plaque assay; thus, herein, we developed a machinelearning (ML)-based automated quantification machine for viral plaque counting. The machine consists of two major systems: hardware for image acquisition and ML-based software for image viral plaque counting. The hardware is relatively simple to set up, affordable, portable, and automatically acquires a single image or multiple images from a multiwell plate for users. For a 96-well plate, the machine could capture and display all images in less than 1 min. The software is implemented by K-mean clustering using ML and unsupervised learning algorithms to help users and reduce the number of setup parameters for counting and is evaluated using 96-well plates of dengue virus.</ns0:p><ns0:p>Bland-Altman analysis indicates that more than 95% of the measurement error is in the upper and lower boundaries [&#177;2 standard deviation]. Also, gage repeatability and reproducibility analysis showed that the machine is capable of applications. Moreover, the average correct measurements by the machine are 85.8%. The ML-based automated quantification machine effectively quantifies the number of viral plaques.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>A plaque assay is one of the standard measurements <ns0:ref type='bibr' target='#b0'>(Dulbecco, 1952)</ns0:ref> for viable viruses in virology laboratories. Applications of this method range from basic research through drug discovery to vaccine development <ns0:ref type='bibr' target='#b1'>(Premsattham et al., 2019)</ns0:ref>. In principle, viruses infect the cell monolayer in a semisolid medium, thus limiting only the horizontal spread. The area of cell death (i.e. plaque) is visualized under a microscope before staining with neutral red or crystal violet (BioTek Instruments, 2021; <ns0:ref type='bibr'>Cacciabue et al., 2019)</ns0:ref>. The plate is manually counted and calculated to quantify the virus. The manual counting of plaques is tedious and requires welltrained personnel for verification. The plaque assay can be performed in low to medium throughput formats, such as 6-, 12-, or 24-well plates. This method requires more reagents and highly skilled operators to appropriately perform the assay. Recently, a simplified microwell plaque titration assay was developed with Herpes simplex <ns0:ref type='bibr' target='#b4'>(Bhattarakosol, Yoosook &amp; Cross, 1990</ns0:ref>) and dengue viruses <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016)</ns0:ref>. Automated data acquisition and quantification methods have been developed under a flatbed scanner <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Sullivan et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Recently, automated imaging-based counters have been applied to plaque assays; however, the current versions face multiple challenges. 
For example, Cellular Technology Limited developed a commercial Elispot and viral plaque-counting machine called ImmunoSpot CLT Analyzers (e.g. Cellular Technology Limited, 2021). The machines can automatically acquire high-resolution images for each well plate with a high speed and automatically count plaque using software. Even though the software focuses on counting Elispot assays the company can optimize the setup, such as plaque intensity and size to develop programs for counting viral plaques for each well plate <ns0:ref type='bibr' target='#b8'>(Smith et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Sukupolvi-Petty et al., 2013)</ns0:ref>. However, the program has limitations: it cannot be easily optimized and standardized for various viral plaques. Moreover, commercial machines and their services are often expensive and proprietary.</ns0:p><ns0:p>For plaque counting on a personal computer, general image-processing tools, such as ImageJ, OpenCV, Labview, R, have been employed <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016;</ns0:ref><ns0:ref type='bibr'>Rasband, 2015</ns0:ref> <ns0:ref type='formula'>2016</ns0:ref>) developed an ImageJ program and employed it for a modified 96-well plaque assay for the dengue virus. A flatbed scanner was used in the assay to acquire the 96-well plate image before cropping it into each well image. However, the plate must be contrast-enhanced by adding nontransparent liquid, such as milk, before scanning, which increases the workload. Work in several studies <ns0:ref type='bibr' target='#b12'>(Cai Z et al., 2011</ns0:ref><ns0:ref type='bibr'>, Cacciabue, Curr&#225;, &amp; Gismondi, 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Geissmann, 2013</ns0:ref><ns0:ref type='bibr'>, Katzelnick et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Moorman &amp; Dong, 2012)</ns0:ref> have developed programs to count viral plaque based on image segmentation, morphological analysis, and image threshold, but the programs require imageprocessing knowledge to implement and may not be suitable for some assays. To date, no machine learning (ML)-based program has been developed for plaque counting.</ns0:p><ns0:p>Due to the need of reducing the number of staff that conduct viral plaque assay, we developed an automated quantification machine based on ML. The machine comprises the hardware system for image acquisition and ML-based software for image viral plaque counting. The hardware is relatively simple to set up, affordable, portable, and automatically acquires a single image or multiple images from a multiwell plate for users. The software is implemented using K-mean clustering, which is a ML algorithm and unsupervised learning algorithm to help users. The algorithm helps by reducing the number of setup parameters for counting.</ns0:p><ns0:p>The automated quantification machine was built and evaluated with 96-well plates of dengue virus. The processes for standardizing the manual and software counting algorithm and evaluating the performance of the system on several plaque images are described in the next section. The counting results from the machine are consistent with those from manual counting.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>A. Cell and Viruses</ns0:head><ns0:p>Simplified dengue microwell plaque assays were performed following the procedure reported in <ns0:ref type='bibr' target='#b5'>Boonyasuppayakorn et al. (2016)</ns0:ref>. 
In summary, the10-fold serially diluted viruses in a maintenance medium to 10 &#8722;6 were used as the reference. LLC/MK2 (ATCC&#174;CCL-7) cells at 1 x 10 4 cells per well (50 &#956;l) of a 96-well plate were mixed with all dilutions and undiluted samples. Then, the dilution was prepared and the cells were incubated as explained in <ns0:ref type='bibr' target='#b5'>Boonyasuppayakorn et al. (2016)</ns0:ref>. Finally, the dengue virus cells were fixed and then stained as the standard assay. The number of plaque-forming units (p.f.u.) per ml was determined manually and using the automated quantification machine for evaluation.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Automated Quantification Machine</ns0:head><ns0:p>The automated quantification machine was developed to acquire viral plaque images of each well plate and automatically display the counting results in an easy-to-use manner. The major components of the machine are the hardware for image acquisition and ML-based software for image viral plaque counting.</ns0:p></ns0:div> <ns0:div><ns0:head>1) Hardware for Image Acquisition</ns0:head><ns0:p>The automated quantification machine was developed to reduce workload and enable personal use in laboratories. Thus, the machine is relatively simple to set up, affordable, portable, and automatically acquires single or multiple images from multiwell plates for users. Then, the hardware components of the machine include an IAI Tabletop, Model TT-A3-I-2020-10B-SP (IAI Robot, 2021), a USB microscope camera, Dino-Lite AM4113T (AnMo Electronics Corporation, 2021), and an adjustable light source panel, 150 mm &#215; 150 mm surface LED illumination source. A photo and a schematic of the automated quantification machine are shown in Figs. <ns0:ref type='figure' target='#fig_16'>1A and 1B</ns0:ref>, respectively. The IAI Tabletop is a 3-axis Cartesian robot with a working space of 200 mm &#215; 200 mm &#215; 100 mm (x-, y-, and z-axis). Its repeating positioning accuracy is less than &#177;0.02 mm. The x-axis of the IAI Tabletop is installed with the multiwell plate fixture and the light source panel and the z-axis is installed with a camera holder and a USB microscope camera. With the Cartesian robot configuration, the IAI Tabletop can move a multiwell plate along the x-y plane for automatic and accurate positioning to capture the well images with the USB microscope camera and move the USB microscope camera along the z-axis to automatically focus the image.</ns0:p><ns0:p>The USB microscope camera is a color camera with a resolution of 1280 &#215; 1024 pixels and 10&#215;-50&#215;, 220&#215; magnification range. It is equipped with an LED coaxial light source. With low magnification, the camera can capture a full-size well image of a six-well plate, and with medium magnification, the camera can capture a full-size well image of a ninety-six-well plate. To change the magnification of the camera, the magnification ring is manually adjusted.</ns0:p><ns0:p>The adjustable light source panel is a square surface LED illumination source. It is installed on the x-axis of the IAI Tabletop and underneath the multiwell plate fixture. The light source panel has high parallelism of light. The high parallelism of the light source panel renders the object edges clearer and sharper in an image. For viral plaque counting, the back-light technique is employed since it gives better image quality than the bright-and dark-field techniques. 
The back-light technique can be performed by turning off the LED coaxial light source and turning on the light source panel.</ns0:p><ns0:p>To control the machine hardware, the hardware is connected to a personal computer via USB cables and is controlled by a window application developed using C# language. The window application communicates to the IAI Tabletop by PSEL protocol to control the position of the multiwell plate. It also communicates to the USB microscope camera by DNVIT SDK to control the coaxial light source and image acquisition (AnMo Electronics Corporation, 2021). Images are saved in PNG format. The user interface and user experience (UI/UX) design was considered in creating the window application. The window application consists of 1. calibration routine 2. single image and multiwell plate image acquisition 3. Firebase database (Google, 2021).</ns0:p><ns0:p>The calibration routine is used to calibrate the movement of the IAI Tabletop. A screenshot of the calibration control is shown in Fig. <ns0:ref type='figure' target='#fig_17'>2A</ns0:ref>. Due to the installation process, the coordinate of the multiwell plate fixture is not the same as the coordinate of the IAI Tabletop. This causes the misalignment of the well center position when capturing an image. To apply the calibration routine, the user can work in two steps. The first step is to control the IAI Tabletop by UI control to locate the A1, A12, and H1 well center positions, as shown in Fig. <ns0:ref type='figure' target='#fig_17'>2A</ns0:ref>. The second step is to click the calibration routine button. Then, the algorithm automatically recalculates the new coordinate of the IAI Tabletop and makes the coordinate of the multiwell plate fixture and IAI Tabletop the same.</ns0:p><ns0:p>The single and multiwell plate image acquisition allows the user to select how to acquire well images. A screenshot of the image acquisition control is shown in Fig. <ns0:ref type='figure' target='#fig_17'>2B</ns0:ref>. With the single image acquisition, the user only selects or keys in the desired well number, such as A1, B2, and H12. Then, the machine moves, captures, and displays the desired well image. Moreover, with the multiwell plate image acquisition, the user only clicks one button to make the machine automatically capture and display all multiwell plates. For the 96-well plate, the machine can capture and display all images within less than 1 min. Once images are captured, the user may use the ML-based software for image viral plaque counting.</ns0:p><ns0:p>The Firebase database is used to collect all results of the machine for backup. The data in the Firebase database allows new features of the machine to be developed in the future, such as the data storage and exploration features, data analysis tools, data dashboard, web applications, and remote operation. These will provide greater convenience to users. </ns0:p></ns0:div> <ns0:div><ns0:head>2) Machine Learning Software for Image Viral Plaque Counting</ns0:head><ns0:p>The algorithm for image viral plaque counting is ML and K-mean clustering <ns0:ref type='bibr' target='#b18'>(Arora &amp; Varshney, 2016)</ns0:ref>. It helps users and reduces the number of setup parameters for counting. 
The algorithm is also implemented in a window application developed in Visual Studio with the C# language and the image-processing library MVtec Halcon (MVTec Software GmbH, 2021a).</ns0:p><ns0:p>The viral plaque is surrounded by noninfected cells and is identified using a counterstain, typically a neutral red or crystal violet solution. The white areas indicate the virus plaques, which cause the solution around them to degrade. To count the viral plaques, three major steps are taken to process each image: locating the well position, filtering the color image for image enhancement, and counting the number of viral plaques with K-mean clustering.</ns0:p><ns0:p>The region of interest (ROI) is the focus region, which is important for counting viral plaques. The ROI is determined first, followed by the algorithms applied to the corrected location to prevent algorithm error. To locate the well position, a shape-based matching algorithm is employed to find the best matches of a shape model in an image (MVTec Software GmbH, 2021b). It does not use the gray values of pixels and their neighborhood as a template, but it defines the model by the shape of contours. In this case, the circular shape of the well is used as the template. Then, the algorithm finds the best-matched instance of the shape model. The position and rotation of the found instances of the model are returned in the pixel coordinates and angle, which are used to create the ROI image.</ns0:p><ns0:p>There are many techniques for filtering color images for image enhancement, including converting the RGB color space to HSV color space or a binary image and applying a mean filter. Herein, the CIELAB color space is employed (MVTec Software GmbH, 2021a; Ly, 2020). It presents a quantitative relationship of color on three axes: L* represents the lightness, a* represents the red-green component of a color, and b* represents the yellow-blue component of a color. The CIELAB color space decouples the relationship between lightness and color of an image. Thus, image brightness has less influence on the image processing. Only a* and b* are used to convert the image to a binary image. The a* and b* values of the image pixels in the ROI are clustered into two groups, a white group and a dark group, by K-mean clustering, as shown in Fig. <ns0:ref type='figure' target='#fig_18'>3A</ns0:ref>. The yellow area represents the white group or white area, and the blue area represents the dark group or dark area. The red dot represents the centroid of each area. The result of the binary image is shown in Fig. <ns0:ref type='figure' target='#fig_18'>3B</ns0:ref>. After this process, a mean filtering process is applied to the binary image. To count the number of viral plaques, a simple image threshold is employed for region selection. The suspected viral plaque regions are selected and analyzed. An example of suspect regions is shown in Fig. <ns0:ref type='figure' target='#fig_19'>4A</ns0:ref>. If the size, length, area, and roundness of a region are appropriate, the region is counted as a viral plaque. An example is shown in Fig. <ns0:ref type='figure' target='#fig_19'>4B</ns0:ref>. However, if the size and area of a region are too large because plaques overlap, K-mean clustering is employed within the region. To do that, grid points are generated inside the large region along the x-y coordinates, as shown in Fig. <ns0:ref type='figure' target='#fig_19'>4C</ns0:ref>. Then, K-mean clustering is employed to cluster the grid points. Moreover, to determine the optimal value of k, or the appropriate cluster number, the Silhouette algorithm (Ogbuabor &amp; Ugwoke, 2018) is implemented. The grid points are clustered into k clusters by K-mean clustering. 
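Before the cluster-number selection is described below, the binarization step above (conversion to CIELAB and two-group K-mean clustering of the a*/b* values) can be illustrated with a minimal Python sketch. The authors' implementation uses MVtec Halcon in a C# window application, so the libraries used here (scikit-image, scikit-learn) and the function name are our assumptions for illustration only.

```python
# Illustrative re-implementation (not the authors' Halcon/C# code) of the CIELAB +
# K-mean binarization step: cluster the a*/b* values of the ROI pixels into a bright
# (candidate plaque) group and a dark (stained cell) group.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def binarize_well(rgb_roi: np.ndarray) -> np.ndarray:
    """rgb_roi: an RGB image array of the well ROI; returns a boolean plaque mask."""
    lab = color.rgb2lab(rgb_roi)                 # L*, a*, b*
    ab = lab[:, :, 1:].reshape(-1, 2)            # only a* and b* are used, as in the paper
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ab)
    labels = labels.reshape(rgb_roi.shape[:2])
    # Take the cluster with the higher mean lightness as the "white" (plaque) group.
    mean_l = [lab[:, :, 0][labels == k].mean() for k in (0, 1)]
    binary = labels == int(np.argmax(mean_l))
    # The paper then applies a mean filter to the binary image before region analysis.
    return binary
```

The Silhouette-based choice of the number of clusters k for overlapping plaque regions continues in the text below.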
Then, the average Silhouette coefficients for each k cluster are calculated. The Silhouette plot is shown in Fig. <ns0:ref type='figure' target='#fig_19'>4D</ns0:ref>. The maximum value of the average Silhouette coefficients is considered the optimal number of clusters. Finally, the number of viral plaques is the summation of the optimal number of clusters and the previous number of viral plaques (Fig. <ns0:ref type='figure' target='#fig_19'>4E</ns0:ref>). A flowchart of the ML-based software for image viral plaque counting is shown in Fig. <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, and an example of the machine counting results is shown in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Viral Plaque Reading Differences by Humans</ns0:head><ns0:p>Plaque assays were created at the Medical Virology Research Center, Chulalongkorn University, Thailand. More than 25 96-well plates of dengue virus were used to evaluate differences in viral plaque readings. The number of viral plaques from 96-well plates was read by six experts, which included 1,777 wells that could be read by all experts. The remaining wells could not be read by experts, either because they were unclear or because no viral plaques were present; hence, they were not included in the evaluation. The number of viral plaques read by experts ranged between 0 and 31. To evaluate variations in expert readings, a boxplot of expert reading results for a given number of viral plaques is presented in Fig. <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>. Since the actual number of viral plaques is unknown, the mode value of six expert readings was used as a reference in this case. Data that had no mode was not included in the evaluation. Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows that the expert reading distributions in the range of 0-3 are small, but the distribution of expert readings increases when the number of viral plaques increased. When the number of viral plaques exceeds 12, the range of possible readings by an expert varies by more than a factor of 5.</ns0:p><ns0:p>To understand the variation in expert readings, a gage repeatability and reproducibility (R&amp;R) analysis was employed (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005). A random sample of 300 wells (part) with and without viral plaques was used, in which the number of viral plaques varied from 0-25. Each well was subsequently read by six experts/operators. The results of the gage R&amp;R were evaluated using a nested model (The MathWorks Inc., 2021) and analysis of the expert data was executed in MATLAB (The MathWorks Inc., 2021) using the MATLAB Toolbox to examine the results.</ns0:p><ns0:p>The gage R&amp;R analysis was performed for various numbers of viral plaques in six ranges: 0-1, 2-3, 4-8, 9-10, 11-15, 16-25, and 0-25. The results are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> indicates that in the range of 0-1, the variation or sigma of expert readings according to the R&amp;R process is 0.9406. The variation or sigma increases as the range of viral plaque numbers increases. The variation or sigma reaches as high as 4.6900 in the range of <ns0:ref type='bibr'>16-25.</ns0:ref> For the overall range of 0-25, the variation or sigma is 2.8063. 
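The reference counts used above (the per-well mode of the six expert readings, with wells lacking a unique mode excluded) can be computed as in the following minimal sketch; the data layout is a hypothetical example and not the authors' records.

```python
# Illustrative sketch: the reference count of each well is the mode of the six expert
# readings; wells without a unique mode are excluded from the evaluation, as described above.
from collections import Counter

def expert_mode(readings):
    """Return the unique mode of the expert counts, or None when the mode is tied."""
    ranked = Counter(readings).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None                         # no unique mode -> well excluded
    return ranked[0][0]

wells = {                                   # hypothetical readings by six experts
    "A1": [3, 3, 4, 3, 2, 3],               # unique mode 3
    "B7": [10, 12, 12, 10, 11, 9],          # tie between 10 and 12 -> excluded
}
references = {w: m for w, r in wells.items() if (m := expert_mode(r)) is not None}
print(references)                           # {'A1': 3}
```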
The reading distribution and reading variation values were used to design the experiment setup described in the following section. </ns0:p></ns0:div> <ns0:div><ns0:head>Experiment</ns0:head><ns0:p>More than 25 96-well plates of dengue virus were used to evaluate the automated quantification machine. The 96-well plates were placed in the machine, and their images were captured. Then, the images were screened for evaluation. A total of 1,777 images qualified for evaluation. Next, the number of viral plaques in each image was manually determined by six experts from the research center and by the automated quantification machine software (note that 721 images showed 0 viral plaques). The number of viral plaques was evaluated based on the mode of the expert readings as well as by the machine.</ns0:p><ns0:p>In the case of many viral plaques in an image, the image was ambiguous. For this reason, large numbers of viral plaques complicated the experts' efforts to read them. Therefore, an experimental counting criterion was defined based on the reading distribution in Fig. <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>, the variation in expert readings in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, and the criterion of the Medical Virology Research Center. If the difference between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>), the number of viral plaques counted by the machine is correct. This counting criterion was employed in the evaluation, and the results are presented in the Results section.</ns0:p></ns0:div> <ns0:div><ns0:head>TABLE 2. Maximum number of errors for the software evaluation</ns0:head><ns0:p>The gage R&amp;R analysis was employed to evaluate the repeatability and reproducibility of the machine (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005). For the analysis, 240 wells (part) with and without viral plaques were used. Given the available samples, the number of viral plaques in the 240 wells varied from 0 to 18. Then, each well was placed in the machine, which captured the image and counted the number of viral plaques. This procedure was repeated three times to represent three operators using the machine to count the number of viral plaques. Finally, each well had three measurement results from the machine. The gage R&amp;R using the nested model was employed to evaluate the results (The MathWorks Inc., 2021).</ns0:p><ns0:p>MATLAB and MATLAB Toolbox were used to perform the analysis and the gage R&amp;R analysis of the machine data.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>A. Distribution Software Counting for Each Number of Viral Plaques</ns0:head><ns0:p>The measurement of the number of viral plaques is depicted in Fig. <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>. There are 1,777 measurements in the range of 0 to 29 viral plaques. For each number of viral plaques, the boxplot is used to show the mean and distribution of the measurement by the machine. The red + marks represent the outliers of the measurements, which include less than 5% of the measurements. The boxplot shows that in the range of 0 to 9 viral plaques, the standard deviation (SD) of the measurement is less than 1, and the SD increases as the number of viral plaques increases.</ns0:p><ns0:p>The blue line having the slope of 1 represents the measurement reference values. 
The red dashed line represents the trend line of the measurement results of the machine. The number of viral plaques obtained by the machine ranges from 0 to 29, as calculated from Fig. <ns0:ref type='figure' target='#fig_23'>8A</ns0:ref> using Eq. ( <ns0:ref type='formula'>1</ns0:ref>). The goodness-of-fit equation generated an R² of 0.8408. y = 0.8054 &#215; x, (1a) Goodness-of-fit: R² = 0.8408, (1b) where y is the number of viral plaques counted by the machine, and x is the number of viral plaques counted by the expert. The values range from 0 to 29. For Fig. <ns0:ref type='figure' target='#fig_23'>8B</ns0:ref>, the trend line of the measurement results of the machine for the number of viral plaques ranging from 0 to 12 is calculated using Eq. ( <ns0:ref type='formula'>2</ns0:ref>). The goodness-of-fit equation generated an R² of 0.8870. y = 0.9539 &#215; x, (2a) Goodness-of-fit: R² = 0.8870. (2b)</ns0:p><ns0:p>In the range of 0 to 29 viral plaques, the slope of the trend line of the machine is 0.8054, whereas, in the range of 0 to 12 viral plaques, the slope is 0.9539. For a number of viral plaques in the range of 0-12, the line has a slope close to 1 with R² of 0.8870. Therefore, in this range the counting results from the machine agree closely with those from the experts. Moreover, to evaluate the correlation between the counting by the expert and the machine, Pearson's coefficient method was used to measure the strength of the association between the machine and manual counting <ns0:ref type='bibr'>(Mukaka, 2021)</ns0:ref>. Pearson's coefficient r for these variables is 0.9221 with a p-value of less than 0.0001. Since the p-value is less than 0.05, the correlation is statistically significant.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Bland-Altman</ns0:head><ns0:p>To evaluate the performance of the machine, the Bland-Altman plot was used <ns0:ref type='bibr' target='#b27'>(Myles, 2007)</ns0:ref>. The measurements from the expert and the machine were considered. The difference between the measurement results from the expert and the machine, the error mean bias, and the error SD were determined. The Bland-Altman plot is shown in Fig. <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>. The results indicate that 95.01% of the measurement error is in the upper and lower boundaries (&#177;2 SD). Therefore, the error from the machine is within the boundary limit. The machine effectively and efficiently quantified the number of viral plaques. </ns0:p></ns0:div> <ns0:div><ns0:head>C. Performance of the Machine</ns0:head><ns0:p>Counting viral plaques is ambiguous, even for an expert. Thus, counting criteria were defined, as shown in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. If the error between the number of viral plaques counted by the expert reading mode and the machine is within the maximum number of errors, as shown in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, the number of viral plaques counted by the machine is correct.</ns0:p><ns0:p>The counting error between the expert mode and the machine is presented using the boxplot (Fig. <ns0:ref type='figure' target='#fig_13'>10</ns0:ref>). The magenta line represents the maximum number of errors for each reference number of viral plaques. Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref> shows that in the range of 0-18 viral plaques, the mean measurements of the machine are within the boundary error, and the error is acceptable. 
For the range of 19-29 viral plaques, the error is large and falls outside the boundary. Consequently, the larger the number of viral plaques, the more difficult it is to measure. At such high counts, the readings are not accurate enough to be used for quantifying viral infection or for research. Moreover, only 55 of the 1,777 images are in the range of 19-29 viral plaques. Based on the criteria in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, the percentage of correct and incorrect measurements by the machine is shown in Fig. <ns0:ref type='figure' target='#fig_14'>11</ns0:ref>. The light blue bar represents the percentage of correct measurements for which the machine measurement and the expert reading mode are exactly the same. The dark blue bar represents the percentage of correct measurements for which the machine measurement is within the defined range of the expert reading mode, and the red bar represents the percentage of incorrect measurements by the machine. The percentage of correct measurement is high when the number of viral plaques is low. In the range of 0-29 viral plaques, the percentage of correct measurement is more than 80%. In addition, the average correct measurement by the machine is 85.8%. This number comprises exact matches (60.0%) and measurements within the allowed range (25.8%). </ns0:p></ns0:div> <ns0:div><ns0:head>D. R&amp;R of the Machine</ns0:head><ns0:p>The R&amp;R of the machine was evaluated by gage R&amp;R analysis according to Automotive Industry Action Group (2010) and <ns0:ref type='bibr' target='#b24'>Burdick, Borror &amp; Montgomer (2005)</ns0:ref>. The gage R&amp;R is shown in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. Previous authors (Automotive Industry Action Group, 2010; Burdick, Borror &amp; Montgomer, 2005) suggest that the measurement system is acceptable if the number of distinct categories (NDC) is greater than or equal to 5. Moreover, for the percentage of gage R&amp;R of total variations (PRR), the measurement system is capable if PRR is less than 10% and not capable if PRR is more than 30%. Otherwise, the measurement system is acceptable. The NDC of the machine is 6 and the PRR is 21.72%, which is between the 10% and 30% criteria (Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>). Considering the cost of the machine and the preceding results and evaluations, the R&amp;R of the machine is acceptable, and the machine is suitable for the application. TABLE <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. Gage R&amp;R analysis parameters for the machine.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Counting the number of viral plaques is ambiguous, even for experts. Consequently, the Medical Virology Research Center defined experimental counting criteria for evaluating the machine output (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>) based on the distribution and variation of expert readings. If the number of viral plaques is small, the maximum number of errors is small, and as the number of viral plaques increases, the maximum number of errors increases. However, in the case of a large number of viral plaques, the reading accuracy is not sufficient for quantifying viral infection or for viral research.</ns0:p><ns0:p>The automated quantification machine was evaluated using 96-well plates of dengue virus. 
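The agreement statistics summarized immediately below (the zero-intercept trend line and its R², Pearson's r, and the Bland-Altman limits of agreement) can be reproduced with the short illustrative sketch that follows. The authors performed their analysis in MATLAB; this Python version, including the variable names and the R² convention used, is our assumption for illustration only.

```python
# Illustrative sketch (not the authors' MATLAB analysis) of the agreement statistics:
# zero-intercept trend line y = slope * x with its R^2, Pearson's r, and the
# Bland-Altman mean bias with +/- 2 SD limits of agreement.
import numpy as np
from scipy import stats

def agreement_stats(expert_counts, machine_counts):
    x = np.asarray(expert_counts, dtype=float)
    y = np.asarray(machine_counts, dtype=float)
    slope = np.sum(x * y) / np.sum(x * x)          # least-squares line through the origin
    residuals = y - slope * x
    # Note: R^2 conventions for zero-intercept fits vary; the centred form is used here.
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    r, p_value = stats.pearsonr(x, y)              # Pearson correlation and its p-value
    diff = y - x                                   # machine minus expert (Bland-Altman)
    bias, sd = diff.mean(), diff.std(ddof=1)
    limits = (bias - 2 * sd, bias + 2 * sd)        # Bland-Altman limits of agreement
    percent_within = 100 * np.mean(np.abs(diff - bias) <= 2 * sd)
    return slope, r2, r, p_value, bias, limits, percent_within
```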
According to the analysis, the measurement results obtained using the machine correlate well with the manual counting results. For 0-29 viral plaques, the trend line of the result of the machine has a slope of 0.8054 with R² of 0.8408, and in the range of 0-12 viral plaques, the slope is 0.9539 with R² of 0.8870. The correlation in the range of 0-12 viral plaques is better than that in the range of 0-29 viral plaques. Pearson's coefficient r for the data is 0.9221 with a p-value of less than 0.0001. This shows that the machine counting results correlate with the manual counting results. Further, the Bland-Altman plot shows that more than 95% of the measurement errors are in the upper and lower boundaries (&#177;2 SD). Thus, the manual and machine counting results are consistent.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_23'>8B</ns0:ref>, for 0-12 viral plaques, the boxplot is close to the reference line, indicating the manual and machine counting results are in good agreement. However, for 13-29 viral plaques, the boxplot mean is below the reference line, indicating that the measurement results of the machine are lower than the manual measurement results. This information can be used to improve the algorithm for measurement in the range of 13-29 viral plaques in future studies.</ns0:p><ns0:p>The large error in the results of the machine in the range of 13-29 viral plaques could be attributed to the criteria of the Silhouette algorithm. The Silhouette algorithm uses only the location information of the grid points to calculate the Silhouette score for selecting the suitable number of clusters. To improve the algorithm, other criteria, such as the number of grid points and areas, can be included in the algorithm to calculate the new Silhouette score.</ns0:p><ns0:p>Based on the gage R&amp;R analysis, the major variation in the PRR results from reproducibility, which arises when the user places the sample in the machine and measures the number of viral plaques. If the brightness of the ambient light changes, the image color may change, which affects the algorithm. This may be overcome by creating a cover for the machine to reduce the effect of environmental light. The repeatability of the machine is 0 since the counting algorithm has no variation.</ns0:p><ns0:p>According to the performance analysis, the average correct measurement of the machine is 85.8% in the range of 0-29 viral plaques. However, most of the errors occur when the number of viral plaques is high. If only small and medium numbers of viral plaques are evaluated, the average correct measurement of the machine would be more than 85.8%.</ns0:p><ns0:p>Plaque sizes vary with conditions such as the pH of the buffers, the concentration of the semisolid medium, or antiviral treatment. Moreover, previous reports have optimized the plaque readings in 24- and 96-well plates in antiviral drug discovery <ns0:ref type='bibr' target='#b5'>(Boonyasuppayakorn et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Katzelnick et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Yin et al., 2019)</ns0:ref>. To cover this issue, the developed software provides manually adjustable plaque sizes that can be customized by users. 
The user can configure the plaque-size range parameters, save them, and reuse them for a specific scenario.</ns0:p><ns0:p>Even though there are automated imaging-based counters for plaque assays, the current commercial versions have multiple challenges, especially their complexity and cost. They may be unsuitable for personal use in the laboratory. Thus, for situations of limited resources, the developed machine may be more suitable for a personal laboratory. Additionally, the developed machine can be applied to Elispot counting <ns0:ref type='bibr' target='#b30'>(Kukiattikoon et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The developed automated quantification machine with ML-based software effectively quantified the number of viral plaques in a 96-well plate in a simple process and at a low cost. The automated quantification machine was evaluated using 96-well plates of dengue virus. The performance analysis showed that the machine measurement results correlate well with manual counting results, and the average correct measurement by the machine is 85.8%. The machine meets the requirements of reducing workload and performing virus plaque assay reading in the laboratory.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:note type='other'>Figure 2</ns0:note><ns0:note type='other'>Figure 3</ns0:note><ns0:note type='other'>Figure 4</ns0:note><ns0:note type='other'>Figure 5</ns0:note><ns0:note type='other'>Figure 6</ns0:note><ns0:note type='other'>Figure 7</ns0:note><ns0:note type='other'>Figure 8</ns0:note><ns0:note type='other'>Figure 9</ns0:note><ns0:note type='other'>Figure 10</ns0:note><ns0:note type='other'>Figure 11</ns0:note></ns0:div>
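As a consistency check on Table 3, the acceptance metrics cited above (NDC and the percentage of gage R&R of total variation, PRR) can be recomputed from the sigma values listed in the table using the standard AIAG formulas; this brief sketch is illustrative and is not the authors' MATLAB output.

```python
# Illustrative recomputation of the Table 3 acceptance metrics from the listed sigmas.
sigma_gage, sigma_part, sigma_total = 0.9413, 4.2296, 4.3330  # values from Table 3

prr = 100 * sigma_gage / sigma_total       # % of total study variation (5.15*sigma basis)
ndc = int(1.41 * sigma_part / sigma_gage)  # number of distinct categories (1.41 ~ sqrt(2))

print(f"PRR = {prr:.2f}%")                 # ~21.72%, between the 10% and 30% thresholds
print(f"NDC = {ndc}")                      # 6, which satisfies the NDC >= 5 criterion
```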
(A) a* and b* values of image pixels. (B) Binary image by K-mean clustering</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Viral plaque counting with K-mean clustering algorithm. (A) Suspect viral plaque regions. (B) Appropriated region. (C) Grid points inside the larger region. (D) Silhouette plot. (E) Final result.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Machine learning (ML)-based software for image viral plaque counting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Example of machine counting. (A) Counting results by an expert. (B) Counting by machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>(A) shows the distribution of readings by all experts for each number of viral plaques and Figure7(B) shows an example reading distribution by an expert for different numbers of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Expert reading distributions. (A) Reading distribution of all experts for each number of viral plaques. (B) Reading distribution examples of an expert for each number of viral plaques. The red + marks represent the outliners of the measurement. The blue line having the slope of 1 represents the measurement reference values. The red dashed line represents the trend line of the measurement results of experts.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Distribution software counting for each number of viral plaques. (A) Distribution of software counting for each number of viral plaques. (B) Distribution of software counting for the range [0-12] of number of viral plaques. The red + marks represent the outliners of the measurement. The blue line having the slope of 1 represents the measurement reference values. The red dashed line represents the trend line of the measurement results of the machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure: 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure: 9 Bland-Altman plot of the machine measurement results.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Distribution software error for each number of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Percentage of correct measurement by the machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Automated quantification machine.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Screenshot of calibration control and image acquisition control.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Converting RGB image to binary image.</ns0:figDesc><ns0:graphic coords='27,42.52,204.37,525.00,355.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Viral plaque counting with K-mean clustering algorithm.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Machine learning (ML)-based software for image viral plaque counting.</ns0:figDesc><ns0:graphic coords='29,42.52,178.87,525.00,409.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Example of machine counting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Expert reading distributions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Distribution software counting for each number of viral plaques.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Figure: 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure: 9 Bland-Altman plot of the machine measurement results.</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,426.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Fig ure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Fig ure 10: Distribution software error for each number of viral plaques.</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,408.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_26'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Percentage of correct measurement by the machine.</ns0:figDesc><ns0:graphic coords='37,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)Manuscript to be reviewedComputer Science1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>TABLE 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Gage repeatability and reproducibility (R&amp;R) analysis for each range of viral plaque numbers.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Number of Viral Plaques</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>TABLE 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Maximum number of errors for the software evaluation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>TABLE 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Maximum number of errors for the software evaluation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Number of</ns0:cell><ns0:cell cols='2'>Maximum Number</ns0:cell></ns0:row><ns0:row><ns0:cell>Viral Plaques</ns0:cell><ns0:cell cols='2'>of Errors</ns0:cell></ns0:row><ns0:row><ns0:cell>0-1</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>2-3</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>4-8</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>9-10</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>11-15</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>&gt;16</ns0:cell><ns0:cell>&#177;</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>TABLE 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Gage R&amp;R analysis parameters for the machine. Gage R&amp;R of total variations (PRR): 21.72 Note: The last column of the above table does not to necessarily sum to 100%</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Source</ns0:cell><ns0:cell>Variance</ns0:cell><ns0:cell>%Variance</ns0:cell><ns0:cell>Sigma</ns0:cell><ns0:cell cols='2'>5.15 x sigma % 5.15 x sigma</ns0:cell></ns0:row><ns0:row><ns0:cell>Gage R&amp;R</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Repeatability</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell><ns0:cell>0.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>Reproducibility</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Operator</ns0:cell><ns0:cell>0.8861</ns0:cell><ns0:cell>4.7196</ns0:cell><ns0:cell>0.9413</ns0:cell><ns0:cell>4.8479</ns0:cell><ns0:cell>21.7246</ns0:cell></ns0:row><ns0:row><ns0:cell>Part</ns0:cell><ns0:cell>17.8891</ns0:cell><ns0:cell>95.2804</ns0:cell><ns0:cell>4.2296</ns0:cell><ns0:cell>21.7822</ns0:cell><ns0:cell>97.6117</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>18.7752</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>4.3330</ns0:cell><ns0:cell>22.3151</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Number of distinct categories (NDC): 6</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>% of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65975:2:0:NEW 11 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Equation Chapter 1 Section 1Changes Made in Response to the Comments of Editor and Reviewer We would like to thank the editor and reviewers for their comments, which certainly improved the quality of our paper. The following changes have been made to the paper in response to the reviewers’ comments regarding the relevance of the paper: The revised portions is indicated by red text in the revised manuscript. (in .docx format). Editor comments (Shawn Gomez) Thank you for the extensive additional work that has greatly improved the manuscript. One reviewer has brought up a couple of points that may be worth consideration. As such, the decision is 'minor revisions', but the decision to address these comments is left to the authors' discretion. No additional external reviews will be required for the manuscript, but please provide a rebuttal if you do not choose to make these modifications. The manuscript was originally accepted into PeerJ Computer Science. However, Reviewer 1 does bring up a valid point with regard to choice of journal that could potentially impact (positively) the scope of audience that would likely see this work. After consultation with editorial staff, we agree that your submission could be transferred to PeerJ Life & Environment. Again, thank you for your work. [# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #] We sincerely appreciate for the editor and reviewers’ suggestions and comments and our response to the reviewers’ comments is given below. For the publication, we think PeerJ Computer Science is suitable for the manuscript after we considered all information. Thus, we would like to publish the manuscript with PeerJ Computer Science. REVIEWERS' COMMENTS AND OUR REPLIES: Reviewer 1 (Marco Cacciabue): Our reply to comment of reviewer (Marco Cacciabue): Basic reporting • Language used is clear throughout the article. • Literature is well referenced & relevant. • Structure conforms to PeerJ standards. • Figures are high quality and described, but too many (please see general comments). • Complete raw data is supplied We thank you for checking our data and paper organization. For the number of the figures, we have removed unnecessary figures from the manuscript. Now, the manuscript has only the necessary figures to explain the work. Experimental design • Original primary research is NOT within Scope of the journal. • Research question well defined, relevant & meaningful. • It is stated how the research fills an identified knowledge gap. • Rigorous investigation performed to a high technical & ethical standard. We thank you for checking our experimental design. The editor has suggested and provided us the information on publishing the manuscript with PeerJ Computer Science and PeerJ Life & Environment. Moreover, he gives us the option to select to publish between the manuscript with PeerJ Computer Science or PeerJ Life & Environment. After, we considered all information, we think PeerJ Computer Science is suitable for the manuscript and we have decided to publish the manuscript with PeerJ Computer Science. Validity of the findings • Conclusions are well stated, linked to original research question & limited to supporting results. We thank you for checking our validity of the findings. Additional comments I would like to thank the authors for their extensive work and the quality of their manuscript. 
However I disagree with the authors and I still think that the Original primary research is NOT within the Aims and Scope of the journal, as per stated in point 5 of the Aim and Scope page (please see https://peerj.com/about/aims-and-scope/cs). For clarity, I have copied the cited paragraph: “Submissions should be directed to an audience of Computer Scientists. Articles that are primarily concerned with biology or medicine and do not have a clearly articulated applicability to the broader field of computer science should be submitted to PeerJ - the journal of Life and Environmental Sciences. For example, bioinformatics software tools should be submitted to PeerJ, rather than to PeerJ Computer Science. “ Finally, as I understand that the editor considers that the paper IS within the scope of the journal (because of the reviewing process), I will recommend the paper for publication as is. We would like to thank you for suggesting us on this point. The editor has suggested and provided us the information on publishing the manuscript with PeerJ Computer Science and PeerJ Life & Environment. Moreover, he gives us the option to select to publish between the manuscript with PeerJ Computer Science or PeerJ Life & Environment. After, we considered all information, we think PeerJ Computer Science is suitable for the manuscript and we have decided to publish the manuscript with PeerJ Computer Science. Reviewer 2: Our reply to comment of reviewer 2: Basic reporting The authors have answered all my questions, but the importance of automatic quantification has not been well explained. I believe that the improved Englished language will not be a problem for understanding. They have also corrected the wrong references. We thank you for checking our paper organization. We have corrected the manuscript as the reviewer’s suggestion. Experimental design no comment We thank you for checking our manuscript. Validity of the findings I believe validity of certain plaque plates (96-well,or other sizes) with Image Algorithm is not a problem. On the contrary, the repeatability of plaque assay itself should be addressed. We thank you for checking our validity of the findings. To evaluate the repeatability and reproducibility of the machine, the gage R&R analysis (Automotive Industry Action Group, 2010; Burdick, Borror & Montgomer, 2005) is used to determine the repeatability and reproducibility. The detail of the R&R analysis is in the experiment section. Additional comments I have said that the authors should provide more application scenarios of automatic quantification for viral plaques. The authors have not provided substantial applications and failed to emphasize the importance of automatic quantification. In other words, they should not limited it to counting viral plaques. The sizes of plaques caused by antivirals should be included. In fact, a high-throughput antiviral drug screening using plaque assay in 96-well plates can provide this scenario(DOI: 10.1002/jmv.25463). 1. We have provided more application scenarios of automatic quantification include ELISpot reader detecting cytokine release from cells in the discussion section. The developed machine can be applied to Elispot counting and the detail of the machine presents in the reference (Kukiattikoon, 2021). Kukiattikoon C, Vongsoasup N, Ajavakom N, and Phanomchoeng G. 2021. Automate platform for Capturing and Counting ELISpot on 96-Well Plate. Proc. 
of the 7th International Conference on Engineering and Emerging Technologies (ICEET) 27-28 October 2021, Istanbul, Turkey. 2. For the issue of the plaque size, we have explained the method to handle this scenario in the discussion section. The plaque sizes varied by various conditions such as pH of the buffers, concentrations of the semisolid, or under antiviral treatment. Moreover, previous reports have optimized the plaque readings in 24- and 96-well plates in antiviral drug discovery (Boonyasuppayakorn et al., 2016; Katzelnick et al., 2018; Yin et al., 2019). In order to cover this issue, our developed software provides the manually adjustable plaque sizes customized by users. The user can config the parameters the range of plaque sizes, save, and use them for the specific scenario. "
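As a rough illustration of the adjustable plaque-size parameters described in this response, a saved size-range configuration could be applied as in the sketch below; the parameter names, defaults, and file name are hypothetical and are not taken from the authors' software.

```python
# Illustrative sketch of user-adjustable plaque-size parameters: candidate regions
# outside the configured range are rejected before counting. All names are hypothetical.
import json

DEFAULTS = {"min_area_px": 40, "max_area_px": 5000, "min_roundness": 0.3}

def load_scenario(path="plaque_size_config.json"):
    """Load a saved size-range scenario, falling back to defaults if none exists."""
    try:
        with open(path) as f:
            return {**DEFAULTS, **json.load(f)}
    except FileNotFoundError:
        return dict(DEFAULTS)

def keep_region(area_px, roundness, cfg):
    """Keep only regions whose measured size and roundness fall inside the configured range."""
    return (cfg["min_area_px"] <= area_px <= cfg["max_area_px"]
            and roundness >= cfg["min_roundness"])
```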
Here is a paper. Please give your review comments after reading it.
331
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A Completely Automated Public Turing Test to tell Computers and Humans Apart (CAPTCHA) is used in web systems to secure authentication purposes; it may break using Optical Character Recognition (OCR) type methods. CAPTCHA breakers make web systems highly insecure. However, several techniques to break CAPTCHA suggest CAPTCHA designers about their designed CAPTCHA's need improvement to prevent computer visionbased malicious attacks. This research primarily used deep learning methods to break state-of-the-art CAPTCHA codes; however, the validation scheme and conventional Convolutional Neural Network (CNN) design still need more confident validation and multiaspect covering feature schemes. Several public datasets are available of text-based CAPTCHa, including Kaggle and other dataset repositories where self-generation of CAPTCHA datasets are available. The previous studies are dataset-specific only and cannot perform well on other CAPTCHA's. Therefore, the proposed study uses two publicly available datasets of 4-and 5-character text-based CAPTCHA images to propose a CAPTCHA solver. Furthermore, the proposed study used a skip-connection-based CNN model to solve a CAPTCHA. The proposed research employed 5-folds on data that delivers 10 Different CNN models on two datasets with promising results compared to the other studies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The first secure and fully automated mechanism, named CAPTCHA, was developed in 2000. The term CAPTCHA was first used by Alta Vista in 1997. It reduces spamming by <ns0:ref type='bibr'>95% Baird and Popat (2002)</ns0:ref>.</ns0:p><ns0:p>CAPTCHA is also known as a reverse Turing test. The Turing test was the first test to distinguish human and machine Von <ns0:ref type='bibr' target='#b47'>Ahn et al. (2003)</ns0:ref>. It was developed to determine whether a user was a human or a machine. It increases efficiency against different attacks that seek websites <ns0:ref type='bibr' target='#b15'>Danchev (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>Obimbo et al. (2013)</ns0:ref>. It is said that CAPTCHA should be generic such that any human can easily interpret and solve it and difficult for machines to recognize it <ns0:ref type='bibr' target='#b5'>Bostik and Klecka (2018)</ns0:ref>. To protect against robust malicious attacks, various security authentication methods have been developed <ns0:ref type='bibr' target='#b25'>Goswami et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>Priya and Karthik (2013)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>Azad and Jain (2013)</ns0:ref>. CAPTCHA can be used for authentication in login forms with various web credentials. Furthermore, CAPTCHA can be used as a spam text reducer, e.g., in email; as a secret graphical key to log in for email. In this way, a spam bot would not be able to recognize and log in to the email Sudarshan <ns0:ref type='bibr' target='#b45'>Soni and Bonde (2017)</ns0:ref>.</ns0:p><ns0:p>Many prevention strategies against malicious attacks have been adopted in recent years, such as cloud computing-based voice-processing <ns0:ref type='bibr'>Gao et al. (2020b,a)</ns0:ref>, mathematical and logical puzzles, and text and image recognition tasks <ns0:ref type='bibr' target='#b21'>Gao et al. 
(2020c)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:note type='other'>.</ns0:note><ns0:p>Computer Science due to their easier interpretation and implementation <ns0:ref type='bibr' target='#b30'>Madar et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Gheisari et al. (2021)</ns0:ref>. A set of rules may define a kind of automated creation of CAPTCHA-solving tasks. It leads to easy API creation and usage for security web developers, so as to make more mature CAPTCHAs <ns0:ref type='bibr' target='#b7'>Bursztein et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Cruz-Perez et al. (2012)</ns0:ref>. The text-based CAPTCHA is used for Optical Character Recognition (OCR). OCR is strong enough to solve text-based CAPTCHA challenges. However, it still has challenges regarding its robustness in solving CAPTCHA problems <ns0:ref type='bibr' target='#b27'>Kaur and Behal (2015)</ns0:ref>. These CAPTCHA challenges are extensive with ongoing modern technologies. Machines can solve them, but humans cannot. These automated, complex CAPTCHA-creating tools can be broken down using various OCR techniques. Some studies claim that they can break any CAPTCHA with high efficiency. The existing work also recommends strategies to increase the keyword size along with another method of crossing lines from keywords that use only straight lines and a horizontal direction. It can break easily using different transformations, such as the Hough transformation. It is also suggested that single-character recognition is used from various angles, rotations, and views to make more robust and challenging CAPTCHAs. <ns0:ref type='bibr' target='#b6'>Bursztein et al. (2011)</ns0:ref>.</ns0:p><ns0:p>The concept of reCAPTCHA was introduced in 2008. It was initially a rough estimation. It was later improved and was owned by Google to decrease the time taken to solve it. The un-solvable reCAPTCHA's were then considered to be a new challenge for OCRs Von <ns0:ref type='bibr' target='#b48'>Ahn et al. (2008)</ns0:ref>. The usage of computer vision and image processing as a CAPTCHA solver or breaker was increased if segmentation was performed efficiently <ns0:ref type='bibr' target='#b23'>George et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b57'>Ye et al. (2018)</ns0:ref>. The main objective or purpose of making a CAPTCHA solver is to protect CAPTCHA breakers. By looking into CAPTCHA solvers, more challenging CAPTCHAs can be generated, and they may lead to a more secure web that is protected against malicious attacks <ns0:ref type='bibr' target='#b40'>Rai et al. (2021)</ns0:ref>. A benchmark or suggestion for CAPTCHA creation was given by <ns0:ref type='bibr'>Chellapilla et al.:</ns0:ref> Humans should solve the given CAPTCHA challenge with a 90% success rate, while machines ideally solve only one in every 10,000 CAPTCHAs <ns0:ref type='bibr' target='#b10'>Chellapilla et al. (2005)</ns0:ref>.</ns0:p><ns0:p>Modern AI yields CAPTCHAs that can solve problems in a few seconds. Therefore, creating CAPTCHAs that are easily interpretable for humans and unsolvable for machines is an open challenge. It is also observed that humans invest a substantial amount of time daily solving CAPTCHAs Von <ns0:ref type='bibr' target='#b48'>Ahn et al. (2008)</ns0:ref>. Therefore, reducing the amount of time humans need to solve them is another challenge. Various considerations need to be made for this, including text familiarity, visual appearance, and distortions, etc. 
Commonly in text-based CAPTCHAs, the well-recognized languages are used that have many dictionaries that make them easily breakable. Therefore, we may need to make unfamiliar text from common languages such as phonetic text is not ordinary language that is pronounceable <ns0:ref type='bibr' target='#b50'>Wang and Bentley (2006)</ns0:ref>. Similarly, the color of the foreground and the background of CAPTCHA images is also an important factor, as many people have low or normal eyesight or may not be able to see them.</ns0:p><ns0:p>Therefore, a visually appealing foreground and background with distinguishing colors are recommended when creating CAPTCHAs. Thirdly, distortions that come from periodic or random manners, such as affine transformations, scaling, and the rotation of specific angles, are needed. These distortions are solvable for computers and humans. If the CAPTCHAs become unsolvable, then multiple attempts by a user are needed to read and solve them <ns0:ref type='bibr' target='#b56'>Yan and El Ahmad (2008)</ns0:ref>.</ns0:p><ns0:p>In current times, Deep Convolutional neural networks (DCNN) are used in many medical <ns0:ref type='bibr' target='#b34'>Meraj et al. (2019)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Manzoor et al. (2022)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>Mahum et al. (2021)</ns0:ref> and other real-life recognition applications <ns0:ref type='bibr' target='#b36'>Namasudra (2020)</ns0:ref> as well as in security threat solutions <ns0:ref type='bibr' target='#b29'>Lal et al. (2021)</ns0:ref>. The security threats in IoT and many other aspects can also be controlled using blockchain methods <ns0:ref type='bibr' target='#b37'>Namasudra et al. (2021)</ns0:ref>. Utilizing deep learning, the proposed study uses various image processing operations to normalize text-based image datasets. After normalizing the data, a single-word-caption-based OCR was designed with skipping connections. These skipping connections connect previous pictorial information to various outputs in simple Convolutional Neural Networks (CNNs), which possess visual information in the next layer only <ns0:ref type='bibr' target='#b1'>Ahn and Yim (2020)</ns0:ref>.</ns0:p><ns0:p>The main contribution of this research work is as follows:</ns0:p><ns0:p>&#8226; A skipping-connection-based CNN framework is proposed and covers multiple aspect of features.</ns0:p><ns0:p>&#8226; A 5-fold validation scheme is used in a deep-learning-based network to remove bias, if any, which leads to more promising results.</ns0:p></ns0:div> <ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Today in the growing and dominant field of AI, many real-life problems have been solved with the help of deep learning and other evolutionary optimized computing algorithms <ns0:ref type='bibr' target='#b42'>Rauf et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>Rauf et al. (2020)</ns0:ref>. The other application such as energy conusmption predictions using DL <ns0:ref type='bibr' target='#b20'>Gao et al. (2020b)</ns0:ref>, for time scheduling the avaidance of time wastage <ns0:ref type='bibr' target='#b21'>Gao et al. (2020c)</ns0:ref>, a survey regarding edge computing using DL also explored many usage of it <ns0:ref type='bibr' target='#b51'>Wang et al. 
(2020)</ns0:ref>, vehicle usage causing many kind of air pollution that can be automate by sharing vehicles that can also be optimized using ML approaches <ns0:ref type='bibr' target='#b32'>Malik et al. (2021)</ns0:ref> and IoT based smart city can also be developed in this regard <ns0:ref type='bibr' target='#b24'>Gheisari et al. (2021)</ns0:ref>, Similarly, in cybersecurity, many automated AI solutions have been provided by a CAPTCHA solver, except OCR. Multiple proposed CNN models have used various types of CAPTCHA datasets to solve CAPTCHAs. The collected datasets have been divided into three categories: selection-, slide-, and click-based. Ten famous CAPTCHAs were collected from google.com, tencent.com, etc. The breaking rate of these CAPTCHAs was compared. CAPTCHA design flaws that may help to break CAPTCHAs easily were also investigated. The underground market used to solve CAPTCHAs was also investigated, and findings with respect to scale, the commercial sizing of keywords, and their impact on CAPTCHas were reported <ns0:ref type='bibr' target='#b55'>Weng et al. (2019)</ns0:ref>. A proposed sparsity-integrated CNN used constraints to deactivate the fully connected connections in CNN. It ultimately increased the accuracy results compared to transfer learning and simple CNN solutions <ns0:ref type='bibr' target='#b18'>Ferreira et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Image processing operations regarding erosion, binarization, and smoothing filters were performed for data normalization, where adhesion-character-based features were introduced and fed to a neural network for character recognition <ns0:ref type='bibr' target='#b26'>Hua and Guoqin (2017)</ns0:ref>. The back propagation method was claimed as a better approach for image-based CAPTCHA recognition. It has also been said that CAPTCHA has become the normal, secure authentication method in the majority of websites, and that image-based CAPTCHAs are more useful than text-based CAPTCHAs <ns0:ref type='bibr' target='#b44'>Saroha and Gill (2021)</ns0:ref>. Template-based matching is performed to solve text-based CAPTCHAs, and preprocessing is also performed using Hough transformation and skeletonization. Features based on edge points are also extracted, and the points of reference with the most potential are taken. It is also claimed that the extracted features are invariant to position, language, and shapes. Therefore, it can be used for any kind of merged, rotated, and other variation-based CAPTCHAs WANG (2017).</ns0:p><ns0:p>PayPal CAPTCHAs have been solved using correlation and Principal Component Analysis (PCA) approaches. The primary steps of these studies include pre-processing, segmentation, and the recognition of characters. A success rate of to 90% was reported using correlation analysis of PCA, and using PCA only increased the efficiency to 97% <ns0:ref type='bibr' target='#b41'>Rathoura and Bhatiab (2018)</ns0:ref>. A Faster Recurrent Neural Network (F-RNN) has been proposed to detect CAPTCHAs. It was suggested that the depth of a network can increase the mean average precision value of CAPTCHA solvers, and experimental results showed that feature maps of a network can be obtained from convolutional layers <ns0:ref type='bibr' target='#b17'>Du et al. (2017)</ns0:ref>. Data creation and cracking has also been used in some studies. For visually impaired people, there should be solutions to CAPTCHAs. 
A CNN network named CAPTCHANet has been proposed.</ns0:p><ns0:p>A 10-layer network was designed and was improved later with training strategies. A new CAPTCHA using Chinese characters was also created, and it removed the imbalancing issue of class for model training. A statistical evaluation led to a higher success rate <ns0:ref type='bibr' target='#b58'>Zhang et al. (2021)</ns0:ref>. A data selection approach automatically selected data for training purposes. The data augmenter later created four types of noise to make CAPTCHAs difficult for machines to break. However, the reported results showed that, in combination with the proposed preprocessing method, the results were improved to 5.69% <ns0:ref type='bibr' target='#b9'>Che et al. (2021)</ns0:ref>. Some recent studies on CAPTCHA recognition are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The pretrained model of object recognition have an excellent structural CNN. A similar study used a well-known VGG network and improved the structure using focal loss <ns0:ref type='bibr' target='#b53'>Wang and Shi (2021)</ns0:ref>. The image processing operations generated complex data in text-based CAPTCHAs, but there may be a high risk of breaking CAPTCHAs using regular languages. One study used the Python Pillow library to create Bengali-, Tamil-, and Hindi-language-based CAPTCHAs. These language-based CAPTCHAs were solved using D-CNN, which proved that the model was also confined by these three languages <ns0:ref type='bibr' target='#b0'>Ahmed and Anand (2021)</ns0:ref>. To remove the manual annotation problem, a new, automatic CAPTCHA creating and solving technique using a simple 15-layer CNN was proposed.</ns0:p><ns0:p>Various fine-tuning techniques have been used to break 5-digit CAPTCHAs and have achieved 80% classification accuracies <ns0:ref type='bibr' target='#b4'>Bostik et al. (2021)</ns0:ref>. A privately collected dataset was used in a CNN approach</ns0:p></ns0:div> <ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science <ns0:ref type='table' target='#tab_5'>2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:ref> Manuscript to be reviewed Computer Science with 7 layers that utilize correlated features of text-based CAPTCHAs. It achieved a 99.7% accuracy using its own image database and CNN architecture <ns0:ref type='bibr' target='#b28'>Kumar and Singh (2021)</ns0:ref>. Another similar approach was based on handwritten digit recognition. The introduction of a CNN was initially discussed, and a CNN was proposed for twisted and noise-added CAPTCHA images <ns0:ref type='bibr' target='#b8'>Cao (2021)</ns0:ref>. A deep, separable CNN for four-word CAPTCHA recognition achieved 100% accurate results with the fine tuning of a separable CNN with respect to their depth. A fine-tuned, pre-trained model architecture was used with the proposed architecture and significantly reduced the training parameters with increased efficiency <ns0:ref type='bibr' target='#b16'>Dankwa and Yang (2021)</ns0:ref>.</ns0:p><ns0:p>A visual-reasoning CAPTCHA (known as a Visual Turing Test (VTT)) has been used in security authentication methods, and it was easy to break using holistic and modular attacks. One study focused on a visual-reasoning CAPTCHA and showed an accuracy of 67.3% against holistic CAPTCHAs and an accuracy of 88% against VTT CAPTCHAs. 
Future directions were to design VTT CAPTCHAs to protect against these malicious attacks <ns0:ref type='bibr' target='#b22'>Gao et al. (2021)</ns0:ref>. To provide a more secure system in text-based CAPTCHAs, a CAPTCHA defense algorithm was proposed. It used a multi-character CAPTCHA generator using an adversarial perturbation method. The reported results showed that complex CAPTCHA generation reduces the accuracy of CAPTCHA breaker up-to 0.06% <ns0:ref type='bibr' target='#b49'>Wang et al. (2021a)</ns0:ref>. The Generative Adversarial Network (GAN) based simplification of CAPTCHA images adopted before segmentation and classification. A CAPTCHA solver is presented that achieves 96% success rate character recognition. All other CAPTCHA schemes were evaluated and showed a 74% recognition rate. These suggestions for CAPTCHA designers may lead to improved CAPTCHA generation <ns0:ref type='bibr' target='#b52'>Wang et al. (2021b)</ns0:ref>. A binary images-based CAPTCHA recognition framework is proposed that generated certain number of image copies from given CAPTCHA image to train a CNN model. The The Weibo dataset showed that the 4-character recognition accuracy on the testing set was 92.68%, and the Gregwar dataset achieved a 54.20% accuracy on the testing set <ns0:ref type='bibr' target='#b46'>Thobhani et al. (2020)</ns0:ref>.</ns0:p><ns0:p>The studies discussed above yield information about text-based CAPTCHAs as well as other types of CAPTCHAs. Most studies used DL methods to break CAPTCHAs, and problems regarding time and unsolvable CAPTCHAs are still an open challenge. More efficient DL methods need to be used that, though they may not cover other datasets, should be robust to them.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>Recent studies based on deep learning have shown excellent results to solve CAPTCHA. However, simple CNN approaches may detect lossy pooled incoming features when passing between convolution and other pooling layers. Therefore, the proposed study utilizes skip connection. To remove further bias, a 5-fold validation approach is adopted. The proposed study presents a CAPTCHA solver framework using various steps, as shown in Figure <ns0:ref type='figure'>.</ns0:ref> 1. The data are normalized using various image processing steps to make it more understandable for the deep learning model. This normalized data is segmented per character to make an OCR-type deep learning model that can detect each character from each aspect. At last, the 5-fold validation method is reported and yields promising results.</ns0:p><ns0:p>The two datasets used for CAPTCHA recognition have 4 and 5 words in them. The 5-word dataset has a horizontal line in it with overlapping text. Segmenting and recognizing such text is challenging due to its un-clearance. The other dataset of 4 characters was not as challenging to segment, as no line intersected them, and character rotation scaling needs to be considered. Their preprocessing and segmentation is explained in the next section. The dataset is explored in detail before and after of pre-processing and segmentation.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>There are two public datasets available on Kaggle that are used in the proposed study. There are 5 and 4 characters in both datasets. 
There are different numbers of numeric and alphabetic characters in them.</ns0:p><ns0:p>There are 1040 images in the five-character dataset (d 1 ) and 9955 images in the 4-character dataset (d 2 ).</ns0:p><ns0:p>There are 19 types of characters in the d 1 dataset, and there are 32 types of characters in the d 2 dataset.</ns0:p><ns0:p>Their respective dimensions and extension details before and after segmentation are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>.</ns0:p><ns0:p>The frequencies of each character in both datasets are shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>.</ns0:p><ns0:p>The frequency of each character varies in both datasets, and the number of characters also varies. In the d 2 dataset, although there is no complex inner line intersection and a merging of texts is found, more characters and their frequencies are. However, the d 1 dataset has complex data and a low number of has image dimensions of 24 x 72 x 3, where 24 is the rows, 72 is the columns, and 3 is the color depth of given images. These datasets have almost the same character location. Therefore, they can be manually cropped to train the model on each character in an isolated form. However, their dimensions may vary for each character, which may need to be equally resized. The input images of both datasets were in Portable Graphic Format (PNG) and did not need to change. After segmenting both dataset images, each character is resized to 20 x 24 in both datasets. This size covers each aspect of the visual binary patterns of each character. The dataset details before and after resizing are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The summarized details of the used datasets in the proposed study are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The dimensions of the resized image per character mean that, when we segment the characters from the given dataset images, their sizes vary from dataset to dataset and from character type to character type.</ns0:p><ns0:p>Therefore, the optimal size at which the data of the image for each character are not lost is 20 rows by 24 columns, and this is set for each character.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing and Segmentation</ns0:head><ns0:p>d 1 dataset images do not need any complex image processing to segment them into a normalized form.</ns0:p><ns0:p>d 2 needs this operation to remove the central intersecting line of each character. This dataset can be normalized to isolate each character correctly. Therefore, three steps are performed on the d 1 dataset.</ns0:p><ns0:p>It is firstly converted to greyscale; it is then converted to a binary form, and their complement is lastly taken. In the d 2 dataset, 2 additional steps of erosion and area-wise selection are performed to remove the intersection line and the edges of characters. The primary steps of both datasets and each character isolation are shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>.</ns0:p><ns0:p>Binarization is the most needed step in order to understand the structural morphology of a certain character in a given image. Therefore, to perform binarization, grayscale conversion of images is performed, and images are converted from a greyscale to a binary format. The RGB format image has 3 channels in them: Red, Green, and Blue. Let Image I (x,y) be the input RGB image, as shown in Eq. 1. To convert these input images into grayscale, Eq. 
2 is performed.</ns0:p><ns0:formula xml:id='formula_0'>Input Image = I (x,y)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In Eq. 1, I is the given image, and x and yx and y represent the rows and columns. The grayscale conversion is performed using Eq. 2:</ns0:p><ns0:formula xml:id='formula_1'>Grey (x, y) &#8592; j &#8721; i=n (0.2989 * R, 0.5870 * G, 0.1140 * B)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In Eq. 2, i is the iterating row position, j is the interacting column position of the operating pixel Manuscript to be reviewed</ns0:p><ns0:p>Computer Science of 0-255. Grey (x, y) is the output grey-level of a given pixel at a certain iteration. After converting to grey-level, the binarization operation is performed using Bradly's method, which basically calculates a neighborhood base threshold to convert into 1 and 0 values to a given grey-level matrix of dimension 2.</ns0:p><ns0:p>The neighborhood threshold operation is performed using Eq. 3.</ns0:p><ns0:formula xml:id='formula_2'>B (x, y) &#8592; 2 * &#8970;size ( Grey (x, y) 16 + 1)&#8971;<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In Eq. 3, the output B (x, y) is the neighborhood-based threshold that is calculated as the 1/8 th neighborhood of a given Grey (x, y) image. However, the floor is used to obtain a lower value to avoid any miscalculated threshold value. This calculated threshold is also called the adaptive threshold method.</ns0:p><ns0:p>The neighborhood value can be changed to increase or decrease the binarization of a given image. After obtaining a binary image, the complement is necessary to highlight the object in a given image, which is taken as a simple inverse operation, calculated as shown in Eq. 4.</ns0:p><ns0:formula xml:id='formula_3'>C (x, y) &#8592; 1 B(x, y)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>In Eq. 4, the available 0 and values are simply inverted to their respective values of each pixel position</ns0:p><ns0:p>x and y. The inverted image is used as an isolation process in the case of the d 2 dataset. In the case of the d 1 , further erosion is needed. Erosion is basically an operation that uses a structuring element with respect to its shape. The respective shape is used to remove pixels from a given binary image. In the case of a CAPTCHA image, the intersected line is removed using a line type structuring element. The line type structuring element uses a neighborhood operation. In the proposed study case, a line of size 5 with an angle dimension of 90 is used, and the intersecting line for each character in the binary image is removed, as we can see in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, row 1. The erosion operation with respect to a 5 length and a 90 angle is calculated as shown in Eq. 5.</ns0:p><ns0:formula xml:id='formula_4'>C &#8854; L &#8592; x &#8712; E| B x &#8838; C<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In Eq. 5, C is the binary image, L is the line type structuring element of line type, and x is the eroded resultant matrix of the input binary image C. B x is the subset of a given image, as it is extracted from a given image C. After erosion, there is noise in some images that may lead to the wrong interpretation of that character. Therefore, to remove noise, the neighborhood operation is again utilized, and 8 neighborhood operations are used to a given threshold of 20 pixels for 1 value, as the noise value remains Manuscript to be reviewed</ns0:p><ns0:p>Computer Science lower than the character in that binary image. 
To calculate it, an area calculation using each pixel is necessary. Therefore, by iterating an 8 by 8 neighborhood operation, 20 pixels consisting of area are checked to remove those areas, and other larger areas remain in the output image. The sum of a certain area with a maximum of 1 is calculated as shown in Eq. 6.</ns0:p><ns0:p>S (x, y)</ns0:p><ns0:formula xml:id='formula_5'>&#8592; j &#8721; i=1 max(B x |xi &#8722; x j|, B x |yi &#8722; y j|)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In Eq. 6, the given rows (i) and columns ( j) of a certain eroded image B x are used to calculate the resultant matrix by extracting each pixel value to obtain one's value from the binary image. The max will return only values that will be summed to obtain an area that will be compared with threshold value T.</ns0:p><ns0:p>The noise will then be removed, and final isolation is performed to separate each normalized character.</ns0:p></ns0:div> <ns0:div><ns0:head>CNN Training for Text Recognition</ns0:head><ns0:formula xml:id='formula_6'>convo (I,W ) x,y = N C &#8721; a=1 N R &#8721; b=1 W a,b * I x+a&#8722;1,y+b&#8722;1 (7)</ns0:formula><ns0:p>In the above equation, we formulate a convolutional operation for a 2D image that represents I x,y , where x and y are the rows and columns of the image, respectively. W x,y represents the convolving window with respect to rows and columns x and y. The window will iteratively be multiplied with the respective element of the given image and then return the resultant image in convo (I,W ) x,y . N C and N R are the number of rows and columns starting from 1, a represents columns, and b represents rows.</ns0:p></ns0:div> <ns0:div><ns0:head>Batch Normalization Layer</ns0:head><ns0:p>Its basic formula is to calculate a single component value, which can be represented as</ns0:p><ns0:formula xml:id='formula_7'>Bat &#8242; = a &#8722; M [a] var(a)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>The calculated new value is represented as Bat &#8242; , a is any given input value, and M[a] is the mean of that given value, where in the denominator the variance of input a is represented as var(a). The further value is improved layer by layer to give a finalized normal value with the help of alpha gammas, as shown below:</ns0:p><ns0:formula xml:id='formula_8'>Bat &#8242;&#8242; = &#947; * Bat &#8242; + &#946; (9)</ns0:formula><ns0:p>The extended batch normalization formulation improved in each layer with the previous Bat &#8242; value.</ns0:p></ns0:div> <ns0:div><ns0:head>ReLU</ns0:head><ns0:p>ReLU excludes the input values that are negative and retains positive values. Its equation can be written as</ns0:p><ns0:formula xml:id='formula_9'>reLU = x = x i f x &gt; 0 x = 0 i f x &#8804; 0 (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>)</ns0:formula><ns0:p>where x is the input value and directly outputs the value if it is greater than zero; if values are less than 0, negative values are replaced with 0.</ns0:p></ns0:div> <ns0:div><ns0:head>Skip-Connection</ns0:head><ns0:p>The Skip connection is basically concetnating the previous sort of pictoral information to the next convolved feature maps of network. In proposed network, the ReLU-1 information is saved and then after 2nd and 3rd ReLU layer, these saved information is concatenated with the help of an addition layer. In this way, the skip-connection is added that makes it different as compared to conventional deep learning approaches to classify the guava disease. 
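To make the three convolutional blocks of Eqs. 7 to 10 and the skip connection described above concrete, a minimal re-implementation sketch is given below. The original framework, initializers, optimizer, and loss are not stated in the text, so a Keras/TensorFlow formulation with the layer dimensions of Table 3 is assumed here rather than the authors' exact code.

# Minimal Keras sketch of the skip-connection CNN (layer sizes follow Table 3);
# the framework choice, optimizer, and loss below are assumptions, not the
# authors' implementation.
from tensorflow.keras import layers, models

def build_skip_cnn(num_classes):
    inp = layers.Input(shape=(24, 20, 1))                     # 24 x 20 x 1 input characters
    x = layers.Conv2D(8, 3, padding='same')(inp)              # Conv(1), Eq. 7
    x = layers.BatchNormalization()(x)                        # BN(1), Eqs. 8-9
    r1 = layers.ReLU()(x)                                     # ReLU(1), Eq. 10; feeds the skip path
    x = layers.Conv2D(16, 3, strides=2, padding='same')(r1)   # Conv(2)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(32, 3, padding='same')(x)               # Conv(3)
    x = layers.BatchNormalization()(x)
    r3 = layers.ReLU()(x)                                     # ReLU(3)
    skip = layers.Conv2D(32, 1, strides=2)(r1)                # 1 x 1 skip connection (8 -> 32 maps)
    x = layers.Add()([r3, skip])                              # addition layer joining both paths
    x = layers.AveragePooling2D(pool_size=2)(x)               # 6 x 5 x 32
    x = layers.Flatten()(x)
    out = layers.Dense(num_classes, activation='softmax')(x)  # 19 or 32 character classes
    return models.Model(inp, out)

model = build_skip_cnn(num_classes=19)   # 19 for the five-character set, 32 for the four-character set
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

The 1 x 1 convolution on the skip path is strided so that the early ReLU-1 feature maps match the 12 x 10 x 32 shape of the third block before the element-wise addition.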
Moreover, the visualization of these added feature information is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Average Pooling</ns0:head><ns0:p>The average pooling layer is simple as we convolve to the whole input coming from the previous layer or node. The coming input is fitted using a window of size mxn, where m represents the rows, and n represents the column. The movement in the horizontal and vertical directions continue using stride parameters.</ns0:p><ns0:p>Many deep learning-based algorithms introduced previously, as we can see in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, ultimately use CNN-based methods. However, all traditional CNN approaches using convolve blocks and transfer learning approaches may take important information when they pool down to incoming feature maps from previous layers. Similarly, the testing and validation using conventional training, validation, and testing may be biased due to less data testing, as compared to the training data. Therefore, the proposed study uses a 1-skip connection while maintaining other convolve blocks; inspired by the K-Fold validation method, it splits up both datasets' data into five respective folds. The dataset after splitting into five folds is trained and tested in a sequence. However, these five-fold results are taken as a means to report final accuracy results. The proposed CNN contains 16 layers in total, and it includes three major blocks containing convolutional, batch normalization, and ReLU layers. After these nine layers, an additional layer adds incoming connections, a skip connection, and 3rd-ReLU-layer inputs from the three respective blocks. Average pooling, fully connected, and softmax layers are added after skipping connections. All layer parameters and details are shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, all learnable weights of each layer are shown. For both datasets, output categories of characters are different. Therefore, in the dense layer of the five-fold CNN models, the output class was 19 for five models, and the output class was 32 categories in the other five models. The skip connection has more weights than other convolution layers. Each model is compared regarding its weight learning and is shown in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the training and testing data and take the testing results as the mean of all models. In this way, no data will remain for training, and no data will be untested. The results ultimately become more confident than previous conventional approaches of CNN. The d 2 dataset has a clear structured element in its segmented images; in d 1, , the isolated text images were not much clearer. Therefore, the classification results remain lower in this case, whereas in the d2 dataset, the classification results remain high and usable as a CAPTCHA solver. The results of each character and dataset for each fold are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>As discussed earlier, there are two datasets in the proposed framework. 
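The five-fold protocol just described (five models per dataset, each tested on a disjoint held-out fold, with the mean of the fold accuracies reported) could be reproduced roughly as follows. scikit-learn's StratifiedKFold is assumed here as an approximation of the random per-character splits, build_skip_cnn() refers to the sketch above, and load_character_images() is a placeholder for the segmented 20 x 24 character images and their labels.

# Sketch of the five-fold evaluation; StratifiedKFold, the epoch count, and
# load_character_images() are assumptions/placeholders, not the authors' setup.
import numpy as np
from sklearn.model_selection import StratifiedKFold

X, y = load_character_images()          # X: (N, 24, 20, 1) binary characters, y: integer labels
num_classes = len(np.unique(y))
one_hot = np.eye(num_classes)[y]

fold_accuracies = []
splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in splitter.split(X, y):
    fold_model = build_skip_cnn(num_classes)   # a fresh CNN per fold (five models in total)
    fold_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    fold_model.fit(X[train_idx], one_hot[train_idx], epochs=30, batch_size=64, verbose=0)
    _, acc = fold_model.evaluate(X[test_idx], one_hot[test_idx], verbose=0)
    fold_accuracies.append(acc)

print('Per-fold accuracy:', ['%.4f' % a for a in fold_accuracies])
print('Mean accuracy:', np.mean(fold_accuracies))

Because every sample appears in exactly one test fold, no data remain untested, which is the property relied on above to argue that the reported means are unbiased.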
Both have a different number of categories and a different number of images. Therefore, separate evaluations of both are discussed and described in this section. Firstly, the five-character dataset is used by the 5-CNN models of same architecture, with a different split in the data. Secondly, the four-character dataset is used by the same architecture of the model, with a different output of classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Five-character Dataset (d 1 )</ns0:head><ns0:p>The five-character dataset has 1040 images in it. After segmenting each type of character, it has 5200 total images. The data are then split into five folds: 931, 941, 925, 937, and 924. The remaining data difference is adjusted into the training set, and splitting was adjusted during the random selection of 20-20% of the total data. The training on four-fold data and the testing on the one-fold data are shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>, there are 19 types of characters that have their fold-by-fold varying accuracy. The mean of all folds is given. The overall or mean of each fold as well as the mean of all folds is given in the last row. We can see that the Y character has a significant or the highest accuracy rate (95.40%) of validation, compared to other characters. This may be due to its almost entirely different structure from other characters. The other highest accuracy is of the G character with 95.06%, which is almost equal to the highest with a slight difference. However, these two characters have a more than 95% recognition accuracy, and no other character is nearer to 95. The other characters have a range of accuracies from 81 to 90%. The most least accurate M character is 62.08, and it varies in five folds from 53 to 74%. Therefore, we can say that M matches with other characters, and for this character recognition, we may need to concentrate on structural polishing for M input characters. To prevent CAPTCHA from breaking further complex designs among machine and to make it easy for humans to do so, the other characters that achieve higher results need a high angle and structural change, so as to not break with any machine learning model.</ns0:p><ns0:p>This complex structure may be improved from some other fine tuning of a CNN, increasing or decreasing the skipping connection. The accuracy value can also improve. The other four-character dataset is more important, as it has 32 types of characters and a greater number of images. This five-character dataset's lower accuracy may also be due to little data and less training. The other character recognition studies have higher accuracy rates on similar datasets, but they might be less confident compared to the proposed study due to an un-biased validation method. The four-character dataset recognition results are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Four-Character Dataset (d 2 )</ns0:head><ns0:p>The four-character dataset has a higher frequency of each character compared to the five-character dataset, and the number of characters is also higher. The same five-fold splits were performed on this dataset characters as well. After applying the five folds, the number of characters in each fold was 7607, 7624, 7602, 7617, and 7595, respectively, and the remaining images from the 38,045 images of individual characters were adjusted into the training sets of each fold. 
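The per-character accuracies reported in Tables 4 and 5 can be read as per-class recall on the held-out fold; a minimal sketch of how such values are computed is given below, with scikit-learn assumed and the label alphabet as an illustrative parameter.

# Per-character (per-class) accuracy on one held-out fold, as reported per fold
# in Tables 4 and 5; scikit-learn's confusion matrix is assumed here.
import numpy as np
from sklearn.metrics import confusion_matrix

def per_character_accuracy(y_true, y_pred, alphabet):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(len(alphabet))))
    # diagonal = correctly recognized samples of each character class
    acc = cm.diagonal() / cm.sum(axis=1).clip(min=1)
    return dict(zip(alphabet, acc))

Averaging these values over the five folds gives the per-character "Overall" column, while averaging over all characters within a fold gives the fold-level means in the last rows of the tables.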
The results of each character w.r.t each fold and the overall mean are given in Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, it can be observed that almost every character was recognized with 99% accuracy. The highest accuracy of character D was 99.92 and remains 100% in the four folds. Only one fold showed a 99.57% accuracy. From this point, we can state that the proposed study removed bias, if there was any, from the dataset by making splits. Therefore, it is necessary to make folds in a deep learning network.</ns0:p><ns0:p>Most studies use a 1-fold approach only. The 1-fold approach is at a high risk. It is also important that the character m achieved the lowest accuracy in the case of the five-character CAPTCHA. In this four-character CAPTCHA, 98.58% was accurately recognized. Therefore, we can say that the structural morphology of M in the five-character CAPTCHA better avoids any CAPTCHA solver method. The Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>highest results show that this four-character CAPTCHA is at a high risk, and line intersection, word joining, and correlation may break prevent the CAPTCHA from breaking. To recognize the CAPTCHA, many approaches before have been proposed, and most of them have used a conventional structure.</ns0:p><ns0:p>The proposed study has used a more confident validation approach with multi-aspect feature extraction.</ns0:p><ns0:p>Therefore, it can be used as a more promising approach to break CAPTCHA images and to test the CAPTCHA design made by CAPTCHA designers. In this way, CAPTCHA designs can be protected against new approaches of deep learning. The graphical illustration of validation accuracy and the losses for both datasets on all folds is shown in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. In Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, we can see that various studies have used different numbers of characters with self-collected and generated datasets, and comparisons have been made. Some studies have considered the number of dataset characters. Accuracy is not comparable, as it uses the five-fold validation method, and the others only used 1-fold. Therefore, the proposed study outperforms in each aspect, in terms of the proposed CNN framework and its validation scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed study uses a different approach of deep learning to solve CAPTCHA problems. It proposed a skip-CNN connection network to break text-based CAPTCHA's. Two CAPTCHA datasets are discussed and evaluated character by character. The proposed study is confident to report results, as it removed biases (if any) in datasets using five-folds validation method. The results are also improved as compared to previous studies. The reported higher results claims that these CAPTCHA designs are at high risk, as Manuscript to be reviewed</ns0:p><ns0:p>Computer Science any malicious attack can break them on the web. Therefore, the proposed CNN could be used to test CAPTCHA designs to solve them more confidently in real-time. 
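As an illustration of such a design test, the preprocessing of Eqs. 1 to 6 can be chained with a trained per-character model to read a whole CAPTCHA. OpenCV is assumed, solve_captcha() is not the authors' code, and the label alphabet and its ordering below are placeholders taken from the 19 character types listed in Table 4.

# Illustrative end-to-end CAPTCHA test: binarize, clean, segment left-to-right,
# classify each character, and join the predictions. All names, thresholds, and
# the class-to-character mapping below are assumptions.
import cv2
import numpy as np

ALPHABET = list('2345678BCDEFGMNPWXY')   # placeholder ordering of the 19 classes in Table 4

def solve_captcha(path, model, min_area=20):
    img = cv2.imread(path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                       # Eq. 2
    block = 2 * (min(grey.shape) // 16) + 1
    binary = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block, 10)       # Eq. 3, Bradley-style threshold
    binary = cv2.bitwise_not(binary)                                   # Eq. 4, complement
    binary = cv2.erode(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5)))  # Eq. 5, line removal
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= min_area]  # Eq. 6, drop small noise
    boxes.sort(key=lambda s: s[cv2.CC_STAT_LEFT])                      # read characters left to right
    text = ''
    for x, y, w, h, _ in boxes:
        glyph = cv2.resize(binary[y:y + h, x:x + w], (20, 24)) / 255.0  # 20 x 24 per character
        probs = model.predict(glyph.reshape(1, 24, 20, 1), verbose=0)
        text += ALPHABET[int(np.argmax(probs))]
    return text

A mismatch between the returned string and the ground-truth label then flags a CAPTCHA design that the network can break.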
Furthermore, the proposed study has used the publically available datasets to perform training and testing on them that makes it more robust approach to solve text-based CAPTCHA's.</ns0:p><ns0:p>Many studies have used deep learning to break CAPTCHAs, as they have focused on the need to design CAPTCHAs that do not consume user time and resist CAPTCHA solvers. It would make our web systems more secure against malicious attacks.However, In future the data augmentation methods and more robust data creation method can be applied on CAPTCHA datasets where intersecting line based CAPTCHA's are more challenging to break that can be used.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Text-based authentication methods are mostly used PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The Proposed Framework for CAPTCHA Recognition.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Character-wise Frequencies (Row-1: 4-Character Dataset 1 (d 2 ); Row-2: five-character Dataset 2 (d 1 )).</ns0:figDesc><ns0:graphic coords='7,183.09,397.92,330.86,283.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>at a certain time, and R, G and B are the red, green, and blue pixel value of that pixel. The multiplying constant values convert to all three values of the respective channels to a new grey-level value in the range 7/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Preprocessing and Isolation of characters in both datasets (Row-1: the d1 dataset, binarization, erosion, area-wise selection, and segmentation; Row-2: binarization and isolation of each character).</ns0:figDesc><ns0:graphic coords='9,162.41,63.77,372.20,183.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Five-fold weights with respective layers shown for multiple proposed CNN architectures.</ns0:figDesc><ns0:graphic coords='11,141.73,351.07,413.54,225.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The validation loss and validation accuracy graphs are shown for each fold of the CNN (Row-1: five-character CAPTCHA; Row-2: four-character CAPTCHA).</ns0:figDesc><ns0:graphic coords='16,141.73,171.69,413.54,196.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Recent CAPTCHA recognition-based studies and their details.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Year Dataset</ns0:cell><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell /><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang and Shi (2021)</ns0:cell><ns0:cell>2021 CNKI</ns0:cell><ns0:cell /><ns0:cell cols='2'>Binarization,</ns0:cell><ns0:cell cols='2'>Recognition rate= 99%,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA,</ns0:cell><ns0:cell>smoothing,</ns0:cell><ns0:cell>seg-</ns0:cell><ns0:cell cols='2'>98.5%, 97.84%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random Gener-</ns0:cell><ns0:cell>mentation</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ated, Zhengfang</ns0:cell><ns0:cell>annotation</ns0:cell><ns0:cell>with</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA</ns0:cell><ns0:cell cols='2'>Adhesian and more</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>interference</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ahmed and Anand</ns0:cell><ns0:cell>2021 Tamil,</ns0:cell><ns0:cell>Hindi</ns0:cell><ns0:cell>Pillow</ns0:cell><ns0:cell>Library,</ns0:cell><ns0:cell>&#8764;</ns0:cell></ns0:row><ns0:row><ns0:cell>(2021)</ns0:cell><ns0:cell cols='2'>and Bengali</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bostik et al. (2021)</ns0:cell><ns0:cell cols='2'>2021 Private created</ns0:cell><ns0:cell cols='2'>15-layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Kumar and Singh (2021) 2021 Private</ns0:cell><ns0:cell /><ns0:cell cols='2'>7-Layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 99.7%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dankwa and Yang (2021) 2021 4-words Kaggle</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy=100%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2021b)</ns0:cell><ns0:cell cols='2'>2021 Private GAN</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>based dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 96%, overall =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>74%</ns0:cell></ns0:row><ns0:row><ns0:cell>Thobhani et al. 
(2020)</ns0:cell><ns0:cell cols='3'>2020 Weibo, Gregwar CNN</ns0:cell><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>92.68%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>54.20%</ns0:cell></ns0:row></ns0:table><ns0:note>4/18PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of the employed dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Properties</ns0:cell><ns0:cell>d1</ns0:cell><ns0:cell>d2</ns0:cell></ns0:row><ns0:row><ns0:cell>Image dimension</ns0:cell><ns0:cell>50x200x3</ns0:cell><ns0:cell>24x72x3</ns0:cell></ns0:row><ns0:row><ns0:cell>Extension</ns0:cell><ns0:cell>PNG</ns0:cell><ns0:cell>PNG</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Images</ns0:cell><ns0:cell>9955</ns0:cell><ns0:cell>1040</ns0:cell></ns0:row><ns0:row><ns0:cell>Character Types</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>19</ns0:cell></ns0:row><ns0:row><ns0:cell>Resized Image Dimension (Per Character)</ns0:cell><ns0:cell>20x24x1</ns0:cell><ns0:cell>20x24x1</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The proposed 16-CNN with Parameters and learnable weights</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number</ns0:cell><ns0:cell>Layers Name</ns0:cell><ns0:cell>Category</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell cols='3'>Weights/Offset Padding Stride</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Input</ns0:cell><ns0:cell>Image Input</ns0:cell><ns0:cell>24 x 20 x 1</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Conv (1)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>3x3x1x8</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BN (1)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>1x1x8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ReLU (1)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Conv (2)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>3x3x8x16</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BN (2)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>1x1x16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ReLU (2)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Conv (3)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>3x3x16x32</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BN (3)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 
32</ns0:cell><ns0:cell>1x1x32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>ReLU (3)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Skip-connection</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x8x32</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Add</ns0:cell><ns0:cell>Addition</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Pool</ns0:cell><ns0:cell>Average Pooling</ns0:cell><ns0:cell>6 x 5 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 19 (d2)</ns0:cell><ns0:cell>19 x 960 (d2)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>FC</ns0:cell><ns0:cell>Fully connected</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 32 (d1)</ns0:cell><ns0:cell>32 x 960 (d1)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>1 x 1 x 19</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>Class Output</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table><ns0:note>11/18PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Five-character Dataset Accuracy (%) with five-fold text recognition testing on the CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>89.63</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>78.72</ns0:cell><ns0:cell>84.48</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>87.76</ns0:cell><ns0:cell>75.51</ns0:cell><ns0:cell>87.75</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>86.12</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>88.46</ns0:cell><ns0:cell>90.196</ns0:cell><ns0:cell>90.19</ns0:cell><ns0:cell>92.15</ns0:cell><ns0:cell>89.06</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>80.39</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>86.56</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>76.59</ns0:cell><ns0:cell>82.61</ns0:cell><ns0:cell>91.304</ns0:cell><ns0:cell>80.43</ns0:cell><ns0:cell>87.58</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>89.36</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>84.78</ns0:cell><ns0:cell>86.68</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>91.66</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>87.49</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>81.81</ns0:cell><ns0:cell>73.33</ns0:cell><ns0:cell>97.72</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>90.09</ns0:cell><ns0:cell>85.03</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>82.64</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>95.55</ns0:cell><ns0:cell>88.67</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>62.79</ns0:cell><ns0:cell>79.54</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>93.18</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>78.73</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>92.00</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>94.00</ns0:cell><ns0:cell>81.63</ns0:cell><ns0:cell>89.1</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>95.83</ns0:cell><ns0:cell>91.83</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>93.75</ns0:cell><ns0:cell>95.06</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>64.00</ns0:cell><ns0:cell>56.00</ns0:cell><ns0:cell>53.061</ns0:cell><ns0:cell>74.00</ns0:cell><ns0:cell>67.34</ns0:cell><ns0:cell>62.08</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>81.40</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>87.59</ns0:cell><ns0:cell>76.74</ns0:cell><ns0:cell>82.35</ns0:cell><ns0:cell>81.43</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:ce
ll>82.22</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>90.34</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>95.24</ns0:cell><ns0:cell>83.72</ns0:cell><ns0:cell>90.47</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>90.55</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>82.97</ns0:cell><ns0:cell>85.41</ns0:cell><ns0:cell>82.98</ns0:cell><ns0:cell>85.68</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>93.02</ns0:cell><ns0:cell>95.45</ns0:cell><ns0:cell>97.67</ns0:cell><ns0:cell>95.53</ns0:cell><ns0:cell>95.35</ns0:cell><ns0:cell>95.40</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>86.14</ns0:cell><ns0:cell>80.77</ns0:cell><ns0:cell>87.24</ns0:cell><ns0:cell>87.73</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>85.52</ns0:cell></ns0:row></ns0:table><ns0:note>13/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Four-character dataset Accuracy (%) with five-fold text recognition testing on the CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>97.84</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.27</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>97.02</ns0:cell><ns0:cell>94.92</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>95.75</ns0:cell><ns0:cell>96.17</ns0:cell><ns0:cell>96.52</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>97.87</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.55</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>99.01</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>99.56</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.69</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>98.80</ns0:cell><ns0:cell>99.60</ns0:cell><ns0:cell>99.19</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.20</ns0:cell><ns0:cell>99.36</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.42</ns0:cell><ns0:cell>97.86</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.29</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>98.85</ns0:cell><ns0:cell>96.55</ns0:cell><ns0:cell>98.08</ns0:cell><ns0:cell>98.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.39</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.54</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>96.59</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>96.15</ns0:cell><ns0:cell>97.95</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns
0:cell>99.17</ns0:cell><ns0:cell>99.25</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.92</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>99.18</ns0:cell><ns0:cell>97.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.94</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>98.69</ns0:cell><ns0:cell>98.26</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.82</ns0:cell><ns0:cell>97.83</ns0:cell><ns0:cell>98.52</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>97.93</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.69</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>98.43</ns0:cell></ns0:row><ns0:row><ns0:cell>H</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.33</ns0:cell></ns0:row><ns0:row><ns0:cell>J</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.66</ns0:cell></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell></ns0:row><ns0:row><ns0:cell>L</ns0:cell><ns0:cell>97.41</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>96.23</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.58</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.10</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.83</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>98.77</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>96.28</ns0:cell><ns0:cell>97.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Q</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.75</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>97.50</ns0:cell><ns0:cell>99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.42</ns0:cell></ns0:row><ns0:row><ns0:cell>T</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.73</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>98.31</ns0:cell><ns0:cell>97.98</ns0:cell></ns0:row><ns0:row><ns0:cell>U</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.43</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.80</ns0:cell></ns0:row><ns0:row><ns0:cell>V</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.22</ns0:cell><ns0:cell>98.47</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.67</ns0:cell></ns0:ro
w><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.66</ns0:cell></ns0:row><ns0:row><ns0:cell>Z</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>98.97</ns0:cell><ns0:cell>98.18</ns0:cell><ns0:cell>99.32</ns0:cell><ns0:cell>98.92</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.82</ns0:cell></ns0:row></ns0:table><ns0:note>14/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Four-character dataset with five-fold text recognition testing on a CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>References</ns0:cell><ns0:cell>No. of Characters</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>Faster R-CNN</ns0:cell><ns0:cell>Accuracy= 98.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Du et al. (2017)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Chen et al. (2019)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Selective D-CNN</ns0:cell><ns0:cell>Success rate= 95.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik et al. (2021)</ns0:cell><ns0:cell>Different</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>Accuracy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Different</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>Precision=98.99%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik and Klecka (2018)</ns0:cell><ns0:cell /><ns0:cell>SVN</ns0:cell><ns0:cell>99.80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Feed forward-Net</ns0:cell><ns0:cell>98.79%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Study</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Skip-CNN with 5-Fold Validation Accuracy= 98.82%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Accuracy=85.52%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:1:0:NEW 18 Sep 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Reviewer 1 Comments Response 1. The proposed approach used two different types of datasets on the same model. How can the same prediction model predict the different types of captchas? Please elaborate. Thanks for your concern. It is correct that two different datasets have been used in proposed study where same prediction model is used. However, the two datasets are trained differently on same architect of proposed network. 2. The previous studies have used dataset generation, where the proposed study used only a public study dataset. Why? Thanks for your concern. We think that the previous studies have used the self-generated data that make those studies, case-specific where public or real-time used data is more challenging and robust. Therefore, the proposed study used public datasets to make it more robust study over the public captcha solvers. 3. It is said that 5-folds have been used, where no detail of each fold is given that how it is calculated. Please evaluate it in terms of table or diagram etc. Thanks for your concern. The both dataset folds detail and number of instances are mentioned in 296-299 lines for 5- Character dataset and for 4-character dataset, it is mentioned in 320-324. 4. The 'conclusions' are a crucial component of the paper. It should complement the 'abstract' and is normally used by experts to value the paper's engineering content. In general, it should sum up the most important outcomes of the paper. It should simply provide critical facts and figures achieved in this paper for supporting the claims. Thanks for your concern. Yes, you are right. However, we further conclude the contributing claims of proposed study. 5. The proposed study has used a proposed CNN, where in previous studies, most of the researchers have used similar CNNs. How the proposed CNN is different, and novel as compared to previous studies? Clearly mention and highlight. Thanks for your concern. The novelty or contribution regarding proposed CNN as compared to previous CNN is highlighted in Lines 93-96. 6. The abstract is too general and not prepared objectively. It should briefly highlight the paper's novelty as the main problem, how it has been resolved, and where the novelty lies? Thanks for your concern. The acquired changes are updated in Abstract and highlighted the contributing parts of proposed research. 7. The existing literature should be classified and systematically reviewed instead of being independently introduced one by one. Thanks for your concern. The literature review is updated in this regard. 8.  It seems that Skip connection is used in the proposed CNN; how does it work differently? No detail of this part is given. Please evaluate it specifically in some separate sections to get into depth. Thanks for your concern. The acquired section is added successfully in corresponding section. Reviewer 2 Comments Response 1. In the Literature review, the table shows only CNN-based approaches in 2021, mainly where if the table is created, it must highlight different aspects of studies from different years. Thanks for your concern. The table is updated regarding more work. 2. The proposed study has used text-based CAPTCHA images that contain multiple characters where single character recognition results and predictions are shown? How is the segmentation processed? Are previous studies similarly doing segmentation? Thanks for your concern. Yes, the many of mentioned studies have used manual cropping-based segmentation for the sake of training of each character. 
Therefore, segmentation is performed by manually cropping each image to isolate the individual characters.
3. What is the reason for choosing these specific datasets? When researchers use their own generated dataset, that is larger in number of images as well.
The main reason for choosing these publicly available datasets is that the proposed study does not remain specific to one type of dataset. Choosing different types of datasets makes the study more robust, whereas previous studies used their own generated CAPTCHAs to develop CAPTCHA solvers. Such self-generated data make those studies dataset-specific, while the publicly available datasets are collected from the real world and are mostly used on different websites. These reasons are also mentioned in lines 362-368.
4. What is meant by 5-fold validation in a CNN? Please clearly make a separate section and also mention its significance.
Thanks for your concern. The significance is essentially to remove bias from the validation method. Normally, DL approaches use one-fold validation, in which the data are split into training, testing, and validation ratios. To make the proposed study more robust and confident, we split the data into 5 folds, performed a separate evaluation of each model, and then report the overall results. This detail is also reflected in lines 181-188.
5. In the comparison section, in the table, it is said "different" type of characters! What does it mean by that? Please elaborate on the results and comparison section.
Thanks for your concern. As mentioned in the comparison section, the number of characters is evaluated; as far as "different" is concerned, it means that those studies used varying numbers of characters from different languages, so a single countable number cannot be given. The heading of the comparison has also been highlighted to give more insight into the compared studies.
6. There are some typos and grammatical errors in the manuscript. It is strongly suggested that the whole work be carefully checked by someone who has expertise in technical English writing.
Thanks for your concern. The manuscript has been checked in depth for grammatical and English writing issues and updated where changes were needed.
7. Key contribution and novelty have not been detailed in the manuscript. Please include them in the introduction section.
Thanks for your concern. The required content is reflected at the end of the Introduction section in lines 93-96.
8. What are the limitations of the related works?
The simple and intersecting-line-based text CAPTCHA images used in the proposed study, as well as big-data concerns, are among the limitations. These limitations and future suggestions are highlighted in the conclusion in lines 363-367.
9. Are there any limitations of this carried-out study? Thanks for your concern. This concern is already discussed in lines 369-373.
10. How to select and optimize the user-defined parameters in the proposed model? Thanks for your concern. A performance-based selection of layers and parameters is utilized; the optimal fine-tuned CNN parameters and layers are described in Table 4.
11. There are quite a few abbreviations used in the manuscript. It is suggested to use a table to host all the frequently used abbreviations with their descriptions to improve the readability. Thanks for your concern. An abbreviation table has been added to the updated manuscript.
12. Explain the evaluation metrics and justify why those evaluation metrics are used? Thanks for your concern.
The accuracy (success rate) is used for evaluation in the proposed study because many previous studies used the same evaluation measure. Therefore, the proposed study uses the same measure so that its performance can be compared with previous studies.
13. It seems that the authors used images of equations, please use editable equation format. Thanks for your concern. All equations in the updated manuscript are now in editable equation format.
14. The Related Works section is also fair, yet the criteria behind the selection of the works described should be explained. Thanks for your concern. The updated manuscript contains an explanation of the selected studies and of the previous research gaps.
Reviewer 3 Comments Response
1. What are the research gaps in previous studies, and what are your contributions? They have not been mentioned clearly. Thanks for your concern. The contributions are highlighted at the end of the Introduction section, and the research gaps are discussed at the end of the Literature Review.
2. Does the proposed CNN have a skip connection? What is that? How is it working? Why do you use it? Thanks for your concern. A separate section on the skip connection, covering how it works and its features, has been added; its parameters are shown in Table 4, and its visualization is shown in Figure 1.
3. How is the segmentation step processed? It is missing in the manuscript. It should be separately discussed. Thanks for your concern. Segmentation is discussed in subsection 2 of the Methodology section and is also highlighted in the subsection heading.
4. The substantial contributions should be highlighted and discussed. The results and comparative analysis should be discussed in detail. Thanks for your concern. The contributions are highlighted at the end of the Introduction section and are discussed in detail in the Conclusion section.
5. Every time a method/formula is used for something, it needs to be justified by either (a) prior work showing the superiority of this method, or (b) by your experiments showing its advantage over prior work methods - comparison is needed, or (c) formal proof of optimality. Please consider more prior works. Thanks for your concern. In the literature review, the previous studies that used deep-learning-based networks or approaches are discussed in detail, and the research gaps regarding multi-feature extraction and a more validated method of results evaluation are highlighted and addressed in the proposed study.
6. The data is not described. Proper data description should contain the number of data items, number of parameters, distribution analysis of parameters, and target parameter for classification. Thanks for your concern. The dataset is described in the first subsection of the Methodology section, and the character-by-character frequencies of both datasets are visualized in Figure 2. The dimensions, extension, and resolution of the normalized dataset images are given in Table 3.
7. Figure resolutions are bad. The text in figures became unreadable due to digitizing. Some figures have a black background. Why is the background black for them? Thanks for your concern. The image resolution has been increased. Regarding the black background, those images are in binary form, meaning they contain only 0 and 1 pixel values; because of the 0-valued pixels, those images appear black. These binary images are part of the preprocessing, which is why they are shown as they are.
8. The tables are too long. What the authors are trying to show in the tables is not clear? Thanks for your concern.
The tables are long because each character is discussed separately and the accuracy (success rate) of each character is measured. The table captions have also been updated to state what the long tables are intended to show.
9. Introduction contains too many citations. Please keep them at literature review. Authors are not supposed to explain the results in the introduction part. Thanks for your concern. The Introduction section has been updated in this regard. "
Here is a paper. Please give your review comments after reading it.
332
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A Completely Automated Public Turing Test to tell Computers and Humans Apart (CAPTCHA) is used in web systems to secure authentication purposes; it may break using Optical Character Recognition (OCR) type methods. CAPTCHA breakers make web systems highly insecure. However, several techniques to break CAPTCHA suggest CAPTCHA designers about their designed CAPTCHA's need improvement to prevent computer visionbased malicious attacks. This research primarily used deep learning methods to break state-of-the-art CAPTCHA codes; however, the validation scheme and conventional Convolutional Neural Network (CNN) design still need more confident validation and multiaspect covering feature schemes. Several public datasets are available of text-based CAPTCHa, including Kaggle and other dataset repositories where self-generation of CAPTCHA datasets are available. The previous studies are dataset-specific only and cannot perform well on other CAPTCHA's. Therefore, the proposed study uses two publicly available datasets of 4-and 5-character text-based CAPTCHA images to propose a CAPTCHA solver. Furthermore, the proposed study used a skip-connection-based CNN model to solve a CAPTCHA. The proposed research employed 5-folds on data that delivers 10 Different CNN models on two datasets with promising results compared to the other studies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The first secure and fully automated mechanism, named CAPTCHA, was developed in 2000. Alta Vista first used the term CAPTCHA in 1997. It reduces spamming by <ns0:ref type='bibr'>95% Baird and Popat (2002)</ns0:ref>. CAPTCHA is also known as a reverse Turing test. The Turing test was the first test to distinguish human, and machine Von <ns0:ref type='bibr' target='#b48'>Ahn et al. (2003)</ns0:ref>. It was developed to determine whether a user was a human or a machine. It increases efficiency against different attacks that seek websites <ns0:ref type='bibr' target='#b15'>Danchev (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>Obimbo et al. (2013)</ns0:ref>.</ns0:p><ns0:p>It is said that CAPTCHA should be generic such that any human can easily interpret and solve it and difficult for machines to recognize it <ns0:ref type='bibr' target='#b6'>Bostik and Klecka (2018)</ns0:ref>. To protect against robust malicious attacks, various security authentication methods have been developed <ns0:ref type='bibr' target='#b25'>Goswami et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>Priya and Karthik (2013)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>Azad and Jain (2013)</ns0:ref>. CAPTCHA can be used for authentication in login forms, spam text reducer, e.g., in email, as a secret graphical key to log in for email. In this way, a spam-bot would not be able to recognize and log in to the email Sudarshan <ns0:ref type='bibr' target='#b46'>Soni and Bonde (2017)</ns0:ref>. However, recent advancements make the CAPTCHA's designs to be at high risk where the current gaps and robustness of models that are the concern is discussed in depth <ns0:ref type='bibr' target='#b44'>(Roshanbin and Miller, 2013)</ns0:ref>. Similarly, the image, text, colorful CAPTCHA's, and other types of CAPTCHA's are being attacked by various malicious attacks. 
Manuscript to be reviewed Computer Science and confidence <ns0:ref type='bibr' target='#b57'>(Xu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Many prevention strategies against malicious attacks have been adopted in recent years, such as cloud computing-based voice-processing <ns0:ref type='bibr'>Gao et al. (2020b,a)</ns0:ref>, mathematical and logical puzzles, and text and image recognition tasks <ns0:ref type='bibr' target='#b21'>Gao et al. (2020c)</ns0:ref>. Text-based authentication methods are mostly used due to their easier interpretation, and implementation <ns0:ref type='bibr' target='#b32'>Madar et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Gheisari et al. (2021)</ns0:ref>. A set of rules may define a kind of automated creation of CAPTCHA-solving tasks. It leads to easy API creation and usage for security web developers to make more mature CAPTCHAs <ns0:ref type='bibr' target='#b8'>Bursztein et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Cruz-Perez et al. (2012)</ns0:ref>. The text-based CAPTCHA is used for Optical Character Recognition (OCR). OCR is strong enough to solve text-based CAPTCHA challenges. However, it still has challenges regarding its robustness in solving CAPTCHA problems <ns0:ref type='bibr' target='#b27'>Kaur and Behal (2015)</ns0:ref>. These CAPTCHA challenges are extensive with ongoing modern technologies. Machines can solve them, but humans cannot. These automated, complex CAPTCHA-creating tools can be broken down using various OCR techniques. Some studies claim that they can break any CAPTCHA with high efficiency. The existing work also recommends strategies to increase the keyword size and another method of crossing lines from keywords that use only straight lines and a horizontal direction. It can break easily using different transformations, such as the Hough transformation. It is also suggested that single-character recognition is used from various angles, rotations, and views to make more robust and challenging CAPTCHAs. <ns0:ref type='bibr' target='#b7'>Bursztein et al. (2011)</ns0:ref>.</ns0:p><ns0:p>The concept of reCAPTCHA was introduced in 2008. It was initially a rough estimation. It was later improved and was owned by Google to decrease the time taken to solve it. The un-solvable reCAPTCHA's were then considered to be a new challenge for OCRs Von <ns0:ref type='bibr' target='#b49'>Ahn et al. (2008)</ns0:ref>. The usage of computer vision and image processing as a CAPTCHA solver or breaker was increased if segmentation was performed efficiently <ns0:ref type='bibr' target='#b23'>George et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b59'>Ye et al. (2018)</ns0:ref>. The main objective or purpose of making a CAPTCHA solver is to protect CAPTCHA breakers. By looking into CAPTCHA solvers, more challenging CAPTCHAs can be generated, and they may lead to a more secure web that is protected against malicious attacks <ns0:ref type='bibr' target='#b40'>Rai et al. (2021)</ns0:ref>. A benchmark or suggestion for CAPTCHA creation was given by <ns0:ref type='bibr'>Chellapilla et al.:</ns0:ref> Humans should solve the given CAPTCHA challenge with a 90% success rate, while machines ideally solve only one in every 10,000 CAPTCHAs <ns0:ref type='bibr' target='#b11'>Chellapilla et al. (2005)</ns0:ref>.</ns0:p><ns0:p>Modern AI yields CAPTCHAs that can solve problems in a few seconds. Therefore, creating CAPTCHAs that are easily interpretable for humans and unsolvable for machines is an open challenge. 
It is also observed that humans invest a substantial amount of time daily solving CAPTCHAs Von <ns0:ref type='bibr' target='#b49'>Ahn et al. (2008)</ns0:ref>. Therefore, reducing the amount of time humans need to solve them is another challenge. Various considerations need to be made, including text familiarity, visual appearance, distortions, etc. Commonly in text-based CAPTCHAs, the well-recognized languages are used that have many dictionaries that make them easily breakable. Therefore, we may need to make unfamiliar text from common languages such as phonetic text is not ordinary language that is pronounceable <ns0:ref type='bibr' target='#b51'>Wang and Bentley (2006)</ns0:ref>. Similarly, the color of the foreground and the background of CAPTCHA images is also an essential factor, as many people have low or normal eyesight or may not see them. Therefore, a visually appealing foreground and background with distinguishing colors are recommended when creating CAPTCHAs. Distortions from periodic or random manners, such as affine transformations, scaling, and the rotation of specific angles, are needed. These distortions are solvable for computers and humans. If the CAPTCHAs become unsolvable, then multiple attempts by a user are needed to read and solve them <ns0:ref type='bibr' target='#b58'>Yan and El Ahmad (2008)</ns0:ref>.</ns0:p><ns0:p>In current times, Deep Convolutional neural networks (DCNN) are used in many medical <ns0:ref type='bibr' target='#b35'>Meraj et al. (2019)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>Manzoor et al. (2022)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Mahum et al. (2021)</ns0:ref> and other real-life recognition applications <ns0:ref type='bibr' target='#b36'>Namasudra (2020)</ns0:ref> as well as insecurity threat solutions <ns0:ref type='bibr'>Lal et al. (2021)</ns0:ref>. The security threats in IoT and many other aspects can also be controlled using blockchain methods <ns0:ref type='bibr' target='#b37'>Namasudra et al. (2021)</ns0:ref>. Utilizing deep learning, the proposed study uses various image processing operations to normalize text-based image datasets. After normalizing the data, a single-word-caption-based OCR was designed with skipping connections. These skipping connections connect previous pictorial information to various outputs in simple Convolutional Neural Networks (CNNs), which possess visual information in the next layer only <ns0:ref type='bibr' target='#b1'>Ahn and Yim (2020)</ns0:ref>.</ns0:p><ns0:p>The main contribution of this research work is as follows:</ns0:p><ns0:p>&#8226; A skipping-connection-based CNN framework is proposed that covers multiple aspects of features.</ns0:p><ns0:p>&#8226; We segment the characters from the given dataset images based on skipping-connection.</ns0:p></ns0:div> <ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; A 5-fold validation scheme is used in a deep-learning-based network to remove bias, if any, which leads to more promising results.</ns0:p><ns0:p>&#8226; The data are normalized using various image processing steps to make it more understandable for the deep learning model.</ns0:p></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Today in the growing and dominant field of AI, many real-life problems have been solved with the help of deep learning and other evolutionary optimized intelligent algorithms <ns0:ref type='bibr' target='#b42'>Rauf et al. (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>Rauf et al. (2020)</ns0:ref>. Various problems of different aspects using DL methods are solved, such as energy consumption analysis <ns0:ref type='bibr' target='#b20'>Gao et al. (2020b)</ns0:ref>, time scheduling of resources to avoid time and resources wastage <ns0:ref type='bibr' target='#b21'>Gao et al. (2020c)</ns0:ref>. Similarly, in cybersecurity, a CAPTCHA solver has provided many automated AI solutions, except OCR. Multiple proposed CNN models have used various types of CAPTCHA datasets to solve CAPTCHAs. The collected datasets have been divided into three categories: selection-, slide-, and click-based. Ten famous CAPTCHAs were collected from google.com, tencent.com, etc. The breaking rate of these CAPTCHAs was compared. CAPTCHA design flaws that may help to break CAPTCHAs easily were also investigated. The underground market used to solve CAPTCHAs was also investigated, and findings concerning scale, the commercial sizing of keywords, and their impact on CAPTCHas were reported <ns0:ref type='bibr' target='#b55'>Weng et al. (2019)</ns0:ref>. A proposed sparsity-integrated CNN used constraints to deactivate the fully connected connections in CNN. It ultimately increased the accuracy results compared to transfer learning, and simple CNN solutions <ns0:ref type='bibr' target='#b18'>Ferreira et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Image processing operations regarding erosion, binarization, and smoothing filters were performed for data normalization, where adhesion-character-based features were introduced and fed to a neural network for character recognition <ns0:ref type='bibr' target='#b26'>Hua and Guoqin (2017)</ns0:ref>. The backpropagation method was claimed as a better approach for image-based CAPTCHA recognition. It has also been said that CAPTCHA has become the normal, secure authentication method in the majority of websites and that image-based CAPTCHAs are more valuable than text-based CAPTCHAs <ns0:ref type='bibr' target='#b45'>Saroha and Gill (2021)</ns0:ref>. Template-based matching is performed to solve text-based CAPTCHAs, and preprocessing is also performed using Hough transformation and skeletonization. Features based on edge points are also extracted, and the points of reference with the most potential are taken . It is also claimed that the extracted features are invariant to position, language, and shapes. Therefore, it can be used for any merged, rotated, and other variation-based CAPTCHAs WANG (2017).</ns0:p><ns0:p>PayPal CAPTCHAs have been solved using correlation, and Principal Component Analysis (PCA) approaches. The primary steps of these studies include preprocessing, segmentation, and the recognition of characters. 
A success rate of 90% was reported using correlation analysis of PCA and using PCA only increased the efficiency to 97% <ns0:ref type='bibr' target='#b41'>Rathoura and Bhatiab (2018)</ns0:ref>. A Faster Recurrent Neural Network (F-RNN) has been proposed to detect CAPTCHAs. It was suggested that the depth of a network could increase the mean average precision value of CAPTCHA solvers, and experimental results showed that feature maps of a network could be obtained from convolutional layers <ns0:ref type='bibr' target='#b17'>Du et al. (2017)</ns0:ref>. Data creation and cracking have also been used in some studies. For visually impaired people, there should be solutions to CAPTCHAs. A CNN network named CAPTCHANet has been proposed.</ns0:p><ns0:p>A 10-layer network was designed and was improved later with training strategies. A new CAPTCHA using Chinese characters was also created, and it removed the imbalance issue of class for model training.</ns0:p><ns0:p>A statistical evaluation led to a higher success rate <ns0:ref type='bibr' target='#b60'>Zhang et al. (2021)</ns0:ref>. A data selection approach automatically selected data for training purposes. The data augmenter later created four types of noise to make CAPTCHAs difficult for machines to break. However, the reported results showed that, in combination with the proposed preprocessing method, the results were improved to 5.69% <ns0:ref type='bibr' target='#b10'>Che et al. (2021)</ns0:ref>. Some recent studies on CAPTCHA recognition are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The pre-trained model of object recognition has an excellent structural CNN. A similar study used a well-known VGG network and improved the structure using focal loss <ns0:ref type='bibr' target='#b53'>Wang and Shi (2021)</ns0:ref>. The image processing operations generated complex data in text-based CAPTCHAs, but there may be a high risk of breaking CAPTCHAs using common languages. One study used the Python Pillow library to create Bengali-, Tamil-, and Hindi-language-based CAPTCHAs. These language-based CAPTCHAs were solved using D-CNN, which proved that the model was also confined by these three languages <ns0:ref type='bibr' target='#b0'>Ahmed and Anand (2021)</ns0:ref>. A new, automatic CAPTCHA creating and solving technique using a simple 15-layer</ns0:p></ns0:div> <ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:ref> Manuscript to be reviewed Computer Science CNN was proposed to remove the manual annotation problem.</ns0:p><ns0:p>Various fine-tuning techniques have been used to break 5-digit CAPTCHAs and have achieved 80% classification accuracies <ns0:ref type='bibr' target='#b4'>Bostik et al. (2021)</ns0:ref>. A privately collected dataset was used in a CNN approach with 7 layers that utilize correlated features of text-based CAPTCHAs. It achieved a 99.7% accuracy using its image database, and CNN architecture <ns0:ref type='bibr' target='#b28'>Kumar and Singh (2021)</ns0:ref>. Another similar approach was based on handwritten digit recognition. The introduction of a CNN was initially discussed, and a CNN was proposed for twisted and noise-added CAPTCHA images <ns0:ref type='bibr' target='#b9'>Cao (2021)</ns0:ref>. A deep, separable CNN for four-word CAPTCHA recognition achieved 100% accurate results with the fine-tuning of a separable CNN concerning their depth. 
A fine-tuned, pre-trained model architecture was used with the proposed architecture and significantly reduced the training parameters with increased efficiency <ns0:ref type='bibr' target='#b16'>Dankwa and Yang (2021)</ns0:ref>.</ns0:p><ns0:p>A visual-reasoning CAPTCHA (known as a Visual Turing Test (VTT)) has been used in security authentication methods, and it was easy to break using holistic and modular attacks. One study focused on a visual-reasoning CAPTCHA and showed an accuracy of 67.3% against holistic CAPTCHAs and an accuracy of 88% against VTT CAPTCHAs. Future directions were to design VTT CAPTCHAs to protect against these malicious attacks <ns0:ref type='bibr' target='#b22'>Gao et al. (2021)</ns0:ref>. To provide a more secure system in text-based CAPTCHAs, a CAPTCHA defense algorithm was proposed. It used a multi-character CAPTCHA generator using an adversarial perturbation method. The reported results showed that complex CAPTCHA generation reduces the accuracy of CAPTCHA breaker up to 0.06% <ns0:ref type='bibr' target='#b50'>Wang et al. (2021a)</ns0:ref>. The Generative Adversarial Network (GAN) based simplification of CAPTCHA images adopted before segmentation and classification. A CAPTCHA solver is presented that achieves 96% success rate character recognition. All other CAPTCHA schemes were evaluated and showed a 74% recognition rate. These suggestions for CAPTCHA designers may lead to improved CAPTCHA generation <ns0:ref type='bibr' target='#b52'>Wang et al. (2021b)</ns0:ref>. A binary are still an open challenge. More efficient DL methods need to be used that, though they may not cover other datasets, should be robust to them. The locally developed datasets are used by many of the studies make the proposed studies less robust. However, publicly available datasets could be used so that they could provide more robust and confident solutions.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>Recent studies based on deep learning have shown excellent results to solve a CAPTCHA. However, simple CNN approaches may detect lossy pooled incoming features when passing between convolution and other pooling layers. Therefore, the proposed study utilizes skip connection. To remove further bias, a 5-fold validation approach is adopted. The proposed study presents a CAPTCHA solver framework using various steps, as shown in Figure <ns0:ref type='figure'>.</ns0:ref> 1. The data are normalized using various image processing steps to make it more understandable for the deep learning model. This normalized data is segmented per character to make an OCR-type deep learning model that can detect each character from each aspect. At last, the 5-fold validation method is reported and yields promising results. </ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>There are two public datasets available on Kaggle that are used in the proposed study. There are 5 and 4 characters in both datasets. There are different numbers of numeric and alphabetic characters in them.</ns0:p><ns0:p>There are 1040 images in the five-character dataset (d 1 ) and 9955 images in the 4-character dataset (d 2 ).</ns0:p></ns0:div> <ns0:div><ns0:head>5/18</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There are 19 types of characters in the d 1 dataset, and there are 32 types of characters in the d 2 dataset.</ns0:p><ns0:p>Their respective dimensions and extension details before and after segmentation are shown in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>The frequencies of each character in both datasets are shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. The frequency of each character varies in both datasets, and the number of characters also varies. In the d 2 dataset, although there is no complex inner line intersection and a merging of texts is found, more characters and their frequencies are. However, the d 1 dataset has complex data and a low number of characters and frequencies, as compared to d 2 . Initially, d 1 has the dimensions 50 x 200 x 3, where 50 represents the rows, 200 represents the columns, and 3 represents the color depth of the given images. d 2 has image dimensions of 24 x 72 x 3, where 24 is the rows, 72 is the columns, and 3 is the color depth of given images. These datasets have almost the same character location. Therefore, they can be manually cropped to train the model on each character in an isolated form. However, their dimensions may vary for each character, which may need to be equally resized. The input images of both datasets were in Portable Graphic Format (PNG) and did not need to change. After segmenting both dataset images, each character is resized to 20 x 24 in both datasets. This size covers each aspect of the visual binary patterns of each character. The dataset details before and after resizing are shown in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>The summarized details of the used datasets in the proposed study are shown in Table <ns0:ref type='table'>2</ns0:ref>. The dimensions of the resized image per character mean that, when we segment the characters from the given dataset images, their sizes vary from dataset to dataset and from character type to character type. Therefore, the optimal size at which the image data for each character is not lost is 20 rows by 24 columns, which is set for each character.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing and Segmentation</ns0:head><ns0:p>d 1 dataset images do not need any complex image processing to segment them into a normalized form. Binarization is the most needed step in order to understand the structural morphology of a certain character in a given image. Therefore, grayscale conversion of images is performed to perform binarization, and images are converted from greyscale to a binary format. The RGB format image has 3 channels in them: Red, Green, and Blue. Let Image I (x,y) be the input RGB image, as shown in Eq. 1. To convert these input images into grayscale, Eq. 2 is performed.</ns0:p><ns0:formula xml:id='formula_0'>Input Image = I (x,y)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In Eq. 1, I is the given image, and x and yx and y represent the rows and columns. The grayscale conversion is performed using Eq. 2:</ns0:p><ns0:formula xml:id='formula_1'>Grey (x, y) &#8592; j &#8721; i=n (0.2989 * R, 0.5870 * G, 0.1140 * B)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In Eq. 2, i is the iterating row position, j is the interacting column position of the operating pixel at Manuscript to be reviewed</ns0:p><ns0:p>Computer Science of 0-255. 
Grey (x, y) is the output grey-level of a given pixel at a certain iteration. After converting to grey-level, the binarization operation is performed using Bradley's method, which calculates a neighborhood-based threshold to convert a given two-dimensional grey-level matrix into 1 and 0 values. The neighborhood threshold operation is performed using Eq. 3.</ns0:p><ns0:p>B (x, y) &#8592; 2 * &#8970;size(Grey (x, y))/16&#8971; + 1 (3)</ns0:p><ns0:p>In Eq. 3, the output B (x, y) is the neighborhood-based threshold, calculated over roughly a 1/8th neighborhood of the given Grey (x, y) image. The floor is used to obtain a lower value and avoid a miscalculated threshold. This calculated threshold is also called the adaptive threshold method.</ns0:p><ns0:p>The neighborhood size can be changed to increase or decrease the binarization of a given image. After obtaining a binary image, its complement is needed to highlight the object in the image; it is a simple inverse operation, calculated as shown in Eq. 4.</ns0:p><ns0:formula xml:id='formula_3'>C (x, y) &#8592; 1 &#8722; B(x, y)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>In Eq. 4, the available 0 and 1 values are inverted at each pixel position x and y. The inverted image is used directly for the isolation process in the case of the d 2 dataset. In the case of d 1 , further erosion is needed. Erosion is an operation that uses a structuring element of a particular shape; the shape determines which pixels are removed from a given binary image. In the case of a CAPTCHA image, the intersecting line is removed using a line-type structuring element, which operates on a neighborhood. In the proposed study, a line of length 5 at an angle of 90 degrees is used, and the intersecting line through each character in the binary image is removed, as can be seen in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>, row 1. The erosion operation with a length of 5 and an angle of 90 degrees is calculated as shown in Eq. 5.</ns0:p><ns0:formula xml:id='formula_4'>C &#8854; L &#8592; x &#8712; E| B x &#8838; C<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In Eq. 5, C is the binary image, L is the line-type structuring element, and x is the resultant eroded matrix of the input binary image C. B x is a subset of the given image, as it is extracted from the given image C. After erosion, some images contain noise that may lead to the wrong interpretation of a character. Therefore, to remove this noise, a neighborhood operation is again utilized: an 8-connected neighborhood is used with a threshold of 20 pixels of value 1, since the noise regions remain smaller than the characters in the binary image. To calculate this, an area calculation over each pixel is necessary. By iterating an 8-connected neighborhood operation, regions of up to 20 pixels are checked and removed, and the other, more significant regions remain in the output image. The area of a connected region of 1-valued pixels is calculated as shown in Eq. 6.</ns0:p><ns0:formula xml:id='formula_5'>S (x, y) &#8592; &#8721;_(i=1)^(j) max(B x |xi &#8722; x j|, B x |yi &#8722; y j|)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In Eq. 6, the given rows (i) and columns (j) of the eroded image B x are used to calculate the resultant matrix by extracting the 1-valued pixels from the binary image. The max returns only the 1-valued pixels, which are summed to obtain an area that is compared with the threshold value T. The noise is then removed, and final isolation is performed to separate each normalized character. A minimal sketch of this preprocessing pipeline is given below.</ns0:p></ns0:div>
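The preprocessing chain of Eqs. 1-6 can be illustrated with a short Python sketch. This is a minimal illustration assuming scikit-image and NumPy are available; the helper name preprocess_captcha, the example file name, and the fixed-width cropping strategy are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the preprocessing pipeline described above (Eqs. 1-6).
# Assumes scikit-image and NumPy; the file name, crop strategy, and helper name
# are illustrative assumptions.
import numpy as np
from skimage import io, color, filters, morphology, transform

def preprocess_captcha(path, n_chars=5, needs_erosion=True):
    rgb = io.imread(path)[..., :3]              # Eq. 1: input RGB image I(x, y)
    grey = color.rgb2gray(rgb)                  # Eq. 2: weighted grey-level conversion

    # Eq. 3: adaptive (Bradley-style) threshold over a ~1/8-size neighbourhood
    block = 2 * (min(grey.shape) // 16) + 1
    binary = grey > filters.threshold_local(grey, block_size=block, method='mean')

    text = ~binary                              # Eq. 4: complement highlights the dark text

    if needs_erosion:                           # extra steps for the d1-style images
        line = np.ones((5, 1), dtype=bool)      # Eq. 5: length-5, 90-degree line element
        text = morphology.binary_erosion(text, line)
        # Eq. 6: drop 8-connected blobs smaller than 20 pixels (noise removal)
        text = morphology.remove_small_objects(text, min_size=20, connectivity=2)

    # Fixed-position cropping into single characters, each resized to 20 x 24
    width = text.shape[1] // n_chars
    chars = [transform.resize(text[:, k * width:(k + 1) * width].astype(float), (20, 24))
             for k in range(n_chars)]
    return np.stack(chars)

# Example (hypothetical file name): characters = preprocess_captcha('sample.png', n_chars=5)
```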
<ns0:div><ns0:head>CNN Training for Text Recognition</ns0:head><ns0:formula xml:id='formula_6'>convo(I, W)_(x,y) = &#8721;_(a=1)^(N_C) &#8721;_(b=1)^(N_R) W_(a,b) * I_(x+a&#8722;1, y+b&#8722;1) (7)</ns0:formula><ns0:p>In the above equation, we formulate the convolutional operation for a 2D image I_(x,y), where x and y are the rows and columns of the image, respectively. W represents the convolving window. The window is iteratively multiplied with the respective elements of the given image and returns the resultant feature map convo(I, W)_(x,y). N_C and N_R are the window dimensions, with the indices a and b starting from 1; a indexes columns and b indexes rows.</ns0:p></ns0:div> <ns0:div><ns0:head>Batch Normalization Layer</ns0:head><ns0:p>Its basic operation is to calculate a normalized value for a single component, which can be represented as</ns0:p><ns0:formula xml:id='formula_7'>Bat&#8242; = (a &#8722; M[a]) / var(a) (8)</ns0:formula><ns0:p>The newly calculated value is represented as Bat&#8242;, a is a given input value, and M[a] is the mean of that value, while the variance of the input a, var(a), appears in the denominator. The value is further refined layer by layer into a final normalized value with the help of the learnable parameters &#947; and &#946;, as shown below:</ns0:p><ns0:formula xml:id='formula_8'>Bat&#8242;&#8242; = &#947; * Bat&#8242; + &#946; (9)</ns0:formula><ns0:p>The extended batch normalization value is computed in each layer from the previous Bat&#8242; value.</ns0:p></ns0:div> <ns0:div><ns0:head>ReLU</ns0:head><ns0:p>ReLU discards input values that are negative and retains positive values. Its equation can be written as</ns0:p><ns0:formula xml:id='formula_9'>reLU(x) = x if x &gt; 0; 0 if x &#8804; 0<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where x is the input value, which is output directly if it is greater than zero; negative values are replaced with 0.</ns0:p></ns0:div> <ns0:div><ns0:head>Skip-Connection</ns0:head><ns0:p>The skip connection basically concatenates earlier pictorial information with the later convolved feature maps of the network. In the proposed network, the output of the first ReLU layer is saved and, after the second and third ReLU layers, this saved information is concatenated with the help of an addition layer. In this way, the added skip connection distinguishes the proposed model from conventional deep learning approaches for recognizing the CAPTCHA characters. Moreover, the visualization of this added feature information is shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average Pooling</ns0:head><ns0:p>The average pooling layer is straightforward: it convolves over the input from the previous layer or node. The incoming input is covered using a window of size m x n, where m represents the rows and n represents the columns. The movement in the horizontal and vertical directions continues using the stride parameters. A sketch of how these layers are combined into the proposed skip-connection network is given below.</ns0:p>
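To make the layer descriptions above concrete, the following is a minimal Keras sketch of the described block structure: three convolution/batch-normalization/ReLU blocks, an addition layer that merges the saved first ReLU output with the third ReLU output, then average pooling, a fully connected layer, and softmax. The filter count, kernel size, and optimizer settings are illustrative assumptions, since the exact values are specified in Table 3 and are not repeated here.

```python
# Minimal Keras sketch of the skip-connection CNN described above.
# Filter count and kernel size are assumptions, not the values from Table 3.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_skip_cnn(n_classes, input_shape=(20, 24, 1), filters=16):
    inputs = layers.Input(shape=input_shape)

    # Block 1 (its ReLU output is kept for the skip connection)
    x = layers.Conv2D(filters, 3, padding='same')(inputs)
    x = layers.BatchNormalization()(x)
    skip = layers.ReLU()(x)

    # Blocks 2 and 3
    x = layers.Conv2D(filters, 3, padding='same')(skip)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    # Skip connection: add the saved ReLU-1 feature maps to the ReLU-3 output
    x = layers.Add()([skip, x])

    # Average pooling, fully connected, and softmax output
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation='softmax')(x)

    model = Model(inputs, outputs)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# e.g. model = build_skip_cnn(n_classes=19)   # 19 classes for d1, 32 for d2
```

Because the convolutions use 'same' padding and a constant filter count, the first and third ReLU outputs have matching shapes, which is what allows the addition layer to merge them.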
<ns0:p>Many deep learning-based algorithms introduced previously, as can be seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, ultimately use CNN-based methods. However, traditional CNN approaches using convolution blocks and transfer learning may lose important information when they pool down the incoming feature maps from previous layers. Similarly, a conventional split into training, validation, and testing sets may be biased because less data is tested than is trained. Therefore, the proposed study uses a single skip connection while maintaining the other convolution blocks and, inspired by the K-fold validation method, splits the data of both datasets into five respective folds. After splitting into five folds, the dataset is trained and tested in sequence, and the mean of these five-fold results is reported as the final accuracy. The proposed CNN contains 16 layers in total and includes three major blocks containing convolutional, batch normalization, and ReLU layers. After these nine layers, an addition layer merges the incoming connections: the skip connection and the third ReLU layer's output from the three respective blocks.</ns0:p><ns0:p>Average pooling, fully connected, and softmax layers are added after the skip connection. All layer parameters and details are shown in Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, all learnable weights of each layer are shown. For the two datasets, the numbers of output character categories are different; therefore, in the dense layer of the five-fold CNN models, the output was 19 classes for five models and 32 classes for the other five models. The skip connection has more weights than the other convolution layers. Each model is compared regarding its learned weights, as shown in Figure 4, which presents the convolution-1, batch normalization, and skip-connection weights. The internal layers have a larger number of weights or learnable parameters. However, the weights of only one dataset are shown; for the other dataset, these weights may vary slightly. The skip-connection weights capture multiple features that are not present in a simple convolution layer. Therefore, we can say that the proposed CNN architecture is a new way to learn multiple types of features compared to previous studies that use a traditional CNN. This connection may also be used in other aspects of text and object recognition and classification.</ns0:p><ns0:p>Later on, using these significant, multiple features, the proposed study utilizes the K-fold validation technique by splitting the data into five splits. These multiple splits remove bias in the training and testing data, and the testing results are taken as the mean over all models. In this way, no data remain unused for training, and no data remain untested. The results ultimately become more confident than those of previous conventional CNN approaches. The d 2 dataset has clearly structured characters in its segmented images, whereas in d 1 the isolated text images are less clear. Therefore, the classification results remain lower in that case, whereas on the d 2 dataset the classification results remain high and usable as a CAPTCHA solver. The results of each character and dataset for each fold are discussed in the next section. A sketch of the five-fold training and evaluation procedure is given below.</ns0:p></ns0:div>
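The five-fold procedure described above can be sketched as follows, reusing the build_skip_cnn helper from the previous sketch; the epoch count, batch size, and the use of scikit-learn's KFold are illustrative assumptions rather than the authors' exact training settings.

```python
# Illustrative five-fold training and evaluation loop for the sketch above.
# Assumes X holds the segmented 20 x 24 character images and y their integer
# labels (19 classes for d1, 32 for d2); epochs and batch size are assumptions.
import numpy as np
from sklearn.model_selection import KFold

def run_five_fold(X, y, n_classes, epochs=30, batch_size=64):
    X = X[..., np.newaxis].astype('float32')        # add the channel dimension
    fold_acc = []
    kfold = KFold(n_splits=5, shuffle=True, random_state=0)
    for k, (train_idx, test_idx) in enumerate(kfold.split(X), start=1):
        model = build_skip_cnn(n_classes)            # a fresh model per fold
        model.fit(X[train_idx], y[train_idx],
                  epochs=epochs, batch_size=batch_size, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        fold_acc.append(acc)
        print(f'fold {k}: accuracy = {acc:.4f}')
    print(f'mean accuracy over 5 folds = {np.mean(fold_acc):.4f}')
    return fold_acc
```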
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>As discussed earlier, there are two datasets in the proposed framework. Both have a different number of categories and a different number of images; therefore, separate evaluations of both are discussed in this section. First, the five-character dataset is used to train five CNN models of the same architecture, each with a different split of the data. Second, the four-character dataset is used with the same model architecture but a different number of output classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Five-character Dataset (d 1 )</ns0:head><ns0:p>The five-character dataset has 1040 images. After segmenting each character, it yields 5200 character images in total. The data are then split into five folds of 931, 941, 925, 937, and 924 images; the remaining images are added to the training sets, and the splits were formed by randomly selecting 20% of the total data for each fold. The results of training on four folds and testing on the remaining fold are shown in Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref> lists the 19 character types with their fold-by-fold accuracies; the mean of each fold and the mean over all folds are given in the last row. The character Y has the highest validation accuracy (95.40%) compared to the other characters, which may be due to its structure being almost entirely different from the other characters. The next highest accuracy is for the character G at 95.06%, only slightly below the highest. These two characters exceed 95% recognition accuracy, and no other character comes close to 95%; the remaining characters range from 81% to 90%. The least accurately recognized character is M at 62.08%, varying from 53% to 74% across the five folds. Therefore, M is easily confused with other characters, and its recognition may require concentrating on the structural refinement of M input characters. To keep CAPTCHAs hard for machines to break while remaining easy for humans, the characters that achieve higher recognition rates need larger angular and structural changes so that they cannot be broken by any machine learning model. Recognition of such complex structures may be improved by further fine-tuning the CNN or by increasing or decreasing the skip connections, which can also improve the accuracy. The other, four-character dataset is more important because it has 32 character types and more images; the lower accuracy on the five-character dataset may also be due to the small amount of data and therefore less training. Other character recognition studies report higher accuracy on similar datasets, but their results may be less confident than the proposed study's because they do not use an unbiased validation method.</ns0:p><ns0:p>The four-character dataset recognition results are discussed in the next section. A sketch of how the per-character accuracies reported in Tables 4 and 5 are aggregated across folds is given below.</ns0:p></ns0:div>
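As a small illustration of how the per-character, per-fold accuracies in Tables 4 and 5 can be aggregated, the sketch below assumes that the true and predicted labels of each fold's test split (for example, from the five-fold loop sketched earlier) are available; the variable and function names are illustrative.

```python
# Sketch of the per-character accuracy aggregation behind Tables 4 and 5.
# `fold_results` is assumed to be a list of (y_true, y_pred) pairs, one per fold.
import numpy as np

def per_character_accuracy(fold_results, class_names):
    table = {name: [] for name in class_names}
    for y_true, y_pred in fold_results:
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        for idx, name in enumerate(class_names):
            mask = y_true == idx
            if mask.any():                       # accuracy of this character in this fold
                table[name].append(100.0 * np.mean(y_pred[mask] == idx))
    # The mean over the five folds gives the 'Overall' column of the tables.
    return {name: (accs, float(np.mean(accs))) for name, accs in table.items()}
```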
<ns0:div><ns0:head>Four-Character Dataset (d 2 )</ns0:head><ns0:p>The four-character dataset has a higher frequency of each character than the five-character dataset, and the number of character types is also higher. The same five-fold splitting was performed on the characters of this dataset as well. After applying the five folds, the number of characters in each fold was 7607, 7624, 7602, 7617, and 7595, respectively, and the remaining images from the 38,045 individual character images were added to the training sets of each fold. The results for each character with respect to each fold, together with the overall mean, are given in Table <ns0:ref type='table' target='#tab_3'>5</ns0:ref>.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_3'>5</ns0:ref>, it can be observed that almost every character was recognized with about 99% accuracy. The highest accuracy was for character D at 99.92%, which remained 100% in four folds; only one fold showed 99.57%. From this, we can state that the proposed study removed any bias from the dataset through the splits; it is therefore necessary to use folds when evaluating a deep learning network. Most studies use only a one-fold approach, which is at high risk of bias. It is also notable that the character M achieved the lowest accuracy in the five-character CAPTCHA, whereas in this four-character CAPTCHA it was recognized with 98.58% accuracy. Therefore, the structural morphology of M in the five-character CAPTCHA better resists CAPTCHA solver methods. The high overall results show that this four-character CAPTCHA is at high risk, and adding line intersections, word joining, and character correlation may help prevent the CAPTCHA from being broken. Many approaches have previously been proposed to recognize CAPTCHAs, and most of them use a conventional structure. The proposed study uses a more confident validation approach with multi-aspect feature extraction. Therefore, it can be used as a more promising approach to break CAPTCHA images and to test the CAPTCHA designs made by CAPTCHA designers. In this way, CAPTCHA designs can be protected against new deep learning approaches. The graphical illustration of the validation accuracy and losses for both datasets on all folds is shown in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_4'>6</ns0:ref>, we can see that various studies have used different numbers of characters with self-collected and generated datasets, and comparisons have been made. Some studies considered only the number of dataset characters. The accuracies are not directly comparable, as the proposed study uses the five-fold validation method while the others used only one fold. Therefore, the proposed study outperforms in each aspect, in terms of both the proposed CNN framework and its validation scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed study uses a different deep learning approach to solve CAPTCHA problems. It proposes a skip-connection CNN network to break text-based CAPTCHAs. Two CAPTCHA datasets are discussed and evaluated character by character. The proposed study reports its results with confidence, as it removed any bias in the datasets using a five-fold validation method. The results are also improved compared to previous studies. The reported higher results indicate that these CAPTCHA designs are at high risk, as a malicious attack could break them on the web. Therefore, the proposed CNN could be used to test CAPTCHA designs and solve them more confidently in real time.
Furthermore, the proposed study trains and tests on publicly available datasets, which makes it a more robust approach to solving text-based CAPTCHAs.</ns0:p><ns0:p>Many studies have used deep learning to break CAPTCHAs, motivated by the need to design CAPTCHAs that do not consume users' time yet resist CAPTCHA solvers; such designs would make web systems more secure against malicious attacks. In future work, data augmentation and more robust data generation methods can be applied to CAPTCHA datasets, since CAPTCHAs with intersecting lines are more challenging to break. Similarly, CAPTCHAs based on other local languages could be solved with similar DL models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>However, most of them have used Deep Learning based methods to crack them due to their robustness PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021) Manuscript to be reviewed Computer Science image-based CAPTCHA recognition framework is proposed to generate a certain number of image copies from a given CAPTCHA image to train a CNN model. The Weibo dataset showed that the 4-character recognition accuracy on the testing set was 92.68%, and the Gregwar dataset achieved a 54.20% accuracy on the testing set Thobhani et al. (2020). The studies discussed above yield information about text-based CAPTCHAs as well as other types of CAPTCHAs. Most studies used DL methods to break CAPTCHAs, and time and unsolvable CAPTCHAs</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The Proposed Framework for CAPTCHA Recognition.</ns0:figDesc><ns0:graphic coords='6,162.41,324.07,372.15,241.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Character-wise Frequencies (Row-1: 4-Character Dataset 1 (d 2 ); Row-2: five-character Dataset 2 (d 1 )).</ns0:figDesc><ns0:graphic coords='7,183.09,112.32,330.86,283.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>d 2 needs this operation to remove the central intersecting line of each character. This dataset can be normalized to isolate each character correctly. Therefore, three steps are performed on the d 1 dataset.It is firstly converted to greyscale; it is then converted to a binary form, and their complement is lastly taken. In the d 2 dataset, 2 additional steps of erosion and area-wise selection are performed to remove6/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021) and the edges of characters. The primary steps of both datasets and each character isolation are shown in Figure 3.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3.
Preprocessing and Isolation of characters in both datasets (Row-1: the d1 dataset, binarization, erosion, area-wise selection, and segmentation; Row-2: binarization and isolation of each character).</ns0:figDesc><ns0:graphic coords='8,162.41,275.22,372.20,183.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>a</ns0:head><ns0:label /><ns0:figDesc>certain time, and R, G, and B are the red, green, and blue pixel values of that pixel. The multiplying constant values convert to all three values of the respective channels to a new grey-level value in the range 7/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>19 for five models, and the output class was 32 categories in the other five models. The skip connection has more weights than other convolution layers. Each model is compared regarding its weight learning and is shown in Figure 4. The figure shows convolve 1, batch normalization, and skip connection weights. The internal layers have a more significant number of weights or learnable parameters, and the different or contributing 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Five-fold weights with respective layers shown for multiple proposed CNN architectures.</ns0:figDesc><ns0:graphic coords='12,141.73,63.80,413.54,225.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The validation loss and validation accuracy graphs are shown for each fold of the CNN (Row-1: five-character CAPTCHA; Row-2: four-character CAPTCHA).</ns0:figDesc><ns0:graphic coords='14,141.73,354.99,413.54,196.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Recent CAPTCHA recognition-based studies and their details.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Year Dataset</ns0:cell><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell /><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang and Shi (2021)</ns0:cell><ns0:cell>2021 CNKI</ns0:cell><ns0:cell /><ns0:cell cols='2'>Binarization,</ns0:cell><ns0:cell cols='2'>Recognition rate= 99%,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA,</ns0:cell><ns0:cell>smoothing,</ns0:cell><ns0:cell>seg-</ns0:cell><ns0:cell cols='2'>98.5%, 97.84%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random Gener-</ns0:cell><ns0:cell>mentation</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ated, Zhengfang</ns0:cell><ns0:cell>annotation</ns0:cell><ns0:cell>with</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA</ns0:cell><ns0:cell cols='2'>Adhesian and more</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>interference</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ahmed and Anand</ns0:cell><ns0:cell>2021 Tamil,</ns0:cell><ns0:cell>Hindi</ns0:cell><ns0:cell>Pillow</ns0:cell><ns0:cell>Library,</ns0:cell><ns0:cell>&#8764;</ns0:cell></ns0:row><ns0:row><ns0:cell>(2021)</ns0:cell><ns0:cell cols='2'>and Bengali</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bostik et al. 
(2021)</ns0:cell><ns0:cell cols='2'>2021 Private created</ns0:cell><ns0:cell cols='2'>15-layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Kumar and Singh (2021) 2021 Private</ns0:cell><ns0:cell /><ns0:cell cols='2'>7-Layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 99.7%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dankwa and Yang (2021) 2021 4-words Kaggle</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy=100%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2021b)</ns0:cell><ns0:cell cols='2'>2021 Private GAN</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>based dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 96%, overall =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>74%</ns0:cell></ns0:row><ns0:row><ns0:cell>Thobhani et al. (2020)</ns0:cell><ns0:cell cols='3'>2020 Weibo, Gregwar CNN</ns0:cell><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>92.68%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>54.20%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameters setting and learnable weights for proposed framwork</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number</ns0:cell><ns0:cell>Layers Name</ns0:cell><ns0:cell>Category</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell cols='3'>Weights/Offset Padding Stride</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Input</ns0:cell><ns0:cell>Image Input</ns0:cell><ns0:cell>24 x 20 x 1</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Conv (1)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>3x3x1x8</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>BN (1)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>1x1x8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>ReLU (1)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Conv (2)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>3x3x8x16</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>BN (2)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 
16</ns0:cell><ns0:cell>1x1x16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>ReLU (2)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Conv (3)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>3x3x16x32</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>BN (3)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>ReLU (3)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Skip-connection</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x8x32</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Add</ns0:cell><ns0:cell>Addition</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Pool</ns0:cell><ns0:cell>Average Pooling</ns0:cell><ns0:cell>6 x 5 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 19 (d2)</ns0:cell><ns0:cell>19 x 960 (d2)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>FC</ns0:cell><ns0:cell>Fully connected</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 32 (d1)</ns0:cell><ns0:cell>32 x 960 (d1)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>1 x 1 x 19</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>Class Output</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>connection weights are shown in Figure 4. 
Multiple types of feature maps are included in the figure.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Five-character Dataset Accuracy (%) with five-fold text recognition testing on the CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>89.63</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>78.72</ns0:cell><ns0:cell>84.48</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>87.76</ns0:cell><ns0:cell>75.51</ns0:cell><ns0:cell>87.75</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>86.12</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>88.46</ns0:cell><ns0:cell>90.196</ns0:cell><ns0:cell>90.19</ns0:cell><ns0:cell>92.15</ns0:cell><ns0:cell>89.06</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>80.39</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>86.56</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>76.59</ns0:cell><ns0:cell>82.61</ns0:cell><ns0:cell>91.304</ns0:cell><ns0:cell>80.43</ns0:cell><ns0:cell>87.58</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>89.36</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>84.78</ns0:cell><ns0:cell>86.68</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>91.66</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>87.49</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>81.81</ns0:cell><ns0:cell>73.33</ns0:cell><ns0:cell>97.72</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>90.09</ns0:cell><ns0:cell>85.03</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>82.64</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>95.55</ns0:cell><ns0:cell>88.67</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>62.79</ns0:cell><ns0:cell>79.54</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>93.18</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>78.73</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>92.00</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>94.00</ns0:cell><ns0:cell>81.63</ns0:cell><ns0:cell>89.1</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>95.83</ns0:cell><ns0:cell>91.83</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>93.75</ns0:cell><ns0:cell>95.06</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>64.00</ns0:cell><ns0:cell>56.00</ns0:cell><ns0:cell>53.061</ns0:cell><ns0:cell>74.00</ns0:cell><ns0:cell>67.34</ns0:cell><ns0:cell>62.08</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>81.40</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>87.59</ns0:cell><ns0:cell>76.74</ns0:cell><ns0:cell>82.35</ns0:cell><ns0:cell>81.43</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:ce
ll>78.26</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>90.34</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>95.24</ns0:cell><ns0:cell>83.72</ns0:cell><ns0:cell>90.47</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>90.55</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>82.97</ns0:cell><ns0:cell>85.41</ns0:cell><ns0:cell>82.98</ns0:cell><ns0:cell>85.68</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>93.02</ns0:cell><ns0:cell>95.45</ns0:cell><ns0:cell>97.67</ns0:cell><ns0:cell>95.53</ns0:cell><ns0:cell>95.35</ns0:cell><ns0:cell>95.40</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>86.14</ns0:cell><ns0:cell>80.77</ns0:cell><ns0:cell>87.24</ns0:cell><ns0:cell>87.73</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>85.52</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>12/18</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Four-character dataset Accuracy (%) with five-fold text recognition testing on the CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>97.84</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.27</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>97.02</ns0:cell><ns0:cell>94.92</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>95.75</ns0:cell><ns0:cell>96.17</ns0:cell><ns0:cell>96.52</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>97.87</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.55</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>99.01</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>99.56</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.69</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>98.80</ns0:cell><ns0:cell>99.60</ns0:cell><ns0:cell>99.19</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.20</ns0:cell><ns0:cell>99.36</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.42</ns0:cell><ns0:cell>97.86</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.29</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>98.85</ns0:cell><ns0:cell>96.55</ns0:cell><ns0:cell>98.08</ns0:cell><ns0:cell>98.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.39</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.54</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>96.59</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>96.15</ns0:cell><ns0:cell>97.95</ns0:cell></ns0:row><
ns0:row><ns0:cell>C</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.25</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.92</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>99.18</ns0:cell><ns0:cell>97.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.94</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>98.69</ns0:cell><ns0:cell>98.26</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.82</ns0:cell><ns0:cell>97.83</ns0:cell><ns0:cell>98.52</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>97.93</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.69</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>98.43</ns0:cell></ns0:row><ns0:row><ns0:cell>H</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.33</ns0:cell></ns0:row><ns0:row><ns0:cell>J</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.66</ns0:cell></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell></ns0:row><ns0:row><ns0:cell>L</ns0:cell><ns0:cell>97.41</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>96.23</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.58</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.10</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.83</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>98.77</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>96.28</ns0:cell><ns0:cell>97.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Q</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.75</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>97.50</ns0:cell><ns0:cell>99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.42</ns0:cell></ns0:row><ns0:row><ns0:cell>T</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.73</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>98.31</ns0:cell><ns0:cell>97.98</ns0:cell></ns0:row><ns0:row><ns0:cell>U</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.43</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.80</ns0:cell></ns0:row><ns0:row><ns0:cell>V</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.22</ns0:cell><ns0:cell>98.47</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>100</ns0:ce
ll><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.67</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.66</ns0:cell></ns0:row><ns0:row><ns0:cell>Z</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>98.97</ns0:cell><ns0:cell>98.18</ns0:cell><ns0:cell>99.32</ns0:cell><ns0:cell>98.92</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.82</ns0:cell></ns0:row></ns0:table><ns0:note>14/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:2:0:NEW 2 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Four-character dataset with five-fold text recognition testing on a CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>References</ns0:cell><ns0:cell>No. of Characters</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>Faster R-CNN</ns0:cell><ns0:cell>Accuracy= 98.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Du et al. (2017)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Chen et al. (2019)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Selective D-CNN</ns0:cell><ns0:cell>Success rate= 95.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik et al. (2021)</ns0:cell><ns0:cell>Different</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>Accuracy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Different</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>Precision=98.99%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik and Klecka (2018)</ns0:cell><ns0:cell /><ns0:cell>SVN</ns0:cell><ns0:cell>99.80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Feed forward-Net</ns0:cell><ns0:cell>98.79%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Study</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Skip-CNN with 5-Fold Validation Accuracy= 98.82%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Accuracy=85.52%</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
" Reviewer 2 Comments Responses *Please fix Table 1 position; It should be in line with the LR section. Thanks for your concern. The concerning points are successfully updated. Please see Table 1 *Similar Issue with Table 3, also please rewrite Table 3 caption Thanks for your concern. The concerning points are successfully updated. Please see Table 3 *Table 5 borders are out of page width; please fix this as well. Thanks for your concern. The concerning points are successfully updated. Please see Table 5 *Would you please add two-three more points on the contribution of the methodology? Thanks, as per your suggestion we have updated contribution paragraph. Please see last paragraph of introduction. Reviewer 4 Comments Responses The manuscript is written in poor English that not suitable for publish in this journal until it be improved to ensure that audiences can clearly understand the text. There are a lot of short sentences that lack context. Thanks for your concern. As per your suggestion, the manuscript was sent to professional proofread service in order to increase the readability. Please see the attached certificate of English proofread. Reviewer 5 Comments Responses 1. The text, throughout the manuscript and more specifically in the abstract, introduction and literature review is not integrated, well-written. It mostly seems several short, and in some cases unrelated sentences without any coherence. So, the text does not seem natural and should be revised, maybe by a native person. Thanks for your concern. The concerning points are successfully updated in corresponding sections where needed. Furthermore, the manuscript is proofread by professional English editing service. Please see the attached certificate of English editing. 2. Some of the references in Introduction section are not necessary, since those are related to obvious and common information not something specific and important. For example, in line 49, 'Azad and Jain (2013). CAPTCHA can be used for authentication in login forms with various web credentials'. Instead, the authors can refer to some of the important survey papers (such as [1,2]) in the this field for all of such information Thanks for your concern. The useless and general information is removed and adjusted where the recommended surveys of CAPTCHA’s solver are also being added and discussed. Please see the revised related work section. 3. In the Literature Review section, specifically lines 101 to 109, the organization of the information is poor and need to be revised to be more clear and readable Thanks for your concern. Sorry for disorganization of your concerning part of Literature review. The paragraphs are being updated and synched that makes it more understandable. 4. Generally, the Literature Review section is not well-organized and both its text and structure should be improved. For example, the authors may introduce some of classical CPATCHA breaking methods and elaborate on their deficiency. Then, they can speak about benefits of deep learning based methods and mention the notable works in this domain, either chronologically or based on underlying approach. Thanks for your concern. The Literature Review section is being discussed and updated successfully. 5. Some of the tables are not appropriately located in the manuscript. It may be due to inconsistency in Latex but anyway the issue should be managed. Thanks for your concern. As per your suggestion we have changed the table location. Please see the revised positions of tables. 6. 
Figure 1 should be revised. There are some problems with the figure. For example, the connection between different sections is not illustrated appropriately. This is not clear, for example, how input image will be passed to the next step. Also, the circle in the upper section of the figure is not large enough to fit the 'preprocessing' term.
Response: Thank you for this observation. The figure has been revised to reflect the requested visual changes and is now clearer and easier to follow. In addition, Figure 3 shows the input data in detail.

7. The section which is entitled 'Comparison' seems to be unnecessary since its information could be presented in the discussion section.
Response: Thank you. As suggested, the Comparison section has been merged with the Discussion.

8. The authors are encouraged to tell about the possible future works in the field.
Response: Thank you. Possible future work is now described at the end of the Conclusion section.

9. The major motivation of the study should be clearly stated, may be in a separate section. What is the advantage of this study and what it brings to the community? What problem(s) this work is intended to solve and what is new in this research?
Response: Thank you. The motivation and contributions are now stated at the end of the Introduction, and the advantages and significance of the study are highlighted at the end of the Introduction, Literature Review, and Comparison sections.

10. Since the most of today's CAPTCHAs are image-based, the authors should clarify why such works are still useful and worth considering. Can such research come in handy for other applications?
Response: Thank you. The different types of CAPTCHAs, together with their significance and usage, are discussed in the Literature Review. This research is also useful for solving other types of CAPTCHAs, since the same model could be applied to them with slight variations; this is discussed as future work at the end of the Conclusion section.

11. The experiment is well-designed and conducted, so it seems appropriate. It is suggested that the authors speak about the efficiency of the proposed method for text-based CAPTCHAs in other languages, either experimentally or theoretically.
Response: Thank you for your appreciation of the experimental design. Solving CAPTCHAs in other languages is recommended as future work, and CAPTCHAs based on other local languages are also discussed in the Literature Review section.
"
Here is a paper. Please give your review comments after reading it.
333
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A Completely Automated Public Turing Test to tell Computers and Humans Apart (CAPTCHA) is used in web systems to secure authentication purposes; it may break using Optical Character Recognition (OCR) type methods. CAPTCHA breakers make web systems highly insecure. However, several techniques to break CAPTCHA suggest CAPTCHA designers about their designed CAPTCHA's need improvement to prevent computer visionbased malicious attacks. This research primarily used deep learning methods to break state-of-the-art CAPTCHA codes; however, the validation scheme and conventional Convolutional Neural Network (CNN) design still need more confident validation and multiaspect covering feature schemes. Several public datasets are available of text-based CAPTCHa, including Kaggle and other dataset repositories where self-generation of CAPTCHA datasets are available. The previous studies are dataset-specific only and cannot perform well on other CAPTCHA's. Therefore, the proposed study uses two publicly available datasets of 4-and 5-character text-based CAPTCHA images to propose a CAPTCHA solver. Furthermore, the proposed study used a skip-connection-based CNN model to solve a CAPTCHA. The proposed research employed 5-folds on data that delivers 10 Different CNN models on two datasets with promising results compared to the other studies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The first secure and fully automated mechanism, named CAPTCHA, was developed in 2000. Alta Vista first used the term CAPTCHA in 1997. It reduces spamming by <ns0:ref type='bibr'>95% Baird and Popat (2002)</ns0:ref>. CAPTCHA is also known as a reverse Turing test. The Turing test was the first test to distinguish human, and machine Von <ns0:ref type='bibr' target='#b51'>Ahn et al. (2003)</ns0:ref>. It was developed to determine whether a user was a human or a machine. It increases efficiency against different attacks that seek websites <ns0:ref type='bibr' target='#b16'>Danchev (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>Obimbo et al. (2013)</ns0:ref>.</ns0:p><ns0:p>It is said that CAPTCHA should be generic such that any human can easily interpret and solve it and difficult for machines to recognize it <ns0:ref type='bibr' target='#b7'>Bostik and Klecka (2018)</ns0:ref>. To protect against robust malicious attacks, various security authentication methods have been developed <ns0:ref type='bibr' target='#b27'>Goswami et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b41'>Priya and Karthik (2013)</ns0:ref>, <ns0:ref type='bibr' target='#b4'>Azad and Jain (2013)</ns0:ref>. CAPTCHA can be used for authentication in login forms, spam text reducer, e.g., in email, as a secret graphical key to log in for email. In this way, a spam-bot would not be able to recognize and log in to the email Sudarshan <ns0:ref type='bibr' target='#b49'>Soni and Bonde (2017)</ns0:ref>. However, recent advancements make the CAPTCHA's designs to be at high risk where the current gaps and robustness of models that are the concern is discussed in depth <ns0:ref type='bibr' target='#b46'>(Roshanbin and Miller, 2013)</ns0:ref>. Similarly, the image, text, colorful CAPTCHA's, and other types of CAPTCHA's are being attacked by various malicious attacks. 
Manuscript to be reviewed Computer Science and confidence <ns0:ref type='bibr' target='#b60'>(Xu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Many prevention strategies against malicious attacks have been adopted in recent years, such as cloud computing-based voice-processing <ns0:ref type='bibr'>Gao et al. (2020b,a)</ns0:ref>, mathematical and logical puzzles, and text and image recognition tasks <ns0:ref type='bibr' target='#b22'>Gao et al. (2020c)</ns0:ref>. Text-based authentication methods are mostly used due to their easier interpretation, and implementation <ns0:ref type='bibr' target='#b32'>Madar et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>Gheisari et al. (2021)</ns0:ref>. A set of rules may define a kind of automated creation of CAPTCHA-solving tasks. It leads to easy API creation and usage for security web developers to make more mature CAPTCHAs <ns0:ref type='bibr' target='#b9'>Bursztein et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b14'>Cruz-Perez et al. (2012)</ns0:ref>. The text-based CAPTCHA is used for Optical Character Recognition (OCR). OCR is strong enough to solve text-based CAPTCHA challenges. However, it still has challenges regarding its robustness in solving CAPTCHA problems <ns0:ref type='bibr' target='#b29'>Kaur and Behal (2015)</ns0:ref>. These CAPTCHA challenges are extensive with ongoing modern technologies. Machines can solve them, but humans cannot. These automated, complex CAPTCHA-creating tools can be broken down using various OCR techniques. Some studies claim that they can break any CAPTCHA with high efficiency. The existing work also recommends strategies to increase the keyword size and another method of crossing lines from keywords that use only straight lines and a horizontal direction. It can break easily using different transformations, such as the Hough transformation. It is also suggested that single-character recognition is used from various angles, rotations, and views to make more robust and challenging CAPTCHAs. <ns0:ref type='bibr' target='#b8'>Bursztein et al. (2011)</ns0:ref>.</ns0:p><ns0:p>The concept of reCAPTCHA was introduced in 2008. It was initially a rough estimation. It was later improved and was owned by Google to decrease the time taken to solve it. The un-solvable reCAPTCHA's were then considered to be a new challenge for OCRs Von <ns0:ref type='bibr' target='#b53'>Ahn et al. (2008)</ns0:ref>. The usage of computer vision and image processing as a CAPTCHA solver or breaker was increased if segmentation was performed efficiently <ns0:ref type='bibr' target='#b24'>George et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b62'>Ye et al. (2018)</ns0:ref>. The main objective or purpose of making a CAPTCHA solver is to protect CAPTCHA breakers. By looking into CAPTCHA solvers, more challenging CAPTCHAs can be generated, and they may lead to a more secure web that is protected against malicious attacks <ns0:ref type='bibr' target='#b42'>Rai et al. (2021)</ns0:ref>. A benchmark or suggestion for CAPTCHA creation was given by <ns0:ref type='bibr'>Chellapilla et al.:</ns0:ref> Humans should solve the given CAPTCHA challenge with a 90% success rate, while machines ideally solve only one in every 10,000 CAPTCHAs <ns0:ref type='bibr' target='#b12'>Chellapilla et al. (2005)</ns0:ref>.</ns0:p><ns0:p>Modern AI yields CAPTCHAs that can solve problems in a few seconds. Therefore, creating CAPTCHAs that are easily interpretable for humans and unsolvable for machines is an open challenge. 
It is also observed that humans invest a substantial amount of time daily solving CAPTCHAs Von <ns0:ref type='bibr' target='#b53'>Ahn et al. (2008)</ns0:ref>. Therefore, reducing the amount of time humans need to solve them is another challenge. Various considerations need to be made, including text familiarity, visual appearance, distortions, etc. Commonly in text-based CAPTCHAs, the well-recognized languages are used that have many dictionaries that make them easily breakable. Therefore, we may need to make unfamiliar text from common languages such as phonetic text is not ordinary language that is pronounceable <ns0:ref type='bibr' target='#b55'>Wang and Bentley (2006)</ns0:ref>. Similarly, the color of the foreground and the background of CAPTCHA images is also an essential factor, as many people have low or normal eyesight or may not see them. Therefore, a visually appealing foreground and background with distinguishing colors are recommended when creating CAPTCHAs. Distortions from periodic or random manners, such as affine transformations, scaling, and the rotation of specific angles, are needed. These distortions are solvable for computers and humans. If the CAPTCHAs become unsolvable, then multiple attempts by a user are needed to read and solve them <ns0:ref type='bibr' target='#b61'>Yan and El Ahmad (2008)</ns0:ref>.</ns0:p><ns0:p>In current times, Deep Convolutional neural networks (DCNN) are used in many medical <ns0:ref type='bibr' target='#b35'>Meraj et al. (2019)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>Manzoor et al. (2022)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Mahum et al. (2021)</ns0:ref> and other real-life recognition applications <ns0:ref type='bibr' target='#b36'>Namasudra (2020)</ns0:ref> as well as insecurity threat solutions <ns0:ref type='bibr' target='#b31'>Lal et al. (2021)</ns0:ref>. The security threats in IoT and many other aspects can also be controlled using blockchain methods <ns0:ref type='bibr' target='#b37'>Namasudra et al. (2021)</ns0:ref>. Utilizing deep learning, the proposed study uses various image processing operations to normalize text-based image datasets. After normalizing the data, a single-word-caption-based OCR was designed with skipping connections. These skipping connections connect previous pictorial information to various outputs in simple Convolutional Neural Networks (CNNs), which possess visual information in the next layer only <ns0:ref type='bibr' target='#b2'>Ahn and Yim (2020)</ns0:ref>.</ns0:p><ns0:p>The main contribution of this research work is as follows:</ns0:p><ns0:p>&#8226; A skipping-connection-based CNN framework is proposed that covers multiple aspects of features.</ns0:p><ns0:p>&#8226; A 5-fold validation scheme is used in a deep-learning-based network to remove bias, if any, which leads to more promising results.</ns0:p></ns0:div> <ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; The data are normalized using various image processing steps to make it more understandable for the deep learning model.</ns0:p></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Today in the growing and dominant field of AI, many real-life problems have been solved with the help of deep learning and other evolutionary optimized intelligent algorithms <ns0:ref type='bibr' target='#b44'>Rauf et al. 
(2021)</ns0:ref>, <ns0:ref type='bibr' target='#b45'>Rauf et al. (2020)</ns0:ref>. Various problems of different aspects using DL methods are solved, such as energy consumption analysis <ns0:ref type='bibr' target='#b21'>Gao et al. (2020b)</ns0:ref>, time scheduling of resources to avoid time and resources wastage <ns0:ref type='bibr' target='#b22'>Gao et al. (2020c)</ns0:ref>. Similarly, in cybersecurity, a CAPTCHA solver has provided many automated AI solutions, except OCR. Multiple proposed CNN models have used various types of CAPTCHA datasets to solve CAPTCHAs. The collected datasets have been divided into three categories: selection-, slide-, and click-based. Ten famous CAPTCHAs were collected from google.com, tencent.com, etc. The breaking rate of these CAPTCHAs was compared. CAPTCHA design flaws that may help to break CAPTCHAs easily were also investigated. The underground market used to solve CAPTCHAs was also investigated, and findings concerning scale, the commercial sizing of keywords, and their impact on CAPTCHas were reported <ns0:ref type='bibr' target='#b59'>Weng et al. (2019)</ns0:ref>. A proposed sparsity-integrated CNN used constraints to deactivate the fully connected connections in CNN. It ultimately increased the accuracy results compared to transfer learning, and simple CNN solutions <ns0:ref type='bibr' target='#b19'>Ferreira et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Image processing operations regarding erosion, binarization, and smoothing filters were performed for data normalization, where adhesion-character-based features were introduced and fed to a neural network for character recognition <ns0:ref type='bibr' target='#b28'>Hua and Guoqin (2017)</ns0:ref>. The backpropagation method was claimed as a better approach for image-based CAPTCHA recognition. It has also been said that CAPTCHA has become the normal, secure authentication method in the majority of websites and that image-based CAPTCHAs are more valuable than text-based CAPTCHAs <ns0:ref type='bibr' target='#b47'>Saroha and Gill (2021)</ns0:ref>. Template-based matching is performed to solve text-based CAPTCHAs, and preprocessing is also performed using Hough transformation and skeletonization. Features based on edge points are also extracted, and the points of reference with the most potential are taken . It is also claimed that the extracted features are invariant to position, language, and shapes. Therefore, it can be used for any merged, rotated, and other variation-based CAPTCHAs WANG (2017).</ns0:p><ns0:p>PayPal CAPTCHAs have been solved using correlation, and Principal Component Analysis (PCA) approaches. The primary steps of these studies include preprocessing, segmentation, and the recognition of characters. A success rate of 90% was reported using correlation analysis of PCA and using PCA only increased the efficiency to 97% <ns0:ref type='bibr' target='#b43'>Rathoura and Bhatiab (2018)</ns0:ref>. A Faster Recurrent Neural Network (F-RNN) has been proposed to detect CAPTCHAs. It was suggested that the depth of a network could increase the mean average precision value of CAPTCHA solvers, and experimental results showed that feature maps of a network could be obtained from convolutional layers <ns0:ref type='bibr' target='#b18'>Du et al. (2017)</ns0:ref>. Data creation and cracking have also been used in some studies. For visually impaired people, there should be solutions to CAPTCHAs. 
A CNN network named CAPTCHANet has been proposed.</ns0:p><ns0:p>A 10-layer network was designed and was improved later with training strategies. A new CAPTCHA using Chinese characters was also created, and it removed the imbalance issue of class for model training.</ns0:p><ns0:p>A statistical evaluation led to a higher success rate <ns0:ref type='bibr' target='#b63'>Zhang et al. (2021)</ns0:ref>. A data selection approach automatically selected data for training purposes. The data augmenter later created four types of noise to make CAPTCHAs difficult for machines to break. However, the reported results showed that, in combination with the proposed preprocessing method, the results were improved to 5.69% <ns0:ref type='bibr' target='#b11'>Che et al. (2021)</ns0:ref>. Some recent studies on CAPTCHA recognition are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The pre-trained model of object recognition has an excellent structural CNN. A similar study used a well-known VGG network and improved the structure using focal loss <ns0:ref type='bibr' target='#b57'>Wang and Shi (2021)</ns0:ref>. The image processing operations generated complex data in text-based CAPTCHAs, but there may be a high risk of breaking CAPTCHAs using common languages. One study used the Python Pillow library to create Bengali-, Tamil-, and Hindi-language-based CAPTCHAs. These language-based CAPTCHAs were solved using D-CNN, which proved that the model was also confined by these three languages <ns0:ref type='bibr' target='#b0'>Ahmed and Anand (2021)</ns0:ref>. A new, automatic CAPTCHA creating and solving technique using a simple 15-layer CNN was proposed to remove the manual annotation problem.</ns0:p><ns0:p>Various fine-tuning techniques have been used to break 5-digit CAPTCHAs and have achieved 80% classification accuracies <ns0:ref type='bibr' target='#b6'>Bostik et al. (2021)</ns0:ref>. A privately collected dataset was used in a CNN approach</ns0:p></ns0:div> <ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:ref> Manuscript to be reviewed Computer Science with 7 layers that utilize correlated features of text-based CAPTCHAs. It achieved a 99.7% accuracy using its image database, and CNN architecture <ns0:ref type='bibr' target='#b30'>Kumar and Singh (2021)</ns0:ref>. Another similar approach was based on handwritten digit recognition. The introduction of a CNN was initially discussed, and a CNN was proposed for twisted and noise-added CAPTCHA images <ns0:ref type='bibr' target='#b10'>Cao (2021)</ns0:ref>. A deep, separable CNN for four-word CAPTCHA recognition achieved 100% accurate results with the fine-tuning of a separable CNN concerning their depth. A fine-tuned, pre-trained model architecture was used with the proposed architecture and significantly reduced the training parameters with increased efficiency <ns0:ref type='bibr' target='#b17'>Dankwa and Yang (2021)</ns0:ref>.</ns0:p><ns0:p>A visual-reasoning CAPTCHA (known as a Visual Turing Test (VTT)) has been used in security authentication methods, and it was easy to break using holistic and modular attacks. One study focused on a visual-reasoning CAPTCHA and showed an accuracy of 67.3% against holistic CAPTCHAs and an accuracy of 88% against VTT CAPTCHAs. Future directions were to design VTT CAPTCHAs to protect against these malicious attacks <ns0:ref type='bibr' target='#b23'>Gao et al. (2021)</ns0:ref>. 
To make text-based CAPTCHA systems more secure, a CAPTCHA defense algorithm was proposed that generates multi-character CAPTCHAs using an adversarial perturbation method; the reported results showed that this complex CAPTCHA generation can reduce the accuracy of a CAPTCHA breaker to 0.06% <ns0:ref type='bibr' target='#b54'>Wang et al. (2021a)</ns0:ref>. In another work, Generative Adversarial Network (GAN) based simplification of CAPTCHA images was adopted before segmentation and classification; the resulting CAPTCHA solver achieves a 96% character recognition success rate, and the other CAPTCHA schemes evaluated showed a 74% recognition rate. Such findings can guide CAPTCHA designers towards improved CAPTCHA generation <ns0:ref type='bibr' target='#b56'>Wang et al. (2021b)</ns0:ref>. A binary image-based CAPTCHA recognition framework was proposed that generates a certain number of image copies from a given CAPTCHA image to train a CNN model; on the Weibo dataset the 4-character recognition accuracy on the testing set was 92.68%, while on the Gregwar dataset the testing accuracy was 54.20% <ns0:ref type='bibr' target='#b50'>Thobhani et al. (2020)</ns0:ref>.</ns0:p><ns0:p>reCAPTCHA images are a specific type of security layer used by some sites, for which Google set the benchmark after earlier challenges were broken. The user is shown reference images and must pick out the similar ones, which humans can do efficiently; machine-learning studies now also address this kind of CAPTCHA <ns0:ref type='bibr' target='#b3'>(Alqahtani and Alsulaiman, 2020)</ns0:ref>. Drag-and-drop image CAPTCHA schemes are also applied nowadays: a part of the image is removed and must be dragged into the blank at a particular location and shape. Such schemes can be broken by locating the empty region through neighborhood differences of pixels, but they are still effective enough to deter many malicious attacks <ns0:ref type='bibr' target='#b40'>(Ouyang et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Adversarial attacks are a rising challenge that aims to deceive deep learning models. To prevent deep-learning-based CAPTCHA attacks, different kinds of adversarial noise are introduced into security puzzles that present similar-looking images which the user must identify; the noise-perturbed images are generated so that only an attentive human eye can find the target <ns0:ref type='bibr' target='#b48'>(Shi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b39'>Osadchy et al., 2017)</ns0:ref>. However, these schemes demand extra effort, because noise-perturbed images can take users more time to solve, and some adversarial noise-generation methods can produce samples that are unsolvable for some real users.</ns0:p><ns0:p>The studies discussed above cover text-based CAPTCHAs as well as other types. Most of them used DL methods to break CAPTCHAs, and solution time and unsolvable CAPTCHAs are still open challenges. More efficient DL methods are needed that, even if they are not trained on other datasets, remain robust to them.
Many of these studies use locally developed datasets, which makes the proposed methods less robust; using publicly available datasets instead can provide more robust and trustworthy solutions.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>Recent deep-learning-based studies have shown excellent results in solving CAPTCHAs. However, plain CNN approaches can lose information when the incoming features are pooled between the convolution and pooling layers; therefore, the proposed study utilizes a skip connection. To further remove bias, a 5-fold validation approach is adopted. The proposed CAPTCHA solver framework consists of several steps, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The data are first normalized using several image processing steps to make them easier for the deep learning model to interpret. The normalized data are then segmented per character so that an OCR-type deep learning model can be trained to recognize each character from multiple aspects. Finally, the 5-fold validation results are reported and are promising.</ns0:p><ns0:p>The two datasets used for CAPTCHA recognition contain 4 and 5 characters per image. The 5-character dataset has a horizontal line with overlapping text, and segmenting and recognizing such text is challenging because the characters are obscured. The 4-character dataset is less challenging to segment, as no line intersects its characters, although character rotation and scaling still need to be considered. Their preprocessing and segmentation are explained in the next section, and the datasets are described in detail before and after preprocessing and segmentation.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>Two public datasets available on Kaggle are used in the proposed study, containing 5 and 4 characters per image, respectively, with different numbers of numeric and alphabetic characters. There are 1040 images in the five-character dataset (d 1 ) and 9955 images in the four-character dataset (d 2 ), with 19 character types in d 1 and 32 character types in d 2 . Their respective dimensions and file-format details before and after segmentation are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, and the frequencies of each character in both datasets are shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>.</ns0:p><ns0:p>The frequency of each character varies between the datasets, as does the number of character types. The d 2 dataset has no complex line intersection or merged text, but it has more character types and higher character frequencies, whereas the d 1 dataset is more complex and has fewer character types and lower frequencies. Initially, d 1 images have dimensions of 50 x 200 x 3, where 50 is the number of rows, 200 the number of columns, and 3 the color depth; d 2 images have dimensions of 24 x 72 x 3. The character positions are almost fixed in both datasets, so the characters can be cropped at fixed locations to train the model on each character in isolation. However, the cropped dimensions vary from character to character, so the crops need to be resized to a common size.
The input images of both datasets were in Portable Network Graphics (PNG) format and did not need conversion. After segmentation, each character image is resized to 20 x 24 in both datasets; this size preserves every aspect of the visual binary pattern of each character. The dataset details before and after resizing are summarized in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. A fixed per-character size is needed because, when the characters are segmented from the dataset images, their sizes vary from dataset to dataset and from character type to character type.</ns0:p><ns0:p>Therefore, 20 rows by 24 columns is chosen as the common size at which no character information is lost.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing and Segmentation</ns0:head><ns0:p>The d 2 dataset images do not need any complex image processing to segment them into a normalized form. The input images have three channels: Red, Green, and Blue. Let I (x,y) be the input RGB image, as shown in Eq. 1; Eq. 2 converts the input images into grayscale.</ns0:p><ns0:formula xml:id='formula_0'>Input Image = I (x,y)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In Eq. 1, I is the given image, and x and y represent the rows and columns. The grayscale conversion is performed using Eq. 2:</ns0:p><ns0:formula xml:id='formula_1'>Grey (x, y) &#8592; 0.2989 * R(x, y) + 0.5870 * G(x, y) + 0.1140 * B(x, y)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Eq. 2 is applied at every pixel position (x, y), where R, G, and B are the red, green, and blue values of that pixel. The weighting constants map the three channel values to a single new grey level in the range 0-255, and Grey (x, y) is the resulting grey level of the pixel. After conversion to grey level, binarization is performed using Bradley's method, which computes a neighborhood-based threshold to convert the two-dimensional grey-level matrix into 1 and 0 values. The neighborhood threshold operation is performed using Eq. 3.</ns0:p><ns0:formula xml:id='formula_2'>B (x, y) &#8592; 2 * &#8970;size(Grey (x, y))/16&#8971; + 1<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In Eq. 3, the output B (x, y) is the neighborhood-based threshold, computed over a neighborhood of about 1/8 th of the size of the given Grey (x, y) image; the floor is used to obtain a lower value and avoid a miscalculated threshold. This calculated threshold is also called adaptive thresholding, and the neighborhood size can be changed to increase or decrease the amount of binarization in a given image. After obtaining a binary image, its complement is taken as a simple inverse operation to highlight the object in the image, as shown in Eq. 4.</ns0:p><ns0:formula xml:id='formula_3'>C (x, y) &#8592; 1 &#8722; B(x, y)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>In Eq. 4, the 0 and 1 values at each pixel position x and y are inverted. The inverted image is used directly to isolate the characters in the case of the d 2 dataset; a code sketch of this normalization pipeline is given below.
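The paper does not state its implementation environment; the following Python/OpenCV sketch illustrates the normalization pipeline just described (grayscale conversion, Bradley-style adaptive thresholding, and complementing), together with the line-removal erosion and small-area filtering that, as described in the next paragraphs, are additionally applied to the d 1 images. The function name, the block-size heuristic, and the use of OpenCV are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def normalize_captcha(img_bgr, remove_line=False):
    """Greyscale -> adaptive (Bradley-style) threshold -> complement.
    If remove_line is True, erode with a length-5 vertical line element
    and drop connected regions smaller than 20 pixels (noise)."""
    grey = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)              # Eq. (2)
    block = 2 * (grey.shape[1] // 16) + 1                          # odd neighbourhood size, ~1/8 of width (Eq. 3)
    binary = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block, 10)
    binary = cv2.bitwise_not(binary)                               # complement: characters become foreground (Eq. 4)
    if remove_line:
        se = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5))     # line element, length 5, 90 degrees
        binary = cv2.erode(binary, se)                             # removes the thin horizontal line (Eq. 5)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        for i in range(1, n):                                      # drop regions below the 20-pixel area threshold (Eq. 6)
            if stats[i, cv2.CC_STAT_AREA] < 20:
                binary[labels == i] = 0
    return binary
```

Each character is then cropped at its roughly fixed position and resized to 20 x 24 before being passed to the CNN.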
<ns0:p>In the case of the d 1 dataset, further erosion is needed. Erosion is an operation that uses a structuring element of a particular shape; the shape determines which pixels are removed from a given binary image. In the case of a CAPTCHA image, the intersecting line is removed using a line-type structuring element, which operates on a pixel neighborhood. In the proposed study, a line of length 5 at an angle of 90 degrees is used, and the intersecting line through each character in the binary image is removed, as can be seen in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, row 1. The erosion operation with respect to a length of 5 and an angle of 90 degrees is calculated as shown in Eq. 5.</ns0:p><ns0:formula xml:id='formula_4'>C ⊖ L = {x ∈ E | L x ⊆ C}<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In Eq. 5, C is the binary image, L is the line-type structuring element, E is the pixel grid, and L x is the structuring element translated to position x; the eroded result keeps only those positions at which the translated structuring element fits entirely inside C. After erosion, some images still contain noise that may lead to the wrong interpretation of a character. Therefore, to remove noise, a neighborhood operation is again utilized: 8-connected neighborhoods are used with a threshold of 20 pixels of value 1, since the noise regions remain smaller than the characters in the binary image. To apply it, an area calculation over the pixels is necessary. By iterating the 8-connected neighborhood operation, regions consisting of fewer than 20 pixels are removed, while the other, more significant areas remain in the output image. The area of a region, as the sum of its 1-valued pixels, is calculated as shown in Eq. 6.</ns0:p><ns0:formula xml:id='formula_5'>S ← ∑ (i, j) ∈ A B x (i, j)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In Eq. 6, A is an 8-connected region of the eroded image B x , i.e., a set of 1-valued pixels in which neighboring pixels (i, j) satisfy max(|x i − x j |, |y i − y j |) ≤ 1. Summing the pixel values gives an area that is compared with the threshold value T.</ns0:p><ns0:p>The noise is then removed, and a final isolation is performed to separate each normalized character.</ns0:p></ns0:div> 
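A minimal sketch of this line removal and denoising step (Eqs. 5-6), assuming scikit-image, is given below; the structuring-element parameters mirror the text (a length-5 line at 90 degrees, an 8-connected area threshold of 20 pixels), but the authors' exact implementation may differ:

import numpy as np
from skimage.morphology import erosion, remove_small_objects

def remove_line_and_noise(binary, min_area=20):
    # Eq. 5: erode with a vertical line of length 5 (a 5x1 structuring element, i.e., 90 degrees)
    line_se = np.ones((5, 1), dtype=bool)
    eroded = erosion(binary.astype(bool), line_se)
    # Eq. 6: drop 8-connected regions (connectivity=2) whose area is below 20 pixels
    cleaned = remove_small_objects(eroded, min_size=min_area, connectivity=2)
    return cleaned.astype(np.uint8)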
<ns0:div><ns0:head>CNN Training for Text Recognition</ns0:head><ns0:formula xml:id='formula_6'>convo (I,W ) x,y = ∑ a=1..N C ∑ b=1..N R W a,b * I x+a−1,y+b−1 (7)</ns0:formula><ns0:p>In the above equation, we formulate the convolutional operation for a 2D image I x,y , where x and y are the rows and columns of the image, respectively. W a,b represents the convolving window entries indexed by a and b. The window is iteratively multiplied with the respective elements of the given image, and the result is returned in convo (I,W ) x,y . N C and N R are the numbers of columns and rows of the window, the indices start from 1, a indexes the columns, and b indexes the rows.</ns0:p></ns0:div> <ns0:div><ns0:head>Batch Normalization Layer</ns0:head><ns0:p>Its basic formula calculates a single normalized component value, which can be represented as</ns0:p><ns0:formula xml:id='formula_7'>Bat ′ = (a − M [a]) / √ var(a)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>The calculated new value is represented as Bat ′ , a is any given input value, M[a] is the mean of that given input, and the denominator var(a) is the variance of the input a. The value is further refined layer by layer to give a finalized normalized value with the help of the learnable parameters γ and β, as shown below:</ns0:p><ns0:formula xml:id='formula_8'>Bat ′′ = γ * Bat ′ + β (9)</ns0:formula><ns0:p>The extended batch normalization formulation is refined in each layer using the previous Bat ′ value.</ns0:p></ns0:div> <ns0:div><ns0:head>ReLU</ns0:head><ns0:p>ReLU excludes input values that are negative and retains positive values. Its equation can be written as</ns0:p><ns0:formula xml:id='formula_9'>reLU(x) = x if x &gt; 0; 0 if x ≤ 0<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where x is the input value; the value is output directly if it is greater than zero, and negative values are replaced with 0.</ns0:p></ns0:div> <ns0:div><ns0:head>Skip-Connection</ns0:head><ns0:p>The skip connection essentially carries earlier pictorial information forward to the later convolved feature maps of the network. In the proposed network, the output of the first ReLU layer is saved and, after the second and third ReLU layers, this saved information is combined with the main path through an addition layer. In this way, the added skip connection distinguishes the proposed model from conventional deep learning approaches to CAPTCHA classification. Moreover, the visualization of this added feature information is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average Pooling</ns0:head><ns0:p>The average pooling layer is straightforward: a window is slid over the input coming from the previous layer or node. The incoming input is covered using a window of size m x n, where m represents the rows and n represents the columns. The movement in the horizontal and vertical directions is controlled by the stride parameters.</ns0:p><ns0:p>Many previously introduced deep-learning-based algorithms, as can be seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, ultimately use CNN-based methods. However, traditional CNN approaches using convolution blocks and transfer learning may lose important information when they pool down the incoming feature maps from previous layers. Similarly, a conventional single training/validation/testing split may be biased because far less data is tested than is trained on. Therefore, the proposed study uses one skip connection while maintaining the other convolution blocks and, inspired by the K-fold validation method, splits the data of both datasets into five respective folds. After splitting into five folds, the dataset is trained and tested in sequence, and the five per-fold results are averaged to report the final accuracy.</ns0:p><ns0:p>The proposed CNN contains 16 layers in total, and it includes three major blocks containing convolutional, batch normalization, and ReLU layers. After these nine layers, an addition layer combines the skip-connection output with the output of the third ReLU layer of the three respective blocks. Average pooling, fully connected, and softmax layers are added after the skip connection. All layer parameters and details are shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>.</ns0:p>
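To make the layer flow of Table 3 concrete, the following is a minimal PyTorch sketch of the described skip-connection CNN (three convolution-batch-normalization-ReLU blocks, a parallel 1 x 1 convolution as the skip path, an addition layer, average pooling, and a dense classifier). The layer sizes follow Table 3; training hyper-parameters are not specified in the text, so this is an illustrative reconstruction rather than the authors' exact implementation:

import torch
import torch.nn as nn

class SkipCaptchaCNN(nn.Module):
    def __init__(self, num_classes=19):          # 19 classes for d1, 32 for d2
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 8, 3, stride=1, padding=1),
                                    nn.BatchNorm2d(8), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(8, 16, 3, stride=2, padding=1),
                                    nn.BatchNorm2d(16), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=1, padding=1),
                                    nn.BatchNorm2d(32), nn.ReLU())
        # skip path: 1x1 convolution carrying the first block's output to 12x10x32
        self.skip = nn.Conv2d(8, 32, kernel_size=1, stride=2)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)     # 12x10x32 -> 6x5x32
        self.fc = nn.Linear(6 * 5 * 32, num_classes)          # 960 -> classes

    def forward(self, x):                         # x: (batch, 1, 24, 20)
        f1 = self.block1(x)                       # 24x20x8
        f3 = self.block3(self.block2(f1))         # 12x10x32
        added = f3 + self.skip(f1)                # addition layer joins both paths
        out = self.pool(added).flatten(1)
        return self.fc(out)                       # softmax is applied by the loss (e.g., CrossEntropyLoss)

With a 24 x 20 x 1 input, the addition layer receives two 12 x 10 x 32 tensors, and the average-pooled feature map flattens to the 960 values feeding the dense layer, matching the weight shapes listed in Table 3.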
<ns0:p>In Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, all learnable weights of each layer are shown. The output character categories differ between the two datasets. Therefore, in the dense layer of the five-fold CNN models, the output had 19 classes for five models and 32 classes for the other five models. The skip connection has more weights than the other convolution layers. Each model is compared regarding its weight learning, as shown in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. Later on, after obtaining these significant, multi-aspect features, the proposed study utilizes the K-fold validation technique by splitting the data into five splits. These multiple splits remove bias in the training and testing data, and the testing result is taken as the mean over all models. In this way, no data remains unused for training, and no data remains untested. The results ultimately become more confident than those of previous conventional CNN approaches. The d 2 dataset has clearly structured elements in its segmented images, while in d 1 the isolated text images are not as clear. Therefore, the classification results remain lower in that case, whereas on the d 2 dataset the classification results remain high and usable as a CAPTCHA solver. The results of each character and dataset for each fold are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>As discussed earlier, there are two datasets in the proposed framework. Both have a different number of categories and a different number of images. Therefore, separate evaluations of both are discussed and described in this section. Firstly, the five-character dataset is used by five CNN models of the same architecture, with different splits of the data. Secondly, the four-character dataset is used by the same model architecture, with a different number of output classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Five-character Dataset (d 1 )</ns0:head><ns0:p>The five-character dataset has 1040 images in it. After segmenting each type of character, it has 5200 character images in total. The per-character recognition results for each fold are shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. The characters that already achieve higher recognition results would need larger rotation angles and structural changes so that they cannot be broken by any machine learning model. This complex structure may be improved by further fine-tuning of the CNN and by increasing or decreasing the skip connections; the accuracy value can also improve. The other, four-character dataset is more important because it has 32 types of characters and more images. The lower accuracy on this five-character dataset may also be due to the small amount of data and less training.</ns0:p>
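The fold-wise evaluation behind these tables, training one model per split and reporting the per-fold accuracies together with their mean, can be sketched as follows; scikit-learn's KFold and NumPy arrays are assumed, and build_model, train_one_fold, and evaluate are hypothetical helpers standing in for training code that the paper does not list:

import numpy as np
from sklearn.model_selection import KFold

def five_fold_accuracy(images, labels, build_model, train_one_fold, evaluate):
    # Train a fresh model on four folds and test on the held-out fold, five times
    fold_scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(images):
        model = build_model()
        train_one_fold(model, images[train_idx], labels[train_idx])
        fold_scores.append(evaluate(model, images[test_idx], labels[test_idx]))
    # The reported accuracy is the mean of the five per-fold accuracies
    return float(np.mean(fold_scores)), fold_scores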
<ns0:p>Other character recognition studies have reported higher accuracy rates on similar datasets, but they might be less reliable than the proposed study because they lack an unbiased validation method.</ns0:p><ns0:p>The four-character dataset recognition results are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Four-Character Dataset (d 2 )</ns0:head><ns0:p>The four-character dataset has a higher frequency of each character compared to the five-character dataset, and the number of character types is also higher. The same five-fold splitting was performed on this dataset's characters as well. After applying the five folds, the number of characters in each test fold was 7607, 7624, 7602, 7617, and 7595, respectively, and the remaining images from the 38,045 images of individual characters were adjusted into the training sets of each fold. The results of each character with respect to each fold, together with the overall mean, are given in Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, it can be observed that almost every character was recognized with around 99% accuracy. The highest overall accuracy, 99.92%, was achieved for character D, which remained at 100% in four folds; only one fold showed an accuracy of 99.57%. From this point, we can state that the proposed study removed bias, if there was any, from the dataset by making the splits. Therefore, it is necessary to make folds when training a deep learning network.</ns0:p><ns0:p>Most studies use a single-split approach only, which is at high risk of bias. It is also notable that the character M achieved the lowest accuracy in the case of the five-character CAPTCHA, whereas in this four-character CAPTCHA it was recognized with 98.58% accuracy. Therefore, we can say that the structural morphology of M in the five-character CAPTCHA is better at evading a CAPTCHA solver method. The high results show that this four-character CAPTCHA is at high risk, and that line intersection, character joining, and correlated distortions may help prevent a CAPTCHA from being broken. Many approaches have been proposed before to recognize CAPTCHAs, and most of them have used a conventional structure.</ns0:p><ns0:p>The proposed study has used a more confident validation approach with multi-aspect feature extraction.</ns0:p><ns0:p>Therefore, it can be used as a more promising approach to break CAPTCHA images and to test the CAPTCHA designs made by CAPTCHA designers. In this way, CAPTCHA designs can be protected against new deep learning approaches. The graphical illustration of the validation accuracy and losses for both datasets on all folds is shown in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, we can see that various studies have used different numbers of characters with self-collected and generated datasets, and comparisons have been made. Some studies have considered different numbers of dataset characters. The accuracies are not directly comparable, as the proposed study uses the five-fold validation method while the others used only a single split. Therefore, the proposed study outperforms in each aspect, in terms of both the proposed CNN framework and its validation scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed study uses a different deep learning approach to solve CAPTCHA problems. It proposes a skip-connection CNN network to break text-based CAPTCHAs. Two CAPTCHA datasets are discussed and evaluated character by character.
The proposed study reports its results with confidence, as it removed biases (if any) in the datasets by using a five-fold validation method. The results are also improved compared to previous studies. The reported high results indicate that these CAPTCHA designs are at high risk, as a malicious attack could break them on the web. Therefore, the proposed CNN could be used to test CAPTCHA designs by solving them more confidently in real time. Furthermore, the proposed study has used publicly available datasets for training and testing, making it a more robust approach to solving text-based CAPTCHAs.</ns0:p><ns0:p>Many studies have used deep learning to break CAPTCHAs, and they have highlighted the need to design CAPTCHAs that do not consume much user time yet resist CAPTCHA solvers. This would make our web systems more secure against malicious attacks. In the future, data augmentation methods and more robust data creation methods can be applied to CAPTCHA datasets, since intersecting-line-based CAPTCHAs are more challenging to break and can therefore be used. Similarly, CAPTCHAs based on other local languages can also be solved using similar DL models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>However, most of them have used Deep Learning based methods to crack them due to their robustness PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The Proposed Framework for CAPTCHA Recognition.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Character-wise Frequencies (Row-1: 4-Character Dataset 1 (d 2 ); Row-2: five-character Dataset 2 (d 1 )).</ns0:figDesc><ns0:graphic coords='7,183.09,391.65,330.86,283.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>d</ns0:head><ns0:label /><ns0:figDesc>2 needs this operation to remove the central intersecting line of each character. This dataset can be normalized to isolate each character correctly. Therefore, three steps are performed on the d 1 dataset.It is firstly converted to greyscale; it is then converted to a binary form, and their complement is lastly taken. In the d 2 dataset, 2 additional steps of erosion and area-wise selection are performed to remove the intersection line and the edges of characters. The primary steps of both datasets and each character isolation are shown in Figure3.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Preprocessing and Isolation of characters in both datasets (Row-1: the d1 dataset, binarization, erosion, area-wise selection, and segmentation; Row-2: binarization and isolation of each character).</ns0:figDesc><ns0:graphic coords='8,162.41,464.12,372.20,183.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Five-fold weights with respective layers shown for multiple proposed CNN architectures.</ns0:figDesc><ns0:graphic coords='12,141.73,136.26,413.54,225.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>images. The data are then split into five folds: 931, 941, 925, 937, and 924. The remaining data difference 11/18 PeerJ Comput. Sci.
reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021) Manuscript to be reviewed Computer Science is adjusted into the training set, and splitting was adjusted during the random selection of 20-20% of the total data. The training on four-fold data and the testing on the one-fold data are shown in Table</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The validation loss and validation accuracy graphs are shown for each fold of the CNN (Row-1: five-character CAPTCHA; Row-2: four-character CAPTCHA).</ns0:figDesc><ns0:graphic coords='14,141.73,451.31,413.54,196.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>deep learning approaches, and their results remain at risk. Similarly, a four-character CAPTCHA with a greater number of samples and less complex characters should not be used, as it can break easily compared to the five-character CAPTCHA. CAPTCHA-recognition-based studies have used self-generated or augmented datasets to propose CAPTCHA solvers. Therefore, the number of images, their spatial resolution sizes and styles, and other results have become incomparable. The proposed study mainly focuses on a better validation technique using deep learning with multi-aspect feature via skipping connections in a CNN. With some character-matching studies, we performed a comparison to make the proposed study more reliable.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Recent CAPTCHA recognition-based studies and their details.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Year Dataset</ns0:cell><ns0:cell /><ns0:cell>Method</ns0:cell><ns0:cell /><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang and Shi (2021)</ns0:cell><ns0:cell>2021 CNKI</ns0:cell><ns0:cell /><ns0:cell cols='2'>Binarization,</ns0:cell><ns0:cell cols='2'>Recognition rate= 99%,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA,</ns0:cell><ns0:cell>smoothing,</ns0:cell><ns0:cell>seg-</ns0:cell><ns0:cell cols='2'>98.5%, 97.84%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random Gener-</ns0:cell><ns0:cell>mentation</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ated, Zhengfang</ns0:cell><ns0:cell>annotation</ns0:cell><ns0:cell>with</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>CAPTCHA</ns0:cell><ns0:cell cols='2'>Adhesian and more</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>interference</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ahmed and Anand</ns0:cell><ns0:cell>2021 Tamil,</ns0:cell><ns0:cell>Hindi</ns0:cell><ns0:cell>Pillow</ns0:cell><ns0:cell>Library,</ns0:cell><ns0:cell>&#8764;</ns0:cell></ns0:row><ns0:row><ns0:cell>(2021)</ns0:cell><ns0:cell cols='2'>and Bengali</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bostik et al. 
(2021)</ns0:cell><ns0:cell cols='2'>2021 Private created</ns0:cell><ns0:cell cols='2'>15-layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Kumar and Singh (2021) 2021 Private</ns0:cell><ns0:cell /><ns0:cell cols='2'>7-Layer CNN</ns0:cell><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 99.7%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dankwa and Yang (2021) 2021 4-words Kaggle</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy=100%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2021b)</ns0:cell><ns0:cell cols='2'>2021 Private GAN</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='2'>Classification</ns0:cell><ns0:cell>Accu-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>based dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>racy= 96%, overall =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>74%</ns0:cell></ns0:row><ns0:row><ns0:cell>Thobhani et al. (2020)</ns0:cell><ns0:cell cols='3'>2020 Weibo, Gregwar CNN</ns0:cell><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>92.68%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Testing</ns0:cell><ns0:cell>Accuracy=</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>54.20%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of the employed dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Properties</ns0:cell><ns0:cell>d1</ns0:cell><ns0:cell>d2</ns0:cell></ns0:row><ns0:row><ns0:cell>Image dimension</ns0:cell><ns0:cell>50x200x3</ns0:cell><ns0:cell>24x72x3</ns0:cell></ns0:row><ns0:row><ns0:cell>Extension</ns0:cell><ns0:cell>PNG</ns0:cell><ns0:cell>PNG</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Images</ns0:cell><ns0:cell>9955</ns0:cell><ns0:cell>1040</ns0:cell></ns0:row><ns0:row><ns0:cell>Character Types</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>19</ns0:cell></ns0:row><ns0:row><ns0:cell>Resized Image Dimension (Per Character)</ns0:cell><ns0:cell>20x24x1</ns0:cell><ns0:cell>20x24x1</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameters setting and learnable weights for proposed framwork</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number</ns0:cell><ns0:cell>Layers Name</ns0:cell><ns0:cell>Category</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell cols='3'>Weights/Offset Padding Stride</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Input</ns0:cell><ns0:cell>Image Input</ns0:cell><ns0:cell>24 x 20 x 1</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Conv (1)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>24 x 20 x 
8</ns0:cell><ns0:cell>3x3x1x8</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>BN (1)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>1x1x8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>ReLU (1)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Conv (2)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>3x3x8x16</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>BN (2)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>1x1x16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>ReLU (2)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Conv (3)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>3x3x16x32</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>BN (3)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>ReLU (3)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Skip-connection</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x8x32</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Add</ns0:cell><ns0:cell>Addition</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Pool</ns0:cell><ns0:cell>Average Pooling</ns0:cell><ns0:cell>6 x 5 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 19 (d2)</ns0:cell><ns0:cell>19 x 960 (d2)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>FC</ns0:cell><ns0:cell>Fully connected</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 32 (d1)</ns0:cell><ns0:cell>32 x 960 (d1)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>1 x 1 x 19</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>Class Output</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table><ns0:note>10/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Five-character Dataset Accuracy (%) with five-fold text recognition testing on the CNN. 
, there are 19 types of characters that have their fold-by-fold varying accuracy. The mean of all folds is given. The overall or mean of each fold and the mean of all folds are given in the last row. We can see that the Y character has a significant or the highest accuracy rate (95.40%) of validation compared to other characters. This may be due to its almost entirely different structure from other characters. The other highest accuracy is of the G character with 95.06%, which is almost equal to the highest with a slight difference. However, these two characters have a more than 95% recognition accuracy, and no other character is nearer to 95. The other characters have a range of accuracies from 81 to 90%. The least accurate M character is 62.08, and it varies in five folds from 53 to 74%. Therefore, we can say that M matches with other characters, and for this character recognition, we may need to concentrate on structural polishing for M input characters. To prevent CAPTCHA from breaking further complex designs among machines and making it easy for humans to do so, the other characters that achieve higher results need a</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>89.63</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>78.72</ns0:cell><ns0:cell>84.48</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>87.76</ns0:cell><ns0:cell>75.51</ns0:cell><ns0:cell>87.75</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>86.12</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>88.46</ns0:cell><ns0:cell>90.196</ns0:cell><ns0:cell>90.19</ns0:cell><ns0:cell>92.15</ns0:cell><ns0:cell>89.06</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>80.39</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>86.56</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>76.59</ns0:cell><ns0:cell>82.61</ns0:cell><ns0:cell>91.304</ns0:cell><ns0:cell>80.43</ns0:cell><ns0:cell>87.58</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>89.36</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>84.78</ns0:cell><ns0:cell>86.68</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>91.66</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>87.49</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>81.81</ns0:cell><ns0:cell>73.33</ns0:cell><ns0:cell>97.72</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>90.09</ns0:cell><ns0:cell>85.03</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>80.85</ns0:cell><ns0:cell>82.64</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>95.55</ns0:cell><ns0:cell>88.67</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>62.79</ns0:cell><ns0:cell>79.54</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>93.18</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>78.73</ns0:cell></ns0:row><ns0:row><
ns0:cell>F</ns0:cell><ns0:cell>92.00</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>94.00</ns0:cell><ns0:cell>81.63</ns0:cell><ns0:cell>89.1</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>95.83</ns0:cell><ns0:cell>91.83</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>93.75</ns0:cell><ns0:cell>95.06</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>64.00</ns0:cell><ns0:cell>56.00</ns0:cell><ns0:cell>53.061</ns0:cell><ns0:cell>74.00</ns0:cell><ns0:cell>67.34</ns0:cell><ns0:cell>62.08</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>81.40</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>87.59</ns0:cell><ns0:cell>76.74</ns0:cell><ns0:cell>82.35</ns0:cell><ns0:cell>81.43</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>90.34</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>95.24</ns0:cell><ns0:cell>83.72</ns0:cell><ns0:cell>90.47</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>90.55</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>82.97</ns0:cell><ns0:cell>85.41</ns0:cell><ns0:cell>82.98</ns0:cell><ns0:cell>85.68</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>93.02</ns0:cell><ns0:cell>95.45</ns0:cell><ns0:cell>97.67</ns0:cell><ns0:cell>95.53</ns0:cell><ns0:cell>95.35</ns0:cell><ns0:cell>95.40</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>86.14</ns0:cell><ns0:cell>80.77</ns0:cell><ns0:cell>87.24</ns0:cell><ns0:cell>87.73</ns0:cell><ns0:cell>85.71</ns0:cell><ns0:cell>85.52</ns0:cell></ns0:row><ns0:row><ns0:cell>In Table 4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>12/18PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Four-character dataset Accuracy (%) with five-fold text recognition testing on the CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>Overall</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>97.84</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.27</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>97.02</ns0:cell><ns0:cell>94.92</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>95.75</ns0:cell><ns0:cell>96.17</ns0:cell><ns0:cell>96.52</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>97.87</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.55</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>99.01</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>99.56</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.69</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>98.80</ns0:cell><ns0:cell>99.60</ns0:cell><ns0:cell>99.19</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.20</ns0:cell><ns0:cell>99.36</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.42</ns0:cell><ns0:cell>97.86</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.29</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>98.85</ns0:cell><ns0:cell>96.55</ns0:cell><ns0:cell>98.08</ns0:cell><ns0:cell>98.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.39</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.54</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>96.59</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>96.15</ns0:cell><ns0:cell>97.95</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.25</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.92</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>99.18</ns0:cell><ns0:cell>97.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>98.37</ns0:cell><ns0:cell>98.94</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>98.69</ns0:cell><ns0:cell>98.26</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.82</ns0:cell><ns0:cell>97.83</ns0:cell><ns0:cell>98.52</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>97.93</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.69</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>98.43</ns0:cell></ns0:row><ns0:row><ns0:cell>H</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>100</ns0:cell><ns
0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.33</ns0:cell></ns0:row><ns0:row><ns0:cell>J</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.66</ns0:cell></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell></ns0:row><ns0:row><ns0:cell>L</ns0:cell><ns0:cell>97.41</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>98.79</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>96.23</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.58</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>97.10</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.83</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>98.35</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>98.77</ns0:cell><ns0:cell>97.94</ns0:cell><ns0:cell>96.28</ns0:cell><ns0:cell>97.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Q</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>99.75</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.59</ns0:cell><ns0:cell>97.50</ns0:cell><ns0:cell>99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.42</ns0:cell></ns0:row><ns0:row><ns0:cell>T</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.73</ns0:cell><ns0:cell>97.47</ns0:cell><ns0:cell>98.31</ns0:cell><ns0:cell>97.98</ns0:cell></ns0:row><ns0:row><ns0:cell>U</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97.43</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.80</ns0:cell></ns0:row><ns0:row><ns0:cell>V</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.67</ns0:cell><ns0:cell>98.22</ns0:cell><ns0:cell>98.47</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>99.67</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.15</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>97.90</ns0:cell><ns0:cell>98.33</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>98.74</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.66</ns0:cell></ns0:row><ns0:row><ns0:cell>Z</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>98.97</ns0:cell><ns0:cell>98.18</ns0:cell><ns0:cell>99.32</ns0:cell><ns0:cell>98.92</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>98.82</ns0:cell></ns0:row></ns0:table><ns0:note>14/18PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Four-character dataset with five-fold text recognition testing on a CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>References</ns0:cell><ns0:cell>No. of Characters</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>Faster R-CNN</ns0:cell><ns0:cell>Accuracy= 98.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Du et al. (2017)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell>Accuracy=97.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Chen et al. (2019)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>Selective D-CNN</ns0:cell><ns0:cell>Success rate= 95.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik et al. (2021)</ns0:cell><ns0:cell>Different</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>Accuracy= 80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Different</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>Precision=98.99%</ns0:cell></ns0:row><ns0:row><ns0:cell>Bostik and Klecka (2018)</ns0:cell><ns0:cell /><ns0:cell>SVN</ns0:cell><ns0:cell>99.80%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Feed forward-Net</ns0:cell><ns0:cell>98.79%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Study</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>Skip-CNN with 5-Fold Validation Accuracy= 98.82%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Accuracy=85.52%</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:3:0:CHECK 24 Dec 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Reviewer 4 Although the author has improved the manuscript, I still have some concerns on the benchmark models. I check the references of the investigated models, and find the benchmark models are not convincing. Here, I suggest authors to compare the proposed algorithm with some recent state-of-art algorithms from some top tier journals (IEEE Transactions on Information Forensics and Security, Computers & Security) in computer security. Validity of the findings Authors have to compare the proposed algorithms with some state-of-art methods in some top journals. I find many work on the CAPTCHA solver framework from the google scholar as below. Response Dear reviewer, thank you for considering our manuscript to review. We have looked at the suggested articles for state-of-the-art comparison; however, after careful analysis, we cannot perform comparison due to the different nature of data sets used, different parameter settings, and different research problems considered in those studies. The detailed comments are given below. However, we have discussed those studies in our related work. Regarding our current benchmark, please see Table 6, the compared studies are reliable, and the targeted datasets are open research problems. Comments Responses 1. Alqahtani, Fatmah H., and Fawaz A. Alsulaiman. 'Is image-based CAPTCHA secure against attacks based on machine learning? An experimental study.' Computers & Security 88 (2020): 101635. Thanks for your concern. The suggested study is undoubtedly a good and valid study to propose machine learning techniques to break the Recaptcha. Although, it is being proposed on a google dataset of reCAPTCHA. It is a different benchmark of the dataset to address it. The dataset contains nine images for each challenge and a hint of each challenge to train and predict efficiently. It does not match even in terms of input data and their predicting results. However, the proposed study enhances and makes more promising to the CNN-based CAPTCHA images based breaking algorithms by introducing the 5-folds validation and skipping connections. Our proposed study contains two types of text-based CAPTCHAs’. The 4 and 5 digit images that are not comparable with the suggested study use another type of CAPTCHA data known as reCAPTCHAs. 2. Ouyang, Zhiyou, et al. 'A cloud endpoint coordinating CAPTCHA based on multi-view stacking ensemble.' Computers & Security 103 (2021): 102178. Thanks for your concern. The suggested study contains the drag and drop-based image CAPTCHA containing another image type, not even the text-based CAPTCHAs. Similarly, as explained above, our proposed study does not contain or use any similar dataset type. It is another type of CAPTCHAs that are being worked on and solved. 3. Osadchy, Margarita, et al. 'No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA generation.' IEEE Transactions on Information Forensics and Security 12.11 (2017): 2640-2653. Thanks for your concern. The suggested study contains the adversarial noise-based CAPTCHA image generative model that keeps the images as easy as to recognize by the human eye only. The suggested study is another type of deep learning model that makes another strategy to deceive the deep learning-based CAPTCHA breaking methods by adding adversarial noise. Therefore, this data and results are also not comparable with our proposed study. 4. Shi, Chenghui, et al. 'Adversarial captchas.' IEEE Transactions on Cybernetics (2021). Thanks for your concern. 
The suggested study is well established by introducing many adversarial noise generation methods and using 12 different image processing techniques. Therefore, this study mainly introduces the adversarial CAPTCHA generation methods and then breaks them. However, our proposed study is different and targets the different scenarios in text-based image-CAPTCHAs. Concluding remarks on suggestions of reviewer: Thanks for all your given suggestions and concerns about the CAPTCHA-based studies. However, these different studies target the problems that occur in CAPTCHA breakings, such as suggested studies finding three different kinds of problems. Even these three kinds of studies are not comparable with each other. Similarly, our proposed study targets different types of text-based CAPTCHA problems by giving future directions to validate the coming Deep learning models to validate using Folding methods and to use the skip-connections that will make the proposed CNN results stronger. The proposed study-based skip connections and folding methods are also robust that could be used in any field of machine learning to deliver more precise and robust predicting models. However, as per your suggestion, the suggested studies discussed in the related work highlight the state-of-the-art methods of these specific types of reCAPTCHA and Adversarial Noise-based CAPTCHA generations. "
Here is a paper. Please give your review comments after reading it.
334
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A Completely Automated Public Turing Test to tell Computers and Humans Apart (CAPTCHA) is used in web systems to secure authentication purposes; it may break using Optical Character Recognition (OCR) type methods. CAPTCHA breakers make web systems highly insecure. However, several techniques to break CAPTCHA suggest CAPTCHA designers about their designed CAPTCHA's need improvement to prevent computer visionbased malicious attacks. This research primarily used deep learning methods to break state-of-the-art CAPTCHA codes; however, the validation scheme and conventional Convolutional Neural Network (CNN) design still need more confident validation and multiaspect covering feature schemes. Several public datasets are available of text-based CAPTCHa, including Kaggle and other dataset repositories where self-generation of CAPTCHA datasets are available. The previous studies are dataset-specific only and cannot perform well on other CAPTCHA's. Therefore, the proposed study uses two publicly available datasets of 4-and 5-character text-based CAPTCHA images to propose a CAPTCHA solver. Furthermore, the proposed study used a skip-connection-based CNN model to solve a CAPTCHA. The proposed research employed 5-folds on data that delivers 10 Different CNN models on two datasets with promising results compared to the other studies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The first secure and fully automated mechanism, named CAPTCHA, was developed in 2000. Alta Vista first used the term CAPTCHA in 1997. It reduces spamming by <ns0:ref type='bibr'>95% Baird and Popat (2002)</ns0:ref>. CAPTCHA is also known as a reverse Turing test. The Turing test was the first test to distinguish human, and machine Von <ns0:ref type='bibr' target='#b51'>Ahn et al. (2003)</ns0:ref>. It was developed to determine whether a user was a human or a machine. It increases efficiency against different attacks that seek websites <ns0:ref type='bibr' target='#b15'>Danchev (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b37'>Obimbo et al. (2013)</ns0:ref>.</ns0:p><ns0:p>It is said that CAPTCHA should be generic such that any human can easily interpret and solve it and difficult for machines to recognize it <ns0:ref type='bibr' target='#b6'>Bostik and Klecka (2018)</ns0:ref>. To protect against robust malicious attacks, various security authentication methods have been developed <ns0:ref type='bibr' target='#b26'>Goswami et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b40'>Priya and Karthik (2013)</ns0:ref>, <ns0:ref type='bibr' target='#b3'>Azad and Jain (2013)</ns0:ref>. CAPTCHA can be used for authentication in login forms, spam text reducer, e.g., in email, as a secret graphical key to log in for email. In this way, a spam-bot would not be able to recognize and log in to the email Sudarshan <ns0:ref type='bibr' target='#b49'>Soni and Bonde (2017)</ns0:ref>. However, recent advancements make the CAPTCHA's designs to be at high risk where the current gaps and robustness of models that are the concern is discussed in depth <ns0:ref type='bibr' target='#b46'>(Roshanbin and Miller, 2013)</ns0:ref>. Similarly, the image, text, colorful CAPTCHA's, and other types of CAPTCHA's are being attacked by various malicious attacks. 
Manuscript to be reviewed Computer Science and confidence <ns0:ref type='bibr' target='#b59'>(Xu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Many prevention strategies against malicious attacks have been adopted in recent years, such as cloud computing-based voice-processing <ns0:ref type='bibr'>Gao et al. (2020b,a)</ns0:ref>, mathematical and logical puzzles, and text and image recognition tasks <ns0:ref type='bibr' target='#b22'>Gao et al. (2020c)</ns0:ref>. Text-based authentication methods are mostly used due to their easier interpretation, and implementation <ns0:ref type='bibr' target='#b31'>Madar et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>Gheisari et al. (2021)</ns0:ref>. A set of rules may define a kind of automated creation of CAPTCHA-solving tasks. It leads to easy API creation and usage for security web developers to make more mature CAPTCHAs <ns0:ref type='bibr' target='#b8'>Bursztein et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Cruz-Perez et al. (2012)</ns0:ref>. The text-based CAPTCHA is used for Optical Character Recognition (OCR). OCR is strong enough to solve text-based CAPTCHA challenges. However, it still has challenges regarding its robustness in solving CAPTCHA problems <ns0:ref type='bibr' target='#b28'>Kaur and Behal (2015)</ns0:ref>. These CAPTCHA challenges are extensive with ongoing modern technologies. Machines can solve them, but humans cannot. These automated, complex CAPTCHA-creating tools can be broken down using various OCR techniques. Some studies claim that they can break any CAPTCHA with high efficiency. The existing work also recommends strategies to increase the keyword size and another method of crossing lines from keywords that use only straight lines and a horizontal direction. It can break easily using different transformations, such as the Hough transformation. It is also suggested that single-character recognition is used from various angles, rotations, and views to make more robust and challenging CAPTCHAs. <ns0:ref type='bibr' target='#b7'>Bursztein et al. (2011)</ns0:ref>.</ns0:p><ns0:p>The concept of reCAPTCHA was introduced in 2008. It was initially a rough estimation. It was later improved and was owned by Google to decrease the time taken to solve it. The un-solvable reCAPTCHA's were then considered to be a new challenge for OCRs Von <ns0:ref type='bibr' target='#b52'>Ahn et al. (2008)</ns0:ref>. The usage of computer vision and image processing as a CAPTCHA solver or breaker was increased if segmentation was performed efficiently <ns0:ref type='bibr' target='#b24'>George et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b61'>Ye et al. (2018)</ns0:ref>. The main objective or purpose of making a CAPTCHA solver is to protect CAPTCHA breakers. By looking into CAPTCHA solvers, more challenging CAPTCHAs can be generated, and they may lead to a more secure web that is protected against malicious attacks <ns0:ref type='bibr' target='#b41'>Rai et al. (2021)</ns0:ref>. A benchmark or suggestion for CAPTCHA creation was given by <ns0:ref type='bibr'>Chellapilla et al.:</ns0:ref> Humans should solve the given CAPTCHA challenge with a 90% success rate, while machines ideally solve only one in every 10,000 CAPTCHAs <ns0:ref type='bibr' target='#b11'>Chellapilla et al. (2005)</ns0:ref>.</ns0:p><ns0:p>Modern AI yields CAPTCHAs that can solve problems in a few seconds. Therefore, creating CAPTCHAs that are easily interpretable for humans and unsolvable for machines is an open challenge. 
It is also observed that humans invest a substantial amount of time daily solving CAPTCHAs Von <ns0:ref type='bibr' target='#b52'>Ahn et al. (2008)</ns0:ref>. Therefore, reducing the amount of time humans need to solve them is another challenge. Various considerations need to be made, including text familiarity, visual appearance, distortions, etc. Commonly in text-based CAPTCHAs, the well-recognized languages are used that have many dictionaries that make them easily breakable. Therefore, we may need to make unfamiliar text from common languages such as phonetic text is not ordinary language that is pronounceable <ns0:ref type='bibr' target='#b54'>Wang and Bentley (2006)</ns0:ref>. Similarly, the color of the foreground and the background of CAPTCHA images is also an essential factor, as many people have low or normal eyesight or may not see them. Therefore, a visually appealing foreground and background with distinguishing colors are recommended when creating CAPTCHAs. Distortions from periodic or random manners, such as affine transformations, scaling, and the rotation of specific angles, are needed. These distortions are solvable for computers and humans. If the CAPTCHAs become unsolvable, then multiple attempts by a user are needed to read and solve them <ns0:ref type='bibr' target='#b60'>Yan and El Ahmad (2008)</ns0:ref>.</ns0:p><ns0:p>In current times, Deep Convolutional neural networks (DCNN) are used in many medical <ns0:ref type='bibr' target='#b34'>Meraj et al. (2019)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Manzoor et al. (2022)</ns0:ref>, <ns0:ref type='bibr' target='#b32'>Mahum et al. (2021)</ns0:ref> and other real-life recognition applications <ns0:ref type='bibr' target='#b35'>Namasudra (2020)</ns0:ref> as well as insecurity threat solutions <ns0:ref type='bibr' target='#b30'>Lal et al. (2021)</ns0:ref>. The security threats in IoT and many other aspects can also be controlled using blockchain methods <ns0:ref type='bibr' target='#b36'>Namasudra et al. (2021)</ns0:ref>. Utilizing deep learning, the proposed study uses various image processing operations to normalize text-based image datasets. After normalizing the data, a single-word-caption-based OCR was designed with skipping connections. These skipping connections connect previous pictorial information to various outputs in simple Convolutional Neural Networks (CNNs), which possess visual information in the next layer only <ns0:ref type='bibr' target='#b1'>Ahn and Yim (2020)</ns0:ref>.</ns0:p><ns0:p>The main contribution of this research work is as follows:</ns0:p><ns0:p>&#8226; A skipping-connection-based CNN framework is proposed that covers multiple aspects of features.</ns0:p><ns0:p>&#8226; A 5-fold validation scheme is used in a deep-learning-based network to remove bias, if any, which leads to more promising results.</ns0:p></ns0:div> <ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; The data are normalized using various image processing steps to make it more understandable for the deep learning model.</ns0:p></ns0:div> <ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Today in the growing and dominant field of AI, many real-life problems have been solved with the help of deep learning and other evolutionary optimized intelligent algorithms <ns0:ref type='bibr' target='#b44'>Rauf et al. 
(2021)</ns0:ref>, <ns0:ref type='bibr' target='#b45'>Rauf et al. (2020)</ns0:ref>. Various problems of different aspects using DL methods are solved, such as energy consumption analysis <ns0:ref type='bibr' target='#b21'>Gao et al. (2020b)</ns0:ref>, time scheduling of resources to avoid time and resources wastage <ns0:ref type='bibr' target='#b22'>Gao et al. (2020c)</ns0:ref>. Similarly, in cybersecurity, a CAPTCHA solver has provided many automated AI solutions, except OCR. Multiple proposed CNN models have used various types of CAPTCHA datasets to solve CAPTCHAs. The collected datasets have been divided into three categories: selection-, slide-, and click-based. Ten famous CAPTCHAs were collected from google.com, tencent.com, etc. The breaking rate of these CAPTCHAs was compared. CAPTCHA design flaws that may help to break CAPTCHAs easily were also investigated. The underground market used to solve CAPTCHAs was also investigated, and findings concerning scale, the commercial sizing of keywords, and their impact on CAPTCHas were reported <ns0:ref type='bibr' target='#b58'>Weng et al. (2019)</ns0:ref>. A proposed sparsity-integrated CNN used constraints to deactivate the fully connected connections in CNN. It ultimately increased the accuracy results compared to transfer learning, and simple CNN solutions <ns0:ref type='bibr' target='#b18'>Ferreira et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Image processing operations regarding erosion, binarization, and smoothing filters were performed for data normalization, where adhesion-character-based features were introduced and fed to a neural network for character recognition <ns0:ref type='bibr' target='#b27'>Hua and Guoqin (2017)</ns0:ref>. The backpropagation method was claimed as a better approach for image-based CAPTCHA recognition. It has also been said that CAPTCHA has become the normal, secure authentication method in the majority of websites and that image-based CAPTCHAs are more valuable than text-based CAPTCHAs <ns0:ref type='bibr' target='#b47'>Saroha and Gill (2021)</ns0:ref>. Template-based matching is performed to solve text-based CAPTCHAs, and preprocessing is also performed using Hough transformation and skeletonization. Features based on edge points are also extracted, and the points of reference with the most potential are taken . It is also claimed that the extracted features are invariant to position, language, and shapes. Therefore, it can be used for any merged, rotated, and other variation-based CAPTCHAs WANG (2017).</ns0:p><ns0:p>PayPal CAPTCHAs have been solved using correlation, and Principal Component Analysis (PCA) approaches. The primary steps of these studies include preprocessing, segmentation, and the recognition of characters. A success rate of 90% was reported using correlation analysis of PCA and using PCA only increased the efficiency to 97% <ns0:ref type='bibr' target='#b42'>Rathoura and Bhatiab (2018)</ns0:ref>. A Faster Recurrent Neural Network (F-RNN) has been proposed to detect CAPTCHAs. It was suggested that the depth of a network could increase the mean average precision value of CAPTCHA solvers, and experimental results showed that feature maps of a network could be obtained from convolutional layers <ns0:ref type='bibr' target='#b17'>Du et al. (2017)</ns0:ref>. Data creation and cracking have also been used in some studies. For visually impaired people, there should be solutions to CAPTCHAs. 
A CNN network named CAPTCHANet has been proposed.</ns0:p><ns0:p>A 10-layer network was designed and was improved later with training strategies. A new CAPTCHA using Chinese characters was also created, and it removed the imbalance issue of class for model training.</ns0:p><ns0:p>A statistical evaluation led to a higher success rate <ns0:ref type='bibr' target='#b62'>Zhang et al. (2021)</ns0:ref>. A data selection approach automatically selected data for training purposes. The data augmenter later created four types of noise to make CAPTCHAs difficult for machines to break. However, the reported results showed that, in combination with the proposed preprocessing method, the results were improved to 5.69% <ns0:ref type='bibr' target='#b10'>Che et al. (2021)</ns0:ref>. Some recent studies on CAPTCHA recognition are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The pre-trained model of object recognition has an excellent structural CNN. A similar study used a well-known VGG network and improved the structure using focal loss <ns0:ref type='bibr' target='#b56'>Wang and Shi (2021)</ns0:ref>. The image processing operations generated complex data in text-based CAPTCHAs, but there may be a high risk of breaking CAPTCHAs using common languages. One study used the Python Pillow library to create Bengali-, Tamil-, and Hindi-language-based CAPTCHAs. These language-based CAPTCHAs were solved using D-CNN, which proved that the model was also confined by these three languages <ns0:ref type='bibr' target='#b0'>Ahmed and Anand (2021)</ns0:ref>. A new, automatic CAPTCHA creating and solving technique using a simple 15-layer CNN was proposed to remove the manual annotation problem.</ns0:p><ns0:p>Various fine-tuning techniques have been used to break 5-digit CAPTCHAs and have achieved 80% classification accuracies <ns0:ref type='bibr' target='#b5'>Bostik et al. (2021)</ns0:ref>. A privately collected dataset was used in a CNN approach with 7 layers that utilize correlated features of text-based CAPTCHAs. It achieved a 99.7% accuracy using its image database, and CNN architecture <ns0:ref type='bibr' target='#b29'>Kumar and Singh (2021)</ns0:ref>. Another similar approach was based on handwritten digit recognition. The introduction of a CNN was initially discussed, and a CNN was proposed for twisted and noise-added CAPTCHA images <ns0:ref type='bibr' target='#b9'>Cao (2021)</ns0:ref>. A deep, separable CNN for four-word CAPTCHA recognition achieved 100% accurate results with the fine-tuning of a separable CNN concerning their depth. A fine-tuned, pre-trained model architecture was used with the proposed architecture and significantly reduced the training parameters with increased efficiency <ns0:ref type='bibr' target='#b16'>Dankwa and Yang (2021)</ns0:ref>.</ns0:p><ns0:p>A visual-reasoning CAPTCHA (known as a Visual Turing Test (VTT)) has been used in security authentication methods, and it was easy to break using holistic and modular attacks. One study focused on a visual-reasoning CAPTCHA and showed an accuracy of 67.3% against holistic CAPTCHAs and an accuracy of 88% against VTT CAPTCHAs. Future directions were to design VTT CAPTCHAs to protect against these malicious attacks <ns0:ref type='bibr' target='#b23'>Gao et al. (2021)</ns0:ref>. To provide a more secure system in text-based CAPTCHAs, a CAPTCHA defense algorithm was proposed. It used a multi-character CAPTCHA generator using an adversarial perturbation method. 
The reported results showed that complex CAPTCHA generation reduces the accuracy of the CAPTCHA breaker up to 0.06% <ns0:ref type='bibr' target='#b53'>Wang et al. (2021a)</ns0:ref>. A Generative Adversarial Network (GAN) based simplification of CAPTCHA images was adopted before segmentation and classification; the presented CAPTCHA solver achieved a 96% character recognition success rate, while the other evaluated CAPTCHA schemes showed a 74% recognition rate. These suggestions may help CAPTCHA designers to generate improved CAPTCHAs <ns0:ref type='bibr' target='#b55'>Wang et al. (2021b)</ns0:ref>. A binary image-based CAPTCHA recognition framework was proposed that generates a certain number of image copies from a given CAPTCHA image to train a CNN model. On the Weibo dataset, the 4-character recognition accuracy on the testing set was 92.68%, and the Gregwar dataset achieved a 54.20% accuracy on the testing set <ns0:ref type='bibr' target='#b50'>Thobhani et al. (2020)</ns0:ref>.</ns0:p><ns0:p>reCAPTCHA images are a specific type of security layer used by some sites and serve as a benchmark set by Google in response to its broken challenges. This kind of CAPTCHA presents a set of images, and humans have to pick out the matching images, which people can do efficiently. Machine learning-based studies nowadays also discuss and work on these kinds of CAPTCHA images <ns0:ref type='bibr' target='#b2'>(Alqahtani and Alsulaiman, 2020)</ns0:ref>. Drag-and-drop image CAPTCHA-based security schemes are also applied nowadays: a part of the image is removed and must be dragged to fill the blank at a particular location and shape. However, such schemes can also be broken by finding the empty areas using neighborhood differences of pixels. Even so, they are good enough to prevent many malicious attacks <ns0:ref type='bibr' target='#b39'>(Ouyang et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Adversarial attacks are a rising challenge that aims to deceive deep learning models. To prevent deep-learning-based CAPTCHA attacks, many different kinds of adversarial noise are introduced into security questions that present similar-looking images, among which the user must find the target. Noise-perturbed sample images are generated and shown in the puzzle and can be found by the human eye with careful attention <ns0:ref type='bibr' target='#b48'>(Shi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b38'>Osadchy et al., 2017)</ns0:ref>. However, these approaches require extra effort from the user, because noise-perturbed images can consume more of the user's time, and some adversarial noise-generation methods can produce samples that are unsolvable for some real-time users.</ns0:p><ns0:p>The studies discussed above provide information about text-based CAPTCHAs as well as other types of CAPTCHAs. Most studies used DL methods to break CAPTCHAs, and solving time and unsolvable CAPTCHAs are still open challenges. More efficient DL methods are needed that remain robust even on datasets they were not designed for. Many of these studies use locally developed datasets, which makes the proposed approaches less robust.
However, publicly available datasets could be used so that they provide more robust and confident solutions.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>Recent deep-learning-based studies have shown excellent results in solving CAPTCHAs. However, simple CNN approaches may lose information from incoming features as they are pooled between convolution and pooling layers. Therefore, the proposed study utilizes a skip connection, and, to further remove bias, a 5-fold validation approach is adopted. The proposed study presents a CAPTCHA solver framework built from several steps, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The data are normalized using various image processing steps to make them more understandable for the deep learning model. The normalized data are segmented per character so that an OCR-type deep learning model can be trained to detect each character from every aspect. Finally, the 5-fold validation method is reported and yields promising results.</ns0:p><ns0:p>The two datasets used for CAPTCHA recognition contain 4-character and 5-character CAPTCHAs. The 5-character dataset has a horizontal line through it with overlapping text; segmenting and recognizing such text is challenging because of this lack of clarity. The 4-character dataset is less challenging to segment, as no line intersects the characters, although character rotation and scaling still need to be considered. Their preprocessing and segmentation are explained in the next section, and the datasets are explored in detail before and after preprocessing and segmentation.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>Two public datasets available on Kaggle are used in the proposed study, containing 5-character and 4-character CAPTCHAs with different numbers of numeric and alphabetic characters. There are 1040 images in the five-character dataset (d 1 ) and 9955 images in the 4-character dataset (d 2 ), with 19 character types in d 1 and 32 character types in d 2 . Their respective dimensions and extension details before and after segmentation are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, and the frequency of each character in both datasets is shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>.</ns0:p><ns0:p>The frequency of each character varies in both datasets, and the number of character types also varies. Although the d 2 dataset has no complex inner line intersection or merging of text, it contains more character types with higher frequencies, whereas the d 1 dataset has more complex data with fewer character types and lower frequencies than d 2 . Initially, d 1 has dimensions of 50 x 200 x 3, where 50 is the number of rows, 200 is the number of columns, and 3 is the color depth of the given images; d 2 has image dimensions of 24 x 72 x 3, where 24 is the rows, 72 is the columns, and 3 is the color depth. The characters appear at almost the same locations in both datasets, so they can be manually cropped to train the model on each character in isolated form. However, the cropped dimensions may vary for each character and need to be resized to a common size. The input images of both datasets were in Portable Graphic Format (PNG) and did not need to be converted. After segmenting both dataset images, each character is resized to 20 x 24 in both datasets.
This size covers each aspect of the visual binary patterns of each character. The summarized details of the datasets used in the proposed study, before and after resizing, are shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. A fixed per-character size is needed because, when the characters are segmented from the given dataset images, their sizes vary from dataset to dataset and from character type to character type.</ns0:p><ns0:p>Therefore, the optimal size at which the image data of each character is not lost is 20 rows by 24 columns, which is set for every character.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing and Segmentation</ns0:head><ns0:p>d 1 dataset images do not need any complex image processing to segment them into a normalized form. Each input image has three color channels: Red, Green, and Blue. Let Image I (x,y) be the input RGB image, as shown in Eq. 1. To convert these input images into grayscale, Eq. 2 is applied.</ns0:p><ns0:formula xml:id='formula_0'>Input Image = I (x,y)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In Eq. 1, I is the given image, and x and y represent the rows and columns. The grayscale conversion is performed using Eq. 2:</ns0:p><ns0:formula xml:id='formula_1'>Grey (x, y) &#8592; 0.2989 * R(x, y) + 0.5870 * G(x, y) + 0.1140 * B(x, y)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In Eq. 2, x and y index the pixel being processed, and R, G, and B are the red, green, and blue values of that pixel. The weighting constants map the three channel values to a new grey-level value in the range 0-255, and Grey (x, y) is the output grey level of the given pixel. After converting to grey level, binarization is performed using Bradley's method, which computes a neighborhood-based threshold to convert the two-dimensional grey-level matrix into 1 and 0 values. The neighborhood size is computed using Eq. 3.</ns0:p><ns0:formula xml:id='formula_2'>B (x, y) &#8592; 2 * &#8970;size(Grey (x, y)) / 16&#8971; + 1<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In Eq. 3, the output B (x, y) is the neighborhood-based threshold window, which corresponds to roughly the 1/8 th neighborhood of the given Grey (x, y) image; the floor is used to obtain a lower value and avoid a miscalculated threshold. This calculated threshold is also called the adaptive threshold method, and the neighborhood value can be changed to increase or decrease the binarization strength for a given image. After obtaining a binary image, its complement is taken to highlight the object in the image, computed as a simple inverse operation as shown in Eq. 4.</ns0:p><ns0:formula xml:id='formula_3'>C (x, y) &#8592; 1 &#8722; B(x, y)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>In Eq. 4, the 0 and 1 values at each pixel position x and y are inverted. The inverted image is used directly for character isolation in the case of the d 2 dataset. In the case of d 1 , further erosion is needed. Erosion is an operation that uses a structuring element of a given shape, and the respective shape is used to remove pixels from a given binary image. In the case of a CAPTCHA image, the intersecting line is removed using a line-type structuring element.
The line-type structuring element uses a neighborhood operation. In the proposed study, a line of length 5 with an angle of 90 degrees is used, and the intersecting line through each character in the binary image is removed, as can be seen in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, row 1. The erosion with this length-5, 90-degree element is calculated as shown in Eq. 5.</ns0:p><ns0:formula xml:id='formula_4'>C &#8854; L = {x &#8712; E | L x &#8838; C}<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In Eq. 5, C is the binary image, L is the line-type structuring element, E is the set of pixel positions, and L x is the structuring element translated to position x; the eroded image keeps only those positions at which the translated element fits entirely inside C. After erosion, some images contain noise that may lead to the wrong interpretation of a character. Therefore, to remove noise, a neighborhood operation is again utilized: 8-connected neighborhoods are examined against a threshold of 20 foreground pixels, since noisy regions remain smaller than the characters in the binary image. To apply this, an area calculation over the pixels is necessary; by iterating the 8-connected neighborhood operation, regions of fewer than 20 pixels are removed, and the larger regions remain in the output image. The area of a region, as the sum of its 1-valued pixels, is calculated as shown in Eq. 6.</ns0:p><ns0:formula xml:id='formula_5'>S (x, y) &#8592; j &#8721; i=1 max(B x |x i &#8722; x j |, B x |y i &#8722; y j |)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In Eq. 6, the rows (i) and columns (j) of the eroded image B x are used to calculate the resultant matrix by extracting each pixel and counting its 1-values from the binary image. The max returns only the values that are summed to obtain an area, which is then compared with the threshold value T.</ns0:p><ns0:p>The noise is then removed, and a final isolation step separates each normalized character. A minimal sketch of this preprocessing chain is given below.</ns0:p></ns0:div>
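To make the preprocessing chain described in this section concrete, the following is a minimal, illustrative sketch in Python using OpenCV and NumPy; it is not the authors' original code. The block size of roughly 1/8 of the image, the length-5 vertical (90-degree) line element, and the 20-pixel area threshold follow the text above, while the adaptive-threshold constant and the file handling are assumptions made only for this sketch.

```python
import cv2
import numpy as np

def preprocess_captcha(path, min_area=20):
    """Grayscale -> adaptive (Bradley-style) threshold -> complement -> line erosion -> area filtering."""
    img = cv2.imread(path)                                   # input color image I(x, y)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # Eq. 2: weighted conversion to grey levels

    # Eq. 3: neighborhood (block) size of roughly 1/8 of the image, forced to be odd
    block = 2 * (min(grey.shape) // 16) + 1
    binary = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block, 10)

    comp = cv2.bitwise_not(binary)                           # Eq. 4: complement highlights the characters

    # Eq. 5: erosion with a thin vertical (90-degree) line-type structuring element of length 5
    line_se = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5))
    eroded = cv2.erode(comp, line_se)

    # Eq. 6: keep only 8-connected regions whose area reaches the 20-pixel threshold
    n, labels, stats, _ = cv2.connectedComponentsWithStats(eroded, connectivity=8)
    cleaned = np.zeros_like(eroded)
    for k in range(1, n):                                    # label 0 is the background
        if stats[k, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == k] = 255
    return cleaned
```

The cleaned binary image is then cropped character by character at the (almost fixed) character positions, and each crop is resized to 20 x 24, as described in the Datasets section.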
<ns0:div><ns0:head>CNN Training for Text Recognition</ns0:head><ns0:formula xml:id='formula_6'>convo (I,W ) x,y = N C &#8721; a=1 N R &#8721; b=1 W a,b * I x+a&#8722;1,y+b&#8722;1<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Eq. 7 formulates the convolution operation for a 2D image I x,y , where x and y are the rows and columns of the image, and W a,b is the convolving window. The window is iteratively multiplied with the corresponding elements of the image, and the result is returned in convo (I,W ) x,y ; N C and N R are the numbers of columns and rows of the window, with a indexing columns and b indexing rows, both starting from 1.</ns0:p></ns0:div> <ns0:div><ns0:head>Batch Normalization Layer</ns0:head><ns0:p>Its basic operation is to normalize a single component value, which can be represented as</ns0:p><ns0:formula xml:id='formula_7'>Bat &#8242; = (a &#8722; M[a]) / &#8730;var(a)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>The newly calculated value is Bat &#8242; , a is a given input value, M[a] is the mean of that value, and var(a) in the denominator is its variance. The normalized value is then refined layer by layer into a final value with the help of the learnable scale and shift parameters &#947; and &#946;, as shown below:</ns0:p><ns0:formula xml:id='formula_8'>Bat &#8242;&#8242; = &#947; * Bat &#8242; + &#946;<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>This extended batch normalization formulation is refined in each layer using the previous Bat &#8242; value.</ns0:p></ns0:div> <ns0:div><ns0:head>ReLU</ns0:head><ns0:p>ReLU excludes input values that are negative and retains positive values. Its equation can be written as</ns0:p><ns0:formula xml:id='formula_9'>reLU(x) = x if x &gt; 0; 0 if x &#8804; 0<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where x is the input value: the value is passed through directly if it is greater than zero, and negative values are replaced with 0.</ns0:p></ns0:div> <ns0:div><ns0:head>Skip-Connection</ns0:head><ns0:p>The skip connection essentially carries earlier pictorial information forward to later convolved feature maps of the network. In the proposed network, the ReLU-1 activations are saved and, after the 2nd and 3rd ReLU layers, this saved information is combined with the main path using an addition layer. In this way, the skip connection distinguishes the network from conventional deep learning approaches for this character classification task. The visualization of this added feature information is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Average Pooling</ns0:head><ns0:p>The average pooling layer is straightforward: a window of size m x n, where m represents the rows and n the columns, slides over the input from the previous layer or node, and its movement in the horizontal and vertical directions is controlled by the stride parameters.</ns0:p><ns0:p>Many previously introduced deep-learning-based algorithms, as can be seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, ultimately use CNN-based methods. However, traditional CNN and transfer learning approaches may lose important information when they pool down the incoming feature maps from previous layers. Similarly, evaluation with a single conventional training/validation/testing split may be biased because far less data is used for testing than for training. Therefore, the proposed study uses one skip connection while maintaining the other convolution blocks and, inspired by the K-fold validation method, splits the data of both datasets into five folds. After splitting, the folds are trained and tested in sequence, and the final accuracy is reported as the mean of the five fold results. The proposed CNN contains 16 layers in total and includes three major blocks of convolutional, batch normalization, and ReLU layers. After these nine layers, an addition layer combines the skip connection with the output of the third ReLU layer.</ns0:p><ns0:p>Average pooling, fully connected, and softmax layers are added after the skip connection. All layer parameters and details are shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>; a minimal sketch of this architecture and the five-fold scheme is given below.</ns0:p></ns0:div>
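As a further illustration, the following is a minimal PyTorch sketch of the skip-connection CNN summarized in Table 3, together with the outline of the five-fold scheme; it is not the authors' original implementation. The layer sizes are taken from Table 3 (with the stride and padding of the skip convolution assumed so that the feature maps align), training details such as the optimizer and number of epochs are omitted, and the number of output classes is switched between 19 and 32 depending on the dataset.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

class SkipCNN(nn.Module):
    """Sketch of the 16-layer skip-connection CNN of Table 3 for 24 x 20 character images."""
    def __init__(self, num_classes=19):                 # 19 classes for one dataset, 32 for the other
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 8, 3, stride=1, padding=1),
                                    nn.BatchNorm2d(8), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(8, 16, 3, stride=2, padding=1),
                                    nn.BatchNorm2d(16), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=1, padding=1),
                                    nn.BatchNorm2d(32), nn.ReLU())
        # Skip path: 1x1 convolution that matches the depth and spatial size of block3's output
        self.skip = nn.Conv2d(8, 32, 1, stride=2, padding=0)
        self.pool = nn.AvgPool2d(2, stride=2)
        self.fc = nn.Linear(32 * 6 * 5, num_classes)     # 960 inputs, as listed in Table 3

    def forward(self, x):                                # x: (batch, 1, 24, 20)
        r1 = self.block1(x)                              # ReLU-1 activations saved for the skip path
        out = self.block3(self.block2(r1))               # main path
        out = out + self.skip(r1)                        # addition layer joins skip and main paths
        out = self.pool(out)                             # 32 x 6 x 5 feature map
        return self.fc(out.flatten(1))                   # softmax is applied inside the loss

# Five-fold skeleton: train a fresh model on four folds, test on the held-out fold,
# and report the mean of the five test scores, as described above.
def five_fold_indices(num_samples, seed=0):
    return list(KFold(n_splits=5, shuffle=True, random_state=seed).split(np.arange(num_samples)))
```

In each fold, a fresh SkipCNN would be trained with a cross-entropy loss (which applies the softmax internally) on the training indices and evaluated on the held-out indices, and the reported score is the mean over the five folds.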
<ns0:div><ns0:p>In Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, all learnable weights of each layer are shown. The two datasets have different numbers of output categories; therefore, the dense layer of the five-fold CNN models has 19 output classes for five of the models and 32 output classes for the other five models. The skip connection has more weights than the other convolution layers, and the learned weights of the models are compared in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. After obtaining these significant, multi-aspect features, the proposed study applies the K-fold validation technique by splitting the data into five splits. These multiple splits remove bias in the training and testing data, and the testing result is taken as the mean over all models. In this way, no data remains unused for training and no data remains untested, so the results are more reliable than those of conventional single-split CNN evaluations. The d 2 dataset has clear, well-structured elements in its segmented images, whereas in d 1 the isolated character images are less clear. Therefore, the classification results remain lower for d 1 , whereas for d 2 they remain high and usable for a CAPTCHA solver. The results of each character and dataset for each fold are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>As discussed earlier, there are two datasets in the proposed framework, with different numbers of categories and different numbers of images. Therefore, separate evaluations of the two are described in this section. First, the five-character dataset is used by five CNN models of the same architecture, each with a different split of the data; second, the four-character dataset is used by the same architecture with a different number of output classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Five-character Dataset (d 1 )</ns0:head><ns0:p>The five-character dataset has 1040 images. After segmenting each type of character, it contains 5200 character images in total, which are split into five folds and evaluated as shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. The character M, in particular, closely matches other characters, and for its recognition we may need to concentrate on structural polishing of the M inputs. To keep CAPTCHAs hard for machines while easy for humans, the characters that are recognized with higher accuracy need stronger angular and structural changes so that they do not break under any machine learning model; such a more complex structure may also be countered by further fine-tuning of a CNN, for example by increasing or decreasing the skip connections.</ns0:p><ns0:p>The accuracy values could also be improved. The four-character dataset is important in this respect because it has 32 character types and more images, and the lower accuracy on this five-character dataset may also be due to the small amount of data and limited training. Other character recognition studies report higher accuracy rates on similar datasets, but they may be less reliable than the proposed study, which uses an unbiased validation method. For further validation, the precision- and recall-based F1-scores, averaged over the five folds, are shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>; the Y character again received the highest F1-measure, 95.82%. A short sketch of how such per-character scores can be computed for each fold is given below.</ns0:p>
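As an illustration of how the per-character accuracy and F1 values reported in Tables 4 and 5 can be obtained for a single test fold, the following short sketch uses scikit-learn; it is not the authors' code, and the per-character 'accuracy' here is taken as the per-class recall obtained from the confusion matrix.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def per_character_scores(y_true, y_pred, labels):
    """Per-class accuracy (recall) and F1, the harmonic mean of precision and recall, for one fold."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    per_class_acc = cm.diagonal() / cm.sum(axis=1)       # fraction of each character recognized correctly
    per_class_f1 = f1_score(y_true, y_pred, labels=labels, average=None)
    return per_class_acc, per_class_f1

# The tabulated values would then be the mean of these per-fold scores over the five folds.
```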
<ns0:p>Evaluated with the F1-measure, the 'Y' character is again the most reliably broken character. The character 'G', which had the second-highest accuracy, also obtained the second-highest F1-score (94.52%) among all 19 characters. The overall mean F1-score across the five folds is 85.97%, which is higher than the overall accuracy. Because the F1-score is the harmonic mean of precision and recall, it can be more suitable than accuracy in this setting, as it accounts for the class imbalance between categories. Therefore, in terms of F1-score, the proposed study can be considered a more robust approach. The four-character dataset recognition results are discussed in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head>Four-Character Dataset (d 2 )</ns0:head><ns0:p>The four-character dataset has a higher frequency of each character than the five-character dataset, and the number of character types is also higher. The same five-fold splits were performed on the characters of this dataset as well. After applying the five folds, the number of characters in each fold was 7607, 7624, 7602, 7617, and 7595, respectively, and the remaining images from the 38,045 individual character images were placed in the training sets of each fold. The results of each character for each fold, and the overall mean, are given in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>, it can be observed that almost every character was recognized with about 99% accuracy. Character D had the highest mean accuracy, 99.92%, reaching 100% in four folds and 99.57% in the remaining fold. This again indicates that the proposed study removed bias, if there was any, from the dataset through its splits; making folds is therefore necessary when evaluating a deep learning network, whereas most studies use only a single split, which is a riskier approach. It is also notable that character M, which achieved the lowest accuracy in the five-character CAPTCHA, was recognized with 98.58% accuracy in this four-character CAPTCHA. We can therefore say that the structural morphology of M in the five-character CAPTCHA is better at evading a CAPTCHA solver. Looking at the F1-scores in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>, the recognition values of all characters range from 97% to 99%, and the variation across folds closely follows the fold accuracies. The mean F1-score of each character further validates the confidence of the proposed method in breaking each type of character. Class imbalance among the 32 classes is the main issue that could weaken confidence in accuracy alone; for this reason, the F1-score is also reported in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>, we can see that various studies have used different numbers of characters with self-collected and generated datasets, and comparisons have been made. Some studies considered the number of dataset characters. The accuracies are not directly comparable, as the proposed study uses the five-fold validation method while the others used only a single split. Therefore, the proposed study outperforms in each aspect, in terms of the proposed CNN framework and its validation scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed study uses a different approach to deep learning to solve CAPTCHA problems.
It proposed a skip-connection-based CNN network to break text-based CAPTCHAs, and two CAPTCHA datasets were discussed and evaluated character by character. The proposed study reports its results with confidence, as it removed any biases in the datasets using a five-fold validation method, and the results are improved compared with previous studies. The high reported results indicate that these CAPTCHA designs are at high risk, since a malicious attack could break them on the web. The proposed CNN could therefore be used to test CAPTCHA designs by attempting to solve them more reliably in real time. Furthermore, the proposed study used publicly available datasets for training and testing, making it a more robust approach to solving text-based CAPTCHAs.</ns0:p><ns0:p>Many studies have used deep learning to break CAPTCHAs, and they highlight the need to design CAPTCHAs that do not consume users' time yet resist CAPTCHA solvers; this would make web systems more secure against malicious attacks. In the future, data augmentation and more robust data creation methods can be applied to CAPTCHA datasets, where CAPTCHAs with intersecting lines are more challenging to break. Similarly, CAPTCHAs based on other local languages can also be solved using similar DL models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>However, most of them have used Deep Learning based methods to crack them due to their robustness</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The Proposed Framework for CAPTCHA Recognition for both 4 and 5 Character Datasets</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. 5 and 4 Characters' Datasets used in proposed study, their Character-wise Frequencies (Row-1: 4-Character Dataset 1 (d 2 ) ; Row-2: five-character Dataset 2 (d 1 )).</ns0:figDesc><ns0:graphic coords='7,183.09,391.65,330.86,283.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>d</ns0:head><ns0:label /><ns0:figDesc>2 needs this operation to remove the central intersecting line of each character. This dataset can be normalized to isolate each character correctly. Therefore, three steps are performed on the d 1 dataset. It is firstly converted to greyscale; it is then converted to a binary form, and their complement is lastly taken. In the d 2 dataset, 2 additional steps of erosion and area-wise selection are performed to remove the intersection line and the edges of characters. The primary steps of both datasets and each character isolation are shown in Figure 3.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Preprocessing and Isolation of characters in both datasets (Row-1: the d1 dataset, binarization, erosion, area-wise selection, and segmentation; Row-2: binarization and isolation of each character).</ns0:figDesc><ns0:graphic coords='8,162.41,464.12,372.20,183.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4.
Five-folds based trained CNN weights with their respective layers are shown that shows the proposed CNN skipping connection based variation in all CNNs' architectures The figure shows convolve 1, batch normalization, and skip connection weights. The internal layers have a more significant number of weights or learnable parameters, and the different or contributing connection weights are shown in Figure 4. Multiple types of feature maps are included in the figure.However, the weights of one dataset are shown. In the other dataset, these weights may vary slightly. The skip-connection weights have multiple features that are not in a simple convolve layer. Therefore, we can say that the proposed CNN architecture is a new way to learn multiple types of features compared to previous studies that use a traditional CNN. This connection may be used in other aspects of text and object recognition and classification.</ns0:figDesc><ns0:graphic coords='12,141.73,134.04,413.54,225.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>images. The data are then split into five folds: 931, 941, 925, 937, and 924. The remaining data difference 11/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022) Manuscript to be reviewed Computer Science is adjusted into the training set, and splitting was adjusted during the random selection of 20-20% of the total data. The training on four-fold data and the testing on the one-fold data are shown in Table</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The validation loss and validation accuracy graphs are shown for each fold of the CNN (Row-1: five-character CAPTCHA; Row-2: four-character CAPTCHA).</ns0:figDesc><ns0:graphic coords='16,141.73,63.79,413.54,196.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>a skip-CNN connection network to break text-based CAPTCHA's. Two CAPTCHA datasets are discussed 15/18 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Few Recent CAPTCHA recognition-based studies methods and their results</ns0:figDesc><ns0:table><ns0:row><ns0:cell>3/18</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of both employed datasets' in proposed study.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Properties</ns0:cell><ns0:cell>d1</ns0:cell><ns0:cell>d2</ns0:cell></ns0:row><ns0:row><ns0:cell>Image dimension</ns0:cell><ns0:cell>50x200x3</ns0:cell><ns0:cell>24x72x3</ns0:cell></ns0:row><ns0:row><ns0:cell>Extension</ns0:cell><ns0:cell>PNG</ns0:cell><ns0:cell>PNG</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Images</ns0:cell><ns0:cell>9955</ns0:cell><ns0:cell>1040</ns0:cell></ns0:row><ns0:row><ns0:cell>Character Types</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>19</ns0:cell></ns0:row><ns0:row><ns0:cell>Resized Image Dimension (Per Character)</ns0:cell><ns0:cell>20x24x1</ns0:cell><ns0:cell>20x24x1</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameters setting and learnable weights for proposed Skipping-CNN Architecture</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number</ns0:cell><ns0:cell>Layers Name</ns0:cell><ns0:cell>Category</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell cols='3'>Weights/Offset Padding Stride</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Input</ns0:cell><ns0:cell>Image Input</ns0:cell><ns0:cell>24 x 20 x 1</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Conv (1)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>3x3x1x8</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>BN (1)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>1x1x8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>ReLU (1)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>24 x 20 x 8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Conv (2)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>3x3x8x16</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>BN (2)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>1x1x16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>ReLU (2)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 16</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Conv (3)</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>3x3x16x32</ns0:cell><ns0:cell>Same</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>BN (3)</ns0:cell><ns0:cell>Batch Normalization</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>ReLU 
(3)</ns0:cell><ns0:cell>ReLU</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Skip-connection</ns0:cell><ns0:cell>Convolution</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>1x1x8x32</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Add</ns0:cell><ns0:cell>Addition</ns0:cell><ns0:cell>12 x 10 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Pool</ns0:cell><ns0:cell>Average Pooling</ns0:cell><ns0:cell>6 x 5 x 32</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 19 (d2)</ns0:cell><ns0:cell>19 x 960 (d2)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>FC</ns0:cell><ns0:cell>Fully connected</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1 x 1 x 32 (d1)</ns0:cell><ns0:cell>32 x 960 (d1)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>1 x 1 x 19</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>Class Output</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table><ns0:note>10/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Five-character Dataset Accuracy (%) and F1-Score with five-fold text recognition based testing on the trained CNNs'In Table4, there are 19 types of characters that have their fold-by-fold varying accuracy. The mean of all folds is given. The overall or mean of each fold and the mean of all folds are given in the last row. We can see that the Y character has a significant or the highest accuracy rate (95.40%) of validation compared to other characters. This may be due to its almost entirely different structure from other characters. The other highest accuracy is of the G character with 95.06%, which is almost equal to the highest with a slight difference. However, these two characters have a more than 95% recognition accuracy, and no other character is nearer to 95. The other characters have a range of accuracies from 81 to 90%. The least accurate M character is 62.08, and it varies in five folds from 53 to 74%. 
Therefore, we can say that M</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell /><ns0:cell cols='2'>F1-measure (%)</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell>F1-measure (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>5-Fold Mean</ns0:cell><ns0:cell>5-Fold Mean</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>89.63</ns0:cell><ns0:cell>84.21</ns0:cell><ns0:cell>83.14</ns0:cell><ns0:cell>84.48</ns0:cell><ns0:cell>86.772</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>87.76</ns0:cell><ns0:cell>75.51</ns0:cell><ns0:cell>87.75</ns0:cell><ns0:cell>90.323</ns0:cell><ns0:cell>89.32</ns0:cell><ns0:cell>86.12</ns0:cell><ns0:cell>86.0792</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>88.46</ns0:cell><ns0:cell>90.196</ns0:cell><ns0:cell>91.089</ns0:cell><ns0:cell>90.385</ns0:cell><ns0:cell>89.06</ns0:cell><ns0:cell>89.4066</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>84.31</ns0:cell><ns0:cell>80.39</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell>90.566</ns0:cell><ns0:cell>84.84</ns0:cell><ns0:cell>86.56</ns0:cell><ns0:cell>85.2644</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>76.59</ns0:cell><ns0:cell>82.61</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>87.58</ns0:cell><ns0:cell>85.2164</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>89.36</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>86.95</ns0:cell><ns0:cell>86.957</ns0:cell><ns0:cell>88.636</ns0:cell><ns0:cell>86.68</ns0:cell><ns0:cell>87.3026</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>91.66</ns0:cell><ns0:cell>93.47</ns0:cell><ns0:cell>89.362</ns0:cell><ns0:cell>87.49</ns0:cell><ns0:cell>89.5418</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>81.81</ns0:cell><ns0:cell>73.33</ns0:cell><ns0:cell>97.72</ns0:cell><ns0:cell>86.04</ns0:cell><ns0:cell>90.90</ns0:cell><ns0:cell>85.03</ns0:cell><ns0:cell>87.7406</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>79.16</ns0:cell><ns0:cell>85.10</ns0:cell><ns0:cell>82.60</ns0:cell><ns0:cell>80.0</ns0:cell><ns0:cell>82.64</ns0:cell><ns0:cell>81.0632</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>91.30</ns0:cell><ns0:cell>87.91</ns0:cell><ns0:cell>88.66</ns0:cell><ns0:cell>88.67</ns0:cell><ns0:cell>86.7954</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>62.79</ns0:cell><ns0:cell>79.54</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>85.41</ns0:cell><ns0:cell>81.928</ns0:cell><ns0:cell>78.73</ns0:cell><ns0:cell>79.4416</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>92.00</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>93.87</ns0:cell><ns0:cell>93.069</ns0:cell><ns0:cell>82.47</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>87.5008</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>95.83</ns0:cell><ns0:cell>91.83</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.833</ns0:cell><ns0:cell>94.73</ns0:cell><ns0:cell>95.06</ns0:cell><ns0:cell>94.522</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>64.00</ns0:cell><ns0:cell>56.00</ns0:cell><ns0:cell>53.061</ns0:cell><ns0:cell>70.4
7</ns0:cell><ns0:cell>67.34</ns0:cell><ns0:cell>62.08</ns0:cell><ns0:cell>63.8372</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>81.40</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>87.59</ns0:cell><ns0:cell>79.04</ns0:cell><ns0:cell>78.65</ns0:cell><ns0:cell>81.43</ns0:cell><ns0:cell>77.8656</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>78.26</ns0:cell><ns0:cell>82.22</ns0:cell><ns0:cell>91.67</ns0:cell><ns0:cell>98.87</ns0:cell><ns0:cell>90.34</ns0:cell><ns0:cell>92.0304</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>95.24</ns0:cell><ns0:cell>83.72</ns0:cell><ns0:cell>90.47</ns0:cell><ns0:cell>96.66</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>90.55</ns0:cell><ns0:cell>91.3156</ns0:cell></ns0:row><ns0:row><ns0:cell>X</ns0:cell><ns0:cell>89.58</ns0:cell><ns0:cell>87.50</ns0:cell><ns0:cell>82.97</ns0:cell><ns0:cell>87.23</ns0:cell><ns0:cell>82.105</ns0:cell><ns0:cell>85.68</ns0:cell><ns0:cell>86.067</ns0:cell></ns0:row><ns0:row><ns0:cell>Y</ns0:cell><ns0:cell>93.02</ns0:cell><ns0:cell>95.45</ns0:cell><ns0:cell>97.67</ns0:cell><ns0:cell>95.43</ns0:cell><ns0:cell>95.349</ns0:cell><ns0:cell>95.40</ns0:cell><ns0:cell>95.8234</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>86.14</ns0:cell><ns0:cell>80.77</ns0:cell><ns0:cell>87.24</ns0:cell><ns0:cell>88.183</ns0:cell><ns0:cell>86.1265</ns0:cell><ns0:cell>85.52</ns0:cell><ns0:cell>85.9782</ns0:cell></ns0:row></ns0:table><ns0:note>12/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table5that cross-validates the performance of the proposed study. The highest results show that this four-character CAPTCHA is at high risk, and line intersection, word joining, and correlation may break, preventing the CAPTCHA from breaking. Many approaches have been proposed to recognize the CAPTCHA, and most of them have used a conventional structure. The proposed study has used a more confident validation approach with multi-aspect feature extraction. Therefore, it can be used as a more promising approach to break CAPTCHA images and test the CAPTCHA design made by CAPTCHA designers. In this way, CAPTCHA designs can be protected against new approaches to deep learning. The graphical illustration of validation accuracy and the losses for both datasets on all folds is shown in Figure5.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The five-and four-character CAPTCHA fold validation losses and accuracies are shown. It can be</ns0:cell></ns0:row><ns0:row><ns0:cell>observed that the all folds of the five-character CAPTCHA reached close to 90%, and only the 2nd fold</ns0:cell></ns0:row></ns0:table><ns0:note>value remained at 80.77%. It is also important to state that, in this fold, there were cases that may not be covered in other deep learning approaches, and their results remain at risk. Similarly, a four-character CAPTCHA with a greater number of samples and less complex characters should not be used, as it can break easily compared to the five-character CAPTCHA. CAPTCHA-recognition-based studies have used self-generated or augmented datasets to propose CAPTCHA solvers. Therefore, the number of images, their spatial resolution sizes and styles, and other results have become incomparable. The proposed study13/18PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64333:4:0:CHECK 13 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Four-character dataset Accuracy (%) and F1-Score with five-fold text recognition based testing on the trained CNNs'</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell /><ns0:cell cols='2'>F1-measure (%)</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell>F1-measure (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Character</ns0:cell><ns0:cell>Fold 1</ns0:cell><ns0:cell>Fold 2</ns0:cell><ns0:cell>Fold 3</ns0:cell><ns0:cell>Fold 4</ns0:cell><ns0:cell>Fold 5</ns0:cell><ns0:cell>5-Fold Mean</ns0:cell><ns0:cell>5-Fold Mean</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>97.84</ns0:cell><ns0:cell>99.14</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>98.92</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.79</ns0:cell><ns0:cell>98.923</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>97.02</ns0:cell><ns0:cell>94.92</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.403</ns0:cell><ns0:cell>97.204</ns0:cell><ns0:cell>96.52</ns0:cell><ns0:cell>97.5056</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>97.87</ns0:cell><ns0:cell>97.46</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.934</ns0:cell><ns0:cell>98.526</ns0:cell><ns0:cell>98.55</ns0:cell><ns0:cell>98.4708</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>98.76</ns0:cell><ns0:cell>99.17</ns0:cell><ns0:cell>97.97</ns0:cell><ns0:cell>98.144</ns0:cell><ns0:cell>99.01</ns0:cell><ns0:cell>98.0812</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.65</ns0:cell><ns0:cell>99.56</ns0:cell><ns0:cell>99.346</ns0:cell><ns0:cell>99.127</ns0:cell><ns0:cell>98.69</ns0:cell><ns0:cell>98.947</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>98.80</ns0:cell><ns0:cell>99.60</ns0:cell><ns0:cell>99.19</ns0:cell><ns0:cell>99.203</ns0:cell><ns0:cell>98.603</ns0:cell><ns0:cell>99.36</ns0:cell><ns0:cell>98.9624</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>99.15</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.42</ns0:cell><ns0:cell>98.283</ns0:cell><ns0:cell>98.073</ns0:cell><ns0:cell>98.29</ns0:cell><ns0:cell>98.1656</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>98.85</ns0:cell><ns0:cell>96.55</ns0:cell><ns0:cell>98.08</ns0:cell><ns0:cell>98.092</ns0:cell><ns0:cell>99.617</ns0:cell><ns0:cell>98.39</ns0:cell><ns0:cell>98.4258</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>97.85</ns0:cell><ns0:cell>98.71</ns0:cell><ns0:cell>99.13</ns0:cell><ns0:cell>98.712</ns0:cell><ns0:cell>97.645</ns0:cell><ns0:cell>98.54</ns0:cell><ns0:cell>98.2034</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>99.57</ns0:cell><ns0:cell>96.59</ns0:cell><ns0:cell>98.72</ns0:cell><ns0:cell>97.89</ns0:cell><ns0:cell>96.567</ns0:cell><ns0:cell>97.95</ns0:cell><ns0:cell>97.912</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>99.58</ns0:cell><ns0:cell>98.75</ns0:cell><ns0:cell>99.16</ns0:cell><ns0:cell>99.379</ns0:cell><ns0:cell>99.374</ns0:cell><ns0:cell>99.25</ns0:cell><ns0:cell>99.334</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>99.787</ns0:cell><ns0:cell>99.153</ns0:cell><ns0:cell>99.92</ns0:cell><ns0:cell>99.6612</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>99.18</ns0:cell><ns0:cell>97.5
[Per-character results table (fragment continued from the previous page): rows for characters 7 and F–Z, each with seven score columns whose headers are not visible in this fragment. Overall row: 98.97, 98.18, 99.32, 98.894, 98.737, 98.82, 98.846.]

Table 6. Comparison of Proposed Study based Five and Four-Character datasets with state-of-the-art Methods

| References | No. of Characters | Method | Results |
| Du et al. (2017) | 6 | Faster R-CNN | Accuracy = 98.5% |
| | 4 | | Accuracy = 97.8% |
| | 5 | | Accuracy = 97.5% |
| Chen et al. (2019) | 4 | Selective D-CNN | Success rate = 95.4% |
| Bostik et al. (2021) | Different | CNN | Accuracy = 80% |
| Bostik and Klecka (2018) | Different | KNN | Precision = 98.99% |
| | | SVN | 99.80% |
| | | Feed forward-Net | 98.79% |
| Proposed Study | 4 | Skip-CNN with 5-Fold Validation | Accuracy = 98.82% |
| | 5 | - | Accuracy = 85.52% |
</ns0:body> "
"Editor’s Comments: Dear Editor thank you for allowing us resubmitting manuscript, As per your suggestion we have revised the manuscript. Comment 1: Please show examples of color images. In your current article, only grayscale images are demonstrated. Response: Thanks for your concern. However, the dataset images are of greyscale level, and similarly, the features maps extracted from the trained CNN are of the same intensity. Therefore, except for Figure.1, all other images colors are in actual intensity (Binary), but we have updated Figure 1 as per your suggestion. Comment 2: Please explain in your captions of figure and title of table, why are these tables or figures necessary in your paper? What are the purposes and what are the message you want to deliver via these figures and tables? Response: Thanks for your concern, as per your suggestion, further details and purpose of adding tables and figures are being added and discussed, and the captions of tables and figures are also updated. Please see the revised version. Comment 3: The metric 'Accuracy' might not be sufficient to judge the performance of the model holistically. Please enhance the result analysis part of your paper Response: Thanks for your concern. As per your suggestion, we have done additional experiments to validate results on another measure called F1- measure. Please see Tables 4 and 5. "
Here is a paper. Please give your review comments after reading it.
335
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We present encube -a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: 1) the integration of comparative visualisation and analysis into a unified system; 2) the documentation of the discovery process; and 3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local desktop, making it a versatile solution. We discuss how our approach can help accelerate the discovery rate in a variety of research scenarios.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>1. provided a visual overview of the entire data cube survey, or a sufficiently large sub-set of the survey; 2. allowed qualitative, quantitative, and comparative visualisation; 3. supported interaction between the user and the data, including volume rotation, translation and zoom, modification of visualisation properties (colour map, transparency, etc.), and interactive querying;</ns0:p><ns0:p>4. supported different ways to organise the data, including automatic and manual organisation of data within the display space, as well as different ways of sorting lists using metadata included with the dataset; 5. could utilise stereoscopic displays to enhance comprehension of three-dimensional structures;</ns0:p><ns0:p>6. allowed a single data cube to be selected from the survey and visualised at higher-resolution; 7. encouraged collaborative investigation of data, so that a team of expert researchers could rapidly identify the relevant features; 8. was extensible (i.e. easily customizable) so that new functionalities can be easily be added as required;</ns0:p><ns0:p>9. automatically tracked the workflow, so that the sequence of interactions could be recorded and then replayed; 10. was sufficiently portable that a single solution could be deployed in different display environments, including on a standard desktop computer and monitor.</ns0:p><ns0:p>Identifying comparative visualisation of many data cubes (Item 1) as the key feature that could provide enhanced comprehension and increase the potential for collaborative discovery (Item 7), we elected to work with the CAVE2 TM 2 at Monash University in Melbourne, Australia (hereafter Monash CAVE2). To support quantitative interaction (Items 2, 3 and 4), we decided to work with a multi-touch controller to interact, query, and display extra quantitative information about the visualised data cubes. The CAVE2 provides multiple stereo-capable screens (Item 5). To visualise a single data cube at higher-resolution (Item 6), we elected to work with <ns0:ref type='bibr'>Omegalib Febretti et al. (2014)</ns0:ref>, and the recently added multi-head mode of S2PLOT <ns0:ref type='bibr' target='#b10'>(Barnes et al., 2006)</ns0:ref> (see Sections 2 and 4.1). 
For the solution to be easily extensible (Item 8), we designed a visualisation and analysis framework where each of its parts can be customized as required (see Sections 3 and 4). The framework incorporates a manager that can track the workflow (Item 9), so that each action can be reviewed and system states can be reloaded. Finally, the framework can easily be scaled down for standalone use with a standard desktop computer.

1.2 The CAVE2

The CAVE2 is a hybrid 2D/3D virtual reality environment for immersive simulation and information analysis. It represents the evolution of immersive virtual reality environments like the CAVE (Cruz-Neira et al., 1992). A significant increase in the number of pixels and in display brightness is achieved by replacing the CAVE's use of multiple projectors with a cylindrical matrix of stereoscopic panels. The first CAVE2 Hybrid Reality Environment (Febretti et al., 2013) was installed at the University of Illinois at Chicago (UIC). The UIC CAVE2 was designed to support collaborative work, operating as a fully immersive space, a tiled display wall, or a hybrid of the two. The UIC CAVE2 offered more physical space than a traditional five- or six-wall CAVE, a better contrast ratio, higher stereo resolution, more memory, and more processing power.

The Monash CAVE2 is an 8-meter diameter, 320-degree panoramic cylindrical display system. It comprises 80 stereo-capable displays arranged in 20 four-panel columns. Each display is a Planar Matrix LX46L 3D LCD panel with a 1366 × 768 resolution and a 46-inch diagonal. All 80 displays provide a total of ~84 million pixels (80 × 1366 × 768 ≈ 8.4 × 10⁷). For image generation, the Monash CAVE2 comprises a 20-node cluster, where each node includes dual 8-core CPUs, 192 gigabytes (GB) of random access memory (RAM), and dual 1536-core NVIDIA K5000 graphics cards. (CAVE2™ is a trademark of the University of Illinois Board of Trustees.) Approximately 100 tera floating-point operations per second (TFLOP/s) of integrated graphics processing unit (GPU) based processing power is available for computation. Communication between nodes is through a 10 gigabit (Gb) network fabric. Data is stored on a local Dell server containing 2.2 terabytes (TB) of ultrafast redundant array of independent solid-state disks (RAID5 SSD) and 14 TB of fast SATA drives (RAID5). This server has 60 Gbit/s connectivity to the CAVE2 network fabric. It includes 256 GB of RAM to cache large files and transfer them out to the nodes very rapidly.

1.3 Outline

The remainder of this paper is structured as follows. Section 2 reviews related work and makes connections to our requirements. Section 3 presents a detailed description of the architecture design of encube, our visualisation and analysis system. Section 4 presents our implementation in the context of the Monash CAVE2. Section 5 presents a timing experiment comparing the performance of encube when executed within the Monash CAVE2 and on a personal desktop.
Finally, Section 6 discusses the proposed design and implementation, and discusses how it can help accelerate the discovery rate in a variety of research scenarios.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>We focus our attention on literature related to comparative visualisation, the use of high resolution displays (including tiled displays and CAVE2-style configurations), and interaction devices.</ns0:p><ns0:p>Early work by <ns0:ref type='bibr' target='#b70'>Post and van Wijk (1994)</ns0:ref>, <ns0:ref type='bibr' target='#b48'>Hesselink et al. (1994)</ns0:ref>, and <ns0:ref type='bibr' target='#b68'>Pagendarm and Post (1995)</ns0:ref> discussed comparative visualisation using computers, particularly in the context of tensor fields and fluid dynamics. They discussed the need to develop comparative visualisation techniques to generate comparable images from different sources. They found that two approaches to comparative visualisation can be considered: image-level comparison, visually comparing either two observation images or a theoretical model and an observation image, all generated via their own pipeline; and data level comparison, where data from two different sources are transformed to a common visual representation via the same pipeline. <ns0:ref type='bibr' target='#b80'>Roberts (2000)</ns0:ref> and <ns0:ref type='bibr' target='#b62'>Lunzer and Hornbaek (2003)</ns0:ref> advocated the use of side-by-side interactive visualisations for comparative work. <ns0:ref type='bibr' target='#b80'>Roberts (2000)</ns0:ref> proposed that multiple views and multiform visualisation can help during data exploration -providing alternative viewpoints and comparison of images, and encouraging collaboration. <ns0:ref type='bibr' target='#b62'>Lunzer and Hornbaek (2003)</ns0:ref> identified three kinds of issues common to comparative visualisation during exploration of information: a high number of required interactions, difficulty in remembering what has been previously seen, and difficulty in organising the exploration process. We return to these issues in Section 6 with regards to our solution. In accordance with <ns0:ref type='bibr' target='#b80'>Roberts (2000)</ns0:ref>, they argued that these issues can be reduced through an interactive interface that lets the user modify and compare different visualisations side by side. <ns0:ref type='bibr' target='#b90'>Unger et al. (2009)</ns0:ref> put these principles to work by using side-by-side visualisation to emphasize the spatial context of heterogeneous, multivariate, and time dependent data that arises from a spatial simulation algorithm of cell biological processes. They found that side-by-side views and an interactive user interface empowers the user with the ability to explore the data and adapt the visualization to his current analysis goals.</ns0:p><ns0:p>For scientific visualisation, <ns0:ref type='bibr' target='#b67'>Ni et al. (2006)</ns0:ref> highlighted the potential of large high-resolution displays as they offer a way to view large amounts of data simultaneously with the increased number of available pixels. They also mention that benefits of large-format displays for collaborative work -as a medium for presenting, capturing, and exchanging ideas -have been demonstrated in several projects (e.g. <ns0:ref type='bibr' target='#b32'>Elrod et al., 1992;</ns0:ref><ns0:ref type='bibr' target='#b75'>Raskar et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b52'>Izadi et al., 2003)</ns0:ref>. 
Moreover, <ns0:ref type='bibr' target='#b24'>Chung et al. (2015)</ns0:ref> assess that the use of multiple displays can enhance visual analysis through its discretized display space afforded by different screens.</ns0:p><ns0:p>In this line of thinking, <ns0:ref type='bibr' target='#b53'>Jeong et al. (2005)</ns0:ref>, <ns0:ref type='bibr' target='#b29'>Doerr and Kuester (2011)</ns0:ref>, <ns0:ref type='bibr' target='#b54'>Johnson et al. (2012)</ns0:ref>, <ns0:ref type='bibr' target='#b63'>Marrinan et al. (2014)</ns0:ref> and <ns0:ref type='bibr' target='#b59'>Kukimoto et al. (2014)</ns0:ref> discussed software for cluster-driven tiled-displays called SAGE, CGLX, DisplayCluster, SAGE2, and HyperInfo respectively. These solutions provide dynamic desktop-like windowing for media viewing. This includes media content such as ultra high-resolution imagery, video, and applications from remote sources to be displayed. They highlight how this type of setting lets collective work to take place, even remotely. <ns0:ref type='bibr' target='#b38'>Febretti et al. (2014)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>execute multiple immersive applications concurrently on a cluster-controlled display system, and have different input sources dynamically routed to applications.</ns0:p><ns0:p>In the context of comparative visualisation using a cluster-driven tiled-display, <ns0:ref type='bibr' target='#b84'>Son et al. (2010)</ns0:ref> discussed the visualisation of multiple views of a brain in both 2D and 3D using a 4 &#215; 2 tiled-display, with individual panel resolution of 2560 &#215; 1600 pixels. Their system supported both high resolution display over multiple screens (using the pixel distribution concept from SAGE and the OpenGL context distribution from CGLX) and multiple image display. Supporting both a 3D mesh and 2D images, they provided region selection of an image, which highlighted the same region in other visualisations of the same object. They also included mechanisms to control of the 3D mesh (resolution, position and rotation), and control of image scale using a mouse and keyboard, enabling users to modify the size of images displayed. Other approaches are presented by <ns0:ref type='bibr' target='#b60'>Lau et al. (2010</ns0:ref><ns0:ref type='bibr' target='#b41'>), Fujiwara et al. (2011</ns0:ref><ns0:ref type='bibr' target='#b43'>), and Gjerlufsen et al. (2011)</ns0:ref>, who developed applications to control multiple visualisations simultaneously. <ns0:ref type='bibr' target='#b60'>Lau et al. (2010)</ns0:ref> used ViewDock TDW to control multiple instances of the Chimera software 3 to visualise dozens of ligand-protein complexes simultaneously, while preserving the functionalities of Chimera. ViewDock TDW permitted comparison, grouping, analysis and manipulation of multiple candidates -which they argued increases the efficacy and decreases the time involved in drug discovery.</ns0:p><ns0:p>In an interaction study within a multisurface interactive environment called the WILD room, <ns0:ref type='bibr' target='#b43'>Gjerlufsen et al. (2011)</ns0:ref> discussed the comparative visualisation of 64 brain models. 
Controlling the Anatomist 4 software with the SubstanceGrise middleware, they enabled synchronous interaction with multiple displays using several interaction devices: a multitouch table, mobile devices, and tracked objects.</ns0:p><ns0:p>Discussing these results, <ns0:ref type='bibr' target='#b13'>Beaudouin-Lafon (2011) and</ns0:ref><ns0:ref type='bibr' target='#b16'>Beaudouin-Lafon et al. (2012)</ns0:ref> note that large display surfaces let researchers organize large sets of information and provide a physical space to organize group work.</ns0:p><ns0:p>Similarly, <ns0:ref type='bibr' target='#b41'>Fujiwara et al. (2011)</ns0:ref> discuss the development of a multi-application controller for the SAGE system, with synchronous interaction of multiple visualisations. A remote application running on personal computer was used to control the SAGE system. They evaluated how effectively scientists find similarities and differences between visualised cubic lattices (3D graph with vertices connected together with edges) using their controller. They measured the time required to complete a comparison task in two cases where the controller controls: 1) all visualisations synchronously; or 2) only one visualisation at a time. For their test, they built a tiled display of 2 &#215; 2 LCD monitors and three computers. For both cases, participants would look at either 8 or 24 volumes displayed side-by-side, and spread evenly on the different displays. Their results showed that comparison using synchronized visualisation was generally five times faster than individually controlled visualisation (case 1: &#8764; 25 seconds per task; case 2: &#8764; 130 seconds per task). Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> depicts system design schematics from <ns0:ref type='bibr' target='#b84'>Son et al. (2010)</ns0:ref>, <ns0:ref type='bibr' target='#b60'>Lau et al. (2010</ns0:ref><ns0:ref type='bibr' target='#b41'>), Fujiwara et al. (2011</ns0:ref><ns0:ref type='bibr' target='#b43'>), and Gjerlufsen et al. (2011)</ns0:ref>. All designs depend upon a one-way communication model, where a controller sends a command to control how visualisations are rendered on the tiled-display.</ns0:p><ns0:p>Hence, all models primarily focussed on enabling interaction with the visualisation space. With these models, the controller cannot receive information back from the display cluster (or equivalent computing infrastructure). It is worth noting that designs from <ns0:ref type='bibr' target='#b84'>Son et al. (2010)</ns0:ref>, <ns0:ref type='bibr' target='#b41'>Fujiwara et al. (2011), and</ns0:ref><ns0:ref type='bibr' target='#b43'>Gjerlufsen et al. (2011)</ns0:ref> also enable a visualisation to be rendered over multiple displays.</ns0:p><ns0:p>To interact with visualisations, many interaction devices have been proposed, varying with the nature of the visualisation environment and the visualisation space (e.g. 2D, 3D). One of the most common method is based on tracked devices (e.g. tracked wand controllers, gloves and body tracking, smart phone) that allow users to use natural gestures to interact with the visual environment (e.g. <ns0:ref type='bibr' target='#b61'>LaViola et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b79'>Roberts et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b66'>Nancel et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b39'>Febretti et al., 2013)</ns0:ref>. 
For instance, both the CAVE2 and the WILD room use a tracking system based on an array of Vicon Bonita infrared cameras to track the physical position of controllers relative to the screens. To interact with 3D data, another interaction method involves sphere shaped controllers that enables 6 degrees of freedom (e.g. <ns0:ref type='bibr' target='#b3'>Amatriain et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b39'>Febretti et al., 2013)</ns0:ref>. However, <ns0:ref type='bibr' target='#b49'>Hoang et al. (2010)</ns0:ref> argued that the practicality of these methods diminishes as the set of user controllable parameters and options increase -and that precise interaction can be difficult due to the lack of haptic feedback.</ns0:p><ns0:p>Another interaction method involves the use of mobile devices such as phones and tablets with <ns0:ref type='bibr' target='#b43'>Gjerlufsen et al. (2011)</ns0:ref>. All designs are predicated upon a one-way communication model, where a controller sends a command to control how visualisations are rendered on the tiled-display. All models primarily focussed on enabling interaction with the visualisation space. Designs A, C and D also enable a visualisation to be rendered over multiple displays. multitouch capabilities (e.g. <ns0:ref type='bibr' target='#b76'>Roberts et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b77'>Roberts et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b31'>Donghi, 2013)</ns0:ref>. They are used to interact with the visual environment or to access extra information not directly visible on the main display. For example, <ns0:ref type='bibr' target='#b50'>H&#246;llerer et al. (2007)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>Amatriain et al. (2009)</ns0:ref> provided concurrent users with configurable interfaces. It let users adopt distinct responsibilities such as spatial navigation and agent control. <ns0:ref type='bibr' target='#b23'>Cheng et al. (2012)</ns0:ref> focused on an interaction technique that supports the use of tablet devices for interaction and collaboration with large displays. They presented a management and navigation interface based on an interactive world-in-miniature view and multitouch gestures. By doing so, users were able to manage their views on their tablets, navigate between different areas of the workspace, and share their view with other users. As multitouch devices are inherently 2D, a limitation of models based on mobile Manuscript to be reviewed Computer Science devices -when working with 3D data -is related to precision in manipulation and region selection.</ns0:p><ns0:p>From this review, we can note that previous research primarily focussed on rendering a visualisation seamlessly over multiple displays, and in cases involving comparative visualisation, it placed an emphesis on enabling interaction with the visualisation space. As our focus is to accelerate visualisation led discovery and analysis of data cube surveys, it is also important to consider the following questions. How can we integrate comparative visualisation and analysis into a unified system? How can we document the discovery process? Finally, how can we enable scientists to continue the research process once back at their desktop? 
We cover our proposed solutions to these three questions throughout the following sections.

3 OVERVIEW OF ENCUBE

encube's design focuses on enabling interactivity between the user and a data cube survey, presented in a comparative visualisation mode within an advanced display environment. Its main aim is to accelerate the visualisation and analysis of data cube surveys, with an initial focus on applications in neuroimaging and radio astronomy. The system's design is modular, allowing new features to be added as required. It comprises strategies for qualitative, quantitative, and comparative visualisation, including different mechanisms to organise and query data interactively. It also includes strategies to serialize the workflow, so that actions taken throughout the discovery process can be reviewed either within the advanced visualisation environment or back at the researcher's desk. We discuss the general features of encube's design before addressing implementation in Section 4. We abstract the system's architecture into two layers: a process layer and an input/output layer (Figure 2). This abstraction enables tasks to be executed on the most relevant unit based on the nature of the task (e.g. compute-intensive processing, communication). While sharing similarities with other systems (e.g. Jeong et al., 2005; Fujiwara et al., 2011; Gjerlufsen et al., 2011; Febretti et al., 2014; Marrinan et al., 2014), it differs through three main additions (Table 1), the first being mechanisms for quantitative visualisation.

Our solution integrates visualisation and analysis into a unified system. The input/output layer includes units to visualise data (Display Units) and interact with visualisations (Interaction Units). The Process layer includes units to manage communication and data (Manager Unit), and to process and render visualisations (Process-Render Units). The system has two-way communication, enabling the Interaction Unit to query both the Manager Unit and the Process-Render Units and retrieve the requested information. This information can then be reported on the Interaction Unit directly, reducing the amount of information to be displayed on the Display Units.

[Table 1: feature comparison between encube and the related systems reviewed in Section 2: visualisation type (qualitative, quantitative, comparative); interaction methods with volumes (rotate, pan/zoom, MVP, QOV, QMV); organisation of visualisations (automated, manual) and the organisation mechanisms used for comparative visualisation (swap, sort, reorder); organisation of data (automatic, manual); availability of stereoscopic screens; number of screens (4 × 2, 5 × 4, 2 × 2, 8 × 4, 20 × 4); multi-screen visualisation; collaborative environment; customizability; workflow history; and standalone (single desktop) use.]

3.1 Process layer

As shown in Figure 2, the Process layer is the central component of our design. Communication occurs between a Manager Unit and one or more Process-Render Units. It follows a master/slave communication pattern, where the Process-Render Units act upon directives from the Manager Unit.

Process-Render Unit. This component is the main computation engine. Each Process-Render Unit has access to data and is responsible for rendering to one or more Display Units. As most of the available processing power of the system is expected to reside on the Process-Render Units, derived data products such as a histogram, or quantitative information (e.g. mean voxel value), are also computed within this part of the Process layer.

Manager Unit. This component has three functionalities. Foremost, it is the metadata server for the system, where a structured list of the dataset is available and can be served to Process-Render Units and Interaction Units. Secondly, it acts as a scheduler, managing and synchronizing task communication between Process-Render Units and Interaction Units; tasks are handled following a queue model. Finally, as all communication and queries (e.g. a visualisation parameter change) circulate via the Manager Unit, workflow serialization occurs here.

3.2 Input/output layer

Interaction Units. This component is in charge of controlling what to display and how to display it. It also provides mechanisms to query the distributed data. In this context, screen-based controllers (laptops, tablets and smart phones) can both control the visualised content and display information derived or computed from this content. The Interaction Unit can request that a specific computation be executed on one or more Process-Render Units and the result returned to the Interaction Unit. This result can then be visualised independently of the rendered images on the Display Units. While a single Interaction Unit is likely to be the preferred option, multiple clients should be able to interact with the system concurrently.

Display Units. This component is where the rendered images and metadata, such as a data cube identification number (ID), are displayed. A Display Unit can display a single or multiple data cubes. It can also display part of a data cube in cases where a data cube is displayed over multiple Display Units. In a tiled-display setting, a Display Unit comprises one or more physical screens. Several types of displays can be envisioned (e.g. high definition or ultra-high resolution screens; 2D or stereoscopic-capable screens; multi-touch screens). A minimal structural sketch of these units and their two-way communication is given below.
For a traditional desktop configuration, the Display Unit may be a single application window on a desktop monitor.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>IMPLEMENTATION</ns0:head><ns0:p>In this section we describe and discuss our implementation of encube for use in the Monash CAVE2.</ns0:p><ns0:p>In this context, our design is being mapped to the CAVE2's main hardware components ( </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Manager Unit</ns0:head><ns0:p>The server, communication hub and workflow serialization is done in the Manager Unit applicationencube-M. This application is implemented in Python 6 due to its ease for development and prototyping, along with its ability to execute C code when required. Python is also the principal control language for</ns0:p><ns0:p>Omegalib applications in the CAVE2; hence, encube-M offers the possibility for simple communication with Omegalib applications. Common multiprocessing (threading) and communication libraries such as Transmission Control Protocol (TCP) sockets or MPI for high-performance computing are also available. Furthermore, with its quick development and adoption by the scientific community in recent years (e.g. <ns0:ref type='bibr' target='#b18'>Bergstra et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b44'>Gorgolewski et al., 2011;</ns0:ref><ns0:ref type='bibr'>Astropy Collaboration et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b65'>Momcheva and Tollerud, 2015)</ns0:ref>, our system can easily integrate packages for scientific computing such as SciPy (e.g. NumPy, Pandas; <ns0:ref type='bibr' target='#b55'>Jones et al., 2001)</ns0:ref>, PyCUDA and PyOpenCL <ns0:ref type='bibr' target='#b57'>(Kl&#246;ckner et al., 2012)</ns0:ref>, semi-structured data handling (e.g. XML, JSON, YAML), along with specialized packages (e.g. machine learning for neuroimaging and astronomy; <ns0:ref type='bibr' target='#b69'>Pedregosa et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b92'>VanderPlas et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abraham et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Throughout a session in the CAVE2, encube-M keeps track of the many components' states, and synchronizes the Interaction Units and the Process-Render Units. The Manager Unit communicates with the different Process-Render Units via TCP sockets, and responds to the Interaction Units via HTTP methods (request and response).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1'>Workflow serialization</ns0:head><ns0:p>Each step of a session is kept and stored to document the discovery process. This workflow serialization is done by adding each uniquely identified action (e.g. load data, rotate, parameter change) to a dictionary (key-value data structure) from which the global state can be deduced. The dictionary of a session can be stored to disk as a serialized generic Python object or as JSON data. Structured data can then be used to query and retrieve previous sessions, which can then be loaded and re-examined. This data enables researchers to review the sequence of steps taken throughout a session. It also makes it possible for researchers to continue the discovery process when they leave the large-scale visualisation environment and return to their desk.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Interaction Units</ns0:head><ns0:p>We implemented the Interaction Unit as a web interface. 
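Before turning to the interface itself, the sketch below illustrates the workflow serialization described in Section 4.2.1: each uniquely identified action is appended to a session dictionary that can be written out as JSON and later replayed. The function names and record layout are assumptions for illustration, not the actual encube-M code.

```python
# Sketch of session serialization and replay (illustrative only).
import json
import time
import uuid

session = {}  # key-value store from which the global state can be deduced


def record_action(action: str, **params) -> str:
    """Append a uniquely identified, time-stamped action to the session."""
    key = str(uuid.uuid4())
    session[key] = {"t": time.time(), "action": action, "params": params}
    return key


def save_session(path: str) -> None:
    with open(path, "w") as fh:
        json.dump(session, fh, indent=2)


def replay(path: str, dispatch) -> None:
    """Re-issue each recorded action in order, e.g. back at the desktop."""
    with open(path) as fh:
        stored = json.load(fh)
    for rec in sorted(stored.values(), key=lambda r: r["t"]):
        dispatch(rec["action"], **rec["params"])


record_action("load", cube_id="NGC6946", screen="B2")
record_action("rotate", axis="y", degrees=35)
save_session("session.json")
replay("session.json", dispatch=lambda action, **p: print(action, p))
```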
Recent advances in web development technologies such as HTML5 7 , CSS3 8 and ECMAScript 9 (JavaScript) along with WebGL 10 (a Javascript library binding of the OpenGL ES 2.0 API) enable the development of interfaces that are portable to most available mobile devices. This development pathway has the advantage of allowing development independent from the evolution of the mobile platforms ['write once, run everywhere' <ns0:ref type='bibr' target='#b82'>(Schaaff and Jagade, 2015)</ns0:ref>]. Furthermore, a web server can easily communicate with multiple web clients, enabling collaborative interactions with the system. As it is currently implemented, multiple Interaction Units can connect and interact with the system concurrently. However, we did not yet implement a more complex scheduler to make sure all actions are consistent with one another (future work).</ns0:p><ns0:p>In order to control the multiple Process-Render Units, we equipped the interface with core features essential for interacting with 3D scenes (rotate, zoom and translate a volume), modifying the visualisation (change colours, intensity, transparency, etc.), and organising and querying one or more data cubes. The interface comprises three main panels: a meta-controller, an scene controller, and a rendering parameters controller. We further describe the components and functionalities of the three panels in the next sections.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.1'>Interactive data management: the meta-controller</ns0:head><ns0:p>The meta-controller panel enables the user to interactively handle data. Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows an example interface of the meta-controller configured for morphological classification of galaxies. The panel comprises four main components.</ns0:p><ns0:p>The first component is a miniature representation of the CAVE2 (Figure <ns0:ref type='figure' target='#fig_6'>6A</ns0:ref>). This component is implemented using Javascript and CSS3. Each display is mapped as a (coloured) rectangle onto a grid, representing the physical setting of the CAVE2 display environment (e.g. 4 rows by 20 columns). In the following, we refer to an element of this grid as a screen, as it maps to a screen location in the CAVE2.</ns0:p><ns0:p>Indices for columns (A to T) and rows (1 to 4) are displayed around the grid as references.</ns0:p><ns0:p>The second component comprises action buttons (Figure <ns0:ref type='figure' target='#fig_6'>6B</ns0:ref>). Actions include load data, unload data, swap screens, and query Process-Render Units. The action linked to each button will be described in the next section. The third component is an interactive table displaying the metadata of the data cube survey (Figure <ns0:ref type='figure' target='#fig_6'>6C</ns0:ref>). Each metadata category is displayed as a separate column. The table enables client-side multi-criteria sorting by clicking or tapping on the column header. In the example in Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, the data is simply sorted by the Mean Distance parameter in ascending order.</ns0:p><ns0:p>Finally, it is possible to query the Process-Render Units to obtain derived data products or quantitative information about the displayed data (Figure <ns0:ref type='figure' target='#fig_6'>6D</ns0:ref>). The information returned in such a context is displayed in the lower right portion of the meta-controller module. 
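As an indication of the exchange involved, a derived-data request and its reply might resemble the following; the field names are hypothetical examples rather than the actual encube message schema, and the reply carries just enough information for the Interaction Unit to draw a plot locally.

```python
# Hypothetical query/reply between an Interaction Unit and the Manager Unit.
import json

request = {
    "type": "query",
    "target": "process-render",
    "screen": "C3",              # CAVE2 column C, row 3
    "cube_id": "NGC3031",
    "product": "histogram",
    "bins": 64,
}

reply = {
    "type": "result",
    "screen": "C3",
    "cube_id": "NGC3031",
    "product": "histogram",
    "bin_edges": [0.0, 0.015625, 0.03125],   # truncated for brevity
    "counts": [52104, 3112, 907],
}

print(json.dumps(request))   # sent over HTTP to the Manager Unit, relayed over TCP
print(json.dumps(reply))     # returned to the browser for plotting
```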
The plots are dynamically generated using information sent back in JSON format from the web server and then drawn in a HTML canvas using Javascript.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.2'>Interacting with the meta-controller</ns0:head><ns0:p>A user can manipulate data using several functionalities, namely load data to a screen, select a screen, unload data, swap data between screens, request derived data products or quantitative information, and manual data reordering. Load data. Loading data onto the screens can be done via two methods:</ns0:p><ns0:p>1. Objects are selected from the table (Figure <ns0:ref type='figure' target='#fig_6'>6C</ns0:ref>) and loaded by clicking or tapping the load button (Figure <ns0:ref type='figure' target='#fig_6'>6B</ns0:ref>). This will display the selected objects in the order that they appear in the table (keeping track of the current sort state) in column-first, top-down order on the screens. The corresponding data cubes will then be rendered on the Display Units (Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). Figure <ns0:ref type='figure' target='#fig_7'>7A</ns0:ref> shows a picture of MRI imaging and tractography data from the IMAGE-HD study displayed on the CAVE2 screens, while Figure <ns0:ref type='figure' target='#fig_7'>7C</ns0:ref> shows several points of view for different galaxy data taken from the THINGS galaxy survey <ns0:ref type='bibr' target='#b95'>(Walter et al., 2008)</ns0:ref>, a higher resolution survey than the WHISP survey mentioned previously, but with only nearby galaxies (distance &lt;15 megaparsec 11 ).</ns0:p><ns0:p>2. Load a specific cube via a click-and-drag gesture from the table and releasing on a given screen of the grid.</ns0:p><ns0:p>Once data is loaded, each CAVE2 screen displays the ID of the relevant data cube.</ns0:p><ns0:p>Screen selection and related functionalities. Each screen is selectable and interactive (such as the top-left screen in Figure <ns0:ref type='figure' target='#fig_6'>6A</ns0:ref>). Selection can be applied to one or many screens at a time by clicking its centre (and dragging for multi-selection). Screen selection currently enables four functionalities:</ns0:p><ns0:p>1. Highlight a screen within the CAVE2. Once selected, a coloured frame is displayed to give a quick visual reference (Figure <ns0:ref type='figure' target='#fig_7'>7B</ns0:ref>).</ns0:p><ns0:p>2. Unload data. By selecting one or more screens and clicking the unload button, the displayed data will be unloaded both within the controller and on the Process-Render Unit. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Swap (for comparative visualisation work)</ns0:head><ns0:p>. By selecting any two screens within the grid and clicking the swap button, the content (or absence of content if nothing is loaded on one of them) of the two displays will be swapped.</ns0:p></ns0:div> <ns0:div><ns0:head>4.</ns0:head><ns0:p>Request derived data products or quantitative information about a selected screen.</ns0:p><ns0:p>Integrating comparative visualisation and analysis as a unified system. As a proof of concept for the two-way communication, we currently have implemented two functionalities. The first function queries a specific data cube to retrieve an interactive histogram of voxel values (Figure <ns0:ref type='figure' target='#fig_6'>6D</ns0:ref>). The interactive histogram allows the user to dynamically alter the range of voxel values to be displayed. 
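On the Process-Render Unit side, serving such a query and applying the user-selected voxel range could look like the following NumPy-based sketch; it is illustrative only and not the actual encube routine.

```python
# Sketch: answer a histogram query and restrict the displayed voxel range.
import numpy as np


def histogram_reply(voxels: np.ndarray, bins: int = 64) -> dict:
    counts, edges = np.histogram(voxels, bins=bins)
    return {"counts": counts.tolist(), "bin_edges": edges.tolist()}


def apply_voxel_range(voxels: np.ndarray, vmin: float, vmax: float) -> np.ndarray:
    """Zero out values outside [vmin, vmax] before volume rendering."""
    return np.where((voxels >= vmin) & (voxels <= vmax), voxels, 0.0)


cube = np.random.normal(loc=0.02, scale=0.01, size=(64, 64, 64))  # stand-in data cube
print(histogram_reply(cube, bins=8)["counts"])
rendered_input = apply_voxel_range(cube, vmin=0.03, vmax=0.08)  # keep high-signal voxels only
```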
This approach is commonly used by astronomers working with low signal-to-noise data. The second function queries all plotted data cubes and summarizes the results in an interactive scatter plot (Figure <ns0:ref type='figure'>8</ns0:ref>; see Section 4.4 for further discussion). This example function enables neuroscientists to quickly access the number of connections between two given regions of the brain when comparing multiple MRI data files from control, pre-symptomatic and symptomatic individuals. Additional derived data products and quantitative information functionalities can easily be implemented based on specific user and use-case requirements.</ns0:p><ns0:p>Manual data reordering. To further support interactivity between the user and the environment, we included the option to manually arrange the order of the displayed data. For example, instead of relying only on automated sorting methods, a user may want to compare two objects displayed on the wall by manually placing them next to each another. This is achieved by clicking or tapping on the rectangular Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='4.3.3'>Interacting with data cubes: the scene and rendering parameters controllers</ns0:head><ns0:p>Once data is loaded onto one or more screens, the web interface enables synchronous interaction with all displayed data cubes. Figure <ns0:ref type='figure' target='#fig_9'>10</ns0:ref> shows an example interface of the scene controller and the parameters controller for a neuroimaging data cube survey. volume orientation, sampling size, etc.), a JSON string is submitted to the Manager Unit and relayed to the Process-Render Units, modifying the rendered geometry on the stereoscopic panels of all data cubes.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.4'>Isosurface and volume rendering in the browser</ns0:head><ns0:p>The interactive controller is based on ShareVol 12 , and is implemented in WebGL (GLSL language) and JavaScript. It has been developed with an emphasis on simplicity, loading speed and high-quality rendering. The aim being to keep it as a small and manageable library, as opposed to a full featured rendering library such as XTK 13 or VolumeRC 14 . Within the context of the browser and WebGL, we use a one-pass volume ray-casting algorithm, which makes use of a 2D texture atlas <ns0:ref type='bibr' target='#b25'>(Congote et al., 2011)</ns0:ref>, in order to avoid the use of dynamic server content. By doing so, it minimizes the stress upon the Manager Unit by porting the processing onto the client. The texture atlas of volume's slices is pre-processed and provided to the client upon loading. The same shader for both volume rendering and isosurfaces is used at both ends of the system (Client and Process-Render Units) to provide visualisations as homogeneous as possible.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>An example of execution</ns0:head><ns0:p>A typical encube use-case proceeds as follows. Firstly, the data set is gathered, and a list of survey contents is defined in a semi-structured file (currently CSV format). Secondly, a configuration file is prepared, which specifies information about how and where to launch each Process-Render Unit, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science describes the visualisation system, how to access the data, and so on. 
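As a concrete illustration, a hypothetical survey list and launch configuration are sketched below; the actual file formats used by encube may differ, and the host names, ports and galaxy entries are placeholders.

```python
# Hypothetical survey list (CSV) and launch configuration (illustrative only).
import csv
import io
import json

survey_csv = """cube_id,file,object,distance_mpc
0,ngc2403.fits,NGC 2403,3.2
1,ngc3031.fits,NGC 3031,3.6
2,ngc6946.fits,NGC 6946,5.9
"""
survey = list(csv.DictReader(io.StringIO(survey_csv)))

config = {
    "data_dir": "/mnt/survey/things",
    "manager": {"host": "head-node", "http_port": 8000},
    "process_render_units": [
        {"host": f"node{n:02d}", "tcp_port": 9000, "screens": 4} for n in range(1, 21)
    ],
}
print(len(survey), "cubes;", len(config["process_render_units"]), "render nodes")
print(json.dumps(config["manager"]))
```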
Using the Manager Unit, each of the Process-Render Units is launched.</ns0:p><ns0:p>Once operating, the Manager Unit opens TCP sockets to each of the Process-Render Unit for further message passing. Still using the Manager Unit, the web server is launched to enable connections with Interaction Units. Once a Client accesses the controller web page, it can start interacting with the displays through the available actions described in Section 4.3.</ns0:p><ns0:p>The workflow serialization, which stores the system's state at each time step, can be reused by piping a specific state to the Process-Render Units and/or to the Interaction Units to reestablish a previous, specific visualisation state.</ns0:p><ns0:p>An example of executing the application on a local desktop -using a windowed single column of four Display Units -is shown in Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref>. When using encube locally, it is possible to connect and control the desktop simply by using a mobile device's browser and using a personal proxy server (e.g. Squid 15 or SquidMan 16 ). Using a remote client helps maximizing the usage of display area of the local machine. </ns0:p></ns0:div> <ns0:div><ns0:head n='5'>TIMING EXPERIMENT</ns0:head><ns0:p>We systematically recorded loading time and frame rates for both neuroscience data (IMAGE-HD, <ns0:ref type='bibr'>-Karistianis et al., 2013)</ns0:ref>, and astronomy data (THINGS, <ns0:ref type='bibr' target='#b95'>Walter et al., 2008)</ns0:ref>. The desktop experiment was completed using Linux (CentOS 6.7), with 16GB of RAM, 12 Intel Xeon E5-1650</ns0:p></ns0:div> <ns0:div><ns0:head>Georgiou</ns0:head><ns0:p>Processors, and a NVIDIA Geforce GTX 470 graphics card. Details about the Monash CAVE2 hardware can be found in Section 1.2 and in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. While a direct comparison between both systems is somewhat contrived, we designed our experiment to be representative of a user experience. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>Neuroscience data. The IMAGE-HD dataset contains tracks files. Tracks represent connections between different brain regions. Each track is composed of a varying number of segments, represented by two spatial coordinates in the 3D space. When loading tracks data, the software calculates a colour for each track -a time consuming task. Pre-processed tracks files -which saves the track color informationcan be stored to accelerate the loading process.</ns0:p><ns0:p>We report timing for the unprocessed and pre-processed tracks data. In addition, as each file comprises a large number of tracks (e.g. &#8805; 2 6 tracks; &#8805; 8 &#215; 10 7 points), we present timing when loading all tracks, and when subsampling the total number of tracks by a factor of 40.</ns0:p><ns0:p>The 80 tracks files used in this experiment amounts to &#8764;39 GB, with a median file size of 493 MB.</ns0:p><ns0:p>We repeated the timing experiment three times for each action recorded, and we present the median time in seconds (s). For the Monash CAVE2, we load four files per column, and report the median time from the 20 columns loading data in parallel. On the desktop, we loaded one column of four files. The results are shown in Table <ns0:ref type='table'>4</ns0:ref>. To evaluate the rendering speed, in frames per second, we used the auto-spin routine of S2PLOT and recorded the internally-measured S2PLOT frame rate. We report the number of frames per second, along with the S2PLOT area (e.g. 
Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>).The results are shown in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. Median frame rate per column using the Monash CAVE2 and a desktop computer for IMAGE-HD data. We note that numbers are limited by the refresh rate of the display screens. The slight difference in load time may be related to networking, having to fetch data from a remote location, while the desktop simply has to read from a local disk. Nevertheless, timing results are comparable. It is also worth noting that within the CAVE2, after the time reported, 80 volumes are ready for interaction, whereas on the desktop, only four are available.</ns0:p></ns0:div> <ns0:div><ns0:head>Measurement</ns0:head><ns0:p>Astronomy data. A similar approach was used for the THINGS dataset. The major difference between the two dataset is the nature of the data. The IMAGE-HD data consist of multiple tracks that needs to be drawn individually using a special geometry shader. A small volume 17 (mean brain, 2 MB) is volume-rendered to help understand the position of the tracks in 3D space relative to the physical brain. In the case of the astronomy data, the data cube is loaded as a 3D texture onto the GPU and volume rendered.</ns0:p><ns0:p>In this experiment, we load the entire volume on both the CAVE2 and the desktop.</ns0:p><ns0:p>The 80 data cube files used in this experiment sums up to &#8776;28 GB, with median file size of 317 MB.</ns0:p><ns0:p>In this experiment, the files are not pre-processed. Instead, we load the FITS 18 files directly. On load, 17 The volume is stored in the XRW format that can be accessed in parallel. The XRW format is a minimalist volumetric file, stored in compressed binary form. Files contain integer nx, ny and nz pixel dimensions, float wx, wy and wz pixel physical sizes, the data block as unsigned byte per pixel, and a 256 entry RGB colour table using 3 bytes per colour. Tools to read, write and manipulate the XRW format are published at https://github.com/mivp/s2volsurf. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>values are normalized to the range [0,1] and the histogram of voxels distribution is evaluated. We do not yet proceed with caching. We report the total loading time, which includes normalization and histogram evaluation, and the frame rates obtain with the same methodology as reported in the previous section.</ns0:p><ns0:p>Median load time in the CAVE2 and on the desktop is 41.96 and 40.14 seconds respectively. Frame rate results are in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Median frame rate per column using the Monash CAVE2 and a desktop computer for THINGS data. slowing down the frame rate. However, using the web controller, the user can simply send a camera position which avoids having to rely purely on frame rates and animation to reach a given viewing angle.</ns0:p></ns0:div> <ns0:div><ns0:head>Measurement</ns0:head><ns0:p>As can be expected, the CAVE2 enables much more data to be processed, visualised and analysed simultaneously. For a similar loading period, the CAVE2 provides access to 20 times the data accessible on the desktop. This result comes from the large amount of processing power available, and the distributed model of computation. 
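For reference, the load step described above for the astronomy cubes (read a FITS file, normalize voxel values to [0, 1], and evaluate the voxel histogram) can be sketched in a few lines. This sketch uses astropy and NumPy and is illustrative only; it is not the actual encube loader, and the example file name is a placeholder.

```python
# Sketch of the FITS load, normalization and histogram step (illustrative only).
import numpy as np
from astropy.io import fits


def load_cube(path: str, bins: int = 256):
    voxels = fits.getdata(path).astype(np.float32)
    vmin, vmax = np.nanmin(voxels), np.nanmax(voxels)
    normalized = (voxels - vmin) / (vmax - vmin)          # rescale to [0, 1]
    finite = normalized[np.isfinite(normalized)]
    counts, edges = np.histogram(finite, bins=bins)       # voxel distribution
    return normalized, counts, edges


# Example (assumes a local FITS cube is available):
# cube, counts, edges = load_cube("ngc2403.fits")
```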
Nevertheless, the desktop solution can be considered useful in comparison, as researchers can evaluate a subset of data, and rely on the same tools at their office and within the CAVE2.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head n='6.1'>About the design</ns0:head><ns0:p>Our system offers several advantages over the classical desktop-based visualisation and analysis methodology where one data cube is examined at a time. The primary advantage is the capability to compare and contrast of order 100 data cubes, each individually beyond the size a typical workstation can handle.</ns0:p><ns0:p>Similar to the concept of 'Single Instruction, Multiple Data' (SIMD), this distributed model of processing and rendering leads one requested action to be applied to many data cubes in parallel -which we now call 'Single Instruction, Multiple Views', and 'Single Instruction, Multiple Queries'. This means that instead of repeating an analysis or a visualisation task over and over from data cube to data cube, the design of encube has the ability to spawn this task to multiple data cubes seamlessly. With this design, it becomes trivial to send a particular request to specific data cubes without (or with minimal) requirements for the user to write code.</ns0:p><ns0:p>As we showed in Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref>, the design also permits comparative visualisation and analysis of multiple data cubes on a local desktop -an advantage over the classical, 'one-by-one' inspection practice.</ns0:p><ns0:p>However, this mode of operation has limitations. Depending on the size of the data cubes, only a limited few can effectively be visualised at a time given the amount of RAM the desktop computer contains.</ns0:p><ns0:p>Also, the available display area (e.g. on a single 25-inch monitor) is a limiting factor when comparing high-resolution data cubes. When comparing multiple high-resolution data cubes on a single displaydepending on the display device -it is likely that they will be compared at a down-scaled resolution due to the lack of display area. Nevertheless, it is feasible, and useful as it enables researchers to use the same tools with or without a tiled-display environment.</ns0:p><ns0:p>It is important to keep in mind that modern computers are rarely isolated; most computers have access to internet resources. In this context, the system design permits the Process layer to be located on remote machines. Hence, a researcher with access to remote computing such as a supercomputer or cloud-computing infrastructure could execute the compute-intensive tasks remotely and retrieve the results back to be visualised on a local desktop. In this setting, the communication overhead might become an issue depending on the amount of data required to be transferred over the network. Nevertheless, it may well be worth it given the processing power gained in the process. For example, it is worth noting that much of the development of encube was accomplished using the MASSIVE Desktop <ns0:ref type='bibr'>(Goscinski et</ns0:ref> Manuscript to be reviewed <ns0:ref type='bibr'>2014)</ns0:ref>. 
In this mode of operation, it was feasible to load 3 or 4 columns (9 or 12 data sets) on a single MASSIVE node (having typically 192GB RAM).</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='6.2'>About the implementation</ns0:head><ns0:p>As instruments and facilities evolve, they generate data cubes with an increasing number of voxels per cube, which then requires more storage space and more RAM for the data to be processed and visualised.</ns0:p><ns0:p>Consider the WHISP and IMAGE-HD projects introduced in Section 1. Each WHISP data cube varies from &#8764; 32 megabyte to &#8764; 116 megabyte in storage space, and data collected after the update of the Westerbork Synthesis Radio Telescope for the APERTIF survey <ns0:ref type='bibr' target='#b81'>(R&#246;ttgering et al., 2011)</ns0:ref> will be &#8764; 250 GB per data cube <ns0:ref type='bibr' target='#b72'>(Punzo et al., 2015)</ns0:ref>. In the case of IMAGE-HD, the size of a tractography dataset is theoretically unlimited as many tractography algorithms are stochastic. For example, as we zoom into smaller regions of the brain, more tracks can be generated in realtime leading to a larger storage requirement <ns0:ref type='bibr' target='#b74'>(Raniga et al., 2012)</ns0:ref>. To compare a large number of data cubes concurrently, this data volume effectively limits the number of individual file that can be loaded on the desktop computer in its available RAM. Additionally, the display space of desktop is generally limited to one or a few screens.</ns0:p><ns0:p>In the tiled-display context, a clear advantage of our approach is the ability to visualise and compare many data cubes at once in a synchronized manner. <ns0:ref type='bibr' target='#b41'>Fujiwara et al. (2011)</ns0:ref> showed that comparative visualisation using a large-scale display like the CAVE2 is most effective with sychronised cameras for 3D data. Indeed, we found from direct experimentation in the Monash CAVE2 that synchronized cameras are extremely practical, as they minimize the number of interactions required to manipulate a large number of individual data cubes (up to 80 in our current configuration). In addition, encube enables rendering parameters to be modified or updated synchronously as well.</ns0:p><ns0:p>Another advantage comes through the available display area and available processing power. Not only does the display real estate of the CAVE2 provide many more pixels than a classical desktop monitor, it also enhances visualisation capabilities through a discretized display space. It is interesting to note that bezels around the screens can act as visual guides to organise a large number of individual data cubes.</ns0:p><ns0:p>Moreover, our solution can go beyond 80 data cubes at once by further dividing individual screens. This can be achieved in encube by increasing the number of S2PLOT panels from, for example, 4 to 8 or 16 (see Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>).</ns0:p><ns0:p>By nature, the CAVE2 is a collaborative space. Not only can we display many individual data cubes from a survey, but a team can easily work together in the physical space. The physical space gives enough room for researchers to walk around, discuss, debate and divide the workload. The large display space also enables researchers to exploit their natural spatial memory of interesting data and/or similar features. 
Building on this configuration by adding analysis functionalities via the portable interaction device provides researchers with a way to query and interact with their large dataset while still being able to move around the physical space.</ns0:p><ns0:p>An interesting aspect of the encube CAVE2 implementation is the availability of stereoscopic displays. Historically, most researchers visualise high-dimensional data using 2D screens. While 2D slices of a volume are valuable, being able to interact with the 3D data in 3D can lead to an improved understanding of the data, its sub-structures, and so on. Since looking at the data in 3D tends to improve comprehension, we anticipate that the ability to compare multiple volumes in 3D should also be beneficial.</ns0:p><ns0:p>Moreover, access to quantitative information enables further understanding of the visualised data. In the current setting, the grid of displayed volumes uses subtle 3D effects that are not tracked to the viewer's position. There is an opportunity to improve this aspect in the future, should it be deemed necessary.</ns0:p><ns0:p>As we mentioned in Section 4.3, further work is required to fully support concurrent users interacting with the system without behavioural anomalies, and to keep multiple devices informed of others' actions. This would represent a major leap towards multi-user data visualisation and analysis of data cube surveys. However, we note that the current system already enables collaborative work where one researcher drives the system (using an Interaction Unit), and other researchers observe, query, and discuss the content.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.3'>On the potential to accelerate research in large surveys</ns0:head><ns0:p>Now that we have presented a functional and flexible system, we can speculate on its potential. encube brings solutions to the three common issues of comparative visualisation presented by Lunzer and Hornbaek (2003, Section 2): 1) the requirement for a high number of interactions - minimised by the parallelism of action over multiple data cubes; 2) the difficulty in remembering what has been previously seen - addressed by real-time interactivity and workflow serialization; and 3) the difficulty in organising the exploration process - multiple views spread over multiple screens, and multiple mechanisms to organise data and visualisations.</ns0:p><ns0:p>As the system is real-time and dynamic (as shown in Section 5), it represents an alternative methodology for researchers to investigate their data. Through interactivity and visual feedback, one does not have time to forget what (s)he was looking for when (s)he triggered an action (load a data cube to a panel, query data, rotate the volume). Moreover, by keeping a trail of all actions through workflow serialization, the system acts as a backup memory for the user. A user can look back at a specific environment setup (data organised in a particular way, displayed with a specific angle, with specific rendering characteristics, etc.). Another interesting way of using serialization would be checkpointing, where users could bookmark a moment in their workflow, and quickly switch between bookmarks.</ns0:p><ns0:p>Machine learning algorithms, and other related automated approaches, are expected to play a prominent role in classifying and characterising the data from large surveys. As an example of how encube can help accelerate a machine learning workflow, we consider its application to supervised learning. Supervised learning infers a function to classify data based on a labeled training set. To do so with accuracy, supervised learning requires a large training set (e.g.
<ns0:ref type='bibr' target='#b17'>Beleites et al., 2013)</ns0:ref>. The acceleration of data labeling can be achieved via the ability to quickly load, move, and collaboratively evaluate a subsample of a large dataset in a short amount of time. Extra modules could easily be added to the controller to accomplish such a task. Some form of machine learning could also be incorporated to the workflow in order to characterise data prior to visualisation, and help guide the discovery process. This could lead to interesting research where researchers and computers work collaboratively towards a better understanding of features of interest within the dataset.</ns0:p><ns0:p>Through the use of workflow serialization, users can revisit an earlier session, and more easily return to the CAVE2 and continue where they left off previously. We think that this system feature will help to generate scientific value. Users will not simply get a nice experience by visiting the visualisation space, but they will be supported to gather information for further investigation, that will lead to tangible new knowledge discoveries. Moreover, this feature enables asynchronous collaboration, as collaborators can review previous work from other members of a team and continue the team's steps towards a discovery.</ns0:p><ns0:p>We assert that this is a key feature that will help scientists integrate the use of large visualisation systems more easily as part of their work practice.</ns0:p><ns0:p>With the opportunity to look at many data cubes at once at high resolution, it becomes quickly apparent that exploratory science requiring comparative studies will be accelerated. Many different research scenarios can be envisioned through the accessibility of qualitative, comparative and quantitative visualisation of multiple data cubes. For instance, by setting visualisation parameters (e.g. sigma clipping, or a common transfer function), a researcher can compare the resulting visualisations, giving insights about comparable information between data cubes. Similarly, one can compare a single data cube from different camera angles (top-to-bottom, left-to-right, etc.) at a fixed moment in time, or in a time-series (side by side videos seen from a different point-of-view).</ns0:p><ns0:p>Finally, an interesting research scenario of visual discovery using encube would be to implement the lineup protocol from <ns0:ref type='bibr' target='#b21'>(Buja et al., 2009)</ns0:ref>, wherein the visualisation of a single real data set is concealed amongst many decoy (synthetic) visualisations -and trained experts are prompted to confirm a discovery.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Scientific research projects utilising sets of structured multidimensional images are now ubiquitous. The classical desktop-based visualisation and analysis methodology, where one multidimensional image is examined at a time by a single person, is highly inefficient for large collections of data cubes. It is hard to do comparative work with multiple 3D images, as there are insufficient pixels to view many objects at a reasonable resolution at one time. True collaborative work with a small display area is challenging, as multiple researchers need to gather around the desktop. 
Furthermore, the available local memory of a desktop limits the number of 3D images that can be visualised simultaneously.</ns0:p><ns0:p>By considering the specific requirements of medical imaging and radio astronomy (as exemplified by the IMAGE-HD project and the WHISP galaxy survey), we identified the CAVE2 as an ideal environment for large-scale collaborative, comparative and quantitative visualisation and analysis. Our solution, implemented as encube, favours an approach that is: flexible and interactive; allows qualitative, quantitative, and comparative visualisation; provides flexible (meta)data organisation mechanisms to suit scientists; allows the workflow history to be maintained; and is portable to systems other than the Monash CAVE2 (e.g. to a simple desktop computer).</ns0:p><ns0:p>From our review of comparative visualisation systems, we noted that many of the discussed features are desirable for the purpose of designing a system aimed at large-scale data cube surveys. We also noted unexplored avenues. Previous research mainly focused on rendering a visualisation seamlessly over multiple displays and enabling interaction with the visualisation space. However, the following questions remained open: 1) how to integrate comparative visualisation and analysis into a unified system; 2) how to document the discovery process; and 3) how to enable scientists to continue the research process once back at their desktop.</ns0:p><ns0:p>Building on these desirable features, we presented the design of encube, a visualisation and analysis system. We also presented an implementation of encube, using the CAVE2 at Monash University as a testbed. We also showed that encube can be used on a local machine, enabling the research started in the CAVE2 to be continued at one's office, using the same tools. The implementation is a work in progress. More features and interaction models can, and will, be added as researchers request them. We continue to develop encube in collaboration with researchers, so that they can integrate our methods into their work practices.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Systems design schematics from: (A) Son et al. (2010); (B) Lau et al. (2010); (C) Fujiwara et al. (2011); and (D) Gjerlufsen et al. (2011). All designs are predicated upon a one-way communication model, where a controller sends a command to control how visualisations are rendered on the tiled-display. All models primarily focussed on enabling interaction with the visualisation space. Designs A, C and D also enable a visualisation to be rendered over multiple displays.</ns0:figDesc><ns0:graphic coords='7,150.74,406.51,395.89,123.37' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. System design schematic. Our solution integrates visualisation and analysis into a unified system. The input/output layer includes units to visualise data (Display Units) and interact with visualisations (Interaction Units). The Process layer includes units to manage communication and data (Manager Unit), and to process and render visualisations (Process-Render Units).
This system has two-way communication, enabling the Interaction Unit to query both the Manager Unit and the Process-Render Units, and retrieve the requested information. This information can then be reported on the Interaction Unit directly, reducing the amount of information to be displayed on the Display Units.</ns0:figDesc><ns0:graphic coords='8,273.40,558.39,106.05,51.98' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Components of encube depicted in the context of the Monash CAVE2. The Interaction Units permit researchers to control and interact with what is visualised on the 80 screens of the CAVE2. The Interaction Units communicate with the Manager Unit via HTTP methods. A command sent from a client is passed by the Manager Unit to the Process-Render Units, which execute the action. The result is either drawn on the Display Units or returned to the Interaction Unit.</ns0:figDesc><ns0:graphic coords='11,178.04,32.09,261.76,160.28' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. S2PLOT area (blue) covering all four Planar Matrix LX46L 3D LCD panels of a column. Each screen has resolution of 1366 &#215; 768 pixels. The S2PLOT area is divided into four independent panels, with the size matched to the physical display resolution (pink).</ns0:figDesc><ns0:graphic coords='12,249.31,196.21,198.43,169.08' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>7Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. A screenshot of the meta-controller, configured for comparative visualisation of data from the WHISP (van der Hulst et al., 2001), HIPASS (Barnes et al., 2001), and THINGS (Walter et al., 2008) galaxy surveys. (A) A miniature representation of the Monash CAVE2's 4 rows by 20 columns configuration; (B) action buttons; (C) the dataset viewer (allows the survey to be sorted by multiple criteria); and (D) request/display quantitative information about data from one or multiple screens (e.g. a histogram is shown here for galaxy NGC1569 currently on screen A1).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. (A) Photograph of a subset of five out of the 20 four-panel columns of the Monash CAVE2. (B) Selecting screens within the meta-controller leads to the display of a frame (pink) around the selected screens in the CAVE2. (C) Visualisation outputs showing different galaxy morphologies taken from the THINGS galaxy survey<ns0:ref type='bibr' target='#b95'>(Walter et al., 2008)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .Figure 9 .</ns0:head><ns0:label>89</ns0:label><ns0:figDesc>Figure 8. A screenshot of the iPad within the Monash CAVE2 showing the integration between analysis and comparative visualisation: querying all plotted brain data and summarizing the results in an interactive scatter plot. In this example, neuroscientists can quickly summarize the number of plotted tracks per brain for 80 different individuals in relation to their age of the individual. It allows data from control (Controls), pre-symptomatic (PreHD) and symptomatic (SympHD) individuals to be compared.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. A screenshot of the interactive web interface. 
The interface comprises (A) a scene controller, which displays an interactive reference volume and (B) a set of parameter controls for modifying quantities such as sampling size, opacity, colour map and isosurface level.</ns0:figDesc><ns0:graphic coords='17,149.66,125.53,397.05,228.72' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. encube can be used on a single desktop, hence enabling its use outside the CAVE2 environment. We show encube running on Linux (CentOS 6.7), with 16GB of RAM and a NVIDIA Geforce GTX 470 graphics card. The screenshot displays the Input/output layer: (A) a Display Unit comprising one column of four S2PLOT panels; and (B) an Interaction Unit. Four individual brains are plotted (A), along with their internal connections (green and orange tracks) starting from a region of the occipital lobe in the left hemisphere (blue region). A close up of a dynamically generated scatter plot is also shown (C). It gathers quantitative information from all plotted brains, showing the number of displayed tracks per brain in relation with the subject's age. The Process layer is running behind the scene.</ns0:figDesc><ns0:graphic coords='18,150.22,235.12,397.78,227.07' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>18 http://fits.gsfc.nasa.gov 18/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>brings solutions to the three common issues of comparative visualisation presented by Lunzer and Hornbaek (2003, Section 2): 1) requirement of high number of interactions -minimized by the parallelism of action over multiple data cubes; 2) difficulty in remembering what has been previously seen -real-time 20/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>and comparative visualisation; provides flexible (meta)data organisation mechanisms to suit scientists; 21/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparing</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Requirement</ns0:cell><ns0:cell>Son+ Lau+ Fujiwara+ Gerjlufsen+ This work</ns0:cell></ns0:row><ns0:row><ns0:cell>Visualisation class</ns0:cell><ns0:cell>Qualitative</ns0:cell></ns0:row></ns0:table><ns0:note>systems from<ns0:ref type='bibr' target='#b84'>Son et al. (2010)</ns0:ref>,<ns0:ref type='bibr' target='#b60'>Lau et al. (2010</ns0:ref><ns0:ref type='bibr' target='#b41'>), Fujiwara et al. (2011</ns0:ref><ns0:ref type='bibr' target='#b43'>), Gjerlufsen et al. (2011)</ns0:ref>, and this work based on our user requirements. Acronyms: query one volume (QOV); query multiple volume (QMV); modify visualisation property (e.g. colormap, transparency; MVP). A blank indicates that a topic is not discussed in related literature. Reorder is a cascading reordering event after manually moving a volume from one screen to another (see Section 4.3.2).</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Table2). The Process layer is represented by a server node (Manager Unit) communicating with a 20-node cluster (Process-Render Units). 
Each node is controlling a column comprising four 3D-capable screens (Display Units). Finally, laptops, tablets, or smart phones (Interaction Units) connect to the Manager Unit using the Hypertext Transfer Protocol (HTTP) via a web browser. On the software side, three main programs are connected together to form a visualisation and analysis application system: a data processing and encube's layers and units mapped to the Monash CAVE2 hardware.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Unit</ns0:cell><ns0:cell>Monash CAVE2 hardware</ns0:cell></ns0:row><ns0:row><ns0:cell>Process layer</ns0:cell><ns0:cell cols='2'>Process-Render Units 20-node cluster, where each node comprises:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Dual 8-core CPUs</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>192 GB of RAM</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>2304-core NVIDIA Quadro K5200 graphics card</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1536-core NVIDIA K5000 graphics cards</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Manager Unit</ns0:cell><ns0:cell>Head node, comprising:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Dual 8-core CPUs</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>256 GB ram</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>2304-core NVIDIA Quadro K5200 graphics card</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Input/output layer Interaction Units</ns0:cell><ns0:cell>Tablets, smartphones, or laptops, in particular:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>iPad2 A1395</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Display Units</ns0:cell><ns0:cell>20 columns &#215; 4 rows of stereoscopic screens,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>where each screen is:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Planar Matrix LX46L 3D LCD panel</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1366 &#215; 768 pixels</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>46' diagonal</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>4.1 Process-Render Units</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>All processing and rendering of volumetric data is implemented as a custom 3D visualisation application -</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>encube-PR. As processing and rendering is compute intensive, encube-PR is written in C, OpenGL</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>and GLSL, and based on S2PLOT (Barnes et al., 2006). Each Processor-Renderer node runs one</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>instance of this application. The motivation behind our choice of using S2PLOT, an open source three-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>dimensional plotting library, is related to its support of multiple panels 'out of the box' and stereoscopic</ns0:cell></ns0:row></ns0:table><ns0:note>rendering program (incorporating volume, isosurface and streamline shaders -instantiated on every Process-Render Unit), a manager program (instantiated on the Manager Unit), and an interaction program (instantiated on the Interaction Units). Schematics of the implementation, and how it relates to the general design, are depicted in Figures3 and 4respectively. rendering capabilities. 
It is also motivated by S2PLOT's multiple customizable methods to handle OpenGL 5 callbacks for interaction, and its support of remote input via a built-in socket. Moreover, S2PLOT can be programmed to share its basic rendering transformations (camera position etc) with other instances. Custom shaders are supported in the S2PLOT pipeline, alongside predefined graphics9/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Median load time per column (in seconds) using the Monash CAVE2 and a desktop computer for IMAGE-HD data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Measurement Tracks fraction</ns0:cell><ns0:cell>Option</ns0:cell><ns0:cell cols='2'>CAVE2 Desktop</ns0:cell></ns0:row><ns0:row><ns0:cell>Load time (s)</ns0:cell><ns0:cell cols='2'>Subsampled tracks unprocessed</ns0:cell><ns0:cell>4.82</ns0:cell><ns0:cell>3.48</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>pre-processed</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>All tracks</ns0:cell><ns0:cell>unprocessed</ns0:cell><ns0:cell>5.02</ns0:cell><ns0:cell>20.82</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>pre-processed</ns0:cell><ns0:cell>1.76</ns0:cell><ns0:cell>12.77</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>With astronomical data, the loading times per column for CAVE2 and desktop are comparable.As the number of data cubes (and the number of voxels per cube) increases, the memory will tend to saturate,</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Render mode</ns0:cell><ns0:cell>CAVE2</ns0:cell><ns0:cell>Desktop</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>S2PLOT area (pixel) -</ns0:cell><ns0:cell cols='2'>1366 &#215; 3072 600 &#215; 1200</ns0:cell></ns0:row><ns0:row><ns0:cell>Frames/s</ns0:cell><ns0:cell>Mono</ns0:cell><ns0:cell>22.5</ns0:cell><ns0:cell>19.6</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Stereo</ns0:cell><ns0:cell>11.2</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>al., </ns0:figDesc><ns0:table /><ns0:note>19/27PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='3'>http://www.cgl.ucsf.edu/chimera 4 http://brainvisa.info/web/index.html 5/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='5'>https://www.opengl.org</ns0:note> <ns0:note place='foot' n='6'>https://www.python.org 11/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10577:1:0:NEW 25 Aug 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Centre for Astrophysics & Supercomputing Swinburne University of Technology 1 Alfred Street Hawthorn VIC Australia 3122 Tel +61 3 9214 8708 dvohl@swin.edu.au August 24th, 2016 Dear Professor Sánchez, ​ Vohl ​et al., “Large­scale comparative visualisation of sets of multidimensional data”, ref: #CS­2016:05:10577:0:1:REVIEW, Article ID: 10577. We thank the reviewers for their time and generous comments on the manuscript, which we have revised to address their concerns. We are pleased to find both reviewers acknowledged our work as being of good quality. Following their detailed and helpful suggestions, we have refined our manuscript to better highlight our research questions, our scientific contribution, and how they relate to literature. We also included a new experiment section, along with additional tables, figures, and examples highlighting how ​encube meets our user requirements for our two targeted test cases (medical imaging and astronomy), where other systems do not. We also highlight that part of the discussion speculates ​ on ​encube's potential based on the system's features presented throughout the article. We gave a great deal of consideration to Reviewer 2’s request for a user study – noting that Reviewer 1 did not identify that such a study was required. The primary focus of this paper is to present our technical solution solving our requirements for visualization led discovery in large scale data cube surveys; requirements previously unmatched based on our review of previous work. We believe a user study would change the direction and purpose of the work we are presenting in this paper. Hence, we do not consider a user experiment for this particular paper. We believe concepts we introduce, namely integrating comparative and quantitative visualization and analysis into a unified system, and workflow serialization in the context of tiled­displays are novel. We believe the article is now suitable for publication in PeerJ Computer Science. Dany Vohl Ph.D. Candidate On behalf of all authors. We respond here to the specific comments and feedback of the Editor and Reviewers. Editor (Luciano Sánchez) The technical quality of the draft could be further improved. The scientific contributions should be clearly stated and supported by data thus the reader can perceive the novelty of this work. We have modified our manuscript to improve the technical quality of the draft. We also modified portions of the article, including the introduction, to clearly state our scientific contributions, which is supported by data (we have a functional piece of software). We introduced a new section, Section 5, presenting the results of a timing experiment comparing performances of ​encube when used within the CAVE2 and on a personal desktop computer. The article also presents real world results, described in the text and figures. Reviewer 1 (Luis Junco) “​Section 3 is somewhat generic in terms of the design description.” We have modified the section ​ header from “Design” to “Overview of ​encube”, clarifying that we are outlining the different parts and concepts of encube that will be further explained in Section 4. “​Section 4, and more specifically 4.3, provides much space to the description of the control software implementation, but only describes a few examples. 
(...)” We have modified Section 4.3.2 (in particular “Integrating comparative visualisation and analysis as a unified system”) to better highlight our two functionalities used as proof of concepts, both for astronomy (interactive histogram) and medical imaging (automatically generated scatter plot, showing the number of neural connections for all visualised brains). “​It could be explained in this section the appropriate set of tools and functionalities for the analysis of the two proposed problems (...) Regard this second problem is cited in the article but later nothing is explained about him.” We do not discuss all potential features that scientists in both field may want to implement as it would be outside the scope of the paper. We do mention however that additional functionalities are to be implemented based on specific user and use­case requirements. In addition, we added an extra panel to Figure 7 displaying several galaxy morphologies visualised with ​encube to give the reader an idea of what galaxy morphologies can look like. Reviewer 2 “​There is no research question.” As pointed out later on by Reviewer 2, our research questions are related to : 1) how to integrate comparative visualisation and analysis into a unified system; 2) how to document the discovery process; and 3) how to enable scientists to continue the research process once back at their desktop. We modified the introduction from an active form (e.g. We extend these solutions by considering [these three points]) to question form to better accentuate our research questions. “​There is no experiment.”; “There is no data gathering.” As shown in figures 7 and 10, we have experimented with our system to show that: 1) we properly enabled both comparative visualisation and quantitative analysis to run dynamically; 2) the system properly enables integration of real­world functionalities such as the interactive histogram for astronomy, the interactive scatterplot for medical imaging, and the recording of workflow; 3) the system can be executed as a standalone software. In addition, we conducted a new timing experiment where we evaluate loading time and frame rate for both neuroscience and astronomical datasets. This experiment highlights the responsiveness of the system while dealing with multiple large files simultaneously. Results of this experiment are now included as Section 5. In this section, we compare performances (load time and frame rate) between ​encube running within the Monash CAVE2 and on a personal desktop computer. We report that for a similar loading period, the CAVE2 provides access to 20 times the data accessible on the desktop. We also report that frame rates vary depending on the rendering method (e.g. tracks versus volume rendering; rendering in mono or in stereo). “​The conclusions should be appropriately stated, should be connected to the original question investigated, and should be limited to those supported by the results. ­> This is not ok in the writing.” We modified our manuscript to better connect our conclusions with our original questions. “​Speculation is welcomed, but should be identified as such. ­> It is not identified as speculative.”; “The conclusions are mostly subjective.” We modified the title for section 5.3 and clarified its discussion to better highlight that it includes speculations based on the real system's features described throughout the article. “​There is no doubt that the authors have put great effort into this work, and that it seems to present relevant features. 
However, the text has some serious flaws. Most of them seem to be linked to the excessive description of the system, which makes the paper miss points that are relevant in scientific literature: evaluation and the detection of the contributions of the work using objective data.” We included a new table (Table 1) comparing our system with previous work based our user requirements. We show that none of the previous systems met all requirements. In particular, none of the previous systems included quantitative mechanisms to evaluate visualised data. We also include new interaction mechanisms for comparative visualisation (see Table 1). Furthermore, none of the previous work considered recording the workflow history – which is a key feature to continue the data exploration subsequently to a single session. Finally, none of the previous work discussed using their software as a standalone solution. Since not all researchers have continuous access to a CAVE2 or other tiled­display, this new feature provides a means to continue the comparative work outside of an advanced display environment. “​It would be very useful for the reader to have some (two? Three?) figures with examples of standard techniques and how they work. Also, why are there three review papers on the subject in three consecutive years? It can be the case that there was a lot of innovation in the field in these years. Could the authors state what is the contribution of each of these reviews?” We modified the first paragraph of the introduction to highlight the differences between all three references. We did not include figures for standard techniques as this is outside the scope of this paper. “​I disagree with that. There are many fields in science that are oblivious about data cubes. Again, some figures and a deeper discussion would greatly increase the appeal of this paper.” We modified the statement and included a paragraph giving examples of fields that use data cubes in their science. 'Along “Related Work”: again, I miss some figures that could highlight the differences between visualization systems proposals and the preceeding ones. It would probably be a good idea to only select the two or three that provided inspiration for encube.” We added a figure in Section 2 showing system design schematics from the four systems that provided inspiration for ​encube. In addition, in Section 3, we added a table (Table 1) comparing these systems with ours regarding our user requirements. “​In Section 5.3 > Without an user study, all of these discussions are based on anedoctal data or speculation.” We modified the title for section 5.3 and clarified its discussion to better highlight that it includes speculations based on the real system's features described throughout the article. “​Although the authors state that there are three important questions (“1) how to integrate comparative visualisation and analysis into a unified system; 2) how to document the discovery process; and 3) how to enable scientists to continue the research process once back at their desktop.”). However (...) (…) “1) I am not convinced that this was discussed along the article. Again, the reader needs at least some comparative figures displaying how encube is different from previous systems. Also, this only accounts for the proposal of the system: proving that this point was properly addressed requires an user study.” We added a mention in the caption of Figure 2 to better highlight this integration. 
We also renamed a subsection, now called “Integrating comparative visualisation and analysis as a unified system”, in Section 4.3.2 to better highlight this point. In a sense, the article describes in detail how this integration is made possible. We gave a great deal of consideration to the request for a user study. This work commenced because of our long­term collaboration with researchers from the IMAGE­HD study (including co­authors Georgiou­Karistianis, Poudel and Barnes) and with astronomers working on galaxy survey projects (including co­author Fluke). These research teams are facing specific challenges that we are addressing. The alternative solutions that we reviewed did not have the full set of capabilities that these two research teams needed. Current expertise and experience with desktop­bound solutions has shown us that they are not adequate to the type of large­scale classification tasks that motivated our requirements (section 1.1). ​Encube has developed to a stage that we are now bringing the science team leaders, and associates, into the CAVE2 to make progress on their specific research goals. Future publications will focus on the scientific outcomes and advances in early identification of Huntington’s disease and morphological (shaped­based) analysis of neutral hydrogen in galaxies. The primary focus of this paper is to present and make available a technical solution that satisfies new requirements in the 'big data' era, hitherto not delivered by existing visualisation solutions. The technical solution needs to be fit for purpose, which it is. In practical terms, the technical solution and its tools may then be used by research teams in their own workflow process. We believe it is this workflow that must be validated as a whole by a user study, not the individual tools. Hence, we do not consider a user experiment for this particular paper. We do however consider a timing experiment, which is now included as Section 5. (…) “2) Again, an user study is necessary to validate the proposed system. How did users interact with the discovery process history?” We did provide a description of mechanisms to store the workflow, which shows how such a concept can be achieved. Testing how users interact with it is another research question that we do not tackle in this study. (…) “3) By research, I am guessing the authors mean “discovery” or “exploration”? The server­based service is a good solution for this. The user study, however, should highlight whether the server­based approach actually improved the data exploration experience.” We did show that our system can be work on a single machine, and still execute comparative and quantitative visualisations (see Figure 10 and Section 5). Testing how users continued their data exploration is another research question that we do not tackle in this study. "
Here is a paper. Please give your review comments after reading it.
336
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>SHAP (SHapley Additive exPlanation) values provide a game theoretic interpretation of the predictions of machine learning models based on Shapley values. While exact calculation of SHAP values is computationally intractable in general, a recursive polynomial-time algorithm called TreeShap is available for decision tree models. However, despite its polynomial time complexity, TreeShap can become a significant bottleneck in practical machine learning pipelines when applied to large decision tree ensembles. Unfortunately, the complicated TreeShap algorithm is difficult to map to hardware accelerators such as GPUs. In this work, we present GPUTreeShap, a reformulated TreeShap algorithm suitable for massively parallel computation on graphics processing units. Our approach first preprocesses each decision tree to isolate variable sized sub-problems from the original recursive algorithm, then solves a bin packing problem, and finally maps sub-problems to single-instruction, multiple-thread (SIMT) tasks for parallel execution with specialised hardware instructions. With a single NVIDIA Tesla V100-32 GPU, we achieve speedups of up to 19x for SHAP values, and speedups of up to 340x for SHAP interaction values, over a state-of-the-art multi-core CPU implementation executed on two 20-core Xeon E5-2698 v4 2.2 GHz CPUs. We also experiment with multi-GPU computing using eight V100 GPUs, demonstrating throughput of 1.2M rows per second---equivalent CPU-based performance is estimated to require 6850 CPU cores.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Explainability and accountability are important for practical applications of machine learning, but the interpretation of complex models with state-of-the-art accuracy such as neural networks or decision tree ensembles obtained using gradient boosting is challenging. Recent literature <ns0:ref type='bibr' target='#b31'>(Ribeiro et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b33'>Selvaraju et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Guidotti et al., 2018)</ns0:ref> describes methods for 'local interpretability' of these models, enabling the attribution of predictions for individual examples to component features. One such method calculates so-called SHAP (SHapley Additive exPlanation) values quantifying the contribution of each feature to a prediction. In contrast to other methods, SHAP values exhibit several unique properties that appear to agree with human intuition <ns0:ref type='bibr' target='#b23'>(Lundberg et al., 2020)</ns0:ref>. Although exact calculation of SHAP values generally takes exponential time, the special structure of decision trees admits a polynomial-time algorithm. This algorithm, implemented alongside state-of-the-art gradient boosting libraries such as XGBoost <ns0:ref type='bibr' target='#b6'>(Chen and Guestrin, 2016)</ns0:ref> and LightGBM <ns0:ref type='bibr' target='#b20'>(Ke et al., 2017)</ns0:ref>, enables complex decision tree ensembles with state-of-the-art performance to also output interpretable predictions.</ns0:p><ns0:p>However, despite improvements to algorithmic complexity and software implementation, computing SHAP values from tree ensembles remains a computational concern for practitioners, particularly as model size or size of test data increases: generating SHAP values can be more time-consuming than training the model itself. 
We address this problem by reformulating the recursive TreeShap algorithm, taking advantage of parallelism and increased computational throughput available on modern GPUs. We provide an open source module named GPUTreeShap implementing a high throughput variant of this algorithm using NVIDIA's CUDA platform. GPUTreeShap is integrated as a backend to the XGBoost library, providing significant improvements to runtime over its existing multicore CPU-based implementation.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In this section, we briefly review the definition of SHAP values for individual features and the TreeShap algorithm for computing these values from decision tree models. We also review an extension of SHAP values to second-order interaction effects between features.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>SHAP Values</ns0:head><ns0:p>SHAP values are defined as the coefficients of the following additive surrogate explanation model g, a linear function of binary variables</ns0:p><ns0:formula xml:id='formula_0'>g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i \qquad (1)</ns0:formula><ns0:p>where M is the number of features, z' ∈ {0, 1}^M, and φ_i ∈ R. Here z'_i indicates the presence of a given feature and φ_i its relative contribution to the model output. The surrogate model g(z') is a local explanation of a prediction f(x) generated by the model for a feature vector x, meaning that a unique explanatory model may be generated for any given x. SHAP values are defined by the following expression:</ns0:p><ns0:formula xml:id='formula_1'>\phi_i = \sum_{S \subseteq M \setminus \{i\}} \frac{|S|! \, (|M| - |S| - 1)!}{|M|!} \left[ f_{S \cup \{i\}}(x) - f_S(x) \right] \qquad (2)</ns0:formula><ns0:p>where M is the set of all features and f_S(x) describes the model output restricted to feature subset S. Equation 2 considers all possible subsets, and so has runtime exponential in the number of features.</ns0:p><ns0:p>We consider models that are decision trees with binary splits. Given a trained decision tree model f and data instance x, it is not necessarily clear how to restrict the model output f(x) to feature subset S: when feature j is not present in subset S along a given branch of the tree, and a split condition testing j is encountered, how do we choose which path to follow to obtain a prediction for x? <ns0:ref type='bibr' target='#b23'>Lundberg et al. (2020)</ns0:ref> define a conditional expectation for the decision tree model, E[f(x)|x_S], where the split condition on feature j is represented by a Bernoulli random variable with distribution estimated from the training set used to build the model. In effect, when a decision tree branch is encountered, and the feature to be tested is not in the active subset S, we take the output of both the left and right branch. More specifically, we use the proportion of weighted instances that flow down the left or right branch during model training as the estimated probabilities for the Bernoulli variable. This process is also how the C4.5 decision tree learner deals with missing values <ns0:ref type='bibr' target='#b30'>(Quinlan, 1993)</ns0:ref>. It is referred to as 'cover weighting' in what follows.</ns0:p><ns0:p>Given this interpretation of missing features, <ns0:ref type='bibr' target='#b23'>Lundberg et al. (2020)</ns0:ref> give a polynomial-time algorithm for efficiently solving Equation 2, named TreeShap.
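To make Equation 2 and the cover-weighting convention concrete, the following minimal Python sketch evaluates the conditional expectation E[f(x)|x_S] for a single binary decision tree and then computes φ_i by brute force over all feature subsets. It is not part of the original paper: the dictionary-based tree encoding, the use of a strict "less than" split test, and the function names are illustrative assumptions, and the brute-force loop is exponential in the number of features, serving only to clarify the definition that TreeShap computes in polynomial time.

    from itertools import combinations
    from math import factorial

    # Illustrative tree encoding (an assumption, not the paper's data structure):
    # internal node: {"feature": j, "threshold": t, "left": ..., "right": ...,
    #                 "cover_left": w_l, "cover_right": w_r}
    # leaf node:     {"value": v}

    def expected_value(node, x, S):
        """E[f(x) | x_S]: follow the split if its feature is in S,
        otherwise take the cover-weighted average of both children."""
        if "value" in node:                      # leaf
            return node["value"]
        j, t = node["feature"], node["threshold"]
        if j in S:
            child = node["left"] if x[j] < t else node["right"]
            return expected_value(child, x, S)
        wl, wr = node["cover_left"], node["cover_right"]
        return (wl * expected_value(node["left"], x, S) +
                wr * expected_value(node["right"], x, S)) / (wl + wr)

    def brute_force_shap(tree, x, num_features):
        """Direct evaluation of Equation 2; exponential in num_features."""
        M = list(range(num_features))
        phi = [0.0] * num_features
        for i in M:
            rest = [f for f in M if f != i]
            for size in range(len(rest) + 1):
                for subset in combinations(rest, size):
                    S = set(subset)
                    weight = (factorial(len(S)) * factorial(len(M) - len(S) - 1)
                              / factorial(len(M)))
                    phi[i] += weight * (expected_value(tree, x, S | {i})
                                        - expected_value(tree, x, S))
        return phi

Because the model output is additive over trees, the SHAP values of an ensemble are simply the sums of the per-tree values returned by such a routine.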
The algorithm exploits the specific structure of decision trees: the model is additive in the contribution of each leaf. Equation 2 can thus be independently evaluated for each unique path from root to leaf node. These unique paths are then processed using a quadratic-time dynamic programming algorithm. The intuition of the algorithm is to keep track of the proportion of all feature subsets that flow down each branch of the tree, weighted according to the length of each subset |S|, as well as the proportion that flow down the left and right branches when the feature is missing.</ns0:p><ns0:p>We reproduce the recursive polynomial-time TreeSHAP algorithm as presented in <ns0:ref type='bibr' target='#b23'>Lundberg et al. (2020)</ns0:ref> in Algorithm 1, where m is a list representing the path of unique features split on so far. Each list element has four attributes: d is the feature index, z is the fraction of paths that flow through the current branch when the feature is not present, o is the corresponding fraction when the feature is present, and w is the proportion of feature subsets of a given cardinality that are present. The decision tree is represented by the set of lists {v, a, b,t, r, d}, where each list element corresponds to a given tree node, with v containing leaf values, a pointers to the left children, b pointers to the right children, t the split condition, r the weights of training instances, and d the feature indices. The FINDFIRST function returns the index of the first occurrence of a feature in the list m, or a null value if the feature does not occur.</ns0:p><ns0:p>At a high level, the algorithm proceeds by stepping through a path in the decision tree of depth D from root to leaf. According to Equation 2, we have a different weighting for the size of each feature subset, Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_3'>m = EXTEND(m, p z , p o , p i ) 5: if v j == lea f then 6: for i &#8592; 2 to len(m) do 7: w = sum(UNWIND(m, i).w) 8: &#966; m i .d = &#966; m i .d + w(m i .o &#8722; m i .z)v j 9: else 10: h, c = x d j &#8804; t j ?(a j , b j ) : (b j , a j ) 11: i z = i o = 1 12: k = FINDFIRST(m.d, d j ) 13: if k = nothing then 14: i z , i o = (m k .z, m k .o) 15: m = UNWIND(m, k) 16: RECURSE(h, m, i z r h /r j , i o , d j ) 17: RECURSE(c, m, i z r c /r j , 0, d j ) 18: function EXTEND(m, p z , p o , p i ) 19: l = len(m)</ns0:formula><ns0:formula xml:id='formula_4'>m i+1 .w = m i+1 .w + p o &#8226; m i .w &#8226; i/(l + 1) 24: m i .w = p z &#8226; m i .w &#8226; (l + 1 &#8722; i)/(l + 1) 25: return m 26: function UNWIND(m, i) 27: l = len(m) 28: n = m l .w 29: m = copy(m 1&#8226;&#8226;&#8226;l&#8722;1 ) 30: for j &#8592; l &#8722; 1 to 1 do 31: if m i .o = 0 then 32: t = m j .w 33: m j .w = n &#8226; l/( j &#8226; m i .o) 34: n = t &#8722; m j .w &#8226; m i .z &#8226; (l &#8722; j)/l 35: else 36: m j .w = (m j .w &#8226; l)/(m i .z &#8226; (l &#8722; j)) 37: for j &#8592; i to l &#8722; 1 do 38: m j .(d, z, o) = m j+1 .(d, z, o)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>although we can accumulate feature subsets of the same size together. As the algorithm advances down the tree, it calls the method EXTEND, taking a new feature split and accumulating its effect on all possible feature subsets of length 1, 2, . . . up to the current depth. The UNWIND method is used to undo addition of a feature that has been added to the path via EXTEND. UNWIND and EXTEND are commutative and can be called in any order. 
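As a concrete reference for this dynamic programming step, the following Python sketch transcribes the EXTEND routine of Algorithm 1 into 0-based indexing, following the standard presentation of TreeShap in Lundberg et al. (2020); the PathEntry container and its field names are illustrative choices rather than the paper's data structures. EXTEND grows the path m by one feature and updates the subset weights w for every subset size in a single backward pass.

    from dataclasses import dataclass

    @dataclass
    class PathEntry:   # illustrative container for one element of the path list m
        d: int         # feature index
        z: float       # fraction of paths through this branch when the feature is missing
        o: float       # corresponding fraction when the feature is present
        w: float       # proportion of feature subsets of a given cardinality

    def extend(m, p_z, p_o, p_i):
        """EXTEND from Algorithm 1 (0-based indices); returns a new list."""
        l = len(m)
        m = [PathEntry(e.d, e.z, e.o, e.w) for e in m]        # copy, as in the listing
        m.append(PathEntry(p_i, p_z, p_o, 1.0 if l == 0 else 0.0))
        for i in range(l - 1, -1, -1):
            m[i + 1].w += p_o * m[i].w * (i + 1) / (l + 1)
            m[i].w = p_z * m[i].w * (l - i) / (l + 1)
        return m

UNWIND (lines 26 onwards of Algorithm 1) performs the inverse update, dividing the contribution of one feature back out of the weights before it is removed from the path.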
UNWIND may be used to remove duplicate feature occurrences from the path and to compute the final SHAP values. When the recursion reaches a leaf, the SHAP values φ_i for each feature present in the path are computed by calling UNWIND on feature i (line 7), temporarily removing it from the path; then, the overall effect of switching that feature on or off is adjusted by adding the appropriate term to φ_i.</ns0:p><ns0:p>Given an ensemble of T decision trees, Algorithm 1 has time complexity O(TLD^2), using memory O(D^2 + M), where L is the maximum number of leaves for each tree, D is the maximum tree depth, and M the number of features <ns0:ref type='bibr' target='#b23'>(Lundberg et al., 2020)</ns0:ref>. In this paper, we reformulate Algorithm 1 for massively parallel GPUs.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>SHAP Interaction Values</ns0:head><ns0:p>In addition to the first-order feature relevance metric defined above, <ns0:ref type='bibr' target='#b23'>Lundberg et al. (2020)</ns0:ref> also provide an extension of SHAP values to second-order relationships between features, termed SHAP Interaction Values. This method applies the game-theoretic SHAP interaction index <ns0:ref type='bibr' target='#b13'>(Fujimoto et al., 2006)</ns0:ref>, defining a matrix of interactions as</ns0:p><ns0:formula xml:id='formula_5'>\phi_{i,j} = \sum_{S \subseteq M \setminus \{i,j\}} \frac{|S|! \, (M - |S| - 2)!}{2(M - 1)!} \nabla_{ij}(S) \qquad (3)</ns0:formula><ns0:p>for i ≠ j, where</ns0:p><ns0:formula xml:id='formula_6'>\nabla_{ij}(S) = f_{S \cup \{i,j\}}(x) - f_{S \cup \{i\}}(x) - f_{S \cup \{j\}}(x) + f_S(x) \qquad (4) \\ = f_{S \cup \{i,j\}}(x) - f_{S \cup \{j\}}(x) - \left[ f_{S \cup \{i\}}(x) - f_S(x) \right] \qquad (5)</ns0:formula><ns0:p>with diagonals</ns0:p><ns0:formula xml:id='formula_7'>\phi_{i,i} = \phi_i - \sum_{j \neq i} \phi_{i,j}. \qquad (6)</ns0:formula><ns0:p>Interaction values can be efficiently computed by connecting Eq. 5 to Eq. 2, for which we have the polynomial-time TreeShap algorithm. To compute φ_{i,j}, TreeShap should be evaluated twice for φ_i, where feature j is alternately considered fixed to present and not present in the model. To evaluate TreeShap for a unique path conditioning on j, the path is extended as normal, but if feature j is encountered, it is not included in the path (the dynamic programming solution is not extended with this feature, instead skipping to the next feature). If j is considered not present, the resulting φ_i is weighted according to the probability of taking the left or right branch (cover weighting) at a split on feature j. If j is considered present, we evaluate the decision tree split condition x_j < t_j and discard the φ_i from the path not taken.</ns0:p><ns0:p>To compute interaction values for all pairs of features, TreeShap can be evaluated M times, leading to a time complexity of O(TLD^2 M). Interaction values are challenging to compute in practice, with runtimes and memory requirements significantly larger than decision tree induction itself. In Section 3.5, we show how to reformulate this algorithm for the GPU and how to improve the runtime to O(TLD^3) (tree depth D is normally much smaller than the number of features M present in the data).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>GPU Computing</ns0:head><ns0:p>GPUs are massively parallel processors optimised for throughput, in contrast to conventional CPUs, which optimise for latency.
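Before turning to GPU hardware considerations, the interaction-value computation just described can be summarised in a short Python sketch of the bookkeeping only. The helper phi_conditional is hypothetical: it stands in for a TreeShap implementation that accepts the conditioning on feature j described above, returning the vector of φ_i values with j fixed to present or not present. The 1/2 factor below folds in the factor of two in the denominator of Equation 3; this is an illustration, not the paper's implementation.

    def shap_interaction_values(phi, phi_conditional, num_features):
        """Assemble the matrix phi[i][j] of Equations 3-6.

        phi[i]                      -- ordinary SHAP values (Equation 2)
        phi_conditional(j, present) -- hypothetical helper returning the vector
                                       of phi_i with feature j fixed present or
                                       not present, as described in the text
        """
        out = [[0.0] * num_features for _ in range(num_features)]
        for j in range(num_features):
            with_j = phi_conditional(j, True)
            without_j = phi_conditional(j, False)
            for i in range(num_features):
                if i != j:
                    # Equation 5 difference, with the 1/2 factor from Equation 3
                    out[i][j] = 0.5 * (with_j[i] - without_j[i])
        for i in range(num_features):
            # Equation 6: the diagonal absorbs what the off-diagonals do not explain
            out[i][i] = phi[i] - sum(out[i][j]
                                     for j in range(num_features) if j != i)
        return out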
GPUs in use today consist of many processing units with single-instruction, multiple-thread (SIMT) lanes that very efficiently execute a group of threads operating in lockstep. In modern NVIDIA GPUs such as the ones we use for the experiments in this paper, these processing units, called 'streaming multiprocessors' (SMs), have 32 SIMT lanes, and the corresponding group of 32 threads is called a 'warp'. Warps are generally executed on SMs without order guarantees, enabling latency in warp execution (e.g., from global memory loads) to be hidden by switching to other warps that are ready for execution (NVIDIA Corporation, 2020). 1 Large speed-ups in the domain of GPU computing commonly occur when the problem can be expressed as a balanced set of vector operations with minimal control flow. Notable examples are matrix multiplication <ns0:ref type='bibr' target='#b12'>(Fatahalian et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hall et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b5'>Changhao Jiang and Snir, 2005)</ns0:ref>, image processing <ns0:ref type='bibr' target='#b3'>(Bo Fang et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b27'>Moreland and Angel, 2003)</ns0:ref>, deep neural networks <ns0:ref type='bibr' target='#b29'>(Perry et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Coates et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b7'>Chetlur et al., 2014)</ns0:ref>, and sorting <ns0:ref type='bibr' target='#b15'>(Green et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b32'>Satish et al., 2010)</ns0:ref>. Prior work exists on decision tree induction <ns0:ref type='bibr'>(Sharp, 2008;</ns0:ref><ns0:ref type='bibr' target='#b26'>Mitchell and Frank, 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Zhang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dorogush et al., 2018)</ns0:ref> and inference <ns0:ref type='bibr' target='#b36'>(Sharp, 2012)</ns0:ref> on GPUs, but we know of no prior work on tree interpretability specifically tailored to GPUs. Related work also exists on solving dynamic programming type problems <ns0:ref type='bibr' target='#b22'>(Liu et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b37'>Steffen et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b4'>Boyer et al., 2012)</ns0:ref>, but dynamic programming is a broad term, and the referenced works discuss significantly different problem sizes and applications (e.g., Smith-Waterman for sequence alignment).</ns0:p><ns0:p>In Section 3, we discuss a unique approach to exploiting GPU parallelism, different from the abovementioned works due to the unique characteristics of the TreeShap algorithm. In particular, our approach efficiently deals with large amounts of branching and load imbalance that normally inhibits performance on GPUs, leading to substantial improvements over a state-of-the-art multicore CPU implementation.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>GPUTREESHAP</ns0:head><ns0:p>Algorithm 1 has properties that make it unsuitable for direct implementation on GPUs in a performant way. Conventional multi-threaded CPU implementations of Algorithm 1 achieve parallel work distribution by instances <ns0:ref type='bibr' target='#b6'>(Chen and Guestrin, 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ke et al., 2017)</ns0:ref>. For example, interpretability results for input matrix X are computed by launching one parallel CPU thread per row (i.e., data instance being evaluated). While this approach is embarrassingly parallel, CPU threads are different from GPU threads. 
If GPU threads in a warp take divergent branches, performance is reduced, as all threads must execute identical instructions when they are active <ns0:ref type='bibr' target='#b18'>(Harris and Buck, 2005)</ns0:ref>. Moreover, GPUs can suffer from per-thread load balancing problems-if work is unevenly distributed between threads in a warp, finished threads stall until all threads in the warp are finished. Additionally, GPU threads are more resource-constrained than their CPU counterparts, having a smaller number of available registers due to limited per-SM resources. Excessive register usage results in reduced SM occupancy by limiting the number of concurrent warps. It also results in register spills to global memory, causing memory loads at significantly higher latency.</ns0:p><ns0:p>To mitigate these issues, we segment the TreeShap algorithm to obtain fine-grained parallelism, observing that each unique path from root to leaf in Algorithm 1 can be constructed independently because the &#966; i obtained at each leaf are additive and depend only on features encountered on that unique path from root to leaf. Instead of allocating one thread per tree, we allocate a group of threads to cooperatively compute SHAP values for a given unique path through the tree. We launch one such group of threads for each (unique path, evaluation instance) pair, computing all SHAP values for this pair in a single GPU kernel. This method requires preprocessing to arrange the tree ensemble into a suitable form, avoid some less GPU-friendly operations of the original algorithm, and partition work efficiently across GPU threads. Our GPUTreeShap algorithm can be summarised by the following high-level steps:</ns0:p><ns0:p>1. Preprocess the ensemble to extract all unique decision tree paths.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Combine duplicate features along each path.</ns0:head><ns0:p>3. Partition path subproblems to GPU warps by solving a bin packing problem.</ns0:p><ns0:p>4. Launch a GPU kernel solving the dynamic programming subproblems in batch.</ns0:p><ns0:p>These steps are described in more detail below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Extract Paths</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows a decision tree model, highlighting a unique path from root to leaf. The SHAP values computed by Algorithm 1 are simply the sum of the SHAP values from every unique path in the tree. Note that the decision tree model holds information about the weight of training instances that flow down paths in the cover variable. To apply GPU computing, we first preprocess the decision tree ensemble into lists of path elements representing all possible unique paths in the ensemble. Path elements are represented as per Listing 1.</ns0:p><ns0:p>As paths share information that is represented in a redundant manner in the collection of lists representing a tree, reformulating trees increases memory consumption: assuming balanced trees, it Considering each path element, we use a lower and an upper bound to represent the range of feature values that can flow through a particular branch of the tree when the corresponding feature is present. For example, the root node in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> has split condition f 0 &lt; 0.5. Therefore, if the feature is present, the left branch from this node contains instances where &#8722;&#8734; &#8804; f 0 &lt; 0.5, and the right branch contains instances where 0.5 &#8804; f 0 &lt; &#8734;. 
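Listing 1 itself is not reproduced in this excerpt, but the following Python sketch captures the per-element information described above; the field and function names, and the dictionary-based node layout, are illustrative assumptions rather than the GPUTreeShap API. A depth-first walk emits one list of path elements per leaf, recording for each branch a lower bound, an upper bound, and the cover-weighted 'zero fraction' used when the feature is missing.

    import math
    from dataclasses import dataclass

    @dataclass
    class PathElement:            # illustrative, mirroring the description of Listing 1
        feature: int              # feature index tested on this branch
        lower: float              # lower bound on the feature value for this branch
        upper: float              # upper bound (half-open: lower <= value < upper)
        zero_fraction: float      # probability of following this branch when the
                                  # feature is missing (cover weighting)

    def extract_paths(node, prefix=None, paths=None):
        """Depth-first enumeration of (path_elements, leaf_value) pairs for one tree.
        The node dictionary layout is an assumption for illustration."""
        if prefix is None:
            prefix, paths = [], []
        if "value" in node:                          # leaf: record the unique path
            paths.append((list(prefix), node["value"]))
            return paths
        j, t = node["feature"], node["threshold"]
        total = node["cover_left"] + node["cover_right"]
        # Left branch covers [-inf, t); right branch covers [t, +inf)
        left = PathElement(j, -math.inf, t, node["cover_left"] / total)
        right = PathElement(j, t, math.inf, node["cover_right"] / total)
        extract_paths(node["left"], prefix + [left], paths)
        extract_paths(node["right"], prefix + [right], paths)
        return paths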
This representation is useful for the next preprocessing step, where we combine duplicate feature occurrences along a decision tree path.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows two unique paths extracted from Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>. An entire tree ensemble can be represented in this form. Crucially, this representation contains sufficient information to compute the ensemble's SHAP values.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Remove Duplicate Features</ns0:head><ns0:p>Part of the complexity of Algorithm 1 comes from a need to detect and handle multiple occurrences of a feature in a single unique path. In Lines 12 to 15, the candidate feature of the current recursion step is checked against existing features in the path. If a previous occurrence is detected, it is removed from the path using the UNWIND function. The p z and p o values for the old and new occurrences of the feature are multiplied, and the path extended with these new values.</ns0:p><ns0:p>Unwinding previous features to deal with multiple feature occurrences in this manner is problematic for GPU implementation because it requires threads to cooperatively evaluate FINDFIRST and then UNWIND, introducing branching as well as extra computation. Instead, we take advantage of our representation of a tree ensemble in path element form, combining duplicate features into a single occurrence. To do this, recognise that a path through a decision tree from root to leaf represents a single hyperrectangle in the M dimensional feature space, with boundaries defined according to split conditions along the path. The boundaries of the hyperrectangle may alternatively be represented with a lower and upper bound on each feature. Therefore, any number of decision tree splits over a single feature can be reduced to a single range, represented by these bounds. Moreover, note that the ordering of features Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. Two unique paths from the decision tree in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>. The second path listed here corresponds to the highlighted path in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>, encoding bounds on the feature values for an instance that reaches this leaf, the leaf prediction value, and the conditional probability ('zero fraction') of an instance meeting the split condition if the feature is unknown. within a path is irrelevant to the final SHAP values. As noted in <ns0:ref type='bibr' target='#b23'>(Lundberg et al., 2020)</ns0:ref>, the EXTEND and UNWIND functions defined in Algorithm 1 are commutative; therefore, features may be added to or removed from a path in any order, and we can sort unique path representations by feature index, combining consecutive occurrences of the same feature into a single path element.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Bin Packing For Work Allocation</ns0:head><ns0:p>Each unique path sub-problem identified above is mapped to GPU warps for hardware execution. A decision tree ensemble contains L unique paths, where L is the number of leaves, and each path has length between 1 and maximum tree depth D. To maximise throughput on the GPU, it is important to maximise utilisation of the processing units by saturating them with threads to execute. In particular, given a 32-thread warp, multiple paths may be resident and executed concurrently on a single warp. 
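Before describing how paths are packed onto warps, the duplicate-feature merging of Section 3.2 can be made concrete with the following sketch. It is our simplified illustration rather than the GPUTreeShap source, and the rule that the merged zero fraction is the product of the individual zero fractions is our reading of how Algorithm 1 multiplies the p_z values of repeated features in Lines 12 to 15.

#include <algorithm>
#include <cstdint>
#include <vector>

struct PathElement {            // trimmed-down version of Listing 1
  int64_t feature_idx;          // -1 for the root (bias) element
  float feature_lower_bound;
  float feature_upper_bound;
  float zero_fraction;
};

std::vector<PathElement> MergeDuplicateFeatures(std::vector<PathElement> path) {
  // Feature order within a path does not change the SHAP values, so sorting by
  // feature index is safe and brings repeated features next to each other.
  std::sort(path.begin(), path.end(),
            [](const PathElement& a, const PathElement& b) {
              return a.feature_idx < b.feature_idx;
            });
  std::vector<PathElement> merged;
  for (const PathElement& e : path) {
    if (!merged.empty() && merged.back().feature_idx == e.feature_idx) {
      PathElement& m = merged.back();
      // Intersect the hyperrectangle bounds for this feature...
      m.feature_lower_bound = std::max(m.feature_lower_bound, e.feature_lower_bound);
      m.feature_upper_bound = std::min(m.feature_upper_bound, e.feature_upper_bound);
      // ...and combine the conditional probabilities of following the branch when
      // the feature is missing (assumed multiplicative, mirroring Algorithm 1).
      m.zero_fraction *= e.zero_fraction;
    } else {
      merged.push_back(e);
    }
  }
  return merged;
}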
It is also important to assign all threads processing the same decision tree path to the same warp as we wish to use fast warp hardware intrinsics for communication between these threads and avoid synchronisation cost. Consequently, in our GPU algorithm, sub-problems are constrained to not overlap across warps. This implies that the maximum depth of a decision tree processed by our algorithm must be less than or equal to the GPU warp size of 32. Given that the number of nodes in a balanced decision tree increases exponentially with depth, and real-world experience showing D &#8804; 16 in high-performance boosted decision tree ensembles almost always, we believe this to be a reasonable constraint.</ns0:p><ns0:p>To achieve the highest device utilisation, path sub-problems should be mapped to warps such that the total number of warps is minimised. Given the above constraint, this requires solving a bin packing problem. Given a finite set of items I, with sizes s(i) &#8712; Z + , for each i &#8712; I, and maximum bin capacity B, I must be partitioned into the disjoint sets I 0 , I 1 , . . . , I K such that the sum of sizes in each set is less than B. The partitioning minimising K is the optimal bin packing solution. In our case, the bin capacity, B = 32, is the number of threads per warp, and our item sizes, s(i), are given by the unique path lengths from the tree ensemble. In general, finding the optimal packing is strongly NP-complete <ns0:ref type='bibr' target='#b14'>(Garey and Johnson, 1979)</ns0:ref>, although there are heuristics that can achieve close to optimal performance in many cases. In Section 4.1, we evaluate three standard heuristics for the off-line bin packing problem, Next-Fit (NF), First-Fit-Decreasing (FF), and Best-Fit-Decreasing (BFD), as well as a baseline where each item is placed in its own bin. We briefly describe these algorithms and refer the reader to <ns0:ref type='bibr' target='#b25'>Martello and Toth (1990)</ns0:ref> or <ns0:ref type='bibr' target='#b9'>Coffman et al. (1997)</ns0:ref> for a more in-depth survey.</ns0:p><ns0:p>Next-Fit is a simple algorithm, where only one bin is open at a time. Items are added to the bin as they arrive. If bin capacity is exceeded, the bin is closed and a new bin is opened. In contrast, First-Fit-Decreasing sorts the list of items by non-increasing size. Then, beginning with the largest item, it searches for the first bin with sufficient capacity and adds it to the bin. Similarly, Best-Fit-Decreasing also sorts items by non-increasing size, but assigns items to the feasible bin with the smallest residual <ns0:ref type='bibr' target='#b19'>Johnson (1974)</ns0:ref> for specifics).</ns0:p><ns0:p>Existing literature gives worst-case approximation ratios for the above heuristics. For a given set of items I, let A(I) denote the number of bins used by algorithm A, and OPT (I) be the number of bins for the optimal solution. The approximation ratio R A &#8804; A (I) OPT (I) describes the worst-case performance ratio for any possible I. Time complexities and approximation ratios for each of the three above bin packing heuristic are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, as per <ns0:ref type='bibr' target='#b9'>Coffman et al. (1997)</ns0:ref>.</ns0:p><ns0:p>As this paper concerns the implementation of GPU algorithms, we would ideally formulate the above heuristics in parallel. Unfortunately, the bin packing problem is known to be hard to parallelise. 
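To make the heuristics concrete, the following is a minimal sketch (ours) of Next-Fit for this setting, with item sizes given by unique path lengths and a bin capacity equal to the warp size; path lengths are assumed not to exceed the capacity, which is guaranteed by the D <= 32 constraint above.

#include <cstddef>
#include <vector>

// Next-Fit: keep a single open bin; place each arriving item into it if it fits,
// otherwise close that bin and open a fresh one.
std::vector<std::vector<std::size_t>> NextFit(const std::vector<int>& item_sizes,
                                              int bin_capacity = 32) {
  std::vector<std::vector<std::size_t>> bins;  // each bin stores the indices of its items
  int remaining = 0;                           // free capacity of the currently open bin
  for (std::size_t i = 0; i < item_sizes.size(); ++i) {
    if (item_sizes[i] > remaining) {
      bins.emplace_back();
      remaining = bin_capacity;
    }
    bins.back().push_back(i);
    remaining -= item_sizes[i];
  }
  return bins;
}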
In particular, FFD and BFD are P-complete, indicating that it is unlikely that these algorithms may be sped up significantly via parallelism <ns0:ref type='bibr' target='#b0'>(Anderson and Mayr, 1984)</ns0:ref>. An efficient parallel algorithm with the same approximation ratio as FFD/BFD is given in <ns0:ref type='bibr' target='#b1'>Anderson et al. (1989)</ns0:ref>, but the adaptation of this algorithm to GPU architectures is nontrivial and beyond the scope of this paper. Fortunately, as shown by our evaluation in Section 3.3, CPU-based implementations of the bin packing heuristics give acceptable performance for our task, and the main burden of computation still falls on the GPU kernels computing SHAP values in the subsequent step. We perform experiments comparing the three bin packing heuristics in terms of runtime and impact on efficiency for GPU kernels in Section 4.1.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>The GPU Kernel for Computing SHAP Values</ns0:head><ns0:p>Given a unique decision tree path extracted from a decision tree in an ensemble predictor, with duplicates removed, we allocate one path element per GPU thread and cooperatively evaluate SHAP values for each row in a test dataset X. The dataset X is assumed to be queryable from the device. Listing 2 provides a simplified overview of the GPU kernel that is the basis of GPUTreeShap, further details can be found at https://github.com/rapidsai/gputreeshap. A single kernel is launched, parallelising computation of Shapley values across GPU threads in three dimensions:</ns0:p><ns0:p>1. Dataset rows.</ns0:p><ns0:p>2. Unique paths in tree model from root to leaf.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Elements in each unique path.</ns0:head><ns0:p>GPU threads are launched according to the solution of the bin-packing problem described in Section 3.3, which allocates threads efficiently to this unevenly sized, three-dimensional problem space. A contiguous thread group of size &#8804; 32 is launched and assigned to each dataset row and model path sub-problem.</ns0:p><ns0:p>To enable non-recursive GPU-based implementation of Algorithm 1, it remains to describe how to compute permutation weights for each possible feature subset with the EXTEND function (Line 4), as well as how to UNWIND each feature in the path and calculate the sum of permutation weights (Line 7). The EXTEND function represents a single step in a dynamic programming problem. In the GPU version of the algorithm, it processes a single path in a decision tree, represented as a list of path elements. As discussed above, all threads processing the same path are assigned to the same warp to enable efficient processing. Data dependencies between threads occur when each thread processes a single path element. Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> shows the data dependency of each call to EXTEND on previous iterations when using GPU threads for the implementation. Each thread depends on its own previous result and the previous result of the thread to its 'left'.</ns0:p><ns0:p>This dependency pattern leads to a natural implementation using warp shuffle instructions, where threads directly access registers of other threads in the warp at considerably lower cost than shared or Listing 2. GPU kernel overview -Threads are mapped to elements of a path sub-problem, then groups of threads are formed. These small thread groups cooperatively solve dynamic programming problems, accumulating the final SHAP values using global atomics. 
for i &#8592; 2 to l + 1 in parallel, do 5:</ns0:p><ns0:formula xml:id='formula_8'>le f t w = shuffle(m i .w, i &#8722; 1) 6: m i .w = m i .w &#8226; p z &#8226; (l + 1 &#8722; i)/(l + 1) 7: m i .w = m i .w + p o &#8226; le f t w &#8226; i/(l + 1) 8:</ns0:formula><ns0:p>return m global memory. Algorithm 2 shows pseudo-code for a single step of a parallel EXTEND function on the device. In pseudocode, we define a shuffle function analogous to the corresponding function in NVIDIA's CUDA language, where the first argument is the register to be communicated, and the second argument is the thread to fetch the register from-if this thread does not exist, the function returns 0, else it returns the register value at the specified thread index. In Algorithm 2, the shuffle function is used to fetch the element m i .w from the current thread's left neighbour if this neighbour exists, and otherwise returns 0.</ns0:p><ns0:p>Given the permutation weights for the entire path, it is also necessary to establish how to UNWIND the effect of each individual feature from the path to evaluate its relative contribution (Algorithm 1, Line 7). We distribute this task among threads, with each thread 'unwinding' a unique feature. Pseudo-code for UNWOUNDSUM is given in Algorithm 3, where each thread i is effectively undoing the EXTEND function for a given feature and returning the sum along the path. Shuffle instructions are used to fetch weights w j from other threads in the group. The result of UNWOUNDSUM is used to compute the final SHAP value as per Algorithm 1, Line 8.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Computing SHAP Interaction Values</ns0:head><ns0:p>Computation of SHAP interaction values makes use of the same preprocessing steps as above, and the same basic kernel building blocks, except that the thread group associated with each row/path pair evaluates SHAP values multiple times, iterating over each unique feature and conditioning on that feature being fixed to present or not present respectively. There are some difficulties in conditioning on features with our algorithm formulation so far-conditioning on a feature j requires ignoring it and not adding it to the active path. This introduces complexity when neighbouring threads are communicating via shuffle instructions (see Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). Each thread must adjust its indexing to skip over a path element being conditioned on. We found a more elegant solution is to swap a path element used for conditioning to the end of the path, then simply stop before adding it to the path (taking advantage of the fact that the ordering of path elements is irrelevant). Thus, to evaluate SHAP interaction values, we use a GPU kernel similar to the one used for computing per-feature SHAP values, except that we loop over each unique for j &#8592; l to 1 do 7:</ns0:p><ns0:formula xml:id='formula_9'>w j = shuffle(m i .w, j) 8: tmp = (next &#8226; (l &#8722; 1) + 1)/ j 9: sum i = sum i + tmp &#8226; p o 10: next = w j &#8722; tmp &#8226; (l &#8722; j) &#8226; p z /l 11: sum i = sum i + (1 &#8722; p o ) &#8226; w j &#8226; l/((l &#8722; j) &#8226; p z ) 12:</ns0:formula><ns0:p>return sum feature, conditioning on that feature as on or off. One major difference that arises between our GPU algorithm and the CPU algorithm of <ns0:ref type='bibr' target='#b23'>Lundberg et al. (2020)</ns0:ref>, is that we can easily avoid conditioning on features that are not present in a given path. 
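Stepping back to the EXTEND step itself, the single dynamic-programming update of Algorithm 2 maps naturally onto a warp shuffle. The CUDA sketch below is our transcription of the pseudocode as printed, with one lane holding the weight of one path element at 1-based position i in a path of current length l; the sub-warp grouping of Listing 2 and the handling of multiple paths per warp are omitted for clarity, so this is an illustration rather than the GPUTreeShap source.

// Illustrative transcription of one parallel EXTEND step (Algorithm 2).
__device__ float ParallelExtendStep(float w, int i, int l, float p_z, float p_o,
                                    unsigned active_mask) {
  // left_w = shuffle(m_i.w, i - 1): read the weight held by the lane one position to
  // the left. __shfl_up_sync leaves the lowest lane unchanged, so the pseudocode's
  // "neighbour does not exist -> 0" convention is applied explicitly.
  float left_w = __shfl_up_sync(active_mask, w, 1);
  if (i <= 1) left_w = 0.0f;
  float denom = static_cast<float>(l) + 1.0f;
  w = w * p_z * (static_cast<float>(l) + 1.0f - static_cast<float>(i)) / denom;
  w = w + p_o * left_w * static_cast<float>(i) / denom;
  return w;  // new m_i.w
}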
It is clear from Equation 5 that f S&#8746;{i, j} (x) = f S&#8746;{i} (x), f S&#8746;{ j} (x) = f S (x) and &#8711; i j (S) = 0 if we condition on feature j that is not present in the path. Therefore, our approach has runtime proportional to O(T LD 3 ) instead of O(T LD 2 M) by exploiting the limited subset of possible feature interactions in a tree branch. This modification has a significant impact on runtime in practice (because, normally, M &#8811; D).</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EVALUATION</ns0:head><ns0:p>We train a range of decision tree ensembles using the XGBoost algorithm on the datasets listed in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Our goal is to evaluate a wide range of models representative of different real-world settings, from simple exploratory models to large ensembles of thousands of trees. For each dataset, we train a small, medium, and large variant by adjusting the number of boosting rounds (10, 100, 1000) and maximum tree depth <ns0:ref type='bibr'>(3,</ns0:ref><ns0:ref type='bibr'>8,</ns0:ref><ns0:ref type='bibr'>16)</ns0:ref>. The learning rate is set to 0.01 to prevent XGBoost learning the model in the first few trees and producing only stumps in subsequent iterations. Using a low learning rate is also common in practice to minimise generalisation error. Other hyperparameters are left as default. Summary statistics for each model variant are listed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>, and our testing hardware is listed in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Evaluating Bin Packing Performance</ns0:head><ns0:p>We first evaluate the performance of the NF, FFD, and BFD bin packing algorithms from Section 3.3. We also include 'none' as a baseline, where no packing occurs and each unique path is allocated to a single warp. All bin packing heuristics are single-threaded and run on the CPU. We report the execution time (in seconds), utilisation, and number of bins used (K). Utilisation is defined as &#8721; i&#8712;I s(i) 32K , the total weight of all items divided by the bin space allocated, or for our purposes, the fraction of GPU threads that are active for a given bin packing. Poor bin packings waste space on each warp and underutilise the GPU.</ns0:p><ns0:p>Results are summarised in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. 'None' is clearly a poor choice, with utilisation between 0.1 and 0.3, with worse utilisation for smaller tree depths-for example, small models with maximum depth three allocate items of size three to warps of size 32. The simple NF algorithm often provides competitive results with fast runtimes, but it can lag behind FFD and BFD when item sizes are larger, exhibiting utilisation as low as 0.79 for fashion mnist-large. FFD and BFD achieve better utilisation than NF in all cases, reflecting their superior approximation guarantees. Interestingly, FFD and BFD achieve the same efficiency on every example tested. We have verified that they can produce different packings on contrived examples, but there is no discernible difference for our application. FFD and BFD have longer runtimes than NF due to their O(n log n) time complexity. FFD is slightly faster than BFD because it uses a binary tree packed into an array, yielding greater cache efficiency, but its implementation is more complicated. 
In contrast, BFD is implemented easily using std::set.</ns0:p><ns0:p>Based on these results, we recommend BFD for its strong approximation guarantee, simple implementation, and acceptable runtime when packing jobs into batches for GPU execution. Its runtime is at most 1.6s in our experiments, for our largest model (covtype-large) with 6.7M items, and is constant with respect to the number of test rows because the bin packing occurs once per ensemble and is reused for each additional data point, allowing us to amortise its cost over improvements in end-to-end runtime from improved kernel efficiency. The gains in GPU thread utilisation from using BFD over NF directly translate into performance improvements, as fewer bins used means fewer GPU warps are launched. On our large size models, we see improvements in utilisation of 10.1%, 3.2%, 16.7% and 9.6% from BFD over NF. We use BFD in all subsequent experiments.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Evaluating SHAP Value Throughput</ns0:head><ns0:p>We evaluate the performance of GPUTreeShap as a backend to the XGBoost library <ns0:ref type='bibr' target='#b6'>(Chen and Guestrin, 2016)</ns0:ref>, comparing its execution time against the existing CPU implementation of Algorithm 1 2 . The baseline CPU algorithm is efficiently multithreaded using OpenMP, with a parallel for loop over all test instances. See https://github.com/dmlc/xgboost for exact implementation details for the baseline and https://github.com/rapidsai/gputreeshap for GPUTreeShap implementation details.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref> reports the runtime of GPUTreeShap on a single V100 GPU compared to TreeShap on 40 CPU cores. Results are averaged over five runs and standard deviations are also shown. We observe speedups between 13-19x for medium and large models evaluated on 10,000 test rows. We observe little to no speedup for the small models as insufficient computation is performed to offset the latency of launching GPU kernels. 2 We do not benchmark against TreeShap implementations in the Python SHAP package or LightGBM because they are written by the same author, also in C++, and are functionally equivalent to XGBoost's implementation.</ns0:p></ns0:div> <ns0:div><ns0:head>12/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref> plots the time to evaluate varying numbers of test rows for the cal housing-med model. We plot the average of five runs; the shaded area indicates the 95% confidence interval. This illustrates the throughput vs. latency trade-off for this particular model size. The CPU is more effective for &lt; 180 test rows due to lower latency, but the throughput of the GPU is significantly higher at larger batch sizes.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>SHAP value computation is embarrassingly parallel over dataset rows, so we expect to see linear scaling of performance with respect to the number of GPUs or CPUs, given sufficient data. We set the number of rows to 1 million and evaluate the effect of additional processors for the cal housing-med model, measuring throughput in rows per second. Figure <ns0:ref type='figure' target='#fig_11'>5</ns0:ref> reports throughput up to the eight GPUs available on the DGX-1 system, showing the expected close to linear scaling and reaching a maximum throughput of 1.2M rows per second. 
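Referring back to the implementation remark in Section 4.1 that BFD is easily expressed with std::set, a minimal sketch (ours, returning only the bin count K) is given below; a full scheduler would additionally record which bin each path is assigned to.

#include <algorithm>
#include <functional>
#include <set>
#include <vector>

int BestFitDecreasingBinCount(std::vector<int> item_sizes, int bin_capacity = 32) {
  std::sort(item_sizes.begin(), item_sizes.end(), std::greater<int>());  // non-increasing sizes
  std::multiset<int> residuals;  // residual capacity of every open bin
  for (int size : item_sizes) {
    auto it = residuals.lower_bound(size);     // feasible bin with the smallest residual
    if (it == residuals.end()) {
      residuals.insert(bin_capacity - size);   // no feasible bin: open a new one
    } else {
      int residual = *it - size;
      residuals.erase(it);                     // erasing by iterator removes one element
      residuals.insert(residual);
    }
  }
  return static_cast<int>(residuals.size());   // K, the number of warps required
}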
Reported throughputs are from the average of five runs-error bars are too small to see due to relatively low variance. Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref> shows linear scaling with respect to CPU cores up to a maximum throughput of 7000 rows per second. The shaded area indicates the 95% confidence interval from 5 runs. We speculate that the dip at 40 cores is due to contention with the operating system requiring threads for other system functions, and so ignore it for this scaling analysis. We can reasonably approximate from Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>, using a throughput of 7000 rows/s per 40 cores, that it would require 6850 Xeon E5-2698 v4 CPU cores, or 343 sockets, to achieve the same throughput as eight V100 GPUs for this particular model.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>SHAP Interaction Values</ns0:head><ns0:p>Table <ns0:ref type='table'>7</ns0:ref> compares single GPU vs. 40 core CPU runtime for SHAP interaction values. For this experiment, we lower the number of test rows to 200 due to the significantly increased computation time. Computing interaction values is challenging for datasets with larger numbers of features, in particular for fashion mnist (785 features). Our GPU implementation achieves moderate speedups on cal housing and adult due to the relatively low number of features; these speedups are roughly comparable to those obtained for standard SHAP values (Table <ns0:ref type='table'>6</ns0:ref>). In contrast, for covtype-large and fashion mnist-large, we see speedups of 114x and 340x, in the most extreme case reducing runtime from six hours to one minute. This speedup comes from both the increased throughput of the GPU over the CPU and the improvements to algorithmic complexity due to omission of irrelevant features described in Section 3.5. Note that it may be possible to reformulate the CPU algorithm to take advantage of the improved complexity with similar preprocessing steps, but investigating this is beyond the scope of this paper. </ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>SHAP values have proven to be a useful tool for interpreting the predictions of decision tree ensembles. We have presented GPUTreeShap, an algorithm obtained by reformulating the TreeShap algorithm to enable efficient computation on GPUs. We exploit warp-level parallelism by cooperatively evaluating dynamic programming problems for each path in a decision tree ensemble, thus providing massive parallelism for large ensemble predictors. We have shown how standard bin packing heuristics can be used to effectively schedule problems at the warp level, maximising GPU utilisation. Additionally, our rearrangement leads to improvement in the algorithmic complexity when computing SHAP interaction values, from O(T LD 2 M) to O(T LD 3 ). Our library GPUTreeShap provides significant improvement to SHAP value computation over currently available software, allowing scaling onto one or more GPUs, and reducing runtime by one to two orders of magnitude. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>of len(x) zeroes 3: function RECURSE( j, m, p z , p o , p i ) 4:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>m</ns0:head><ns0:label /><ns0:figDesc>= copy(m) {m is copied so recursions down other branches are not affected.} 21: m l+1 .(d, z, o, w) = (p i , p z , p o , l = 0 ? 
1 : 0) 22:for i &#8592; l to 1 do 23:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Unique decision tree path. Solid arrows indicate the path taken for an example test instance. Dashed lines indicate paths not taken. struct PathElement { // Unique path index size_t path_idx; // Feature of this split, -1 is root int64_t feature_idx; // Range of feature value float feature_lower_bound; float feature_upper_bound; // Probability of following this path // when feature_idx is missing float zero_fraction; // Leaf weight at the end of path float v; };</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021) Manuscript to be reviewed Computer Science __device__ float GetOneFraction( const PathElement&amp; e, DatasetT X, size_t row_idx) { // First element in path (bias term) is always zero if (e.feature_idx == -1) return 0.0; // Test the split // Does the training instance continue down this // path if</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Data dependencies of EXTEND -5 GPU threads communicate using warp shuffle intrinsics to solve a dynamic programming problem instance.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021) Manuscript to be reviewed Computer Science Algorithm 3 Parallel UNWOUNDSUM 1: function PARALLEL UNWOUNDSUM(m, p z , p o , p i ) 2: l = len(m) 3: sum = [] array of l zeroes 4: for i &#8592; 1 to l + 1 in parallel, do 5: next = shuffle(m i .w, l) 6:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The crossover point where the V100 GPU outperforms 40 CPU cores occurs at around 200 test rows for the cal housing-med model.</ns0:figDesc><ns0:graphic coords='14,203.77,63.78,289.50,217.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. GPUTreeShap scales linearly with 8 V100 GPUs for the cal housing-med model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 
TreeShap scales linearly with 40 CPU cores, but at significantly lower throughput than GPUTreeShap.</ns0:figDesc><ns0:graphic coords='15,340.25,63.78,198.52,148.89' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Bin packing time complexities and worst-case approximation ratios</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ALGORITHM</ns0:cell><ns0:cell>TIME</ns0:cell><ns0:cell>R A</ns0:cell></ns0:row><ns0:row><ns0:cell>NF</ns0:cell><ns0:cell>O(n)</ns0:cell><ns0:cell>2.0</ns0:cell></ns0:row><ns0:row><ns0:cell>FFD</ns0:cell><ns0:cell cols='2'>O(n log n) 1.222</ns0:cell></ns0:row><ns0:row><ns0:cell>BFD</ns0:cell><ns0:cell cols='2'>O(n log n) 1.222</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>capacity. FFD and BFD may be implemented in O(n log n) time using a tree data structure allowing bin</ns0:cell></ns0:row><ns0:row><ns0:cell>updates and insertions in O(log n) operations (see</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>, &amp;end_row, &amp;e, &amp;thread_active); if (!thread_active) return; float zero_fraction</ns0:head><ns0:label /><ns0:figDesc>the feature is present? float val = X.GetElement(row_idx, e.feature_idx);</ns0:figDesc><ns0:table><ns0:row><ns0:cell>return val &gt;= e.feature_lower_bound &amp;&amp;</ns0:cell></ns0:row><ns0:row><ns0:cell>val &lt; e.feature_upper_bound;</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>template &lt;typename DatasetT&gt;</ns0:cell></ns0:row><ns0:row><ns0:cell>__device__ float ComputePhi(</ns0:cell></ns0:row><ns0:row><ns0:cell>const PathElement&amp; e, size_t row_idx,</ns0:cell></ns0:row><ns0:row><ns0:cell>const DatasetT&amp; X,</ns0:cell></ns0:row><ns0:row><ns0:cell>const ContiguousGroup&amp; group,</ns0:cell></ns0:row><ns0:row><ns0:cell>float zero_fraction) {</ns0:cell></ns0:row><ns0:row><ns0:cell>float one_fraction = GetOneFraction(e, X, row_idx);</ns0:cell></ns0:row><ns0:row><ns0:cell>GroupPath path(group, zero_fraction, one_fraction);</ns0:cell></ns0:row><ns0:row><ns0:cell>size_t unique_path_length = group.size();</ns0:cell></ns0:row><ns0:row><ns0:cell>// Extend the path</ns0:cell></ns0:row><ns0:row><ns0:cell>for (auto unique_depth = 1ull;</ns0:cell></ns0:row><ns0:row><ns0:cell>unique_depth &lt; unique_path_length;</ns0:cell></ns0:row><ns0:row><ns0:cell>unique_depth++) {</ns0:cell></ns0:row><ns0:row><ns0:cell>path.Extend();</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>float sum = path.UnwoundPathSum();</ns0:cell></ns0:row><ns0:row><ns0:cell>return sum * (one_fraction -zero_fraction) * e.v;</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>template &lt;typename DatasetT, size_t kBlockSize,</ns0:cell></ns0:row><ns0:row><ns0:cell>size_t kRowsPerWarp&gt;</ns0:cell></ns0:row><ns0:row><ns0:cell>__global__ void ShapKernel(</ns0:cell></ns0:row><ns0:row><ns0:cell>DatasetT X, size_t bins_per_row,</ns0:cell></ns0:row><ns0:row><ns0:cell>const PathElement * path_elements,</ns0:cell></ns0:row><ns0:row><ns0:cell>const size_t * bin_segments, size_t num_groups,</ns0:cell></ns0:row><ns0:row><ns0:cell>float * phis) {</ns0:cell></ns0:row><ns0:row><ns0:cell>__shared__ PathElement s_elements[kBlockSize];</ns0:cell></ns0:row><ns0:row><ns0:cell>PathElement&amp; e = s_elements[threadIdx.x];</ns0:cell></ns0:row><ns0:row><ns0:cell>// Allocate some portion of rows to this 
warp</ns0:cell></ns0:row><ns0:row><ns0:cell>// Fetch the path element assigned to this</ns0:cell></ns0:row><ns0:row><ns0:cell>// thread</ns0:cell></ns0:row><ns0:row><ns0:cell>size_t start_row, end_row;</ns0:cell></ns0:row><ns0:row><ns0:cell>bool thread_active;</ns0:cell></ns0:row><ns0:row><ns0:cell>ConfigureThread&lt;DatasetT, kBlockSize, kRowsPerWarp&gt;(</ns0:cell></ns0:row><ns0:row><ns0:cell>X, bins_per_row, path_elements,</ns0:cell></ns0:row><ns0:row><ns0:cell>bin_segments, &amp;start_row= e.zero_fraction;</ns0:cell></ns0:row><ns0:row><ns0:cell>auto labelled_group =</ns0:cell></ns0:row><ns0:row><ns0:cell>active_labeled_partition(e.path_idx);</ns0:cell></ns0:row><ns0:row><ns0:cell>for (int64_t row_idx = start_row;</ns0:cell></ns0:row><ns0:row><ns0:cell>row_idx &lt; end_row; row_idx++) {</ns0:cell></ns0:row><ns0:row><ns0:cell>float phi =</ns0:cell></ns0:row><ns0:row><ns0:cell>ComputePhi(e, row_idx, X, labelled_group,</ns0:cell></ns0:row><ns0:row><ns0:cell>zero_fraction);</ns0:cell></ns0:row><ns0:row><ns0:cell>// Write results</ns0:cell></ns0:row><ns0:row><ns0:cell>if (!e.IsRoot()) {</ns0:cell></ns0:row><ns0:row><ns0:cell>atomicAdd(&amp;phis[IndexPhi(</ns0:cell></ns0:row><ns0:row><ns0:cell>row_idx, num_groups, e.group,</ns0:cell></ns0:row><ns0:row><ns0:cell>X.NumCols(), e.feature_idx)],</ns0:cell></ns0:row><ns0:row><ns0:cell>phi);</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Datasets used to train XGBoost models for Shapley value evaluation. Rows refers to training rows, cols refers to number of features (excluding label).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>NAME</ns0:cell><ns0:cell cols='3'>ROWS COLS TASK CLASSES</ns0:cell><ns0:cell>REF</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE</ns0:cell><ns0:cell>581012</ns0:cell><ns0:cell>54 CLASS</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>BLACKARD (1998)</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING</ns0:cell><ns0:cell>20640</ns0:cell><ns0:cell>8 REGR</ns0:cell><ns0:cell cols='2'>-PACE AND BARRY (1997)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>FASHION MNIST 70000 784 CLASS</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>XIAO ET AL. (2017)</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT</ns0:cell><ns0:cell>48842</ns0:cell><ns0:cell>14 CLASS</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>KOHAVI (1996)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>XGBoost models used for evaluation. 
Small, medium and large variants are created for each dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MODEL</ns0:cell><ns0:cell cols='3'>TREES LEAVES MAX DEPTH</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-SMALL</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>560</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-MED</ns0:cell><ns0:cell cols='2'>800 113888</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-LARGE</ns0:cell><ns0:cell cols='2'>8000 6636440</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-SMALL</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-MED</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>21643</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-LARGE</ns0:cell><ns0:cell cols='2'>1000 3317209</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-SMALL</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>800</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-MED</ns0:cell><ns0:cell cols='2'>1000 144154</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>FASHION MNIST-LARGE 10000 2929521</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-SMALL</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-MED</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>13074</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-LARGE</ns0:cell><ns0:cell cols='2'>1000 642035</ns0:cell><ns0:cell>16</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Details of Nvidia DGX-1 used for benchmarking.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>PROCESSOR DETAILS</ns0:cell></ns0:row><ns0:row><ns0:cell>CPU</ns0:cell><ns0:cell>2X 20-CORE XEON E5-2698 V4 2.2 GHZ</ns0:cell></ns0:row><ns0:row><ns0:cell>GPU</ns0:cell><ns0:cell>8X TESLA V100-32</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Bin packing performance</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MODEL</ns0:cell><ns0:cell cols='3'>ALG TIME(S) UTILISATION</ns0:cell><ns0:cell>BINS</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-SMALL</ns0:cell><ns0:cell cols='2'>NONE 0.0018</ns0:cell><ns0:cell>0.105246</ns0:cell><ns0:cell>560</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-SMALL</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0041</ns0:cell><ns0:cell>0.982292</ns0:cell><ns0:cell>60</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-SMALL</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0064</ns0:cell><ns0:cell>0.998941</ns0:cell><ns0:cell>59</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-SMALL</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0086</ns0:cell><ns0:cell>0.998941</ns0:cell><ns0:cell>59</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-MED</ns0:cell><ns0:cell cols='2'>NONE 0.0450</ns0:cell><ns0:cell cols='2'>0.211187 
113533</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-MED</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0007</ns0:cell><ns0:cell>0.913539</ns0:cell><ns0:cell>26246</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-MED</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0104</ns0:cell><ns0:cell>0.940338</ns0:cell><ns0:cell>25498</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-MED</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0212</ns0:cell><ns0:cell>0.940338</ns0:cell><ns0:cell>25498</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-LARGE</ns0:cell><ns0:cell cols='2'>NONE 0.0346</ns0:cell><ns0:cell cols='2'>0.299913 6702132</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-LARGE</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0413</ns0:cell><ns0:cell cols='2'>0.851639 2360223</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-LARGE</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.8105</ns0:cell><ns0:cell cols='2'>0.952711 2109830</ns0:cell></ns0:row><ns0:row><ns0:cell>COVTYPE-LARGE</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>1.6702</ns0:cell><ns0:cell cols='2'>0.952711 2109830</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>CAL HOUSING-SMALL NONE 0.0015</ns0:cell><ns0:cell>0.085938</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CAL HOUSING-SMALL NF</ns0:cell><ns0:cell>0.0025</ns0:cell><ns0:cell>0.982143</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CAL HOUSING-SMALL FFD</ns0:cell><ns0:cell>0.0103</ns0:cell><ns0:cell>0.982143</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CAL HOUSING-SMALL BFD</ns0:cell><ns0:cell>0.0001</ns0:cell><ns0:cell>0.982143</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-MED</ns0:cell><ns0:cell cols='2'>NONE 0.0246</ns0:cell><ns0:cell>0.181457</ns0:cell><ns0:cell>21641</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-MED</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0126</ns0:cell><ns0:cell>0.931429</ns0:cell><ns0:cell>4216</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-MED</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0016</ns0:cell><ns0:cell>0.941704</ns0:cell><ns0:cell>4170</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-MED</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0031</ns0:cell><ns0:cell>0.941704</ns0:cell><ns0:cell>4170</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-LARGE</ns0:cell><ns0:cell cols='2'>NONE 0.0089</ns0:cell><ns0:cell cols='2'>0.237979 3370373</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-LARGE</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0225</ns0:cell><ns0:cell cols='2'>0.901060 890148</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-LARGE</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.3534</ns0:cell><ns0:cell cols='2'>0.933114 859570</ns0:cell></ns0:row><ns0:row><ns0:cell>CAL HOUSING-LARGE</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.8760</ns0:cell><ns0:cell cols='2'>0.933114 859570</ns0:cell></ns0:row><ns0:row><ns0:cell>MODEL</ns0:cell><ns0:cell cols='3'>ALG TIME(S) UTILISATION</ns0:cell><ns0:cell>BINS</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>FASHION MNIST-SMALL NONE 0.0022</ns0:cell><ns0:cell>0.123906</ns0:cell><ns0:cell>800</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-SMALL NF</ns0:cell><ns0:cell>0.0082</ns0:cell><ns0:cell>0.991250</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-SMALL FFD</ns0:cell><ns0:cell>0.0116</ns0:cell><ns0:cell>0.991250</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-SMALL 
BFD</ns0:cell><ns0:cell>0.0139</ns0:cell><ns0:cell>0.991250</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-MED</ns0:cell><ns0:cell cols='2'>NONE 0.0439</ns0:cell><ns0:cell cols='2'>0.264387 144211</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-MED</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0008</ns0:cell><ns0:cell>0.867580</ns0:cell><ns0:cell>43947</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-MED</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0130</ns0:cell><ns0:cell>0.880279</ns0:cell><ns0:cell>43313</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION MNIST-MED</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0219</ns0:cell><ns0:cell>0.880279</ns0:cell><ns0:cell>43313</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>FASHION MNIST-LARGE NONE 0.0140</ns0:cell><ns0:cell cols='2'>0.385001 2929303</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-LARGE NF</ns0:cell><ns0:cell>0.0132</ns0:cell><ns0:cell cols='2'>0.791948 1424063</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-LARGE FFD</ns0:cell><ns0:cell>0.3633</ns0:cell><ns0:cell cols='2'>0.958855 1176178</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FASHION MNIST-LARGE BFD</ns0:cell><ns0:cell>0.8518</ns0:cell><ns0:cell cols='2'>0.958855 1176178</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-SMALL</ns0:cell><ns0:cell cols='2'>NONE 0.0016</ns0:cell><ns0:cell>0.125000</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-SMALL</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0023</ns0:cell><ns0:cell>1.000000</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-SMALL</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0061</ns0:cell><ns0:cell>1.000000</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-SMALL</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0060</ns0:cell><ns0:cell>1.000000</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-MED</ns0:cell><ns0:cell cols='2'>NONE 0.0050</ns0:cell><ns0:cell>0.229014</ns0:cell><ns0:cell>13067</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-MED</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0066</ns0:cell><ns0:cell>0.913192</ns0:cell><ns0:cell>3277</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-MED</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0575</ns0:cell><ns0:cell>0.950010</ns0:cell><ns0:cell>3150</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-MED</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.1169</ns0:cell><ns0:cell>0.950010</ns0:cell><ns0:cell>3150</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-LARGE</ns0:cell><ns0:cell cols='2'>NONE 0.0033</ns0:cell><ns0:cell cols='2'>0.297131 642883</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-LARGE</ns0:cell><ns0:cell>NF</ns0:cell><ns0:cell>0.0035</ns0:cell><ns0:cell cols='2'>0.858728 222446</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-LARGE</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>0.0684</ns0:cell><ns0:cell cols='2'>0.954377 200152</ns0:cell></ns0:row><ns0:row><ns0:cell>ADULT-LARGE</ns0:cell><ns0:cell>BFD</ns0:cell><ns0:cell>0.0954</ns0:cell><ns0:cell cols='2'>0.954377 200152</ns0:cell></ns0:row></ns0:table><ns0:note>17/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='1'>AMD GPUs have similar basic processing units, called 'compute units'; the corresponding term for a warp is 'wavefront'.4/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67391:1:0:NEW 25 Dec 2021)</ns0:note> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. 
</ns0:note> </ns0:body> "
"We thank the reviewers for their useful and insightful feedback. # Reviewer 2 1. In Figure 1, please explain in the caption or text what the left/right arrows, solid/dash arrows mean for more clarification. Clarified. 2. In Figure 2, it is better to indicate how these two unique paths correspond to the two leaf nodes (and which two) in Figure 1. We have clarified the caption accordingly: “Two unique paths from the decision tree in Figure 1. The second path listed here corresponds to the highlighted path in Figure 1, encoding bounds on the feature values for an instance that reaches this leaf, the leaf prediction value, and the conditional probability (‘zero_fraction’) of an instance meeting the split condition if the feature is unknown.” 3. Please double-check Lines 22-24 of Algorithm 1, as well as related parts (Algorithm 2) and your code implementation. They are inconsistent with either (Lundberg et al, 2020) or their earlier publication (https://arxiv.org/abs/1802.03888). If it is not an error, can you confirm or explain the difference? We believe Lundberg’s version has an error, perhaps due to transcription from software with zero-based indexing to pseudocode with 1 based indexing. Comparing against a software version we can see differences: https://github.com/dmlc/xgboost/blob/8fa32fdda23c5d4d1bd7389427c58ad9b1aa0bfd/src/tree/tree_model.cc#L1178. For example the software for-loop seems to be from i -> l to 1 instead of the i -> l-1 to 1 we find in the pseudocode. We believe both the TreeShap and GPUTreeShap software implementations are correct based on extensive testing. Namely, it is unlikely that the efficiency property would hold if the software had an error here. Today, GPUTreeShap is even included directly in the shap package with results directly tested against the original algorithm, so we have full confidence in the results of this paper. 4. In Section 3.2, Line 184, please double-check which of 'Line 12' or 'Line 13' (of Algorithm 1) is more suitable for the context. This now reads “In Lines 12 to 15”. 5. In Table 2, 'rows' and 'cols' need explanation. Does a column mean a feature? Does it include the class label? Why does fashion_mnist contain 785 columns while the image size is 28x28=784? For cal_housing, does it include longitude and latitude features? The description has been extended. We had erroneously included the label column for fashion_mnist, it has been changed to 784. The others are correct. Yes, cal_housing does include the latitude and longitude features, as it has been used in other publications. 6. In Line 305, LaTeX symbol \gg may be considered for 'much greater than'. Changed. 7. In Line 319, are s(i) and I defined before? They are defined in section 3.3. 8. For easier reading/comparison, is it possible to reorganize Table 5 so that models and algs are in different dimensions (e.g. models in rows, algs in columns)? If not possible due to limited space, can horizontal split lines be added to separate different models? Horizontal split lines have been added. 9. Please double-check the caption of Table 7, as it is said in Line 373 of the main text that the number of test rows was reduced to 200 but the caption said 10000. This has been corrected to 200. > In Line 339, the authors claimed that greater utilization directly translates to performance improvement. Is it possible to add additional experiments that support this statement? The authors may consider comparing the runtime speedups of BFD vs NF, or BFD vs None. 
We think the relationship between utilisation of SIMD lanes and execution time is very straightforward, and so these experiments would not particularly add anything to the discussion. > Can GPUTreeSHAP result in exact SHAP values and interaction values, or just provide their approximations due to approximated heuristics in bin packing? If they are approximations, what is the error? The bin packing phase has no effect on the correctness of the result, it is designed solely to schedule work efficiently on a GPU, minimising wasted processing power by utilising more SIMD lanes. # Reviewer 3 - In section 2.3 GPU Computing, the authors could mention that both the LightGBM and Catboost have GPU implementations. Citations have been added. - In section 5 Conclusions, the authors could comment on extending the proposed algorithms to the other gradient boosting algorithms. Is the proposed method good for solving other problems? There are no differences limiting the application to any popular decision tree libraries, and our description of the algorithm is not specific to XGBoost, XGBoost is only used for the evaluation. In fact GPUTreeShap can already be used with other popular libraries such as LightGBM, Catboost or Sklearn random forests via integration with Scott Lundberg’s shap library: https://shap.readthedocs.io/en/latest/generated/shap.explainers.GPUTree.html#shap.explainers.GPUTree. - The abstract says 19x speedups, but section 4.2 says between 13-18x for medium and large models. Change the text of section 4.2 to read: “We observe speedups between 13-19x” - Please add a description of the multi-core CPU implementation. The following text in Section 4.2 describes the multi-core implementation i.e. the multicore implementation is simply the single-threaded implementation parallelised over test rows. “The baseline CPU algorithm is efficiently multithreaded using OpenMP, with a parallel for loop over all test instances. See https://github.com/dmlc/xgboost for exact implementation details for the baseline and https://github.com/rapidsai/gputreeshap for GPUTreeShap implementation details.” - The paper mentions the theoretical upper bounds of memory consumption, but no experimental memory consumption number are reported in the paper. This memory consumption applies to the original recursive algorithm but not to our reformulation. Our algorithm as described uses very little global memory - enough to store a path representation of the input model. Almost all of the computation occurs in on chip registers. We found memory usage to be a non-issue for this problem i.e. if the model was large enough to cause a modern GPU to run out of memory, then the computation time may already be in the order of weeks or months. > The authors evaluate the proposed method using several sets of parameters for the XGBoost trees. The method is evaluated in 4 different datasets. I wish the authors would have run experiment with even larger datasets. We would like to note that the size of the dataset has little to no impact on the evaluation as presented. The dataset is used to generate a model, and the number of instances tested in the evaluation is fixed for all datasets. The size of the model in terms of number of trees, depth or number of leaves is what matters in terms of the computational difficulty of the problem. Large models can still be generated from small datasets via appropriate hyper-parameters during training. "
Here is a paper. Please give your review comments after reading it.
337
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Non-orthogonal multiple access (NOMA) scheme is proved to be a potential candidate to enhance spectral potency and massive connectivity for 5G wireless networks. To achieve effective system performance, user grouping, power control, and decoding order are considered to be fundamental factors. In this regard, a joint combinatorial problem consisting of user grouping and power control is considered, to obtain high spectralefficiency for NOMA uplink system with lower computational complexity. To solve joint problem of power control and user grouping, for Uplink NOMA, up to authors knowledge, we have used for the first time a newly developed meta-heuristicnature-inspired optimization algorithm i.e. Whale Optimization Algorithm (WOA). Further, for comparison a recently initiated Grey Wolf Optimizer (GWO) and the well-known Particle Swarm Optimization (PSO) algorithms are also applied for the same joint issue. To attain optimal and sub-optimal solutions, a NOMA-based model is used to evaluate the potential of the proposed algorithm. Numerical results validate that proposed WOA outperforms GWO, PSO and existing literature reported for NOMA uplink systems in-terms of spectral performance.</ns0:p><ns0:p>In addition, WOA attains improved results in terms of joint user grouping and power control with lower system-complexity as compare to GWO and PSO algorithms. The proposed work is novel enhancement for 5G uplink applications of NOMA systems.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Multiple access approaches are increasingly gaining importance in modern mobile communication systems, primarily due to the overwhelming increase in the communication demands at both the user and device level. Over past few years, non-orthogonal multiple access (NOMA) <ns0:ref type='bibr' target='#b14'>(Ding et al. (2017a</ns0:ref><ns0:ref type='bibr' target='#b16'>(Ding et al. ( , 2014</ns0:ref><ns0:ref type='bibr' target='#b15'>(Ding et al. ( , 2017b))</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Benjebbovu et al. (2013)</ns0:ref>) schemes have earned significant attention for supporting the huge connectivity in contemporary wireless communication systems. The NOMA schemes are currently considered as the most promising contender for the 5G and beyond 5G (B5G) wireless communications, which are capable of accessing massive user connections and attaining high spectrum performance. Moreover, a report has been published recently regarding the Third Generation Partnership Project for determining the effectiveness of NOMA schemes for several applications or development scenarios, particularly for Ultra-Reliable Low Latency Communications (URLLC), enhanced Mobile Broadband (eMBB), and massive Machine Type Communications (mMTC) <ns0:ref type='bibr' target='#b4'>(Benjebbour et al. (2013)</ns0:ref>). Contrary to the classic orthogonal multiple access (OMA) approaches, the NOMA schemes can offer services to PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science multiple users in the same space/code/frequency/time resource block (RB). The NOMA schemes are also capable of differentiating the users that have distinct channel settings. 
These schemes are mainly inclined at strengthening connectivity and facilitating users with an efficient broad-spectrum <ns0:ref type='bibr' target='#b22'>(Islam et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b11'>Dai et al. (2015)</ns0:ref>). Some recent studies <ns0:ref type='bibr' target='#b7'>(Chen et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b41'>Wang et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Shahini and Ansari (2019)</ns0:ref>) have discussed the effective use of the NOMA approach in standard frameworks for Internet of Things (IoT) systems and Vehicle-to-Everything (V2X) networks. The successive interference cancellation (SIC) technique, which is pertinent for multi-user detection and decoding is implemented for the NOMA scheme at the receiver end. The SIC technique operates differently for the downlink and uplink scenarios. In the downlink NOMA scenario, SIC is applied at the receiver end, where high energy is consumed during processing when a lot of users are considered in the NOMA group. For that reason, two users are typically considered in a group for optimum grouping/pairing of users in the case of the downlink NOMA system (Al-Abbasi and So (2016); <ns0:ref type='bibr' target='#b20'>He et al. (2016)</ns0:ref>). Whereas in the uplink NOMA systems, it is possible to employ SIC at the base station (BS) that has a higher processing capacity. Moreover, in uplink NOMA, multiple users are allowed to transmit in a grant-free approach that leads to a significantly reduced latency rate.</ns0:p><ns0:p>From a practical perspective, the user-pairing/grouping and power control schemes in uplink/downlink NOMA systems are critically required to achieve an appropriate trade-off between the performance of the NOMA system and the computational complexity of the SIC technique. Over the past few years, several studies have discussed different prospects regarding the maximization of sum rate <ns0:ref type='bibr' target='#b48'>(Zhang et al. (2016a)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Ding et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Ali et al. (2016)</ns0:ref>), the transmission power control approaches <ns0:ref type='bibr' target='#b44'>(Wei et al. (2017)</ns0:ref>), and fairness <ns0:ref type='bibr' target='#b25'>(Liu et al. (2015</ns0:ref><ns0:ref type='bibr' target='#b26'>(Liu et al. ( , 2016))</ns0:ref>) for user pairing/grouping NOMA systems. Regarding the maximization of sum rate, a two-user grouping scheme based on a unique channel gain is demonstrated in <ns0:ref type='bibr' target='#b13'>(Ding et al. (2015)</ns0:ref>) whereas another study <ns0:ref type='bibr' target='#b2'>(Ali et al. (2016)</ns0:ref>) presented a novel framework for pertinent user-pairing/grouping approaches to assign the same resource block to multiple users.</ns0:p><ns0:p>In reference to the user pairing schemes <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) used the Hungarian algorithm with a modified cost function to investigate optimum allocation for three distinct cases in the uplink NOMA system. Furthermore, several matching game-based <ns0:ref type='bibr' target='#b24'>(Liang et al. (2017)</ns0:ref>) user-pairing/grouping approaches are discussed in <ns0:ref type='bibr' target='#b45'>(Xu et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Di et al. (2016)</ns0:ref>), wherein the allocation of users and two sets of players are modeled as a game theory problem. 
Numerous recent studies <ns0:ref type='bibr' target='#b47'>(Zhai et al. (2019);</ns0:ref><ns0:ref type='bibr' target='#b52'>Zhu et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Nguyen and Le (2019)</ns0:ref>) have also investigated different user-pairing/grouping schemes for NOMA systems. A novel algorithm named Ford Fulkerson <ns0:ref type='bibr' target='#b47'>(Zhai et al. (2019)</ns0:ref>) has been introduced for D2D cellular communication to address the user-pairing issue in NOMA systems. In addition to that, optimal user paring is achieved in <ns0:ref type='bibr' target='#b52'>(Zhu et al. (2018)</ns0:ref>) by taking two users with appropriate analytical conditions into consideration. A new framework <ns0:ref type='bibr' target='#b40'>(Song et al. (2014)</ns0:ref>) is also presented for optimum cooperative communication networks. Besides that, a lookup table <ns0:ref type='bibr' target='#b3'>(Azam et al. (2019)</ns0:ref>) is introduced by performing comprehensive calculations to highlight the significance of power allocation and uplink user pairing in obtaining high sum-rate capacity while fulfilling the demands of user data rates. For the uplink case, a cumulative distributive function (CDF)-based resource allocation scheme <ns0:ref type='bibr' target='#b51'>(Zhanyang et al. (2018)</ns0:ref>) is presented where for each time slot, the selection of two users is dependent on the highest value of the CDF. Moreover, a few dynamic power allocation and power back-off schemes are also discussed in few studies <ns0:ref type='bibr' target='#b50'>(Zhang et al. (2016b)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>Yang et al. (2016)</ns0:ref>) for scrutinizing the performance of the system to obtain high sum rates and meet the service quality requirements.</ns0:p><ns0:p>In the context of overlapping, a generic user grouping approach <ns0:ref type='bibr' target='#b8'>(Chen et al. (2020a)</ns0:ref>) is presented for NOMA, which involves the grouping of many users with a limitation on maximum power. The authors also formulated a problem for generalized user grouping and power control to achieve an optimized user grouping scheme based on the machine learning approach. Furthermore, another study <ns0:ref type='bibr' target='#b10'>(Chen et al. (2020b)</ns0:ref>) proposed a framework in which an overlapping coalition formation (OCF) game is used for overlapping user grouping and an OCF-based algorithm is also introduced that facilitated the selforganization of each user in an appropriate overlapping coalition model. Besides that, a joint problem is examined in <ns0:ref type='bibr' target='#b19'>(Guo et al. (2019)</ns0:ref>) for user grouping, association, and power allocation in consideration of QoS requirements for enhancing the uplink network capacity. Zhang et.al <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>) also discussed a joint combinatorial problem for obtaining a sub-optimal and universal solution for userpairing/grouping to boost the overall system performance. Additionally, the authors in <ns0:ref type='bibr' target='#b43'>(Wang et al. (2018)</ns0:ref>) considered a user association problem by using an orthogonal approach for grouping users and employing a game-theoretic scheme for the allocation of a resource block to multi-users in a network.</ns0:p></ns0:div> <ns0:div><ns0:head>2/19</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>It has been observed that the game-theoretic schemes typically employed in user association techniques suffer from certain limitations. In contrast, evolutionary algorithms (EAs) are general-purpose optimizers that perform well irrespective of the optimization problem being studied. The problem is formulated as a sum-rate utility function for the network, and a parameter is introduced to capture the complexity of the power control problem; consequently, the power control parameters remain constant across all systems. Meta-heuristics are high-level procedures that combine basic heuristics in order to provide good approximate solutions to computationally complex combinatorial optimization problems in telecommunications <ns0:ref type='bibr' target='#b27'>(Martins and Ribeiro (2006)</ns0:ref>). That work also outlines the key ideas underlying various meta-heuristics and provides templates for simple implementations. In addition, several effective meta-heuristic approaches to optimization problems have been investigated in telecommunications.</ns0:p><ns0:p>Several meta-heuristic algorithms <ns0:ref type='bibr' target='#b39'>(Sharma and Gupta (2020)</ns0:ref>) have been proposed to address localization problems in sensor networks, including the bat algorithm, the firework algorithm, and the cuckoo search algorithm.</ns0:p><ns0:p>For wireless sensor networks <ns0:ref type='bibr' target='#b42'>(Wang et al. (2020)</ns0:ref>), a routing algorithm has been developed based on an elite hybrid meta-heuristic optimization algorithm.</ns0:p><ns0:p>On the other hand, swarm intelligence (SI) algorithms, alongside game theory and convex optimization, have recently emerged as promising optimization methods for wireless communication.</ns0:p><ns0:p>SI algorithms can address pressing issues in wireless networks such as power control, spectrum allocation, and network security <ns0:ref type='bibr' target='#b36'>(Pham et al. (2020b)</ns0:ref>). Furthermore, two SI algorithms, the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO), have been used in the literature to solve the joint problem of user association and power control in NOMA downlink systems to maximize the sum rate <ns0:ref type='bibr' target='#b18'>(Goudos et al. (2020)</ns0:ref>). Additionally, an efficient meta-heuristic approach known as multi-trial vector-based differential evolution (MTDE) (Nadimi-Shahraki et al. ( <ns0:ref type='formula'>2020</ns0:ref>)) has been applied to a range of complex engineering problems; it relies on the multi-trial vector (MTV) technique, which integrates several search strategies in the form of trial vector producers (TVPs).</ns0:p><ns0:p>Recently, an updated version of GWO, the Improved Grey Wolf Optimizer (I-GWO) (Nadimi-Shahraki et al. ( <ns0:ref type='formula'>2021</ns0:ref>)), has been investigated for handling global optimization and engineering design challenges.</ns0:p><ns0:p>This modification is intended to address the lack of population diversity, the imbalance between exploitation and exploration, and the premature convergence of the GWO algorithm.
The I-GWO algorithm derives from a novel mobility approach known as dimension learning-based hunting (DLH) search strategy which was derived from the natural hunting behaviour of wolves. DLH takes a unique method to creating a neighbourhood for each wolf in which nearby information may be exchanged among wolves. This dimension learning when employed in the DLH search technique improves the imbalance between local and global search and preserves variation.</ns0:p><ns0:p>In this paper, a joint combinatorial problem of user pairing/grouping, power control, and decoding order are considered for every uplink NOMA user within the network. To solve this problem, we propose a recently introduced meta-heuristic algorithm known as Whale Optimization Algorithm (WOA) <ns0:ref type='bibr' target='#b28'>(Mirjalili and Lewis (2016)</ns0:ref>) that is inspired by the hunting approach of the humpback whale. Furthermore, a Grey Wolf Optimizer (GWO) <ns0:ref type='bibr' target='#b30'>(Mirjalili et al. (2014b)</ns0:ref>) and Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b23'>(Kennedy and Eberhart (1995)</ns0:ref>) algorithms are also employed in this research study. The results obtained through the algorithms proposed in <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>), WOA, GWO and the popular PSO are exclusively compared in this study. The acquired results indicate that the WOA outperformed the existing algorithm <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>), GWO and PSO in-terms of spectral-efficiency with lower computational complexity.</ns0:p><ns0:p>The rest of the paper is structured as follows. A 'System Model and Problem Formulation' describes the mathematical representation and research problem of NOMA uplink system. The solution is provided in the 'Solution of Proposed Problem' section where an efficient decoding order, power control scheme and user grouping approach are employed for NOMA uplink System. A concise analysis on the simulation is provided in 'Simulation Results' section. The 'Conclusion' section presents the summary of this research work.</ns0:p></ns0:div> <ns0:div><ns0:head>3/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_2'>2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>SYSTEM MODEL AND PROBLEM FORMULATION System Model</ns0:head><ns0:p>As illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, we consider an uplink NOMA transmission with a single-cell denoted by C.</ns0:p><ns0:p>The number of users M served by a single base station (BS) placed at the centre of the cell. To obtain the signal/information requirements of several users, the number of physical resource block (PRB) denoted by N are assigned to multiple-users in a cell. Hence, the received signal z n at the BS can be represented as:</ns0:p><ns0:formula xml:id='formula_0'>z n = M &#8721; m=1 &#965; n,m g m &#945; m P s m + &#969; n (1)</ns0:formula><ns0:p>where &#965; n,m &#8712; {0, 1} is the user n indicator assigned to the n &#8722;th group. The transmission path between user m and BS is represented by g m which is Guassian distributed. The power control coefficients is denoted by &#945; m (0 &#8804; &#945; m &#8804; 1). For each user m, the transmission power and the signal is denoted by P and s m , where E(|s m | 2 = 1). 
The additive white Gaussian noise (AWGN) power is denoted by &#969; n with an average power &#963; 2 . Therefore, the maximum spectral efficiency of user M and the received signal to interference plus noise ratio (SINR) on n &#8722; th PRB can be expressed as:</ns0:p><ns0:formula xml:id='formula_1'>S m = log 2 (1 + &#966; m )<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>&#966; m = |g m | 2 &#945; m P M &#8721; j&#824; =m |g j | 2 &#945; j P+ &#963; 2 (3)</ns0:formula><ns0:p>The SIC operation is carried out at the BS for each PRB/group to decode the users signal. The decoding order of a user n is represented by &#948; n,m in a cell, where &#948; n,m = a &gt; 0 assumes that any user m in a group is the i &#8722; th one in the n &#8722; th PRB is to be decoded. Thus, the maximum spectral efficiency of user n can be represented as: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_3'>S m = log 2 &#63723; &#63724; &#63724; &#63724; &#63724; &#63724; &#63724; &#63724; &#63725; 1 + |g m | 2 &#945; m &#947; M &#8721; j&#824; =m &#948; n, j &gt;&#948; n,m &gt;0 |g j | 2 &#945; j &#947; + 1 &#63734; &#63735; &#63735; &#63735; &#63735; &#63735; &#63735; &#63735; &#63736; (4)</ns0:formula><ns0:p>where &#948; n, j &gt; &#948; n,m represents the decoding order of users in a PRB/group. If users m and j are in the same group, then it implies that user m is decoded first. The transmission power to noise ratio is represented by &#947;, where &#947; = P/&#963; 2 . Assuming that, the channel-state-information (CSI) is known by BS of each user within coverage area.</ns0:p><ns0:p>To attain effective user-pairing/grouping and power control for NOMA uplink system, each user M in a cell transmit their power control coefficient &#945; m along with user indicator &#965; n,m . Hence, the maximum spectral-efficiency in the n &#8722; th PRB/group can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_4'>S t (n) = &#8721; &#965; n,m =1 S m</ns0:formula><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_5'>S t (n) = log 2 1 + &#8721; &#965; n,m =1 |g m | 2 &#945; m &#947;<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The equation ( <ns0:ref type='formula' target='#formula_5'>6</ns0:ref>) clearly shows that the spectral efficiency in each group has not been affected by the order of decoding but has an impact on each user.</ns0:p></ns0:div> <ns0:div><ns0:head>Problem Formulation</ns0:head><ns0:p>In this paper, we propose an efficient method for power-control, decoding order and user-pairing/ grouping to increase the spectral-efficiency under their required minimum rate constraint. Therefore, a joint combinatorial problem of power control, decoding order and user pairing/grouping is formulated to maximize the spectral-efficiency. The minimum spectral-requirement of each user in the network is s m . 
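To make equations (2)-(6) concrete, the following is a minimal numerical sketch (Python/NumPy), assuming randomly drawn Gaussian channel gains and arbitrary power coefficients; the function names, the choice of M = 6 users, and γ = 30 dB are illustrative assumptions echoing the simulation setup, not code from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: M = 6 users, transmit-power-to-noise ratio gamma = 30 dB.
M = 6
gamma = 10 ** (30 / 10)                      # gamma = P / sigma^2
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)   # Gaussian channel gains
alpha = rng.uniform(0.1, 1.0, size=M)        # power control coefficients, 0 <= alpha_m <= 1

def sinr_no_sic(m, g, alpha, gamma):
    """Eq. (3): SINR of user m when all other users are treated as interference."""
    signal = np.abs(g[m]) ** 2 * alpha[m] * gamma
    interference = sum(np.abs(g[j]) ** 2 * alpha[j] * gamma for j in range(len(g)) if j != m)
    return signal / (interference + 1.0)

def group_sum_rate(group, g, alpha, gamma):
    """Eq. (6): sum spectral efficiency of one PRB/group; independent of the SIC order."""
    return np.log2(1.0 + sum(np.abs(g[m]) ** 2 * alpha[m] * gamma for m in group))

rates = [np.log2(1.0 + sinr_no_sic(m, g, alpha, gamma)) for m in range(M)]   # eq. (2)
print("per-user rates (bits/s/Hz):", np.round(rates, 3))
print("sum rate of group {0, 1, 2}:", round(group_sum_rate([0, 1, 2], g, alpha, gamma), 3))
```

The same per-group routine can be reused for every PRB when accumulating the network-wide objective formulated next.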
Therefore, the spectral efficiency maximization problem <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>; Zhang et al.</ns0:p><ns0:p>(2019)) can be formulated as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_6'>maximize {&#965; n,m }, {&#948; n,m } &#8712; &#960;, {&#945; m } S t = N &#8721; n=1 S t (n) (7a) (7b) subject to C 1 : 0 &#8804; &#945; m &#8804; 1, &#8704;m,<ns0:label>(7c)</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>C 2 : S m &#8805; s m , &#8704;m,<ns0:label>(7d)</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>C 3 : &#965; n,m &#8712; {0, 1}, &#8704;m, &#8704;n,<ns0:label>(7e)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>C 4 : N &#8721; n=1 &#965; n,m = 1, &#8704;m<ns0:label>(7f)</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>SOLUTION OF PROPOSED PROBLEM</ns0:head><ns0:p>To achieve the global optimal solution for Problem (7a), the optimization variables &#965; n,m , &#948; n,m , and &#945; m are strongly correlated, which makes the problem complex. In connection of the fact that user-pairing variables &#948; n,m are combinatorial integer programming variables. Hence, first solve the combinatorial problem of power control and decoding order instead and compute the optimum user-pairing/grouping solution. In case of any fixed scheme of user-grouping, the value of &#965; n,m are independent among all distinct group regarding both decoding order and power control.</ns0:p><ns0:formula xml:id='formula_10'>maximize {&#948; n,m } &#8712; &#960;, {&#945; m } S t (n) (8a) subject to S m &#8805; s m , m &#8712; M n , (8b) 0 &#8804; &#945; m &#8804; 1, m &#8712; M n (8c)</ns0:formula><ns0:p>where M n indicates set of all possible combination in the n &#8722; th PRB.</ns0:p></ns0:div> <ns0:div><ns0:head>Optimal Decoding for Optimal User-Pairing/Grouping</ns0:head><ns0:p>In order to apply SIC operation, all users signals/information are decoded by the receiver in the descending order based on channel condition. In uplink NOMA system <ns0:ref type='bibr' target='#b2'>(Ali et al. (2016)</ns0:ref>), the users with better channel condition is decode first at the BS while the user with worse channel condition is decode last. As a result the user with better channel condition experiences interference from all the users in the network, while the users with poor channel condition experiences interference free transmission.</ns0:p><ns0:p>To attain an efficient decoding <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>) for NOMA uplink users, the decoding order for M users in a cell concern to same group/PRB, based upon the value of J n , where different decoding order of each user in a network depend on power control (Zhang et al. ( <ns0:ref type='formula'>2019</ns0:ref>)) scheme regarding different feasible region can be represented as:</ns0:p><ns0:formula xml:id='formula_11'>J m = |g m | 2 (1 + 1 &#936; m )<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_12'>&#936; m = 2 s m &#8722; 1 (10)</ns0:formula><ns0:p>Based on (9), the user with higher value of J m in a cell is decoded first . Also applies that the decoding-order does not affect the spectral efficiency of each PRB/group.</ns0:p></ns0:div> <ns0:div><ns0:head>Power Control</ns0:head><ns0:p>The nature of the problem in equation ( <ns0:ref type='formula'>8a</ns0:ref>) is a mixed integer non-linear programming (MINLP). 
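Before addressing this MINLP, the decoding-order metric of equations (9)-(10) admits a direct sketch; the channel draws below are placeholders, while the common minimum rate of 1.1 bits/s/Hz mirrors the value later used in Table 1.

```python
import numpy as np

def decoding_metric(g, s_min):
    """Eqs. (9)-(10): J_m = |g_m|^2 * (1 + 1/Psi_m) with Psi_m = 2^{s_m} - 1."""
    psi = 2.0 ** np.asarray(s_min) - 1.0
    return np.abs(np.asarray(g)) ** 2 * (1.0 + 1.0 / psi)

# Placeholder example: 6 users with a common minimum rate of 1.1 bits/s/Hz.
rng = np.random.default_rng(1)
g = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(2)
J = decoding_metric(g, s_min=np.full(6, 1.1))

# Within a group, the user with the largest J_m is decoded first.
print("J_m:", np.round(J, 3))
print("decoding order (user indices):", np.argsort(-J).tolist())
```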
Hence, we have achieved the optimal solution for decoding order &#948; n,m . Therefore, it is required to find all the possible group of combination for user pairing/grouping.</ns0:p><ns0:p>For this purpose, k users in a single cell C are considered. Without loss of generality, it is needful to reduce the complexity and simplify the mathematical procedure regarding optimal decoding order &#948; n,m . The users are listed in a C based on the decreasing order J m , for example 1, 2, 3, . . . , K. Therefore, equation (8a) can be represented as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_13'>maximize {&#945; k } K &#8721; k=1 |g k | 2 &#945; k &#947; (11a) subject to |g k | 2 &#945; k &#947; &#8805; &#936; k K &#8721; j=k+1 |g j | 2 &#945; j &#947; + 1 , &#8704;k,<ns0:label>(11b)</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>0 &#8804; &#945; k &#8804; 1, &#8704;k,<ns0:label>(11c</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where {&#945; k } represents power control-variables. Equations ( <ns0:ref type='formula' target='#formula_13'>11a</ns0:ref>) and (11b) show linearity and translated to SNR formulations respectively.</ns0:p><ns0:p>As shown in equation ( <ns0:ref type='formula' target='#formula_13'>11a</ns0:ref>), &#945; k is increasing. Therefore, the optimal solution for power control will always be upper bound. To determine the lower bound of power control <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>), the following equation can be solved as:</ns0:p><ns0:formula xml:id='formula_15'>&#945; 0 k = &#936; k &#947; &#8242; k |g k | 2 &#947; , 1 &#8804; k &#8804; K (12)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_16'>&#947; &#8242; k = K &#8719; u=k+1 (&#936; u + 1)<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>Which signifies that the spectral efficiency requirements is equal to the sum of spectral efficiencies of all the users. If &#945; 0 k &#8805; 1, exceeds the limit of upper bound and hence, no feasible solution for equation <ns0:ref type='formula' target='#formula_13'>11a</ns0:ref>) has the feasible solution due to bound of &#945; k variables. Therefore, for all users M in a cell, the optimal solution (Zhang et al. ( <ns0:ref type='formula'>2019</ns0:ref>)) of the &#945; k variables can be illustrated as</ns0:p><ns0:formula xml:id='formula_17'>(11a). If 0 &#8804; &#945; 0 k &#8804; 1, equation (</ns0:formula><ns0:formula xml:id='formula_18'>&#945; * k = min{1, b k }<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_19'>b k = min{ |h u | 2 &#947; &#936; u &#8722; k&#8722;1 &#8721; q=u+1 |h q | 2 &#947; &#8722; K &#8721; j=k+1 |h j | 2 &#945; 0 j &#947; &#8722; 1(u = 1, 2, 3, . . . , k &#8722; 1)}<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>In reference to equation ( <ns0:ref type='formula' target='#formula_18'>14</ns0:ref>) and equation ( <ns0:ref type='formula' target='#formula_19'>15</ns0:ref>), the optimal power control variables &#945; * k mentioned in problem (11a) is achieved. Specifically, if &#945; * k = b k , for other users, the optimal power control variables are &#945; * j = &#945; 0 j for j &gt; k.</ns0:p></ns0:div> <ns0:div><ns0:head>User Grouping</ns0:head><ns0:p>An efficient and low computational time algorithm for user-pairing/grouping is one of the key concern for an effective NOMA uplink system. In this regard, three different meta-heuristic algorithms are proposed to solve the issue of complexity. 
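Whichever meta-heuristic drives the grouping search, every candidate group has to be scored. A partial sketch of that scoring step is given below, covering the lower-bound coefficients of equations (12)-(13), the feasibility check, and the group rate of equation (6); the upper-bound refinement of equations (14)-(15) is deliberately omitted, and the numeric values are placeholders rather than the study's code.

```python
import numpy as np

def lower_bound_powers(g, s_min, gamma):
    """Eqs. (12)-(13): lower-bound coefficients alpha^0_k for one group whose users
    are already listed in decreasing order of J_m. Returns None if any alpha^0_k > 1,
    i.e. the group cannot meet its minimum-rate constraints."""
    g, psi = np.asarray(g), 2.0 ** np.asarray(s_min) - 1.0
    K = len(g)
    alpha0 = np.empty(K)
    for k in range(K):
        gamma_prime = np.prod(psi[k + 1:] + 1.0)          # gamma'_k = prod_{u > k} (Psi_u + 1)
        alpha0[k] = psi[k] * gamma_prime / (np.abs(g[k]) ** 2 * gamma)
    return alpha0 if np.all(alpha0 <= 1.0) else None

def group_rate(g, alpha, gamma):
    """Eq. (6): spectral efficiency of the group under the chosen coefficients."""
    return np.log2(1.0 + np.sum(np.abs(np.asarray(g)) ** 2 * np.asarray(alpha) * gamma))

# Placeholder group of 3 users, gamma = 30 dB, s_m = 1.1 bits/s/Hz for every user.
rng = np.random.default_rng(2)
g = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2)
alpha0 = lower_bound_powers(g, s_min=[1.1, 1.1, 1.1], gamma=1e3)
if alpha0 is None:
    print("group infeasible under the minimum-rate constraints")
else:
    print("alpha^0:", np.round(alpha0, 4), "| group rate:", round(group_rate(g, alpha0, 1e3), 3))
```

A grouping search such as Algorithm 1 would call a routine of this kind once per candidate PRB assignment and sum the resulting group rates.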
The WOA is investigated for an efficient optimal and sub-optimal solution for user-pairing/grouping problem as a result to enhance the system performance. Further, the user pairing/grouping problem that exploits the channel-gain difference among different users in a network and the objective is to raise system's spectral-efficiency. To determine the optimum user-pairing/grouping, a specific approach of solving user pairing/grouping problem is by using the search approach. For fixed user-pairing/grouping scheme, the optimal solution is obtained <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>). Then, list all the users in the decreasing order of J m accordingly. The proposed algorithm for user pairing/grouping problem is illustrated in Algorithm 1. Initially, define the feasible solution of user grouping for exhaustive and swarm based algorithm . An exhaustive search explores each data points within the search region and therefore provides the best available match. Furthermore, a huge proportion of computation is needed.</ns0:p><ns0:p>Particularly a discrete type problem where no such solution exists to find the effective feasible solution.</ns0:p><ns0:p>There may be a need to verify each and every possibility sequentially for the purpose of determining the best feasible solution. The optimal solution using exhaustive search algorithm <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>) is getting obdurate because the number of comparison increases rapidly. Hence, the system complexity of WOA for user grouping scheme is O(MN), where as O(N M ) represent the complexity of the exhaustive search algorithm. Therefore, a WOA approach is employed to reduce the complexity and provide efficient results. In addition, GWO and PSO algorithms are also proposed for the same problem.</ns0:p></ns0:div> <ns0:div><ns0:head>7/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Whale Optimization Algorithm (WOA)</ns0:head><ns0:p>To enhance the spectral-throughput and reduce the system complexity, an innovative existence metaheuristic optimization technique named whale optimization algorithm (WOA) <ns0:ref type='bibr' target='#b28'>(Mirjalili and Lewis (2016)</ns0:ref>)</ns0:p><ns0:p>is proposed in this paper. The algorithm WOA is resembles to the behaviour of the humpback whales, which is based on the bubble-net searching approach. Three distinct approaches are used to model the</ns0:p></ns0:div> <ns0:div><ns0:head>WOA is described as</ns0:head></ns0:div> <ns0:div><ns0:head>Encircling Prey</ns0:head><ns0:p>In this approach, the humpback-whales can locate the prey-location of the prey and en-circle that region.</ns0:p><ns0:p>Considering that, the location of the optimal design in the search region is not known in the beginning.</ns0:p><ns0:p>Hence, the algorithm WOA provides the best solution that is nearer to the optimal value. First determine the best solution regarding location and then change the position according to the current condition of the other search agents concerning to determine the best solution. Such an approach is described mathematically and can be expressed as:</ns0:p><ns0:formula xml:id='formula_20'>&#8722; &#8594; E = |A. 
&#8722; &#8594; X * (t) &#8722; X(t)| (16) &#8722; &#8594; Y (t) = &#8722; &#8594; X (t + 1) (17) &#8722; &#8594; Y (t) = &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; B . &#8722; &#8594; E (18)</ns0:formula><ns0:p>where, &#8722; &#8594; B and &#8722; &#8594; A represents the coefficients-vectors, t defines the initial iteration and X * and &#8722; &#8594; X both describes the position-vector where X * includes the best solution so far acquired. | | and . defines the absolute and multiplication. Noted that the position vector X * is updated for each iteration until to find the best solution. The coefficients vector vectors &#8722; &#8594; B and &#8722; &#8594; A can be determined as:</ns0:p><ns0:formula xml:id='formula_21'>&#8722; &#8594; B = 2 &#8722; &#8594; b . &#8722; &#8594; r &#8722; &#8722; &#8594; b (19) &#8722; &#8594; A = 2. &#8722; &#8594; r (20)</ns0:formula><ns0:p>where &#8722; &#8594; r indicates random vector 0 &#8804; r &#8804; 1 and &#8722; &#8594; b represent a vector with a value between 2 and 0, which is decreasing linearly during the iteration.</ns0:p></ns0:div> <ns0:div><ns0:head>Spiral bubble-net feeding maneuver</ns0:head><ns0:p>Two techniques are proposed to predict accurately the bubble-net activity of humpback-whales.</ns0:p></ns0:div> <ns0:div><ns0:head>Shrinking en-circling</ns0:head><ns0:p>This type of techniques is achieved by decreasing the value of &#8722; &#8594; b using equation <ns0:ref type='bibr'>(19)</ns0:ref> . It is to be noted that the variation range of &#8722; &#8594; B is also reduced by the value</ns0:p><ns0:formula xml:id='formula_22'>&#8722; &#8594; b . Therefore, &#8722; &#8594; B is a random value from [&#8722;b, b],</ns0:formula><ns0:p>where the value of b is decreasing from 2 to 0 during the iterations.</ns0:p></ns0:div> <ns0:div><ns0:head>Spiral updating position</ns0:head><ns0:p>In spiral method, a relationship between the location of prey and whale to impersonate the helix-shaped operations is represented in the form of mathematical equation of humpback-whales in the following manner:</ns0:p><ns0:formula xml:id='formula_23'>&#8722; &#8594; Y (t) = &#8722; &#8594; E &#8242; .e pq . cos (2&#960;q) + &#8722; &#8594; X * (21)</ns0:formula><ns0:p>where Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_24'>&#8722; &#8594; E &#8242; = | &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; X (t)|,</ns0:formula><ns0:formula xml:id='formula_25'>Computer Science &#8722; &#8594; Y (t) = &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; E . &#8722; &#8594; B , if d &lt; 0.5 &#8722; &#8594; E &#8242; .e pq . cos (2&#960;q) + &#8722; &#8594; X * , if d &#8805; 0.5 (22)</ns0:formula><ns0:p>where d represents a random number (0 &#8804; d &#8804; 1). Further, the searching behaviour of humpbackwhales for prey is randomly in the bubble-net approach. The following is the representation of mathematical model for bubble net approach.</ns0:p></ns0:div> <ns0:div><ns0:head>Prey Searching Technique</ns0:head><ns0:p>To locate prey, same strategy based on the modification of the &#8722; &#8594; B vector can be utilized (exploration). In </ns0:p><ns0:formula xml:id='formula_26'>&#8722; &#8594; E = | &#8722; &#8594; A . &#8722;&#8722;&#8594; X rand &#8722; &#8722; &#8594; X | (23) &#8722; &#8594; Y (t) = &#8722;&#8722;&#8594; X rand &#8722; &#8722; &#8594; B . 
&#8722; &#8594; E (24)</ns0:formula><ns0:p>where &#8722;&#8722;&#8594; X rand indicates a position-vector that is randomly selected from the existing space.</ns0:p><ns0:p>The algorithm WOA comprised of a selection of random samples. For every iteration, the search agents change their locations in relation to either a randomly selected search agent or the best solution acquired so far in this. For both cases exploitation phase and exploration the value of b is decreasing in the range from 2 to 0 accordingly. As the value of | &#8722; &#8594; B | &gt; 1, a randomly searching solution is selected, while the optimal solution is obtained when | &#8722; &#8594; B | &lt; 1 for updating the search-location of the agents. Based on the parameter d, the WOA is used as a circular or spiral behaviour. Ultimately, the WOA is ended by the successful termination condition is met. Theoretically, it provides exploration and exploitation capability.</ns0:p><ns0:p>Therefore, WOA can still be considered as a successful global optimizer. The WOA is described in Algorithm 1.</ns0:p></ns0:div> <ns0:div><ns0:head>Grey Wolf Optimizer (GWO)</ns0:head><ns0:p>A popular meta-heuristic algorithm, which is influenced by the behaviour of grey-wolves <ns0:ref type='bibr' target='#b30'>(Mirjalili et al. (2014b)</ns0:ref>).This algorithm is based on the hunting approach of grey wolves and their governing-hierarchy.</ns0:p><ns0:p>Grey wolves represent predatory animals, which means these are heading up in the hierarchy. Grey wolves tended to stay in groups. The wolves in a group is varying between 5 to 12. The governing-hierarchy of GWO is shown in Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>, where several kinds of grey wolves have been used particularly &#945;, &#946; , &#948; , and &#969;.</ns0:p><ns0:p>Both wolves (male and female) are the founders known as &#945;s. The &#945; is mainly in favour of producing decision making regarding hunting, sleeping and waking time, sleeping place etc. The group is governed by the &#945;s actions. Even so, some egalitarian behaviour has been observed, such as an alpha wolf following other wolves in the group. The whole group respects the &#945; by keeping their tails towards ground at gatherings. The &#945; wolf is also regarded as superior since the group must obey his/her orders. The group's &#945; wolves are the only ones that can mate. Usually, the &#945; is not always the biggest member of the group, but rather the best at handling the batch. Which illustrates that a group's structure and discipline are often more critical than its capacity. &#946; is the second phase of the grey wolf hierarchy. The &#946; 's are the sub-ordinate wolves who assist the &#945; in taking decision. The &#946; wolf (male or female), is most likely the better choice to be the &#945; wolf in the event that one of the &#945; wolves dies or gets very old. Therefore, &#946; wolf would honour the &#945; while still commanding all other lower-level wolves. It serves as an adviser to the &#945; and a group disciplinarian. Throughout the group, the &#946; confirms the &#945;s orders and provides List all the users with decreasing order of J m . 
while t &lt; (total iterations) for every search user Initialize b,B,A,q and d if1(d &lt; 0.5) if2(|B| &lt; 1) Existing search user position is updated using equation ( <ns0:ref type='formula'>16</ns0:ref>) else if2(|B| &#8805; 1) Randomly selected a search user (X rand ) Existing position of search user is updated using equation ( <ns0:ref type='formula'>23</ns0:ref>) end if2 else if1(d &#8805; 0.5) Exiting position of search user is updated using equation ( <ns0:ref type='formula'>21</ns0:ref> guidance to the &#945;. The grey wolf with the lowest rating is &#969;. The &#969; serves as a scapegoat. &#969; wolves must List all the users with decreasing order of J m .</ns0:p><ns0:p>for each generation do for each particle do Update the position and vector by using equation ( <ns0:ref type='formula' target='#formula_27'>37</ns0:ref>) and equation ( <ns0:ref type='formula' target='#formula_28'>38</ns0:ref>) Estimate the fitness of the particle Update both pbest and gbest t=t+1 end for end for return pbest, gbest Algorithm 3: PSO Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b23'>Kennedy and Eberhart (Kennedy and Eberhart (1995)</ns0:ref>) introduced PSO as an evolutionary computation method. It was influenced by the social behaviour of birds, which involves a large number of individuals (particles) moving through the search space to try to find a solution. Over the entire iterations, the particles map the best solution (best location) in their tracks. In essence, particles are guided by their own best positions, which is the best solution same as achieved by the swarm. This behaviour can modelled mathematically by using velocity vector (u), dimension (S), which represents the number of parameters and position vector (x). In the entire iterations, the position and velocity of the particles changing by the following equation:</ns0:p><ns0:formula xml:id='formula_27'>u t+1 i = vu t i + e 1 &#215; rand &#215; (pbest i &#8722; x t i ) + e 2 &#215; rand &#215; (gbest &#8722; x t i )<ns0:label>(37)</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>x t+1 i = u t+1 i + x t i (<ns0:label>38</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>)</ns0:formula><ns0:p>where v(0.4 &#8804; v &#8804; 0.9) represents the inertial weight, which control stability of the PSO algorithm.</ns0:p><ns0:p>cognitive coefficient e 1 (0 &lt; e 1 &#8804; 2), which limits the impact of the individual memory for best solution.</ns0:p><ns0:p>Social factor e 2 (0 &lt; e 2 &#8804; 2), which limits the motion of particles to find best solution by the entire swarm, rand indicates a random number in the range between 0 and 1, attempt to provide additional randomized search capability to the PSO algorithm and two variables pbest and gbest, used to accumulate best solutions achieved by each particle and the entire swarm accordingly. The PSO is described in Algorithm 3.</ns0:p></ns0:div> <ns0:div><ns0:head>SIMULATION RESULTS</ns0:head><ns0:p>This section evaluates the performance of the proposed meta-heuristic algorithms, namely, WOA, GWO and PSO of the user groping, power control and decoding order for NOMA uplink systems.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> presents the simulation parameter values attained from the literature <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b49'>Zhang et al. 
(2019)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Mirjalili and Lewis (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b30'>Mirjalili et al. (2014b)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Kennedy and Eberhart (1995)</ns0:ref>) for WOA, GWO and PSO algorithms that participated in the simulation. Further, the Wilcoxon test and Friedman test <ns0:ref type='bibr' target='#b0'>(Abualigah et al. (2021)</ns0:ref>) are performed for experiments and the statistical analysis of GWO and PSO is also provided in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Based on the results of tests, the proposed WOA outperforms the other algorithms in comparison.</ns0:p><ns0:p>Both channel of the users and location are allocated randomly in the simulation. Therefore, the range between the user and BS are uniformly distributed and considered that the channel response is Gaussian distribution <ns0:ref type='bibr' target='#b49'>(Zhang et al. (2019)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>13/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='formula'>2016</ns0:ref>)) , GWO <ns0:ref type='bibr' target='#b30'>(Mirjalili et al. (2014b)</ns0:ref>) and PSO <ns0:ref type='bibr' target='#b23'>(Kennedy and Eberhart (1995)</ns0:ref>) algorithms proposed for NOMA uplink system. We may conclude that WOA, GWO and PSO algorithms converge at a comparable rate, hence WOA converges after a greater number of iterations than GWO and PSO. The proposed WOA attains significant performance in-terms of spectral efficiency as compare to GWO and PSO algorithms. Also the proposed WOA <ns0:ref type='bibr' target='#b28'>(Mirjalili and Lewis (2016)</ns0:ref>) provides stability and attains the minimum rate requirement without such a noticeable drop in the results.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref> compares the spectral-efficiency of NOMA and OMA approaches with varying &#947;, respectively.</ns0:p><ns0:p>It has been proved that the spectral-efficiency of NOMA scheme is considerably higher than those of scheme. Moreover, the spectral-efficiency of the proposed sub-optimal approach is nearer to the optimal value. The proposed WOA algorithm attains near optimal performance with minimal computational complexity. In addition, as the number of users increases the computational cost of the exhaustive-search algorithm increases as compared to WOA.</ns0:p><ns0:p>For NOMA uplink systems, the power control approach in <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is provided as a benchmark scheme, where the spectral efficiency are near to the optimal value. Noted that the approach used in <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is valid only for two user-pairing. Hence, the proposed scheme performs admirably in-terms of having efficient user grouping for multiple users.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> and 6 evaluates the performance of GWO and PSO algorithms in-terms of spectral-efficiency.</ns0:p><ns0:p>For uplink NOMA system, the spectral-efficiency of NOMA scheme outperform OMA scheme with varying &#947;. Moreover, the spectral-efficiency of optimal and sub-optimal solutions are nearer to each other. The power control scheme for NOMA uplink system in <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is used as a benchmark. 
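Alongside these rate comparisons, the statistical significance summarized in Table 2 can be checked with standard tests; the sketch below uses SciPy on synthetic placeholder samples (the real inputs would be the per-run spectral efficiencies of each algorithm).

```python
import numpy as np
from scipy import stats

# Synthetic placeholder per-run spectral-efficiency samples (bits/s/Hz); in the actual
# study these would come from repeated independent simulation runs of each algorithm.
rng = np.random.default_rng(3)
woa = 9.0 + 0.10 * rng.standard_normal(30)
gwo = 8.7 + 0.12 * rng.standard_normal(30)
pso = 8.6 + 0.15 * rng.standard_normal(30)

# Pairwise Wilcoxon signed-rank tests of WOA against each competitor.
for name, rival in [("GWO", gwo), ("PSO", pso)]:
    stat, p = stats.wilcoxon(woa, rival)
    print(f"Wilcoxon WOA vs {name}: statistic = {stat:.1f}, p-value = {p:.3e}")

# Friedman test across the three algorithms over the same set of runs.
stat, p = stats.friedmanchisquare(woa, gwo, pso)
print(f"Friedman test: statistic = {stat:.2f}, p-value = {p:.3e}")
```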
It has been observed that the spectral-efficiency of both GWO and PSO algorithms shows better results than power control <ns0:ref type='bibr' target='#b37'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) and OMA scheme.</ns0:p><ns0:p>Moreover, a comparison of proposed optimal WOA, GWO and PSO has shown in Figure <ns0:ref type='figure'>7</ns0:ref>. The performance of proposed optimal WOA, GWO and PSO are almost nearer to one another. Moreover, as the value of &#947; above 30 dB, the optimal WOA performs better in-terms of spectral-efficiency as compare to GWO and PSO. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. NOMA Uplink Transmission.</ns0:figDesc><ns0:graphic coords='5,209.41,159.93,278.27,191.85' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>where &#948; n,m represents the decoding order and &#960; indicates all possible combinations of users decoding orders in a network. C 1 indicates the upper bound of transmission power. C 2 guarantees the minimum rate of a user. C 3 and C 4 ensures the user indicator and m users assigned to PRB/group.5/19PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>which represents the distance between prey and the i &#8722; th whale. l denotes the random number(&#8722;1 &#8804; l &#8804; 1). b represents logarithmic spiral, which is a constant number and . indicates the multiplication operation. It's worth noting that humpback-whales swim in a shrinking-circle around their prey while still following a spiral-shaped direction. To predict this concurrent action, an equation is derived to represent the model can be expressed as:8/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:1:2:NEW 7 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>reality, humpback-whales search at random based on their location. As a result, we select &#8722; 1 to compel the search-agent to step away from a target value. Comparison with exploitation, modify the location of every search-agent in the sample space, based on randomly selected process until to obtained a better solution. This operation and | &#8722; &#8594; B | &gt; 1 place an emphasis on the exploration phase and enable WOA to perform global-searching. This can be represented below:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Set the input control variables M, N, &#947; m , {g m }, {s n } Population initialization X 1 , X 2 , .........X n Result: X * (Best search agent for user-pairing/grouping).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Grey wolf hierarchy (Mirjalili et al. (2014b)).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Illustration of convergence of WOA, GWO and PSO.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. 
Illustration of spectral efficiency of WOA with increasing &#947;.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .Figure 7 .</ns0:head><ns0:label>67</ns0:label><ns0:figDesc>Figure 6. Illustration of spectral efficiency of PSO with increasing &#947;.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Parameters for Proposed Uplink NOMA</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>s m</ns0:cell><ns0:cell>1.1 bits/s/Hz</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947;</ns0:cell><ns0:cell>30 dB</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistical analysis of GWO and PSO</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Wilcoxon</ns0:cell><ns0:cell>GWO</ns0:cell><ns0:cell>PSO</ns0:cell></ns0:row><ns0:row><ns0:cell>p-value</ns0:cell><ns0:cell>1.8E &#8722; 169</ns0:cell><ns0:cell>4.7E &#8722; 181</ns0:cell></ns0:row><ns0:row><ns0:cell>Friedman</ns0:cell><ns0:cell>GWO</ns0:cell><ns0:cell>PSO</ns0:cell></ns0:row><ns0:row><ns0:cell>p-value</ns0:cell><ns0:cell>4.5E &#8722; 161</ns0:cell><ns0:cell>1.2E &#8722; 164</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 3 indicates the comparison of convergence of WOA (Mirjalili and Lewis (</ns0:note></ns0:figure> </ns0:body> "
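For completeness, the WOA update rules described in the 'Whale Optimization Algorithm (WOA)' section (encircling, shrinking coefficients, spiral move, and random search, equations (16)-(24)) can be condensed into the short sketch below. The continuous-to-group decoding and the toy objective are assumptions made purely for illustration; a faithful run of Algorithm 1 would instead score each candidate grouping with the negative network sum rate obtained from equations (6) and (12)-(14).

```python
import numpy as np

def woa_minimize(objective, dim, n_agents=20, n_iter=200, bounds=(0.0, 1.0), seed=0):
    """Condensed sketch of the WOA search: encircling (eqs. (16)-(18)), shrinking
    coefficients (eqs. (19)-(20)), spiral move (eq. (21)) and random search
    (eqs. (23)-(24)), applied to a generic minimization objective."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, dim))            # whale positions
    fit = np.array([objective(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()

    for t in range(n_iter):
        b = 2.0 - 2.0 * t / n_iter                           # decreases linearly from 2 to 0
        for i in range(n_agents):
            B = 2.0 * b * rng.random() - b                   # eq. (19)
            A = 2.0 * rng.random()                           # eq. (20)
            if rng.random() < 0.5:                           # parameter d in eq. (22)
                if abs(B) < 1.0:                             # exploit: encircle the best whale
                    X[i] = best - B * np.abs(A * best - X[i])            # eqs. (16), (18)
                else:                                        # explore: follow a random whale
                    X_rand = X[rng.integers(n_agents)]
                    X[i] = X_rand - B * np.abs(A * X_rand - X[i])        # eqs. (23), (24)
            else:                                            # spiral update around the best whale
                q = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(q) * np.cos(2 * np.pi * q) + best   # eq. (21), p = 1
            X[i] = np.clip(X[i], lo, hi)
        fit = np.array([objective(x) for x in X])
        if fit.min() < best_fit:
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit

# Toy usage: M users mapped onto N groups by rounding a continuous position vector.
M, N = 6, 3
decode = lambda x: np.minimum((x * N).astype(int), N - 1)        # continuous position -> group index per user
toy_objective = lambda x: -float(len(set(decode(x).tolist())))   # placeholder objective only
best_x, _ = woa_minimize(toy_objective, dim=M)
print("toy grouping:", decode(best_x).tolist())
```

GWO and PSO would slot into the same loop by swapping the position-update rules while keeping the grouping encoding and objective unchanged.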
"Original Manuscript ID: CS-2021:07:64162:1:0:NEW Original Article Title: Joint User Grouping and Power Control Using Whale Optimization Algorithm for NOMA Uplink System. Honorable Editor and Reviewers: Please accept a heartfelt expression of thanks for the time and effort you have put in while reviewing the manuscript. Thanks are extended to the esteemed reviewers whose valuable feedback and insightful comments have improved the quality of our manuscript.  The authors have left no stone unturned in incorporating the suggestions put forth by the esteemed reviewers. Significant changes made to the original manuscript have been highlighted in yellow color, making it convenient for the esteemed reviewers to track the suggested changes and readily go through them.   Starting now are the comments shared by the esteemed reviewers, followed by a brief overview of the corresponding changes made by the authors. Relevant paragraphs within corresponding sections of the manuscript where content has been added or modified are also identified in the response. Without further ado, let's dive right in.    Reviewer # 1 Basic reporting 'The authors introduced Whale Optimization Algorithm (WOA) and compare it with Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO) algorithms in the context of NOMA-based wireless system. The manuscript is easy to follow. The authors can consider the below suggestions to improve the quality of the paper. As for the writing: - abstract: 'A user grouping,... assume...' to passive voice. Author response: The sentence has been revised in the revised manuscript. - please avoid using ambiguous expressions such as 'high', 'significantly comparable', etc. Author response: The suggested corrections have been made in the revised manuscript. - line 144 two 'at'. Author response: The suggested correction has been made. - line 151 unfinished sentence. Author response: As suggested by the reviewer, the unfinished sentence has been removed. - avoid starting sentences with Greek symbols or 'Which'. Author response: The honorable reviewer is thanked for suggesting an important refinement. The sentences have been revised. The modification is highlighted in yellow for reviewer convenience. - Grey Wolf Optimizer (GWO): The gender of the wolve is emphasized. How does it contribute to the algorithm? Author response: We are thankful to the honorable reviewer for the useful insight. The research problem mentioned in the manuscript is a joint combinatorial problem. The proposed solution is based on joint decoding order, power control and user-pairing/grouping for NOMA uplink system. The research problem is represented mathematically in equation (7a). To solve that problem, we proposed a meta-heuristic optimization algorithm namely Whale Optimization Algorithm (WOA) [1] and compare it with Grey Wolf Optimizer (GWO)[2] and Particle Swarm Optimization (PSO)[3] . The GWO [2] simulates the natural leadership structure and hunting process of grey wolves. The leadership hierarchy is simulated using four varieties of grey wolves: alpha, beta, delta, and omega. In addition, three major hunting processes are adopted to improve performance: searching for prey, encircling prey, and attacking prey. Both wolves (male and female) are called alphas. The alphas are capable of determining the decision about sleeping place, hunting and time to wake etc. The herd is directed by the alpha's decisions. 
Based on the meta-heuristic algorithm GWO shown in Algorithm 2 in the revised manuscript, we obtain an efficient user-grouping and the result is shown in figure 5 in the manuscript. The simulation parameters are given in table-1, where M=6 represents the number of users and N=3 represent the number of resource block (N). - 'Figure 2: Grey wolf hierarchy ?' Author response: The honorable reviewer is thanked for the opportunity to clarify this point. The symbol “?” has been replaced by a proper citation represented as (Mirjalili et al. (2014)). The modification has been highlighted in yellow. - line 379 'is assume' to passive voice. Author response: The honorable reviewer is thanked for suggesting an important refinement. The sentence has been revised and the modification has been highlighted in yellow for reviewer convenience. - line 412 'simulations results' Author response: The honorable reviewer is thanked for suggesting an important refinement. The sentence has been revised. Experimental design The authors proper explanation NOMA-based wireless system performance and detailed analysis. Neither experimental verification nor comparison to other methods/models is presented. and more comparison to recently published papers that may prove scientific value is not provided Author response: The honorable reviewer is thanked for the opportunity to clarify this point. The detailed analysis of the uplink NOMA system using whale optimization algorithm (WOA) is included in the revised manuscript to analyze the performance of the proposed algorithm. For simulation, list of parameters are mentioned in table-1. Further, the experimental verification of the proposed algorithm is provided in [1][2][3]. In addition the comparison of the proposed algorithms with recent algorithm (Sedaghat and Müller (2018)) used for NOMA particularly for uplink system is provided in the revised manuscript. Validity of the findings Please restructure the paper, as sections 'Spectral efficiency maximization' and 'Solution of proposed model' can be subsections instead of being equal to sections 'Introduction' and 'Discussion'. Author response: The honorable reviewer is thanked for his valuable feedback to improve the manuscript. The section “Spectral efficiency maximization” has been revised, whereas section “Solution of proposed model” has been replaced by “Solution of proposed problem” as a section which describes the solution of the problem mentioned in equation no (7a). The modification has been highlighted in yellow. If possible, please elaborate the discussion on the Simulation results and add application scenarios and/or future works in the Conclusion. Author response: The honorable reviewer is thanked for his valuable suggestion. In section “Simulation results”, we have added few sentences to draw out the importance of the proposed solution. Further, the future work has been included in section “Conclusion” in the revised manuscript. The modification has been highlighted in yellow for reviewer convenience. Reviewer # 2 Basic reporting no comment Experimental design no comment Validity of the findings no comment Additional comments This paper studies the uplink NOMA system with the WOA algorithm, But the reviewer cannot accept this manuscript. Comments are as follows: 1. The authors just applied the WOA algorithm to the uplink NOMA systems and the reviewer could not find any interesting idea or intuition from this manuscript. Authors' Response:  The honorable reviewer is thanked for valuable feedback. 
The research problem mentioned in the manuscript is a combinatorial in nature. This research work is focusing on joint decoding order, power control and user-pairing/grouping for uplink NOMA system, which is a complex problem represented mathematically in equation, no (7a) in the manuscript. For uplink case a limited research work is available as compare to downlink case [5]. Moreover, in literature, a meta-heuristic algorithm [4] has been investigated to solve the problem of joint user association and power control for Non- orthogonal multiple access scheme for downlink case. Hence, for uplink NOMA case, a metat-heurtistic (WOA)[2] is used first time to solve that problem . Further, for comparison two popular meta-heuristics algorithms namely GWO[2] and PSO[3] is implemented for the mentioned problem. 2. No performance gain of the WOA algorithm compare with PSO or GWO algorithm from fig.7 Author response: The honorable reviewer is thanked for the opportunity to clarify this point. Three different meta-heuristic optimization algorithms namely WOA [1], GWO [2] and PSO [3] have been proposed first time for NOMA uplink system. The simulation results in Figure 7 represent the performance of proposed WOA and compared with GWO and PSO in terms of spectral efficiency (bits/s/Hz). The Figure 7 has been modified in the revised manuscript. The Figure 7 represents that the spectral efficiency (bits/s/Hz) of WOA is comparatively high as the transmission power ( increases from 35 to 40 dB as compare to GWO and PSO. 3. Future works should be added in the last section. Author response: The honorable reviewer is thanked for his valuable suggestion. The future work has been included in section “Conclusion” in the revised manuscript. Modification in the content has been highlighted in yellow for the reviewer's convenience.    Reviewer # 3 Basic reporting Although the language of the article is understandable, it needs some enhancements, for example, when the authors wrote 'In this work, a newly developed Whale Optimization Algorithm (WOA) is implemented', they did not mention the purpose of using this algorithm specifically. And the article also lacks clarity when talking briefly about results in the Abstract section as in the sentence ' Also, WOA attains improved results in compliance with system complexity'. Moreover there are some grammatical mistakes and types like: 'is capable to chose', 'would honour the','where as ', 'response is assume to be', 'are describe as'.... Author response: The honorable reviewer is thanked for pinpointing an important room for improvement in the manuscript. Novelty and the contribution of the proposed work have now been included in both the abstract and the introduction section of the manuscript. We have revised the sentences to facilitate better reading and understanding in the abstract. We have also clarified the grammar mistakes of the manuscript and addressed Reviewer’s comments. Modification in the content has been highlighted in yellow for the reviewer's convenience. Experimental design WOA algorithm has to be written and described by taking into account the NOMA Uplink system, One of the drawbacks of the article is that the baselines contain more details than the proposed algorithms. Author response: We are thankful to the honorable reviewer for the useful insight. We agree and have added more literature related to the proposed meta-heuristics algorithms to draw out the importance of the proposed solution for NOMA uplink systems. 
The modification has been highlighted in yellow for reviewer convenience. Validity of the findings I think it would be good to compare your results against other algorithms such as Cuckoo Search Optimization, Ant Colony Algorithm, and other state-of-the-art algorithms... Author response: We are thankful to the honorable reviewer for the useful insight. The research problem mentioned in the manuscript is combinatorial in nature. The research work is focusing on joint decoding order, power control and user-pairing/grouping for uplink NOMA system. The proposed solution is based on the meta-heuristics algorithms namely WOA [1], GWO [2] and PSO [3].In the literature, no such meta-heuristic algorithms have been implemented before particularly for NOMA uplink systems. The valuable feedback is significant for future perspective, and we will surely investigate its performance with recently enhanced variants of algorithms with close attention in the upcoming versions of the manuscript. Additional comments The article is missing the future work Author response: The honorable reviewer is thanked for his valuable suggestion. The future work has been added in section “Conclusion” in the revised manuscript. The modification has been highlighted in yellow for reviewer convenience. Reviewer # 4 Basic reporting The problem under study is interesting and trendy, but the major issue is that there is no significant and convincing novelty. In the following, some concerns will be discussed which hopefully can help the authors. 1. The main concern about this work is that the manuscript consists of contradictory statements about the contribution. It seems the authors investigated the usage of three algorithms on the application of NOMA, not proposing a new variant of WOA. The authors should specify with clarity their novelty and contribution. For instance, the reader encountered the following inconsistent sentences, - “In this work PSO, GWO, and WOA are employed”, “a newly developed WOA is implemented”, “To solve the issue of complexity, a WOA with low complexity is investigated for an efficient opt…”, “The proposed algorithm for user pairing/grouping”, “… reduce the system complexity, an innovative existence metaheuristic optimization technique named WOA is proposed in this work.”, “This section evaluates the proposed algorithms WOA, GWO, and PSO…”. Author response: The honorable reviewer is thanked for pinpointing an important room for improvement in the manuscript. In this research work, we proposed three different meta-heuristics algorithms namely, Whale Optimization Algorithm (WOA) [1], Grey Wolf Optimizer [2] and Particle Swarm Optimization (PSO) [3] for NOMA uplink systems. Novelty and the contribution of the proposed work have now been included in both the abstract and the introduction section of the manuscript. We have revised the sentences to facilitate better reading and understanding in the abstract. Modification in the content has been highlighted in yellow for the reviewer's convenience. 2. The abstract should be revised to reflect the main novelty and contribution of the paper. Authors' Response:  The honorable reviewer is thanked for his valuable suggestion. Novelty and the contribution of the proposed work have now been revised in the abstract. Modification has been highlighted in yellow for the reviewer's convenience.    3. 
The literature overview in Introduction section is shallow, the authors should add an in-depth literature review of recent optimization algorithms such as MTDE and I-GWO (with DLH search strategy) algorithms. Author response: The honorable reviewer is thanked for pointing out this point for an improvement of the manuscript. Literature regarding recent algorithms has now been included in the revised manuscript. Modification in the content has been highlighted in yellow for the reviewer's convenience. 4. It is recommended to provide more informative literature on the usage of metaheuristic algorithms for solving the NOMA uplink system. Author response: The honorable reviewer is thanked for useful comments and recommendation. More useful literature on the use of meta-heuristic algorithms has been included in the revised manuscript. Also the modification has been highlighted in yellow. 5. The Section entitled “Optimal and Sub-optimal User Grouping using WOA” is vague and does not show the methodology of the usage of WOA for user grouping tasks. The authors should describe the proposed algorithm step-by-step and show the sequence by using a flowchart or pseudo-code. (Algorithm 1 is the pseudo-code of WOA not the usage of it for user grouping) Author respons: We are thankful to the honorable reviewer for his valuable suggestion. The section entitled “Optimal and Sub-optimal User Grouping using WOA” has been replaced by “User Grouping” in the revised manuscript. The methodology of the usage of the proposed WOA has been included and highlighted in yellow in the revised manuscript. Also the proposed WOA for user grouping is described step-by-step in Algorithm 1 in the revised manuscript. 6. The authors should clarify what is the “WOA NOMA”. There is no explanation for “WOA NOMA” in the manuscript until the Simulation results section. Authors' Response:  The honorable reviewer is thanked for pointing out this point. In the manuscript, three different meta-heuristic algorithms namely, Whale Optimization Algorithm (WOA) [1] , Grey Wolf Optimizer (GWO) [2] and Particle Swarm Optimization (PSO) [3] have been proposed to solve the problem of NOMA uplink system. We agree this term “WOA NOMA” was confusing and we have removed this term in the revised manuscript. The modification has been highlighted in yellow for reviewer convenience. Experimental design 7. The authors mentioned in Introduction that “The results obtained through the algorithms proposed in (Sedaghat and Mu¨ ller (2018)), WOA, GWO and the popular PSO are exclusively compared in this study.”, but there are not any results from (Sedaghat and Mu¨ ller (2018) in the Simulation results section. Author response: We are thankful to the honorable reviewer for the useful insight. As suggested by the esteemed reviewer, simulations have been performed to evaluate the proposed algorithms' performance. In section “Simulation results”, results of the proposed algorithm (WOA) [1] are compared with the GWO [2], PSO [3] and algorithm used in (Sedaghat and Müller (2018)) [6]. Also the results of the (Sedaghat and Müller (2018)) [6] has been mentioned in the legend of Figures 4, 5 and 6 in the revised manuscript.    8. The authors should correct the legend of Figures 4-6 to remove “[19]” and add a proper reference. Author response: The honorable reviewer is thanked for suggesting an important refinement. 
The legend of Figures 4, 5 and 6 have been modified and also replaced “[19]” by a proper citation represented as (Sedaghat and Müller (2018)) in the revised manuscript. 9. The selection of comparative algorithms is not satisfactory. It’s necessary to compare the proposed algorithm with recently proposed and enhanced variants of algorithms. Authors’ Response: The honorable reviewer is thanked for his valuable suggestion. The research problem mentioned in the manuscript is focusing on decoding order, power control and user-pairing/grouping for uplink NOMA system. In the literature, no such algorithms have been implemented before to improve the performance of NOMA particularly for uplink case. For fifth generation (5G) wireless communication, most of the literatures are available for downlink scenario. In this manuscript, first time we use the meta-heuristic algorithms namely, Whale Optimization Algorithm (WOA) [1], Grey Wolf Optimizer (GWO) [2] and Particle Swarm Optimization (PSO) [3] for NOMA uplink scenario. The valuable feedback is significant for future perspective, and we will surely investigate its performance with recently enhanced variants of algorithms with close attention in the upcoming versions of the manuscript. 10. I could not find the number of iterations, runs, and population used in experiments; are they missed? Author response: We are thankful to the honorable reviewer for the useful insight. The number of iterations (Figure 3) is included in the manuscript whereas runs and population used in experiments are mentioned in [1], [2] [3]. Validity of the findings 11. The authors should statistically analyze the proposed and comparative algorithms using Wilcoxon and Friedman tests. Authors' Response:  The honorable reviewer is thanked for pointing out this point. The Wilcoxon test and Friedman test have been performed and the statistical analysis of GWO and PSO has been also provided in Table-2 in section “Simulation Results”. The modification has been highlighted in yellow for reviewer convenience. 12. The authors have claimed about the aim of using WOA and “obtaining high spectral-efficiency with lower computational complexity”, but this complexity is related to WOA and the authors have no contribution to achieve or even to reduce it. Authors' Response:  The honorable reviewer is thanked for pointing out this point. In the manuscript, the proposed WOA outperforms the GWO [2], PSO [3] and the existing algorithm in the literature [6] for NOMA uplink systems in-terms of spectral performance. For complexity the Big O Notation of WOA has been determined and included in the section “User Grouping” in the revised manuscript. The modification has been highlighted in yellow for reviewer convenience. 13. It is recommended to analyze the performance and effectiveness of the proposed and comparative algorithms using the performance index (PI) as shown in Subsection 5.3.5 in the QANA paper. Author response: The honourable reviewer is thanked for his valuable suggestion. The subsection 5.3.5 is not included in the manuscript. The Wilcoxon test and Friedman test have been performed for experiments. The results (Table-2) have been mentioned in Section “Simulation Results” in the revised manuscript to evaluate the performance of the proposed algorithms. The modification has been highlighted in yellow for reviewer convenience. Additional comments 14. It is noted that Figures 4, 5, 6, and 7 are illustrated before their description. 
Author response: The honorable reviewer is thanked for his valuable suggestion. In section “Simulation Results” of the revised manuscript, Figures 4, 5, 6 and 7 have been rearranged so that each figure now appears after its description. References: [1]. Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51-67. [2]. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61. [3]. Kennedy, J., & Eberhart, R. (1995, November). Particle swarm optimization. In Proceedings of ICNN'95-International Conference on Neural Networks (Vol. 4, pp. 1942-1948). IEEE. [4]. Goudos, S. K., Diamantoulakis, P. D., Boursianis, A. D., Papanikolaou, V. K., & Karagiannidis, G. K. (2020, September). Joint user association and power allocation using swarm intelligence algorithms in non-orthogonal multiple access networks. In 2020 9th International Conference on Modern Circuits and Systems Technologies (MOCAST) (pp. 1-4). IEEE. [5]. Khan, W. U., Jameel, F., Ristaniemi, T., Elhalawany, B. M., & Liu, J. (2019, April). Efficient power allocation for multi-cell uplink NOMA network. In 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring) (pp. 1-5). IEEE. [6]. Sedaghat, M. A., & Müller, R. R. (2018). On user pairing in uplink NOMA. IEEE Transactions on Wireless Communications, 17(5), 3474-3486. 
Here is a paper. Please give your review comments after reading it.
338
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Non-orthogonal multiple access (NOMA) scheme is proved to be a potential candidate to enhance spectral potency and massive connectivity for 5G wireless networks. To achieve effective system performance, user grouping, power control, and decoding order are considered to be fundamental factors. In this regard, a joint combinatorial problem consisting of user grouping and power control is considered, to obtain high spectralefficiency for NOMA uplink system with lower computational complexity. To solve joint problem of power control and user grouping, for Uplink NOMA, up to authors knowledge, we have used for the first time a newly developed meta-heuristicnature-inspired optimization algorithm i.e. Whale Optimization Algorithm (WOA). Further, for comparison a recently initiated Grey Wolf Optimizer (GWO) and the well-known Particle Swarm Optimization (PSO) algorithms are also applied for the same joint issue. To attain optimal and sub-optimal solutions, a NOMA-based model is used to evaluate the potential of the proposed algorithm. Numerical results validate that proposed WOA outperforms GWO, PSO and existing literature reported for NOMA uplink systems in-terms of spectral performance.</ns0:p><ns0:p>In addition, WOA attains improved results in terms of joint user grouping and power control with lower system-complexity as compare to GWO and PSO algorithms. The proposed work is novel enhancement for 5G uplink applications of NOMA systems.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Multiple access approaches are increasingly gaining importance in modern mobile communication systems, primarily due to the overwhelming increase in the communication demands at both the user and device level. Over past few years, non-orthogonal multiple access (NOMA) <ns0:ref type='bibr' target='#b15'>(Ding et al. (2017a</ns0:ref><ns0:ref type='bibr' target='#b17'>(Ding et al. ( , 2014</ns0:ref><ns0:ref type='bibr' target='#b16'>(Ding et al. ( , 2017b))</ns0:ref>; <ns0:ref type='bibr' target='#b8'>Benjebbovu et al. (2013)</ns0:ref>) schemes have earned significant attention for supporting the huge connectivity in contemporary wireless communication systems. The NOMA schemes are currently considered as the most promising contender for the 5G and beyond 5G (B5G) wireless communications, which are capable of accessing massive user connections and attaining high spectrum performance. Moreover, a report has been published recently regarding the Third Generation Partnership Project for determining the effectiveness of NOMA schemes for several applications or development scenarios, particularly for Ultra-Reliable Low Latency Communications (URLLC), enhanced Mobile Broadband (eMBB), and massive Machine Type Communications (mMTC) <ns0:ref type='bibr' target='#b6'>(Benjebbour et al. (2013)</ns0:ref>). Contrary to the classic orthogonal multiple access (OMA) approaches, the NOMA schemes can offer services to PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science multiple users in the same space/code/frequency/time resource block (RB). The NOMA schemes are also capable of differentiating the users that have distinct channel settings. 
These schemes are mainly inclined at strengthening connectivity and facilitating users with an efficient broad-spectrum <ns0:ref type='bibr' target='#b23'>(Islam et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Dai et al. (2015)</ns0:ref>). Some recent studies <ns0:ref type='bibr' target='#b9'>(Chen et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b43'>Wang et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b40'>Shahini and Ansari (2019)</ns0:ref>) have discussed the effective use of the NOMA approach in standard frameworks for Internet of Things (IoT) systems and Vehicle-to-Everything (V2X) networks. The successive interference cancellation (SIC) technique, which is pertinent for multi-user detection and decoding is implemented for the NOMA scheme at the receiver end. The SIC technique operates differently for the downlink and uplink scenarios. In the downlink NOMA scenario, SIC is applied at the receiver end, where high energy is consumed during processing when a lot of users are considered in the NOMA group. For that reason, two users are typically considered in a group for optimum grouping/pairing of users in the case of the downlink NOMA system (Al-Abbasi and So (2016); <ns0:ref type='bibr' target='#b21'>He et al. (2016)</ns0:ref>). Whereas in the uplink NOMA systems, it is possible to employ SIC at the base station (BS) that has a higher processing capacity. Moreover, in uplink NOMA, multiple users are allowed to transmit in a grant-free approach that leads to a significantly reduced latency rate.</ns0:p><ns0:p>From a practical perspective, the user-pairing/grouping and power control schemes in uplink/downlink NOMA systems are critically required to achieve an appropriate trade-off between the performance of the NOMA system and the computational complexity of the SIC technique. Over the past few years, several studies have discussed different prospects regarding the maximization of sum rate <ns0:ref type='bibr' target='#b50'>(Zhang et al. (2016a)</ns0:ref>; <ns0:ref type='bibr' target='#b14'>Ding et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Ali et al. (2016)</ns0:ref>), the transmission power control approaches <ns0:ref type='bibr' target='#b46'>(Wei et al. (2017)</ns0:ref>), and fairness <ns0:ref type='bibr' target='#b26'>(Liu et al. (2015</ns0:ref><ns0:ref type='bibr' target='#b27'>(Liu et al. ( , 2016))</ns0:ref>) for user pairing/grouping NOMA systems. Regarding the maximization of sum rate, a two-user grouping scheme based on a unique channel gain is demonstrated in <ns0:ref type='bibr' target='#b14'>(Ding et al. (2015)</ns0:ref>) whereas another study <ns0:ref type='bibr' target='#b3'>(Ali et al. (2016)</ns0:ref>) presented a novel framework for pertinent user-pairing/grouping approaches to assign the same resource block to multiple users.</ns0:p><ns0:p>In reference to the user pairing schemes <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) used the Hungarian algorithm with a modified cost function to investigate optimum allocation for three distinct cases in the uplink NOMA system. Furthermore, several matching game-based <ns0:ref type='bibr' target='#b25'>(Liang et al. (2017)</ns0:ref>) user-pairing/grouping approaches are discussed in <ns0:ref type='bibr' target='#b47'>(Xu et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Di et al. (2016)</ns0:ref>), wherein the allocation of users and two sets of players are modeled as a game theory problem. 
Numerous recent studies <ns0:ref type='bibr' target='#b49'>(Zhai et al. (2019);</ns0:ref><ns0:ref type='bibr' target='#b56'>Zhu et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b35'>Nguyen and Le (2019)</ns0:ref>) have also investigated different user-pairing/grouping schemes for NOMA systems. A novel algorithm named Ford Fulkerson <ns0:ref type='bibr' target='#b49'>(Zhai et al. (2019)</ns0:ref>) has been introduced for D2D cellular communication to address the user-pairing issue in NOMA systems. In addition to that, optimal user paring is achieved in <ns0:ref type='bibr' target='#b56'>(Zhu et al. (2018)</ns0:ref>) by taking two users with appropriate analytical conditions into consideration. A new framework <ns0:ref type='bibr' target='#b42'>(Song et al. (2014)</ns0:ref>) is also presented for optimum cooperative communication networks. Besides that, a lookup table <ns0:ref type='bibr' target='#b4'>(Azam et al. (2019)</ns0:ref>) is introduced by performing comprehensive calculations to highlight the significance of power allocation and uplink user pairing in obtaining high sum-rate capacity while fulfilling the demands of user data rates. For the uplink case, a cumulative distributive function (CDF)-based resource allocation scheme <ns0:ref type='bibr' target='#b54'>(Zhanyang et al. (2018)</ns0:ref>) is presented where for each time slot, the selection of two users is dependent on the highest value of the CDF. Moreover, a few dynamic power allocation and power back-off schemes are also discussed in few studies <ns0:ref type='bibr' target='#b53'>(Zhang et al. (2016b)</ns0:ref>; <ns0:ref type='bibr' target='#b48'>Yang et al. (2016)</ns0:ref>) for scrutinizing the performance of the system to obtain high sum rates and meet the service quality requirements.</ns0:p><ns0:p>In the context of overlapping, a generic user grouping approach <ns0:ref type='bibr' target='#b10'>(Chen et al. (2020a)</ns0:ref>) is presented for NOMA, which involves the grouping of many users with a limitation on maximum power. The authors also formulated a problem for generalized user grouping and power control to achieve an optimized user grouping scheme based on the machine learning approach. Furthermore, another study <ns0:ref type='bibr' target='#b11'>(Chen et al. (2020b)</ns0:ref>) proposed a framework in which an overlapping coalition formation (OCF) game is used for overlapping user grouping and an OCF-based algorithm is also introduced that facilitated the selforganization of each user in an appropriate overlapping coalition model. Besides that, a joint problem is examined in <ns0:ref type='bibr' target='#b20'>(Guo et al. (2019)</ns0:ref>) for user grouping, association, and power allocation in consideration of QoS requirements for enhancing the uplink network capacity. Zhang et.al <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>) also discussed a joint combinatorial problem for obtaining a sub-optimal and universal solution for userpairing/grouping to boost the overall system performance. Additionally, the authors in <ns0:ref type='bibr' target='#b45'>(Wang et al. (2018)</ns0:ref>) considered a user association problem by using an orthogonal approach for grouping users and employing a game-theoretic scheme for the allocation of a resource block to multi-users in a network.</ns0:p></ns0:div> <ns0:div><ns0:head>2/20</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>It has been observed that there are certain limitations associated with the game-theoretic schemes that are typically employed in user association techniques. However, the evolutionary algorithms (EAs) are universal optimizers that exhibit exceptional performance irrespective of the optimization problems being studied. The problem formulation is done as a sum rate utility function for the network and a parameter is presented that depicts the intricacy for power control problems. Therefore, the parameters for power control remain constant for all the systems. Moreover, NOMA-based Mobile edge computing (MEC) system <ns0:ref type='bibr' target='#b55'>(Zheng et al. (2020)</ns0:ref>) has been investigated to improve the energy efficiency during task offloading process. Further, a matching coalition scheme has been used to address the issue of power control and resource allocation. In addition, a matching theory <ns0:ref type='bibr' target='#b36'>(Panda (2020)</ns0:ref>) approach is proposed to enhance the operational system's user patterns and resource management.</ns0:p><ns0:p>Meta-heuristics are high-level processes that combine basic heuristics and procedures in order to provide excellent approximation solutions to computationally complex combinatorial optimization problems in telecommunications <ns0:ref type='bibr' target='#b29'>(Martins and Ribeiro (2006)</ns0:ref>) . Furthermore, the key ideas connected with various meta-heuristics and provide templates for simple implementations. In addition, several effective meta-heuristic approaches to optimization problems have been investigated in telecommunications.</ns0:p><ns0:p>Several meta-heuristic algorithms <ns0:ref type='bibr' target='#b41'>(Sharma and Gupta (2020)</ns0:ref>) have been proposed to address localization problems in sensor networks. Some of the meta-heuristic algorithms used to solve the localization problems include the bat algorithm, firework algorithm and cuckoo search algorithm. For wireless sensors networks <ns0:ref type='bibr' target='#b44'>(Wang et al. (2020)</ns0:ref>), routing algorithm has been developed based on elite hybrid meta-heuristic optimization algorithm.</ns0:p><ns0:p>On the other hand, Swarm intelligence (SI) algorithms, in addition to game theory and convex optimization, has recently emerged as a promising optimization method for wireless-communication.</ns0:p><ns0:p>The use of SI algorithms can resolve arising issues in wireless networks such as Power control problem, spectrum allocation and network security problems <ns0:ref type='bibr' target='#b38'>(Pham et al. (2020b)</ns0:ref>). Furthermore, two SI algorithms, named Grey Wolf Optimizer (GWO) and particle swarm optimizer (PSO) are also used in literature for solving the joint problem regarding user associations and power control in NOMA downlink systems to attain maximized sum-rate <ns0:ref type='bibr' target='#b19'>(Goudos et al. (2020)</ns0:ref>). Additionally, an efficient meta-heuristic approach known as multi-trial vector-based differential evolution (MTDE) <ns0:ref type='bibr' target='#b34'>(Nadimi-Shahraki et al. 
(2020)</ns0:ref>) has been implemented for solving different complex engineering problems by using multi trial vector technique (MTV) which integrates several search algorithms in the form of trial vector producers (TVPs) approach.</ns0:p><ns0:p>Recently, an updated version of GWO i.e. Improved-Grey Wolf Optimizer (I-GWO) (Nadimi-Shahraki et al. ( <ns0:ref type='formula'>2021</ns0:ref>)) has been investigated for handling global optimization and engineering design challenges.</ns0:p><ns0:p>This modification is intended to address the shortage of population variety, the mismatch between exploitation and exploration, and the GWO algorithm's premature convergence. The I-GWO algorithm derives from a novel mobility approach known as dimension learning-based hunting (DLH) search strategy which was derived from the natural hunting behaviour of wolves. DLH takes a unique method to creating a neighbourhood for each wolf in which nearby information may be exchanged among wolves. This dimension learning when employed in the DLH search technique improves the imbalance between local and global search and preserves variation. A parallel variant of the Cuckoo Search method is the Island-based Cuckoo Search (IBCS) <ns0:ref type='bibr' target='#b2'>(Alawad and Abed-alguni (2021)</ns0:ref>) using extremely disruptive polynomial mutation (iCSPM). The Discrete iCSPM with opposition-based learning approach (DiCSPM) is a version of iCSPM has been proposed to schedule processes in cloud computing systems focusing on data communication expenses and computations. Moreover, for scheduling dependent tasks to Virtual Machines (VMs), this work offers a discrete variant of the Distributed Grey Wolf Optimizer (DGWO) <ns0:ref type='bibr' target='#b0'>(Abed-alguni and Alawad (2021)</ns0:ref>). In DGWO, the scheduling process is considered as a problem of minimization for data communication expenses and computation.</ns0:p><ns0:p>In this paper, a joint combinatorial problem of user pairing/grouping, power control, and decoding order are considered for every uplink NOMA user within the network. To solve this problem, we propose a recently introduced meta-heuristic algorithm known as Whale Optimization Algorithm (WOA) <ns0:ref type='bibr' target='#b30'>(Mirjalili and Lewis (2016)</ns0:ref>) that is inspired by the hunting approach of the humpback whale. Furthermore, a Grey Wolf Optimizer (GWO) <ns0:ref type='bibr' target='#b32'>(Mirjalili et al. (2014b)</ns0:ref>) and Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b24'>(Kennedy and Eberhart (1995)</ns0:ref>) algorithms are also employed in this research study. The results obtained through the algorithms proposed in <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>), WOA, GWO and the popular PSO are exclusively compared in this study. The acquired results indicate that the WOA outperformed the existing algorithm <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>), GWO and PSO in-terms of spectral-efficiency with lower computational <ns0:ref type='table' target='#tab_3'>-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:ref> Manuscript to be reviewed Computer Science complexity.</ns0:p><ns0:formula xml:id='formula_0'>3/20 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:p>The rest of the paper is structured as follows. A 'System Model and Problem Formulation' describes the mathematical representation and research problem of NOMA uplink system. 
The solution is provided in the 'Solution of Proposed Problem' section where an efficient decoding order, power control scheme and user grouping approach are employed for NOMA uplink System. A concise analysis on the simulation is provided in 'Simulation Results' section. The 'Conclusion' section presents the summary of this research work.</ns0:p></ns0:div> <ns0:div><ns0:head>SYSTEM MODEL AND PROBLEM FORMULATION System Model</ns0:head><ns0:p>As illustrated in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> Hence, the received signal z n at the BS can be represented as:</ns0:p><ns0:formula xml:id='formula_1'>z n = M &#8721; m=1 &#965; n,m g m &#945; m P s m + &#969; n (1)</ns0:formula><ns0:p>where &#965; n,m &#8712; {0, 1} is the user n indicator assigned to the n &#8722;th group. The transmission path between user m and BS is represented by g m which is Guassian distributed. The power control coefficients is denoted by &#945; m (0 &#8804; &#945; m &#8804; 1). For each user m, the transmission power and the signal is denoted by P and s m , where E(|s m | 2 = 1). The additive white Gaussian noise (AWGN) power is denoted by &#969; n with an average power &#963; 2 . Therefore, the maximum spectral efficiency of user M and the received signal to interference plus noise ratio (SINR) on n &#8722; th PRB can be expressed as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>S m = log 2 (1 + &#966; m )<ns0:label>(2</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_3'>&#966; m = |g m | 2 &#945; m P M &#8721; j&#824; =m |g j | 2 &#945; j P+ &#963; 2 (3)</ns0:formula><ns0:p>The SIC operation is carried out at the BS for each PRB/group to decode the users signal. The decoding order of a user n is represented by &#948; n,m in a cell, where &#948; n,m = a &gt; 0 assumes that any user m in a group is the i &#8722; th one in the n &#8722; th PRB is to be decoded. Thus, the maximum spectral efficiency of user n can be represented as:</ns0:p><ns0:formula xml:id='formula_4'>S m = log 2 &#63723; &#63724; &#63724; &#63724; &#63724; &#63724; &#63724; &#63724; &#63725; 1 + |g m | 2 &#945; m &#947; M &#8721; j&#824; =m &#948; n, j &gt;&#948; n,m &gt;0 |g j | 2 &#945; j &#947; + 1 &#63734; &#63735; &#63735; &#63735; &#63735; &#63735; &#63735; &#63735; &#63736; (4)</ns0:formula><ns0:p>where &#948; n, j &gt; &#948; n,m represents the decoding order of users in a PRB/group. If users m and j are in the same group, then it implies that user m is decoded first. The transmission power to noise ratio is represented by &#947;, where &#947; = P/&#963; 2 . Assuming that, the channel-state-information (CSI) is known by BS of each user within coverage area.</ns0:p><ns0:p>To attain effective user-pairing/grouping and power control for NOMA uplink system, each user M in a cell transmit their power control coefficient &#945; m along with user indicator &#965; n,m . 
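To make the rate expressions in equations (2)-(4) concrete, the short sketch below evaluates the SIC rate of each user in one PRB and the resulting group sum rate. It is an illustrative sketch only: the channel gains, power-control coefficients, and the two-user group are hypothetical placeholders chosen to exercise the functions, not values taken from the paper (only the 30 dB transmit-power-to-noise ratio follows Table 1).

```python
import numpy as np

def per_user_rates(g, alpha, gamma):
    """SIC rates (equation (4)) for one NOMA group, users listed in decoding order
    (first entry decoded first). g: complex channel gains, alpha: power-control
    coefficients in [0, 1], gamma: transmit-power-to-noise ratio P / sigma^2."""
    p = np.abs(g) ** 2 * alpha * gamma            # received SNR terms |g_m|^2 * alpha_m * gamma
    rates = []
    for m in range(len(p)):
        interference = p[m + 1:].sum()            # only users decoded later remain as interference
        rates.append(np.log2(1.0 + p[m] / (interference + 1.0)))
    return np.array(rates)

def group_spectral_efficiency(g, alpha, gamma):
    """Group sum rate of equation (6), which does not depend on the decoding order."""
    return np.log2(1.0 + (np.abs(g) ** 2 * alpha * gamma).sum())

# Hypothetical two-user group, used only to exercise the functions.
rng = np.random.default_rng(0)
g = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # placeholder Rayleigh-like gains
alpha = np.array([1.0, 0.6])                                             # placeholder coefficients
gamma = 10 ** (30 / 10)                                                  # 30 dB, as in Table 1

rates = per_user_rates(g, alpha, gamma)
print(rates, rates.sum(), group_spectral_efficiency(g, alpha, gamma))    # the sum of (4) equals (6)
```

Because the per-user SIC rates telescope, their sum equals the group rate of equation (6) regardless of the decoding order, which is the observation the following expression formalizes.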
Hence, the maximum spectral-efficiency in the n &#8722; th PRB/group can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_5'>S t (n) = &#8721; &#965; n,m =1 S m</ns0:formula><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_6'>S t (n) = log 2 1 + &#8721; &#965; n,m =1 |g m | 2 &#945; m &#947; (6)</ns0:formula><ns0:p>The equation ( <ns0:ref type='formula'>6</ns0:ref>) clearly shows that the spectral efficiency in each group has not been affected by the order of decoding but has an impact on each user.</ns0:p></ns0:div> <ns0:div><ns0:head>Problem Formulation</ns0:head><ns0:p>In this paper, we propose an efficient method for power-control, decoding order and user-pairing/ grouping to increase the spectral-efficiency under their required minimum rate constraint. Therefore, a joint combinatorial problem of power control, decoding order and user pairing/grouping is formulated to maximize the spectral-efficiency. The minimum spectral-requirement of each user in the network is s m . Therefore, the spectral efficiency maximization problem <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>; Zhang et al.</ns0:p><ns0:p>(2019)) can be formulated as: </ns0:p><ns0:formula xml:id='formula_7'>maximize {&#965; n,m }, {&#948; n,m } &#8712; &#960;, {&#945; m } S t = N &#8721; n=1 S t (n) (7a) (7b) subject to C 1 : 0 &#8804; &#945; m &#8804; 1, &#8704;m,<ns0:label>(7c)</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>C 2 : S m &#8805; s m , &#8704;m,<ns0:label>(7d)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>C 3 : &#965; n,m &#8712; {0, 1}, &#8704;m, &#8704;n,<ns0:label>(7e)</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>C 4 : N &#8721; n=1 &#965; n,m = 1, &#8704;m<ns0:label>(7f</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>SOLUTION OF PROPOSED PROBLEM</ns0:head><ns0:p>To achieve the global optimal solution for Problem (7a), the optimization variables &#965; n,m , &#948; n,m , and &#945; m are strongly correlated, which makes the problem complex. In connection of the fact that user-pairing variables &#948; n,m are combinatorial integer programming variables. Hence, first solve the combinatorial problem of power control and decoding order instead and compute the optimum user-pairing/grouping solution. In case of any fixed scheme of user-grouping, the value of &#965; n,m are independent among all distinct group regarding both decoding order and power control.</ns0:p><ns0:formula xml:id='formula_11'>maximize {&#948; n,m } &#8712; &#960;, {&#945; m } S t (n) (8a) subject to S m &#8805; s m , m &#8712; M n , (8b) 0 &#8804; &#945; m &#8804; 1, m &#8712; M n (8c)</ns0:formula><ns0:p>where M n indicates set of all possible combination in the n &#8722; th PRB.</ns0:p></ns0:div> <ns0:div><ns0:head>Optimal Decoding for Optimal User-Pairing/Grouping</ns0:head><ns0:p>In order to apply SIC operation, all users signals/information are decoded by the receiver in the descending order based on channel condition. In uplink NOMA system <ns0:ref type='bibr' target='#b3'>(Ali et al. (2016)</ns0:ref>), the users with better channel condition is decode first at the BS while the user with worse channel condition is decode last. As a result the user with better channel condition experiences interference from all the users in the network, while the users with poor channel condition experiences interference free transmission.</ns0:p><ns0:p>To attain an efficient decoding <ns0:ref type='bibr' target='#b51'>(Zhang et al. 
(2019)</ns0:ref>) for NOMA uplink users, the decoding order for M users in a cell concern to same group/PRB, based upon the value of J n , where different decoding order of each user in a network depend on power control <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>) scheme regarding different feasible region can be represented as:</ns0:p><ns0:formula xml:id='formula_12'>J m = |g m | 2 (1 + 1 &#936; m )<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_13'>&#936; m = 2 s m &#8722; 1 (10)</ns0:formula><ns0:p>Based on (9), the user with higher value of J m in a cell is decoded first . Also applies that the decoding-order does not affect the spectral efficiency of each PRB/group.</ns0:p></ns0:div> <ns0:div><ns0:head>Power Control</ns0:head><ns0:p>The nature of the problem in equation ( <ns0:ref type='formula'>8a</ns0:ref>) is a mixed integer non-linear programming (MINLP). Hence, we have achieved the optimal solution for decoding order &#948; n,m . Therefore, it is required to find all the possible group of combination for user pairing/grouping.</ns0:p><ns0:p>For this purpose, k users in a single cell C are considered. Without loss of generality, it is needful to reduce the complexity and simplify the mathematical procedure regarding optimal decoding order &#948; n,m . The users are listed in a C based on the decreasing order J m , for example 1, 2, 3, . . . , K. Therefore, equation (8a) can be represented as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_14'>maximize {&#945; k } K &#8721; k=1 |g k | 2 &#945; k &#947; (11a) subject to |g k | 2 &#945; k &#947; &#8805; &#936; k K &#8721; j=k+1 |g j | 2 &#945; j &#947; + 1 , &#8704;k,<ns0:label>(11b)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>0 &#8804; &#945; k &#8804; 1, &#8704;k,<ns0:label>(11c</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where {&#945; k } represents power control-variables. Equations ( <ns0:ref type='formula' target='#formula_14'>11a</ns0:ref>) and (11b) show linearity and translated to SNR formulations respectively.</ns0:p><ns0:p>As shown in equation ( <ns0:ref type='formula' target='#formula_14'>11a</ns0:ref>), &#945; k is increasing. Therefore, the optimal solution for power control will always be upper bound. To determine the lower bound of power control <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>), the following equation can be solved as:</ns0:p><ns0:formula xml:id='formula_16'>&#945; 0 k = &#936; k &#947; &#8242; k |g k | 2 &#947; , 1 &#8804; k &#8804; K (12)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_17'>&#947; &#8242; k = K &#8719; u=k+1 (&#936; u + 1)<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>Which signifies that the spectral efficiency requirements is equal to the sum of spectral efficiencies of all the users. If &#945; 0 k &#8805; 1, exceeds the limit of upper bound and hence, no feasible solution for equation <ns0:ref type='formula' target='#formula_14'>11a</ns0:ref>) has the feasible solution due to bound of &#945; k variables. Therefore, for all users M in a cell, the optimal solution (Zhang et al. ( <ns0:ref type='formula'>2019</ns0:ref>)) of the &#945; k variables can be illustrated as</ns0:p><ns0:formula xml:id='formula_18'>(11a). 
If 0 &#8804; &#945; 0 k &#8804; 1, equation (</ns0:formula><ns0:formula xml:id='formula_19'>&#945; * k = min{1, b k }<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_20'>b k = min{ |h u | 2 &#947; &#936; u &#8722; k&#8722;1 &#8721; q=u+1 |h q | 2 &#947; &#8722; K &#8721; j=k+1 |h j | 2 &#945; 0 j &#947; &#8722; 1(u = 1, 2, 3, . . . , k &#8722; 1)}<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>In reference to equation ( <ns0:ref type='formula' target='#formula_19'>14</ns0:ref>) and equation ( <ns0:ref type='formula' target='#formula_20'>15</ns0:ref>), the optimal power control variables &#945; * k mentioned in problem (11a) is achieved. Specifically, if &#945; * k = b k , for other users, the optimal power control variables are &#945; * j = &#945; 0 j for j &gt; k.</ns0:p></ns0:div> <ns0:div><ns0:head>User Grouping</ns0:head><ns0:p>An efficient and low computational time algorithm for user-pairing/grouping is one of the key concern for an effective NOMA uplink system. In this regard, three different meta-heuristic algorithms are proposed to solve the issue of complexity. The WOA is investigated for an efficient optimal and sub-optimal solution for user-pairing/grouping problem as a result to enhance the system performance. Further, the user pairing/grouping problem that exploits the channel-gain difference among different users in a network and the objective is to raise system's spectral-efficiency. To determine the optimum user-pairing/grouping, a specific approach of solving user pairing/grouping problem is by using the search approach. For fixed user-pairing/grouping scheme, the optimal solution is obtained <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>). Then, list all the users in the decreasing order of J m accordingly. The proposed algorithm for user pairing/grouping problem is illustrated in Algorithm 1. Initially, define the feasible solution of user grouping for exhaustive and swarm based algorithm . An exhaustive search explores each data points within the search region and therefore provides the best available match. Furthermore, a huge proportion of computation is needed.</ns0:p><ns0:p>Particularly a discrete type problem where no such solution exists to find the effective feasible solution.</ns0:p><ns0:p>There may be a need to verify each and every possibility sequentially for the purpose of determining the best feasible solution. The optimal solution using exhaustive search algorithm <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>) is getting obdurate because the number of comparison increases rapidly. Hence, the system complexity of WOA for user grouping scheme is O(MN), where as O(N M ) represent the complexity of the exhaustive search algorithm. Therefore, a WOA approach is employed to reduce the complexity and provide efficient results. In addition, GWO and PSO algorithms are also proposed for the same problem.</ns0:p></ns0:div> <ns0:div><ns0:head>7/20</ns0:head><ns0:p>PeerJ Comput. Sci. 
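As an illustration of how a candidate grouping might be scored inside such a search, the sketch below orders each group by J_m (equation (9)), applies the lower-bound power-control coefficients of equations (12)-(13) as a feasibility check and rate floor, and sums the group rates of equation (6). This is a simplified, hypothetical fitness function written for this note: the upper-bound adjustment of equations (14)-(15) and the exact encoding used in Algorithm 1 are not reproduced here, and all numerical values other than M = 6, N = 3, s_m = 1.1 bits/s/Hz and the 30 dB value of gamma (Table 1) are placeholders.

```python
import numpy as np

def j_metric(g, s):
    """Decoding-order metric of equation (9): J_m = |g_m|^2 (1 + 1/Psi_m), Psi_m = 2^s_m - 1."""
    psi = 2.0 ** s - 1.0
    return np.abs(g) ** 2 * (1.0 + 1.0 / psi)

def lower_bound_alpha(g, s, gamma):
    """Equations (12)-(13) for one group whose users are already sorted by decreasing J_m.
    Returns the lower-bound coefficients alpha_k^0, or None if some alpha_k^0 > 1 (infeasible)."""
    psi = 2.0 ** s - 1.0
    K = len(g)
    alpha0 = np.empty(K)
    for k in range(K):
        gamma_prime = np.prod(psi[k + 1:] + 1.0)           # gamma'_k = prod_{u>k} (Psi_u + 1)
        alpha0[k] = psi[k] * gamma_prime / (np.abs(g[k]) ** 2 * gamma)
    return alpha0 if np.all(alpha0 <= 1.0) else None

def grouping_fitness(assignment, g, s, gamma, n_groups):
    """Total spectral efficiency (equation (6)) of a user-to-group assignment,
    heavily penalising assignments whose groups cannot meet the rate targets."""
    total = 0.0
    for n in range(n_groups):
        idx = np.where(assignment == n)[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(-j_metric(g[idx], s[idx]))]  # decreasing J_m inside the group
        alpha0 = lower_bound_alpha(g[order], s[order], gamma)
        if alpha0 is None:
            return -np.inf                                   # infeasible grouping
        total += np.log2(1.0 + (np.abs(g[order]) ** 2 * alpha0 * gamma).sum())
    return total

# Hypothetical instance with M = 6 users and N = 3 groups (channel values are placeholders).
rng = np.random.default_rng(1)
g = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(2)
s = np.full(6, 1.1)                        # minimum rate 1.1 bits/s/Hz, as in Table 1
gamma = 10 ** (30 / 10)
assignment = rng.integers(0, 3, size=6)    # one candidate grouping proposed by a search agent
print(grouping_fitness(assignment, g, s, gamma, 3))
```

A search agent in WOA, GWO or PSO would propose assignments of this form and keep the best-scoring feasible grouping found during the iterations.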
reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Whale Optimization Algorithm (WOA)</ns0:head><ns0:p>To enhance the spectral-throughput and reduce the system complexity, an innovative existence metaheuristic optimization technique named whale optimization algorithm (WOA) <ns0:ref type='bibr' target='#b30'>(Mirjalili and Lewis (2016)</ns0:ref>)</ns0:p><ns0:p>is proposed in this paper. The algorithm WOA is resembles to the behaviour of the humpback whales, which is based on the bubble-net searching approach. Three distinct approaches are used to model the</ns0:p></ns0:div> <ns0:div><ns0:head>WOA is described as</ns0:head></ns0:div> <ns0:div><ns0:head>Encircling Prey</ns0:head><ns0:p>In this approach, the humpback-whales can locate the prey-location of the prey and en-circle that region.</ns0:p><ns0:p>Considering that, the location of the optimal design in the search region is not known in the beginning.</ns0:p><ns0:p>Hence, the algorithm WOA provides the best solution that is nearer to the optimal value. First determine the best solution regarding location and then change the position according to the current condition of the other search agents concerning to determine the best solution. Such an approach is described mathematically and can be expressed as:</ns0:p><ns0:formula xml:id='formula_21'>&#8722; &#8594; E = |A. &#8722; &#8594; X * (t) &#8722; X(t)| (16) &#8722; &#8594; Y (t) = &#8722; &#8594; X (t + 1) (17) &#8722; &#8594; Y (t) = &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; B . &#8722; &#8594; E (18)</ns0:formula><ns0:p>where, &#8722; &#8594; B and &#8722; &#8594; A represents the coefficients-vectors, t defines the initial iteration and X * and &#8722; &#8594; X both describes the position-vector where X * includes the best solution so far acquired. | | and . defines the absolute and multiplication. Noted that the position vector X * is updated for each iteration until to find the best solution. The coefficients vector vectors &#8722; &#8594; B and &#8722; &#8594; A can be determined as:</ns0:p><ns0:formula xml:id='formula_22'>&#8722; &#8594; B = 2 &#8722; &#8594; b . &#8722; &#8594; r &#8722; &#8722; &#8594; b (19) &#8722; &#8594; A = 2. &#8722; &#8594; r (20)</ns0:formula><ns0:p>where &#8722; &#8594; r indicates random vector 0 &#8804; r &#8804; 1 and &#8722; &#8594; b represent a vector with a value between 2 and 0, which is decreasing linearly during the iteration.</ns0:p></ns0:div> <ns0:div><ns0:head>Spiral bubble-net feeding maneuver</ns0:head><ns0:p>Two techniques are proposed to predict accurately the bubble-net activity of humpback-whales.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.'>Shrinking en-circling</ns0:head><ns0:p>This type of techniques is achieved by decreasing the value of &#8722; &#8594; b using equation( <ns0:ref type='formula'>19</ns0:ref>) . It is to be noted that the variation range of &#8722; &#8594; B is also reduced by the value</ns0:p><ns0:formula xml:id='formula_23'>&#8722; &#8594; b . 
Therefore, &#8722; &#8594; B is a random value from [&#8722;b, b],</ns0:formula><ns0:p>where the value of b is decreasing from 2 to 0 during the iterations.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Spiral updating position</ns0:head><ns0:p>In spiral method, a relationship between the location of prey and whale to impersonate the helix-shaped operations is represented in the form of mathematical equation of humpback-whales in the following manner: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_24'>&#8722; &#8594; Y (t) = &#8722; &#8594; E &#8242; .e pq . cos (2&#960;q) + &#8722; &#8594; X *<ns0:label>(</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where</ns0:p><ns0:formula xml:id='formula_25'>&#8722; &#8594; E &#8242; = | &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; X (t)|</ns0:formula><ns0:p>, which represents the distance between prey and the i &#8722; th whale. l denotes the random number(&#8722;1 &#8804; l &#8804; 1). b represents logarithmic spiral, which is a constant number and</ns0:p><ns0:p>. indicates the multiplication operation. It's worth noting that humpback-whales swim in a shrinking-circle around their prey while still following a spiral-shaped direction. To predict this concurrent action, an equation is derived to represent the model can be expressed as:</ns0:p><ns0:formula xml:id='formula_26'>&#8722; &#8594; Y (t) = &#8722; &#8594; X * (t) &#8722; &#8722; &#8594; E . &#8722; &#8594; B , if d &lt; 0.5 &#8722; &#8594; E &#8242; .e pq . cos (2&#960;q) + &#8722; &#8594; X * , if d &#8805; 0.5 (22)</ns0:formula><ns0:p>where d represents a random number (0 &#8804; d &#8804; 1). Further, the searching behaviour of humpbackwhales for prey is randomly in the bubble-net approach. The following is the representation of mathematical model for bubble net approach.</ns0:p></ns0:div> <ns0:div><ns0:head>Prey Searching Technique</ns0:head><ns0:p>To locate prey, same strategy based on the modification of the &#8722; &#8594; B vector can be utilized (exploration). In </ns0:p><ns0:formula xml:id='formula_27'>&#8722; &#8594; E = | &#8722; &#8594; A . &#8722;&#8722;&#8594; X rand &#8722; &#8722; &#8594; X | (23) &#8722; &#8594; Y (t) = &#8722;&#8722;&#8594; X rand &#8722; &#8722; &#8594; B . &#8722; &#8594; E (24)</ns0:formula><ns0:p>where &#8722;&#8722;&#8594; X rand indicates a position-vector that is randomly selected from the existing space.</ns0:p><ns0:p>The algorithm WOA comprised of a selection of random samples. For every iteration, the search agents change their locations in relation to either a randomly selected search agent or the best solution acquired so far in this. For both cases exploitation phase and exploration the value of b is decreasing in the range from 2 to 0 accordingly. As the value of | &#8722; &#8594; B | &gt; 1, a randomly searching solution is selected, while the optimal solution is obtained when | &#8722; &#8594; B | &lt; 1 for updating the search-location of the agents. Based on the parameter d, the WOA is used as a circular or spiral behaviour. Ultimately, the WOA is ended by the successful termination condition is met. Theoretically, it provides exploration and exploitation capability.</ns0:p><ns0:p>Therefore, WOA can still be considered as a successful global optimizer. 
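For readers who prefer code to the vector notation above, one compact way to realize the per-agent update of equations (16)-(24) is sketched below. It is a generic WOA step in the spirit of Mirjalili and Lewis (2016), not the authors' implementation; the spiral constant, the toy objective, and the mapping from a continuous position to a discrete user grouping are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def woa_update(X, X_best, t, max_iter, spiral_p=1.0):
    """One WOA position update for every search agent (equations (16)-(24)).
    X: (n_agents, dim) current positions; X_best: best position found so far (X*);
    t: current iteration, used to shrink b linearly from 2 to 0;
    spiral_p: logarithmic-spiral constant of equation (21) (placeholder value)."""
    n_agents, dim = X.shape
    b = 2.0 - 2.0 * t / max_iter                  # b decreases linearly from 2 to 0
    X_new = np.empty_like(X)
    for i in range(n_agents):
        r = rng.random()
        B = 2.0 * b * r - b                       # equation (19)
        A = 2.0 * rng.random()                    # equation (20)
        if rng.random() < 0.5:                    # shrinking-encircling / exploration branch of (22)
            if abs(B) < 1.0:                      # exploit: move towards the best agent, eqs. (16)-(18)
                E = np.abs(A * X_best - X[i])
                X_new[i] = X_best - B * E
            else:                                 # explore: move relative to a random agent, eqs. (23)-(24)
                X_rand = X[rng.integers(n_agents)]
                E = np.abs(A * X_rand - X[i])
                X_new[i] = X_rand - B * E
        else:                                     # spiral update around the best agent, eq. (21)
            q = rng.uniform(-1.0, 1.0)
            E_prime = np.abs(X_best - X[i])
            X_new[i] = E_prime * np.exp(spiral_p * q) * np.cos(2.0 * np.pi * q) + X_best
    return X_new

# Tiny usage example on a toy quadratic objective (purely illustrative).
f = lambda x: -np.sum((x - 0.3) ** 2)             # objective to maximise
X = rng.uniform(0, 1, size=(10, 4))
best = X[np.argmax([f(x) for x in X])]
for t in range(50):
    X = woa_update(X, best, t, 50)
    cand = X[np.argmax([f(x) for x in X])]
    if f(cand) > f(best):
        best = cand
print(best)
```

In the user-grouping setting, the toy objective f would be replaced by a grouping fitness such as the one sketched earlier, with each continuous position decoded to discrete group indices before evaluation.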
The WOA is described in Algorithm 1.</ns0:p></ns0:div> <ns0:div><ns0:head>Grey Wolf Optimizer (GWO)</ns0:head><ns0:p>A popular meta-heuristic algorithm, which is influenced by the behaviour of grey-wolves <ns0:ref type='bibr' target='#b32'>(Mirjalili et al. (2014b)</ns0:ref>).This algorithm is based on the hunting approach of grey wolves and their governing-hierarchy.</ns0:p><ns0:p>Grey wolves represent predatory animals, which means these are heading up in the hierarchy. Grey wolves tended to stay in groups. The wolves in a group is varying between 5 to 12. The governing-hierarchy of GWO is shown in Figure <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>, where several kinds of grey wolves have been used particularly &#945;, &#946; , &#948; , and &#969;.</ns0:p><ns0:p>Both wolves (male and female) are the founders known as &#945;s. The &#945; is mainly in favour of producing decision making regarding hunting, sleeping and waking time, sleeping place etc. The group is governed by the &#945;s actions. Even so, some egalitarian behaviour has been observed, such as an alpha wolf following other wolves in the group. The whole group respects the &#945; by keeping their tails towards ground at gatherings. The &#945; wolf is also regarded as superior since the group must obey his/her orders. The group's &#945; wolves are the only ones that can mate. Usually, the &#945; is not always the biggest member of the group,</ns0:p></ns0:div> <ns0:div><ns0:head>9/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Data: Set the input control variables M, N, &#947; m , {g m }, {s n } Population initialization X 1 , X 2 , .........X n Result: X * (Best search agent for user-pairing/grouping).</ns0:p><ns0:p>List all the users with decreasing order of J m . while t &lt; (total iterations) for every search user Initialize b,B,A,q and d if1(d &lt; 0.5) if2(|B| &lt; 1) Existing search user position is updated using equation ( <ns0:ref type='formula'>16</ns0:ref>) else if2(|B| &#8805; 1) Randomly selected a search user (X rand ) Existing position of search user is updated using equation ( <ns0:ref type='formula'>23</ns0:ref>) end if2 else if1(d &#8805; 0.5) Exiting position of search user is updated using equation ( <ns0:ref type='formula'>21</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>guidance to the &#945;. The grey wolf with the lowest rating is &#969;. The &#969; serves as a scapegoat. &#969; wolves must still respond towards other dominant wolves in a group. They are the last wolves permitted to feed. While it might seem that the &#969; is not a vital member of the group, it has been found that when the &#969; is lost, the entire group experiences internal combat and problems. This would be attributed to the &#969; venting his anger and resentment on both wolves (s). This tends to please the whole group while still preserving the dominance system. In certain circumstances, the &#969; is also the group's babysitter. Whenever a wolf should not be an &#945;, &#946; , or &#969;, he or she is referred to as a subordinate also called &#948; . where &#948; wolves must yield to &#945;s and &#946; s, however they rule the &#969;. This group contains scouts, sentinels, elders, hunters, and caregivers.</ns0:p><ns0:p>Scouts are in charge of patrolling the area and alerting the group if there is any threat. 
Sentinels defend and ensure the group's safety. Furthermore, the mathematical model of GWO is described as:</ns0:p><ns0:p>Grey wolves primarily hunt depending on the locations of the &#945;, &#946; , and &#948; . At the starting they diverge from other wolves to hunt and then combine to hit prey. The mathematical model of divergence can be achieved by utilizing the value of</ns0:p><ns0:formula xml:id='formula_28'>&#8722; &#8594; B . For divergence, random values of &#8722; &#8594; B &lt; 1 or &#8722; &#8594; B &gt;</ns0:formula><ns0:p>1 is used by the search agent. This process enables the GW algorithm to search globally. In nature, the D vector can even be assumed as the impact of barriers to pursuing prey. In general, natural barriers arise in wolves' hunting paths and discourage them from approaching prey effectively and easily. This is precisely depend on the vector D. It will arbitrarily give the prey a weight to find it tougher and farther to catch for wolves, depending on location of the wolf, or likewise.</ns0:p><ns0:p>The suggested social hierarchy supports GWO in sustaining the best solutions achieved so far through iteration. By using hunting approach, it enables agents to search the likely location of prey. The GWO is described in Algorithm 2.</ns0:p><ns0:p>Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b24'>Kennedy and Eberhart (Kennedy and Eberhart (1995)</ns0:ref>) introduced PSO as an evolutionary computation method. It was influenced by the social behaviour of birds, which involves a large number of individuals (particles) moving through the search space to try to find a solution. Over the entire iterations, the particles map the best solution (best location) in their tracks. In essence, particles are guided by their own best positions, which is the best solution same as achieved by the swarm. This behaviour can modelled mathematically by using velocity vector (u), dimension (S), which represents the number of parameters and position vector (x). In the entire iterations, the position and velocity of the particles changing by the following equation: </ns0:p><ns0:formula xml:id='formula_29'>u t+1 i = vu t i + e 1 &#215; rand &#215; (pbest i &#8722; x t i ) + e 2 &#215; rand &#215; (gbest &#8722; x t i )<ns0:label>(37</ns0:label></ns0:formula><ns0:formula xml:id='formula_30'>x t+1 i = u t+1 i + x t i (<ns0:label>38</ns0:label></ns0:formula><ns0:formula xml:id='formula_31'>)</ns0:formula><ns0:p>where v(0.4 &#8804; v &#8804; 0.9) represents the inertial weight, which control stability of the PSO algorithm.</ns0:p><ns0:p>cognitive coefficient e 1 (0 &lt; e 1 &#8804; 2), which limits the impact of the individual memory for best solution.</ns0:p><ns0:p>Social factor e 2 (0 &lt; e 2 &#8804; 2), which limits the motion of particles to find best solution by the entire swarm, rand indicates a random number in the range between 0 and 1, attempt to provide additional randomized search capability to the PSO algorithm and two variables pbest and gbest, used to accumulate best solutions achieved by each particle and the entire swarm accordingly. The PSO is described in Algorithm 3.</ns0:p></ns0:div> <ns0:div><ns0:head>SIMULATION RESULTS</ns0:head><ns0:p>This section evaluates the performance of the proposed meta-heuristic algorithms, namely, WOA, GWO and PSO of the user groping, power control and decoding order for NOMA uplink systems. Both channel of the users and location are allocated randomly in the simulation. 
Therefore, the range between the user and BS are uniformly distributed and considered that the channel response is Gaussian distribution <ns0:ref type='bibr' target='#b51'>(Zhang et al. (2019)</ns0:ref>).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> indicates the comparison of convergence of WOA <ns0:ref type='bibr' target='#b30'>(Mirjalili and Lewis (2016)</ns0:ref>) , GWO <ns0:ref type='bibr' target='#b32'>(Mirjalili et al. (2014b)</ns0:ref>) and PSO <ns0:ref type='bibr' target='#b24'>(Kennedy and Eberhart (1995)</ns0:ref>) algorithms proposed for NOMA uplink system. We may conclude that WOA, GWO and PSO algorithms converge at a comparable rate, hence WOA converges after a greater number of iterations than GWO and PSO. The proposed WOA attains significant performance in-terms of spectral efficiency as compare to GWO and PSO algorithms. Also the proposed WOA <ns0:ref type='bibr' target='#b30'>(Mirjalili and Lewis (2016)</ns0:ref>) provides stability and attains the minimum rate requirement without such a noticeable drop in the results.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref> compares the spectral-efficiency of NOMA and OMA approaches with varying &#947;, respectively.</ns0:p><ns0:p>It has been proved that the spectral-efficiency of NOMA scheme is considerably higher than those of scheme. Moreover, the spectral-efficiency of the proposed sub-optimal approach is nearer to the optimal value. The proposed WOA algorithm attains near optimal performance with minimal computational complexity. In addition, as the number of users increases the computational cost of the exhaustive-search algorithm increases as compared to WOA.</ns0:p><ns0:p>For NOMA uplink systems, the power control approach in <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is provided as a benchmark scheme, where the spectral efficiency are near to the optimal value. Noted that the approach used in <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is valid only for two user-pairing. Hence, the proposed scheme performs admirably in-terms of having efficient user grouping for multiple users. varying &#947;. Moreover, the spectral-efficiency of optimal and sub-optimal solutions are nearer to each 435 other. The power control scheme for NOMA uplink system in <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) is used as a 436 benchmark. It has been observed that the spectral-efficiency of both GWO and PSO algorithms shows 437 better results than power control <ns0:ref type='bibr' target='#b39'>(Sedaghat and M&#252;ller (2018)</ns0:ref>) and OMA scheme. </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>NOMA systems have garnered a lot of interest in recent years for 5G cellular communication networks.</ns0:p><ns0:p>The efficient user grouping and power control scheme play an essential role to enhance the performance of communication network. In this paper, we have examined for the first time up to authors knowledge, a joint issue of user-grouping and power control for NOMA uplink systems. we have solved this problem by proposing WOA with low complexity. Further, for comparison GWO and PSO are adopted to solve the same problem. Simulations results show that the WOA proposed for this combine issue in uplink allows better performance than the conventional OMA in-terms of spectral-efficiency. 
Further, proposed WOA provides better result as compared to GWO, PSO and existing algorithm in literature with lower system complexity by considering same constraint regarding uplink NOMA systems.The acquired results also suggest that the combinatorial joint problem gets more difficult to solve as the number of users grows and needs additional network resources. In the future, the study might be expanded to include more performance parameters to the mentioned problem and implementation of multiple antennas combinations which leads to massive MIMO (Mulitiple-Input and Multiple-Output) scenario in order to further enhance the performance of the network.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, we consider an uplink NOMA transmission with a single-cell denoted by C. The number of users M served by a single base station (BS) placed at the centre of the cell. To obtain the signal/information requirements of several users, the number of physical resource block (PRB) denoted by N are assigned to multiple-users in a cell.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. NOMA Uplink Transmission.</ns0:figDesc><ns0:graphic coords='5,209.41,274.30,278.27,191.85' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022) Manuscript to be reviewed Computer Science where &#948; n,m represents the decoding order and &#960; indicates all possible combinations of users decoding orders in a network. C 1 indicates the upper bound of transmission power. C 2 guarantees the minimum rate of a user. C 3 and C 4 ensures the user indicator and m users assigned to PRB/group.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>reality, humpback-whales search at random based on their location. As a result, we select &#8722; 1 to compel the search-agent to step away from a target value. Comparison with exploitation, modify the location of every search-agent in the sample space, based on randomly selected process until to obtained a better solution. This operation and | &#8722; &#8594; B | &gt; 1 place an emphasis on the exploration phase and enable WOA to perform global-searching. This can be represented below:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Grey wolf hierarchy (Mirjalili et al. (2014b)).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 Figure 3 .</ns0:head><ns0:label>53</ns0:label><ns0:figDesc>Figure5and 6 evaluates the performance of GWO and PSO algorithms in-terms of spectral-efficiency. For uplink NOMA system, the spectral-efficiency of NOMA scheme outperform OMA scheme with</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Illustration of spectral efficiency of WOA with increasing &#947;.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>438Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Illustration of spectral efficiency of GWO with increasing &#947;.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .Figure 7 .</ns0:head><ns0:label>67</ns0:label><ns0:figDesc>Figure 6. Illustration of spectral efficiency of PSO with increasing &#947;.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>) Set the input control variables M, N, &#947; m , {g m }, {s n } Population initialization X 1 , X 2 , .........X n Initialization of B,b and D Result: X &#945; (Best search agent for user-pairing/grouping). Set the input control variables M, N, &#947; m , {g m }, {s n } Population initialization X 1 , X 2 , .........X n Result: pbest and gbest (Best search agent for user-pairing/grouping).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell cols='2'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>List all the users with decreasing order of J m .</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Determine fitness of every search agent</ns0:cell></ns0:row><ns0:row><ns0:cell>X &#945; ,X &#946; and X &#948; .</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>while t &lt; (total iterations)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>for every search user</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Update the existing location of search agent by using equation (36)</ns0:cell></ns0:row><ns0:row><ns0:cell>Update B, b and D</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Determine fitness of search agent</ns0:cell></ns0:row><ns0:row><ns0:cell>U pdateX &#945; , X &#946; and X &#948;</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>t=t+1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>end for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>end while</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>return X &#945;</ns0:cell><ns0:cell>Algorithm 2: GWO</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Data: List all the users with decreasing order of J m .</ns0:cell></ns0:row><ns0:row><ns0:cell>for each generation do</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>for each particle do</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Update the position and vector by using equation (37) and equation (38)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Estimate the fitness of the particle</ns0:cell></ns0:row><ns0:row><ns0:cell>Update both pbest and gbest</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>t=t+1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>end for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>end for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>return pbest, gbest</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Algorithm 3: PSO</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022)</ns0:cell><ns0:cell>12/20</ns0:cell></ns0:row></ns0:table><ns0:note>Data:</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>1995)) for WOA, GWO and PSO algorithms that participated in the simulation. Further, the Wilcoxon test and Friedman test (?) are performed for experiments and the statistical analysis of GWO and PSO is also provided in Table2. 
Based on the results of tests, the proposed WOA outperforms the other algorithms in comparison.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>presents the simulation parameter values attained from the literature (Sedaghat and M&#252;ller</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018); Zhang et al. (2019); Mirjalili and Lewis (2016); Mirjalili et al. (2014b); Kennedy and Eberhart</ns0:cell></ns0:row><ns0:row><ns0:cell>13/20</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:64162:2:0:NEW 4 Jan 2022) Manuscript to be reviewed Computer Science(</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Parameters for Proposed Uplink NOMA</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>s m</ns0:cell><ns0:cell>1.1 bits/s/Hz</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947;</ns0:cell><ns0:cell>30 dB</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistical analysis of GWO and PSO</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Wilcoxon</ns0:cell><ns0:cell>GWO</ns0:cell><ns0:cell>PSO</ns0:cell></ns0:row><ns0:row><ns0:cell>p &#8722; value</ns0:cell><ns0:cell>1.8E &#8722; 169</ns0:cell><ns0:cell>4.7E &#8722; 181</ns0:cell></ns0:row><ns0:row><ns0:cell>Friedman</ns0:cell><ns0:cell>GWO</ns0:cell><ns0:cell>PSO</ns0:cell></ns0:row><ns0:row><ns0:cell>p &#8722; value</ns0:cell><ns0:cell>4.5E &#8722; 161</ns0:cell><ns0:cell>1.2E &#8722; 164</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='2'>to 0. If any random values ranges between [&#8722;1, 1], then new location of a search-agent lies between exiting and prey location.Search for prey</ns0:note> </ns0:body> "
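The non-parametric comparison summarized in Table 2 (Wilcoxon and Friedman tests over the WOA, GWO, and PSO results) could be reproduced roughly as follows. This is only a sketch: the per-run spectral-efficiency samples are invented placeholders, since the paper does not publish them, and SciPy is used rather than any code released by the authors.

```python
# Sketch: non-parametric comparison of WOA against GWO and PSO, as in Table 2.
# The spectral-efficiency samples below are hypothetical placeholders; the paper
# does not report its per-run values.
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
runs = 30                                  # assumed number of independent runs
se_woa = rng.normal(10.0, 0.5, runs)       # spectral efficiency per run (bits/s/Hz)
se_gwo = rng.normal(9.4, 0.5, runs)
se_pso = rng.normal(9.1, 0.5, runs)

# Pairwise Wilcoxon signed-rank tests (WOA vs. each baseline).
for name, se in (("GWO", se_gwo), ("PSO", se_pso)):
    stat, p = wilcoxon(se_woa, se)
    print(f"Wilcoxon WOA vs. {name}: statistic={stat:.1f}, p-value={p:.3g}")

# Friedman test over all three algorithms on the same runs.
stat, p = friedmanchisquare(se_woa, se_gwo, se_pso)
print(f"Friedman test: chi-square={stat:.2f}, p-value={p:.3g}")
```

With the actual per-run results in place of the placeholders, very small p-values (as in Table 2) would support the claim that WOA differs significantly from the baselines.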
"Original Manuscript ID: CS-2021:07:64162:1:0:NEW Original Article Title: Joint User Grouping and Power Control Using Whale Optimization Algorithm for NOMA Uplink System. Honorable Editor and Reviewers: Please accept a heartfelt expression of thanks for the time and effort you have put in while reviewing the manuscript. Thanks are extended to the esteemed reviewers whose valuable feedback and insightful comments have improved the quality of our manuscript.  The authors have left no stone unturned in incorporating the suggestions put forth by the esteemed reviewers. Significant changes made to the original manuscript have been highlighted in the revised manuscript (Tracked.Changes.pdf), making it convenient for the esteemed reviewers to track the suggested changes and readily go through them.   Reviewer 3 Basic reporting The article appears much better. The content is clear and easy to understand. However, I recommend referring to some recent research papers that are considered as related to this research paper: - Alawad, N. A., & Abed-alguni, B. H. (2021). Discrete Island-Based Cuckoo Search with Highly Disruptive Polynomial Mutation and Opposition-Based Learning Strategy for Scheduling of Workflow Applications in Cloud Environments. Arabian Journal for Science and Engineering, 46(4), 3213-3233. - Panda, S. (2020). Joint user patterning and power control optimization of MIMO–NOMA systems. Wireless Personal Communications, 1-17. - Zheng, G., Xu, C., & Tang, L. (2020, May). Joint User Association and Resource Allocation for NOMA-Based MEC: A Matching-Coalition Approach. In 2020 IEEE Wireless Communications and Networking Conference (WCNC) (pp. 1-6). IEEE. - Abed-alguni, B. H., & Alawad, N. A. (2021). Distributed Grey Wolf Optimizer for scheduling of workflow applications in cloud environments. Applied Soft Computing, 102, 107113 Experimental design The methods and figures are described snd formulated well. Validity of the findings No comment. Author response: The honorable reviewer is thanked for his valuable feedback. I agree with the reviewer's suggestion that more recent and relevant references need to be added. I have therefore added 4 references [1,2,3,4] as suggested by the reviewer and cited in revised manuscript. Also, the 4 added references are given below in references. The modification has been highlighted in the revised manuscript (Tracked.Changes.pdf) for reviewer convenience. References: [1]. Alawad, N. A., & Abed-alguni, B. H. (2021). Discrete Island-Based Cuckoo Search with Highly Disruptive Polynomial Mutation and Opposition-Based Learning Strategy for Scheduling of Workflow Applications in Cloud Environments. Arabian Journal for Science and Engineering, 46(4), 3213-3233. [2]. Panda, S. (2020). Joint user patterning and power control optimization of MIMO–NOMA systems. Wireless Personal Communications, 1-17. [3]. Zheng, G., Xu, C., & Tang, L. (2020, May). Joint User Association and Resource Allocation for NOMA-Based MEC: A Matching-Coalition Approach. In 2020 IEEE Wireless Communications and Networking Conference (WCNC) (pp. 1-6). IEEE. [4]. Abed-alguni, B. H., & Alawad, N. A. (2021). Distributed Grey Wolf Optimizer for scheduling of workflow applications in cloud environments. Applied Soft Computing, 102, 107113. "
Here is a paper. Please give your review comments after reading it.
339
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>are popular larvae as feed ingredients that are widely used by animal lovers to feed reptiles, songbirds, and other poultry. These two larvae share a similar appearance, however; the nutritional ingredients are significantly different. Zophobas Morio is more nutritious and has a higher economic value compared to Tenebrio Molitor. Due to limited knowledge, many animal lovers find it difficult to distinguish between the two. This study aims to build a machine learning model that is able to distinguish between the two. The model is trained using images that are taken from a standard camera on a mobile phone. The training is carried on using a deep learning algorithm, by adopting an architecture through transfer learning, namely VGG-19 and Inception v3. The experimental results on the datasets show that the accuracy rates of the model are 94.219% and 96.875%, respectively. The results are quite promising for practical use and can be improved for future works.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Computer vision has been used in various fields of life, such as agriculture, animal husbandry, health, smart cities, and others <ns0:ref type='bibr' target='#b21'>(Pratondo et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Rizqyawan et al., 2020)</ns0:ref>. In agriculture and animal husbandry, the use of computer vision for classification and detection of various objects has been widely practiced <ns0:ref type='bibr' target='#b1'>(Abd Aziz et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b33'>Thai et al., 2021)</ns0:ref>. Detection and classification of similar objects are challenging tasks for researchers to provide the best possible accuracy.</ns0:p><ns0:p>Zophobas Morio and Tenebrio Molitor are two kinds of larva that share the same morphology.</ns0:p><ns0:p>However, Zophobas Morio is more nutritious compared to Tenebrio Molitor <ns0:ref type='bibr' target='#b22'>(Purnamasari et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Santoso et al., 2017)</ns0:ref>. These nutritional advantages from Zophobas Morio make it preferable for animal lovers. Zophobas Morio can be used as feed for various animals, especially for chirping birds. The comparison of the two larvae in term of their ingredients is presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> as previously studied in <ns0:ref type='bibr' target='#b5'>(Benzertiha et al., 2019)</ns0:ref>. It can be seen that Zophobas Morio has more nutrition ingredients, such as dry matter, protein, and ether extract; than Tenebrio Molitor has. Only chitin is the exception. Because of the similar morphology of the two larvae, common people with limited knowledge are often unable to distinguish between them. Several buyers may spend money as high as the price of PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Zophobas Morio; however, the purchased larva is actually Tenebrio Molitor. This study aims to build an application that is able to distinguish Zophobas Morio and Tenebrio</ns0:p><ns0:p>Molitor larvae based on images obtained from mobile phone cameras. The images are then analyzed for further classification based on the model which is built using a machine learning algorithm. 
Through the model, the difficulty of distinguishing the two larvae can be resolved.</ns0:p><ns0:p>The remainder of this paper will discuss how the model is developed and evaluated. In section 2, a literature review will be discussed for the methods that are related and used to build the classification model. In section 3, we will discuss the experiments carried out which include dataset preparation, experimental settings, and experimental results. In Section 4, discussions on the experimental results and the possibility of further research in the future will be elaborated. Section 5 will provide more related works in detail. Finally, conclusions will be presented in Section 6.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head><ns0:p>Traditionally, image classification can be performed by manually selecting features. Features are designed and extracted from training images. However, selecting features is a complicated task and often requires a high expertise. To date, the use of pixels as features is widely employed for image classification. Original classification algorithms, such as the Naive Bayesian, k-nearest neighbors (k-NN), and support vector machines (SVM), are commonly used before the era of deep learning, which is basically developed from the artificial neural networks <ns0:ref type='bibr' target='#b7'>(Bishop, 2006)</ns0:ref>. These traditional classification algorithms, nevertheless, are still implemented in baseline experiments with classification tasks.</ns0:p><ns0:p>This study employs transfer learning which is an advanced development of the artificial neural networks method. The theoretical background is presented successively from artificial neural networks, followed by deep learning, and finally, transfer learning.</ns0:p></ns0:div> <ns0:div><ns0:head>2/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Artificial Neural Networks</ns0:head><ns0:p>The artificial neural network is a mathematical model of problem-solving inspired by the functioning of human nerves <ns0:ref type='bibr' target='#b6'>(Bishop, 1995;</ns0:ref><ns0:ref type='bibr' target='#b13'>Haykin, 2010)</ns0:ref>. The simplest model of an artificial neural network is a single-layer perceptron, which is diagrammatically presented in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Deep Neural Networks</ns0:head><ns0:p>The artificial neural network continues to develop. Researchers proceed to model it as a multilayer perceptron <ns0:ref type='bibr' target='#b11'>(Goodfellow et al., 2016)</ns0:ref>. Similar to the single-layer perceptron, the multilayer perceptron receives input in the form of a set of feature values to then be weighted and the final result is entered into the activation function. The difference between single-layer perceptron and multilayer perceptron is that there is a hidden layer that automatically adds the number of nodes and creates more complex connections. Hence, multilayer perceptron can solve any problems that cannot be solved by using single-layer perceptron.</ns0:p><ns0:p>The use of multilayer perceptron continues to grow, especially in the field of computer vision.</ns0:p><ns0:p>Hardware support enables more complex image operations, such as convolution procedures. 
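To make the single-layer perceptron described above concrete, a minimal sketch is given below: inputs are weighted, summed with a bias, and passed through an activation function. The feature values, weights, and threshold activation are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a single-layer perceptron: weighted inputs plus a bias,
# passed through a threshold activation. All numbers are hypothetical.
import numpy as np

def step(z):
    return np.where(z >= 0.0, 1.0, 0.0)   # simple threshold activation

x = np.array([0.5, -1.2, 3.0])            # feature values (hypothetical)
w = np.array([0.8, 0.1, -0.4])            # learned weights (hypothetical)
b = 0.2                                    # bias term

y = step(np.dot(w, x) + b)                # perceptron output (0 or 1)
print(y)
```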
With the use of artificial neural networks that have many hidden layers and complex image operations, the era of deep neural networks that is more popularly known as deep learning begins.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works on Deep Learning</ns0:head><ns0:p>The use of deep learning in classifying a collection of images becomes more popular due to the explosion of data and the availability of the high-end computational platforms. The classification of larvae images takes the advantage of the deep learning platform. To the best of our knowledge, however, only a few studies have been proposed in this field.</ns0:p><ns0:p>The first related work is proposed in <ns0:ref type='bibr' target='#b9'>(Fuad et al., 2018)</ns0:ref> that utilized transfer learning technique, i.e. Google inception model <ns0:ref type='bibr' target='#b28'>(Szegedy et al., 2017)</ns0:ref>, to identify the larvae of Aedes Aegypti which are commonly found inside the clean liquid place. These larvae are important to classify because it helps reduce the spread of dengue fever, especially in tropical countries. There were enough pictures, i.e. more than five hundred, used in this paper due to the assorted canvas and lightning within the liquid place.</ns0:p><ns0:p>The paper mentioned that the dataset was produced by laboratory work, however, it remains unclear why the authors chose to have no data pre-processing required by the study. It is common to have data</ns0:p></ns0:div> <ns0:div><ns0:head>3/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>pre-processing as an essential step in deep learning especially dealing with real-world pictures, although most researchers skip the explanation in their reports. The authors claimed that the less the learning rate is the better the classification accuracy. It can be understood because there are only two classes are produced, i.e. larva or not larva. In detail, the accuracy in the paper achieved 99.98% and 99.90% for learning rate 0.1, 99.91% and 99.77% for learning rate 0.01; and 99.10% and 99.93% for learning rate 0.001. The cross-entropy error in the paper achieved 0.0021 and 0.0184 for learning rate 0.1, 0.0091 and 0.0121 for learning rate 0.01; and 0.0513 and 0.0330 for learning rate 0.001. Although the metrics of accuracy and cross-entropy errors are presented, the significance of the experiments could not be justified due to the goal of the paper is to provide informative accuracies and learning errors with three learning rates. It would be interesting to see the baseline of the accuracy performed by this paper with different kinds of the larva.</ns0:p><ns0:p>Aedes Aegypti is the most popular larva to be deeply classified from its images. Another work on this larva is proposed in <ns0:ref type='bibr' target='#b3'>(Azman and Sarlan, 2020</ns0:ref>) that examined whether particular water storage is the suitable place for the mosquitoes to lay their larva or not. Although it is not explicitly mentioned, it can be assumed that the paper provides two different types of larva from the mosquitoes that spread the lethal epidemic of dengue fever and three other types of larvae from different mosquitoes. 
These five types of the larva are sorted based on the prediction accuracy by utilizing a general convolution neural network algorithm, although there is no further explanation in detail on how to implement this algorithm with specific techniques. The result shows the unbalanced accuracies for each type of larvae that varies from 0.7% to 73%. It remains unclear why the gap between the two types of dengue fever mosquito larva and three other types is huge.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b2'>(Asmai et al., 2019</ns0:ref>) also analyzed Aedes Aegypti larvae. The testing was conducted from ten images of Aedes Aegypti larvae and ten images of non Aedes Aegypti larvae. Various methods of convolution neural networks were utilized, such as VGG16 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref>, VGG-19 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref>, ResNet-50 <ns0:ref type='bibr' target='#b17'>(Koonce, 2021)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b28'>(Szegedy et al., 2017)</ns0:ref>. The results showed that Resnet-50 outperforms other methods in terms of the implementation on mobile devices, although its performance in terms of accuracy and loss was lower than VGG-19 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref>. However, combining the mobile performance from two regular metrics, such as accuracy and loss, with other provided metrics, such as file size and training time, should be further investigated because it is not commonly used in deep learning.</ns0:p><ns0:p>A different kind of larva is analyzed in <ns0:ref type='bibr' target='#b26'>(Shang et al., 2020)</ns0:ref> that addressed the issue of a low number of pictures with high-quality tags. Another issue tackled by this paper was that the picture of Zebrafish larva usually contains the fuzzy classification for ten types of the larva, such as deceased, regular, and short bottom. Hence, the deep learning is non-trivial in classifying this larva. The result showed that there was an improvement of classification performance between the proposed two-layered classification technique and other deep learning methods, such as GoogLeNet <ns0:ref type='bibr' target='#b29'>(Szegedy et al., 2015)</ns0:ref>, VGG-19 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref> and AlexNet <ns0:ref type='bibr' target='#b18'>(Krizhevsky et al., 2012)</ns0:ref>, by reproducing the previous researches with the same dataset. The improvement compared to the baseline reached 22%. The obtained accuracy mean was 91%</ns0:p><ns0:p>for overall types and the obtained maximum accuracy was 100% for deceased and skin types. However, it remains unclear how to obtain these numbers since there is no exact numbers provided by the paper.</ns0:p><ns0:p>Moreover, it should have been more interesting to see how the technique operates, since running two layers of classification in a single workflow is obviously better than running one layer, regardless of the image characteristics.</ns0:p><ns0:p>The latest work from aquaculture researchers in <ns0:ref type='bibr' target='#b15'>(Kakehi et al., 2021)</ns0:ref> presented the use of deep learning in identifying oyster larvae. The work was important for the oyster farmers to speed up the process of larva identification during the oyster growing time. The result was claimed to have high performance with more than 80% for precision, around 90% or recall, and therefore around 86% for F-score. 
However, the overall process was not completely automatic, since human intervention was required during the process of identification. It is interesting to note that the difficulty was due to the characteristics of oyster larva that occasionally mixed with other objects that produce several layers on the images. This paper also provided three-dimension graphs of shell height, long side, and short side to help understand the dataset. These three measures were estimated by using the coordinate system of PyTorch <ns0:ref type='bibr' target='#b16'>(Ketkar, 2017)</ns0:ref>.</ns0:p><ns0:p>Another latest work was proposed in <ns0:ref type='bibr' target='#b19'>(Ong et al., 2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science picture shooting. The variation also includes various colors and textures. The final result of the proposed method performance was precision between 88.44% and 92.95%, recall between 88.23% and 94.10%, accuracy between 87.56% and 92.89%, and F-score between 88.08% and 93.02%. Another interesting result from this paper is that the image with green and white lighting performed the best while the images with red lighting performed the worst. However, it should be further researched whether the result from this larva remains the same as the results from other kinds of larvae. Another critical thinking on this paper is that the benefits of the classification result on house fly larva to the real world remain questionable, except for the sake of experimental laboratory. In addition, it would be more interesting to see the specific technique of deep learning used in the paper instead of explaining convolution neural networks in general.</ns0:p><ns0:p>Overall, the presented literature review shows that the use of deep learning in classifying larva images is eminent. However, none of the papers, to the best of our knowledge, handled the images of Zophobas Morio and Tenebrio Molitor larvae. These larvae are important for feeding broiler chicken in emerging countries; hence, the classification of the larvae can be an additional feature for other presented larva automatic detection systems. This is also due to the fact that the detection for two different larvae by naked eyes has lower accuracy as been identified by biologists <ns0:ref type='bibr' target='#b5'>(Benzertiha et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Transfer Learning</ns0:head><ns0:p>Several transfer learning algorithms have been proposed, including VGG-19, Resnet-50, and Inception-ResnetV2 <ns0:ref type='bibr' target='#b11'>(Goodfellow et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b27'>Simonyan and Zisserman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b14'>He et al., 2016)</ns0:ref>, to enhance deep learning with less amount but significant enough dataset. Instead of using deep learning from scratch for the two specific larvae, transfer learning is more suitable because the previous works have been proposed</ns0:p><ns0:p>for various larvae.</ns0:p><ns0:p>VGG-19 is a more advanced development of VGG-16 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref>. It has 19 layers, which is quite a contrast compared to Resnet-50 that has 50 layers, and InceptionResnetV2 which that 164 layers.</ns0:p><ns0:p>The architecture for VGG-19 can be seen in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. employed <ns0:ref type='bibr' target='#b10'>(G&#233;ron, 2019)</ns0:ref>. 
The results from these two algorithms are used as a reference for the baseline.</ns0:p><ns0:p>In other words, the result of the pre-trained models is compared to the traditional classifiers.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>Research preparation, especially how to get the dataset, and parameter settings on the model, as well as the experimental results of the model on the dataset are presented as follows.</ns0:p></ns0:div> <ns0:div><ns0:head>5/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>The datasets are obtained using a standard mobile phone camera commonly available on the market. In this experiment, we use a mobile phone with specifications: Memory 6 GB, Processor Exynos 9611</ns0:p><ns0:p>Octa-core, Camera: Quad camera 48 MP+12 MP ultrawide+5 MP macro+5 MP depth. The images were taken with the use of flash. These specifications are important for reproducing the research in the future.</ns0:p><ns0:p>Larva image samples were obtained from the animal market. The captured larvae are expected to be alive, although several larvae are generally mixed with skins and dead larvae. The total number of samples was 640 equally distributed for each Zophobas Morio and Tenebrio Molitor. Each sample was placed on a white paper sheet and subsequently captured with a distance of approximately 10 centimeters from the larvae as previously displayed in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Since the images are designed with homogeneous background, i.e., white paper, and the size of the larva is various, it is not necessary to perform an image augmentation on the dataset.</ns0:p><ns0:p>The dataset is divided into 10 folds during the experiment. Each fold is chosen as a testing set while the remaining folds are utilized as a training set. Moreover, the distribution of each class in the fold is considered identical for all folds. This split is known as stratified cross-validation (CV) <ns0:ref type='bibr' target='#b8'>(Breiman et al., 2017)</ns0:ref>. We summarize the distribution of the dataset in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. Each class consists of 320 images which is divided into 10 folds. One fold containing 32 images per class is assigned as data testing while the remaining ones are as data training. It is also important to note that each fold is assigned as testing data only once. After 10 folds are iteratively evaluated, the testing set will completely contain 320 images. </ns0:p></ns0:div> <ns0:div><ns0:head>Model Implementation</ns0:head><ns0:p>We carried out experiments for the above datasets using deep learning. The model used is taken from another model that has been tested and has the same objective characteristics, namely image classification.</ns0:p><ns0:p>As stated in Section 2, two transfer learning methods are employed in building the classification model, namely VGG-19 <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2015)</ns0:ref> and inception v3 <ns0:ref type='bibr' target='#b31'>(Szegedy et al., 2016)</ns0:ref>.</ns0:p><ns0:p>The transfer learning using VGG-19 is implemented in Python programming language and run on Google Colab <ns0:ref type='bibr' target='#b20'>(Paper, 2021)</ns0:ref>. 
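The stratified 10-fold split described in the Datasets subsection (640 images, two balanced classes, 32 test images per class per fold) could be produced with scikit-learn along the following lines. This is a sketch only: the directory layout, file extension, and random seed are assumptions, not details given in the paper.

```python
# Sketch of the stratified 10-fold split described in the Datasets subsection.
# Directory names and file loading are assumptions for illustration only.
from pathlib import Path
from sklearn.model_selection import StratifiedKFold

paths, labels = [], []
for label, species in enumerate(["zophobas_morio", "tenebrio_molitor"]):
    for p in sorted(Path("dataset").joinpath(species).glob("*.jpg")):
        paths.append(p)
        labels.append(label)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(paths, labels), start=1):
    # Each test fold should contain 32 images of each species.
    print(f"fold {fold}: {len(train_idx)} training images, {len(test_idx)} testing images")
```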
The library for implementing the transfer learning is Keras <ns0:ref type='bibr' target='#b12'>(Gulli and Pal, 2017)</ns0:ref> which is imported from Tensorflow <ns0:ref type='bibr' target='#b0'>(Abadi et al., 2016)</ns0:ref>. In VGG-19, the number of layers has been defined as 19. The number of layers is fixed for this model. However, there are still several parameters that we set based on the number of datasets used. First, the trainable parameter is set to false because the network will not be trained again. Then, the last classification layer is set to 2 because in this experiment there are only two labels, namely Zophobas Morio and Tenebrio Molitor. On the initial layer, the image is adjusted to the size of (224x224x3). The batch size is set to 32 while the epoch is 25.</ns0:p><ns0:p>Similarly, inception v3 is implemented in Python and executed on Google Colab. Inception v3 has 48 layers and the trainable parameter is set to false. In order to provide a fair comparison, the batch and the epoch parameters are set to the same setting as VGG-19.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Results</ns0:head><ns0:p>The testing data are utilized in the final evaluation to determine the performance of the built model. As Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>By employing Eq. 2, 3, 4, the performance for each model are visually expressed in a confusion matrix as shown in Figure <ns0:ref type='figure'>4a</ns0:ref>, 4b, 5a, and 5b, consecutively. Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> provides a more detailed performance comparison by including precision, recall, and accuracy. It can be seen that the chosen transfer learning models outperform the commonly used traditional models. Moreover, the best performance is achieved by the Inception v3 model where the precision and recall are 0.972 and 0.966 respectively, while the accuracy is 96.875%. The highest precision value is 0.972, which means that 97.2% of positive predictions are true.</ns0:p><ns0:p>Meanwhile, the recall value is 0.966, which indicates that 96.6% of positive data are correctly detected.</ns0:p><ns0:p>We are quite confident of the positive prediction result obtained in this research. Furthermore, the final result of accuracy is 96.875%. This high accuracy score is significantly promising for future practical use.</ns0:p><ns0:p>Especially, when these results are compared to other studies previously reviewed. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b17'>(Koonce, 2021)</ns0:ref> and Efficient Nets model <ns0:ref type='bibr' target='#b32'>(Tan and Le, 2020)</ns0:ref>. Compared to those models, the methods of VGG-19 and Inception v3 used in this study have a simpler architecture, and therefore, they can be used as a baseline of transfer learning approach for the two larvae research.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We built the complete models for the classification of Zophobas Morio and Tenebrio Molitor larvae using two transfer learning methods, namely VGG-19 and Inception v3. These models outperform the traditional ones, i.e. k-NN and SVM. The experimental results show that Inception v3 achieves the best accuracy, i.e., 96.875%. In addition, the values of precision and recall are 0.972 and 0.966 respectively.</ns0:p><ns0:p>These results are quite promising for practical use. 
Several improvements to this model can still be made for further research, such as increasing the dataset and combining more advanced transfer learning methods.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1 shows several samples of Zophobas Morio and Tenebrio Molitor images.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Images of Zophobas Morio (upper) and Tenebrio Molitor (lower)</ns0:figDesc><ns0:graphic coords='3,146.77,102.59,403.50,308.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. A single layer perceptron</ns0:figDesc><ns0:graphic coords='4,165.83,123.73,365.40,158.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>that provided a deep learning model to predict the teemingness of the larvae of house flies. The experiments were conducted with different angles of 4/11 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The architecture of VGG-19</ns0:figDesc><ns0:graphic coords='6,164.39,389.44,368.28,163.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>previously mentioned, two traditional classification algorithms, namely k-NN and SVM, are selected. The results serve as a baseline because no previous study exists on the classification of the two larvae to the best of our knowledge. The detailed results for k-NN with various k values are listed in Table 3. The image size is scaled down to 224 &#215; 224. The value of k is set at an odd number since the classification is binary. The results of k-NN and SVM are used as the baseline. The best results on a particular k in k-NN are chosen and subsequently considered a reliable representation of k-NN in general. Similar to k-NN, SVM is performed on resized images, i.e., 224&#215;224. The selected kernel is linear due to its simplicity. The results of SVM, as well as k-NN, are subsequently compared to the two transfer learning algorithms used in this research, i.e., VGG-19 and inception v3. The comparison of accuracy 6/11 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>( a )Figure 4 .Figure 5 .</ns0:head><ns0:label>a45</ns0:label><ns0:figDesc>Figure 4. 
Classification results using k-NN and SVM</ns0:figDesc><ns0:graphic coords='9,167.28,337.78,151.56,130.68' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Nutrition ingredients for the two larvae<ns0:ref type='bibr' target='#b5'>(Benzertiha et al., 2019)</ns0:ref> </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Item</ns0:cell><ns0:cell cols='2'>Tenebrio Molitor Zophobus Morio</ns0:cell></ns0:row><ns0:row><ns0:cell>Dry matter (DM, %)</ns0:cell><ns0:cell>95.58</ns0:cell><ns0:cell>96.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Crude protein (% of DM)</ns0:cell><ns0:cell>47.0</ns0:cell><ns0:cell>49.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Ether extract (% of DM)</ns0:cell><ns0:cell>29.6</ns0:cell><ns0:cell>33.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Chitin (% of DM)</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>45.9</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Image distribution for 10-CV</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Species</ns0:cell><ns0:cell cols='3'>Total Training per CV Testing per CV</ns0:cell></ns0:row><ns0:row><ns0:cell>Zophobas Morio</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>288</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Tenebrio Molitor</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>288</ns0:cell><ns0:cell>32</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Experimental results using k-NN SVM, VGG-19, and inception v3 is presented in Table4. Please note that the assignment of k = 1 for the k-NN is obtained from the previous experimental result as listed in Table3</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Testing fold Training fold</ns0:cell><ns0:cell>k=1</ns0:cell><ns0:cell>k=3</ns0:cell><ns0:cell>Accuracy k=5</ns0:cell><ns0:cell>k=7</ns0:cell><ns0:cell>k=9</ns0:cell><ns0:cell>Average</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>2-9</ns0:cell><ns0:cell cols='6'>90.625 84.375 82.812 84.376 76.562 83.750</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>1,3-10</ns0:cell><ns0:cell cols='6'>84.375 81.250 79.688 76.562 71.875 78.750</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>1-2,4-10</ns0:cell><ns0:cell cols='6'>81.250 81.250 76.562 76.562 75.000 78.125</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>1-3,5-10</ns0:cell><ns0:cell cols='6'>81.250 82.812 82.812 76.562 75.000 79.687</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>1-4,6-10</ns0:cell><ns0:cell cols='6'>82.812 76.562 71.875 73.438 68.750 74.687</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>1-5,7-10</ns0:cell><ns0:cell cols='6'>82.812 78.125 75.000 75.000 71.875 76.562</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>1-6,8-10</ns0:cell><ns0:cell cols='6'>75.000 73.438 68.750 67.188 65.625 70.000</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>1-7,9-10</ns0:cell><ns0:cell cols='6'>92.188 81.250 75.000 73.438 71.875 78.750</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>1-8,10</ns0:cell><ns0:cell cols='6'>89.062 82.812 79.688 79.688 76.562 81.562</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>1-9</ns0:cell><ns0:cell cols='6'>75.000 78.125 76.562 75.000 70.312 75.000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Average</ns0:cell><ns0:cell cols='6'>83.437 80.000 
76.875 75.781 72.344 77.687</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>per fold between k-NN,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Accuracy of various models Zophobas Morio or not. Since there are only two classes, the model can be considered a binary classification. Hence, Zophobas Morio can be viewed as a positive class while Tenebrio Molitor can be a negative class.There are several available performance metrics in binary classification. However, we prefer to use the most common ones, such as precision, recall, and accuracy. Let T P, T N, FP, and FN be True Positive,</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>Testing fold Training fold</ns0:cell><ns0:cell>k-NN</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>Accuracy VGG-19 Inception v3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell>2-9</ns0:cell><ns0:cell>90.625 96.875 89.625</ns0:cell><ns0:cell>98.438</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell>1,3-10</ns0:cell><ns0:cell>84.375 95.313 95.313</ns0:cell><ns0:cell>98.438</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell>1-2,4-10</ns0:cell><ns0:cell>81.250 92.188 98.438</ns0:cell><ns0:cell>96.875</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell>1-3,5-10</ns0:cell><ns0:cell>81.250 92.188 95.313</ns0:cell><ns0:cell>100.000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell>1-4,6-10</ns0:cell><ns0:cell>82.812 90.625 96.875</ns0:cell><ns0:cell>92.188</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell /><ns0:cell>1-5,7-10</ns0:cell><ns0:cell>82.812 95.313 95.313</ns0:cell><ns0:cell>98.438</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell /><ns0:cell>1-6,8-10</ns0:cell><ns0:cell>75.000 93.750 92.188</ns0:cell><ns0:cell>89.438</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell /><ns0:cell>1-7,9-10</ns0:cell><ns0:cell>92.188 95.313 90.625</ns0:cell><ns0:cell>89.063</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9</ns0:cell><ns0:cell /><ns0:cell>1-8,10</ns0:cell><ns0:cell>89.062 87.500 90.625</ns0:cell><ns0:cell>98.438</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>10</ns0:cell><ns0:cell /><ns0:cell>1-9</ns0:cell><ns0:cell>75.000 90.625 95.313</ns0:cell><ns0:cell>98.438</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>DISCUSSIONS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='5'>The main objective of the proposed classification model is to determine whether an observed larva image</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>is categorized as a True Negative, False Positive, and False Negative, respectively. 
Precision, recall, and accuracy can be</ns0:cell></ns0:row><ns0:row><ns0:cell>expressed as</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Precision =</ns0:cell><ns0:cell cols='2'>T P T P + FP</ns0:cell><ns0:cell>,</ns0:cell><ns0:cell>(2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Recall =</ns0:cell><ns0:cell cols='2'>T P T P + FN</ns0:cell><ns0:cell>,</ns0:cell><ns0:cell>(3)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Accuracy =</ns0:cell><ns0:cell cols='3'>T P + T N T P + FP + FP + FN</ns0:cell><ns0:cell>&#215; 100%.</ns0:cell><ns0:cell>(4)</ns0:cell></ns0:row></ns0:table><ns0:note>7/11PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66996:1:0:NEW 8 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance metrics for binary classification</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='5'>TP FP TN FN Precision Recall Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>k-NN</ns0:cell><ns0:cell cols='2'>319 105 215 1</ns0:cell><ns0:cell>0.752</ns0:cell><ns0:cell>0.997</ns0:cell><ns0:cell>83.438</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>310 35</ns0:cell><ns0:cell>285 10</ns0:cell><ns0:cell>0.899</ns0:cell><ns0:cell>0.969</ns0:cell><ns0:cell>92.969</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG-19</ns0:cell><ns0:cell>298 15</ns0:cell><ns0:cell>305 22</ns0:cell><ns0:cell>0.952</ns0:cell><ns0:cell>0.931</ns0:cell><ns0:cell>94.219</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Inception v3 309 9</ns0:cell><ns0:cell>311 11</ns0:cell><ns0:cell>0.972</ns0:cell><ns0:cell>0.966</ns0:cell><ns0:cell>96.875</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Related works comparison dataset can increase the number of recognizable patterns. 
Another alternative is to use more complex transfer learning methods, such as Resnet-50</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Work</ns0:cell><ns0:cell /><ns0:cell cols='2'>Larva Types</ns0:cell><ns0:cell /><ns0:cell>Methods</ns0:cell><ns0:cell>Results</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Fuad et al.,</ns0:cell><ns0:cell cols='2'>Aedes Aegypti</ns0:cell><ns0:cell /><ns0:cell cols='2'>Google inception model</ns0:cell><ns0:cell>99.77-99.98% accuracy, 0.21-</ns0:cell></ns0:row><ns0:row><ns0:cell>2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>5.13% cross-entropy error</ns0:cell></ns0:row><ns0:row><ns0:cell>(Azman</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>Aegypti,</ns0:cell><ns0:cell cols='2'>Albopic-</ns0:cell><ns0:cell cols='2'>Convolution neural net-</ns0:cell><ns0:cell>0.7-73% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sarlan, 2020)</ns0:cell><ns0:cell>tus,</ns0:cell><ns0:cell cols='2'>Anopheles,</ns0:cell><ns0:cell>work</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Armigeres, Culex</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>(Asmai et al.,</ns0:cell><ns0:cell cols='2'>Aedes Aegypti</ns0:cell><ns0:cell /><ns0:cell>VGG16,</ns0:cell><ns0:cell>VGG-19,</ns0:cell><ns0:cell>77.31-85.10% accuracy, 0.31-</ns0:cell></ns0:row><ns0:row><ns0:cell>2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>ResNet-50, InceptionV3</ns0:cell><ns0:cell>0.66% loss</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Shang et al.,</ns0:cell><ns0:cell>Zebrafish</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>GoogLeNet,</ns0:cell><ns0:cell>VGG-19,</ns0:cell><ns0:cell>91-100% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>AlexNet</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Kakehi et al.,</ns0:cell><ns0:cell>Oyster</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>coordinate system of Py-</ns0:cell><ns0:cell>82.4% precision, 90.8% recall,</ns0:cell></ns0:row><ns0:row><ns0:cell>2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Torch</ns0:cell><ns0:cell>86.4% F-score</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(Ong et al.,</ns0:cell><ns0:cell cols='2'>House flies</ns0:cell><ns0:cell /><ns0:cell cols='2'>Convolution neural net-</ns0:cell><ns0:cell>88.44-92.95%</ns0:cell><ns0:cell>precision,</ns0:cell></ns0:row><ns0:row><ns0:cell>2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>work</ns0:cell><ns0:cell>88.23-94.10% recall, 87.56-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>92.89% accuracy, 88.08-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>93.02% F-score</ns0:cell></ns0:row><ns0:row><ns0:cell>Ours</ns0:cell><ns0:cell /><ns0:cell cols='2'>Zophobas</ns0:cell><ns0:cell>Morio,</ns0:cell><ns0:cell cols='2'>VGG-19, Inception v3</ns0:cell><ns0:cell>97.2% precision, 96.6% recall,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Tenebrio Molitor</ns0:cell><ns0:cell /><ns0:cell>96.876% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>addition of the</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
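As a rough illustration of the Model Implementation section, the sketch below assembles the VGG-19 transfer-learning model in Keras with the stated settings: a frozen pretrained base, 224x224x3 input, a two-class output layer, batch size 32, and 25 epochs. The pooling layer, optimizer, loss, and data pipeline are assumptions, since the paper does not specify them.

```python
# Sketch of the VGG-19 transfer-learning setup described in Model Implementation:
# frozen convolutional base, 224x224x3 input, two-class head, 25 epochs.
# The head design, optimizer, and data pipeline are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # "trainable parameter is set to false"

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # assumed head design
    layers.Dense(2, activation="softmax"),  # two labels: Zophobas Morio, Tenebrio Molitor
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # assumes one-hot labels
              metrics=["accuracy"])

# train_ds and val_ds would be tf.data datasets built from one cross-validation
# fold and already batched to 32 (hypothetical names, not from the paper):
# model.fit(train_ds, validation_data=val_ds, epochs=25)
```

The corresponding Inception v3 model could be built the same way by swapping in keras.applications.InceptionV3, keeping in mind that its default input size differs from 224x224.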
"Dear Editor-in-Chief, We would like to submit the revision of our paper titled “Classification of Zophobas Morio and Tenebrio Molitor using transfer learning” (#CS-2021:10:66996:0:1:REVIEW). We would like to thank you very much for sending us the associate editor’s as well as the reviewers’ comments on our manuscript. We have found the comments and suggestions to be very helpful for our revision. We have revised the manuscript by addressing all of the reviewers' comments and made the manuscript suitable for publication in PeerJ Computer Science. Our responses to the comments are included below. Should you have any queries, please contact us. We would like to express our gratitude for your attention. Sincerely yours, Agus Pratondo, Ph.D. On behalf of all authors. 1. Reviewer 1 (Anonymous) Basic reporting 1.1. Comment The presentation quality of this paper is good. The background, the task, and the model are introduced clearly. However, the technical contribution of this paper is less. It seems that this paper just simply applies the VGG-19 model for solving the Zophobas Morio and Tenebrio Molitor classification problem. Response We wish to thank the reviewer for the valuable comments and suggestions. The paper has been improved accordingly in the revised manuscript. Several classifiers have been applied and further discussions related to the comparison have been elaborated to increase the technicality while compromising the novelty and significance of this research. Experimental design 1.2. Comment It seems that the authors do not perform baselines on the dataset and provide comparisons. Specifically, what is the performance of non-transfer-learning models on the dataset? What is the superiority of the VGG-19 model compared with other transfer learning approaches on the given dataset? Response To accommodate the comments, we have carried out additional experiments using traditional classifiers, namely k-Nearest Neighbors and Support Vector Machines to obtain the baselines. Most importantly, we have experimented with another suitable transfer learning model to beat the superiority of VGG-19, i.e. Inception v3. We found that VGG-19 and Inception v3 provide the best results with 19 and 48 layers respectively. 1.3. Comment Since the dataset is very small, the authors are suggested to use cross-validation to avoid the impact of sampling. Response To accommodate the reviewer, we have revised the evaluation part by performing 10-cross-validation accordingly. We have tried to increase the number of datasets as well, however, it did not give any significant improvement. Validity of the findings 1.4. Comment The novelty of this paper has not been assessed. The dataset of this paper has not been provided. Response We found the problem from the beginners who tried to identify two types of larvae in the real world. They were complaining that they occasionally deal with difficulty in buying these larvae that lead them to unnecessary expenses. Furthermore, we collected the dataset from the market that has not been provided as an open-source or even commercial dataset before. Hence, we will submit the dataset as part of the final version of the paper to the journal. To the best of our knowledge, this is the first investigation on the Zophobas Morio and Tenebrio Molitor classification problem. 2. Reviewer 2 (Anonymous) Basic reporting 2.1. Comment This is a good technical paper. Figures are clear and have at least 300 dpi. 
On the related works, the authors should move Table 2 to the Discussions section as the benchmarking between their proposed methods and other papers. Table 2 should discuss the summary of the work, methods, strengths, and weaknesses of other papers. Response We would like to thank the reviewer for the valued comments and suggestions. We have rearranged the discussion of the benchmark between our results and others. The paper has been improved accordingly in the revised manuscript. Experimental design 2.2. Comment The use of VGG-19 should be justified. Response We have emphasized the use of VGG-19 due to its simplicity compared to the recent transfer learning algorithms such as Resnet 50 and Efficient Net. Moreover, we have carried out additional experiments using traditional classifiers, namely k-Nearest Neighbors and Support Vector Machines to obtain the baselines. Moreover, the use of VGG-19 has been accompanied by another transfer learning algorithm, i.e., Inception v3. We believe that the additional experimentation is strong enough to support our argument that the transfer learning technique is suitable in accommodating the need for two larvae identification. 2.3. Comment The dataset is appropriate. No data augmentation performed? Response We understand that it is a common practice to perform data augmentation in image classification using deep learning. However, since the images have a neutral background and the position of the larvae varies, we prefer no augmentation in the data set to maintain the fairness for the accuracy evaluation between the traditional classifiers and the transfer learning. Validity of the findings 2.4. Comment The original Table 2 could be moved here to improve the proposed method validation. Response We have rearranged the proposed method validation by moving Table 2 to a more appropriate place to support the discussion. The paper has been improved accordingly in the revised manuscript. 3. Reviewer 3 (Hsiao-Han Liu) Basic reporting 3.1. Comment 1. The English article is written quite clear and professionally. 2. The background and context are sufficient for this topic. 3. The article structure is quite reasonable 4. Relevant results to the hypothesis are quite self-contained. 5. Formal results are clear and have detailed proofs. Response We would like to thank the reviewer for spending the time to provide those motivating appreciations to our manuscript. The comments encourage us to have another useful research work on this area. Experimental design 3.2. Comment 1. This article is within the aims and scope of the journal. 2. The research questions are well defined and fill the specific problems of difficulties to differentiate these two worms. 3. This investigation was performed to a high technical and ethical standard. 4. The methods are described with sufficient detail and information to replicate. Response We would like to appreciate the reviewer for the positive comments on our paper. We were doing our best in maintaining these quality comments on the revision version as well. Validity of the findings 3.3. Comment 1. This article gives impact and novelty for worms recognition algorithm. 2. All underlying data have been provided and robust, statistically sound, and controlled. 3. Conclusions are well stated, linked to original research questions, and limited to supporting results. Response We would like to express our gratitude to the reviewer for his appreciation of our works. 
These make us more believe in our capability in contributing to the giant’s shoulder. Additional comments 3.4. Comment 1. 1. In general, we are not confused by distinguishing these two worms in my lab. 2. The app will help beginners to differentiate these two worms. Response We would like to thank the reviewer for the understanding of the practical use of our findings. We will definitely spread the outcome of this research to the community. "
Here is a paper. Please give your review comments after reading it.
340
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Research in computer systems often involves the engineering, implementation, and measurement of complex systems software. The availability of these software artifacts is critical to the reproducibility and replicability of the research results because system software often embodies numerous implicit assumptions and parameters that are not fully documented in the research article itself. Artifact availability has also been previously associated with higher paper impact, as measured by citations counts. And yet, the sharing of research artifacts is still not as common as warranted by its importance.</ns0:p><ns0:p>The primary goal of this study is to provide an exploratory statistical analysis of the artifact-sharing rates and associated factors in the research field of computer systems. To this end, we explore a crosssectional dataset of papers from 56 contemporaneous systems conferences. In addition to extensive data on the conferences, papers, and authors, this dataset includes data on the release, ongoing availability, badging, and locations of research artifacts. We combine this manually curated data with recent citation counts to evaluate the relationships between different artifact properties and citation metrics. Additionally, we revisit previous observations from other fields on the relationships between artifact properties and various other characteristics of papers, authors, and venue and apply them to this field.</ns0:p><ns0:p>The overall rate of artifact sharing we find in this dataset is approximately 30%, although it varies significantly with paper, author, and conference factors, and it is closer to 43% for conferences that actively evaluated artifact sharing. Approximately 20% of all shared artifacts are no longer accessible four years after publications, predominately when hosted on personal and academic websites. Our main finding is that papers with shared artifacts averaged approximately 75% more citations than papers with none. Even after controlling for numerous confounding covariates, the release of an artifact appears to increase the citations of a systems paper by some 34%. This metric is further boosted by the open availability of the paper's text.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The only form of standardized artifact metadata was found for the subset of conferences organized by the ACM with artifact badge initiatives. In the proceedings page in the ACM's digital library of these conferences, special badges denote which papers made artifacts available, and which papers had artifacts evaluated (for conferences that supported either badge). In addition, the ACM digital library also serves as a repository for the artifacts, and all of these ACM papers included a link back to the appropriate web page with the artifact.</ns0:p><ns0:p>Unfortunately, most papers in this dataset were not published by the ACM or had no artifact badges. In the absence of artifact metadata or an automated way to extract artifact data, these papers required a manual scanning of the PDF text of every paper in order to identify such links. 
When skimming these paper, several search terms were used to assist in identifying artifacts, such as 'github,' 'gitlab,' 'bitbucket,' 'sourceforge,' and 'zenodo' for repositories; variants of 'available,' 'open source,' and 'download' for links; and variations of 'artifact,' 'reproducibility,' and 'will release' for indirect references. Some papers make no mention of artifacts in the text, but we can still discover associated artifacts online by searching github.com for author names, paper titles, and especially unique monikers used in the paper to identify their software.</ns0:p><ns0:p>We also recorded for each paper: whether the paper had an 'artifact available' badge or 'artifact evaluated' badge, whether a link to the artifact was included in the text, the actual URL for the artifact, and the latest date that this artifact was still found intact online. All of the searches for these artifacts are recent, so from the last field above we can denote the current status of an artifact as either extant or expired. From the availability of a URL, we can classify an artifact as released or unreleased (the latter denoting papers that promised an artifact but no link or repository was found). And from the host domain of the URL we can classify the location of the artifact as either an Academic web page, the ACM digital library, a Filesharing service such as Dropbox or Google, a specialized Repository such as github.com, Other (including .com and .org web sites), or NA.</ns0:p></ns0:div> <ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:1:1:NEW 30 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In all, 722 papers in our dataset (29.6%) had an identifiable or promised artifact, primarily source code but occasionally data, configuration, or benchmarking files. Artifacts that had been included in previous papers or written by someone other than the paper's authors were excluded from this count. This statistic only reflects artifact availability, not quality, since evaluating artifact quality is both subjective and time-consuming. It is worth noting, however, that most of the source-code repositories in these artifacts showed no development activity-commits, forks, or issues-after the publication of their paper, suggesting limited impact for the artifacts alone.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The methodology described here is constrained by the manual curation of data, especially artifact data.</ns0:p><ns0:p>The effort involved in compiling all the necessary data limits the scalability of this approach to additional conferences or years. Furthermore, the manual search for artifacts in the text and in repositories is a laborious process and prone to human error. Although a large-enough number of artifacts was identified for statistical analysis, there likely remain untagged papers in the dataset that did actually release an artifact (false negatives). Nevertheless, there is no evidence to suggest that their number is large or that their distribution is skewed in some way as to bias statistical analyses. 
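The keyword scan and host-domain classification described in this section could be assisted by a small helper along these lines. The term list and location categories follow the text above; the helper itself is an illustrative sketch, not the authors' actual tooling, and the heuristics for the Academic bucket are assumptions.

```python
# Sketch of the manual-assisted artifact search described above: flag papers whose
# extracted text mentions repository hosts or availability phrases, and bucket a
# found URL by its host domain.
import re
from urllib.parse import urlparse

ARTIFACT_TERMS = [
    "github", "gitlab", "bitbucket", "sourceforge", "zenodo",
    "available", "open source", "download",
    "artifact", "reproducib", "will release",   # catches reproducible/reproducibility
]

def mentions_artifact(paper_text: str) -> bool:
    text = paper_text.lower()
    return any(term in text for term in ARTIFACT_TERMS)

def classify_location(url: str) -> str:
    host = urlparse(url).netloc.lower()
    if any(h in host for h in ("github.com", "gitlab.com", "bitbucket.org",
                               "sourceforge.net", "zenodo.org")):
        return "Repository"
    if "acm.org" in host:
        return "ACM"
    if any(h in host for h in ("dropbox.com", "drive.google.com")):
        return "Filesharing"
    if host.endswith(".edu") or ".ac." in host:   # assumed academic heuristic
        return "Academic"
    return "Other"

print(classify_location("https://github.com/example/artifact"))  # -> Repository
```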
That said, since the complete dataset is (naturally) released as an artifact of this paper, it can be enhanced and corrected over time.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistics</ns0:head><ns0:p>For statistical testing, group means were compared pairwise using Welch's two-sample t-test; differences between distributions of two categorical variables were tested with a &#967; 2 test; and comparisons between two numeric properties of the same population were evaluated with Pearson's product-moment correlation. All statistical tests are reported with their p-values.</ns0:p></ns0:div> <ns0:div><ns0:head>Ethics statement</ns0:head><ns0:p>All of the data for this study was collected from public online sources and therefore did not require the informed consent of the papers' authors.</ns0:p></ns0:div> <ns0:div><ns0:head>Code and data availability</ns0:head><ns0:p>The complete dataset and metadata are available in the supplementary material, as well as a github repository <ns0:ref type='bibr' target='#b14'>[Frachtenberg, 2021]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Descriptive statistics</ns0:head><ns0:p>Before addressing our main research question, we start with a simple characterization of the statistical distributions of artifacts in our dataset. Of the 722 papers with artifacts, we find that about 79.5% included an actual link to the artifact in the text. The ACM digital library marked 88 of the artifact papers (12.2%) with an 'Artifact available' badge, and 89 papers (12.3%) with an 'Artifact evaluated' badge. The majority of artifact papers (86.7%) still had their artifacts available for download at the time of this writing. This ratio is somewhat similar to the 73% found in a comparable study.</ns0:p><ns0:p>Looking at the differences across conferences, Fig. <ns0:ref type='figure'>1</ns0:ref> shows the percentage of papers with artifacts per conference, ranging from 0% for ISCA, IGSC, and HCW to OOPSLA's 78.79% (mean: 27.22%, SD: 19.32%). Unsurprisingly, nearly all of the conferences where artifacts were evaluated are notable for their relatively high artifact rates. Only PACT stands out as a conference that evaluated artifacts but had a lower-than-average overall ratio of papers with artifacts (0.24). The MobiCom conference also shows a distinctly low ratio, 0.09, despite actively encouraging artifacts. It should be noted, however, that many papers in PACT and MobiCom are hardware-related, where artifacts are typically infeasible.</ns0:p><ns0:p>The same is true for a number of other conferences with low artifact ratios, such as ISCA, HPCA, and MICRO. Also worth noting is the fact that ACM conferences appear to attract many more artifacts than IEEE conferences, although the reasons likely vary on a conference-by-conference basis.</ns0:p><ns0:p>Another indicator for artifact availability is author affiliation. As observed in other systems papers, industry-affiliated authors typically face more restrictions for sharing artifacts <ns0:ref type='bibr' target='#b9'>[Collberg and Proebsting, 2016]</ns0:ref>, likely because the artifacts hold commercial or competitive ramifications <ns0:ref type='bibr' target='#b34'>[Ince et al., 2012]</ns0:ref>.
In our dataset, only 19.3% of the 109 papers where all authors had an industry affiliation also released an artifact, compared to 28.1% for the other papers (&#967; 2 = 3.6, p = 0.06).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Relationships to citations</ns0:head><ns0:p>Turning now to our main research hypothesis, we ask: does the open availability of an artifact affect the citations of a paper in systems? To answer this question, we look at the distribution of citations for each paper 42 months after its conference's opening day, when its proceedings presumably were published. 1</ns0:p><ns0:note place='foot' n='1'>At the time of this writing during summer 2021, the papers from December 2017 had been public for 3.5 years, so this 42-month duration was selected for all papers to normalize the comparison.</ns0:note><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows the overall paper distribution as a histogram, while Fig. <ns0:ref type='figure' target='#fig_0'>3</ns0:ref> breaks down the distributions of artifact and non-artifact papers as density plots.</ns0:p><ns0:p>Citations range from none at all (49 papers) to about a thousand, with two outlier papers exceeding 2,000 citations <ns0:ref type='bibr' target='#b7'>[Carlini and</ns0:ref><ns0:ref type='bibr'>Wagner, 2017, Jouppi et al., 2017]</ns0:ref>. The distributions appear roughly lognormal. The mean citations per paper with artifacts released was 50.7, compared to 29.1 with none (t = 4.07, p &lt; 10 &#8722;4 ). Since the citation distribution is so right-skewed, it makes sense to also compare the median citations with and without artifacts (25 vs. 13, W = 767739, p &lt; 10 &#8722;9 ). Both statistics suggest a clear and statistically significant advantage in citations for papers that released an artifact. Likewise, the 675 papers that actually released an artifact garnered more citations than the 47 papers that promised an artifact that could later not be found (t = 3.82, p &lt; 10 &#8722;3 ), and extant artifacts fared better than expired ones (t = 4.17, p &lt; 10 &#8722;4 ).</ns0:p><ns0:p>In contradistinction, some positive attributes of artifacts were actually associated with fewer citations. For example, the mean citations of the 573 papers with a linked artifact, 47, was much lower than the 71.3 mean for the 102 papers with artifacts we found using a Web search (t = &#8722;2.02, p = 0.04; W = 22865, p &lt; 10 &#8722;3 ). Curiously, the inclusion of a link in the paper, presumably making the artifact more accessible, was associated with fewer citations.</ns0:p><ns0:p>Similarly counter-intuitively, papers that received an 'Artifact evaluated' badge fared worse in citations than artifact papers that did not (t = &#8722;3.32, p &lt; 0.01; W = 11932.5, p = 0.03). Papers that received an 'Artifact available' badge fared a little worse than artifact papers that did not (t = &#8722;1.45, p = 0.15; W = 26050.5, p = 0.56).
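Incidentally, group comparisons like these can be reproduced directly from the released dataset. The sketch below shows the form of the tests used throughout this section in R; the column names (citations, artifact, evaluated_badge, references) are illustrative assumptions rather than the dataset's actual schema.

# Sketch of the statistical tests reported above, applied to an assumed data frame 'papers'
# with columns: citations (numeric), artifact (logical), evaluated_badge (logical), references (numeric).
t.test(citations ~ artifact, data = papers)                  # Welch two-sample t-test (var.equal defaults to FALSE)
wilcox.test(citations ~ artifact, data = papers)             # rank-sum test behind the W statistics above
chisq.test(table(papers$artifact, papers$evaluated_badge))   # association between two categorical variables
cor.test(papers$citations, papers$references)                # Pearson product-moment correlation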
These findings appear to contradict the premise that such badges are associated with increased artifact sharing, as has been found in other fields <ns0:ref type='bibr' target='#b1'>[Baker, 2016a]</ns0:ref>.</ns0:p><ns0:p>Finally, we can also break down the citations per paper grouped by the type of location for the artifact and by organization, examining medians because of the outsize effects of outliers (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). The three major location categories do not show significant differences in citations, and the last two categories may be too small to ascribe statistical significance to their differences.</ns0:p><ns0:p>An interesting tangential question is how often systems papers cite other artifacts. Unfortunately, there is no simple automated way to answer this question, and a careful reading of all 2439 papers is impractical. As a crude approximation, a simple search for the string 'github' in the full-text of all the papers yielded 900 distinct results. Keep in mind, however, that perhaps half of those could be referring to their own artifact rather than another paper's, and that not all cited github repositories do indeed represent paper artifacts. Incidentally, papers with released artifacts also tend to incorporate significantly more references themselves (mean: 32.31 vs. 28.71; t = 5.25, p &lt; 10 &#8722;6 ). However, there is no reason to suspect a causal relationship to artifacts, as opposed to some other confounding cause. We dive deeper into questions of association and causality with citations in the discussion section.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Accessibility</ns0:head><ns0:p>As mentioned previously, 13.3% of released artifacts are already inaccessible, a mere &#8776; 3.5 years after publication. Most of the artifacts in our dataset were published in code repositories, predominantly github, that do not guarantee persistent access or even universal access protocols such as digital object identifiers (DOI). However, only 2.3% of the 'Repository' artifacts were inaccessible. In contrast, 22.2% of the artifacts in university pages have already expired, likely because they had been hosted by students or faculty who have since moved elsewhere. Also, a full 50% of the artifacts on file-sharing sites such as Dropbox or Google Drive are no longer there, possibly because these are paid services, or free only up to a limited capacity, and can get expensive to maintain over time.</ns0:p><ns0:p>Accessibility is also closely related to the findability of the artifact, which, in the absence of artifact DOIs in our dataset, we estimate by looking at the number of papers that explicitly link to their artifacts. The missing (expired) artifacts accounted for a full 31.1% of the papers with no artifact link, compared to only 8.7% for papers that linked to them (&#967; 2 = 49.15, p &lt; 10 &#8722;9 ).</ns0:p><ns0:p>Another question related to artifact accessibility is how accessible the actual paper that introduced the artifact is, which may itself be associated with higher citations <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b41'>, McCabe and Snyder, 2015</ns0:ref><ns0:ref type='bibr' target='#b42'>, McKiernan et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b56'>, Tahamtan et al., 2016]</ns0:ref>.
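The per-location breakdowns above (the medians in Table 3 and the expiry rates by hosting location) reduce to simple grouped summaries over the dataset. A minimal dplyr sketch follows, again with assumed column names (location, citations, extant); it is illustrative rather than the original analysis code.

# Sketch: recompute Table 3-style summaries; assumes columns location (factor),
# citations (numeric), and extant (logical, FALSE for expired artifacts).
library(dplyr)

papers %>%
  filter(!is.na(location)) %>%
  group_by(location) %>%
  summarise(n = n(),
            median_citations = median(citations[extant]),
            expired_share = mean(!extant))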
A substantial proportion of the papers (23.1%) were published in 15 open-access conferences. Other papers have also been released openly as preprints or via other means. One way to gauge the availability of the paper's text is to look it up on GS and see if an accessible version (eprint) is linked, which is recorded in our dataset. Of the 2439 papers, 91.8% displayed at some point an accessible link to the full text on GS. Specifically, of the papers that released artifacts, 96.7% were associated with an eprint as well, compared to 90% of the papers with no artifacts (&#967; 2 = 29, p &lt; 10 &#8722;7 ).</ns0:p><ns0:p>Moreover, our dataset includes not only the availability of an eprint link on GS, but also the approximate duration since publication (in months) that it took GS to display this link, offering a quantitative measure of accessibility speed as well. It shows that for papers with artifacts, GS averaged approximately 4 months post-publication to display a link to an eprint, compared to 5.8 months for papers with no artifacts (t = &#8722;5.48, p &lt; 10 &#8722;7 ). Both of these qualitative and quantitative differences are statistically significant, but keep in mind that the accessibility of papers and the availability of artifacts are not independent: some conferences that encouraged artifacts were also open-access, particularly those with the ACM. Another covariate that is not independent of accessibility is citations; several studies suggested that accessible papers are better cited <ns0:ref type='bibr' target='#b3'>[Bernius and Hanauske, 2009</ns0:ref><ns0:ref type='bibr' target='#b43'>, Niyazov et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b53'>, Snijder, 2016]</ns0:ref>, although others disagree <ns0:ref type='bibr' target='#b6'>[Calver and Bradley, 2010</ns0:ref><ns0:ref type='bibr' target='#b11'>, Davis and Walters, 2011</ns0:ref><ns0:ref type='bibr' target='#b41'>, McCabe and Snyder, 2015]</ns0:ref>. This dependence may explain part of the higher citability of papers with artifacts, as elaborated next.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Covariate analysis</ns0:head><ns0:p>Having addressed the relationships between artifacts and citations, we can now explore relationships between additional variables from this expansive dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1'>Awards</ns0:head><ns0:p>Many conferences present competitive awards, such as 'best paper,' 'best student paper,' 'community award,' etc. Of the 2439 total papers, 4.7% received at least one such award. Papers with artifacts are disproportionately represented in this exclusive subset (39.5% vs. 27.1% in non-award papers; &#967; 2 = 7.71, p &lt; 0.01).</ns0:p><ns0:p>Again, it is unclear whether this relationship is causal, since the two covariates are not entirely independent. For example, a handful of awards specifically evaluated the contribution of the paper's artifact. Even if the relationship is indeed causal, its direction is also unclear, since 20% of award papers with an artifact did not link to it in the paper.
It is possible that these papers released their artifacts after winning the award or because of it.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.2'>Textual properties</ns0:head><ns0:p>Some of the textual properties of papers can be estimated from their full text using simple command-line tools. Our dataset includes three such properties: the length of each paper in words, the number of references it cites, and the existence of a system's moniker in the paper's title.</ns0:p><ns0:p>The approximate paper length in words and the number of references turn out to be positively associated with the release of an artifact. Papers with artifacts average more pages than papers without (13.98 vs. 12.4; t = 8.24, p &lt; 10 &#8722;9 ), more words (11757.36 vs. 10525.22; t = 7.86, p &lt; 10 &#8722;9 ), and more references (32.31 vs. 28.71; t = 5.25, p &lt; 10 &#8722;6 ). Keep in mind, however, that longer papers also correspond to more references (r = 0.48, p &lt; 10 &#8722;9 ), and are further confounded with specific conference factors such as page limits.</ns0:p><ns0:p>As previously mentioned, many systems papers introduce a new computer system, often as software.</ns0:p><ns0:p>Sometimes, these papers name their system by a moniker, and their title starts with the moniker, followed by a colon and a short description (e.g., 'Widget: An Even Faster Key-Value Store'). This feature is easy to extract automatically for all paper titles.</ns0:p><ns0:p>We could hypothesize that a paper that introduces a new system, especially a named system, would be more likely to include an artifact with the code for this system, quite likely with the same repository name.</ns0:p><ns0:p>Our data supports this hypothesis. The ratio of artifacts released in papers with a labeled title, 41.9%, is nearly double that of papers without a labeled title, 22.8% (&#967; 2 = 84.23, p &lt; 10 &#8722;9 ).</ns0:p><ns0:p>These textual relationships may not be very insightful, because of the difficulty to ascribe any causality to them, but they can clue the paper's reader to the possibility of an artifact, even if one is not linked in the paper. Indeed, they accelerated the manual search for such unlinked artifacts during the curation of the data for this study.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.3'>Conference prestige</ns0:head><ns0:p>Finally, we look at conference-specific covariates that could represent how well-known or competitive a conference is. In addition to textual conference factors, these conference metrics may also be associated with higher rates of artifact release.</ns0:p><ns0:p>Several proxy metrics for prestige appear to support this hypothesis. Papers with released artifacts tend to appear in conferences that average a lower acceptance rate (0.21 vs. 0.24; t = &#8722;6.28, p &lt; 10 &#8722;9 ), more paper submissions (360.5 vs. 292.45; t = 6.33, p &lt; 10 &#8722;9 ), higher historical mean citations per paper (16.6 vs. 14.96; t = 3.09, p &lt; 0.01), and a higher h5-index from GS metrics (46.04 vs. 41.04; t = 6.07, p &lt; 10 &#8722;8 ). 
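The textual features from Section 3.4.2 are straightforward to derive mechanically from the title strings and extracted full text. A minimal sketch in R follows; the data frame and column names (papers, title, fulltext, artifact) are illustrative assumptions rather than the dataset's actual schema.

# Sketch: derive the labeled-title flag and an approximate word count.
papers$labeled_title <- grepl("^[^:]+:\\s", papers$title)            # e.g., "Widget: An Even Faster Key-Value Store"
papers$word_count    <- lengths(strsplit(papers$fulltext, "\\s+"))   # crude token count, similar to wc -w

# Association between labeled titles and artifact release, as reported above:
chisq.test(table(papers$labeled_title, papers$artifact))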
Also note that papers in conferences that offered some option for author response to peer review (often in the form of a rebuttal) were slightly more likely to include artifacts, perhaps prompted by the reviews themselves (&#967; 2 = 2.03, p = 0.15).</ns0:p><ns0:p>To explain these relationships, we might hypothesize that a higher rate of artifact submission would be associated with more reputable conferences, either because artifact presence contributes to prestige, or because more rigorous conferences are also more likely to expect such artifacts. Observe, however, that some of the conferences that encourage or require artifacts are not as competitive as the others.</ns0:p><ns0:p>For example, OOPSLA, with the highest artifact rate, had an acceptance rate of 0.3, and SLE, with the fourth-highest artifact rate, had an acceptance rate of 0.42. The implication here is that it may not suffice for a conference to actively encourage artifacts in order for it to be competitive, but a conference that already is competitive may also attract more artifacts.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>DISCUSSION</ns0:head><ns0:p>In this section we revisit in depth our primary research interest: the effect of artifact sharing on citations. We already observed a strong statistical association between artifact release and higher eventual citations. As cautioned throughout this study, such associations are insufficient to draw causal conclusions, primarily because there are many confounding variables, most of which relate to the publishing conference. These confounding factors could provide a partial or complete statistical explanation for differences in citations beyond artifact availability.</ns0:p><ns0:p>In other words, papers published in the same conference might exhibit strong correlations that interact or interfere with our response variable. One such factor affecting paper citations is time since publication, which we control for by measuring all citations at exactly the same interval, 42 months since the conference's official start. Another crucial factor is the field of study, which we control for by focusing on a single field, while providing a wide cross-section of the field to limit the effect of statistical variability.</ns0:p><ns0:p>There are also numerous less-obvious paper-related factors that have shown positive association with citations, such as review-type studies, fewer equations, higher number of references, statistically significant positive results, paper length, number of figures and images, and even more obscure features such as the presence of punctuation marks in the title. We can attempt to control for these confounding variables when evaluating associations by using a multilevel model. To this end, we fit a linear regression model of citations as a function of artifact availability, and then add predictor variables as controls, observing their effect on the main predictor. The response variable we model is ln(citations) instead of citations, because of the long tail of their distribution.
We also omit the 49 papers with zero citations to improve the linear fit with the predictors.</ns0:p><ns0:p>In the baseline form, fitting a linear model of the log-transformed citations as a function of only artifact released yields an intercept (baseline log citations) of 2.6 and a slope of 0.59, meaning that releasing an artifact adds approximately 81% more citations to the paper, after exponentiation. The p-value for this predictor is exceedingly low (less than 2 &#215; 10 &#8722;16 ), but this simplistic model only explains 4.61% of the variance in citations (Adjusted R 2 =0.046). The Bayesian Information Criterion (BIC) for this model is 7693.252, with 2388 degrees of freedom (df).</ns0:p><ns0:p>We can now add various paper covariates to the linear model in an attempt to get more precise estimates for the artifact released predictor, by iteratively experimenting with different predictor combinations to minimize BIC using stepwise model selection <ns0:ref type='bibr'>[Garc&#237;a-Portugu&#233;s, 2021, Ch. 3</ns0:ref>]. The per-paper factors considered were: paper length (words), number of coauthors, number of references, colon in the title, award given, and accessibility speed (months to eprint 2 ).</ns0:p><ns0:p>It turns out that all these paper-level factors except award given have a statistically significant effect on citations, which brings the model to an increased adjusted R 2 value of 0.285 and a BIC of 7028.07 (df = 2380). However, the coefficient for artifact released went down to 0.35 (42% relative citation increase) with an associated p-value of 3.8 &#215; 10 &#8722;13 .</ns0:p><ns0:p>Similar to paper variables, some author-related factors such as academic reputation, country of residence, and gender have been associated with citation count <ns0:ref type='bibr' target='#b56'>[Tahamtan et al., 2016]</ns0:ref>. We next enhance our linear model with the following predictor variables (omitting 448 papers with NA values):</ns0:p><ns0:p>&#8226; Whether all the coauthors with a known affiliation came from the same country <ns0:ref type='bibr' target='#b48'>[Puuska et al., 2014]</ns0:ref>.</ns0:p><ns0:p>&#8226; Whether the lead author was affiliated with the United States <ns0:ref type='bibr'>[Gargouri et al., 2010, Peng and</ns0:ref><ns0:ref type='bibr' target='#b46'>Zhu, 2012</ns0:ref>].</ns0:p><ns0:p>&#8226; Whether any of the coauthors was affiliated with one of the top 50 universities per www.topuniversities.com (27% of papers) or with a top company (Google, Microsoft, Yahoo!, or Facebook; 18% of papers), based on the definitions of a similar study <ns0:ref type='bibr' target='#b58'>[Tomkins et al., 2017]</ns0:ref>.</ns0:p><ns0:p>&#8226; Whether all the coauthors with a known affiliation came from industry.</ns0:p><ns0:p>&#8226; The gender of the first author <ns0:ref type='bibr' target='#b15'>[Frachtenberg and Kaner, 2021]</ns0:ref>.</ns0:p><ns0:p>&#8226; The sum of the total past publications of all coauthors of the paper <ns0:ref type='bibr' target='#b4'>[Bjarnason and Sigfusdottir, 2002]</ns0:ref>.</ns0:p><ns0:p>&#8226; The maximum h-index of all coauthors <ns0:ref type='bibr' target='#b33'>[Hurley et al., 2014</ns0:ref>].</ns0:p><ns0:note place='foot' n='2'>Papers with no eprint available at the time of this writing were assigned an arbitrary time to eprint of 1,000 months, but the regression analysis was not particularly sensitive to this choice.</ns0:note>
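The modeling workflow described in this section, from the baseline linear model through BIC-guided covariate selection to the conference-level random effect introduced below, has roughly the following shape in R. This is a sketch with assumed variable names, not the study's actual analysis script.

# Sketch of the multilevel modeling workflow; 'papers' column names are assumptions.
library(lmerTest)   # provides lmer() with Satterthwaite-approximated p-values

d <- subset(papers, citations > 0)            # drop zero-citation papers before log-transforming

base <- lm(log(citations) ~ artifact, data = d)
exp(coef(base)[["artifactTRUE"]])             # multiplicative citation effect of releasing an artifact
BIC(base)

# Add paper-level covariates and let BIC-guided stepwise selection prune them (k = log(n)).
full <- lm(log(citations) ~ artifact + words + coauthors + references +
             colon_title + award + months_to_eprint, data = d)
sel <- step(full, direction = "both", k = log(nrow(d)), trace = 0)

# Final form: paper-level fixed effects plus a per-conference random intercept.
mixed <- lmer(log(citations) ~ artifact + references + coauthors + colon_title +
                award + months_to_eprint + (1 | conference), data = d)
summary(mixed)                       # Satterthwaite p-values via lmerTest
exp(fixef(mixed)[["artifactTRUE"]])  # e.g., exp(0.29) is about 1.34, i.e., roughly a third more citations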
<ns0:p>Only the maximum h-index and top-university affiliation had statistically significant coefficients, but hardly affected the overall model. 3 These minimal changes may not justify the increased complexity and reduced data size of the new model (because of missing data), so for the remainder of the analysis we ignore author-related factors and proceed with the previous model.</ns0:p><ns0:p>Finally, we add the last level: venue factors. Conference (or journal) factors, such as the conference's own prestige and competitiveness, can have a large effect on citations, as discussed in the previous section. Although we can approximate some of these factors with metrics in the dataset, there may also be other unknown or qualitative conference factors that we cannot model. Instead, to account for conference factors, we next build a mixed-effects model, where all the previously mentioned factors become fixed effects and the conference becomes a random effect <ns0:ref type='bibr'>[Roback and Legler, 2021, Ch. 8</ns0:ref>].</ns0:p><ns0:p>This last model does indeed reduce the relative effect of artifact release on citations to a coefficient of 0.29 (95% confidence interval: 0.2-0.39). But this coefficient still represents a relative citation increase of about a third for papers with released artifacts (34%), which is significant. We can approximate a p-value for this coefficient via Satterthwaite's degrees of freedom method using R's lmerTest package <ns0:ref type='bibr' target='#b39'>[Kuznetsova et al., 2017]</ns0:ref>, which is also statistically significant, at 3.5825 &#215; 10 &#8722;10 . The parameters for this final model are enumerated in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The only difference in paper-level factors is that award availability has replaced word count as a significant predictor, but realistically, both have a negligible effect on citations.</ns0:p></ns0:div> <ns0:div><ns0:head>Causality</ns0:head><ns0:p>Even with all these controls, we observe a strong association between artifact release and citations. This association may still not suffice to claim causation due to hidden variables <ns0:ref type='bibr' target='#b40'>[Lewis, 2018]</ns0:ref>, but it supports the hypothesis that releasing artifacts can indeed improve the prospects of a systems research paper to achieve wider acceptance, recognition, and scientific impact.</ns0:p><ns0:p>One implication of this model is that even if we assume no causal relation between artifact sharing and higher citation counts, the association is strong enough to justify a change in future scientometric studies of citations. Such studies often attempt to control for various confounders when attempting to explain or predict citations, and this strong link suggests that at least for experimental and data-driven sciences, the sharing of research artifacts should be included as an explanatory variable.</ns0:p><ns0:p>That said, there may be a case for a causal explanation with a clear direction after all. First, the model controls for many of the confounding variables identified in the literature, so the possibility of hidden, explanatory variables is diminished. Second, there is a clear temporal relationship between artifact sharing and citations. Artifact sharing invariably accompanies the publication of a paper, while its citations invariably follow months or years afterwards.
It is therefore plausible to expect that citation counts are influenced by artifact sharing, and not the other way around.</ns0:p><ns0:p>If we do indeed assume causality between the two, then an important, practical implication also arises from this model, especially for authors wishing to increase their work's citations. There are numerous factors that authors cannot really control, such as their own demographic factors, but fortunately, these turn out to have insignificant effects on citations. Even authors' choice of a venue to publish in, which does influence citations, can be constrained by paper length, scope match, dates and travel, and most importantly, the peer-review process that is completely outside of their control. But among the citation factors that authors can control, the most influential one turns out to be the sharing of research artifacts.</ns0:p><ns0:p>A causal link would then provide a simple lever for systems authors to improve their citations by an average of some 34%: share and link any available research artifacts. Presumably, authors attempting to maximize impact already work hard to achieve a careful study design, elaborate engineering effort, a well-written paper, and acceptance at a competitive conference. Planning for and releasing their research artifact should be a relatively minor incremental effort that could improve their average citation count. If we additionally assume causality in the link between higher artifact sharing rates and acceptance to more competitive conferences, the effect on citations can be compounded.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>RELATED WORK</ns0:head><ns0:p>This paper investigates the relationship between research artifacts and citations in the computer systems field. This relationship has been receiving increasing attention in recent years for CS papers in general. For example, a new study on software artifacts in CS research observed that while the artifact sharing rate is increasing, the bidirectional links between artifacts and papers do not always exist or last very long, as we have also found <ns0:ref type='bibr' target='#b24'>[Hata et al., 2021]</ns0:ref>. Some of the reasons that researchers struggle to reproduce experimental results and reuse research code from scientific papers are the continuously changing software and hardware, lack of common APIs, stochastic behavior of computer systems, and a lack of a common experimental methodology <ns0:ref type='bibr' target='#b20'>[Fursin, 2021]</ns0:ref>, as well as copyright restrictions <ns0:ref type='bibr' target='#b55'>[Stodden, 2008]</ns0:ref>.</ns0:p><ns0:p>Software artifacts have often been discussed in the context of their benefits for open, reusable, and reproducible science <ns0:ref type='bibr' target='#b23'>[Hasselbring et al., 2019]</ns0:ref>. Such results have led more CS organizations and conferences to increase adoption of artifact sharing and evaluation, including a few of the conferences evaluated in this paper <ns0:ref type='bibr' target='#b1'>[Baker, 2016a</ns0:ref><ns0:ref type='bibr' target='#b10'>, Dahlgren, 2019</ns0:ref><ns0:ref type='bibr' target='#b25'>, Hermann et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b52'>, Saucez et al., 2019]</ns0:ref>. One recent study specifically examined the benefit of software artifacts for higher citation counts <ns0:ref type='bibr' target='#b26'>[Heum&#252;ller et al., 2020]</ns0:ref>.
Another study looked at artifact evaluation for CS papers and found a small but positive correlation with higher citation counts for papers between <ns0:ref type='bibr'>2013</ns0:ref><ns0:ref type='bibr'>and 2016</ns0:ref><ns0:ref type='bibr'>[Childers and Chrysanthis, 2017]</ns0:ref>.</ns0:p><ns0:p>When analyzing the relationship between artifact sharing and citations, one must be careful to consider the myriad possibilities for confounding factors, as we have in our mixed-effects model. Many such factors have been found to be associated with higher citation counts. Some examples relating to author demographics include the authors' gender <ns0:ref type='bibr' target='#b14'>[Frachtenberg and</ns0:ref><ns0:ref type='bibr'>Kaner, 2021, Tahamtan et al., 2016]</ns0:ref>, country of residence <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b46'>, Peng and Zhu, 2012</ns0:ref><ns0:ref type='bibr' target='#b48'>, Puuska et al., 2014]</ns0:ref>, affiliation <ns0:ref type='bibr' target='#b58'>[Tomkins et al., 2017]</ns0:ref>, and academic reputation metrics <ns0:ref type='bibr'>[Hurley et al., 2014, Bjarnason and</ns0:ref><ns0:ref type='bibr' target='#b4'>Sigfusdottir, 2002]</ns0:ref>.</ns0:p><ns0:p>Other factors were associated with the publishing journal or conference, such as the relative quality of the article and the venue <ns0:ref type='bibr' target='#b41'>[McCabe and Snyder, 2015]</ns0:ref>, and others still related to the papers themselves, such as characteristics of the titles and abstracts, characteristics of the references, and paper length <ns0:ref type='bibr' target='#b56'>[Tahamtan et al., 2016]</ns0:ref>.</ns0:p><ns0:p>Among the many paper-related factors studied in relation to citations is the paper's text availability, which our data shows to be also linked with artifact availability. There exists a rich literature examining the association between a paper's own accessibility and higher citation counts, the so-called 'OA advantage' <ns0:ref type='bibr' target='#b3'>[Bernius and Hanauske, 2009</ns0:ref><ns0:ref type='bibr' target='#b11'>, Davis and Walters, 2011</ns0:ref><ns0:ref type='bibr' target='#b54'>, Sotudeh et al., 2015</ns0:ref><ns0:ref type='bibr'>, Wagner, 2010]</ns0:ref>.</ns0:p><ns0:p>For example, Gargouri et al. found that articles whose authors have supplemented subscription-based access to the publisher's version with a freely accessible self-archived version are cited significantly more than articles in the same journal and year that have not been made open <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010]</ns0:ref>. A few more recent studies and reviews not only corroborated the OA advantage, but also found that the proportion of OA research is increasing rapidly <ns0:ref type='bibr' target='#b5'>[Breugelmans et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b18'>, Fu and Hughey, 2019</ns0:ref><ns0:ref type='bibr' target='#b42'>, McKiernan et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b56'>, Tahamtan et al., 2016]</ns0:ref>.
The actual amount by which open access improves citations is unclear, but one recent study found the number to be approximately 18% <ns0:ref type='bibr' target='#b47'>[Piwowar et al., 2018]</ns0:ref>, which means that higher paper accessibility on its own is not enough to explain all of the citation advantage we identified for papers with available artifacts.</ns0:p><ns0:p>Turning our attention specifically to the field of systems, we might expect that sharing and reproducing many software-based experiments would be both unimpeded and imperative <ns0:ref type='bibr' target='#b34'>[Ince et al., 2012]</ns0:ref>. But instead we find that many artifacts are not readily available or buildable <ns0:ref type='bibr' target='#b9'>[Collberg and Proebsting, 2016</ns0:ref><ns0:ref type='bibr' target='#b17'>, Freire et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b26'>, Heum&#252;ller et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b38'>, Krishnamurthi and Vitek, 2015]</ns0:ref>. A few observational studies looked at artifact sharing rates in specific subfields of systems, such as software engineering <ns0:ref type='bibr'>[Childers and Chrysanthis, 2017</ns0:ref><ns0:ref type='bibr' target='#b26'>, Heum&#252;ller et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b57'>, Timperley et al., 2021]</ns0:ref> and computer architecture <ns0:ref type='bibr' target='#b19'>[Fursin and Lokhmotov, 2011]</ns0:ref>, but none that we are aware of have looked across the entire field.</ns0:p></ns0:div> <ns0:div><ns0:p>Without directly comparable information on artifact availability rates in all of systems or in other fields, it is impossible to tell whether the overall rate of papers with artifacts in our dataset, 27.7%, is high or low. However, within the six conferences that evaluated artifacts, 42.86% of papers released an artifact, a rate very similar to the &#8776; 40% found in a study of a smaller subset of systems conferences with an artifact evaluation process <ns0:ref type='bibr'>[Childers and Chrysanthis, 2017]</ns0:ref>.</ns0:p><ns0:p>In general, skimming the papers in our dataset revealed that many 'systems' papers do in fact describe the implementation of a new computer system, mostly in software. It is plausible that the abundance of software systems in these papers and the relative ease of releasing them as software artifacts contribute directly to this sharing rate, in addition to conference-level factors.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>Several studies across disparate fields found a positive association between the sharing of research artifacts and increased citation of the research work. In this cross-sectional study of computer systems research, we also observed a strong statistical relationship between the two, although there are numerous potential confounding and explanatory variables for increased citations. Still, even when controlling for various paper-related and conference-related factors, we observe that papers with shared artifacts receive approximately a third more citations than papers without.</ns0:p><ns0:p>Citation metrics are a controversial measure of a work's quality, impact, and importance, and perhaps should not represent the sole or primary motivation for authors to share their artifacts.
Instead, authors and readers may want to focus on the clear and important benefits to science in general, and to the increased reproducibility and credibility of their work in particular. If increased citation counts are not enough to persuade more systems authors to share their artifacts, perhaps conference organizers can leverage their substantial influence to motivate them. Although artifact evaluation can represent a nontrivial additional burden on the program committee, our data shows that it does promote higher rates of artifact sharing.</ns0:p><ns0:p>While many obstacles to universal sharing of artifacts still remain, the field of computer systems does have the advantage that many, if not most, of its artifacts come in the form of software, which is much easier to share than artifacts in other experimental fields. It is therefore not surprising that we find the majority of shared and extant artifacts in computer systems hosted on github.com, a highly accessible source-code sharing platform. That said, a high artifact sharing rate is not enough for the goals of reproducible science, since many of the shared artifacts in our dataset have since expired or been difficult to locate.</ns0:p><ns0:p>Our analysis found that both the findability and accessibility of systems artifacts can decay significantly even after only a few years, especially when said artifacts are not hosted in dedicated open and free repositories. Conference organizers could likely improve both aspects by requiring, and perhaps offering, standardized tools, techniques, and repositories, in addition to the sharing itself. The ACM has taken significant steps in this direction by not only standardizing various artifact badges, but also offering its own supplementary material repository in its digital library. A few conferences in our dataset, like SC, are taking another step in this direction by also requesting a standardized artifact description appendix and review for every technical paper, including a citable link to the research artifacts.</ns0:p><ns0:p>To evaluate the impact of such efforts, we must look beyond the findability and accessibility of artifacts, which is what this study examined. In future work, this analysis can be expanded to the two remaining aspects of the FAIR principles: interoperability and reusability, possibly by incorporating input from the artifact review process itself. The hope is that as the importance and awareness of research artifacts grows in computer systems research, many more conferences will require and collect this information, facilitating not only better, reproducible research, but also a better understanding of the nuanced effects of software artifact sharing.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3.
Density plot of paper citations 42 months after publication (log-scale)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>System conferences, including start date, number of published papers, total number of named authors, and acceptance rate.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Conference</ns0:cell><ns0:cell>Date</ns0:cell><ns0:cell cols='3'>Papers Authors Acceptance</ns0:cell><ns0:cell>Conference</ns0:cell><ns0:cell>Date</ns0:cell><ns0:cell cols='3'>Papers Authors Acceptance</ns0:cell></ns0:row><ns0:row><ns0:cell>ICDM</ns0:cell><ns0:cell>2017-11-19</ns0:cell><ns0:cell>72</ns0:cell><ns0:cell>268</ns0:cell><ns0:cell cols='2'>0.09 PACT</ns0:cell><ns0:cell>2017-09-11</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>89</ns0:cell><ns0:cell>0.23</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD</ns0:cell><ns0:cell>2017-08-15</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>237</ns0:cell><ns0:cell cols='2'>0.09 SPAA</ns0:cell><ns0:cell>2017-07-24</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGMETRICS</ns0:cell><ns0:cell>2017-06-05</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell cols='2'>0.13 MASCOTS</ns0:cell><ns0:cell>2017-09-20</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>75</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGCOMM</ns0:cell><ns0:cell>2017-08-21</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>216</ns0:cell><ns0:cell cols='2'>0.14 CCGrid</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>72</ns0:cell><ns0:cell>296</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell>SP</ns0:cell><ns0:cell>2017-05-22</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>287</ns0:cell><ns0:cell cols='2'>0.14 PODC</ns0:cell><ns0:cell>2017-07-25</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell>PLDI</ns0:cell><ns0:cell>2017-06-18</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>173</ns0:cell><ns0:cell cols='2'>0.15 CLOUD</ns0:cell><ns0:cell>2017-06-25</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>NDSS</ns0:cell><ns0:cell>2017-02-26</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>327</ns0:cell><ns0:cell cols='2'>0.16 Middleware</ns0:cell><ns0:cell>2017-12-11</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>NSDI</ns0:cell><ns0:cell>2017-03-27</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>203</ns0:cell><ns0:cell cols='2'>0.16 EuroPar</ns0:cell><ns0:cell>2017-08-30</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>179</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>IMC</ns0:cell><ns0:cell>2017-11-01</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>124</ns0:cell><ns0:cell cols='2'>0.16 PODS</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>0.29</ns0:cell></ns0:row><ns0:row><ns0:cell>ISCA</ns0:cell><ns0:cell>2017-06-24</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>295</ns0:cell><ns0:cell cols='2'>0.17 ICPP</ns0:cell><ns0:cell>2017-08-14</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>234</ns0:cell><ns0:cell>0.29</ns0:cell></ns0:row><ns0:row><ns0:cell>SOSP</ns0:cell><ns0:cell>2017-10-29</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>217</ns0:cell><ns0:cell cols='2'>0.17 
ISPASS</ns0:cell><ns0:cell>2017-04-24</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>ASPLOS</ns0:cell><ns0:cell>2017-04-08</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>247</ns0:cell><ns0:cell cols='2'>0.18 Cluster</ns0:cell><ns0:cell>2017-09-05</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>273</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>CCS</ns0:cell><ns0:cell>2017-10-31</ns0:cell><ns0:cell>151</ns0:cell><ns0:cell>589</ns0:cell><ns0:cell cols='2'>0.18 OOPSLA</ns0:cell><ns0:cell>2017-10-25</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>232</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>HPDC</ns0:cell><ns0:cell>2017-06-28</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell cols='2'>0.19 HotOS</ns0:cell><ns0:cell>2017-05-07</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>0.31</ns0:cell></ns0:row><ns0:row><ns0:cell>MICRO</ns0:cell><ns0:cell>2017-10-16</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>306</ns0:cell><ns0:cell cols='2'>0.19 ISC</ns0:cell><ns0:cell>2017-06-18</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>MobiCom</ns0:cell><ns0:cell>2017-10-17</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>164</ns0:cell><ns0:cell cols='2'>0.19 HotCloud</ns0:cell><ns0:cell>2017-07-10</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>ICAC</ns0:cell><ns0:cell>2017-07-18</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell cols='2'>0.19 HotI</ns0:cell><ns0:cell>2017-08-28</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>SC</ns0:cell><ns0:cell>2017-11-13</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>325</ns0:cell><ns0:cell cols='2'>0.19 SYSTOR</ns0:cell><ns0:cell>2017-05-22</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>CoNEXT</ns0:cell><ns0:cell>2017-12-13</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>145</ns0:cell><ns0:cell cols='2'>0.19 ICPE</ns0:cell><ns0:cell>2017-04-22</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGMOD</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>335</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>HotStorage</ns0:cell><ns0:cell>2017-07-10</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>94</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>PPoPP</ns0:cell><ns0:cell>2017-02-04</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>122</ns0:cell><ns0:cell cols='2'>0.22 IISWC</ns0:cell><ns0:cell>2017-10-02</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell>HPCA</ns0:cell><ns0:cell>2017-02-04</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>215</ns0:cell><ns0:cell cols='2'>0.22 CIDR</ns0:cell><ns0:cell>2017-01-08</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>213</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>EuroSys</ns0:cell><ns0:cell>2017-04-23</ns0:cell><ns0:cell>41</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell cols='2'>0.22 VEE</ns0:cell><ns0:cell>2017-04-09</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>85</ns0:cell><ns0:cell>0.42</ns0:cell></ns0:row><ns0:row><ns0:cell>ATC</ns0:cell><ns0:cell>2017-07-12</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>279</ns0:cell><ns0:cell cols='2'>0.22 
SLE</ns0:cell><ns0:cell>2017-10-23</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>0.42</ns0:cell></ns0:row><ns0:row><ns0:cell>HiPC</ns0:cell><ns0:cell>2017-12-18</ns0:cell><ns0:cell>41</ns0:cell><ns0:cell>168</ns0:cell><ns0:cell cols='2'>0.22 HPCC</ns0:cell><ns0:cell>2017-12-18</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>287</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGIR</ns0:cell><ns0:cell>2017-08-07</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>264</ns0:cell><ns0:cell cols='2'>0.22 HCW</ns0:cell><ns0:cell>2017-05-29</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>FAST</ns0:cell><ns0:cell>2017-02-27</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell cols='2'>0.23 SOCC</ns0:cell><ns0:cell>2017-09-25</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>195</ns0:cell><ns0:cell>Unknown</ns0:cell></ns0:row><ns0:row><ns0:cell>IPDPS</ns0:cell><ns0:cell>2017-05-29</ns0:cell><ns0:cell>116</ns0:cell><ns0:cell>447</ns0:cell><ns0:cell cols='2'>0.23 IGSC</ns0:cell><ns0:cell>2017-10-23</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>Unknown</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Class of artifact URLs. 'NA' locations indicate expired or unreleased URLs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Location</ns0:cell><ns0:cell>Count</ns0:cell></ns0:row><ns0:row><ns0:cell>Repository</ns0:cell><ns0:cell>477</ns0:cell></ns0:row><ns0:row><ns0:cell>Academic</ns0:cell><ns0:cell>81</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>67</ns0:cell></ns0:row><ns0:row><ns0:cell>ACM</ns0:cell><ns0:cell>44</ns0:cell></ns0:row><ns0:row><ns0:cell>Filesharing</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>NA</ns0:cell><ns0:cell>47</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Median citations by class of artifact URLs for extant artifactsOne commonly used set of principles to assess research software artifacts is termed FAIR: findability, accessibility, interoperability, and reusability[Hong et al., 2021, Wilkinson et al., 2016]. We have overviewed the findability aspect of artifacts in the statistics of how many of these were linked or found via a Web search. The reusability and interoperability of artifacts unfortunately cannot be assessed with the current data. 
But we can address some of our secondary research questions by analyzing the accessibility of artifacts in depth.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Location</ns0:cell><ns0:cell cols='2'>Count Median citations</ns0:cell></ns0:row><ns0:row><ns0:cell>Repository</ns0:cell><ns0:cell>477</ns0:cell><ns0:cell>25</ns0:cell></ns0:row><ns0:row><ns0:cell>Academic</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>27</ns0:cell></ns0:row><ns0:row><ns0:cell>ACM</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Filesharing</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>35</ns0:cell></ns0:row><ns0:row><ns0:cell>3.3 Accessibility</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Estimated parameters for final multilevel mixed-effects model of ln(citations) </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Factor</ns0:cell><ns0:cell>Coefficient p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>Intercept</ns0:cell><ns0:cell>1.74166 6.6e-31</ns0:cell></ns0:row><ns0:row><ns0:cell>Artifact released</ns0:cell><ns0:cell>0.29357 3.6e-10</ns0:cell></ns0:row><ns0:row><ns0:cell>Award given</ns0:cell><ns0:cell>0.00002 4.7e-02</ns0:cell></ns0:row><ns0:row><ns0:cell>Months to eprint</ns0:cell><ns0:cell>-0.00048 8.8e-09</ns0:cell></ns0:row><ns0:row><ns0:cell>References number</ns0:cell><ns0:cell>0.00907 1.3e-08</ns0:cell></ns0:row><ns0:row><ns0:cell>Coauthors number</ns0:cell><ns0:cell>0.06989 1.6e-19</ns0:cell></ns0:row><ns0:row><ns0:cell>Colon in title</ns0:cell><ns0:cell>0.13369 1.4e-03</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='3'>Note that the last publication counts and h-index are correlated (r = 0.6, p &lt; 10 &#8722;9 ), so one may cancel the other out.12/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:1:1:NEW 30 Oct 2021)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:1:1:NEW 30 Oct 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"REED COLLEGE Department of Mathematics and Computer Science 3203 SE Woodstock Boulevard, Portland, Oregon 97202-8199phone: 503/459-4659 October 29, 2021 Dear Editor, Thank you for providing three detailed reviews of my submission 'Software artifacts and citations in computer systems papers'. I found all reviews to be thoughtful and constructive, and attempted to address every single suggestion. I truly appreciate the opportunity to improve the paper's content and clarity. To this end, I have made the following changes to the paper, based on your and the reviewers' suggestions: • • • • • • Added to the introduction the primary research goals and hypothesis, as well as a summary of secondary findings. The introduction now also attempts to define more clearly the scope of the paper, main terms, the data, the study population, and sample investigated. Wrote a new related work section. Generated a DOI for the supplementary material (code and data) (doi: 10.5281/zenodo.5590575). Explained the choice for the 42-month citation duration. Edited extensively the text to address typos, grammar, clarity, and style issues. Moved the discussion of accessibility into the Results section, and refocused the discussion section on the main question of citations and artifacts. This section has been expanded to discuss the ramifications of the research findings. In the next few pages, I address the reviewers comments point-by-point in detail. I hope you and the reviewers find this version acceptable for publication at PeerJ/CS. Sincerely, Eitan Frachtenberg Visiting Associate Professor of Computer Science Reed College, OR, USA. Detailed response Editor comments (Daniel de Oliveira) > Overall, all reviewers agree that the research topic is relevant. However, they think that there are several points to improve in your article: (i) Summarize the main findings of the study in the Introduction, (ii) Clearly define the research goals/ hypotheses, (iii) Create a Related Work section, (iv) State the study population and sample investigated, (v) Define the protocol of the study, (vi) Promote the Data/Software Availability with their own DOI at Zenodo or OSF. In summary, the author should clearly define the design of his study to support his findings. Please consider these comments in your new version. All six of these additions were made to the paper. Reviewer 1 (Anonymous) > I find the English acceptable for publication. However, I suggest adding some definitions and unifying some concepts. I found it difficult to understand whether some expressions were used interchangeably or not, which turned out to make it difficult to realize the scope of the paper regarding the type of software included in the study. In particular, * Is “computer systems” used as a synonym of “computer science”, “software”, “system software”, some of them or none? * Does system software correspond to software used as a platform to develop other software including operating systems, software languages and/or utility software? * Is the paper about system software in general or only system software created as part of a research process? Is the paper about software artifacts or research artifacts (which also include, for instance, data)? * What is the field the author refers to? Is it computer science, computer systems, system software, research software? The terms “computer systems” and “system software” are now clearly defined in the introduction. 
I’ve also clarified that the paper is about all research artifacts, but in practice, most systems research artifacts are in the form of software. > Some typos: * Line 11, “research’s results”, possibly no need of the possessive form * Line 60, “are valued an are an important contribution”, possibly “and” rather than “an” * Line 443 and 444, “many—-f not most—”, possible “if” rather than “f” I suggest a full proof-reading to catch and fix those issues. Thank you for catching these. The paper has been fully proofread. > 1.2. Literature references, sufficient field background/context provided. The article is accompanied with a good number of references. However, the author should double check the citation information. For instance * Line 472, reference ACM, URL not working (https://www.acm.org/publications/policies/artifact-review473 and-badging-current). * Whenever possible, even for web pages, please include information about the author and published date. * The reference used for FAIR applied to research software (line 319, Katz et al., 2021) corresponds to a (partial) outcome of a Research Data Alliance Working Group and could be updated to the now first formal output produced by this group (see DOI:10.15497/RDA00065 and corresponding PDF https://www.rd-alliance.org/system/files/FAIR4RS_Principles_v0.3_RDA-RFC.pdf) * I suggest also including a reference to the code supporting this publication. I know it is provided as supplementary material but getting a permanent identifier together with citation data for this particular release is also possible (and slowly becoming a common practice). I would also like to see the GitHub pages working and linked to from the publication (possibly with a note regarding differences that might appear as the repo evolves post-publication). The bibliography has been thoroughly revised and expanded with the new “Related work” section. A DOI to the supplementary code and data is now provided and cited. Reviewer 2 (Anonymous) > Basic reporting The list of detailed research questions presented in the Introduction contrasts with the regular scope of an initial section. These questions should be properly justified and connected, which should be done in a further section. Alternatively, I suggest the authors expanding the 'interesting' question (line 66) by exemplifying other possible indicators that would be influenced by artifact sharing. I suggest the authors summarize the main findings of the study in the Introduction. The structure of the introduction and the results section has been modified to accommodate these suggestions. The introduction now also includes a summary of findings for each research question. > The author cites several references addressing open science and initiatives promoting artifact sharing, which address the research background. However, I missed a section discussing related work on the topic. For instance, previous investigations/discussions on the maturity of open science in the field. A new related work section has been added. > I suggest presenting the Figure 1 data using tables or grouping conferences by frequency. Yet regarding Figure 1, it is not clear to me the relevance of identifying the 'Organization' of the conferences. For instance, several conferences continuously alternate between ACM and IEEE. The designation of organization was specifically for the year 2017, from which all the papers are collected. > I understand Table 1 should adopt a better-contextualized criterion than alphabetical order for listing conferences. 
For instance, the author may use the number of papers or the acceptance rate. The table has been reorganized as suggested. > Each conference has a variable time of running/publication, which may even lead to its publishing in the next year. I would like to understand how the author reached '42 months of publication' for all publications and why he opted by following this gold number. To normalize all citations to an identical duration since proceedings availability, I chose the number of months that have passed since the last conference in December 2017 for which citation data was collected in June 2021. That’s 3.5 years or 42 months. > I was expecting a user-friendly dataset available from a research paper addressing the artifacts' availability in research papers. Experimental design Before reading about the study results (Section 3), I was expecting a Section reporting the study design. After reading the entire section of 'Data and Methods' (Section 2), I could not find any reference to the research goals/ hypotheses. Besides, the author should properly address the study population and sample investigated. The introduction now includes a section on study design and main findings. > The text of section 2 is too monolithic and hard to follow. It does not address the basic content expected from a study design section. After reading it, I only got a vague idea about what the authors had done (execution) and the main data types they collected. The methodology section has been edited for clarity. > It is not clear the criteria followed by the author to reach the 'hand-curated collection' of 56 peer-reviewed 'systems conferences.' Even if the sample was established by convenience from google scholar, the authors should let it clear in the paper. I’ve clarified the selection criteria in the paper. They are a combination of my subjective judgment, based on myresearch experience in systems, and Google Scholar metrics. > I could not reach what does the author mean by 'systems conferences.' Besides, I am afraid about the lack of representativeness of the addressed in the study. For instance, I could found only one relevant conference from the Software Engineering field. The introduction now defines the term “systems conferences” more precisely. The sample may never be representative, but it is expansive and includes many papers from each subfield. > It is not clear the methodology followed for identifying artifacts in the papers. Besides, I understand that artifacts available in research papers should also address their quality. I recommend the authors planning and performing an evaluation in this direction. Validity of the findings The artifacts were identified by manual skimming of every paper, as well as Web searches for artifacts for papers that do not explicitly mention artifact availability. The text of the paper should now hopefuly clarify this. > Section 3.2 illustrates my difficulty in understanding the research methodology. It is a subsection about results. However, it starts by presenting a new research question: 'does the open availability of an artifact affect the citations of a paper in computer systems?'. Besides, this RQ is presented as 'the main research question of this paper.' Contrastingly, several of the eight research questions presented in the paper Introduction are barely (or clearly) addressed in the paper. The new organization of the introduction and results section should hopefully make understanding easier. 
There is now a clearer relationship between the questions posed in the introduction and the answers presented in the results section. > The findings of the paper are not discussed. For instance, what are the possible implications of these findings for the practice of sharing artifacts? Which are possible ways for filling the gaps observed? The section 4.1 'Discussion' basically summarize the several statistical tests and analysis performed (4.1). However, even here I could not see a clear connection among them. Besides, without a proper research plan, I could not get why all these tests and analyses were performed. There are several new paragraphs closing the revised discussion section that now address the implications of these findings. Reviewer 3 (Stian Soiland-Reyes) > As the article points out, the availability of software artefacts is important - however this article itself does not have an explicit 'Data and software availability' section - which would be very valuable given the data quality and reproducibility provided for this manuscript. I would also expect the data to be archived in Zenodo with a DOI rather than be a fluctual GitHub reference - I know the author have also uploaded a snapshot of said GitHub repository along with the article - but this would not have been detected by their own measures. As the data seems to be used for multiple articles under review, a separate version-less Zenodo DOI would be able to bring these articles together for describing different aspects of the same data (e.g. using Related Identifiers mechanism in Zenodo).Only one of these other articles are cited from this manuscript. A new specific DOI for the paper’s code and data is now cited in the paper. > The findings are based on a large subste of the CS System conferences - perhaps more could be added on why these particular conferences (e.g. practicallity of access, notability, familiarity to the authors). The authors have done a good selection time-wise (all considered data in 2017) in order to consider citations building over time. The introduction now tries to explain better the selection criteria for the conferences. "
Here is a paper. Please give your review comments after reading it.
342
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Research in computer systems often involves the engineering, implementation, and measurement of complex systems software. The availability of these software artifacts is critical to the reproducibility and replicability of the research results because system software often embodies numerous implicit assumptions and parameters that are not fully documented in the research article itself. Artifact availability has also been previously associated with higher paper impact, as measured by citations counts. And yet, the sharing of research artifacts is still not as common as warranted by its importance.</ns0:p><ns0:p>The primary goal of this study is to provide an exploratory statistical analysis of the artifact-sharing rates and associated factors in the research field of computer systems. To this end, we explore a crosssectional dataset of papers from 56 contemporaneous systems conferences. In addition to extensive data on the conferences, papers, and authors, this dataset includes data on the release, ongoing availability, badging, and locations of research artifacts. We combine this manually curated data with recent citation counts to evaluate the relationships between different artifact properties and citation metrics. Additionally, we revisit previous observations from other fields on the relationships between artifact properties and various other characteristics of papers, authors, and venue and apply them to this field.</ns0:p><ns0:p>The overall rate of artifact sharing we find in this dataset is approximately 30%, although it varies significantly with paper, author, and conference factors, and it is closer to 43% for conferences that actively evaluated artifact sharing. Approximately 20% of all shared artifacts are no longer accessible four years after publications, predominately when hosted on personal and academic websites. Our main finding is that papers with shared artifacts averaged approximately 75% more citations than papers with none. Even after controlling for numerous confounding covariates, the release of an artifact appears to increase the citations of a systems paper by some 34%. This metric is further boosted by the open availability of the paper's text.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>For this study, the most critical piece of information on these papers is their artifacts. Unfortunately, 153 most papers included no standardized metadata with artifact information, so it had to be collected manually 154 from various sources, as detailed next.</ns0:p><ns0:p>155</ns0:p><ns0:p>The only form of standardized artifact metadata was found for the subset of conferences organized by 156 the ACM with artifact badge initiatives. In the proceedings page in the ACM's digital library of these</ns0:p></ns0:div> <ns0:div><ns0:head>4/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>expired. From the availability of a URL, we can classify an artifact as released or unreleased (the latter denoting papers that promised an artifact but no link or repository was found). 
And from the host domain of the URL we can classify the location of the artifact as either an Academic web page, the ACM digital library, a Filesharing service such as Dropbox or Google, a specialized Repository such as github.com, Other (including .com and .org web sites), or NA.</ns0:p><ns0:p>In all, 722 papers in our dataset (29.6%) had an identifiable or promised artifact, primarily source code but occasionally data, configuration, or benchmarking files. Artifacts that had been included in previous papers or written by someone other than the paper's authors were excluded from this count. This statistic only reflects artifact availability, not quality, since evaluating artifact quality is both subjective and time-consuming. It is worth noting, however, that most of the source-code repositories in these artifacts showed no development activity-commits, forks, or issues-after the publication of their paper, suggesting limited activity for the artifacts alone.</ns0:p></ns0:div> <ns0:div><ns0:head>Data-collection procedure</ns0:head><ns0:p>The following list summarizes the data-collection process for reproducibility purposes.</ns0:p><ns0:p>1. Visit the website and proceedings of each conference and record general information about the conference: review policy, open-access, rebuttal policy, acceptance rate, program committee, etc.</ns0:p><ns0:p>2. Also from these sources, manually copy the following information for each paper: title, author names, and award status (as noted on the website and in proceedings).</ns0:p><ns0:p>3. Double-check all paper titles and author names by comparing conference website and postconference proceedings. Also compare titles to GS search results and ensure all papers are (eventually) discovered by GS with the title corrected as necessary. Finally, check the same titles and author names against the Semantic Scholar database and resolve any discrepancies.</ns0:p><ns0:p>4. Download the full text of each paper in PDF format via institutional digital library access.</ns0:p><ns0:p>5. Record all papers with artifact badges. These are unique to the ACM conferences in our dataset and are clearly shown both in the ACM digital library and in the PDF copy of such papers.</ns0:p><ns0:p>6. Collect and record GS citation counts for each paper as close to possible to exactly 42 months after the conference's opening day. The dataset includes citation counts for each paper across multiple time points, but the analysis in this paper only uses one data point per paper, closest to to the selected duration.</ns0:p><ns0:p>7. Record artifact availability and links for papers. 
This is likely the most time-consuming and errorprone process in the preparation of the data specific to this study and involves the following steps:</ns0:p><ns0:p>Using a search tool on each document ('pdfgrep') on each of the search terms listed above and perusing the results to identify any links or promises to artifacts; skimming or reading papers with negative results to ensure such a link was not accidentally missed; Finally, searching github.com for specific system names if a paper describes one, even if not linked directly from the paper.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistics</ns0:head><ns0:p>For statistical testing, group means were compared pairwise using Welch's two-sample t-test; differences between distributions of two categorical variables were tested with &#967; 2 test; and comparisons between two numeric properties of the same population were evaluated with Pearson's product-moment correlation.</ns0:p><ns0:p>All statistical tests are reported with their p-values.</ns0:p></ns0:div> <ns0:div><ns0:head>Ethics statement</ns0:head><ns0:p>All of the data for this study was collected from public online sources and therefore did not require the informed consent of the papers' authors.</ns0:p></ns0:div> <ns0:div><ns0:head>Code and data availability</ns0:head><ns0:p>The complete dataset and metadata are available in the supplementary material, as well as a github repository <ns0:ref type='bibr' target='#b14'>[Frachtenberg, 2021]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>5/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='3'>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Descriptive statistics</ns0:head><ns0:p>Before addressing our main research question, we start with a simple characterization of the statistical distributions of artifacts in our dataset. Of the 722 papers with artifacts, we find that about 79.5% included an actual link to the artifact in the text. The ACM digital library marked 88 of artifact papers (12.2%) with</ns0:p><ns0:p>an 'Artifact available' badge, and 89 papers (12.3%) with an 'Artifact evaluated' badge. The majority of artifact papers (86.7%) still had their artifacts available for download at the time of this writing. This ratio is somewhat similar to a comparable study that found that 73% of URLs in five open-access (OA) journals after five years <ns0:ref type='bibr' target='#b53'>[Saberi and Abedi, 2012]</ns0:ref>. Of the 722 papers that promised artifacts, 47 appear to have never released them. The distribution of the location of the accessible artifacts is shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, and is dominated by Github repositories.</ns0:p><ns0:p>Looking at the differences across conferences, Fig. <ns0:ref type='figure'>1</ns0:ref> shows the percentage of papers with artifacts per conference, ranging from 0% for ISCA, IGSC, and HCW to OOPSLA's 78.79% (mean: 27.22%, SD: 19.32%). Unsurprisingly, nearly all of the conferences where artifacts were evaluated are prominent in their relatively high artifact rates. Only PACT stands out as a conference that evaluated artifacts but had a lower-than-average overall ratio of papers with artifacts (0.24). The MobiCom conference also shows a distinctly low ratio, 0.09, despite actively encouraging artifacts. 
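For reference, the per-conference artifact rates summarized in Fig. 1 can be recomputed directly from the released dataset. The following is only a minimal sketch: the file and column names (papers.csv, conference, artifact_released) are assumptions for illustration, not the actual schema of the supplementary data.

```python
import pandas as pd

# Load the paper-level table; file and column names are assumed for illustration.
papers = pd.read_csv("papers.csv")

# Fraction of papers per conference that released an artifact (cf. Fig. 1).
rates = (papers.groupby("conference")["artifact_released"]
               .mean()
               .sort_values(ascending=False))

print(rates.head(10))             # highest-sharing conferences
print(rates.mean(), rates.std())  # compare with the reported 27.22% mean and 19.32% SD
```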
It should be noted, however, that many papers in PACT and MobiCom are hardware-related, where artifacts are typically unfeasible.</ns0:p><ns0:p>The same is true for a number of other conferences with low artifact ratios, such as ISCA, HPCA, and MICRO. Also worth noting is the fact that ACM conferences appear to attract many more artifacts than IEEE conferences, although the reasons likely vary on a conference-by-conference basis.</ns0:p><ns0:p>Another indicator for artifact availability is author affiliation. As observed in other systems papers, industry-affiliated authors typically face more restrictions for sharing artifacts <ns0:ref type='bibr' target='#b9'>[Collberg and Proebsting, 2016]</ns0:ref>, likely because the artifacts hold commercial or competitive ramifications <ns0:ref type='bibr' target='#b37'>[Ince et al., 2012]</ns0:ref>. In our dataset, only 19.3% of the 109 papers where all authors had an industry affiliation also released an artifact, compared to 28.1% for the other papers (&#967; 2 = 3.6, p = 0.06).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Relationships to citations</ns0:head><ns0:p>Turning now to our main research hypothesis, we ask: does the open availability of an artifact affect the citations of a paper in systems? To answer this question, we look at the distribution of citations for each paper 42 months after its conference's opening day, when its proceedings presumably were published. 1</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows the overall paper distribution as a histogram, while Fig. <ns0:ref type='figure'>3</ns0:ref> breaks down the distributions of artifact and non-artifact papers as density plots.</ns0:p><ns0:p>Citations range from none at all (49 papers) to about a thousand, with two outlier papers exceeding 2,000 citations <ns0:ref type='bibr' target='#b7'>[Carlini and</ns0:ref><ns0:ref type='bibr'>Wagner, 2017, Jouppi et al., 2017]</ns0:ref>. The distributions appear roughly lognormal. The mean citations per paper with artifacts released was 50.7, compared to 29.1 with none (t = 4.07, p &lt; 10 &#8722;4 ). Since the citation distribution is so right-skewed, it makes sense to also compare the median citations with and without artifacts (25 vs. 13, W = 767709, p &lt; 10 &#8722;9 ).</ns0:p><ns0:p>Papers with released artifacts also fared better than papers that only promised an artifact that could later not be found (t = 3.82, p &lt; 10 &#8722;3 ), and extant artifacts fared better than expired ones (t = 4.17, p &lt; 10 &#8722;4 ).</ns0:p><ns0:p>In contradistinction, some positive attributes of artifacts were actually associated with fewer citations.</ns0:p><ns0:p>For example, the mean citations of the 573 papers with a linked artifact, 47, was much lower than the 71.3 mean for the 102 papers with artifacts we found using a Web search (t = &#8722;2.02, p = 0.04; W = 22865, p &lt; 10 &#8722;3 ). Curiously, the inclusion of a link in the paper, presumably making the artifact more accessible, was associated with fewer citations.</ns0:p><ns0:p>Similarly counter-intuitively, papers that received an 'Artifact evaluated' badge fared worse in citations than artifact papers that did not (t = &#8722;3.32, p &lt; 0.01; W = 11932.5, p = 0.03). Papers that received an 'Artifact available' badge fared slightly worse than artifact papers that did not (t = &#8722;1.45, p = 0.15; W = 26050.5, p = 0.56).
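The group comparisons reported in this section (Welch's t-test on means and the rank-based Wilcoxon/Mann-Whitney test on the skewed citation counts) can be reproduced with standard tools. This is only an illustrative sketch, not the authors' analysis code; the column names, including the badge flag badge_evaluated, are assumptions.

```python
import pandas as pd
from scipy import stats

papers = pd.read_csv("papers.csv")                     # assumed file and column names
cites_with    = papers.loc[papers["artifact_released"] == True,  "citations"]
cites_without = papers.loc[papers["artifact_released"] == False, "citations"]

# Welch's two-sample t-test on mean citations (unequal variances).
print(stats.ttest_ind(cites_with, cites_without, equal_var=False))

# Rank-based Mann-Whitney U (comparable to the Wilcoxon W reported in the text),
# better suited to the right-skewed citation distribution.
print(stats.mannwhitneyu(cites_with, cites_without, alternative="two-sided"))

# The same two tests can be repeated on subsets, e.g. badged vs. non-badged
# papers among those that released an artifact.
artifact_papers = papers[papers["artifact_released"] == True]
badged     = artifact_papers.loc[artifact_papers["badge_evaluated"] == True,  "citations"]
not_badged = artifact_papers.loc[artifact_papers["badge_evaluated"] == False, "citations"]
print(stats.ttest_ind(badged, not_badged, equal_var=False))
```

A rank-based test is reported alongside the t-test because the heavy right tail of citation counts makes mean comparisons sensitive to a handful of highly cited outliers.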
These findings appear to contradict the premise that such badges are associated with increased artifact sharing, as has been found in other fields <ns0:ref type='bibr' target='#b1'>[Baker, 2016a]</ns0:ref>.</ns0:p><ns0:p>Finally, we can also break down the citations per paper grouped by the type of location for the artifact and by its organization, examining medians because of the outsize effects of outliers (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). The three major location categories do not show significant differences in citations, and the last two categories may be too small to ascribe statistical significance to their differences.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Accessibility</ns0:head><ns0:p>One commonly used set of principles to assess research software artifacts is termed FAIR: findability, accessibility, interoperability, and reusability <ns0:ref type='bibr'>[Hong et al., 2021</ns0:ref><ns0:ref type='bibr'>, Wilkinson et al., 2016]</ns0:ref>. We have overviewed the findability aspect of artifacts in the statistics of how many of these were linked or found via a Web search. The reusability and interoperability of artifacts unfortunately cannot be assessed with the current data. But we can address some of our secondary research questions by analyzing the accessibility of artifacts in depth.</ns0:p><ns0:p>As mentioned previously, 13.3% of released artifacts are already inaccessible, a mere &#8776; 3.5 years after publication. Most of the artifacts in our dataset were published in code repositories, predominantly github, that do not guarantee persistent access or even universal access protocols such as digital object identifiers (DOI). However, only 2.3% of the 'Repository' artifacts were inaccessible. In contrast, 22.2% of the artifacts in university pages have already expired, likely because they had been hosted by students or faculty that have since moved elsewhere. Also, a full 50% of the artifacts on file-sharing sites such as Dropbox or Google Drive are no longer there, possibly because these are paid services or free to a limited capacity, and can get expensive to maintain over time.</ns0:p><ns0:p>Accessibility is also closely related to the findability of the artifact, which in the absence of artifact DOIs in our dataset, we estimate by looking at the number of papers that explicitly link to their artifacts.</ns0:p><ns0:p>The missing (expired) artifacts consisted of a full 31.1% of the papers with no artifact link, compared to only 8.7% for papers that linked to them (&#967; 2 = 49.15, p &lt; 10 &#8722;9 ).</ns0:p><ns0:p>Another related question to artifact accessibility is how accessible is the actual paper that introduced the artifact, which may itself be associated with higher citations <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b43'>, McCabe and Snyder, 2015</ns0:ref><ns0:ref type='bibr' target='#b44'>, McKiernan et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b59'>, Tahamtan et al., 2016]</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Moreover, our dataset includes not only the availability of an eprint link on GS, but also the approximate duration since publication (in months) that it took GS to display this link, offering a quantitative measure of accessibility speed as well. 
It shows that for papers with artifacts, GS averaged approximately 4 months post-publication to display a link to an eprint, compared to 5.8 months for papers with no artifacts (t = &#8722;5.48, p &lt; 10 &#8722;7 ). Both of these qualitative and quantitative differences are statistically significant, but keep in mind that the accessibility of papers and artifacts are not independent: some conferences that encouraged artifacts were also open-access, particularly those with the ACM. Another dependent covariate with accessibility is citations; several studies suggested that accessible papers are better cited <ns0:ref type='bibr' target='#b3'>[Bernius and Hanauske, 2009</ns0:ref><ns0:ref type='bibr' target='#b45'>, Niyazov et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b55'>, Snijder, 2016]</ns0:ref>, although others disagree <ns0:ref type='bibr' target='#b6'>[Calver and Bradley, 2010</ns0:ref><ns0:ref type='bibr' target='#b11'>, Davis and Walters, 2011</ns0:ref><ns0:ref type='bibr' target='#b43'>, McCabe and Snyder, 2015]</ns0:ref>. This dependence may explain part of the higher citability of papers with artifacts, as elaborated next.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Covariate analysis</ns0:head><ns0:p>Having addressed the relationships between artifacts and citations, we can now explore relationships between additional variables from this expansive dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1'>Awards</ns0:head><ns0:p>Many conferences present competitive awards, such as 'best paper,' 'best student paper,' 'community award,' etc. Of the 2439 total papers, 4.7% received at least one such award. Papers with artifacts are disproportionately represented in this exclusive subset (39.5% vs. 27.1% in non-award papers; &#967; 2 = 7.71,</ns0:p><ns0:formula xml:id='formula_0'>p &lt; 0.01).</ns0:formula><ns0:p>Again, it is unclear whether this relationship is causal since the two covariates are not entirely independent. For example, a handful of awards specifically evaluated the contribution of the paper's artifact. Even if the relationship is indeed causal, its direction is also unclear, since 20% of award papers with artifacts did not link to it in the paper. It is possible that these papers released their artifacts after winning the award or because of it.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.2'>Textual properties</ns0:head><ns0:p>Some of the textual properties of papers can be estimated from their full text using simple command-line tools. Our dataset includes three such properties: the length of each paper in words, the number of references it cites, and the existence of a system's moniker in the paper's title.</ns0:p><ns0:p>The approximate paper length in words and the number of references turn out to be positively associated with the release of an artifact. Papers with artifacts average more pages than papers without (13.98 vs. 12.4; t = 8.24, p &lt; 10 &#8722;9 ), more words (11757.36 vs. 10525.22; t = 7.86, p &lt; 10 &#8722;9 ), and more references (32.31 vs. 28.71; t = 5.25, p &lt; 10 &#8722;6 ). 
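As a rough illustration of how such textual properties can be extracted and tested, the sketch below detects 'labeled' titles of the form 'Moniker: Short description' and computes two of the associations discussed in this subsection. The regular expression and column names are illustrative assumptions, not the exact heuristics used to build the dataset.

```python
import pandas as pd
from scipy import stats

papers = pd.read_csv("papers.csv")   # assumed schema: title, fulltext_words, n_references, artifact_released, ...

# "Labeled" titles of the form "Moniker: Short description"; the regex is an assumption.
papers["labeled_title"] = papers["title"].str.match(r"^\S+:\s+\S")

# Association between a labeled title and artifact release (chi-squared test).
table = pd.crosstab(papers["labeled_title"], papers["artifact_released"])
print(stats.chi2_contingency(table))

# Longer papers tend to cite more references (Pearson correlation).
print(stats.pearsonr(papers["fulltext_words"], papers["n_references"]))
```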
Keep in mind, however, that longer papers also correspond to more references (r = 0.48, p &lt; 10 &#8722;9 ), and are further confounded with specific conference factors such as page limits.</ns0:p><ns0:p>As previously mentioned, many systems papers introduce a new computer system, often as software.</ns0:p><ns0:p>Sometimes, these papers name their system by a moniker, and their title starts with the moniker, followed by a colon and a short description (e.g., 'Widget: An Even Faster Key-Value Store'). This feature is easy to extract automatically for all paper titles.</ns0:p><ns0:p>We could hypothesize that a paper that introduces a new system, especially a named system, would be more likely to include an artifact with the code for this system, quite likely with the same repository name.</ns0:p><ns0:p>Our data support this hypothesis. The ratio of artifacts released in papers with a labeled title, 41.9%, is nearly double that of papers without a labeled title, 22.8% (&#967; 2 = 84.23, p &lt; 10 &#8722;9 ).</ns0:p><ns0:p>The difficulty to ascribe any causality to these textual relationships could mean that there is little insight to be gained from them. But they can clue the paper's reader to the possibility of an artifact, even if one is not linked in the paper. Indeed, they accelerated the manual search for such unlinked artifacts during the curation of the data for this study.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.3'>Conference prestige</ns0:head><ns0:p>Next, we look at conference-specific covariates that could represent how well-known or competitive a conference is. In addition to textual conference factors, these conference metrics may also be associated with higher rates of artifact release.</ns0:p><ns0:p>Several proxy metrics for prestige appear to support this hypothesis. Papers with released artifacts tend to appear in conferences that average a lower acceptance rate (0.21 vs. 0.24; t = &#8722;6.28, p &lt; 10 &#8722;9 ), more paper submissions (360.5 vs. 292.45; t = 6.33, p &lt; 10 &#8722;9 ), higher historical mean citations per 10/20</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science paper (16.6 vs. 14.96; t = 3.09, p &lt; 0.01), and a higher h5-index from GS metrics (46.04 vs. 41.04; t = 6.07, p &lt; 10 &#8722;8 ). Also note that papers in conferences that offered some option for author response to peer review (often in the form of a rebuttal) were slightly more likely to include artifacts, perhaps as a response to peer review (&#967; 2 = 2.03, p = 0.15).</ns0:p><ns0:p>To explain these relationships, we might hypothesize that a higher rate of artifact submission would be associated with more reputable conferences, either because artifact presence contributes to prestige, or because more rigorous conferences are also more likely to expect such artifacts. Observe, however, that some of the conferences that encourage or require artifacts are not as competitive as the others.</ns0:p><ns0:p>For example, OOPSLA, with the highest artifact rate, had an acceptance rate of 0.3, and SLE, with the fourth-highest artifact rate, had an acceptance rate of 0.42. 
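These prestige proxies can also be examined at the conference level, for example by correlating each conference's artifact rate with its acceptance rate or h5-index. This is a complementary, illustrative view rather than the paper-level comparison reported above, and the file and column names are assumptions.

```python
import pandas as pd
from scipy import stats

papers = pd.read_csv("papers.csv")          # assumed paper-level table
confs  = pd.read_csv("conferences.csv")     # assumed per-conference table: acceptance_rate, h5_index, ...

rates = (papers.groupby("conference")["artifact_released"]
               .mean()
               .rename("artifact_rate")
               .reset_index())
merged = confs.merge(rates, on="conference").dropna(subset=["acceptance_rate", "h5_index"])

print(stats.pearsonr(merged["artifact_rate"], merged["acceptance_rate"]))
print(stats.pearsonr(merged["artifact_rate"], merged["h5_index"]))
```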
The implication here is that it may not suffice for a conference to actively encourage artifacts for it to be competitive, but a conference that already is competitive may also attract more artifacts.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5'>Regression model</ns0:head><ns0:p>Finally, we combine all of these factors to revisit in depth our primary research interest: the effect of artifact sharing on citations. We already observed a strong statistical association between artifact release and higher eventual citations. As cautioned throughout this study, such associations are insufficient to draw causal conclusions, primarily because there are many confounding variables, most of which relating to the publishing conference. These confounding factors could provide a partial or complete statistical explanation to differences in citations beyond artifact availability.</ns0:p><ns0:p>In other words, papers published in the same conference might exhibit strong correlations that interact or interfere with our response variable. One such factor affecting paper citations is time since publication, which we control for by measuring all citations at exactly the same interval, 42 months since the conference's official start. Another crucial factor is the field of study-which we control for by focusing on a single field-while providing a wide cross-section of the field to limit the effect of statistical variability.</ns0:p><ns0:p>There are also numerous less-obvious paper-related factors that have shown positive association with citations, such as review-type studies, fewer equations, more references, statistically significant positive results, papers' length, number of figures and images, and even more obscure features such as the presence of punctuation marks in the title. We can attempt to control for these confounding variables when evaluating associations by using a multilevel model. To this end, we fit a linear regression model of citations as a function of artifact availability, and then add predictor variables as controls, observing their effect on the main predictor. The response variable we model for is ln(citations) instead of citations, because of the long tail of their distribution. We also omit the 49 papers with zero citations to improve the linear fit with the predictors.</ns0:p><ns0:p>In the baseline form, fitting a linear model of the log-transformed citations as a function of only artifact released yields an intercept (baseline log citations) of 2.6 and a slope of 0.59, meaning that releasing an artifact adds approximately 81% more citations to the paper, after exponentiation. The p-value for this predictor is exceedingly low (less than 2 &#215; 10 &#8722;16 ) but the simplistic model only explains 4.61% of the variance in citations (Adjusted R 2 =0.046). The Bayesian Information Criterion (BIC) for this model is 7693.289, with 2388 degrees of freedom (df).</ns0:p><ns0:p>We can now add various paper covariates to the linear model in an attempt to get more precise estimates for the artifact released predictor, by iteratively experimenting with different predictor combinations to minimize BIC using stepwise model selection <ns0:ref type='bibr'>[Garc&#237;a-Portugu&#233;s, 2021, Ch. 3</ns0:ref>]. 
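To make the baseline specification concrete, the sketch below fits the same log-transformed model with statsmodels. It is only an approximation under assumed file and column names, not the original analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")                    # assumed file and column names
papers = papers[papers["citations"] > 0].copy()       # omit zero-citation papers, as in the text
papers["log_citations"] = np.log(papers["citations"])
papers["artifact_released"] = papers["artifact_released"].astype(int)

baseline = smf.ols("log_citations ~ artifact_released", data=papers).fit()
coef = baseline.params["artifact_released"]
print(f"relative citation increase: {np.expm1(coef):.0%}")   # exp(0.59) - 1 is roughly 81%
print(baseline.rsquared_adj, baseline.bic)

# Covariates can then be added to the formula and the fits compared by BIC,
# mirroring the stepwise selection described in the text.
richer = smf.ols("log_citations ~ artifact_released + n_references + paper_words",
                 data=papers).fit()
print(richer.bic)
```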
The per-paper factors considered were: paper length (words), number of coauthors, number of references, colon in the title, award given, and accessibility speed (months to eprint 2 ).</ns0:p><ns0:p>It turns out that all these paper-level factors except award given have a statistically significant effect on citations, which brings the model to an increased adjusted R 2 value of 0.285 and a BIC of 7028.07 (df = 2,380). However, the coefficient for artifact released went down to 0.35 (42% relative citation increase) with an associated p-value of 3.8 &#215; 10 &#8722;13 .</ns0:p><ns0:p>Similar to paper variables, some author-related factors such as their academic reputation, country of residence, and gender have been associated with citation count <ns0:ref type='bibr' target='#b59'>[Tahamtan et al., 2016]</ns0:ref>. We next enhance our linear model with the following predictor variables (omitting 451 papers with NA values):</ns0:p><ns0:p>2 Papers with no eprint available at the time of this writing were assigned an arbitrary time to eprint of 1,000 months, but the regression analysis was not particularly sensitive to this choice.</ns0:p></ns0:div> <ns0:div><ns0:head>11/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science &#8226; Whether all the coauthors with a known affiliation came from the same country <ns0:ref type='bibr' target='#b51'>[Puuska et al., 2014]</ns0:ref>.</ns0:p><ns0:p>&#8226; Is the lead author affiliated with the United States <ns0:ref type='bibr'>[Gargouri et al., 2010, Peng and</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhu, 2012</ns0:ref>]?</ns0:p><ns0:p>&#8226; Whether any of the coauthors was affiliated with one of the top 50 universities per www.topuniversities.com (27% of papers) or a top company (if any author was affiliated with either (Google, Microsoft, Yahoo!, or Facebook: 18% of papers), based on the definitions of a similar study <ns0:ref type='bibr' target='#b61'>[Tomkins et al., 2017]</ns0:ref>.</ns0:p><ns0:p>&#8226; Whether all the coauthors with a known affiliation came from industry.</ns0:p><ns0:p>&#8226; The gender of the first author <ns0:ref type='bibr' target='#b15'>[Frachtenberg and Kaner, 2021]</ns0:ref>.</ns0:p><ns0:p>&#8226; The sum of the total past publications of all coauthors of the paper <ns0:ref type='bibr' target='#b4'>[Bjarnason and Sigfusdottir, 2002]</ns0:ref>.</ns0:p><ns0:p>&#8226; The maximum h-index of all coauthors <ns0:ref type='bibr' target='#b36'>[Hurley et al., 2014]</ns0:ref>.</ns0:p><ns0:p>Only the maximum h-index and top-university affiliation had statistically significant coefficients, but hardly affected the overall model. 3 These minimal changes may not justify the increased complexity and reduced data size of the new model (because of missing data), so for the remainder of the analysis, we ignore author-related factors and proceed with the previous model.</ns0:p><ns0:p>We can now add the last level: venue factors. Conference (or journal) factors-such as the conference's own prestige and competitiveness-can have a large effect on citations, as discussed in the previous section. Although we can approximate some of these factors with some metrics in the dataset, there may also be other unknown or qualitative conference factors that we cannot model. 
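One standard way to absorb such unmodeled, conference-level variation is to add a per-conference random intercept, which is essentially the mixed-effects specification described next. The sketch below approximates it with statsmodels' MixedLM under assumed column names; the paper itself reports using R's lmerTest for the corresponding p-values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")                    # assumed file and column names
papers = papers[papers["citations"] > 0].copy()
papers["log_citations"] = np.log(papers["citations"])
papers["artifact_released"] = papers["artifact_released"].astype(int)

# Fixed effects: artifact release plus an illustrative subset of the retained
# paper-level controls; random effect: one intercept per conference.
mixed = smf.mixedlm(
    "log_citations ~ artifact_released + n_references + months_to_eprint + award_given",
    data=papers,
    groups=papers["conference"],
).fit()

print(mixed.summary())   # the artifact_released coefficient corresponds to the reported ~0.29
```

The random intercept lets each conference shift the baseline citation level without spending one fixed-effect degree of freedom per venue.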
Instead, to account for conference factors we next build a mixed-effects model, where all the previously mentioned factors become fixed effects and the conference becomes a random effect <ns0:ref type='bibr'>[Roback and Legler, 2021, Ch. 8</ns0:ref>].</ns0:p><ns0:p>This last model does indeed reduce the relative effect of artifact release on citations to a coefficient of 0.29 (95% confidence interval: 0.2-0.39). But this coefficient still represents a relative citation increase of about a third for papers with released artifacts (34%), which is significant. We can approximate a p-value for this coefficient via Satterthwaite's degrees of freedom method using R's lmerTest package <ns0:ref type='bibr' target='#b40'>[Kuznetsova et al., 2017]</ns0:ref>, which is also statistically significant at 3.5825 &#215; 10 &#8722;10 . The parameters for this final model are enumerated in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>. The only difference in paper-level factors is that award availability has replaced word count as a significant predictor, but realistically, both have a negligible effect on citations.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Implications</ns0:head><ns0:p>The regression model described in the preceding section showed that even with multiple controlling variables we observe a strong association between artifact release and citations. We can therefore ask, does this association allow for any causal or practical inferences? This association may still not suffice to claim causation due to hidden variables <ns0:ref type='bibr' target='#b41'>[Lewis, 2018]</ns0:ref>, but it does support the hypothesis that releasing artifacts can indeed improve the prospects of a systems research paper to achieve wider acceptance, recognition, and scientific impact.</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>One implication of this model is that even if we assume no causal relation between artifact sharing and higher citation counts, the association is strong enough to justify a change in future scientometric studies of citations. Such studies often attempt to control for various confounders when attempting to explain or predict citations, and this strong link suggests that at least for experimental and data-driven sciences, the sharing of research artifacts should be included as an explanatory variable.</ns0:p><ns0:p>That said, there may be a case for a causal explanation with a clear direction after all. First, the model controls for many of the confounding variables identified in the literature, so the possibility of hidden, explanatory variables is diminished. Second, there is a clear temporal relationship between artifact sharing and citations. Artifact sharing invariably accompanies the publication of a paper, while its citations invariable follow months or years afterward. It is therefore plausible to expect that citation counts are influenced by artifact sharing and not the other way around.</ns0:p><ns0:p>If we do indeed assume causality between the two, then an important, practical implication also arises from this model, especially for authors wishing to increase their work's citations. There are numerous factors that authors cannot really control, such as their own demographic factors, but fortunately, these turn out to have insignificant effects on citations. 
Even authors' choice of a venue to publish in, which does influence citations, can be constrained by paper length, scope match, dates and travel, and most importantly, the peer-review process that is completely outside of their control. But among the citation factors that authors can control, the most influential one turns out to be the sharing of research artifacts.</ns0:p><ns0:p>A causal link would then provide a simple lever for systems authors to improve their citations by an average of some 34%: share and link any available research artifacts. Presumably, authors attempting to maximize impact already work hard to achieve a careful study design, elaborate engineering effort, a well-written paper, and acceptance at a competitive conference. The additional effort of planning for and releasing their research artifact should be a relatively minor incremental effort that could improve their average citation count. If we additionally assume causality in the link between higher artifact sharing rates and acceptance to more competitive conferences, the effect on citations can be compounded.</ns0:p><ns0:p>Other potential implications of our findings mostly agree with our intuition and with previous findings in related studies, as described in the next section. For example, all other things being equal, papers with open access and with long-lasting artifacts receive more citations.</ns0:p><ns0:p>Two factors that do not appear to have a positive impact on citations, at least in our dataset, are the receipt of artifact badges or the linking of artifacts in the paper. This is unfortunate because it implicitly discourages standardized or searchable metadata on artifacts, which is critical for studies on their effect, as described next.</ns0:p></ns0:div> <ns0:div><ns0:head>Threats to validity</ns0:head><ns0:p>Perhaps the greatest challenge in performing this study or in replicating it is the fact that good metadata on research artifacts is either nonexistent or nonstandard. There is currently no automated or even manual methodology to reliably discover which papers shared artifacts, how were they shared, and how long did they survive. There are currently several efforts underway to try to standardize artifact metadata and citation, but for this current study, the validity and scalability of the analysis hinge on the quality of the manual process of data collection.</ns0:p><ns0:p>One way to address potential human errors in data collection and tagging is to collect a sizeable dataset-as was attempted in this dataset-so that such errors disappear in the statistical noise. Although a large-enough number of artifacts was identified for statistical analysis, there likely remain untagged papers in the dataset that did actually release an artifact (false negatives). Nevertheless, there is no evidence to suggest that their number is large or that their distribution is skewed in some way as to bias statistical analyses. Moreover, since the complete dataset is (naturally) released as an artifact of this paper, it can be enhanced and corrected over time.</ns0:p><ns0:p>Additionally, there is the possibility of errors in the manual process of selecting conferences, importing data about papers and authors, disambiguating author names, and identifying the correct citation data on GS. 
In the data-collection process, we have been careful to cross-validate the data we input against the one found in the official proceedings of each conference, as well as the data that GS recorded, and reconciled any differences we found.</ns0:p><ns0:p>Citation metrics were collected from the GS database because it includes many metrics and allows for manual verification of the identity of each author by linking to their homepage. This database is not without its limitations, however. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science overcount publications and citations <ns0:ref type='bibr' target='#b23'>[Halevi et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b24'>, Harzing and Alakangas, 2016</ns0:ref><ns0:ref type='bibr' target='#b42'>, Martin-Martin et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b58'>, Sugimoto and Lariviere, 2018]</ns0:ref>. The name disambiguation challenge was addressed by manually verifying the GS profiles of all researchers and ensuring that they include the papers from our dataset. Ambiguous profiles were omitted from our dataset. As for citation over-counting, note that the absolute number of citations is immaterial to this analysis, only the difference between papers with and without artifacts. Assuming GS overcounts both classes of papers in the same way, it should not materially change the conclusions we reached.</ns0:p><ns0:p>Our dataset also does not include data specific to self-citations. Although it is possible that papers with released artifacts have different self-citations characteristics, thus confounding the total citation count, there is no evidence to suggest such a difference. This possibility certainly opens up an interesting research question for future research, using a citation database with reliable self-citation information (unlike GS).</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>RELATED WORK</ns0:head><ns0:p>This paper investigates the relationship between research artifacts and citations in the computer systems field. This relationship has been receiving increasingly more attention in recent years for CS papers in general. For example, a new study on software artifacts in CS research observed that while artifact sharing rate is increasing, the bidirectional links between artifacts and papers do not always exist or last very long, as we have also found <ns0:ref type='bibr' target='#b26'>[Hata et al., 2021]</ns0:ref>. Some of the reasons that researchers struggle to reproduce experimental results and reuse research code from scientific papers are the continuously changing software and hardware, lack of common APIs, stochastic behavior of computer systems, and a lack of a common experimental methodology <ns0:ref type='bibr' target='#b20'>[Fursin, 2021]</ns0:ref>, as well as copyright restrictions <ns0:ref type='bibr' target='#b57'>[Stodden, 2008]</ns0:ref>.</ns0:p><ns0:p>Software artifacts have often been discussed in the context of their benefits for open, reusable, and reproducible science <ns0:ref type='bibr' target='#b25'>[Hasselbring et al., 2019]</ns0:ref>. Such results have led more CS organizations and conferences to increase adoption of artifact sharing and evaluation, including a few of the conferences evaluated in this paper <ns0:ref type='bibr' target='#b1'>[Baker, 2016a</ns0:ref><ns0:ref type='bibr' target='#b10'>, Dahlgren, 2019</ns0:ref><ns0:ref type='bibr' target='#b27'>, Hermann et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b54'>, Saucez et al., 2019]</ns0:ref>. 
One recent study examined specifically the benefit of software artifacts for higher citation counts <ns0:ref type='bibr' target='#b28'>[Heum&#252;ller et al., 2020]</ns0:ref>. Another study looked at artifact evaluation for CS papers and found a small but positive correlation with higher citations counts for papers between <ns0:ref type='bibr'>2013</ns0:ref><ns0:ref type='bibr'>and 2016</ns0:ref><ns0:ref type='bibr'>[Childers and Chrysanthis, 2017]</ns0:ref>.</ns0:p><ns0:p>When analyzing the relationship between artifact sharing and citations, one must be careful to consider the myriad possibilities for confounding factors, as we have in our mixed-effects model. Many such factors have been found to be associated with higher citation counts. Some examples relating to the author demographics include the authors' gender <ns0:ref type='bibr' target='#b14'>[Frachtenberg and</ns0:ref><ns0:ref type='bibr'>Kaner, 2021, Tahamtan et al., 2016]</ns0:ref>, country of residence <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b48'>, Peng and Zhu, 2012</ns0:ref><ns0:ref type='bibr' target='#b51'>, Puuska et al., 2014]</ns0:ref>, affiliation <ns0:ref type='bibr' target='#b61'>[Tomkins et al., 2017]</ns0:ref>, and academic reputation metrics <ns0:ref type='bibr'>[Hurley et al., 2014, Bjarnason and</ns0:ref><ns0:ref type='bibr' target='#b4'>Sigfusdottir, 2002]</ns0:ref>.</ns0:p><ns0:p>Other factors were associated with the publishing journal or conference, such as the relative quality of the article and the venue <ns0:ref type='bibr' target='#b43'>[McCabe and Snyder, 2015]</ns0:ref> and others still related to the papers themselves, such as characteristics of the titles and abstracts, characteristics of references, and length of paper <ns0:ref type='bibr' target='#b59'>[Tahamtan et al., 2016]</ns0:ref>.</ns0:p><ns0:p>Among the many paper-related factors studied in relation to citations is the paper's text availability, which our data shows to be also linked with artifact availability. there exists a rich literature examining the association between a paper's own accessibility and higher citation counts, the so-called 'OA advantage' <ns0:ref type='bibr' target='#b3'>[Bernius and Hanauske, 2009</ns0:ref><ns0:ref type='bibr' target='#b11'>, Davis and Walters, 2011</ns0:ref><ns0:ref type='bibr' target='#b56'>, Sotudeh et al., 2015</ns0:ref><ns0:ref type='bibr'>, Wagner, 2010]</ns0:ref>.</ns0:p><ns0:p>For example, Gargouri et al. found that articles whose authors have supplemented subscription-based access to the publisher's version with a freely accessible self-archived version are cited significantly more than articles in the same journal and year that have not been made open <ns0:ref type='bibr' target='#b22'>[Gargouri et al., 2010]</ns0:ref>. A few other more recent studies and reviews not only corroborated the OA advantage but also found that the proportion of OA research is increasing rapidly <ns0:ref type='bibr' target='#b5'>[Breugelmans et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b18'>, Fu and Hughey, 2019</ns0:ref><ns0:ref type='bibr' target='#b44'>, McKiernan et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b59'>, Tahamtan et al., 2016]</ns0:ref>. 
The actual amount by which open access improves citations is unclear, but one recent study found the number to be approximately 18% <ns0:ref type='bibr' target='#b50'>[Piwowar et al., 2018]</ns0:ref>, which means that higher paper accessibility on its own is not enough to explain all of the citation advantage we identified for papers with available artifacts. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science instead we find that many artifacts are not readily available or buildable <ns0:ref type='bibr' target='#b9'>[Collberg and Proebsting, 2016</ns0:ref><ns0:ref type='bibr' target='#b17'>, Freire et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b28'>, Heum&#252;ller et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b39'>, Krishnamurthi and Vitek, 2015]</ns0:ref>. A few observational studies looked at artifact sharing rates in specific subfields of systems, such as software engineering <ns0:ref type='bibr'>[Childers and Chrysanthis, 2017</ns0:ref><ns0:ref type='bibr' target='#b28'>, Heum&#252;ller et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b60'>, Timperley et al., 2021]</ns0:ref> and computer architecture <ns0:ref type='bibr' target='#b19'>[Fursin and Lokhmotov, 2011]</ns0:ref>, but none that we are aware of have looked across the entire field.</ns0:p><ns0:p>Without directly comparable information on artifact availability rates in all of systems or in other fields, it is impossible to tell whether the overall rate of papers with artifacts in our dataset, 27.7%, is high or low. However, within the six conferences that evaluated artifacts, 42.86% of papers released an artifact, a very similar rate to the &#8776; 40% rate found in a study of of a smaller subset of systems conferences with an artifact evaluation process <ns0:ref type='bibr'>[Childers and Chrysanthis, 2017]</ns0:ref>.</ns0:p><ns0:p>In general, skimming the papers in our dataset revealed that many 'systems' papers do in fact describe the implementation of a new computer system, mostly in software. It is plausible that the abundance of software systems in these papers and the relative ease of releasing them as software artifacts contributes directly to this sharing rate, in addition to conference-level factors.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>Several studies across disparate fields found a positive association between the sharing of research artifacts and increased citation of the research work. In this cross-sectional study of computer systems research, we also observed a strong statistical relationship between the two, although there are numerous potential confounding and explanatory variables to increased citations. Still, even when controlling for various paper-related and conference-related factors, we observe that papers with shared artifacts receive approximately one-third more citations than papers without.</ns0:p><ns0:p>Citation metrics are a controversial measure of a work's quality, impact, and importance, and perhaps should not represent the sole or primary motivation for authors to share their artifacts. Instead, authors and readers may want to focus on the clear and important benefits to science in general, and to the increased reproducibility and credibility of their work in particular. If increased citation counts are not enough to incent more systems authors to share their artifacts, perhaps conference organizers can leverage their substantial influence to motivate authors. 
Although artifact evaluation can represent a nontrivial additional burden on the program committee, our data shows that it does promote higher rates of artifact sharing.</ns0:p><ns0:p>While many obstacles to the universal sharing of artifacts still remain, the field of computer systems does have the advantage that many--if not most-of its artifacts come in the form of software, which is much easier to share than artifacts in other experimental fields. It is therefore not surprising that we find the majority of shared and extant artifacts in computer systems hosted on github.com, a highly accessible source-code sharing platform. That said, a high artifact sharing rate is not enough for the goals of reproducible science, since many of the shared artifacts in our dataset have since expired or been difficult to locate.</ns0:p><ns0:p>Our analysis found that both the findability and accessibility of systems artifacts can decay significantly even after only a few years, especially when said artifacts are not hosted in dedicated open and free repositories. Conference organizers could likely improve both aspects by requiring-and perhaps offering-standardized tools, techniques, and repositories, in addition to the sharing itself. The ACM has taken significant steps in this direction by not only standardizing various artifact badges but also offering its own supplementary material repository in its digital library. A few conferences in our dataset, like SC, are taking another step in this direction by also requesting a standardized artifact description appendix and review for every technical paper, including a citeable link to the research artifacts.</ns0:p><ns0:p>To evaluate the impact of such efforts, we must look beyond the findability and accessibility of artifacts, as was done in this study. In future work, this analysis can be expanded to the two remaining aspects of the FAIR principles: interoperability and reusability, possibly by incorporating input from the artifact review process itself. The hope is that as the importance and awareness of research artifacts grows in computer systems research, many more conferences will require and collect this information, facilitating not only better, reproducible research, but also a better understanding of the nuanced effects of software artifact sharing.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .Figure 2 .Figure 3 .</ns0:head><ns0:label>123</ns0:label><ns0:figDesc>Figure 1. Papers with artifact by conference</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Turning our attention specifically to the field of systems, we might expect that many software-based experiments should be both unimpeded and imperative to share and reproduce<ns0:ref type='bibr' target='#b37'>[Ince et al., 2012]</ns0:ref>. But14/20PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>System conferences, including start date, number of published papers, total number of named authors, and acceptance rate.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Conference</ns0:cell><ns0:cell>Date</ns0:cell><ns0:cell cols='3'>Papers Authors Acceptance</ns0:cell><ns0:cell>Conference</ns0:cell><ns0:cell>Date</ns0:cell><ns0:cell cols='3'>Papers Authors Acceptance</ns0:cell></ns0:row><ns0:row><ns0:cell>ICDM</ns0:cell><ns0:cell>2017-11-19</ns0:cell><ns0:cell>72</ns0:cell><ns0:cell>269</ns0:cell><ns0:cell cols='2'>0.09 PACT</ns0:cell><ns0:cell>2017-09-11</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>89</ns0:cell><ns0:cell>0.23</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD</ns0:cell><ns0:cell>2017-08-15</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>237</ns0:cell><ns0:cell cols='2'>0.09 SPAA</ns0:cell><ns0:cell>2017-07-24</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGMETRICS</ns0:cell><ns0:cell>2017-06-05</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell cols='2'>0.13 MASCOTS</ns0:cell><ns0:cell>2017-09-20</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>75</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGCOMM</ns0:cell><ns0:cell>2017-08-21</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>216</ns0:cell><ns0:cell cols='2'>0.14 CCGrid</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>72</ns0:cell><ns0:cell>296</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell>SP</ns0:cell><ns0:cell>2017-05-22</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>287</ns0:cell><ns0:cell cols='2'>0.14 PODC</ns0:cell><ns0:cell>2017-07-25</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell>PLDI</ns0:cell><ns0:cell>2017-06-18</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>173</ns0:cell><ns0:cell cols='2'>0.15 CLOUD</ns0:cell><ns0:cell>2017-06-25</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>NDSS</ns0:cell><ns0:cell>2017-02-26</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>327</ns0:cell><ns0:cell cols='2'>0.16 Middleware</ns0:cell><ns0:cell>2017-12-11</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>NSDI</ns0:cell><ns0:cell>2017-03-27</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>203</ns0:cell><ns0:cell cols='2'>0.16 EuroPar</ns0:cell><ns0:cell>2017-08-30</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>179</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>IMC</ns0:cell><ns0:cell>2017-11-01</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>124</ns0:cell><ns0:cell cols='2'>0.16 PODS</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>0.29</ns0:cell></ns0:row><ns0:row><ns0:cell>ISCA</ns0:cell><ns0:cell>2017-06-24</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>295</ns0:cell><ns0:cell cols='2'>0.17 ICPP</ns0:cell><ns0:cell>2017-08-14</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>234</ns0:cell><ns0:cell>0.29</ns0:cell></ns0:row><ns0:row><ns0:cell>SOSP</ns0:cell><ns0:cell>2017-10-29</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>217</ns0:cell><ns0:cell cols='2'>0.17 
ISPASS</ns0:cell><ns0:cell>2017-04-24</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>ASPLOS</ns0:cell><ns0:cell>2017-04-08</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>247</ns0:cell><ns0:cell cols='2'>0.18 Cluster</ns0:cell><ns0:cell>2017-09-05</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>273</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>CCS</ns0:cell><ns0:cell>2017-10-31</ns0:cell><ns0:cell>151</ns0:cell><ns0:cell>589</ns0:cell><ns0:cell cols='2'>0.18 OOPSLA</ns0:cell><ns0:cell>2017-10-25</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>232</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>HPDC</ns0:cell><ns0:cell>2017-06-28</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell cols='2'>0.19 HotOS</ns0:cell><ns0:cell>2017-05-07</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>0.31</ns0:cell></ns0:row><ns0:row><ns0:cell>MICRO</ns0:cell><ns0:cell>2017-10-16</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>306</ns0:cell><ns0:cell cols='2'>0.19 ISC</ns0:cell><ns0:cell>2017-06-18</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>MobiCom</ns0:cell><ns0:cell>2017-10-17</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>164</ns0:cell><ns0:cell cols='2'>0.19 HotCloud</ns0:cell><ns0:cell>2017-07-10</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>ICAC</ns0:cell><ns0:cell>2017-07-18</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell cols='2'>0.19 HotI</ns0:cell><ns0:cell>2017-08-28</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>SC</ns0:cell><ns0:cell>2017-11-13</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>325</ns0:cell><ns0:cell cols='2'>0.19 SYSTOR</ns0:cell><ns0:cell>2017-05-22</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>CoNEXT</ns0:cell><ns0:cell>2017-12-13</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>145</ns0:cell><ns0:cell cols='2'>0.19 ICPE</ns0:cell><ns0:cell>2017-04-22</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGMOD</ns0:cell><ns0:cell>2017-05-14</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>335</ns0:cell><ns0:cell cols='2'>0.20 HotStorage</ns0:cell><ns0:cell>2017-07-10</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>94</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>PPoPP</ns0:cell><ns0:cell>2017-02-04</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>122</ns0:cell><ns0:cell cols='2'>0.22 IISWC</ns0:cell><ns0:cell>2017-10-02</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell>HPCA</ns0:cell><ns0:cell>2017-02-04</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>215</ns0:cell><ns0:cell cols='2'>0.22 CIDR</ns0:cell><ns0:cell>2017-01-08</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>213</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>EuroSys</ns0:cell><ns0:cell>2017-04-23</ns0:cell><ns0:cell>41</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell cols='2'>0.22 VEE</ns0:cell><ns0:cell>2017-04-09</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>85</ns0:cell><ns0:cell>0.42</ns0:cell></ns0:row><ns0:row><ns0:cell>ATC</ns0:cell><ns0:cell>2017-07-12</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>279</ns0:cell><ns0:cell cols='2'>0.22 
SLE</ns0:cell><ns0:cell>2017-10-23</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>0.42</ns0:cell></ns0:row><ns0:row><ns0:cell>HiPC</ns0:cell><ns0:cell>2017-12-18</ns0:cell><ns0:cell>41</ns0:cell><ns0:cell>168</ns0:cell><ns0:cell cols='2'>0.22 HPCC</ns0:cell><ns0:cell>2017-12-18</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>287</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>SIGIR</ns0:cell><ns0:cell>2017-08-07</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>264</ns0:cell><ns0:cell cols='2'>0.22 HCW</ns0:cell><ns0:cell>2017-05-29</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>FAST</ns0:cell><ns0:cell>2017-02-27</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell cols='2'>0.23 SOCC</ns0:cell><ns0:cell>2017-09-25</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>195</ns0:cell><ns0:cell>Unknown</ns0:cell></ns0:row><ns0:row><ns0:cell>IPDPS</ns0:cell><ns0:cell>2017-05-29</ns0:cell><ns0:cell>116</ns0:cell><ns0:cell>447</ns0:cell><ns0:cell cols='2'>0.23 IGSC</ns0:cell><ns0:cell>2017-10-23</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>Unknown</ns0:cell></ns0:row><ns0:row><ns0:cell>152</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>short-to-medium term impact in terms of citations. In practice, this duration is long enough that only 41 151 papers (1.68%)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Class of artifact URLs. 'NA' locations indicate expired or unreleased URLs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Location</ns0:cell><ns0:cell>Count</ns0:cell></ns0:row><ns0:row><ns0:cell>Repository</ns0:cell><ns0:cell>477</ns0:cell></ns0:row><ns0:row><ns0:cell>Academic</ns0:cell><ns0:cell>81</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>67</ns0:cell></ns0:row><ns0:row><ns0:cell>ACM</ns0:cell><ns0:cell>44</ns0:cell></ns0:row><ns0:row><ns0:cell>Filesharing</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>NA</ns0:cell><ns0:cell>47</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Median citations by class of artifact URLs for extant artifacts</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Location</ns0:cell><ns0:cell cols='2'>Count Median citations</ns0:cell></ns0:row><ns0:row><ns0:cell>Repository</ns0:cell><ns0:cell>477</ns0:cell><ns0:cell>25</ns0:cell></ns0:row><ns0:row><ns0:cell>Academic</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>27</ns0:cell></ns0:row><ns0:row><ns0:cell>ACM</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Filesharing</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>35</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Estimated parameters for final multilevel mixed-effects model of ln(citations) </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Factor</ns0:cell><ns0:cell>Coefficient p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>Intercept</ns0:cell><ns0:cell>1.74166 6.6e-31</ns0:cell></ns0:row><ns0:row><ns0:cell>Artifact released</ns0:cell><ns0:cell>0.29357 3.6e-10</ns0:cell></ns0:row><ns0:row><ns0:cell>Award given</ns0:cell><ns0:cell>0.00002 
4.7e-02</ns0:cell></ns0:row><ns0:row><ns0:cell>Months to eprint</ns0:cell><ns0:cell>-0.00048 8.8e-09</ns0:cell></ns0:row><ns0:row><ns0:cell>References number</ns0:cell><ns0:cell>0.00907 1.3e-08</ns0:cell></ns0:row><ns0:row><ns0:cell>Coauthors number</ns0:cell><ns0:cell>0.06989 1.6e-19</ns0:cell></ns0:row><ns0:row><ns0:cell>Colon in title</ns0:cell><ns0:cell>0.13369 1.4e-03</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>It does not always disambiguate author names correctly, and it tends to</ns0:figDesc><ns0:table /><ns0:note>13/20PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot' n='1'>At the time of this writing during summer 2021, the papers from December 2017 had been public for 3.5 years, so this 42-month duration was selected for all papers to normalize the comparison.</ns0:note> <ns0:note place='foot' n='3'>Note that the last publication counts and h-index are correlated (r = 0.6, p &lt; 10 &#8722;9 ), so one may cancel the other out.12/20PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63690:3:0:NEW 8 Jan 2022)</ns0:note> </ns0:body> "
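For readers who want to see how a regression of the kind reported in Table 4 might be set up in practice, the following is a minimal sketch that fits a multilevel mixed-effects model of log-citations with the venue as the random grouping factor. The column names, the grouping by conference, and the use of log1p to accommodate zero-citation papers are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch: multilevel mixed-effects model of log-citations,
# roughly mirroring the fixed effects listed in Table 4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("features.csv")                 # hypothetical per-paper feature table
df["log_citations"] = np.log1p(df["citations"])  # log1p guards against zero counts

model = smf.mixedlm(
    "log_citations ~ artifact_released + award_given + months_to_eprint"
    " + num_references + num_coauthors + colon_in_title",
    data=df,
    groups=df["conference"],                     # papers are nested within conferences
)
result = model.fit()
print(result.summary())                          # fixed-effect coefficients and p-values
```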
"Dear Prof. de Oliveira, Thank you and the reviewer once again for providing these constructive reviews. I've addressed each comment as detailed below, and proofread the entire paper. Reviewer 1 (Anonymous) > The article is presented in a clear way, with definitions and references useful to the reader. I still would suggest a quick proof-reading to double check for typos and so. For instance, an article seems to be missing in the sentence starting as . Thank you for catching this error. I've corrected it (and numerous others) throughout the paper. > First mention of a research artifact (including data, publication, software, etc.) should be accompanied by a reference, even if detailed later (this makes it easier for readers interested in that element to directly go for it rather than looking for the reference somewhere else in the text). This is not the case of the dataset produced together with this study, first mentioned on line 115. I've added the reference as requested. > Regarding the sharing of data and software. For any future occasion, I suggest separating data from software as they might have different licenses (unless using a data version for both is indeed intended). Thank you. This may be a good consideration for future occasions, as you suggest. For this submission, it makes little sense to separate the two, since they are both covered by the same license. More importantly, they are intricately tied together. The dataset includes two types of data: raw data (in the `data/` directory, primarily in JSON format) and clean & tabulated data (in the `features/` directory, in CSV format). The latter is the one used in the analysis of this paper, and it is produced from the former via various scripts in the `src/` directory. If the code even changes, so does the data and the analysis, so it makes more sense to keep them coupled in the same repository. > Some conferences might have the same acronym. I suggest expanding the name and adding a link to their websites (if still available). I've added a new supplementary information document with numerous details on each conference, sorted by acronym. > I suggest making clear from the beginning that the only research artifacts analyzed in this study are in the form of software (or systems software) excluding data, workflows and others. That is not quite the case. I looked at all the research artifacts I could find in repositories and digital libraries. A minority of those were pure data or configuration files (even when hosted on github.com). They are treated equally to the code artifacts in this study. > Step 6 (line 200) needs more detailed explanation. I (randomly) checked the file https://github.com/eitanf/sysconf/blob/master/data/papers/ASPLOS. json from the GitHub repo corresponding to this study, it includes some information. The first paper appears with 21 in Google Scholar but that does not seem to coincide with the in that file or to the in the file https://raw.githubusercontent.com/eitanf/sysconf/master/data/s2pa pers.json. Based on the description on the GitHub repo , I was expecting to find 21 items there for the mentioned paper but I saw more. Again, this was a random and quick look but still it might be worth to add some more information on this step (and maybe the others). I must admit I don't fully understand this comment. I looked up the paper mentioned (ASPLOS_17_001). As of April 8th, 2021, it had 18 citations, not 21 (that is, this paper was cited by 18 other papers on scholar.google.com, visited during 2021-04-08). 
There is no reference to the number 21 for this paper. There are 21 different 'citedby' data points, each from different days when I visited GS to inspect the number of citations at that day. I added a README file to this directory to clarify the structure of these JSON files, but at any rate, they represent raw data and are not used in this analysis. The relevant file for this analysis is `features/citations.csv`, which formats this data in a normalized form. As for the `s2papers.json` outCites variable, it is a different metric collected from a different data source. This metric represents how many other papers this paper cites, collected from Semantic Scholar. It is not currently used because it is inaccurate. For the number of references used in the linear model in this paper, I extracted the number directly from the PDF text of each paper, after cleaning. At any rate, I added some clarification to step 6 in an attempt to lower the confusion. > One of the limitations of this paper that is not discussed at all (it is maybe outside the scope) is exclusive focus on software as research artifact. Sharing and citing data has been encouraged longer than sharing and citing software. One question that arises is whether the analysis on citation count would change across papers only sharing data, only sharing software or sharing both. One of the premises in this study is that paper sharing software is more reproducible and therefore it would be preferred by researchers, i.e., traducing in more citations. I find it difficult to reproduce a paper with data but not software or vice versa. That is indeed a very interesting research question. The current dataset is inadequate to address it, because the vast majority of the artifacts are indeed software. It would require a new dataset of papers and artifacts, likely from a different field, as systems research appears to be much more reliant on code than on data for supporting artifacts. Reviewer 2 (Anonymous) > s reader to the possibility of an artifact, even if one is not linked in the paper.pdfgrep...several search terms were used to assist in identifying artifacts, *such as* , and Here, I am afraid the use of the expression may not let clear whether all terms applied in the searchers are listed (as expected). It may address a reproducibility issue to fix. This list represents all the search terms, not examples. I've edited this sentence. Sincerely, Eitan Frachtenberg. "
Here is a paper. Please give your review comments after reading it.
344
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The global healthcare system is being overburdened by an increasing number of COVID-19 patients. Physicians are having difficulty allocating resources and focusing their attention on high-risk patients, partly due to the difficulty in identifying high-risk patients early. COVID-19 hospitalizations require specialized treatment capabilities and can cause a burden on healthcare resources. Estimating future hospitalization of COVID-19 patients is, therefore, crucial to saving lives. In this paper, an interpretable deep learning model is developed to predict intensive care unit (ICU) admission and mortality of COVID-19 patients. The study comprised of patients from the Stony Brook University Hospital, with patient information such as demographics, comorbidities, symptoms, vital signs, and laboratory tests recorded. The top 3 predictors of ICU admission were Ferritin, diarrhoea, and Alamine Aminotransferase, and the top predictors for mortality were COPD, Ferritin, and Myalgia. The proposed model predicted ICU admission with an AUC score of 88.3% and predicted mortality with an AUC score of 96.3%. The proposed model was evaluated against existing model in the literature which achieved an AUC of 72.8% in predicting ICU admission and achieved an AUC of 84.4% in predicting mortality. It can clearly be seen that the model proposed in this paper shows superiority over existing models. The proposed model has the potential to provide tools to frontline doctors to help classify patients in time-bound and resource-limited scenarios.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Coronavirus is a virus family that causes respiratory tract illnesses and diseases that can be lethal in some situations, such as SARS and COVID-19. SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) is a new type of Coronavirus which began spreading in late 2019 in the Chinese province of Hubei claiming multiple human lives <ns0:ref type='bibr' target='#b18'>(Li et al., 2020a)</ns0:ref>. The novel coronavirus outbreak was declared a Public Health Emergency of International Concern by the World Health Organization (WHO) in January 2020. The infectious disease caused by the novel coronavirus was given the official title, COVID-19 (Coronavirus Disease 2019) by WHO in February 2020, and a COVID-19 Pandemic was announced in March 2020 by the (World Health Organization; WHO Director General) <ns0:ref type='bibr' target='#b2'>(Bogoch et al., 2020)</ns0:ref>. Since then, there has been over 170 million cases with many of them being hospitalized. A staggering 3.8 million people died from the disease with the numbers increasing as this paper is being written. Every patient has a different reaction to the virus, with many of them being asymptomatic and a small percentage getting worse rapidly with their organs failing <ns0:ref type='bibr' target='#b17'>(Leung et al., 2020)</ns0:ref>. The ongoing surge in COVID-19 patients has put a burden on healthcare systems unlike ever before. According to a recent study by <ns0:ref type='bibr' target='#b26'>Pourhomayoun and Shakibi (2021)</ns0:ref>, once the coronavirus outbreak begins, the healthcare system will be overwhelmed in less than four weeks. When a hospitals capacity is exceeded, the death rate rises. 
The repercussions of an extended stay and increased demand for hospital resources as a result of COVID-19 have been disastrous for health systems around the world, necessitating quick clinical judgments, especially when patients. They determined the relative importance of blood panel profile data and discovered that when this data was removed from the equation, the AUC decreased by 0.12 units. It provided useful information in predicting the severity of the disease. <ns0:ref type='bibr' target='#b16'>Ikemura et al. (2021)</ns0:ref> aimed to train various machine learning algorithms using automated machine learning <ns0:ref type='bibr'>(autoML)</ns0:ref>. They chose the model that best estimated how long patients would survive a SARS-CoV-2 infection. They used data that comprised of patients who tested positive for COVID-19 between March 1 and July 3 of the year 2020 with 48 features. The stacked ensemble model (AUPRC=0.807) was the best model generated using autoML. The gradient boost machine and extreme gradient boost models were the two best independent models, with AUPRCs of 0.803 and 0.793, respectively. The deep learning model (AUPRC=0.73) performed significantly worse than the other models. <ns0:ref type='bibr' target='#b19'>Li et al. (2020b)</ns0:ref> used the clinical variables of COVID-19 patients to develop a deep learning model and a risk score system to predict ICU admission and mortality of patients in the hospital. The data consisted of 5,766 patients, with comorbidities, vital signs, symptoms, and laboratory tests, between 7th February, 2020 and 4th May, 2020. AUC score was used to evaluate their models. The deep learning model achieved an AUC of 78% for ICU admission and 84% for mortality with the risk score for ICU admission being 72.8% and 84.8% for mortality. Their model was accurate enough to provide doctors with the tools to stratify patients in limited-resource and time-bound scenarios.</ns0:p></ns0:div> <ns0:div><ns0:head>Current Work</ns0:head><ns0:p>Data sets Test period Objective(Covid-19patients in the hospital) <ns0:ref type='bibr' target='#b21'>Manca et al. (2020)</ns0:ref> Lombardy, Italy ICU hospital admission 21 Feb 2020-27June 2020 Predict ICU beds and mortality rate <ns0:ref type='bibr' target='#b11'>Goic et al. (2021)</ns0:ref> Chile official covid-19 data May 20th 2020-July 28th 2020 Forecast in the short-term, ICU beds availability <ns0:ref type='bibr' target='#b26'>Pourhomayoun and Shakibi (2021)</ns0:ref> Worldwide Covid data from 146 countries December 1, 2019 -February 5th, 2020 Predict the mortality risk in patients <ns0:ref type='bibr' target='#b7'>Fernandes et al. (2021)</ns0:ref> S&#227;o Paulo COVID-19 hospital admission March 1 2020-28 June2020 Predict the risk of developing critical conditions <ns0:ref type='bibr' target='#b31'>Yu et al. (2021)</ns0:ref> Michigan Covid 19 hospital data 1 Feb 2020-4 May 2020 Predict the need for mechanical ventilation and mortality . <ns0:ref type='bibr' target='#b16'>Ikemura et al. (2021)</ns0:ref> Montefiore Medical Center COVID 19 data March 1 2020 -July 3 2020 Predict patients' chances of surviving SARS-CoV-2 infection <ns0:ref type='bibr' target='#b19'>Li et al. (2020b)</ns0:ref> Stony Brook University Hospital COVID hospital data 7 February 2020-4 May 2020.</ns0:p><ns0:p>Predict ICU admission and in-hospital mortality .</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1. 
Summary of existing works</ns0:head><ns0:p>Existing models used in the literature, as summarized in Table <ns0:ref type='table'>1</ns0:ref> perform very well for their respective purposes, however, they have a downside in that they are difficult to interpret. The model lacks interpretability on which patient attributes it uses when making a decision (ICU admission and mortality).</ns0:p><ns0:p>Existing models use various approaches, but the majority of them use neural network models, which are excellent at achieving good results, but their predictions are not traceable. Tracing a prediction back to which features are significant is difficult, and there is no comprehension of how the output is generated. Therefore, this paper proposes the use of an interpretable neural network approach to predict ICU admission likelihood and mortality rate in COVID-19 patients. It employs a deep learning algorithm that can interpret how the model makes decisions and which features the model selects in making the decision. The model has outstanding and comparable results to other neural network models in the literature. The proposed model can be utilized to generate better outcomes when compared to previously published models.</ns0:p></ns0:div> <ns0:div><ns0:head>METHOD</ns0:head><ns0:p>This paper proposes a high-performance and interpretable deep tabular learning architecture, TabNet <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref>, that exploits the benefits of sequential attention (following in a logical order or sequence) to choose features at each decision step which enables interpretability and more efficient learning as the learning capacity is used for the most salient features from the input parameters. The degree to which a human can comprehend the reason for a decision is known as interpretability. The higher the interpretability of the machine or deep learning model, the easier it is for someone to understand why particular decisions or predictions were made. Although neural network models are known to produce excellent results, they have the drawback of being a black box, which means that their predictions are not traceable. It is difficult to trace a prediction back to which features are important, and there is no understanding of how the output was obtained. The interpretabilty here denotes the ability for the model to interpret its decision and shows the features that are the most important in predicting ICU admission and mortality of COVID-19 patients <ns0:ref type='bibr' target='#b10'>(Ghiringhelli, 2021)</ns0:ref> .This section starts by describing how the input data has been pre-processed for the proposed learning model. Then the different components of the proposed model, and the steps it takes to arrive at a decision to predict ICU admission and mortality has been discussed. Finally, the evaluation metrics used to evaluate the model is analyzed. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preprocessing</ns0:head><ns0:p>It is important to pre-process the data before applying it to a machine-learning algorithm. Many preprocessing techniques were applied with each serving a specific purpose. The various pre-processing steps have been discussed below. 
Various sampling methods were experimented with which included Adaptive Synthetic(ADASYN), and SMOTE to deal with the imbalance in the class labels.</ns0:p><ns0:p>ADASYN <ns0:ref type='bibr' target='#b14'>(He et al., 2008)</ns0:ref> </ns0:p><ns0:formula xml:id='formula_0'>G = (m l &#8722; m s ) &#215; &#946; (1)</ns0:formula><ns0:p>where G in is the total number of synthetic data examples for the minority class that must be produced, m l is the minority class, m s is the majority class , the &#946; is used to determine the desired balance level between 0 and 1. SMOTE <ns0:ref type='bibr' target='#b4'>(Chawla et al., 2002)</ns0:ref> is a technique for balancing class distribution by replicating minority class examples at random:</ns0:p><ns0:formula xml:id='formula_1'>x &#8242; = x + rand(0, 1) &#215; |x &#8722; x k | (2)</ns0:formula><ns0:p>where x' is the new generated synthetic data, x is the original data, x k is the kth attribute of the data, and rand represents a random number between 0 and 1.</ns0:p><ns0:p>Feature Extraction is a technique for reducing the number of features in a dataset by generating new ones from existing ones. Principal Component Analysis (PCA), Fast Independent Component Analysis (Fast ICA), Factor Analysis, t-Distributed Stochastic Neighbor Embedding t-SNE(t-SNE), and UMAP are the techniques used for the current dataset. When PCA <ns0:ref type='bibr' target='#b0'>(Abdi and Williams, 2010)</ns0:ref> is used, the original data is taken as input and it gives an output of a mix of input features which can better summarize the original data distribution such that its original dimensions are reduced. By looking at pair-wise distances, PCA can maximize variances while minimizing reconstruction error:</ns0:p><ns0:formula xml:id='formula_2'>cov(X, y) = ( 1 n &#8722; 1 ) n &#8721; i=1 (X i &#8722; x)(Y i &#8722; y) (3)</ns0:formula><ns0:p>where x is the input, and y is the output. cov(x,y) is the covariance matrix after which it is transformed to a new subspace which is y = W'x FAST ICA <ns0:ref type='bibr' target='#b15'>(Hyvarinen, 1999</ns0:ref>) is a linear dimensionality reduction approach that uses the principle of negentropy from maximization of non-gaussian technique as input data and attempts to correctly classify each of them (deleting all the unnecessary noise):</ns0:p><ns0:formula xml:id='formula_3'>&#948; = lg( &#8721; N i=1 )y i .y T i MSE ) (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>)</ns0:formula><ns0:p>where MSE is the mean squared error, and y is the output.</ns0:p><ns0:p>Factor analysis <ns0:ref type='bibr' target='#b12'>(Gorsuch, 2013</ns0:ref>) is a method for compressing a large number of variables into a smaller number of factors. This method takes the highest common variance from all variables and converts it to a single score. t-SNE <ns0:ref type='bibr' target='#b29'>(Wattenberg et al., 2016)</ns0:ref> is a non-linear dimensionality reduction algorithm for high-dimensional data exploration. It converts multi-dimensional data into two or more dimensions that can be visualized by humans:</ns0:p><ns0:formula xml:id='formula_5'>C = KL(P||Q) &#8721; i &#8721; j p i j log( p i j q i j )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where p i j is the joint probability distribution of the features, and q i j | is the t-distribution of the features. KL is the Kullback-Leiber divergence, P and Q are the distribution in space. 
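As a concrete illustration of the oversampling and feature-extraction steps described above, the sketch below uses the imbalanced-learn and scikit-learn implementations of ADASYN/SMOTE and Fast ICA. It is an illustrative reading of the pipeline, not the authors' code; X, y, and the number of extracted components are assumptions.

```python
# Minimal sketch: balance the class labels, then extract features.
from imblearn.over_sampling import ADASYN, SMOTE
from sklearn.decomposition import FastICA

def balance_and_reduce(X, y, method="adasyn", n_components=20, seed=42):
    # Oversample the minority class (Eqs. 1-2 describe how synthetic rows are made).
    sampler = ADASYN(random_state=seed) if method == "adasyn" else SMOTE(random_state=seed)
    X_bal, y_bal = sampler.fit_resample(X, y)

    # Feature extraction; Fast ICA is the option reported as best-performing later on.
    extractor = FastICA(n_components=n_components, random_state=seed)
    X_red = extractor.fit_transform(X_bal)
    return X_red, y_bal
```

In practice the oversampling would be applied only to the training folds of the stratified cross-validation described below, so that synthetic samples never leak into the test folds.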
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to pre-process the input data for machine learning. The theoretical foundation for UMAP is focused on Riemannian geometry and algebraic topology:</ns0:p><ns0:formula xml:id='formula_6'>p i | j = e &#8722; d(x i , x j ) &#8722; p i &#963; i (6)</ns0:formula><ns0:p>where p represents the distance from each ith data point to the nearest jth data point.</ns0:p><ns0:p>Before feeding all this information to the TabNet to act on, the dataset needs to be split into a training set and a testing set. The training dataset is used to train the model and the testing dataset is used to evaluate the models performance.The Stratified K-fold <ns0:ref type='bibr' target='#b32'>(Zeng and Martinez, 2000)</ns0:ref> cross-validation was implemented which splits the data into 'k' portions. In each of 'k' iterations, one portion is used as the test set, while the remaining portions are used for training. The fold used here was k=5 which means that the dataset was divided into 5 folds with each fold being utilized once as a testing set, with the remaining From Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, the input features are Batch Normalized (BN) and passed to the Feature Transformer, where it goes through four layers of a Fully Connected layer (FC), a Batch Normalization layer (BN), and a Gated Linear Unit (GLU) in that order. The Feature Transformer produces n(d) and n(a). The Feature Transformer comprises two decision steps. A Fully Connected (FC) layer is a type of layer where every neuron is connected to every other neuron.</ns0:p><ns0:formula xml:id='formula_7'>F C B N G L U F C B N G L U F C B N G L U F C B N G L U</ns0:formula></ns0:div> <ns0:div><ns0:head>Feature Transformer</ns0:head><ns0:p>The output from the FC layer should always be Batch Normalized. Batch normalization is used to transform the input features to a common scale. It can be represented mathematically as:</ns0:p><ns0:formula xml:id='formula_8'>BN = x &#8722; &#181; b &#8730; &#969; 2 + &#949; (7)</ns0:formula><ns0:p>where x represents the input features, &#181; b denotes the mean of the features and &#969; 2 denotes the variance.</ns0:p><ns0:p>The fully connected layer is the combination of all the inputs with the weights, which can be represented mathematically as:</ns0:p><ns0:formula xml:id='formula_9'>FC = W (x)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where x denotes the input features, and W denotes the weights.</ns0:p><ns0:p>These operations are done sequentially, starting from equation 9, then to equation 10 and finally to equation 11</ns0:p><ns0:formula xml:id='formula_10'>FC = W (x) (9) BN = x &#8722; &#181; b &#8730; &#969; 2 + &#949;<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>The Gated Linear Unit (GLU) is simply the sigmoid of x:</ns0:p><ns0:formula xml:id='formula_11'>GLU = &#963; (x)<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>The Fully Connected layer performs its operations, then its output is fed into the Batch Normalization layer to perform its operations. Finally, the output from the Batch Normalization is fed into the Gated Linear Unit, all in a sequential manner.</ns0:p><ns0:p>The decision output from the Feature Transformers n(d) is also aggregated and embedded in this form, and a linear mapping is applied to get the final decision. As a result, they are made up of two shared decision steps and two independent decision steps. 
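Before turning to how these blocks are connected, the following minimal PyTorch sketch shows one FC-BN-GLU unit of the Feature Transformer. Note that Eq. (11) abbreviates the gating: in common TabNet implementations the fully connected layer doubles the width and one half of the output is multiplied by the sigmoid of the other half. The sketch follows that convention and is illustrative only, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GLUBlock(nn.Module):
    """One FC -> BN -> GLU unit of the Feature Transformer (cf. Eqs. 9-11)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # The FC emits 2*out_dim units; the GLU splits them and gates one half.
        self.fc = nn.Linear(in_dim, 2 * out_dim, bias=False)
        self.bn = nn.BatchNorm1d(2 * out_dim)

    def forward(self, x):
        h = self.bn(self.fc(x))             # Eq. 9 (FC), then Eq. 10 (BN)
        value, gate = h.chunk(2, dim=-1)
        return value * torch.sigmoid(gate)  # Eq. 11: sigmoid gating (GLU)
```

Two such blocks are shared across all decision steps and two are step-specific; as described next, their outputs are combined through residual connections scaled by the square root of 0.5.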
A residual connection connects the shared steps with the independent steps and they are summed together via the &#8853; operation, which is a direct summation block.</ns0:p><ns0:p>Since the same input features in the dataset are used in distinct steps, the layers are shared between two decision steps for robust learning. By ensuring that the variation across the network does not change significantly, normalization with a square root of 0.5 helps to stabilize learning which produces the outputs of n(d) and n(a) as mentioned previously.</ns0:p><ns0:p>From the Feature Transformer, and after the Split layer, the Attentive Transformer is applied to determine the various features and their values. The feature importance for that step is combined with the other steps and is made up of four layers: FC, BN, Prior Scales, and Sparse Max in sequential order.</ns0:p><ns0:p>The split layer splits the output and obtains p[i-1], which is then passed through the Fully Connected (FC) layer and the Batch Normalization (BN) layer, whose purpose is to achieve a linear combination of features allowing higher-dimensional and abstract features to be extracted. </ns0:p><ns0:formula xml:id='formula_12'>M[i] = Sparsemax(P[i &#8722; 1] * h i (p[i &#8722; 1])<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>where the h i represents the summation of the FC and the BN layer, the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the weights of all features of each sample to 1 <ns0:ref type='bibr' target='#b30'>(Yoon et al., 2018)</ns0:ref>, allowing TabNet to employ the most useful features for the model in each decision step. M[i] then updates p[i]:</ns0:p><ns0:formula xml:id='formula_13'>P[i] = i &#8719; j=1 (&#947; &#8722; M j )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>If &#947; is set closer to 1, the model uses different features at each step; if &#947; is greater than 1, the model uses the same features in multiple steps. The Sparse matrix is similar to the softmax, except that instead of all features adding up to 1, some will be 0 and others will add up to 1. The Sparse Max is expressed as:</ns0:p><ns0:formula xml:id='formula_14'>n i=1 sparsemax(x) i (14)</ns0:formula><ns0:p>This makes it possible to choose features on an instance-by-instance basis with various features being considered at various steps. These are then fed into the Mask layer, which aids in the identification of the desired features. The Feature Transformer is applied again, and the resulting output is split to the Attentive Transformer. The split layer divides the output from the feature transformer into two parts which are d[i], and a[i]:</ns0:p><ns0:formula xml:id='formula_15'>d[i], a[i] = f i (M[i] * f ) (15)</ns0:formula><ns0:p>where d[i] is used to calculate the final output of the model, and a[i] is used to determine the mask of the next step. ReLU activation is then applied:</ns0:p><ns0:formula xml:id='formula_16'>f (x) = max(0, d[i])<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>where f(x) returns 0 if it receives any negative input, but for any positive value x, it returns that value back. 
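Pulling together the Attentive Transformer pieces above, the sketch below gives one possible PyTorch reading of Eqs. (12)-(13), with a from-scratch sparsemax. It is a minimal illustration under simplifying assumptions (for example, the ghost batch normalization used in full TabNet implementations is omitted) and is not the code used in the paper.

```python
import torch
import torch.nn as nn

def sparsemax(z):
    """Sparse alternative to softmax: project each row of z onto the simplex."""
    z_sorted, _ = torch.sort(z, dim=-1, descending=True)
    cumsum = z_sorted.cumsum(dim=-1)
    k = torch.arange(1, z.size(-1) + 1, device=z.device, dtype=z.dtype)
    support = 1 + k * z_sorted > cumsum           # coordinates kept in the support
    k_z = support.sum(dim=-1, keepdim=True)       # support size per row
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z.to(z.dtype)
    return torch.clamp(z - tau, min=0.0)          # some entries are exactly zero

class AttentiveTransformer(nn.Module):
    """FC -> BN -> prior scaling -> sparsemax, producing the feature mask M[i]."""
    def __init__(self, n_a, n_features, gamma=1.5):
        super().__init__()
        self.fc = nn.Linear(n_a, n_features, bias=False)
        self.bn = nn.BatchNorm1d(n_features)
        self.gamma = gamma

    def forward(self, a, prior):
        mask = sparsemax(prior * self.bn(self.fc(a)))  # Eq. 12
        prior = prior * (self.gamma - mask)            # Eq. 13: update prior scales
        return mask, prior
```

The prior-scale update shrinks the weight of already-used features by (gamma - M[i]), which is what discourages feature re-use when gamma is close to 1 and permits it when gamma is larger.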
The contribution of the ith step to the final result can be expressed as:</ns0:p><ns0:formula xml:id='formula_17'>\phi_b[i] = \sum_{c=1}^{N_d} \mathrm{ReLU}(d_{b,c}[i]) \quad (17)</ns0:formula><ns0:p>where \phi_b[i] quantifies the aggregate contribution of the features selected at the ith step for sample b.</ns0:p><ns0:p>To map the output dimension, the outputs of all decision steps are summed and passed through a Fully Connected layer. Combining the masks from the various steps requires a coefficient that accounts for the relative weight of each step in the decision-making process. The importance of feature j for sample b can be expressed as:</ns0:p><ns0:formula xml:id='formula_18'>M_{agg-b,j} = \frac{\sum_{i=1}^{N_{steps}} \phi_b[i]\, M_{b,j}[i]}{\sum_{j=1}^{D} \sum_{i=1}^{N_{steps}} \phi_b[i]\, M_{b,j}[i]}<ns0:label>(18)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>TabNet decision-making process</ns0:head><ns0:p>The mask value for a given sample indicates how significant the corresponding feature is for that sample. Brighter columns indicate features that contribute strongly to the decision-making process. In the example of Figure 2, the mask values for features other than 0, 1, 4, 5, and 8 are mostly close to zero, indicating that the TabNet model correctly selects the salient features for the output. We can then interpret which features the model selects, enhancing its interpretability. With this, the features that contribute to individuals being admitted to the ICU or dying of COVID-19 can be ranked accordingly.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>Evaluation metrics are used to determine whether the model performs as well as it should. The evaluation metrics used are: Accuracy is the proportion of correctly predicted observations out of all observations.</ns0:p><ns0:formula xml:id='formula_19'>Confusion</ns0:formula><ns0:formula xml:id='formula_20'>\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN} \quad (19)</ns0:formula><ns0:p>Precision is the ratio of correctly predicted positive observations to all predicted positive observations.</ns0:p></ns0:div> <ns0:div><ns0:head>\mathrm{Precision} = \frac{TP}{TP + FP} \quad (20)</ns0:head><ns0:p>Recall is the ratio of correctly predicted positive observations to all observations in the actual positive class.</ns0:p></ns0:div> <ns0:div><ns0:head>\mathrm{Recall} = \frac{TP}{TP + FN} \quad (21)</ns0:head><ns0:p>The F1 score is the harmonic mean of Precision and Recall.</ns0:p><ns0:formula xml:id='formula_21'>\mathrm{F1\ Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (22)</ns0:formula><ns0:p>The various mathematical notations used in this section are shown in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>To demonstrate the effectiveness of the method in predicting ICU admission and mortality, the different TabNet hyperparameters, dimensionality reduction techniques, and oversampling methods have been thoroughly examined and contrasted. The section starts with the description of the datasets, followed by the statistical analysis, and concludes with the results and discussion.</ns0:p></ns0:div> <ns0:div><ns0:head>Description of data sets</ns0:head><ns0:p>Data were collected as previously described in <ns0:ref type='bibr' target='#b19'>Li et al. (2020b)</ns0:ref>. 
Specifically, there are two data sets used for this analysis, one for the ICU likelihood and the other for the mortality rate. The ICU data set consists of 1020 individuals with 43 features and the mortality data set consists of 1106 individuals with 43 features. The features consist of vital signs, laboratory tests, symptoms, and demographics of these individuals. There are two labels associated with the ICU dataset which are, ICU admitted (label 0), and ICU non-admitted (label 1). Similarly, there are two labels associated with the death dataset which are, non-death (label 0), and death (label 1). The datasets are unbalanced with distribution ratios of 75.5:24.5, and 86.1:13.9 for the ICU and mortality datasets, respectively. Table <ns0:ref type='table'>3</ns0:ref> shows the summary of the description of the datasets. </ns0:p></ns0:div> <ns0:div><ns0:head>Table 3. Description of Datasets Statistical Analysis</ns0:head></ns0:div> <ns0:div><ns0:head>ICU Admission</ns0:head><ns0:p>There were more males than females in the study population. The non-Hispanic ethnicity forms the most individuals, and the Caucasian race has the highest number of individuals in the population. Hypertension is the co-morbidity that most individuals in the study population presented, with cancer being the least.</ns0:p><ns0:p>The average age of individuals that needed the ICU is higher than those that did not. Regarding the vital signs, heart rate is the sign that showed quite a big difference on average, with the individuals needing ICU having a higher heart rate than their fellow counterparts. Procalcitonin, Ferritin and C-reactive protein are the laboratory findings that showed the biggest average difference, with individuals needing ICU showing higher levels of these. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For the symptoms, loss of smell and loss of taste were the symptoms that most individuals that got admitted to the ICU acquired whereas fever and cough were the symptoms that the least number of individuals acquired to be admitted to the ICU. Overall, over 70% of individuals acquired a symptom of disease at admission to the ICU. Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref> shows the relationship between symptoms and ICU admission by looking at the distribution of patients who were admitted to the ICU and whether or not they had a symptom. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Symptoms</ns0:head><ns0:p>Computer Science with individuals that died showing higher levels of these. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Loss of smell and loss of taste were the symptoms that most individuals that died acquired, whereas fever and cough were the symptoms that the least number of individuals that died acquired. Overall, at least 50% of individuals acquired a particular symptom before they died. </ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Settings</ns0:head></ns0:div> <ns0:div><ns0:head>Hyperparameters of TabNet</ns0:head><ns0:p>The TabNet model has a considerable number of hyperparameters which can be tuned to improve performance. The TabNet comes with some default parameters which works well, but for certain use cases, different values of certain hyperparameters yield better performances. 
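To make the hyperparameters discussed in this section concrete, the sketch below shows how they map onto a TabNet classifier, assuming for illustration the open-source pytorch-tabnet implementation rather than the authors' exact setup. The values shown are placeholders of the kind explored in the tuning experiments reported below, and X_train, y_train, X_valid, and y_valid are assumed NumPy arrays produced by the preprocessing described earlier.

```python
# Hedged sketch of configuring and training a TabNet classifier.
from pytorch_tabnet.tab_model import TabNetClassifier

clf = TabNetClassifier(
    n_d=64,              # width of the decision prediction layer (nd)
    n_a=64,              # width of the attention embedding
    n_steps=3,           # number of decision steps (nsteps)
    gamma=2.0,           # feature re-use coefficient in the masks
    n_independent=2,     # independent GLU layers per step
    n_shared=2,          # shared GLU layers per step
    momentum=0.2,        # batch-normalization momentum
    lambda_sparse=1e-3,  # sparsity regularization coefficient
    mask_type="entmax",  # 'entmax' or 'sparsemax'
)
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["auc"],
    max_epochs=150,
    patience=60,         # early stopping; patience=0 disables it
)
```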
Table <ns0:ref type='table' target='#tab_12'>10</ns0:ref> shows the default hyperparameters of the TabNet model, which were suggested by <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref>.</ns0:p><ns0:p>We initialized the parameters according to the official TabNet paper in <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref> an early stopping is referred to as patience. If patience is set to 0, there will be no early stopping. The number of shared Gated Linear Units at each step is n shared, and the momentum for batch normalization normally ranges from 0.01 to 0.4. n independent is the number of independent Gated Linear Units layers at each step, and it usually ranges from 1 to 5. The normal range of values is 1 to 5. The coefficient of feature re-use in the masks is called gamma. It has a range of values from 1.0 to 2.0. A number close to 1 reduces the correlation between layers in mask selection. The number of steps in the architecture is n.</ns0:p><ns0:p>(usually between 3 and 10). The extra sparsity loss coefficient is denoted by lambda sparse. The larger the coefficient, the more sparse the model in terms of feature selection <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Analysis</ns0:head><ns0:p>We present a summary of our experimental results and analysis on two categories: ICU Admission and Mortality.</ns0:p></ns0:div> <ns0:div><ns0:head>ICU Admission</ns0:head></ns0:div> <ns0:div><ns0:head>Results of Hyperparameter tuning using Tabnet</ns0:head><ns0:p>The hyper parameters of the model have been tuned using various values of each parameter. The final table which has the best hyper parameters in predicting ICU admission for the various metrics are shown here, the rest of the tables for the individual hyper parameters can be seen in the appendix section. In varying the width of decision prediction layer (nd), the value of nd was changed from a range of 2 to 64 to determine the best output. The results were the best when nd was set to 64.</ns0:p><ns0:p>In varying number of steps in the architecture (nsteps), the value of nsteps was varied from 3 to 12 to determine the best output. A value of 3 gave the best results. Changing the nsteps to numbers between 8 to 12 showed a slight decrease in performance which indicates that the performance will not be enhanced by increasing the number of steps.</ns0:p><ns0:p>In varying gamma. Changing the gamma shows a very haphazard trend in performance, the best results are given when the gamma is 2.0. Increasing the gamma does not improve the results. Thus, gamma was not increased any further.</ns0:p><ns0:p>In varying number of independent gates (nindependent), the number of independent gates was varied from 2 to 7. All the nindependent gates obtained similar results converging to the best result with 2. Thus, 2 independent gates give the best results.</ns0:p><ns0:p>In varying number of shared gates (nshared), the nshared gates was varied from 2 to 7, with 2 gates achieving the best results. Increasing the gates did not improve the results.</ns0:p><ns0:p>In varying values of momentum,momentum values of 0.2 and 0.3 displayed the best results, with higher values of momentum producing poorer results.</ns0:p><ns0:p>In varying lambda sparse, the values of lambda sparse was varied from 0.001 to 0.005. The results of the model showed a negative correlation with the value of lambda sparse. 
The best result was achieved with a value of 0.001.</ns0:p></ns0:div> <ns0:div><ns0:head>17/40</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Two Different types of masks were used, the entmax and the sparsemax. It was concluded that the mask type of entmax gives a better result across the board in all the performance metrics.</ns0:p><ns0:p>The number of epochs and stopping condition was also experimented to determine the impact it has on the performance of the TabNet model. The results are generally better when the stopping condition is defined. The best results are achieved with an epoch of 150, and patience greater than 60. The results do not change when the patience is greater than 60.</ns0:p><ns0:p>Regarding the dimensionality reduction methods, 5 methods were experiemented with to check its effect on the performance of the TabNet baseline model. The results using the Fast ICA has the best results with it falling short only on recall to PCA. The most important parameter in the table is the AUC, in which the Fast ICA has a score of 83.6.</ns0:p><ns0:p>Both PCA method and the Fast ICA methods yielded similar scores on the impact of different dimensionality reduction methods on the performance of the best TabNet model. For the AUC parameter, the Fast ICA has the highest score of 86.4.</ns0:p><ns0:p>Different oversampling methods on the performance of the TabNet Baseline model were experimented with and the results using the ADASYN method has the best outcome in all the measured performance metrics.</ns0:p><ns0:p>The impact of different oversampling methods on the performance of the TabNet Best model was determined. The results using the ADASYN method has the best results in all the measured performance metric.</ns0:p><ns0:p>The tables showing the various experimentations of the individual hyperparameter explained above can be found in the appendix section. Figure <ns0:ref type='figure'>3</ns0:ref> shows the trend of the various hyperparameter during the experiments and tuning. Figure <ns0:ref type='figure'>4</ns0:ref> further shows the feature importance masks for predicting ICU admission. TabNet features a feature value output called Masks that may be used to quantify feature importance and indicate if a feature is chosen at a particular decision step in the model. Each row represents the masks for each input element and the column represents a sample from the dataset. The brighter the color, the higher the value. In predicting ICU admission, two of the masks are shown as an example in Figure <ns0:ref type='figure'>4</ns0:ref>, where the features which the respective masks are paying attention to can be seen in bright colors. The brighter a grid, the more important that particular feature is for the particular Mask. The number of grids lighting up corresponds to the number of features that are being paid attention to by the particular Mask. It can be seen that Mask 0 is paying the most attention to the earlier features, with an emphasis on the 20th feature.</ns0:p><ns0:p>Mask 1 is paying the most attention to the later features, with the most attention given to features 32 and 38. The average feature output among all the Masks is used to arrive at the final decision. 
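As an illustration of how mask heat maps like Figure 4 and the global feature-importance ranking can be recovered from a fitted model, the sketch below again assumes the pytorch-tabnet package, with a fitted classifier clf, a test matrix X_test, and a feature_names list; it is not the authors' plotting code.

```python
# Illustrative sketch: per-step masks and aggregated feature importances.
import matplotlib.pyplot as plt

explain_matrix, masks = clf.explain(X_test)  # masks: dict {step index -> (n_samples, n_features)}

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for step, ax in zip([0, 1], axes):           # show two masks, as in Figure 4
    # Transpose so features are rows and samples are columns; brighter = more attended.
    ax.imshow(masks[step][:50].T, aspect="auto")
    ax.set_title(f"Mask {step}")
plt.show()

# Global importances aggregate the masks across steps (cf. Eq. 18).
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda p: -p[1])[:10]:
    print(f"{name}: {score:.3f}")
```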
Manuscript to be reviewed Manuscript to be reviewed </ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Mortality</ns0:head><ns0:p>We now present a summary of our experimental results and analysis for the Mortality category.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of Hyperparameter tuning using Tabnet</ns0:head><ns0:p>Similarly in predicting morality,the hyper parameters of the model have been tuned using various values of each parameter. The final table which has the best hyper parameters in predicting mortality for the various metrics are shown here, the rest of the tables for the individual hyper parameters can be seen in the appendix section. Figure <ns0:ref type='figure' target='#fig_1'>10</ns0:ref> shows the trend of the various hyperparameter during the experiments and tuning. In varying the width of decision prediction layer (nd), the value of nd was changed from a range of 2 to 64 to determine the best output. The results were the best when nd was set to 8.</ns0:p></ns0:div> <ns0:div><ns0:head>23/40</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In varying number of steps in the architecture (nsteps), the value of nsteps was varied from 3 to 12 to determine the best output. A value of 3 gave the best results. nsteps of 10 to 12 did not show any improvement in results, which indicates that the performance of the model will not be improved by increasing the number of steps.</ns0:p><ns0:p>In varying gamma, the values of gamma was varied from 1.3 to 2.2. The performance of the model shows a very haphazard trend with the changing values. A gamma of 2.0 gives the best results,</ns0:p><ns0:p>In varying number of independent gates (nindependent),the number of independent gates was varied from 2 to 7. The number of gates which gave the best output is 2, and increasing the number of gates decreased the accuracy of the results.</ns0:p><ns0:p>In varying number of shared gates (nshared), the number of shared gates was varied from 2 to 7. The number of shared gates of 2 gave the best results and increasing the number of gates did not improve the results. Although there is a spike in results when the number of shared gates is 5, the performance reduces when it is increased further.</ns0:p><ns0:p>The values of momentum was varied from 0.02 to 0.3, with 0.02 giving the best results. Increasing the value of the momentum gave poorer results.</ns0:p><ns0:p>In varying lambda sparse, the values of lambda sparse was varied from 0.001 to 0.005, with 0.001 achieving the best results. The value of the lambda sparse had a negative correlation with the performance of the model.</ns0:p><ns0:p>The different masktypes used were sparsemax, and entmax. The output using the sparsemax had a better result compared to the entmax</ns0:p><ns0:p>The number of epochs and stopping condition was also experimented to determine the impact it has on the performance of the TabNet model. The results are generally better when the stopping condition is defined. The best results are achieved with an epoch of 150, and patience greater than 60. The results do not change when the patience is greater than 60.</ns0:p><ns0:p>Regarding the dimensionality reduction methods, 5 methods were experiemented with to check its effect on the performance of the TabNet baseline model. 
The results using the Fast ICA has the best results with it falling short only on recall to PCA. The most important parameter in the table is the AUC, in which the PCA has a score of 94.0.</ns0:p><ns0:p>The Fast ICA yielded the best score on the impact of different dimensionality reduction methods on the performance of the best TabNet model. For the AUC parameter, the Fast ICA has the highest score of 86.4.</ns0:p><ns0:p>Different oversampling methods on the performance of the TabNet Baseline model were experimented with and the results using the ADASYN method has the best outcome in all the measured performance metrics.</ns0:p><ns0:p>The impact of different oversampling methods on the performance of the TabNet Best model was determined. The results using the ADASYN method has the best results in all the measured performance metric.</ns0:p><ns0:p>The tables showing the various experimentations of the individual hyperparameter explained above can be found in the appendix section. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science As the model is learning (going through the epochs), the error reduces and hence the loss reduces. In predicting mortality also,the ROC curve which demonstrates a trade-off between the true positive rate (TPR) and the false positive rate (FPR)is plotted due to the imbalance of the dataset. The confusion matrix also gives a sense of the specific number of patients that were correctly classified as dying or not dying, and the ones that were incorrectly classified.</ns0:p><ns0:p>A large area under the curve indicates high recall and precision, with high precision indicating a low false-positive rate and high recall indicating a low false-negative rate. High scores for both indicate that the classifier is producing correct (high precision) results as well as a majority of all positive outcomes (high recall). From Figure <ns0:ref type='figure' target='#fig_1'>15</ns0:ref>, it can be seen that there is a large area under the curve, indicating that the model is functioning well. With an AUC of 96.30%, the model can distinguish between most of the patients who died, and the patients that did not die. It can be seen from the confusion matrix in Figure <ns0:ref type='figure' target='#fig_10'>16</ns0:ref> that the model correctly predicted 89 individuals who died from the virus and 78 individuals who did not die from the virus. 7 individuals were incorrectly classified as not dying from the virus when they died, and 1 individual was classified as dead when the individual did not die from the virus.</ns0:p><ns0:p>The proposed model does an excellent job in predicting mortality. Next, the proposed model will be compared with the baseline models existing in the literature.</ns0:p></ns0:div> <ns0:div><ns0:head>Model</ns0:head><ns0:p>AUC F1Score Accuracy Recall Precision Li, Xiaoran, et al (baseline) <ns0:ref type='bibr' target='#b19'>Li et al. (2020b)</ns0:ref> Manuscript to be reviewed The various hyperparameters were also tuned, and the best results of each was combined with the dimensionality technique and then oversampled to obtain the final result for all metrics. Results achieved were, an AUC of 88.3%, F1 score of 89.7%, the accuracy of 88.7%, recall of 93.3%, and precision of 86.4% for predicting ICU. In predicting mortality, results of 96.3% AUC, 95.8% F1 score, accuracy of 96.0%, recall of 99.8%, and precision of 91.8% were obtained. 
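For completeness, the reported AUC, accuracy, precision, recall, F1 score, and confusion-matrix counts can be computed from held-out predictions as in the minimal scikit-learn sketch below; y_test, y_pred, and y_prob are assumed to be the test labels, predicted labels, and predicted probabilities for the positive class.

```python
# Minimal sketch of the evaluation metrics defined in Eqs. 19-22.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

auc = roc_auc_score(y_test, y_prob)
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"AUC={auc:.3f} Acc={acc:.3f} Prec={prec:.3f} Rec={rec:.3f} F1={f1:.3f}")
```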
The reason why the results in predicting mortality achieves higher performances than the one in predicting ICU admission could be because sometimes individuals that need ICU admission, do not get the opportunity due to lack of beds available at that time because of large volumes of individuals present at the hospital needing the same resources.</ns0:p><ns0:note type='other'>Computer Science Death No Death Predicted</ns0:note><ns0:p>In the case of mortality, when an individual dies, the individual dies, there is no middle ground, so it is relatively easier to distinguish mortality than ICU admission.</ns0:p><ns0:p>A confusion matrix was constructed to show specifics, where there were more false positives than false negatives in both determining ICU admission and mortality. The reason for more false positives than false negatives could be because doctors have to make a quick and instant guess as to which patient needs the ICU at that time by simply looking at the physical conditions of the patient present. Due to the lack of time and resources, they depend on only those physical symptoms to make a decision, so patients who can deteriorate quickly due to underlying illnesses or other factors are often overlooked for admission to the ICU simply because they do not show physical deterioration at the time of decision making.</ns0:p></ns0:div> <ns0:div><ns0:head>31/40</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The process by which the proposed model makes decisions to determine which features are most important was also determined. The model uses Masks which shows the features they were paying the most attention to in the heat map, which can be seen here in Figure <ns0:ref type='figure'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_1'>11</ns0:ref>. This was then used to construct the global feature importance graph, which is easier to understand, where the longer the bar, the more importance it has in determining if a patient with COVID-19 is likely to be sent to the ICU or if the patient is likely to die from the disease.</ns0:p><ns0:p>The findings from our model suggesting the most important features in predicting ICU admission and mortality, has been supported by other literature's, these are shown below.</ns0:p><ns0:p>Ferritin is the symptom of the patient which is the most important in determining if the patient needs ICU or not. Ferritin represents how much iron is contained in the body, and if a ferritin test reveals a lowerthan-normal ferritin level in the blood, this may indicate that the body's iron stores are low. This is a high indication of iron deficiency which can cause anaemia. <ns0:ref type='bibr' target='#b6'>(Dinevari et al., 2021)</ns0:ref>. Ferritin levels were found to be elevated upon hospital admission and throughout the hospital stay in patients admitted to the ICU by COVID-19. In comparison to individuals with less severe COVID-19, ferritin levels in the peripheral blood of patients with severe COVID-19 were shown to be higher. As a result, serum ferritin levels were found to be closely linked to the severity of COVID19 <ns0:ref type='bibr' target='#b5'>(Dahan et al., 2020)</ns0:ref>. 
Early analysis of ferritin levels in patients with COVID-19 might effectively predict the disease severity <ns0:ref type='bibr' target='#b3'>(Bozkurt et al., 2021)</ns0:ref>.</ns0:p><ns0:p>The magnitude of inflammation present at admission of COVID-19 patients, represented by high ferritin levels, is predictive of in-hospital mortality <ns0:ref type='bibr' target='#b20'>(Lino et al., 2021)</ns0:ref>. Studies indicate that chronic obstructive pulmonary disease (COPD) is the comorbidity with the most importance in predicting mortality among COVID-19 patients. COPD is a chronic inflammatory lung condition in which airflow from the lungs is obstructed; breathing difficulties, cough, mucus (sputum) production, and wheezing are all symptoms. Since COVID-19 is a disease that affects the respiratory system, it makes sense that a disease like COPD, which also affects the lungs, could have devastating effects on a patient who contracts COVID-19 <ns0:ref type='bibr' target='#b8'>(Gerayeli et al., 2021)</ns0:ref>. Patients with COPD have a higher prevalence of coronary ischemia and other factors that put them at risk for COVID-19-related complications, and a higher incidence of COVID-19 and higher rates of hospital admission have been confirmed in COPD patients <ns0:ref type='bibr' target='#b13'>(Graziani et al., 2020)</ns0:ref>. While COPD was present in only a small percentage of patients, it was associated with higher rates of mortality <ns0:ref type='bibr' target='#b28'>(Venkata and Kiernan, 2020)</ns0:ref>.</ns0:p><ns0:p>The limitations of this study are that the sample size is small, with only about 1,000 patients included, and that the study was restricted to patients at Stony Brook University Hospital treated between 7 February 2020 and 4 May 2020.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This paper proposes a tabular, interpretable deep learning model to predict the ICU admission likelihood and mortality of COVID-19 patients. The proposed model achieves this by employing a sequential attention mechanism that selects the features at each step of the decision-making process based on a sparse selection of the most important features, such as patient demographics, vital signs, comorbidities, and laboratory findings.</ns0:p><ns0:p>ADASYN was used to balance the data sets, Fast ICA to extract useful features, and the various hyperparameters were tuned to improve results. The proposed model achieves an AUC of 88.3% for predicting ICU admission likelihood, which beats the 72.8% reported in the literature. The proposed model also achieves an AUC of 96.3% for predicting mortality, which beats the 84.4% reported in the literature.</ns0:p><ns0:p>The most important patient attributes for predicting ICU admission and mortality were also determined, to give a clear indication of which attributes contribute the most to a patient needing the ICU and to a patient dying from COVID-19; these findings are also backed up by previous studies. The information from the model can be used to assist medical personnel globally by helping direct limited healthcare resources, prioritize patients, and provide tools for front-line doctors to classify patients in time-bound and resource-limited scenarios.</ns0:p><ns0:p>For future work, the study can be extended to include many more patients over a longer time frame and from several hospitals.
The proposed method can be combined with other machine learning methods for improved results. This study could be extended to include more diseases, allowing the healthcare system to respond more quickly in the event of an outbreak or pandemic.</ns0:p></ns0:div> <ns0:div><ns0:head>32/40</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>is a synthetic data generation algorithm that employs a weighted distribution for distinct minority class examples based on their learning difficulty, with more synthetic data generated for minority class examples that are more difficult to learn compared to minority class examples that are simpler to learn. This is expressed by:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. TabNet Model Architecture</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2provides an illustration of how the TabNet<ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019</ns0:ref>) makes a decision(individual explainability). TabNet has a feature value output called Masks, which shows whether a feature is selected at a given decision step in the model and can be used to calculate the feature importance. The Masks for each input feature are represented by each row, and the column represents a sample from the data set. Brighter colors show a higher value. Consider Figure2, where 9 features ranging from feat 0 to feat 8 are shown. For the random sample at 3, the first feature is the one being heavily used, hence the brighter colour, and the sample at 6, three features have brighter colours,feature 0, 1 and 8, with 8 being the brightest, signifying the feature 8's output was heavily used for this sample.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>matrix describes a classification models output on a collection of test data for which the true values are known. The matrix compares the actual target values to the machine learning models predictions. It includes true positives (TP) which are correctly predicted positive values, indicating that the value of the real class and the value of the predicted class are both yes, True negatives (TN) which correctly estimates negative values, indicating that the real class value is zero and the predicted class value is zero as well. False positives (FP), where the actual class is no but the predicted class is yes. False negatives (FN), where the actual class is yes but the predicted class is no.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>DatasetNo. Patients-No. Features Class Labels Class distribution ratio(Pos: Neg) ICUMice-ICU 1106-43 1 = death 0 = non-death 86.1:13.9 DEADMice-Mortality 1020-43 1 = ICU 0 = no-ICU 75.5:24.5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Varying Hyper parameters with respect to AUC score for predicting ICU admission</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure5shows a graph of features against feature importance. 
Features are the symptoms that contribute to an individual being admitted to the ICU. Feature importance stands for the importance of the symptom in contributing to the patient being admitted into the ICU, where a larger number indicates a higher contribution to an ICU admission. All features have some importance in determining if a person would be admitted to the ICU. The sum of all the feature importance data points is 1. The top 5 features that contribute greatly to a person needing to be admitted to the ICU were Ferritin, ALT, ckdhx, Diarrhoea and carcinomahx.Best Final Model for ICU predictionThe model was analyzed and its output compared with different TabNet configurations based on different feature extractors. The model with the best results has been selected as the final proposed model for ICU prediction. The proposed model is a TabNet model with 150 epochs, 128 batch size, and 60 patience with the number of steps of 2, width of precision of layer of 64, gamma of 1.3, entmax mask type, n independent of 2, momentum of 0.3, lambda sparse of 1e-3, and n shared of 2, using the Fast Independent Component Analysis as the feature extractor, and ADASYN as the sampling technique to balance the imbalanced data. Figures6 and 7below show the loss graph, and training and validating accuracy graph of the TabNet in predicting ICU admission.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 7 Figure 7 .Figure 8 .Figure 9 .</ns0:head><ns0:label>7789</ns0:label><ns0:figDesc>Figure7shows a graph of accuracy against number of epochs. Accuracy tends to go higher as the number of epochs increases. At the early stages of the training, the accuracy is low, but as the model begins to learn the patterns of the data, the accuracy increases and reaches a higher value at the end of the epochs (150). Difference between the training accuracy and the testing accuracy is not high, which suggests that the model is not overfitting on the dataset.The precision-recall curve,which demonstrates a trade-off between the recall score (True Positive Rate), and the precision score (Positive Predictive Value), is used for this analysis due to the dataset being imbalanced (there is a large skew in the class distribution). The confusion matrix also gives a sense of the specific number of patients that were correctly classified as needing ICU or not needing it, and the ones that were incorrectly classified.A large area under the curve indicates high recall and precision, with high precision indicating a low false-positive rate and high recall indicating a low false-negative rate. High scores for both indicate that the classifier is producing correct (high precision) results as well as a majority of all positive outcomes</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 10 .Figure 11 .Figure 12 .Figure 13 .</ns0:head><ns0:label>10111213</ns0:label><ns0:figDesc>Figure 10. Varying Hyper parameters with respect to AUC score for predicting mortality</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 14 Figure 14 .</ns0:head><ns0:label>1414</ns0:label><ns0:figDesc>Figure14shows a graph of accuracy against number of epochs. The accuracy tends to increase as the number of epochs increases. At the early stages of the training, the accuracy is low, but as the model</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. 
Confusion matrix of the best TabNet model for predicting Mortality</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,224.27,178.45,325.98,411.03' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,143.00,367.02,411.05,325.98' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,194.39,140.50,255.17,255.17' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,143.01,63.78,411.03,448.40' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,143.01,249.11,411.05,271.86' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>k -1 folds becoming the training set. This ensures that no value in the training and test sets is over-or under-represented, resulting in a more accurate estimate of performance/error.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture of TabNet model</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>A TabNet Model, (Arik and Pfister, 2019) is proposed to perform prediction of ICU admission and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>mortality parameters. A schematic diagram of the proposed TabNet deep learning model 15 is presented</ns0:cell></ns0:row><ns0:row><ns0:cell>in Figure 1.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>The TabNet model consists primarily of sequential multi-steps that transfer inputs from one stage to</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>the next. There are 3 key layers of this model, namely, the Feature Transformer, the Attentive Transformer,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and the Mask. The Feature Transformer is made up of four sequential Gated Linear Unit (GLU) decision</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>blocks, which allow the selection of important features for predicting the next decision. The Attentive</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Transformer provides sparse feature selection that uses sparse-matrix to improve interpretability and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>learning. The way it does this is by giving importance to the most important features. The mask is then</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>used in conjunction with the Transformer to produce two decision parameters n(d) and n(a) which are</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>then passed on to the next step. n(d) is the output decision that predicts the two classes, namely, ICU</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>admission (yes/no) and Deaths (yes/no). n(a) is the input to the next Attentive Transformer, where the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>next cycle starts. 
From the Feature transformer, the output is sent to the Split module.</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Patient Features</ns0:cell><ns0:cell>Batch Normalization</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Notations</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Notations</ns0:cell><ns0:cell>Definitions</ns0:cell></ns0:row><ns0:row><ns0:cell>x</ns0:cell><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>&#963;</ns0:cell><ns0:cell>sigmoid</ns0:cell></ns0:row><ns0:row><ns0:cell>n(d)</ns0:cell><ns0:cell>output decision from current step</ns0:cell></ns0:row><ns0:row><ns0:cell>n(a)</ns0:cell><ns0:cell>input decision to the next current step</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>weights</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8853; otimes</ns0:cell><ns0:cell>direct summation tensor product</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947;</ns0:cell><ns0:cell>gamma</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>beta</ns0:cell></ns0:row><ns0:row><ns0:cell>&#181; b</ns0:cell><ns0:cell>mean</ns0:cell></ns0:row><ns0:row><ns0:cell>i i=1 &#8719; i i=1</ns0:cell><ns0:cell>integral block product block</ns0:cell></ns0:row><ns0:row><ns0:cell>P[i-1]</ns0:cell><ns0:cell>prior scales</ns0:cell></ns0:row><ns0:row><ns0:cell>p[i-1]</ns0:cell><ns0:cell>split layer division</ns0:cell></ns0:row><ns0:row><ns0:cell>h i</ns0:cell><ns0:cell>FC layer + BN layer</ns0:cell></ns0:row><ns0:row><ns0:cell>M j</ns0:cell><ns0:cell>Mask learning process</ns0:cell></ns0:row><ns0:row><ns0:cell>d[i]</ns0:cell><ns0:cell>final output</ns0:cell></ns0:row><ns0:row><ns0:cell>a[i]</ns0:cell><ns0:cell>determine mask of next step</ns0:cell></ns0:row><ns0:row><ns0:cell>f(x)</ns0:cell><ns0:cell>function to return value of relu function</ns0:cell></ns0:row><ns0:row><ns0:cell>&#966;</ns0:cell><ns0:cell>features selected at ith step</ns0:cell></ns0:row><ns0:row><ns0:cell>M a gg b ,&#8722; j</ns0:cell><ns0:cell>importance of features</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>Total number of synthetic data examples</ns0:cell></ns0:row><ns0:row><ns0:cell>m l</ns0:cell><ns0:cell>minority class</ns0:cell></ns0:row><ns0:row><ns0:cell>m s</ns0:cell><ns0:cell>majority class</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>Desired balance level</ns0:cell></ns0:row><ns0:row><ns0:cell>x'</ns0:cell><ns0:cell>new generated synthetic data</ns0:cell></ns0:row><ns0:row><ns0:cell>cov(X,y)</ns0:cell><ns0:cell>covariance matrix</ns0:cell></ns0:row><ns0:row><ns0:cell>p i | j q i j</ns0:cell><ns0:cell>joint probability distribution of features t-distribution of features</ns0:cell></ns0:row><ns0:row><ns0:cell>KL</ns0:cell><ns0:cell>Kullback-Leiber divergence</ns0:cell></ns0:row><ns0:row><ns0:cell>TP</ns0:cell><ns0:cell>True positive</ns0:cell></ns0:row><ns0:row><ns0:cell>TN</ns0:cell><ns0:cell>True negative</ns0:cell></ns0:row><ns0:row><ns0:cell>FP</ns0:cell><ns0:cell>False positive</ns0:cell></ns0:row><ns0:row><ns0:cell>FN</ns0:cell><ns0:cell>False negative</ns0:cell></ns0:row></ns0:table><ns0:note>9/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021) Manuscript to be reviewed Computer Science 10/40 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Relationship between Features and ICU admission</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features / Variables</ns0:cell><ns0:cell>ICU(n=271)</ns0:cell><ns0:cell>No-ICU(n=835)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Demographics</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age, mean</ns0:cell><ns0:cell>59.42</ns0:cell><ns0:cell>62.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell>67.5% (183)</ns0:cell><ns0:cell>54% (451)</ns0:cell></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell>32.5% (88)</ns0:cell><ns0:cell>46% (384)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ethnicity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hispanic/Latino</ns0:cell><ns0:cell>28.8% (78)</ns0:cell><ns0:cell>26.6% (222)</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Hispanic/Latino</ns0:cell><ns0:cell>54.6% (148)</ns0:cell><ns0:cell>60.7% (507)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown 16.6% (45)</ns0:cell><ns0:cell>12.7% (106)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Race</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Caucasian</ns0:cell><ns0:cell>45.4% (123)</ns0:cell><ns0:cell>54.3% (453)</ns0:cell></ns0:row><ns0:row><ns0:cell>African American</ns0:cell><ns0:cell>4.79% (13)</ns0:cell><ns0:cell>7.3% (61)</ns0:cell></ns0:row><ns0:row><ns0:cell>American Indian</ns0:cell><ns0:cell>0.7% (2)</ns0:cell><ns0:cell>0.2% (2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asian</ns0:cell><ns0:cell>7.4% (20)</ns0:cell><ns0:cell>3.1% (26)</ns0:cell></ns0:row><ns0:row><ns0:cell>Native Hawaiian</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.1% (1)</ns0:cell></ns0:row><ns0:row><ns0:cell>More than one race</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.6% (5)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown/ not reported</ns0:cell><ns0:cell>41.7% (113)</ns0:cell><ns0:cell>34.4% (287)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Comorbidities</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Smoking history</ns0:cell><ns0:cell>22.5% (61)</ns0:cell><ns0:cell>25.6% (214)</ns0:cell></ns0:row><ns0:row><ns0:cell>Diabetes</ns0:cell><ns0:cell>29.5% (80)</ns0:cell><ns0:cell>26.3% (220)</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypertension</ns0:cell><ns0:cell>46.5% (126)</ns0:cell><ns0:cell>49.3% (412)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asthma</ns0:cell><ns0:cell>8.5% (23)</ns0:cell><ns0:cell>5.1% (43)</ns0:cell></ns0:row><ns0:row><ns0:cell>COPD</ns0:cell><ns0:cell>6.3% (17)</ns0:cell><ns0:cell>9.1% (76)</ns0:cell></ns0:row><ns0:row><ns0:cell>Coronary artery disease</ns0:cell><ns0:cell>14.4% (39)</ns0:cell><ns0:cell>15.1% (126)</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart failure</ns0:cell><ns0:cell>6.6% (18)</ns0:cell><ns0:cell>7.4% (62)</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>5.5% (15)</ns0:cell><ns0:cell>10.5% (88)</ns0:cell></ns0:row><ns0:row><ns0:cell>Chronic kidney disease</ns0:cell><ns0:cell>7.4% (20)</ns0:cell><ns0:cell>9.7% (81)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Vital signs</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Systolic blood pressure(mmHg),mean</ns0:cell><ns0:cell>124.8</ns0:cell><ns0:cell>128.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Temperature (degree Celsius), mean</ns0:cell><ns0:cell>37.63</ns0:cell><ns0:cell>37.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart rate, 
mean</ns0:cell><ns0:cell>106.1</ns0:cell><ns0:cell>98.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Respiratory rate(rate/min), mean</ns0:cell><ns0:cell>25.28</ns0:cell><ns0:cell>21.77</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Laboratory Findings</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Alanine aminotransferase(U/L), mean</ns0:cell><ns0:cell>49.62</ns0:cell><ns0:cell>47.03</ns0:cell></ns0:row><ns0:row><ns0:cell>C-reactive protein(mg/dL), mean</ns0:cell><ns0:cell>15.4</ns0:cell><ns0:cell>9.49</ns0:cell></ns0:row><ns0:row><ns0:cell>D-dimer(ng/mL), mean</ns0:cell><ns0:cell>1101.92</ns0:cell><ns0:cell>1210.51</ns0:cell></ns0:row><ns0:row><ns0:cell>Ferritin(ng/mL), mean</ns0:cell><ns0:cell>1469.67</ns0:cell><ns0:cell>1005.43</ns0:cell></ns0:row><ns0:row><ns0:cell>Lactase dehydrogenase(U/L), mean</ns0:cell><ns0:cell>481.7</ns0:cell><ns0:cell>377.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes(*1000/ml)</ns0:cell><ns0:cell>12.43</ns0:cell><ns0:cell>14.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Procalcitonin(ng/mL), mean</ns0:cell><ns0:cell>2.66</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>Troponin(ng/mL), mean</ns0:cell><ns0:cell>0.038</ns0:cell><ns0:cell>0.03</ns0:cell></ns0:row></ns0:table><ns0:note>Table 4 shows the demographics, vital signs, comorbidities, and laboratory discoveries of ICU patients and non-ICU patients. 11/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021) Manuscript to be reviewed Computer Science 12/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Relationship</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Percentage of patients with symptoms</ns0:cell></ns0:row></ns0:table><ns0:note>between Symptoms and ICU admission Certain symptoms had a higher correlation with ICU admission than others and Table6gives a summary of this. It is observed that the Shortness of Breath (SOB) feature has the highest correlation with admission to the ICU unit.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Correlation between Symptoms and ICU admission</ns0:figDesc><ns0:table /><ns0:note>MortalityThere were more males than females in the study population. The non-Hispanic ethnicity forms the most individuals and the Caucasian race has the highest number of individuals in the population. Hypertension is the co-morbidity that most individuals in the study population presented, with Asthma being the least. The average age of individuals that died is higher than those that did not. Regarding the vital signs, Respiratory rate is the sign that showed quite a big difference on average, with the individuals that died having a higher respiratory rate compared to the ones who did not die. Procalcitonin, Ferritin, D-dimer, and C-reactive protein are the laboratory findings that showed the biggest average differences,13/40 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Table 7 shows the demographics, vital signs, Relationship between Features and Mortality</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features / Variables</ns0:cell><ns0:cell>Death(n=271)</ns0:cell><ns0:cell>No-Death(n=835)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Demographics</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age, mean</ns0:cell><ns0:cell>73</ns0:cell><ns0:cell>59.83</ns0:cell></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell>65.5% (93)</ns0:cell><ns0:cell>55% (483)</ns0:cell></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell>34.5% (49)</ns0:cell><ns0:cell>45% (395)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ethnicity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hispanic/Latino</ns0:cell><ns0:cell>16.2% (23)</ns0:cell><ns0:cell>28.5% (250)</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Hispanic /Latino</ns0:cell><ns0:cell>73.9% (105)</ns0:cell><ns0:cell>57.4% (504)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown</ns0:cell><ns0:cell>9.9% (14)</ns0:cell><ns0:cell>14.1% (124)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Race</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Caucasian</ns0:cell><ns0:cell>64.1% (91)</ns0:cell><ns0:cell>51.3% (450)</ns0:cell></ns0:row><ns0:row><ns0:cell>African American</ns0:cell><ns0:cell>4.2% (6)</ns0:cell><ns0:cell>6.9% (61)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asian</ns0:cell><ns0:cell>6.3% (9)</ns0:cell><ns0:cell>3.% (33)</ns0:cell></ns0:row><ns0:row><ns0:cell>American Indian</ns0:cell><ns0:cell>0.7% (2)</ns0:cell><ns0:cell>0.23% (2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Native Hawaiian</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.1% (1)</ns0:cell></ns0:row><ns0:row><ns0:cell>More than one race</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.6% (5)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown/ not reported</ns0:cell><ns0:cell>24.6% (35)</ns0:cell><ns0:cell>37.1% (326)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Comorbidities</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Smoking history</ns0:cell><ns0:cell>36.6% (52)</ns0:cell><ns0:cell>23.2% (204)</ns0:cell></ns0:row><ns0:row><ns0:cell>Diabetes</ns0:cell><ns0:cell>33.8% (48)</ns0:cell><ns0:cell>26.08% (229)</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypertension</ns0:cell><ns0:cell>64.8% (92)</ns0:cell><ns0:cell>45.8% (402)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asthma</ns0:cell><ns0:cell>4.22% (6)</ns0:cell><ns0:cell>5.8% (51)</ns0:cell></ns0:row><ns0:row><ns0:cell>COPD</ns0:cell><ns0:cell>16.2% (23)</ns0:cell><ns0:cell>7.5% (66)</ns0:cell></ns0:row><ns0:row><ns0:cell>Coronary artery disease</ns0:cell><ns0:cell>27.5% (39)</ns0:cell><ns0:cell>13.1% (115)</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart failure</ns0:cell><ns0:cell>20.4% (29)</ns0:cell><ns0:cell>5.4% (47)</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>13.4% (19)</ns0:cell><ns0:cell>8.9% (78)</ns0:cell></ns0:row><ns0:row><ns0:cell>Immunosuppression</ns0:cell><ns0:cell>5.6% (8)</ns0:cell><ns0:cell>7.4% (65)</ns0:cell></ns0:row><ns0:row><ns0:cell>Chronic kidney disease</ns0:cell><ns0:cell>14.08% (20)</ns0:cell><ns0:cell>8.5% (75)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Vital signs</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Systolic blood pressure(mmHg), mean</ns0:cell><ns0:cell>127.45</ns0:cell><ns0:cell>128.57</ns0:cell></ns0:row><ns0:row><ns0:cell>Temperature (degree 
Celsius), mean</ns0:cell><ns0:cell>37.3</ns0:cell><ns0:cell>37.52</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart rate, mean</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>100.38</ns0:cell></ns0:row><ns0:row><ns0:cell>Respiratory rate(rate/min), mean</ns0:cell><ns0:cell>26.39</ns0:cell><ns0:cell>21.79</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Laboratory Findings</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Alanine aminotransferase(U/L), mean</ns0:cell><ns0:cell>42.91</ns0:cell><ns0:cell>48.45</ns0:cell></ns0:row><ns0:row><ns0:cell>C-reactive protein(mg/dL), mean</ns0:cell><ns0:cell>16.07</ns0:cell><ns0:cell>9.62</ns0:cell></ns0:row><ns0:row><ns0:cell>D-dimer(ng/mL), mean</ns0:cell><ns0:cell>2626.27</ns0:cell><ns0:cell>1016</ns0:cell></ns0:row><ns0:row><ns0:cell>Ferritin(ng/mL), mean</ns0:cell><ns0:cell>1565</ns0:cell><ns0:cell>1037.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Lactase dehydrogenase(U/L), mean</ns0:cell><ns0:cell>588.28</ns0:cell><ns0:cell>363.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes (*1000/ml)</ns0:cell><ns0:cell>10.96</ns0:cell><ns0:cell>14.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Procalcitonin(ng/mL), mean</ns0:cell><ns0:cell>5.14</ns0:cell><ns0:cell>0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>Troponin(ng/mL), mean</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>0.0278</ns0:cell></ns0:row></ns0:table><ns0:note>281comorbidities, and laboratory discoveries of patients that died and the ones who did not die.282 14/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021) Manuscript to be reviewed Computer Science 15/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Table8shows the relationship between symptoms and mortality by looking at the distribution of patients that died and whether or not they had a symptom. Relationship between Symptoms and Mortality Certain symptoms had a higher correlation with mortality than others and Table9gives a summary of this. 
It is observed that the headache feature has the highest correlation with death.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>symptoms</ns0:cell><ns0:cell cols='2'>Percentage of Patients with Symptoms</ns0:cell></ns0:row><ns0:row><ns0:cell>Fever</ns0:cell><ns0:cell>57%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cough</ns0:cell><ns0:cell>51.4%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Shortness of Breath (SOB)</ns0:cell><ns0:cell>71.8%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fatigue</ns0:cell><ns0:cell>86.6%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sputum</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Myalgia</ns0:cell><ns0:cell>89.44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diarrhea</ns0:cell><ns0:cell>81%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Nausea or vomiting</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sore throat</ns0:cell><ns0:cell>95.1%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Runny nose or Nasal congestion</ns0:cell><ns0:cell>97.18%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Loss of smell</ns0:cell><ns0:cell>98.59%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Loss of Taste</ns0:cell><ns0:cell>98.59%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Headache</ns0:cell><ns0:cell>95.07%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chest discomfort or chest pain</ns0:cell><ns0:cell>92.96%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Symptoms</ns0:cell><ns0:cell>Correlation(Pearson)</ns0:cell><ns0:cell>pvalues</ns0:cell></ns0:row><ns0:row><ns0:cell>Fever</ns0:cell><ns0:cell>-0.08</ns0:cell><ns0:cell>0.009</ns0:cell></ns0:row><ns0:row><ns0:cell>Cough</ns0:cell><ns0:cell>-0.149</ns0:cell><ns0:cell>0.0000069</ns0:cell></ns0:row><ns0:row><ns0:cell>Shortness of Breath (SOB)</ns0:cell><ns0:cell>0.031</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Fatigue</ns0:cell><ns0:cell>-0.09</ns0:cell><ns0:cell>0.003</ns0:cell></ns0:row><ns0:row><ns0:cell>Sputum</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>Myalgia</ns0:cell><ns0:cell>-0.119</ns0:cell><ns0:cell>0.00013</ns0:cell></ns0:row><ns0:row><ns0:cell>Diarrhea</ns0:cell><ns0:cell>-0.04</ns0:cell><ns0:cell>0.199</ns0:cell></ns0:row><ns0:row><ns0:cell>Nausea or vomiting</ns0:cell><ns0:cell>-0.128</ns0:cell><ns0:cell>0.00004116</ns0:cell></ns0:row><ns0:row><ns0:cell>Sore throat</ns0:cell><ns0:cell>-0.037</ns0:cell><ns0:cell>0.233</ns0:cell></ns0:row><ns0:row><ns0:cell>Runny nose or Nasal congestion</ns0:cell><ns0:cell>-0.03</ns0:cell><ns0:cell>0.318</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss of smell</ns0:cell><ns0:cell>-0.05</ns0:cell><ns0:cell>0.0965</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss of Taste</ns0:cell><ns0:cell>-0.066</ns0:cell><ns0:cell>0.0377</ns0:cell></ns0:row><ns0:row><ns0:cell>Headache</ns0:cell><ns0:cell>0.062</ns0:cell><ns0:cell>0.048</ns0:cell></ns0:row><ns0:row><ns0:cell>Chest discomfort or chest pain</ns0:cell><ns0:cell>-0.1</ns0:cell><ns0:cell>0.002</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Correlation between Symptoms and Mortality</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Default Hyper parameters of the TabNet Model and be greater than 16. For selecting features, the masking function is used. 
For the width of decision prediction layer, bigger values provide the model more capacity at the expense of overfitting. Typically, the values range from 8 to 64. The number of consecutive epochs without improvement before performing</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Training Hyper parameters</ns0:cell><ns0:cell>Default Values</ns0:cell></ns0:row><ns0:row><ns0:cell>Max epochs</ns0:cell><ns0:cell>200</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch Size</ns0:cell><ns0:cell>1024</ns0:cell></ns0:row><ns0:row><ns0:cell>Masking Function</ns0:cell><ns0:cell>sparsemax</ns0:cell></ns0:row><ns0:row><ns0:cell>Width of decision prediction layer</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>patience</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>momentum</ns0:cell><ns0:cell>0.02</ns0:cell></ns0:row><ns0:row><ns0:cell>n shared</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>n independent</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>gamma</ns0:cell><ns0:cell>1.3</ns0:cell></ns0:row><ns0:row><ns0:cell>nsteps</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>lambda sparse</ns0:cell><ns0:cell>1e-3</ns0:cell></ns0:row></ns0:table><ns0:note>, where the max epochs are the maximum number of epochs for training. The batch size is the number of examples per batch. Based on the original paper implementation, the number should preferably be a multiple of two 16/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 11</ns0:head><ns0:label>11</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>F1Score Accuracy</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell>Li, Xiaoran, et al (baseline) Li et al. (2020b)</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>55.1 72.1</ns0:cell><ns0:cell>76.0</ns0:cell><ns0:cell>43.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Tabnet Baseline+ Fast ICA+ ADASYN Tabnet Best+ Fast ICA+ ADASYN</ns0:cell><ns0:cell cols='4'>79.77 &#177; 1.87 84.66 &#177; 2.46 85.73 &#177; 3.208 84.52 &#177; 3.07 92.31&#177; 1.08 81.28 &#177; 4.87 84.47&#177;6.57 77.01&#177; 4.71 80.09 &#177;2.78 82.1&#177; 2.05</ns0:cell></ns0:row></ns0:table><ns0:note>shows the performance of the TabNet Baseline, and TabNet Best models with FastICA dimensionality reduction, and ADASYN oversampling method. The results of the TabNet Best is the best amongst the other baseline models.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Performance of the most optimized TabNet model with corresponding standard deviations across all the runs</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 13</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>shows the performance of the TabNet Baseline, and TabNet Best models with FastICA dimensionality reduction, and ADASYN oversampling method. The results of the TabNet Best is the best amongst the other baseline models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>F1Score Accuracy</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell>Li, Xiaoran, et al (baseline) Li et al. 
(2020b)</ns0:cell><ns0:cell>84.4</ns0:cell><ns0:cell>61.6 85.3</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>52.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Tabnet Baseline+ Fast ICA+ ADASYN Tabnet Best+ Fast ICA+ ADASYN</ns0:cell><ns0:cell cols='4'>89.03 &#177; 2.19 89.12 &#177; 2.40 88.92 &#177; 2.40 92.98 &#177; 2.97 85.75 &#177; 4.11 91.59 &#177; 1.63 91.74 &#177; 1.63 91.49 &#177; 1.62 96.65 &#177; 1.91 87.36 &#177; 2.24</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Performance of the best final TabNet model with FastICA dimensionality reduction method and ADASYN oversampling method</ns0:figDesc><ns0:table /><ns0:note>24/40PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Comparison of results between proposed method and existing techniqueIt can be seen from Table14that the proposed model beats the model reported by<ns0:ref type='bibr' target='#b19'>(Li et al., 2020b)</ns0:ref> in all metrics, which is a clear suggestion that the proposed model is superior. The proposed TabNet model can predict ICU admission likelihood rate with an AUC of 88.3%, and mortality rate with AUC of 96.3% which beats the model existing in the literature<ns0:ref type='bibr' target='#b19'>(Li et al., 2020b)</ns0:ref> In predicting ICU admission likelihood, the TabNet model depicted that Ferritin, ALT, and Cxdhx were the top 3 predictors of a patient needing ICU admission after contracting COVID 19, and COPD, ferritin, and Myalgia were the top 3 predictors of a patient dying from the COVID-19 disease.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>84.4</ns0:cell><ns0:cell>61.6 85.3</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>52.2</ns0:cell></ns0:row></ns0:table><ns0:note>29/40PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)Manuscript to be reviewedFigure 15. Precision-Recall Curve of the best TabNet model for predicting MortalityDISCUSSIONThe main purpose of this study was to develop a deep learning model to predict mortality rate and ICU admission likelihood of patients with COVID-19, and to determine which patient attributes are most important in determining the mortality, and ICU admission of COVID patients. From the results obtained, it can be concluded that:Finding 1 5 different dimensionality techniques were also experimented with to improve the results. The Fast ICA technique yielded the best results, achieving a 2.58% difference over the next highest technique (PCA), in predicting ICU admissions. It also achieved a 1.16% difference for mortality. The other dimensionality reduction techniques are excluded, and the Fast ICA technique is concentrated upon to achieve the final results. 30/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note></ns0:figure> <ns0:note place='foot' n='5'>/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:1:1:NEW 30 Nov 2021)</ns0:note> </ns0:body> "
"Rebuttal letter Original Manuscript ID: #CS-2021:08:64558:0:1:REVIEW) Original Article Title: “Interpretable Deep Learning for the Prediction of ICU Admission Likelihood and Mortality of COVID-19 Patients” To: PeerJ Computer Science Re: Response to reviewers Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). Best regards, Amril Nazir et al. Reviewer#1, Concern # 1(Basic reporting): The authors should decrease the number of tables as possible. Some tables could be merged or moved to supplement files, such as hyperparameters tuning. They should keep the most important parts for readers. Author response and action: We thank the reviewer for the suggestion. In the revised manuscript, we have decreased the number of tables, and moved most of it to the appendix section. The hyperparameters had a lot of tables, which only the final optimized hyperparameters has been kept, and the rest moved to Appendix section. The final version now only has 14 tables when compared to the original version of 40 tables. Reviewer#1, Concern # 2(Experimental design): The research question is 'interpretable'. The authors only pick the important features from the model but not to the point 'interpretable'. They can explain the definition of 'interpretable'. Author response and action: We thank the reviewer for the suggestion. In the revised manuscript, we have further explained what interpretable is. We have added further explanation of what interpretable is in the method section. In the revised manuscript, the changes are highlighted in yellow from line 123128 and line 133-141. Reviewer#1, Concern # 3(Additional comments): The analytic process could be concise and to the point. Author response and action: We thank the reviewer for the suggestion. This concern is related to the Concern #1. In response to this, the number of tables and explanations of tables for various things such as statistical analysis, hyperparameter tuning, have been reduced, leaving only the very important ones in the main script, and the rest moved to the Appendix. In addition, the explanations to the tables have been trimmed down leaving only the very essential ones in the main manuscript, hence, the analytical process is now concise and to the point. Reviewer#2, Concern # 1(Basic reporting): The report is too detailed and includes trivial information which renders the reading experience to be exhausting. In scientific papers many different methods are used these days and it is necessary to think about what information should be included, what should be cited and what should be left out (as trivial information). This is a process which the current report can greatly benefit from. More focus needs to be directed towards the main findings of the papers and the methods that ended up being used in the most optimal model that was eventually applied. Author response and action: We thank the reviewer for the suggestion. We have removed trivial information such as the numerous tables, particularly for the hyperparameter tuning tables and explanations, and placed them in the Appendix section, leaving only the important information and main findings in the main manuscript. 
Reviewer#2, Concern # 2(Basic reporting): The figures and the tables lack detailed descriptions. Furthermore they are not completely labelled and for instance the x axis and y axis labels are missing in the feature importance plots (i.e. figure 2 and 4). Author response and action: The figures and tables have been labelled, described and captioned fully. Figures 2,4 and 11 which weren’t previously labelled and captioned correctly, have all been relabeled and captioned. Reviewer#2, Concern # 3(Basic reporting): information provided in many tables are redundant, as for instance in many tables such as table 6 the 'Without symptoms' percentages are the complements of the 'With symptoms'. Better and more brief means of describing the 'Description of data sets' and the 'statistical analysis' must be used that listing all these information in a sequence of tables. If some of these information are not relevant or significant they can be referred to in supplement documents rather than in the main paper. Author response and action: As suggested, we have removed some tables and shifted most tables in the Appendix. The final version of manuscript now only has 14 tables when compared to the original version of 40 tables. Reviewer#2, Concern # 4(Basic reporting): * The following sentences require rewriting : The sentences which require rewriting as indicated by reviewer #2, have been rewritten, clarified, and explained in full as follows. Reviewer#2, Concern # 4a: - Line 145: 'ADASYN (He et al., 2008) is a synthetic data generation algorithm that has the benefits of not copying the same minority data and producing more data which are harder to learn.' Author response and action: We thank the reviewer for the suggestion, the changes are highlighted from lines 150 to 153. Reviewer#2, Concern # 4b: - In equation 2: Xk is not defined . What is Xk ? Author response and action: The changes have been highlighted at line 154. Reviewer#2, Concern # 4c: - Line 153: '… same proportion of observations, and also due to the dataset 154 being imbalanced. The fold used was k=5 which means …' Author response and action: The changes have been highlighted from lines 159 to 164. Reviewer#2, Concern # 4d: -Top of page 4: 'These operations are done sequentially in the order as shown in Equations 4, and 5: FC =W(x) (10) …' Do the equations need to be repeated ?! Please clarify how do they relate to formula 4 and 5. Author response and action: The operations are done sequentially. Equations 4 and 5 are incorrect, and the correct equations are 9, 10 and 11. The equations explains the statements. Also, the operations are done sequentially in the sense that, the output from one operation is fed into the next operation, which also feeds the output to the next one. These have been highlighted at line 180, and from line 181 to 183. Reviewer#2, Concern # 4e: In tables 5 and 8: Both columns are labelled as 'no-ICU' ! Author response and action: Thank you for your suggestion, the repetition of ICU for both columns was a mistake, which has been rectified in previously tables 5 and 8, which is now tables 4 and 7. Reviewer#2, Concern # 4f: Line 273: 'Overall, over 50% of the individuals acquired a symptom of a disease before they died.' This does not seem correct! How far over 50% ? and by 'a symptom' do the authors mean 'at least one symptom' ? This sentence suggests that a bit less than 50% of patients died without showing any symptoms ?! Author response and action: Thank you for your comment. 
The statement was meant that cough was the least common symptom among individuals, with 51% of individuals coughing before they died, so we had 1106 individuals captured in the mortality data, with 835 of them dying, so at least 418 of them (which is half) had a cough before dying. Reviewer#2, Concern # 4g: Line 392: 'The key performance metric in this analysis is the AUC score and it can be plotted using the precision-recall curve which demonstrates a trade-off between the recall score (True Positive Rate), and the precision score (Positive Predictive Value).' AUC is usually referred to the area under ROC curve. Do you mean the area under Precision Recall curve here ? If yes, you can use AUCPR to avoid confusion Author response and action: Thank you for the suggestion. In order to clarify this point, we have added extra explanations, which have been highlighted from line 396 to 398. Reviewer#2, Concern # 5(Experimental design) I was not convinced and completely informed about how the authors avoided overfitting to the data throughout the whole study… Author response and action: We thank the reviewer for the suggestion. Yes, we have seen the error. At first initial experiment, we first used a training-validation split of 90%-10%, but we later changed that to a stratified k-fold split of 5, but we carelessly didn’t remember to remove the statements in the paper. We have removed the statements in the revised manuscript. The report has been revised to make the necessary corrections regarding the splitting of the data including the pvalues. The pvalues have been added to the manuscript, and the correlation distance has been updated as pearson correlation. Reviewer#2, Concern # 6(Validity of the findings) My biggest issue with the study is that as the authors have also mentioned the data is small for the kind of prediction model that the authors are aiming for… Author response and action: We thank the reviewer for the suggestion. Some studies have been cited to support our findings regarding the most important features in predicting ICU admission and mortality. 2 papers each for icu admission and mortality have been cited to support our claims. The report has been revised to include the citations of other studies which supports our claim to make the study very valid. "
Here is a paper. Please give your review comments after reading it.
345
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The global healthcare system is being overburdened by an increasing number of COVID-19 patients. Physicians are having difficulty allocating resources and focusing their attention on high-risk patients, partly due to the difficulty in identifying high-risk patients early. COVID-19 hospitalizations require specialized treatment capabilities and can cause a burden on healthcare resources. Estimating future hospitalization of COVID-19 patients is, therefore, crucial to saving lives. In this paper, an interpretable deep learning model is developed to predict intensive care unit (ICU) admission and mortality of COVID-19 patients. The study comprised of patients from the Stony Brook University Hospital, with patient information such as demographics, comorbidities, symptoms, vital signs, and laboratory tests recorded. The top 3 predictors of ICU admission were Ferritin, diarrhoea, and Alamine Aminotransferase, and the top predictors for mortality were COPD, Ferritin, and Myalgia. The proposed model predicted ICU admission with an AUC score of 88.3% and predicted mortality with an AUC score of 96.3%. The proposed model was evaluated against existing model in the literature which achieved an AUC of 72.8% in predicting ICU admission and achieved an AUC of 84.4% in predicting mortality. It can clearly be seen that the model proposed in this paper shows superiority over existing models. The proposed model has the potential to provide tools to frontline doctors to help classify patients in time-bound and resource-limited scenarios.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Coronavirus is a virus family that causes respiratory tract illnesses and diseases that can be lethal in some situations, such as SARS and COVID-19. SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) is a new type of Coronavirus which began spreading in late 2019 in the Chinese province of Hubei claiming multiple human lives <ns0:ref type='bibr'>(Li et al., 2020a)</ns0:ref>. The novel coronavirus outbreak was declared a Public Health Emergency of International Concern by the World Health Organization (WHO) in January 2020. The infectious disease caused by the novel coronavirus was given the official title, COVID-19 (Coronavirus Disease 2019) by WHO in February 2020, and a COVID-19 Pandemic was announced in March 2020 by the (World Health Organization; WHO Director General) <ns0:ref type='bibr' target='#b2'>(Bogoch et al., 2020)</ns0:ref>. Since then, there has been over 170 million cases with many of them being hospitalized. A staggering 3.8 million people died from the disease with the numbers increasing as this paper is being written. Every patient has a different reaction to the virus, with many of them being asymptomatic and a small percentage getting worse rapidly with their organs failing <ns0:ref type='bibr' target='#b17'>(Leung et al., 2020)</ns0:ref>. The ongoing surge in COVID-19 patients has put a burden on healthcare systems unlike ever before. According to a recent study by <ns0:ref type='bibr' target='#b28'>Pourhomayoun and Shakibi (2021)</ns0:ref>, once the coronavirus outbreak begins, the healthcare system will be overwhelmed in less than four weeks. When a hospitals capacity is exceeded, the death rate rises. 
The repercussions of an extended stay and increased demand for hospital resources as a result of COVID-19 have been disastrous for health systems around the world, necessitating quick clinical judgments, especially when patients. They determined the relative importance of blood panel profile data and discovered that when this data was removed from the equation, the AUC decreased by 0.12 units. It provided useful information in predicting the severity of the disease. <ns0:ref type='bibr' target='#b16'>Ikemura et al. (2021)</ns0:ref> aimed to train various machine learning algorithms using automated machine learning <ns0:ref type='bibr'>(autoML)</ns0:ref>. They chose the model that best estimated how long patients would survive a SARS-CoV-2 infection. They used data that comprised of patients who tested positive for COVID-19 between March 1 and July 3 of the year 2020 with 48 features. The stacked ensemble model (AUPRC=0.807) was the best model generated using autoML. The gradient boost machine and extreme gradient boost models were the two best independent models, with AUPRCs of 0.803 and 0.793, respectively. The deep learning model (AUPRC=0.73) performed significantly worse than the other models. <ns0:ref type='bibr' target='#b21'>Li et al. (2020b)</ns0:ref> used the clinical variables of COVID-19 patients to develop a deep learning model and a risk score system to predict ICU admission and mortality of patients in the hospital. The data consisted of 5,766 patients, with comorbidities, vital signs, symptoms, and laboratory tests, between 7th February, 2020 and 4th May, 2020. AUC score was used to evaluate their models. The deep learning model achieved an AUC of 78% for ICU admission and 84% for mortality with the risk score for ICU admission being 72.8% and 84.8% for mortality. Their model was accurate enough to provide doctors with the tools to stratify patients in limited-resource and time-bound scenarios.</ns0:p></ns0:div> <ns0:div><ns0:head>Current Work</ns0:head><ns0:p>Data sets Test period Objective(Covid-19patients in the hospital) <ns0:ref type='bibr' target='#b23'>Manca et al. (2020)</ns0:ref> Lombardy, Italy ICU hospital admission 21 Feb 2020-27June 2020 Predict ICU beds and mortality rate <ns0:ref type='bibr' target='#b11'>Goic et al. (2021)</ns0:ref> Chile official covid-19 data May 20th 2020-July 28th 2020 Forecast in the short-term, ICU beds availability <ns0:ref type='bibr' target='#b28'>Pourhomayoun and Shakibi (2021)</ns0:ref> Worldwide Covid data from 146 countries December 1, 2019 -February 5th, 2020 Predict the mortality risk in patients <ns0:ref type='bibr' target='#b7'>Fernandes et al. (2021)</ns0:ref> S&#227;o Paulo COVID-19 hospital admission March 1 2020-28 June2020 Predict the risk of developing critical conditions <ns0:ref type='bibr' target='#b33'>Yu et al. (2021)</ns0:ref> Michigan Covid 19 hospital data 1 Feb 2020-4 May 2020 Predict the need for mechanical ventilation and mortality . <ns0:ref type='bibr' target='#b16'>Ikemura et al. (2021)</ns0:ref> Montefiore Medical Center COVID 19 data March 1 2020 -July 3 2020 Predict patients' chances of surviving SARS-CoV-2 infection <ns0:ref type='bibr' target='#b21'>Li et al. (2020b)</ns0:ref> Stony Brook University Hospital COVID hospital data 7 February 2020-4 May 2020.</ns0:p><ns0:p>Predict ICU admission and in-hospital mortality .</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1. 
Summary of existing works</ns0:head><ns0:p>Existing models used in the literature perform very well for their respective purposes, however, they have a downside in that they are difficult to interpret. The model lacks interpretability on which patient attributes it uses when making a decision (ICU admission and mortality). Existing models use various approaches, but the majority of them use neural network models, which are excellent at achieving good results, but their predictions are not traceable. Tracing a prediction back to which features are significant is difficult, and there is no comprehension of how the output is generated. Therefore, this paper proposes the use of an interpretable neural network approach to predict ICU admission likelihood and mortality rate in COVID-19 patients. It employs a deep learning algorithm that can interpret how the model makes decisions and which features the model selects in making the decision. The model has outstanding and comparable results to other neural network models in the literature. The proposed model can be utilized to generate better outcomes when compared to previously published models.</ns0:p></ns0:div> <ns0:div><ns0:head>METHOD</ns0:head><ns0:p>This paper proposes a high-performance and interpretable deep tabular learning architecture, TabNet, that exploits the benefits of sequential attention (following in a logical order or sequence) to choose features at each decision step which enables interpretability and more efficient learning as the learning capacity is used for the most salient features from the input parameters. The degree to which a human can comprehend the reason for a decision is known as interpretability. The higher the interpretability of the machine or deep learning model, the easier it is for someone to understand why particular decisions or predictions were made. Although neural network models are known to produce excellent results, they have the drawback of being a black box, which means that their predictions are not traceable. It is difficult to trace a prediction back to which features are important, and there is no understanding of how the output was obtained. The interpretabilty here denotes the ability for the model to interpret its decision and shows the features that are the most important in predicting ICU admission and mortality of COVID-19 patients <ns0:ref type='bibr' target='#b10'>(Ghiringhelli, 2021)</ns0:ref> .This section starts by describing how the input data has been pre-processed for the proposed learning model. Then the different components of the proposed model, and the steps it takes to arrive at a decision to predict ICU admission and mortality has been discussed. Finally, the evaluation metrics used to evaluate the model is analyzed. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Data Preprocessing</ns0:head><ns0:p>It is important to pre-process the data before applying it to a machine-learning algorithm. Many preprocessing techniques were applied with each serving a specific purpose. The various pre-processing steps have been discussed below. 
Various sampling methods were experimented with which included Adaptive Synthetic(ADASYN), and SMOTE to deal with the imbalance in the class labels.</ns0:p><ns0:p>ADASYN <ns0:ref type='bibr' target='#b14'>(He et al., 2008)</ns0:ref> </ns0:p><ns0:formula xml:id='formula_0'>G = (m l &#8722; m s ) &#215; &#946; (1)</ns0:formula><ns0:p>where G in is the total number of synthetic data examples for the minority class that must be produced, m l is the minority class, m s is the majority class , the &#946; is used to determine the desired balance level between 0 and 1. SMOTE <ns0:ref type='bibr' target='#b4'>(Chawla et al., 2002)</ns0:ref> is a technique for balancing class distribution by replicating minority class examples at random:</ns0:p><ns0:formula xml:id='formula_1'>x &#8242; = x + rand(0, 1) &#215; |x &#8722; x k | (2)</ns0:formula><ns0:p>where x' is the new generated synthetic data, x is the original data, x k is the kth attribute of the data, and rand represents a random number between 0 and 1.</ns0:p><ns0:p>Feature Extraction is a technique for reducing the number of features in a dataset by generating new ones from existing ones. Principal Component Analysis (PCA), Fast Independent Component Analysis (Fast ICA), Factor Analysis, t-Distributed Stochastic Neighbor Embedding t-SNE(t-SNE), and UMAP are the techniques used for the current dataset. When PCA <ns0:ref type='bibr' target='#b0'>(Abdi and Williams, 2010)</ns0:ref> is used, the original data is taken as input and it gives an output of a mix of input features which can better summarize the original data distribution such that its original dimensions are reduced. By looking at pair-wise distances, PCA can maximize variances while minimizing reconstruction error:</ns0:p><ns0:formula xml:id='formula_2'>cov(X, y) = ( 1 n &#8722; 1 ) n &#8721; i=1 (X i &#8722; x)(Y i &#8722; y) (3)</ns0:formula><ns0:p>where x is the input, and y is the output. cov(x,y) is the covariance matrix after which it is transformed to a new subspace which is y = W'x FAST ICA <ns0:ref type='bibr' target='#b15'>(Hyvarinen, 1999</ns0:ref>) is a linear dimensionality reduction approach that uses the principle of negentropy from maximization of non-gaussian technique as input data and attempts to correctly classify each of them (deleting all the unnecessary noise):</ns0:p><ns0:formula xml:id='formula_3'>&#948; = lg( &#8721; N i=1 )y i .y T i MSE ) (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>)</ns0:formula><ns0:p>where MSE is the mean squared error, and y is the output.</ns0:p><ns0:p>Factor analysis <ns0:ref type='bibr' target='#b12'>(Gorsuch, 2013</ns0:ref>) is a method for compressing a large number of variables into a smaller number of factors. This method takes the highest common variance from all variables and converts it to a single score. t-SNE <ns0:ref type='bibr' target='#b31'>(Wattenberg et al., 2016)</ns0:ref> is a non-linear dimensionality reduction algorithm for high-dimensional data exploration. It converts multi-dimensional data into two or more dimensions that can be visualized by humans:</ns0:p><ns0:formula xml:id='formula_5'>C = KL(P||Q) &#8721; i &#8721; j p i j log( p i j q i j ) (5)</ns0:formula><ns0:p>to pre-process the input data for machine learning. 
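As a concrete illustration of the oversampling and feature-extraction steps described above, the following is a minimal sketch using the imbalanced-learn and scikit-learn packages. The array names, the random seed, and the number of components are illustrative assumptions rather than values taken from the experiments.

```python
# Minimal sketch of the oversampling (Eqs. 1-2) and feature-extraction steps.
# Assumes the clinical variables are already in a numeric matrix X with binary labels y.
import numpy as np
from imblearn.over_sampling import ADASYN, SMOTE
from sklearn.decomposition import PCA, FastICA

SEED = 42  # assumed seed, not taken from the paper

def balance(X, y, method="adasyn"):
    """Oversample the minority class (ICU admitted / death)."""
    sampler = ADASYN(random_state=SEED) if method == "adasyn" else SMOTE(random_state=SEED)
    return sampler.fit_resample(X, y)

def extract_features(X, method="fastica", n_components=10):
    """Project the 43 clinical variables onto a smaller set of components."""
    if method == "fastica":
        reducer = FastICA(n_components=n_components, random_state=SEED)
    else:
        reducer = PCA(n_components=n_components, random_state=SEED)
    return reducer.fit_transform(X)

# Usage on a synthetic, deliberately imbalanced data set
X = np.random.randn(200, 43)
y = np.array([0] * 170 + [1] * 30)
X_bal, y_bal = balance(X, y, "adasyn")
X_red = extract_features(X_bal, "fastica", n_components=10)
print(X_bal.shape, np.bincount(y_bal), X_red.shape)
```

Either sampler returns a rebalanced copy of the data, which is then passed through the chosen extractor before training; in the experiments reported later, ADASYN and Fast ICA turned out to be the best-performing combination.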
The theoretical foundation for UMAP is focused on Riemannian geometry and algebraic topology:</ns0:p><ns0:formula xml:id='formula_6'>p i | j = e &#8722; d(x i , x j ) &#8722; p i &#963; i (6)</ns0:formula><ns0:p>where p represents the distance from each ith data point to the nearest jth data point.</ns0:p><ns0:p>Before feeding all this information to the TabNet to act on, the dataset needs to be split into a training set and a testing set. The training dataset is used to train the model and the testing dataset is used to evaluate the models performance.The Stratified K-fold <ns0:ref type='bibr' target='#b34'>(Zeng and Martinez, 2000)</ns0:ref> cross-validation was implemented which splits the data into 'k' portions. In each of 'k' iterations, one portion is used as the test set, while the remaining portions are used for training. The fold used here was k=5 which means that the dataset was divided into 5 folds with each fold being utilized once as a testing set, with the remaining k -1 folds becoming the training set. This ensures that no value in the training and test sets is over-or under-represented, resulting in a more accurate estimate of performance/error.</ns0:p></ns0:div> <ns0:div><ns0:head>Architecture of TabNet model</ns0:head><ns0:p>A TabNet Model, <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref> From Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, the input features are Batch Normalized (BN) and passed to the Feature Transformer, where it goes through four layers of a Fully Connected layer (FC), a Batch Normalization layer (BN), and a Gated Linear Unit (GLU) in that order. The Feature Transformer produces n(d) and n(a). The Feature Transformer comprises two decision steps. A Fully Connected (FC) layer is a type of layer where every neuron is connected to every other neuron.</ns0:p><ns0:formula xml:id='formula_7'>F C B N G L U F C B N G L U F C B N G L U F C B N G L U</ns0:formula></ns0:div> <ns0:div><ns0:head>Feature Transformer</ns0:head><ns0:p>The output from the FC layer should always be Batch Normalized. Batch normalization is used to transform the input features to a common scale. It can be represented mathematically as:</ns0:p><ns0:formula xml:id='formula_8'>BN = x &#8722; &#181; b &#8730; &#969; 2 + &#949; (7)</ns0:formula><ns0:p>where x represents the input features, &#181; b denotes the mean of the features and &#969; 2 denotes the variance.</ns0:p><ns0:p>The fully connected layer is the combination of all the inputs with the weights, which can be represented mathematically as:</ns0:p><ns0:formula xml:id='formula_9'>FC = W (x)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where x denotes the input features, and W denotes the weights.</ns0:p><ns0:p>These operations are done sequentially, starting from equation 9, then to equation 10 and finally to equation 11</ns0:p><ns0:formula xml:id='formula_10'>FC = W (x) (9) BN = x &#8722; &#181; b &#8730; &#969; 2 + &#949;<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>The Gated Linear Unit (GLU) is simply the sigmoid of x:</ns0:p><ns0:formula xml:id='formula_11'>GLU = &#963; (x)<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>The Fully Connected layer performs its operations, then its output is fed into the Batch Normalization layer to perform its operations. 
Finally, the output from the Batch Normalization is fed into the Gated Linear Unit, all in a sequential manner.</ns0:p><ns0:p>The decision output from the Feature Transformers n(d) is also aggregated and embedded in this form, and a linear mapping is applied to get the final decision. As a result, they are made up of two shared decision steps and two independent decision steps. A residual connection connects the shared steps with the independent steps and they are summed together via the &#8853; operation, which is a direct summation block.</ns0:p><ns0:p>Since the same input features in the dataset are used in distinct steps, the layers are shared between two decision steps for robust learning. By ensuring that the variation across the network does not change significantly, normalization with a square root of 0.5 helps to stabilize learning which produces the outputs of n(d) and n(a) as mentioned previously.</ns0:p><ns0:p>From the Feature Transformer, and after the Split layer, the Attentive Transformer is applied to determine the various features and their values. The feature importance for that step is combined with the other steps and is made up of four layers: FC, BN, Prior Scales, and Sparse Max in sequential order.</ns0:p><ns0:p>The split layer splits the output and obtains p[i-1], which is then passed through the Fully Connected (FC) layer and the Batch Normalization (BN) layer, whose purpose is to achieve a linear combination of features allowing higher-dimensional and abstract features to be extracted. </ns0:p><ns0:formula xml:id='formula_12'>M[i] = Sparsemax(P[i &#8722; 1] * h i (p[i &#8722; 1])<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>where the h i represents the summation of the FC and the BN layer, the Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the weights of all features of each sample to 1 <ns0:ref type='bibr' target='#b32'>(Yoon et al., 2018)</ns0:ref>, allowing TabNet to employ the most useful features for the model in each decision step. M[i] then updates p[i]:</ns0:p><ns0:formula xml:id='formula_13'>P[i] = i &#8719; j=1 (&#947; &#8722; M j )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>If &#947; is set closer to 1, the model uses different features at each step; if &#947; is greater than 1, the model uses the same features in multiple steps. The Sparse matrix is similar to the softmax, except that instead of all features adding up to 1, some will be 0 and others will add up to 1. The Sparse Max is expressed as:</ns0:p><ns0:formula xml:id='formula_14'>n i=1 sparsemax(x) i (<ns0:label>14</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>)</ns0:formula><ns0:p>This makes it possible to choose features on an instance-by-instance basis with various features being considered at various steps. These are then fed into the Mask layer, which aids in the identification of the desired features. The Feature Transformer is applied again, and the resulting output is split to the Attentive Transformer. The split layer divides the output from the feature transformer into two parts which are d[i], and a[i]:</ns0:p><ns0:formula xml:id='formula_16'>d[i], a[i] = f i (M[i] * f ) (15)</ns0:formula><ns0:p>where d[i] is used to calculate the final output of the model, and a[i] is used to determine the mask of the next step. 
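To make the mask computation of Eqs. (12)-(14) concrete, the following is a simplified NumPy sketch of sparsemax and of one attentive-transformer step. The random projection standing in for the FC + BN block h_i, the batch size, and the feature dimension are illustrative assumptions; this is not the pytorch-tabnet implementation itself.

```python
# Simplified sketch of the attentive-transformer math: sparsemax turns the FC+BN
# output into a sparse feature mask, and the prior scales P[i] discount features
# that earlier decision steps have already used.
import numpy as np

def sparsemax(z):
    """Sparsemax over the last axis (Martins & Astudillo, 2016)."""
    z = np.asarray(z, dtype=float)
    z_sorted = -np.sort(-z, axis=-1)                  # sort descending
    k = np.arange(1, z.shape[-1] + 1)
    z_cumsum = np.cumsum(z_sorted, axis=-1)
    support = 1 + k * z_sorted > z_cumsum             # features kept in the support
    k_z = support.sum(axis=-1, keepdims=True)         # size of the support
    tau = (np.take_along_axis(z_cumsum, k_z - 1, axis=-1) - 1) / k_z
    return np.maximum(z - tau, 0.0)                   # rows sum to 1, many entries are exactly 0

def attentive_step(a_prev, prior, gamma=1.3):
    """One mask update: M[i] = sparsemax(P[i-1] * h_i(a[i-1])), Eqs. (12)-(13)."""
    rng = np.random.default_rng(0)                    # fixed random projection stands in for FC + BN
    W = rng.normal(size=(a_prev.shape[-1], a_prev.shape[-1]))
    mask = sparsemax(prior * (a_prev @ W))
    prior_next = prior * (gamma - mask)               # Eq. (13): discount already-used features
    return mask, prior_next

a = np.random.default_rng(1).normal(size=(4, 8))      # a[i-1] for a batch of 4 samples, 8 features
prior = np.ones_like(a)                               # P[0] = 1 for every feature
m1, prior = attentive_step(a, prior)
m2, _ = attentive_step(a, prior)
print(m1.sum(axis=1))    # each mask row sums to 1
print((m1 == 0).mean())  # a sizeable fraction of features is zeroed out
```

The printed mask rows sum to one while containing many exact zeros, which is the property that lets TabNet attend to a small, instance-specific subset of features at each decision step.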
ReLU activation is then applied:</ns0:p><ns0:formula xml:id='formula_17'>f (x) = max(0, d[i])<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>where f(x) returns 0 if it receives any negative input, but for any positive value x, it returns that value back. The contribution of the ith step to obtain the final result can be expressed as:</ns0:p><ns0:formula xml:id='formula_18'>&#966; b [i] = N d c=1 Relu(d[i], c[i]) (17)</ns0:formula><ns0:p>where &#966; b <ns0:ref type='bibr'>[i]</ns0:ref> indicates the features that are selected at the ith step.</ns0:p><ns0:p>To map the output dimension, the outputs of all decision steps are summed and passed through a Fully Connected layer. Combining the Masks at various stages necessitates the use of a coefficient that can account for the relative value of each step in the decision-making process. The importance of the features can be expressed using the equation: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_19'>M ag g &#8722; b , j = N s te ps i=1 &#966; b [i]M b , j D j=1 N s te ps i=1 &#966; b [i]M b, j<ns0:label>(18)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>TabNet decision making Process</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The mask value for a given sample indicates how significant the corresponding feature is for that sample. Brighter columns indicate the features that contribute a lot to the decision-making process. It can be seen that the majority values for features other than 0, 1, 4, 5, and 8 are close to '0,' indicating that the TabNet model correctly selects the salient features for the output. We can then interpret which features the model selects enhancing the interpretability of the model. With this, the features that contribute to individuals being admitted to the ICU and dying of the COVID-19 disease can be ranked accordingly.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>Evaluation metrics are the metrics used to evaluate the model to determine if it is working as well as it should be. The evaluation metrics used are: Accuracy is the proportion of correctly expected observations to all observations.</ns0:p><ns0:formula xml:id='formula_20'>Confusion</ns0:formula><ns0:formula xml:id='formula_21'>Accuracy = T P + T N T P + FP + FN + T N (19)</ns0:formula><ns0:p>Precision is the ratio of correctly predicted positive observations to total predicted positive observations.</ns0:p></ns0:div> <ns0:div><ns0:head>Precision = T P T P + FP (20)</ns0:head><ns0:p>Recall is the ratio of correctly expected positive observations to all observations in the actual class.</ns0:p></ns0:div> <ns0:div><ns0:head>Recall = T P T P + FN (21)</ns0:head><ns0:p>F1 Score is the weighted average of Precision and Recall.</ns0:p><ns0:formula xml:id='formula_22'>F1Score = 2(Precision &#215; Recall) Recall + Precision (22)</ns0:formula><ns0:p>The various mathematical notations used in this section are shown in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>To demonstrate the effectiveness of the method in predicting ICU admission and mortality of patients, the different TabNet model hyperparameters, dimensionality reduction techniques, and oversampling methods have been thoroughly examined and contrasted. 
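Before turning to the data sets, the following is a brief sketch of how the confusion-matrix-based metrics of Eqs. (19)-(22), together with the AUC used throughout the experiments, can be computed with scikit-learn; the label and probability vectors are placeholders.

```python
# Sketch of the evaluation metrics from a confusion matrix; y_true / y_pred / y_prob
# are placeholder arrays standing in for the test labels and model outputs.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = ICU admitted / death, 0 = otherwise
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                    # hard labels from the classifier
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]    # predicted probabilities for class 1

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP/TN/FP/FN:", tp, tn, fp, fn)
print("Accuracy :", accuracy_score(y_true, y_pred))   # (TP+TN)/(TP+TN+FP+FN), Eq. (19)
print("Precision:", precision_score(y_true, y_pred))  # TP/(TP+FP), Eq. (20)
print("Recall   :", recall_score(y_true, y_pred))     # TP/(TP+FN), Eq. (21)
print("F1 score :", f1_score(y_true, y_pred))         # harmonic mean of the two, Eq. (22)
print("AUC      :", roc_auc_score(y_true, y_prob))    # used to compare models in the results
```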
The section starts with the description of datasets followed by the statistical analysis, and it concludes with results and discussion.</ns0:p></ns0:div> <ns0:div><ns0:head>Description of data sets</ns0:head><ns0:p>There are two data sets used for this analysis, one for the ICU likelihood and the other for the mortality rate. The ICU data set consists of 1020 individuals with 43 features and the mortality data set consists of 1106 individuals with 43 features. The features consist of vital signs, laboratory tests, symptoms, and demographics of these individuals. There are two labels associated with the ICU dataset which are, ICU admitted (label 0), and ICU non-admitted (label 1). Similarly, there are two labels associated with the death dataset which are, non-death (label 0), and death (label 1). The datasets are unbalanced with distribution ratios of 75.5:24.5, and 86.1:13.9 for the ICU and mortality datasets, respectively. Table <ns0:ref type='table'>3</ns0:ref> shows the summary of the description of the datasets. </ns0:p></ns0:div> <ns0:div><ns0:head>Table 3. Description of Datasets Statistical Analysis</ns0:head></ns0:div> <ns0:div><ns0:head>ICU Admission</ns0:head><ns0:p>There were more males than females in the study population. The non-Hispanic ethnicity forms the most individuals, and the Caucasian race has the highest number of individuals in the population. Hypertension is the co-morbidity that most individuals in the study population presented, with cancer being the least.</ns0:p><ns0:p>The average age of individuals that needed the ICU is higher than those that did not. Regarding the vital signs, heart rate is the sign that showed quite a big difference on average, with the individuals needing ICU having a higher heart rate than their fellow counterparts. Procalcitonin, Ferritin and C-reactive protein are the laboratory findings that showed the biggest average difference, with individuals needing ICU showing higher levels of these. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For the symptoms, loss of smell and loss of taste were the symptoms that most individuals that got admitted to the ICU acquired whereas fever and cough were the symptoms that the least number of individuals acquired to be admitted to the ICU. Overall, over 70% of individuals acquired a symptom of disease at admission to the ICU. Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> shows the relationship between symptoms and ICU admission by looking at the distribution of patients who were admitted to the ICU and whether or not they had a symptom. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Symptoms</ns0:head><ns0:p>Computer Science with individuals that died showing higher levels of these. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Loss of smell and loss of taste were the symptoms that most individuals that died acquired, whereas fever and cough were the symptoms that the least number of individuals that died acquired. Overall, at least 50% of individuals acquired a particular symptom before they died. </ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Settings</ns0:head></ns0:div> <ns0:div><ns0:head>Hyperparameters of TabNet</ns0:head><ns0:p>The TabNet model has a considerable number of hyperparameters which can be tuned to improve performance. 
The TabNet comes with some default parameters which works well, but for certain use cases, different values of certain hyperparameters yield better performances. before performing an early stoppage. If patience is set to 0, then no early stopping will be performed.</ns0:p><ns0:p>Momentum for batch normalization typically ranges from 0.01 to 0.4. n shared is the number of shared Gated Linear Units at each step. The usual values range from 1 to 5. n independent is the number of independent Gated Linear Units layers at each step. The usual values range from 1 to 5. Gamma is the coefficient of feature re-usage in the masks. Its values range from 1.0 to 2.0. A value close to 1 will make mask selection less correlated between layers. n steps is the number of steps in the architecture (usually between 3 and 10). lambda sparse is the extra sparsity loss coefficient. The bigger the coefficient, the sparser the model will be in terms of feature selection <ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Analysis</ns0:head><ns0:p>We present a summary of our experimental results and analysis on two categories: ICU Admission and Mortality.</ns0:p></ns0:div> <ns0:div><ns0:head>ICU Admission</ns0:head></ns0:div> <ns0:div><ns0:head>Results of Hyperparameter tuning using Tabnet</ns0:head><ns0:p>The hyper parameters of the model have been tuned using various values of each parameter. The final table which has the best hyper parameters in predicting ICU admission for the various metrics are shown here, the rest of the tables for the individual hyper parameters can be seen in the appendix section. In varying the width of decision prediction layer (nd), the value of nd was changed from a range of 2 to 64 to determine the best output. The results were the best when nd was set to 64.</ns0:p><ns0:p>In varying number of steps in the architecture (nsteps), the value of nsteps was varied from 3 to 12 to determine the best output. A value of 3 gave the best results. Changing the nsteps to numbers between 8 to 12 showed a slight decrease in performance which indicates that the performance will not be enhanced by increasing the number of steps.</ns0:p><ns0:p>In varying gamma. Changing the gamma shows a very haphazard trend in performance, the best results are given when the gamma is 2.0. Increasing the gamma does not improve the results. Thus, gamma was not increased any further.</ns0:p><ns0:p>In varying number of independent gates (nindependent), the number of independent gates was varied from 2 to 7. All the nindependent gates obtained similar results converging to the best result with 2. Thus, 2 independent gates give the best results.</ns0:p><ns0:p>In varying number of shared gates (nshared), the nshared gates was varied from 2 to 7, with 2 gates achieving the best results. Increasing the gates did not improve the results.</ns0:p><ns0:p>In varying values of momentum,momentum values of 0.2 and 0.3 displayed the best results, with higher values of momentum producing poorer results.</ns0:p><ns0:p>In varying lambda sparse, the values of lambda sparse was varied from 0.001 to 0.005. The results of the model showed a negative correlation with the value of lambda sparse. The best result was achieved with a value of 0.001.</ns0:p><ns0:p>Two Different types of masks were used, the entmax and the sparsemax. 
It was concluded that the mask Manuscript to be reviewed</ns0:p><ns0:p>Computer Science type of entmax gives a better result across the board in all the performance metrics.</ns0:p><ns0:p>The number of epochs and stopping condition was also experimented to determine the impact it has on the performance of the TabNet model. The results are generally better when the stopping condition is defined. The best results are achieved with an epoch of 150, and patience greater than 60. The results do not change when the patience is greater than 60.</ns0:p><ns0:p>Regarding the dimensionality reduction methods, 5 methods were experiemented with to check its effect on the performance of the TabNet baseline model. The results using the Fast ICA has the best results with it falling short only on recall to PCA. The most important parameter in the table is the AUC, in which the Fast ICA has a score of 83.6.</ns0:p><ns0:p>Both PCA method and the Fast ICA methods yielded similar scores on the impact of different dimensionality reduction methods on the performance of the best TabNet model. For the AUC parameter, the Fast ICA has the highest score of 86.4.</ns0:p><ns0:p>Different oversampling methods on the performance of the TabNet Baseline model were experimented with and the results using the ADASYN method has the best outcome in all the measured performance metrics.</ns0:p><ns0:p>The impact of different oversampling methods on the performance of the TabNet Best model was determined. The results using the ADASYN method has the best results in all the measured performance metric.</ns0:p><ns0:p>The tables showing the various experimentations of the individual hyperparameter explained above can be found in the appendix section. Figure <ns0:ref type='figure'>3</ns0:ref> shows the trend of the various hyperparameter during the experiments and tuning. Figure <ns0:ref type='figure'>4</ns0:ref> further shows the feature importance masks for predicting ICU admission. TabNet features a feature value output called Masks that may be used to quantify feature importance and indicate if a feature is chosen at a particular decision step in the model. Each row represents the masks for each input element and the column represents a sample from the dataset. The brighter the color, the higher the value. In predicting ICU admission, two of the masks are shown as an example in Figure <ns0:ref type='figure'>4</ns0:ref>, where the features which the respective masks are paying attention to can be seen in bright colors. The brighter a grid, the more important that particular feature is for the particular Mask. The number of grids lighting up corresponds to the number of features that are being paid attention to by the particular Mask. It can be seen that Mask 0 is paying the most attention to the earlier features, with an emphasis on the 20th feature.</ns0:p><ns0:p>Mask 1 is paying the most attention to the later features, with the most attention given to features 32 and 38. The average feature output among all the Masks is used to arrive at the final decision. 
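A minimal sketch of the TabNet configuration and mask inspection described above is given below, assuming the pytorch-tabnet package. The data arrays and split are placeholders, and because the best values reported in the tuning text and in the final-model description differ slightly, the hyperparameter values shown are representative of the ICU setting (entmax mask, 150 epochs, batch size 128, patience 60) rather than definitive.

```python
# Sketch of training a TabNet classifier and inspecting its masks / feature importances.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

X_train = np.random.rand(800, 43).astype(np.float32)   # placeholder feature matrix (43 variables)
y_train = np.random.randint(0, 2, 800)                  # placeholder binary labels
X_valid = np.random.rand(200, 43).astype(np.float32)
y_valid = np.random.randint(0, 2, 200)

clf = TabNetClassifier(
    n_d=64, n_a=64,              # width of the decision / attention layers
    n_steps=3,                   # number of decision steps
    gamma=1.3,                   # feature re-usage coefficient in the masks
    n_independent=2, n_shared=2, # independent and shared GLU layers
    momentum=0.3,                # batch-normalization momentum
    lambda_sparse=1e-3,          # sparsity regularization
    mask_type="entmax",          # "sparsemax" was the better choice for mortality
)
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["auc"],
    max_epochs=150, patience=60, batch_size=128,
)

# Global feature importances (sum to 1, cf. Eq. 18) and per-step masks
print(clf.feature_importances_.round(3))
explain_matrix, masks = clf.explain(X_valid)
print(masks[0].shape)   # mask of decision step 0: (n_samples, n_features)
```

In this package, feature_importances_ aggregates the step-wise masks into a global ranking, and explain() returns the per-step masks, which correspond to the mask heat maps in Figure 4 and the global importance graph in Figure 5.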
Manuscript to be reviewed Manuscript to be reviewed </ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Mortality</ns0:head><ns0:p>We now present a summary of our experimental results and analysis for the Mortality category.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of Hyperparameter tuning using Tabnet</ns0:head><ns0:p>Similarly in predicting morality,the hyper parameters of the model have been tuned using various values of each parameter. The final table which has the best hyper parameters in predicting mortality for the various metrics are shown here, the rest of the tables for the individual hyper parameters can be seen in the appendix section. In varying the width of decision prediction layer (nd), the value of nd was changed from a range of 2 to 64 to determine the best output. The results were the best when nd was set to 8.</ns0:p><ns0:p>In varying number of steps in the architecture (nsteps), the value of nsteps was varied from 3 to 12</ns0:p></ns0:div> <ns0:div><ns0:head>23/40</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to determine the best output. A value of 3 gave the best results. nsteps of 10 to 12 did not show any improvement in results, which indicates that the performance of the model will not be improved by increasing the number of steps.</ns0:p><ns0:p>In varying gamma, the values of gamma was varied from 1.3 to 2.2. The performance of the model shows a very haphazard trend with the changing values. A gamma of 2.0 gives the best results,</ns0:p><ns0:p>In varying number of independent gates (nindependent),the number of independent gates was varied from 2 to 7. The number of gates which gave the best output is 2, and increasing the number of gates decreased the accuracy of the results.</ns0:p><ns0:p>In varying number of shared gates (nshared), the number of shared gates was varied from 2 to 7. The number of shared gates of 2 gave the best results and increasing the number of gates did not improve the results. Although there is a spike in results when the number of shared gates is 5, the performance reduces when it is increased further.</ns0:p><ns0:p>The values of momentum was varied from 0.02 to 0.3, with 0.02 giving the best results. Increasing the value of the momentum gave poorer results.</ns0:p><ns0:p>In varying lambda sparse, the values of lambda sparse was varied from 0.001 to 0.005, with 0.001 achieving the best results. The value of the lambda sparse had a negative correlation with the performance of the model.</ns0:p><ns0:p>The different masktypes used were sparsemax, and entmax. The output using the sparsemax had a better result compared to the entmax</ns0:p><ns0:p>The number of epochs and stopping condition was also experimented to determine the impact it has on the performance of the TabNet model. The results are generally better when the stopping condition is defined. The best results are achieved with an epoch of 150, and patience greater than 60. The results do not change when the patience is greater than 60.</ns0:p><ns0:p>Regarding the dimensionality reduction methods, 5 methods were experiemented with to check its effect on the performance of the TabNet baseline model. The results using the Fast ICA has the best results with it falling short only on recall to PCA. 
The most important parameter in the table is the AUC, in which the PCA has a score of 94.0.</ns0:p><ns0:p>The Fast ICA yielded the best score on the impact of different dimensionality reduction methods on the performance of the best TabNet model. For the AUC parameter, the Fast ICA has the highest score of 86.4.</ns0:p><ns0:p>Different oversampling methods on the performance of the TabNet Baseline model were experimented with and the results using the ADASYN method has the best outcome in all the measured performance metrics.</ns0:p><ns0:p>The impact of different oversampling methods on the performance of the TabNet Best model was determined. The results using the ADASYN method has the best results in all the measured performance metric.</ns0:p><ns0:p>The tables showing the various experimentations of the individual hyperparameter explained above can be found in the appendix section.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_16'>13</ns0:ref> shows the performance of the TabNet Baseline, and TabNet Best models with FastICA dimensionality reduction, and ADASYN oversampling method. The results of the TabNet Best is the best amongst the other baseline models. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science As the model is learning (going through the epochs), the error reduces and hence the loss reduces. In predicting mortality also,the ROC curve which demonstrates a trade-off between the true positive rate (TPR) and the false positive rate (FPR)is plotted due to the imbalance of the dataset. The confusion matrix also gives a sense of the specific number of patients that were correctly classified as dying or not dying, and the ones that were incorrectly classified.</ns0:p><ns0:p>A large area under the curve indicates high recall and precision, with high precision indicating a low false-positive rate and high recall indicating a low false-negative rate. High scores for both indicate that the classifier is producing correct (high precision) results as well as a majority of all positive outcomes (high recall). From Figure <ns0:ref type='figure' target='#fig_1'>15</ns0:ref>, it can be seen that there is a large area under the curve, indicating that the model is functioning well. With the best performing model here also achieving an AUC of 96.30%, the model can distinguish between most of the patients who died, and the patients that did not die. It can be seen from the confusion matrix in Figure <ns0:ref type='figure' target='#fig_11'>16</ns0:ref> that the model correctly predicted 89 individuals who died from the virus and 78 individuals who did not die from the virus. 7 individuals were incorrectly classified as not dying from the virus when they died, and 1 individual was classified as dead when the individual did not die from the virus.</ns0:p><ns0:p>The proposed model does an excellent job in predicting mortality. Next, the proposed model will be compared with the baseline models existing in the literature.</ns0:p></ns0:div> <ns0:div><ns0:head>Model</ns0:head><ns0:p>AUC F1Score Accuracy Recall Precision Li, Xiaoran, et al (baseline) <ns0:ref type='bibr' target='#b21'>Li et al. (2020b)</ns0:ref> Manuscript to be reviewed The various hyperparameters were also tuned, and the best results of each was combined with the dimensionality technique and then oversampled to obtain the final result for all metrics. Results achieved were, an AUC of 88.3%, F1 score of 89.7%, the accuracy of 88.7%, recall of 93.3%, and precision of 86.4% for predicting ICU. 
In predicting mortality, results of 96.3% AUC, 95.8% F1 score, accuracy of 96.0%, recall of 99.8%, and precision of 91.8% were obtained. The reason why the results in predicting mortality achieves higher performances than the one in predicting ICU admission could be because sometimes individuals that need ICU admission, do not get the opportunity due to lack of beds available at that time because of large volumes of individuals present at the hospital needing the same resources.</ns0:p><ns0:note type='other'>Computer Science Death No Death Predicted</ns0:note><ns0:p>In the case of mortality, when an individual dies, the individual dies, there is no middle ground, so it is relatively easier to distinguish mortality than ICU admission.</ns0:p><ns0:p>A confusion matrix was constructed to show specifics, where there were more false positives than false negatives in both determining ICU admission and mortality. The reason for more false positives than false negatives could be because doctors have to make a quick and instant guess as to which patient needs the ICU at that time by simply looking at the physical conditions of the patient present. Due to the lack of time and resources, they depend on only those physical symptoms to make a decision, so patients who Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the ICU simply because they do not show physical deterioration at the time of decision making.</ns0:p><ns0:p>The process by which the proposed model makes decisions to determine which features are most important was also determined. The model uses Masks which shows the features they were paying the most attention to in the heat map, which can be seen here in Figure <ns0:ref type='figure'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_1'>11</ns0:ref>. This was then used to construct the global feature importance graph, which is easier to understand, where the longer the bar, the more importance it has in determining if a patient with COVID-19 is likely to be sent to the ICU or if the patient is likely to die from the disease.</ns0:p><ns0:p>The findings from our model suggesting the most important features in predicting ICU admission and mortality, has been supported by other literature's, these are shown below.</ns0:p><ns0:p>Ferritin is the symptom of the patient which is the most important in determining if the patient needs ICU or not. Ferritin represents how much iron is contained in the body, and if a ferritin test reveals a lowerthan-normal ferritin level in the blood, this may indicate that the body's iron stores are low. This is a high indication of iron deficiency which can cause anaemia. <ns0:ref type='bibr' target='#b6'>(Dinevari et al., 2021)</ns0:ref>. Ferritin levels were found to be elevated upon hospital admission and throughout the hospital stay in patients admitted to the ICU by COVID-19. In comparison to individuals with less severe COVID-19, ferritin levels in the peripheral blood of patients with severe COVID-19 were shown to be higher. As a result, serum ferritin levels were found to be closely linked to the severity of COVID19 <ns0:ref type='bibr' target='#b5'>(Dahan et al., 2020)</ns0:ref>. 
Early analysis of ferritin levels in patients with COVID-19 might effectively predict the disease severity <ns0:ref type='bibr' target='#b3'>(Bozkurt et al., 2021)</ns0:ref>.</ns0:p><ns0:p>The magnitude of inflammation present at admission of COVID-19 patients, represented by high ferritin levels, is predictive of in-hospital mortality <ns0:ref type='bibr' target='#b22'>(Lino et al., 2021)</ns0:ref>. Studies indicate that Chronic obstructive pulmonary disease (COPD) is the symptom that shows the most importance in predicting mortality among COVID-19 patients. This COPD is a chronic inflammatory lung condition in which the lungs' airflow is impeded. Breathing difficulties, cough, mucus (sputum) production, and wheezing are all symptoms. Since COVID-19 is a disease that affects the respiratory system, it makes sense that a disease like COPD which also affects the lungs could have devasting effects on a patient who contracts COVID-19. <ns0:ref type='bibr' target='#b8'>(Gerayeli et al., 2021)</ns0:ref>. Patients with Chronic Obstructive Pulmonary Disease (COPD) have a higher prevalence of coronary ischemia and other factors that put them at risk for COVID-19-related complications. The results of this study confirm a higher incidence of COVID-19 in COPD patients and higher rates of hospital admissions <ns0:ref type='bibr' target='#b13'>(Graziani et al., 2020)</ns0:ref>. While COPD was present in only a few percentage of patients, it was associated with higher rates of mortality <ns0:ref type='bibr' target='#b30'>(Venkata and Kiernan, 2020)</ns0:ref>. It can be observed that the top 3 highest predictors for mortality and ICU admission are different, which indicates that there are some features which can be seen in the later and more advanced stages of COVID (at the time of death). For example, <ns0:ref type='bibr' target='#b27'>Pardhan et al. (2021)</ns0:ref> observed that COPD is reported more often than asthma, suggesting that physicians in Sweden considered COPD to be a better predictor than asthma for detecting severe COVID-19 cases.</ns0:p><ns0:p>It can also be seen that shortness of breath had a high correlation with ICU admission but it was not among the top predictors for predicting ICU admission. This is because correlation looks at only the linear relationship between that feature and the target without considering other features. For example, a single feature can have a low correlation, but when combined with other features, it can offer a high predictive power, as in the case of the COPD.</ns0:p><ns0:p>The limitations of this study are that the sample size is small, with only about 1000 patients included in the study. The study was restricted to patients at Stony Brook University Hospital and conducted between 7 February, 2020 to 4 May, 2020.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This paper proposes a tabular, interpretable deep learning model to predict ICU admission likelihood and mortality of COVID-19 patients. The proposed model achieves this by employing a sequential attention mechanism that selects the features at each step of the decision-making process based on a sparse selection of the most important features such as patient demographics, vital signs, comorbidities, and laboratory discoveries.</ns0:p><ns0:p>ADASYN was used to balance the data sets, Fast ICA to extract useful features, and all the various hyperparameters tuned to improve results. 
The proposed model achieves an AUC of 88.3% for predicting ICU admission likelihood which beats the 72.8% reported in the literature. The proposed model also achieves an AUC of 96.3% for predicting mortality rate which beats the 84.4% reported in the literature.</ns0:p><ns0:p>The most important patient attributes for predicting ICU admission and mortality were also determined For future work, the study can be extended to include a lot more patients over a longer time frame from several hospitals. The proposed method can be combined with other machine learning methods for improved results. This study could be extended to include more diseases, allowing the healthcare system to respond more quickly in the event of an outbreak or pandemic.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>is a synthetic data generation algorithm that employs a weighted distribution for distinct minority class examples based on their learning difficulty, with more synthetic data generated for minority class examples that are more difficult to learn compared to minority class examples that are simpler to learn. This is expressed by:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. TabNet Model Architecture</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2provides an illustration of how the TabNet<ns0:ref type='bibr' target='#b1'>(Arik and Pfister, 2019</ns0:ref>) makes a decision(individual explainability). TabNet has a feature value output called Masks, which shows whether a feature is selected at a given decision step in the model and can be used to calculate the feature importance. The Masks for each input feature are represented by each row, and the column represents a sample from the data set. Brighter colors show a higher value. Consider Figure2, where 9 features ranging from feat 0 to feat 8 are shown. For the random sample at 3, the first feature is the one being heavily used, hence the brighter colour, and the sample at 6, three features have brighter colours,feature 0, 1 and 8, with 8 being the brightest, signifying the feature 8's output was heavily used for this sample.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. TabNet decision making process</ns0:figDesc><ns0:graphic coords='9,185.53,179.53,325.98,411.03' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>matrix describes a classification models output on a collection of test data for which the true values are known. The matrix compares the actual target values to the machine learning models predictions. It includes true positives (TP) which are correctly predicted positive values, indicating that the value of the real class and the value of the predicted class are both yes, True negatives (TN) which correctly estimates negative values, indicating that the real class value is zero and the predicted class value is zero as well. False positives (FP), where the actual class is no but the predicted class is yes. False negatives (FN), where the actual class is yes but the predicted class is no.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>DatasetNo. Patients-No. 
Features Class Labels Class distribution ratio(Pos: Neg) ICUMice-ICU 1106-43 1 = death 0 = non-death 86.1:13.9 DEADMice-Mortality 1020-43 1 = ICU 0 = no-ICU 75.5:24.5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Varying Hyper parameters with respect to AUC score for predicting ICU admission</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure5shows a graph of features against feature importance. Features are the symptoms that contribute to an individual being admitted to the ICU. Feature importance stands for the importance of the symptom in contributing to the patient being admitted into the ICU, where a larger number indicates a higher contribution to an ICU admission. All features have some importance in determining if a person would be admitted to the ICU. The sum of all the feature importance data points is 1. The top 5 features that contribute greatly to a person needing to be admitted to the ICU were Ferritin, ALT, ckdhx, Diarrhoea and carcinomahx.Best Final Model for ICU predictionThe model was analyzed and its output compared with different TabNet configurations based on different feature extractors. The model with the best results has been selected as the final proposed model for ICU prediction. The proposed model is a TabNet model with 150 epochs, 128 batch size, and 60 patience with the number of steps of 2, width of precision of layer of 64, gamma of 1.3, entmax mask type, n independent of 2, momentum of 0.3, lambda sparse of 1e-3, and n shared of 2, using the Fast Independent Component Analysis as the feature extractor, and ADASYN as the sampling technique to balance the imbalanced data. Figures6 and 7below show the loss graph, and training and validating accuracy graph of the TabNet in predicting ICU admission.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 Figure 7 .Figure 8 .Figure 9 .</ns0:head><ns0:label>7789</ns0:label><ns0:figDesc>Figure7shows a graph of accuracy against number of epochs. Accuracy tends to go higher as the number of epochs increases. At the early stages of the training, the accuracy is low, but as the model begins to learn the patterns of the data, the accuracy increases and reaches a higher value at the end of the epochs (150). Difference between the training accuracy and the testing accuracy is not high, which suggests that the model is not overfitting on the dataset.The precision-recall curve,which demonstrates a trade-off between the recall score (True Positive Rate), and the precision score (Positive Predictive Value), is used for this analysis due to the dataset being imbalanced (there is a large skew in the class distribution). The confusion matrix also gives a sense of the specific number of patients that were correctly classified as needing ICU or not needing it, and the ones that were incorrectly classified.A large area under the curve indicates high recall and precision, with high precision indicating a low false-positive rate and high recall indicating a low false-negative rate. High scores for both indicate that the classifier is producing correct (high precision) results as well as a majority of all positive outcomes</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 10 .Figure 11 .Figure 12 .Figure 13 .</ns0:head><ns0:label>10111213</ns0:label><ns0:figDesc>Figure 10. 
Varying Hyper parameters with respect to AUC score for predicting mortality</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 14 Figure 14 .</ns0:head><ns0:label>1414</ns0:label><ns0:figDesc>Figure14shows a graph of accuracy against number of epochs. The accuracy tends to increase as the number of epochs increases. At the early stages of the training, the accuracy is low, but as the model</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Confusion matrix of the best TabNet model for predicting Mortality</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>can deteriorate quickly due to underlying illnesses or other factors are often overlooked for admission to 31/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022) Manuscript to be reviewed Computer Science to give a clear indication of which attributes contribute the most to a patient needing ICU and a patient dying from COVID-19, where these claims were also backed up previous studies as well. The information from the model can be used to assist medical personnel globally by helping direct the limited healthcare resources in the right direction, in prioritizing patients, and to provide tools for front-line doctors to help classify patients in time-bound and resource-limited scenarios.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,143.00,367.02,411.05,325.98' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,194.39,140.50,255.17,255.17' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,143.01,63.78,411.03,448.40' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,143.01,249.11,411.05,271.86' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Notations</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Notations</ns0:cell><ns0:cell>Definitions</ns0:cell></ns0:row><ns0:row><ns0:cell>x</ns0:cell><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>&#963;</ns0:cell><ns0:cell>sigmoid</ns0:cell></ns0:row><ns0:row><ns0:cell>n(d)</ns0:cell><ns0:cell>output decision from current step</ns0:cell></ns0:row><ns0:row><ns0:cell>n(a)</ns0:cell><ns0:cell>input decision to the next current step</ns0:cell></ns0:row><ns0:row><ns0:cell>W</ns0:cell><ns0:cell>weights</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8853; otimes</ns0:cell><ns0:cell>direct summation tensor product</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947;</ns0:cell><ns0:cell>gamma</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>beta</ns0:cell></ns0:row><ns0:row><ns0:cell>&#181; b</ns0:cell><ns0:cell>mean</ns0:cell></ns0:row><ns0:row><ns0:cell>i i=1 &#8719; i i=1</ns0:cell><ns0:cell>integral block product block</ns0:cell></ns0:row><ns0:row><ns0:cell>P[i-1]</ns0:cell><ns0:cell>prior scales</ns0:cell></ns0:row><ns0:row><ns0:cell>p[i-1]</ns0:cell><ns0:cell>split layer division</ns0:cell></ns0:row><ns0:row><ns0:cell>h i</ns0:cell><ns0:cell>FC layer + BN layer</ns0:cell></ns0:row><ns0:row><ns0:cell>M j</ns0:cell><ns0:cell>Mask learning 
process</ns0:cell></ns0:row><ns0:row><ns0:cell>d[i]</ns0:cell><ns0:cell>final output</ns0:cell></ns0:row><ns0:row><ns0:cell>a[i]</ns0:cell><ns0:cell>determine mask of next step</ns0:cell></ns0:row><ns0:row><ns0:cell>f(x)</ns0:cell><ns0:cell>function to return value of relu function</ns0:cell></ns0:row><ns0:row><ns0:cell>&#966;</ns0:cell><ns0:cell>features selected at ith step</ns0:cell></ns0:row><ns0:row><ns0:cell>M a gg b ,&#8722; j</ns0:cell><ns0:cell>importance of features</ns0:cell></ns0:row><ns0:row><ns0:cell>G</ns0:cell><ns0:cell>Total number of synthetic data examples</ns0:cell></ns0:row><ns0:row><ns0:cell>m l</ns0:cell><ns0:cell>minority class</ns0:cell></ns0:row><ns0:row><ns0:cell>m s</ns0:cell><ns0:cell>majority class</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>Desired balance level</ns0:cell></ns0:row><ns0:row><ns0:cell>x'</ns0:cell><ns0:cell>new generated synthetic data</ns0:cell></ns0:row><ns0:row><ns0:cell>cov(X,y)</ns0:cell><ns0:cell>covariance matrix</ns0:cell></ns0:row><ns0:row><ns0:cell>p i | j q i j</ns0:cell><ns0:cell>joint probability distribution of features t-distribution of features</ns0:cell></ns0:row><ns0:row><ns0:cell>KL</ns0:cell><ns0:cell>Kullback-Leiber divergence</ns0:cell></ns0:row><ns0:row><ns0:cell>TP</ns0:cell><ns0:cell>True positive</ns0:cell></ns0:row><ns0:row><ns0:cell>TN</ns0:cell><ns0:cell>True negative</ns0:cell></ns0:row><ns0:row><ns0:cell>FP</ns0:cell><ns0:cell>False positive</ns0:cell></ns0:row><ns0:row><ns0:cell>FN</ns0:cell><ns0:cell>False negative</ns0:cell></ns0:row></ns0:table><ns0:note>9/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022) Manuscript to be reviewed Computer Science 10/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Relationship between Features and ICU admission</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features / Variables</ns0:cell><ns0:cell>ICU(n=271)</ns0:cell><ns0:cell>No-ICU(n=835)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Demographics</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age, mean</ns0:cell><ns0:cell>59.42</ns0:cell><ns0:cell>62.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell>67.5% (183)</ns0:cell><ns0:cell>54% (451)</ns0:cell></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell>32.5% (88)</ns0:cell><ns0:cell>46% (384)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ethnicity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hispanic/Latino</ns0:cell><ns0:cell>28.8% (78)</ns0:cell><ns0:cell>26.6% (222)</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Hispanic/Latino</ns0:cell><ns0:cell>54.6% (148)</ns0:cell><ns0:cell>60.7% (507)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown 16.6% (45)</ns0:cell><ns0:cell>12.7% (106)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Race</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Caucasian</ns0:cell><ns0:cell>45.4% (123)</ns0:cell><ns0:cell>54.3% (453)</ns0:cell></ns0:row><ns0:row><ns0:cell>African American</ns0:cell><ns0:cell>4.79% (13)</ns0:cell><ns0:cell>7.3% (61)</ns0:cell></ns0:row><ns0:row><ns0:cell>American Indian</ns0:cell><ns0:cell>0.7% (2)</ns0:cell><ns0:cell>0.2% (2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asian</ns0:cell><ns0:cell>7.4% (20)</ns0:cell><ns0:cell>3.1% (26)</ns0:cell></ns0:row><ns0:row><ns0:cell>Native Hawaiian</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.1% 
(1)</ns0:cell></ns0:row><ns0:row><ns0:cell>More than one race</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.6% (5)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown/ not reported</ns0:cell><ns0:cell>41.7% (113)</ns0:cell><ns0:cell>34.4% (287)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Comorbidities</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Smoking history</ns0:cell><ns0:cell>22.5% (61)</ns0:cell><ns0:cell>25.6% (214)</ns0:cell></ns0:row><ns0:row><ns0:cell>Diabetes</ns0:cell><ns0:cell>29.5% (80)</ns0:cell><ns0:cell>26.3% (220)</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypertension</ns0:cell><ns0:cell>46.5% (126)</ns0:cell><ns0:cell>49.3% (412)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asthma</ns0:cell><ns0:cell>8.5% (23)</ns0:cell><ns0:cell>5.1% (43)</ns0:cell></ns0:row><ns0:row><ns0:cell>COPD</ns0:cell><ns0:cell>6.3% (17)</ns0:cell><ns0:cell>9.1% (76)</ns0:cell></ns0:row><ns0:row><ns0:cell>Coronary artery disease</ns0:cell><ns0:cell>14.4% (39)</ns0:cell><ns0:cell>15.1% (126)</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart failure</ns0:cell><ns0:cell>6.6% (18)</ns0:cell><ns0:cell>7.4% (62)</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>5.5% (15)</ns0:cell><ns0:cell>10.5% (88)</ns0:cell></ns0:row><ns0:row><ns0:cell>Chronic kidney disease</ns0:cell><ns0:cell>7.4% (20)</ns0:cell><ns0:cell>9.7% (81)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Vital signs</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Systolic blood pressure(mmHg),mean</ns0:cell><ns0:cell>124.8</ns0:cell><ns0:cell>128.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Temperature (degree Celsius), mean</ns0:cell><ns0:cell>37.63</ns0:cell><ns0:cell>37.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart rate, mean</ns0:cell><ns0:cell>106.1</ns0:cell><ns0:cell>98.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Respiratory rate(rate/min), mean</ns0:cell><ns0:cell>25.28</ns0:cell><ns0:cell>21.77</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Laboratory Findings</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Alanine aminotransferase(U/L), mean</ns0:cell><ns0:cell>49.62</ns0:cell><ns0:cell>47.03</ns0:cell></ns0:row><ns0:row><ns0:cell>C-reactive protein(mg/dL), mean</ns0:cell><ns0:cell>15.4</ns0:cell><ns0:cell>9.49</ns0:cell></ns0:row><ns0:row><ns0:cell>D-dimer(ng/mL), mean</ns0:cell><ns0:cell>1101.92</ns0:cell><ns0:cell>1210.51</ns0:cell></ns0:row><ns0:row><ns0:cell>Ferritin(ng/mL), mean</ns0:cell><ns0:cell>1469.67</ns0:cell><ns0:cell>1005.43</ns0:cell></ns0:row><ns0:row><ns0:cell>Lactase dehydrogenase(U/L), mean</ns0:cell><ns0:cell>481.7</ns0:cell><ns0:cell>377.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes(*1000/ml)</ns0:cell><ns0:cell>12.43</ns0:cell><ns0:cell>14.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Procalcitonin(ng/mL), mean</ns0:cell><ns0:cell>2.66</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>Troponin(ng/mL), mean</ns0:cell><ns0:cell>0.038</ns0:cell><ns0:cell>0.03</ns0:cell></ns0:row></ns0:table><ns0:note>Table 4 shows the demographics, vital signs, comorbidities, and laboratory discoveries of ICU patients and non-ICU patients. 11/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022) Manuscript to be reviewed Computer Science 12/40 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Relationship</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Percentage of patients with symptoms</ns0:cell></ns0:row></ns0:table><ns0:note>between Symptoms and ICU admission Certain symptoms had a higher correlation with ICU admission than others and Table6gives a summary of this. It is observed that the Shortness of Breath (SOB) feature has the highest correlation with admission to the ICU unit.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Correlation between Symptoms and ICU admission</ns0:figDesc><ns0:table /><ns0:note>MortalityThere were more males than females in the study population. The non-Hispanic ethnicity forms the most individuals and the Caucasian race has the highest number of individuals in the population. Hypertension is the co-morbidity that most individuals in the study population presented, with Asthma being the least. The average age of individuals that died is higher than those that did not. Regarding the vital signs, Respiratory rate is the sign that showed quite a big difference on average, with the individuals that died having a higher respiratory rate compared to the ones who did not die. Procalcitonin, Ferritin, D-dimer, and C-reactive protein are the laboratory findings that showed the biggest average differences,13/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Table 7 shows the demographics, vital signs, Relationship between Features and Mortality</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features / Variables</ns0:cell><ns0:cell>Death(n=271)</ns0:cell><ns0:cell>No-Death(n=835)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Demographics</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age, mean</ns0:cell><ns0:cell>73</ns0:cell><ns0:cell>59.83</ns0:cell></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell>65.5% (93)</ns0:cell><ns0:cell>55% (483)</ns0:cell></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell>34.5% (49)</ns0:cell><ns0:cell>45% (395)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ethnicity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hispanic/Latino</ns0:cell><ns0:cell>16.2% (23)</ns0:cell><ns0:cell>28.5% (250)</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Hispanic /Latino</ns0:cell><ns0:cell>73.9% (105)</ns0:cell><ns0:cell>57.4% (504)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown</ns0:cell><ns0:cell>9.9% (14)</ns0:cell><ns0:cell>14.1% (124)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Race</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Caucasian</ns0:cell><ns0:cell>64.1% (91)</ns0:cell><ns0:cell>51.3% (450)</ns0:cell></ns0:row><ns0:row><ns0:cell>African American</ns0:cell><ns0:cell>4.2% (6)</ns0:cell><ns0:cell>6.9% (61)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asian</ns0:cell><ns0:cell>6.3% (9)</ns0:cell><ns0:cell>3.% (33)</ns0:cell></ns0:row><ns0:row><ns0:cell>American Indian</ns0:cell><ns0:cell>0.7% (2)</ns0:cell><ns0:cell>0.23% (2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Native Hawaiian</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.1% (1)</ns0:cell></ns0:row><ns0:row><ns0:cell>More than one race</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.6% (5)</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown/ not 
reported</ns0:cell><ns0:cell>24.6% (35)</ns0:cell><ns0:cell>37.1% (326)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Comorbidities</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Smoking history</ns0:cell><ns0:cell>36.6% (52)</ns0:cell><ns0:cell>23.2% (204)</ns0:cell></ns0:row><ns0:row><ns0:cell>Diabetes</ns0:cell><ns0:cell>33.8% (48)</ns0:cell><ns0:cell>26.08% (229)</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypertension</ns0:cell><ns0:cell>64.8% (92)</ns0:cell><ns0:cell>45.8% (402)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asthma</ns0:cell><ns0:cell>4.22% (6)</ns0:cell><ns0:cell>5.8% (51)</ns0:cell></ns0:row><ns0:row><ns0:cell>COPD</ns0:cell><ns0:cell>16.2% (23)</ns0:cell><ns0:cell>7.5% (66)</ns0:cell></ns0:row><ns0:row><ns0:cell>Coronary artery disease</ns0:cell><ns0:cell>27.5% (39)</ns0:cell><ns0:cell>13.1% (115)</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart failure</ns0:cell><ns0:cell>20.4% (29)</ns0:cell><ns0:cell>5.4% (47)</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>13.4% (19)</ns0:cell><ns0:cell>8.9% (78)</ns0:cell></ns0:row><ns0:row><ns0:cell>Immunosuppression</ns0:cell><ns0:cell>5.6% (8)</ns0:cell><ns0:cell>7.4% (65)</ns0:cell></ns0:row><ns0:row><ns0:cell>Chronic kidney disease</ns0:cell><ns0:cell>14.08% (20)</ns0:cell><ns0:cell>8.5% (75)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Vital signs</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Systolic blood pressure(mmHg), mean</ns0:cell><ns0:cell>127.45</ns0:cell><ns0:cell>128.57</ns0:cell></ns0:row><ns0:row><ns0:cell>Temperature (degree Celsius), mean</ns0:cell><ns0:cell>37.3</ns0:cell><ns0:cell>37.52</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart rate, mean</ns0:cell><ns0:cell>98.28</ns0:cell><ns0:cell>100.38</ns0:cell></ns0:row><ns0:row><ns0:cell>Respiratory rate(rate/min), mean</ns0:cell><ns0:cell>26.39</ns0:cell><ns0:cell>21.79</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Laboratory Findings</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Alanine aminotransferase(U/L), mean</ns0:cell><ns0:cell>42.91</ns0:cell><ns0:cell>48.45</ns0:cell></ns0:row><ns0:row><ns0:cell>C-reactive protein(mg/dL), mean</ns0:cell><ns0:cell>16.07</ns0:cell><ns0:cell>9.62</ns0:cell></ns0:row><ns0:row><ns0:cell>D-dimer(ng/mL), mean</ns0:cell><ns0:cell>2626.27</ns0:cell><ns0:cell>1016</ns0:cell></ns0:row><ns0:row><ns0:cell>Ferritin(ng/mL), mean</ns0:cell><ns0:cell>1565</ns0:cell><ns0:cell>1037.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Lactase dehydrogenase(U/L), mean</ns0:cell><ns0:cell>588.28</ns0:cell><ns0:cell>363.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes (*1000/ml)</ns0:cell><ns0:cell>10.96</ns0:cell><ns0:cell>14.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Procalcitonin(ng/mL), mean</ns0:cell><ns0:cell>5.14</ns0:cell><ns0:cell>0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>Troponin(ng/mL), mean</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>0.0278</ns0:cell></ns0:row></ns0:table><ns0:note>279comorbidities, and laboratory discoveries of patients that died and the ones who did not die.280 14/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022) Manuscript to be reviewed Computer Science 15/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Table8shows the relationship between symptoms and mortality by looking at the distribution of patients that died and whether or not they had a symptom. 
Relationship between Symptoms and Mortality Certain symptoms had a higher correlation with mortality than others and Table9gives a summary of this. It is observed that the headache feature has the highest correlation with death.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>symptoms</ns0:cell><ns0:cell cols='2'>Percentage of Patients with Symptoms</ns0:cell></ns0:row><ns0:row><ns0:cell>Fever</ns0:cell><ns0:cell>57%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cough</ns0:cell><ns0:cell>51.4%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Shortness of Breath (SOB)</ns0:cell><ns0:cell>71.8%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fatigue</ns0:cell><ns0:cell>86.6%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sputum</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Myalgia</ns0:cell><ns0:cell>89.44%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diarrhea</ns0:cell><ns0:cell>81%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Nausea or vomiting</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sore throat</ns0:cell><ns0:cell>95.1%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Runny nose or Nasal congestion</ns0:cell><ns0:cell>97.18%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Loss of smell</ns0:cell><ns0:cell>98.59%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Loss of Taste</ns0:cell><ns0:cell>98.59%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Headache</ns0:cell><ns0:cell>95.07%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chest discomfort or chest pain</ns0:cell><ns0:cell>92.96%</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Symptoms</ns0:cell><ns0:cell>Correlation(Pearson)</ns0:cell><ns0:cell>pvalues</ns0:cell></ns0:row><ns0:row><ns0:cell>Fever</ns0:cell><ns0:cell>-0.08</ns0:cell><ns0:cell>0.009</ns0:cell></ns0:row><ns0:row><ns0:cell>Cough</ns0:cell><ns0:cell>-0.149</ns0:cell><ns0:cell>0.0000069</ns0:cell></ns0:row><ns0:row><ns0:cell>Shortness of Breath (SOB)</ns0:cell><ns0:cell>0.031</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Fatigue</ns0:cell><ns0:cell>-0.09</ns0:cell><ns0:cell>0.003</ns0:cell></ns0:row><ns0:row><ns0:cell>Sputum</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>Myalgia</ns0:cell><ns0:cell>-0.119</ns0:cell><ns0:cell>0.00013</ns0:cell></ns0:row><ns0:row><ns0:cell>Diarrhea</ns0:cell><ns0:cell>-0.04</ns0:cell><ns0:cell>0.199</ns0:cell></ns0:row><ns0:row><ns0:cell>Nausea or vomiting</ns0:cell><ns0:cell>-0.128</ns0:cell><ns0:cell>0.00004116</ns0:cell></ns0:row><ns0:row><ns0:cell>Sore throat</ns0:cell><ns0:cell>-0.037</ns0:cell><ns0:cell>0.233</ns0:cell></ns0:row><ns0:row><ns0:cell>Runny nose or Nasal congestion</ns0:cell><ns0:cell>-0.03</ns0:cell><ns0:cell>0.318</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss of smell</ns0:cell><ns0:cell>-0.05</ns0:cell><ns0:cell>0.0965</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss of Taste</ns0:cell><ns0:cell>-0.066</ns0:cell><ns0:cell>0.0377</ns0:cell></ns0:row><ns0:row><ns0:cell>Headache</ns0:cell><ns0:cell>0.062</ns0:cell><ns0:cell>0.048</ns0:cell></ns0:row><ns0:row><ns0:cell>Chest discomfort or chest pain</ns0:cell><ns0:cell>-0.1</ns0:cell><ns0:cell>0.002</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Correlation between Symptoms and Mortality</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 10 
.</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Default Hyper parameters of the TabNet Model the width of decision prediction layer gives more capacity to the model with the risk of overfitting. The values typically range from 8 to 64. Patience is the number of consecutive epochs without improvement</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Training Hyper parameters</ns0:cell><ns0:cell>Default Values</ns0:cell></ns0:row><ns0:row><ns0:cell>Max epochs</ns0:cell><ns0:cell>200</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch Size</ns0:cell><ns0:cell>1024</ns0:cell></ns0:row><ns0:row><ns0:cell>Masking Function</ns0:cell><ns0:cell>sparsemax</ns0:cell></ns0:row><ns0:row><ns0:cell>Width of decision prediction layer</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>patience</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>momentum</ns0:cell><ns0:cell>0.02</ns0:cell></ns0:row><ns0:row><ns0:cell>n shared</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>n independent</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>gamma</ns0:cell><ns0:cell>1.3</ns0:cell></ns0:row><ns0:row><ns0:cell>nsteps</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>lambda sparse</ns0:cell><ns0:cell>1e-3</ns0:cell></ns0:row></ns0:table><ns0:note>Table 10 shows the default hyperparameters of the TabNet model. Max epochs are the maximum number of epochs for training. It can be any number equal to or higher than 10. Batch size is the number of examples per batch. The number should preferably be a multiple of two and be greater than 16. The masking function is used for selecting features. Higher values for 16/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 11</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>shows the performance of the TabNet Baseline, and TabNet Best models with FastICA dimensionality reduction, and ADASYN oversampling method. The results of the TabNet Best is the best amongst the other baseline models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>F1Score Accuracy</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell>Li, Xiaoran, et al (baseline) Li et al. 
(2020b)</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>55.1 72.1</ns0:cell><ns0:cell>76.0</ns0:cell><ns0:cell>43.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Tabnet Baseline+ Fast ICA+ ADASYN Tabnet Best+ Fast ICA+ ADASYN</ns0:cell><ns0:cell cols='4'>79.77 &#177; 1.87 84.66 &#177; 2.46 85.73 &#177; 3.208 84.52 &#177; 3.07 92.31&#177; 1.08 81.28 &#177; 4.87 84.47&#177;6.57 77.01&#177; 4.71 80.09 &#177;2.78 82.1&#177; 2.05</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Performance of the most optimized TabNet model with corresponding standard deviations across all the runs</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Performance of the best final TabNet model with FastICA dimensionality reduction method and ADASYN oversampling method</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>F1Score Accuracy</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell>Li, Xiaoran, et al (baseline) Li et al. (2020b)</ns0:cell><ns0:cell>84.4</ns0:cell><ns0:cell>61.6 85.3</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>52.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Tabnet Baseline+ Fast ICA+ ADASYN Tabnet Best+ Fast ICA+ ADASYN</ns0:cell><ns0:cell cols='4'>89.03 &#177; 2.19 89.12 &#177; 2.40 88.92 &#177; 2.40 92.98 &#177; 2.97 85.75 &#177; 4.11 91.59 &#177; 1.63 91.74 &#177; 1.63 91.49 &#177; 1.62 96.65 &#177; 1.91 87.36 &#177; 2.24</ns0:cell></ns0:row></ns0:table><ns0:note>24/40PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Comparison of results between proposed method and existing techniqueIt can be seen from Table14that the proposed model beats the model reported by<ns0:ref type='bibr' target='#b21'>(Li et al., 2020b)</ns0:ref> in all metrics, which is a clear suggestion that the proposed model is superior. The proposed TabNet model can predict ICU admission likelihood rate with an AUC of 88.3%, and mortality rate with AUC of 96.3% which beats the model existing in the literature<ns0:ref type='bibr' target='#b21'>(Li et al., 2020b)</ns0:ref> In predicting ICU admission likelihood, the TabNet model depicted that Ferritin, ALT, and Cxdhx were the top 3 predictors of a patient needing ICU admission after contracting COVID 19, and COPD, ferritin, and Myalgia were the top 3 predictors of a patient dying from the COVID-19 disease.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>84.4</ns0:cell><ns0:cell>61.6 85.3</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>52.2</ns0:cell></ns0:row></ns0:table><ns0:note>29/40PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)Manuscript to be reviewedFigure 15. Precision-Recall Curve of the best TabNet model for predicting MortalityDISCUSSIONThe main purpose of this study was to develop a deep learning model to predict mortality rate and ICU admission likelihood of patients with COVID-19, and to determine which patient attributes are most important in determining the mortality, and ICU admission of COVID patients. From the results obtained, it can be concluded that:Finding 1 which can also be seen in the Table 40 in predicting mortality. 
Thus, we exclude the oversampling using SMOTE and focus on the balancing using ADASYN. Five different dimensionality reduction techniques were also experimented with to improve the results. The Fast ICA technique yielded the best results, achieving a 2.58% difference over the next highest technique (PCA) in predicting ICU admissions, which can be seen in Table 25. It also achieved a 1.16% difference for mortality, which can also be seen in Table 38. The other dimensionality reduction techniques are 30/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot' n='5'>/40 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64558:2:0:NEW 15 Jan 2022)</ns0:note> </ns0:body> "
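To make the pipeline summarised above concrete (FastICA for dimensionality reduction, ADASYN for class balancing, and the TabNet defaults listed in Table 10), the following is a minimal sketch built from off-the-shelf packages. It is illustrative only and not the authors' code: it assumes the scikit-learn, imbalanced-learn, and pytorch-tabnet libraries, and the feature matrix `X`, labels `y`, and the ICA component count are placeholders rather than values taken from the study.

```python
# Minimal sketch: FastICA + ADASYN + TabNet with the defaults from Table 10.
# Assumes scikit-learn, imbalanced-learn and pytorch-tabnet; X and y are placeholders.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import ADASYN
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X = rng.random((1106, 40))                     # placeholder feature matrix
y = (rng.random(1106) < 0.25).astype(int)      # placeholder imbalanced binary labels

X_ica = FastICA(n_components=20, random_state=0).fit_transform(X)   # dimensionality reduction
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_ica, y)        # oversample the minority class

X_tr, X_va, y_tr, y_va = train_test_split(
    X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=0)

clf = TabNetClassifier(
    n_d=8,                        # width of the decision prediction layer
    n_steps=3, gamma=1.3,
    n_shared=2, n_independent=2,
    momentum=0.02, lambda_sparse=1e-3,
    mask_type="sparsemax")        # masking function
clf.fit(X_tr, y_tr,
        eval_set=[(X_va, y_va)], eval_metric=["auc"],
        max_epochs=200, patience=15, batch_size=1024)

print(clf.feature_importances_)   # global importances (here over ICA components)
```

For brevity the sketch resamples before splitting; in practice the train/test split is usually made first so that synthetic ADASYN samples never leak into the evaluation fold.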
"Original Manuscript ID: #CS-2021:08:64558:1:1:REVIEW Original Article Title: “Interpretable Deep Learning for the Prediction of ICU Admission Likelihood and Mortality of COVID-19 Patients” To: PeerJ Computer Science Re: Response to reviewers Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document). Best regards, Amril Nazir et al. Reviewer#2, Concern # 1(Experimental design): It would be interesting if the authors can elaborate that how can factors such as COPD and Myalgia mortality be amongst 3 highest predictors of mortality but not amongst the highest factors for ICU admission. Are they seen in the later and more advanced stages of Covid? Author response and action: We thank the reviewer for the suggestion. In the revised manuscript, we have thrown more light on this, and provided a reference to support it. In the revised manuscript, the changes are highlighted in yellow from line 577 to 582. Reviewer#2, Concern # 2(Experimental design): I understand that Neural Networks does not necessarily model in the same manner as linear regression and etc, however it would still be inteersting if the authors could elaborate how the Shortness of Breath (SOB) feature which has the highest correlation with admission to the ICU unit is not amongst its highest 3 predictors ! Author response and action: We thank the reviewer for the suggestion. In the revised manuscript, this has been explained well and the changes are highlighted in yellow from line 583 to 587. Reviewer#2, Concern # 3(Validity of the findings): Something that is still not fully clear to me is that how has the stratified k-fold split of 5 been implemented in the training and testing of the models, using the most optimal model for ICU admission as for instance… Author response and action: We thank the reviewer for the suggestion. We stated in the description of Figures 8 and 15 that those were the precision-recall curves of the best Tabnet model, so that one is the most optimal model to predict ICU admission and mortality. We have therefore added some sentences to clearly state that to back up the description of the figures 8 and 15, the changes are highlighted in yellow from 402 to 403, and from 499 to 500. Reviewer#2, Concern # 4(Additional comments): * Correct m_l and m_s in the line beneath equation 1 , beneath line 153 (some lines don’t have numbers !). * Line 167: “TabNet deep learning model 15 is”. Is “15” a typo? Author response and action: : We thank the reviewer for the suggestion. The corrections have been effected, and highlighted in yellow between line 152 and 153. "
Here is a paper. Please give your review comments after reading it.
346
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opportunistic data forwarding significantly increases the throughput in multi-hop wireless mesh networks by utilizing the broadcast nature of wireless transmissions and the fluctuation of link qualities. Network coding strengthens the robustness of data transmissions over unreliable wireless links. However, opportunistic data forwarding and network coding are rarely incorporated with TCP because the frequent occurrences of outof-order packets in opportunistic data forwarding and long decoding delay in network coding overthrow TCP's congestion control. In this paper, we propose a solution dubbed TCPFender, which supports opportunistic data forwarding and network coding in TCP. Our solution adds an adaptation layer to mask the packet loss caused by wireless link errors and provides early positive feedbacks to trigger a larger congestion window for TCP. This adaptation layer functions over the network layer and reduces the delay of ACKs for each coded packet. The simulation results show that TCPFender significantly outperforms TCP/IP in terms of the network throughput in different topologies of wireless networks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Wireless mesh networks have emerged as the most common technology for the last mile of Internet access. The Internet provides a platform for rapid and timely information exchanges among clients and servers. Transmission Control Protocol (TCP) has became the most prominent transport protocol on the Internet. Since TCP was originally designed primarily for wired networks that have low bit error rates, moderate packet loss, and packet collisions, the performance of TCP degrades to a greater extent in multi-hop wireless networks, where several unreliable wireless links may be involved in data transmissions <ns0:ref type='bibr' target='#b0'>(Aguayo et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b15'>Jain and Das, 2005)</ns0:ref>. However, multi-hop wireless networks have several advantages, including rapid deployment with less infrastructure and less transmission power over multiple short links. Moreover, a high data rate can be achieved by novel cooperation or high link utilization <ns0:ref type='bibr' target='#b20'>(Larsson, 2001)</ns0:ref>. Some important issues are being addressed by researchers to utilize these capabilities and increase TCP performance in multi-hop wireless networks, such as efficiently searching the ideal path from a source to a destination, maintaining reliable wireless links, protecting nodes from network attacks, reducing energy consumption, and supporting different applications.</ns0:p><ns0:p>In multi-hop wireless networks, data packet collision and link quality variation can cause packet losses.</ns0:p><ns0:p>TCP often incorrectly assumes that there is congestion, and therefore reduces the sending rate. However, TCP is actually required to transmit continuously to overcome these packets losses. As a result, such a problem causes poor performance in multi-hop wireless networks. There are extensive studies working on these harmful effects. Some studies were proposed to reduce the collision between TCP data packets and TCP acknowledgements or dynamically adjust the congestion window. Other relief may come from network coding. The pioneering paper proposed by <ns0:ref type='bibr' target='#b1'>Ahlswede et al. 
(2000)</ns0:ref> presents the fundamental theory of network coding. Instead of forwarding a single packet at each time, network coding allows nodes to recombine input packets into one or several output packets. Furthermore, network coding is also very well suited for environments where only partial or uncertain data is available for making a PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science decision <ns0:ref type='bibr' target='#b25'>(Mehta and Narmawala, 2011)</ns0:ref>.</ns0:p><ns0:p>The link quality variation in multi-hop wireless networks is widely studied in the opportunistic data forwarding under User Datagram Protocol (UDP). It was traditionally treated as an adversarial factor in wireless networks, where its effect must be masked from upper-layer protocols by automatic retransmissions or strong forwarding error corrections. However, recent innovative studies utilize the characteristic explicitly to achieve opportunistic data forwarding <ns0:ref type='bibr' target='#b5'>(Biswas and Morris, 2005;</ns0:ref><ns0:ref type='bibr' target='#b8'>Chen et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b28'>Wang et al., 2012)</ns0:ref>. Unlike traditional routing protocols, the forwarder in opportunistic routing protocols broadcasts the data packets before the selection of next-hop forwarder. Opportunistic routing protocols allow multiple downstream nodes as candidates to forward data packets instead of using a dedicated next-hop forwarder.</ns0:p><ns0:p>Since the broadcasting nature of wireless links naturally supports both network coding and opportunistic data forwarding, many studies work on improving UDP performance in multi-hop wireless networks by opportunistic data forwarding and network coding. However, opportunistic data forwarding and network coding are inherently unsuitable for TCP. The frequent dropping of packets or out-of-order arrivals overthrow TCP's congestion control. Specifically, opportunistic data forwarding does not attempt to forward packets in the same order as they are injected in the network, so the arrival of packets will be in a different order. Network coding also introduces long coding delays by both the encoding and the decoding processes; besides, it is possible along with some scenarios of not being able to decode packets. These phenomena introduce duplicated ACK segments and frequent timeouts in TCP transmissions, which reduce the TCP throughput significantly.</ns0:p><ns0:p>Our proposed protocol, called TCPFender, uses opportunistic data forwarding and network coding to improve TCP throughputs. TCPFender adds an adaptation layer above the network layer to cooperate with TCP's control feedback loop; it makes the TCP's congestion control work well with opportunistic data forwarding and network coding. TCPFender proposes a novel feedback-based scheme to detect the network congestion and distinguish duplicated ACKs caused by out-of-order arrivals in opportunistic data forwarding from those caused by network congestion. We compared the throughput of TCPFender and TCP/IP in different topologies of wireless mesh networks, and analyzed the influence of batch sizes on the TCP throughput and the end-to-end delay. 
Since our work adapts the TCPFender to functioning over the network layer without any modification to TCP itself, it is easy to deploy in wireless mesh networks.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Opportunistic data forwarding</ns0:head><ns0:p>ExOR (Extreme Opportunistic Routing) is a seminal effort in opportunistic routing protocols <ns0:ref type='bibr' target='#b5'>(Biswas and Morris, 2005)</ns0:ref>. It is an integrated routing and MAC protocol that exploits the broadcast nature of wireless media. In a wireless mesh network, when a source transmits a data packet to a destination by several intermediate nodes which are decided by the routing module, other downstream nodes not in the routing path, can overhear the transmission. If the dedicated intermediate node, which is in the routing path, fails to receive this packet, other nearby downstream nodes can be scheduled to forward this packet instead of the sender retransmitting. In this case, the total transmission energy consumption and the transmission delay can be reduced, and the network throughput will be increased. Unfortunately, traditional IP forwarding dictates that all nodes without a matching receiver address should drop the packet, and only the node that the routing module selects to be the next hop can keep it for forwarding subsequently, so traditional IP forwarding is easily affected by link quality variation. However, ExOR allows multiple downstream nodes to coordinate and forward packets. The intermediate nodes, which are 'closer' to the destination, have a higher priority in forwarding packets towards the destination. ExOR can utilize the transient high quality of links and obtains an opportunistic forwarding gain by taking advantage of transmissions that reach unexpectedly far or fall unexpectedly short. In ExOR, a forwarding schedule is proposed to reduce duplicate transmissions. This schedule guarantees that only the highest priority receiver will forward packets to downstream nodes. However, this 'strict' schedule also reduces the possibilities for spatial reuse. The study in <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref> shows that ExOR can have better spatial reuse of wireless media. Furthermore, this schedule may be violated due to frequent packet loss and packet collision.</ns0:p></ns0:div> <ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='2.2'>Opportunistic data forwarding with network coding</ns0:head><ns0:p>Studies show that network coding can reduce the data packet collision and approach the maximum theoretical capacity of networks <ns0:ref type='bibr' target='#b1'>(Ahlswede et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b22'>Li et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b16'>Koetter and M&#233;dard, 2003;</ns0:ref><ns0:ref type='bibr' target='#b19'>Laneman et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b14'>Jaggi et al., 2005;</ns0:ref><ns0:ref type='bibr'>Ho et al., 2006)</ns0:ref>. 
Many researchers incorporate network coding in opportunistic data forwarding to improve the throughput performance <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007;</ns0:ref><ns0:ref type='bibr'>Lin et al., 2008</ns0:ref><ns0:ref type='bibr' target='#b24'>Lin et al., , 2010;;</ns0:ref><ns0:ref type='bibr' target='#b30'>Zhu et al., 2015)</ns0:ref>. MORE (MAC-independent Opportunistic Routing and Encoding) is practical opportunistic routing protocol based on random linear network coding <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In MORE, the source node divides data packets from the upper layer into batches and generates coded packets of each batch. Similar to ExOR, packets in MORE are also forwarded based on a batch. The destination node can decode these coded packets to original packets after receiving enough independently coded packets in the same batch. The destination receives enough packets when the decoding matrix reaches the full rank, then these original packets will be pushed to the upper layer. MORE coordinates the forwarding of each node using a transmission credit system, which is calculated based on how effective it would be in forwarding coded data packets to downstream nodes. This transmission credit system reduces the possibility that intermediate nodes forward the same packets in duplication. However, MORE uses a 'stop-and-wait' design with a single batch in transmission, which is not efficient utilizing the bandwidth of networks. COPE focuses on inter-session network coding; it is a framework to combine and encode data flows through joint nodes to achieve a high throughput <ns0:ref type='bibr' target='#b4'>(Basagni et al., 2008)</ns0:ref>. CAOR (Coding Aware Opportunistic Routing) proposes a localized coding-aware opportunistic routing mechanism to increase the throughput of wireless mesh networks. In this protocol, the packet carries out with the awareness of coding opportunities and no synchronization is required among nodes <ns0:ref type='bibr' target='#b29'>(Yan et al., 2008)</ns0:ref>. NC-MAC improves the efficiency of coding decisions by verifying the decodability of packets before they are transmitted <ns0:ref type='bibr' target='#b2'>(Argyriou, 2009)</ns0:ref>. The scheme focuses on ensuring correct coding decisions at each network node, and it requires no cross-layer interactions.</ns0:p><ns0:p>CodeOR (Coding in Opportunistic Routing) improves MORE in a few important ways <ns0:ref type='bibr'>(Lin et al., 2008)</ns0:ref>. In MORE, the source simply keeps transmitting coded packets belonging to the same batch until the acknowledgment of this batch from the destination has been received. CodeOR allows the source to transmit multiple batches of packets in a pipeline fashion. They also proposed a mathematical analysis in tractable network models to show the way of 'stop-and-wait' affects the network throughput, especially in large or long topology. The timely ACKs are transmitted from downstream nodes to reduce the penalty of inaccurate timing in transmitting the next batch. CodeOR applies the ideas of TCP flow control to estimate the correct sending window and the flow control algorithm is similar to TCP Vegas, which uses increased queueing delay as congestion signals. SlideOR works with online network coding <ns0:ref type='bibr' target='#b24'>(Lin et al., 2010)</ns0:ref>, in which data packets are not required to be divided into multiple batches or to be encoded separately in each batch. 
In SlideOR, the source node encodes packets in overlapping sliding windows such that coded packets from one window position may be useful towards decoding the packets inside another window position. Once a coded packet is 'seen' by the destination node, the source node only encodes packets after this seen packet. Since it does not need to encode any packet that is already seen at the destination, SlideOR can transmit useful coded packets and achieve a high throughput.</ns0:p><ns0:p>CCACK (Cumulative Coded ACKnowledgment) allows nodes to acknowledge coded packets to upstream nodes with negligible overhead <ns0:ref type='bibr' target='#b17'>(Koutsonikolas et al., 2011)</ns0:ref>. It utilizes a null space-based (NSB) coded feedback vector to represent the entire decoding matrix. CodePipe is a reliable multicast protocol, which improves the multicast throughput by exploiting both intra and inter network coding <ns0:ref type='bibr' target='#b21'>(Li et al., 2012)</ns0:ref>. CORE (Coding-aware Opportunistic Routing mEchanism) combines inter-session and intra-session network coding <ns0:ref type='bibr' target='#b18'>(Krigslund et al., 2013)</ns0:ref>. It allows nodes in the network to setup inter-session coding regions where packets from different flows can be XORed. Packets from the same flow uses random linear network coding for intra-session coding. CORE provides a solution to cope with the unreliable overhearing and improves the throughput performance in multi-hop wireless networks. NCOR focuses on how to select the best candidate forwarder set and allocate traffic among candidate forwarders to approach optimal routing <ns0:ref type='bibr' target='#b6'>(Cai et al., 2014)</ns0:ref>. It contracts a relationship tree to describe the child-parent relations along the path from the source to the destination. The cost of the path is the sum of the costs of each constituent hyperlink for delivering one unit of information to the destination. The nodes, which create the path with the minimum cost, can be chosen as candidate forwarders. <ns0:ref type='bibr' target='#b12'>Hsu et al. (2015)</ns0:ref> proposed a stochastic dynamic framework to minimize a long-run average cost. They also analyzed the problem of whether to delay packet transmission in hopes that a coding pair will be available in the future or transmit</ns0:p></ns0:div> <ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_0'>2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:ref> Manuscript to be reviewed Computer Science a packet without coding. <ns0:ref type='bibr' target='#b10'>Garrido et al. (2015)</ns0:ref> proposed a cross-layer technique to balance the load between relaying nodes based on bandwidth of wireless links, and they used an intra-flow network coding solution modelled by means of Hidden Markov Processes. However, the schemes above were designed to utilize opportunistic data forwarding and network coding, but none of these was designed to support TCP.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Network coding in TCP</ns0:head><ns0:p>A number of recent papers have utilized network coding to improve TCP throughput. In particular, Huang et al. introduce network coding to TCP traffic, where data segments in one direction and ACK segments in the opposite direction can be coded at intermediate nodes <ns0:ref type='bibr' target='#b13'>(Huang et al., 2008)</ns0:ref>. 
The simulation showed that making a small delay at each intermediate node can increase the coding opportunity and increase the TCP throughput. TCP/NC enables a TCP-compatible sliding-window approach to utilize network coding <ns0:ref type='bibr' target='#b27'>(Sundararajan et al., 2011)</ns0:ref>. Such a variant of TCP is based on ACK-based sliding-window network coding approach and improves the TCP throughput in lossy links. It uses the degree of freedom in the decoding matrix instead of the number of received original packets as the sequence number in ACK.</ns0:p><ns0:p>If a received packet increases the degree of freedom in the decoding matrix, this packet is called an innovative packet and this packet is 'seen' by the destination. The destination node will generate an acknowledgment whenever a coded packet is seen instead of producing an original packet. However, TCP/NC cannot efficiently control the waiting time for the decoding matrix to become full rank, and the packet loss can make TCP/NC's decoding matrix very large, which causes a long packet delay <ns0:ref type='bibr' target='#b26'>(Sun et al., 2015)</ns0:ref>. TCP-VON introduces online network coding (ONC) to TCP/NC, which can smoothly increase the receiving data rate and packets can be decoded quickly by the destination node. However, these protocols are variants of RTT-based congestion control TCP protocols (e.g., Vegas), which limits their applications in practice since most TCP protocols are loss-based congestion control <ns0:ref type='bibr' target='#b3'>(Bao et al., 2012)</ns0:ref>. TCP-FNC proposes two algorithms to increase the TCP throughput <ns0:ref type='bibr' target='#b26'>(Sun et al., 2015)</ns0:ref>. One is a feedback based scheme to reduce the waiting delay. The other is an optimized progressive decoding algorithm to reduce computation delay. It can be applied to loss-based congestion control, but it does not take advantage of opportunistic data forwarding. Since TCP-FNC is based on traditional IP forwarding, it is easily affected by link quality variation.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Contribution of TCPFender</ns0:head><ns0:p>Opportunistic data forwarding and network coding do not inherently support TCP, so many previous research on opportunistic data forwarding and network coding were not designed for TCP. Other studies modified TCP protocols by cooperating network coding into TCP protocols; these work created different variants of TCP protocols to improve the throughput. However, TCP protocols (especially, TCP Reno) are widely deployed in current communication systems, it is not easy work to modify all TCP protocols of the communication systems. Therefore, we propose an adaptation layer (TCPFender) functioning below TCP Reno. With the help of TCPFender, TCP Reno do not make any change to itself and it can take advantage of both network coding and opportunistic data forwarding.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>DESIGN OF TCPFENDER</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Overview of TCPFender</ns0:head><ns0:p>We introduce TCPFender as an adaptation layer above the network layer, which hides network coding and opportunistic forwarding from the transport layer. The process of TCPFender is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>It confines the modification of the system only under the network layer. The goal of TCPFender is to improve TCP throughput in wireless mesh networks by opportunistic data forwarding and network coding. 
However, opportunistic data forwarding in wireless networks causes many dropped packets and out-of-order arrivals, and it is difficult for TCP sender to maintain a large congestion window. Especially the underlying link layer is the stock IEEE 802.11, which only provides standard unreliable broadcast or reliable unicast (best effort with a limited number of retransmissions). TCP has its own interpretation of the arrival (or absence) of the ACK segments and their timing. It opens up its congestion window based on continuous ACKs coming in from the destination. The dilemma is that when packets arrive out of order or are dropped, the TCP receiver cannot signal the sender to proceed with the expected ACK segment. Unfortunately, opportunistic data forwarding can introduce many out-of-order arrivals, which can significantly reduce the congestion window size of regular TCP since it increases the possibility of The TCPFender adaptation layer at the receiving side functions over the network layer and provides positive feedback early on when innovative coded packets are received, i.e. suggesting that more information has come through the network despite not being decoded for the time being. This process helps the sender to open its congestion window and trigger fast recovery when the receiving side acknowledges the arrival of packets belonging to a later batch, in which case the sending side will resend dropped packets of the unfinished batch. On the sender side, the ACK signalling module is able to differentiate duplicated ACKs and filter useless ACKs (shown in Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. TCPFender design scheme.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>TCPFender Algorithm</ns0:head><ns0:p>To better support TCP with opportunistic data forwarding and network coding, TCPFender inserts the TCP adaptation layer above the network work layer at the source, the forwarder, and the destination. The main work of the TCP adaptation layer is to interpret observations of the network layer phenomena in a way that is understandable by TCP. The network coding module in the adaptation layer is based on a batch-oriented network coding operation. The original TCP packets are grouped into batches, where all packets in the same batch carry encoding vectors on the same basis. At the intermediate nodes, packets will be recoded and forwarded following the schedule of opportunistic data forwarding proposed by MORE, which proposes a transmission credit system to describe the duplication of packets. This transmission credit system can compensate the packet loss, increase the reliability of the transmission, and represent the schedule of opportunistic data forwarding. The network coding module in the destination node will try to decode received coded packets to original packets when it receives any coded packet. The ACK signalling modules at the source and the destination are responsible for translation between TCP ACKs and TCPFender ACKs.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Network Coding in TCPFender</ns0:head><ns0:p>We implement batch-oriented network coding operations at the sender and receiver to support TCP transmissions. All data pushed down by the transport layer in sender are grouped into batches, and each batch has a fixed number &#946; (&#946; = 10 in our implementation) of packets of equal length (with possible padding). 
When the source has accumulated packets in a batch, these packets are coded with random linear network coding, tagged with the encoding vectors, and transmitted to downstream nodes. The downstream nodes are any nodes in the network closer to the destination. Any downstream node can recode and forward packets when it receives a sufficient number of them. We use transmission credit mechanism, as proposed in MORE, to balance the number of packets to be forwarded in intermediate nodes.</ns0:p><ns0:p>We make two important changes to improve the network coding process of MORE for TCP transmissions. For a given batch, the source does not need to wait until the last packet of a batch from the TCP</ns0:p></ns0:div> <ns0:div><ns0:head>5/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science before transmitting coded packets. We call this accumulative coding. That is, if k packets (k &lt; &#946; ) have been sent down by TCP at a point of time, a random linear combination of these k packets is created and transmitted. Initially, the coded packets only include information for the first few TCP data segments of the batch, but will include more towards the end of the batch. The reason for this 'early release' behaviour is for the TCP receiving side to be able to provide early feedback for the sender to open up the congestion window. On the other hand, we use a deeper pipelining than MORE where we allow multiple batches to flow in the network at the same time. To do that, the sending side does not need to wait for the batch acknowledgement before proceeding with the next batch. In this case, packets of a batch are labeled with a batch index for differentiation, in order for TCP to have a stable, large congestion window size rather than having to reset it to 1 for each new batch. The cost of such pipelining is that all nodes need to maintain packets for multiple batches.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Source adaptation layer</ns0:head><ns0:p>The source adaptation layer buffers all original packets of a batch that have not been acknowledged. The purpose is that when TCP pushes down a new data packet or previously sent data packet due to a loss event, the source adaptation layer can still mix it with other data packets of the same batch. The ACK signalling module can discern duplicated ACKs which are not in fact caused by the network congestion.</ns0:p><ns0:p>Opportunistic data forwarding may cause many extra coded packets, specifically when some network links are of the high quality at a certain point. This causes the destination node to send multiple ACKs with same sequence number. In this case, such duplicated ACKs are not a signal for the network congestion, and should be treated differently by the ACK signalling module in the source. These two cases of duplicated ACKs can actually be differentiated by tagging the ACKs with the associated sequence numbers of the TCP data segment. These ACKs are used by the TCPFender adaptation layer at the source and the destination and should be converted to original TCP ACKs before being delivered to the upper layer.</ns0:p><ns0:p>The flow of data or ACKs transmissions is shown in the left of Fig. <ns0:ref type='figure'>1</ns0:ref>. Original TCP data segments are generated and delivered to the module of 'network coding and opportunistic forwarding'. 
Here, TCP data segments may be distributed to several batches based on their TCP segment sequences, so the retransmitted packets will be always in the same batch as their initial distribution. After the current TCP data segment mixes with packets in a batch, TCPFender data segments will be generated and injected to network via hop-by-hop IP forwarding, which is essentially broadcasting of IP datagrams. On the ACK signalling module, when it receives TCPFender ACKs, if the ACK's sequence number is greater than the maximum received ACK sequence number, this ACK will be translated into a TCP ACK and delivered to the TCP sender. Otherwise, the ACK signalling module will check whether this duplicated ACK is caused by opportunistic data forwarding or not. Then it will decide whether to forward a TCP ACK to the TCP or not. The reason for differentiating duplicated ACKs at the source instead of at the destination is to reduce the impact of ACK loss on TCP congestion control.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.3'>Destination adaptation layer</ns0:head><ns0:p>The main function of the destination adaptation layer is to generate ACKs and detect congestion in the network. It expects packets in the order of increasing batch index. For example, when it is expecting the bth batch, it implies that it has successfully received packets of the previous b &#8722; 1 batches and delivered them up to the TCP layer. In this case, it is only interested in and buffers packets of the bth batch or later. However, the destination node may receive packets of any batch. Suppose that the destination node is expecting the bth batch, and that the rank of the decoding matrix of this batch is r. In this case, the destination node has 'almost' received &#946; &#215; (b &#8722; 1) + r packets of the TCP flow, where &#946; &#215; (b &#8722; 1) packets have been decoded and pushed up the TCP receiver, and r packets are still in the decoding matrix. When it receives a coded packet of the b th batch, if b &lt; b, the packet is discarded. Otherwise, this packet is inserted into the corresponding decoding matrix. Such an insertion can increase r by 1 if b = b and this received packet is an innovative packet. The received packet is defined as an innovative packet only if the received packet is linearly independent with all the buffered coded packets within the same batch. In either case, it generates an ACK of sequence number &#946; &#215; (b &#8722; 1) + r, which is sent over IP back to the source node. One exception is that if r = &#946; (i.e. decoding matrix become full rank), the ACK sequence number is &#946; &#215; ( b &#8722; 1) + r, where b is the next batch that is not full and r is its rank. At this point, the receiver moves on to the bth batch. This mechanism ensures that the receiver can send multiple duplicate ACKs for the sender to detect congestion and start fast recovery. It also supports multiple-batch transmissions in the network and guarantees the reliable transmission at the end of the transmission of each batch.</ns0:p></ns0:div> <ns0:div><ns0:head>6/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The design of the destination adaptation layer is shown on the right of Fig. <ns0:ref type='figure'>1</ns0:ref>. The network coding module has two functions. First, it will check whether the received TCPFender data segment is innovative or not. 
In either case, it will notify the ACK signalling to generate an TCPFender ACK. Second, it will deliver original TCP data segments to TCP layer if one or more original TCP data segment are decoded after receiving an innovative coded data packet. This mechanism can significantly reduce the decoding delay of the batch-based network coding. On the other hand, TCPFender has its own congestion control mechanism, so TCP ACK that is generated by the TCP layer will be dropped by the ACK signalling module at the destination.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4'>Forwarder adaptation layer</ns0:head><ns0:p>The flow of data at forwarders is shown in the middle of Fig. <ns0:ref type='figure'>1</ns0:ref>. The ACK is unicast from the destination to the source by IP forwarding, which is standard forwarding mechanism and is not shown in the diagram.</ns0:p><ns0:p>The intermediate node receives TCPFender data segment from below and this segment will be distributed into corresponding batches and regenerates a new coded TCPFender data segment. This new TCPFender data segment will be sent to downstream forwarders via hop-by-hop IP broadcasting based on the credit transmission system proposed by MORE.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>In this section, we investigate the performance of TCPFender through computer simulations using NS-2.</ns0:p><ns0:p>The topologies of the simulations are made up of three exemplar network topologies and one specific mesh. These topologies are depicted in Fig. <ns0:ref type='figure'>2</ns0:ref> 'diamond topology', Fig. <ns0:ref type='figure'>3</ns0:ref> 'string topology', Fig. <ns0:ref type='figure'>4</ns0:ref> 'grid topology', and Fig. <ns0:ref type='figure'>5</ns0:ref> 'mesh topology'. The packet delivery rates at the physical layer for the mesh topology are marked in Fig. <ns0:ref type='figure'>5</ns0:ref>, and the packet delivery rates for other topologies are described in Table <ns0:ref type='table'>.</ns0:ref> 1.</ns0:p><ns0:p>The source node and the destination node are at the opposite ends of the network. One FTP application sends long files from the source to the destination. The source node emits packets continuously until the end of the simulation, and each simulation lasts for 100 seconds. All the wireless links have a bandwidth of 1Mbps and the buffer size on the interfaces is set to 100 packets. To compensate for the link loss, we used the hop-to-hop redundancy factor for TCPFender on a lossy link. Recall that the redundancy factor is calculated based on the packet loss rate, which was proposed in MORE <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref>. This packet loss rate should incorporate the loss effect at both the Physical and Link layers, which is higher than the marked physical layer loss rates. The redundancy factors of the links are thus set according to these revised rates. We compared our protocol against TCP and TCP+NC in four network topologies. In our simulations, TCP ran on top of IP, and TCP+NC has batch-based network coding enabled but still over IP. The version of TCP is TCP Reno for TCPFender and both baselines. The ACK packet for the three protocols are routed to the source by shortest-path routing.</ns0:p><ns0:p>In this paper, we examined whether TCPFender can effectively utilize opportunistic forwarding and network coding. 
TCPFender can provide reliable transmissions in these four topologies and the analysis metrics we took are the network throughput and the end-to-end packet delay at the application layer.</ns0:p><ns0:p>We repeated each scenario 10 times with different random seeds for TCPFender, TCP+NC, and TCP/IP, respectively. In TCPFender, every intermediate node has the opportunity to forward coded packets and all nodes operate in the 802.11 broadcast mode. By contrast, for TCP/IP and TCP+NC, we use the unicast model of 802.11 with ARQ and the routing module is the shortest-path routing of <ns0:ref type='bibr'>ETX Couto et al. (2003)</ns0:ref>. In the diamond topology (Fig. <ns0:ref type='figure'>2</ns0:ref>), the source node has three different paths to the destination. TCP and TCP+NC only use one path to the destination, but TCPFender could utilize more intermediate forwarders thanks to the opportunistic routing. The packet delivery rates for each link are varied between 20%, 40%, 60% and 80%. We plotted the throughput of these three protocols in Fig. <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>. In all cases, the TCPFender has the highest throughput, and the performance gain is more visible for poor link qualities.</ns0:p><ns0:p>Next, we tested these protocols in the string topology (Fig. <ns0:ref type='figure'>3</ns0:ref>) with 6 nodes. The distance between the two nodes is 100 meters, and the transmission range is the default 250 meters. Different combinations of packet delivery rates for 100-meter and 200-meter distances are described in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. As a result, the shortest path routing used by TCP and TCP+NC can decide to use the 100m or 200m links depending on their relative reliability. The throughputs of the three protocols are plotted in Fig. <ns0:ref type='figure'>7</ns0:ref>, where we observed how they perform under different link qualities. Except for the one case where both the 100m and 200m links are very stable (i.e 100% and 80%, respectively), the gains of having network coding and opportunistic forwarding are fairly significant in maintaining TCP's capacity to the application layer.</ns0:p><ns0:p>When the links are very stable, the cost of the opportunistic forwarding schedule and the network coding delay will slightly reduce the network throughput.</ns0:p><ns0:p>We also plotted these three protocols' throughputs in a grid topology (Fig. <ns0:ref type='figure'>4</ns0:ref>) and a mesh topology (Fig. <ns0:ref type='figure'>5</ns0:ref>). Each node has more neighbours in these two topologies, compared to string topology (Fig. <ns0:ref type='figure'>3</ns0:ref>), which increases the chance of opportunistic data forwarding. The packet delivery rates are indicated in these two Figures (Fig. <ns0:ref type='figure'>4</ns0:ref> and Fig. <ns0:ref type='figure'>5</ns0:ref>). In general, the packet delivery rates drop when the distance between a sender and a receiver increases. In our experiment, the source and destination nodes deploy at the opposite ends of the network. The throughput of TCPFender is depicted in Fig. <ns0:ref type='figure' target='#fig_4'>8</ns0:ref> and it is much higher than TCP/IP because opportunistic data forwarding and network coding increase the utilization of network capacity. The gain is about 100% in our experiment. The end-to-end delays of the grid topology and the mesh topology are plotted in Fig. <ns0:ref type='figure' target='#fig_4'>8</ns0:ref>. 
In general, TCP+NC has long end-to-end delays because Manuscript to be reviewed packets need be decoded before delivered to the application layer, this is an inherent feature of batch-based network coding. TCPFender can benefit from backup paths and receive packets early, so it reduces the time-consumption of waiting for decoding and its end-to-end delay is shorter than TCP+NC.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Next, we are interested in the impact of batch sizes on the throughput and the end-to-end delay. Fig. <ns0:ref type='figure'>9</ns0:ref> shows the throughput of TCPFender in the mesh topology for batch sizes of 10, 20, 30, ..., 100 packets.</ns0:p><ns0:p>In general, batch sizes will have an impact on then TCP throughput (as exemplified in Fig. <ns0:ref type='figure'>11</ns0:ref>). When the batch size is small (&#8804; 40), the increment of the batch size can increase the throughput, since it expands the congestion window. However, if the batch size is too large (&gt; 40), the increment of the batch size will decrease the throughput because the increase of batch size will amplify the fluctuation of the congestion window and also increase packet overhead by long encoding vectors. The Fig. <ns0:ref type='figure'>11</ns0:ref> From the Fig. <ns0:ref type='figure'>11</ns0:ref>, since the number of packets transmitted in the network is smaller than two batch sizes, intermediate nodes only need to keep two batches of packets and the memories required to store the packets are acceptable. The nature of batch-based network coding will also introduce decoding delays, so the batch size has a direct impact on the end-to-end delay, as summaries in Fig. <ns0:ref type='figure'>9</ns0:ref>. In Fig. <ns0:ref type='figure'>10</ns0:ref>, we plotted the end-to-end delays of all packets over time in two sample simulations.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUDING REMARKS</ns0:head><ns0:p>In this paper, we proposed TCPFender, which is a novel mechanism to support TCP with network an innovative packet. In current work, we implemented our algorithm to support TCP Reno. In fact, TCPFender can also support other TCP protocols with loss-based congestion control (e.g., TCP-NewReno, TCP-Tahoe). The adaptive modules are designed generally enough to not only support network coding and opportunistic data forwarding, but also any packet forwarding techniques that can cause many dropping packets or out-of-order arrivals. One example will be multi-path routing, where IP packets of the same data flow can follow different paths from the source to the destination. By simulating how TCP receiver will signal the TCP sender, we are able to adapt TCPFender to functioning over such the multi-path routing without having to modify TCP itself.</ns0:p><ns0:p>In the simulation results, we compared TCPFender and TCP/IP in four different network topologies.</ns0:p><ns0:p>The result shows that TCPFender has a sizeable throughput gain over TCP/IP, and the gain will be very distinct from each other when the link quality is not that good. We also discussed the influence of batch size on the network throughput and end-to-end packet delay. 
In general, the batch size has a small impact on the network throughput, but it has a direct impact on the end-to-end packet delay.</ns0:p><ns0:p>In the future, we will consider TCP protocols with RTT-based congestion control and also analyze how multiple TCP flows interact with each other in a network-coded, opportunistic-forwarding network layer, or a more generally error-prone network layer. We will refine the redundancy factor and the bandwidth estimation to optimize the congestion control feedback of TCP. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016) Manuscript to be reviewed Computer Science duplicated ACKs. Furthermore, the long decoding delay for batch-based network coding does not fare well with TCP, because it triggers excessive time-out events.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2. Diamond topology</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 5. Mesh topology</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>also describes how many packets are transmitted in the network. Each intermediate node will keep all unfinished batches.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 7. Throughput for string topology</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Throughput and delay for different batch sizes</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,146.92,184.78,403.25,176.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Packet</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>delivery rate</ns0:cell></ns0:row><ns0:row><ns0:cell>100m</ns0:cell><ns0:cell>200 m</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>100% 80% 60% 40% 20%</ns0:cell></ns0:row><ns0:row><ns0:cell>80%</ns0:cell><ns0:cell>60% 40% 20%</ns0:cell></ns0:row><ns0:row><ns0:cell>60%</ns0:cell><ns0:cell>40% 20%</ns0:cell></ns0:row><ns0:row><ns0:cell>40%</ns0:cell><ns0:cell>20%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8/13</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016)</ns0:note></ns0:figure> <ns0:note place='foot' n='13'>/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:1:0:CHECK 30 Jul 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Reviewer One: 1. Typos or grammatical errors to correct. Response: In line 54, Change showes to shows. In line 55, Change opportunisty to opportunity. In line 128, Change in a same batch to in the same batch. In line 158, Change a crucial importance to of crucial importance. In Figure 1, Change fowarding to forwarding. In line 236, Change initially to initial. In line 268, Change it to its. In line 306, Change each links to each link In line 307, Change ploted to plotted. In line 313, Change throughput to throughputs. In Figure 8, Change througput to throughput. The manuscript has been proofed-read several times by myself and our university’s technical writers. 2. The introduction of background knowledge and related work is thorough. Though it is a choice of style, I think that many citations in the first three paragraphs of the introduction can be moved to related work. Response: The introduction of background knowledge and related work are rewritten. Based on the suggestion of reviewer, the citations in the first three paragraphs of the introduction are moved to related work. 3. Fonts of numbers in Figure 5 look very small, and colors of TCP and TCPFender are hard to differentiate in Figure 6, 7, and 8, when printed in the greyscale. Also, the legend is missing in Figure 7. Response: The font in Figure 5 is increased and Figure 7 is updated. The colours of the plots in Figure 6, 7, and 8 are changed so that they are easier to read when printed in greyscale. 4. In this paper, all nodes including the source, forwarders, and the destination need to store packets in multiple batches. It is not shown in this paper that how much more memory will be used to store packets from multiple batches, under different network conditions. Response: Same overhead as TCP/IP for source and destinations but forwarders will need to buffer all packets of the batches currently flowing through. In our experiment, we showed the change of congestion window in Figure 11. The congestion window also described how many packets are sent out from the source, therefore, the memories used to store packets are almost two batch sizes in our experiment (we added descriptions in the last paragraph of Section 4). 5. In the evaluation, a large file is transmitted and the throughput is the major metric measured. As the elephant flow is typically not sensitive to the delay, it would be interesting to investigate if TCPFender could improve the delay of mice flows that are not sensitive to throughput but delay. Response: The reviewer’s observation is correct in that smaller file sizes would have delay disadvantages caused by network coding. In particular, when the file size is comparable to the batch size, the file-wise delay will be comparable to the decoding delay of an entire batch, which may seem large relatively. On the other hand, because the file size is small, this delay is not overly significant because the delay is at the order of its transmission time. That is why we only tested for large files. 6. Besides, as stated in the paper, opportunistic forwarding may introduce extra packets which can congest the network and hurt the actual goodput. However, it is unknown how many extra packets in total are transmitted through the network. Moreover, the reviewer suggests evaluating the performance with background traffic. Response: The extra packets transmitted in the network is to recover the packet losses. In 802.11, it retransmits lossy packets up to 7 times. 
In our protocol, the number of retransmissions is decided by the redundancy factor, which was proposed in MORE. Since it is not our contribution, we did not show it in our experiment. TCPFender completes the control feedback loop of TCP by creating a bridge between the adaptation modules of the sender and the receiver. It can detect network congestion and provide the required feedback to the TCP sender. For example, when the network is busy, TCPFender can provide duplicated ACKs or timeout information to the TCP layer, which can be used to reduce the congestion window size of TCP. When there is background traffic, TCPFender can still provide the required information to the TCP layer and help TCP adjust its sending rate. We will evaluate the performance with background traffic in future work. Reviewer Two: 7. Some important prior literature is missing. For example, the following paper also extends MORE to allow multiple batches to flow in the network: [R1] Y. Lin, B. Li, and B. Liang, ``CodeOR: Opportunistic routing in wireless mesh networks with segmented network coding,' IEEE International Conference on Network Protocols (ICNP), 2008. Response: Two important prior works are now discussed in detail: one is titled “CodeOR: Opportunistic routing in wireless mesh networks with segmented network coding”, and the other is titled “SlideOR: Online Opportunistic Network Coding in Wireless Mesh Networks”. Other works closely related to our research are also discussed in the related work section, based on the reviewer’s comment. 8. I believe that the concept of `innovative packets' should be explained in more detail, because it is an important concept in the design of TCPFender. Response: The concept of “innovative packets” is now explained in detail in Section 3.2.3. 9. It is unclear that how many simulations have been conducted for each experiment. Without such information, it is hard to assess the statistical significance of the conclusions. Response: The setup of the simulations is described at the beginning of Section 4; we repeated each scenario 10 times. 10. My concern is mostly about the novelty of the paper. First of all, the idea of `accumulative coding’ is well known in the literature of network coding. So, its application to multi-hop wireless networks is quite straightforward. Second, the idea of allowing for multiple batches in MORE has already been studied in several papers including [R1] mentioned above. Therefore, the proposed two extensions of MORE in the paper do not have sufficient novelty. Response: The two extensions of MORE are not the main contribution of our work; the same ideas have already been proposed by other authors, for example, Yunfeng Lin worked on this problem in CodeOR. In our paper, we designed an adaptation layer over the network layer (TCPFender) to support TCP Reno without any change to TCP Reno itself. This adaptation layer completes the control feedback loop of TCP; it can detect network congestion and distinguish different types of duplicated ACKs. With the TCPFender adaptation layer, TCP Reno can take advantage of opportunistic data forwarding and network coding without any change to itself, so it is easy to maintain compatibility with current communication systems. (We added Section 2.4 to describe our contribution.) 11. Another concern of mine is about the simulation part of the paper. In particular, the paper only compares its proposed scheme with MORE, but not with some extensions of MORE (such as CodeOR proposed in [R1]).
Since the work of MORE receives more than 1000 citations, there must be plenty of good extensions in the literature. Some exemplary ones should be compared with the proposed scheme in this paper. Response: After a literature survey, we found that most extensions of MORE (such as CodeOR) were not designed to support TCP Reno; other extensions changed TCP and created different variants of TCP to utilize opportunistic data forwarding and network coding. Our work created an adaptation layer over the network layer (TCPFender) to support TCP without any change to TCP itself. To our knowledge, it is the first work designed to support TCP Reno with opportunistic data forwarding and network coding that does not require changing TCP Reno itself. TCP Reno is widely used in current communication systems, and it is not easy to replace TCP Reno with other variants, so we designed TCPFender without any change to TCP. Our simulation results show that TCP Reno with our adaptation layer achieves higher throughput. "
Here is a paper. Please give your review comments after reading it.
347
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Opportunistic data forwarding significantly increases the throughput in multi-hop wireless mesh networks by utilizing the broadcast nature of wireless transmissions and the fluctuation of link qualities. Network coding strengthens the robustness of data transmissions over unreliable wireless links. However, opportunistic data forwarding and network coding are rarely incorporated with TCP because the frequent occurrences of outof-order packets in opportunistic data forwarding and long decoding delay in network coding overthrow TCP's congestion control. In this paper, we propose a solution dubbed TCPFender, which supports opportunistic data forwarding and network coding in TCP. Our solution adds an adaptation layer to mask the packet loss caused by wireless link errors and provides early positive feedbacks to trigger a larger congestion window for TCP. This adaptation layer functions over the network layer and reduces the delay of ACKs for each coded packet. The simulation results show that TCPFender significantly outperforms TCP/IP in terms of the network throughput in different topologies of wireless networks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Wireless mesh networks have emerged as the most common technology for the last mile of Internet access. The Internet provides a platform for rapid and timely information exchanges among clients and servers. Transmission Control Protocol (TCP) has become the most prominent transport protocol on the Internet. Since TCP was originally designed primarily for wired networks that have low bit error rates, moderate packet loss, and packet collisions, the performance of TCP degrades to a greater extent in multi-hop wireless networks, where several unreliable wireless links may be involved in data transmissions <ns0:ref type='bibr' target='#b0'>(Aguayo et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b16'>Jain and Das, 2005)</ns0:ref>. However, multi-hop wireless networks have several advantages, including rapid deployment with less infrastructure and less transmission power over multiple short links. Moreover, a high data rate can be achieved by novel cooperation or high link utilization <ns0:ref type='bibr' target='#b21'>(Larsson, 2001)</ns0:ref>. Some important issues are being addressed by researchers to utilize these capabilities and increase TCP performance in multi-hop wireless networks, such as efficiently searching the ideal path from a source to a destination, maintaining reliable wireless links, protecting nodes from network attacks, reducing energy consumption, and supporting different applications.</ns0:p><ns0:p>In multi-hop wireless networks, data packet collision and link quality variation can cause packet losses.</ns0:p><ns0:p>TCP often incorrectly assumes that there is congestion, and therefore reduces the sending rate. However, TCP is actually required to transmit continuously to overcome these packets losses. As a result, such a problem causes poor performance in multi-hop wireless networks. There are extensive studies working on these harmful effects. Some studies were proposed to reduce the collision between TCP data packets and TCP acknowledgements or dynamically adjust the congestion window. Other relief may come from network coding. The pioneering paper proposed by <ns0:ref type='bibr' target='#b1'>Ahlswede et al. 
(2000)</ns0:ref> presents the fundamental theory of network coding. Instead of forwarding a single packet at each time, network coding allows nodes to recombine input packets into one or several output packets. Furthermore, network coding is also very well suited for environments where only partial or uncertain data is available for making a PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:2:0:NEW 6 Sep 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science decision <ns0:ref type='bibr' target='#b27'>(Mehta and Narmawala, 2011)</ns0:ref>.</ns0:p><ns0:p>The link quality variation in multi-hop wireless networks is widely studied in the opportunistic data forwarding under User Datagram Protocol (UDP). It was traditionally treated as an adversarial factor in wireless networks, where its effect must be masked from upper-layer protocols by automatic retransmissions or strong forwarding error corrections. However, recent innovative studies utilize the characteristic explicitly to achieve opportunistic data forwarding <ns0:ref type='bibr' target='#b5'>(Biswas and Morris, 2005;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chen et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Wang et al., 2012)</ns0:ref>. Unlike traditional routing protocols, the forwarder in opportunistic routing protocols broadcasts the data packets before the selection of next-hop forwarder. Opportunistic routing protocols allow multiple downstream nodes as candidates to forward data packets instead of using a dedicated next-hop forwarder.</ns0:p><ns0:p>Since the broadcasting nature of wireless links naturally supports both network coding and opportunistic data forwarding, many studies work on improving UDP performance in multi-hop wireless networks by opportunistic data forwarding and network coding. However, opportunistic data forwarding and network coding are inherently unsuitable for TCP. The frequent dropping of packets or out-of-order arrivals overthrow TCP's congestion control. Specifically, opportunistic data forwarding does not attempt to forward packets in the same order as they are injected in the network, so the arrival of packets will be in a different order. Network coding also introduces long coding delays by both the encoding and the decoding processes; besides, it is possible along with some scenarios of not being able to decode packets. These phenomena introduce duplicated ACK segments and frequent timeouts in TCP transmissions, which reduce the TCP throughput significantly.</ns0:p><ns0:p>Our proposed protocol, called TCPFender, uses opportunistic data forwarding and network coding to improve TCP throughputs. TCPFender adds an adaptation layer above the network layer to cooperate with TCP's control feedback loop; it makes the TCP's congestion control work well with opportunistic data forwarding and network coding. TCPFender proposes a novel feedback-based scheme to detect the network congestion and distinguish duplicated ACKs caused by out-of-order arrivals in opportunistic data forwarding from those caused by network congestion. We compared the throughput of TCPFender and TCP/IP in different topologies of wireless mesh networks, and analyzed the influence of batch sizes on the TCP throughput and the end-to-end delay. 
Since our work adapts the TCPFender to functioning over the network layer without any modification to TCP itself, it is easy to deploy in wireless mesh networks.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1'>Opportunistic data forwarding</ns0:head><ns0:p>ExOR (Extreme Opportunistic Routing) is a seminal effort in opportunistic routing protocols <ns0:ref type='bibr' target='#b5'>(Biswas and Morris, 2005)</ns0:ref>. It is an integrated routing and MAC protocol that exploits the broadcast nature of wireless media. In a wireless mesh network, when a source transmits a data packet to a destination by several intermediate nodes which are decided by the routing module, other downstream nodes not in the routing path, can overhear the transmission. If the dedicated intermediate node, which is in the routing path, fails to receive this packet, other nearby downstream nodes can be scheduled to forward this packet instead of the sender retransmitting. In this case, the total transmission energy consumption and the transmission delay can be reduced, and the network throughput will be increased. Unfortunately, traditional IP forwarding dictates that all nodes without a matching receiver address should drop the packet, and only the node that the routing module selects to be the next hop can keep it for forwarding subsequently, so traditional IP forwarding is easily affected by link quality variation. However, ExOR allows multiple downstream nodes to coordinate and forward packets. The intermediate nodes, which are 'closer' to the destination, have a higher priority in forwarding packets towards the destination. ExOR can utilize the transient high quality of links and obtains an opportunistic forwarding gain by taking advantage of transmissions that reach unexpectedly far or fall unexpectedly short. In ExOR, a forwarding schedule is proposed to reduce duplicate transmissions. This schedule guarantees that only the highest priority receiver will forward packets to downstream nodes. However, this 'strict' schedule also reduces the possibilities for spatial reuse. The study in <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref> shows that ExOR can have better spatial reuse of wireless media. Furthermore, this schedule may be violated due to frequent packet loss and packet collision. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='2.2'>Opportunistic data forwarding with network coding</ns0:head><ns0:p>Studies show that network coding can reduce the data packet collision and approach the maximum theoretical capacity of networks <ns0:ref type='bibr' target='#b1'>(Ahlswede et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b23'>Li et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b17'>Koetter and M&#233;dard, 2003;</ns0:ref><ns0:ref type='bibr' target='#b20'>Laneman et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b15'>Jaggi et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b12'>Ho et al., 2006)</ns0:ref>. Many researchers incorporate network coding in opportunistic data forwarding to improve the throughput performance <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b24'>Lin et al., 2008</ns0:ref><ns0:ref type='bibr' target='#b26'>Lin et al., , 2010;;</ns0:ref><ns0:ref type='bibr' target='#b32'>Zhu et al., 2015)</ns0:ref>. 
MORE (MAC-independent Opportunistic Routing and Encoding) is practical opportunistic routing protocol based on random linear network coding <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In MORE, the source node divides data packets from the upper layer into batches and generates coded packets of each batch. Similar to ExOR, packets in MORE are also forwarded based on a batch. The destination node can decode these coded packets to original packets after receiving enough independently coded packets in the same batch. The destination receives enough packets when the decoding matrix reaches the full rank, then these original packets will be pushed to the upper layer. MORE coordinates the forwarding of each node using a transmission credit system, which is calculated based on how effective it would be in forwarding coded data packets to downstream nodes. This transmission credit system reduces the possibility that intermediate nodes forward the same packets in duplication. However, MORE uses a 'stop-and-wait' design with a single batch in transmission, which is not efficient utilizing the bandwidth of networks. COPE focuses on inter-session network coding; it is a framework to combine and encode data flows through joint nodes to achieve a high throughput <ns0:ref type='bibr' target='#b4'>(Basagni et al., 2008)</ns0:ref>. CAOR (Coding Aware Opportunistic Routing) proposes a localized coding-aware opportunistic routing mechanism to increase the throughput of wireless mesh networks. In this protocol, the packet carries out with the awareness of coding opportunities and no synchronization is required among nodes <ns0:ref type='bibr' target='#b31'>(Yan et al., 2008)</ns0:ref>. NC-MAC improves the efficiency of coding decisions by verifying the decodability of packets before they are transmitted <ns0:ref type='bibr' target='#b2'>(Argyriou, 2009)</ns0:ref>. The scheme focuses on ensuring correct coding decisions at each network node, and it requires no cross-layer interactions.</ns0:p><ns0:p>CodeOR (Coding in Opportunistic Routing) improves MORE in a few important ways <ns0:ref type='bibr' target='#b24'>(Lin et al., 2008)</ns0:ref>. In MORE, the source simply keeps transmitting coded packets belonging to the same batch until the acknowledgment of this batch from the destination has been received. CodeOR allows the source to transmit multiple batches of packets in a pipeline fashion. They also proposed a mathematical analysis in tractable network models to show the way of 'stop-and-wait' affects the network throughput, especially in large or long topology. The timely ACKs are transmitted from downstream nodes to reduce the penalty of inaccurate timing in transmitting the next batch. CodeOR applies the ideas of TCP flow control to estimate the correct sending window and the flow control algorithm is similar to TCP Vegas, which uses increased queueing delay as congestion signals. SlideOR works with online network coding <ns0:ref type='bibr' target='#b26'>(Lin et al., 2010)</ns0:ref>, in which data packets are not required to be divided into multiple batches or to be encoded separately in each batch. In SlideOR, the source node encodes packets in overlapping sliding windows such that coded packets from one window position may be useful towards decoding the packets inside another window position. Once a coded packet is 'seen' by the destination node, the source node only encodes packets after this seen packet. 
Since it does not need to encode any packet that is already seen at the destination, SlideOR can transmit useful coded packets and achieve a high throughput.</ns0:p><ns0:p>CCACK (Cumulative Coded ACKnowledgment) allows nodes to acknowledge coded packets to upstream nodes with negligible overhead <ns0:ref type='bibr' target='#b18'>(Koutsonikolas et al., 2011)</ns0:ref>. It utilizes a null space-based (NSB) coded feedback vector to represent the entire decoding matrix. CodePipe is a reliable multicast protocol, which improves the multicast throughput by exploiting both intra and inter network coding <ns0:ref type='bibr' target='#b22'>(Li et al., 2012)</ns0:ref>. CORE (Coding-aware Opportunistic Routing mEchanism) combines inter-session and intra-session network coding <ns0:ref type='bibr' target='#b19'>(Krigslund et al., 2013)</ns0:ref>. It allows nodes in the network to setup inter-session coding regions where packets from different flows can be XORed. Packets from the same flow uses random linear network coding for intra-session coding. CORE provides a solution to cope with the unreliable overhearing and improves the throughput performance in multi-hop wireless networks. NCOR focuses on how to select the best candidate forwarder set and allocate traffic among candidate forwarders to approach optimal routing <ns0:ref type='bibr' target='#b6'>(Cai et al., 2014)</ns0:ref>. It contracts a relationship tree to describe the child-parent relations along the path from the source to the destination. The cost of the path is the sum of the costs of each constituent hyperlink for delivering one unit of information to the destination. The nodes, which create the path with the minimum cost, can be chosen as candidate forwarders. <ns0:ref type='bibr' target='#b13'>Hsu et al. (2015)</ns0:ref> proposed a stochastic dynamic framework to minimize a long-run average cost. They also analyzed the problem of whether to delay packet transmission in hopes that a coding pair will be available in the future or transmit <ns0:ref type='table' target='#tab_0'>-2016:05:10857:2:0:NEW 6 Sep 2016)</ns0:ref> Manuscript to be reviewed Computer Science a packet without coding. <ns0:ref type='bibr' target='#b11'>Garrido et al. (2015)</ns0:ref> proposed a cross-layer technique to balance the load between relaying nodes based on bandwidth of wireless links, and they used an intra-flow network coding solution modelled by means of Hidden Markov Processes. However, the schemes above were designed to utilize opportunistic data forwarding and network coding, but none of these was designed to support TCP.</ns0:p><ns0:formula xml:id='formula_0'>3/14 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.3'>Network coding in TCP</ns0:head><ns0:p>A number of recent papers have utilized network coding to improve TCP throughput. In particular, Huang et al. introduce network coding to TCP traffic, where data segments in one direction and ACK segments in the opposite direction can be coded at intermediate nodes <ns0:ref type='bibr'>(Huang et al., 2008)</ns0:ref>. The simulation showed that making a small delay at each intermediate node can increase the coding opportunity and increase the TCP throughput. TCP/NC enables a TCP-compatible sliding-window approach to utilize network coding <ns0:ref type='bibr' target='#b29'>(Sundararajan et al., 2011)</ns0:ref>. Such a variant of TCP is based on ACK-based sliding-window network coding approach and improves the TCP throughput in lossy links. 
It uses the degree of freedom in the decoding matrix instead of the number of received original packets as the sequence number in ACK.</ns0:p><ns0:p>If a received packet increases the degree of freedom in the decoding matrix, this packet is called an innovative packet and this packet is 'seen' by the destination. The destination node will generate an acknowledgment whenever a coded packet is seen instead of producing an original packet. However, TCP/NC cannot efficiently control the waiting time for the decoding matrix to become full rank, and the packet loss can make TCP/NC's decoding matrix very large, which causes a long packet delay <ns0:ref type='bibr' target='#b28'>(Sun et al., 2015)</ns0:ref>. TCP-VON introduces online network coding (ONC) to TCP/NC, which can smoothly increase the receiving data rate and packets can be decoded quickly by the destination node. However, these protocols are variants of RTT-based congestion control TCP protocols (e.g., Vegas), which limits their applications in practice since most TCP protocols are loss-based congestion control <ns0:ref type='bibr'>(Bao et al., 2012)</ns0:ref>. TCP-FNC proposes two algorithms to increase the TCP throughput <ns0:ref type='bibr' target='#b28'>(Sun et al., 2015)</ns0:ref>. One is a feedback based scheme to reduce the waiting delay. The other is an optimized progressive decoding algorithm to reduce computation delay. It can be applied to loss-based congestion control, but it does not take advantage of opportunistic data forwarding. Since TCP-FNC is based on traditional IP forwarding, it is easily affected by link quality variation. ComboCoding <ns0:ref type='bibr' target='#b8'>(Chen et al., 2011)</ns0:ref> uses both inter-and intra-flow networking to support TCP with deterministic routing. The inter-flow coding is done between the data flows of the two directions of the same TCP session. The intra-flow coding is based on random linear coding serving as a forward-error correction mechanism. It has an adaptive redundancy to overcome variable packet loss rates over wireless links. However, ComboCoding was not designed for opportunistic data forwarding.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Contribution of TCPFender</ns0:head><ns0:p>Opportunistic data forwarding and network coding do not inherently support TCP, so many previous research on opportunistic data forwarding and network coding were not designed for TCP. Other studies modified TCP protocols by cooperating network coding into TCP protocols; these work created different variants of TCP protocols to improve the throughput. However, TCP protocols (especially, TCP Reno) are widely deployed in current communication systems, it is not easy work to modify all TCP protocols of the communication systems. Therefore, we propose an adaptation layer (TCPFender) functioning below TCP Reno. With the help of TCPFender, TCP Reno do not make any change to itself and it can take advantage of both network coding and opportunistic data forwarding.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>DESIGN OF TCPFENDER</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Overview of TCPFender</ns0:head><ns0:p>We introduce TCPFender as an adaptation layer above the network layer, which hides network coding and opportunistic forwarding from the transport layer. The process of TCPFender is shown in Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>.</ns0:p><ns0:p>It confines the modification of the system only under the network layer. 
The goal of TCPFender is to improve TCP throughput in wireless mesh networks by opportunistic data forwarding and network coding. However, opportunistic data forwarding in wireless networks causes many dropped packets and out-of-order arrivals, and it is difficult for TCP sender to maintain a large congestion window. Especially the underlying link layer is the stock IEEE 802.11, which only provides standard unreliable broadcast or reliable unicast (best effort with a limited number of retransmissions). TCP has its own interpretation of the arrival (or absence) of the ACK segments and their timing. It opens up its congestion window based on continuous ACKs coming in from the destination. The dilemma is that when packets arrive out </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>TCPFender Algorithm</ns0:head><ns0:p>To better support TCP with opportunistic data forwarding and network coding, TCPFender inserts the TCP adaptation layer above the network work layer at the source, the forwarder, and the destination. The main work of the TCP adaptation layer is to interpret observations of the network layer phenomena in a way that is understandable by TCP. The network coding module in the adaptation layer is based on a batch-oriented network coding operation. The original TCP packets are grouped into batches, where all packets in the same batch carry encoding vectors on the same basis. At the intermediate nodes, packets will be recoded and forwarded following the schedule of opportunistic data forwarding proposed by MORE, which proposes a transmission credit system to describe the duplication of packets. This transmission credit system can compensate the packet loss, increase the reliability of the transmission, and represent the schedule of opportunistic data forwarding. The network coding module in the destination node will try to decode received coded packets to original packets when it receives any coded packet. The ACK signalling modules at the source and the destination are responsible for translation between TCP ACKs and TCPFender ACKs.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Network Coding in TCPFender</ns0:head><ns0:p>We implement batch-oriented network coding operations at the sender and receiver to support TCP transmissions. All data pushed down by the transport layer in sender are grouped into batches, and each batch has a fixed number &#946; (&#946; = 10 in our implementation) of packets of equal length (with possible padding). When the source has accumulated packets in a batch, these packets are coded with random linear network coding, tagged with the encoding vectors, and transmitted to downstream nodes. The downstream nodes are any nodes in the network closer to the destination. Any downstream node can recode and forward packets when it receives a sufficient number of them. We use transmission credit mechanism, as proposed in MORE, to balance the number of packets to be forwarded in intermediate nodes.</ns0:p></ns0:div> <ns0:div><ns0:head>5/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10857:2:0:NEW 6 Sep 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We make two important changes to improve the network coding process of MORE for TCP transmissions. For a given batch, the source does not need to wait until the last packet of a batch from the TCP before transmitting coded packets. We call this accumulative coding. 
That is, if k packets (k &lt; &#946; ) have been sent down by TCP at a point of time, a random linear combination of these k packets is created and transmitted. Initially, the coded packets only include information for the first few TCP data segments of the batch, but will include more towards the end of the batch. The reason for this 'early release' behaviour is for the TCP receiving side to be able to provide early feedback for the sender to open up the congestion window. On the other hand, we use a deeper pipelining than MORE where we allow multiple batches to flow in the network at the same time. To do that, the sending side does not need to wait for the batch acknowledgement before proceeding with the next batch. In this case, packets of a batch are labeled with a batch index for differentiation, in order for TCP to have a stable, large congestion window size rather than having to reset it to 1 for each new batch. The cost of such pipelining is that all nodes need to maintain packets for multiple batches.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Source adaptation layer</ns0:head><ns0:p>The source adaptation layer buffers all original packets of a batch that have not been acknowledged. The purpose is that when TCP pushes down a new data packet or previously sent data packet due to a loss event, the source adaptation layer can still mix it with other data packets of the same batch. The ACK signalling module can discern duplicated ACKs which are not in fact caused by the network congestion.</ns0:p><ns0:p>Opportunistic data forwarding may cause many extra coded packets, specifically when some network links are of the high quality at a certain point. This causes the destination node to send multiple ACKs with same sequence number. In this case, such duplicated ACKs are not a signal for the network congestion, and should be treated differently by the ACK signalling module in the source. These two cases of duplicated ACKs can actually be differentiated by tagging the ACKs with the associated sequence numbers of the TCP data segment. These ACKs are used by the TCPFender adaptation layer at the source and the destination and should be converted to original TCP ACKs before being delivered to the upper layer.</ns0:p><ns0:p>The flow of data or ACKs transmissions is shown in the left of Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. Original TCP data segments are generated and delivered to the module of 'network coding and opportunistic forwarding'. Here, TCP data segments may be distributed to several batches based on their TCP segment sequences, so the retransmitted packets will be always in the same batch as their initial distribution. After the current TCP data segment mixes with packets in a batch, TCPFender data segments will be generated and injected to network via hop-by-hop IP forwarding, which is essentially broadcasting of IP datagrams. On the ACK signalling module, when it receives TCPFender ACKs, if the ACK's sequence number is greater than the maximum received ACK sequence number, this ACK will be translated into a TCP ACK and delivered to the TCP sender. Otherwise, the ACK signalling module will check whether this duplicated ACK is caused by opportunistic data forwarding or not. Then it will decide whether to forward a TCP ACK to the TCP or not. 
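To make the accumulative coding of Section 3.2.1 concrete, the following is a minimal sketch of how a source could code the k packets of a batch that have arrived so far. It assumes, for readability, arithmetic over a small prime field rather than the GF(2^8) byte arithmetic a real implementation would more likely use; the function name, the field choice, and the constants are illustrative assumptions, not the paper's implementation.

```python
import random

FIELD = 257        # small prime field for readability; real coders typically use GF(2^8)
BATCH_SIZE = 10    # beta, the number of packets per batch

def encode_accumulative(batch_packets, batch_size=BATCH_SIZE, field=FIELD):
    """Build one coded packet from the k < batch_size packets TCP has pushed down so far."""
    k = len(batch_packets)
    coeffs = [random.randrange(1, field) for _ in range(k)]
    coded = [0] * len(batch_packets[0])
    for c, packet in zip(coeffs, batch_packets):
        for i, byte in enumerate(packet):
            coded[i] = (coded[i] + c * byte) % field
    # The encoding vector is padded with zeros for packets that have not arrived yet,
    # so its length always equals the batch size.
    return coeffs + [0] * (batch_size - k), coded

# Example: only 3 of the 10 packets of the current batch have arrived from TCP.
packets = [bytes([i + 1] * 8) for i in range(3)]
vector, payload = encode_accumulative(packets)
```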
The reason for differentiating duplicated ACKs at the source instead of at the destination is to reduce the impact of ACK loss on TCP congestion control.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.3'>Destination adaptation layer</ns0:head><ns0:p>The main function of the destination adaptation layer is to generate ACKs and detect congestion in the network. It expects packets in the order of increasing batch index. For example, when it is expecting the bth batch, it implies that it has successfully received packets of the previous b &#8722; 1 batches and delivered them up to the TCP layer. In this case, it is only interested in and buffers packets of the bth batch or later. However, the destination node may receive packets of any batch. Suppose that the destination node is expecting the bth batch, and that the rank of the decoding matrix of this batch is r. In this case, the destination node has 'almost' received &#946; &#215; (b &#8722; 1) + r packets of the TCP flow, where &#946; &#215; (b &#8722; 1) packets have been decoded and pushed up to the TCP receiver, and r packets are still in the decoding matrix. When it receives a coded packet of the b′th batch, if b′ &lt; b, the packet is discarded. Otherwise, this packet is inserted into the corresponding decoding matrix. Such an insertion can increase r by 1 if b′ = b and the received packet is an innovative packet. A received packet is defined as an innovative packet only if it is linearly independent of all the buffered coded packets within the same batch. In either case, the destination generates an ACK of sequence number &#946; &#215; (b &#8722; 1) + r, which is sent over IP back to the source node. One exception is that if r = &#946; (i.e., the decoding matrix becomes full rank), the ACK sequence number is &#946; &#215; (b̂ &#8722; 1) + r̂, where b̂ is the next batch that is not full and r̂ is its rank. At this point, the receiver moves on to the b̂th batch. This mechanism ensures that the receiver can send multiple duplicate ACKs for the sender to detect congestion and start fast recovery. It also supports multiple-batch transmissions in the network and guarantees reliable transmission at the end of the transmission of each batch. The design of the destination adaptation layer is shown on the right of Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. The network coding module has two functions. First, it checks whether the received TCPFender data segment is innovative or not; in either case, it notifies the ACK signalling module to generate a TCPFender ACK. Second, it delivers original TCP data segments to the TCP layer if one or more original TCP data segments are decoded after receiving an innovative coded data packet. This mechanism can significantly reduce the decoding delay of batch-based network coding. On the other hand, TCPFender has its own congestion control mechanism, so the TCP ACKs generated by the TCP layer will be dropped by the ACK signalling module at the destination.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4'>Forwarder adaptation layer</ns0:head><ns0:p>The flow of data at forwarders is shown in the middle of Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. The ACK is unicast from the destination to the source by IP forwarding, which is a standard forwarding mechanism and is not shown in the diagram.</ns0:p><ns0:p>The intermediate node receives a TCPFender data segment from below; this segment is distributed into the corresponding batch, and the node regenerates a new coded TCPFender data segment.
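As an illustration of the destination-side ACK numbering just described (β × (b − 1) + r, with the full-rank exception), here is a minimal sketch; the function name, the dictionary used to track per-batch decoding-matrix ranks, and the batch-size constant are illustrative assumptions rather than the paper's implementation.

```python
BATCH_SIZE = 10  # beta

def ack_sequence_number(expected_batch, ranks, batch_size=BATCH_SIZE):
    """ACK number for the destination: beta*(b-1)+r for the expected batch b with rank r;
    if that batch is already full rank, report the next batch that is not yet full."""
    r = ranks.get(expected_batch, 0)
    if r < batch_size:
        return batch_size * (expected_batch - 1) + r
    b = expected_batch + 1
    while ranks.get(b, 0) >= batch_size:
        b += 1
    return batch_size * (b - 1) + ranks.get(b, 0)

# Example: batch 3 is expected and its decoding matrix has rank 4 -> ACK number 10*(3-1)+4 = 24.
print(ack_sequence_number(3, {1: 10, 2: 10, 3: 4}))
```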
This new TCPFender data segment will be sent to downstream forwarders via hop-by-hop IP broadcasting based on the credit transmission system proposed by MORE.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>In this section, we investigate the performance of TCPFender through computer simulations using NS-2.</ns0:p><ns0:p>The topologies of the simulations are made up of three exemplar network topologies and one specific mesh. These topologies are depicted in Fig. <ns0:ref type='figure'>2</ns0:ref> 'diamond topology', Fig. <ns0:ref type='figure'>3</ns0:ref> 'string topology', Fig. <ns0:ref type='figure'>4</ns0:ref> 'grid topology', and Fig. <ns0:ref type='figure'>5</ns0:ref> 'mesh topology'. The packet delivery rates at the physical layer for the mesh topology are marked in Fig. <ns0:ref type='figure'>5</ns0:ref>, and the packet delivery rates for other topologies are described in Table <ns0:ref type='table'>.</ns0:ref> 1.</ns0:p><ns0:p>The source node and the destination node are at the opposite ends of the network. One FTP application sends long files from the source to the destination. The source node emits packets continuously until the end of the simulation, and each simulation lasts for 100 seconds. All the wireless links have a bandwidth of 1Mbps and the buffer size on the interfaces is set to 100 packets. To compensate for the link loss, we used the hop-to-hop redundancy factor for TCPFender on a lossy link. Recall that the redundancy factor is calculated based on the packet loss rate, which was proposed in MORE <ns0:ref type='bibr' target='#b7'>(Chachulski et al., 2007)</ns0:ref>. This packet loss rate should incorporate the loss effect at both the Physical and Link layers, which is higher than the marked physical layer loss rates. The redundancy factors of the links are thus set according to these revised rates. We compared our protocol against TCP and TCP+NC in four network topologies. In our simulations, TCP ran on top of IP, and TCP+NC has batch-based network coding enabled but still over IP. The version of TCP is TCP Reno for TCPFender and both baselines. The ACK packet for the three protocols are routed to the source by shortest-path routing.</ns0:p><ns0:p>In this paper, we examined whether TCPFender can effectively utilize opportunistic forwarding and network coding. TCPFender can provide reliable transmissions in these four topologies and the analysis metrics we took are the network throughput and the end-to-end packet delay at the application layer.</ns0:p><ns0:p>We repeated each scenario 10 times with different random seeds for TCPFender, TCP+NC, and TCP/IP, respectively. In TCPFender, every intermediate node has the opportunity to forward coded packets and all nodes operate in the 802.11 broadcast mode. By contrast, for TCP/IP and TCP+NC, we use the unicast model of 802.11 with ARQ and the routing module is the shortest-path routing of <ns0:ref type='bibr'>ETX Couto et al. (2003)</ns0:ref>. In the diamond topology (Fig. <ns0:ref type='figure'>2</ns0:ref>), the source node has three different paths to the destination. TCP and TCP+NC only use one path to the destination, but TCPFender could utilize more intermediate forwarders thanks to the opportunistic routing. The packet delivery rates for each link are varied between 20%, 40%, 60% and 80%. We plotted the throughput of these three protocols in Fig. <ns0:ref type='figure'>6</ns0:ref>. 
In all cases, TCPFender has the highest throughput, and the performance gain is more visible for poor link qualities.</ns0:p><ns0:p>Next, we tested these protocols in the string topology (Fig. <ns0:ref type='figure'>3</ns0:ref>) with 6 nodes. The distance between two adjacent nodes is 100 meters, and the transmission range is the default 250 meters. Different combinations of packet delivery rates for 100-meter and 200-meter distances are described in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. As a result, the shortest-path routing used by TCP and TCP+NC can decide to use the 100m or 200m links depending on their relative reliability. The throughputs of the three protocols are plotted in Fig. <ns0:ref type='figure' target='#fig_5'>7</ns0:ref>, where we observed how they perform under different link qualities. Except for the one case where both the 100m and 200m links are very stable (i.e., 100% and 80%, respectively), the gains of having network coding and opportunistic forwarding are fairly significant in maintaining TCP's capacity to the application layer.</ns0:p><ns0:p>When the links are very stable, the cost of the opportunistic forwarding schedule and the network coding delay will slightly reduce the network throughput.</ns0:p><ns0:p>We also plotted these three protocols' throughputs in a grid topology (Fig. <ns0:ref type='figure'>4</ns0:ref>) and a mesh topology (Fig. <ns0:ref type='figure'>5</ns0:ref>). Each node has more neighbours in these two topologies, compared to the string topology (Fig. <ns0:ref type='figure'>3</ns0:ref>), which increases the chance of opportunistic data forwarding. The packet delivery rates are indicated in these two figures (Fig. <ns0:ref type='figure'>4</ns0:ref> and Fig. <ns0:ref type='figure'>5</ns0:ref>). In general, the packet delivery rates drop when the distance between a sender and a receiver increases. In our experiment, the source and destination nodes are deployed at the opposite ends of the network. The throughput of TCPFender is depicted in Fig. <ns0:ref type='figure' target='#fig_6'>8</ns0:ref>, and it is much higher than that of TCP/IP because opportunistic data forwarding and network coding increase the utilization of network capacity. The gain is about 100% in our experiment. The end-to-end delays of the grid topology and the mesh topology are plotted in Fig. <ns0:ref type='figure' target='#fig_6'>8</ns0:ref>. In general, TCP+NC has long end-to-end delays because packets need to be decoded before being delivered to the application layer, which is an inherent feature of batch-based network coding. TCPFender can benefit from backup paths and receive packets early, so it reduces the time spent waiting for decoding, and its end-to-end delay is shorter than that of TCP+NC.</ns0:p><ns0:p>Next, we are interested in the impact of batch sizes on the throughput and the end-to-end delay. Fig. <ns0:ref type='figure'>9</ns0:ref> shows the throughput of TCPFender in the mesh topology for batch sizes of 10, 20, 30, ..., 100 packets.</ns0:p><ns0:p>In general, the batch size has an impact on the TCP throughput (as exemplified in Fig. <ns0:ref type='figure' target='#fig_2'>11</ns0:ref>). When the batch size is small (&#8804; 40), increasing the batch size can increase the throughput, since it expands the congestion window. However, if the batch size is too large (&gt; 40), increasing the batch size will decrease the throughput, because a larger batch size amplifies the fluctuation of the congestion window and also increases packet overhead through long encoding vectors. From Fig. <ns0:ref type='figure' target='#fig_2'>11</ns0:ref>, since the number of packets transmitted in the network is smaller than two batch sizes, intermediate nodes only need to keep two batches of packets, and the memory required to store the packets is acceptable.
The nature of batch-based network coding will also introduce decoding delays, so the batch size has a direct impact on the end-to-end delay, as summarized in Fig. <ns0:ref type='figure'>9</ns0:ref>. In Fig. <ns0:ref type='figure' target='#fig_7'>10</ns0:ref>, we plotted the end-to-end delays of all packets over time in two sample simulations. Note that these tests were done for files that need many batches to carry. On the other hand, when the file size is comparable to the batch size, the file-wise delay will be comparable to the decoding delay of an entire batch, which may seem relatively large. However, because the file size is small, this delay is not overly significant, as it is on the order of the file's transmission time. Nevertheless, network coding does add a considerable amount of delay in comparison to pure TCP/IP.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUDING REMARKS</ns0:head><ns0:p>In this paper, we proposed TCPFender, a novel mechanism that supports TCP with network coding and opportunistic data forwarding, in which the receiver side releases ACK segments whenever it receives an innovative packet. In the current work, we implemented our algorithm to support TCP Reno. In fact, TCPFender can also support other TCP protocols with loss-based congestion control (e.g., TCP-NewReno, TCP-Tahoe). The adaptive modules are designed generally enough to support not only network coding and opportunistic data forwarding, but also any packet forwarding technique that can cause many dropped packets or out-of-order arrivals. One example is multi-path routing, where IP packets of the same data flow can follow different paths from the source to the destination. By simulating how the TCP receiver signals the TCP sender, we are able to adapt TCPFender to function over such multi-path routing without having to modify TCP itself.</ns0:p><ns0:p>In the simulation results, we compared TCPFender and TCP/IP in four different network topologies. The results show that TCPFender has a sizeable throughput gain over TCP/IP, and the gain is more pronounced when the link quality is poor. We also discussed the influence of the batch size on the network throughput and the end-to-end packet delay. In general, the batch size has a small impact on the network throughput, but it has a direct impact on the end-to-end packet delay.</ns0:p><ns0:p>In the future, we will consider TCP protocols with RTT-based congestion control and also analyze how multiple TCP flows interact with each other in a network-coded, opportunistic-forwarding network layer, or a more generally error-prone network layer. We will refine the redundancy factor and the bandwidth estimation to optimize the congestion control feedback of TCP. Finally, we will propose a theoretical model of TCP with opportunistic forwarding and network coding, which will enable us to study TCPFender in various communication systems.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10857:2:0:NEW 6 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10857:2:0:NEW 6 Sep 2016)Manuscript to be reviewed Computer Science of order or are dropped, the TCP receiver cannot signal the sender to proceed with the expected ACK segment.
Unfortunately, opportunistic data forwarding can introduce many out-of-order arrivals, which can significantly reduce the congestion window size of regular TCP since it increases the possibility of duplicated ACKs. Furthermore, the long decoding delay for batch-based network coding does not fare well with TCP, because it triggers excessive time-out events.The TCPFender adaptation layer at the receiving side functions over the network layer and provides positive feedback early on when innovative coded packets are received, i.e. suggesting that more information has come through the network despite not being decoded for the time being. This process helps the sender to open its congestion window and trigger fast recovery when the receiving side acknowledges the arrival of packets belonging to a later batch, in which case the sending side will resend dropped packets of the unfinished batch. On the sender side, the ACK signalling module is able to differentiate duplicated ACKs and filter useless ACKs (shown in Fig.1).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. TCPFender design scheme.</ns0:figDesc><ns0:graphic coords='6,146.92,217.93,403.25,176.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10857:2:0:NEW 6 Sep 2016) Manuscript to be reviewed Computer Science for the sender to detect congestion and start fast recovery. It also supports multiple-batch transmissions in the network and guarantees the reliable transmission at the end of the transmission of each batch.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2. Diamond topology</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Throughput for string topology</ns0:figDesc><ns0:graphic coords='11,186.52,64.11,323.75,309.41' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Throughput and delay for grid topology and mesh topology.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Delay for two specific cases with batch sizes of 10 and 40</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='10,186.52,63.80,324.03,253.52' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Packet delivery rate</ns0:figDesc><ns0:table><ns0:row><ns0:cell>8/14</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Thank you very much for reviewing our work. Your efforts helped us to keep our manuscript to the high-quality standard of the journal, and we appreciate it a great deal. In this revision, we addressed the issues raised by the reviewers and made two additional discussions accordingly. We highlighted these two discussions in the revision for ease of identification. Reviewer One: 1. The authors' response to my concern on the file size in the evaluation is not fully convincing. The authors argue that the decoding delay is not overly significant and can be comparable with the file-wise delay. This may still mean that the additional latency to transmit a small file can be larger than TCP without NC. A more explicit and clearer explanation is needed in the paper. Response: Thank you for pointing this out. We added a short discussion at the end of Section 4 for small file sizes, where we acknowledge and reason about the delay introduced by network coding. 2. The font size in Fig. 5/6/7/8/11 is still too small, and much smaller than the font size in other figures. Please update and make them consistent. Besides, colors of TCP+NC and TCPFender in Fig. 6/7/8 still look similar in greyscale. Response: The font in Figure 5/6/7/8/11 is increased. The colors of the plots in Figure 6, 7, and 8 are changed so that they are easier to read when printed in greyscale. 3. Typos or grammatical errors to correct. Response: Done. Thank you. Reviewer 2 I'm glad to see that some of my previous comments have been carefully addressed. For example, additional related works have been cited and discussed, and the important concept of `innovative packets' has been explained in detail. To justify the novelty of the paper, the authors stated (in their response) that the main contribution of the paper is to ``design an adaptation layer over the network layer to support TCP-Reno without any change to TCP-Reno itself' instead of the two extensions of MORE (which have been proposed before). In addition, the authors claimed that their paper is the ``first paper that is designed to support TCP Reno by opportunistic data forwarding and network coding' in wireless multi-hop networks. This is a strong claim! However, this claim doesn't seem to be convincing. For example, the work of ComboCoding (by Chien-Chia Chen, Mario Gerla, and others) proposed a networkcoding scheme that is ``implemented in the network layer' and is ``transparent to TCP.' In particular, their scheme supports a variety of TCP protocols, including TCP-Reno and TCP-NewReno. Therefore, I'd like to suggest the authors to compare their work with ComboCoding and other similar works. Response: We appreciate the reviewer’s observation. The claim was intended to define the scope of this work that has three features at the same time 1) network coding 2) opportunistic data forwarding 3) supporting TCP Reno without modifying it Thank you for bringing up ComboCoding, and we included it in the literature review at the end of Section 2.3, which summarizes how it is related to and differs from our work. "
Here is a paper. Please give your review comments after reading it.
348
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Today, there are many e-commerce websites, but not all of them are accessible.</ns0:p><ns0:p>Accessibility is a crucial element that can make a difference and determine the success or failure of a digital business. The study was applied to 50 e-commerce sites in the top rankings according to the classification proposed by ecommerceDB. In evaluating the web accessibility of e-commerce sites, we applied an automatic review method based on a modification of Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. To evaluate accessibility, we used Web Accessibility Evaluation Tool (WAVE) with the extension for Google Chrome, which helps verify password-protected, locally stored, or highly dynamic pages. The study found that the correlation between the ranking of ecommerce websites and accessibility barriers is 0.329, indicating that the correlation is low positive according to Spearman's Rho. According to the WAVE analysis, the research results reveal that the top ten most accessible websites are Sainsbury's Supermarkets, Walmart, Target Corporation, Macy's, IKEA, H &amp; M Hennes, Chewy, The Kroger QVC and Nike. The most significant number of accessibility barriers relate to contrast errors that must be corrected for e-commerce websites to reach an acceptable level of accessibility.</ns0:p><ns0:p>The most neglected accessibility principle is perceivable, representing 83.1%, followed by operable with 13.7%, in third place is robust with 1.7% and finally understandable with 1.5%. Future work suggests constructing a software tool that includes artificial intelligence algorithms that help the software identify accessibility barriers.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Internet technology has radically revolutionized the world of communications to become a global means of communication. Due to COVID-19 (World Health Organization (WHO), 2021b), the confinement worldwide has caused many businesses to close. Therefore, the number of ecommerce websites has increased significantly in recent years due to the pandemic. Statistics from the Statista website <ns0:ref type='bibr' target='#b20'>(Statista, 2021)</ns0:ref> indicate that e-commerce has undergone a substantial transformation in recent years thanks to digitization in modern life. Therefore, to Statista <ns0:ref type='bibr' target='#b19'>(Statista, 2020</ns0:ref>), e-commerce websites have seen a notable increase in worldwide traffic flow between January 2019 and June 2020. In 2021, more than 2.14 billion people globally are estimated to purchase supplies and customer services online. Figure <ns0:ref type='figure'>1</ns0:ref> shows the search of terms performed in the last five years in Google Trends <ns0:ref type='bibr' target='#b10'>(Google, 2021)</ns0:ref> related to accessibility, e-commerce, and Web Content Accessibility Guidelines (WCAG) <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. We observe that the term e-commerce tends to grow from March 2020 during the COVID-19 where most users began to consume massively digital material and are oriented to use e-commerce applications, to reduce the number of infections, we also observe that the term accessibility and WCAG tend to grow. Figure <ns0:ref type='figure'>1</ns0:ref> The trend in Google Trends for terms related to accessibility, e-commerce, and WCAG. 
Diagram of the trend of the terms in the last five years worldwide.</ns0:p><ns0:p>E-commerce websites have grown considerably, but most of them are not accessible. Accessibility <ns0:ref type='bibr' target='#b2'>(Patricia Acosta-Vargas et al., 2018)</ns0:ref> refers to a set of techniques, guidelines or methods that make web content and functionality compatible with the needs of all persons regardless of their physical or technological capabilities. Statements from the World Health Organization (World Health Organization (WHO), 2017) reveal that 15% of the world's population suffers from some disability. Therefore, it is essential to apply web accessibility guidelines to e-commerce sites. A well-designed website is easy to navigate and interact with for all web users <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. Accessibility also benefits many users <ns0:ref type='bibr' target='#b8'>(Andersen et al., 2020)</ns0:ref> with aging-related difficulties that decrease their visual ability due to presbyopia. WCAG 2.1 <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref> proposes applying accessibility principles to reduce the accessibility barriers perceived by users when interacting with a website. The outcomes of this investigation evidenced that 25.6% of the e-commerce websites in the sample present images that require the inclusion of alternative text; additionally, we found that 54.4% of the sites present contrast problems related to the perceivable principle. As future work, we suggest considering the problems of hardware limitations and interstitial advertising. We also recommend building a software tool with artificial intelligence algorithms that include new heuristics to help developers identify accessibility barriers and generate more accessible and inclusive sites. The rest of this document is organized as follows. Section II provides a review of the literature on COVID-19, e-commerce, and web accessibility. Section III describes the methodology used in the evaluation of accessibility in e-commerce sites. Section IV presents the results obtained by applying the evaluation method recommended in this research. Section V contains the discussion of the outcomes. Section VI explains the limitations of the research. Finally, conclusions and future work are described in Sections VII and VIII.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature review of COVID-19, e-commerce, and web accessibility</ns0:head><ns0:p>The pandemic caused by COVID-19 has rapidly accelerated the shift of industry and education to the virtual world, which has enabled the accelerated growth of many e-commerce websites <ns0:ref type='bibr' target='#b11'>(Munkova et al., 2021)</ns0:ref>. The article <ns0:ref type='bibr' target='#b21'>(Villa &amp; Monz&#243;n, 2021)</ns0:ref> points out that COVID-19 has created a substantial change in the success of metropolises and an unprecedented growth of e-commerce worldwide, which has generated new consumption habits. The research <ns0:ref type='bibr' target='#b16'>(Poll&#225;k et al., 2021)</ns0:ref> describes the fluctuations in e-consumer behavior in the marketplace during the COVID-19 pandemic. The research examined the nature and timing of interactions; the results confirmed the evolutionary change from offline to online during the pandemic, a process that was accelerated and inevitable.
The study <ns0:ref type='bibr' target='#b14'>(Pa&#537;tiu et al., 2020)</ns0:ref> indicates that the e-commerce trend, due to the global phenomenon of COVID-19, has undergone significant variations in online consumer behavior. The authors indicate that the validated model shows the thorough impact of website accessibility on consumers, their enjoyment, and their expectations. The results of the study indicate that e-commerce systems require improved sustainability and better software systems. Previous studies report the growth of e-commerce during the COVID-19 period; this accelerated growth has generated millions of e-commerce websites, but many sites are not accessible; therefore, it is essential to consider the Web Content Accessibility Guidelines 2.1 (WCAG 2.1) proposed by the World Wide Web Consortium. Web accessibility implies that people with disabilities can use the web, that is, that they can perceive, understand, navigate, and interact with the website. WCAG 2.1 (World Wide Web Consortium, 2018) consists of 4 principles, 13 guidelines and 78 success criteria, plus a set of supporting techniques. Principle 1 -Perceivable refers to the website's contents and interface, which must be perceivable by all its users. It includes audiovisual content; the interface, images, buttons, video players and other components must be accessible, completely recognizable, and usable by any individual in any condition, with any tool and operating system. Principle 2 -Operable means that a website must offer many ways, all of them evident and announced, to perform an action or search for content. The more alternatives there are, the better its accessibility will be. According to the W3C rules, a website is operable when it ensures that: 1) All its functionality is available from the keyboard. 2) Users have a reasonable amount of time to read and understand the content. 3) No content or tools are created that can cause seizures. 4) Means for expeditious navigation are provided. Principle 3 -Understandable: for a site to be considered understandable, it must consider the following elements: 1) Legibility and understandability, which refer to both the form and the substance of the texts of a website; in its form, it must use a font that all users can read. 2) Predictability, which relates to the functioning of a site so that potential users do not waste time trying to guess how a tool works, resulting in a better browsing experience. Principle 4 -Robust refers to websites or applications that must be compatible with all web browsers, operating systems and devices, and with assistive technology applications or digital ramps. Below the principles are the guidelines; the 13 guidelines provide the basic aims that authors should accomplish to generate more accessible content for users with disabilities. The guidelines themselves are not testable, but they state general objectives that help creators understand the success criteria and better apply the techniques. Each guideline includes testable success criteria that can be used, for example, for compliance testing in a contractual agreement. To meet accessibility requirements, three conformance levels are defined, with A being the lowest, AA being a medium level that is accepted and recommended by WCAG, and AAA being the highest (World Wide Web Consortium, 2018).
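As an illustration of how some success criteria translate into checks that a tool can run automatically, the short Python sketch below flags two common barriers: images without alternative text (success criterion 1.1.1, under the perceivable principle) and a missing page language declaration (3.1.1, under the understandable principle). It only indicates the kind of rule an automatic checker applies, not the implementation used by WAVE, and the sample HTML and function name are hypothetical:

# Illustrative only; not WAVE's implementation. Flags two automatable
# WCAG 2.1 level A checks in an HTML document.
from bs4 import BeautifulSoup

def basic_wcag_checks(html):
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    # 1.1.1 Non-text Content (Perceivable): every <img> needs alternative text.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            findings.append(("1.1.1", "A", "image without alt text: %s" % img.get("src")))
    # 3.1.1 Language of Page (Understandable): <html> should declare lang.
    root = soup.find("html")
    if root is None or not root.get("lang"):
        findings.append(("3.1.1", "A", "missing lang attribute on <html>"))
    return findings

# Hypothetical page fragment used only to demonstrate the checks.
sample = "<html><body><img src='product.jpg'><p>Add to cart</p></body></html>"
for criterion, level, message in basic_wcag_checks(sample):
    print(criterion, level, message)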
The documentation includes several techniques; the techniques are grouped into two classes: 1) those that are sufficient to meet the success criteria and 2) advisory techniques that allow the authors to better comply with the guidelines. In addition, some of the advisory techniques address accessibility barriers that the testable success criteria do not cover. In reviewing the literature, we found some research related to the evaluation of web accessibility in e-commerce sites, the evaluation methods, and the tools used in automatic inspection. The study <ns0:ref type='bibr' target='#b15'>(Paz et al., 2021)</ns0:ref> discusses the accessibility with which software products, including e-commerce stores, should be designed. It indicates that some countries apply laws and government policies that ensure accessibility to websites considering different skills and abilities. The study compared the results of five tools for inspecting the accessibility of e-commerce websites; the conclusions show that there are no 100% accessible sites. The research <ns0:ref type='bibr' target='#b26'>(Xu, 2020)</ns0:ref> compares the accessibility of e-commerce among major retailers considering compliance with web accessibility guidelines. As a case study, they evaluated 45 e-commerce websites using the WAVE tool; the findings revealed that websites using ARIA attributes showed lower accessibility levels overall. They concluded that the accessibility of mature websites was stronger than that of new websites with innovative products. The study <ns0:ref type='bibr' target='#b6'>(Alshamari, 2016)</ns0:ref> argues that many tools can help make a website accessible. The article explores some available tools that help designers and developers evaluate the accessibility of the web. The research results indicate that navigation, readability, and timing are the most common accessibility issues encountered when evaluating the accessibility of the selected websites. The article <ns0:ref type='bibr' target='#b13'>(Padure &amp; Pribeanu, 2020)</ns0:ref> notes that the Web Accessibility Evaluation Tool (WAVE) is a free tool provided by WebAIM (Web Accessibility In Mind). The authors indicate that WAVE offers a color-coding system: red for errors that need to be corrected urgently, green for features that appear correct but should still be checked, and yellow for potential problems that need manual review. The authors of <ns0:ref type='bibr' target='#b11'>(Oliveira et al., 2020)</ns0:ref> infer that accessibility is fundamental in the democratization of technologies, so applying the Web Content Accessibility Guidelines (WCAG 2.1) is essential. Their accessibility evaluation was applied to three websites operating in Portugal, considering the ranking of the three best-positioned retailers according to SimilarWeb. The results obtained led to a collection of suggestions aimed at increasing the accessibility of e-commerce websites. Our study differs from <ns0:ref type='bibr' target='#b11'>(Oliveira et al., 2020)</ns0:ref> and the research <ns0:ref type='bibr' target='#b26'>(Xu, 2020)</ns0:ref> because the total sample is taken from ecommerceDB, which ranks the leading e-commerce stores and relates them to market trends. In our evaluation, we applied a new method based on the WCAG-EM 1.0 methodology.
In addition, the WAVE evaluation tool used here is version 3.1.6, updated as of October 14, 2021, which includes the plugin component that allows evaluating websites that require authentication. We, therefore, propose ten recommendations to improve the accessibility of the websites, listed in the discussion section. Research <ns0:ref type='bibr' target='#b0'>(Abascal et al., 2019)</ns0:ref> related to Web accessibility evaluation argues that manual verification of compliance with accessibility guidelines is often complicated and unmanageable, so the authors suggest applying software tools that perform automatic accessibility evaluations. It reviews the main features of tools used for Web accessibility evaluation and offers an outlook on the future of accessibility tools. The authors <ns0:ref type='bibr' target='#b5'>(Patricia Acosta-Vargas et al., 2019)</ns0:ref> suggest that verifying the accessibility of a Web site is a considerable challenge for accessibility specialists. Today, there are quantitative and qualitative methods for verifying whether a website is accessible. In general, these methods use automatic tools because they are low-cost, but they do not represent a perfect solution. The authors propose a heuristic method with manual support based on the Web Content Accessibility Guidelines 2.1. The evaluators concluded that the research could serve as a preliminary argument for upcoming analyses concerned with web accessibility heuristics. Our investigation proposes an automatic review method using the WAVE Web Accessibility Evaluation Tool <ns0:ref type='bibr' target='#b23'>(WebAIM, 2021)</ns0:ref>. Earlier studies by the authors (Patricia Acosta-Vargas et al., 2018) indicated that WAVE is one of the best tools for automated review; it identifies accessibility barriers based on the Web Content Accessibility Guidelines (WCAG) 2.1 (World Wide Web Consortium, 2018) and assists both automatic review and evaluation by web content experts. This preliminary study was applied to the 50 best-ranked e-commerce stores according to the ranking proposed by the ecommerceDB site (EcommerceDB, 2020), which contains detailed information on more than 20,000 stores from 50 countries and 13 categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>In this research, to evaluate the accessibility of e-commerce websites, we applied an automatic review method (World Wide Web Consortium (W3C), 2014) centered on a modification of the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. We used the WAVE Web Accessibility Evaluation Tool (WebAIM, 2021) with its extension for Google Chrome, which helps verify password-protected, locally stored, or highly dynamic pages. Utah State University developed the WAVE automatic evaluation tool to help find potential accessibility issues according to WCAG 2.1, facilitating manual evaluation. It should be noted that manual testing cannot be replaced, especially when it comes to accessibility, as it may be essential to test with end-users with disabilities. Accessibility validation is performed using guidelines based on Section 508 and WCAG 2.1; some phases of this methodology (Salvador-Ullauri et al., 2020) were
tested in previous works of the authors related to serious games; the methodology for evaluating e-commerce websites is summarized in eight phases, as shown in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. Phase 1 Select e-commerce Websites. In this phase, we selected the 50 e-commerce websites that are in the top rankings according to the classification proposed by ecommerceDB <ns0:ref type='bibr' target='#b9'>(EcommerceDB, 2020)</ns0:ref>. This phase starts the evaluation process: the level of compliance with WCAG 2.1 targeted by the evaluation is defined <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. In this case, the AA level is evaluated, which is within the accepted and recommended level. During this phase, we also determine the minimum set of web browsers, operating systems, and assistive technologies that the website must work with; this baseline can be extended in future work. We defined the Windows 10 operating system for this case study, with Google Chrome browser version 95.0.4638.54 and support for screen readers. Phase 2 Categorize the type of users. According to <ns0:ref type='bibr' target='#b18'>(Salvador-Ullauri et al., 2020)</ns0:ref>, in this phase, we involved three web accessibility experts with experience in the area since 2015, who have more than ten scientific publications in web accessibility evaluations, serious games, and accessible mobile applications. Discrepancies found in the automatic review of e-commerce sites were resolved by consensus. The experts performed the automatic review with WAVE, which is one of the best-performing tools according to previous studies (Patricia <ns0:ref type='bibr' target='#b2'>Acosta-Vargas et al., 2018)</ns0:ref>. In this phase, the flow of events that users interact with when browsing e-commerce sites was also identified. Phase 3 Define the test scenario. Following (Salvador-Ullauri et al., 2020), the activities to be performed with the automatic tool are defined in this phase. We identify the essential functionalities of the e-commerce website to help select the most representative instances. The definition of the test scenario serves as the basis for the subsequent selection of the e-commerce sites. In this case, we apply the following scenario: 1) We enter the first page of the website. 2) We interact with the selection and purchase of products on the e-commerce website. 3) We test how to fill out and submit the forms. 4) We check if an account registration on the e-commerce site is required. Phase 4 Explore the e-commerce Website. In this phase, the first page of each e-commerce website was explored. The evaluators explored the e-commerce website to better understand its purpose, functionality, and usage. In some cases, it is impossible to identify all the functionality or all the types of web pages and technologies used. The initial exploration of this phase was considered in Phase 1 by selecting a representative sample, which was then refined in Phase 5 by evaluating with WAVE. Involving accessibility experts and website designers can help make this exploration more efficient. At first, cursory checks are performed to help identify relevant web pages; later, a more detailed evaluation is performed on each component of the website.
Therefore, this phase is essential for the evaluators, who must access all significant website components and functionalities.</ns0:p><ns0:p>Phase 5 Evaluate with WAVE. In this phase, the experts used WAVE to evaluate the home page of each e-commerce website. The assessment was conducted in March 2021; during this phase, the evaluators audited the sample e-commerce websites and the states of the websites selected in phases 1 and 4. The assessment was carried out following the WCAG 2.1 conformity requirements at level AA previously defined in phase 1. For this, the level of conformity, the completed web pages, processes, and forms of technologies are compatible with accessibility and noninterference. In this phase, it is essential to have a thorough knowledge of the WCAG 2.1 (World Wide Web Consortium, 2018) conformance requirements and the experience of accessibility experts. In addition, the authors classified accessibility barriers matching WCAG 2.1 principles, guidelines, and success criteria, which were then validated with WAVE results. Phase 6 Record evaluation data. In this phase, the assessment data was documented in a spreadsheet to organize a dataset available in the Mendeley source (P. <ns0:ref type='bibr' target='#b1'>Acosta-Vargas et al., 2021)</ns0:ref>. Recording the evaluation details is a good practice on the part of the evaluators, such as the names of the websites audited, the URLs of the e-commerce sites, and the evaluation data that can be used to replicate this study. The dataset contains: 1) The e-commerce websites evaluated. 2) The outcomes of the e-commerce websites evaluated with WAVE. 3) The map of the number of ecommerce websites by country. 4) Diagram of the WAVE accessibility evaluation process. 5) Summary of the assessment of e-commerce websites. 6) The accessibility principles of WCAG 2.1. 7) Accessibility barriers identified when evaluating with WAVE. 8) E-commerce websites versus compliance. 9) E-commerce websites with errors and contrast errors. 10) The number of alerts, features, structural elements, and ARIA. 11) E-commerce websites and the relationship to the levels found. Phase 7 Classify and analyze data. According to <ns0:ref type='bibr' target='#b18'>(Salvador-Ullauri et al., 2020)</ns0:ref>, in this phase, data related to accessibility principles, success criteria and accessibility levels were organized. This information is described in the outcomes section and examined in the discussion section. Also, the e-commerce websites were classified by 1) The countries to which each domain corresponds according to the registered URL. 2) The severe errors that need to be corrected to remove accessibility barriers. 3) Contrast errors that make access difficult for visually impaired users. 4) The ranking in which they are placed and the level of accessibility. The data analysis was performed with Microsoft Excel version 365 MSO 16.0.14326.20504, with macros, advanced functions, tables, and dynamic graphs. Phase 8 Suggest accessibility improvements. In this phase, proposals for improvements to the ecommerce websites were presented. The improvements are detailed in the discussion section.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>This research was applied to a sample of the top 50 e-commerce websites taken from ecommerceDB; which contains information on more than 20,000 e-shops from around 50 countries. 
It is divided into several categories such as revenue and competitor analysis, market development, performance and traffic indicators, vendor submission, payment options, social media activity and SEO information. The ecommerceDB.com database also covers e-commerce PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65405:1:1:NEW 15 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science market analysis, customer behaviors and buying patterns, market trends and company histories. Table <ns0:ref type='table'>1</ns0:ref> contains the sites that were evaluated with WAVE.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> E-commerce websites. Presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the name of the electronic store, the URL, and the acronym.</ns0:p><ns0:p>The evaluation of the accessibility of e-commerce stores was carried out with the WAVE automatic review tool. Rich Internet applications tend to dynamically update the Document Object Model (DOM) structure, which is why the method used by WAVE to analyze the rendered DOM of pages uses heuristics and logic to detect end-user accessibility barriers considering WCAG 2.1 (World Wide Web Consortium, 2018). All automatic review tools, including WAVE, have limitations; they can detect barriers in 35% of possible compliance failures (WebAIM, 2021). The method applied in evaluating the accessibility of e-commerce stores was based on a modification of the (World Wide Web Consortium (W3C), 2014) WCAG-EM 1.0; our method consists of an eightphase process. Table <ns0:ref type='table'>2</ns0:ref> presents the data achieved from the accessibility assessment of e-commerce websites with WAVE. It contains the number of errors, refers to the accessibility barriers that will affect certain users and contains the WCAG 2.1 compliance failures that need to be urgently corrected by web developers to make the site accessible and inclusive. Contrast errors this issue is related to the text that does not meet WCAG 2.1 contrast requirements. The term alerts are related to elements that may cause problems; in this case, the evaluator is the one who decides the impact of the accessibility of the website. Features imply that elements can improve accessibility when implemented correctly. Structural elements are related to some title of a web page, indicating that it has been marked as a top-level title or related to several milestones. Finally, ARIA presents accessibility information for people with disabilities; it also reduces accessibility when misused.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> E-commerce websites evaluated. It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the number of e-commerce sites by country, taken as a sample for the accessibility evaluation. The most significant number of e-commerce sites evaluated corresponds to the United States with 24 sites, representing 48% of the total, followed by the United Kingdom with eight sites, representing 16%. In third place, Greater China, with six e-commerce sites, represents 12%. Then Germany, with three sites, accounts for 6%, followed by France and Russia, each with two sites, representing 8% of the total. 
Finally, each e-commerce site from Brazil, Canada, Italy, Japan, and Spain accounted for 10%. Figure <ns0:ref type='figure'>3</ns0:ref> Map of the number of e-commerce sites taken by the country. The map presents the countries taken as part of the sample to evaluate accessibility according to the classification proposed by ecommerceDB. The sky blue color indicates the country with the highest number of e-commerce sites, the yellow color the midpoint and the pink color the lowest number of e-commerce sites.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows two categories of barriers, warning and serious barriers; the most significant warning barriers are ARIA, Structural Elements, Features and Alerts. These barriers do not affect accessibility to a high degree, and correcting them is unnecessary. The serious barriers are in a high number in Contrast Errors with 1721 barriers, representing 7.4% (pink bars) and Errors with 1229, corresponding to 5.3% (sky blue bars), which must be corrected urgently for e-commerce sites to reach an acceptable level of accessibility. ARIA attributes (World Wide Web Consortium, 2018) add semantic information to the elements of a website, specifically for properties that help to inform: 1) The state of an element of the graphical interface. 2) The content of a section that may change when there is user interaction. 3) The elements that are part of a drag-and-drop interface. 4) The relationships between document elements. Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> Evaluation of accessibility with WAVE. It shows the barriers Errors (sky blue bars) and Contrast Errors (pink) that should be corrected urgently to improve accessibility. Alerts (yellow), Features (gray), Structural Elements (orange) and ARIA (blue) that can be corrected depending on the evaluator's criteria.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> summarizes the barriers identified during the evaluation of the e-commerce sites with the WAVE. Table <ns0:ref type='table'>3</ns0:ref> includes the barriers, success criteria, level, principle, and total barriers of the 50 e-commerce websites assessed. Table <ns0:ref type='table'>3</ns0:ref> comprises the success criteria composed of three numbers; the first is associated with the accessibility principle, the second to the guideline, and the third to the success criteria related to the accessibility barrier. Table <ns0:ref type='table'>3</ns0:ref> Summary of the evaluation of e-commerce websites. Shows the summary of accessibility barriers identified by applying the WAVE automatic review tool.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>5</ns0:ref> shows a synopsis of the accessibility principles recognized in the assessment of ecommerce websites. The most neglected accessibility principle is perceivable, representing 83.1% of the total, followed by operable with 13.7%, in third place is robust with 1.7%, and finally, understandable with 1.5%. Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref> summarizes the barriers identified in the accessibility evaluation. The most affected accessibility barrier corresponds to Contrast with 54.4%, followed by Non-text Content, representing 25.6%, in third place is Link purpose, representing 11.6% of the total. The rest of the barriers, such as Info and relationships, Name, Role, Value, Headings and labels, Labels or Instructions, Bypass blocks, Keyboard, Language of page and Error identification, correspond to values lower than 3.1%. 
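The shares reported in Figures 5 and 6 can be reproduced from the evaluation data recorded in Phase 6 with a few lines of analysis code. The sketch below uses pandas with invented column names and illustrative counts (only the 1,721 contrast barriers come from the text above), so it shows the shape of the Phase 7 computation rather than the study's exact dataset:

# Sketch of the Phase 7 analysis: share of barriers per WCAG 2.1 principle.
# Column names and most counts are illustrative placeholders.
import pandas as pd

barriers = pd.DataFrame({
    "barrier":   ["Contrast", "Non-text Content", "Link Purpose", "Keyboard"],
    "principle": ["Perceivable", "Perceivable", "Operable", "Operable"],
    "count":     [1721, 800, 360, 40],   # 1721 is reported above; the rest are invented
})

per_principle = barriers.groupby("principle")["count"].sum()
share = (per_principle / per_principle.sum() * 100).round(1)
print(share.sort_values(ascending=False))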
Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> presents the e-commerce sites and their level of web accessibility; among the top ten most accessible websites according to this analysis with WAVE, we have Sainsbury's Supermarkets, Walmart, Target Corporation, Macy's, IKEA, H &amp; M Hennes, Chewy, The Kroger, QVC, and Nike.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> E-commerce websites evaluated. It presents the level of web accessibility of the ten most accessible websites evaluated with the WAVE automatic review tool.</ns0:p><ns0:p>In addition, the correlation between the ranking of e-commerce sites and accessibility barriers was analyzed. In Table <ns0:ref type='table'>4</ns0:ref>, the Kolmogorov-Smirnov test statistic yields p &gt; 0.05 for Errors, Contrast Errors and Ranking, suggesting a normal distribution despite their variability. However, when the Lilliefors significance correction is applied, the variables Errors and Contrast Errors have p &lt; 0.05, confirming that they do not follow a normal distribution, while the variable Ranking keeps p &gt; 0.05, confirming a normal distribution. Table <ns0:ref type='table'>4</ns0:ref> Normality tests. It shows the normality tests with the Lilliefors significance correction, applied to errors, contrast errors, and ranking.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> presents Spearman's non-parametric correlation between e-commerce website ranking and accessibility barriers. In this case, the correlation is significant for accessibility barriers at the 0.05 level (two-tailed). Table <ns0:ref type='table'>5</ns0:ref> Spearman correlation. It presents Spearman's non-parametric correlation between the ranking of e-commerce websites and accessibility barriers. Spearman's Rho correlation is 0.329, indicating that the correlation is low positive.</ns0:p><ns0:p>In analyzing the accessibility of e-commerce websites, we applied multivariate descriptive statistics and pivot tables with the Excel tool. In addition, to analyze the correlation between the ranking of e-commerce websites and accessibility barriers, we used the IBM SPSS Statistics version 25 statistical software, with which we applied the Kolmogorov-Smirnov test and the Lilliefors significance correction. Because the data are not suitable for a parametric correlation, we applied Spearman's Rho between the ranking of e-commerce websites and their accessibility barriers. The correlation is 0.329, which indicates that the correlation is low positive.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Concerning the modification made in WCAG-EM 1.0, three additional phases were included. By applying WAVE in Phase 5, it is possible to perform automatic checks, which considerably reduces review time and detects numerous problems that would take more time and would be difficult to identify manually. Our methodology allows relating WCAG 2.1 criteria to accessibility barriers; this method can be applied throughout the website development cycle. Currently, e-commerce sites have become essential tools for performing commercial transactions due to the COVID-19 pandemic; with the findings obtained, it is evident that web designers and developers need to apply the WCAG 2.1 guidelines (World Wide Web Consortium, 2018) in order to have more accessible and inclusive e-commerce sites. We identified that there is a significant number of e-commerce
sites that occupy the top positions in the most developed countries, such as the United States, United Kingdom, Greater China, Germany, Russia and France; however, occupying the top positions does not guarantee that they comply with the WCAG 2.1 accessibility standards. The most accessible store is Sainsbury's Supermarkets, located in the United Kingdom, with zero 'Errors', followed by Walmart, Target, Chewy, IKEA, and Macy's from the United States and H &amp; M from Germany, each with one 'Error'; the rest of the e-commerce websites have more than one 'Error' and more than six 'Contrast Errors'. In the evaluation of e-commerce websites, it was found that the most common errors are related to 'Contrast errors', representing 54.4% of the total, in second place 'Non-text Content', corresponding to 25.6%, 'Link Purpose' with 11.6% and 'Info and Relationships' with 3.1% of the total. Of the 50 e-commerce websites evaluated, 44.2% of the total comply with level 'A' for accessibility and 55.8% with level 'AA'; none of the sites evaluated reach level 'AAA'. The highest number of accessibility barriers is concentrated in the 'perceivable' principle, representing 83.1% of the total, while 16.9% are distributed among the 'operable,' 'robust,' and 'understandable' principles. This finding implies that most barriers are related to problems for users with low vision, including older adults whose vision deteriorates with age <ns0:ref type='bibr' target='#b12'>(Padmanaban et al., 2019)</ns0:ref> due to the eye's normal aging process. Ocular degenerative diseases such as glaucoma, age-related macular degeneration, diabetic retinopathy, age-related cataracts, and cerebrovascular accidents can also trigger a decrease in visual acuity and the visual field. One of the most frequently repeated barriers on e-commerce sites concerns contrast and color usage, vital parameters for web accessibility, which matters because a large share of the world's users have some degree of visual impairment. According to the World Health Organization (World Health Organization (WHO), 2021), worldwide, at least 2.2 billion people have near or distance vision problems. To achieve accessibility in e-commerce websites, it is essential to check the contrast ratio, which measures the difference in 'luminance' or perceived brightness between two colors. The difference in brightness is expressed as a ratio ranging from 1:1 (no contrast) to 21:1 (black on white). WCAG 2.1 suggests addressing contrast with three success criteria: 1.4.3 Contrast (Minimum), 1.4.6 Contrast (Enhanced) and 1.4.11 Non-text Contrast. WCAG 2.1 sets 4.5:1 as the minimum ratio required for normal text at level AA. Even so, some color combinations that meet this threshold may not be very readable for all users.</ns0:p><ns0:p>For an e-commerce website to be accessible <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>, it must meet level AA for accessibility; the evaluated e-commerce websites do not present any accessibility statement or specify whether they officially comply. Since the home page of these e-commerce sites is the user's entry point, it should be a primary objective for enhancements in terms of accessibility. Improving e-commerce websites should be a responsibility that companies take on, as it enhances the user experience and reduces the technology gap that people with disabilities may experience.
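The contrast ratio referred to above is computed from the relative luminance of the two colors, following the definition given in WCAG 2.1. The sketch below implements that formula in Python; the grey-on-white color pair is a hypothetical example, not taken from the evaluated sites:

# WCAG 2.1 contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the
# relative luminances of the lighter and darker colors, respectively.
def relative_luminance(rgb):
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    l1, l2 = sorted((relative_luminance(foreground), relative_luminance(background)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Hypothetical example: mid-grey text on a white background.
ratio = contrast_ratio((119, 119, 119), (255, 255, 255))
print("contrast ratio %.2f:1," % ratio, "passes" if ratio >= 4.5 else "fails", "the 4.5:1 AA minimum")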
Accessibility issues can be reduced with the following recommendations: 1) Improve contrast by considering the colors and contrasts of the screen, checking that they are displayed correctly on all devices. 2) Eliminate time limits, or at least lengthen them; it is essential to consider that users with disabilities need more time to browse online. 3) Always add a 'skip content' option; it is very useful for users who access the Internet with screen readers and want to avoid content that does not interest them. 4) Provide transcripts of texts: in addition to subtitling videos, it is advisable to include transcripts so that the hearing impaired can read the video content at their own pace. 5) Add captions to graphics, especially those whose description does not fit in the 'alt' attribute. The use of the 'alt' attribute, called 'alternative text,' is essential for blind users who use screen readers. 6) Avoid relying on red alone to highlight important things, since this excludes users with color blindness; a better accessibility option is to use larger type or representative icons. 7) Write content in plain language for any reader; this helps both people with and without disabilities. 8) Use legible fonts larger than 15px. 9) Avoid using paragraphs longer than four lines. 10) Use images and diagrams that help the reader understand the content better. This study is original in that the 50 top-ranked e-commerce sites were evaluated. Nowadays, e-commerce websites must have accessibility policies and standards, since many transactions are made electronically due to the pandemic. This research can guide developers and designers of e-commerce websites to spread the use of WCAG 2.1, which is intended to cover a more extensive set of recommendations to make the web more accessible. WCAG 2.1 can be considered a superset containing WCAG 2.0. Therefore, as WCAG 2.1 extends WCAG 2.0, there are no incompatible requirements between the two versions.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>This research has a fundamental limitation: it was conducted using the WAVE automated review tool. Despite being a powerful tool that helps organizations improve the accessibility of websites for people with disabilities, WAVE cannot tell whether web content is accessible; only a human being can determine true accessibility. This evaluation did not include testing with users with disabilities; three accessibility experts conducted the accessibility testing. No additional hardware or digital ramps were used in this study to achieve greater accessibility during the website evaluation process.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We consider the research relevant, especially during the COVID-19 period, when e-commerce is considered the leading solution during confinement and indirectly improves the world economy. The WAVE e-commerce site evaluation procedure can be employed on any website throughout the site development cycle to make a website more accessible. We recommend combining automated reviews with heuristic evaluations based on WCAG 2.1 (World Wide Web Consortium, 2018) accessibility barriers and with testing by users with different disabilities. We found that 55.8% of the websites reach the 'AA' level suggested by WCAG 2.1. We found a low positive correlation between the ranking of e-commerce websites and accessibility barriers according to Spearman's Rho of 0.329.
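The statistical step just described (a normality check followed by a rank-based correlation) can also be reproduced outside SPSS. The sketch below uses SciPy with invented placeholder values rather than the study's data, and the Shapiro-Wilk test stands in for the Kolmogorov-Smirnov test with Lilliefors correction used in the paper:

# Sketch of the reported analysis: check normality, then correlate ranking
# with barrier counts using Spearman's Rho. The numbers are placeholders.
from scipy.stats import shapiro, spearmanr

ranking        = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
barrier_counts = [3, 41, 12, 25, 7, 60, 18, 9, 33, 22]

_, p_normal = shapiro(barrier_counts)        # p < 0.05 would suggest non-normal data
rho, p_value = spearmanr(ranking, barrier_counts)
print("normality p = %.3f, Spearman rho = %.3f (p = %.3f)" % (p_normal, rho, p_value))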
The study revealed that 25.6% of e-commerce websites present product images that require the inclusion of alternative text and that 54.4% of the sites present contrast problems related to the perceivable principle, which need to be solved urgently to make the sites more inclusive. Finally, we suggest that business people, governments, and academia work in multidisciplinary teams to generate laws and regulations related to web accessibility that would benefit all users, especially those with disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>For future research, it is recommended to: 1) Perform tests with other automatic review tools and compare the results obtained. 2) Conduct tests with users with different disabilities. 3) Build a software tool that includes artificial intelligence algorithms that help the software learn the heuristics that may cause accessibility barriers. 4) Include hardware limitations and interstitial advertising in the study.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites. Diagram for assessing accessibility in e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 WAVE accessibility evaluation. It presents detailed evaluation results with accessibility principles according to WCAG 2.1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Accessibility barriers identified when evaluating with WAVE. It presents the results related to the success criteria according to WCAG 2.1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 The trend in Google Trends for terms related to accessibility, e-commerce, and WCAG.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Map of the number of e-commerce sites taken by country.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Evaluation of accessibility with WAVE.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 WAVE accessibility evaluation.</ns0:figDesc><ns0:graphic coords='22,42.52,205.78,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Accessibility barriers identified when evaluating with WAVE.</ns0:figDesc><ns0:graphic coords='23,42.52,204.37,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 E-commerce websites evaluated.</ns0:figDesc></ns0:figure>
</ns0:body> "
"Universidad de Las Américas RUC: 1791362845001 Vía a Nayón a 300 metros del Redondel del Ciclista Quito, Pichincha EC 170503 Ecuador https://www.udla.edu.ec/ patricia.acosta@udla.edu.ec November 15, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited it to address their concerns. We implemented the recommendations to improve our manuscript. We believe the manuscript is now suitable for publication in PeerJ. Dra. Patricia Acosta-Vargas Associate Professor of Computer Science UDLA On behalf of all authors. Reviewer 1 (Anonymous) Dear reviewer, we appreciate your recommendation. We have implemented what you suggested, and the change is highlighted in yellow in the PDF document. Basic reporting This paper is well written. In addition to the introduction and conclusion, it is composed of method and material, results and discussion sections Hence, It is well structured. The literature references are relevant and recent. However, almost of theme are reports and guides from international organizations like WHO. In fact, it will be more suitable to cite academic papers. Thank you very much for your recommendation, we have restructured the paper, we include more citations of academic articles related to this study. The authors follow a clear methodology. The obtained results are analyzed and discussed in order to give solution to the accessibility problem. Solutions of accessibility are described in the conclusion section. I think that it is more suitable to put in the discussion section. Again thank you very much for your suggestion. We have placed the accessibility solutions in the discussion section. Also, I think that the authors have to describe in details the followed methodology and the proposed solutions. Dear reviewer, we appreciate your valuable recommendations that allow us to improve our article. We have implemented what was suggested, we have described the methodology and the proposed solutions in more detail. You can see it in the Methods and materials section. The authors have to explain some abbreviations (like ARIA attributes, A, AA and AAA levels). Thanks for the recommendation; we have explained what was suggested. You can check it in the Literature review of COVID-19, e-commerce, and web accessibility section. I think that the introduction is very long. The authors touch on many subjects to show the importance of the problem (like COVID-19, e-commerce, accessibility). I suggest to rewrite the introduction to be more consistent. I remark that there is no related work section presented in this paper !!! Thank you very much for the timely recommendation. We have restructured the sections of the paper. We shortened the Introduction and rewrote it. We included the Literature review of COVID-19, e-commerce, and web accessibility section where related papers to this research were included. Experimental design The paper presents the accessibility challenges of e-commerce websites. The authors through this study have presented many contributions: 1 - correlation between the ranking of e-commerce websites and the accessibility, 2 - the most significant accessibility barriers and neglected principles 3 - some proposed solutions to improve the accessibility of e-commerce websites. I think that the research is well designed. Also, the research question is relevant, especially during COVID-19 period where e-commerce is considered as a main solution to apply confinement. The applied methodology is rigorous. 
Also, the authors have mentioned the main limits of this methodology which consists mainly in the inability of automated tools to test content accessibility. I think, that the authors have to explain in details the followed methodology. Dear reviewer, thank you for your valuable comments. We have applied what you suggested; the changes made can be reviewed in the Materials & Methods section. Comparing to some cited works (like Oliveira et al (2020)), , I did understand exactly the novelty of this study. Is it the proposition of solutions to the accessibility? because the same tools (WAVE) and field (Ecommerce) are used in both studies. Thank you for your suggestions; we applied what was requested. The applied changes can be reviewed in the Literature review of COVID-19, e-commerce, and web accessibility section. Validity of the findings The obtained results are very important and they allow improving e-commerce websites (and indirectly improving the global economy). The first remark of the authors is the correlation between e-commerce websites ranking and the accessibility. This remark guides the business sector to improve the accessibility of their websites to improve their rank. Also, the authors describe the main accessibility barriers and the most neglected principles of this attribute. Moreover, the authors presented the solutions of this issue. In term of novelty and originality, the authors have mentioned several recent works that are deal with this problem (using the same tools and applied in the e-commerce field). I think, that the authors have to mention clearly the novelty and originality points comparing to these studies. I suggest adding related work section to study this point. Dear reviewer, thank you very much for your valuable comments. We have implemented the suggested added Literature review of COVID-19, e-commerce, and web accessibility section. In addition, we included the points of novelty and originality of our study with the related works. The authors provided all data required to replicate. In future works, the authors suggest using in their future work a hybrid method by applying automated tool with manual review method. I think that it is difficult to test the accessibility of 50 websites manually. Also, the manual method causes subjectivity problems. Dear reviewer, thank you for your comments; we have taken them into account. We have included the Future Work section, where we suggest 1) Perform tests with other automatic review tools and compare the results obtained for future research. 2) Test with users with different disabilities. 3) Build a software tool that includes artificial intelligence algorithms that help the software learn heuristics that may cause accessibility barriers. 4) Include hardware limitations and interstitial advertising in the study. Reviewer 2 (Anonymous) Dear reviewer, we appreciate your valuable comments that allow us to improve our article. We have implemented what you have suggested; the changes highlighted in yellow in the PDF document. Basic reporting The manuscript titled: Accessibility challenges of e-commerce websites, evaluates the accessibility of 50 e-commerce web sites in the top rankings based on classification proposed by ecommerceDB. The evaluation was made under WAVE tool. MS Excel pivot table tool was applied for analyzing the accessibility of selected e-commerce. 
In constrast, the normality test and the correlation between the ranking of e-commerce websites and accessibility barriers were analyzed using IBM SPSS (v 25). Overall, the authors’ contribution was not pretty clear since the study analyses the outputs of WAVE tool on 50 top ranking e-commerce web sites. The proposed Methodology for evaluating e-commerce websites needs further highlights. For exemple, in the third phase: Define the test scenario, from a software engineering point of view there is no provided test. scenario. Also, Is there any potential back-tracking in the proposed Methodology in the case where WAVE does not exhibit a relevant information (potential failures, new accessibility improvements purposes, etc) ? Also, in Phase 2, which user categories were selected? it should be discussed. Dear reviewer, thank you very much for your comments; we have restructured the document to clarify our contribution better. We have enlarged the sections: • Literature review of COVID-19, e-commerce, and web accessibility. • Limitations • Future Work We have also detailed and highlighted the methodology applied in the Materials & Methods section. For example, in Phase 2, Categorize the type of users. This phase involved three web accessibility experts with experience in the area since 2015, who have more than ten scientific publications in web accessibility evaluations, serious games, and accessible mobile applications. Discrepancies found in the automatic review of e-commerce sites were resolved in consensus. Experts performed the automatic review with WAVE as evaluated by experts WAVE is one of the best performing tools according to previous studies (Patricia Acosta-Vargas et al., 2018). In this phase, the flow of events that users interact with when browsing e-commerce sites was identified. Phase 3 Define the test scenario. In this phase, the activities to be performed to evaluate with the automatic tool are defined. We identify the essential functionalities of the e-commerce website to help select the most representative instances. The definition of the test scenario serves as the basis for the subsequent selection of the e-commerce sites. In this case, we apply the following scenario: 1) We enter the first page of the website. 2) We interact with the selection and purchase of products on the e-commerce website. 3) We test how to fill out and submit the forms. 4) We check if an account registration on the ecommerce site is required. The authors are strongly invited to make a comparison between their findings and the study of (Xu, 2020) which is the unique work presented in the State-Of-The-Art section that uses WAVE-based evaluation of 45 e-commerce website. Dear reviewer, thank you very much for your observations; our study differs from (Oliveira et al., 2020) and the research (Xu, 2020) because the total sample is taken from ecommerceDB, which presents the e- commerce websites related to market trends and a ranking of the leading e-commerce stores. In our evaluation, we applied a new method based on the methodology (WCAG-EM) 1.0. In addition, the WAVE evaluation tool is based on version 3.1.6, updated as of October 14, 2021, which includes the plugin component that allows evaluating websites that require authentication. We, therefore, propose ten recommendations to improve the accessibility of the websites listed in the Discussion section. 
I should be interesting, for authors, to highlight the WAVE analysis method and the additional modification indicated in WCAG-EM 1.0 since it impacts the results and the proposed accessibility improvements. Dear reviewer, again, thank you for your suggestions. Concerning the modification made in WCAG-EM 1.0, three additional phases were included. By applying WAVE in phase 5, it is possible to perform automatic checks, which considerably reduces review time and detects numerous problems that would take more time and would be difficult to identify manually. Our methodology allows relating WCAG 2.1 criteria to accessibility barriers; this method can be applied throughout the website development cycle. The applied changes can be reviewed in the Discussion section. In table 2 the column headings were not explained (Errors, Contrast Errors, Alerts, Features, Structural Elements and ARIA). For instance, the authors indicate errors and contrast errors, what is the difference between them? Also, it seems that ARIA, Structural Elements, Features and Alerts are warning barrier types. I suggest to explain these concepts when introducing table2. Dear reviewer thank you for your comments. We implemented what you suggested; the changes are reflected in the Results section, Table 2. Table 3 is presented in summative manner and needs more clarification about Success criteria and Level. Success criteria presents number with X.X.X format which is not clear (the meaning of the first, second and third number is missing). Also, level takes A or AA values which is even not explained before table3. In short, success criteria according to WCAG 2.1 should be outlined first in the appropriate position in the paper. Dear reviewer, thank you for your comments. We have implemented what you suggested; the changes are reflected in the Literature review of COVID-19, e-commerce, web accessibility and Results section, Table 3. The authors should mention how Accessibility barriers have been identified? if the identification process belongs to WAVE findings, it must be noted. Dear reviewer, thank you for your comments. The accessibility barriers were categorized by the authors according to WCAG 2.1 principles, guidelines and success criteria subsequently validated with WAVE findings. This change is detailed in the Results section, in Table 3. In table 4, the authors are invited to justify why the Normality testing is required? Also, the result of normality test should be commented either the data follow the normal distribution or not (i.e., Lilliefors test) Dear reviewer, thank you for your comments. We have implemented what you have suggested; you can check the changes made in the Results section. In Table 4, the test statistic p >0.05 for errors, contrast errors, and ranking have a normal distribution despite their variability. When applying the Lilliefors significance correction, the variables Errors and Contrast errors p<0.05 confirm that they do not have a normal distribution. However, the variable ranking with the Lilliefors significance correction has a p>0.05, confirming a normal distribution. In the discussion section, in addition to what is raised about accessibility barriers (low vision, ocular degenerative diseases, cardiovascular accidents, etc), the Hardware limitations were not discussed. Sometimes end-users’ devices (smartphone, tablets, …) may hide some significant information on the web site based on automatic brightness functionality which is proposed mainly for preserving human vision. 
Interstitial advertising was not discussed as well. Dear reviewer, thank you very much for your recommendations on hardware limitations and interstitial advertising in the future work section. In table 5, when calculating the Pearson correlation, the authors should provide at which level the correlation is significant. Using SPSS, the significance level is defaulted by 0.01, it there any modification of that level? Dear reviewer, thank you for your comments. Table 5 was removed the study was analyzed in Table 4. Page 11, line 282, (In analyzing the accessibility of e-commerce websites, we applied descriptive statistics …)  the authors should clarify which type of descriptive statistics is used: univariate or multi-variate Dear reviewer, thank you for your concern; we applied multivariate descriptive statistics in this study, as indicated in the Results section. Finally, as a native question: Is it possible to take into account web applications by WAVE-based evaluation? Dear reviewer, thank you for your concern. The WAVE e-commerce site evaluation method can be applied to any website throughout the website development cycle to achieve a more accessible website. The changes are reflected in the Conclusions section. Although figures, tables are well labelled and described, some minor issues are: • Line 94 the title: The state of the art is missing since the what follows discuss literature review. Dear reviewer, thank you for your comment. We have added the Literature review of COVID-19, ecommerce, and web accessibility section to address this concern. • Second paragraph after the figure 1: (there are many; many people with hearing roblems)-> redundance Dear reviewer, thank you for your comment; we have solved the problem. • Table3, figure 5, figure 6 and figure 7 captions are written in bold. It should be submitted to the template. Dear reviewer, thank you for your comment; we have implemented what you suggested. • (…, a sample of the top 50 e-commerce websites was taken from the EcommerceDB ranking site)  recurrent phrase in the manuscript. (In Introduction section, In Materials & Methods section, in results section) Dear reviewer, thank you for your statement; we have implemented what you suggested. • Page 8, line 125, (Our research proposes an automatic review method using the WAVE Web Accessibility Evaluation Tool (15) to solve …)  what (15) stands for? Is it about the tool version? Dear reviewer, thank you for your comment; it was a typing error, the (15) was deleted. • page 7, ligne 85: WCAG 2.1 (World Wide Web Consortium, 2018) consists of 4 principles, 13 guidelines and conformance criteria, plus an undetermined number of suitable techniques  Is there really an undetermined number? Can it be limited? Dear reviewer, thank you for your comment; we have clarified your request. You can review the applied change in the Literature review of COVID-19, e-commerce, and web accessibility section. • Page 7, line 88: Principle 1, related to perceptibility, refers to information, and user interface components should be presented most simply. Principle 2 focuses on operability - it comprises the user interface components, and navigation should be between each page  the difference between principle 1 and 2 is not clear. Dear reviewer, thank you for your statement; we have clarified your request. You can review the applied change in the Literature review of COVID-19, e-commerce, and web accessibility section. 
• The first paragraph in the discussion section, (it was revealed that there is a need for web developers and web designers to apply WCAG 2.1 …): incomplete phrase, a descriptive word is missing after WCAG 2.1, like: guidelines, principles, requirements. Dear reviewer, thank you for your statement; we have implemented what you suggested. • Page 13, line 340, (This research can guide developers and designers of e-commerce websites to spread the use of WCAG 2.1 (6), ..) what does (6) stand for? Dear reviewer, thank you for your comment; it was a typing error, and the (6) was deleted. Experimental design Rigorous investigation is performed in the simplest way. Overall, the authors' contribution was not entirely clear since the study analyses the outputs of the WAVE tool on 50 top-ranking e-commerce websites. The proposed methodology for evaluating e-commerce websites needs further highlighting. The introduction should also highlight the gaps left by existing solutions and how the presented proposal addresses them. Dear reviewer, thank you for your concern regarding the proposed methodology for evaluating e-commerce websites, which has been detailed and expanded in the Materials & Methods section. Validity of the findings In the conclusion section, the provided recommendations are suitable for any type of website. I would have liked them to be dedicated to e-commerce sites since the core matter of the study is the accessibility issue in e-commerce websites. Generally, e-commerce websites present images of products. I suppose that an image processing issue should be tackled. Dear reviewer, thank you for your concern; we have applied what was suggested in the conclusions section. The authors suggest future work in two places in the manuscript: before and within the conclusion section. It would be better to combine the suggested future work at the end of the conclusion section. Dear reviewer, thank you for your concern; we have implemented what you suggested and restructured the document, adding a Future Work section. "
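For readers who want to reproduce the kind of analysis discussed in the responses above (a Kolmogorov-Smirnov test with Lilliefors significance correction, followed by Spearman's Rho between the ranking and the accessibility barriers), a minimal Python sketch is shown below. It is only an illustrative equivalent of the SPSS v25 procedure described in the letter; the file name `wave_results.csv` and the column names Ranking, Errors and Contrast Errors are assumptions that mirror the published spreadsheet, not the authors' actual workflow.

```python
# Illustrative re-creation of the normality and correlation analysis;
# the study itself used IBM SPSS v25, so this is only a rough equivalent.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical export of the evaluation spreadsheet (column names assumed).
df = pd.read_csv("wave_results.csv")  # columns: Ranking, Errors, Contrast Errors
df["Barriers"] = df["Errors"] + df["Contrast Errors"]

# Lilliefors-corrected Kolmogorov-Smirnov test for normality of each variable.
for column in ["Ranking", "Errors", "Contrast Errors"]:
    stat, p_value = lilliefors(df[column], dist="norm")
    verdict = "normal" if p_value > 0.05 else "non-normal"
    print(f"{column}: KS statistic={stat:.3f}, p={p_value:.3f} ({verdict} at alpha=0.05)")

# Non-parametric Spearman correlation between ranking and total barriers.
rho, p_value = spearmanr(df["Ranking"], df["Barriers"])
print(f"Spearman's rho = {rho:.3f} (p = {p_value:.3f})")
```

With the published dataset, the same procedure should give values close to the reported rho of 0.329, although the exact figure depends on how the barrier counts are aggregated.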
Here is a paper. Please give your review comments after reading it.
349
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Today, there are many e-commerce websites, but not all of them are accessible.</ns0:p><ns0:p>Accessibility is a crucial element that can make a difference and determine the success or failure of a digital business. The study was applied to 50 e-commerce sites in the top rankings according to the classification proposed by ecommerceDB. In evaluating the web accessibility of e-commerce sites, we applied an automatic review method based on a modification of Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. To evaluate accessibility, we used Web Accessibility Evaluation Tool (WAVE) with the extension for Google Chrome, which helps verify password-protected, locally stored, or highly dynamic pages. The study found that the correlation between the ranking of ecommerce websites and accessibility barriers is 0.329, indicating that the correlation is low positive according to Spearman's Rho. According to the WAVE analysis, the research results reveal that the top ten most accessible websites are Sainsbury's Supermarkets,</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Internet technology has radically revolutionized the world of communications to become a global means of communication. The number of e-commerce websites has increased significantly due to the pandemic COVID-19 (World Health Organization (WHO), 2021b); the global confinement caused many businesses to close. Statistics from the Statista website <ns0:ref type='bibr' target='#b19'>(Statista, 2021)</ns0:ref> indicate that e-commerce has undergone a substantial transformation in recent years thanks to digitization in modern life.</ns0:p><ns0:p>However, for Statista <ns0:ref type='bibr' target='#b18'>(Statista, 2020</ns0:ref>), e-commerce websites have seen a notable increase in worldwide traffic flow between January 2019 and June 2020. By 2022, more than 2.14 billion people worldwide are forecast to shop online, and global e-commerce revenues could grow to $5.4 trillion <ns0:ref type='bibr' target='#b19'>(Statista, 2021)</ns0:ref>. Figure <ns0:ref type='figure'>1</ns0:ref> shows the search of terms performed in the last five years in Google Trends <ns0:ref type='bibr' target='#b8'>(Google, 2021)</ns0:ref> related to accessibility, e-commerce, and Web Content Accessibility Guidelines (WCAG) <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. We observe that the term e-commerce tends to grow from March 2020 during the COVID-19 where most users began to consume massively digital material and are oriented to use e-commerce applications, to reduce the number of infections, we also observe that the term accessibility and WCAG tend to grow. Figure <ns0:ref type='figure'>1</ns0:ref> The trend in Google Trends for terms related to accessibility, e-commerce, and WCAG. Diagram of the trend of the terms in the last five years worldwide.</ns0:p><ns0:p>E-commerce websites have grown considerably, but most of them are not accessible. Accessibility <ns0:ref type='bibr' target='#b2'>(Patricia Acosta-Vargas et al., 2018)</ns0:ref> refers to a set of techniques, guidelines or methods that make web content and functionality compatible with the needs of all people regardless of their physical or technological capabilities. 
Statements from the World Health Organization (World Health Organization (WHO), 2017) reveal that 15% of the world's population suffers from some disability. Therefore, it is essential to apply web accessibility guidelines to e-commerce sites. A well-designed website can be easy to navigate for web users <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. Accessibility also benefits several users <ns0:ref type='bibr' target='#b6'>(Andersen et al., 2020)</ns0:ref> with aging-related difficulties that decrease their visual ability due to presbyopia. WCAG 2.1 <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref> proposes applying accessibility principles to reduce accessibility barriers perceived by users when interrelating with a website. The outcomes of this investigation evidenced that 25.6% of the e-commerce websites in the sample present images that require the inclusion of alternative text; additionally, we found that 54.4% of the sites present contrast problems related to the principle of perception. As future work, we suggest considering the problems of hardware limitations and interstitial advertising. We also recommend building a software tool with artificial intelligence algorithms that include new heuristics to help developers identify accessibility barriers to generate more accessible and inclusive sites. The remainder of this paper is structured as follows; Section II reviews the literature on web accessibility. Section III shows the methodology used to evaluate accessibility in e-commerce sites. Section IV shows the results obtained by applying the evaluation. Section V contains the discussion of the outcomes. Section VI explains the restrictions of the research. Finally, conclusions and future work are described in parts VII and VIII.</ns0:p><ns0:p>The documentation includes several techniques; the techniques are grouped into two classes 1) those that are sufficient to achieve the success criteria and 2) advisable techniques that allow the authors to better comply with the guidelines. In addition, some of the advisable techniques deal with accessibility barriers that the verifiable success criteria have not covered. In reviewing the literature, we found some research related to the evaluation of web accessibility in e-commerce sites, methods, and tools used in automatic inspection. The study <ns0:ref type='bibr' target='#b14'>(Paz et al., 2021)</ns0:ref> argues the accessibility with which software products, including e-commerce stores, should be designed. It indicates that some countries apply laws and government policies that ensure accessibility to websites considering different skills and abilities. The study compared the results of five tools for inspecting the accessibility of e-commerce websites; the conclusions show that there are no 100% accessible sites. The research <ns0:ref type='bibr' target='#b24'>(Xu, 2020)</ns0:ref> compares the accessibility of e-commerce, considering compliance with web accessibility guidelines. As a case study, they applied the evaluation to 45 e-commerce websites. They used the Web Accessibility Assessment Tool (WAVE); the results revealed that websites with Accessible Rich Internet Applications (ARIA) attribute lower accessibility levels overall. They concluded that the accessibility of mature websites was higher than that of new websites with innovative products. The study <ns0:ref type='bibr' target='#b4'>(Alshamari, 2016)</ns0:ref> argues that many tools can help make a website accessible. 
The article explores some available tools that help designers and developers evaluate web accessibility. The research results indicate that navigation, readability, and timing are the most common accessibility issues when evaluating the accessibility of selected websites. The article <ns0:ref type='bibr' target='#b11'>(Padure &amp; Pribeanu, 2020)</ns0:ref> argues that WAVE tool is a free tool provided by Web Accessibility In Mind (WebAIM). The authors indicate that WAVE offers a color-coding system: red for errors that need to be corrected urgently, green for correct lines but still need to be checked, and yellow for potential problems that need manual review. The study's authors <ns0:ref type='bibr' target='#b9'>(Oliveira et al., 2020)</ns0:ref> infer that accessibility is fundamental in the democratization of technologies, so applying the Web Content Accessibility Guidelines (WCAG 2.1) is essential. The accessibility evaluation was applied to three websites operating in Portugal, considering the three best-positioned retailers' ranking corresponding to the SimilarWeb. The results obtained established a collection of suggestions to increase the accessibility of websites aimed at e-commerce. Our study differs from <ns0:ref type='bibr' target='#b9'>(Oliveira et al., 2020)</ns0:ref> and the research <ns0:ref type='bibr' target='#b24'>(Xu, 2020)</ns0:ref> because the total sample is taken from ecommerceDB, which presents the e-commerce websites related to market trends and a ranking of the leading e-commerce stores. Our evaluation applied a new method based on the methodology (WCAG-EM) 1.0. In addition, the WAVE evaluation tool is based on version 3.1.6, updated as of October 14, 2021, which includes the plugin component that allows evaluating websites that require authentication. We, therefore, propose ten recommendations to improve the accessibility of the websites listed in the discussion section. Research <ns0:ref type='bibr' target='#b0'>(Abascal et al., 2019)</ns0:ref> related to Web accessibility evaluation argues that manual verification of compliance with accessibility guidelines is often complicated and unmanageable, so the authors suggest applying software tools that perform automatic accessibility evaluations. It presents a review of the main features of tools used for Web accessibility evaluation and presents an introspection of the future of accessibility tools. The authors <ns0:ref type='bibr' target='#b3'>(Patricia Acosta-Vargas et al., 2019)</ns0:ref> suggest that verifying the accessibility of a Web site is a considerable challenge for accessibility specialists. Today, there are quantitative and qualitative methods for verifying whether a website is accessible. In general, the methods use automatic tools because they are low-cost, but they do not represent a perfect solution. The authors propose a heuristic method with a manual review supported by the Web Content Accessibility Guidelines 2.1. The evaluators concluded that the research could serve as a preliminary argument for upcoming analyses concerned with web accessibility heuristics. Our investigation proposes an automatic review method using the WAVE Web Accessibility Evaluation Tool <ns0:ref type='bibr' target='#b22'>(WebAIM, 2021)</ns0:ref>. 
Earlier studies by the authors (Patricia <ns0:ref type='bibr' target='#b2'>Acosta-Vargas et al., 2018)</ns0:ref> indicated that one of the best tools for automated review is WAVE, which allows you to identify any accessibility barriers, centered on the Web Content Accessibility Guidelines (WCAG) 2.1 (World Wide Web Consortium, 2018) that help in automatic review and evaluation by web content experts. This preliminary study was applied to the 50 best-ranked e-commerce stores according to the ranking proposed by the ecommerceDB site (EcommerceDB, 2020), which contains detailed information on more than 20,000 stores from 50 countries and 13 categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>In this research to evaluate the accessibility of e-commerce websites, we applied an automatic review method (World Wide Web Consortium (W3C), 2014) centered on a modification of the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. We used Web Accessibility Evaluation Tool (WAVE) (WebAIM, 2021) with the extension for Google Chrome, which helps verify password-protected and highly dynamic pages. Utah State University developed the WAVE automatic evaluation tool to help find potential accessibility issues according to WCAG 2.1, facilitating manual evaluation. It should be noted that manual testing cannot be replaced, especially when it comes to accessibility, as it may be essential to test with end-users with disabilities. Accessibility validation was performed using guidelines based on Section 508 and WCAG 2.1; some phases of this methodology <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref> were tested in previous works of the authors related to serious games; the methodology for evaluating e-commerce websites is summarized in eight phases, as shown in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. Phase 1 Select e-commerce Websites. In this phase, we selected the 50 e-commerce websites that are in the top rankings according to the classification proposed by ecommerceDB <ns0:ref type='bibr' target='#b7'>(EcommerceDB, 2020)</ns0:ref>. In this preliminary phase, we define the level of compliance with WCAG 2.1 <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. In this case, we evaluated the AA level within the level accepted and recommended by WCAG 2.1. This knowledge can be improved to propose future work; we also determined the lowest configuration of web browser patterns, operating systems and assistive PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_2'>2021:09:65405:2:0:NEW 25 Dec 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>technologies with which the website should work during the testing phase. This case study used the Windows 10 operating system with Google Chrome browser version 95.0.4638.54 and screen reader support. Phase 2 Categorize the type of users. According to <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref>, in this phase, we involved three web accessibility experts with experience in the area since 2015, who have more than ten scientific publications in web accessibility evaluations, serious games, and accessible mobile applications. Discrepancies found in the automatic review of e-commerce sites were resolved in consensus. 
Experts performed the automatic review with WAVE; as evaluated by experts, WAVE is one of the best performing tools according to previous studies (Patricia <ns0:ref type='bibr' target='#b2'>Acosta-Vargas et al., 2018)</ns0:ref>. This phase identified the flow of events users interact with when browsing ecommerce sites. Phase 3 Define the test scenario. According to <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref>, this phase identifies the essential functionalities of the e-commerce website to help select the most representative instances. The definition of the test scenario serves as the basis for the subsequent selection of the e-commerce sites. In this case, we apply the following scenario: 1) We enter the first page of the website. 2) We interact by selecting and purchasing products on the e-commerce website. 3) We test how to fill out and submit the forms. 4) We check if account registration on the e-commerce site is required. Phase 4 Explore the e-commerce website. In this phase, the first page of each website was explored. The evaluators explored the e-commerce website to understand its purpose, functionality, and usage. Initial exploration of this phase was considered in Phase 1 by selecting a representative sample, then refined in phase 5 by evaluating with WAVE. Involving accessibility experts and website designers can help get the scans more efficiently. At first, cursory checks were performed to help identify relevant web pages; later, a more detailed evaluation of each website component was performed. Therefore, this phase is essential for evaluators to access all the essential components and functionalities of the website. Phase 5 Evaluate with WAVE. In this phase, the experts used WAVE to evaluate the home page of each e-commerce website. The assessment was conducted in March 2021; during this phase, the evaluators audited the sample e-commerce websites and the states of the websites selected in phases 1 and 4. The evaluation was conducted following the WCAG 2.1 conformance requirements at the AA level previously defined in phase 1. The conformance level, web pages, processes and technologies, and compatibility with accessibility and non-interference were considered. In this phase, it was essential to know the WCAG 2.1 (World Wide Web Consortium, 2018) conformance requirements and the experience of accessibility experts. In addition, the authors classified accessibility barriers by matching the WCAG 2.1 principles, guidelines, and success criteria, which were then validated with WAVE results. Phase 6 Record evaluation data. In this phase, the assessment data was documented in a spreadsheet to organize a dataset available in Mendeley (P. <ns0:ref type='bibr' target='#b1'>Acosta-Vargas et al., 2021)</ns0:ref>. The dataset includes information with the names of the audited websites, the URLs of the e-commerce sites, and the evaluation data used to replicate this study as part of good practices that help researchers. The set organizes the information into spreadsheets, containing 1) The e-commerce websites evaluated. 2) The results of the e-commerce websites evaluated with WAVE. 3) The map of the number of e-commerce websites per country. 4) The diagram of the accessibility evaluation process with WAVE. 5) A summary of the evaluation of e-commerce websites. 6) The accessibility principles of WCAG 2.1. 7) Accessibility barriers identified when evaluating with WAVE. 8) Ecommerce websites versus compliance. 
9) E-commerce websites with errors and contrast errors. 10) The number of alerts, features, structural elements and ARIA. 11) E-commerce websites and the relation to accessibility levels. Phase 7 Classify and analyze data. Following previous work by the authors (Salvador-Ullauri et al., 2020), data related to accessibility principles, success criteria, and accessibility levels are organized in this phase. This information is detailed in the results section and discussed in the discussion section. Also, the e-commerce websites were classified by 1) The countries to which each domain corresponds according to the registered URL. 2) The severe errors in need of correction to remove accessibility barriers. 3) Contrast errors that make access difficult for visually impaired users. 4) The ranking in which they are placed and the level of accessibility. The data analysis was performed with Microsoft Excel version 365 MSO 16.0.14326.20504, with macros, advanced functions, tables, and dynamic graphs. Phase 8 Suggest accessibility improvements. In this phase, proposals for improvements to the ecommerce websites were presented. The improvements are detailed in the discussion section.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>This research was applied to a sample of the top 50 e-commerce websites taken from ecommerceDB; which contains information on more than 20,000 e-shops from around 50 countries. It is divided into several categories such as revenue and competitor analysis, market development, performance and traffic indicators, vendor submission, payment options, social media activity and SEO information. The ecommerceDB.com database also covers e-commerce market analysis, customer behaviors and buying patterns, market trends and company histories. Table <ns0:ref type='table'>1</ns0:ref> contains the sites that were evaluated with WAVE.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> E-commerce websites. Presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the name of the electronic store, the URL, and the acronym.</ns0:p><ns0:p>The evaluation of the accessibility of e-commerce stores was carried out with the WAVE automatic review tool. Rich Internet applications tend to dynamically update the Document Object Model (DOM) structure, which is why the method used by WAVE to analyze the rendered DOM of pages uses heuristics and logic to detect end-user accessibility barriers considering WCAG 2.1 (World Wide Web Consortium, 2018). All automatic review tools, including WAVE, have limitations; they can detect barriers in 35% of possible compliance failures (WebAIM, 2021). The method applied in evaluating the accessibility of e-commerce stores was based on a modification of the (World Wide Web Consortium (W3C), 2014) WCAG-EM 1.0; our method consists of an eightphase process detailed in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> presents the data obtained from the accessibility evaluation of e-commerce websites with WAVE. It contains the number of WCAG 2.1 compliance failures that may affect specific users. Web developers should correct the barriers identified in the evaluation to make the e-commerce site accessible and inclusive. The contrast errors found in this study are related to the text that violates WCAG 2.1 contrast requirements. 
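As a rough illustration of Phase 7 (classify and analyze data), the aggregation the authors performed with Excel pivot tables could equally be scripted. The sketch below assumes a hypothetical spreadsheet export whose columns mirror Table 2 (Acronym, Errors, Contrast Errors, Alerts, Features, Structural Elements, ARIA, Country); it is not the study's actual workflow, only a demonstration that the classification step is mechanical and reproducible.

```python
# Sketch of Phase 7 (classify and analyze data) using pandas instead of Excel.
# Column names are assumed to mirror Table 2 of the paper.
import pandas as pd

df = pd.read_excel("ecommerce_wave_evaluation.xlsx")  # hypothetical file name

# Serious barriers that must be corrected vs. items that only need manual review.
df["Serious"] = df["Errors"] + df["Contrast Errors"]
df["Warnings"] = df[["Alerts", "Features", "Structural Elements", "ARIA"]].sum(axis=1)

# Sites ordered by the number of serious barriers (fewest first), as in Figure 7.
most_accessible = df.sort_values("Serious")[["Acronym", "Serious", "Warnings"]]
print(most_accessible.head(10))

# Totals per country, comparable to the map in Figure 3.
print(df.groupby("Country")["Serious"].agg(["count", "sum"]))
```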
The term alerts are related to elements that may cause accessibility problems; in this case, the evaluator is the one who decides the impact of the accessibility of the website. Features imply that elements can improve accessibility when implemented correctly. Structural elements are related to some title of a web page, indicating that it has been marked as a top-level title or related to several milestones. Finally, the ARIA element presents information about accessibility for people with disabilities; in such a way, it influences accessibility when misused.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> E-commerce websites evaluated. It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the number of e-commerce sites by country, taken as a sample for the accessibility evaluation. The most significant number of e-commerce sites evaluated corresponds to the United States, with 24 sites representing 48% of the total, followed by the United Kingdom, with eight sites representing 16%. With six e-commerce sites, Greater China accounts for 12% in third place. Next, Germany, with three sites, accounts for 6%, followed by France and Russia, with two sites each, representing 8% of the total. Lastly, Brazil, Canada, Italy, Japan and Spain, with one ecommerce site, account for 10%. Figure <ns0:ref type='figure'>3</ns0:ref> Map of the number of e-commerce sites taken by the country. The map presents the countries taken as part of the sample to evaluate accessibility according to the classification proposed by ecommerceDB. The sky blue color indicates the country with the highest number of e-commerce sites, the yellow color the midpoint and the pink color the lowest number of e-commerce sites.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> shows two categories of barriers, warning and serious barriers; the most significant warning barriers are ARIA, Structural Elements, Features and Alerts. These barriers do not affect accessibility to a high degree, and correcting them is unnecessary. The serious barriers with a high number are Contrast Errors with 1721 barriers, representing 7.4% (pink bars) and Errors with 1229, corresponding to 5.3% (sky blue bars), which must be corrected urgently for e-commerce sites to reach an acceptable level of accessibility. ARIA attributes (World Wide Web Consortium, 2018) add semantic information to the elements of a website, specifically for properties that help to inform: 1) The state of an element of the graphical interface. 2) The content of a section that may change when there is user interaction. 3) The elements that are part of a drag-and-drop interface. 4) The relationships between document elements. Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> Evaluation of accessibility with WAVE. It shows the barriers related to Errors (sky blue bars) and Contrast Errors (pink) that should be corrected urgently to improve accessibility. Alerts (yellow), Features (gray), Structural Elements (orange) and ARIA (blue) that can be corrected depending on the evaluator's criteria.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> summarizes the barriers identified during the evaluation of the e-commerce sites with WAVE. 
Table <ns0:ref type='table'>3</ns0:ref> includes the barriers, success criteria, level, principle, and total barriers of the 50 e-commerce websites assessed. Table <ns0:ref type='table'>3</ns0:ref> comprises the success criteria composed of three numbers; the first is associated with the accessibility principle, the second to the guideline, and the third to the success criteria related to the accessibility barrier. Table <ns0:ref type='table'>3</ns0:ref> Summary of the evaluation of e-commerce websites. Shows the summary of accessibility barriers identified by applying the WAVE automatic review tool.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>5</ns0:ref> shows a synopsis of the accessibility principles recognized in the assessment of ecommerce websites. The most neglected accessibility principle is perceivable, representing 83.1% of the total, followed by operable with 13.7%, in third place is robust with 1.7%, and finally, understandable with 1.5%. Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> summarizes the barriers identified in the accessibility evaluation. The most affected accessibility barrier corresponds to Contrast with 54.4%, followed by Non-text Content, representing 25.6%, in third place is Link purpose, representing 11.6% of the total. The rest of the barriers, such as info and relationships, name, role, value, headings and labels, labels or instructions, bypass blocks, keyboard, the language of page and error identification, correspond to values lower than 3.1%. Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> Accessibility barriers identified when evaluating with WAVE. It presents the results related to the success criteria according to WCAG 2.1.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref> presents the e-commerce sites and the level of web accessibility; among the top ten most accessible websites according to this analysis with WAVE, we have Sainsbury's Supermarkets, Walmart, Target Corporation, Macy's, IKEA, H &amp; M Hennes, Chewy, The Kroger, QVC, and Nike.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref> E-commerce websites evaluated. It presents the level of web accessibility of the ten most accessible websites evaluated with the WAVE automatic review tool.</ns0:p><ns0:p>In addition, the correlation between the ranking of e-commerce sites and accessibility barriers was analyzed. In Table <ns0:ref type='table' target='#tab_0'>4</ns0:ref>, the test statistic p &gt;0.05 for Errors, Contrast Errors and Ranking have a normal distribution despite their variability. While applying Lilliefors significance correction, the variables Errors and Contrast Errors p&lt;0.05 confirm that they do not have a normal distribution. However, the variable Ranking with Lilliefors significance correction has a p&gt;0.05, confirming a normal distribution. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> presents Spearman's non-parametric correlation between e-commerce website ranking and accessibility barriers. In this case, the correlation is significant for accessibility barriers at the 0.05 level (bilateral). Table <ns0:ref type='table'>5</ns0:ref> Spearman correlation. It shows Spearman's non-parametric correlation between the ranking of e-commerce websites and accessibility barriers. 
Spearman's Rho correlation is 0.329, indicating that the correlation is low positive.</ns0:p><ns0:p>In analyzing the accessibility of e-commerce websites, we applied multivariate descriptive statistics and pivot tables with the Excel tool. In addition, to analyze the correlation between the ranking of e-commerce websites and accessibility barriers, we applied the IBM SPSS Statistics version 25 statistical software with which we applied the Kolmogorov-Smirnov and Lilliefors significance correction. We found a non-parametric correlation, so we applied Spearman's Rho between the ranking of e-commerce websites and the accessibility barriers of e-commerce websites. The correlation is 0.329, which indicates that the correlation is low positive.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Concerning the modification made in WCAG-EM 1.0, three additional phases were included. By applying WAVE in phase 5, it is possible to perform automatic checks, which considerably reduces review time and detects numerous problems that would take more time and would be difficult to identify manually. Our methodology allows linking WCAG 2.1 criteria to accessibility barriers; this method can be applied throughout the website development cycle. Currently, e-commerce sites have become essential tools to perform commercial transactions due to the COVID-19; with the findings obtained, it is evident that web designers and developers need to utilize the WCAG 2.1 (World Wide Web Consortium, 2018) to have more accessible and inclusive e-commerce sites. We identified that there is a significant number of e-commerce sites that occupy the top positions in the most developed countries such as the United States, United Kingdom, Greater China, Germany, Russia, and France; however, occupying the top positions do not guarantee that they comply with the WCAG 2.1 accessibility standards. The most accessible store is Sainsbury's Supermarkets, located in the United Kingdom with zero 'Errors', followed by Walmart, Target, Chewy, IKEA, and Macy's, from the United States and H &amp; M from Germany with one 'Errors', the rest of the e-commerce websites have more than one 'Errors' and more than six 'Contrast Errors'. In the evaluation of e-commerce websites, it was found that the most common errors are related to 'Contrast errors', representing 54.4% of the total, in second place 'Non-text Content', corresponding to 25.6%, 'Link Purpose' corresponds to 11.6% and 'Info and Relationships' with 3.1% of the total. Of the 50 e-commerce websites evaluated, 44.2% of the total comply with level 'A' for accessibility and 55.8% with level 'AA'; none of the sites evaluated reach level 'AAA'. The highest number of accessibility barriers are condensed in the 'perceivable' principle, representing 83.1% of the total, while 16.9% are distributed among the 'operable,' 'robust,' and 'understandable' principles. This finding implies that more barriers are related to problems for users with low vision, including older adults <ns0:ref type='bibr' target='#b10'>(Padmanaban et al., 2019)</ns0:ref>; vision deteriorates with age due to the eye's normal aging process. The existence of ocular degenerative diseases such as glaucoma, age-related macular degeneration, diabetic retinopathy, age-related cataracts, and cardiovascular accidents can mainly trigger a decrease in visual acuity and the visual field. One of the most frequently repeated barriers on e-commerce sites is contrast and color usage, vital parameters for web accessibility. 
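Since contrast recurs as the dominant barrier in these results, a short sketch of the WCAG 2.1 contrast-ratio calculation is included below. The relative-luminance and ratio formulas follow the WCAG 2.1 definition; the example colors and the helper function names are arbitrary illustrations, not part of the study's tooling.

```python
# WCAG 2.1 contrast ratio between two sRGB colors (e.g. text vs. background).
def relative_luminance(rgb):
    def channel(c):
        c = c / 255.0
        # Linearize the sRGB channel as defined by WCAG 2.1.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    l1 = relative_luminance(foreground)
    l2 = relative_luminance(background)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Arbitrary example: mid-grey text on a white background.
ratio = contrast_ratio((118, 118, 118), (255, 255, 255))
print(f"Contrast ratio: {ratio:.2f}:1")            # roughly 4.5:1
print("Passes WCAG 2.1 AA (normal text):", ratio >= 4.5)
```

Transparency, gradients, and text placed over images complicate this calculation in practice, which is one reason automated tools flag contrast findings for manual confirmation.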
Moreover, a large share of the world's users are visually impaired. According to the World Health Organization (World Health Organization (WHO), 2021), worldwide, at least 2.2 billion people have near or distance vision problems. Achieving accessibility on e-commerce websites is a significant challenge, so it is essential to apply the contrast ratio, which measures the difference in 'luminance' or perceived brightness between two colors. The difference in brightness is expressed as a ratio ranging from 1:1 (no difference) to 21:1 (black on white). WCAG 2.1 suggests addressing contrast with three success criteria: 1.4.3 Contrast (Minimum), 1.4.6 Contrast (Enhanced) and 1.4.11 Non-text Contrast. WCAG 2.1 sets 4.5:1 as the minimum ratio required for normal-size text at level AA. Even so, some color combinations that pass this threshold may not be very readable for all users.</ns0:p><ns0:p>For an e-commerce website to be accessible <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>, it must meet level AA for accessibility; the evaluated e-commerce websites do not present any accessibility statement or specify whether they officially comply. Since the home page of these e-commerce sites is the user's entry point, it should be a primary objective for enhancements in terms of accessibility. Improving e-commerce websites should be a responsibility that companies take on, as it enhances the user experience and reduces the technology gap that people with disabilities may experience. Accessibility issues can be reduced with the following recommendations: 1) Improve contrast by considering the colors and contrasts of the screen, checking that they are displayed correctly on all devices. 2) Eliminate time limits, or at least lengthen them; users with disabilities need more time to browse online. 3) Always add a 'skip content' option, which is very useful for users who access the Internet with screen readers and want to avoid content that does not interest them. 4) Provide transcripts of texts: in addition to subtitling videos, it is advisable to include transcripts so that the hearing impaired can read the video content at their own pace. 5) Add captions to graphics, especially those whose description does not fit in the 'alt' attribute. The 'alt' attribute, called 'alternative text,' is essential for blind users who use screen readers. 6) Avoid relying on red alone to highlight important content, since users with color blindness may not perceive it; larger text or representative icons are better accessibility options. 7) Write content oriented to any reader, with or without disabilities. 8) Use legible fonts larger than 15px. 9) Avoid paragraphs longer than four lines. 10) Use images and diagrams that help the reader understand the content better. This study brings originality since the 50 top-ranked e-commerce sites were evaluated. Nowadays, e-commerce websites must have accessibility policies and standards; many transactions are made electronically due to the pandemic. This research can guide developers and designers of e-commerce websites to spread the use of WCAG 2.1, which is intended to cover a more extensive set of recommendations to make the web more accessible. WCAG 2.1 can be considered a superset containing WCAG 2.0; therefore, as WCAG 2.1 extends WCAG 2.0, there are no incompatible requirements between the two versions.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>This research has a fundamental limitation: it was evaluated using the WAVE automated review tool.
Despite being a powerful tool that helps organizations improve the accessibility of websites for people with disabilities, WAVE cannot tell whether web content is accessible; only a human being can determine true accessibility. This evaluation did not include testing with users with disabilities; three accessibility experts conducted the accessibility testing. No additional hardware or digital ramps were used in this study to achieve greater accessibility during the website evaluation process.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We consider the research relevant, especially during the COVID-19 period, when e-commerce is considered a leading solution during confinement and indirectly improves the global economy. The procedure for evaluating e-commerce websites with the WAVE tool can be applied to any website to make it more accessible. We recommend complementing automated reviews with heuristic evaluations based on WCAG 2.1 (World Wide Web Consortium, 2018) accessibility barriers and with testing by users with different disabilities. We found that 55.8% of the websites reach the 'AA' level suggested by WCAG 2.1. We found a low positive correlation between the ranking of e-commerce websites and accessibility barriers according to Spearman's Rho of 0.329. The study revealed that 25.6% of e-commerce websites present product images that require alternative text and that 54.4% of the sites present contrast problems related to the perceivable principle, which need to be solved urgently to make the sites more inclusive. Finally, we suggest that business people, governments, and academia work in multidisciplinary teams to generate laws and regulations related to web accessibility that would benefit all users, especially those with disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>It is recommended to 1) Perform tests with other automatic review tools and compare the results obtained for future research. 2) Conduct tests with users with different disabilities. 3) Build a software tool that includes artificial intelligence algorithms that help the software learn the heuristics that may cause accessibility barriers. 4) Include hardware limitations and interstitial advertising in the study.</ns0:p></ns0:div> <ns0:div><ns0:p>Table 2 E-commerce websites evaluated. It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites. Diagram for assessing accessibility in e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Accessibility evaluation using WAVE.
It presents detailed evaluation results with accessibility principles according to WCAG 2.1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Evaluation of accessibility with WAVE.</ns0:figDesc><ns0:graphic coords='21,42.52,205.78,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Accessibility evaluation using WAVE.</ns0:figDesc><ns0:graphic coords='22,42.52,204.37,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Accessibility barriers identified when evaluating with WAVE.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 E-commerce websites evaluated.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Normality tests. Show normality tests for Lilliefors significance correction. We applied for errors, contrast errors, and ranking.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>E-commerce websites evaluated.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
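Several of the barriers reported in the paper above (Non-text Content, Language of Page, Labels or Instructions) can also be screened with a few lines of scripting before a full WAVE review. The sketch below is a simplified, hypothetical pre-check using requests and BeautifulSoup; it inspects only the static HTML, covers a tiny subset of WCAG 2.1, and is no substitute for the tool-assisted and expert review described in the study.

```python
# Minimal pre-check for a few WCAG 2.1-related issues on a single page.
# WAVE analyzes the rendered DOM; this only looks at the static HTML response.
import requests
from bs4 import BeautifulSoup

def quick_accessibility_check(url):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # 1.1.1 Non-text Content: images without an alt attribute
    # (an empty alt is acceptable only for purely decorative images).
    missing_alt = [img for img in soup.find_all("img") if not img.get("alt")]

    # 3.1.1 Language of Page: the <html> element should declare a lang attribute.
    html_tag = soup.find("html")
    has_lang = bool(html_tag and html_tag.get("lang"))

    # 3.3.2 Labels or Instructions: inputs whose id is not referenced by any label.
    # (Simplified: aria-label and wrapping <label> elements are ignored here.)
    labelled_ids = {lab.get("for") for lab in soup.find_all("label") if lab.get("for")}
    unlabelled = [i for i in soup.find_all("input")
                  if i.get("type") not in ("hidden", "submit", "button")
                  and i.get("id") not in labelled_ids]

    return {"images_without_alt": len(missing_alt),
            "page_language_declared": has_lang,
            "inputs_without_label": len(unlabelled)}

print(quick_accessibility_check("https://example.com"))  # placeholder URL
```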
"Universidad de Las Américas RUC: 1791362845001 Vía a Nayón a 300 metros del Redondel del Ciclista Quito, Pichincha EC 170503 Ecuador https://www.udla.edu.ec/ patricia.acosta@udla.edu.ec December 25, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited it to address their concerns. We implemented the recommendations to improve our manuscript. We believe the manuscript is now suitable for publication in PeerJ. Dra. Patricia Acosta-Vargas Associate Professor of Computer Science UDLA On behalf of all authors. Reviewer 1 (Anonymous) Dear reviewer, thank you for accepting our article with the suggested changes. Reviewer 2 (Anonymous) Dear reviewer, we appreciate your valuable comments that improve our article. We have implemented what you have suggested; the changes highlighted in yellow in the PDF document. Basic reporting Overall, the raised comments are handled. Dear reviewer, thank you very much for your valuable comments to improve our publication. Paper formatting needs more adjustments. As well as English improvement. Dear reviewer, we have applied the suggested changes; subject and language experts reviewed the English language throughout the document. For example, in Page 10, line 211 : (This phase starts the evaluation process is defined, the level of compliance with WCAG 2.1 for the evaluation is defined (World Wide Web Consortium, 2018). Should be reformulated. Dear reviewer, thank you for your comments, we will apply what was suggested, and you can review it in the paragraph of the initial phase highlighted in yellow. In this phase, we selected the 50 e-commerce websites that are in the top rankings according to the classification proposed by ecommerceDB (EcommerceDB, 2020). In this preliminary phase, we define the level of compliance with WCAG 2.1 (World Wide Web Consortium, 2018). In this case, we evaluated the AA level within the level accepted and recommended by WCAG 2.1. In Materials & Mathods section, page 195, (We used (WebAIM, 2021) Web Accessibility Evaluation Tool (WAVE) ) -> the citation should be putted after the tool name. Dear reviewer, thank you for your comments, we will apply what was suggested, and you can review it in the paragraph of the initial phase highlighted in yellow. We used Web Accessibility Evaluation Tool (WAVE) (WebAIM, 2021) with the extension for Google Chrome, which helps verify password-protected and highly dynamic pages. Experimental design The authors are invited to justify why they put the works ((Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020)) to denote the impact of COVID-19 on the growth of e-commerce websites? Is there a need for making that? Also, the main issue of the paper is the accessibility not the impact of COVID-19 on the evolution of e-commerce websites. I suggest to make just a short paragraph in the form: according to ((Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020)) the COVID-19 impacts the growth of e-commerce Websites in the way …..) not to describe each work separately. Dear reviewer, thank you for your comments; we will apply the suggested changes; the changes made can be reviewed in the document in the paragraph highlighted in yellow. According to (Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020), COVID-19 has impacted the growth of e-commerce websites; unprecedented worldwide changes are evident in the various forms of consumer habits. 
Consumer behavior shows an evolutionary shift from offline to online, where e-commerce applications require 1) more accessible designs, 2) greater sustainability, 3) software applications that utilize business intelligence. In the Literature review of COVID-19, e-commerce, and web accessibility section, I suggest removing (COVID-19, e-commerce, and web accessibility) and keeping only the title: Literature review. Dear reviewer, thank you very much for your suggestion; we have improved the structure of the document: we removed (COVID-19, e-commerce and web accessibility) and kept only the title: Literature review. You can review the change in the document in the paragraph highlighted in yellow. Literature review The pandemic status related to COVID-19 has accelerated the movement of industry, education and business to the virtual world, and on par with these events, several e-commerce websites have been created (Munkova et al., 2021). According to (Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020), COVID-19 has impacted the growth of e-commerce websites; unprecedented worldwide changes are evident in the various forms of consumer habits. Consumer behavior shows an evolutionary shift from offline to online, where e-commerce applications require 1) more accessible designs, 2) greater sustainability, 3) software applications that utilize business intelligence. Web accessibility principles From line 103 to 136, the authors expose the WCAG 2.1 principles in the literature review section after introducing COVID-19 works related to Web accessibility. I suggest putting a brief description of the WCAG principles in the introduction section and restructuring it better, or making a separate section that outlines the WCAG principles. Dear reviewer, thank you very much for your suggestion; we have improved the document's structure. You can review the change made in the document in the paragraph highlighted in yellow. Web accessibility principles Previous studies indicate the increase of e-commerce during the COVID-19 period; this accelerated growth has generated millions of e-commerce websites, but many sites are not accessible; therefore, it is essential to bear in mind the Web Content Accessibility Guidelines 2.1 (WCAG 2.1) proposed by the World Wide Web Consortium. Web accessibility implies that people with disabilities can use the web, which means that they can perceive, understand, navigate, and interact with the website. WCAG 2.1 (World Wide Web Consortium, 2018) consists of 4 principles, 13 guidelines and 78 success (conformance) criteria, plus a set of techniques. Principle 1 - Perceivable refers to the website's contents and the interface design for all users. It includes the audiovisual contents, the interface, images, buttons, video players and other components, which must be accessible, recognizable, and usable by any individual in any condition, tool, and operating system. Principle 2 - Operable means that a website should be as intuitive as possible, with options to perform an action or search for content. The more alternatives included in the site's navigation, the better its accessibility. In other words, the website must ensure that all functionality is available from the keyboard and avoid designs that may cause epileptic seizures. Principle 3 - Understandable implies that the site includes legible and understandable elements, both in the form and substance of the texts. In terms of form, it must use fonts that all users can read. 
In addition, it should be predictable as to how the site works so that potential users do not waste time trying to guess how a tool works for better navigation. Principle 4 - Robust refers to websites or applications that must be compatible with all browsers, operating systems, and devices, as well as assistive technology applications or digital ramps. The paper's contribution is well designed. Dear reviewer, thank you very much for your comment; it motivates us to continue contributing to this line of research. Validity of the findings The previous remarks and suggestions have been reviewed. Dear reviewer, thank you very much for your comment; it motivates us to continue contributing to this research line and learning from your feedback. "
Here is a paper. Please give your review comments after reading it.
350
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Today, there are many e-commerce websites, but not all of them are accessible.</ns0:p><ns0:p>Accessibility is a crucial element that can make a difference and determine the success or failure of a digital business. The study was applied to 50 e-commerce sites in the top rankings according to the classification proposed by ecommerceDB. In evaluating the web accessibility of e-commerce sites, we applied an automatic review method based on a modification of Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. To evaluate accessibility, we used Web Accessibility Evaluation Tool (WAVE) with the extension for Google Chrome, which helps verify password-protected, locally stored, or highly dynamic pages. The study found that the correlation between the ranking of ecommerce websites and accessibility barriers is 0.329, indicating that the correlation is low positive according to Spearman's Rho. According to the WAVE analysis, the research results reveal that the top ten most accessible websites are Sainsbury's Supermarkets,</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Internet technology has radically revolutionized the world of communications to become a global means of communication. The number of e-commerce websites has increased significantly due to the pandemic COVID-19 (World Health Organization (WHO), 2021b); the global confinement caused many businesses to close. Statistics from the Statista website <ns0:ref type='bibr' target='#b19'>(Statista, 2021)</ns0:ref> indicate that e-commerce has undergone a substantial transformation in recent years thanks to digitization in modern life.</ns0:p><ns0:p>However, for Statista <ns0:ref type='bibr' target='#b18'>(Statista, 2020</ns0:ref>), e-commerce websites have seen a notable increase in worldwide traffic flow between January 2019 and June 2020. By 2022, more than 2.14 billion people worldwide are forecast to shop online, and global e-commerce revenues could grow to $5.4 trillion <ns0:ref type='bibr' target='#b19'>(Statista, 2021)</ns0:ref>. Figure <ns0:ref type='figure'>1</ns0:ref> shows the search of terms performed in the last five years in Google Trends <ns0:ref type='bibr' target='#b8'>(Google, 2021)</ns0:ref> related to accessibility, e-commerce, and Web Content Accessibility Guidelines (WCAG) <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. We observe that the term e-commerce tends to grow from March 2020 during the COVID-19 where most users began to consume massively digital material and are oriented to use e-commerce applications, to reduce the number of infections, we also observe that the term accessibility and WCAG tend to grow. Figure <ns0:ref type='figure'>1</ns0:ref> The trend in Google Trends for terms related to accessibility, e-commerce, and WCAG. Diagram of the trend of the terms in the last five years worldwide.</ns0:p><ns0:p>E-commerce websites have grown considerably, but most of them are not accessible. Accessibility <ns0:ref type='bibr' target='#b2'>(Patricia Acosta-Vargas et al., 2018)</ns0:ref> refers to a set of techniques, guidelines or methods that make web content and functionality compatible with the needs of all people regardless of their physical or technological capabilities. 
Statements from the World Health Organization (World Health Organization (WHO), 2017) reveal that 15% of the world's population suffers from some disability. Therefore, it is essential to apply web accessibility guidelines to e-commerce sites. A well-designed website can be easy to navigate for web users <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. Accessibility also benefits several users <ns0:ref type='bibr' target='#b6'>(Andersen et al., 2020)</ns0:ref> with aging-related difficulties that decrease their visual ability due to presbyopia. WCAG 2.1 <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref> proposes applying accessibility principles to reduce accessibility barriers perceived by users when interrelating with a website. The outcomes of this investigation evidenced that 25.6% of the e-commerce websites in the sample present images that require the inclusion of alternative text; additionally, we found that 54.4% of the sites present contrast problems related to the principle of perception. As future work, we suggest considering the problems of hardware limitations and interstitial advertising. We also recommend building a software tool with artificial intelligence algorithms that include new heuristics to help developers identify accessibility barriers to generate more accessible and inclusive sites. The remainder of this paper is structured as follows; Section II reviews the literature on web accessibility. Section III shows the methodology used to evaluate accessibility in e-commerce sites. Section IV shows the results obtained by applying the evaluation. Section V contains the discussion of the outcomes. Section VI explains the restrictions of the research. Finally, conclusions and future work are described in parts VII and VIII.</ns0:p></ns0:div> <ns0:div><ns0:head>Literature review</ns0:head><ns0:p>The pandemic status related to COVID-19 has accelerated the movement of industry, education and business to the virtual world, and on par with these events, several e-commerce websites have been created <ns0:ref type='bibr' target='#b9'>(Munkova et al., 2021)</ns0:ref>. According to <ns0:ref type='bibr' target='#b20'>(Villa &amp; Monz&#243;n, 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>(Poll&#225;k et al., 2021)</ns0:ref> and <ns0:ref type='bibr' target='#b13'>(Pa&#537;tiu et al., 2020)</ns0:ref>, COVID-19 has impacted the growth of e-commerce websites; unprecedented worldwide changes are evident in the various forms of consumer habits. Consumer behavior shows an evolutionary shift from offline to online, where e-commerce applications require 1) more accessible designs, 2) greater sustainability, 3) software applications that utilize business intelligence. In reviewing the literature, we found some research related to the evaluation of web accessibility in e-commerce sites, methods, and tools used in automatic inspection. The study <ns0:ref type='bibr' target='#b14'>(Paz et al., 2021)</ns0:ref> argues the accessibility with which software products, including e-commerce stores, should be designed. It indicates that some countries apply laws and government policies that ensure accessibility to websites considering different skills and abilities. The study compared the results of five tools for inspecting the accessibility of e-commerce websites; the conclusions show that there are no 100% accessible sites. 
The research <ns0:ref type='bibr' target='#b24'>(Xu, 2020)</ns0:ref> compares the accessibility of e-commerce, considering compliance with web accessibility guidelines. As a case study, they applied the evaluation to 45 e-commerce websites. They used the Web Accessibility Assessment Tool (WAVE); the results revealed that websites with Accessible Rich Internet Applications (ARIA) attribute lower accessibility levels overall. They concluded that the accessibility of mature websites was higher than that of new websites with innovative products. The study <ns0:ref type='bibr' target='#b4'>(Alshamari, 2016)</ns0:ref> argues that many tools can help make a website accessible. The article explores some available tools that help designers and developers evaluate web accessibility. The research results indicate that navigation, readability, and timing are the most common accessibility issues when evaluating the accessibility of selected websites. The article <ns0:ref type='bibr' target='#b11'>(Padure &amp; Pribeanu, 2020)</ns0:ref> argues that WAVE tool is a free tool provided by Web Accessibility In Mind (WebAIM). The authors indicate that WAVE offers a color-coding system: red for errors that need to be corrected urgently, green for correct lines but still need to be checked, and yellow for potential problems that need manual review. The study's authors <ns0:ref type='bibr' target='#b9'>(Oliveira et al., 2020)</ns0:ref> infer that accessibility is fundamental in the democratization of technologies, so applying the Web Content Accessibility Guidelines (WCAG 2.1) is essential. The accessibility evaluation was applied to three websites operating in Portugal, considering the three best-positioned retailers' ranking corresponding to the SimilarWeb. The results obtained established a collection of suggestions to increase the accessibility of websites aimed at e-commerce. Our study differs from <ns0:ref type='bibr' target='#b9'>(Oliveira et al., 2020)</ns0:ref> and the research <ns0:ref type='bibr' target='#b24'>(Xu, 2020)</ns0:ref> because the total sample is taken from ecommerceDB, which presents the e-commerce websites related to market trends and a ranking of the leading e-commerce stores. Our evaluation applied a new method based on the methodology (WCAG-EM) 1.0. In addition, the WAVE evaluation tool is based on version 3.1.6, updated as of October 14, 2021, which includes the plugin component Manuscript to be reviewed Computer Science that allows evaluating websites that require authentication. We, therefore, propose ten recommendations to improve the accessibility of the websites listed in the discussion section. Research <ns0:ref type='bibr' target='#b0'>(Abascal et al., 2019)</ns0:ref> related to Web accessibility evaluation argues that manual verification of compliance with accessibility guidelines is often complicated and unmanageable, so the authors suggest applying software tools that perform automatic accessibility evaluations. It presents a review of the main features of tools used for Web accessibility evaluation and presents an introspection of the future of accessibility tools. The authors <ns0:ref type='bibr' target='#b3'>(Patricia Acosta-Vargas et al., 2019)</ns0:ref> suggest that verifying the accessibility of a Web site is a considerable challenge for accessibility specialists. Today, there are quantitative and qualitative methods for verifying whether a website is accessible. 
In general, the methods use automatic tools because they are low-cost, but they do not represent a perfect solution. The authors propose a heuristic method with a manual review supported by the Web Content Accessibility Guidelines 2.1. The evaluators concluded that the research could serve as a preliminary argument for upcoming analyses concerned with web accessibility heuristics. Our investigation proposes an automatic review method using the WAVE Web Accessibility Evaluation Tool <ns0:ref type='bibr' target='#b22'>(WebAIM, 2021)</ns0:ref>. Earlier studies by the authors (Patricia <ns0:ref type='bibr' target='#b2'>Acosta-Vargas et al., 2018)</ns0:ref> indicated that one of the best tools for automated review is WAVE, which allows you to identify any accessibility barriers, centered on the Web Content Accessibility Guidelines (WCAG) 2.1 (World Wide Web Consortium, 2018) that help in automatic review and evaluation by web content experts. This preliminary study was applied to the 50 best-ranked e-commerce stores according to the ranking proposed by the ecommerceDB site (EcommerceDB, 2020), which contains detailed information on more than 20,000 stores from 50 countries and 13 categories.</ns0:p></ns0:div> <ns0:div><ns0:head>Web accessibility principles</ns0:head><ns0:p>Previous studies indicate the increase of e-commerce in COVID-19 time; this accelerated growth has generated millions of e-commerce websites, but many sites are not accessible; therefore, it is essential to bear in mind the Web Content Accessibility Guidelines 2.1 (WCAG 2.1) proposed by the World Wide Web Consortium. Web accessibility implies that people with incapacities can use the web. This process implies that they can recognize, identify, navigate, and relate to the website. WCAG 2.1 (World Wide Web Consortium, 2018) consists of 4 principles, 13 guidelines and 78 conformance such as compliance or success criteria, plus some techniques. Principle 1-Perceptible refers to the website's contents and the interface design for all users. It includes the audiovisual contents, the interface, images, buttons, video players and other components that must be accessible, recognizable, and feasible by any individual in any condition, tool, and operating system. Principle 2 -Operable, this means that a website should be as intuitive as possible, with options to perform an action or search for content. The more alternatives included in the site's navigation, the better its accessibility. In other words, the website must ensure all keyboard-based functionality and avoid designs that may cause epileptic seizures.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>In this research to evaluate the accessibility of e-commerce websites, we applied an automatic review method (World Wide Web Consortium (W3C), 2014) centered on a modification of the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0. We used Web Accessibility Evaluation Tool (WAVE) (WebAIM, 2021) with the extension for Google Chrome, which helps verify password-protected and highly dynamic pages. Utah State University developed the WAVE automatic evaluation tool to help find potential accessibility issues according to WCAG 2.1, facilitating manual evaluation. It should be noted that manual testing cannot be replaced, especially when it comes to accessibility, as it may be essential to test with end-users with disabilities. 
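Although the automatic review was driven through the WAVE extension for Google Chrome, the per-site results that WAVE reports can also be tallied programmatically once exported, which is convenient when dozens of home pages are audited. The sketch below is a minimal illustration only: the JSON layout (a "categories" object with per-category counts), the field names, and the file paths are assumptions, since the exact export format depends on the WAVE version and is not specified in the paper.

```python
# Minimal sketch: aggregating WAVE-style results for several e-commerce sites.
# The JSON layout used here (a "categories" object with per-category "count"
# fields) is an assumption for illustration; the real export depends on the
# WAVE version and subscription used.
import json
from pathlib import Path

CATEGORIES = ["error", "contrast", "alert", "feature", "structure", "aria"]

def summarize_report(path):
    """Return a {category: count} dict for one saved WAVE report."""
    report = json.loads(Path(path).read_text(encoding="utf-8"))
    cats = report.get("categories", {})
    return {c: cats.get(c, {}).get("count", 0) for c in CATEGORIES}

def summarize_sites(report_paths):
    """Build rows similar to Table 2: one row of category counts per site."""
    return {site: summarize_report(path) for site, path in report_paths.items()}

if __name__ == "__main__":
    # Hypothetical saved reports, one per evaluated home page.
    print(summarize_sites({"example-store.com": "reports/example-store.json"}))
```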
Accessibility validation was performed using guidelines based on Section 508 and WCAG 2.1; some phases of this methodology <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref> were tested in previous works of the authors related to serious games; the methodology for evaluating e-commerce websites is summarized in eight phases, as shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. Phase 1 Select e-commerce Websites. In this phase, we selected the 50 e-commerce websites that are in the top rankings according to the classification proposed by ecommerceDB <ns0:ref type='bibr' target='#b7'>(EcommerceDB, 2020)</ns0:ref>. In this preliminary phase, we defined the level of compliance with WCAG 2.1 <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>. In this case, we evaluated conformance at the AA level, the level accepted and recommended by WCAG 2.1; this choice can be revisited in future work. We also determined the minimum configuration of web browsers, operating systems, and assistive technologies with which the website should work during the testing phase. This case study used the Windows 10 operating system with Google Chrome browser version 95.0.4638.54 and screen reader support. Phase 2 Categorize the type of users. According to <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref>, in this phase, we involved three web accessibility experts with experience in the area since 2015, who have more than ten scientific publications in web accessibility evaluations, serious games, and accessible mobile applications. Discrepancies found in the automatic review of e-commerce sites were resolved by consensus. The experts performed the automatic review with WAVE, which is one of the best-performing tools according to previous studies (Patricia <ns0:ref type='bibr' target='#b2'>Acosta-Vargas et al., 2018)</ns0:ref>. This phase identified the flow of events users interact with when browsing e-commerce sites. Phase 3 Define the test scenario. According to <ns0:ref type='bibr' target='#b17'>(Salvador-Ullauri et al., 2020)</ns0:ref>, this phase identifies the essential functionalities of the e-commerce website to help select the most representative instances. The definition of the test scenario serves as the basis for the subsequent selection of the e-commerce sites. In this case, we applied the following scenario: 1) We enter the first page of the website. 2) We interact by selecting and purchasing products on the e-commerce website. 3) We test how to fill out and submit the forms. 4) We check if account registration on the e-commerce site is required. Phase 4 Explore the e-commerce website. In this phase, the first page of each website was explored. The evaluators explored the e-commerce website to understand its purpose, functionality, and usage. The initial exploration of this phase was considered in Phase 1 by selecting a representative sample, then refined in Phase 5 by evaluating with WAVE. Involving accessibility experts and website designers can help carry out the scans more efficiently. At first, cursory checks were performed to help identify relevant web pages; later, a more detailed evaluation of each website component was performed. 
Therefore, this phase is essential for evaluators to access all the essential components and functionalities of the website. Phase 5 Evaluate with WAVE. In this phase, the experts used WAVE to evaluate the home page of each e-commerce website. The assessment was conducted in March 2021; during this phase, the evaluators audited the sample e-commerce websites and the states of the websites selected in phases 1 and 4. The evaluation was conducted following the WCAG 2.1 conformance requirements at the AA level previously defined in phase 1. The conformance level, web pages, processes and technologies, and compatibility with accessibility and non-interference were considered. In this phase, it was essential to know the WCAG 2.1 (World Wide Web Consortium, 2018) conformance requirements and the experience of accessibility experts. In addition, the authors classified accessibility barriers by matching the WCAG 2.1 principles, guidelines, and success criteria, which were then validated with WAVE results. Phase 6 Record evaluation data. In this phase, the assessment data was documented in a spreadsheet to organize a dataset available in Mendeley (P. <ns0:ref type='bibr' target='#b1'>Acosta-Vargas et al., 2021)</ns0:ref>. The dataset includes information with the names of the audited websites, the URLs of the e-commerce sites, and the evaluation data used to replicate this study as part of good practices that help researchers. The set organizes the information into spreadsheets, containing 1) The e-commerce websites evaluated. 2) The results of the e-commerce websites evaluated with WAVE. 3) The map of the number of e-commerce websites per country. 4) The diagram of the accessibility evaluation process with WAVE. 5) A summary of the evaluation of e-commerce websites. 6) The accessibility principles of WCAG 2.1. 7) Accessibility barriers identified when evaluating with WAVE. 8) Ecommerce websites versus compliance. 9) E-commerce websites with errors and contrast errors. 10) The number of alerts, features, structural elements and ARIA. 11) E-commerce websites and the relation to accessibility levels. Phase 7 Classify and analyze data. Following previous work by the authors (Salvador-Ullauri et al., 2020), data related to accessibility principles, success criteria, and accessibility levels are organized in this phase. This information is detailed in the results section and discussed in the discussion section. Also, the e-commerce websites were classified by 1) The countries to which each domain corresponds according to the registered URL. 2) The severe errors in need of correction to remove accessibility barriers. 3) Contrast errors that make access difficult for visually impaired users. 4) The ranking in which they are placed and the level of accessibility. The data analysis was performed with Microsoft Excel version 365 MSO 16.0.14326.20504, with macros, advanced functions, tables, and dynamic graphs. Phase 8 Suggest accessibility improvements. In this phase, proposals for improvements to the ecommerce websites were presented. The improvements are detailed in the discussion section.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>This research was applied to a sample of the top 50 e-commerce websites taken from ecommerceDB; which contains information on more than 20,000 e-shops from around 50 countries. 
It is divided into several categories such as revenue and competitor analysis, market development, performance and traffic indicators, vendor submission, payment options, social media activity and SEO information. The ecommerceDB.com database also covers e-commerce market analysis, customer behaviors and buying patterns, market trends and company histories. Table <ns0:ref type='table'>1</ns0:ref> contains the sites that were evaluated with WAVE.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> E-commerce websites. Presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the name of the electronic store, the URL, and the acronym.</ns0:p><ns0:p>The evaluation of the accessibility of e-commerce stores was carried out with the WAVE automatic review tool. Rich Internet applications tend to dynamically update the Document Object Model (DOM) structure, which is why the method used by WAVE to analyze the rendered DOM of pages uses heuristics and logic to detect end-user accessibility barriers considering WCAG 2.1 (World Wide Web Consortium, 2018). All automatic review tools, including WAVE, have limitations; they can detect barriers in 35% of possible compliance failures (WebAIM, 2021). The method applied in evaluating the accessibility of e-commerce stores was based on a modification of the (World Wide Web Consortium (W3C), 2014) WCAG-EM 1.0; our method consists of an eightphase process detailed in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> presents the data obtained from the accessibility evaluation of e-commerce websites with WAVE. It contains the number of WCAG 2.1 compliance failures that may affect specific users. Web developers should correct the barriers identified in the evaluation to make the e-commerce site accessible and inclusive. The contrast errors found in this study are related to the text that violates WCAG 2.1 contrast requirements. The term alerts are related to elements that may cause accessibility problems; in this case, the evaluator is the one who decides the impact of the accessibility of the website. Features imply that elements can improve accessibility when implemented correctly. Structural elements are related to some title of a web page, indicating that it has been marked as a top-level title or related to several milestones. Finally, the ARIA element presents information about accessibility for people with disabilities; in such a way, it influences accessibility when misused.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> E-commerce websites evaluated. It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the number of e-commerce sites by country, taken as a sample for the accessibility evaluation. The most significant number of e-commerce sites evaluated corresponds to the United States, with 24 sites representing 48% of the total, followed by the United Kingdom, with eight sites representing 16%. With six e-commerce sites, Greater China accounts for 12% in third place. Next, Germany, with three sites, accounts for 6%, followed by France and Russia, with two sites each, representing 8% of the total. 
Lastly, Brazil, Canada, Italy, Japan and Spain, with one ecommerce site, account for 10%. Figure <ns0:ref type='figure'>3</ns0:ref> Map of the number of e-commerce sites taken by the country. The map presents the countries taken as part of the sample to evaluate accessibility according to the classification proposed by ecommerceDB. The sky blue color indicates the country with the highest number of e-commerce sites, the yellow color the midpoint and the pink color the lowest number of e-commerce sites.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows two categories of barriers, warning and serious barriers; the most significant warning barriers are ARIA, Structural Elements, Features and Alerts. These barriers do not affect accessibility to a high degree, and correcting them is unnecessary. The serious barriers with a high number are Contrast Errors with 1721 barriers, representing 7.4% (pink bars) and Errors with 1229, corresponding to 5.3% (sky blue bars), which must be corrected urgently for e-commerce sites to reach an acceptable level of accessibility. ARIA attributes (World Wide Web Consortium, 2018) add semantic information to the elements of a website, specifically for properties that help to inform: 1) The state of an element of the graphical interface. 2) The content of a section that may change when there is user interaction. 3) The elements that are part of a drag-and-drop interface. 4) The relationships between document elements. Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> Evaluation of accessibility with WAVE. It shows the barriers related to Errors (sky blue bars) and Contrast Errors (pink) that should be corrected urgently to improve accessibility. Alerts (yellow), Features (gray), Structural Elements (orange) and ARIA (blue) that can be corrected depending on the evaluator's criteria.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> summarizes the barriers identified during the evaluation of the e-commerce sites with WAVE. Table <ns0:ref type='table'>3</ns0:ref> includes the barriers, success criteria, level, principle, and total barriers of the 50 e-commerce websites assessed. Table <ns0:ref type='table'>3</ns0:ref> comprises the success criteria composed of three numbers; the first is associated with the accessibility principle, the second to the guideline, and the third to the success criteria related to the accessibility barrier. Table <ns0:ref type='table'>3</ns0:ref> Summary of the evaluation of e-commerce websites. Shows the summary of accessibility barriers identified by applying the WAVE automatic review tool.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>5</ns0:ref> shows a synopsis of the accessibility principles recognized in the assessment of ecommerce websites. The most neglected accessibility principle is perceivable, representing 83.1% of the total, followed by operable with 13.7%, in third place is robust with 1.7%, and finally, understandable with 1.5%. Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> summarizes the barriers identified in the accessibility evaluation. The most affected accessibility barrier corresponds to Contrast with 54.4%, followed by Non-text Content, representing 25.6%, in third place is Link purpose, representing 11.6% of the total. The rest of the barriers, such as info and relationships, name, role, value, headings and labels, labels or instructions, bypass blocks, keyboard, the language of page and error identification, correspond to values lower than 3.1%. 
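Because the first number of each success criterion identifies its WCAG principle (for example, 1.4.3 belongs to Perceivable), the per-principle totals summarized above and in Figure 5 can be reproduced mechanically from the Table 3 barrier counts. A minimal sketch, using hypothetical counts rather than the actual Table 3 values:

```python
# Minimal sketch: grouping barrier counts by WCAG principle using the fact that
# the first digit of a success criterion (e.g., "1.4.3") identifies the principle.
# The barrier list below is a small illustrative subset, not the full Table 3.
from collections import Counter

PRINCIPLES = {1: "Perceivable", 2: "Operable", 3: "Understandable", 4: "Robust"}

def by_principle(barriers):
    """barriers: iterable of (success_criterion, count) pairs, e.g. ("1.4.3", 940)."""
    totals = Counter()
    for criterion, count in barriers:
        principle = PRINCIPLES[int(criterion.split(".")[0])]
        totals[principle] += count
    return totals

if __name__ == "__main__":
    sample = [("1.4.3", 940), ("1.1.1", 442), ("2.4.4", 200), ("3.3.2", 20)]  # hypothetical counts
    print(by_principle(sample))
```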
Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> Accessibility barriers identified when evaluating with WAVE. It presents the results related to the success criteria according to WCAG 2.1.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> presents the e-commerce sites and the level of web accessibility; among the top ten most accessible websites according to this analysis with WAVE, we have Sainsbury's Supermarkets, Walmart, Target Corporation, Macy's, IKEA, H &amp; M Hennes, Chewy, The Kroger, QVC, and Nike.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> E-commerce websites evaluated. It presents the level of web accessibility of the ten most accessible websites evaluated with the WAVE automatic review tool.</ns0:p><ns0:p>In addition, the correlation between the ranking of e-commerce sites and accessibility barriers was analyzed. In Table <ns0:ref type='table' target='#tab_0'>4</ns0:ref>, the Kolmogorov-Smirnov test statistic yields p &gt; 0.05 for Errors, Contrast Errors, and Ranking, suggesting a normal distribution despite their variability. However, when applying the Lilliefors significance correction, the variables Errors and Contrast Errors obtain p &lt; 0.05, confirming that they do not follow a normal distribution, whereas the variable Ranking obtains p &gt; 0.05, confirming a normal distribution. Table <ns0:ref type='table'>5</ns0:ref> presents Spearman's non-parametric correlation between e-commerce website ranking and accessibility barriers. In this case, the correlation is significant for accessibility barriers at the 0.05 level (two-tailed). Table <ns0:ref type='table'>5</ns0:ref> Spearman correlation. It shows Spearman's non-parametric correlation between the ranking of e-commerce websites and accessibility barriers. Spearman's Rho correlation is 0.329, indicating that the correlation is low positive.</ns0:p><ns0:p>In analyzing the accessibility of e-commerce websites, we applied multivariate descriptive statistics and pivot tables with the Excel tool. In addition, to analyze the correlation between the ranking of e-commerce websites and accessibility barriers, we used the IBM SPSS Statistics version 25 statistical software, with which we applied the Kolmogorov-Smirnov test with Lilliefors significance correction. Since not all variables were normally distributed, we applied Spearman's Rho between the ranking of e-commerce websites and their accessibility barriers. The correlation is 0.329, which indicates that the correlation is low positive.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Concerning the modification made to WCAG-EM 1.0, three additional phases were included. By applying WAVE in Phase 5, it is possible to perform automatic checks, which considerably reduces review time and detects numerous problems that would take more time and be difficult to identify manually. Our methodology allows linking WCAG 2.1 criteria to accessibility barriers; this method can be applied throughout the website development cycle. Currently, e-commerce sites have become essential tools for performing commercial transactions due to COVID-19; with the findings obtained, it is evident that web designers and developers need to apply WCAG 2.1 (World Wide Web Consortium, 2018) to build more accessible and inclusive e-commerce sites. 
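Before turning to the per-site findings, the statistical procedure reported in Tables 4 and 5 (a Kolmogorov-Smirnov test with Lilliefors significance correction, followed by Spearman's Rho) can also be reproduced with open-source libraries. The study itself used IBM SPSS Statistics 25, so the following is only an equivalent sketch, and the listed values are placeholders for the Table 2 columns rather than the real data.

```python
# Minimal sketch of the statistical procedure described above, using open-source
# libraries instead of SPSS. The three lists are placeholders for the Table 2
# columns (ranking, errors, contrast errors); the real values are in the dataset.
from scipy.stats import spearmanr
from statsmodels.stats.diagnostic import lilliefors

ranking = [1, 2, 3, 4, 5]              # hypothetical values
errors = [92, 1, 14, 23, 7]            # hypothetical values
contrast_errors = [57, 6, 12, 88, 30]  # hypothetical values

# Normality check with the Lilliefors correction of the Kolmogorov-Smirnov test.
for name, values in [("Errors", errors), ("Contrast Errors", contrast_errors)]:
    stat, p = lilliefors(values, dist="norm")
    print(f"{name}: KS-Lilliefors statistic={stat:.3f}, p={p:.3f}")

# Non-parametric correlation between ranking and total accessibility barriers.
barriers = [e + c for e, c in zip(errors, contrast_errors)]
rho, p_value = spearmanr(ranking, barriers)
print(f"Spearman's rho={rho:.3f} (two-tailed p={p_value:.3f})")
```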
We identified that there is a significant number of e-commerce sites that occupy the top positions in the most developed countries such as the United States, United Kingdom, Greater China, Germany, Russia, and France; however, occupying the top positions do not guarantee that they comply with the WCAG 2.1 accessibility standards. The most accessible store is Sainsbury's Supermarkets, located in the United Kingdom with zero 'Errors', followed by Walmart, Target, Chewy, IKEA, and Macy's, from the United States and H &amp; M from Germany with one 'Errors', the rest of the e-commerce websites have more than one 'Errors' and more than six 'Contrast Errors'. In the evaluation of e-commerce websites, it was found that the most common errors are related to 'Contrast errors', representing 54.4% of the total, in second place 'Non-text Content', corresponding to 25.6%, 'Link Purpose' corresponds to 11.6% and 'Info and Relationships' with 3.1% of the total. Of the 50 e-commerce websites evaluated, 44.2% of the total comply with level 'A' for accessibility and 55.8% with level 'AA'; none of the sites evaluated reach level 'AAA'. The highest number of accessibility barriers are condensed in the 'perceivable' principle, representing 83.1% of the total, while 16.9% are distributed among the 'operable,' 'robust,' and 'understandable' principles. This finding implies that more barriers are related to problems for users with low vision, including older adults <ns0:ref type='bibr' target='#b10'>(Padmanaban et al., 2019)</ns0:ref>; vision deteriorates with age due to the eye's normal aging process. The existence of ocular degenerative diseases such as glaucoma, age-related macular degeneration, diabetic retinopathy, age-related cataracts, and cardiovascular accidents can mainly trigger a decrease in visual acuity and the visual field. One of the most frequently repeated barriers on e-commerce sites is contrast and color usage, vital parameters for web accessibility. In contrast, most of the world's users are visually impaired. According to World Health Organization (World Health Organization (WHO), 2021), worldwide, at least 2.2 billion people have near or distance vision problems. Achieving accessibility on e-commerce websites is an excellent challenge, so it is essential to apply the contrast ratio, which measures the difference in 'luminance' or perceived brightness between two colors. The difference in brightness is expressed as a ratio varying from 1:1. WCAG 2.1 suggests addressing contrast with the three success criteria, 1.4.3 Contrast (minimum), 1.4.6 Contrast (enhanced) and 1.4.11 Contrast without text. In WCAG 2.1, it is suggested that 4.5:1 is the minimum required. Possibly some of these combinations are not very readable for all users.</ns0:p><ns0:p>For an e-commerce website to be accessible <ns0:ref type='bibr'>(World Wide Web Consortium, 2018)</ns0:ref>, it must meet level AA for accessibility; the evaluated e-commerce websites do not present any accessibility statement or specify whether they officially comply. Since the home page of these e-commerce sites is the user's entry point, it should be a primary objective for enhancements in terms of accessibility. Improving e-commerce websites should be a responsibility that companies take on, as it enhances the user experience and reduces the technology gap that people with disabilities may experience. 
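The contrast requirement discussed above can be checked directly from the WCAG 2.1 definitions of relative luminance and contrast ratio, which is how automatic tools flag the Contrast Errors counted in Table 2. A minimal sketch, where the color pair is only an example:

```python
# Minimal sketch of the WCAG contrast-ratio check discussed above (success
# criterion 1.4.3 requires at least 4.5:1 for normal text). Colors are given as
# (R, G, B) tuples in 0-255; the example pair is hypothetical.
def relative_luminance(rgb):
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio((118, 118, 118), (255, 255, 255))  # grey text (#767676) on white
    print(f"contrast ratio = {ratio:.2f}:1, passes AA (4.5:1): {ratio >= 4.5}")
```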
Accessibility issues can be reduced with the following recommendations 1) Improve contrast by considering the colors and contrasts of the screen, checking that they are displayed correctly on all devices. 2) Eliminate time limits, or at least lengthen them. It is essential to consider that users with disabilities need more time to browse online. 3) Always adding the option to 'skip content' is very useful for users who access the Internet with screen readers and avoid content that does not interest them. 4) Provide transcripts of texts: in addition to subtitling videos, it is advisable to include transcripts so that the hearing impaired can read the video content at their own pace. 5) Add captions to graphics: especially those whose description does not conform to the 'alt' attribute. The use of the 'Alt' attribute, called 'alternative text,' is essential for blind users who use screen readers. 6) Avoid using red to highlight important things; it would be fine if no people had color blindness problems. A good accessibility option is to use more giant letters or representative icons. 7) Write contents oriented to any reader, with and without disabilities. 8) Use legible fonts larger than 15px. 9) Avoid using paragraphs longer than four lines. 10) Use images and diagrams that help the reader understand the content better. This study brings originality since the 50 top-ranked e-commerce sites were evaluated. Nowadays, e-commerce websites must have accessibility policies and standards; several transactions are made electronically due to the pandemic. This research can guide developers and designers of e-commerce websites to spread the use of WCAG 2.1, which is intended to cover a more extensive set of recommendations to make the web more accessible. WCAG 2.1 can be considered as a superset containing WCAG 2.0. Therefore, as WCAG 2.1 extends WCAG 2.0, there are no incompatible requirements between one version.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations</ns0:head><ns0:p>This research has a fundamental limitation; it was evaluated using the WAVE automated review tool. Despite being a powerful tool that helps organizations improve the accessibility of websites for people with disabilities, WAVE cannot tell whether web content is accessible; only a human being can determine true accessibility. This evaluation did not include testing with users with disabilities; three accessibility experts conducted accessibility testing. No additional hardware or digital ramps were used in this study to achieve greater accessibility during the website evaluation process.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We consider the research relevant, especially during the COVID-19 period, when e-commerce is considered the leading solution in confining and indirectly improving the global economy. The procedure for evaluating e-commerce websites with the WAVE tool can be applied to any website to make it more than accessible. We recommend performing evaluations with heuristic methods based on WCAG 2.1 (World Wide Web Consortium, 2018) accessibility barriers and automated reviews with users with different disabilities. We found that 55.8% of the websites reach the 'AA' level suggested by WCAG 2.1. We found a low positive correlation between the rating of ecommerce websites and accessibility barriers according to Spearman's Rho of 0.329. 
The study revealed that 25.6% of e-commerce websites present product images that lack alternative text and that 54.4% of the sites present contrast problems related to the perceivable principle, which need to be solved urgently to make the sites more inclusive. Finally, we suggest that business people, governments, and academia work in multidisciplinary teams to generate laws and regulations related to web accessibility that would benefit all users, especially those with disabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>For future research, it is recommended to 1) Perform tests with other automatic review tools and compare the results obtained. 2) Conduct tests with users with different disabilities. 3) Build a software tool that includes artificial intelligence algorithms that help the software learn the heuristics that may cause accessibility barriers. 4) Include hardware limitations and interstitial advertising in the study.</ns0:p><ns0:p>Table 2 E-commerce websites evaluated. It presents a sample of 50 e-commerce stores according to the ranking of the classification proposed by ecommerceDB, followed by the acronym, errors, contrast errors, alerts, features, structural elements, ARIA, and the country to which each e-commerce corresponds.</ns0:p></ns0:div> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites. Diagram for assessing accessibility in e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Accessibility evaluation using WAVE. 
It presents detailed evaluation results with accessibility principles according to WCAG 2.1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 Figure 1</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Methodology for evaluating e-commerce websites.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 Figure 3</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Evaluation of accessibility with WAVE.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Accessibility evaluation using WAVE.</ns0:figDesc><ns0:graphic coords='21,42.52,205.78,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Accessibility barriers identified when evaluating with WAVE.</ns0:figDesc><ns0:graphic coords='22,42.52,204.37,525.00,294.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 E</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 E-commerce websites evaluated.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Normality tests. Show normality tests for Lilliefors significance correction. We applied for errors, contrast errors, and ranking.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65405:3:0:NEW 23 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 E</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>-commerce websites evaluated.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65405:3:0:NEW 23 Jan 2022)Manuscript to be reviewed</ns0:note> </ns0:body> "
"Universidad de Las Américas RUC: 1791362845001 Vía a Nayón a 300 metros del Redondel del Ciclista Quito, Pichincha EC 170503 Ecuador https://www.udla.edu.ec/ patricia.acosta@udla.edu.ec January 23, 2022 Dear Editors We thank the reviewers for their helpful comments on the manuscript and have edited it to address their concerns. We implemented the recommendations to improve our manuscript. We believe the manuscript is now suitable for publication in PeerJ. Dra. Patricia Acosta-Vargas Associate Professor of Computer Science UDLA On behalf of all authors. Reviewer 2 (Anonymous) Dear reviewer, we appreciate your recommendation. We have implemented what you suggested, and the change is highlighted in yellow in the PDF document. Basic reporting I have just one last remark regarding the order of sections: it should be better to put Web accessibility principles section (from line 92 to 125) after the literature review section and before Materials & Methods section since it’s inappropriate in scientific writing to make sections intertwined. I think that my suggestion in the last review was not clearly understood. For the rest, I’m satisfied and I think that the raised comments have been handled. Dear reviewer, thank you very much for your time and recommendation; we have restructured the order of the sections. We placed the web accessibility principles section (from lines 92 to 125) after the literature review section and the Materials and Methods section. "
Here is a paper. Please give your review comments after reading it.
351
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In this paper, we focused on the task scheduling problem for optimizing the Service-Level Agreement (SLA) satisfaction and the resource efficiency in Device-Edge-Cloud Cooperative Computing (DE3C) environments. Existing works only focused on one or two of three sub-problems (offloading decision, task assignment and task ordering), leading to a sub-optimal solution. To address this issue, we first formulated the problem as a binary nonlinear programming, and proposed an integer particle swarm optimization method (IPSO) to solve the problem in a reasonable time. With integer coding of task assignment to computing cores, our proposed method exploited IPSO to jointly solve the problems of offloading decision and task assignment, and integrated earliest deadline first scheme into the IPSO to solve the task ordering problem for each core. Extensive experimental results showed that our method achieved upto 9.53x and 9.64x better performance than that of several classical and state-of-the-art task scheduling methods in SLA satisfaction and resource efficiency, respectively.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Smart devices, e.g., Internet of Things (IoT) devices, smartphones, have been commonplace nowadays. 'There were 8.8 billion global mobile devices and connections in 2018, which will grow to 13.1 billion by 2023 at a CAGR of 8 percent', as shown in Cisco Annual Internet Report <ns0:ref type='bibr' target='#b9'>(Cisco, 2020)</ns0:ref>. But most of the time, users' requirements cannot be satisfied by their respective devices. This is because user devices usually have limited capacity of both resource and energy <ns0:ref type='bibr' target='#b42'>(Wu et al., 2019)</ns0:ref>. Device-Edge-Cloud Cooperative Computing (DE3C) <ns0:ref type='bibr' target='#b37'>(Wang et al., 2020)</ns0:ref> is one of the most promising ways to address the problem. DE3C extends the capacity of user devices by jointly exploiting the edge resources with low network latency and the cloud with abundant computing resources.</ns0:p><ns0:p>Task scheduling helps to improve the resource efficiency and satisfy user requirements in DE3C, by properly mapping requested tasks to hybrid device-edge-cloud resources. The goal of task scheduling is to decide whether each task is offloaded from user device to an edge or a cloud (offloading decision), which computing node an offloaded task is assigned to (task assignment), and the execution order of tasks in each computing node (task ordering) <ns0:ref type='bibr' target='#b37'>(Wang et al., 2020)</ns0:ref>. To obtain a global optimal solution, these three decision problems must be concerned jointly when designing task scheduling. Unfortunately, to the best of our knowledge, there is no work which jointly address all of these three decision problems. 
Therefore, in this paper, we try to address this issue for DE3C.</ns0:p><ns0:p>As the task scheduling problem is NP-Hard <ns0:ref type='bibr' target='#b10'>(Du and Leung, 1989)</ns0:ref>, several works exploited heuristic methods <ns0:ref type='bibr' target='#b26'>(Meng et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b25'>, 2020;</ns0:ref><ns0:ref type='bibr' target='#b41'>Wang et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b44'>Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Liu et al., 2019)</ns0:ref> and meta-heuristic algorithms, such as swarm intelligence <ns0:ref type='bibr' target='#b43'>(Xie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adhikari et al., 2020)</ns0:ref> and others <ns0:ref type='bibr' target='#b32'>(Sun et al., 2018)</ns0:ref>. Inspired by natural behaviours, a meta-heuristic algorithm searches for the optimal solution by combining a randomized optimization method with a generalized search strategy. Meta-heuristic algorithms can achieve better performance than heuristic methods, mainly due to their global search ability <ns0:ref type='bibr' target='#b16'>(Houssein et al., 2021)</ns0:ref>. Thus, in this paper, we design task scheduling for DE3C by exploiting Particle Swarm Optimization (PSO), which is one of the most representative meta-heuristics based on swarm intelligence, due to its fast convergence, powerful global optimization ability, and easy implementation <ns0:ref type='bibr' target='#b38'>(Wang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In this paper, we focus on the task scheduling problem for DE3C to improve the satisfaction of Service Level Agreements (SLA) and the resource efficiency. The SLA satisfaction strongly affects the income and the reputation <ns0:ref type='bibr' target='#b30'>(Serrano et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b47'>Zhao et al., 2021)</ns0:ref>, and the resource efficiency largely determines the cost for service providers <ns0:ref type='bibr' target='#b12'>(Gujarati et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b36'>Wang et al., 2016)</ns0:ref>. We first present a formulation for the problem, and then propose a task scheduling method based on PSO with integer coding to solve the problem in a reasonable time complexity. In our proposed method, we encode the assignment of tasks to computing cores into the position of a particle, and exploit the earliest deadline first (EDF) approach to decide the execution order of the tasks assigned to each core. This encoding approach has two advantages: 1) it has a much smaller solution space than binary encoding, and thus a higher possibility of achieving the optimal solution; 2) compared with encoding the assignment of tasks to computing servers <ns0:ref type='bibr' target='#b43'>(Xie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adhikari et al., 2020)</ns0:ref> (or coarser resource granularity), our approach makes more use of the global searching ability of PSO. In addition, we use the modulus operation to restrict the range of the value in each particle position dimension. This gives boundary values the same possibility as other values in each particle position dimension, and thus maintains the diversity of particles. In brief, the contributions of this paper are as follows.</ns0:p><ns0:p>&#8226; We formulate the task scheduling problem in DE3C as a binary nonlinear program with two objectives. 
The major objective is to maximize the SLA satisfaction, i.e., the number of completed tasks. The second one is maximizing the resource utilization, one of the most common quantification approaches for the resource efficiency.</ns0:p><ns0:p>&#8226; We propose an Integer PSO based task scheduling method (IPSO). The method exploits the integer coding of the joint solution of offloading decision and task assignment, and integrates EDF into IPSO to address the task ordering problem. Besides, the proposed method only restricts the range of particle positions by the modulus operation to maintain the particle diversity.</ns0:p><ns0:p>&#8226; We conduct simulated experiments, where parameters are set referring to recent related works and realistic settings, to evaluate our proposed method. Experiment results show that IPSO has 24.8%-953% better performance than heuristic methods, binary PSO, and the genetic algorithm (GA), which is a representative meta-heuristic based on evolutionary theory, in SLA satisfaction optimization.</ns0:p><ns0:p>In the rest of this paper, Section 2 formulates the task scheduling problem we are concerned with. Section 3 presents the task scheduling approach based on IPSO. Section 4 evaluates the scheduling approach presented in Section 3 by simulated experiments. Section 5 discusses some findings from the experimental results. Section 6 reviews related works and Section 7 concludes this paper.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>PROBLEM FORMULATION</ns0:head><ns0:p>In this paper, we focus on the DE3C environment, which is composed of the device tier, the edge tier, and the cloud tier, as shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. In the device tier, a user launches one or more request tasks on its device, and processes these tasks locally if the device has available computing resources. Otherwise, it offloads tasks to an edge or a cloud. In the edge tier, there are multiple edge computing centres (edges for short).</ns0:p><ns0:p>Each edge has network connections with several user devices and consists of a small number of servers for processing some offloaded tasks. In the cloud tier, there are various types of cloud servers, usually in the form of virtual machines (VMs). The DE3C service provider, i.e., a cloud user, can rent instances of any VM type for processing some offloaded tasks. The cloud usually has poor network performance. We denote by </ns0:p><ns0:formula xml:id='formula_0'>b i, j (i &#8712; [1, M], j &#8712; [M + 1, M + E +V ])</ns0:formula><ns0:p>the network bandwidth for transmitting data from device n i to node n j , which can be easily calculated according to the transmission channel state data <ns0:ref type='bibr' target='#b11'>(Du et al., 2019)</ns0:ref>. If a device is not covered by an edge, i.e. there is no network connection between them, the corresponding bandwidth is set as 0.</ns0:p><ns0:p>In the DE3C environment, there are T tasks, t 1 ,t 2 , ...,t T , requested by users for processing. We use binary constants</ns0:p><ns0:formula xml:id='formula_1'>x o,i , &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M],</ns0:formula><ns0:p>to represent the ownership relationships between tasks and devices, as defined in Eq.( <ns0:ref type='formula' target='#formula_2'>1</ns0:ref>), which is known.</ns0:p><ns0:formula xml:id='formula_2'>x o,i = 1, if t o is launched by n i ; 0, otherwise, &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M]. 
(<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>Task t o has r o computing length for processing its input data with size a o , and requires that it must be finished within the deadline d o . 1 Without loss of generality, we assume that d 1 &#8804; d 2 &#8804; ... &#8804; d T . To make our approach universal applicable, we assume there is no relationship between the computing length and the input data size for each task.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Task Execution Model</ns0:head><ns0:p>When t o is processed locally, there is no data transmission for the task, and thus, its execution time is</ns0:p><ns0:formula xml:id='formula_4'>&#964; o = M &#8721; i=1 (x o,i &#8226; r o g i ), &#8704;o &#8712; [1, T ].<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In this paper, we consider that each task exhausts only one core during its execution, as done in many published articles. This makes our approach more universal because it is applicable to the situation that each task can exhaust all resources of a computing node by seeing the node as a core. For tasks with elastic degree of parallelism, we recommend to referring our previous work <ns0:ref type='bibr' target='#b35'>(Wang et al., 2019a)</ns0:ref> which is complementary to this work.</ns0:p><ns0:p>Due to EDF scheme providing the optimal solution for SLA satisfaction maximization in each core <ns0:ref type='bibr' target='#b28'>(Pinedo, 2016)</ns0:ref>, we can assume all tasks assigned to each core are processed in the ascending order of deadlines when establishing the optimization model. With this in mind, the finish time of task t o when it processed locally can be calculated by</ns0:p><ns0:formula xml:id='formula_5'>f t o = M &#8721; i=1 C i &#8721; q=1 (y o,i,q &#8226; o &#8721; w=1 (y w,i,q &#8226; &#964; w )), &#8704;o &#8712; [1, T ],<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>1 In this paper, we focus on hard deadline tasks, and leave soft deadline tasks as a future consideration.</ns0:p></ns0:div> <ns0:div><ns0:head>3/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where y o,i,q indicates whether task t o is assigned to the qth core of node n i , which is defined in Eq. ( <ns0:ref type='formula' target='#formula_6'>4</ns0:ref>).</ns0:p><ns0:p>&#8721; o&#8722;1 w=1 (y w,i,q &#8226; &#964; w ) is the accumulated sum of execution time of tasks which are assigned to qth core of m i and have earlier deadline than t o , which is the start time of t o when it is assigned to the core. Thus,</ns0:p><ns0:formula xml:id='formula_6'>&#8721; o w=1 (y w,i,q &#8226; &#964; w ) is the finish time when t o is assigned to qth core of n i (i &#8712; [1, M]). y o,i,q = 1, if t o is assigned to qth core of n i 0, else , o &#8712; [1, T ], i &#8712; [1, M + E +V ], q &#8712; [1,C i ].<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>As each task cannot be executed by any device which doesn't launch it,</ns0:p><ns0:formula xml:id='formula_7'>C i &#8721; q=1 y o,i,q &#8804; x o,i , &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M]. (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>)</ns0:formula><ns0:p>When a task is offloaded to the edge or the cloud tier, it starts to be executed only when both its input data and the core it is assigned are ready. 
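As a concrete reading of Eqs. (2)-(3), the finish time of a locally executed task is a running sum of execution times over the tasks assigned to the same core, taken in deadline (EDF) order. A minimal sketch with hypothetical task lengths and core speed:

```python
# Minimal sketch of Eqs. (2)-(3): execution and finish times of tasks that run
# locally on one device core, assuming the tasks are already sorted by deadline
# (EDF). The task and core parameters here are hypothetical examples.
def local_finish_times(lengths, core_speed):
    """lengths: computing lengths r_o of tasks assigned to the core, in EDF order.
    Returns the finish time ft_o of each task on a core with capacity g_i."""
    finish, elapsed = [], 0.0
    for r in lengths:
        elapsed += r / core_speed   # execution time tau_o = r_o / g_i
        finish.append(elapsed)      # ft_o = sum of earlier-deadline execution times + tau_o
    return finish

if __name__ == "__main__":
    print(local_finish_times(lengths=[4.0, 2.0, 6.0], core_speed=2.0))  # [2.0, 3.0, 6.0]
```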
Based on the EDF scheme, the ready time of input data for each task assigned to a core in an edge server or a VM can be calculated as</ns0:p><ns0:formula xml:id='formula_9'>rt o = M+E+V &#8721; i=M+1 C i &#8721; q=1 (y o,i,q &#8226; o &#8721; w=1 (y w,i,q &#8226; a w b T w,i )), &#8704;o &#8712; [1, T ],<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where rt o is the ready time of input data for task t o when the task is offloaded to an edge server or a cloud VM, respectively. For ease of our problem formulation, we use b T o,i to respectively represent the network bandwidths of transferring the input data from the device launching</ns0:p><ns0:formula xml:id='formula_10'>t o to n i (i &#8712; [M + 1, M + E +V ]). That is to say, b T o,i = &#8721; M k=1 (x o,k &#8226; b k, j ).</ns0:formula><ns0:p>Then the transmission time of the input data for t o is a o /b T o,i when it is offloaded to n i ). With the EDF scheme, the ready time of the input data for an offloaded task is the accumulated transmission time of all input data of the offloaded task and other tasks which are assigned to the same core and have earlier deadline than the offloaded task, i.e., &#8721; o w=1 (y</ns0:p><ns0:formula xml:id='formula_11'>w,i,q &#8226; a w /b T w,i ) for task t o when it is offloaded to qth core in n i (i &#8712; [M + 1, M + E +V ]).</ns0:formula><ns0:p>In this paper, we don't consider employing the task redundant execution for the performance improvement. Thus, each task can be executed by only one core, i.e.,</ns0:p><ns0:formula xml:id='formula_12'>M &#8721; i=1 C i &#8721; q=1 y o,i,q &#8804; 1, &#8704;o &#8712; [1, T ].<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>We use z o to indicate whether t o is assigned to a core for its execution, where z o = 1 means yes and z o = 0 means no. Then we have</ns0:p><ns0:formula xml:id='formula_13'>z o = M &#8721; i=1 C i &#8721; q=1 y o,i,q , &#8704;o &#8712; [1, T ].<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>And the total number of tasks which are assigned to computing cores for executions is</ns0:p><ns0:formula xml:id='formula_14'>Z = T &#8721; o=1 z o .<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>We use f t o f f o to respectively represent the finish times of t o when it is offloaded to the edge or cloud tier. For t o assigned to a core, the core is available when all tasks that are assigned to the core and have earlier deadline than the task are finished. And thus, the ready time of the core for executing t o is the latest finish time of these tasks, which respectively are</ns0:p><ns0:formula xml:id='formula_15'>rc o = M+E+V &#8721; i=M+1 C i &#8721; q=1 (y o,i,q &#8226; max w&lt;o {y w,i,q &#8226; f t o f f w }), &#8704;o &#8712; [1, T ].<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>The ready time of a task to be executed by the core it assigned to is the latter of the input data ready time and the core available time. The finish time of the task is its ready time plus its execution time. Thus, finish times of offloaded tasks are respectively Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_16'>f t o f f o = max{rt o , rc o } + M+E+V &#8721; i=M+1 C i &#8721; q=1 (y w,i,q &#8226; r o g i ), &#8704;o &#8712; [1, T ].<ns0:label>(11</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Noticing that when a task is assigned to a tier, finish times of the task in other two tiers are both 0, as shown in Eq. ( <ns0:ref type='formula' target='#formula_5'>3</ns0:ref>), and (11). 
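To illustrate how Eqs. (6), (10), and (11) interact on a single offloading target, the following sketch walks one core's EDF-ordered task list: the input-data ready time accumulates transmission times, the core ready time is the finish time of the previously scheduled task, and the finish time adds the execution time to the later of the two. The bandwidths and task parameters are hypothetical, and tasks on the same core are assumed to share the uplink sequentially, as in the model above.

```python
# Minimal sketch of Eqs. (6), (10) and (11): finish times of tasks offloaded to one
# edge/cloud core, with tasks already in EDF (deadline) order. All values below are
# hypothetical examples.
def offloaded_finish_times(tasks, core_speed):
    """tasks: list of (computing_length r_o, input_size a_o, bandwidth b_o) in EDF order."""
    finish, data_ready, core_ready = [], 0.0, 0.0
    for r, a, bw in tasks:
        data_ready += a / bw                    # rt_o: accumulated transmission time (Eq. 6)
        start = max(data_ready, core_ready)     # later of data-ready and core-ready time (Eq. 10)
        core_ready = start + r / core_speed     # ft_off_o = start time + execution time (Eq. 11)
        finish.append(core_ready)
    return finish

if __name__ == "__main__":
    tasks = [(4.0, 8.0, 4.0), (2.0, 2.0, 4.0)]  # (r_o, a_o, b_o) examples
    print(offloaded_finish_times(tasks, core_speed=2.0))  # [4.0, 5.0]
```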
Thus, the deadline constraints can be formulated as</ns0:p><ns0:formula xml:id='formula_17'>f t o + f t o f f o &#8804; d o , &#8704;o &#8712; [1, T ]. (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>)</ns0:formula><ns0:p>As the occupied time of each computing node is the latest usage time of its cores 2 , and the usage time of a core is the latest finish time of tasks assigned to it. Therefore, the occupied times of computing nodes are respectively</ns0:p><ns0:formula xml:id='formula_19'>ot i = max q&#8712;[1,C i ] { max o&#8712;[1,T ] {y o,i,q &#8226; f t o }}, &#8704;i &#8712; [1, M], (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:formula xml:id='formula_21'>ot i = max q&#8712;[1,C i ] { max o&#8712;[1,T ] {y o,i,q &#8226; f t o f f o }}, &#8704;i &#8712; [M + 1, M + E +V ].<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Then the total amount of occupied computing resources for task processing is</ns0:p><ns0:formula xml:id='formula_22'>&#920; = M+E+V &#8721; i=1 (ot i &#8226;C i &#8226; g i ). (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>)</ns0:formula><ns0:p>And the overall computing resource utilization of the DE3C system is</ns0:p><ns0:formula xml:id='formula_24'>U = &#8721; T o=1 (y o &#8226; r o ) &#920; , (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_25'>)</ns0:formula><ns0:p>where the numerator is the accumulated computing length of executed tasks, i.e., the amount of computing resource consumed for the task execution.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Problem Model</ns0:head><ns0:p>Based on above formulations, we can model the task scheduling problem for DE3C as</ns0:p><ns0:formula xml:id='formula_26'>Maximizing Z +U (17) subject to (2) &#8722; (16),<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>where the objective ( <ns0:ref type='formula'>17</ns0:ref>) is maximizing the number of finished tasks, which is considered as the quantifiable indicator of the SLA satisfaction in this paper, and maximizing the overall computing resource utilization when the finished task number cannot be improved (noticing that the resource utilization is no more than 1). The decision variables include y o,i,q</ns0:p><ns0:formula xml:id='formula_27'>(q &#8712; [1,C i ], i &#8712; [1, M + E +V ], o &#8712; [1, T ]</ns0:formula><ns0:p>). This problem is binary nonlinear programming (BNLP), which can be solved by existing tools, e.g., lp solve <ns0:ref type='bibr' target='#b5'>(Berkelaar et al., 2020)</ns0:ref>. But these tools are not applicable to large-scale problems, as they are implemented based on branch and bound. Therefore, we propose a task scheduling method based on an integer PSO algorithm to solve the problem in a reasonable time in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>IPSO BASED TASK SCHEDULING</ns0:head><ns0:p>In this section, we present our integer PSO algorithm (IPSO) based task scheduling method in DE3C environments to improve the SLA satisfaction and the resource efficiency. Our proposed method, outlined in Algorithm 1, first employs IPSO to achieve the particle position providing the global best fitness value in Algorithm 3, where the position of each particle is the code of the assignment of tasks to cores. Then our IPSO based method can provide a task scheduling solution according to the task assignment get from the previous step by exploiting EDF scheme for task ordering in each core, as shown in Algorithm 2. 
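The fitness that drives the search (introduced next) is simply the objective value of (17), so it can be evaluated directly from a candidate schedule. The sketch below computes Z from the accepted tasks and U from Eq. (13)-(16); the data layout, argument names, and example values are assumptions for illustration only.

```python
# A minimal sketch of evaluating objective (17): Z, the number of deadline-satisfying
# tasks, plus U, the overall utilization built from Eq. (13)-(16).

def objective(node_schedules, core_counts, core_speeds):
    """node_schedules: {node: {core: [(finish_time, length, deadline_met), ...]}} (assumed layout)."""
    Z, executed, occupied = 0, 0.0, 0.0
    for node, cores in node_schedules.items():
        node_busy = 0.0                                 # ot_i, the node's occupied time
        for records in cores.values():
            for finish, length, met in records:
                if met:
                    Z += 1                              # one more finished task      (Eq. 8-9)
                    executed += length                  # numerator of U              (Eq. 16)
                    node_busy = max(node_busy, finish)  # latest finish on the node   (Eq. 13-14)
        occupied += node_busy * core_counts[node] * core_speeds[node]   # Theta       (Eq. 15)
    U = executed / occupied if occupied > 0 else 0.0    # overall utilization         (Eq. 16)
    return Z + U                                        # the objective               (Eq. 17)

if __name__ == "__main__":
    sched = {"device1": {0: [(3.0, 300, True)], 1: [(2.0, 150, True), (9.0, 200, False)]}}
    print(objective(sched, core_counts={"device1": 2}, core_speeds={"device1": 100.0}))  # 2.75
```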
In our IPSO, to quantify the quality of particles, we define the fitness function as the objective (17) of the problem we concerned,</ns0:p><ns0:formula xml:id='formula_28'>f n = Z +U. (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>)</ns0:formula><ns0:p>In the followings, we will present the integer encoding and decoding approach exploited by the IPSO in section 3.1, and the detail of IPSO in section 3.2.</ns0:p><ns0:p>2 In this paper, to avoid the negative effects on task execution performance, we don't consider to use dynamic frequency scaling technologies for computing energy saving.</ns0:p></ns0:div> <ns0:div><ns0:head>5/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 1 IPSO based task scheduling Input: The information of tasks and resources in the DE3C system; the integer encoding and decoding method. Output: A task scheduling solution.</ns0:p><ns0:p>1: achieving the global best particle position by IPSO (see Algorithm 3); 2: decoding the position into a task scheduling solution by Algorithm 2; 3: return the task scheduling solution; </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Integer Encoding and Decoding</ns0:head><ns0:p>Our IPSO exploits the integer encoding method to convert a task assignment to cores into the position information of a particle. We first respectively assign sequence numbers to tasks and cores, both starting from one, where the task number is corresponding to a dimension of a particle position, and the value of a dimension of a particle position is corresponding to the core the corresponding task assigned to. For cloud VMs, we only number one core for each VM type as all VM instances with a type have identical price-performance ratio in real world, e.g., a1.* in Amazon EC2 3 . For each task, the number of cores which it can be assigned to is the accumulated core number of the device launching it and the edge servers having connections with the device plus the number of VM types. Thus, for each dimension in each particle position, the minimal value (p min d ) is one representing the task assigned to the first core of the device launching the task, and the maximal value (p max d ) is</ns0:p><ns0:formula xml:id='formula_30'>p max d = M &#8721; i=1 (x d,i &#8226;C i ) + M &#8721; i=1 M+E &#8721; j=M+1 &#8721; b i, j &gt;0 (x d,i &#8226;C j ) + NV, (<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_31'>)</ns0:formula><ns0:p>where NV is the number of cloud VM types. The subscript d represents the dimension in each particle position. The dth dimension is corresponding to the dth task, t d .</ns0:p><ns0:p>For example, as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, assuming a DE3C consisting of two user devices, one edge server, and one cloud VM type. Each of these two devices and the edge server has two computing cores, respectively represented as dc 11 and dc 12 for the first device, dc 21 and dc 22 for the second device, and ec 1 and ec 2 for the edge server. Both devices have network connections with the edge, that is to say, tasks launched by these two devices can be offloaded to the edge for processing. Then for each task, there are five cores that the task can be assigned to, i.e., p max d = 5 for all d. 
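A small sketch of the encoding range in Eq. (20) may help here, before the running example continues: for each task, its dimension value indexes into the cores of its own device, the cores of the edge servers connected to that device, and one entry per cloud VM type. The topology below mirrors this Fig. 2 setting, and all identifiers are illustrative assumptions.

```python
# A sketch of building the per-task candidate-core list behind Eq. (20); the length
# of each list is p_max_d for that task's dimension.

def candidate_cores(device_of_task, device_cores, edge_cores, connections, n_vm_types):
    """Return, per task, the ordered list of cores that its dimension value indexes into."""
    cores = {}
    for task, dev in device_of_task.items():
        local = [("device", dev, q) for q in range(device_cores[dev])]
        edge = [("edge", e, q) for e in connections[dev] for q in range(edge_cores[e])]
        cloud = [("cloud_vm_type", v, 0) for v in range(n_vm_types)]   # one entry per VM type
        cores[task] = local + edge + cloud                             # p_max_d = len(cores[task])
    return cores

if __name__ == "__main__":
    cands = candidate_cores(
        device_of_task={"t1": "d1", "t4": "d2"},
        device_cores={"d1": 2, "d2": 2},
        edge_cores={"e1": 2},
        connections={"d1": ["e1"], "d2": ["e1"]},
        n_vm_types=1)
    print(len(cands["t1"]))   # 5, matching p_max_d = 5 in the Fig. 2 example
```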
Each device launches three tasks, where the first three tasks, t 1 , t 2 , and t 3 , are launched by the first device, and the last three tasks, t 4 , t 5 , and t 6 , are launched by the second device. Then we numbered two cores of each device as 1 and 2, respectively. Two cores of the edge server are numbered as 3 and 4, respectively. And the VM type is numbered as 5. By this time, the particle position [2, 3, 2, 1, 4, 5] represents t 1 is assigned to dc 12 , t 2 is assigned to ec 1 , t 3 is assigned to dc 12 , t 4 is assigned to dc 21 , t 5 is assigned to ec 2 , and t 6 is assigned to the cloud.</ns0:p><ns0:p>Given a particle position, we use the following steps to convert it to a task scheduling solution, as outlined in Algorithm 2: (i) we decode the position to the task assignment to cores based on the correspondence between them, illustrated above; (ii) with the task assignment, we conduct EDF for ordering the task execution on each core, which rejects tasks whose deadline cannot be satisfied by the core, as shown in lines 2-3 of Algorithm 2. By now, we achieve a task scheduling solution according to the particle position. After this, we can calculate the number of tasks whose deadlines are satisfied, and the overall utilization using Eq. ( <ns0:ref type='formula' target='#formula_19'>13</ns0:ref>)-( <ns0:ref type='formula' target='#formula_24'>16</ns0:ref>). And then, we can achieve the fitness of the particle using Eq. ( <ns0:ref type='formula' target='#formula_28'>19</ns0:ref>). sorting the task execution order in the descending on the deadline; 4: calculating the fitness using Eq. ( <ns0:ref type='formula' target='#formula_28'>19</ns0:ref>); 5: return the task scheduling solution and the fitness;</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>IPSO</ns0:head><ns0:p>The IPSO, exploited by our method to achieve the best particle position providing the global best fitness, consists of the following steps, as shown in Algorithm 3. 1) Initializing the position and the fly velocity for each particle randomly, and calculating its fitness.</ns0:p><ns0:p>2) Setting the local best position and fitness as the current position and fitness for each particle, respectively.</ns0:p><ns0:p>3) Finding the particle providing the best fitness, and setting the global best position and fitness as its position and fitness. 4) If the number of iterations don't reach the maximum predefined, repeat step 5)-7) for each particle. 5) For the particle, updating its velocity and position respectively using Eq. ( <ns0:ref type='formula' target='#formula_32'>21</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_34'>22</ns0:ref>), and calculating its fitness. Where &#965; i,d and p i,d are representing the velocity and the position of ith particle in dth dimension. lb i,d is the local best position for ith particle in dth dimension. gb d is the global best position in dth dimension. &#969; is the inertia weight of particles. We exploit linearly decreasing inertia weight in this paper, due to its simplicity and good performance <ns0:ref type='bibr' target='#b13'>Han et al. (2010)</ns0:ref>. a 1 and a 2 are the acceleration coefficients, which push the particle toward local and global best positions, respectively. r 1 and r 2 are two random values in the range of [0, 1]. 
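The velocity and position updates just described, together with the rounding-and-modulo rationalization formalized in Eq. (21)-(22) immediately below, can be sketched as follows. The sketch uses the conventional PSO pull toward the local and global best positions, which is what the update appears to intend, and all parameter values are illustrative assumptions.

```python
# A hedged sketch of the particle update: a standard PSO velocity step, then the
# position is rounded and wrapped with a modulo so each dimension stays in
# [1, p_max_d] and boundary values remain as reachable as any other core index.
import math
import random

def update_particle(pos, vel, local_best, global_best, p_max, w=1.0, a1=2.0, a2=2.0):
    new_pos, new_vel = [], []
    for d in range(len(pos)):
        r1, r2 = random.random(), random.random()
        v = (w * vel[d]
             + a1 * r1 * (local_best[d] - pos[d])    # pull toward this particle's best
             + a2 * r2 * (global_best[d] - pos[d]))  # pull toward the swarm's best
        p = math.ceil(pos[d] + v) % p_max[d] + 1     # round, then wrap into [1, p_max_d]
        new_vel.append(v)
        new_pos.append(p)
    return new_pos, new_vel

if __name__ == "__main__":
    print(update_particle(pos=[2, 3, 2, 1, 4, 5], vel=[0.0] * 6,
                          local_best=[2, 3, 2, 1, 4, 5], global_best=[1, 3, 2, 1, 4, 5],
                          p_max=[5] * 6))
```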
To rationalize the updated position in dth dimension, we perform rounding operation and modulo p max i plus 1 on it.</ns0:p><ns0:formula xml:id='formula_32'>&#965; i,d = &#969; &#8226; &#965; i,d + a 1 &#8226; r 1 &#8226; (lb i,d &#8722; &#965; i,d ) + a 2 &#8226; r 2 &#8226; (gb d &#8722; &#965; i,d ), (<ns0:label>21</ns0:label></ns0:formula><ns0:formula xml:id='formula_33'>)</ns0:formula><ns0:formula xml:id='formula_34'>p i,d = &#8968;p i,d + &#965; i,d &#8969; mod p max i + 1. (<ns0:label>22</ns0:label></ns0:formula><ns0:formula xml:id='formula_35'>)</ns0:formula><ns0:p>1. For the particle, comparing its current fitness and local best fitness, and updating the local best fitness and position respectively as the greater one and the corresponding position.</ns0:p><ns0:p>2. Comparing the local best fitness with the global best fitness, and updating the global best fitness and position respectively as the greater one and the corresponding position.</ns0:p><ns0:p>For updating particle positions, we only discretize them and limit them to reasonable space, which is helpful for preserving the diversity of particles. Existing discrete PSO methods limit both positions and velocity for particles, and exploit the interception operator for the limiting, which sets a value as the minimum and the maximum when it is less than the minimum and greater than the maximum, respectively.</ns0:p><ns0:p>This makes the possibilities of the minimum and the maximum for particle positions are much greater than that of other possible values, and thus reduces the particle diversity, which can reduce the performance of PSO.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Complexity Analysis</ns0:head><ns0:p>As Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 3 IPSO Input: the parameters of the IPSO.</ns0:p><ns0:p>Output: The global best particle position. 1: generating the position and the velocity of each particle randomly; 2: calculating the fitness of each particle by Algorithm 2; 3: initializing the local best solution as the current position and the fitness for each particle; 4: initializing the global best solution as the local best solution of the particle providing the best fitness; 5: while the iterative number don't reach the maximum do 6:</ns0:p><ns0:p>for each particle do 7:</ns0:p><ns0:p>updating the position using Eq. ( <ns0:ref type='formula' target='#formula_32'>21</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_34'>22</ns0:ref>); 8:</ns0:p><ns0:p>calculating the fitness of each particle by Algorithm 2;</ns0:p><ns0:p>9:</ns0:p><ns0:p>updating the local best solution;</ns0:p><ns0:p>10:</ns0:p><ns0:p>updating the global best solution; 11: return the global best solution; average. Thus, the time complexity of our IPSO based method is O(IT R &#8226; NP &#8226; T 2 /NC) on average, which is quadratically increased with the number of tasks.</ns0:p><ns0:p>For PSO or GA with binary encoding method, they have similar procedures to IPSO, and thus their time complexities are also O(IT R &#8226; NP &#8226; T 2 /NC). Referring to <ns0:ref type='bibr' target='#b3'>(Bays, 1977;</ns0:ref><ns0:ref type='bibr' target='#b6'>B.V. 
and Guddeti, 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Benoit et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b22'>Liu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Meng et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b24'>Mahmud et al., 2020)</ns0:ref>, the time complexity of First Fit (FF) is O(T * NC), and that of First Fit Decreasing (FFD), Earliest Deadline First (EDF), Earliest Finish Time First (EFTF), Least Average Completion Time (LACT), and Least Slack Time First (LSTF) are O(T 2 * NC). In general, the numbers of the iteration and particles are constants, and all of above methods except FF exhibit quadratic complexity with the number of tasks.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>In this section, we conduct extensive experiments in a DE3C environment simulated referring to published articles and the reality, to evaluate the performance of our method in SLA satisfaction and resource efficiency. In the simulated environment, there are 20 devices, 2 edges, and one cloud VM type, and each device is randomly connected with one edge. For each edge, the number of servers are randomly generated in the range of <ns0:ref type='bibr'>[1,</ns0:ref><ns0:ref type='bibr'>4]</ns0:ref>. The computing capacity of each core is randomly set in the ranges of [1, 2]GHz, [2, 3]GHz, and [2, 3]GHz, respectively, for each device, each edge server, and the VM type.</ns0:p><ns0:p>The number of tasks launched by each device is generated randomly in the range of [1, 100], which results in about 1000 total tasks on average in the system. The length, the input data size, and the deadline of each task is generated randomly in the ranges of <ns0:ref type='bibr'>[100,</ns0:ref><ns0:ref type='bibr'>2000]</ns0:ref>GHz, <ns0:ref type='bibr'>[20,</ns0:ref><ns0:ref type='bibr'>500]</ns0:ref>MB and [100, 1000]seconds, respectively. For network connections, the bandwidths for transmitting data from a device to an edge and the cloud are randomly set in ranges of <ns0:ref type='bibr'>[10,</ns0:ref><ns0:ref type='bibr'>100]</ns0:ref>Mbps and [1, 10]Mbps, respectively.</ns0:p><ns0:p>There are several parameters should be set when implementing our IPSO. Referring to <ns0:ref type='bibr' target='#b18'>(Kumar et al., 2020)</ns0:ref>, we set the maximum iteration number and the particle number as 200 and 50, respectively. Both acceleration coefficients, a 1 and a 2 , are set as 2.0, referring to <ns0:ref type='bibr' target='#b39'>(Wang et al., 2021a)</ns0:ref>. The &#969; inertia weight is linearly decreased with the iterative time in the range of [0.0, 1.4], referring to <ns0:ref type='bibr' target='#b45'>(Yu et al., 2021)</ns0:ref>. The effect of parameter settings on the performance of our method will be studied in the future.</ns0:p><ns0:p>We compare our method with the following classical and recently published methods.</ns0:p><ns0:p>&#8226; First Fit <ns0:ref type='bibr' target='#b3'>(Bays, 1977)</ns0:ref>, FF, iteratively schedules a task to the first computing core satisfying its deadline.</ns0:p><ns0:p>&#8226; First Fit Decreasing (B.V. 
and Guddeti, 2018), FFD, iteratively schedules the task with maximal computing length to the first computing core satisfying its deadline.</ns0:p><ns0:p>&#8226; Earliest Deadline First <ns0:ref type='bibr' target='#b4'>(Benoit et al., 2021)</ns0:ref>, EDF, iteratively schedules the task with earliest deadline to the first computing core satisfying its deadline, which is a classical heuristic method concerning the deadline constraint.</ns0:p><ns0:p>&#8226; Earliest Finish Time First, EFTF, iteratively schedules a task to the computing core providing the earliest finish time and satisfying its deadline, which is the basic idea exploited in the work proposed by <ns0:ref type='bibr' target='#b22'>Liu et al. (Liu et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>8/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Least Average Completion Time, LACT, iteratively schedules a task to the computing core satisfying its deadline, such that the average completion time of scheduled tasks is minimal, which is the basic idea exploited by Dedas <ns0:ref type='bibr' target='#b25'>(Meng et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; Least Slack Time First, LSTF, iteratively schedules a task to the computing core satisfying its deadline and providing the least slack time for the task, which is the basic idea exploited in the work proposed by <ns0:ref type='bibr' target='#b24'>Mahmud et al. (Mahmud et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; Genetic Algorithm, GA, which simulates the population evolution by crossover, mutation and selection operators, where a chromosome represents a task assignment to cores, and a gene is a bit representing the assignment of a task to a core. This is the basic idea used in the work of Aburukba et al. <ns0:ref type='bibr' target='#b0'>(Aburukba et al., 2020)</ns0:ref>. The number of population and the maximum generation are set as 1000 in our experiments. EDF is used for task ordering in each core.</ns0:p><ns0:p>&#8226; PSO srv, employ PSO with the integer coding of the task assignment to computing nodes, which is the basic idea exploited by existing PSO based task scheduling for DE3C, such as <ns0:ref type='bibr' target='#b43'>(Xie et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Parameters have same settings to our IPSO method in the experiment. The assignment of tasks to cores in each computing node and the task ordering in each core are solved by EDF.</ns0:p><ns0:p>&#8226; BPSO, employ PSO with the binary coding that is similar to GA. Parameter values are set as our IPSO method in the experiment. EDF is used to order the task execution in each core.</ns0:p><ns0:p>We compare the performance of these task scheduling method in the following aspects.</ns0:p><ns0:p>&#8226; SLA satisfaction is quantified by the number, the accumulated computing length, and the processed data size of completed tasks.</ns0:p><ns0:p>&#8226; Resource efficiency is quantified by the resource utilization for the overall system, and the cost efficiency for cloud VMs. 
The price of a VM instance is $0.1 per hour.</ns0:p><ns0:p>&#8226; Processing efficiency is quantified by the executed computing length and the processed data size of tasks per unit processing time, which are the ratios of completed computing length and processed data size to the makespan, respectively.</ns0:p><ns0:p>For each group of experiments, we repeat it more than ten times, and report the median result in the followings. For each metric value achieved by each task scheduling method, we scale it by that of FF to display the relative difference between these methods more clearly. The details of experiment results are shown as followings. Our method is abbreviated to IPSO in the followings.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>SLA Satisfaction</ns0:head><ns0:p>Fig 3 shows the performance of various task scheduling methods in maximizing the accumulated number, computing length, and processed data size of finished tasks. As shown in the figure, our method has about 36%, 46%, and 55% better performance than these heuristic methods, FF, FFD, EDF, EFTF, LACT, and LSTF, in these three SLA satisfaction metrics, respectively. This illustrates that meta-heuristic methods can achieve a much better performance than heuristic methods, due to their randomness for global searching ability. While, the performance of GA, PSO srv, and BPSO are much worse than that of these heuristic methods, which suggests us that meta-heuristic approaches must be carefully designed for a good performance.</ns0:p><ns0:p>Of these heuristic methods, EDF has the best performance in SLA satisfaction, as shown in Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>, due to its aware of the task deadline, which is the reason why we exploit it for task ordering in each core.</ns0:p><ns0:p>Compared with BPSO, IPSO has 3.22x, 5.23x, and 3.13x better performance in three SLA satisfaction metrics, respectively. This provides experimental evidence that our integer coding method significantly improves the performance of PSO for task scheduling in DE3C. The main reason is that the searching space BPSO is much larger than IPSO due to their different represents to the same problem. For example, if there are 6 candidate cores, e.g., two device cores, four edge server cores, and one cloud VM type, for processing every task in a DE3C, then considering there are 10 tasks, the search space includes 6 10 solutions for IPSO, while 2 60 for BPSO. Thus, in this case, the search space of BPSO is more than 330 million times larger than that of IPSO, and this multiple exponentially increases with the numbers of tasks and candidate cores of each task. Therefore, for an optimization problem, IPSO has much probability of searching a local or global best solution than BPSO. This is also the main reason why GA has much worse performance than IPSO, as shown in Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. PSO srv has smaller search space while worse performance than IPSO. As shown in Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>, IPSO has 1.15x, 1.66x, and 1.08x better performance than PSO srv in three SLA satisfaction metrics, respectively. This is mainly because the coding of the task assignment to a coarse-grained resource cannot take full advantage of the global searching ability of PSO. 
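As a quick arithmetic check of the search-space comparison above, the snippet below reproduces the counts for 10 tasks with 6 candidate cores each; the variable names are only for illustration.

```python
# Search-space sizes for the 10-task, 6-core example: integer coding uses one
# dimension per task, binary coding one bit per (task, core) pair.
tasks, cores = 10, 6
ipso_space = cores ** tasks            # 6**10  = 60,466,176 candidate assignments
bpso_space = 2 ** (tasks * cores)      # 2**60 ~= 1.15e18 bit patterns
print(ipso_space, bpso_space, bpso_space // ipso_space)   # ratio ~= 1.9e10, well over 330 million
```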
Google's previous work has verified that fine-grained resource allocation helps to improve the resource efficiency <ns0:ref type='bibr' target='#b33'>(Tirmazi et al., 2020)</ns0:ref>, thus our IPSO uses the core as the granularity of resources during the searching process, which results in a better performance in SLA satisfaction optimization compared with other methods.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Resource Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_6'>4a</ns0:ref> and b respectively show the overall resource utilization and the cost efficiency when using various task scheduling methods in the simulated DE3C environment. As shown in the figure, our method has the best performance in optimizing both resource efficiency metric values, where our method has 36.9%-9.64x higher resource utilization and 5.6%-1.43x greater cost efficiency, compared with other methods (GA has zero cost efficiency as its solution does not offload any task to the cloud). This is mainly because the resource utilization is our second optimization objective (see Eq. 17 or 19), which results in that the solution having a higher utilization has a greater fitness in all of the solution with same number of finished tasks.</ns0:p><ns0:p>EDF has a worse cost efficiency than other heuristic methods. This may be because EDF offloads more tasks to the cloud, which leads to a greater ratio of the data transfer time and the computing time in the cloud, and thus results in a poor cost efficiency. The idea of offloading tasks with small input data size to the cloud helps to improve the cost efficiency, which is one of our consideration for designing heuristics or hybrid heuristics with high effectiveness. This is the main reason why EFTF has the best cost efficiency in all of these heuristics, because EFTF iteratively assigns the task with minimal finish time, which usually having small input data size, when making the offloading and assignment decisions in the cloud.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Processing Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> respectively show the values of the two processing efficiency metrics when applying various task scheduling methods. As shown in the figure, these two processing efficiency metric values respectively have a similar relative performance to the corresponding SLA satisfaction metric values for these scheduling methods, as shown in Fig. <ns0:ref type='figure' target='#fig_5'>3b and c</ns0:ref>. This is because all of these methods have comparable performance in the makespan. Thus, our method also has the best performance in processing efficiency.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>Meta-heuristics, typified by PSO, can achieve better performance than heuristics. This achievement is mainly due to their global search ability which is implemented by the randomness and the converging to the global solution during their iteratively search. But the meta-heuristic based method must be designed carefully, otherwise, it may have worse performance than heuristics, such GA, PSO srv and BPSO, as presented in experimental results.</ns0:p><ns0:p>The main difference between IPSO and BPSO is the search space size for a specific problem. 
This inspires us that more efficient encoding method with short code length helps to reduce the size of search space, and thus improve the probability of finding the global best solution.</ns0:p><ns0:p>One of the main advantages of heuristics is that they are specifically designed for targeted problems. This produces efficient local search strategies. Thus, in some times, heuristics have better performance than meta-heuristics, such as EDF vs. BPSO. Therefore, a promising research direction is integrating a local search strategy into a meta-heuristic algorithm to cover its shortage caused by its purpose of solving general problem. While, different combining of heuristic local searches and global search strategies should result in various performance improvements, which is one of our future studies.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>RELATED WORKS</ns0:head><ns0:p>As DE3C is one of the most effective ways to solve the problem of insufficient resources of smart devices and task scheduling is a promising technology to improve the resource efficiency, several researchers have focused on the design of efficient task scheduling methods in various DE3C environments <ns0:ref type='bibr' target='#b37'>(Wang et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>11/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To improve the response time of tasks, the method proposed by <ns0:ref type='bibr' target='#b2'>(Apat et al., 2019)</ns0:ref> iteratively assigned the task with the least slack time to the edge server closest to the user. Tasks are assigned to the cloud when they cannot be finished by edges. Their work didn't consider the task scheduling on each server.</ns0:p><ns0:p>OnDisc, proposed by <ns0:ref type='bibr' target='#b15'>(Han et al., 2019)</ns0:ref>, heuristically dispatched a task to the server providing the shortest additional total weighted response time (WRT), and sees the cloud as a server, to improve overall WRT.</ns0:p><ns0:p>For minimizing the deadline violation, the heuristic method proposed by <ns0:ref type='bibr' target='#b31'>(Stavrinides and Karatza, 2019)</ns0:ref> used EDF and earliest finish time first for selecting the task and the resource in each iteration, respectively.</ns0:p><ns0:p>When a task's input data was not ready, the proposed method tried to fill a subsequent task before it.</ns0:p><ns0:p>Above research focused on the performance optimization for task execution, while didn't concern the cost of used resources. In general, a task requires more resources for a better performance, and thus there is a trade-off between the task performance and the resource cost. Therefore, several works concerned the optimization of the resource cost or the profit for service providers. For example, <ns0:ref type='bibr' target='#b7'>(Chen et al., 2020)</ns0:ref> presented a task scheduling method to optimize the profit, where the value of a task was proportional to the resource amounts and the time it took, and resources were provided in the form of VM. Their proposed method first classified tasks based on the amount of its required resources by K-means. Then, for profit maximization, their method allocated the VM class to the closest task class, and used Kuhn-Munkres method to solve the optimal matching of tasks and VMs. In their work, all VMs were seen as one VM class. 
This work ignored the heterogeneity between edge and cloud resources, which may lead to resource inefficiency <ns0:ref type='bibr' target='#b17'>(Kumar et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b20'>Li et al. (Li et al., 2020)</ns0:ref> proposed a hybrid method employing simulated annealing to improve artificial fish swarm algorithm for offloading decision making, and used best fit for task assignment. This work focused on media delivery applications, and thus assumed every task was formed by same-sized subtasks. This assumption limited the application scope. Mahmud et al. <ns0:ref type='bibr' target='#b24'>(Mahmud et al., 2020)</ns0:ref> proposed a method which used edge resources first, and assigned the offloaded task to the first computing node with minimal profit merit value, where the profit merit was the profit divided by the slack time.</ns0:p><ns0:p>All of the aforementioned methods employed only edge and cloud resources for task processing, even though most of user devices have been equipped with various computing resources <ns0:ref type='bibr' target='#b42'>(Wu et al., 2019)</ns0:ref> which have zero transmission latency for users' data. To exploit all the advantages of the local, edge and cloud resources, some works are proposed to address the task scheduling problem for DE3C. The method presented in <ns0:ref type='bibr' target='#b19'>(Lakhan and Li, 2019)</ns0:ref> first tried several existed task order method, e.g., EDF, EFTF, and LSTF, and selected the result with the best performance for task order. Then, the method used existed pair-wise decision methods, TOPSIS <ns0:ref type='bibr' target='#b21'>(Liang and Xu, 2017)</ns0:ref> and AHP <ns0:ref type='bibr' target='#b29'>(Saaty, 2008)</ns0:ref>, to decide the position for each task's execution, and applied a local search method exploiting random searching for the edge/cloud. For improving the delay, the approach presented in <ns0:ref type='bibr' target='#b27'>(Miao et al., 2020)</ns0:ref> first decided the amounts of data that is to be processed by the device and an edge/cloud computing node, assuming each task can be divided into two subtasks with any data size. Then they considered to migrate some subtasks between computing nodes to further improve the delay, for each task. The method proposed in <ns0:ref type='bibr' target='#b46'>(Zhang et al., 2019)</ns0:ref> iteratively assigned the task required minimal resources to the nearest edge server that can satisfy all of its requirements. <ns0:ref type='bibr' target='#b23'>Ma et al. Ma et al. (2022)</ns0:ref> proposed a load balance method for improving the revenue for edge computing. The proposed method allocated the computing resources of the edge node with the most available cores and the smallest move-up energy to the new arrived task. To improve the total energy consumption for executing deep neural networks in DE3C with deadline constraints, Chen et al. <ns0:ref type='bibr' target='#b8'>Chen et al. (2022)</ns0:ref> proposed a particle swarm optimization algorithm using mutation and crossover operators for population update. <ns0:ref type='bibr' target='#b40'>Wang et al. Wang et al. (2021b)</ns0:ref> leveraged reinforcement learning with sequence-to sequence neural network for improving the latency and the device energy in DE3C. 
Machine learning-based or metaheuristic-based approaches may achieve a better performance than heuristics, but in general, they consume hundreds to tens of thousands more time, which makes them not applicable to make online scheduling decisions. Different from these above works, in this paper, we design an Integer PSO based hybrid heuristic method for DE3C systems. Our work is aiming at optimizing SLA satisfaction and resource efficiency, and trying to jointly address the problems of offloading decision, task assignment, and task ordering.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this paper, we study on the optimization of the SLA satisfaction and the resource efficiency in DE3C environments by task scheduling. We formulate the concerned optimization problem as a BNLP, and Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>propose an integer PSO based task scheduling method to solve the problem with a reasonable time.</ns0:p><ns0:p>Different from existing PSO based method, our method exploits the integer coding of the task assignment to cores, and rationalizes the position of each particle by rounding and modulo operation to preserve the particle diversity. Simulated experiment results show that our method has better performance in both SLA satisfaction and resource efficiency compared with nine classical and recently published methods.</ns0:p><ns0:p>The main advantages of our method are the efficient encoding method and the integration of metaheuristic and heuristic. In the future, we will continue to study on more effective encoding methods and try to design hybrid methods by hybridizing meta-heuristic and heuristic search strategy for a better performance.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>and evolutionary PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022) Manuscript to be reviewed Computer Science algorithmsAburukba et al. (2020);</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of device-edge-cloud cooperative computing.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66515:1:2:NEW 1 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>3 https://aws.amazon.com/ Converting a particle position to a task scheduling solution Input: The particle position, [gb 1 , gb 2 , ..., gb T ]. Output: The task scheduling solution and the fitness.1: decoding the position to the assignment of tasks to computing cores; 2: for each core do 3:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>shown in Algorithm 3, there are two layers loop, which has O(IT R &#8226; NP) time complexity, where IT R and NP are the numbers of the iteration and particles, respectively. Within the loop, the most complicated part is calling Algorithm 2 which is O(NC &#8226; (T /NC) 2 ) = O(T 2 /NC) in time complexity on average, as shown in its lines 2-3, where NC is the number of cores in the DE3C system. T /NC is the average number of tasks assigned to each core, and O((T /NC) 2 ) is the time complexity of EDF for each core on 7/15 PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The SLA satisfaction performance achieved by various scheduling methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The resource efficiency achieved by various scheduling methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The processing efficiency achieved by various scheduling methods.</ns0:figDesc></ns0:figure>
</ns0:body>
"
"Response to Reviewers’ Comments Thank you for your letter and for the reviewers’ comments concerning our manuscript entitled “Integer particle swarm optimization based task scheduling for device-edge-cloud cooperative computing to improve SLA satisfaction”. These comments are all valuable and very helpful for improving our paper, as well as the important guiding significance to our researches. We have studied comments carefully and have made corrections which we hope meet with approval. The responds to the reviewer’s comments are as follows. Reviewer 1 [Q1] Abstract has to be rewritten. A concise and factual abstract is required. [A1] Thanks for the comment. We have revised the abstract for more conciseness and factuality. [Q2] Problem/gap not properly highlighted in the abstract, also quantitative results/improvement missing in the abstract. [A2] Thanks for the comment. We have revised the abstract to highlight the problem we focused and added the quantitative results in the abstract. [Q3] There are a number of English language-related issues and need to be revised especially a lot of conjunctions used, sentences are too long and difficult to apprehend. [A3] Thanks for the comment. We reviewed our paper thoroughly, and improved its presentation for more readability. [Q4] A meta-heuristic algorithm …. Which can have a better performance than heuristic methods (line 45-47), how? Need some references or proper justification. [A4] Thanks for the comment. We have added references we referenced to prove the argument, and added the reason in our revised paper. [Q5] According to the assumption at line 109 that tasks assigned to a core are executed based on EDF, in such a case how do you handle starvation? [A5] Thanks for the comment. In this paper, we focused on tasks with hard deadline requirements. This means that a task finished after its deadline does not bring benefits to the service provider, and thus this task is rejected for its processing. If there is a starvation, which means there are not enough resources for completing all tasks, some tasks will be rejected. [Q6] Do you think that branch and bound technique can’t be applied for large-scale problems (line 154)? [A6] Thanks for the comment. In general, existing tools use simple branch and bound method which usually has exponential time complexity in average, and thus are not appliable to solving large-scale problems. Branch and bound technique specially designed for a problem may have a polynomial time complexity, but its design and the implementation is terribly complicated, which is one of our future works. [Q7] The sentence “Then, their method used Kuhn-Munkres …” need to be rephrased (line 364) [A7] Thanks for the comment. We have rephrased the sentence in the revised manuscript. [Q8] The sentence “Different from these … “ need to be rephrased (line 393) [A8] Thanks for the comment. We have rephrased the sentence in the revised manuscript. [Q9] No paper is found from 2021 and the single paper you have mentioned as of 2021 is actually published in 2018. [A9] Thanks for the comment. We have added several references published in 2021-2022. [Q10] Add papers from 2021 [A10] Thanks for the comment. We have added several references published in 2021-2022. [Q11] The numbers of subscripts were used for variables and symbols are very high, reduce the number of subscript variables and symbols as few as possible. For this, you can divide the “problem formulation” section into subsections. [A11] Thanks for the comment. 
We have improved our formulation for more conciseness, and divided the “problem formulation” section into several subsections. [Q12] Do you think inertia weight has a role in the PSO algorithm? Which variant of inertia weight you have used for your algorithm. Detail about inertia weight and other terms like r1, r2, c1, c2 are not given (lines 212-213). [A12] Thanks for the comment. We adopted these terms of standard PSO. We think the adaptive of these terms can improve the performance of PSO, which has been studied in some published works. This is not in the scope of this paper, and we will study on this in the future. [Q13] Complexity of the proposed approach has been calculated but not compared with their counterparts. [A13] Thanks for the comment. We have added the complexity comparison of compared approaches in our experiment. [Q14] Which encoding scheme you have used? (Figure 2) [A14] Thanks for the comment. Tasks and dimensions of each particle position have one-to-one correspondence. The value of a dimension is the NO. of the computing core where the corresponding task is assigned. For example, for a task, if there are N candidate cores. These cores are numbered as [1, N], respectively. The value range of its corresponding dimension is [1, N], and the value corresponds to a core. [Q15] “Referring to published ……” Only mentioning that it is taken from the literature is not sufficient, you need to provide proper references (lines 247-250). [A15] Thanks for the comment. We have added proper references for this. [Q16] References not mentioned for the first three approaches (FF, FFD, and EDF) used for comparisons (lines 253, 254-255, and 256-258)? You need to provide proper references. [A16] Thanks for the comment. We have added proper references for this. [Q17] How the computing time of task on the Cloud can be greater than on the Edge? (lines 330331) [A17] Thanks for the comment. Compared with the edge, the cloud has more computing capacity but has much less network bandwidth. Thus, in general, the cloud has a less computing time but a longer data transfer time for a task. We didn’t mean that the Cloud had a greater computing time of a task, but meant that the Cloud had a greater ratio between the data transfer time and the computing time. Reviewer 2 [Q18] The authors claimed that his approach has better resource efficiency. However, the results concerning resource efficiency are not meaningful. The authors are required to obtain results concerning Average resource utilization which will be between 0 to 1 or 0 to 100 %. [A18] Thanks for the comment. We presented the relative overall resource utilization (Fig. 4 (a)) in our paper. In reported data, we scale each metric value by that of FF to highlight the relative difference between these scheduling methods. [Q19] All the figures are having no y-axis caption. Similarly, some of the results seem the same even their titles are different. For instance, Figure 3 (a) and Figure 3 (c). [A19] Thanks for the comment. We have improved the presentations of experimental results by figures. [Q20] The authors are required to add a section results and discussion that should show why their proposed approach is better than the existing and results need to be clearly described. [A20] Thanks for the comment. We have added section discussion in the revised paper. "
Here is a paper. Please give your review comments after reading it.
352
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Task scheduling helps to improve the resource efficiency and the user satisfaction for Device-Edge-Cloud Cooperative Computing (DE3C), by properly mapping requested tasks to hybrid device-edge-cloud resources. In this paper, we focused on the task scheduling problem for optimizing the Service-Level Agreement (SLA) satisfaction and the resource efficiency in DE3C environments. Existing works only focused on one or two of three subproblems (offloading decision, task assignment and task ordering), leading to a suboptimal solution. To address this issue, we first formulated the problem as a binary nonlinear programming, and proposed an integer particle swarm optimization method (IPSO) to solve the problem in a reasonable time. With integer coding of task assignment to computing cores, our proposed method exploited IPSO to jointly solve the problems of offloading decision and task assignment, and integrated earliest deadline first scheme into the IPSO to solve the task ordering problem for each core. Extensive experimental results showed that our method achieved upto 953% and 964% better performance than that of several classical and state-of-the-art task scheduling methods in SLA satisfaction and resource efficiency, respectively.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Smart devices, e.g., Internet of Things (IoT) devices, smartphones, have been commonplace nowadays.</ns0:p><ns0:p>'There were 8.8 billion global mobile devices and connections in 2018, which will grow to 13.1 billion by 2023 at a CAGR of 8 percent', as shown in Cisco Annual Internet Report <ns0:ref type='bibr' target='#b9'>(Cisco, 2020)</ns0:ref>. But most of the time, users' requirements cannot be satisfied by their respective devices. This is because user devices usually have limited capacity of both resource and energy <ns0:ref type='bibr' target='#b44'>(Wu et al., 2019)</ns0:ref>. Device-Edge-Cloud Cooperative Computing (DE3C) <ns0:ref type='bibr' target='#b39'>(Wang et al., 2020)</ns0:ref> is one of the most promising ways to address the problem. DE3C extends the capacity of user devices by jointly exploiting the edge resources with low network latency and the cloud with abundant computing resources.</ns0:p><ns0:p>Task scheduling helps to improve the resource efficiency and satisfy user requirements in DE3C, by properly mapping requested tasks to hybrid device-edge-cloud resources. The goal of task scheduling is to decide whether each task is offloaded from user device to an edge or a cloud (offloading decision), which computing node an offloaded task is assigned to (task assignment), and the execution order of tasks in each computing node (task ordering) <ns0:ref type='bibr' target='#b39'>(Wang et al., 2020)</ns0:ref>. To obtain a global optimal solution, these three decision problems must be concerned jointly when designing task scheduling. Unfortunately, to the best of our knowledge, there is no work that has jointly addressed all of these three decision problems. Therefore, in this paper, we try to address this issue for DE3C.</ns0:p><ns0:p>As the task scheduling problem is NP-Hard <ns0:ref type='bibr' target='#b10'>(Du and Leung, 1989)</ns0:ref>, several works exploited heuristic PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science methods <ns0:ref type='bibr' target='#b27'>(Meng et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b26'>(Meng et al., , 2020;;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wang et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b46'>Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Liu et al., 2019)</ns0:ref> and metaheuristic algorithms, such as swarm intelligence <ns0:ref type='bibr' target='#b45'>(Xie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adhikari et al., 2020)</ns0:ref> and evolutionary algorithms <ns0:ref type='bibr' target='#b0'>(Aburukba et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Sun et al., 2018)</ns0:ref>, to solve the problem within a reasonable time.</ns0:p><ns0:p>Inspired by a natural behaviour, a meta-heuristic algorithm has the capability of searching the optimal solution by combining a random optimization method and a generalized search strategy. meta-heuristic algorithms can have a better performance than heuristic methods, mainly due to their global search ability <ns0:ref type='bibr' target='#b17'>(Houssein et al., 2021b)</ns0:ref>. Thus, in this paper, we design task scheduling for DE3C by exploiting Particle Swarm Optimization (PSO) which is one of the most representative meta-heuristics based on swarm intelligence, due to its ability to fast convergence and powerful ability of global optimization as well as its easy implementation <ns0:ref type='bibr' target='#b40'>(Wang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In this paper, we focus on the task scheduling problem for DE3C to improve the satisfaction of Service Level Agreements (SLA) and the resource efficiency. The SLA satisfaction strongly affects the income and the reputation <ns0:ref type='bibr' target='#b32'>(Serrano et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b49'>Zhao et al., 2021)</ns0:ref> and the resource efficiency can determine the cost at a large extent for service providers <ns0:ref type='bibr' target='#b13'>(Gujarati et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Wang et al., 2016)</ns0:ref>. We first present a formulation for the problem, and then propose a task scheduling method based on PSO with integer coding to solve the problem in a reasonable time complexity. In our proposed method, we encode the assignment of tasks to computing cores into the position of a particle, and exploit earliest deadline first (EDF) approach to decide the execution order of tasks assigned to one core. This encoding approach has two advantages: 1) it has much less solution space than binary encoding, and thus has more possibility of achieving optimal solution; 2) compared with encoding the assignment of tasks to computing servers <ns0:ref type='bibr' target='#b45'>(Xie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adhikari et al., 2020)</ns0:ref> (or a coarser resource granularity), our approach makes more use of the global searching ability of PSO. In addition, we use the modulus operation to restrict the range of the value in each particle position dimension. This can make boundary values have a same possibilities to other values for each particle position dimension, and thus maintain the diversity of particles. In brief, the contributions of this paper are as followings.</ns0:p><ns0:p>&#8226; We formulate the task scheduling problem in DE3C into a binary nonlinear programming with two objectives. 
The major objective is to maximize the SLA satisfaction, i.e., the number of completed tasks. The second one is maximizing the resource utilization, one of the most common quantification approach for the resource efficiency.</ns0:p><ns0:p>&#8226; We propose an Integer PSO based task scheduling method (IPSO). The method exploits the integer coding of the joint solution of offloading decision and task assignment, and integrates EDF into IPSO to address the task ordering problem. Besides, the proposed method only restricts the range of particle positions by the modulus operation to maintain the particle diversity.</ns0:p><ns0:p>&#8226; We conduct simulated experiments where parameters are set referring to recent related works and the reality, to evaluate our proposed heuristic method. Experiment results show that IPSO has 24.8%-953% better performance than heuristic methods, binary PSO, and genetic algorithm (GA) which is a representative meta-heuristics based on evolution theory, in SLA satisfaction optimization.</ns0:p><ns0:p>In the rest of this paper, Section 2 formulates the task offloading problem we concerned. Section 3 presents the task scheduling approach based on IPSO. Section 4 evaluates scheduling approach presented in Section 3 by simulated experiments. Section 5 discusses some findings from experimental results. Section 6 illustrates related works and Section 7 concludes this paper.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>PROBLEM FORMULATION</ns0:head><ns0:p>In this paper, we focus on the DE3C environment, which is composed of the device tier, the edge tier, and the cloud tier, as shown in Fig 1 <ns0:ref type='figure'>.</ns0:ref> In the device tier, a user launches one or more request tasks on its device, and processes these tasks locally if the device has available computing resources. Otherwise, it offloads tasks to an edge or a cloud. In the edge tier, there are multiple edge computing centres (edges for short).</ns0:p><ns0:p>Each edge has network connections with several user devices and consists of a very few number of servers for processing some offloaded tasks. In the cloud tier, there are various types of cloud servers, usually in form of virtual machine (VM). The DE3C service provider, i.e., a cloud user, can rent some instances with any of VM types for processing some offloaded tasks. The cloud usually has a poor network performance. </ns0:p><ns0:formula xml:id='formula_0'>i, j (i &#8712; [1, M], j &#8712; [M + 1, M + E +V ])</ns0:formula><ns0:p>for transmitting data from device n i to node n j , which can be easily calculated according to the transmission channel state data <ns0:ref type='bibr' target='#b11'>(Du et al., 2019)</ns0:ref>. If a device is not covered by an edge, i.e. there is no network connection between them, the corresponding bandwidth is set as 0.</ns0:p><ns0:p>In the DE3C environment, there are T tasks, t 1 ,t 2 , ...,t T , requested by users for processing. We use binary constants</ns0:p><ns0:formula xml:id='formula_1'>x o,i , &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M],</ns0:formula><ns0:p>to represent the ownership relationships between tasks and devices, as defined in Eq.( <ns0:ref type='formula' target='#formula_2'>1</ns0:ref>), which is known.</ns0:p><ns0:formula xml:id='formula_2'>x o,i = 1, if t o is launched by n i 0, else , &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M]. 
(<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>Task t o has r o computing length for processing its input data with size a o , and requires that it must be finished within the deadline d o . 1 Without loss of generality, we assume that d 1 &#8804; d 2 &#8804; ... &#8804; d T . To make our approach universal applicable, we assume there is no relationship between the computing length and the input data size for each task.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Task Execution Model</ns0:head><ns0:p>When t o is processed locally, there is no data transmission for the task, and thus, its execution time is</ns0:p><ns0:formula xml:id='formula_4'>&#964; o = M &#8721; i=1 (x o,i &#8226; r o g i ), &#8704;o &#8712; [1, T ].<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In this paper, we consider that each task exhausts only one core during its execution, as done in many published articles. This makes our approach more universal because it is applicable to the situation that each task can exhaust all resources of a computing node by seeing the node as a core. For tasks with elastic degree of parallelism, we recommend to referring our previous work <ns0:ref type='bibr' target='#b37'>(Wang et al., 2019a)</ns0:ref> which is complementary to this work.</ns0:p><ns0:p>Due to EDF scheme providing the optimal solution for SLA satisfaction maximization in each core <ns0:ref type='bibr' target='#b30'>(Pinedo, 2016)</ns0:ref>, we can assume all tasks assigned to each core are processed in the ascending order of deadlines when establishing the optimization model. With this in mind, the finish time of task t o when it processed locally can be calculated by</ns0:p><ns0:formula xml:id='formula_5'>f t o = M &#8721; i=1 C i &#8721; q=1 (y o,i,q &#8226; o &#8721; w=1 (y w,i,q &#8226; &#964; w )), &#8704;o &#8712; [1, T ],<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>1 In this paper, we focus on hard deadline tasks, and leave soft deadline tasks as a future consideration.</ns0:p></ns0:div> <ns0:div><ns0:head>3/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where y o,i,q indicates whether task t o is assigned to the qth core of node n i , which is defined in Eq. ( <ns0:ref type='formula' target='#formula_6'>4</ns0:ref>).</ns0:p><ns0:p>&#8721; o&#8722;1 w=1 (y w,i,q &#8226; &#964; w ) is the accumulated sum of execution time of tasks which are assigned to qth core of m i and have earlier deadline than t o , which is the start time of t o when it is assigned to the core. Thus,</ns0:p><ns0:formula xml:id='formula_6'>&#8721; o w=1 (y w,i,q &#8226; &#964; w ) is the finish time when t o is assigned to qth core of n i (i &#8712; [1, M]). y o,i,q = 1, if t o is assigned to qth core of n i 0, else , o &#8712; [1, T ], i &#8712; [1, M + E +V ], q &#8712; [1,C i ].<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>As each task cannot be executed by any device which doesn't launch it,</ns0:p><ns0:formula xml:id='formula_7'>C i &#8721; q=1 y o,i,q &#8804; x o,i , &#8704;o &#8712; [1, T ], &#8704;i &#8712; [1, M]. (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>)</ns0:formula><ns0:p>When a task is offloaded to the edge or the cloud tier, it starts to be executed only when both its input data and the core it is assigned are ready. 
Based on the EDF scheme, the ready time of input data for each task assigned to a core in an edge server or a VM can be calculated as</ns0:p><ns0:formula xml:id='formula_9'>rt o = M+E+V &#8721; i=M+1 C i &#8721; q=1 (y o,i,q &#8226; o &#8721; w=1 (y w,i,q &#8226; a w b T w,i )), &#8704;o &#8712; [1, T ],<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where rt o is the ready time of input data for task t o when the task is offloaded to an edge server or a cloud VM, respectively. For ease of our problem formulation, we use b T o,i to respectively represent the network bandwidths of transferring the input data from the device launching</ns0:p><ns0:formula xml:id='formula_10'>t o to n i (i &#8712; [M + 1, M + E +V ]). That is to say, b T o,i = &#8721; M k=1 (x o,k &#8226; b k, j ).</ns0:formula><ns0:p>Then the transmission time of the input data for t o is a o /b T o,i when it is offloaded to n i ). With the EDF scheme, the ready time of the input data for an offloaded task is the accumulated transmission time of all input data of the offloaded task and other tasks which are assigned to the same core and have earlier deadline than the offloaded task, i.e., &#8721; o w=1 (y</ns0:p><ns0:formula xml:id='formula_11'>w,i,q &#8226; a w /b T w,i ) for task t o when it is offloaded to qth core in n i (i &#8712; [M + 1, M + E +V ]).</ns0:formula><ns0:p>In this paper, we don't consider employing the task redundant execution for the performance improvement. Thus, each task can be executed by only one core, i.e.,</ns0:p><ns0:formula xml:id='formula_12'>M &#8721; i=1 C i &#8721; q=1 y o,i,q &#8804; 1, &#8704;o &#8712; [1, T ].<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>We use z o to indicate whether t o is assigned to a core for its execution, where z o = 1 means yes and z o = 0 means no. Then we have</ns0:p><ns0:formula xml:id='formula_13'>z o = M &#8721; i=1 C i &#8721; q=1 y o,i,q , &#8704;o &#8712; [1, T ].<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>And the total number of tasks which are assigned to computing cores for executions is</ns0:p><ns0:formula xml:id='formula_14'>Z = T &#8721; o=1 z o .<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>We use f t o f f o to respectively represent the finish times of t o when it is offloaded to the edge or cloud tier. For t o assigned to a core, the core is available when all tasks that are assigned to the core and have earlier deadline than the task are finished. And thus, the ready time of the core for executing t o is the latest finish time of these tasks, which respectively are</ns0:p><ns0:formula xml:id='formula_15'>rc o = M+E+V &#8721; i=M+1 C i &#8721; q=1 (y o,i,q &#8226; max w&lt;o {y w,i,q &#8226; f t o f f w }), &#8704;o &#8712; [1, T ].<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>The ready time of a task to be executed by the core it assigned to is the latter of the input data ready time and the core available time. The finish time of the task is its ready time plus its execution time. Thus, finish times of offloaded tasks are respectively Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_16'>f t o f f o = max{rt o , rc o } + M+E+V &#8721; i=M+1 C i &#8721; q=1 (y w,i,q &#8226; r o g i ), &#8704;o &#8712; [1, T ].<ns0:label>(11</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Noticing that when a task is assigned to a tier, finish times of the task in other two tiers are both 0, as shown in Eq. ( <ns0:ref type='formula' target='#formula_5'>3</ns0:ref>), and (11). 
Thus, the deadline constraints can be formulated as</ns0:p><ns0:formula xml:id='formula_17'>f t o + f t o f f o &#8804; d o , &#8704;o &#8712; [1, T ]. (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>)</ns0:formula><ns0:p>As the occupied time of each computing node is the latest usage time of its cores 2 , and the usage time of a core is the latest finish time of tasks assigned to it. Therefore, the occupied times of computing nodes are respectively</ns0:p><ns0:formula xml:id='formula_19'>ot i = max q&#8712;[1,C i ] { max o&#8712;[1,T ] {y o,i,q &#8226; f t o }}, &#8704;i &#8712; [1, M], (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:formula xml:id='formula_21'>ot i = max q&#8712;[1,C i ] { max o&#8712;[1,T ] {y o,i,q &#8226; f t o f f o }}, &#8704;i &#8712; [M + 1, M + E +V ].<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Then the total amount of occupied computing resources for task processing is</ns0:p><ns0:formula xml:id='formula_22'>&#920; = M+E+V &#8721; i=1 (ot i &#8226;C i &#8226; g i ). (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>)</ns0:formula><ns0:p>And the overall computing resource utilization of the DE3C system is</ns0:p><ns0:formula xml:id='formula_24'>U = &#8721; T o=1 (y o &#8226; r o ) &#920; , (<ns0:label>16</ns0:label></ns0:formula><ns0:formula xml:id='formula_25'>)</ns0:formula><ns0:p>where the numerator is the accumulated computing length of executed tasks, i.e., the amount of computing resource consumed for the task execution.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Problem Model</ns0:head><ns0:p>Based on above formulations, we can model the task scheduling problem for DE3C as</ns0:p><ns0:formula xml:id='formula_26'>Maximizing Z +U (17) subject to (2) &#8722; (16),<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>where the objective ( <ns0:ref type='formula'>17</ns0:ref>) is maximizing the number of finished tasks (Z), which is considered as the quantifiable indicator of the SLA satisfaction in this paper, and maximizing the overall computing resource utilization (U) when the finished task number cannot be improved (noticing that the resource utilization is no more than 1). The decision variables include y o,i,q</ns0:p><ns0:formula xml:id='formula_27'>(q &#8712; [1,C i ], i &#8712; [1, M + E +V ], o &#8712; [1, T ]</ns0:formula><ns0:p>). This problem is binary nonlinear programming (BNLP), which can be solved by existing tools, e.g., lp solve <ns0:ref type='bibr' target='#b5'>(Berkelaar et al., 2020)</ns0:ref>. But these tools are not applicable to large-scale problems, as they are implemented based on branch and bound. Therefore, we propose a task scheduling method based on an integer PSO algorithm to solve the problem in a reasonable time in the next section.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>IPSO BASED TASK SCHEDULING</ns0:head><ns0:p>In this section, we present our integer PSO algorithm (IPSO) based task scheduling method in DE3C environments to improve the SLA satisfaction and the resource efficiency. Our proposed method, outlined in Algorithm 1, first employs IPSO to achieve the particle position providing the global best fitness value in Algorithm 3, where the position of each particle is the code of the assignment of tasks to cores. Then our IPSO based method can provide a task scheduling solution according to the task assignment get from the previous step by exploiting EDF scheme for task ordering in each core, as shown in Algorithm 2. 
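To make this two-step structure more concrete, the sketch below shows, under simplifying assumptions, how a fixed task-to-core assignment can be evaluated in the spirit of Algorithm 2 and the objective (17): tasks on each core are ordered by EDF, tasks that would miss their deadlines are rejected, and the number of finished tasks Z and the utilization U are returned. All names (`Task`, `Core`, `evaluate_assignment`) are illustrative and not from the paper, and the transmission model is reduced to a single per-task transfer on the assigned core's link, with an infinite bandwidth standing in for local execution.

```python
from dataclasses import dataclass

@dataclass
class Task:
    length: float      # computing length r_o
    data_size: float   # input data size a_o
    deadline: float    # hard deadline d_o

@dataclass
class Core:
    capacity: float    # computing capacity g_i
    bandwidth: float   # link bandwidth to this core; float('inf') models local execution

def evaluate_assignment(assignment):
    """Evaluate a fixed task-to-core assignment in the spirit of Algorithm 2.

    Per core: order tasks by EDF, accumulate the input-data transfer time
    (cf. Eq. 6), start a task when both its data and the core are ready
    (cf. Eq. 11), and reject it if its finish time exceeds the deadline
    (Eq. 12). Returns Z (finished tasks) and U (executed length divided by
    the occupied computing capacity, cf. Eqs. 13-16).
    """
    finished, executed_length, occupied = 0, 0.0, 0.0
    for core, tasks in assignment:
        data_ready, core_ready = 0.0, 0.0
        for t in sorted(tasks, key=lambda t: t.deadline):            # EDF per core
            candidate_data_ready = data_ready + t.data_size / core.bandwidth
            finish = max(candidate_data_ready, core_ready) + t.length / core.capacity
            if finish <= t.deadline:                                  # deadline met: accept
                finished += 1
                executed_length += t.length
                data_ready, core_ready = candidate_data_ready, finish
            # otherwise the task is rejected and occupies neither link nor core
        occupied += core_ready * core.capacity                        # occupied time x capacity
    utilization = executed_length / occupied if occupied > 0 else 0.0
    return finished, utilization                                      # Z and U in objective (17)

if __name__ == "__main__":
    device_core = Core(capacity=1.5, bandwidth=float("inf"))          # local: no data transfer
    edge_core = Core(capacity=2.5, bandwidth=50.0)
    assignment = [
        (device_core, [Task(300, 40, 250), Task(450, 100, 600)]),
        (edge_core,   [Task(900, 80, 400), Task(1300, 100, 600)]),
    ]
    z, u = evaluate_assignment(assignment)
    print(f"finished tasks Z = {z}, utilization U = {u:.3f}")
```

In this toy instance the last edge task misses its deadline and is rejected, which is exactly the case the objective (17) penalizes through a smaller Z.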
In our IPSO, to quantify the quality of particles, we define the fitness function as the objective (17) of the problem we concerned,</ns0:p><ns0:formula xml:id='formula_28'>f n = Z +U, (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>)</ns0:formula><ns0:p>where Z is the number of finished tasks, and U is the overall computing resource utilization. In the followings, we will present the integer encoding and decoding approach exploited by the IPSO in section 3.1, and the detail of IPSO in section 3.2.</ns0:p><ns0:p>2 In this paper, to avoid the negative effects on task execution performance, we don't consider to use dynamic frequency scaling technologies for computing energy saving.</ns0:p></ns0:div> <ns0:div><ns0:head>5/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 1 IPSO based task scheduling Input: The information of tasks and resources in the DE3C system; the integer encoding and decoding method. Output: A task scheduling solution.</ns0:p><ns0:p>1: achieving the global best particle position by IPSO (see Algorithm 3); 2: decoding the position into a task scheduling solution by Algorithm 2; 3: return the task scheduling solution; </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Integer Encoding and Decoding</ns0:head><ns0:p>Our IPSO exploits the integer encoding method to convert a task assignment to cores into the position information of a particle. We first respectively assign sequence numbers to tasks and cores, both starting from one, where the task number is corresponding to a dimension of a particle position, and the value of a dimension of a particle position is corresponding to the core the corresponding task assigned to. For cloud VMs, we only number one core for each VM type as all VM instances with a type have identical price-performance ratio in real world, e.g., a1.* in Amazon EC2 3 . For each task, the number of cores which it can be assigned to is the accumulated core number of the device launching it and the edge servers having connections with the device plus the number of VM types. Thus, for each dimension in each particle position, the minimal value (p min d ) is one representing the task assigned to the first core of the device launching the task, and the maximal value (p max d ) is</ns0:p><ns0:formula xml:id='formula_30'>p max d = M &#8721; i=1 (x d,i &#8226;C i ) + M &#8721; i=1 M+E &#8721; j=M+1 &#8721; b i, j &gt;0 (x d,i &#8226;C j ) + NV, (<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_31'>)</ns0:formula><ns0:p>where NV is the number of cloud VM types. The subscript d represents the dimension in each particle position. The dth dimension is corresponding to the dth task, t d .</ns0:p><ns0:p>For example, as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, assuming a DE3C consisting of two user devices, one edge server, and one cloud VM type. Each of these two devices and the edge server has two computing cores, respectively represented as dc 11 and dc 12 for the first device, dc 21 and dc 22 for the second device, and ec 1 and ec 2 for the edge server. Both devices have network connections with the edge, that is to say, tasks launched by these two devices can be offloaded to the edge for processing. Then for each task, there are five cores that the task can be assigned to, i.e., p max d = 5 for all d. 
Each device launches three tasks, where the first three tasks, t 1 , t 2 , and t 3 , are launched by the first device, and the last three tasks, t 4 , t 5 , and t 6 , are launched by the second device. Then we numbered two cores of each device as 1 and 2, respectively. Two cores of the edge server are numbered as 3 and 4, respectively. And the VM type is numbered as 5. By this time, the particle position [2, 3, 2, 1, 4, 5] represents t 1 is assigned to dc 12 , t 2 is assigned to ec 1 , t 3 is assigned to dc 12 , t 4 is assigned to dc 21 , t 5 is assigned to ec 2 , and t 6 is assigned to the cloud.</ns0:p><ns0:p>Given a particle position, we use the following steps to convert it to a task scheduling solution, as outlined in Algorithm 2: (i) we decode the position to the task assignment to cores based on the correspondence between them, illustrated above; (ii) with the task assignment, we conduct EDF for ordering the task execution on each core, which rejects tasks whose deadline cannot be satisfied by the core, as shown in lines 2-3 of Algorithm 2. By now, we achieve a task scheduling solution according to the particle position. After this, we can calculate the number of tasks whose deadlines are satisfied, and the overall utilization using Eq. ( <ns0:ref type='formula' target='#formula_19'>13</ns0:ref>)-( <ns0:ref type='formula' target='#formula_24'>16</ns0:ref>). And then, we can achieve the fitness of the particle using Eq. ( <ns0:ref type='formula' target='#formula_28'>19</ns0:ref>). sorting the task execution order in the descending on the deadline; 4: calculating the fitness using Eq. ( <ns0:ref type='formula' target='#formula_28'>19</ns0:ref>); 5: return the task scheduling solution and the fitness;</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>IPSO</ns0:head><ns0:p>The IPSO, exploited by our method to achieve the best particle position providing the global best fitness, consists of the following steps, as shown in Algorithm 3. 1) Initializing the position and the fly velocity for each particle randomly, and calculating its fitness.</ns0:p><ns0:p>2) Setting the local best position and fitness as the current position and fitness for each particle, respectively.</ns0:p><ns0:p>3) Finding the particle providing the best fitness, and setting the global best position and fitness as its position and fitness. 4) If the number of iterations don't reach the maximum predefined, repeat step 5)-7) for each particle. 5) For the particle, updating its velocity and position respectively using Eq. ( <ns0:ref type='formula' target='#formula_32'>21</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_34'>22</ns0:ref>), and calculating its fitness. Where &#965; i,d and p i,d are representing the velocity and the position of ith particle in dth dimension. lb i,d is the local best position for ith particle in dth dimension. gb d is the global best position in dth dimension. &#969; is the inertia weight of particles. We exploit linearly decreasing inertia weight in this paper, due to its simplicity and good performance <ns0:ref type='bibr' target='#b14'>(Han et al., 2010)</ns0:ref>. a 1 and a 2 are the acceleration coefficients, which push the particle toward local and global best positions, respectively. r 1 and r 2 are two random values in the range of [0, 1]. 
To rationalize the updated position in dth dimension, we perform rounding operation and modulo p max i plus 1 on it.</ns0:p><ns0:formula xml:id='formula_32'>&#965; i,d = &#969; &#8226; &#965; i,d + a 1 &#8226; r 1 &#8226; (lb i,d &#8722; &#965; i,d ) + a 2 &#8226; r 2 &#8226; (gb d &#8722; &#965; i,d ), (<ns0:label>21</ns0:label></ns0:formula><ns0:formula xml:id='formula_33'>)</ns0:formula><ns0:formula xml:id='formula_34'>p i,d = &#8968;p i,d + &#965; i,d &#8969; mod p max i + 1. (<ns0:label>22</ns0:label></ns0:formula><ns0:formula xml:id='formula_35'>)</ns0:formula><ns0:p>The inertia weight update strategy and the values of various parameters (e.g., a 1 and a 2 ) have influences on the performance of PSO, which is one of our future works. One is advised to read related latest works, e.g., <ns0:ref type='bibr' target='#b16'>(Houssein et al., 2021a;</ns0:ref><ns0:ref type='bibr' target='#b29'>Nabi and Ahmed, 2021)</ns0:ref>, if interested to follow the details. 6) For the particle, comparing its current fitness and local best fitness, and updating the local best fitness and position respectively as the greater one and the corresponding position.</ns0:p><ns0:p>7) Comparing the local best fitness with the global best fitness, and updating the global best fitness and position respectively as the greater one and the corresponding position.</ns0:p><ns0:p>For updating particle positions, we only discretize them and limit them to reasonable space, which is helpful for preserving the diversity of particles. Existing discrete PSO methods limit both positions and velocity for particles, and exploit the interception operator for the limiting, which sets a value as the minimum and the maximum when it is less than the minimum and greater than the maximum, respectively.</ns0:p><ns0:p>This makes the possibilities of the minimum and the maximum for particle positions are much greater than that of other possible values, and thus reduces the particle diversity, which can reduce the performance of PSO.</ns0:p></ns0:div> <ns0:div><ns0:head>7/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 3 IPSO Input: the parameters of the IPSO.</ns0:p><ns0:p>Output: The global best particle position. 1: generating the position and the velocity of each particle randomly; 2: calculating the fitness of each particle by Algorithm 2; 3: initializing the local best solution as the current position and the fitness for each particle; 4: initializing the global best solution as the local best solution of the particle providing the best fitness; 5: while the iterative number don't reach the maximum do 6:</ns0:p><ns0:p>for each particle do 7:</ns0:p><ns0:p>updating the position using Eq. ( <ns0:ref type='formula' target='#formula_32'>21</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_34'>22</ns0:ref>); 8:</ns0:p><ns0:p>calculating the fitness of each particle by Algorithm 2;</ns0:p><ns0:p>9:</ns0:p><ns0:p>updating the local best solution;</ns0:p><ns0:p>10:</ns0:p><ns0:p>updating the global best solution; 11: return the global best solution;</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Complexity Analysis</ns0:head><ns0:p>As shown in Algorithm 3, there are two layers loop, which has O(IT R &#8226; NP) time complexity, where IT R and NP are the numbers of the iteration and particles, respectively. 
Within the loop, the most complicated part is calling Algorithm 2 which is O(NC &#8226; (T /NC) 2 ) = O(T 2 /NC) in time complexity on average, as shown in its lines 2-3, where NC is the number of cores in the DE3C system. T /NC is the average number of tasks assigned to each core, and O((T /NC) 2 ) is the time complexity of EDF for each core on average. Thus, the time complexity of our IPSO based method is O(IT R &#8226; NP &#8226; T 2 /NC) on average, which is quadratically increased with the number of tasks.</ns0:p><ns0:p>For PSO or GA with binary encoding method, they have similar procedures to IPSO, and thus their time complexities are also O(IT R &#8226; NP &#8226; T 2 /NC). Referring to <ns0:ref type='bibr' target='#b3'>(Bays, 1977;</ns0:ref><ns0:ref type='bibr' target='#b6'>B.V. and Guddeti, 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Benoit et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b23'>Liu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Meng et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Mahmud et al., 2020)</ns0:ref>, the time complexity of First Fit (FF) is O(T * NC), and that of First Fit Decreasing (FFD), Earliest Deadline First (EDF), Earliest Finish Time First (EFTF), Least Average Completion Time (LACT), and Least Slack Time First (LSTF) are O(T 2 * NC). In general, the numbers of the iteration and particles are constants, and all of above methods except FF exhibit quadratic complexity with the number of tasks.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>In this section, we conduct extensive experiments in a DE3C environment simulated referring to published articles and the reality, to evaluate the performance of our method in SLA satisfaction and resource efficiency. In the simulated environment, there are 20 devices, 2 edges, and one cloud VM type, and each device is randomly connected with one edge. For each edge, the number of servers are randomly generated in the range of <ns0:ref type='bibr'>[1,</ns0:ref><ns0:ref type='bibr'>4]</ns0:ref>. The computing capacity of each core is randomly set in the ranges of [1, 2]GHz, [2, 3]GHz, and [2, 3]GHz, respectively, for each device, each edge server, and the VM type.</ns0:p><ns0:p>The number of tasks launched by each device is generated randomly in the range of [1, 100], which results in about 1000 total tasks on average in the system. The length, the input data size, and the deadline of each task is generated randomly in the ranges of <ns0:ref type='bibr'>[100,</ns0:ref><ns0:ref type='bibr'>2000]</ns0:ref>GHz, [20, 500]MB and [100, 1000]seconds, respectively. For network connections, the bandwidths for transmitting data from a device to an edge and the cloud are randomly set in ranges of <ns0:ref type='bibr'>[10,</ns0:ref><ns0:ref type='bibr'>100]</ns0:ref>Mbps and [1, 10]Mbps, respectively.</ns0:p><ns0:p>There are several parameters should be set when implementing our IPSO. Referring to <ns0:ref type='bibr' target='#b19'>(Kumar et al., 2020)</ns0:ref>, we set the maximum iteration number and the particle number as 200 and 50, respectively. Both acceleration coefficients, a 1 and a 2 , are set as 2.0, referring to <ns0:ref type='bibr' target='#b41'>(Wang et al., 2021a)</ns0:ref>. The &#969; inertia weight is linearly decreased with the iterative time in the range of [0.0, 1.4], referring to <ns0:ref type='bibr' target='#b47'>(Yu et al., 2021)</ns0:ref>. 
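To illustrate how these settings enter the update rules (21)-(22), the following sketch performs a few IPSO updates with a1 = a2 = 2.0 and a linearly decreasing inertia weight. The function names (`ipso_update`, `linear_inertia`) are hypothetical, and the cognitive and social terms are taken relative to the current position, as in standard PSO, which we assume is the intended reading of Eq. (21).

```python
import math
import random

def ipso_update(position, velocity, local_best, global_best,
                p_max, omega, a1=2.0, a2=2.0):
    """One IPSO velocity/position update per Eqs. (21)-(22), sketched.

    Positions are rationalized by rounding and a modulo operation so that each
    dimension stays in [1, p_max[d]], i.e. a valid core index for that task.
    """
    new_pos, new_vel = [], []
    for d, (p, v) in enumerate(zip(position, velocity)):
        r1, r2 = random.random(), random.random()
        v = omega * v + a1 * r1 * (local_best[d] - p) + a2 * r2 * (global_best[d] - p)
        p = math.ceil(p + v) % p_max[d] + 1        # Eq. (22): round, then modulo plus 1
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel

def linear_inertia(iteration, max_iter, w_max=1.4, w_min=0.0):
    """Linearly decreasing inertia weight over the iterations, as in the experiments."""
    return w_max - (w_max - w_min) * iteration / max_iter

if __name__ == "__main__":
    random.seed(0)
    p_max = [5] * 6                                # five candidate cores per task
    pos, vel = [2, 3, 2, 1, 4, 5], [0.0] * 6       # the worked example position
    lb, gb = [2, 3, 2, 1, 4, 5], [1, 3, 2, 2, 4, 5]
    for it in range(3):
        pos, vel = ipso_update(pos, vel, lb, gb, p_max, linear_inertia(it, 200))
        print(it, pos)
```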
The effect of parameter settings on the performance of our method will be studied in the future.</ns0:p><ns0:p>We compare our method with the following classical and recently published methods.</ns0:p><ns0:p>&#8226; First Fit <ns0:ref type='bibr' target='#b3'>(Bays, 1977)</ns0:ref>, FF, iteratively schedules a task to the first computing core satisfying its deadline.</ns0:p><ns0:p>&#8226; First Fit Decreasing (B.V. and Guddeti, 2018), FFD, iteratively schedules the task with maximal computing length to the first computing core satisfying its deadline.</ns0:p></ns0:div> <ns0:div><ns0:head>8/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Earliest Deadline First <ns0:ref type='bibr' target='#b4'>(Benoit et al., 2021)</ns0:ref>, EDF, iteratively schedules the task with earliest deadline to the first computing core satisfying its deadline, which is a classical heuristic method concerning the deadline constraint.</ns0:p><ns0:p>&#8226; Earliest Finish Time First, EFTF, iteratively schedules a task to the computing core providing the earliest finish time and satisfying its deadline, which is the basic idea exploited in the work proposed by <ns0:ref type='bibr' target='#b23'>Liu et al. (Liu et al., 2019)</ns0:ref>.</ns0:p><ns0:p>&#8226; Least Average Completion Time, LACT, iteratively schedules a task to the computing core satisfying its deadline, such that the average completion time of scheduled tasks is minimal, which is the basic idea exploited by Dedas <ns0:ref type='bibr' target='#b26'>(Meng et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; Least Slack Time First, LSTF, iteratively schedules a task to the computing core satisfying its deadline and providing the least slack time for the task, which is the basic idea exploited in the work proposed by <ns0:ref type='bibr' target='#b25'>Mahmud et al. (Mahmud et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; Genetic Algorithm, GA, which simulates the population evolution by crossover, mutation and selection operators, where a chromosome represents a task assignment to cores, and a gene is a bit representing the assignment of a task to a core. This is the basic idea used in the work of Aburukba et al. <ns0:ref type='bibr' target='#b0'>(Aburukba et al., 2020)</ns0:ref>. The number of population and the maximum generation are set as 1000 in our experiments. EDF is used for task ordering in each core.</ns0:p><ns0:p>&#8226; PSO srv, employ PSO with the integer coding of the task assignment to computing nodes, which is the basic idea exploited by existing PSO based task scheduling for DE3C, such as <ns0:ref type='bibr' target='#b45'>(Xie et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Parameters have same settings to our IPSO method in the experiment. The assignment of tasks to cores in each computing node and the task ordering in each core are solved by EDF.</ns0:p><ns0:p>&#8226; BPSO, employ PSO with the binary coding that is similar to GA. Parameter values are set as our IPSO method in the experiment. 
EDF is used to order the task execution in each core.</ns0:p><ns0:p>We compare the performance of these task scheduling method in the following aspects.</ns0:p><ns0:p>&#8226; SLA satisfaction is quantified by the number, the accumulated computing length, and the processed data size of completed tasks.</ns0:p><ns0:p>&#8226; Resource efficiency is quantified by the resource utilization for the overall system, and the cost efficiency for cloud VMs. The price of a VM instance is $0.1 per hour.</ns0:p><ns0:p>&#8226; Processing efficiency is quantified by the executed computing length and the processed data size of tasks per unit processing time, which are the ratios of completed computing length and processed data size to the makespan, respectively.</ns0:p><ns0:p>For each group of experiments, we repeat it ten times, and report the median result in the followings.</ns0:p><ns0:p>For each metric value achieved by each task scheduling method, we scale it by that of FF to display the relative difference between these methods more clearly. The details of experiment results are shown as followings. Our method is abbreviated to IPSO in the followings.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>SLA Satisfaction</ns0:head><ns0:p>Fig 3 shows the performance of various task scheduling methods in maximizing the accumulated number, computing length, and processed data size of finished tasks. As shown in the figure, our method has about 36%, 46%, and 55% better performance than these heuristic methods, FF, FFD, EDF, EFTF, LACT, and LSTF, in these three SLA satisfaction metrics, respectively. This illustrates that meta-heuristic methods can achieve a much better performance than heuristic methods, due to their randomness for global searching ability. While, the performance of GA, PSO srv, and BPSO are much worse than that of these heuristic methods, such as, our method achieves 953%, 115%, and 322% than GA, PSO srv, and BPSO, respectively, more completed task number. This suggests us that meta-heuristic approaches must be carefully designed for a good performance.</ns0:p><ns0:p>Of these heuristic methods, EDF has the best performance in SLA satisfaction, as shown in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, due to its aware of the task deadline, which is the reason why we exploit it for task ordering in each core. Compared with BPSO, IPSO has 322%, 523%, and 313% better performance in three SLA satisfaction metrics, respectively. This provides experimental evidence that our integer coding method significantly improves the performance of PSO for task scheduling in DE3C. The main reason is that the searching space BPSO is much larger than IPSO due to their different represents to the same problem. For example, if there are 6 candidate cores, e.g., two device cores, four edge server cores, and one cloud VM type, for processing every task in a DE3C, then considering there are 10 tasks, the search space includes 6 10 solutions for IPSO, while 2 60 for BPSO. Thus, in this case, the search space of BPSO is more than 330 million times larger than that of IPSO, and this multiple exponentially increases with the numbers of tasks and candidate cores of each task. Therefore, for an optimization problem, IPSO has much probability of searching a local or global best solution than BPSO. This is also the main reason why GA has much worse performance than IPSO, as shown in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>.</ns0:p><ns0:p>PSO srv has smaller search space while worse performance than IPSO. 
As shown in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, IPSO has 115%, 166%, and 108% better performance than PSO srv in three SLA satisfaction metrics, respectively. This is mainly because the coding of the task assignment to a coarse-grained resource cannot take full advantage of the global searching ability of PSO. Google's previous work has verified that fine-grained resource allocation helps to improve the resource efficiency <ns0:ref type='bibr' target='#b36'>(Tirmazi et al., 2020)</ns0:ref>, thus our IPSO uses the core as the granularity of resources during the searching process, which results in a better performance in SLA satisfaction optimization compared with other methods.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Resource Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>4a</ns0:ref> and b respectively show the overall resource utilization and the cost efficiency when using various task scheduling methods in the simulated DE3C environment. As shown in the figure, our method has the best performance in optimizing both resource efficiency metric values, where our method has 36.9%-964% higher resource utilization and 5.6%-143% greater cost efficiency, compared with other methods (GA has zero cost efficiency as its solution does not offload any task to the cloud). This is mainly because the resource utilization is our second optimization objective (see Eq. 17 or 19), which results in that the solution having a higher utilization has a greater fitness in all of the solution with same number of finished tasks.</ns0:p><ns0:p>EDF has a worse cost efficiency than other heuristic methods. This may be because EDF offloads more tasks to the cloud, which leads to a greater ratio of the data transfer time and the computing time in the cloud, and thus results in a poor cost efficiency. The idea of offloading tasks with small input data size to the cloud helps to improve the cost efficiency, which is one of our consideration for designing heuristics or hybrid heuristics with high effectiveness. This is the main reason why EFTF has the best cost efficiency in all of these heuristics, because EFTF iteratively assigns the task with minimal finish time, which usually having small input data size, when making the offloading and assignment decisions in the cloud.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Processing Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>5</ns0:ref> respectively show the values of the two processing efficiency metrics when applying various task scheduling methods. As shown in the figure, these two processing efficiency metric values respectively have a similar relative performance to the corresponding SLA satisfaction metric values for these scheduling methods, as shown in Fig. <ns0:ref type='figure' target='#fig_3'>3b and c</ns0:ref>. This is because all of these methods have comparable performance in the makespan. Thus, our method also has the best performance in processing efficiency.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>Meta-heuristics, typified by PSO, can achieve better performance than heuristics. This achievement is mainly due to their global search ability which is implemented by the randomness and the converging to the global solution during their iteratively search. 
But the meta-heuristic based method must be designed carefully, otherwise, it may have worse performance than heuristics, such GA, PSO srv and BPSO, as presented in experimental results.</ns0:p><ns0:p>The main difference between IPSO and BPSO is the search space size for a specific problem. This inspires us that more efficient encoding method with short code length helps to reduce the size of search space, and thus improve the probability of finding the global best solution.</ns0:p><ns0:p>One of the main advantages of heuristics is that they are specifically designed for targeted problems. This produces efficient local search strategies. Thus, in some times, heuristics have better performance than meta-heuristics, such as EDF vs. BPSO. Therefore, a promising research direction is integrating a local search strategy into a meta-heuristic algorithm to cover its shortage caused by its purpose of solving general problem. While, different combining of heuristic local searches and global search strategies should result in various performance improvements, which is one of our future studies.</ns0:p></ns0:div> <ns0:div><ns0:head>11/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='6'>RELATED WORKS</ns0:head><ns0:p>As DE3C is one of the most effective ways to solve the problem of insufficient resources of smart devices and task scheduling is a promising technology to improve the resource efficiency, several researchers have focused on the design of efficient task scheduling methods in various DE3C environments <ns0:ref type='bibr' target='#b39'>(Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>To improve the response time of tasks, the method proposed by <ns0:ref type='bibr' target='#b2'>(Apat et al., 2019)</ns0:ref> iteratively assigned the task with the least slack time to the edge server closest to the user. Tasks are assigned to the cloud when they cannot be finished by edges. Their work didn't consider the task scheduling on each server.</ns0:p><ns0:p>OnDisc, proposed by <ns0:ref type='bibr' target='#b15'>(Han et al., 2019)</ns0:ref>, heuristically dispatched a task to the server providing the shortest additional total weighted response time (WRT), and sees the cloud as a server, to improve overall WRT.</ns0:p><ns0:p>For minimizing the deadline violation, the heuristic method proposed by <ns0:ref type='bibr' target='#b34'>(Stavrinides and Karatza, 2019)</ns0:ref> used EDF and earliest finish time first for selecting the task and the resource in each iteration, respectively.</ns0:p><ns0:p>When a task's input data was not ready, the proposed method tried to fill a subsequent task before it.</ns0:p><ns0:p>Above research focused on the performance optimization for task execution, while didn't concern the cost of used resources. In general, a task requires more resources for a better performance, and thus there is a trade-off between the task performance and the resource cost. Therefore, several works concerned the optimization of the resource cost or the profit for service providers. For example, <ns0:ref type='bibr' target='#b7'>(Chen et al., 2020)</ns0:ref> presented a task scheduling method to optimize the profit, where the value of a task was proportional to the resource amounts and the time it took, and resources were provided in the form of VM. 
Their proposed method first classified tasks based on the amount of its required resources by K-means. Then, for profit maximization, their method allocated the VM class to the closest task class, and used Kuhn-Munkres method to solve the optimal matching of tasks and VMs. In their work, all VMs were seen as one VM class. This work ignored the heterogeneity between edge and cloud resources, which may lead to resource inefficiency <ns0:ref type='bibr' target='#b18'>(Kumar et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b21'>Li et al. (Li et al., 2020)</ns0:ref> proposed a hybrid method employing simulated annealing to improve artificial fish swarm algorithm for offloading decision making, and used best fit for task assignment. This work focused on media delivery applications, and thus assumed every task was formed by same-sized subtasks. This assumption limited the application scope. Mahmud et al. <ns0:ref type='bibr' target='#b25'>(Mahmud et al., 2020)</ns0:ref> proposed a method which used edge resources first, and assigned the offloaded task to the first computing node with minimal profit merit value, where the profit merit was the profit divided by the slack time.</ns0:p><ns0:p>All of the aforementioned methods employed only edge and cloud resources for task processing, even though most of user devices have been equipped with various computing resources <ns0:ref type='bibr' target='#b44'>(Wu et al., 2019)</ns0:ref> which have zero transmission latency for users' data. To exploit all the advantages of the local, edge and cloud resources, some works are proposed to address the task scheduling problem for DE3C. The method presented in <ns0:ref type='bibr' target='#b20'>(Lakhan and Li, 2019)</ns0:ref> first tried several existed task order method, e.g., EDF, EFTF, and LSTF, and selected the result with the best performance for task order. Then, the method used existed pair-wise decision methods, TOPSIS <ns0:ref type='bibr' target='#b22'>(Liang and Xu, 2017)</ns0:ref> and AHP <ns0:ref type='bibr' target='#b31'>(Saaty, 2008)</ns0:ref>, to decide the position for each task's execution, and applied a local search method exploiting random searching for the edge/cloud. For improving the delay, the approach presented in <ns0:ref type='bibr' target='#b28'>(Miao et al., 2020)</ns0:ref> first decided the amounts of data that is to be processed by the device and an edge/cloud computing node, assuming each task can be divided into two subtasks with any data size. Then they considered to migrate some subtasks between computing nodes to further improve the delay, for each task. The method proposed in <ns0:ref type='bibr' target='#b48'>(Zhang et al., 2019)</ns0:ref> iteratively assigned the task required minimal resources to the nearest edge server that can satisfy all of its requirements. <ns0:ref type='bibr' target='#b24'>Ma et al. Ma et al. (2022)</ns0:ref> proposed a load balance method for improving the revenue for edge computing. The proposed method allocated the computing resources of the edge node with the most available cores and the smallest move-up energy to the new arrived task. To improve the total energy consumption for executing deep neural networks in DE3C with deadline constraints, Chen et al. <ns0:ref type='bibr' target='#b8'>Chen et al. (2022)</ns0:ref> proposed a particle swarm optimization algorithm using mutation and crossover operators for population update. <ns0:ref type='bibr' target='#b42'>Wang et al. Wang et al. 
(2021b)</ns0:ref> leveraged reinforcement learning with sequence-to sequence neural network for improving the latency and the device energy in DE3C. Machine learning-based or metaheuristic-based approaches may achieve a better performance than heuristics, but in general, they consume hundreds to tens of thousands more time, which makes them not applicable to make online scheduling decisions. Different from these above works, in this paper, we design an Integer PSO based hybrid heuristic method for DE3C systems. Our work is aiming at optimizing SLA satisfaction and resource efficiency, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and trying to jointly address the problems of offloading decision, task assignment, and task ordering.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this paper, we study on the optimization of the SLA satisfaction and the resource efficiency in DE3C environments by task scheduling. We formulate the concerned optimization problem as a BNLP, and propose an integer PSO based task scheduling method to solve the problem with a reasonable time.</ns0:p><ns0:p>Different from existing PSO based method, our method exploits the integer coding of the task assignment to cores, and rationalizes the position of each particle by rounding and modulo operation to preserve the particle diversity. Simulated experiment results show that our method has better performance in both SLA satisfaction and resource efficiency compared with nine classical and recently published methods.</ns0:p><ns0:p>The main advantages of our method are the efficient encoding method and the integration of metaheuristic and heuristic. In the future, we will continue to study on more effective encoding methods and try to design hybrid methods by hybridizing meta-heuristic and heuristic search strategy for a better performance.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of device-edge-cloud cooperative computing.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>3 https://aws.amazon.com/ 6/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022) Manuscript to be reviewed Computer Science Algorithm 2 Converting a particle position to a task scheduling solution Input: The particle position, [gb 1 , gb 2 , ..., gb T ]. Output: The task scheduling solution and the fitness.1: decoding the position to the assignment of tasks to computing cores; 2: for each core do 3:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The SLA satisfaction performance achieved by various scheduling methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. The resource efficiency achieved by various scheduling methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66515:2:0:NEW 22 Jan 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Response to Reviewers’ Comments Thank you for your letter and for the reviewers’ comments concerning our manuscript entitled “Integer particle swarm optimization based task scheduling for device-edge-cloud cooperative computing to improve SLA satisfaction”. These comments are all valuable and very helpful for improving our paper, as well as the important guiding significance to our researches. We have studied comments carefully and have made corrections which we hope meet with approval. The responds to the reviewer’s comments are as follows. Reviewer 1 [Q1] 1. Previous comments regarding the abstract have been incorporated, however, it is recommended to start your abstract from an introductory sentence (as in the previous one) instead of a sentence like “In this paper”. [A1] Thanks for the comment. We have improved the abstract. [Q2] Authors have provided quantitative results in the abstract and in the contribution section, however, the improvements shown in the different sections are different. Moreover, in the abstract, the term “x” is used while in the contribution section the symbol “%” is used, correct them or justify if it is different? [A2] Thanks for the comment. We have presented the improvements in a consistent form in the different sections in our revised manuscript, where the symbol “%” is used in all sections. [Q3] In the last paragraph of section 1 “introduction”, Section 6 has mentioned two times? [A3] Thanks for the comment. We have amended this issue in our revised manuscript. [Q4] The symbol “Z” and “U” used in equations 17 and 19 have not been explained in the text. [A4] Thanks for the comment. We have added explanations of them in the revised manuscript. [Q5] In response to previous comments “Do you think inertia weight has a role in PSO algorithm? Which variant of inertia weight you have used for your algorithm. Detail about inertia weight and other terms like r1, r2, c1, c2 not given (lines 212-213)”, it is mentioned that it is out of scope of this paper. I agree up to some extent that you are not working the inertia weight strategies but Inertia weight is one of the most important control parameters for maintaining the balance between the global and local search of the PSO, which can affect the performance of the PSO. Moreover, the values of constants like c1 and c2 also matter. Therefore, it is suggested to use the latest variant of inertia weight strategy or at least refer/cite some latest papers in the text so that readers can explore them if interested to follow the details, some of the latest papers in this regard are given below. https://doi.org/10.1007/s11227-021-04062-2. https://doi.org/10.1007/s10586-019-02983-5 [A5] Thanks for the comment. We have added some latest papers for readers to follow the details. Reviewer 2 [Q6] The language used in the paper is very poor. The language of the whole paper still needs to be improved. [A6] We are very sorry for the mistakes in this manuscript and inconvenience they caused in your reading. The manuscript has been thoroughly revised, so we hope it can meet the journal’s standard. Thanks so much for your useful comments. [Q7] The reviewer has not incorporated changes to the article based on my first comment 'The authors claimed that his approach has better resource efficiency. However, the results concerning resource efficiency are not meaningful. The authors are required to obtain results concerning Average resource utilization which will be between 0 to 1 or 0 to 100 %. ' [A7] Thanks for the comment. 
Table 1 shows the resource utilizations obtained in the ten repeated experiments; all values lie between 0 and 1, as the reviewer required. Because we focus on the relative performance of the different scheduling methods, we scale each metric value by that of FF, as shown in Table 2, and report the median value (the last line of Table 2) in the paper.

Table 1. The resource utilizations achieved by various methods in ten repeated experiments.

Run   FF      FFD     EDF     EFTF    LACT    LSTF    GA      PSO_srv  IPSO
1     0.7907  0.7569  0.8246  0.7756  0.8116  0.7899  0.4048  0.8782   0.9185
2     0.7815  0.7724  0.8104  0.7368  0.7839  0.7812  0.4194  0.8941   0.9225
3     0.8014  0.8006  0.8325  0.7700  0.8094  0.7878  0.4165  0.8821   0.9222
4     0.8015  0.7794  0.8238  0.7619  0.8047  0.7899  0.4373  0.8966   0.9300
5     0.7817  0.7699  0.8300  0.7458  0.7949  0.7768  0.4297  0.8748   0.9226
6     0.7690  0.7667  0.8073  0.7566  0.7717  0.7612  0.4035  0.8756   0.9162
7     0.7824  0.7543  0.8025  0.7422  0.7865  0.7743  0.4103  0.8770   0.9277
8     0.7838  0.7694  0.8089  0.7493  0.7973  0.7795  0.4104  0.8947   0.9252
9     0.7832  0.7707  0.7891  0.7531  0.7901  0.7785  0.4171  0.8773   0.9051
10    0.8139  0.7746  0.8274  0.7677  0.8105  0.7978  0.4161  0.8829   0.9226

Table 2. The resource utilizations scaled by that of FF in ten repeated experiments.

Run     FF  FFD    EDF    EFTF   LACT   LSTF   GA     PSO_srv  IPSO
1       1   0.957  1.043  0.981  1.026  0.999  0.512  1.111    1.162
2       1   0.988  1.037  0.943  1.003  1.000  0.537  1.144    1.180
3       1   0.999  1.039  0.961  1.010  0.983  0.520  1.101    1.151
4       1   0.972  1.028  0.951  1.004  0.986  0.546  1.119    1.160
5       1   0.985  1.062  0.954  1.017  0.994  0.550  1.119    1.180
6       1   0.997  1.050  0.984  1.004  0.990  0.525  1.139    1.191
7       1   0.964  1.026  0.949  1.005  0.990  0.524  1.121    1.186
8       1   0.982  1.032  0.956  1.017  0.994  0.524  1.141    1.180
9       1   0.984  1.008  0.962  1.009  0.994  0.533  1.120    1.156
10      1   0.952  1.017  0.943  0.996  0.980  0.511  1.085    1.134
median  1   0.983  1.034  0.955  1.007  0.992  0.525  1.120    1.171

[Q8] The authors have claimed almost 10 times improvement against the state-of-the-art approaches which seems incorrect. The authors are required to perform the experiments again and see whether the improvement claimed is same
[A8] Thanks for the comment. Table 2 shows the relative resource utilizations achieved by the various scheduling methods. As shown in the table, IPSO achieves the highest resource utilization in every run. "
Here is a paper. Please give your review comments after reading it.
353
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This research enhances crowd analysis by focusing on excessive crowd analysis and crowd density predictions for Hajj and Umrah pilgrimages. Crowd analysis usually analyzes the number of objects within an image or a frame in the videos and is regularly solved by estimating the density generated from the object location annotations. However, it suffers from low accuracy when the crowd is far away from the surveillance camera. This research proposes an approach to overcome the problem of estimating crowd density taken by a surveillance camera at a distance. The proposed approach employs a fully convolutional neural network (FCNN)-based method to monitor crowd analysis, especially for the classification of crowd density. This study aims to address the current technological challenges faced in video analysis in a scenario where the movement of large numbers of pilgrims with densities ranging between 7 and 8 per square meter. To address this challenge, this study aims to develop a new dataset based on the Hajj pilgrimage scenario.</ns0:p><ns0:p>To validate the proposed method, the proposed model is compared with existing models using existing datasets. The proposed FCNN based method achieved a final accuracy of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. Additionally, The ResNet based method obtained final accuracy of 97%, 89%, and 97% for the proposed dataset, UCSD dataset, and JHU-CROWD dataset, respectively. The proposed Hajj-Crowd-2021 crowd analysis dataset and the model outperformed the other state-of-the-art datasets and models in most cases.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 38</ns0:head><ns0:p>There is considerable interest among the scientific community regarding hajj crowd evaluation, especially 39 for pedestrians <ns0:ref type='bibr' target='#b15'>(Khan (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b0'>Ahmad et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Ullah et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b35'>Khan et al. (2017)</ns0:ref>). In events such 40 as Hajj, sports, markets, concerts, and festivals, wherein a large number of people gather in a confined 41 space, it is difficult to fully analyze these situations <ns0:ref type='bibr' target='#b35'>(Khan et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Saqib et al. (2017b)</ns0:ref>; Khan analysis is one of the most important and difficult tasks of video monitoring. The most important use of crowd analysis is to calculate the density of a crowd <ns0:ref type='bibr' target='#b28'>(Ravanbakhsh et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Khan et al. (2016a)</ns0:ref>; <ns0:ref type='bibr' target='#b35'>Saqib et al. (2017c)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Wang et al. (2018</ns0:ref><ns0:ref type='bibr' target='#b55'>Wang et al. ( , 2014))</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Khan et al. (2014)</ns0:ref>).</ns0:p><ns0:p>One of the requests that has garnered much attention from the scientific community is the calculation of crowd density <ns0:ref type='bibr' target='#b49'>(Ullah et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Saqib et al. (2017a)</ns0:ref>). The density of the crowd in public assembly is necessary to provide useful information to prevent overcrowding, which can lead to a higher risk of stampede. 
Acknowledging the importance of estimating the density of crowds, numerous attempts have been numerous attempts to overcome this problem by utilizing efficient algorithms <ns0:ref type='bibr' target='#b30'>(Sabokrou et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Ramos et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Rota et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>Ullah and Conci (2012a)</ns0:ref>). This leads researchers to conduct studies and review various crowd density estimation approaches <ns0:ref type='bibr' target='#b61'>(Zhang et al. (2016b)</ns0:ref>). Researchers have stated that the most stable and effective method of estimating multiple densities in comparison to detection-based methods is texture-based analysis <ns0:ref type='bibr'>(Ullah and Conci (2012b,a)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Ullah et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Khan and Ullah (2010)</ns0:ref>; <ns0:ref type='bibr'>Uzair et al. (2009a,b)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Khan et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Saqib et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>Saleh et al. (2015)</ns0:ref>).</ns0:p><ns0:p>This study seeks to enhance the categorization of hajj pilgrims based on crowd density. An FCNNbased framework for crowd analysis is presented in this study.</ns0:p><ns0:p>Crowd analysis is inherently a multidisciplinary topic, including scientists, psychologists, biologists, public security, and computer vision experts. Computer vision has grown in relevance in the field of deep learning in recent years. The fully convolutional neural network (FCNN), a profound learning model of grid styling data such as images, is one of the most advanced deep learning models. This technique has the advantage of using seizures during neuronal development and image classification. The FCNN algorithm relies heavily on convolutional, polling, and fully connected layers, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p><ns0:p>The convolutional layer is used to represent the image and compute the function mapping. In FCNN, a convolutional layer, which consists of a series of mathematical operations, plays an important role. A polling layer was added after each layer to reduce the resolution of the function mapping. A pool layer is typically used with a sampling approach to reduce the spatial dimension to detect and remove parameters with minimum distortion and function map modifications. Following polling layer sampling, features produced from the convolution layers and such characteristics were created. A totally linked layer of substrates 'flattens' the networks used for the input of the next layer. It also contains neurons that are tightly connected to other neurons in two adjacent layers. The major contributions in this paper included: 1) A fully convolutional neural network (FCNN) was introduced for crowd analysis and estimation of crowd density. We first extracted the frame from the video to estimate crowd density. We sent a full set of images for training and testing and implemented the entire CNN.</ns0:p><ns0:p>2) Created a deep learning architecture to capture spatial features to automatically evaluate crowd density and classify crowds. 
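As a rough illustration of the convolution-pooling-fully connected structure outlined above (and not the exact architecture or hyper-parameters evaluated in this paper), a five-class crowd-density classifier could be sketched in PyTorch as follows; the layer widths, kernel sizes, and class name are placeholders.

```python
import torch
import torch.nn as nn

class CrowdDensityCNN(nn.Module):
    """Illustrative convolution + pooling + fully connected stack for the five
    crowd-density classes (very low, low, medium, high, very high)."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # pooling reduces the spatial resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),          # tolerate frames of varying size
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                          # "flatten" before the dense layers
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = CrowdDensityCNN()
    frames = torch.randn(2, 3, 224, 224)           # a small batch of video frames
    print(model(frames).shape)                     # torch.Size([2, 5])
```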
The remainder of this paper is arranged as follows: Section II presents the related work, Section III expounds on the proposed method, Section IV presents the experiment and result discussion, and Section V presents the conclusion.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>An early version of crowd analysis has shown that the addition of the Hajj crowd density classification to the crowd analysis system improves its robustness. For example, the study by <ns0:ref type='bibr' target='#b12'>Hu et al. (2016)</ns0:ref> a crowd surveillance approach that includes both behavior detection and crowd scene occupation remains to be established. The advantages of multi-task learning have been demonstrated by face analysis <ns0:ref type='bibr' target='#b58'>(Yan et al. (2015)</ns0:ref>), head-pose prediction <ns0:ref type='bibr' target='#b26'>(Pan et al. (2016)</ns0:ref>), and voice recognition <ns0:ref type='bibr' target='#b36'>(Seltzer and Droppo (2013)</ns0:ref>). The next discussion concentrates on what has been done in each area of crowd analysis.</ns0:p><ns0:p>Crowd Analysis: Crowd analysis algorithms are designed to provide an accurate estimate of the real number of people in a crowded image. Crowd analytics has reached a new level of sophistication due to the availability of high-level, high-variable crowd analytics such as UCF CC 50 and the advent of deep network technologies such as convolutional neural networks <ns0:ref type='bibr' target='#b36'>(Seltzer and Droppo (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Idrees et al. (2013a)</ns0:ref>). While most recent methods are used to map pixel values to a single figure <ns0:ref type='bibr' target='#b25'>(Marsden et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b60'>Zhang et al. (2015)</ns0:ref>) directly in crowd analysis, pixel-based heat map analysis has shown an increase in the efficiency of crowd analysis for complex, highly congested scenes <ns0:ref type='bibr' target='#b61'>(Zhang et al. (2016b)</ns0:ref>).</ns0:p><ns0:p>Crowd Density Level Estimation: The degree of congestion in a crowded scene is known as the degree of crowd density. The aspect of a crowded scene usually means a discrete (0-N) value or an ongoing value (0.0-1.0). Texture analysis by Wu et al. used functionality to continuously produce an estimated density level <ns0:ref type='bibr' target='#b12'>(Hu et al. (2016)</ns0:ref>). For the classification of discrete density levels, <ns0:ref type='bibr' target='#b8'>Fu et al. (2015)</ns0:ref>. utilized a deep convolutional neural network. This role mainly involves the task of uncertainty associated with a given density level estimate. It is not possible to set density labels and their unique definitions in datasets using a universal framework. The most straightforward scheme is that of discrete labels in density levels explicitly inferred from real multitude analysis values, leading to a distribution of the histogram style with the smallest subjectivity and human error.</ns0:p><ns0:p>Fully convolutional networks: Fully convolutional networks (FCNs) are a CNN variant that provides a proportionally sized map output rather than a mark of classification or regression of the given picture.</ns0:p><ns0:p>Two examples of functions used by FCNs are semantic segmentation <ns0:ref type='bibr' target='#b8'>(Fu et al. (2015)</ns0:ref>) and saliency prediction <ns0:ref type='bibr' target='#b37'>(Shekkizhar and Lababidi (2017)</ns0:ref>). 
In the FCN training, <ns0:ref type='bibr' target='#b62'>Zhang et al. (2016c)</ns0:ref> converted an image of a crowded scene into a crowd-density heat map, which enables accurate counting even in difficult scenes.</ns0:p><ns0:p>One of the main features of fully convolutional networks, which makes them especially appropriate for crowd analysis, is the use of variable-sized inputs, which spares the model the information loss and visual distortions typical of image downsampling and resizing.</ns0:p><ns0:p>A review of contemporary Convolutional Neural Network (CNN)-based algorithms shows considerable gains over prior methods that depend heavily on hand-crafted representations. The authors address the advantages and disadvantages of current CNN-based techniques and highlight prospective research areas in this rapidly expanding subject <ns0:ref type='bibr' target='#b40'>(Sindagi and Patel (2018)</ns0:ref>). Additionally, a few research works begin with a quick overview of pioneering techniques that use hand-crafted representations before delving into deep methods and newly released datasets <ns0:ref type='bibr'>(Al Farid et al. (2019a,b)</ns0:ref>).</ns0:p><ns0:p>The top three performers on their crowd analysis datasets were examined for their advantages and disadvantages based on the assessment measures. They anticipate that this approach will enable them to make realistic conclusions and predictions about the future development of crowd counting while also providing possible solutions for the object counting issue in other domains. They compared and tested the density maps and prediction outcomes of many popular methods using the NWPU dataset's validation set.</ns0:p><ns0:p>Meanwhile, tools for creating and evaluating density maps are included <ns0:ref type='bibr' target='#b9'>(Gao et al. (2020)</ns0:ref>). This is largely because Hajj is a unique event in which hundreds of thousands of Muslims congregate in a confined space. A related article proposes a method based on convolutional neural networks (CNNs) for crowd analysis, namely crowd counting, and presents a novel approach for Hajj and Umrah applications; its authors addressed this issue by creating a new dataset centered on the Hajj pilgrimage scenario <ns0:ref type='bibr' target='#b4'>(Bhuiyan et al. (2021)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED METHOD DEEP-CNN Features</ns0:head><ns0:p>A deep CNN can create powerful texture features for each frame without human supervision, unlike hand-crafted features, which are susceptible to light fluctuations and noise. Pre-trained CNN models such as VGG19 <ns0:ref type='bibr' target='#b39'>(Simonyan and Zisserman (2014)</ns0:ref>), GoogleNet <ns0:ref type='bibr' target='#b44'>(Szegedy et al. (2015)</ns0:ref>), Inceptionv3 <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>), and ResNet101 <ns0:ref type='bibr' target='#b11'>(He et al. (2016)</ns0:ref>) were evaluated for feature extraction in this research. In VGG19 <ns0:ref type='bibr' target='#b39'>(Simonyan and Zisserman (2014)</ns0:ref>), every convolutional layer is followed by a rectified linear unit (ReLU) layer. The total number of layers is 50.
Rather than processing linearly, GoogleNet's design <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>) uses several routes, each having 22 weight levels. GoogleNet's 'inception module,' which performs the concurrent processing of multiple convolution kernels, is the building block of the software.</ns0:p><ns0:p>While Inceptionv3 <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>) has fewer parameters, it is very fast. As a result, it has the potential to enable a level of complexity comparable to VGGNet, but with deeper layers. Increases in the VGG19, GoogleNet, and inceptionv3 models' depth lead to saturation and a decline in inaccuracy. On the other hand, ResNet uses skip connections or direct input from one layer to another layer (called identity mapping). ResNet's skip connections improve the speed of CNNs with a lot of layers thanks to their use. ResNet may also help with the disappearing gradient issue. <ns0:ref type='bibr' target='#b11'>He et al. (2016)</ns0:ref> has more information.</ns0:p><ns0:p>Due to the strong symbolic capabilities of deep residual networks, ResNet has recently improved the performance of many computer vision applications such as semantic segmentation, object recognition, and image classification.</ns0:p><ns0:p>Python provides access to all of the pre-trained models used in our research. These models were first developed to categorize 224 &#215; 224-pixel images. They may, however, be used to extract features from pictures of any size by using feature extractors.</ns0:p></ns0:div> <ns0:div><ns0:head>Crowd Density Classification Process</ns0:head><ns0:p>The crowd density classification process, which consists of three steps, is shown below in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. We first performed full image labeling, image data training, and image data testing. In the following section, we describe the details. </ns0:p></ns0:div> <ns0:div><ns0:head>Crowd Density Process (IMAGE LABELLING)</ns0:head><ns0:p>The crowd density image labeling process followed by manual checking using the Sam counting method based on CNN shows Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. In this process, the dataset is an image. First, we collected images based on our five classes from YouTube using video capture software. Second, we performed the image selection and validation. For image selection and validation, we applied this rule. If the kaaba is occupied in the middle and people do tawaf on the circle, then we can consider the image to be selected for the labeling process. As an example, based on this rule, we selected the first image shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. As for the second image, kaaba is not positioned in the center of the image, which is why it cannot be considered Manuscript to be reviewed Computer Science for labeling. In the third image, kaaba is in the center of the image; however, the image is zoomed out. Hence, we did not consider this image as one of our datasets. Subsequently, we used the Sam-counting method based on a CNN. In fact, we did not include people. We employed people counting just to aid the process of labeling for density of the five different classes (very low, low, medium, high, and very high).</ns0:p><ns0:p>After completing the above steps, we performed manual checking to determine whether the mapping of labels and classes was correct. 
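To make the labeling step above concrete, the short sketch below shows one way the count-assisted annotation could map an estimated person count to one of the five density classes and produce the hot-encoded label form used by the annotation tool described in the next section. The threshold values are hypothetical placeholders; the exact cut-offs used for the Hajj-Crowd dataset are not reported in the text.

# Minimal sketch of threshold-based labeling into the five density classes,
# assuming an estimated person count per image (e.g., from the SaM-counting
# step). The thresholds below are hypothetical placeholders, not the values
# actually used to build the dataset.
from bisect import bisect_right

CLASS_NAMES = ["very_low", "low", "medium", "high", "very_high"]
COUNT_THRESHOLDS = [500, 1500, 3000, 5000]  # hypothetical upper bounds

def density_class(estimated_count):
    """Map an estimated person count to one of the five density classes."""
    return CLASS_NAMES[bisect_right(COUNT_THRESHOLDS, estimated_count)]

def one_hot(label):
    """One-hot encode a class label, mirroring the tool's hot-encoding output."""
    vec = [0] * len(CLASS_NAMES)
    vec[CLASS_NAMES.index(label)] = 1
    return vec

# Example: an image with an estimated 2,200 people would be labeled 'medium'.
label = density_class(2200)
print(label, one_hot(label))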
Accordingly, in some populations surrounding Kaaba (Tawaf area), 27000 images and 25 video sequences were recorded, including some typical crowd scenes, including touching the black stone in the Kaaba area.</ns0:p><ns0:p>We collected images based on five classes, with each class consisting of 5400 images. Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows an example of a five-class dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>UCSD</ns0:head><ns0:p>Cameras on the sidewalk at UCSD gathered the first crowd-analysis dataset <ns0:ref type='bibr' target='#b5'>(Chan et al. (2008)</ns0:ref>). A total of 2500 frames with a 238&#215;158 aspect ratio were used, and every five frames, the ground truth annotations of each pedestrian were added. Linear interpolation was used to generate labels for the remaining frames.</ns0:p><ns0:p>Each frame maintains the same viewpoint because it is gathered from the same location. The UCSD dataset did not divided the data category wise. For our experiment we have divided into five classes. The classes are Very Low, Low, Medium, High, and Very High. </ns0:p></ns0:div> <ns0:div><ns0:head>Method of Annotation (Tools)</ns0:head><ns0:p>The annotation tool was developed based on Python and open-cv for easy annotation in Hajj crowd photos based on five classes. The method supports hot encoding label forms and labels the image name based on the threshold. Each image was labeled using the same method.</ns0:p></ns0:div> <ns0:div><ns0:head>IMPLEMENTATION Crowd Density Process (TRAINING))</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> shows the training process for crowd density. For the training process, we used an accurate density-labeled image from the previous stage using fully convolutional neural networks (FCNNs) to classify the five classes for training. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Crowd Density Process (TESTING)</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> illustrates the crowd analysis density testing process using FCNNs. First, we prepared a new test image dataset. We then passed the full set of images for testing. Second, we tested five classes using the FCNNs. Finally, we obtained classification results for the five classes. </ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Comparison</ns0:head><ns0:p>According to their findings, the diversity of the dataset makes it impossible for crowd analysis networks to acquire valuable and distinguishing characteristics that are absent or disregarded in the prior datasets, which is our finding. 1) The data of various scene characteristics (density level and brightness) have a considerable effect on each other, and 2) there are numerous erroneous estimates of negative samples.</ns0:p><ns0:p>Therefore, there is a growing interest in finding a way to resolve these two issues. In addition, for the localization task, we designed a suitable measure and provided basic baseline models to help start. As a result, we believe that the suggested large-scale dataset would encourage the use of crowd analysis and localization in reality and draw greater attention to addressing the aforementioned issues. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> presents a dataset comparison with other public datasets.Our Hajj-Crowd dataset is based on five classes.</ns0:p><ns0:p>The five classes are: very low, low, medium, high, and very high. 
For our experiment, we used another two datasets, the UCSD and JHU-CROWD datasets. But in the UCSD and JHU-CROWD datasets, they were never divided into different classes. For our evaluation, we have divided five classes manually.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head></ns0:div> <ns0:div><ns0:head>Number of Image</ns0:head><ns0:p>Resolutions Extreme Congestion UCSD <ns0:ref type='bibr' target='#b5'>(Chan et al. (2008)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Network for Modelling</ns0:head><ns0:p>Consequently, the CNN is implemented as a sequential network. As a result of the convolution operations, a 2D convolution layer is created, which eventually leads to the development of a convolution KERNEL.</ns0:p><ns0:p>The following formula was used to compute the subsequent feature map values: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_0'>G[m, n] = ( f * h)[m &#8226; n] = &#8721; j &#8721; k h[ j, k] f [m &#8722; j, n &#8722; k] (1)</ns0:formula><ns0:p>The value of f is equal to that of the input image, and the value of h is equal to that of KERNEL. These are the indices of the rows and columns of the final result matrix. We use the filter that we applied to the selected pixel. Then, we take the kernel values for each color and multiply them with their corresponding image values. In summary, the results are output to the feature map. Calculating the dimensions of the output matrix, remembering to account for padding and stride involves finding the</ns0:p><ns0:formula xml:id='formula_1'>n out = n + 2p &#8722; f s + 1 (2)</ns0:formula><ns0:p>where n is the picture size, f is the filter dimension, p is the padding, and s is the stride. A tensor of the outputs is provided by several layers using numerous input units.</ns0:p><ns0:p>The pooling procedure was performed using MaxPooling 2D software. The max-pooling approach is a sample-based discretisation technique. The goal is to reduce the number of dimensions a dataset possesses to provide a visualization or a hidden layer of data. Assuming that certain features are in the sub-regions and are binned, the positions of such features in the input representation may be estimated.</ns0:p><ns0:p>After splitting the data into training and testing sets, data were assessed. We chose 27000 images for this experiment. In training, twenty one thousand six hundred fifty (21600) images are used, whereas in testing, 5400 (20%) images are employed. Before classifier initialization, the training image for the CNN model is produced. After building the model, the first convolution layer is added and initialized as an input layer to the fully connected network that is responsible for the final output layer. When optimizing using the Adam algorithm with a learning ratio of 0.001, we utilized the Adam optimizer with a learning ratio of 0.001. In addition to the training data, test data, and parameters for the number of training steps, the model is also interested in the other three components: training data, test data, and parameters for the number of training steps.</ns0:p></ns0:div> <ns0:div><ns0:head>PERFORMANCE EVALUATION AND RESULT ANALYSIS Experimental Setup</ns0:head><ns0:p>The processing of high-resolution images in a fully connected network (e.g., 1914 &#215; 922 pixels) presents a range of challenges and constraints, particularly with regard to the use of GPU memories. Only certain kernels and layers can have our FCNN convolutional (i.e., model capacity). 
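The feature-map size relation in Eq. (2) and the 80/20 split described above can be checked with a small sketch, assuming a plain Python implementation; the example kernel, padding, and stride values are illustrative and are not taken from the paper.

# Minimal sketch relating Eq. (2) and the training split described above.
# The helper mirrors Eq. (2): n_out = (n + 2p - f) / s + 1. The example
# kernel, padding, and stride values are illustrative assumptions.
def conv_output_size(n, f, p=0, s=1):
    """Spatial output size of a convolution per Eq. (2)."""
    return (n + 2 * p - f) // s + 1

# Example: a 224-pixel input with a 3x3 kernel, no padding, stride 1
# produces a 222-pixel feature map; 2x2 max pooling with stride 2 halves it.
print(conv_output_size(224, 3))            # 222
print(conv_output_size(222, 2, p=0, s=2))  # 111

# Split stated in the text: 27,000 images, 80% training and 20% testing.
TOTAL_IMAGES = 27000
train_count = int(TOTAL_IMAGES * 0.8)      # 21,600 training images
test_count = TOTAL_IMAGES - train_count    # 5,400 test images
print(train_count, test_count)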
Therefore, we aim to create the best possible FCNN architecture to process images such as those in a UCSD dataset with the highest possible resolution. We used an NVidia GTX 1660Ti 6GB RAM 16GB card. Finally, we utilized python3 in conjunction with deep learning programs such as open-cv2, NumPy, SciPy, matplotlib, TensorFlow GPU, CUDA, Keras, and other similar tools.</ns0:p></ns0:div> <ns0:div><ns0:head>Matrix Evaluation</ns0:head><ns0:p>The proposed Hajj-Crowd framework's performance may be verified using the following performance criteria: 1. Precision <ns0:ref type='bibr' target='#b10'>(Goutte and Gaussier (2005)</ns0:ref>), 2. Recall <ns0:ref type='bibr' target='#b7'>(Flach and Kull (2015)</ns0:ref>), 3. when the number of epochs increased, the error increased. After completing 100 epochs, we observed that the data loss was slightly high at 0.37. In the val-Loss, we observed that when the epoch was 0 to 20, the data loss was 0.02550. Subsequently, when the number of epochs increases, the data loss decreases.</ns0:p><ns0:p>After completing 100 epochs, we observed that the data loss was slightly high at 0.01437.</ns0:p><ns0:p>Each of these comparison tests used the exact experimental dataset. For experiment 1, the FCNN technique obtained a final accuracy of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. The proposed method's average microprecision, microrecall, and microF1 score are shown in Tables <ns0:ref type='table'>2, 3</ns0:ref>, 4, and 5. All of these assessment matrices were generated using the procedures described in <ns0:ref type='bibr' target='#b43'>(Sokolova and Lapalme (2009)</ns0:ref>). The suggested method's average microprecision, microrecall, and microF1 score are 100%, 100%, and 100% successively for the proposed dataset, compared to 97%, 97%, 97%, and 95%, 95%, and 95% for the UCSD and JHU-CROWD datasets.</ns0:p><ns0:p>All of these results indicate that the developed framework and proposed dataset significantly outperform the two state-of-the-art datasets. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Input Fully Convolutional Neural Networks Output.</ns0:figDesc><ns0:graphic coords='3,141.73,448.90,413.55,149.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:1:2:NEW 22 Dec 2021) Manuscript to be reviewed Computer Science 3) Built a new dataset based on Hajj pilgrimages. We created a new dataset in this research because nobody in the modern world has made this dataset related to Hajj crowds. 
In State-of-the-art there are few well known crowd datasets worldwide, such as ShanghaiTech, UCSD pedestrians, UCF-CC-50, Mall, WorldExpo, JHU-CROWD, and NWPU-Crowd.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Crowd Density Classification Process.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:1:2:NEW 22 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Image Labeling followed by manual checking on the Labeling.</ns0:figDesc><ns0:graphic coords='6,141.73,145.80,413.59,248.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>JHU-CROWDJHU-CROWD<ns0:ref type='bibr' target='#b41'>(Sindagi et al. (2019)</ns0:ref>) is one of the largest crowd analysis datasets in recent years. Itcontains 4,250 images with 330,165 annotations. JHU-CROWD, images are chosen at random from the Internet, busy street. These various scene types and densities are combined to produce a difficult dataset that can be used by researchers. Therefore, the training and test sets tended to be low-density. As a result, many CNN-based networks face new problems and possibilities owing to scale shifts and viewpoint 5/15 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66465:1:2:NEW 22 Dec 2021)Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example of Five classes dataset.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.59,262.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Crowd Analysis Density Training Process Using FCNNs.</ns0:figDesc><ns0:graphic coords='7,141.73,565.96,413.59,140.34' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Crowd Analysis Density Testing Process using FCNN.</ns0:figDesc><ns0:graphic coords='8,141.73,150.47,413.59,131.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:1:2:NEW 22 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>F1 score<ns0:ref type='bibr' target='#b24'>(Luque et al. (2019)</ns0:ref>), 4. Final accuracy, 5. Confusion metrics (Van der<ns0:ref type='bibr' target='#b54'>Maaten and Hinton (2008)</ns0:ref>) and 6. Obtain graph, which illustrates the separability of classes. Precision, Recall, F1 score can be achieved the result by the following equation. ,TN, FN, and FP in Eqs. (3)-(7) denote true positive, true negative, false negative, and false positive, respectively. While evaluating the suggested Hajj-Crowd output, the confusion matrix provides a true overview of the actual vs. projected output and illustrates the performance's clarity. All the metrics result added on Experiment 1 and Experiment 2.Experiment 1(FCNN)The Hajj crowd dataset is a large-scale crowd density dataset. It includes 21600 training images and 5400 test images with the same resolution(1914 &#215; 922). The proposed method outperforms the state-of-the-art method in the context of a new dataset (Name HAJJ-Crowd dataset), which achieved a remarkable result 100%. 
For this experiment we have used another two datasets. The datasets are UCSD and JHU-CROWD dataset. The UCSD dataset and JHU-CROWD dataset contain total 2500 data and each classes contain 500 data and 4000 and each classes 800. For training we have used 80% and testing 20% respectively.All datasets we have divided into five folds.Figure 7 (a) shows a graph based on the results of the five classes and Figure 8 shows the Confusion Matrices for the test dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure7(b) clearly shows that, from 0 to 20 epochs, there is no considerable change in the accuracy, whereas from 20 to 40 epochs, we observed no loss of data. However, for 40 to 100 epochs, the accuracy continued to increase. Finally, the accuracy for 100 epochs was 100%. We have clearly seen that from 0 to 20 epochs there is slowly taking place loss of data, whereas from 20 to 40 and 40 to 100 epochs, there is a rapid loss of data. Finally, the data loss at 100 epochs is 0.01437. On the other hand, Figure7(b)shows the train accuracy and Val-accuracy; from 0 to 20 epochs, there is a slow change in the val-accuracy and at the same time no considerable change in the train accuracy. However, for 20 to 40 epochs and 40 to 100 epochs, it slowly changed with the Train accuracy as well as 20 to 40 and 40 to 100 epochs, with no considerable change in the val-accuracy. Finally, the train accuracy was 0.01437 and the val-accuracy was 1.0000. Figure7(b) shows that when the epoch was 0 to 20 epochs, the data loss was 0.35. Subsequently,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Five Classes graphical presentation results</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. 
Graphical Result for the training dataset using ResNet method, (a) Hajj-Crowd Dataset, (b) UCSD Dataset, and (C) JHU-CROWD Dataset</ns0:figDesc><ns0:graphic coords='13,141.73,386.24,413.59,186.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='17,42.52,178.87,525.00,189.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='18,42.52,178.87,525.00,193.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='19,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='20,42.52,178.87,525.00,177.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='21,42.52,178.87,525.00,236.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='22,42.52,178.87,525.00,315.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='23,42.52,178.87,525.00,244.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='24,42.52,178.87,525.00,333.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='25,42.52,178.87,525.00,63.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,133.15,413.59,192.87' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,371.12,454.91,167.71' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of Eight Real World Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>)</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>158 x 238</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>UCF-CC-50 (Idrees et al. (2013b))</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell cols='2'>2101 x 2888 No</ns0:cell></ns0:row><ns0:row><ns0:cell>Mall (Chen et al. (2012))</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>480 x 640</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>WorldExpo'10 (Zhang et al. (2016a))</ns0:cell><ns0:cell>3980</ns0:cell><ns0:cell>576 x 720</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ShanghaiTech Part A (Zhang et al. (2016d)) 482</ns0:cell><ns0:cell>589 x 868</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ShanghaiTech Part B (Zhang et al. (2016d)) 716</ns0:cell><ns0:cell>768 x 1024</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>JHU-CROWD (Sindagi et al. (2019))</ns0:cell><ns0:cell>4,250</ns0:cell><ns0:cell>1450 x 900</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>NWPU-Crowd (Wang et al. 
(2020))</ns0:cell><ns0:cell>5,109</ns0:cell><ns0:cell cols='2'>2311 x 3383 Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>PROPOSED HAJJ-CROWD DATASET</ns0:cell><ns0:cell>27000</ns0:cell><ns0:cell>1914 x 922</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Five Classes Classification Report using ResNet model with JHU-CROWD Dataset.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"09 December 2021 Journal Title: Peer J Computer Science Manuscript Title: A deep crowd density classification model for Hajj pilgrimage using fully convolutional neural network, CS-2021:10:66465:1:0: NEW Dear Editor, Based on your comments on 10 November 2021, we are submitting a revised manuscript titled A deep crowd density classification model for Hajj pilgrimage using fully convolutional neural network, ID# CS-2021:10:66465:1:0: NEW, for consideration in the Peer J Computer Science. I would especially like to thank the reviewers for their thoughtful and constructive comments. These are the most helpful reviewers’ comments I have received in 4 years of paper writing. I have included a response letter that outlines how the language editor’s comments have been addressed in the revised manuscript. Our responses are highlighted in bold. We look forward to your positive response. With Best Regards, Md Roman Bhuiyan Multimedia University, Malaysia Reviewer 1 Basic reporting Authors in their work proposed a new dataset for crowd analysis. In addition to this, proposed a fully convolutional neural network (FCNN)-based method to monitor the crowd. There is merit in the dataset annotation but the manuscript still need a lot of work. 1) Literature references are not sufficient. The related work section needs more work. There is a lot of work done in this field is reviewed in the following papers [1, 2]. [1] Sindagi, Vishwanath A., and Vishal M. Patel. 'A survey of recent advances in CNN-based single image crowd counting and density estimation.' Pattern Recognition Letters 107 (2018): 3-16. The review of contemporary Convolutional Neural Network (CNN)-based algorithms that have proven considerable gains over prior methods that depend heavily on hand-crafted representations. They address the advantages and disadvantages of current CNN-based techniques and highlight prospective research areas in this rapidly expanding subject (\cite{f62}). Additionally, few research works begin with a quick overview of pioneering techniques that use hand-crafted representations before delving into depth and newly released datasets (\cite{f65, f66}). [2] Gao, Guangshuai, et al. 'Cnn-based density estimation and crowd counting: A survey.' arXiv preprint arXiv:2003.12783 (2020). The top three performances in their crowd analysis datasets were examined for their advantages and disadvantages based on the assessment measures. They anticipate that this approach will enable them to make realistic conclusions and predictions about the future development of crowd counting while also providing possible solutions for the object counting issue in other domains. They compared and tested the density maps and prediction outcomes of many popular methods using the NWPU dataset's validation set. Meanwhile, tools for creating and evaluating density maps are included(\cite{f63}). This is mostly due to the fact that Hajj is an absolutely unique event that includes hundreds of thousands of Muslims congregating in a confined space. This article proposes a method based on convolutional neural networks (CNNs) for doing multiplicity analysis, namely crowd counting. Additionally, it presents a novel method for Hajj and Umrah applications. They addressed this issue by creating a new dataset centered on the Hajj pilgrimage scenario. 2) The English language should be improved to ensure clearly understand your text. 
Some examples where the language could be improved include lines 80, 93, 157 (what is L denoting here ?), Mathematical notations should be well defined. (For example in 228, What is m and n) – the current phrasing makes comprehension difficult. Based on the comments from the reviewer, we have updated our draft accordingly. Experimental design These are major issues with the experimental setup. 1. Experiment design and experimental results both the proposed method and the benchmark methods are stochastic and the results from multiple independent runs are expected. What is currently reported in the paper is the results from a single run, which is not enough to draw concrete conclusions. Furthermore, multiple runs will be needed to conduct a statistical significance test. Based on the suggestions, we have tried to complete all issues regarding the experimental design. More details on the Experiment 1 (FCNN) and Experiment 2 (ResNet) sections are described in the manuscript. 2. Performance metrics, The typical accuracy is inappropriate to be used when the dataset is imbalanced (Shanghai Tech and UCSD) and multiclass (proposed dataset, Shanghai Tech and UCSD). We know for sure that the benchmark datasets in this study fall under that category. Then why the typical accuracy is still used to assess the effectiveness of the experimented methods. The proposed Hajj-Crowd framework’s performance may be verified using the following performance criteria: 1. Precision (C. Goutte and E. Gaussier, 2005), 2. Recall(P. Flach and M. Kull, 2015), 3. F1score (A. Luque, A. Carrasco, A. Mart ́ın, and A. de las Heras,2019), 4. Final accuracy, 5. Confusionmetrics (L. van der Maaten and G. Hinton, 2008) and 6. Obtain graph, which illustrates the separability ofclasses. Precision, Recall, F1 score can be achieved the result by the following equation. The terms TP, TN, FN, and FP in Eqs. (3) – (7) denote true positive, true negative, false negative, and false positive, respectively. While evaluating the suggested Hajj-Crowd output, the confusion matrix provides a true overview of the actual vs. projected output and illustrates the performance’s clarity. All the metrics result added on Experiment 1 and Experiment 2. 3. Benchmark methods and fairness of the comparisons. Experimental design and experimental results for both the proposed method and the benchmark methods were carried out successfully. Multiple independent runs were also successfully employed in our experiment and updated in the manuscript accordingly. Multiple runs were completed to conduct a statistical significance test through precision, recall, and F1 score. 3.1- Benchmark methods I do not think any of the methods in the experiments was specifically proposed for crowd density classification. There are a number of studies that have proposed similar techniques to the proposed method [1, 2], e.g., utilising CNN or other machine learning methods to classify/estimate crowd density. Why none of these was included in the comparisons despite some of such methods being discussed in the related work section. Ans: Based on the comments, the manuscript updated considerably. [1] Gao, Guangshuai, et al. 'Cnn-based density estimation and crowd counting: A survey.' arXiv preprint arXiv:2003.12783 (2020). The given survey paper already cited in our paper at the relevant section. 3.2 – Fairness (a) The proposed method utilises pre-trained models (transfer learning) whereas the benchmark methods are trained from scratch. 
I do not think this is a fair comparison unless the study is about transfer learning vs conventional learning. Based on the comments, the manuscript updated considerably. (b) Overall dataset annotation looks fair except there is a chance of having a human bias in 5 classes. As it is very difficult to see the difference in the low and medium (3rd image) and the same is the case with medium and high. In my personal opinion, having three classes (low, medium and high) will be more appropriate than five classes to reduce human bias or error. The comment from the reviewer about three classes (low, medium, and high) is very good and we would like to keep this idea for our future research direction. Validity of the findings Experiments 1 and 2 have fundamental issues that need to be resolved before making any valid conclusions. 1. The main issue with stochastic methods is that different results are produced depending on the starting point of the search. In neural networks, the random value generator, more specifically the starting point of the random values generator, initialises the weights; hence, causing the network to start the process from a different point in the search space. Therefore, we must rerun the method multiple times using different seed values “while” keeping everything else untouched/identical. Based on the suggestions, we have tried to complete all issues regarding the experimental design. More details on the Experiment 1 (FCNN) and Experiment 2 (ResNet) sections are described in the manuscript. 2. How the other benchmark datasets where category wise evaluation (Mentioned in Table 1) is not available were modified into a classification problem. Since the category wise evaluation on the table causes confusion, we have deleted that column and explained it in the implementation section as below. Our Hajj-Crowd dataset is based on five classes. The five classes are: very low, low, medium, high, and very high. For our experiment, we used another two datasets, the UCSD and ShanghaiTech datasets. But in the UCSD and ShanghaiTech datasets, they were never divided into different classes. For our evaluation, we have divided five classes manually. 3 what are the hyper parameters for all the benchmarks and our proposed model? Total parameters: 1,617,985 Trainable parameters: 1,617,985 Number of Epochs Hidden Layers Hidden Units Activations Functions 4. What are the train and test sizes for other benchmark datasets? UCSD has total 2000 image data and each class contain 500. For training 80% and testing 20% were used. ShanghaiTech has a total of 1000 image data sets, and each class contains 200. For training, 80% and testing, 20% were also used. 5. Why are recall, f-score etc are not reported? Based on the reviewer comment we have reported Precision, Recall, F1 score in the manuscript. The proposed Hajj-Crowd framework’s performance may be verified using the following performance criteria: 1. Precision (C. Goutte and E. Gaussier, 2005), 2. Recall (P. Flach and M. Kull, 2015), 3. F1score (A. Luque, A. Carrasco, A. Mart ́ın, and A. de las Heras, 2019), 4. Final accuracy, 5. Confusion metrics (L. van der Maaten and G. Hinton, 2008) and 6. Obtain graph, which illustrates the separability of classes. Precision, Recall, F1 score can be achieved the result by the following equation. The terms TP, TN, FN, and FP in Eqs. (3) – (7) denote true positive, true negative, false negative, and false positive, respectively. 
While evaluating the suggested Hajj-Crowd output, the confusion matrix provides a true overview of the actual vs. projected output and illustrates the performance’s clarity. All the metrics result added on Experiment 1 and Experiment 2. 6. All benchmark models come with pre-trained weights. Which means they are different from each other. For example, I can use a method that was trained to do anomaly detection and fine-tune it (re-train it) for crowd classification and then compare it to a method that was trained to perform cancer image segmentation in MRIs after re-train it for crowd classification. Can you see the difference between the base models? We performed a model-wise comparison (FCNN and ResNet) as well as a comparison of the proposed dataset and the current two datasets (UCSD and ShanghaiTech). Reviewer 2 Additional comments This article needs important modifications to be suitable for this journal. I suggest major revision for this paper. The main comments are: 1) In this study, the authors would like to propose a model of deep crowd density classification for Hajj pilgrimage by using fully convolutional neural network. It can be noticed that the CNN used in this manuscript has been studied and applied in the previous literatures. The CNN used in this manuscript has been studied and applied in the previous literatures, however, we have developed a very unique alternative solution in the domain of Hajj crowd analysis. 2) The novelty of this paper should be further justified and to establish the contributions to the new body of knowledge. We have introduced matrix evaluation based on the advice of another reviewer, which strongly established the contributions. 3) Abstract section should be improved considering the proposed structure from the journal. Abstract section improved considerably. 4) In Introduction section, the authors should improve the research background, the review of significant works in the specific study area, the knowledge gap, the problem statement, and the novelty of the research. We enhanced the research context, the examination of major works in the subject field, the knowledge gap, the issue description, and the research's innovation. 5) The presentation of the results and conclusions were not enough; it should be highlighted. Results and Conclusions improved considerably. 6) In the conclusions section, the findings should be explained clearly. Conclusions have significantly improved. 7) The authors should elaborate more on the practical implications of their study, as well as the limitations of the study, and further research opportunities. We conducted extensive investigation into the practical implications of our experiment, as well as the study's shortcomings, and more research options were considered. 8) The English writing does not influence, in all the paper. There are a lot of grammatical errors which should be revised by the authors. So, the paper needs a professional English revision. The author’s guide should be considered by the authors in the writing style in all the paper. The paper's general writing style improved massively throughout our whole manuscript. "
Here is a paper. Please give your review comments after reading it.
354
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This research enhances crowd analysis by focusing on excessive crowd analysis and crowd density predictions for Hajj and Umrah pilgrimages. Crowd analysis usually analyzes the number of objects within an image or a frame in the videos and is regularly solved by estimating the density generated from the object location annotations. However, it suffers from low accuracy when the crowd is far away from the surveillance camera. This research proposes an approach to overcome the problem of estimating crowd density taken by a surveillance camera at a distance. The proposed approach employs a fully convolutional neural network (FCNN)-based method to monitor crowd analysis, especially for the classification of crowd density. This study aims to address the current technological challenges faced in video analysis in a scenario where the movement of large numbers of pilgrims with densities ranging between 7 and 8 per square meter. To address this challenge, this study aims to develop a new dataset based on the Hajj pilgrimage scenario.</ns0:p><ns0:p>To validate the proposed method, the proposed model is compared with existing models using existing datasets. The proposed FCNN based method achieved a final accuracy of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. Additionally, The ResNet based method obtained final accuracy of 97%, 89%, and 97% for the proposed dataset, UCSD dataset, and JHU-CROWD dataset, respectively. The proposed Hajj-Crowd-2021 crowd analysis dataset and the model outperformed the other state-of-the-art datasets and models in most cases.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 38</ns0:head><ns0:p>There is considerable interest among the scientific community regarding hajj crowd evaluation, especially 39 for pedestrians <ns0:ref type='bibr' target='#b15'>(Khan (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b0'>Ahmad et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Ullah et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b20'>Khan et al. (2017)</ns0:ref>). In events such 40 as Hajj, sports, markets, concerts, and festivals, wherein a large number of people gather in a confined 41 space, it is difficult to fully analyze these situations <ns0:ref type='bibr' target='#b20'>(Khan et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Saqib et al. (2017b)</ns0:ref>; Khan analysis is one of the most important and difficult tasks of video monitoring. The most important use of crowd analysis is to calculate the density of a crowd <ns0:ref type='bibr' target='#b28'>(Ravanbakhsh et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Khan et al. (2016a)</ns0:ref>; <ns0:ref type='bibr' target='#b35'>Saqib et al. (2017c)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Wang et al. (2018</ns0:ref><ns0:ref type='bibr' target='#b55'>Wang et al. ( , 2014))</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Khan et al. (2014)</ns0:ref>).</ns0:p><ns0:p>One of the requests that has garnered much attention from the scientific community is the calculation of crowd density <ns0:ref type='bibr' target='#b49'>(Ullah et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Saqib et al. (2017a)</ns0:ref>). The density of the crowd in public assembly is necessary to provide useful information to prevent overcrowding, which can lead to a higher risk of stampede. 
Acknowledging the importance of estimating the density of crowds, numerous attempts have been made to overcome this problem by utilizing efficient algorithms <ns0:ref type='bibr' target='#b30'>(Sabokrou et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Ramos et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Rota et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>Ullah and Conci (2012a)</ns0:ref>). This has led researchers to conduct studies and review various crowd density estimation approaches <ns0:ref type='bibr' target='#b61'>(Zhang et al. (2016b)</ns0:ref>). Researchers have stated that, in comparison to detection-based methods, the most stable and effective method of estimating crowd density is texture-based analysis <ns0:ref type='bibr'>(Ullah and Conci (2012b,a)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Ullah et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Khan and Ullah (2010)</ns0:ref>; <ns0:ref type='bibr'>Uzair et al. (2009a,b)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Khan et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Saqib et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>Saleh et al. (2015)</ns0:ref>).</ns0:p><ns0:p>This study seeks to enhance the categorization of Hajj pilgrims based on crowd density. An FCNN-based framework for crowd analysis is presented in this study.</ns0:p><ns0:p>Crowd analysis is inherently a multidisciplinary topic, involving scientists, psychologists, biologists, public security specialists, and computer vision experts. Computer vision has grown in relevance with the rise of deep learning in recent years. The fully convolutional neural network (FCNN), a deep learning model for grid-structured data such as images, is one of the most advanced deep learning models. This technique has the advantage of learning image features automatically during network training, which benefits image classification. The FCNN relies heavily on convolutional, pooling, and fully connected layers, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p><ns0:p>The convolutional layer is used to represent the image and compute the feature maps. In an FCNN, the convolutional layer, which consists of a series of mathematical operations, plays an important role. A pooling layer is added after each convolutional layer to reduce the resolution of the feature maps; it typically applies a downsampling operation that shrinks the spatial dimensions and the number of parameters with minimal distortion of the feature maps. Following the pooling operation, the downsampled features produced by the convolutional layers are passed on to the next stage. A fully connected layer then 'flattens' the feature maps into a vector that serves as the input of the next layer; it contains neurons that are densely connected to the neurons in the two adjacent layers. The major contributions of this paper include: 1) A fully convolutional neural network (FCNN) was introduced for crowd analysis and estimation of crowd density. We first extracted the frames from the video to estimate crowd density. We then passed the full set of images through the network for training and testing.</ns0:p><ns0:p>2) We created a deep learning architecture that captures spatial features to automatically evaluate crowd density and classify crowds.
The remainder of this paper is arranged as follows: Section II presents the related work, Section III expounds on the proposed method, Section IV presents the experiment and result discussion, and Section V presents the conclusion.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>An early version of crowd analysis has shown that the addition of the Hajj crowd density classification to the crowd analysis system improves its robustness. For example, the study by <ns0:ref type='bibr' target='#b12'>Hu et al. (2016)</ns0:ref> a crowd surveillance approach that includes both behavior detection and crowd scene occupation remains to be established. The advantages of multi-task learning have been demonstrated by face analysis <ns0:ref type='bibr' target='#b58'>(Yan et al. (2015)</ns0:ref>), head-pose prediction <ns0:ref type='bibr' target='#b26'>(Pan et al. (2016)</ns0:ref>), and voice recognition <ns0:ref type='bibr' target='#b36'>(Seltzer and Droppo (2013)</ns0:ref>). The next discussion concentrates on what has been done in each area of crowd analysis.</ns0:p><ns0:p>Crowd Analysis: Crowd analysis algorithms are designed to provide an accurate estimate of the real number of people in a crowded image. Crowd analytics has reached a new level of sophistication due to the availability of high-level, high-variable crowd analytics such as UCF CC 50 and the advent of deep network technologies such as convolutional neural networks <ns0:ref type='bibr' target='#b36'>(Seltzer and Droppo (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Idrees et al. (2013a)</ns0:ref>). While most recent methods are used to map pixel values to a single figure <ns0:ref type='bibr' target='#b25'>(Marsden et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b60'>Zhang et al. (2015)</ns0:ref>) directly in crowd analysis, pixel-based heat map analysis has shown an increase in the efficiency of crowd analysis for complex, highly congested scenes <ns0:ref type='bibr' target='#b61'>(Zhang et al. (2016b)</ns0:ref>).</ns0:p><ns0:p>Crowd Density Level Estimation: The degree of congestion in a crowded scene is known as the degree of crowd density. The aspect of a crowded scene usually means a discrete (0-N) value or an ongoing value (0.0-1.0). Texture analysis by Wu et al. used functionality to continuously produce an estimated density level <ns0:ref type='bibr' target='#b12'>(Hu et al. (2016)</ns0:ref>). For the classification of discrete density levels, <ns0:ref type='bibr' target='#b8'>Fu et al. (2015)</ns0:ref>. utilized a deep convolutional neural network. This role mainly involves the task of uncertainty associated with a given density level estimate. It is not possible to set density labels and their unique definitions in datasets using a universal framework. The most straightforward scheme is that of discrete labels in density levels explicitly inferred from real multitude analysis values, leading to a distribution of the histogram style with the smallest subjectivity and human error.</ns0:p><ns0:p>Fully convolutional networks: Fully convolutional networks (FCNs) are a CNN variant that provides a proportionally sized map output rather than a mark of classification or regression of the given picture.</ns0:p><ns0:p>Two examples of functions used by FCNs are semantic segmentation <ns0:ref type='bibr' target='#b8'>(Fu et al. (2015)</ns0:ref>) and saliency prediction <ns0:ref type='bibr' target='#b37'>(Shekkizhar and Lababidi (2017)</ns0:ref>). 
In the FCN training, <ns0:ref type='bibr' target='#b62'>Zhang et al. (2016c)</ns0:ref> converted an image of a crowded scene into a crowd-density heat map, which enables accurate counting even in difficult scenes.</ns0:p><ns0:p>One of the main features of fully convolutional networks, which makes them especially appropriate for crowd analysis, is the use of variable-sized inputs, which spares the model the information loss and visual distortions typical of image downsampling and resizing.</ns0:p><ns0:p>A review of contemporary Convolutional Neural Network (CNN)-based algorithms shows considerable gains over prior methods that depend heavily on hand-crafted representations. The authors address the advantages and disadvantages of current CNN-based techniques and highlight prospective research areas in this rapidly expanding subject <ns0:ref type='bibr' target='#b40'>(Sindagi and Patel (2018)</ns0:ref>). Additionally, a few research works begin with a quick overview of pioneering techniques that use hand-crafted representations before delving into deep methods and newly released datasets <ns0:ref type='bibr'>(Al Farid et al. (2019a,b)</ns0:ref>).</ns0:p><ns0:p>The top three performers on their crowd analysis datasets were examined for their advantages and disadvantages based on the assessment measures. They anticipate that this approach will enable them to make realistic conclusions and predictions about the future development of crowd counting while also providing possible solutions for the object counting issue in other domains. They compared and tested the density maps and prediction outcomes of many popular methods using the NWPU dataset's validation set.</ns0:p><ns0:p>Meanwhile, tools for creating and evaluating density maps are included <ns0:ref type='bibr' target='#b9'>(Gao et al. (2020)</ns0:ref>). This is largely because Hajj is a unique event in which hundreds of thousands of Muslims congregate in a confined space. A related article proposes a method based on convolutional neural networks (CNNs) for crowd analysis, namely crowd counting, and presents a novel approach for Hajj and Umrah applications; its authors addressed this issue by creating a new dataset centered on the Hajj pilgrimage scenario <ns0:ref type='bibr' target='#b4'>(Bhuiyan et al. (2021)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED METHOD DEEP-CNN Features</ns0:head><ns0:p>A deep CNN can create powerful texture features for each frame without human supervision, unlike hand-crafted features, which are susceptible to light fluctuations and noise. Pre-trained CNN models such as VGG19 <ns0:ref type='bibr' target='#b39'>(Simonyan and Zisserman (2014)</ns0:ref>), GoogleNet <ns0:ref type='bibr' target='#b44'>(Szegedy et al. (2015)</ns0:ref>), Inceptionv3 <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>), and ResNet101 <ns0:ref type='bibr' target='#b11'>(He et al. (2016)</ns0:ref>) were evaluated for feature extraction in this research. In VGG19 <ns0:ref type='bibr' target='#b39'>(Simonyan and Zisserman (2014)</ns0:ref>), every convolutional layer is followed by a rectified linear unit (ReLU) layer. The total number of layers is 50.
Rather than processing linearly, GoogleNet's design <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>) uses several routes, each having 22 weight levels. GoogleNet's 'inception module,' which performs the concurrent processing of multiple convolution kernels, is the building block of the software.</ns0:p><ns0:p>While Inceptionv3 <ns0:ref type='bibr' target='#b45'>(Szegedy et al. (2016)</ns0:ref>) has fewer parameters, it is very fast. As a result, it has the potential to enable a level of complexity comparable to VGGNet, but with deeper layers. Increases in the VGG19, GoogleNet, and inceptionv3 models' depth lead to saturation and a decline in inaccuracy. On the other hand, ResNet uses skip connections or direct input from one layer to another layer (called identity mapping). ResNet's skip connections improve the speed of CNNs with a lot of layers thanks to their use. ResNet may also help with the disappearing gradient issue. <ns0:ref type='bibr' target='#b11'>He et al. (2016)</ns0:ref> has more information.</ns0:p><ns0:p>Due to the strong symbolic capabilities of deep residual networks, ResNet has recently improved the performance of many computer vision applications such as semantic segmentation, object recognition, and image classification.</ns0:p><ns0:p>Python provides access to all of the pre-trained models used in our research. These models were first developed to categorize 224 &#215; 224-pixel images. They may, however, be used to extract features from pictures of any size by using feature extractors.</ns0:p></ns0:div> <ns0:div><ns0:head>Crowd Density Classification Process</ns0:head><ns0:p>The crowd density classification process, which consists of three steps, is shown below in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. We first performed full image labeling, image data training, and image data testing. In the following section, we describe the details. </ns0:p></ns0:div> <ns0:div><ns0:head>Crowd Density Process (IMAGE LABELLING)</ns0:head><ns0:p>The crowd density image labeling process followed by manual checking using the Sam counting method based on CNN shows Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. In this process, the dataset is an image. First, we collected images based on our five classes from YouTube using video capture software. Second, we performed the image selection and validation. For image selection and validation, we applied this rule. If the kaaba is occupied in the middle and people do tawaf on the circle, then we can consider the image to be selected for the labeling process. As an example, based on this rule, we selected the first image shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. As for the second image, kaaba is not positioned in the center of the image, which is why it cannot be considered Manuscript to be reviewed Computer Science for labeling. In the third image, kaaba is in the center of the image; however, the image is zoomed out. Hence, we did not consider this image as one of our datasets. Subsequently, we used the Sam-counting method based on a CNN. In fact, we did not include people. We employed people counting just to aid the process of labeling for density of the five different classes (very low, low, medium, high, and very high).</ns0:p><ns0:p>After completing the above steps, we performed manual checking to determine whether the mapping of labels and classes was correct. 
Accordingly, in some populations surrounding Kaaba (Tawaf area), 27000 images and 25 video sequences were recorded, including some typical crowd scenes, including touching the black stone in the Kaaba area.</ns0:p><ns0:p>We collected images based on five classes, with each class consisting of 5400 images. Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows an example of a five-class dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>UCSD</ns0:head><ns0:p>Cameras on the sidewalk at UCSD gathered the first crowd-analysis dataset <ns0:ref type='bibr' target='#b5'>(Chan et al. (2008)</ns0:ref>). A total of 2500 frames with a 238&#215;158 aspect ratio were used, and every five frames, the ground truth annotations of each pedestrian were added. Linear interpolation was used to generate labels for the remaining frames.</ns0:p><ns0:p>Each frame maintains the same viewpoint because it is gathered from the same location. The UCSD dataset did not divided the data category wise. For our experiment we have divided into five classes. The classes are Very Low, Low, Medium, High, and Very High. </ns0:p></ns0:div> <ns0:div><ns0:head>Method of Annotation (Tools)</ns0:head><ns0:p>The annotation tool was developed based on Python and open-cv for easy annotation in Hajj crowd photos based on five classes. The method supports hot encoding label forms and labels the image name based on the threshold. Each image was labeled using the same method.</ns0:p></ns0:div> <ns0:div><ns0:head>IMPLEMENTATION Crowd Density Process (TRAINING))</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> shows the training process for crowd density. For the training process, we used an accurate density-labeled image from the previous stage using fully convolutional neural networks (FCNNs) to classify the five classes for training. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Crowd Density Process (TESTING)</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> illustrates the crowd analysis density testing process using FCNNs. First, we prepared a new test image dataset. We then passed the full set of images for testing. Second, we tested five classes using the FCNNs. Finally, we obtained classification results for the five classes. </ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Comparison</ns0:head><ns0:p>According to their findings, the diversity of the dataset makes it impossible for crowd analysis networks to acquire valuable and distinguishing characteristics that are absent or disregarded in the prior datasets, which is our finding. 1) The data of various scene characteristics (density level and brightness) have a considerable effect on each other, and 2) there are numerous erroneous estimates of negative samples.</ns0:p><ns0:p>Therefore, there is a growing interest in finding a way to resolve these two issues. In addition, for the localization task, we designed a suitable measure and provided basic baseline models to help start. As a result, we believe that the suggested large-scale dataset would encourage the use of crowd analysis and localization in reality and draw greater attention to addressing the aforementioned issues. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> presents a dataset comparison with other public datasets.Our Hajj-Crowd dataset is based on five classes.</ns0:p><ns0:p>The five classes are: very low, low, medium, high, and very high. 
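In the spirit of the annotation tool described under "Method of Annotation (Tools)", the sketch below illustrates threshold-based density labelling with hot-encoded output. The class names follow the paper; the numeric cut-offs and helper names are hypothetical and only serve to make the labelling rule concrete.

```python
# Hypothetical threshold-based labelling for the five density classes.
CLASSES = ["very low", "low", "medium", "high", "very high"]
THRESHOLDS = [500, 1500, 3000, 5000]        # hypothetical head-count cut-offs

def density_label(head_count: int) -> str:
    """Map an approximate head count to one of the five density classes."""
    for cls, limit in zip(CLASSES, THRESHOLDS):
        if head_count <= limit:
            return cls
    return CLASSES[-1]                       # above the last cut-off

def one_hot(label: str) -> list:
    """Hot-encoded label form, e.g. 'medium' -> [0, 0, 1, 0, 0]."""
    return [1 if c == label else 0 for c in CLASSES]

print(density_label(4200), one_hot(density_label(4200)))  # high [0, 0, 0, 1, 0]
```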
For our experiment, we used another two datasets, the UCSD and JHU-CROWD datasets. But in the UCSD and JHU-CROWD datasets, they were never divided into different classes. For our evaluation, we have divided five classes manually.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head></ns0:div> <ns0:div><ns0:head>Number of Image</ns0:head><ns0:p>Resolutions Extreme Congestion UCSD <ns0:ref type='bibr' target='#b5'>(Chan et al. (2008)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Network for Modelling</ns0:head><ns0:p>Consequently, the CNN is implemented as a sequential network. As a result of the convolution operations, a 2D convolution layer is created, which eventually leads to the development of a convolution KERNEL.</ns0:p><ns0:p>The following formula was used to compute the subsequent feature map values: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_0'>G[m, n] = ( f * h)[m &#8226; n] = &#8721; j &#8721; k h[ j, k] f [m &#8722; j, n &#8722; k] (1)</ns0:formula><ns0:p>The value of f is equal to that of the input image, and the value of h is equal to that of KERNEL. These are the indices of the rows and columns of the final result matrix. We use the filter that we applied to the selected pixel. Then, we take the kernel values for each color and multiply them with their corresponding image values. In summary, the results are output to the feature map. Calculating the dimensions of the output matrix, remembering to account for padding and stride involves finding the</ns0:p><ns0:formula xml:id='formula_1'>n out = n + 2p &#8722; f s + 1 (2)</ns0:formula><ns0:p>where n is the picture size, f is the filter dimension, p is the padding, and s is the stride. A tensor of the outputs is provided by several layers using numerous input units.</ns0:p><ns0:p>The pooling procedure was performed using MaxPooling 2D software. The max-pooling approach is a sample-based discretisation technique. The goal is to reduce the number of dimensions a dataset possesses to provide a visualization or a hidden layer of data. Assuming that certain features are in the sub-regions and are binned, the positions of such features in the input representation may be estimated.</ns0:p><ns0:p>After splitting the data into training and testing sets, data were assessed. We chose 27000 images for this experiment. In training, twenty one thousand six hundred fifty (21600) images are used, whereas in testing, 5400 (20%) images are employed. Before classifier initialization, the training image for the CNN model is produced. After building the model, the first convolution layer is added and initialized as an input layer to the fully connected network that is responsible for the final output layer. When optimizing using the Adam algorithm with a learning ratio of 0.001, we utilized the Adam optimizer with a learning ratio of 0.001. In addition to the training data, test data, and parameters for the number of training steps, the model is also interested in the other three components: training data, test data, and parameters for the number of training steps.</ns0:p></ns0:div> <ns0:div><ns0:head>PERFORMANCE EVALUATION AND RESULT ANALYSIS Experimental Setup</ns0:head><ns0:p>The processing of high-resolution images in a fully connected network (e.g., 1914 &#215; 922 pixels) presents a range of challenges and constraints, particularly with regard to the use of GPU memories. Only certain kernels and layers can have our FCNN convolutional (i.e., model capacity). 
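Before turning to the hardware setup, the "Network for Modelling" description above can be summarised in a short sketch that ties together Eq. (2) and the sequential network (2D convolutions, MaxPooling2D, Adam with learning rate 0.001, five-class softmax). Layer sizes and the 64x64 input are assumptions rather than the authors' exact configuration.

```python
# Sketch: output-size formula of Eq. (2) plus a small sequential CNN.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_output_size(n, f, p=0, s=1):
    """Output width/height of a convolution, as in Eq. (2): (n + 2p - f)/s + 1."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(64, 3))               # 62: one 3x3 'valid' convolution

model = models.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),             # sample-based down-sampling step
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),   # very low ... very high
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```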
Therefore, we aim to create the best possible FCNN architecture to process images such as those in a UCSD dataset with the highest possible resolution. We used an NVidia GTX 1660Ti 6GB RAM 16GB card. Finally, we utilized python3 in conjunction with deep learning programs such as open-cv2, NumPy, SciPy, matplotlib, TensorFlow GPU, CUDA, Keras, and other similar tools.</ns0:p></ns0:div> <ns0:div><ns0:head>Matrix Evaluation</ns0:head><ns0:p>The proposed Hajj-Crowd framework's performance may be verified using the following performance criteria: 1. Precision <ns0:ref type='bibr' target='#b10'>(Goutte and Gaussier (2005)</ns0:ref>), 2. Recall <ns0:ref type='bibr' target='#b7'>(Flach and Kull (2015)</ns0:ref>), 3. when the number of epochs increased, the error increased. After completing 100 epochs, we observed that the data loss was slightly high at 0.37. In the val-Loss, we observed that when the epoch was 0 to 20, the data loss was 0.02550. Subsequently, when the number of epochs increases, the data loss decreases.</ns0:p><ns0:p>After completing 100 epochs, we observed that the data loss was slightly high at 0.01437.</ns0:p><ns0:p>Each of these comparison tests used the exact experimental dataset. For experiment 1, the FCNN technique obtained a final accuracy of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. The proposed method's average microprecision, microrecall, and microF1 score are shown in Tables <ns0:ref type='table'>2, 3</ns0:ref>, 4, and 5. All of these assessment matrices were generated using the procedures described in <ns0:ref type='bibr' target='#b43'>(Sokolova and Lapalme (2009)</ns0:ref>). The suggested method's average microprecision, microrecall, and microF1 score are 100%, 100%, and 100% successively for the proposed dataset, compared to 97%, 97%, 97%, and 95%, 95%, and 95% for the UCSD and JHU-CROWD datasets.</ns0:p><ns0:p>All of these results indicate that the developed framework and proposed dataset significantly outperform the two state-of-the-art datasets. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Input Fully Convolutional Neural Networks Output.</ns0:figDesc><ns0:graphic coords='3,141.73,448.90,413.55,149.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:2:0:NEW 25 Jan 2022) Manuscript to be reviewed Computer Science 3) Built a new dataset based on Hajj pilgrimages. We created a new dataset in this research because nobody in the modern world has made this dataset related to Hajj crowds. 
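For reference, the criteria listed under "Matrix Evaluation" (precision, recall, F1 score, final accuracy and the confusion matrix, micro-averaged as reported in Tables 2-5) can be computed with scikit-learn as in the sketch below; the label vectors are toy data, not the authors' predictions.

```python
# Sketch of the micro-averaged evaluation measures with toy labels.
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

labels = ["very low", "low", "medium", "high", "very high"]
y_true = ["very low", "low", "medium", "high", "very high", "high"]
y_pred = ["very low", "low", "medium", "high", "very high", "medium"]

prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="micro")
print(f"micro P={prec:.2f}  R={rec:.2f}  F1={f1:.2f}  "
      f"accuracy={accuracy_score(y_true, y_pred):.2f}")
print(confusion_matrix(y_true, y_pred, labels=labels))
```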
In State-of-the-art there are few well known crowd datasets worldwide, such as ShanghaiTech, UCSD pedestrians, UCF-CC-50, Mall, WorldExpo, JHU-CROWD, and NWPU-Crowd.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Crowd Density Classification Process.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Image Labeling followed by manual checking on the Labeling.</ns0:figDesc><ns0:graphic coords='6,141.73,145.80,413.59,248.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>JHU-CROWDJHU-CROWD<ns0:ref type='bibr' target='#b42'>(Sindagi et al. (2019)</ns0:ref>) is one of the largest crowd analysis datasets in recent years. Itcontains 4,250 images with 330,165 annotations. JHU-CROWD, images are chosen at random from the Internet, busy street. These various scene types and densities are combined to produce a difficult dataset that can be used by researchers. Therefore, the training and test sets tended to be low-density. As a result, many CNN-based networks face new problems and possibilities owing to scale shifts and viewpoint 5/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66465:2:0:NEW 25 Jan 2022) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example of Five classes dataset.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.59,262.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Crowd Analysis Density Training Process Using FCNNs.</ns0:figDesc><ns0:graphic coords='7,141.73,565.96,413.59,140.34' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Crowd Analysis Density Testing Process using FCNN.</ns0:figDesc><ns0:graphic coords='8,141.73,150.47,413.59,131.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66465:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>F1 score<ns0:ref type='bibr' target='#b24'>(Luque et al. (2019)</ns0:ref>), 4. Final accuracy, 5. Confusion metrics (Van der<ns0:ref type='bibr' target='#b54'>Maaten and Hinton (2008)</ns0:ref>) and 6. Obtain graph, which illustrates the separability of classes. Precision, Recall, F1 score can be achieved the result by the following equation. ,TN, FN, and FP in Eqs. (3)-(7) denote true positive, true negative, false negative, and false positive, respectively. While evaluating the suggested Hajj-Crowd output, the confusion matrix provides a true overview of the actual vs. projected output and illustrates the performance's clarity. All the metrics result added on Experiment 1 and Experiment 2.Experiment 1(FCNN)The Hajj crowd dataset is a large-scale crowd density dataset. It includes 21600 training images and 5400 test images with the same resolution(1914 &#215; 922). The proposed method outperforms the state-of-the-art method in the context of a new dataset (Name HAJJ-Crowd dataset), which achieved a remarkable result 100%. 
For this experiment we have used another two datasets. The datasets are UCSD and JHU-CROWD dataset. The UCSD dataset and JHU-CROWD dataset contain total 2500 data and each classes contain 500 data and 4000 and each classes 800. For training we have used 80% and testing 20% respectively.All datasets we have divided into five folds.Figure 7 (a) shows a graph based on the results of the five classes and Figure 8 shows the Confusion Matrices for the test dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure7(b) clearly shows that, from 0 to 20 epochs, there is no considerable change in the accuracy, whereas from 20 to 40 epochs, we observed no loss of data. However, for 40 to 100 epochs, the accuracy continued to increase. Finally, the accuracy for 100 epochs was 100%. We have clearly seen that from 0 to 20 epochs there is slowly taking place loss of data, whereas from 20 to 40 and 40 to 100 epochs, there is a rapid loss of data. Finally, the data loss at 100 epochs is 0.01437. On the other hand, Figure7(b)shows the train accuracy and Val-accuracy; from 0 to 20 epochs, there is a slow change in the val-accuracy and at the same time no considerable change in the train accuracy. However, for 20 to 40 epochs and 40 to 100 epochs, it slowly changed with the Train accuracy as well as 20 to 40 and 40 to 100 epochs, with no considerable change in the val-accuracy. Finally, the train accuracy was 0.01437 and the val-accuracy was 1.0000. Figure7(b) shows that when the epoch was 0 to 20 epochs, the data loss was 0.35. Subsequently,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Five Classes graphical presentation results</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 1 Figure</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='19,42.52,178.87,525.00,63.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='20,42.52,178.87,525.00,315.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='21,42.52,178.87,525.00,333.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='22,42.52,178.87,525.00,177.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='23,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='24,42.52,178.87,525.00,244.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='25,42.52,178.87,525.00,193.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure</ns0:figDesc><ns0:graphic coords='26,42.52,178.87,525.00,236.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,133.15,413.59,192.87' 
type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,371.12,454.91,167.71' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,141.73,379.87,413.59,186.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of Eight Real World Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>)</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>158 x 238</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>UCF-CC-50 (Idrees et al. (2013b))</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell cols='2'>2101 x 2888 No</ns0:cell></ns0:row><ns0:row><ns0:cell>Mall (Chen et al. (2012))</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>480 x 640</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>WorldExpo'10 (Zhang et al. (2016a))</ns0:cell><ns0:cell>3980</ns0:cell><ns0:cell>576 x 720</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ShanghaiTech Part A (Zhang et al. (2016d)) 482</ns0:cell><ns0:cell>589 x 868</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ShanghaiTech Part B (Zhang et al. (2016d)) 716</ns0:cell><ns0:cell>768 x 1024</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>JHU-CROWD (Sindagi et al. (2019))</ns0:cell><ns0:cell>4,250</ns0:cell><ns0:cell>1450 x 900</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>NWPU-Crowd (Wang et al. (2020))</ns0:cell><ns0:cell>5,109</ns0:cell><ns0:cell cols='2'>2311 x 3383 Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>PROPOSED HAJJ-CROWD DATASET</ns0:cell><ns0:cell>27000</ns0:cell><ns0:cell>1914 x 922</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
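Referring back to the split described for Experiment 1 (80% training, 20% testing, and five folds per dataset), the sketch below shows one way such a protocol could be reproduced with scikit-learn; the synthetic file list merely stands in for the 2,500 UCSD frames with 500 images per density class, and the use of stratified folds is an assumption.

```python
# Sketch of an 80/20 split and a five-fold protocol over a synthetic file list.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

paths = np.array([f"img_{i:05d}.jpg" for i in range(2500)])
labels = np.repeat(["very low", "low", "medium", "high", "very high"], 500)

# 80% training / 20% testing, keeping the five classes balanced.
tr_x, te_x, tr_y, te_y = train_test_split(paths, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
print(len(tr_x), len(te_x))                  # 2000 500

# Five folds over the training portion, as in the cross-dataset comparison runs.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for k, (tr_idx, va_idx) in enumerate(skf.split(tr_x, tr_y)):
    print(f"fold {k}: {len(tr_idx)} train / {len(va_idx)} validation")
```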
"Faculty of Computing and Informatics, Multimedia University, Cyberjaya, Persiaran Multimedia, 63100, Cyberjaya, Malaysia. romanbhuiyanpv@gmail.com 25 Jan. 22 Dear Editor, Based on your comments on 19 Jan. 22, we are submitting a revised manuscript titled A deep crowd density classification model for Hajj pilgrimage using fully convolutional neural network, ID# CS2021:10:66465:2:0:NEW, for consideration in the Peer J Computer Science. I would especially like to thank the reviewers for their thoughtful and constructive comments. These are the most helpful reviewers’ comments I have received in 4 years of paper writing. I have included a response letter that outlines how the language editor’s comments have been addressed in the revised manuscript. Our responses are highlighted in bold. We look forward to your positive response. With Best Regards, Md Roman Bhuiyan Multimedia University, Malaysia On behalf of all authors. Additional comments This paper needs to consider some suggestions to improve it like: (not taken into account in the first revision) 1- In the conclusions section, the findings should be explained clearly. The findings were explained in below: The proposed FCNN-based technique attained a final accuracy of 100% for our own created dataset. Additionally, the proposed dataset was classified with a final accuracy of 97% using the ResNet-based technique. 2- Conclusions have significantly improved. The authors should elaborate more on the practical implications of their study, as well as the limitations of the study, and further research opportunities. Based on the reviewer comments, we have updated the conclusion section: - It will help to alert the security personnel before overcrowding happen. In the future, we will investigate crowd behavior and pose estimation algorithms in conjunction with temporal information. "
Here is a paper. Please give your review comments after reading it.
355
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Urdu is a widely used language in South-Asia and worldwide. While there are similar datasets available in English, we created the first multi-label emotion dataset consisting of 6,043 tweets and six basic emotions in the Urdu Nastal&#237;q script. A Multi-Label (ML) classification approach was adopted to detect emotions from Urdu. The morphological and syntactic structure of Urdu makes it a challenging problem for multi-label emotion detection. In this paper, we build a set of baseline classifiers such as machine learning algorithms (Random forest (RF), Decision tree (J48), Sequential minimal optimization (SMO), AdaBoostM1, and Bagging), deep-learning algorithms (Convolutional Neural Networks (1D-CNN), Long short-term memory (LSTM), and LSTM with CNN features) and transformer-based baseline (BERT). We used a combination of text representations: stylometric-based features, pre-trained word embedding, word-based n-grams, and character-based n-grams. The paper highlights the annotation guidelines, dataset characteristics and insights into different methodologies used for Urdu based emotion classification. We present our best results using Micro-averaged F1, Macro-averaged F1, Accuracy, Hamming Loss (HL) and Exact Match (EM) for all tested methods.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Twitter is a micro blogging platform which is used by millions daily to express themselves, share opinions, and to stay informed. Twitter is an ideal platform for researchers for years to study emotions and predict the outcomes of experimental interventions <ns0:ref type='bibr'>(Mohammad and</ns0:ref><ns0:ref type='bibr'>BravoMarquez, 2017&#894; Mohammad et al., 2015)</ns0:ref>. Studying emotions in text helps us to understand the behaviour of individuals <ns0:ref type='bibr' target='#b65'>(Plutchik, 1980</ns0:ref><ns0:ref type='bibr'>, 2001&#894; James A Russell, 1977&#894; Ekman, 1992)</ns0:ref> and gives us the key to people's feelings and perceptions.</ns0:p><ns0:p>Social media text can represent various emotions: happiness, anger, disgust, fear, sadness, and surprise.</ns0:p><ns0:p>One can experience multiple emotions <ns0:ref type='bibr'>(Strapparava and</ns0:ref><ns0:ref type='bibr'>Mihalcea, 2007&#894; Li et al., 2017)</ns0:ref> in a small chunk of text while there is a possibility that text could be emotionless or neutral, making it a challenging problem to tackle. It can be easily categorized as a multilabel classification task where a given text can be about any emotion simultaneously. Emotion detection in its true essence is a multilabel classification problem since a single sentence may trigger multiple emotions such as anger and sadness. This increases the complexity of the problem and makes it more challenging to classify in a textual setting.</ns0:p><ns0:p>While there are multiple datasets available for multilabel classification in English and other Europian languages, low resource language like Urdu still requires a dataset. The Urdu language is the combination of Sanskrit, Turkish, Persian, Arabic and recently English making it even more complex to identify the true representation of emotions because of the morphological and syntactic structure <ns0:ref type='bibr' target='#b0'>Adeeba and Hussain (2011)</ns0:ref>. 
However, the structural similarities of Urdu with Hindi and other South Asian languages make it resourceful for the similar languages. Urdu is the national language of Pakistan that is spoken by more PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>than 170 million people worldwide as first and second language. 1 Needless to say that Urdu is also widely used on social media using right to left Nastal&#237;q script.</ns0:p><ns0:p>Therefore, a multilabel emotion dataset for Urdu was long due and needed for understanding public emotions, especially applicable in natural language applications in disaster management, public policy, commerce, and public health. It should also be noted that emotion detection directly aids in solving other text related classification tasks such as sentiment analysis <ns0:ref type='bibr' target='#b41'>(Khan et al., 2021)</ns0:ref>, human aggressiveness and emotion detection <ns0:ref type='bibr'>(Bashir et al., 2019&#894; Ameer et al., 2021)</ns0:ref>, humor detection <ns0:ref type='bibr' target='#b78'>(Weller and Seppi, 2019)</ns0:ref>, question answering and fake news detection <ns0:ref type='bibr' target='#b20'>(Butt et al., 2021a&#894; Ashraf et al., 2021a)</ns0:ref>, depression detection <ns0:ref type='bibr' target='#b57'>(Mustafa et al., 2020)</ns0:ref>, and abusive and threatening language detection <ns0:ref type='bibr' target='#b11'>(Ashraf et al., 2021b</ns0:ref><ns0:ref type='bibr'>, 2020&#894; Butt et al., 2021b&#894; Amjad et al., 2021)</ns0:ref>.</ns0:p><ns0:p>We created a Nastal&#237;q Urdu script dataset for multilabel emotion classification consisting of 6043 tweets using Ekman's six basic emotions <ns0:ref type='bibr' target='#b27'>(Ekman, 1992)</ns0:ref>. The dataset is divided into the train and test split which is publicly available along with the evaluation script. The task requires you to classify the tweet as one, or more of the six basic emotions which is the best representation of the emotion of the person tweeting. The paper presents machinelearning and neural baselines for comparison and shows that out of the various machine and deeplearning algorithms, RF performs the best and gives macroaveraged F 1 score of 56.10%, microaveraged F 1 score of 60.20%, and M1 accuracy of 51.20%.</ns0:p><ns0:p>The main contributions of this research are as follows:</ns0:p><ns0:p>&#8226; Urdu language dataset for multiclass emotion detection, containing six basic emotions (anger, disgust, joy, fear, surprise, and sadness) (publicly available&#894; see a link below)&#894;</ns0:p><ns0:p>&#8226; Baseline results of machinelearning algorithms (RF, J48, DT, SMO, AdaBoostM1, and Bagging) and deeplearning algorithms (1DCNN, LSTM, and LSTM with CNN features) to create a bench mark for multilabel emotion detection using four modes of text representations: wordbased n grams, characterbased ngrams, stylometrybased features, and pretrained word embeddings.</ns0:p><ns0:p>The rest of the paper is structured as follows.</ns0:p><ns0:p>Section 2 explains the related work on multilabel emotion classification datasets and techniques. Sec tion 3 discusses the methodology including creation of the dataset. Section 4 presents evaluation of our models. Section 5 analyzes the results. 
Section 6 concludes the paper and potential highlights for the future work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Emotion detection has been extended across a number of overlapping fields. As a result, there are a number of publicly available datasets for emotion detection.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Emotion Datasets</ns0:head><ns0:p>EmoBank <ns0:ref type='bibr' target='#b19'>(Buechel and Hahn, 2017)</ns0:ref> is an English corpus of 10,000 sentences using the valence arousal dominance (VAD) representation format annotated with dimensional emotional metadata. EmoBank distinguishes between emotions of readers and writers and is built upon multiple genres and domains.</ns0:p><ns0:p>A subset of EmoBank is birepresentationally annotated on Ekman's basic emotions which helps it in mapping between both representative formats. Affective text corpus <ns0:ref type='bibr' target='#b74'>(Strapparava and Mihalcea, 2007)</ns0:ref> is extracted from news websites (Google News, Cable News Network etc.,) to provide Ekman's emotions (e.g., joy, fear, surprise), valence (positive or negative polarity) and explore the connection between lexical semantics and emotions in news headlines. The emotion annotation is set to [0, 100] where 100 is defined as maximum emotional load and 0 indicates missing emotions completely. Whereas, annotations for valence is set to <ns0:ref type='bibr'>[100,</ns0:ref><ns0:ref type='bibr'>100]</ns0:ref> in which 0 signifies neutral headline, 100 and 100 represent extreme negative and positive headlines respectively. DailyDialog <ns0:ref type='bibr' target='#b47'>(Li et al., 2017)</ns0:ref> is a multiturn dataset for human dialogue. It is manually labelled with emotion information and communication intention and contains 13,118 sentences. The paper follows the six main Ekman's emotions (fear, disgust, anger, and surprise etc.,) complemented by the 'no emotion' category. Electoral Tweets is another dataset <ns0:ref type='bibr' target='#b55'>(Mohammad et al., 2015)</ns0:ref> which obtains the information through electoral tweets to classify emotions <ns0:ref type='bibr'>(Plutchik's emotions)</ns0:ref> and sentiment (positive/negative). The dataset consists of over 100,000 responses Emotional Intensity <ns0:ref type='bibr' target='#b54'>(Mohammad and BravoMarquez, 2017)</ns0:ref> dataset was created to detect the writers emotional intensity of emotions. The dataset consists of 7,097 tweets where the intensity is analysed by bestworst scaling (BWS) technique. The tweets were annotated with intensities of sadness, fear, anger, and joy using Crowdsourcing. Emotion Stimulus is a dataset <ns0:ref type='bibr' target='#b30'>(Ghazi et al., 2015)</ns0:ref> that identifies the textual cause of emotion. It consists of the total number of 2,414 sentences out of which 820 were annotated with both emotions and their cause, while 1594 were annotated just with emotions. Grounded Emotions dataset <ns0:ref type='bibr' target='#b50'>(Liu et al., 2017)</ns0:ref> was designed to study the correlation of users' emotional state and five types of external factors namely user predisposition, weather, social network, news exposure, and timing. The dataset was built upon social media and contains 2,557 labelled instances with 1,369 unique users. 
Out of these, 1525 were labelled as happy tweets and 1,032 were labelled as sad tweets.</ns0:p><ns0:p>FbValenceArousal dataset <ns0:ref type='bibr' target='#b67'>(PreotiucPietro et al., 2016)</ns0:ref> consisting of 2,895 social media posts were collected to train models for valence and arousal. It was annotated by two psychologically trained persons on two separate ordinal ninepoint scales with valence (sentiment) or arousal (intensity). The time interval was the same for every message with distinct users. Lastly, Stance Sentiment Emotion Corpus (SSEC) dataset <ns0:ref type='bibr' target='#b73'>(Schuff et al., 2017)</ns0:ref> is an extension of SemEval 2016 dataset with a total number of 4,868 tweets.</ns0:p><ns0:p>It was extended to enable a relation between annotation layers (sentiment, emotion and stance). Plutchik's fundamental emotions were used for annotation by expert annotators. The distinct feature of this dataset is that they published individual information for all annotators. A comprehensive literature review is summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Although we have taken English language emotion data set for comparison, many low resource languages have been catching up in emotion detection tasks in text <ns0:ref type='bibr'>(Kumar et al., 2019&#894; Arshad et al., 2019&#894; Plaza del Arco et al., 2020&#894; Sadeghi et al., 2021&#894; Tripto and Ali, 2018)</ns0:ref>.</ns0:p><ns0:p>XED is a finegrained multilingual emotion dataset introduced by <ns0:ref type='bibr'>(&#214;hman et al., 2020)</ns0:ref>. The collec tion comprises humanannotated Finnish (25k) and English (30k) sentences, as well as planned annota tions for 30 other languages, bringing new resources to a variety of lowresource languages. The dataset is annotated using Plutchik's fundamental emotions, with neutral added to create a multilabel multiclass dataset. The dataset is thoroughly examined using languagespecific BERT models and SVMs to show that XED performs on par with other similar datasets and is thus a good tool for sentiment analysis and emotion recognition.</ns0:p><ns0:p>The examples of annotated dataset for emotion classification show that the difference lies between annotation schemata (i.e., VAD or multilabel discreet emotion set), the domain of the dataset (i.e., social news, questionnaire, and blogs etc.,), the file format, and the language. Some of the most popular datasets released in the last decade to compare and analyze in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. For a more comprehensive review of existing datasets for emotion detection, we refer the reader to <ns0:ref type='bibr' target='#b56'>(Murthy and Kumar, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Approaches to Emotion Detection</ns0:head><ns0:p>Sentiment classification has been around for decades and has been the centre of the research in natural language processing (NLP) <ns0:ref type='bibr' target='#b80'>(Zhang et al., 2018)</ns0:ref>. Emotion detection and classification became naturally the next step after sentiment task, while psychology is still determining efficient emotion models <ns0:ref type='bibr'>(Barrett et al., 2018&#894; Cowen and</ns0:ref><ns0:ref type='bibr' target='#b23'>Keltner, 2018)</ns0:ref>. NLP researchers embraced the most popular <ns0:ref type='bibr'>(Ekman, 1992&#894; Plutchik, 1980)</ns0:ref> definitions and started working on establishing robust techniques. 
In the early stages, emotion detection followed the direction of Ekman's model <ns0:ref type='bibr' target='#b27'>(Ekman, 1992)</ns0:ref> which classifies emotions in six categories (disgust, anger, joy, fear, surprise, and sadness). Many of the recent work published in emotion classification follows the wheel of emotions <ns0:ref type='bibr' target='#b65'>(Plutchik, 1980</ns0:ref><ns0:ref type='bibr' target='#b66'>(Plutchik, , 2001) )</ns0:ref> which classifies emotions as (fearanger, disgusttrust, joysadness, and surpriseanticipation) and Plutchik's <ns0:ref type='bibr' target='#b65'>(Plutchik, 1980)</ns0:ref> eight basic emotions (Ekman's emotion plus anticipation and trust) or the dimensional models making a vector space of linear combination affective states <ns0:ref type='bibr' target='#b37'>(James A Russell, 1977)</ns0:ref>.</ns0:p><ns0:p>Emotion text classification task has been divided into two methods: rulebased and machinelearning based. Famous examples stemming from expert notation can be SentiWordNet <ns0:ref type='bibr' target='#b28'>(Esuli and Sebastiani, 2007)</ns0:ref> and WordNetAffect <ns0:ref type='bibr' target='#b75'>(Strapparava and Valitutti, 2004)</ns0:ref>. Linguistic inquiry and word count (LIWC) <ns0:ref type='bibr' target='#b62'>(Pennebaker et al., 2001)</ns0:ref> is another example assigning lexical meaning to psychological tasks using a set of 73 lexicons. NRC wordemotion association lexicon <ns0:ref type='bibr' target='#b53'>(Mohammad et al., 2013)</ns0:ref> is also an avail able extension of the previous works built using eight basic emotions <ns0:ref type='bibr' target='#b65'>(Plutchik, 1980)</ns0:ref>, whereas, values of VAD <ns0:ref type='bibr' target='#b37'>(James A Russell, 1977)</ns0:ref> were also used for annotation <ns0:ref type='bibr' target='#b77'>(Warriner et al., 2013)</ns0:ref>. Rulebased</ns0:p></ns0:div> <ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_10'>2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>work was superseded by supervised featurebased learning using variations of features such as word em beddings, character ngrams, emoticons, hashtags, affect lexicons, negation and punctuation's <ns0:ref type='bibr'>(Jurgens et al., 2012&#894; Aman and Szpakowicz, 2007&#894; Alm et al., 2005)</ns0:ref>. As part of emotional computing, emotion detection is commonly employed in the educational domain. The authors presented a methodology in the study <ns0:ref type='bibr' target='#b32'>(Halim et al., 2020)</ns0:ref> for detecting emotion in email messages. The framework is built on au tonomous learning techniques and uses three machine learning classifiers such as ANN, SVM and RF and three feature selection algorithms to identify six (neutral, happy, sad, angry, positively surprised, and negatively surprised) emotional states in the email text. Study <ns0:ref type='bibr' target='#b63'>(Plazadel Arco et al., 2020)</ns0:ref> offered research of multiple machine learning algorithms for identifying emotions in a social media text. The findings of experiments with knowledge integration of lexical emotional resources demonstrated that using lexical effective resources for emotion recognition in languages other than English is a potential way to improve basic machine learning systems. 
IDSECM, a model for predicting emotions in textual dialogue, was also presented in <ns0:ref type='bibr' target='#b46'>(Li et al., 2020)</ns0:ref>. Textual dialogue emotion analysis and generic textual emotion analysis were contrasted by the authors. They also listed contextdependence, contagion, and persistence as hallmarks of textual dialogue emotion analysis.</ns0:p><ns0:p>Neural networkbased models <ns0:ref type='bibr' target='#b73'>(Barnes et al., 2017&#894; Schuff et al., 2017)</ns0:ref> techniques like biLSTM, CNN, and LSTM achieve better results compared to featurebased supervised model i.e., SVM and Max Ent. The leading method at this point is claimed using biLSTM architecture aided by multilayer self attention mechanism <ns0:ref type='bibr' target='#b16'>(Baziotis et al., 2018)</ns0:ref>. The stateoftheart accuracy of 59.50% was achieved. In Study <ns0:ref type='bibr' target='#b33'>(Hassan et al., 2021)</ns0:ref> authors examine three approaches: i) employing intrinsically multilingual models&#894; ii) translating training data into the target language, and iii) using a parallel corpus that is au tomatically labelled. English is used as the source language in their research, with Arabic and Spanish as the target languages. The efficiency of various classification models was investigated, such as BERT and SVMs, that have been trained using various features. For Arabic and Spanish, BERTbased monolin gual models trained on target language data outperform stateoftheart (SOTA) by 4% and 5% absolute Jaccard score, respectively. For Arabic and Spanish, BERT models achieve accuracies of 90% and 80% respectively.</ns0:p><ns0:p>One of the exciting studies <ns0:ref type='bibr' target='#b15'>(Basiri et al., 2021)</ns0:ref> proposed a CNNRNN Deep Bidirectional Model based on Attention (ABCDM). ABCDM evaluates temporal information flow in both directions utilizing two independent bidirectional LSTM and GRU layers to extract both past and future contexts. Attention mechanisms were also applied to the outputs of ABCDM's bidirectional layers to place more or less focus on certain words. To minimize feature dimensionality and extract positioninvariant local features, ABCDM uses convolution and pooling methods. The capacity of ABCDM to detect sentiment polarity, which is the most common and significant task in sentiment analysis, is a key metric of its effectiveness.</ns0:p><ns0:p>ABCDM achieves stateoftheart performance on both long review and short tweet polarity classification when compared to six previously suggested DNNs for sentiment analysis.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Research Gap</ns0:head><ns0:p>Some of the important work in Roman Urdu sentiment detection is done by multiple researchers <ns0:ref type='bibr'>(Mehmood et al., 2019&#894; Arshad et al., 2019)</ns0:ref>, however, to the best of our knowledge, no prior work on multilabel emotion classification exists for the Nastal&#237;q Urdu language. From Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, one can observe that no annotated dataset was available for multilabel emotion classification task in Nastal&#237;q script. Detecting Nastal&#237;q script on Twitter requires attention and can further aid in solving problems like abusive language detection, humor detection and depression detection in text. Our motivation was to provide an indepth feature engineering for the task, describing not only lexical features but also embedding, comparing the performance of these features for Nastal&#237;q script in Urdu. 
We also saw a lack of comparison between classifiers. Most of the studies used either only machine learning or only deep learning (DL) techniques, while no comparison was done between ML and DL models, whereas, we gave the baseline results for both ML and DL classifiers.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>MULTIPLE-FEATURE EMOTION DETECTION MODEL</ns0:head><ns0:p>The emotion detection model is illustrated in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> Classification algorithms and methodology thoroughly explained in Section 4.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Dataset</ns0:head><ns0:p>Multilabel emotion dataset in Urdu is neither available nor has any experiments conducted in any do main. Tweets elucidate the emotions of people as they describe their activities, opinions, and events with the world and therefore is the most appropriate medium for the task of emotion classification. The goal of this dataset is to develop a large benchmark in Urdu for the multilabel emotion classification task.</ns0:p><ns0:p>This section describes the challenges confronted during accumulation of a large benchmark twitterbased multilabel emotion dataset and discusses the data crawling method, data collection requirements, data annotation process and guidelines, interannotator agreement, and dataset characteristics and standard ization. Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> contains the examples of the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1'>Data Crawling</ns0:head><ns0:p>The dataset was obtained through Twitter and we use Ekman's emotion keywords for the collection of tweets. Twitter developer application programming interface (API) <ns0:ref type='bibr' target='#b25'>(Dorsey, 2006)</ns0:ref> was used and the resulting tweets were collected in a CSV file. The script for the purpose of scrapping was developed in python which was filtered using hashtags, query strings, and user profile name through Twitter rest API.</ns0:p><ns0:p>For each emotion, the maximum of two thousand tweets were extracted which were later refined and shrunk per keyword based on tweet quality and structure. All the tweets with multiple languages (i.e., Arabic and Persian) were eliminated from the dataset and only the purest Urdu tweets were kept. The total collected tweets, in the end, were twelve thousand. 
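Because Arabic and Persian tweets were removed during collection, a simple script-level filter is useful. The sketch below is a heuristic illustration, not the authors' actual filter: it keeps a tweet only if it contains letters used in Urdu orthography but absent from standard Arabic and Persian spelling.

```python
# Heuristic sketch: keep only tweets containing Urdu-specific letters.
URDU_ONLY_CHARS = set("\u0679\u0688\u0691\u06BA\u06BE\u06C1\u06D2")  # ٹ ڈ ڑ ں ھ ہ ے

def looks_like_urdu(tweet: str) -> bool:
    """True if the tweet contains at least one Urdu-specific letter."""
    return any(ch in URDU_ONLY_CHARS for ch in tweet)

tweets = ["یہ ایک اردو جملہ ہے",      # Urdu example: kept
          "هذه جملة عربية"]           # Arabic example: dropped
print([t for t in tweets if looks_like_urdu(t)])
```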
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science &#8226; Disgust &#8235;&#1578;(&#8236; ) in the text is an inherent response of dislikeness, loathing or rejection to contagious ness.</ns0:p><ns0:p>&#8226; Fear &#8235;&#1601;(&#8236; ) also including anxiety, panic and horror is an emotion in a text which can be seen triggered through a potential cumbersome situation or danger.</ns0:p><ns0:p>&#8226; Sadness ( &#8235;)&#1575;&#1583;&#1575;&#8236; also including pensiveness and grief is triggered through hardship, anguish, feeling of loss, and helplessness.</ns0:p><ns0:p>&#8226; Surprise &#8235;&#1578;(&#8236; ) also including distraction and amazement is an emotion which is prompted by an unexpected occurrence.</ns0:p><ns0:p>&#8226; Happiness ( ) also including contentment, pride, gratitude and joy is an emotion which is seen as a response to wellbeing, sense of achievement, satisfaction, and pleasure.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.3'>Annotation Guidelines</ns0:head><ns0:p>The following guidelines were set for the annotation process of the dataset:</ns0:p><ns0:p>&#8226; Three specialised annotators in the field of Urdu were selected. Both annotators had the minimum qualification of Masters in Urdu language making them the most suitable persons for the job.</ns0:p><ns0:p>&#8226; Complete dataset was provided to two of the annotators and they were asked to classify the tweets in one or multiple emotion labels with a minimum of one and maximum of six emotions. The existing emotions were labelled as 1 under each category and the rest were marked 0.</ns0:p><ns0:p>&#8226; The annotator's results were observed and analysed after every 500 tweets to ensure the credibility and correct pattern of annotation.</ns0:p><ns0:p>&#8226; The annotators were asked to identify emojis in a tweet with their corresponding labels. They were informed of the possibility of varying context between emojis and text. In such a case, multiple suited labels were selected to portray multiple or mix emotions.</ns0:p></ns0:div> <ns0:div><ns0:head>7/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; Major conflicts where at least one category was labelled differently by the previous two annota tors were identified and the labelled dataset for the conflicting tweets was resolved by the third annotator.</ns0:p><ns0:p>Interannotator agreement (IAA) was computed using Cohan's Kappa Coefficient <ns0:ref type='bibr' target='#b22'>(Cohen, 1960)</ns0:ref>. We achieved kappa coefficient of 71% which shows the strength of our dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.4'>Dataset Characteristics and Standardization</ns0:head><ns0:p>UrduHack 3 was used to normalize the tweets. Urdu text has diacritics (a glyph added to an alphabet for pronunciation) which needs to be removed. For both word and character level normalization, we removed the diacritics, added spaces after digits, punctuation marks, and stop words 4 form the data. For character level normalization, Unicode were assigned to each character. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows the frequently occurring emotions. In a multilabel setting, several emotions appear in a tweet, hence, the number of emotions exceed the number of tweets. The emotion anger ( ) is seen to be the most common emotion used in the tweets. 
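As an illustration of the inter-annotator agreement reported above (a Cohen's kappa of 71%), the sketch below computes a per-emotion kappa between two annotators with scikit-learn and averages it. The binary labels are toy values, and averaging per-label kappas is one common choice assumed here, since the paper reports a single aggregate figure.

```python
# Sketch: per-emotion Cohen's kappa between two annotators on toy labels.
from sklearn.metrics import cohen_kappa_score

emotions = ["anger", "disgust", "fear", "sadness", "surprise", "happiness"]
annotator_1 = {"anger":     [1, 0, 1, 0, 1], "disgust":   [0, 0, 1, 0, 0],
               "fear":      [0, 1, 0, 0, 0], "sadness":   [1, 1, 0, 1, 0],
               "surprise":  [0, 0, 0, 1, 1], "happiness": [0, 0, 0, 0, 1]}
annotator_2 = {"anger":     [1, 0, 0, 0, 1], "disgust":   [0, 0, 1, 0, 0],
               "fear":      [0, 1, 0, 1, 0], "sadness":   [1, 1, 0, 1, 0],
               "surprise":  [0, 0, 1, 1, 1], "happiness": [0, 0, 0, 0, 1]}

kappas = {e: cohen_kappa_score(annotator_1[e], annotator_2[e]) for e in emotions}
print({e: round(k, 2) for e, k in kappas.items()})
print("mean kappa:", round(sum(kappas.values()) / len(kappas), 2))
```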
Meanwhile Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the statistics of the tweets after normalization in train and test dataset.</ns0:p><ns0:p>The entire dataset has the vocabulary of 14101 words while each tweet average length is 9.24 words and 46.65 characters. </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>BASELINE</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Feature Representations</ns0:head><ns0:p>Four types of text representation were used: character ngrams, word ngrams, stylometric features, and pretrained word embeddings.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.1'>Count Based Features</ns0:head><ns0:p>Character ngrams and token ngrams were used as countbased features. We generated word uni, bi , and trigrams and character ngrams from trigrams to ninegrams. Term frequencyinverse document frequency (TFIDF), a feature weighting technique on countbased features 5 was also used. ScikitLearn 6 was used for the extraction of all features.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.2'>Stylometry Based Features</ns0:head><ns0:p>The second set was stylometric based features <ns0:ref type='bibr'>(Lex et al., 2010&#894; Grieve, 2007)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Ratio by N (where N = total no of characters in Urdu tweets) of white spaces by N, digits by N, letters by N, special characters by N, tabs by N, upper case letter and characters by N.</ns0:p><ns0:p>The wordbased features are as follows:</ns0:p><ns0:p>&#8226; Average of word length, sentence length, words per paragraph, sentence length in characters, and number of sentences,</ns0:p><ns0:p>&#8226; Number of paragraphs,</ns0:p><ns0:p>&#8226; Ratio of words with length 3 and 4,</ns0:p><ns0:p>&#8226; Percentage of question sentences,</ns0:p><ns0:p>&#8226; Total count of unique words and the total number of words.</ns0:p><ns0:p>The vocabularyrichness based features are as follows:</ns0:p><ns0:p>&#8226; BrunetWMeasure,</ns0:p><ns0:p>&#8226; HapaxLegomena,</ns0:p><ns0:p>&#8226; HonoreRMeasure,</ns0:p><ns0:p>&#8226; SichelSMeasure,</ns0:p><ns0:p>&#8226; SimpsonDMeasure,</ns0:p><ns0:p>&#8226; uleKMeasure.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.3'>Pre-trained Word Embeddings</ns0:head><ns0:p>Word embeddings were extracted from the tweets using fastText 7 with 300 vector space dimensions per word. Only fastText was used as it contains the most dense vocabulary for Urdu Nastal&#237;q script .Since the text was informal social media tweets, it was highly probable that some words are missing in the dictionary. In that condition, we randomly assigned all 300 dimensions with a uniform distribution in</ns0:p><ns0:formula xml:id='formula_0'>[&#8722;0.1, 0.1].</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2'>Setup and Classifiers</ns0:head><ns0:p>We treated multilabel emotion detection problem as a supervised classification task. Our goal was to pre dict multiple emotions from the six basic emotions. We used tenfold cross validation for this task which ensures the robustness of our evaluation. The tenfold cross validation takes 10 equal size partitions. Out of 10, 1 subset of the data is retained for testing and the rest for training. This method is repeated 10 times with each subset used exactly once as a testing set. The 10 results obtained are then averaged to produce estimation. 
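Putting the feature description of Section 4.1.1 and the ten-fold protocol together, the sketch below builds TF-IDF-weighted word 1-3-grams and character 3-9-grams with scikit-learn and iterates over ten folds. The placeholder tweets and the classifier-free loop are illustrative only, not the authors' pipeline.

```python
# Sketch: TF-IDF word/character n-gram features and a ten-fold split.
from sklearn.model_selection import KFold
from sklearn.pipeline import FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer

# Ten placeholder "tweets"; the real pipeline runs on the 6,043-tweet corpus.
tweets = [f"اردو ٹویٹ نمبر {i}" for i in range(10)]

features = FeatureUnion([
    ("word_1_3", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char_3_9", TfidfVectorizer(analyzer="char", ngram_range=(3, 9))),
])
X = features.fit_transform(tweets)
print(X.shape)

# Ten-fold protocol as described above (a classifier would be fit per fold).
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=10).split(tweets)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```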
For our emotion detection problem binary relevance and label combination (LC) transforma tion methods were used along with various machine and deeplearning algorithms: RF, J48, DT, SMO, AdaBoostM1, Bagging, 1D CNN, and LSTM. As evidently these algorithms perform extremely well for several NLP tasks such as sentiment analysis, and recommendation systems <ns0:ref type='bibr'>(Kim, 2014&#894; Hochreiter and Schmidhuber, 1997&#894; Breiman, 2001&#894; Kohavi, 1995&#894; Sagar et al., 2020&#894; Panigrahi et al., 2021a,b)</ns0:ref>.</ns0:p><ns0:p>We used several machine learning algorithms to test the performance of the dataset namely: RF, J48, DT, SMO, AdaBoostM1 and Bagging. AdaBoostM1 <ns0:ref type='bibr' target='#b29'>(Freund and Schapire, 1996)</ns0:ref> is a very famous en semble method which diminishes the hamming loss by creating models repetitively and assigning more weight to misclassified pairs until the maximum model number is not achieved. RF is another ensem ble classification method based on trees which is differentiated by bagging and distinct features during learning. It is robust as it overcomes the deficiencies of decision trees by combining the set of trees and input variable set randomization <ns0:ref type='bibr' target='#b18'>(Breiman, 2001)</ns0:ref>. Bagging (Bootstrap Aggregation) <ns0:ref type='bibr' target='#b17'>(Breiman, 1996)</ns0:ref> is implemented which aggregates multiple machine learning predictions and reduces variance to give a more accurate result. Lastly, SMO <ns0:ref type='bibr' target='#b34'>(Hastie and Tibshirani, 1998)</ns0:ref> which decomposes multiple variables into a series of subproblems and optimizes them as mentioned in the previous studies. DT and J48 were also tested as described in the papers <ns0:ref type='bibr'>(Salzberg, 1994&#894; Kohavi, 1995)</ns0:ref>, however, were unable to achieve Manuscript to be reviewed</ns0:p><ns0:p>Computer Science substantial results. For machine learning algorithms we used MEKA 8 default parameters to provide the baseline scores.</ns0:p><ns0:p>We experimented with our multilabel classification task with two deep learning models: 1dimensional convolutional neural network (1D CNN) and long shortterm memory (LSTM). We used LSTM (Hochre iter and Schmidhuber, 1997) which is the enhanced version of the recurrent neural network with the dif ference in operational cells and enables it to keep or forget information increasing the learning ability for longtime sequence data. CNN <ns0:ref type='bibr' target='#b42'>(Kim, 2014)</ns0:ref> takes the embeddings vector matrix of tweets as input with the multilabel distribution and then passes through filters and hidden layers. We used Adam optimizer, categorical crossentropy as a loss function, softmax activation function on the last layer, and dropout layers of 0.2 in both LSTM and 1DCNN. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the architecture of 1DCNN while Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows the architecture of LSTM model. Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> shows the fully connected layers and their parameters for 1DCNN and LSTM. The tweets were passed as word piece embeddings which were later channelled into a sequence.</ns0:p><ns0:p>Keras 9 and Pytorch 10 framework were used for the implementation of all these algorithms. For additional details on the experiments, please review the publicly available code. 
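The following is a minimal Keras sketch of the LSTM baseline as configured above (0.2 dropout, the Adam optimizer, categorical cross-entropy and a softmax over the six emotions). The vocabulary size follows Table 3, but the sequence length and unit counts are assumptions, and in the actual setup the embedding layer would be initialised from the 300-dimensional fastText vectors of Section 4.1.3.

```python
# Sketch of the LSTM baseline with the hyperparameters stated in the text.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB_DIM, N_EMOTIONS = 14101, 40, 300, 6

model = models.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, EMB_DIM),   # would be seeded with fastText vectors
    layers.Dropout(0.2),
    layers.LSTM(128),
    layers.Dropout(0.2),
    layers.Dense(N_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```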
11</ns0:p><ns0:p>BERT <ns0:ref type='bibr' target='#b24'>(Devlin et al., 2018)</ns0:ref> has proven in multiple studies to have a better sense of flow and language context as it is trained bidirectionally with an attention mechanism. We used the following BERT param eters: maxseqlength = 64, batch size = 32, learning rate =2e5, and numtrainepochs = 2.0. We used 0.1 dropout probability, 24 hidden layers, 340M parameters and 16 attention heads respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Metrics and Evaluation</ns0:head><ns0:p>To evaluate multilabel emotion detection, we used multilabel accuracy, microaveraged F 1 and macro averaged F 1 . Multilabel accuracy in the emotion classification considers the subsets of the actual classes for prediction as a misclassification is not hard wrong or right i.e., predicting two emotions correctly rather than declaring no emotion. For multilabel accuracy, we considered one or more gold label mea sures compared with obtained emotion labels or set of labels against each given tweet. We take the size of the intersection of the predicted and gold label sets divided by the size of their union and then average it over all tweets in the dataset.</ns0:p><ns0:p>Microaveraging, in this case, will take all True Positives (TP), True Negatives (TN), False Positives </ns0:p><ns0:formula xml:id='formula_1'>F 1 macr o = 1 | E | &#8721; e&#8712;E F e .</ns0:formula><ns0:p>Exact Match equation is mentioned below which explains the percentage of instance whose predicted labels (P t ) are exactly matching same the true set of labels (G t ).</ns0:p><ns0:formula xml:id='formula_2'>E xac t M at ch = 1 | T | T &#8721; i =1 G t = P t</ns0:formula><ns0:p>Hamming Loss equation mentioned below computes the average of incorrect labels of an instance. Lower the value, higher the performance of the classifier as this is a loss function.</ns0:p><ns0:formula xml:id='formula_3'>H ammi ng Loss = 1 | T S | T &#8721; i =1 S &#8721; j =1 G i j = P i j</ns0:formula></ns0:div> <ns0:div><ns0:head n='5'>RESULT ANALYSIS</ns0:head><ns0:p>We conducted several experiments with detailed insight into our dataset. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>embeddings and deeplearning approaches that might help to improve the results. The Table <ns0:ref type='table' target='#tab_12'>10</ns0:ref> shows the state of the art results for multilabel emotion detection in English and proves that our baseline results are in line with stateoftheart work in the machine and deep learning. In addition, computational complexity can make the reproducibility challenging of the proposed meth ods. Few years ago, it was difficult to produce the results as they can take days or weeks, although re searchers have access to GPU computing. Classifiers such as Random Forest and Adaboost that are used in this paper can lead to scalability issues. However, scalability can be addressed with appropriate feature engineering and preprocessing techniques in both academia and industry Jannach and Ludewig (2017)&#894; <ns0:ref type='bibr' target='#b48'>Linden et al. (2003)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In (2) Another limitation is fastText pretrained word embeddings does not have all of the vocab for Urdu language, therefore, some of the words could be missed as outofvocabulary. 
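As a concrete reference for the measures defined in Section 4.3, the sketch below computes them with scikit-learn on toy multi-label matrices (rows are tweets, columns the six emotions). Here accuracy_score over label-indicator matrices is the Exact Match ratio, and jaccard_score with average='samples' corresponds to the multi-label accuracy described above; the label values themselves are illustrative.

```python
# Sketch of the multi-label evaluation measures on toy label matrices.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             jaccard_score)

y_true = np.array([[1, 0, 0, 1, 0, 0],
                   [0, 0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 0, 0],
                   [0, 0, 1, 1, 1, 0],
                   [0, 1, 0, 0, 0, 1]])

print("micro-F1     :", f1_score(y_true, y_pred, average="micro"))
print("macro-F1     :", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("ML accuracy  :", jaccard_score(y_true, y_pred, average="samples"))
print("Exact Match  :", accuracy_score(y_true, y_pred))
print("Hamming Loss :", hamming_loss(y_true, y_pred))
```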
As a result, performance of the deep learning classifiers are poor as compared to the machine learning classifiers. Our dataset is expected to meet the challenges of identifying emotions for a wide range of NLP applications:</ns0:p><ns0:p>disaster management, public policy, commerce, and public health. In future, we expect to outperform our current results using novel methods, extend emotions, and detect the intensity of emotions in Urdu Nastal&#237;q script.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>https://www.ethnologue.com/language/urd 2/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021) Manuscript to be reviewed Computer Science of two questionnaires taken online about style, purpose, and emotions in electoral tweets. The tweets were annotated via Crowdsourcing.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>. The figure explains the basic architecture followed for both machine learning and deep learning classifiers. Our model has three main phases: data collection, feature extraction (i.e., character ngrams, word ngrams, stylometrybased features, and pretrained word embedding), and emotion detection classification. 4/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Multilabel emotion detection model for Urdu language</ns0:figDesc><ns0:graphic coords='7,141.73,85.69,413.58,372.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Examples in our dataset (translated by Google).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.58,287.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>7 https://fasttext.cc/docs/en/crawlvectors.html 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 1DCNN Model Architecture.</ns0:figDesc><ns0:graphic coords='11,141.73,216.01,413.58,131.83' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. LSTM Model Architecture.</ns0:figDesc><ns0:graphic coords='11,141.73,390.98,413.58,127.33' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>(F 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>FP), and False Negatives (FN) individually for each tweets label to calculate precision and recall. The mathematical equations of microaveraged F 1 are provided in 1,2,3 respectively: P mi cr o = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E number o f p(e) mi cr o = 2 &#215; P mi cr o &#215; R mi cr o P mi cr o + R mi cr o . The c(e) notation denotes the number of samples correctly assigned to the label e out of sample E, p(e) defines the number of samples assigned to e, and (e) represents the number of actual samples in e. Thus, Pmicro is the microaveraged precision score, and Rmicro is the microaveraged recall score. Macro averaging, on the other hand, uses precision and recall based on different emotion sets, calculating the metric independently for each class treating all classes equally. Then F 1 is calculated as mentioned in the equation for both. 
The mathematical equations of macroaveraged F 1 are provided in 1,2,3 respectively: P e = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E number o f p(e) ,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>this research, we created a multilabel emotion dataset in Urdu based on social media which is the first for Urdu Nastal&#237;q script. Data characteristics for Urdu needed to refine social media data were defined. Impact of results were shown by conducting experiments, analysing results on stylometricbased features, pretrained word embedding, word ngrams, and character ngrams for multilabel emotion detection. Our experiments concluded that RF combined with BR performed the best with unigram features achieving 56.10 microaveraged F 1 , 60.20 macroaveraged F 1 , and 51.20 M1 accuracy. The superiority of machinelearning techniques over neural baselines identified a vacuum for the neural net techniques to experiment. There are several limitations of this work: (1) Reproducibility is one of the major concern because of the computational complexity and scalability of the algorithms such as RF and Adaboost.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of stateoftheart in multilabel emotion detection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Link EmoBank Affective Text DailyDialog Electoral Tweets EmoInt Emotion Stimulus Grounded Emotions FbValence Arousal Stance Corpus Emotion Sentiment</ns0:cell><ns0:cell>Size 10,000 1,250 13,118 100,000 7,097 2,414 2,557 2,895 4,868</ns0:cell><ns0:cell>Language English English English English English English English English English</ns0:cell><ns0:cell>Data source (MASCIde et al. (2010)+SE07Strapparava and Mihalcea (2007)) News websites (i.e. Google news, CNN) Dialogues from human conversations Twitter Twitter FrameNets annotated data Twitter Facebook Twitter</ns0:cell><ns0:cell>Composition VAD Ekmans emotions + valence in dication (positive/negative). Ekman's emotion + No emotion Plutchik's emotions + sentiment (positive/negative) Intensities of sadness, fear, anger, and joy Ekman's emotions and shame Emotional state (happy or sad) + five types of external fac tors namely user predisposition, weather, social network, news exposure, and timing valence (sentiment) + arousal (intensity) Plutchik's emotions</ns0:cell></ns0:row></ns0:table><ns0:note>Section 3.1 explains all the details related to dataset: data crawling, data annotation, and character istics and standardization while Section 4.1 talks about features types and features extraction methods.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Distribution of emotions in the dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Emotions Anger ( ) Disgust &#8235;&#1578;(&#8236; ) Fear &#8235;&#1601;(&#8236; ) Sadness ( &#8235;)&#1575;&#1583;&#1575;&#8236; Surprise &#8235;&#1578;(&#8236; ) Happiness ( )</ns0:cell><ns0:cell>Train 833 756 594 2,206 1,572 1,040</ns0:cell><ns0:cell>Test 191 203 184 560 382 278</ns0:cell></ns0:row><ns0:row><ns0:cell>226 227</ns0:cell><ns0:cell cols='3'>3.1.2 Data Annotation As mentioned previously, the Twitter hashtags were used for extracting relevant tweets of a particular emotion. However, since a tweet can contain multiple emotions, the keywords alone cannot be a reliable</ns0:cell></ns0:row></ns0:table><ns0:note>228 method for annotation. 
Therefore, data annotation standards were prepared for expert annotators to follow 229 and maintain consistency throughout the task.230&#8226; Anger ( ) also includes annoyance and rage can be categorized as a response to a deliberate 231 attempt of anticipated danger, hurt or incitement.232 6/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Statistics based on train and test dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset All Train Test</ns0:cell><ns0:cell>Tweets 6,043 4,818 1,225</ns0:cell><ns0:cell>Words 44,525 44,525 11,425</ns0:cell><ns0:cell>Avg. Word 9.24 9.24 9.32</ns0:cell><ns0:cell>Char 224,806 224,806 57,658</ns0:cell><ns0:cell>Avg. Char 46.65 46.65 47.06</ns0:cell><ns0:cell>Vocab 14,101 9,840 4,261</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Deep learning parameters for 1DCNN and LSTM.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Epochs Optimizer Loss Learning Rate Regularization Bias Regularization Validation Split Hidden Layer 1 Dimension Hidden Layer 1 Activation Hidden Layer 1 Dropout Hidden Layer 2 Dimension Hidden Layer 2 Activation Hidden Layer 2 Dropout Hidden Layer 3 Dimension Hidden Layer 3 Activation Hidden Layer 3 Dropout</ns0:cell><ns0:cell>1DCNN 100 Adam categorical crossentropy categorical crossentropy LSTM 150 Adam 0.001 0.0001 0.01 0.01 0.1 0.1 16 16 tanh tanh 0.2 32 32 tanh tanh 0.2 64 tanh</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Table 5 shows the result of each of the baseline machine and deeplearning classifiers using word ngrams to detect multilabel emotions from our dataset. Unigram shows the best result on RF in combination with a BR transformation method and it achieves 56.10% of macro F 1 . It outperforms bigram and trigram features. When word uni, bi, and trigrams, features are combined, AdaboostM1 gives the best results and obtains 42.60% of macro F 1 . However, results achieved with combined features are still inferior as compared to individual ngram features. A series of experiments on character ngrams were conducted. Results of char 3gram to char 9gram are mentioned in Table 6. It shows that RF consistently provides the best results paired with BR on character 3gram and obtains the macro F 1 of 52.70%. It is observed that macro F 1 decreases while increasing the number of characters in our features. A combination of character ngram (39) achieved the best results using RF with LC, but still lagged behind all individual ngram measures. Overall, word based ngram feature results are very close to each other and achieves better results than most of the char based ngram features. Best results for multilabel emotion detection using word ngram features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Word 1gram Word 2gram Word 3gram Word 13gram</ns0:cell><ns0:cell>MLC BR LC BR BR</ns0:cell><ns0:cell>SLC RF SMO RF Combination of Word N-gram Acc. 
EM Word N-gram 51.20 32.30 43.60 30.30 39.90 16.60 AdaBoostM1 35.10 14.90</ns0:cell><ns0:cell>HL 19.40 21.70 28.40 30.10</ns0:cell><ns0:cell>MicroF 1 60.20 50.20 50.00 44.50</ns0:cell><ns0:cell>MacroF 1 56.10 47.50 48.10 42.60</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>Table7illustrates the results of stylometrybased features which were tested on a different set of feature groups such as characterbase, wordbase, vocabulary richness and combination of first three</ns0:figDesc><ns0:table /><ns0:note>features. Wordbased feature group depicts the macro F 1 of 42.60% which is trained on Adaboost M1 and binary relevance. Lastly, experiments on deeplearning algorithms such as 1DCNN, LSTM, LSTM 12/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using char ngram features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Char 3gram Char 4gram Char 5gram Char 6gram Char 7gram Char 8gram Char 9gram Char 39</ns0:cell><ns0:cell>MLC BR BR BR BR BR BR BR LC</ns0:cell><ns0:cell>SLC RF Bagging Bagging Bagging RF RF RF Combination of Character N-gram Acc. EM HL Character N-gram 47.20 28.20 21.10 38.60 21.70 25.60 38.30 16.50 28.80 37.80 16.90 29.30 36.10 15.50 31.00 34.80 11.80 31.50 34.80 11.80 31.50 RF 33.60 32.90 12.10</ns0:cell><ns0:cell>MicroF 1 56.60 47.30 47.90 46.30 44.70 45.30 45.10 32.30</ns0:cell><ns0:cell>MacroF 1 52.70 44.60 46.30 45.50 43.80 43.50 43.40 33.90</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using stylometrybased features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Characterbased Wordbased Vocabulary richness All features</ns0:cell><ns0:cell>MLC BR BR BR BR</ns0:cell><ns0:cell>SLC DT AdaBoostM1 35.10 14.90 30.10 Acc. EM HL 33.70 10.7 31.90 AdaBoostM1 34.10 11.80 31.10 AdaBoostM1 35.00 14.90 30.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 44.40 42.40 44.50 42.60 44.50 42.50 44.50 42.50</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using pretrained word embedding features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model 1D CNN fastText (300) 45.00 42.00 36.00 Features (dim) Acc. EM HL LSTM fastText (300) 44.00 42.00 35.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 35.00 54.00 32.00 55.00</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using contextual pretrained word embedding features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model LSTM BERT BERT Contextual Embeddings (768) 15.00 44.00 57.00 Features (dim) Acc. EM HL fastText (300), 1D CNN (16) 46.00 35.00 36.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 34.00 53.00 54.00 37.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>with CNN features show promising results for multilabel emotion detection. LSTM achieves the highest macro F 1 score of 55.00% while 1DCNN and LSTM with CNN features achieve slightly lower macro F 1 scores. Table 8 and Table 9 show the results of deeplearning algorithms. 
Considering four text representations, the bestperforming algorithm is RF with BR that trained on unigram features achieve macro F 1 score of 56.10%. Deep learning algorithms performed well using fastText pretrained word embeddings and results are consistent in all the experiments.</ns0:cell></ns0:row></ns0:table><ns0:note>Notably, machinelearning baseline using word based ngram features achieved highest macro F 1 score of 56.10% comparatively to deeplearning baseline that achieved slightly lower F 1 score of 55.00% using pretrained word embeddings. Pretrained word embedding was not able to obtain the highest results, it might be because fastText does not have all of the vocab for Urdu language and some of the words could be missed as outofvocabulary. Therefore, further research is needed for pretrained word13/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Comparison of stateoftheart results in multilabel emotion detection.In terms of reproducibility, our machine learning algorithm results are much easier to reproduce with MEKA software. It is because default parameters were used to analyze the baseline results. The main challenge for this task is to generate ngram features in a specific .arff format which is the main require ment of this software to run the experiments. For this purpose, we use sklearn library to extract features from the Urdu tweets and then use python code to convert them into .arff supported format. The code is publicly available. Hence, academics and industrial environments can repeat experiments by just follow ing the guidelines of the software.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Ameer et al. (2021) RF Model Zhang et al. (2020) MMS2S Samy et al. (2018) CGRU Ju et al. (2020) MESGN Proposed 1D CNN Proposed RF</ns0:cell><ns0:cell>Features ngram -AraVec, word2vec -fastText word unigram</ns0:cell><ns0:cell>Accuracy 45.20 47.50 53.20 49.4 45.00 51.20</ns0:cell><ns0:cell>MicroF 1 57.30 -49.50 -35.00 60.20</ns0:cell><ns0:cell>MacroF 1 55.90 56.00 64.80 56.10 54.00 56.10</ns0:cell><ns0:cell>HL 17.90 18.30 -18.00 36.00 19.40</ns0:cell></ns0:row><ns0:row><ns0:cell>5.1 Discussion</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='8'>http://meka.sourceforge.net 9 https://keras.io/ 10 https://pytorch.org 11 https://github.com/Noman712/Mutilabel_Emotion_Detection_Urdu/tree/master/code 10/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021)</ns0:note> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:1:2:NEW 28 Nov 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
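As a rough illustration of the feature-extraction and .arff-conversion pipeline mentioned in the discussion above, the sketch below extracts TF-IDF word unigrams with scikit-learn and writes them, together with the six binary emotion labels, into a MEKA-style .arff file. The file name, column names, and exact .arff header are assumptions and do not reproduce the authors' released script.

```python
# Illustrative sketch (not the authors' released code): TF-IDF word unigrams
# via scikit-learn, dumped to a MEKA-style .arff file with the label
# attributes first. File and column names are assumed.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

EMOTIONS = ['anger', 'disgust', 'fear', 'sadness', 'surprise', 'happiness']

df = pd.read_csv('urdu_train.csv')   # assumed: 'tweet' text + one 0/1 column per emotion
vec = TfidfVectorizer(analyzer='word', ngram_range=(1, 1),
                      max_features=1000, use_idf=True, smooth_idf=True)
X = vec.fit_transform(df['tweet']).toarray()

with open('urdu_train.arff', 'w', encoding='utf-8') as f:
    # MEKA reads the number of label attributes from '-C' in the relation name;
    # the first six attributes below are the emotion labels.
    f.write("@relation 'urdu_emotions: -C 6'\n\n")
    for e in EMOTIONS:
        f.write(f'@attribute {e} {{0,1}}\n')
    for j in range(X.shape[1]):
        f.write(f'@attribute tfidf_{j} numeric\n')
    f.write('\n@data\n')
    for labels, feats in zip(df[EMOTIONS].values, X):
        row = list(map(str, labels)) + [f'{v:.6f}' for v in feats]
        f.write(','.join(row) + '\n')
```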
" Original Article Title: “Multi-label emotion classification of Urdu tweets”      Dear Editor:                         Thank you very much for allowing a resubmission of our manuscript “Multi-label emotion classification of Urdu tweets”. We are very happy to have received a positive evaluation, and we would like to express our appreciation to you and all reviewers for the thoughtful comments and helpful suggestions. Reviewers raised several concerns, which we have carefully considered and made every effort to address. We fundamentally agree with all the comments made by the reviewers, and we have incorporated corresponding revisions into the manuscript. Our detailed, point-by-point responses to the editorial and reviewer comments are given below, whereas the corresponding revisions are marked in colored text in the manuscript file. Specifically, blue text indicates changes made in response to the suggestions of reviewers. Additionally, we have carefully revised the manuscript to ensure that the text is optimally phrased and free from typographical and grammatical errors. We believe that our manuscript has been considerably improved as a result of these revisions, and hope that our revised manuscript “Multi-label emotion classification of Urdu tweets” is acceptable for publication in the Peerj Computer Science. We would like to thank you once again for your consideration of our work and for inviting us to submit the revised manuscript. We look forward to hearing from you.    Best regards Hsien-Tsung Chang  Chang Chang Gung University  Department of Computer Science and Information Engineering Taoyuan, Taiwan  E-mail: smallpig@widelab.org                                           Reviewer 1  Additional comments   (Concern#1) The manuscript is centered on a very interesting and timely topic, which is also quite relevant to the themes of PeerJ Computer Science. Organization of the paper is good and the proposed method is quite novel. The length of the manuscript is about right but the keyword list is missing. The paper, moreover, does not link well with recent literature on sentiment analysis that appeared in relevant top-tier journals, e.g., the IEEE Intelligent Systems department on 'Affective Computing and Sentiment Analysis'. Also, latest trends in multilingual sentiment analysis are missing, e.g., see Lo et al.’s recent survey on multilingual sentiment analysis (from formal to informal and scarce resource languages). Finally, check recent resources for multilingual sentiment analysis, e.g., BabelSenticNet.     Author response: Thanks a lot for the suggestion. Author action:  We have added additional paragraphs in the Literature review according to your suggestions and changes can be seen in Section 2 from lines 153 to 164 and 198 to 217 in a marked pdf file.     (Concern#2) Authors seem to handle sentiment analysis simply as a binary classification problem (positive versus negative). What about the issue of neutrality or ambivalence? Check relevant literature on detecting and filtering neutrality in sentiment analysis and recent works on sentiment sensing with ambivalence handling. Author response: Thanks for your concern. Author action: We have added additional details in Section 2 from lines 153 to 164 and 198 to 217 in a marked pdf file. We also clarified in the results Section that it is a multi-label emotion detection problem rather than a binary classification task. In addition, we follow six emotions based on Ekman at el. 
However, we have also added the literature review related to neutrality and ambivalence for sentiment analysis.    (Concern#3) Finally, the manuscript only cites a few papers from 2020 and 2021: check latest works on attention-based deep models for sentiment analysis and recent efforts on predicting sentiment intensity using stacked ensemble.   Author response: Thanks a lot for the suggestion. Author action:   We have added the latest literature from 2020 and 2021. The changes can be seen in Section 2 from lines 153 to 164 and 198 to 217 in a marked pdf file.     (Concern#4) Some parts of the manuscript may result unclear for some readers of this journal. A short excursus on emotion categorization models and algorithms could resolve this lack of clarity (as the journal does not really feature many papers on this topic) and improve the overall readability of the paper. On a related note, the manuscript presents some bad English constructions, grammar mistakes, and misuse of articles: a professional language editing service is strongly recommended (e.g., the ones offered by IEEE, Elsevier, and Springer) to sufficiently improve the paper's presentation quality for meeting the high standards of PeerJ Computer Science.   Author response: Thanks a lot for the suggestion. Author action: We thoroughly examined the existing grammatical mistakes and problematic sentences in the article. The paper now is sufficiently improved according to the standards of PeerJ Computer Science.        (Concern#5) Finally, double-check both definition and usage of acronyms: every acronym should be defined only once (at the first occurrence) and always used afterwards (except for abstract and section titles). Also, it is not recommendable to generate acronyms for multiword expressions that are shorter than 3 words (unless they are universally recognized, e.g., AI).     Author response: Thanks a lot for the suggestion. Author action:  We carefully examined the manuscript and updated the acronyms.     Reviewer: Rutvij Jhaveri Additional comments (Concern#1) Abstract and Introduction should be revised.:   Author response: Thanks a lot for the suggestion. Author action: We updated the Abstract and Introduction in the paper.    (Concern#2) Problem statement must be clearly defined in the Introduction. Author response:  Thanks a lot for the suggestion. Author action: We defined the problem statement in the introduction. We mentioned the reasons to tackle multi-label emotion classification in general and the importance of the problem in Urdu.    (Concern#3) Use simple present tense throughout the paper: Author response:  Thanks a lot for the suggestion. Author action: We updated the manuscript by changing the sentence structure of the manuscript.    (Concern#4) Related work should have one paragraph of motivation due to limitations of existing approaches. Also, it should have references to recent similar works. Author response:  Thanks a lot for the suggestion. Author action: We have added an additional paragraph in the literature review that summarizes the limitation of the existing work. Please refer to Section 2.3 from lines 218 to 228 in a marked pdf file.   .    (Concern#5) Comparison of the work with a recent existing approach is necessary to show the performance improvement: Author response:  Thanks a lot for the suggestion. Author action: We have added an additional Table that compares our results with existing results. Please refer to Section 2.2 and Section 4. 
Table1 and Table 10 show the latest work and results of the existing studies. (Concern#6) Result analysis must be thorough. Author response:  Thanks a lot for the suggestion. Author action: We improved the result section and also added Table 10 for comparison of approaches.    (Concern#7) The conclusion should include the limitations of the existing work. Author response:  Thanks a lot for the suggestion. Author action: We added the limitation of the existing work in the conclusion from line number 464-468.   Reviewer: 3 Basic reporting (Concern#1)   1.     The use of the verb 'talks' refers to Section 2 in line 64 of page 2 is not suitable. 2.     Spelling errors and punctuation errors are found in the second paragraph of page 9 (line 286-295). 3.     Section 4.3: I suggest use variables represent 'number of c(e)' or 'number of p(e)' in equations on page 10. The variable c(e) should be written in mathematical form (italic). This change should apply to all variables in this article. 4.     In Section 4.2, page 9: The use of capital letters in 'We used the following BERT 301 parameters: MAXSEQLENGTH= 64, BATCH SIZE = 32, LEARNING RATE =2e5, and NUM TRAIN EPOCHS= 2.0. Author response:  Thanks a lot for the suggestion. Author action: We carefully examined the manuscript and updated all of these issues.   Experimental design (Concern#2) 1st issue: The paper should explain the importance of all evaluation metrics in the context of this task. Explain how the metrics can describe the performance of the algorithms. Author response:  Thanks a lot for the suggestion. Author action: All the standard measures are provided in the manuscript. We gave equations of Micro F1-score, Macro F1-score, Exact match, Hamming loss and explained why they are required for the evaluation of the task in section 4.3. All existing studies on multi-label use the same measures for evaluation (Refer to Table 10).     2nd issue: Coding for the project is publicly available. Author response:  Thanks a lot for the suggestion. Author action: Coding is publicly available and the link is provided for the dataset and code in the manuscript.    3rd issue: In paragraphs 3 and 4 of section 5: The deep learning algorithms performed poorly in this study, why do authors claim that the algorithms perform well and the results are promising? Author response:  Thanks a lot for the suggestion. Author action: We have added additional details in Section 4 from line number 433 to 441. We clarified in the Results section that only fastText pre-trained model has Urdu word embeddings. As a result, the deep learning results are dependent on fastText word embeddings. Moreover, fastText does not have all of the vocab for Urdu language and most of our vocabulary is missed as out of vocabulary. This is a major reason for the low performance of the deep learning models. We also compare our results with the existing dataset baseline results in Table 10. Hence, our results are in line with the latest research work.     4th issue: In section 4.2, why the values of parameters were selected in this project. Author response:  Thanks a lot for the suggestion. Author action: In the manuscript, In Section 4.2, we clarified that for machine learning models, we used default parameters. In addition, in Section 5, from lines 443 to 449, we confirmed that n-gram feature files are available on our GitHub repository and other researchers can produce the exact same results using MEKA software and these feature files.  
For deep learning models, we trained our own 1D-CNN and LSTM models from scratch. We fine-tuned the parameters to achieve the best results for our classifiers. It is necessary to create competitive benchmarks rather than just producing baseline results. We mentioned all the parameters so that other researchers can also produce the same results for their future work.      Validity of the findings (Concern#3) 1st issue: The description of the challenges faced in collecting information for this dataset and the annotation process should not be considered a contribution. Author response:  Thanks a lot for the suggestion. Author action: Yes, you are right. We removed this line from the contribution part.   2nd issue: No prior work on multilabel emotion classification exists for the Urdu language, and the generation of the dataset can be considered a contribution of this research.   Author response:  Thanks a lot for the suggestion. Author action: We have added this detail in our contribution part.   3rd issue: However, the performance of deep learning and machine learning algorithms for this task are very poor with low accuracy. Would you please explain why this is the case? Author response:  Thanks a lot for the suggestion. Author action: We have added additional details in Section 4 from line number 433 to 441. We clarified in the Results section that, only fastText pre-trained model has Urdu embeddings word embeddings. As a result, the deep learning results are dependent on fastText word embeddings. Moreover, fastText does not have all of the vocab for Urdu language. This is another reason for the low performance of the deep learning models. We also compare our results with the existing dataset baseline results in Table 10. We can see that our results are promising according to the existing dataset results. Moreover, we have added Table 10 where we compare our results with the state-of-the-art results. Hence, our results are in line with the latest research.     4th issue: I would suggest more evaluation be conducted to identify algorithms with better accuracy. Please justify the relevance of your findings even though the accuracy results are mostly low for all algorithms in this study. Author response:  Thanks a lot for the suggestion. Author action: In this study, we used five machine learning classifiers and 2 deep learning classifiers. In addition, for the strong baselines, we also use transformers such as BERT. We have added Table number 10 where we compare our results with the state-of-the-art results. Hence, our results are in line with the latest research. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Urdu is a widely used language in South-Asia and worldwide. While there are similar datasets available in English, we created the first multi-label emotion dataset consisting of 6,043 tweets and six basic emotions in the Urdu Nastal&#237;q script. A Multi-Label (ML) classification approach was adopted to detect emotions from Urdu. The morphological and syntactic structure of Urdu makes it a challenging problem for multi-label emotion detection. In this paper, we build a set of baseline classifiers such as machine learning algorithms (Random forest (RF), Decision tree (J48), Sequential minimal optimization (SMO), AdaBoostM1, and Bagging), deep-learning algorithms (Convolutional Neural Networks (1D-CNN), Long short-term memory (LSTM), and LSTM with CNN features) and transformer-based baseline (BERT). We used a combination of text representations: stylometric-based features, pre-trained word embedding, word-based n-grams, and character-based n-grams. The paper highlights the annotation guidelines, dataset characteristics and insights into different methodologies used for Urdu based emotion classification. We present our best results using Micro-averaged F1, Macro-averaged F1, Accuracy, Hamming Loss (HL) and Exact Match (EM) for all tested methods.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Twitter is a micro blogging platform which is used by millions daily to express themselves, share opinions, and to stay informed. Twitter is an ideal platform for researchers for years to study emotions and predict the outcomes of experimental interventions <ns0:ref type='bibr'>(Mohammad and</ns0:ref><ns0:ref type='bibr'>BravoMarquez, 2017&#894; Mohammad et al., 2015)</ns0:ref>. Studying emotions in text helps us to understand the behaviour of individuals <ns0:ref type='bibr' target='#b67'>(Plutchik, 1980</ns0:ref><ns0:ref type='bibr'>, 2001&#894; James A Russell, 1977&#894; Ekman, 1992)</ns0:ref> and gives us the key to people's feelings and perceptions.</ns0:p><ns0:p>Social media text can represent various emotions: happiness, anger, disgust, fear, sadness, and surprise.</ns0:p><ns0:p>One can experience multiple emotions <ns0:ref type='bibr'>(Strapparava and</ns0:ref><ns0:ref type='bibr'>Mihalcea, 2007&#894; Li et al., 2017)</ns0:ref> in a small chunk of text while there is a possibility that text could be emotionless or neutral, making it a challenging problem to tackle. It can be easily categorized as a multilabel classification task where a given text can be about any emotion simultaneously. Emotion detection in its true essence is a multilabel classification problem since a single sentence may trigger multiple emotions such as anger and sadness. This increases the complexity of the problem and makes it more challenging to classify in a textual setting.</ns0:p><ns0:p>While there are multiple datasets available for multilabel classification in English and other Europian languages, low resource language like Urdu still requires a dataset. The Urdu language is the combination of Sanskrit, Turkish, Persian, Arabic and recently English making it even more complex to identify the true representation of emotions because of the morphological and syntactic structure <ns0:ref type='bibr' target='#b0'>Adeeba and Hussain (2011)</ns0:ref>. 
However, the structural similarities of Urdu with Hindi and other South Asian languages make it a useful resource for those related languages. Urdu is the national language of Pakistan and is spoken by more than 170 million people worldwide as a first or second language. 1 Needless to say, Urdu is also widely used on social media using the right-to-left Nastal&#237;q script.</ns0:p><ns0:p>Therefore, a multi-label emotion dataset for Urdu was long overdue and is needed for understanding public emotions, with natural language applications in disaster management, public policy, commerce, and public health. It should also be noted that emotion detection directly aids other text classification tasks such as sentiment analysis <ns0:ref type='bibr' target='#b42'>(Khan et al., 2021)</ns0:ref>, human aggressiveness and emotion detection <ns0:ref type='bibr'>(Bashir et al., 2019; Ameer et al., 2021)</ns0:ref>, humor detection <ns0:ref type='bibr' target='#b82'>(Weller and Seppi, 2019)</ns0:ref>, question answering and fake news detection <ns0:ref type='bibr' target='#b20'>(Butt et al., 2021a; Ashraf et al., 2021a)</ns0:ref>, depression detection <ns0:ref type='bibr' target='#b59'>(Mustafa et al., 2020)</ns0:ref>, and abusive and threatening language detection <ns0:ref type='bibr' target='#b10'>(Ashraf et al., 2021b, 2020; Butt et al., 2021b; Amjad et al., 2021)</ns0:ref>.</ns0:p><ns0:p>We created a Nastal&#237;q-script Urdu dataset for multi-label emotion classification consisting of 6,043 tweets labelled with Ekman's six basic emotions <ns0:ref type='bibr' target='#b26'>(Ekman, 1992)</ns0:ref>. The dataset is divided into train and test splits, which are publicly available along with the evaluation script. The task is to classify each tweet into one or more of the six basic emotions that best represent the emotion of the person tweeting. The paper presents machine-learning and neural baselines for comparison and shows that, among the various machine and deep-learning algorithms, RF performs best, giving a macro-averaged F 1 score of 56.10%, a micro-averaged F 1 score of 60.20%, and an M1 accuracy of 51.20%.</ns0:p><ns0:p>The main contributions of this research are as follows:</ns0:p><ns0:p>&#8226; an Urdu-language dataset for multi-label emotion detection, containing six basic emotions (anger, disgust, joy, fear, surprise, and sadness) (publicly available; see the link below);</ns0:p><ns0:p>&#8226; baseline results of machine-learning algorithms (RF, J48, DT, SMO, AdaBoostM1, and Bagging) and deep-learning algorithms (1D-CNN, LSTM, and LSTM with CNN features) to create a benchmark for multi-label emotion detection using four modes of text representation: word-based n-grams, character-based n-grams, stylometry-based features, and pre-trained word embeddings.</ns0:p><ns0:p>The rest of the paper is structured as follows.</ns0:p><ns0:p>Section 2 explains the related work on multi-label emotion classification datasets and techniques. Section 3 discusses the methodology, including the creation of the dataset. Section 4 presents the evaluation of our models. Section 5 analyzes the results.
Section 6 concludes the paper and potential highlights for the future work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Emotion detection has been extended across a number of overlapping fields. As a result, there are a number of publicly available datasets for emotion detection.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Emotion Datasets</ns0:head><ns0:p>EmoBank <ns0:ref type='bibr' target='#b19'>(Buechel and Hahn, 2017)</ns0:ref> is an English corpus of 10,000 sentences using the valence arousal dominance (VAD) representation format annotated with dimensional emotional metadata. EmoBank distinguishes between emotions of readers and writers and is built upon multiple genres and domains.</ns0:p><ns0:p>A subset of EmoBank is birepresentationally annotated on Ekman's basic emotions which helps it in mapping between both representative formats. Affective text corpus <ns0:ref type='bibr' target='#b75'>(Strapparava and Mihalcea, 2007)</ns0:ref> is extracted from news websites (Google News, Cable News Network etc.,) to provide Ekman's emotions (e.g., joy, fear, surprise), valence (positive or negative polarity) and explore the connection between lexical semantics and emotions in news headlines. The emotion annotation is set to [0, 100] where 100 is defined as maximum emotional load and 0 indicates missing emotions completely. Whereas, annotations for valence is set to <ns0:ref type='bibr'>[100,</ns0:ref><ns0:ref type='bibr'>100]</ns0:ref> in which 0 signifies neutral headline, 100 and 100 represent extreme negative and positive headlines respectively. DailyDialog <ns0:ref type='bibr' target='#b48'>(Li et al., 2017)</ns0:ref> is a multiturn dataset for human dialogue. It is manually labelled with emotion information and communication intention and contains 13,118 sentences. The paper follows the six main Ekman's emotions (fear, disgust, anger, and surprise etc.,) complemented by the 'no emotion' category. Electoral Tweets is another dataset <ns0:ref type='bibr' target='#b56'>(Mohammad et al., 2015)</ns0:ref> which obtains the information through electoral tweets to classify emotions <ns0:ref type='bibr'>(Plutchik's emotions)</ns0:ref> and sentiment (positive/negative). The dataset consists of over 100,000 responses Emotional Intensity <ns0:ref type='bibr' target='#b55'>(Mohammad and BravoMarquez, 2017)</ns0:ref> dataset was created to detect the writers emotional intensity of emotions. The dataset consists of 7,097 tweets where the intensity is analysed by bestworst scaling (BWS) technique. The tweets were annotated with intensities of sadness, fear, anger, and joy using Crowdsourcing. Emotion Stimulus is a dataset <ns0:ref type='bibr' target='#b30'>(Ghazi et al., 2015)</ns0:ref> that identifies the textual cause of emotion. It consists of the total number of 2,414 sentences out of which 820 were annotated with both emotions and their cause, while 1594 were annotated just with emotions. Grounded Emotions dataset <ns0:ref type='bibr' target='#b50'>(Liu et al., 2017)</ns0:ref> was designed to study the correlation of users' emotional state and five types of external factors namely user predisposition, weather, social network, news exposure, and timing. The dataset was built upon social media and contains 2,557 labelled instances with 1,369 unique users. Out of these, 1525 were labelled as happy tweets and 1,032 were labelled as sad tweets. One aspect of sentiment and emotionrelated tasks is neutrality in texts. 
Neutrality often contains ambiguity and a lack of information. Hence, neutrality needs specific characterization to empower models designed for understanding sentiments. A weighted aggregation method for neutrality <ns0:ref type='bibr' target='#b78'>(Valdivia et al., 2018)</ns0:ref> showed how neutrality is a key in robust sentiment classification. Ambivalence is a phenomenon that includes both negative and positive valenced components towards an action, person or object and hence directly correlates with the sentiment level tasks. An approach for ambivalence handling in texts can be seen in <ns0:ref type='bibr' target='#b79'>Wang et al. (2020)</ns0:ref>, where the authors used MixedNegative, MixedNeutral and MixedPositive for ambivalence handling. Later, the first step was used for multilevel finescaled sentiment analysis.</ns0:p><ns0:p>FbValenceArousal dataset <ns0:ref type='bibr' target='#b69'>(PreotiucPietro et al., 2016)</ns0:ref> consisting of 2,895 social media posts were collected to train models for valence and arousal. It was annotated by two psychologically trained persons on two separate ordinal ninepoint scales with valence (sentiment) or arousal (intensity). The time interval was the same for every message with distinct users. Lastly, Stance Sentiment Emotion Corpus (SSEC) dataset <ns0:ref type='bibr' target='#b74'>(Schuff et al., 2017)</ns0:ref> is an extension of SemEval 2016 dataset with a total number of 4,868 tweets.</ns0:p><ns0:p>It was extended to enable a relation between annotation layers (sentiment, emotion and stance). Plutchik's fundamental emotions were used for annotation by expert annotators. The distinct feature of this dataset is that they published individual information for all annotators. A comprehensive literature review is summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Although we have taken English language emotion data set for comparison, many low resource languages have been catching up in emotion detection tasks in text <ns0:ref type='bibr'>(Kumar et al., 2019&#894; Arshad et al., 2019&#894; Plaza del Arco et al., 2020&#894; Sadeghi et al., 2021&#894; Tripto and Ali, 2018)</ns0:ref>.</ns0:p><ns0:p>XED is a finegrained multilingual emotion dataset introduced by <ns0:ref type='bibr'>(&#214;hman et al., 2020)</ns0:ref>. The collec tion comprises humanannotated Finnish (25k) and English (30k) sentences, as well as planned annota tions for 30 other languages, bringing new resources to a variety of lowresource languages. The dataset is annotated using Plutchik's fundamental emotions, with neutral added to create a multilabel multiclass dataset. The dataset is thoroughly examined using languagespecific BERT models and SVMs to show that XED performs on par with other similar datasets and is thus a good tool for sentiment analysis and emotion recognition.</ns0:p><ns0:p>The examples of annotated dataset for emotion classification show that the difference lies between annotation schemata (i.e., VAD or multilabel discreet emotion set), the domain of the dataset (i.e., social news, questionnaire, and blogs etc.,), the file format, and the language. Some of the most popular datasets released in the last decade to compare and analyze in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. 
For a more comprehensive review of existing datasets for emotion detection, we refer the reader to <ns0:ref type='bibr' target='#b57'>(Murthy and Kumar, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Approaches to Emotion Detection</ns0:head><ns0:p>Sentiment classification has been around for decades and has been the centre of the research in natural language processing (NLP) <ns0:ref type='bibr' target='#b84'>(Zhang et al., 2018)</ns0:ref>. Emotion detection and classification became naturally the next step after sentiment task, while psychology is still determining efficient emotion models <ns0:ref type='bibr'>(Barrett et al., 2018&#894; Cowen and</ns0:ref><ns0:ref type='bibr' target='#b23'>Keltner, 2018)</ns0:ref>. NLP researchers embraced the most popular <ns0:ref type='bibr'>(Ekman, 1992&#894; Plutchik, 1980)</ns0:ref> definitions and started working on establishing robust techniques. In the early stages, emotion detection followed the direction of Ekman's model <ns0:ref type='bibr' target='#b26'>(Ekman, 1992)</ns0:ref> which classifies emotions in six categories (disgust, anger, joy, fear, surprise, and sadness). Many of the recent work published in emotion classification follows the wheel of emotions <ns0:ref type='bibr' target='#b67'>(Plutchik, 1980</ns0:ref><ns0:ref type='bibr' target='#b68'>(Plutchik, , 2001) )</ns0:ref> which classifies emotions as (fearanger, disgusttrust, joysadness, and surpriseanticipation) and Plutchik's <ns0:ref type='bibr' target='#b67'>(Plutchik, 1980)</ns0:ref> <ns0:ref type='table' target='#tab_13'>2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:ref> Manuscript to be reviewed Computer Science space of linear combination affective states <ns0:ref type='bibr' target='#b38'>(James A Russell, 1977)</ns0:ref>.</ns0:p><ns0:p>Emotion text classification task has been divided into two methods: rulebased and machinelearning based. Famous examples stemming from expert notation can be SentiWordNet <ns0:ref type='bibr' target='#b27'>(Esuli and Sebastiani, 2007)</ns0:ref> and WordNetAffect <ns0:ref type='bibr' target='#b76'>(Strapparava and Valitutti, 2004)</ns0:ref>. Linguistic inquiry and word count (LIWC) <ns0:ref type='bibr' target='#b64'>(Pennebaker et al., 2001)</ns0:ref> is another example assigning lexical meaning to psychological tasks using a set of 73 lexicons. NRC wordemotion association lexicon <ns0:ref type='bibr' target='#b54'>(Mohammad et al., 2013)</ns0:ref> is also an avail able extension of the previous works built using eight basic emotions <ns0:ref type='bibr' target='#b67'>(Plutchik, 1980)</ns0:ref>, whereas, values of VAD <ns0:ref type='bibr' target='#b38'>(James A Russell, 1977)</ns0:ref> were also used for annotation <ns0:ref type='bibr' target='#b80'>(Warriner et al., 2013)</ns0:ref>. Rulebased work was superseded by supervised featurebased learning using variations of features such as word em beddings, character ngrams, emoticons, hashtags, affect lexicons, negation and punctuation's <ns0:ref type='bibr'>(Jurgens et al., 2012&#894; Aman and Szpakowicz, 2007&#894; Alm et al., 2005)</ns0:ref>. As part of emotional computing, emotion detection is commonly employed in the educational domain. The authors presented a methodology in the study <ns0:ref type='bibr' target='#b32'>(Halim et al., 2020)</ns0:ref> for detecting emotion in email messages. 
The framework is built on au tonomous learning techniques and uses three machine learning classifiers such as ANN, SVM and RF and three feature selection algorithms to identify six (neutral, happy, sad, angry, positively surprised, and negatively surprised) emotional states in the email text. Study <ns0:ref type='bibr' target='#b65'>(Plazadel Arco et al., 2020)</ns0:ref> offered research of multiple machine learning algorithms for identifying emotions in a social media text. The findings of experiments with knowledge integration of lexical emotional resources demonstrated that using lexical effective resources for emotion recognition in languages other than English is a potential way to improve basic machine learning systems. IDSECM, a model for predicting emotions in textual dialogue, was also presented in <ns0:ref type='bibr' target='#b47'>(Li et al., 2020)</ns0:ref>. Textual dialogue emotion analysis and generic textual emotion analysis were contrasted by the authors. They also listed contextdependence, contagion, and persistence as hallmarks of textual dialogue emotion analysis.</ns0:p><ns0:p>Neural networkbased models <ns0:ref type='bibr' target='#b74'>(Barnes et al., 2017&#894; Schuff et al., 2017)</ns0:ref> techniques like biLSTM, CNN, and LSTM achieve better results compared to featurebased supervised model i.e., SVM and Max Ent. The leading method at this point is claimed using biLSTM architecture aided by multilayer self attention mechanism <ns0:ref type='bibr' target='#b16'>(Baziotis et al., 2018)</ns0:ref>. The stateoftheart accuracy of 59.50% was achieved. In Study <ns0:ref type='bibr' target='#b33'>(Hassan et al., 2021)</ns0:ref> authors examine three approaches: i) employing intrinsically multilingual models&#894; ii) translating training data into the target language, and iii) using a parallel corpus that is au tomatically labelled. English is used as the source language in their research, with Arabic and Spanish as the target languages. The efficiency of various classification models was investigated, such as BERT and SVMs, that have been trained using various features. For Arabic and Spanish, BERTbased monolin gual models trained on target language data outperform stateoftheart (SOTA) by 4% and 5% absolute Jaccard score, respectively. For Arabic and Spanish, BERT models achieve accuracies of 90% and 80% respectively.</ns0:p><ns0:p>One of the exciting studies <ns0:ref type='bibr' target='#b15'>(Basiri et al., 2021)</ns0:ref> proposed a CNNRNN Deep Bidirectional Model based on Attention (ABCDM). ABCDM evaluates temporal information flow in both directions utilizing two independent bidirectional LSTM and GRU layers to extract both past and future contexts. Atten tion mechanisms were also applied to the outputs of ABCDM's bidirectional layers to place more or less focus on certain words. To minimize feature dimensionality and extract positioninvariant local fea tures, ABCDM uses convolution and pooling methods. The capacity of ABCDM to detect sentiment polarity, which is the most common and significant task in sentiment analysis, is a key metric of its ef fectiveness. ABCDM achieves stateoftheart performance on both long review and short tweet polarity classification when compared to six previously suggested DNNs for sentiment analysis. We also saw at tention based methods <ns0:ref type='bibr'>(Gan et al., 2020&#894; Basiri et al., 2021)</ns0:ref> for sentiment related tasks. 
An effective deep learning method can be seen in the study <ns0:ref type='bibr' target='#b15'>(Basiri et al., 2021)</ns0:ref> which uses Attentionbased Bidirectional CNNRNN addressing the problems of high feature dimensionality and feature weighting. The model uses bidirectional contexts, positioninvariant local features and pooling mechanisms for sentiment po larity detection to achieve the state of the art results. Another popular <ns0:ref type='bibr' target='#b51'>(Majumder et al., 2020)</ns0:ref> approach uses conditional random field (CRF) and bidirectional gated recurrent unit (BiGRU) based sequence tag ging method for aspect extraction. The approach later concatenates GloVe embeddings with the aspect extracted data as input to the aspectlevel sentiment analysis (ALSA) models.</ns0:p></ns0:div> <ns0:div><ns0:head>4/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Research Gap</ns0:head><ns0:p>Some of the important work in Roman Urdu sentiment detection is done by multiple researchers <ns0:ref type='bibr'>(Mehmood et al., 2019&#894; Arshad et al., 2019)</ns0:ref>, however, to the best of our knowledge, no prior work on multilabel emotion classification exists for the Nastal&#237;q Urdu language. From Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, one can observe that no annotated dataset was available for multilabel emotion classification task in Nastal&#237;q script. Detecting Nastal&#237;q script on Twitter requires attention and can further aid in solving problems like abusive language detection, humor detection and depression detection in text. Our motivation was to provide an indepth feature engineering for the task, describing not only lexical features but also embedding, comparing the performance of these features for Nastal&#237;q script in Urdu. We also saw a lack of comparison between classifiers. Most of the studies used either only machine learning or only deep learning (DL) techniques, while no comparison was done between ML and DL models, whereas, we gave the baseline results for both ML and DL classifiers. </ns0:p></ns0:div> <ns0:div><ns0:head n='3'>MULTIPLE-FEATURE EMOTION DETECTION MODEL</ns0:head><ns0:p>The emotion detection model is illustrated in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> Classification algorithms and methodology thoroughly explained in Section 4.2.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Dataset</ns0:head><ns0:p>Multilabel emotion dataset in Urdu is neither available nor has any experiments conducted in any do main. Tweets elucidate the emotions of people as they describe their activities, opinions, and events with the world and therefore is the most appropriate medium for the task of emotion classification. The goal of this dataset is to develop a large benchmark in Urdu for the multilabel emotion classification task.</ns0:p><ns0:p>This section describes the challenges confronted during accumulation of a large benchmark twitterbased Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1'>Data Crawling</ns0:head><ns0:p>The dataset was obtained through Twitter and we use Ekman's emotion keywords for the collection of tweets. Twitter developer application programming interface (API) <ns0:ref type='bibr' target='#b25'>(Dorsey, 2006)</ns0:ref> was used and the resulting tweets were collected in a CSV file. 
The script for the purpose of scrapping was developed in python which was filtered using hashtags, query strings, and user profile name through Twitter rest API.</ns0:p><ns0:p>For each emotion, the maximum of two thousand tweets were extracted which were later refined and shrunk per keyword based on tweet quality and structure. All the tweets with multiple languages (i.e., Arabic and Persian) were eliminated from the dataset and only the purest Urdu tweets were kept. The total collected tweets, in the end, were twelve thousand. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2'>Data Annotation</ns0:head><ns0:p>As mentioned previously, the Twitter hashtags were used for extracting relevant tweets of a particular emotion. However, since a tweet can contain multiple emotions, the keywords alone cannot be a reliable Manuscript to be reviewed</ns0:p><ns0:p>Computer Science method for annotation. Therefore, data annotation standards were prepared for expert annotators to follow and maintain consistency throughout the task.</ns0:p><ns0:p>&#8226; Anger ( ) also includes annoyance and rage can be categorized as a response to a deliberate attempt of anticipated danger, hurt or incitement.</ns0:p><ns0:p>&#8226; Disgust &#8235;&#1578;(&#8236; ) in the text is an inherent response of dislikeness, loathing or rejection to contagious ness.</ns0:p><ns0:p>&#8226; Fear &#8235;&#1601;(&#8236; ) also including anxiety, panic and horror is an emotion in a text which can be seen triggered through a potential cumbersome situation or danger.</ns0:p><ns0:p>&#8226; Sadness ( &#8235;)&#1575;&#1583;&#1575;&#8236; also including pensiveness and grief is triggered through hardship, anguish, feeling of loss, and helplessness.</ns0:p><ns0:p>&#8226; Surprise &#8235;&#1578;(&#8236; ) also including distraction and amazement is an emotion which is prompted by an unexpected occurrence.</ns0:p></ns0:div> <ns0:div><ns0:head>7/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Happiness ( ) also including contentment, pride, gratitude and joy is an emotion which is seen as a response to wellbeing, sense of achievement, satisfaction, and pleasure.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.3'>Annotation Guidelines</ns0:head><ns0:p>The following guidelines were set for the annotation process of the dataset:</ns0:p><ns0:p>&#8226; Three specialised annotators in the field of Urdu were selected. Both annotators had the minimum qualification of Masters in Urdu language making them the most suitable persons for the job.</ns0:p><ns0:p>&#8226; Complete dataset was provided to two of the annotators and they were asked to classify the tweets in one or multiple emotion labels with a minimum of one and maximum of six emotions. The existing emotions were labelled as 1 under each category and the rest were marked 0.</ns0:p><ns0:p>&#8226; The annotator's results were observed and analysed after every 500 tweets to ensure the credibility and correct pattern of annotation.</ns0:p><ns0:p>&#8226; The annotators were asked to identify emojis in a tweet with their corresponding labels. They were informed of the possibility of varying context between emojis and text. 
In such a case, multiple suited labels were selected to portray multiple or mix emotions.</ns0:p><ns0:p>&#8226; Major conflicts where at least one category was labelled differently by the previous two annota tors were identified and the labelled dataset for the conflicting tweets was resolved by the third annotator.</ns0:p><ns0:p>Interannotator agreement (IAA) was computed using Cohan's Kappa Coefficient <ns0:ref type='bibr' target='#b22'>(Cohen, 1960)</ns0:ref>. We achieved kappa coefficient of 71% which shows the strength of our dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.4'>Dataset Characteristics and Standardization</ns0:head><ns0:p>UrduHack 3 was used to normalize the tweets. Urdu text has diacritics (a glyph added to an alphabet for pronunciation) which needs to be removed. For both word and character level normalization, we removed the diacritics, added spaces after digits, punctuation marks, and stop words 4 form the data. For character level normalization, Unicode were assigned to each character. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the frequently occurring emotions. In a multilabel setting, several emotions appear in a tweet, hence, the number of emotions exceed the number of tweets. The emotion anger ( ) is seen to be the most common emotion used in the tweets. Meanwhile Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> shows the statistics of the tweets after normalization in train and test dataset.</ns0:p><ns0:p>The entire dataset has the vocabulary of 14101 words while each tweet average length is 9.24 words and 46.65 characters. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.1'>Count Based Features</ns0:head><ns0:p>Character ngrams and token ngrams were used as countbased features. We generated word uni, bi , and trigrams and character ngrams from trigrams to ninegrams. Term frequencyinverse document frequency (TFIDF), a feature weighting technique on countbased features 5 was also used. ScikitLearn 6 was used for the extraction of all features.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.2'>Stylometry Based Features</ns0:head><ns0:p>The second set was stylometric based features <ns0:ref type='bibr'>(Lex et al., 2010&#894; Grieve, 2007)</ns0:ref> which included 47 character based features, 11 wordbased features and 6 vocabularyrichness based features. 
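Before the stylometric features are detailed, a minimal scikit-learn sketch of the count-based TF-IDF n-gram extraction described in Section 4.1.1 is given below. The vectorizer settings mirror the reported parameters (use_idf=True, smooth_idf=True, 1,000 features); the placeholder tweet list and everything else are assumptions for illustration only.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder input; in practice this is the list of normalized Urdu tweet strings.
tweets = ["پہلی مثال ٹویٹ", "دوسری مثال ٹویٹ"]

word_vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 3),
                           use_idf=True, smooth_idf=True, max_features=1000)
char_vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 9),
                           use_idf=True, smooth_idf=True, max_features=1000)

X_word = word_vec.fit_transform(tweets)   # word uni-, bi-, and tri-gram TF-IDF
X_char = char_vec.fit_transform(tweets)   # character 3- to 9-gram TF-IDF
X = hstack([X_word, X_char])              # combined sparse feature matrix
```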
Stylometry based fea tures are used to analyze literary style in emotions <ns0:ref type='bibr' target='#b5'>(Anchi&#234;ta et al., 2015)</ns0:ref> ,whereas, vocabulary richness based features are used to capture individual specific vocabulary <ns0:ref type='bibr' target='#b53'>(Milika and Kub&#225;t, 2013)</ns0:ref>.</ns0:p><ns0:p>The characterbased features are as follows:</ns0:p><ns0:p>&#8226; The wordbased features are as follows:</ns0:p><ns0:p>&#8226; Average of word length, sentence length, words per paragraph, sentence length in characters, and number of sentences,</ns0:p><ns0:p>&#8226; Number of paragraphs,</ns0:p><ns0:p>&#8226; Ratio of words with length 3 and 4,</ns0:p><ns0:p>&#8226; Percentage of question sentences,</ns0:p><ns0:p>&#8226; Total count of unique words and the total number of words.</ns0:p><ns0:p>The vocabularyrichness based features are as follows:</ns0:p><ns0:p>&#8226; BrunetWMeasure,</ns0:p><ns0:p>&#8226; HapaxLegomena,</ns0:p><ns0:p>&#8226; HonoreRMeasure,</ns0:p><ns0:p>&#8226; SichelSMeasure,</ns0:p><ns0:p>&#8226; SimpsonDMeasure,</ns0:p><ns0:p>&#8226; uleKMeasure.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.3'>Pre-trained Word Embeddings</ns0:head><ns0:p>Word embeddings were extracted from the tweets using fastText 7 with 300 vector space dimensions per word. Only fastText was used as it contains the most dense vocabulary for Urdu Nastal&#237;q script .Since the text was informal social media tweets, it was highly probable that some words are missing in the dictionary. In that condition, we randomly assigned all 300 dimensions with a uniform distribution in</ns0:p><ns0:formula xml:id='formula_0'>[&#8722;0.1, 0.1].</ns0:formula><ns0:p>5 We use the following parameters: use_idf=True, smooth_idf=True, and number of features 1,000 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Setup and Classifiers</ns0:head><ns0:p>We treated multilabel emotion detection problem as a supervised classification task. Our goal was to pre dict multiple emotions from the six basic emotions. We used tenfold cross validation for this task which ensures the robustness of our evaluation. The tenfold cross validation takes 10 equal size partitions. Out of 10, 1 subset of the data is retained for testing and the rest for training. This method is repeated 10 times with each subset used exactly once as a testing set. The 10 results obtained are then averaged to produce estimation. For our emotion detection problem binary relevance and label combination (LC) transforma tion methods were used along with various machine and deeplearning algorithms: RF, J48, DT, SMO, AdaBoostM1, Bagging, 1D CNN, and LSTM. As evidently these algorithms perform extremely well for several NLP tasks such as sentiment analysis, and recommendation systems <ns0:ref type='bibr'>(Kim, 2014&#894; Hochreiter and Schmidhuber, 1997&#894; Breiman, 2001&#894; Kohavi, 1995&#894; Sagar et al., 2020&#894; Panigrahi et al., 2021a,b)</ns0:ref>.</ns0:p><ns0:p>We used several machine learning algorithms to test the performance of the dataset namely: RF, J48, DT, SMO, AdaBoostM1 and Bagging. AdaBoostM1 <ns0:ref type='bibr' target='#b28'>(Freund and Schapire, 1996)</ns0:ref> is a very famous en semble method which diminishes the hamming loss by creating models repetitively and assigning more weight to misclassified pairs until the maximum model number is not achieved. 
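As a concrete illustration of the binary relevance (BR) transformation mentioned above, the six-label problem can be decomposed into one binary classifier per emotion; a scikit-learn sketch with a random forest base learner is shown below. This is illustrative only, since the reported baselines were produced with MEKA's default parameters, and the training matrices are assumed to exist.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier

# X_train / X_test: feature matrices (e.g., the TF-IDF unigram features above);
# Y_train: binary indicator matrix of shape (n_tweets, 6), one column per emotion.
br_rf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
br_rf.fit(X_train, Y_train)          # fits one independent random forest per emotion label
Y_pred = br_rf.predict(X_test)       # binary indicator predictions, shape (n_test, 6)
```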
RF is another ensem ble classification method based on trees which is differentiated by bagging and distinct features during learning. It is robust as it overcomes the deficiencies of decision trees by combining the set of trees and input variable set randomization <ns0:ref type='bibr' target='#b18'>(Breiman, 2001)</ns0:ref>. Bagging (Bootstrap Aggregation) <ns0:ref type='bibr' target='#b17'>(Breiman, 1996)</ns0:ref> is implemented which aggregates multiple machine learning predictions and reduces variance to give a more accurate result. Lastly, SMO <ns0:ref type='bibr' target='#b34'>(Hastie and Tibshirani, 1998)</ns0:ref> which decomposes multiple variables into a series of subproblems and optimizes them as mentioned in the previous studies. DT and J48 were also tested as described in the papers <ns0:ref type='bibr'>(Salzberg, 1994&#894; Kohavi, 1995)</ns0:ref>, however, were unable to achieve substantial results. For machine learning algorithms we used MEKA 8 default parameters to provide the baseline scores.</ns0:p><ns0:p>We experimented with our multilabel classification task with two deep learning models: 1dimensional convolutional neural network (1D CNN) and long shortterm memory (LSTM). We used LSTM (Hochre iter and Schmidhuber, 1997) which is the enhanced version of the recurrent neural network with the dif ference in operational cells and enables it to keep or forget information increasing the learning ability for longtime sequence data. CNN <ns0:ref type='bibr' target='#b43'>(Kim, 2014)</ns0:ref> takes the embeddings vector matrix of tweets as input with the multilabel distribution and then passes through filters and hidden layers. We used Adam optimizer, categorical crossentropy as a loss function, softmax activation function on the last layer, and dropout layers of 0.2 in both LSTM and 1DCNN. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the architecture of 1DCNN while Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows the architecture of LSTM model. Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref> shows the fully connected layers and their parameters for 1DCNN and LSTM. The tweets were passed as word piece embeddings which were later channelled into a sequence.</ns0:p><ns0:p>Keras 9 and Pytorch 10 framework were used for the implementation of all these algorithms. For additional details on the experiments, please review the publicly available code. 11 Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>BERT <ns0:ref type='bibr' target='#b24'>(Devlin et al., 2018)</ns0:ref> has proven in multiple studies to have a better sense of flow and language context as it is trained bidirectionally with an attention mechanism. We used the following BERT param eters: maxseqlength = 64, batch size = 32, learning rate =2e5, and numtrainepochs = 2.0. We used 0.1 dropout probability, 24 hidden layers, 340M parameters and 16 attention heads respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Metrics and Evaluation</ns0:head><ns0:p>To evaluate multilabel emotion detection, we used multilabel accuracy, microaveraged F 1 and macro averaged F 1 . Multilabel accuracy in the emotion classification considers the subsets of the actual classes for prediction as a misclassification is not hard wrong or right i.e., predicting two emotions correctly rather than declaring no emotion. 
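These measures, described formally in the remainder of this section, correspond closely to standard scikit-learn utilities. A minimal sketch follows, assuming `Y_true` and `Y_pred` are binary indicator matrices of shape (n_tweets, 6); the toy values are placeholders.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, hamming_loss, jaccard_score

# Toy indicator matrices (n_tweets x 6 emotions); in practice these come from the classifiers above.
Y_true = np.array([[1, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1]])
Y_pred = np.array([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 1]])

multilabel_acc = jaccard_score(Y_true, Y_pred, average="samples")  # |pred ∩ gold| / |pred ∪ gold|, averaged over tweets
micro_f1   = f1_score(Y_true, Y_pred, average="micro", zero_division=0)
macro_f1   = f1_score(Y_true, Y_pred, average="macro", zero_division=0)
exact_match = accuracy_score(Y_true, Y_pred)   # subset accuracy: all six labels must match
h_loss      = hamming_loss(Y_true, Y_pred)     # fraction of incorrectly predicted labels
```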
For multilabel accuracy, we considered one or more gold label mea sures compared with obtained emotion labels or set of labels against each given tweet. We take the size of the intersection of the predicted and gold label sets divided by the size of their union and then average it over all tweets in the dataset.</ns0:p><ns0:p>Microaveraging, in this case, will take all True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) individually for each tweets label to calculate precision and recall. The mathematical equations of microaveraged F 1 are provided in 1,2,3 respectively: ,</ns0:p><ns0:formula xml:id='formula_1'>P mi cr o = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E</ns0:formula><ns0:formula xml:id='formula_2'>F e = 2 &#215; P e &#215; R e P e + R e , F 1 macr o = 1 | E | &#8721; e&#8712;E F e .</ns0:formula><ns0:p>Exact Match equation is mentioned below which explains the percentage of instance whose predicted labels (P t ) are exactly matching same the true set of labels (G t ).</ns0:p><ns0:formula xml:id='formula_3'>E xac t M at ch = 1 | T | T &#8721; i =1 G t = P t</ns0:formula><ns0:p>Hamming Loss equation mentioned below computes the average of incorrect labels of an instance. Lower the value, higher the performance of the classifier as this is a loss function. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>RESULT ANALYSIS</ns0:head><ns0:p>We conducted several experiments with detailed insight into our dataset. <ns0:ref type='table' target='#tab_13'>8</ns0:ref> and Table <ns0:ref type='table' target='#tab_14'>9</ns0:ref> show the results of deeplearning algorithms. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Notably, machinelearning baseline using word based ngram features achieved highest macro F 1 score of 56.10% comparatively to deeplearning baseline that achieved slightly lower F 1 score of 55.00% using pretrained word embeddings. Pretrained word embedding was not able to obtain the highest results, it might be because fastText does not have all of the vocab for Urdu language and some of the words could be missed as outofvocabulary. Therefore, further research is needed for pretrained word embeddings and deeplearning approaches that might help to improve the results. The Table <ns0:ref type='table' target='#tab_15'>10</ns0:ref> shows the state of the art results for multilabel emotion detection in English and proves that our baseline results are in line with stateoftheart work in the machine and deep learning. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In addition, computational complexity can make the reproducibility challenging of the proposed meth ods. Few years ago, it was difficult to produce the results as they can take days or weeks, although re searchers have access to GPU computing. Classifiers such as Random Forest and Adaboost that are used in this paper can lead to scalability issues. However, scalability can be addressed with appropriate feature engineering and preprocessing techniques in both academia and industry Jannach and Ludewig (2017)&#894; <ns0:ref type='bibr' target='#b49'>Linden et al. (2003)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this research, we created a multilabel emotion dataset in Urdu based on social media which is the first for Urdu Nastal&#237;q script. 
Data characteristics for Urdu needed to refine social media data were (2) Another limitation is fastText pretrained word embeddings does not have all of the vocab for Urdu language, therefore, some of the words could be missed as outofvocabulary. As a result, performance of the deep learning classifiers are poor as compared to the machine learning classifiers. Our dataset is expected to meet the challenges of identifying emotions for a wide range of NLP applications:</ns0:p><ns0:p>disaster management, public policy, commerce, and public health. In future, we expect to outperform our current results using novel methods, extend emotions, and detect the intensity of emotions in Urdu Nastal&#237;q script.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>https://www.ethnologue.com/language/urd 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022) Manuscript to be reviewed Computer Science of two questionnaires taken online about style, purpose, and emotions in electoral tweets. The tweets were annotated via Crowdsourcing.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>eight basic emotions (Ekman's emotion plus anticipation and trust) or the dimensional models making a vector 3/19 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>. The figure explains the basic architecture followed for both machine learning and deep learning classifiers. Our model has three main phases: data collection, feature extraction (i.e., character ngrams, word ngrams, stylometrybased features, and pretrained word embedding), and emotion detection classification. Section 3.1 explains all the details related to dataset: data crawling, data annotation, and character istics and standardization while Section 4.1 talks about features types and features extraction methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Multilabel emotion detection model for Urdu language</ns0:figDesc><ns0:graphic coords='7,141.73,85.69,413.58,372.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Examples in our dataset (translated by Google).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.58,287.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 1DCNN Model Architecture.</ns0:figDesc><ns0:graphic coords='11,141.73,479.34,413.58,131.83' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. LSTM Model Architecture.</ns0:figDesc><ns0:graphic coords='12,141.73,113.45,413.58,127.33' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>F 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>number o f p(e) , R mi cr o = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E number o f (e) , mi cr o = 2 &#215; P mi cr o &#215; R mi cr o P mi cr o + R mi cr o . The c(e) notation denotes the number of samples correctly assigned to the label e out of sample E, p(e) defines the number of samples assigned to e, and (e) represents the number of actual samples in e. Thus, Pmicro is the microaveraged precision score, and Rmicro is the microaveraged recall score. 
Macro averaging, on the other hand, uses precision and recall based on different emotion sets, calculating the metric independently for each class treating all classes equally. Then F 1 is calculated as mentioned in the equation for both. The mathematical equations of macroaveraged F 1 are provided in 1,2,3 respectively: P e = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E number o f p(e) , R e = &#8721; e&#8712;E number o f c(e) &#8721; e&#8712;E number o f (e)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>defined. Impact of results were shown by conducting experiments, analysing results on stylometricbased features, pretrained word embedding, word ngrams, and character ngrams for multilabel emotion detection. Our experiments concluded that RF combined with BR performed the best with unigram features achieving 56.10 microaveraged F 1 , 60.20 macroaveraged F 1 , and 51.20 M1 accuracy. The superiority of machinelearning techniques over neural baselines identified a vacuum for the neural net techniques to experiment. There are several limitations of this work: (1) Reproducibility is one of the major concern because of the computational complexity and scalability of the algorithms such as RF and Adaboost.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of stateoftheart in multilabel emotion detection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Link EmoBank Affective Text DailyDialog Electoral Tweets EmoInt Emotion Stimulus Grounded Emotions FbValence Arousal Stance Corpus Emotion Sentiment</ns0:cell><ns0:cell>Size 10,000 1,250 13,118 100,000 7,097 2,414 2,557 2,895 4,868</ns0:cell><ns0:cell>Language English English English English English English English English English</ns0:cell><ns0:cell>Data source (MASCIde et al. (2010)+SE07Strapparava and Mihalcea (2007)) News websites (i.e. Google news, CNN) Dialogues from human conversations Twitter Twitter FrameNets annotated data Twitter Facebook Twitter</ns0:cell><ns0:cell>Composition VAD Ekmans emotions + valence in dication (positive/negative). Ekman's emotion + No emotion Plutchik's emotions + sentiment (positive/negative) Intensities of sadness, fear, anger, and joy Ekman's emotions and shame Emotional state (happy or sad) + five types of external fac tors namely user predisposition, weather, social network, news exposure, and timing valence (sentiment) + arousal (intensity) Plutchik's emotions</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Table 2 mentions the final distribution of tweets per label. The features mentioned in each example of tweet included tweetid, tweet, hashtags, username, date, and time. The dataset is publicly available on GitHub. 
2</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Distribution of emotions in the dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Emotions Anger ( ) Disgust &#8235;&#1578;(&#8236; ) Fear &#8235;&#1601;(&#8236; ) Sadness ( &#8235;)&#1575;&#1583;&#1575;&#8236; Surprise &#8235;&#1578;(&#8236; ) Happiness ( )</ns0:cell><ns0:cell>Train 833 756 594 2,206 1,572 1,040</ns0:cell><ns0:cell>Test 191 203 184 560 382 278</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Statistics based on train and test dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset All Train Test</ns0:cell><ns0:cell>Tweets 6,043 4,818 1,225</ns0:cell><ns0:cell>Words 44,525 44,525 11,425</ns0:cell><ns0:cell>Avg. Word 9.24 9.24 9.32</ns0:cell><ns0:cell>Char 224,806 224,806 57,658</ns0:cell><ns0:cell>Avg. Char 46.65 46.65 47.06</ns0:cell><ns0:cell>Vocab 14,101 9,840 4,261</ns0:cell></ns0:row><ns0:row><ns0:cell>4 BASELINE</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>4.1 Feature RepresentationsFour types of text representation were used: character ngrams, word ngrams, stylometric features, and pretrained word embeddings.3 https://pypi.org/project/urduhack/ 4 https://github.com/urduhack/urdu-stopwords/blob/master/stop_words.txt 8/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Number of apostrophe, ampersands, asterisks, at the rate signs, brackets, characters without spaces, colons, commas, counts, dashes, digits, dollar signs, ellipsis, equal signs, exclamation marks, greater and less than signs, left and right curly braces, left and right parenthesis, left and right square brackets, full stops, multiple question marks, percentage signs, plus signs, question marks, tilde, underscores, tabs, slashes, semicolons, single quotes, vertical lines, and white spaces&#894; &#8226; Percentage of commas, punctuation characters, and semicolons&#894; &#8226; Ratio by N (where N = total no of characters in Urdu tweets) of white spaces by N, digits by N, letters by N, special characters by N, tabs by N, upper case letter and characters by N.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Deep learning parameters for 1DCNN and LSTM.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Epochs Optimizer Loss Learning Rate Regularization Bias Regularization Validation Split Hidden Layer 1 Dimension Hidden Layer 1 Activation Hidden Layer 1 Dropout Hidden Layer 2 Dimension Hidden Layer 2 Activation Hidden Layer 2 Dropout Hidden Layer 3 Dimension Hidden Layer 3 Activation Hidden Layer 3 Dropout</ns0:cell><ns0:cell>1DCNN 100 Adam categorical crossentropy categorical crossentropy LSTM 150 Adam 0.001 0.0001 0.01 0.01 0.1 0.1 16 16 tanh tanh 0.2 32 32 tanh tanh 0.2 64 tanh</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>11/19</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Table 5 shows the result of each of the baseline machine and deeplearning classifiers using word ngrams to detect multilabel emotions from our dataset. Unigram shows the best result on RF in combination with a BR transformation method and it achieves 56.10% of macro F 1 . It outperforms bigram and trigram features. When word uni, bi, and trigrams, features are combined, AdaboostM1 gives the best results and obtains 42.60% of macro F 1 . However, results achieved with combined features are still inferior as compared to individual ngram features. A series of experiments on character ngrams were conducted. Results of char 3gram to char 9gram are mentioned in Table 6. It shows that RF consistently provides the best results paired with BR on character 3gram and obtains the macro F 1 of 52.70%. It is observed that macro F 1 decreases while increasing the number of characters in our features. A combination of character ngram (39) achieved the best results using RF with LC, but still lagged behind all individual ngram measures. Overall, word based ngram feature results are very close to each other and achieves better results than most of the char based ngram features. Best results for multilabel emotion detection using word ngram features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Word 1gram Word 2gram Word 3gram Word 13gram</ns0:cell><ns0:cell>MLC BR LC BR BR</ns0:cell><ns0:cell>SLC RF SMO RF Combination of Word N-gram Acc. EM Word N-gram 51.20 32.30 43.60 30.30 39.90 16.60 AdaBoostM1 35.10 14.90</ns0:cell><ns0:cell>HL 19.40 21.70 28.40 30.10</ns0:cell><ns0:cell>MicroF 1 60.20 50.20 50.00 44.50</ns0:cell><ns0:cell>MacroF 1 56.10 47.50 48.10 42.60</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>illustrates the results of stylometrybased features which were tested on a different set of feature groups such as characterbase, wordbase, vocabulary richness and combination of first three features. Wordbased feature group depicts the macro F 1 of 42.60% which is trained on Adaboost M1 and binary relevance. Lastly, experiments on deeplearning algorithms such as 1DCNN, LSTM, LSTM with CNN features show promising results for multilabel emotion detection. LSTM achieves the highest macro F 1 score of 55.00% while 1DCNN and LSTM with CNN features achieve slightly lower macro F 1 scores. Table</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using char ngram features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Char 3gram Char 4gram Char 5gram Char 6gram Char 7gram Char 8gram Char 9gram Char 39</ns0:cell><ns0:cell>MLC BR BR BR BR BR BR BR LC</ns0:cell><ns0:cell>SLC RF Bagging Bagging Bagging RF RF RF Combination of Character N-gram Acc. 
EM HL Character N-gram 47.20 28.20 21.10 38.60 21.70 25.60 38.30 16.50 28.80 37.80 16.90 29.30 36.10 15.50 31.00 34.80 11.80 31.50 34.80 11.80 31.50 RF 33.60 32.90 12.10</ns0:cell><ns0:cell>MicroF 1 56.60 47.30 47.90 46.30 44.70 45.30 45.10 32.30</ns0:cell><ns0:cell>MacroF 1 52.70 44.60 46.30 45.50 43.80 43.50 43.40 33.90</ns0:cell></ns0:row></ns0:table><ns0:note>Considering four text representations, the bestperforming algorithm is RF with BR that trained on13/19PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using stylometrybased features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features Characterbased Wordbased Vocabulary richness All features</ns0:cell><ns0:cell>MLC BR BR BR BR</ns0:cell><ns0:cell>SLC DT AdaBoostM1 35.10 14.90 30.10 Acc. EM HL 33.70 10.7 31.90 AdaBoostM1 34.10 11.80 31.10 AdaBoostM1 35.00 14.90 30.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 44.40 42.40 44.50 42.60 44.50 42.50 44.50 42.50</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using pretrained word embedding features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model 1D CNN fastText (300) 45.00 42.00 36.00 Features (dim) Acc. EM HL LSTM fastText (300) 44.00 42.00 35.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 35.00 54.00 32.00 55.00</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Best results for multilabel emotion detection using contextual pretrained word embedding features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model LSTM BERT BERT Contextual Embeddings (768) 15.00 44.00 57.00 Features (dim) Acc. EM HL fastText (300), 1D CNN (16) 46.00 35.00 36.00</ns0:cell><ns0:cell>MicroF 1 MacroF 1 34.00 53.00 54.00 37.00</ns0:cell></ns0:row></ns0:table><ns0:note>unigram features achieve macro F 1 score of 56.10%. Deep learning algorithms performed well using fastText pretrained word embeddings and results are consistent in all the experiments.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Comparison of stateoftheart results in multilabel emotion detection.In terms of reproducibility, our machine learning algorithm results are much easier to reproduce with MEKA software. It is because default parameters were used to analyze the baseline results. The main challenge for this task is to generate ngram features in a specific .arff format which is the main require ment of this software to run the experiments. For this purpose, we use sklearn library to extract features from the Urdu tweets and then use python code to convert them into .arff supported format. The code is publicly available. Hence, academics and industrial environments can repeat experiments by just follow ing the guidelines of the software.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Ameer et al. (2021) RF Model Zhang et al. (2020) MMS2S Samy et al. (2018) CGRU Ju et al. 
(2020) MESGN Proposed 1D CNN Proposed RF</ns0:cell><ns0:cell>Features ngram -AraVec, word2vec -fastText word unigram</ns0:cell><ns0:cell>Accuracy 45.20 47.50 53.20 49.4 45.00 51.20</ns0:cell><ns0:cell>MicroF 1 57.30 -49.50 -35.00 60.20</ns0:cell><ns0:cell>MacroF 1 55.90 56.00 64.80 56.10 54.00 56.10</ns0:cell><ns0:cell>HL 17.90 18.30 -18.00 36.00 19.40</ns0:cell></ns0:row><ns0:row><ns0:cell>5.1 Discussion</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>14/19PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot' n='8'>http://meka.sourceforge.net 9 https://keras.io/ 10 https://pytorch.org 11 https://github.com/Noman712/Mutilabel_Emotion_Detection_Urdu/tree/master/code 10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62080:2:0:NEW 20 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Original Article Title: “Multi-label emotion classification of Urdu tweets”  Dear Editor:                         Thank you very much for allowing a resubmission of our manuscript “Multi-label emotion classification of Urdu tweets”. We have updated the manuscript based on the recommendation of the reviewers; however, our task is not based on sentiment analysis. There is a major distinction between sentiment analysis and emotion detection tasks. Both of these tasks have their own applications and hundreds of research articles exist in their own respective fields.    First, the reviewer is forcing us to add a literature review related to neutrality and ambivalence for sentiment analysis and multilingual sentiment analysis. Both of these topics have no relevance to our multi-label emotion detection task and it is quite difficult to merge sentiment analysis literature review with our emotion detection literature review. However, because of his recommendation, we have added a few sentiment analysis related articles in our paper. Still, if the reviewer wanted to add his/her own papers, it would be better to directly mention the name of these papers in the review.   Second, according to the reviewer, this task is a binary classification problem that was mentioned in his/her 1st review. I believe, the reviewer has little knowledge of multi-label classification problems or the reviewer cannot distinguish between binary, multi-class and multi-label classification problems. In our article, we are not tackling the binary classification problem, so there is no relationship between neutrality or ambivalence with our emotion detection task.  Additionally, we believe that our manuscript has been considerably improved as a result of these revisions, and hope that our revised manuscript “Multi-label emotion classification of Urdu tweets” is acceptable for publication in the Peerj Computer Science.     Best regards Dr Noman Ashraf E-mail: nomanashraf712@gmail.com Reviewer 1  Additional comments   (Concern#1) Many of the claims made by the authors are not backed by the revision, e.g., author claim to have added literature related to neutrality and ambivalence for sentiment analysis but there is no mention whatsoever of neither of them.   Author response: Thanks a lot for the suggestion. Author action:  We have added additional paragraphs in the Literature review according to your suggestions and changes can be seen in the Section 2 from lines 141 to 149 and 225 to 233 in a tracked changes pdf file.       (Concern#2) Finally, presentation is still not up to PeerJ standards and important relevant literature on multilingual sentiment analysis is still missing.   Author response: Thanks a lot for the suggestion. Author action: We thoroughly examined the existing grammatical mistakes and problematic sentences in the article. The paper now is sufficiently improved according to the standards of PeerJ Computer Science.    "
Here is a paper. Please give your review comments after reading it.
357
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>One of the key challenges in facial recognition is multi-view face synthesis from a single face image. The existing generative adversarial network (GAN) deep learning methods have been proven to be effective in performing facial recognition with a set of preprocessing, post-processing and feature representation techniques to bring a frontal view into the same position in-order to achieve highaccuracy face identification. However, these methods still perform relatively weak in generating high quality frontal-face image samples under extreme face pose scenarios. The novel framework architecture of the twopathway generative adversarial network (TP-GAN), has made commendable progress in the face synthesis model, making it possible to perceive global structure and local details in an unsupervised manner. More importantly, the TP -GAN solves the problems of photorealistic frontal view synthesis by relying on texture details of the landmark detection and synthesis function, which limits its ability to achieve the desired performance in generating high-quality frontal face image samples under extreme pose. We propose, in this paper, a landmark feature-based method (LFM) for robust pose-invariant facial recognition, which aims to improve image resolution quality of the generated frontal faces under a variety of facial poses. We therefore augment the existing TP-GAN generative global pathway with a well-constructed 2D face landmark localization to cooperate with the local pathway structure in a landmark sharing manner to incorporate empirical face pose into the learning process, and improve the encoder-decoder global pathway structure for better representation of facial image features by establishing robust feature extractors that select meaningful features that ease the operational workflow toward achieving a balanced learning strategy, thus significantly improving the photorealistic face image resolution. We verify the effectiveness of our proposed method on both Multi-PIE and FEI datasets. The quantitative and qualitative experimental results show that our proposed</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Face recognition is one of the most commonly used biometric systems for identifying individuals and objects on digital media platforms. Due to changes in posture, illumination, and occlusion, face recognition faces multiple challenges. The challenge of posture changes comes into play when the entire face cannot be seen in an image. Normally, this situation may happen when a person is not facing the camera during surveillance and photo tagging. In order to overcome these difficulties, several promising face recognition algorithms based on deep learning have been developed, including generative adversarial networks (GANs). These methods have been shown to work more efficiently and accurately than humans at detection and recognition tasks. In such methods, pre-processing, post-processing, and multitask learning or feature representation techniques are combined to provide high accuracy results on a wide range of benchmark data sets <ns0:ref type='bibr' target='#b0'>(Junho et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b1'>Chao et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b2'>Xi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Jian et al., 2018)</ns0:ref>. 
The main hurdle to these methods is multi-view face synthesis from a single face image <ns0:ref type='bibr' target='#b5'>(Bassel, Ilya &amp; Yuri, 2021;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chenxu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b9'>Hang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Luan, Xi &amp; Xiaoming, 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Rui et al., 2017)</ns0:ref>. Furthermore, a recent study <ns0:ref type='bibr' target='#b14'>(Soumyadip et al., 2016)</ns0:ref> emphasized that compared with frontal face images with yaw variation less than 10 degrees, the accuracy of recognizing face images with yaw variation more than 60 degrees is reduced by 10%. The results indicate that pose variation continues to be a challenge for many real-world facial recognition applications. The existing approaches to these challenges can be divided into two main groups. In a first approach, frontalization of the input image is used to synthesize frontal-view faces <ns0:ref type='bibr' target='#b16'>(Meina et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Tal et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Christos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Junho et al., 2015)</ns0:ref>, meaning that traditional facial recognition methods are applicable. Meanwhile, the second approach focuses on learning discriminative representations directly from non-frontal faces through either a one-joint model or multiple pose-specific models <ns0:ref type='bibr' target='#b19'>(Omkar, Andrea &amp; Andrew, 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Florian, Dmitry &amp; James, 2015)</ns0:ref>. It is necessary to explore the above approaches in more detail before proceeding. For the first approach, the conventional approaches often make use of robust local descriptors, (such as <ns0:ref type='bibr' target='#b21'>John, 1985;</ns0:ref><ns0:ref type='bibr' target='#b22'>Lowe, 1999;</ns0:ref><ns0:ref type='bibr' target='#b23'>Ahonen, Hadid &amp; Pietik&#228;inen, 2006;</ns0:ref><ns0:ref type='bibr'>Dalal &amp; Triggs, 2015)</ns0:ref>, to account for local distortions and then adapt the metric learning method to achieve pose invariance. Moreover, local descriptors are often used <ns0:ref type='bibr' target='#b26'>(Kilian &amp; Lawrence, 2009;</ns0:ref><ns0:ref type='bibr'>Tsai-Wen et al., 2013)</ns0:ref> approaches to eliminate distortions locally, followed by a metric learning method to prove pose invariance. However, due to the tradeoff between invariance and discriminability, this type of approach is relatively weak in handling images with extreme poses. A second approach, often known as face rotation, uses one-joint models or multiple pose-specific models to learn discriminative representations directly from non-frontal faces. These methods have shown good results for near-frontal face images, but they typically perform poorly for profile face images because of severe texture loss and artifacts. Due to this poor performance, researchers have been working to find more effective methods to reconstruct positive facial images from data <ns0:ref type='bibr' target='#b30'>(Yaniv et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>Amin &amp; Xiaoming, 2017;</ns0:ref><ns0:ref type='bibr' target='#b2'>Xi et al., 2017)</ns0:ref>. 
For instance, <ns0:ref type='bibr' target='#b0'>(Junho et al., 2015)</ns0:ref>, adopted a multi-task model to improves identity preservation over a single task model from paired training data. Later on <ns0:ref type='bibr' target='#b35'>(Luan, Xi &amp; Xiaoming, 2017;</ns0:ref><ns0:ref type='bibr' target='#b13'>Rui et al., 2017)</ns0:ref>, their main contribution was a novel two-pathway GAN architecture tasks for photorealistic and identity preserving frontal view synthesis starting from a single face image. Recent work by <ns0:ref type='bibr' target='#b5'>(Bassel, Ilya &amp; Yuri, 2021;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chenxu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yi et al.,2021)</ns0:ref> has demonstrated advances in the field of face recognition. During pose face transformation, however, some of the synthetic faces appeared incomplete and lacked fine detail. So far, the TP-GAN <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref> has made significant progress in the face synthesis model, which can perceive global structure and local details simultaneously in an unsupervised manner. More importantly, TP-GAN solves the photorealistic frontal view synthesis problems by collecting more details on local features for a global encoder-decoder network along with synthesis functions to learn multi-view face synthesis from a single face image. However, we argue that TP-GAN has two major limitations. First, it is critically dependent on texture details of the landmark detection. To be more specific, this method focuses on the inference of the global structure and the transformation of the local texture details, as their corresponding feature maps, to produce the final synthesis. The image visual quality results indicate that these techniques alone have the following deficiencies: A color bias can be observed between the synthetic frontal face obtained by TP-GAN method and the input corresponding to non-frontal input. In some cases, the synthetic faces are even incomplete and fall short in terms of fine detail. Therefore, the quality of the synthesized images still cannot meet the requirements for performing specific facial analysis tasks, such as facial recognition and face verification. Second, it uses a global structure, four local network architectures and synthesis functions for face frontalization, where training and inference are unstable under large data distribution, which makes it ineffective for synthesising arbitrary poses. The goal of this paper is to address these challenges through a landmark feature-based method (LFM) for robust pose-invariant facial recognition to improve image resolution under extreme facial poses.</ns0:p><ns0:p>In this paper, we make the following contributions:</ns0:p><ns0:p>The LFM is a newly introduced method for the existing generative global pathway structure that utilizes a 2D face landmark localization to cooperate with the local pathway structure in a landmark sharing manner to incorporate empirical face pose into the learning process. LFM of target facial details provides guidance to arbitrary pose synthesis, whereas the four-local patch network architecture remains unchanged to capture the input facial local perception information. The LFM provides an easy way for transforming and fitting two-dimensional face models in order to achieve target pose variation and learn face synthesis information from generated images. 
In order to better represent facial image features, we use a denoising autoencoder (DAE) to modify the structure of the generator's global-path encoder and decoder. The goal of this modification is to train the encoder decoder with multiple noise levels so that it can learn about the missing texture face details. Adding noise to the image pixels causes them to diffuse away from the manifold. As we apply DAE to the diffused image pixels, it attempts to pull the data points back onto the manifold. This implies that DAE implicitly learns the statistical structure of the data by learning a vector field from locations with no data points back to the data manifold. As a result, encoder-decoders must infer missing pieces and retrieve the denoised version in order to achieve balanced learning behavior. We optimize the training process using an accurate parameter configuration for a complex distribution of facial image data. By re-configuring the parameters (such as the learning rate, batch size, number of epochs, and etc.), the GAN performance can be better optimized during the training process. Occasionally, unstable 'un-optimized' training for the synthetic image problem results in unreliable images for extreme facial positions.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>In this section, we focus on the most recent studies which are related to the multi-view face synthesis problem using deep learning approaches. The deep learning approaches including face normalization, generative adversarial network and facial landmark detection, are reviewed.</ns0:p><ns0:p>Face normalization, or multi-view face synthesis from a single face image, is a unique challenge for computer vision systems due to its ill-posed problem. The existing solutions to address this challenge can be classified into three categories:</ns0:p><ns0:p>local texture warping methods <ns0:ref type='bibr'>(Tal et 2D/3D al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b50'>Xiangyu et al., 2015)</ns0:ref>, statistical methods <ns0:ref type='bibr' target='#b18'>(Christos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b51'>Li et al., 2014)</ns0:ref>, and deep learning methods <ns0:ref type='bibr' target='#b2'>(Xi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Luan, Xi &amp; Xiaoming, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>(Tal et al., 2015)</ns0:ref>, employed a single reference surface for all query faces in order to produce face 3D frontalization. <ns0:ref type='bibr' target='#b50'>(Xiangyu et al., 2015)</ns0:ref>, employed a pose and expression normalization method to recover the canonical-view. <ns0:ref type='bibr' target='#b18'>(Christos et al., 2015)</ns0:ref>, proposed a joint frontal view synthesis and landmark localization method. <ns0:ref type='bibr' target='#b51'>(Li et al., 2014)</ns0:ref>, the authors concentrated on local binary patternlike feature extraction. <ns0:ref type='bibr' target='#b2'>(Xi et al., 2017)</ns0:ref>, proposed a novel deep 3DMM-conditioned face frontalization GAN in order to achieve identity-preserving frontalization and high-quality images by using a single input image with a face pose. 
<ns0:ref type='bibr' target='#b35'>(Luan, Xi &amp; Xiaoming, 2017)</ns0:ref>, proposed a 90 0 single-pathway framework called the disentangled representation learning-generative adversarial network (DR-GAN) to learn identity features that are invariant to viewpoints, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>Generative Adversarial Networks (GANs)</ns0:head><ns0:p>The GAN is one of the most interesting research frameworks that is used for deep generative models proposed by <ns0:ref type='bibr' target='#b39'>(Ian et al., 2014)</ns0:ref>. The theory behind the GAN framework can be seen as a two-player non-cooperative game to improve the learning model. A GAN model has two main components, generator and discriminator . generates a set of images that is as plausible (G) (D) G as possible in order to confuse the , while the works to distinguish the real generated images D D from the fake. The convergence is achieved by alternately training them. The main difference between GANs and traditional generative models is that GANs generate whole images rather than pixel by pixel. In a GAN framework, the generator consists of two dense layers and a dropout layer. A normal distribution is used to sample the noise vectors and feed them into the generator networks. The discriminator can be any supervised learning model. GANs have been proven effective for a wide range of applications, such as image synthesis <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017;</ns0:ref><ns0:ref type='bibr'>Yu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b38'>Yu et al., 2020)</ns0:ref>, image super-resolution <ns0:ref type='bibr' target='#b41'>(Christian et al., 2017)</ns0:ref>, image-to-image translation <ns0:ref type='bibr' target='#b49'>(Jun-Yan et al., 2017)</ns0:ref>, etc. Several effective GAN models have been proposed to cope with the most complex unconstrained face image situations, such as changes in pose, lighting and expression. For instance, (Alec, <ns0:ref type='bibr' target='#b43'>Luke &amp; Soumith, 2016)</ns0:ref>, proposed a deep convolutional GAN to integrate a convolutional network into the GAN model to achieve more realistic face image generation. <ns0:ref type='bibr' target='#b44'>(Mehdi &amp; Simon, 2014)</ns0:ref>, proposed a conditional version of the generative adversarial net framework in both generator and discriminator. <ns0:ref type='bibr' target='#b45'>(Augustus, Christopher &amp; Jonathon, 2017)</ns0:ref>, presented an improved version of the Cycle-GAN model called 'pixel2pixel' to handle the image-to-image translation problems by using labels to train the generator and discriminator. <ns0:ref type='bibr' target='#b46'>(David, Thomas &amp; Luke, 2017)</ns0:ref>, proposed a boundary equilibrium generative adversarial network (BE-GAN) method, which focuses on the image generation task to produce high-quality image resolution, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>Facial Landmark Detection</ns0:head><ns0:p>The face landmark detection algorithm is one of most successful and fundamental components in a variety of face applications, such as object detection and facial recognition. The methods used for facial landmark detection can be divided into three major groups; holistic methods, constrained local methods, and regression-based methods. In the past decade, deep learning models have proven to be a highly effective way to improve landmark detection. 
Several existing methods are considered to be good baseline solutions to the 2D face alignment problem for faces with controlled pose variation <ns0:ref type='bibr' target='#b52'>(Xuehan &amp; Fernando, 2013;</ns0:ref><ns0:ref type='bibr' target='#b54'>Georgios, 2015;</ns0:ref><ns0:ref type='bibr'>Xiangyu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b59'>Adrian &amp; Georgios, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b52'>(Xuehan &amp; Fernando, 2013)</ns0:ref>, proposed supervised descent method, which learns the general descent directions in a supervised manner. <ns0:ref type='bibr' target='#b54'>(Georgios, 2015)</ns0:ref>, in their method, a sequence of Jacobian matrices and hessian matrices is determined by using regression. <ns0:ref type='bibr'>(Xiangyu et al., 2016)</ns0:ref>, proposed a model with cascaded convolutional neural network to 3D solve the self-occlusion problem. <ns0:ref type='bibr' target='#b59'>(Adrian &amp; Georgios, 2017)</ns0:ref>, proposed a guided-by-2D landmarks convolutional neural network that converts annotations into annotations, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>2D 3D</ns0:head><ns0:p>We can summarize some important points from our related work. Despite the fact that the existed methods produced good results on the specific face image datasets for which they were designed and provided robust alignment across poses, they are difficult to replicate if they are applied alone to different datasets. This is especially true for tasks like facial normalization or other face synthesis tasks, where deep structure learning methods still fail to generate highquality image samples under extreme pose scenarios, which results in significantly inferior final results.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Method</ns0:head><ns0:p>In this section, we shall first briefly describe the existing TP-GAN architecture and then describe our proposed LFM method in detail.</ns0:p></ns0:div> <ns0:div><ns0:head>TP-GAN Architecture</ns0:head><ns0:p>Based on the structure shown in Fig <ns0:ref type='figure'>1</ns0:ref> to integrate facial features with their long-range dependencies and, therefore, to create faces that preserve identities, especially in cases of faces with large pose angles. In this way, we can learn a PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:10:66723:1:2:NEW 31 Dec 2021)</ns0:ref> Manuscript to be reviewed Computer Science richer feature representation and generate inferences that incorporate both contextual dependencies and local consistency. The loss functions, including pixel-wise loss, symmetry loss, adversarial loss, and identity preserving loss, are used to guide an identity preserving inference of frontal view synthesis. The discriminator is used to distinguish real facial &#119863; &#120579; &#119863; images or 'ground-truth (GT) frontal view' from synthesized frontal face images or &#119868; &#119865; &#119866; &#120579; &#119866; (&#119868; &#119875; )</ns0:p><ns0:p>'synthesized-frontal (SF) view'. A second stage involves a light-CNN model that is used to compute face dataset's identity-preserving properties. For a more detailed description, see <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>LFM for Generative Global Pathway Structure</ns0:head><ns0:p>To the best of our knowledge, this is the first study to integrate an LFM with the existing TP-GAN global pathway structure for training and evaluation purposes. 
In this work, we exploit a landmark detection mechanism (Adrian &amp; Georgios, 2017) that proposed for 2D-to-3D facial landmark localization to help our model obtain a high quality frontal-face image resolution. Face landmarks are the most compressed representation of a face that maintains information such as pose and facial structure. There are many situations where landmarks can provide advanced facerelated analyses without using whole face images. The landmark method used in this study was explored at <ns0:ref type='bibr' target='#b2'>(Xi Y et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Shuicheng &amp; Jiashi. 2018;</ns0:ref><ns0:ref type='bibr' target='#b60'>Xing D, Vishwanath. Sindagi &amp; Vishal. 2018)</ns0:ref>. These methods can achieve high accuracy of face alignment by cascaded regression methods. Methods like these work well when particular poses are chosen without taking other factors into consideration, such as facial characteristics. We found that facial characteristics can play an imperative role in improving the results of the current state of the art. By adding landmarks to augment the synthesized faces, recognition accuracy will be improved since these landmarks rely on generative models to enhance the information contained within them. The process for generating facial images is shown in Fig 1. We will discuss our Fig 2 architecture in the subsequent paragraph. We perform a face detection to locate the face in the Multi-PIE and FEI datasets. The face detection can be achieved by using a Multi-Task Cascade CNN through the MTCNN library <ns0:ref type='bibr' target='#b32'>(Kaipeng Z et al.,2016)</ns0:ref>. After that, cropping and processing of the profile image. A local pathway of four landmark patch networks to capture the local</ns0:p><ns0:formula xml:id='formula_0'>&#119866; &#120579; &#119897; &#119894; , &#119894; &#8712; {0, 1, 2, 3}</ns0:formula><ns0:p>texture around four facial landmarks. Each patch learns a set of filters for rotating the centercropped patch (after rotation, the facial landmarks remain in the center) to its corresponding frontal view. Then, we used a multiple feature map to combine the four facial tensors into one. Each tensor feature is placed at a 'template landmark location' and a max-out fusing strategy is used to ensure that stitches on overlapping areas are minimized. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>characteristics. The challenge becomes even greater with these shapes when the face is under extreme poses. A texture can be defined as a function of the spatial variation of brightness intensity of pixels in an image. Each texture level represents a variable, with variations such as smoothness, coarseness, regularity, and etc., of each surface oriented in different directions. Our work focuses on two important phenomena: rotation and noise 'noise is a term used to describe image information that varies randomly in brightness or color'. As a result, if the methods used to eliminate these common phenomena are unreliable, the results will be less accurate; therefore, in practice, the methods used to create the images should be as robust and stable as possible. In addition, the images may differ in position, viewpoint, and light intensity, all of which can influence the final results, challenging texture detail capturing. 
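For reference, the detection-and-cropping step mentioned earlier in this section can be sketched as follows, assuming the Multi-Task Cascade CNN is available through the `mtcnn` Python package, OpenCV is used for image loading, and the 128-pixel output size is a placeholder. The texture-capture challenges just described are then addressed by the landmark localization method discussed next.

```python
import cv2
from mtcnn import MTCNN  # assumed Python binding for the Multi-Task Cascade CNN detector

detector = MTCNN()

def detect_and_crop(image_path, size=128):
    """Locate the largest face in a profile image and return a square crop."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(img)
    if not faces:
        return None
    x, y, w, h = max(faces, key=lambda f: f["box"][2] * f["box"][3])["box"]
    x, y = max(x, 0), max(y, 0)
    crop = img[y:y + h, x:x + w]
    return cv2.resize(crop, (size, size))
```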
In order to overcome these challenges, we must adapt a method that can capture and restore the missing texture information.</ns0:p><ns0:p>The key to solving this problem is a landmark localization method based on regression. Our work utilized Face Alignment Network (FAN). A FAN framework is based on the HourGlass (HG) architecture, which integrates four Hourglass models to model human pose through hierarchical, parallel, and multi-scale integration to improve texture maps by reconstructing selfoccluded parts of faces. The landmark detection algorithm captures and restores the texture details of the synthesised face image by repositioning the appearance spot of the mismatched or drifted patch 'template landmark location'. Figure <ns0:ref type='figure'>3</ns0:ref> illustrates some examples. Our method allows us to treat faces that have a variety of shape characteristics. In this way, the spatial variations, smoothness, and coarseness that arise due to mismatched or drifted pixels between local and global synthesized faces are eliminated. Typical landmark templates are approximately the same size as a local patch network, but each region has its own structure, texture, and filter. Next, we combined the local synthesis image with the global synthesis image (or two textures) for data augmentation. Every patch of our FAN has its own augmented channels, and each patch has its own RGB along with a depth map (D) input for each 2D local synthesis image. In this way, the texture details help us to build a more robust model around the face patch region and enhance generalization. Even though landmark feature extraction may result in some incongruous or over-smoothing due to noise, it still remains an important method for incorporating pose information during learning.</ns0:p><ns0:p>The landmark detection algorithm for our synthesis face image was built using 68 points. We then reconstructed those synthesis image into four uniform patches (or templates), &#119871;&#119890;&#119910;&#119890; = &#119866; &#120579; &#119892; 0 , , and , and each patch is comprised</ns0:p><ns0:formula xml:id='formula_1'>,&#119894; &#8712; 0 &#119877;&#119890;&#119910;&#119890; = &#119866; &#120579; &#119892; 1 ,&#119894; &#8712; 1 &#119873;&#119900;&#119904;&#119890; = &#119866; &#120579; &#119892; 2 ,&#119894; &#8712; 2 &#119872;&#119900;&#119906;&#119905;&#8462; = &#119866; &#120579; &#119892; 3</ns0:formula><ns0:p>,&#119894; &#8712; 3 of convolutional components. Each patch region has its own filter, which contains different texture details, regain size and structure information. Individual filters provide more details about specific areas in an image, such as pixels or small areas with a high contrast or that are different in color or intensity from the surrounding pixels or areas. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science concatenation stage. Those layers' act as a visualization feature map for a specific input of a fontal-face image in order to increase the amount of visual information kept by subsampling layers' structure. Then, we merged the features tensors of the local and global pathways into one tensor to produce the final synthesis face. 
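A minimal sketch of the 68-point landmark extraction and the four patch templates described above is given below. It assumes the open-source `face_alignment` package (an implementation of FAN) and the standard 68-point index convention for the eyes, nose, and mouth; the enum/argument names vary slightly between package versions, and the patch size is a placeholder.

```python
import numpy as np
import face_alignment  # assumed open-source FAN implementation

# Older releases use LandmarksType._2D instead of TWO_D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device="cpu")

# Standard 68-point index ranges for the four patch centers (iBUG convention).
PATCH_INDICES = {"left_eye": range(36, 42), "right_eye": range(42, 48),
                 "nose": range(27, 36), "mouth": range(48, 68)}

def extract_patches(face_rgb, patch_size=40):
    """Return the four center-cropped landmark patches used by the local pathway."""
    landmarks = fa.get_landmarks(face_rgb)
    if not landmarks:
        return None
    pts = landmarks[0]                    # (68, 2) array of (x, y) landmark coordinates
    half = patch_size // 2
    patches = {}
    for name, idx in PATCH_INDICES.items():
        cx, cy = pts[list(idx)].mean(axis=0).astype(int)   # "template landmark location"
        patches[name] = face_rgb[max(cy - half, 0):cy + half,
                                 max(cx - half, 0):cx + half]
    return patches
```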
Table <ns0:ref type='table'>1</ns0:ref> shows the workflow of all operational steps.</ns0:p><ns0:p>The landmark method provides useful information for large-pose regions, e.g., , which helps 90 0 our model to produce more realistic images.</ns0:p></ns0:div> <ns0:div><ns0:head>Global Pathway Encoder-Decoder Structure</ns0:head><ns0:p>In this section we describe our encoder-decoder formulation. Inspired by the work of <ns0:ref type='bibr' target='#b34'>(Jimei Y et al., 2016)</ns0:ref>, for the DAE, our aim is to train the encoder-decoder with multiple noise levels in order to learn more about the missing texture face details of the input face image and preserve the identity of the frontal-view image from the profile image . The encoder-decoder &#119868; &#119865; &#119868; &#119875; mechanism has to discover and capture information between the dimensions of the input in order to infer missing pieces and recover the denoised version. In a subsequent paragraph, we will discuss encoder-decoders in more technical details. The idea starts with assuming that the input data points (image pixels) lies on a manifold in , where is the a set of databases. Adding &#8477; &#119873; &#8477; &#119873; noise to the data image pixel results in diffusing away from the manifold. When we apply DAE to the diffused data image pixels, it tries to pull the data point back onto the manifold. Therefore, DAE learns a vector field pointing from locations with no data point back to the data manifold, implying that it implicitly learns the statistical structure of the data. However, a sparse coding model has been shown to be a good model for image denoising. We assume that group sparse coding, which generalizes standard sparse coding, is effective for image denoising as well, and we will view from a DAE perspective. The encoding function of sparse coding occurs in the inference process, where the network infers the latent variable from noisy input . Each &#119904; &#119909; individual symbol is defined here. Let be the RGB components of the input face image, is &#119891; &#119890; &#120567; the method that splits the input image into its RBG components, is a set of weights and bias &#8743; for the DAE, and is the activation function. In our case, the iterative shrinkage-thresholding &#119886; algorithm (ISTA) is used to perform inference and is formulated as follows:</ns0:p><ns0:formula xml:id='formula_2'>&#119904; = &#119891; &#119890; (&#119909;; {&#120567;, &#8743; , &#119886;}) = &#119868;&#119878;&#119879;&#119860; (&#119909;; &#120567;, &#8743; , &#119886;) (1)</ns0:formula><ns0:p>The decoding function is the network's reconstruction of the input from the latent variable.</ns0:p><ns0:formula xml:id='formula_3'>&#119909; = &#119891; &#119889; (&#119904;; &#120567;) = &#120567; &#119904; (2)</ns0:formula><ns0:p>where is a denoising function.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119891; &#119889;</ns0:head><ns0:p>The DAE method can be used to learn through the following. For each input data point as &#8743; noise variance which is the same in all directions. We use group sparse coding to denoise , as &#119909; described in Equation (1) and Equation ( <ns0:ref type='formula'>2</ns0:ref>). 
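As a hedged illustration of Equations (1) and (2), the following NumPy sketch performs ISTA inference of a sparse code s for a noisy input x and a linear reconstruction x̂ = Φs. The random dictionary, step size, sparsity weight, and iteration count are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def soft_threshold(z, t):
    """Element-wise shrinkage operator used by ISTA."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_encode(x, Phi, lam=0.1, n_iter=100):
    """Eq. (1): infer a sparse code s for a (possibly noisy) input x by
    iterative shrinkage-thresholding on 0.5*||x - Phi s||^2 + lam*||s||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - x)
        s = soft_threshold(s - grad / L, lam / L)
    return s

def decode(s, Phi):
    """Eq. (2): linear reconstruction x_hat = Phi s."""
    return Phi @ s

# Toy usage: denoise a signal that is sparse in an assumed random dictionary.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128)) / np.sqrt(64)
s_true = np.zeros(128)
s_true[rng.choice(128, 5, replace=False)] = 1.0
x_clean = Phi @ s_true
x_noisy = x_clean + 0.05 * rng.normal(size=64)      # Gaussian white noise
x_hat = decode(ista_encode(x_noisy, Phi), Phi)
print("reconstruction error:", np.linalg.norm(x_hat - x_clean))
```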
The DAE is formulated as follows:</ns0:p><ns0:formula xml:id='formula_4'>&#119864; &#119863;&#119860;&#119864; = &#8214; &#119909; -&#119909; &#8214; 2 2 (3)</ns0:formula><ns0:p>We define DAE's reconstruction error as a square error between and .</ns0:p></ns0:div> <ns0:div><ns0:head>&#119909; &#119909;</ns0:head><ns0:p>Generally, the cost function can be another form of differentiable error measure, shown as follows: (&#119868; &#119901; &#119899; ),&#119910; &#119899; )} ( <ns0:ref type='formula'>5</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_5'>&#8710; &#8743; &#8733;-&#8706;&#119864; &#119863;&#119860;&#119864; /&#8706; &#8743;<ns0:label>(</ns0:label></ns0:formula><ns0:p>where is a weighting parameter and is a weighted sum of individual losses that together &#120572; &#119871; &#119904;&#119910;&#119899; constrain the image to reside within the desired manifold. Each individual loss function will be explained in the comprehensive loss functions section.</ns0:p><ns0:p>In order to generate the best images, we need a very good generator and discriminator. The reason for this is that if our generator is not good enough, we won't be able to fool the discriminator, resulting in no convergence. A bad discriminator will also classify images that make no sense as real, which means our model never trains, and we never produce the desired output. The image can be generated by sampling values from a Gaussian distribution and feeding them into the generator network. Based on a game-theoretical approach, our objective function is a minimax function. Using the discriminator to maximize the objective function allows us to perform gradient descent on it. The generator tries to minimize its objective function, so we can use gradient descent to compute it. In order to train the network, gradient ascent and descent must be alternated. </ns0:p></ns0:div> <ns0:div><ns0:head>min</ns0:head></ns0:div> <ns0:div><ns0:head>Gradient descent on . &#119866;</ns0:head><ns0:p>Minimax problem allows discriminate to maximize adversarial networks, so that we can perform gradient ascent on these networks; whereas generator tries to minimize adversarial networks, so that we can perform gradient descent on these networks. In practice, Equation ( <ns0:ref type='formula'>6</ns0:ref> objective function, the discriminator and generator have much stronger gradients at the start of the learning process.</ns0:p><ns0:p>When a single discriminating network is trained, the refiner network will tend to overemphasize certain features in order to fool the current discriminator network, resulting in drift and producing artifacts. We can conclude that any local patch sampled from the refined image should have similar statistics to the real image patches. We can therefore define a discriminator network that classifies each local image patch separately rather than defining a global discriminator network. The division limits both the receptive field and the capacity of the discriminator network, as well as providing a large number of samples per image for learning the discriminator network. The discriminator in our implementation is a fully convolutional network that outputs a probabilistic ( represented as a synthesized face image) map instead &#119873; &#215; &#119873; '&#119873; = 2' of one scalar value to distinguish between a ground truth frontal view (GT) and a synthesized frontal view (SF). 
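The alternating gradient ascent/descent scheme and the fully convolutional patch discriminator with an N × N (N = 2) output map can be sketched in PyTorch as follows. The layer widths, the 128 × 128 input resolution, the optimizer-agnostic training step, and the use of the non-saturating generator objective are stated assumptions; the full generator objective in the method additionally includes the synthesis losses described in the next section.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator that outputs a 2x2 probability map
    for a 128x128 input instead of a single scalar value."""
    def __init__(self, in_ch=3):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(nn.Conv2d(ci, co, 4, 2, 1),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.net = nn.Sequential(
            block(in_ch, 64), block(64, 128), block(128, 256),
            block(256, 512), block(512, 512),           # 128 -> 4
            nn.Conv2d(512, 1, 4, 2, 1), nn.Sigmoid())   # 4 -> 2x2 map
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_G, opt_D, profile, frontal, bce=nn.BCELoss()):
    """One alternating update: gradient ascent on D, then descent on G,
    using the non-saturating form in which G maximizes log D(G(I_P))."""
    fake = G(profile)
    # --- discriminator step: real patches -> 1, synthesized patches -> 0 ---
    opt_D.zero_grad()
    d_real, d_fake = D(frontal), D(fake.detach())
    loss_D = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_D.backward()
    opt_D.step()
    # --- generator step (adversarial term only; the method also adds pixel,
    # symmetry, identity, total-variation and classification terms) ---
    opt_G.zero_grad()
    d_fake = D(fake)
    loss_G = bce(d_fake, torch.ones_like(d_fake))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```

Dividing the image into patches in this way limits the receptive field and capacity of the discriminator while giving many training samples per image, which is the motivation stated above.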
Our discriminator loss is defined through the discrepancy between the model distribution and the data distribution, using an adversarial loss (</ns0:p><ns0:p>). By assigning each &#119871; &#119886;&#119889;&#119907; probability value to a particular region, the can now concentrate on a single semantic region &#119863; &#120579; &#119863; rather than the whole face.</ns0:p></ns0:div> <ns0:div><ns0:head>Comprehensive Loss Functions</ns0:head><ns0:p>In addition to the existing TP-GAN synthesis loss functions, which are a weighted sum of four individual loss functions ( ), a classification loss is further &#119871; &#119901;&#119909; ,'&#119871; &#119904;&#119910;&#119898; ,&#119871; &#119894;&#119901; ,&#119871; &#119905;&#119907; and &#119871; &#119901;&#119886;&#119903;&#119905; ' &#119871; &#119888;&#119897;&#119886;&#119904;&#119904;&#119894;&#119891;&#119910; added to our method in order to get better results. Although pixel wise loss may bring some over-smooth effects to the refined results, it is still an essential part for both accelerated optimization and superior performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Symmetry Loss (</ns0:head><ns0:p>) <ns0:ref type='formula'>10</ns0:ref>) is used to calculate the symmetry of the synthesized face image because a face image is &#119871; &#119904;&#119910;&#119898; generally considered to be a symmetrical pattern. (&#119868; &#119875; &#119899; )) (12) is used to make the real frontal-face image and a synthesized frontal face images &#119871; &#119886;&#119889;&#119907; &#119868; &#119865; &#119866; &#120579; &#119866; (&#119868; &#119875; &#119899; )</ns0:p><ns0:formula xml:id='formula_6'>&#119923; &#119956;&#119962;&#119950; &#119871; &#119904;&#119910;&#119898; = 1 &#119882;/2 &#215; &#119867; &#119882;/2 &#8721; &#119909; = 1 &#119867; &#8721; &#119910; = 1 | &#119868; &#119901;&#119903;&#119890;&#119889; (&#119909;,&#119910;) -&#119868; &#119901;&#119903;&#119890;&#119889; (&#119882; -(&#119909; -1),&#119910;)| (</ns0:formula></ns0:div> <ns0:div><ns0:head>Identity Preserving Loss ( )</ns0:head><ns0:formula xml:id='formula_7'>&#119871; &#119894;&#119901; &#119871; &#119894;&#119901; = 2 &#8721; &#119894; = 1 1 &#119882; &#119897; &#215; &#119867; &#119897; &#119882; &#119897; &#8721; &#119909; = 1 &#119867; &#119897; &#8721; &#119910; = 1 |&#119865; &#119892;&#119905; &#119897; (&#119909;,</ns0:formula><ns0:p>indistinguishable, so that the synthesized frontal face image achieves a visually pleasing effect.</ns0:p><ns0:p>Total Variation Loss (&#119923; &#119957;&#119959; )</ns0:p><ns0:p>Generally, the face images synthesized by two pathways generative adversarial networks have unfavorable visual artifacts, which deteriorates the visualization and recognition performance. Imposing on the final synthesized face images can help to alleviate this issue. The loss is &#119871; &#119905;&#119907; &#119871; &#119905;&#119907; calculated as follows:</ns0:p><ns0:formula xml:id='formula_8'>&#119871; &#119905;&#119907; = &#119882; &#8721; &#119909; = 1 &#119867; &#8721; &#119910; = 1 |&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -&#119868; &#119901;&#119903;&#119890;&#119889; &#119909; -1,&#119910; | + |&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -1 | (13)</ns0:formula><ns0:p>will generate a smooth synthesized face image. 
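As a minimal sketch of how the symmetry loss of Equation (10) and the total-variation loss of Equation (13) can be evaluated on a batch of synthesized images of shape B × C × H × W, assuming a mean reduction over the batch and channels rather than the per-image sums written above (which only rescales the loss weight):

```python
import torch

def symmetry_loss(pred):
    """Eq. (10): mean absolute difference between the left half of the
    synthesized image and the mirrored right half. pred: B x C x H x W."""
    w = pred.shape[-1]
    left = pred[..., : w // 2]
    right_mirrored = torch.flip(pred, dims=[-1])[..., : w // 2]
    return (left - right_mirrored).abs().mean()

def total_variation_loss(pred):
    """Eq. (13): absolute differences between vertically and horizontally
    neighbouring pixels, encouraging a smooth synthesized face."""
    dh = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    dw = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
    return dh + dw

# Example on a random batch standing in for 4 synthesized 128x128 RGB faces.
fake = torch.rand(4, 3, 128, 128)
print(symmetry_loss(fake).item(), total_variation_loss(fake).item())
```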
&#119871; &#119905;&#119907; Classification Loss (&#119923; &#119940;&#119949;&#119938;&#119956;&#119956;&#119946;&#119943;&#119962; )</ns0:p><ns0:formula xml:id='formula_9'>&#119871; &#119888;&#119897;&#119886;&#119904;&#119904;&#119894;&#119891;&#119910; =-&#8721; &#119894; &#119910; &#119905;&#119903;&#119906;&#119890; &#119894; log 2 (&#119910; &#119901;&#119903;&#119890;&#119889; &#119894; ) (14)</ns0:formula><ns0:p>where is the class index, denotes the tensor of the one-hot true target of I, and is the &#119894; &#119910; &#119905;&#119903;&#119906;&#119890; &#119910; &#119901;&#119903;&#119890;&#119889; predicted probability tensor. is a cross-entropy loss which is used to ensure the &#119871; &#119888;&#119897;&#119886;&#119904;&#119904;&#119894;&#119891;&#119910; synthesized frontal-face image can be classified correctly.</ns0:p><ns0:p>Local Pixel-Wise Loss (&#119923; &#119953;&#119938;&#119955;&#119957; )</ns0:p><ns0:formula xml:id='formula_10'>&#119871; &#119901;&#119886;&#119903;&#119905; = 1 &#119882; &#215; &#119867; &#119882; &#8721; &#119909; = 1 &#119867; &#8721; &#119910; = 1 | &#119868; &#119892;&#119905; &#119897;&#119900;&#119888;&#119886;&#119897; (&#119909;,&#119910;) -&#119868; &#119901;&#119903;&#119890;&#119889; &#119897;&#119900;&#119888;&#119886;&#119897; (&#119909;,&#119910;)| (15)</ns0:formula><ns0:p>This loss is to calculate the total average pixel difference between and , and it is not &#119868; &#119892;&#119905; &#119897;&#119900;&#119888;&#119886;&#119897; &#119868; &#119901;&#119903;&#119890;&#119889; &#119897;&#119900;&#119888;&#119886;&#119897; addressed in <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Generator Loss Functions (&#119923; &#119944;&#119942;&#119951;&#119942;&#119955;&#119938;&#119957;&#119952;&#119955; )</ns0:p><ns0:p>The generator loss function of the proposed method is a weighted sum of all the losses defined above:</ns0:p><ns0:formula xml:id='formula_11'>&#119871; &#119892;&#119890;&#119899;&#119890;&#119903;&#119886;&#119905;&#119900;&#119903; = &#119871; &#119901;&#119909; + &#120582; 1 &#119871; &#119904;&#119910;&#119898; + &#120582; 2 &#119871; &#119894;&#119901; + &#120582; 3 &#119871; &#119905;&#119907; + &#120582; 4 &#119871; &#119888;&#119897;&#119886;&#119904;&#119904;&#119910;&#119894;&#119891;&#119910; + &#120582; 5 &#119871; &#119901;&#119886;&#119903;&#119905; (16)</ns0:formula><ns0:p>where are weights that coordinate the different losses, and they are set to be &#120582; &#119894; (&#119894; = 1~5) , , , , in our experiments. &#120582; 1 = 0.1 &#120582; 2 = 0.001 &#120582; 3 = 0.0001 &#120582; 4 = 0.1 and &#120582; 5 = 0.3</ns0:p></ns0:div> <ns0:div><ns0:head>Optimizing the Training Process</ns0:head><ns0:p>In order to optimize the training process, we propose some modifications to the TP-GAN parameters as shown in Table <ns0:ref type='table'>2</ns0:ref> in order to improve the performance in learning the frontal-face data distribution. In our experiment, we consider few parameters: learning rate, batch size, number of epochs, and loss functions. We chose those parameters based on our experience, knowledge and observations. The adopted learning rate improve the module loss accuracy for both and . The batch size is the number of examples from the training dataset used in the &#119866; &#119863; estimation of the error gradient. This parameter determines how the learning algorithm will behave. 
We have found that using a larger batch size has adversely affected our method performance. As a result, during initial training the discriminator may be overwhelmed by too many examples. This will lead to poor training performance. The number of training epochs is a key advantage of machine learning. As the number of epochs increases, the performance will be improved and the outcomes will be astounding. However, the disadvantage is that it takes a long time to train a large number of epochs. These parameters are essential to improve LFMTP-GAN's representation learning, gaining high-precision performance and reducing visual artifacts when synthesising frontal-face images.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head><ns0:p>We conducted extensive experiments to verify the effectiveness of our method by comparing it with the TP-GAN. The evaluation protocol includes frontal face image resolution and accuracy preserving face identity.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Settings</ns0:head><ns0:p>Both LFMTP-GAN and TP-GAN models are tested and trained on the Multi-PIE and FEI datasets. Multi-PIE <ns0:ref type='bibr' target='#b62'>(Ralph et al., 2010)</ns0:ref> Manuscript to be reviewed Computer Science flash illumination, followed by an image with each flash firing independently. Figure <ns0:ref type='figure'>5</ns0:ref> shows 18 an example of our model results. 'The intensity of light is determined by the brightness of the flash and the background. For example, a bright or dark flash, shadow reflection, or a white or blue background will affect intensity. This depends on the recording equipment and the positioning.' To minimize the number of saturated pixels in flash illuminated images, all cameras have been set to have a pixel value of for the brightest pixel in an image without 128 flash illumination. In the same way, the diffusers in front of each flash were added. The color balance was also manually adjusted so that the images looked similar. FEI is a Brazilian unlabeled face dataset with images for identities across a variety of different poses 2800 + 200 captured in a constrained environment. The face images were taken between 'June 2005 and March 2006' at the artificial intelligence laboratory at s&#227;o bernardo do campo, s&#227;o paulo, brazil. The FEI images were taken against a white homogenous background in an upright frontal position with a range of profile poses; and different illuminations, distinct appearances &#177; 90 0 and hairstyles were included for each subject. Our method shares the same implementation concept as TP-GAN but totally has different parameters settings. The training lasts for 10-to-18 days in each system for each dataset. The training model and source code will be released after this article is accepted.</ns0:p></ns0:div> <ns0:div><ns0:head>Visual Quality</ns0:head><ns0:p>In this subsection, we compare LFMTP-GAN with TP-GAN. Figure <ns0:ref type='figure'>6</ns0:ref> and Fig <ns0:ref type='figure'>7</ns0:ref> shows the comparison images, where the first column is the profile-face images under different face poses, the second column is the synthesized frontal-face images by TP-GAN, the third column is the synthesized frontal-face images by LFMTP-GAN, and the last column is one randomly selected frontal-face image of the category corresponding to the profile-face image. The yaw angle of the input face image are chosen circularly from . 
Obviously, the (15 0 ,30 0 ,45 0 ,60 0 ,75 0 and 90 0 ) resolution of the LFMTP-GAN images looks better than TP-GAN. This reveals that the use of a proper landmark localization algorithm could significantly improve the image quality of the 2D synthesized images, provide rich textural details, and contain fewer blur effects. Therefore, the visualization results shown in Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Identity Preserving</ns0:head><ns0:p>To quantitatively demonstrate the identity preserving ability of the proposed method, we evaluate the classification accuracy of synthesized frontal-face images on both Multi-PIE and FEI databases, and show their classification accuracy (%) in Table <ns0:ref type='table'>3</ns0:ref> across views and illuminations. The experiments were conducted by first employing light-CNN to extract deep features and then using the cosine-distance metric to compute the similarity of these features. The light-CNN model was trained on MS- <ns0:ref type='bibr'>Celeb-1M (Microsoft Celeb, 2016)</ns0:ref> which is a largescale face dataset, and fine-tuned on the images from Multi-PIE and FEI. Therefore, the light-CNN results on the profile images serves as our baseline. Our method produces better results</ns0:p></ns0:div> <ns0:div><ns0:head>&#119868; &#119875;</ns0:head><ns0:p>than TP-GAN as the pose angle is increased. Our approach has shown improvements in frontalfrontal face recognition. Moreover, Table <ns0:ref type='table'>4</ns0:ref> illustrates that our method is superior to many existing state-of-the-art approaches.</ns0:p><ns0:p>Many deep learning methods have been proposed for frontal view synthesis, but none of them have been proved to be sufficient for recognition tasks. <ns0:ref type='bibr' target='#b1'>Chao et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b10'>Yibo et al., 2018</ns0:ref> relied on direct methods such as CNN for face recognition, which will definitely reduce rather than improve performance. It is therefore important to verify whether our synthesis results can improve recognition performance (whether 'recognition via generation' works) or not. The next section presents the loss curves for the two models.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Loss Curve Performance</ns0:head><ns0:p>This section provides a comparison with TP-GAN. We analyze the effects of our model on three tradeoff parameters named generator loss, pixel-wise loss, and identity preserving accuracy. 80% of the face image subjects from the Multi PIE and FEI datasets were used for training and evolution purposes. 90% of the image subjects were used for training, while 10% were used for testing. The recognition accuracy and corresponding loss curves are shown in Fig 10 <ns0:ref type='figure'>.</ns0:ref> We can clearly see from the curves that, the proposed method improves the TP-GAN model and provides a much better performance on both datasets. In particular, the number of epochs exceeds 4500, the loss performance decreased sharply in LFMTP-GAN model, while the loss performance decreased slightly in the TP-GAN model. Our optimization learning curves was calculated according to the metric by which the parameters of the model were optimized, i.e., loss. 
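The identity-preserving evaluation described above extracts deep features with a light-CNN and compares them with a cosine-distance metric. The following NumPy sketch shows only the rank-1 matching step; random vectors stand in for the light-CNN features, which are assumed to be available from a separate feature extractor.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification: each synthesized-frontal probe feature is
    matched to the gallery identity with the highest cosine similarity."""
    correct = 0
    for feat, true_id in zip(probe_feats, probe_ids):
        sims = [cosine_similarity(feat, g) for g in gallery_feats]
        if gallery_ids[int(np.argmax(sims))] == true_id:
            correct += 1
    return correct / len(probe_ids)

# Toy usage with random 256-D vectors standing in for light-CNN features.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(10, 256))
gallery_ids = list(range(10))
probes = gallery + 0.1 * rng.normal(size=gallery.shape)   # perturbed copies
print(rank1_accuracy(probes, gallery_ids, gallery, gallery_ids))
```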
More importantly, our method still produces visually convincing results (as shown in <ns0:ref type='bibr'>Fig 6,</ns0:ref><ns0:ref type='bibr'>Fig 7 and Fig 9)</ns0:ref> even under extreme face poses, its recognition performance is about higher than that 1.2% of TP-GAN.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussions</ns0:head><ns0:p>The goal of this method is to match the appearance of each query face by marking the partially face surface of the generated image, such as the eyes, nose and mouth. In theory, this would have allowed the TP-GAN method to better preserve facial appearance in the updated, synthesized views. The statement holds true when the face is considered to have similar shape characteristics.</ns0:p><ns0:p>Considering that all human faces have unique shape characteristics, this may actually be counterproductive and harmful, rather than improving face recognition. We believe that it is necessary to integrate the four important local areas (eyes, nose and mouth) into their right positions on the whole face image, which is already proved to be correct from our experiment results. A majority of face recognition systems require a complete image to be recognized, however, recovering the entire image is difficult when parts of the face are missing. This makes it difficult to achieve good performance. We demonstrate how our approach can help enhance face recognition by focusing on these areas of the face and outperform other methods in the same context. Landmark detection is widely used in a variety of applications including object detection, texture classification, image retrieval and etc. TP-GAN already has landmark detection implemented for detecting the four face patches in its early stages as shown in Fig 1 <ns0:ref type='figure'>.</ns0:ref> Such a method might be valuable in obtaining further textual details that can help in recovering those facial areas and face shapes that have different characteristics. In our proposal, we offer a relatively simple yet effective method for restoring the texture details of a synthesised face image by repositioning the appearance areas of four landmark patches rather than the entire face. We show a different method for facial recognition (Table <ns0:ref type='table'>4</ns0:ref>) for faces with extreme poses. In four major poses we achieve rank-1 recognition rates (75%, 60%, 45%, and 30%). Furthermore, when it comes to global , we found that optimizing for the corrupted images resulted in a {&#119866; Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>optimization algorithms, such as loss functions, or introducing some different techniques for facial analysis and recognition. Our future research is to apply different error functions or different face analysis and recognition techniques, combined with two pathway structures, to achieve a super-resolution generative model and high-precision performance.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>The structure of TP-GAN. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:p>The TP-GAN and LFMTP-GAN loss curve plots.</ns0:p><ns0:p>Where A is the generator loss curve, and B is the pixel-wise loss curve. The horizontal axis indicates the number of epochs, which is the number of times that entire training data has been trained. The vertical axis indicates how well the model performed after each epoch; the lower the loss, the better a model. 
C is the identity-preserving accuracy curve, a quality metric that measures how accurately the model preserves a subject's identity; the higher the accuracy, the better the model. Layers 5, 6, and 7 act as a visual feature map for specific frontal-face inputs in order to retain more visual information through the subsampling layer structure.

Figure 4 note: as shown in Fig 4, we construct a noisy input by adding Gaussian white noise (GWN) to the original input profile images, as given by \tilde{x} = x + v, where v \sim N(0, \sigma^2 I). Here, I is an N \times N identity matrix, N is the size of the input data (the batch output of the generator), and \sigma^2 is the noise variance.

Equation (7), the discriminator objective:

\max_{\theta_D} \; \mathbb{E}_{I^F \sim P(I^F)} [\log D_{\theta_D}(I^F)] + \mathbb{E}_{I^P \sim P(I^P)} [\log (1 - D_{\theta_D}(G_{\theta_G}(I^P)))]   (7)

Fragment of Equation (9), the pixel-wise loss: \sum_x \sum_y | I_{gt}(x, y) - I_{pred}(x, y) |   (9), where I_{gt} is the representative frontal-face image of the ground-truth category of I, I^{local}_{gt} is the composition image of the four local facial patches of I_{gt}, and I^{local}_{pred} is the composition image of the four local facial patches of I_{pred}.

Fragment of Equation (11), the identity-preserving loss: | F^{l}_{gt}(x, y) - F^{l}_{pred}(x, y) |   (11), where F^{l}_{gt} and F^{l}_{pred} (l = 1, 2) respectively denote the feature maps of the last two layers of the light-CNN net. It is expected that a good synthesized frontal-face image will have similar characteristics to its corresponding real frontal-face image. We employ a fully connected layer of the pre-trained light-CNN net for feature extraction of the pre-trained recognition network. The pre-trained model leverages this loss to enforce identity-preserving frontal view synthesis.

Adversarial Loss (L_adv)

Fig 6 and Fig 7 demonstrate the effectiveness of our method across a variety of poses and datasets. Similarly, Fig 8 shows the results of these datasets in close-up images with a 90° facial pose. Previous frontal view synthesis methods are usually based on a posture range of ±60°; it is generally believed that if the pose is greater than 60°, it is difficult to reconstruct the frontal view.
Nonetheless, we will show that with enough training data and a properly designed loss function, this is achievable. In Fig 9 and Fig 10 we show that LFMTP-GAN can recover identity-preserving frontal faces from any pose, and that it performs better than state-of-the-art face frontalization methods. In addition, our geometry estimation method does not require 3D geometry knowledge because it is driven by data alone.

...than optimizing for the clean images, in order to achieve a balanced learning behavior. Therefore, we neither extensively modify the global pathway networks nor include an additional generator component in our modification, because that integration would increase the network complexity and cause training limitations. We obtained better results by introducing noise to the image before feeding it to the network during optimization. In this work, we propose an LFM model for synthesizing a frontal-face image from a single image to further enhance the frontal-face image quality of the TP-GAN model. To accomplish our goal smoothly, we expand the existing generative global pathway with a well-constructed 2D face landmark localization that cooperates with the local pathway structure in a landmark-sharing manner to incorporate empirical face pose into the learning process, and we improve the encoder-decoder global pathway structure for a better representation of facial image features. Compared with TP-GAN, our method can generate frontal images with rich texture details and preserve the identity information. Face landmark localization allows us to restore the missing information of the real face image in the synthetic frontal images and provides rich texture detail. The quantitative and qualitative experimental results on the Multi-PIE and FEI datasets show that our proposed method can not only generate high-quality perceptual facial images under extreme poses but also significantly improves on the TP-GAN results. Although the LFMTP-GAN method achieves a high-quality image resolution output, there is still room for improvement by choosing different...

The final output was obtained by integrating the global pathway with a 2D facial landmark localization that collaborates with the local pathway in a landmark-sharing fashion.

Figure 8.

1-Detection: the goal of this step is to identify faces that are generated by the local and global pathways. 2-Facial landmarks such as the eye centers, the tip of the nose, and the mouth are located.
3-The feature extractor encodes identity information into a high-dimensional representation. 4-A data augmentation technique is used to enhance the texture detail of the synthesized image by adding slightly modified copies of already existing data or by creating new synthetic data based on existing data.

The TP-GAN framework architecture consists of two stages. The first stage is a generator G_{\theta_G} of a two-pathway CNN parameterized by \theta_G. Each pathway has an encoder-decoder structure {G_{\theta_E}, G_{\theta_D}} and a combination of loss functions: a local pathway {G_{\theta_E}^{l}, G_{\theta_D}^{l}} of four landmark patch networks G_{\theta_l}^{i}, i \in {0, 1, 2, 3}, which captures the local texture around the four facial landmarks, and one global network {G_{\theta_E}^{g}, G_{\theta_D}^{g}}, which processes the global face structure. Furthermore, the bottleneck layer (G_g), which is the output of the global encoder G_{\theta_E}^{g}, is typically used for classification tasks with the cross-entropy loss L_{cross-entropy}. A global pathway helps ...

Then, a 2D zero-padding technique is used to fill the rows and columns around the template landmark location with zeros. Nevertheless, local landmark detection alone cannot provide accurate texture detail for a face that has a different shape, or multiple views, because all generated local synthesis faces share the same fixed patch (or template) centralized location, regardless of their shape characteristics.

In frontal view synthesis, the aim is to generate a photorealistic and identity-preserving frontal view image (I_F) from a face image under a different pose, i.e., a profile image (I_P).
During the training phase of such networks, pairs of corresponding (&#119868; &#119875; ) {&#119868; &#119865; , &#119868; &#119875; } from multiple identities are required. Input and output are both based on a pixel space of &#119910; &#119868; &#119875; &#119868; &#119865; size with a color channel . We aim to learn a synthesis function that can output a &#119882; &#215; &#119867; &#215; &#119862; &#119862;</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='7'>frontal view when given a profile image. This section will be omitted since it was already</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>explained in TP-GAN architecture. Optimizing the network parameters</ns0:cell><ns0:cell>(&#119866; &#120579; &#119866; )</ns0:cell><ns0:cell>starts with</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>minimizing the specifically designed synthesis loss . For a training set with training pairs of , the optimization problem is expressed as and the aforementioned (&#119871; &#119904;&#119910;&#119899; ) &#119871; &#119888;&#119903;&#119900;&#119904;&#119904; -&#119890;&#119899;&#119905;&#119903;&#119900;&#119901;&#119910; &#119873; {&#119868; &#119865; &#119899; , &#119868; &#119875; &#119899; }</ns0:cell></ns0:row><ns0:row><ns0:cell>follows:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>4) differentiate DAE's reconstruction error with respect to weights, and is the change in weights, where &#8706;&#119864; &#119863;&#119860;&#119864; /&#8706; &#8743; &#10230; can then be learned by doing gradient descent on the DAE's cost &#8710; &#8743; &#8743; &#120579; &#119866; = 1 &#119873; argmin &#120579; &#119866; &#119873; &#8721; &#119899; = 1 {&#119871; &#119904;&#119910;&#119899; (&#119866; &#120579; &#119866; (&#119868; &#119875; &#119899; ), &#119868; &#119865; &#119899; ) + &#120572;&#119871; &#119888;&#119903;&#119900;&#119904;&#119904; -&#119890;&#119899;&#119905;&#119903;&#119900;&#119901;&#119910; (&#119866; &#120579; &#119892; &#119864;</ns0:cell></ns0:row><ns0:row><ns0:cell>function.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>Essentially, the . is the encoder, &#119866; &#120579; &#119892; &#119864; &#8477; &#119872; &#119891; &#119890; &#8477; &#119872; encodes input data is the set of database, and is learning parameters of the DAE into a hidden representation: &#119909; &#8712; &#8477; &#119873; &#8462; = &#119891; &#119890; (&#119909;;&#120579;) &#8712; &#120579;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>function. Then, it decodes</ns0:cell><ns0:cell>&#119866; &#120579; &#119863; &#119892;</ns0:cell><ns0:cell cols='4'>the hidden representation into a reconstruction of the input data:</ns0:cell><ns0:cell>&#119909;</ns0:cell></ns0:row><ns0:row><ns0:cell>= &#119891; &#119889; (&#8462;;&#120579;)</ns0:cell><ns0:cell cols='6'>. The objective for learning the parameter of an &#120579;</ns0:cell><ns0:cell>{&#119866; &#120579; &#119864; &#119892; , &#119866; &#120579; &#119863; &#119892;</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell>is to minimize the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>reconstruction error between and . Usually, there is some constraint on &#119909; &#119909;</ns0:cell><ns0:cell>{&#119866; &#120579; &#119864; &#119892; , &#119866; &#120579; &#119863; &#119892;</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell>to prevent</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>it from learning an identity transformation. 
For example, if the dimension of h is much smaller than the input data's dimension (i.e., M \ll N), then {G_{\theta_E}^{g}, G_{\theta_D}^{g}} will function similarly to principal component analysis (PCA). If the hidden representation is constrained so that \|h\|_1 is small, then G_{\theta_E}^{g} functions similarly to sparse coding. The DAE tries to remove noise from the input data. Let \tilde{x} = x + v be a noisy input obtained by adding v to an original input x. The DAE takes \tilde{x} as input and then outputs a denoised signal \hat{x} = f_d(f_e(\tilde{x}; \theta); \theta). The objective of such a technique is to minimize the error between \hat{x} and x by adjusting \theta, i.e., it tries to reconstruct the actual content well while not reconstructing the noise. A DAE can also be viewed as a generative model.

Adversarial Networks

Following the work of (Ian et al., 2014), the adversarial network consists of two components, a generator (G) and a discriminator (D). 'The loss function reflects the difference in distribution between the generated and the original data.' We will first review some technical aspects of the training process, and then the adversarial network itself.

Equation (6) might not provide enough gradients for G to learn well. During the early stages of learning, when G is poor, D can reject samples with a high degree of confidence, since they are clearly different from the training data; in this case, log(1 - D_{\theta_D}(G_{\theta_G}(I_P))) saturates. As an alternative to training G to minimize log(1 - D_{\theta_D}(G_{\theta_G}(I_P))), we can train G to maximize log D_{\theta_D}(G_{\theta_G}(I_P)).

Each individual loss function is presented below, and the symbols used are defined here. Let I be an output image.
&#119868;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Let and be the width and height of , &#119882; &#119867; &#119868; (&#119909;,&#119910;)</ns0:cell><ns0:cell>be a pixel coordinate of a 2D image,</ns0:cell><ns0:cell>&#119868; &#119901;&#119903;&#119890;&#119889;</ns0:cell><ns0:cell>be the</ns0:cell></ns0:row></ns0:table><ns0:note>predicted (i.e., synthesized) frontal-face image of , be the representative frontal-face image &#119868; &#119868; &#119892;&#119905;</ns0:note></ns0:figure> </ns0:body> "
"Dear editor / reviewers, Thank you for giving us the opportunity to submit a revised draft of the manuscript '2D Facial Landmark Localization Method for Multi-view Face Synthesis Image Using a Two Pathway Generative Adversarial Network Approach ' to PeerJ journal. We are grateful for the time and effort put forth by the editor and reviewers in providing feedback on our manuscript, as well as their insightful comments. Most of the suggestions have been incorporated into our manuscript. • Literature is not well presented in the manuscript and recent studies on 2021 are lacking. • Thank you for bringing this to our attention. Therefore, we have added and modified several parts of the original text. By making these changes, we ensure that the information is relevant to our own work and presented in a more understandable way. We have incorporated the latest three relevant references (Bassel, Ilya, and Yuri, 2021; Chenxu et al., 2021; Yi et al., 2021) into our manuscript. This revised manuscript contains changes in lines (31-60, 66, 70,71, 74,75, 78-80, 88-92, 110-120, 129-133, 142 and 143). • The proposed architecture is still missing some technical details. For instance, the architectures of generator and discriminator. Same for the training process. • This observation is technically correct. An additional paragraph was added to describe some detail technical aspects of the training process, and it is shown in lines (399-415). • The baseline performance of the proposed architecture should be explored. • We appreciate the contributor's insightful suggestion, and agree that establishing baseline performance for the architecture should be explored / investigated. In the revised manuscript, we have included the latest work related to our method 'landmark detection'. In this part, we have discussed how these methods can be utilized to solve a particular problem (face poses). Lines (260-273) have been updated with additional information. • Some abbreviations are not properly used, e.g. 'LFMTP-GAN' and 'DAE' are defined twice. • Your highlighting of this abbreviation error is much appreciated. Such errors have been corrected so that both 'LFMTP-GAN' and 'DAE' are defined only once. --------------------------------------------------------------------------------------------------------------------- Reviewer: Mehak Memon Basic reporting • Clarity in the overall structure of manuscript sentences. • Thank you for the highlighted comments, which will help us improve this work. Therefore, we have added and modified several parts of the original text. With these changes, we ensure that the overall clarity structure of manuscript sentences is more understandable. • Literary study can be further extended as there are no studies referred for the year 2021. • We appreciate the highlighted comments, which will assist me in improving this work. We have, therefore, made a number of changes to the original text. In doing this, we ensure the information we present is relevant to our own work and easy to follow. Our manuscript has been amended to include two or three references (Bassel, Ilya & Yuri, 2021; Chenxu et al., 2021; Yi et al.,2021) that present more recent methods similar to our own. line (88-91) contain the changes in the revised manuscript. • Sufficiently detailed tables and figures which is a plus point. • Thank you for your time and feedback. The figures and tables have been updated with more detail, including Table 1, Table 4, Figure 5 and Figure 9. 
• Abstract mentions 'improved the encoder-decoder global pathway structure'; however, the abstract fails to mention the details of this improvement in terms of accuracy or operational workflow. • Thanks for your time and comments to help us improve this work. We have revised the section you mentioned and highlighted the importance of this improvement in terms of accuracy or organizational workflow, which helps to improve the results. Our statement can be reviewed “and improve the encoder-decoder global pathway structure for better representation of facial image features by establishing robust feature extractors that select meaningful features that ease the operational workflow toward achieving a balanced learning strategy,”. Due to the lack of a direct review for these changes, we include them here as well as in our updated manuscript. • Authors mention throughout the manuscript the need for the global structure; however, the reader can still do not get the idea of its importance and impact on the overall verification process. • We thank you for bringing this to our attention. The importance and impact of the global pathway on the overall verification process have been discussed. An overview of the global pathway impact is presented here in a short statement “With Global pathways, facial features are integrated and transformed, so face identity can be maintained, especially for faces with larger pose angles.” Our work in lines (241-244) has been updated with more details. • Line 459 mentions 20 illumination levels for the image dataset. Mention the details of these levels. • Your emphasis on the subject to provide more details is much appreciated. Description of these levels have been incorporated according to both the dataset website “ https://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html “ and a publication paper “Ralph G M, Jeffrey C, Takeo K, Simon B. 2010. Multi-PIE”. Please see lines (572-282) for more details. Experimental design • Line 5,6,7 mentions there is a drop of 10% of recognition. Can authors present and validate this point through focused experiments and also show the gain of the proposed methods over the same set of images (for frontal-profile to frontal-frontal face verification). • Thanks for pointing this out. We have updated the original statement in the first manuscript. In lines (52-57), we have outlined some key points and clarified them, and here we have added a brief explanation and highlighted a few key points. “In the authors' statement, a ‘near profile’ pose obscures multiple features, especially the second eye. This corresponds roughly to a yaw wider than 60 degrees. While ‘near frontal’ refers to those types of faces where both sides are almost equally visible and the yaw is within 10 degrees of purely frontal. According to the author, the above conclusion was drawn after analyzing various face recognition algorithms and human responses to this experiment to measure the difficulty involved.” Validity of the findings • The results reported are promising. It would be great if the authors could add some more experiments to show the effect of 20 different illumination levels as for angles presented in Table 3. • We appreciate your time and comments to help us improve this work. In figure 5, we have provided more experimental results including the results of lighting conditions and background changes. These details have also been mentioned in our comments as well as in our work (see the section of experimental settings). 
Note: Table 3 has been changed to Table 4. Additional comments • Related work should be summarized and a small paragraph should be added at the end of the section to discuss the key takeaway of literary study to facilitate the reader. • Thanks again for bringing this point to our attention. A few key points were summarized for the reader's benefit. Those key points provide an overview of the most commonly used deep learning methods / algorithms for face landmark detection based on specific poses. In line (221-227), we have added more details to the original text. • Proposed method should be presented in one concrete table with all the operational steps involved (all in one place). • Thank you for emphasizing the need for more details. We have merged the relevant information that provides the steps of our method and the related steps leading to a successful implementation. Table 1 outlines these steps. Overall the research is well-defined and has a high potential to fill the identified research gap. --------------------------------------------------------------------------------------------------------------------- Reviewer: Khanh Le Basic reporting • English language should be improved. There still have some grammatical errors, typos, or ambiguous parts. • Thank you very much for taking the time to review this manuscript. Through this suggestion we have been able to improve our work. As part of improving the quality of overall content, we have cleaned up the structure more thoroughly, polished the language, and corrected grammar errors where-ever they appeared. • Introduction' is verbose, it could be revised to make it more concise. • We greatly appreciate the time you took to review this manuscript. This suggestion helped us to improve our work. Several corrections have been made to the introduction section as pointed out by the reviewer. • Quality of figures should be improved significantly. Some figures had a low resolution. • Thank you again for drawing our attention to this issue. We have reproduced all figures in order to ensure the quality meets the journal standard, as well as to improve the quality of the work. These figures have not changed in any way except for the quality. Experimental design • Source codes should be provided for replicating the methods. • Your valuable comments are once again appreciated. In fact, the source code of this work has been provided in the first submission and its link is https://github.com/MahmoodHB/LFMTP-GAN • Statistical tests should be conducted in the comparison to see the significance of the results. • Thank you for your insightful suggestion, and we agree that statistical tests should be conducted. The success of face recognition depends largely on the features used to represent the face pattern and the classification methods used to distinguish between faces; face localization and normalization are the foundations for extracting useful features. We compared and presented the recognition rates for each facial pose. Recognition rates are a good indicator of a model's performance. Table 4 compares our results with many existing face recognition algorithms. • The authors should describe in more detail on the hyper-parameter optimization of the models. • We greatly appreciate your thoughtful suggestion. The original text has been updated with new information that emphasizes the importance of hyper-parameters. It appears in lines (548-558). • Deep learning is well-known and has been used in previous studies i.e., PMID: 31920706, PMID: 32613242. 
Thus, the authors are suggested to refer to more works in this description to attract a broader readership. • Thank you for your thoughtful suggestion. Throughout our work, we have included many valuable contributions that have been relevant to the issue we are dealing with. We have incorporated three references (Bassel, Ilya & Yuri, 2021; Chenxu et al., 2021; Yi et al.,2021) to our manuscript which contain the most up-to-date related methods. Likewise, we have considered your suggestion and reviewed the two papers you provided. • Cross-validation should be conducted instead of train/ val split. • Thank you again for bringing this suggestion to our attention. The Table provided in the first submission has been revised and updated with cross validation. Each model is cross-validated using Multi-PIE and FEI datasets. Each model has five different settings. In each setting, 80% of the dataset was used for training and 20% for testing. The model runs for 70+ epochs in two completely independent systems at each setting. Details can be found in Table 3. Validity of the findings • The authors should compare the performance results to previously published works on the same problem/dataset. • I would like to thank you again for bringing this suggestion to our attention. There's a new figure that provides more details regarding the comparison with different methods in the revised manuscript. Figure 9 and Table 4 provide further details. • It lacks a lot of discussions on the results. • Thank you for taking the time and providing comments to help us improve this work. We have added a new section to discuss the results. The new section is shown in lines (649-676). • The model contained a little bit overfitting, how to explain this case? • We are very grateful that the reviewer took the time to comment on this work and help us improve it. Overfitting can be detected by looking at validation metrics, like loss or accuracy. The proposed method performs very well in terms of loss performance as shown in figure 10, which also illustrates the model's ability to identify human faces and maintain identity. Our model continues to improve, as discussed in (Model Loss Curve Performance). The resolution of the images might cause them to appear over-fitted, therefore, all facial images have been reproduced to meet the quality standard and the journal policy, and to improve the quality of the work. --------------------------------------------------------------------------------------------------------------------- New references: • Bassel Z, Ilya K, Yuri M. 2021. PFA-GAN: Pose Face Augmentation Based on Generative Adversarial Network. Vilnius University, vol. 32, no. 2: 425 – 440 DOI: https://doi.org/10.15388/21-INFOR443 • Chenxu Z, Saifeng N, Zhipeng F, Hongbo L, Ming Z, Madhukar B, Xiaohu G. 2021. 3D Talking Face with Personalized Pose Dynamics. In: IEEE Transactions on Visualization and Computer Graphics. 1 – 25 DOI: 10.1109/TVCG.2021.3117484. • Yi Z, Keren F, Cong H, Peng C. 2021. Identity-and-pose-guided Generative Adversarial Network for Face Rotation. 33 – 47 DOI: 10.1016/J.NEUCOM.2021.04.007. • Peipei L, Xiang W, Yibo H, Ran H, Zhenan S. 2019. 
M2fpa: A multi-yaw Multi-Pitch High-Quality Database and Benchmark for Facial Pose Analysis: 1 – 9 Available at https://arxiv.org/abs/1904.00168 --------------------------------------------------------------------------------------------------------------------- Note: Additionally, we have added further information about our work in a separate document called the appendix. The document contains some tables that relate to our work. "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>One of the key challenges in facial recognition is multi-view face synthesis from a single face image. The existing generative adversarial network (GAN) deep learning methods have been proven to be effective in performing facial recognition with a set of preprocessing, post-processing and feature representation techniques to bring a frontal view into the same position in-order to achieve highaccuracy face identification. However, these methods still perform relatively weak in generating high quality frontal-face image samples under extreme face pose scenarios. The novel framework architecture of the twopathway generative adversarial network (TP-GAN), has made commendable progress in the face synthesis model, making it possible to perceive global structure and local details in an unsupervised manner. More importantly, the TP -GAN solves the problems of photorealistic frontal view synthesis by relying on texture details of the landmark detection and synthesis function, which limits its ability to achieve the desired performance in generating high-quality frontal face image samples under extreme pose. We propose, in this paper, a landmark feature-based method (LFM) for robust pose-invariant facial recognition, which aims to improve image resolution quality of the generated frontal faces under a variety of facial poses. We therefore augment the existing TP-GAN generative global pathway with a well-constructed 2D face landmark localization to cooperate with the local pathway structure in a landmark sharing manner to incorporate empirical face pose into the learning process, and improve the encoder-decoder global pathway structure for better representation of facial image features by establishing robust feature extractors that select meaningful features that ease the operational workflow toward achieving a balanced learning strategy, thus significantly improving the photorealistic face image resolution. We verify the effectiveness of our proposed method on both Multi-PIE and FEI datasets. The quantitative and qualitative experimental results show that our proposed</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Face recognition is one of the most commonly used biometric systems for identifying individuals and objects on digital media platforms. Due to changes in posture, illumination, and occlusion, face recognition faces multiple challenges. The challenge of posture changes comes into play when the entire face cannot be seen in an image. Normally, this situation may happen when a person is not facing the camera during surveillance and photo tagging. In order to overcome these difficulties, several promising face recognition algorithms based on deep learning have been developed, including generative adversarial networks (GANs). These methods have been shown to work more efficiently and accurately than humans at detection and recognition tasks. In such methods, pre-processing, post-processing, and multitask learning or feature representation techniques are combined to provide high accuracy results on a wide range of benchmark data sets <ns0:ref type='bibr' target='#b0'>(Junho et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b1'>Chao et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b2'>Xi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Jian et al., 2018)</ns0:ref>. 
The main hurdle to these methods is multi-view face synthesis from a single face image <ns0:ref type='bibr' target='#b5'>(Bassel, Ilya &amp; Yuri, 2021;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chenxu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yi et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b9'>Hang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>Luan, Xi &amp; Xiaoming, 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Rui et al., 2017)</ns0:ref>. Furthermore, a recent study <ns0:ref type='bibr' target='#b14'>(Soumyadip et al., 2016)</ns0:ref> emphasized that compared with frontal face images with yaw variation less than 10 degrees, the accuracy of recognizing face images with yaw variation more than 60 degrees is reduced by 10%. The results indicate that pose variation continues to be a challenge for many real-world facial recognition applications. The existing approaches to these challenges can be divided into two main groups. In a first approach, frontalization of the input image is used to synthesize frontal-view faces <ns0:ref type='bibr' target='#b16'>(Meina et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Tal et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Christos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Junho et al., 2015)</ns0:ref>, meaning that traditional facial recognition methods are applicable. Meanwhile, the second approach focuses on learning discriminative representations directly from non-frontal faces through either a one-joint model or multiple pose-specific models <ns0:ref type='bibr' target='#b19'>(Omkar, Andrea &amp; Andrew, 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Florian, Dmitry &amp; James, 2015)</ns0:ref>. It is necessary to explore the above approaches in more detail before proceeding. For the first approach, the conventional approaches often make use of robust local descriptors, (such as <ns0:ref type='bibr' target='#b21'>John, 1985;</ns0:ref><ns0:ref type='bibr' target='#b22'>Lowe, 1999;</ns0:ref><ns0:ref type='bibr' target='#b24'>Ahonen, Hadid &amp; Pietik&#228;inen, 2006;</ns0:ref><ns0:ref type='bibr'>Dalal &amp; Triggs, 2015)</ns0:ref>, to account for local distortions and then adapt the metric learning method to achieve pose invariance. Moreover, local descriptors are often used <ns0:ref type='bibr' target='#b26'>(Kilian &amp; Lawrence, 2009;</ns0:ref><ns0:ref type='bibr'>Tsai-Wen et al., 2013)</ns0:ref> approaches to eliminate distortions locally, followed by a metric learning method to prove pose invariance. However, due to the tradeoff between invariance and discriminability, this type of approach is relatively weak in handling images with extreme poses. A second approach, often known as face rotation, uses one-joint models or multiple pose-specific models to learn discriminative representations directly from non-frontal faces. These methods have shown good results for near-frontal face images, but they typically perform poorly for profile face images because of severe texture loss and artifacts. Due to this poor performance, researchers have been working to find more effective methods to reconstruct positive facial images from data <ns0:ref type='bibr' target='#b30'>(Yaniv et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>Amin &amp; Xiaoming, 2017;</ns0:ref><ns0:ref type='bibr' target='#b2'>Xi et al., 2017)</ns0:ref>. 
For instance, <ns0:ref type='bibr' target='#b0'>(Junho et al., 2015)</ns0:ref>, adopted a multi-task model to improves identity preservation over a single task model from paired training data. Later on <ns0:ref type='bibr' target='#b35'>(Luan, Xi &amp; Xiaoming, 2017;</ns0:ref><ns0:ref type='bibr' target='#b13'>Rui et al., 2017)</ns0:ref>, their main contribution was a novel two-pathway GAN architecture tasks for photorealistic and identity preserving frontal view synthesis starting from a single face image. Recent work by <ns0:ref type='bibr' target='#b5'>(Bassel, Ilya &amp; Yuri, 2021;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chenxu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yi et al.,2021)</ns0:ref> has demonstrated advances in the field of face recognition. During pose face transformation, however, some of the synthetic faces appeared incomplete and lacked fine detail. So far, the TP-GAN <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref> has made significant progress in the face synthesis model, which can perceive global structure and local details simultaneously in an unsupervised manner. More importantly, TP-GAN solves the photorealistic frontal view synthesis problems by collecting more details on local features for a global encoder-decoder network along with synthesis functions to learn multi-view face synthesis from a single face image. However, we argue that TP-GAN has two major limitations. First, it is critically dependent on texture details of the landmark detection. To be more specific, this method focuses on the inference of the global structure and the transformation of the local texture details, as their corresponding feature maps, to produce the final synthesis. The image visual quality results indicate that these techniques alone have the following deficiencies: A color bias can be observed between the synthetic frontal face obtained by TP-GAN method and the input corresponding to non-frontal input. In some cases, the synthetic faces are even incomplete and fall short in terms of fine detail. Therefore, the quality of the synthesized images still cannot meet the requirements for performing specific facial analysis tasks, such as facial recognition and face verification. Second, it uses a global structure, four local network architectures and synthesis functions for face frontalization, where training and inference are unstable under large data distribution, which makes it ineffective for synthesising arbitrary poses. The goal of this paper is to address these challenges through a landmark feature-based method (LFM) for robust pose-invariant facial recognition to improve image resolution under extreme facial poses.</ns0:p><ns0:p>In this paper, we make the following contributions:</ns0:p><ns0:p>The LFM is a newly introduced method for the existing generative global pathway structure that utilizes a 2D face landmark localization to cooperate with the local pathway structure in a landmark sharing manner to incorporate empirical face pose into the learning process. LFM of target facial details provides guidance to arbitrary pose synthesis, whereas the four-local patch network architecture remains unchanged to capture the input facial local perception information. The LFM provides an easy way for transforming and fitting two-dimensional face models in order to achieve target pose variation and learn face synthesis information from generated images. 
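To make the landmark-sharing idea concrete, the following minimal sketch shows how the four local patch centres (left eye, right eye, nose and mouth) used by the local pathway could be derived from a 2D 68-point landmark set and cropped from a face image. It is an illustrative sketch only: the iBUG-style 68-point index ranges, the 128x128 face size and the 40-pixel patch size are assumptions for illustration, not the settings of the actual implementation.

```python
import numpy as np

# Standard iBUG 68-point index ranges (an assumption; the paper does not
# specify its own landmark indexing convention in the text).
LEFT_EYE_IDX  = np.arange(36, 42)
RIGHT_EYE_IDX = np.arange(42, 48)
NOSE_IDX      = np.arange(27, 36)
MOUTH_IDX     = np.arange(48, 68)

def patch_centers(landmarks_68):
    """Return the four patch centres (left eye, right eye, nose, mouth)
    as (x, y) coordinates from a (68, 2) landmark array."""
    lm = np.asarray(landmarks_68, dtype=np.float32)
    return {
        "left_eye":  lm[LEFT_EYE_IDX].mean(axis=0),
        "right_eye": lm[RIGHT_EYE_IDX].mean(axis=0),
        "nose":      lm[NOSE_IDX].mean(axis=0),
        "mouth":     lm[MOUTH_IDX].mean(axis=0),
    }

def crop_patch(image, center, size):
    """Crop a size x size patch centred on `center`, clamped to the image."""
    h, w = image.shape[:2]
    cx, cy = int(round(center[0])), int(round(center[1]))
    half = size // 2
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(x0 + size, w), min(y0 + size, h)
    return image[y0:y1, x0:x1]

# Example with stand-in data; a real pipeline would feed the detector's
# 68 landmarks and the cropped face image here.
face = np.zeros((128, 128, 3), dtype=np.uint8)
landmarks = np.random.uniform(20, 108, size=(68, 2))
patches = {k: crop_patch(face, c, 40) for k, c in patch_centers(landmarks).items()}
print({k: p.shape for k, p in patches.items()})
```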
In order to better represent facial image features, we use a denoising autoencoder (DAE) to modify the structure of the generator's global-path encoder and decoder. The goal of this modification is to train the encoder decoder with multiple noise levels so that it can learn about the missing texture face details. Adding noise to the image pixels causes them to diffuse away from the manifold. As we apply DAE to the diffused image pixels, it attempts to pull the data points back onto the manifold. This implies that DAE implicitly learns the statistical structure of the data by learning a vector field from locations with no data points back to the data manifold. As a result, encoder-decoders must infer missing pieces and retrieve the denoised version in order to achieve balanced learning behavior. We optimize the training process using an accurate parameter configuration for a complex distribution of facial image data. By re-configuring the parameters (such as the learning rate, batch size, number of epochs, and etc.), the GAN performance can be better optimized during the training process. Occasionally, unstable 'un-optimized' training for the synthetic image problem results in unreliable images for extreme facial positions.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>In this section, we focus on the most recent studies which are related to the multi-view face synthesis problem using deep learning approaches. The deep learning approaches including face normalization, generative adversarial network and facial landmark detection, are reviewed.</ns0:p><ns0:p>Face normalization, or multi-view face synthesis from a single face image, is a unique challenge for computer vision systems due to its ill-posed problem. The existing solutions to address this challenge can be classified into three categories:</ns0:p><ns0:p>local texture warping methods <ns0:ref type='bibr'>(Tal et 2D/3D al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b50'>Xiangyu et al., 2015)</ns0:ref>, statistical methods <ns0:ref type='bibr' target='#b18'>(Christos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b51'>Li et al., 2014)</ns0:ref>, and deep learning methods <ns0:ref type='bibr' target='#b2'>(Xi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Luan, Xi &amp; Xiaoming, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>(Tal et al., 2015)</ns0:ref>, employed a single reference surface for all query faces in order to produce face 3D frontalization. <ns0:ref type='bibr' target='#b50'>(Xiangyu et al., 2015)</ns0:ref>, employed a pose and expression normalization method to recover the canonical-view. <ns0:ref type='bibr' target='#b18'>(Christos et al., 2015)</ns0:ref>, proposed a joint frontal view synthesis and landmark localization method. <ns0:ref type='bibr' target='#b51'>(Li et al., 2014)</ns0:ref>, the authors concentrated on local binary patternlike feature extraction. <ns0:ref type='bibr' target='#b2'>(Xi et al., 2017)</ns0:ref>, proposed a novel deep 3DMM-conditioned face frontalization GAN in order to achieve identity-preserving frontalization and high-quality images by using a single input image with a face pose. 
<ns0:ref type='bibr' target='#b35'>(Luan, Xi &amp; Xiaoming, 2017)</ns0:ref>, proposed a 90 0 single-pathway framework called the disentangled representation learning-generative adversarial network (DR-GAN) to learn identity features that are invariant to viewpoints, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>Generative Adversarial Networks (GANs)</ns0:head><ns0:p>The GAN is one of the most interesting research frameworks that is used for deep generative models proposed by <ns0:ref type='bibr' target='#b40'>(Ian et al., 2014)</ns0:ref>. The theory behind the GAN framework can be seen as a two-player non-cooperative game to improve the learning model. A GAN model has two main components, generator and discriminator . generates a set of images that is as plausible (G) (D) G as possible in order to confuse the , while the works to distinguish the real generated images D D from the fake. The convergence is achieved by alternately training them. The main difference between GANs and traditional generative models is that GANs generate whole images rather than pixel by pixel. In a GAN framework, the generator consists of two dense layers and a dropout layer. A normal distribution is used to sample the noise vectors and feed them into the generator networks. The discriminator can be any supervised learning model. GANs have been proven effective for a wide range of applications, such as image synthesis <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017;</ns0:ref><ns0:ref type='bibr'>Yu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b38'>Yu et al., 2020)</ns0:ref>, image super-resolution <ns0:ref type='bibr' target='#b42'>(Christian et al., 2017)</ns0:ref>, image-to-image translation <ns0:ref type='bibr' target='#b49'>(Jun-Yan et al., 2017)</ns0:ref>, etc. Several effective GAN models have been proposed to cope with the most complex unconstrained face image situations, such as changes in pose, lighting and expression. For instance, (Alec, <ns0:ref type='bibr' target='#b43'>Luke &amp; Soumith, 2016)</ns0:ref>, proposed a deep convolutional GAN to integrate a convolutional network into the GAN model to achieve more realistic face image generation. <ns0:ref type='bibr' target='#b44'>(Mehdi &amp; Simon, 2014)</ns0:ref>, proposed a conditional version of the generative adversarial net framework in both generator and discriminator. <ns0:ref type='bibr' target='#b45'>(Augustus, Christopher &amp; Jonathon, 2017)</ns0:ref>, presented an improved version of the Cycle-GAN model called 'pixel2pixel' to handle the image-to-image translation problems by using labels to train the generator and discriminator. <ns0:ref type='bibr' target='#b46'>(David, Thomas &amp; Luke, 2017)</ns0:ref>, proposed a boundary equilibrium generative adversarial network (BE-GAN) method, which focuses on the image generation task to produce high-quality image resolution, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>Facial Landmark Detection</ns0:head><ns0:p>The face landmark detection algorithm is one of most successful and fundamental components in a variety of face applications, such as object detection and facial recognition. The methods used for facial landmark detection can be divided into three major groups; holistic methods, constrained local methods, and regression-based methods. In the past decade, deep learning models have proven to be a highly effective way to improve landmark detection. 
Several existing methods are considered to be good baseline solutions to the 2D face alignment problem for faces with controlled pose variation <ns0:ref type='bibr' target='#b52'>(Xuehan &amp; Fernando, 2013;</ns0:ref><ns0:ref type='bibr' target='#b54'>Georgios, 2015;</ns0:ref><ns0:ref type='bibr'>Xiangyu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b59'>Adrian &amp; Georgios, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b52'>(Xuehan &amp; Fernando, 2013)</ns0:ref>, proposed supervised descent method, which learns the general descent directions in a supervised manner. <ns0:ref type='bibr' target='#b54'>(Georgios, 2015)</ns0:ref>, in their method, a sequence of Jacobian matrices and hessian matrices is determined by using regression. <ns0:ref type='bibr'>(Xiangyu et al., 2016)</ns0:ref>, proposed a model with cascaded convolutional neural network to 3D solve the self-occlusion problem. <ns0:ref type='bibr' target='#b59'>(Adrian &amp; Georgios, 2017)</ns0:ref>, proposed a guided-by-2D landmarks convolutional neural network that converts annotations into annotations, etc.</ns0:p></ns0:div> <ns0:div><ns0:head>2D 3D</ns0:head><ns0:p>We can summarize some important points from our related work. Despite the fact that the existed methods produced good results on the specific face image datasets for which they were designed and provided robust alignment across poses, they are difficult to replicate if they are applied alone to different datasets. This is especially true for tasks like facial normalization or other face synthesis tasks, where deep structure learning methods still fail to generate highquality image samples under extreme pose scenarios, which results in significantly inferior final results.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Method</ns0:head><ns0:p>In this section, we shall first briefly describe the existing TP-GAN architecture and then describe our proposed LFM method in detail.</ns0:p></ns0:div> <ns0:div><ns0:head>TP-GAN Architecture</ns0:head><ns0:p>Based on the structure shown in Fig <ns0:ref type='figure'>1</ns0:ref> to integrate facial features with their long-range dependencies and, therefore, to create faces that preserve identities, especially in cases of faces with large pose angles. In this way, we can learn a PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:ref> Manuscript to be reviewed Computer Science richer feature representation and generate inferences that incorporate both contextual dependencies and local consistency. The loss functions, including pixel-wise loss, symmetry loss, adversarial loss, and identity preserving loss, are used to guide an identity preserving inference of frontal view synthesis. The discriminator is used to distinguish real facial &#119863; &#120579; &#119863; images or 'ground-truth frontal view' from synthesized frontal face images or &#119868; &#119865; (&#119866;&#119879;) &#119866; &#120579; &#119866; (&#119868; &#119875; )</ns0:p><ns0:p>'synthesized-frontal view'. A second stage involves a light-CNN model that is used to (&#119878;&#119865;) compute face dataset's identity-preserving properties. For a more detailed description, see <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>LFM for Generative Global Pathway Structure</ns0:head><ns0:p>To the best of our knowledge, this is the first study to integrate an LFM with the existing TP-GAN global pathway structure for training and evaluation purposes. 
In this work, we exploit a landmark detection mechanism (Adrian &amp; Georgios, 2017) that proposed for 2D-to-3D facial landmark localization to help our model obtain a high quality frontal-face image resolution. Face landmarks are the most compressed representation of a face that maintains information such as pose and facial structure. There are many situations where landmarks can provide advanced facerelated analyses without using whole face images. The landmark method used in this study was explored at <ns0:ref type='bibr' target='#b2'>(Xi Y et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Shuicheng &amp; Jiashi. 2018;</ns0:ref><ns0:ref type='bibr' target='#b60'>Xing D, Vishwanath. Sindagi &amp; Vishal. 2018)</ns0:ref>. These methods can achieve high accuracy of face alignment by cascaded regression methods. Methods like these work well when particular poses are chosen without taking other factors into consideration, such as facial characteristics. We found that facial characteristics can play an imperative role in improving the results of the current state of the art. By adding landmarks to augment the synthesized faces, recognition accuracy will be improved since these landmarks rely on generative models to enhance the information contained within them. The process for generating facial images is shown in Fig 1. We will discuss our Fig 2 architecture in the subsequent paragraph. We perform a face detection to locate the face in the Multi-PIE and FEI datasets. The face detection can be achieved by using a Multi-Task Cascade CNN through the MTCNN library <ns0:ref type='bibr' target='#b32'>(Kaipeng Z et al.,2016)</ns0:ref>. After that, cropping and processing of the profile image. A local pathway of four landmark patch networks to capture the local</ns0:p><ns0:formula xml:id='formula_0'>&#119866; &#120579; &#119897; &#119894; , &#119894; &#8712; {0, 1, 2, 3}</ns0:formula><ns0:p>texture around four facial landmarks. Each patch learns a set of filters for rotating the centercropped patch (after rotation, the facial landmarks remain in the center) to its corresponding frontal view. Then, we used a multiple feature map to combine the four facial tensors into one. Each tensor feature is placed at a 'template landmark location' and a max-out fusing strategy is used to ensure that stitches on overlapping areas are minimized. Then, a 2D zero padding technique is used to fill out the rows and columns around the template landmark location with zeros. Nevertheless, local landmark detection alone cannot provide accurate texture detail for a face that has a different shape, or multiple views, because all generated local synthesis faces have the same fixed patch (or template) centralized location, regardless of their shape PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>characteristics. The challenge becomes even greater with these shapes when the face is under extreme poses. A texture can be defined as a function of the spatial variation of brightness intensity of pixels in an image. Each texture level represents a variable, with variations such as smoothness, coarseness, regularity, and etc., of each surface oriented in different directions. Our work focuses on two important phenomena: rotation and noise 'noise is a term used to describe image information that varies randomly in brightness or color'. 
As a result, if the methods used to eliminate these common phenomena are unreliable, the results will be less accurate; therefore, in practice, the methods used to create the images should be as robust and stable as possible. In addition, the images may differ in position, viewpoint, and light intensity, all of which can influence the final results, challenging texture detail capturing. In order to overcome these challenges, we must adapt a method that can capture and restore the missing texture information.</ns0:p><ns0:p>The key to solving this problem is a landmark localization method based on regression. Our work utilized Face Alignment Network (FAN). A FAN framework is based on the HourGlass (HG) architecture, which integrates four Hourglass models to model human pose through hierarchical, parallel, and multi-scale integration to improve texture maps by reconstructing selfoccluded parts of faces. The landmark detection algorithm captures and restores the texture details of the synthesised face image by repositioning the appearance spot of the mismatched or drifted patch 'template landmark location'. Figure <ns0:ref type='figure'>3</ns0:ref> illustrates some examples. Our method allows us to treat faces that have a variety of shape characteristics. In this way, the spatial variations, smoothness, and coarseness that arise due to mismatched or drifted pixels between local and global synthesized faces are eliminated. Typical landmark templates are approximately the same size as a local patch network, but each region has its own structure, texture, and filter. Next, we combined the local synthesis image with the global synthesis image (or two textures) for data augmentation. Every patch of our FAN has its own augmented channels, and each patch has its own RGB along with a depth map (D) input for each 2D local synthesis image. In this way, the texture details help us to build a more robust model around the face patch region and enhance generalization. Even though landmark feature extraction may result in some incongruous or over-smoothing due to noise, it still remains an important method for incorporating pose information during learning.</ns0:p><ns0:p>The landmark detection algorithm for our synthesis face image was built using 68 points. We then reconstructed those synthesis image into four uniform patches (or templates), &#119871;&#119890;&#119910;&#119890; = &#119866; &#120579; &#119892; 0 , , and , and each patch is comprised</ns0:p><ns0:formula xml:id='formula_1'>,&#119894; &#8712; 0 &#119877;&#119890;&#119910;&#119890; = &#119866; &#120579; &#119892; 1 ,&#119894; &#8712; 1 &#119873;&#119900;&#119904;&#119890; = &#119866; &#120579; &#119892; 2 ,&#119894; &#8712; 2 &#119872;&#119900;&#119906;&#119905;&#8462; = &#119866; &#120579; &#119892; 3</ns0:formula><ns0:p>,&#119894; &#8712; 3 of convolutional components. Each patch region has its own filter, which contains different texture details, regain size and structure information. Individual filters provide more details about specific areas in an image, such as pixels or small areas with a high contrast or that are different in color or intensity from the surrounding pixels or areas. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science concatenation stage. Those layers' act as a visualization feature map for a specific input of a fontal-face image in order to increase the amount of visual information kept by subsampling layers' structure. 
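Before moving on to how the local and global feature tensors are merged, the sketch below illustrates the max-out fusing strategy and 2D zero padding described above: each rotated local patch is pasted onto a zero-filled, face-sized canvas at a fixed template landmark location, and overlapping areas are resolved with an element-wise maximum. The template coordinates and patch sizes shown are illustrative assumptions rather than the actual template used in the network.

```python
import numpy as np

def place_patch(canvas, patch, center):
    """Max-fuse `patch` into `canvas` centred at the template location `center`;
    untouched canvas entries stay zero (the 2D zero padding described above)."""
    H, W = canvas.shape[:2]
    ph, pw = patch.shape[:2]
    y0 = int(center[1]) - ph // 2
    x0 = int(center[0]) - pw // 2
    ys, xs = max(y0, 0), max(x0, 0)
    ye, xe = min(y0 + ph, H), min(x0 + pw, W)
    canvas[ys:ye, xs:xe] = np.maximum(canvas[ys:ye, xs:xe],
                                      patch[ys - y0:ye - y0, xs - x0:xe - x0])
    return canvas

# Illustrative template landmark locations for a 128x128 frontal face
# (assumed values, not the exact template coordinates of the model).
TEMPLATE = {"left_eye": (44, 52), "right_eye": (84, 52),
            "nose": (64, 76), "mouth": (64, 100)}

canvas = np.zeros((128, 128, 3), dtype=np.float32)   # zero-padded canvas
rotated_patches = {k: np.random.rand(40, 40, 3).astype(np.float32) for k in TEMPLATE}
for name, patch in rotated_patches.items():
    canvas = place_patch(canvas, patch, TEMPLATE[name])
# `canvas` can now be concatenated with the global-pathway feature maps.
```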
Then, we merged the features tensors of the local and global pathways into one tensor to produce the final synthesis face. Table <ns0:ref type='table'>1</ns0:ref> shows the workflow of all operational steps.</ns0:p><ns0:p>The landmark method provides useful information for large-pose regions, e.g., , which helps 90 0 our model to produce more realistic images.</ns0:p></ns0:div> <ns0:div><ns0:head>Global Pathway Encoder-Decoder Structure</ns0:head><ns0:p>In this section we describe our encoder-decoder formulation. Inspired by the work of <ns0:ref type='bibr' target='#b34'>(Jimei Y et al., 2016)</ns0:ref>, for the DAE, our aim is to train the encoder-decoder with multiple noise levels in order to learn more about the missing texture face details of the input face image and preserve the identity of the frontal-view image from the profile image . The encoder-decoder &#119868; &#119865; &#119868; &#119875; mechanism has to discover and capture information between the dimensions of the input in order to infer missing pieces and recover the denoised version. In a subsequent paragraph, we will discuss encoder-decoders in more technical details. The idea starts with assuming that the input data points (image pixels) lies on a manifold in . Adding noise to the data image pixel results &#8477; &#119873; in diffusing away from the manifold. When we apply DAE to the diffused data image pixels, it tries to pull the data point back onto the manifold. Therefore, DAE learns a vector field pointing from locations with no data point back to the data manifold, implying that it implicitly learns the statistical structure of the data. However, a sparse coding model has been shown to be a good model for image denoising. We assume that group sparse coding, which generalizes standard sparse coding, is effective for image denoising as well, and we will view from a DAE perspective. The encoding function of sparse coding occurs in the inference process, where the network infers the latent variable from noisy input . Each individual symbol is defined here.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119904; &#119909;</ns0:head><ns0:p>Let be the RGB components of the input face image, is the method that splits the input &#119891; &#119890; &#120567; image into its RBG components, is a set of weights and bias for the DAE, and is the &#8743; &#119886; activation function. In our case, the iterative shrinkage-thresholding algorithm (ISTA) is used to perform inference and is formulated as follows:</ns0:p><ns0:formula xml:id='formula_2'>&#119904; = &#119891; &#119890; (&#119909;; {&#120567;, &#8743; , &#119886;}) = &#119868;&#119878;&#119879;&#119860; (&#119909;; &#120567;, &#8743; , &#119886;) (1)</ns0:formula><ns0:p>The decoding function is the network's reconstruction of the input from the latent variable.</ns0:p><ns0:formula xml:id='formula_3'>&#119909; = &#119891; &#119889; (&#119904;; &#120567;) = &#120567; &#119904; (2)</ns0:formula><ns0:p>where is a denoising function.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119891; &#119889;</ns0:head><ns0:p>The DAE method can be used to learn through the following. For each input data point as &#8743; noise variance which is the same in all directions. We use group sparse coding to denoise , as &#119909; described in Equation (1) and Equation ( <ns0:ref type='formula'>2</ns0:ref>). 
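As a concrete illustration of Equations (1) and (2), the following sketch performs sparse-coding-based denoising with ISTA on a toy signal: the encoder infers a sparse latent code s from the noisy input by iterative shrinkage-thresholding, and the decoder reconstructs the denoised signal as a product of the dictionary and the code. The dictionary, noise level and iteration count are illustrative stand-ins; in the model the corresponding quantities are learned.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise shrinkage operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_encode(x_noisy, Phi, lam=0.1, n_iter=100):
    """Infer the sparse code s from a noisy input (Equation (1)):
    minimize 0.5*||x - Phi s||^2 + lam*||s||_1 by iterative shrinkage."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - x_noisy)
        s = soft_threshold(s - grad / L, lam / L)
    return s

def decode(s, Phi):
    """Reconstruct the denoised signal (Equation (2)): x_hat = Phi @ s."""
    return Phi @ s

# Toy example: denoise a signal that is sparse in the dictionary Phi.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256)) / 8.0   # stand-in dictionary (learned in practice)
s_true = np.zeros(256)
s_true[rng.choice(256, 5, replace=False)] = 1.0
x_clean = Phi @ s_true
x_noisy = x_clean + 0.05 * rng.standard_normal(64)   # noisy input x + v
x_hat = decode(ista_encode(x_noisy, Phi), Phi)
print("reconstruction error:", np.linalg.norm(x_hat - x_clean))
```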
The DAE is formulated as follows:</ns0:p><ns0:formula xml:id='formula_4'>&#119864; &#119863;&#119860;&#119864; = &#8214; &#119909; -&#119909; &#8214; 2 2 (3)</ns0:formula><ns0:p>We define DAE's reconstruction error as a square error between and .</ns0:p></ns0:div> <ns0:div><ns0:head>&#119909; &#119909;</ns0:head><ns0:p>Generally, the cost function can be another form of differentiable error measure, shown as follows: (&#119868; &#119901; &#119899; ),&#119910; &#119899; )} ( <ns0:ref type='formula'>5</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_5'>&#8710; &#8743; &#8733;-&#8706;&#119864; &#119863;&#119860;&#119864; /&#8706; &#8743;<ns0:label>(</ns0:label></ns0:formula><ns0:p>where is a weighting parameter and is a weighted sum of individual losses that together &#120572; &#119871; &#119904;&#119910;&#119899; constrain the image to reside within the desired manifold. Each individual loss function will be explained in the comprehensive loss functions section.</ns0:p><ns0:p>In order to generate the best images, we need a very good generator and discriminator. The reason for this is that if our generator is not good enough, we won't be able to fool the discriminator, resulting in no convergence. A bad discriminator will also classify images that make no sense as real, which means our model never trains, and we never produce the desired output. The image can be generated by sampling values from a Gaussian distribution and feeding them into the generator network. Based on a game-theoretical approach, our objective function is a minimax function. Using the discriminator to maximize the objective function allows us to perform gradient descent on it. The generator tries to minimize its objective function, so we can use gradient descent to compute it. In order to train the network, gradient ascent and descent must be alternated. </ns0:p></ns0:div> <ns0:div><ns0:head>min</ns0:head></ns0:div> <ns0:div><ns0:head>Gradient descent on . &#119866;</ns0:head><ns0:p>Minimax problem allows discriminate to maximize adversarial networks, so that we can perform gradient ascent on these networks; whereas generator tries to minimize adversarial networks, so that we can perform gradient descent on these networks. In practice, Equation ( <ns0:ref type='formula'>6</ns0:ref> objective function, the discriminator and generator have much stronger gradients at the start of the learning process.</ns0:p><ns0:p>When a single discriminating network is trained, the refiner network will tend to overemphasize certain features in order to fool the current discriminator network, resulting in drift and producing artifacts. We can conclude that any local patch sampled from the refined image should have similar statistics to the real image patches. We can therefore define a discriminator network that classifies each local image patch separately rather than defining a global discriminator network. The division limits both the receptive field and the capacity of the discriminator network, as well as providing a large number of samples per image for learning the discriminator network. The discriminator in our implementation is a fully convolutional network that outputs a probabilistic ( represented as a synthesized face image) map instead &#119873; &#215; &#119873; '&#119873; = 2' of one scalar value to distinguish between a ground truth frontal view and a synthesized (&#119866;&#119879;) frontal view . 
Our discriminator loss is defined through the discrepancy between the model (&#119878;&#119865;) distribution and the data distribution, using an adversarial loss (</ns0:p><ns0:p>). By assigning each &#119871; &#119886;&#119889;&#119907; probability value to a particular region, the can now concentrate on a single semantic region &#119863; &#120579; &#119863; rather than the whole face.</ns0:p></ns0:div> <ns0:div><ns0:head>Comprehensive Loss Functions</ns0:head><ns0:p>In addition to the existing TP-GAN synthesis loss functions, which are a weighted sum of four individual loss functions ( ), a classification loss is further &#119871; &#119901;&#119909; ,'&#119871; &#119904;&#119910;&#119898; ,&#119871; &#119894;&#119901; ,&#119871; &#119905;&#119907; and &#119871; &#119897;&#119900;&#119888;&#119886;&#119897; ' &#119871; &#119888;&#119897;&#119886;&#119904;&#119904;&#119894;&#119891;&#119910; added to our method in order to get better results. Although pixel wise loss may bring some over-smooth effects to the refined results, it is still an essential part for both accelerated optimization and superior performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Symmetry Loss (</ns0:head><ns0:p>) <ns0:ref type='formula'>10</ns0:ref>) is used to calculate the symmetry of the synthesized face image because a face image is &#119871; &#119904;&#119910;&#119898; generally considered to be a symmetrical pattern. (&#119868; &#119875; &#119899; )) (12) is used to make the real frontal-face image and a synthesized frontal face images &#119871; &#119886;&#119889;&#119907; &#119868; &#119865; &#119866; &#120579; &#119866; (&#119868; &#119875; &#119899; )</ns0:p><ns0:formula xml:id='formula_6'>&#119923; &#119956;&#119962;&#119950; &#119871; &#119904;&#119910;&#119898; = 1 &#119882;/2 &#215; &#119867; &#119882;/2 &#8721; &#119909; = 1 &#119867; &#8721; &#119910; = 1 | &#119868; &#119901;&#119903;&#119890;&#119889; (&#119909;,&#119910;) -&#119868; &#119901;&#119903;&#119890;&#119889; (&#119882; -(&#119909; -1),&#119910;)| (</ns0:formula></ns0:div> <ns0:div><ns0:head>Identity Preserving Loss ( )</ns0:head><ns0:formula xml:id='formula_7'>&#119871; &#119894;&#119901; &#119871; &#119894;&#119901; = 2 &#8721; &#119894; = 1 1 &#119882; &#119897; &#215; &#119867; &#119897; &#119882; &#119897; &#8721; &#119909; = 1 &#119867; &#119897; &#8721; &#119910; = 1 |&#119865; &#119892;&#119905; &#119897; (&#119909;,</ns0:formula><ns0:p>indistinguishable, so that the synthesized frontal face image achieves a visually pleasing effect.</ns0:p><ns0:p>Total Variation Loss (&#119923; &#119957;&#119959; )</ns0:p><ns0:p>Generally, the face images synthesized by two pathways generative adversarial networks have unfavorable visual artifacts, which deteriorates the visualization and recognition performance. Imposing on the final synthesized face images can help to alleviate this issue. The loss is &#119871; &#119905;&#119907; &#119871; &#119905;&#119907; calculated as follows:</ns0:p><ns0:formula xml:id='formula_8'>&#119871; &#119905;&#119907; = &#119882; &#8721; &#119909; = 1 &#119867; &#8721; &#119910; = 1 |&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -&#119868; &#119901;&#119903;&#119890;&#119889; &#119909; -1,&#119910; | + |&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -&#119868; &#119901;&#119903;&#119890;&#119889; &#119909;,&#119910; -1 | (13)</ns0:formula><ns0:p>will generate a smooth synthesized face image. 
</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Loss ($L_{classify}$)</ns0:head><ns0:formula xml:id='formula_9'>$L_{classify} = -\sum_{i} y^{true}_{i} \log_2 (y^{pred}_{i})$ (14)</ns0:formula><ns0:p>where $i$ is the class index, $y^{true}$ denotes the one-hot true target tensor of $I$, and $y^{pred}$ is the predicted probability tensor. $L_{classify}$ is a cross-entropy loss which is used to ensure that the synthesized frontal-face image can be classified correctly.</ns0:p><ns0:p>Local Pixel-Wise Loss ($L_{local}$)</ns0:p><ns0:formula xml:id='formula_10'>$L_{local} = \frac{1}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H} \left| I^{gt}_{local}(x,y) - I^{pred}_{local}(x,y) \right|$ (15)</ns0:formula><ns0:p>This loss calculates the total average pixel difference between $I^{gt}_{local}$ and $I^{pred}_{local}$, and it is not addressed in <ns0:ref type='bibr' target='#b13'>(Rui et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Generator Loss Functions ($L_{generator}$)</ns0:head><ns0:p>The generator loss function of the proposed method is a weighted sum of all the losses defined above:</ns0:p><ns0:formula xml:id='formula_11'>$L_{generator} = L_{px} + \lambda_1 L_{sym} + \lambda_2 L_{ip} + \lambda_3 L_{adv} + \lambda_4 L_{tv} + \lambda_5 L_{classify} + \lambda_6 L_{local}$ (16)</ns0:formula><ns0:p>where $\lambda_i$ $(i = 1,\ldots,6)$ are weights that coordinate the different losses; in our experiments they are set to $\lambda_1 = 0.1$, $\lambda_2 = 0.001$, $\lambda_3 = 0.005$, $\lambda_4 = 0.0001$, $\lambda_5 = 0.1$ and $\lambda_6 = 0.3$. $L_{generator}$ is used to guide an identity-preserving inference of frontal view synthesis, while $L_{adv}$ is used to push the generative network forward so that the synthesized frontal-face image achieves a pleasing appearance.</ns0:p></ns0:div> <ns0:div><ns0:head>Optimizing the Training Process</ns0:head><ns0:p>In order to optimize the training process, we propose some modifications to the TP-GAN parameters, as shown in Table <ns0:ref type='table'>2</ns0:ref>, to improve the performance in learning the frontal-face data distribution.
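Before turning to the specific parameters, the sketch below shows how the weighted generator objective of Equation (16) and the alternating discriminator/generator updates discussed earlier can be wired together in PyTorch. The tiny networks are illustrative stand-ins for the two-pathway generator and for the patch discriminator that outputs an N x N probability map (N = 2); the identity-preserving, classification and local terms are omitted for brevity, and the non-saturating BCE form of the adversarial objective is used in place of the raw minimax expression.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in networks, not the actual two-pathway architecture.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, kernel_size=32, stride=32), nn.Sigmoid())  # 2x2 patch map for 128x128 input

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))  # illustrative optimizer settings
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

def symmetry_loss(img):
    # Equation (10): penalize left/right asymmetry of the synthesized face.
    return (img - torch.flip(img, dims=[3])).abs().mean()

def tv_loss(img):
    # Equation (13): total variation over neighbouring pixels.
    return ((img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean() +
            (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean())

profile = torch.rand(4, 3, 128, 128) * 2 - 1   # stand-in profile batch
frontal = torch.rand(4, 3, 128, 128) * 2 - 1   # stand-in ground-truth frontal batch

for step in range(2):                           # alternating updates
    # Discriminator step: gradient ascent on the objective in Equation (7).
    fake = G(profile).detach()
    d_real, d_fake = D(frontal), D(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: partial weighted sum following Equation (16).
    fake = G(profile)
    d_out = D(fake)
    l_px  = F.l1_loss(fake, frontal)                 # pixel-wise loss
    l_sym = symmetry_loss(fake)                      # weight lambda_1 = 0.1
    l_adv = bce(d_out, torch.ones_like(d_out))       # weight lambda_3 = 0.005
    l_tv  = tv_loss(fake)                            # weight lambda_4 = 0.0001
    loss_g = l_px + 0.1 * l_sym + 0.005 * l_adv + 0.0001 * l_tv
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In the full objective, the identity-preserving, classification and local pixel-wise terms would be added with the remaining weights listed after Equation (16).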
In our experiment, we consider a few parameters: the learning rate, batch size, number of epochs, and loss functions. We chose these parameters based on our experience, knowledge and observations. The adopted learning rate improves the loss accuracy of the model for both $G$ and $D$. The batch size is the number of examples from the training dataset used in the estimation of the error gradient, and it determines how the learning algorithm behaves. We found that using a larger batch size adversely affected our method's performance: during initial training the discriminator may be overwhelmed by too many examples, which leads to poor training performance. The number of training epochs is another key parameter. As the number of epochs increases, performance generally improves; the disadvantage is that training for a large number of epochs takes a long time. These parameters are essential for improving LFMTP-GAN's representation learning, gaining high-precision performance and reducing visual artifacts when synthesising frontal-face images.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head><ns0:p>We conducted extensive experiments to verify the effectiveness of our method by comparing it with the TP-GAN. The evaluation protocol includes frontal-face image resolution and the accuracy of preserving face identity.</ns0:p></ns0:div> <ns0:div><ns0:head>Experimental Settings</ns0:head><ns0:p>Both LFMTP-GAN and TP-GAN models are tested and trained on the Multi-PIE and FEI datasets. Multi-PIE <ns0:ref type='bibr' target='#b62'>(Ralph et al., 2010)</ns0:ref> is a large face dataset with 75,000+ images for 337 identities across a variety of poses, illuminations and expression conditions, captured in a constrained environment. Multi-PIE has 15 poses ranging over &#177;90&#176; and 20 illumination levels for each subject. All 20 illuminations were taken within a few seconds: two without any flash illumination, followed by an image with each of the 18 flashes firing independently. Figure <ns0:ref type='figure'>5</ns0:ref> shows an example of our model results. 'The intensity of light is determined by the brightness of the flash and the background. For example, a bright or dark flash, shadow reflection, or a white or blue background will affect intensity. This depends on the recording equipment and the positioning.' To minimize the number of saturated pixels in flash-illuminated images, all cameras were set so that the brightest pixel in an image without flash illumination has a value of 128. In the same way, diffusers were added in front of each flash, and the color balance was manually adjusted so that the images looked similar. FEI is a Brazilian unlabeled face dataset with 2,800+ images for 200 identities across a variety of poses, captured in a constrained environment. The face images were taken between 'June 2005 and March 2006' at the artificial intelligence laboratory in S&#227;o Bernardo do Campo, S&#227;o Paulo, Brazil. The FEI images were taken against a white homogeneous background in an upright frontal position with a range of profile poses of &#177;90&#176;, and different illuminations, distinct appearances and hairstyles were included for each subject.
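The parameter re-configuration discussed in the optimization section can be collected into a small configuration object, as sketched below. Every value shown is an illustrative placeholder rather than the actual setting reported in Table 2.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Illustrative placeholders only; the actual LFMTP-GAN values are listed
    # in Table 2 of the paper and are not reproduced in the text.
    learning_rate: float = 1e-4
    batch_size: int = 8          # smaller batches, since large batches overwhelm D early on
    num_epochs: int = 5000
    lambda_sym: float = 0.1      # loss weights as given after Equation (16)
    lambda_ip: float = 0.001
    lambda_adv: float = 0.005
    lambda_tv: float = 0.0001
    lambda_classify: float = 0.1
    lambda_local: float = 0.3

cfg = TrainConfig()
print(cfg)
```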
Our method shares the same implementation concept as TP-GAN but totally has different parameters settings. The training lasts for 10-to-18 days in each system for each dataset. The training model and source code will be released after this article is accepted.</ns0:p></ns0:div> <ns0:div><ns0:head>Visual Quality</ns0:head><ns0:p>In this subsection, we compare LFMTP-GAN with TP-GAN. Figure <ns0:ref type='figure'>6</ns0:ref> and Fig <ns0:ref type='figure'>7</ns0:ref> shows the comparison images, where the first column is the profile-face images under different face poses, the second column is the synthesized frontal-face images by TP-GAN, the third column is the synthesized frontal-face images by LFMTP-GAN, and the last column is one randomly selected frontal-face image of the category corresponding to the profile-face image. The yaw angle of the input face image are chosen circularly from . Obviously, the (15 0 ,30 0 ,45 0 ,60 0 ,75 0 and 90 0 ) resolution of the LFMTP-GAN images looks better than TP-GAN. This reveals that the use of a proper landmark localization algorithm could significantly improve the image quality of the 2D synthesized images, provide rich textural details, and contain fewer blur effects. Therefore, the visualization results shown in </ns0:p></ns0:div> <ns0:div><ns0:head>Identity Preserving</ns0:head><ns0:p>To quantitatively demonstrate the identity preserving ability of the proposed method, we evaluate the classification accuracy of synthesized frontal-face images on both Multi-PIE and FEI databases, and show their classification accuracy in Table <ns0:ref type='table'>3</ns0:ref> across views and (%) illuminations. The experiments were conducted by first employing light-CNN to extract deep features and then using the cosine-distance metric to compute the similarity of these features. The light-CNN model was trained on MS- <ns0:ref type='bibr'>Celeb-1M (Microsoft Celeb, 2016)</ns0:ref> which is a largescale face dataset, and fine-tuned on the images from Multi-PIE and FEI. Therefore, the light-CNN results on the profile images serves as our baseline. Our method produces better results</ns0:p></ns0:div> <ns0:div><ns0:head>&#119868; &#119875;</ns0:head><ns0:p>than TP-GAN as the pose angle is increased. Our approach has shown improvements in frontalfrontal face recognition. Moreover, Table <ns0:ref type='table'>4</ns0:ref> illustrates that our method is superior to many existing state-of-the-art approaches.</ns0:p><ns0:p>Many deep learning methods have been proposed for frontal view synthesis, but none of them have been proved to be sufficient for recognition tasks. <ns0:ref type='bibr' target='#b1'>Chao et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Yibo et al., 2018</ns0:ref> relied on direct methods such as CNN for face recognition, which will definitely reduce rather than improve performance. It is therefore important to verify whether our synthesis results can improve recognition performance (whether 'recognition via generation' works) or not. The next section presents the loss curves for the two models.</ns0:p></ns0:div> <ns0:div><ns0:head>Model Loss Curve Performance</ns0:head><ns0:p>This section provides a comparison with TP-GAN. We analyze the effects of our model on three tradeoff parameters named generator loss, pixel-wise loss, and identity preserving accuracy. 80% of the face image subjects from the Multi PIE and FEI datasets were used for training and evolution purposes. 
90% of the image subjects were used for training, while 10% were used for testing. The recognition accuracy and corresponding loss curves are shown in Fig 10 <ns0:ref type='figure'>.</ns0:ref> We can clearly see from the curves that, the proposed method improves the TP-GAN model and provides a much better performance on both datasets. In particular, the number of epochs exceeds 4500, the loss performance decreased sharply in LFMTP-GAN model, while the loss performance decreased slightly in the TP-GAN model. Our optimization learning curves was calculated according to the metric by which the parameters of the model were optimized, i.e., loss. More importantly, our method still produces visually convincing results (as shown in Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Results and Discussions</ns0:head><ns0:p>The goal of this method is to match the appearance of each query face by marking the partially face surface of the generated image, such as the eyes, nose and mouth. In theory, this would have allowed the TP-GAN method to better preserve facial appearance in the updated, synthesized views. The statement holds true when the face is considered to have similar shape characteristics. Considering that all human faces have unique shape characteristics, this may actually be counterproductive and harmful, rather than improving face recognition. We believe that it is necessary to integrate the four important local areas (eyes, nose and mouth) into their right positions on the whole face image, which is already proved to be correct from our experiment results. A majority of face recognition systems require a complete image to be recognized, however, recovering the entire image is difficult when parts of the face are missing. This makes it difficult to achieve good performance. We demonstrate how our approach can help enhance face recognition by focusing on these areas of the face and outperform other methods in the same context. Landmark detection is widely used in a variety of applications including object detection, texture classification, image retrieval and etc. TP-GAN already has landmark detection implemented for detecting the four face patches in its early stages as shown in Fig 1 <ns0:ref type='figure'>.</ns0:ref> Such a method might be valuable in obtaining further textual details that can help in recovering those facial areas and face shapes that have different characteristics. In our proposal, we offer a relatively simple yet effective method for restoring the texture details of a synthesised face image by repositioning the appearance areas of four landmark patches rather than the entire face. We show a different method for facial recognition (Table <ns0:ref type='table'>4</ns0:ref>) for faces with extreme poses. In four major poses we achieve rank-1 recognition rates ( , , , and </ns0:p></ns0:div> <ns0:div><ns0:head>},</ns0:head><ns0:p>external neural network in our modification, because that integration will increase the network complexity and cause training limitations. We obtained better results by introducing noise to the image before feeding it to the during optimization. {&#119866; Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the real face image from the synthetic frontal images, and provide a rich texture detail. 
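The identity-preserving evaluation described above, in which deep features of synthesized frontal faces are compared against gallery features with a cosine-distance metric and reported as rank-1 recognition, can be sketched as follows. The random feature vectors stand in for light-CNN embeddings, which are external to this sketch, and the gallery/probe split is illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification: each synthesized-frontal probe is matched to the
    gallery identity with the highest cosine similarity."""
    correct = 0
    for f, pid in zip(probe_feats, probe_ids):
        sims = [cosine_similarity(f, g) for g in gallery_feats]
        if gallery_ids[int(np.argmax(sims))] == pid:
            correct += 1
    return correct / len(probe_ids)

# Stand-in features; in the evaluation above these would come from the
# pre-trained light-CNN applied to synthesized frontal faces (probes) and
# one real frontal image per identity (gallery).
rng = np.random.default_rng(1)
gallery_ids = list(range(10))
gallery_feats = [rng.standard_normal(256) for _ in gallery_ids]
probe_ids = gallery_ids * 3
probe_feats = [gallery_feats[i] + 0.1 * rng.standard_normal(256) for i in probe_ids]
print("rank-1 accuracy:", rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids))
```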
The quantitative and qualitative experimental results of the Multi-PIE and FEI datasets show that our proposed method can not only generate high-quality perceptual facial images in extreme poses, but also significantly improves the TP-GAN results. Although LFMTP-GAN method achieves a high-quality image resolution output, there is still room for improvement by choosing different optimization algorithms, such as loss functions, or introducing some different techniques for facial analysis and recognition. Our future research is to apply different error functions or different face analysis and recognition techniques, combined with two pathway structures, to achieve a super-resolution generative model and high-precision performance.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>The structure of TP-GAN. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:p>The TP-GAN and LFMTP-GAN loss curve plots.</ns0:p><ns0:p>Where A is the generator loss curve, and B is the pixel-wise loss curve. The horizontal axis indicates the number of epochs, which is the number of times that entire training data has been trained. The vertical axis indicates how well the model performed after each epoch; the lower the loss, the better a model. C is the identity preserving accuracy curve, which is a quality metric that measures how accurate it is to preserve a user's identity; the higher the accuracy, the better a model. 5, 6 and 7 layers 6-5, 6 and 7 act as a visual feature map for specific inputs of fontal-face images in order to retain more visual information by subsampling layers' structure.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>shown in Fig 4, we construct a noisy input by adding Gaussian white noise (GWN) to the original input profile images, as given in: , where . Here, is an &#119909; = &#119909; + &#119907; &#119907; &#8764; &#119873; (&#119874;, &#120590; 2 &#119868;) &#119868; &#119873; &#215; &#119873; PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)Manuscript to be reviewedComputer Scienceidentity matrix, where is the size of input data (batch output of the generator), and is the &#119873; &#120590; 2</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>max &#120579; &#119863; [&#120124; &#119868; &#119865; ~ &#119875;(&#119868; &#119865; ) &#119897;&#119900;&#119892; &#119863; &#120579; &#119863; (&#119868; &#119865; ) + &#120124; &#119868; &#119875; ~ &#119875;(&#119868; &#119875; ) &#119897;&#119900;&#119892; (1 -&#119863; &#120579; &#119863; (&#119866; &#120579; &#119866; (&#119868; &#119875; )))] (7) PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>|</ns0:head><ns0:label /><ns0:figDesc>of the ground-truth category of . be the composition image of the four local facial patches &#119868; &#119868; &#119892;&#119905; &#119897;&#119900;&#119888;&#119886;&#119897; of , and be the composition image of the four local facial patches of . &#119868; &#119892;&#119905; (&#119909;,&#119910;) -&#119868; &#119901;&#119903;&#119890;&#119889; (&#119909;,&#119910;)| (9)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>&#119910;) -&#119865; &#119901;&#119903;&#119890;&#119889; &#119897; (&#119909;,&#119910;)| (11) where and respectively denote the feature map of the last two layers of the light--CNN net. 
It is expected that a good synthesized frontal-&#119897; -&#119905;&#8462; (&#119897; = 1, 2) face image will have similar characteristics to its corresponding real frontal-face image. We employ a fully connected layer of the pre-trained light-CNN net for the feature extraction of the pre-trained recognition network. The pre-trained model will leverage the loss to enforce identitypreserving frontal view synthesis. Adversarial Loss ( ) &#119923; &#119938;&#119941;&#119959; PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>work. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Fig 6 and Fig 7 demonstrate the effectiveness of our method across a variety of poses and datasets. Similarly, Fig 8 shows the results of these datasets in close-up image with facial pose. 90 0Previous frontal view synthesis methods are usually based on a posture range of . It is &#177; 60 0 generally believed that if the posture is greater than , it is difficult to reconstruct the image of 60 0 the front view. Nonetheless, we will show that with enough training data and a properly designed loss function, this is achievable. In Fig9and Fig 10, we show that LFMTP-GAN can recover identity-preserving frontal faces from any pose, as well as comparing with state-of-the-art face frontalization methods, it performs better. In addition, our geometry estimation method does not require 3D geometry knowledge because it is driven by data alone.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Fig 6, Fig 7 and Fig 9) even under extreme face poses, its recognition performance is about higher than that 1.2% of TP-GAN. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>, we propose an LFM model for synthesizing a frontal-face image from a single image to further enhance the frontal-face images quality of the TP-GAN model. To accomplish our goal smoothly, we expand the existing generative global pathway with a well-constructed 2D face landmark localization to cooperate with the local pathway structure in a landmark sharing manner to incorporate empirical face pose into the learning process, and improve the encoderdecoder global pathway structure for better facial image features representation. Compared with TP-GAN, our method can generate frontal images with rich texture details and preserve the identity information. Face landmark localization allows us to restore the missing information of PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>The final output was obtained by integrating the global pathway with a 2D facial landmark localization to collaborate with the local pathway in a landmark sharing fashion.The dataset was downloaded from the official TP-GNA GitHub page: https://github.com/HRLTY/TP-GAN.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 8 A</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Detection: the goal of this step is to identify faces that are generated by local and global pathways. 
2-Facial landmarks such as the eye centers, tip of the nose, and mouth are located. 3-The feature extractor encodes identity information into a hightechnique is used to enhance the synthesis image textures detail by adding slightly modified copies of already existing data or by creating new synthetic data based on existing data.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,229.87,525.00,374.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,306.37,525.00,188.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,250.12,525.00,221.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,275.62,525.00,261.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>, the TP-GAN framework architecture consists of two stages. The first stage is a generator of two-pathways CNN that is parameterized by . Each</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#119866; &#120579; &#119866;</ns0:cell><ns0:cell>&#120579; &#119866;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>pathway has encoder-decoder</ns0:cell><ns0:cell>{&#119866; &#120579; &#119864; , &#119866; &#120579; &#119863;</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell cols='2'>structure and combination of loss functions, a local</ns0:cell></ns0:row><ns0:row><ns0:cell>pathway</ns0:cell><ns0:cell>{&#119866; &#120579; &#119864; &#119897; , &#119866; &#120579; &#119863; &#119897;</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell cols='5'>of four landmark patch networks</ns0:cell><ns0:cell>&#119866; &#120579; &#119897; &#119894;</ns0:cell><ns0:cell>, &#119894; &#8712; {0, 1, 2, 3}</ns0:cell><ns0:cell>to capture the local</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>texture around four facial landmarks, and one global network</ns0:cell><ns0:cell>{&#119866; &#120579; &#119864; &#119892; , &#119866; &#120579; &#119863; &#119892;</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell>to process the global</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>face structure. Furthermore, the bottleneck layer</ns0:cell><ns0:cell>(&#119866; &#119892; )</ns0:cell><ns0:cell>, which is the output of</ns0:cell><ns0:cell>, is typically &#119864; &#120579; &#119866; &#119892;</ns0:cell></ns0:row></ns0:table><ns0:note>used for classification tasks with the cross-entropy loss . A global pathway helps &#119871; &#119888;&#119903;&#119900;&#119904;&#119904; -&#119890;&#119899;&#119905;&#119903;&#119900;&#119901;&#119910;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>(&#119868; &#119865; ) profile image . During the training phase of such networks, pairs of corresponding (&#119868; &#119875; ) {&#119868; &#119865; , &#119868; &#119875; } from multiple identities are required. Input and output are both based on a pixel space of &#119910; &#119868; &#119875; &#119868; &#119865; size with a color channel . We aim to learn a synthesis function that can output a &#119882; &#215; &#119867; &#215; &#119862; &#119862;</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='6'>frontal view when given a profile image. This section will be omitted since it was already</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>explained in TP-GAN architecture. 
Optimizing the network parameters $\theta_G$ starts with minimizing the specifically designed synthesis loss $L_{syn}$ together with the aforementioned $L_{cross\text{-}entropy}$. For a training set with $N$ training pairs $\{I_F^n, I_P^n\}$, the optimization problem is expressed as follows:

$$\hat{\theta}_G = \mathop{\mathrm{argmin}}_{\theta_G} \frac{1}{N}\sum_{n=1}^{N}\Big\{ L_{syn}\big(G_{\theta_G}(I_P^n),\, I_F^n\big) + \alpha\, L_{cross\text{-}entropy}\big(G_{\theta_E^g}(I_P^n)\big) \Big\},$$

where the cross-entropy term is computed on the bottleneck output of the global-pathway encoder $G_{\theta_E^g}$. Essentially, $G_{\theta_E^g}$ encodes the input data $x \in \mathbb{R}^N$ into a hidden representation $h = f_e(x;\theta) \in \mathbb{R}^M$, where $f_e$ is the encoder and $\theta$ denotes the learning parameters of the DAE; $G_{\theta_D^g}$ then decodes the hidden representation into a reconstruction of the input data, $\hat{x} = f_d(h;\theta)$. The objective in learning the parameters $\theta$ of $\{G_{\theta_E^g}, G_{\theta_D^g}\}$ is to minimize the reconstruction error between $x$ and $\hat{x}$; $\partial E_{DAE}/\partial\wedge$ differentiates this reconstruction error with respect to the weights, $\Delta\wedge$ is the corresponding change in weights, and the weights $\wedge$ can then be learned by gradient descent on the DAE's cost function. Usually, there is some constraint on $\{G_{\theta_E^g}, G_{\theta_D^g}\}$ to prevent it from learning an identity transformation. For example, if the dimension of $h$ is much smaller than that of the input data, i.e. $M \ll N$, then $\{G_{\theta_E^g}, G_{\theta_D^g}\}$ behaves similarly to principal component analysis (PCA); if the hidden representation is constrained so that $\|h\|_1$ is small, then $G_{\theta_E^g}$ functions similarly to sparse coding.
The DAE</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>tries to remove noise from input data. Let input . DAE takes as an input, then outputs a denoising signal be a noisy input by adding to an original &#119909; = &#119909; + &#119907; &#119907; . The &#119909; &#119909; &#119909; = &#119891; &#119889; (&#119891; &#119890; (&#119909;; &#120579;);&#120579;)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>objective function of a such technique is to minimize the error between and by adjusting , &#119909; &#119909; &#120579;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>i.e., it tries to reconstruct the actual content well while not reconstructing noise. DAE can also be</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>viewed as a generative model.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Adversarial Networks</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>Following (Ian et al., 2014) work, adversaries network consists of two components</ns0:cell><ns0:cell>(&#119866;)</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>(&#119863;)</ns0:cell><ns0:cell>.</ns0:cell></ns0:row></ns0:table><ns0:note>'The loss function reflects the difference in distribution between the generated and original data'. We will first review some technical aspects of the training process, and then the adversaries' network. In frontal view synthesis, the aim is to generate a photorealistic andPeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66723:2:0:NEW 25 Jan 2022) Manuscript to be reviewed Computer Science identity-preserving frontal view image from a face image under a different pose, i.e. a</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>Each individual loss function is presented below, and the used symbols are defined here. Let be an output image. &#119868;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Let and be the width and height of , &#119882; &#119867; &#119868; (&#119909;,&#119910;)</ns0:cell><ns0:cell>be a pixel coordinate of a 2D image,</ns0:cell><ns0:cell>&#119868; &#119901;&#119903;&#119890;&#119889;</ns0:cell><ns0:cell>be the</ns0:cell></ns0:row></ns0:table><ns0:note>predicted (i.e., synthesized) frontal-face image of , be the representative frontal-face image &#119868; &#119868; &#119892;&#119905;</ns0:note></ns0:figure> </ns0:body> "
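To make the combined generator objective above concrete, here is a minimal NumPy sketch (not the authors' implementation) of how a weighted synthesis loss could be assembled from a pixel-wise term, an identity-preserving term computed on features from a pre-trained recognition network such as Light-CNN, and an adversarial term. The weight values, the feature dimensionality, and the discriminator score used below are hypothetical placeholders.

```python
# Minimal sketch of a weighted multi-term generator loss; the weights and the
# stand-in feature vectors / discriminator score are hypothetical placeholders.
import numpy as np

def pixel_wise_loss(I_pred, I_gt):
    """Mean absolute (L1) difference over all W x H x C pixel values."""
    return np.mean(np.abs(I_pred - I_gt))

def identity_preserving_loss(feat_pred, feat_gt):
    """L1 distance between identity features of the synthesized and real
    frontal faces, e.g. taken from a fully connected layer of a pre-trained
    recognition network (here the features are simply given as vectors)."""
    return np.mean(np.abs(feat_pred - feat_gt))

def adversarial_loss(d_pred):
    """Generator-side adversarial term: push the discriminator score
    D(I_pred) for the synthesized frontal face towards 1."""
    eps = 1e-8
    return -np.mean(np.log(d_pred + eps))

def synthesis_loss(I_pred, I_gt, feat_pred, feat_gt, d_pred,
                   w_pix=1.0, w_id=0.1, w_adv=0.01):
    """Weighted combination of the individual generator loss terms."""
    return (w_pix * pixel_wise_loss(I_pred, I_gt)
            + w_id * identity_preserving_loss(feat_pred, feat_gt)
            + w_adv * adversarial_loss(d_pred))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I_gt = rng.random((128, 128, 3))                        # real frontal face
    I_pred = I_gt + 0.05 * rng.standard_normal(I_gt.shape)  # synthesized face
    feat_gt = rng.random(256)                               # identity features of I_gt
    feat_pred = feat_gt + 0.01 * rng.standard_normal(256)   # features of I_pred
    d_pred = np.array([0.7])                                # discriminator output
    print("L_syn =", synthesis_loss(I_pred, I_gt, feat_pred, feat_gt, d_pred))
```

The relative weights control the trade-off between photorealism and identity preservation that the two-pathway generator is designed to balance.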
"Dear editor / reviewers, Thank you for providing an opportunity to resubmit a revised draft of '2D Facial Landmark Localization Method for Multi-view Face Synthesis Image Using a Two Pathway Generative Adversarial Network Approach ' to PeerJ journal. Thanks to the editor and reviewers for their time and effort in providing feedback on our manuscript, as well as their insightful comments. We have incorporated most of your suggestions. • There are many loss functions that are perplexing. Please, tidy up this part to avoid any misunderstanding by readers. • Thanks for the highlighted comments, which we will use to improve this work. The loss functions are summarized and clarified in the summary paragraph. It appears at line 510. • There is a typo in line 313 'is the a set'. • Thanks for taking the time to comment and help us improve this work. Our manuscript has been revised to remove these words to avoid any confusion. • Also, in Line 353 'the set of database'. • We appreciate you bringing this issue to our attention once again. The words appearing on line 353 have been revised. Note: The revised manuscript will include some minor changes. 1- We have revised Local Pixel-Wise Loss TO Local Pixel-Wise Loss 2- Table 2 has been revised. We remove the Parameters name “Adversarial Loss () and its value and placed into Generator Loss Functions ”. We have revised this to avoid confusion regarding the loss functions and not parameter settings.   "
Here is a paper. Please give your review comments after reading it.
359
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Pylogeny is a cross-platform library for the Python programming language that provides an object-oriented application programming interface for phylogenetic heuristic searches. Its primary function is to permit both heuristic search and analysis of the phylogenetic tree search space, as well as to enable the design of novel algorithms to search this space. To this end, the framework supports the structural manipulation of phylogenetic trees, in particular using rearrangement operators such as NNI, SPR, and TBR, the scoring of trees using parsimony and likelihood methods, the construction of a tree search space graph, and the programmatic execution of a few existing heuristic programs. The library supports a range of common phylogenetic file formats and can be used for both nucleotide and protein data. Furthermore, it is also capable of supporting GPU likelihood calculation on nucleotide character data through the BEAGLE library.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>analysis of the phylogenetic tree search space, as well as the design of novel algorithms to search this space. This framework is written in the Python programming language, yet it uses efficient auxiliary libraries to perform computationally expensive steps such as scoring. As a programming interface, Pylogeny addresses the needs of both researchers and programmers who are exploring the combinatorial problem associated with phylogenetic trees.</ns0:p><ns0:p>The phylogenetic tree search space describes the combinatorial space of all possible phylogenetic trees for a set of operational taxonomic units. This forms a finite graph where nodes represent tree solutions and edges represent rearrangement between two trees according to a given operator. Operators include Nearest Neighbor Interchange (NNI), Subtree Prune and Regraft (SPR), and Tree Bisection and Reconnection (TBR), most of which are implemented presently in Pylogeny <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. These nodes can be evaluated for fitness against sequence data. We can also evaluate properties of the space such as location of local and global maxima, and the quantity of the latter. The presence of multiple maxima is a confounding factor in heuristic searches which may lead to drawing incorrect conclusions on evolutionary histories.</ns0:p><ns0:p>The source code and library requires only a small number of dependencies. Python dependencies include NumPy <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>, a ubiquitous numerical library, NetworkX <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>, a graph and network library, Pandas <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>, a high-performance library for numerical data, and P4 <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, a phylogenetic library. An additional dependency that is required is a C phylogenetic library called libpll that underlies the RAxML application and is used to score likelihood of trees <ns0:ref type='bibr' target='#b2'>[3,</ns0:ref><ns0:ref type='bibr' target='#b12'>12]</ns0:ref>. Optionally, the BEAGLE <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> package could be used for scoring as well. Most dependencies are automatically resolved by a single command by installing the package from the PyPi Package Index. Primary documentation is available on the library's website and alongside the library. 
All major classes and methods also possess documentation that can be accessed using a native command.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Features</ns0:head><ns0:p>The functionality to maintain a phylogenetic landscape is implemented in the landscape class defined in the landscape module of this library. This object interacts with a large number of other classes and supports tree scoring using standard phylogenetic methods. Many of the more relevant objects are tabulated and explained in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. A large coverage of unit testing has been performed on most of the major features including tree rearrangement, heuristic exploration, and landscape construction.</ns0:p><ns0:p>The Pylogeny library can read sequence alignments and character data in formats including FASTA, NEXUS, and PHYLIP. Tree data can only currently be read in a single format with future implementations to allow for a greater breadth of formats. Persistence and management of character data is performed by an alignment module, while trees are stored by their representative string in a tree module.</ns0:p><ns0:p>They can be instantiated into a richer topology object in order to manipulate and rearrange them.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Phylogenetic Tree Manipulation and Scoring</ns0:head><ns0:p>For the purposes of this framework, if instantiated into a topology object, phylogenetic trees are modelled in memory as being rooted. Therefore, manipulation and access of the tree components, such as nodes and edges, presupposes a rooted structure. Unrooted trees, either multifurcating or bifurcating, can nevertheless still be output and read. Support is also present for splits or bipartitions (as in the bipartition object) of these trees, required by many phylogenetic applications such as consensus tree generation <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>.</ns0:p><ns0:p>Iterators can be created for visiting different elements in a tree. Unrooting, rerooting, and other simple manipulation can also be performed on a tree. For more complex manipulation, rearrangement operators (using the rearrangement module) can be performed on a tree to convert it to another topology. To save memory and computation, rearrangements are not performed unless the resultant structure is requested, storing movement information in a transient intermediate structure. This avoids large-scale instantiations of new topology objects when exploring the search space.</ns0:p><ns0:p>Scoring topologies using parsimony or likelihood is done by calling functions present in the library that wrap libpll or the high-performance BEAGLE library. These software packages are written in C or C++, the latter of which allows for increased performance by using the Graphics Processing Unit (GPU) found in a computer for processing.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Tree Search Space Graph Construction and Search</ns0:head><ns0:p>The tree search space is abstracted as a graph where a number of graph algorithms and analyses can be performed on it. We do this by utilizing routines found in the NetworkX library which has an efficient implementation of the graph in C. Accessing elements of the graph can be done by iteration or by node name, and properties of the space can be identified by function. Represents a bipartition of a phylogenetic tree. A branch in a phylogenetic tree defines a single bipartition that divides the tree into two disjoint sets of leaves. 
tree tree Represents a phylogenetic tree which does not contain structural information and only defines features such as its Newick string, fitness score, and origin. treeSet tree Represents an ordered, disorganized collection of trees that do not necessarily comprise a combinatorial space.</ns0:p><ns0:p>Exploring the space is done by performing rearrangements on trees as topology objects where different methods of exploration include a range of enumeration and stochastic-based sampling approaches. In order to make Newick strings consistent amongst trees in a phylogenetic tree search space, an arbitrary but efficient rooting strategy is used to avoid redundancy. Rearranging trees in the search space reroots resultant trees to the lexicographically lowest-order taxa name. This means that different rearrangements that lead to the same topology, with a possibly different ordering of leaves, can still be recognized as not being a new addition to the space. Restriction on this exploration is supported by allowing limitations on movement by disallowing breaking certain bipartitions.</ns0:p><ns0:p>A minimal example to demonstrate constructing a landscape from an alignment file, and finding trees in the space, is found below. The landscape is initialized with a single tree corresponding to an approximate maximum likelihood tree as determined using the FastTree executable <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>. The variable trees now holds a list of integers. These integers correspond to the names of new trees that have populated our landscape object. These new trees comprise the neighbors of the starting tree, a tree found using FastTree. One could now query the landscape object for new information such as listing these neighbors or writing all of the Newick strings of these trees. Reviewing Manuscript # See trees around the starting tree.</ns0:p><ns0:p>for i in trees: # Iterate over these.</ns0:p><ns0:p># Print their Newick strings. print lscape.getTree(i)</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Applying Search Space Heuristics</ns0:head><ns0:p>Performing a heuristic search of the combinatorial space comprised by a phylogenetic landscape can be done with relative ease using this library. Not only can the heuristic's steps be later analysed, the resulting space that is explored can also be later viewed and investigated for its properties. The heuristic module has a number of already defined approximate methods to discover the global maximum of the space, and with understanding of the object hierarchy, one can create their own.</ns0:p><ns0:p>As an example, one could perform a greedy hill-climbing heuristic on the search space by comparing the trees' parsimony scores. To do this, they would instantiate a parsimonyGreedy object from the heuristic module and provide an existing landscape and tree in that landscape to start the climb at.</ns0:p><ns0:p>The minimal code to achieve a search from the first tree of a landscape would be: We have applied a heuristic to the landscape which has populated it with new trees. Nothing is returned here. In order to investigate what new trees have been added, we can query the heuristic object.</ns0:p><ns0:p>Furthermore, we can access these new trees from the landscape object. newTrees = h.getPath() # List of tree names.</ns0:p><ns0:p>for name in newTrees:</ns0:p><ns0:p># Visit all trees found by heuristic. 
tree = lscape.getTree(name) print tree.getScore() # Print scores.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Existing Phylogenetic and Heuristic Programs</ns0:head><ns0:p>The library supports executing other software on its objects. Implementations are present in the framework to call on the FastTree <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> and RAxML heuristics for finding an approximate maximum likelihood tree. There is also an implementation for the use of TreePuzzle <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> and CONSEL <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> in order to acquire confidence interval of trees as defined by the Approximately Unbiased (AU) test <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. Further implementations can be created by extending a base interface found in the library.</ns0:p><ns0:p>An example of code to demonstrate the use of CONSEL, to generate a confidence interval of trees with default settings, is as follows. We now have a treeSet, or collection, of tree objects assigned to the variable interval which have been deemed to be significant and relevant.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Other Libraries</ns0:head><ns0:p>Other Python libraries that perform similar tasks to this framework include DendroPy <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>, ETE, and the P4 Phylogenetic Library. Elements of alignment management and tree manipulation are present Reviewing Manuscript in all three of these libraries, but none of them allow for the manipulation and heuristic search of a combinatorial space of phylogenetic trees. There remains a deficiency for this functionality across many languages, Python included. Despite this, this framework can serve to work alongside such libraries for further power.</ns0:p><ns0:p>DendroPy possesses a number of metrics and other tree operations that are not present in Pylogeny.</ns0:p><ns0:p>This library supports translating its tree structure to the structure found in DendroPy. Therefore, these operations can still be accessed. Similarly, ETE allows for a number of rich visualization techniques not possible with this framework that can also be accessed in such a manner. A small part of the Pylogeny library is built on the P4 Phylogenetic Library and the P4 library performs a number of operations that are found in this framework, such as scoring and manipulation of trees. We, however, did not find P4</ns0:p><ns0:p>as flexible an API as it appears to be designed for scripting and for work in a terminal rather than as a component of a larger application. For example, P4 likelihood calculations are printed to the standard output rather than returned from a function.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>from pylogeny.alignment import * from pylogeny.landscape import landscape # Open an alignment compatible with the strict # PHYLIP format. ali = phylipFriendlyAlignment('al.fasta') startTree = ali.getApproxMLTree() # Create the landscape with a root tree. lscape = landscape(ali,starting_tree=startTree, root=True) # Explore around that tree. 
trees = lscape.exploreTree(lscape.getRoot())</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>5</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comp Sci reviewing PDF | (CS-2015:02:3980:1:0:REVIEW 30 Apr 2015)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>from pylogeny.alignment import alignment from pylogeny.landscape import landscape from pylogeny.heuristic import parsimonyGreedy # ali = Open an alignment file. # lscape = Construct a landscape. # ... h = parsimonyGreedy(lscape,lscape.getRootNode()) h.explore()</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>from pylogeny.alignment import alignment from pylogeny.executable import consel # ali = Open an alignment file. # trees = A set of trees (e.g., a landscape). # ... AUTest = consel(trees,ali,'AUTestName') interval = AUTest.getInterval()</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overview of the basic objects in the Pylogeny library. Allows one to write a landscape object to a file, including alignment and tree information. landscapeParser landscapeWriter Allows one to parse a landscape that was written to a file.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class Name</ns0:cell><ns0:cell>Module Name</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>alignment</ns0:cell><ns0:cell>alignment</ns0:cell><ns0:cell>Represents a biological sequence alignment of character data</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>to enable phylogenetic inference and other operations.</ns0:cell></ns0:row><ns0:row><ns0:cell>treeBranch</ns0:cell><ns0:cell>base</ns0:cell><ns0:cell>Represents a branch in a tree structure, such as a phylogenetic</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tree, and its associated information.</ns0:cell></ns0:row><ns0:row><ns0:cell>treeNode</ns0:cell><ns0:cell>base</ns0:cell><ns0:cell>Represents a node in a tree structure, such as a phylogenetic</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tree, and its associated information.</ns0:cell></ns0:row><ns0:row><ns0:cell>treeStructure</ns0:cell><ns0:cell>base</ns0:cell><ns0:cell>A collection of treeNode and treeBranch objects to comprise</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>a tree structure.</ns0:cell></ns0:row><ns0:row><ns0:cell>executable</ns0:cell><ns0:cell>executable</ns0:cell><ns0:cell>An interface for the instantation and running of a single call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>of some given binary application (in a Unix shell).</ns0:cell></ns0:row><ns0:row><ns0:cell>heuristic</ns0:cell><ns0:cell>heuristic</ns0:cell><ns0:cell>An interface for a heuristic that explores a state graph and all</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>associated metadata.</ns0:cell></ns0:row><ns0:row><ns0:cell>graph</ns0:cell><ns0:cell>landscape</ns0:cell><ns0:cell>Represents a state graph.</ns0:cell></ns0:row><ns0:row><ns0:cell>landscape</ns0:cell><ns0:cell>landscape</ns0:cell><ns0:cell>Represents a phylogenetic tree search space, modelled as a</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>graph.</ns0:cell></ns0:row><ns0:row><ns0:cell>vertex</ns0:cell><ns0:cell>landscape</ns0:cell><ns0:cell>Represents a single node in the phylogenetic landscape, asso-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>ciated with a tree, and adds 
convenient functionality to alias</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>parent landscape functionality.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>landscapeWriter landscapeWriter newickParser newick</ns0:cell><ns0:cell>Allows one to parse a Newick or New Hampshire format string</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>of characters representing a (phylogenetic) tree.</ns0:cell></ns0:row><ns0:row><ns0:cell>rearrangement</ns0:cell><ns0:cell>rearrangement</ns0:cell><ns0:cell>Represents a movement of a branch or node on one tree to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>another part of that same tree.</ns0:cell></ns0:row><ns0:row><ns0:cell>topology</ns0:cell><ns0:cell>rearrangement</ns0:cell><ns0:cell>An immutable representation of a phylogenetic tree where</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>movements can be performed to construct new topology or</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tree objects.</ns0:cell></ns0:row><ns0:row><ns0:cell>bipartition</ns0:cell><ns0:cell>tree</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comp Sci reviewing PDF | (CS-2015:02:3980:1:0:REVIEW 30 Apr 2015)</ns0:note> <ns0:note place='foot' n='4'>PeerJ Comp Sci reviewing PDF | (CS-2015:02:3980:1:0:REVIEW 30 Apr 2015)Reviewing Manuscript</ns0:note> <ns0:note place='foot'>PeerJ Comp Sci reviewing PDF | (CS-2015:02:3980:1:0:REVIEW 30 Apr 2015)Reviewing Manuscript</ns0:note> </ns0:body> "
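Pulling the separate snippets from the paper into a single workflow, the following minimal sketch chains the documented calls end to end. It assumes the pylogeny package and the external FastTree and CONSEL binaries are installed, uses 'al.fasta' as a placeholder alignment file, and passes the landscape itself to consel, as the paper's own comment suggests a landscape can serve as the tree set.

```python
# Minimal end-to-end sketch built only from calls shown in the paper's examples;
# 'al.fasta' is a placeholder and external binaries (FastTree, CONSEL) are assumed.
from pylogeny.alignment import phylipFriendlyAlignment
from pylogeny.landscape import landscape
from pylogeny.heuristic import parsimonyGreedy
from pylogeny.executable import consel

# Build a landscape seeded with an approximate maximum likelihood tree (FastTree).
ali = phylipFriendlyAlignment('al.fasta')
lscape = landscape(ali, starting_tree=ali.getApproxMLTree(), root=True)

# Enumerate the rearrangement neighbours of the starting tree.
neighbours = lscape.exploreTree(lscape.getRoot())

# Run a greedy parsimony hill-climb from the root tree and print the scores
# of the trees visited along the way.
h = parsimonyGreedy(lscape, lscape.getRootNode())
h.explore()
for name in h.getPath():
    print(lscape.getTree(name).getScore())

# Generate a confidence interval of trees via the AU test (CONSEL).
interval = consel(lscape, ali, 'AUTestName').getInterval()
```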
"Faculty of Computer Science 6050 University Ave, PO BOX 15000 Goldberg Computer Science Building Halifax, Nova Scotia, Canada B3H 4R2 April 30, 2015 Editorial Board PeerJ Computer Science Academic Editor Keith Crandall, Enclosed is a resubmission of a manuscript entitled “Pylogeny: an open-source Python framework for phylogenetic tree reconstruction and search space heuristics”. I am resubmitting this manuscript in response to a Revise decision made in March of 2015 (CS-2015:02:3980:1:0:NEW). This letter will serve to address points made by reviewers in the process leading up to this decision. Two reviews for my paper were received and greatly appreciated. I am happy to hear that you believe my paper to be appropriate and impactful to the field. Both reviewers, as you say, made a number of suggestions to improve the presentation of the software and its usability. I have spent a considerable amount of time making these prescribed changes to my software and feel that I have addressed their concerns. I also would like to express my gratitude for the ease and comfort of the peer review process at your journal so far. I will address both reviewers separately, addressing Reviewer 1, who chose to remain anonymous, first. The contention that this reviewer brought forward for the document was a suggestion regarding putting examples on the repository. This was a great suggestion that was shared with the second reviewer, and has been done. The repository now features a number of tutorials and ”recipes” for different tasks that the library can be used for which are explained step-by-step. To address the longer review by Jeet Sakumaran, Reviewer 2, I would first like to thank the reviewer for being transparent and incredibly helpful. When attending to his contentions regarding preliminary documentation and explanation under Basic Reporting and Experimental Design, changes that have been made were mostly done in the documentation present on the code repository. His first two suggestions for reporting were to list the relevant versions of all dependencies and to emphasize the use of the libpll library; both of these have been done. He also goes on to make suggestions for installation documentation – time was taken to install the software on a clean environment and to provide instructions reflecting this process. In the section entitled Validity of the Findings, Reviewer 2 puts forward contentions of the API documentation for the software. Many of his suggestions stem from a lack of thorough documentation of parameters and return values (inputs and outputs) for functions. All working examples in the manuscript, online, and in the API documentation have been improved to become more end-to-end as Reviewer 2 suggests as an alternative. I also expand on the coverage of tests and the support for DendroPy which I failed to push to release but is now present in the repository. Finally, I have chosen to, for the time being, dismiss the suggestion to follow the standard coding conventions recommended for Python as this is something planned for a future milestone and would take a considerable amount of time for relatively small reward. So, while the authors promise this will be done in a future release, upon resubmission this has not been done yet. Yours Faithfully, Alexander Safatli, MCS Corresponding Author Dalhousie University "
Here is a paper. Please give your review comments after reading it.
360
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Ascribing function to sequence in the absence of biological data is an on-going challenge in bioinformatics. Differentiating the toxins of venomous animals from homologues having other physiological functions is particularly problematic as there are no universally accepted methods by which to attribute toxin function using sequence data alone.</ns0:p><ns0:p>Bioinformatics tools that do exist are difficult to implement for researchers with little bioinformatics training. Here we announce a machine learning tool called 'ToxClassifier' that enables simple and consistent discrimination of toxins from non-toxin sequences with &gt; 99% accuracy and compare it to commonly used toxin annotation methods.</ns0:p><ns0:p>'ToxClassifer' also reports the best-hit annotation allowing placement of a toxin into the most appropriate toxin protein family, or relates it to a non-toxic protein having the closest homology, giving enhanced curation of existing biological databases and new venomics projects. 'ToxClassifier' is available for free, either to download ( https://github.com/rgacesa/ToxClassifier) or to use on a web-based server ( http://bioserv7.bioinfo.pbf.hr/ToxClassifier/).</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Falling costs of tandem mass spectrometry for shotgun proteomics have made generating vast amounts of protein sequence data increasingly affordable, yet the gap between obtaining these sequences and then assigning biological function to them continues to widen <ns0:ref type='bibr' target='#b10'>(Bromberg et al. 2009</ns0:ref>). Often, most sequences are deposited into protein databases with little, if any, accompanying experimental data from which biological functions can be inferred. Customarily, biological function is deduced indirectly by comparing amino acid sequence similarity to other proteins in large databases to calculate a ranking of proteins with respect to the query sequence.</ns0:p><ns0:p>Using simple pair-wise comparisons as a sequence searching procedure, the BLAST suite of programs (for example, BLASTp) was first of its kind and has gained almost unprecedented acceptance among scientists <ns0:ref type='bibr'>(Neumann et al. 2014)</ns0:ref>. Variations of the BLAST algorithm (for instance, PSI-BLAST <ns0:ref type='bibr' target='#b0'>(Altschul et al. 1997</ns0:ref>)) and development of probabilistic models (such as hidden Markov models, HMMs <ns0:ref type='bibr'>(Krogh et al. 1994</ns0:ref>)) use multiple sequence alignments to detect conserved sequences (also referred to as motifs). Use of these models enables detection of remote homology between proteins seemingly unrelated when analysed by pairwise alignment alone <ns0:ref type='bibr'>(Krogh et al. 1994)</ns0:ref>. Conversely, development of accurate algorithms and fast software tools that can automatically identify critical amino acid residues responsible for differences in protein function amongst sequences having exceptionally high sequence similarity remains a challenging problem for bioinformatics. In a post-genomic era, the toxins of venomous animals are an emerging paradigm. Animal venoms comprise predominantly toxic peptides and proteins. 
Duplication of genes that formerly encoded peptides and proteins having non-toxic physiological functions is one of the foremost evolutionary drivers that gives rise to the enormous functional diversity seen Manuscript to be reviewed</ns0:p><ns0:p>Computer Science in animal venom toxins <ns0:ref type='bibr'>(Fry 2005;</ns0:ref><ns0:ref type='bibr' target='#b12'>Chang &amp; Duda 2012)</ns0:ref>. However, evidence is ambiguous as to whether these genes were expressed in multiple body tissues, with the duplicate copy then recruited into venom glands with subsequent neo-functionalisation to develop toxicity, or if there was duplication with ensuing sub-functionalisation of genes encoding pre-existing but non-toxic venom gland proteins <ns0:ref type='bibr'>(Hargreaves et al. 2014)</ns0:ref>. Examples of both mechanisms have been demonstrated in different venomous animals <ns0:ref type='bibr'>(Reyes-Velasco et al. 2015;</ns0:ref><ns0:ref type='bibr'>Vonk et al. 2013;</ns0:ref><ns0:ref type='bibr'>Junqueira-de-Azevedo et al. 2015)</ns0:ref>. Nonetheless, many toxin proteins that constitute venoms share a remarkable similarity to other proteins with non-toxic physiological functions, and deciphering sequence data to disentangle these functions is not a trivial task <ns0:ref type='bibr'>(Kaas &amp; Craik 2015)</ns0:ref>. Previous proteomic data from our laboratory and subsequent results of others realised an astonishingly high sequence similarity between cnidarian (jellyfish, coral, sea anemones etc.) toxins and those of other higher venomous animals <ns0:ref type='bibr'>(Weston et al. 2012;</ns0:ref><ns0:ref type='bibr'>Weston et al. 2013;</ns0:ref><ns0:ref type='bibr'>Li et al. 2012;</ns0:ref><ns0:ref type='bibr'>Li et al. 2014)</ns0:ref>. This suggested to us that a small number of sequences, when occurring in combination, might explain this similarity and prompted the search for toxin-specific motifs <ns0:ref type='bibr'>(Starcevic &amp; Long 2013</ns0:ref>). An unsupervised procedure was developed that resulted in the identification of motifs we called 'tox-bits', and which could describe most toxins as combinations of between 2-3 'tox-bits' <ns0:ref type='bibr'>(Starcevic et al. 2015)</ns0:ref>. The 'tox-bits' are defined as HMM-profiles and were found to be superior at differentiating toxin from non-toxin sequences, when compared against standard BLAST or HMM based methods <ns0:ref type='bibr'>(Starcevic et al. 2015)</ns0:ref>.</ns0:p><ns0:p>However, implementation of 'tox-bits' HMM profiles is not straightforward for scientists with little or no bioinformatics experience. Hence, in this paper we introduce and make freely available an easy-to-use machine learning tool called the 'ToxClassifier' that employs 'tox-bits' 4) hmmerToxBits: This method is a variation of the naive-BLAST, using HMMER package Hmmsearch instead of BLAST, and the database of 'tox-bits' HMM models as the target database. A sequence is classified as a toxin if one or more HMMs can be detected within a certain e-value cut-off.</ns0:p><ns0:p>5) hmmerVenom: Modification of hmmerToxBits, the method uses HMM profiles extracted by 'venom' and 'toxin' text search of the Pfam database instead of 'tox-bit' HMMs.</ns0:p><ns0:p>6) twinHmmerPfam: A HMMER based variant of TriBLAST approach, this method performs hmmsearches against two HMM databases (toxin/positive HMMs and negative control) and compares bitscores. Sequence is annotated as a toxin if bitscore for toxin HMM database is higher. 
TwinHmmerPfam toxin HMMs were extracted from Pfam by keyword search for 'toxin' and 'venom', while negative control database is comprised from remainder of Pfam.</ns0:p><ns0:p>7) twinHmmerToxBits: A variation of twinHmmerPfam method, method compares hmmsearch against toxin HMMs and negative control. <ns0:ref type='bibr'>2016)</ns0:ref> was not tested as it was deemed too specialized and only accepts single sequence as input.</ns0:p></ns0:div> <ns0:div><ns0:head>User interface</ns0:head><ns0:p>The ToxClassifier web service front-end is implemented using HTML 5.1 Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>(</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The accuracy of 3 individual machine-learning classifiers to predict toxins from proteins having other physiological functions was assessed by training each classifier using 7 different annotation models. The learning classifiers were a Support Vector Machine (SVM) and Gradient</ns0:p><ns0:p>Boosted Machine (GBM) chosen as high-performing predictors, and a Generalised Linear Model (GLM) regarded as a simple classifier, but with which a baseline could be established that would allow comparison of the performance of the SVM and GBM machines. A detailed description of the annotation models is given in Methods section, but briefly the annotation models used the following sequence information from the training set as classifier inputs: either the frequency of amino acids (TBSim) or combinations of two amino-acids (BIF), the presence of absence or 'tox-bits' (SToxA), HMM scores for 'tox-bits' (SToxB), a selection of BLAST output covariants (TBEa) or a variation on TBSim and TBEa (TBEb). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the training sets were constructed, thereby allowing a comparative evaluation of training efficiency (Fig. <ns0:ref type='figure'>S1</ns0:ref>). The trained classifiers were then tested for prediction accuracy using the remaining 25% of sequences not included in the training set. Performance values were calculated to give an overall comparative classification of protein sequences as toxins or non-toxins (Table <ns0:ref type='table' target='#tab_9'>1</ns0:ref>). By comparing learning curves (Fig. <ns0:ref type='figure'>S1</ns0:ref>) and accuracy of predictions (Table <ns0:ref type='table' target='#tab_9'>1)</ns0:ref> BLAST and HMMER cut-off scores that gave the most precise toxin annotation. This value was 1.0e-20 for both BLAST and HMMER searches (Fig. <ns0:ref type='figure'>S2</ns0:ref>).</ns0:p><ns0:p>Machine learning classifiers were also evaluated against currently available published tools for toxin prediction and annotation; Animal toxin prediction server ClanTox (Kaplan et al. Manuscript to be reviewed would be considered a likely toxin, a combined score of &lt; 3 would be regarded as non-toxic, while an input sequence presenting with a score 4 or 5 would suggest a potential toxin, but would viper Bothrops atrox (Supplementary Data 1). The sequences had been annotated using standard methods and manually inspected, with the biological activities of some also being authenticated experimentally. 
The results of the 'ToxClassifier' predictions matched with the expert annotation (Table <ns0:ref type='table'>4</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>2007) was benchmarked using</ns0:head><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The continued decline in proteomics sequencing costs over recent years has led to an explosion in venomics data characterising the toxic peptide and protein components in many venomous animals (Kaas &amp; Craik 2015). However, there is currently no widely accepted and standard method for functional annotation of toxins from these data sources, leading to inconsistent estimates for the number of toxins in the venom of the same animal. Powers 2011) providing more appropriate measurements of performance than simple accuracy.</ns0:p><ns0:p>Another issue of toxin classification lies in unbalanced datasets, because most venomous animal genomes encode less than 100 toxins and 20,000 to 30,000 physiological non-toxic proteins; as a result, even a high performing method can generate a high number of false positive predictions. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For example, 99.5% correct prediction of non-toxins results in ~100 false positive toxins for an average animal proteome, which is in fact more than the actual number of true toxins. In order to minimize both of these problems, the <ns0:ref type='table' target='#tab_10'>2 and 3</ns0:ref>), ClanTox and ToxinPred tools were found to perform similar to BLAST based methods, while ToxClassifier demonstrated higher performance across all metrics, which is likely a result of comparatively larger training sets and combination of different internal classifiers.</ns0:p><ns0:p>In addition to high performance, the user interface of the 'ToxClassifier' web service Manuscript to be reviewed</ns0:p><ns0:p>Computer Science framework for toxin annotation to enable standardised toxin prediction in venomics projects and to allow for semi-automatic annotation or re-annotation of existing datasets. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, 9 of the annotation model and classifier combinations were chosen to construct the 'ToxClassifier' ensemble. The trained classifiers were: BIF_SVM, BIF_GBM, STB_SVM, STB_GBM, TBS_SVM, TBS_SVM, TBEa_SVM, TBEb_SVM and TBEb_GBM. These classifiers all gave excellent accuracy scores to predict toxins from the Positive dataset (range 0.82 to 0.96) and non-toxin proteins from the Easy, Moderate and Hard datasets (range 0.92 to 1.00). No GLM classifiers were included in the ensemble because prediction accuracies were considerably lower when compared with SVM and GBM machines. Classifiers using NTB and OF annotation models were also abandoned in favour of better performing STB and BIF models. Furthermore, the TBEa_GBM model consistently underperformed compared to the TBE_SVM model and was excluded, giving an odd number of classifiers in the ensemble, thereby avoiding a 'tied vote' scenario when the outputs were interpreted collectively. The prediction accuracy of the trained machine learning classifiers was next compared to more conventional annotation methods based on sequence alignment, to determine if machine learning predictions were superior or inferior to well established and accepted bioinformatics tools. A detailed description of the annotation models based on these bioinformatics tools is given in Methods. 
Briefly, simple predictions were made by taking the best-hit from BLAST comparisons between a query sequence and the UniProtKB/SwissProt-ToxProt database (naiveBLAST method), or the best-hit following a HMMER hmmsearch comparison between a PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016) Manuscript to be reviewed Computer Science query sequence and either existing HMM models for toxin protein families in the Pfam database (hmmerVenom model), or our own 'tox-bits' HMM models (hmmerToxbits classifier). More sophisticated annotation models also used BLAST or HMMER searches, but the best-hit was extracted following simultaneous comparisons between the query sequence and multiple datasets. These sophisticated annotation models are also described in Methods, but briefly these models were constructed from sequence information extracted from either UniProtKB/SwissProt-ToxProt sequences supplemented with additional toxin-like sequences from the UniProtKB/SwissProt and TrEMBL databases, or UniProtKB/SwissProt-ToxProt sequences supplemented with non-toxin sequences from the 'Moderate' and 'Hard' datasets used to train the machine classifiers. Training and test sets were analogous in design and execution to the machine classifier learning, with 75% of sequence information used to construct the BLAST and HMMER databases and the remaining 25% of data used to evaluate performance. Prediction accuracy measures for each query sequence using each of the bioinformatics models were repeated 10 times to give a final balanced accuracy value. Accuracy measure calculations are described in Methods. A range of sequence-alignment scoring was also tested to select the lowest</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>higher performance for negative prediction (predicting non-toxin as non-toxin) compared to positive prediction (correctly predicting toxin as toxic), with Specificity (Spec) and Negative Prediction Value (NPV) significantly higher than Sensitivity (Sens) and Positive Prediction Value (PPV).Each of the 9 machine learning classifiers used in the 'ToxClassifier' ensemble gives a simple bit (1 or 0) value as output to predict whether the likely biological activity of the input sequence is as a toxin (1), or has a non-toxic (0) physiological role and scores are summed into final prediction score ranging from 0 to 9. Evaluation of this final prediction score was performed on test sets obtained from randomly sampling 75% of sequences in the published annotated genomes from a selection of venomous animals (king cobra Ophiophagus hannah, the duck-billed platypus Ornithorhynchus anatinus, the snakelocks sea anemone Anemonia viridis, the starlet sea anemone Nematostella vectensis; and all proteins deposited in the UniProtKB/SwissProt and TrEMBL databases attributed to snakes, spiders, wasps and Conus snails) and other animals considered to be non-venomous (human Homo sapiens, the house mouse Mus musculus and the Burmese python Python bivittatus). Calibration was performed by assessing the performance measures of the Toxclassifier ensemble relative to prediction score; calibration curves for summary of all animal genome data are presented in Figure 1. When the average correct annotation of all input sequences for all genomes was calculated, a combined score from 5 out of the 9 classifiers giving correct classification provided a good balance between the detection of toxins and the filtering of non-toxins. 
Hence, a calibration for the ToxClassifier ensemble was possible where an input sequence giving a combined score of &gt; 6</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>For example, the venom of the duck-billed platypus Ornithorhynchus anatinus has only 6 toxins listed following manual annotation in the latest release of the UniProtKB/SwissProt-ToxProt database (11th May 2016), yet 107 putative toxins were identified by a simple pair-wise BLASTp search PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016) Manuscript to be reviewed Computer Science using venom gland transcriptome sequences as input to search the UniProtKB/SwissProt ToxProt database (Jungo et al. 2012). In addition to separate homology searching methods to interpret the same data, many venomics projects now also include different manual filtering steps as part of the annotation process (Rachamim et al. 2014; Gacesa et al. 2015), exacerbating the problem of results verification. In this study, a selection of machine learning-based classifiers implementing a range of BLAST and HMMER-based annotation models were trained on datasets of known toxins, protein sequences assumed to be non-toxic but with homology to known toxins, and predicted proteins encoded in the genome, transcriptome or proteome of a range of venomous and nonvenomous animals. A comparison between the results presented in Tables 1-3 demonstrated that the majority of the machine learning methods consistently out-performed standard bioinformatics approaches of functional annotation. Interestingly, all tested methods demonstrated higher performance for negative prediction (classification of non-toxic sequences) compared to positive classification (prediction of toxic sequences as toxins). These results demonstrate that differentiating between physiological toxin-like proteins and actual toxins is more difficult then prediction of random proteins as non-toxic, which is to be expected considering the similarity and common origin of many toxins and toxin-like sequences (Fry 2005; Chang &amp; Duda 2012; Hargreaves et al. 2014). As such, it is important to consider balanced performance measurements when assessing toxin classifiers, with scores such as F1-score and MCC value (Matthews 1975;</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,349.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>ToxinPred was not tested on animal data due to lack of short sequences available in these datasets. SpiderP server (Wong et al. 2013) was not tested as service was not available at the time and PredCSF server (Fan et al.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell cols='3'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>ToxClassifier meta-classifier was constructed from nine annotation model and classifier</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>combinations (BIF_SVM, BIF_GBM, STB_SVM, STB_GBM, TBS_SVM, TBS_SVM,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>TBEa_SVM, TBEb_SVM and TBEb_GBM). 
Each of nine classifiers reports 1 if the input</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>sequence is predicted as toxin or 0 if predicted as non-toxic, for final prediction score of 0 to 9.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Datasets used for calibration of meta-classifier were chosen from the set of venomous and non-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>venomous animals (human Homo sapiens, the house mouse Mus musculus, the Burmese python</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Python bivittatus, king cobra Ophiophagus hannah, the duck-billed platypus Ornithorhynchus</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>anatinus, the snakelocks sea anemone Anemonia viridis, the starlet sea anemone Nematostella</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>vectensis; and all proteins deposited in the UniProtKB/SwissProt and TrEMBL databases</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>attributed to snakes, spiders, wasps and Conus snails). All sequences were downloaded from</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>UniProtKB database with exception of Python bivittatus which was not available in UniProtKB</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>and was downloaded from NCBI protein database (Wheeler et al. 2003); all data is available at</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>https://github.com/rgacesa/ToxClassifier/datasets. Datasets were split into training sets</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>consisting of 75% of data and test sets including remaining 25% of sequences.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>containing 'toxin' or 'venom' keywords were ToxClassifier meta-classifier was calibrated by evaluating prediction score versus performance</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>removed. for each animal training set and for summary dataset constructed by combining animal training</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>BLAST databases for Naive-BLAST, OneBLAST and TriBLAST were constructed from 75% datasets with exclusion of Conus snail data, which was dropped due to suspected low quality of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>randomly sampled sequences in Positive, Easy, Moderate and Hard and performance was annotation. Calibrated ToxClassifier, with prediction score 5 or more as cut-off for positive</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>measured by annotating remainder of data. HMMER based methods were tested with 25% of classification, was tested on animal data test sets, and performance measures were compared to</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>random sequences from input datasets, using all appropriate HMM models. Database OneBLAST, NaiveBLAST models and ClanTox server. Data used for training and testing and all</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>construction and testing was repeated 10 times and results were averaged. calculated performance metrics are</ns0:cell><ns0:cell>available</ns0:cell><ns0:cell>at</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>ToxClassifier Meta-classifier calibration and testing https://github.com/rgacesa/ToxClassifier/datasets/toxclassifier_calibration_test.</ns0:cell></ns0:row><ns0:row><ns0:cell>ToxClassifier comparison to other published tools</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>PeerJ Comput. Sci. 
reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016)</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>For this model, positive database is composed from 'tox-bit' HMMs derived from Tox-Prot toxins (Starcevic et al. 2015), while negative control is Pfam database from which HMMsPeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016) Manuscript to be reviewed Computer Science Performance of ClanTox server (Kaplan et al. 2007) was tested on Positive, Easy, Moderate and Hard datasets and on animal data test sets. ToxinPred server (Gupta et al. 2013) was tested on 868 Positive dataset sequences of length up to 30 amino acids (ToxinPred sequence length limit) and on negative dataset composed 30 amino acid or shorter protein sequences randomly selected from UniProt database (5,673 non-duplicate, non-ToxProt sequences).</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Positive, East, Moderate and Hard datasets and summary of these</ns0:figDesc><ns0:table><ns0:row><ns0:cell>making it unsuitable for large scale testing. Final performance measures compared between the</ns0:cell></ns0:row><ns0:row><ns0:cell>different tools are listed in Table 2.</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing of sequence-alignment based annotation models (Table 2) demonstrated that the</ns0:cell></ns0:row><ns0:row><ns0:cell>simplistic methods (naiveBLAST, hmmerToxBits and hmmerVenom) gave high prediction</ns0:cell></ns0:row><ns0:row><ns0:cell>accuracies for sequences in the Easy dataset (ranging from 0.95 for hmmerVenom to 0.99 for</ns0:cell></ns0:row><ns0:row><ns0:cell>naiveBLAST), but underperformed in annotation of the physiological toxin-like sequences in the</ns0:cell></ns0:row><ns0:row><ns0:cell>Moderate and Hard datasets (accuracies ranging from 0.74 to 0.83 for Moderate and 0.07 to 0.23</ns0:cell></ns0:row><ns0:row><ns0:cell>Moderate datasets (accuracy 0.85 -0.999), but suffered from a high false positive rate when</ns0:cell></ns0:row><ns0:row><ns0:cell>sequences in the Hard dataset were analysed (accuracy 0.44). When compared to machine</ns0:cell></ns0:row><ns0:row><ns0:cell>learning-based methods, even the most accurate of the sequence alignment-based models</ns0:cell></ns0:row><ns0:row><ns0:cell>datasets. As ToxinPred (Gupta et al. 2013) tools predict only small peptide toxins, it was tested (oneBLAST and triBLAST) were surpassed by the majority of the machine learning based</ns0:cell></ns0:row><ns0:row><ns0:cell>using a subset of Positive dataset with sequences no longer than 30 amino acids (868 sequences) classifiers, especially by TBEa and TBEb models (SVM and GBM variants), which gave the</ns0:cell></ns0:row><ns0:row><ns0:cell>and separate negative dataset composed of 5,673 random short proteins from UniProtKB</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016) Manuscript to be reviewed Computer Science database. SpiderP (Wong et al. 2013) was not benchmarked as the server no longer seems publically available. Finally, PredCSF (Fan et al. 2016) is a conotoxin-specific tool and was deemed not comparable to general annotation tools; it also only allows single sequence input, for Hard dataset (the poor performance here, also evinced by the low F1 and MCC scores). 
More sophisticated BLAST-based methods (oneBLAST and triBLAST) gave very high prediction accuracy scores (0.93 -0.999) for sequences in the Easy and Moderate datasets, but somewhat lower performance on sequences in the Positive and Hard datasets (0.86 -0.90). Pfam-based twinHMMER gave the highest accuracy prediction for non-toxin sequences, but underperformed compared to the other annotation models against sequences in the positive toxin dataset (accuracy 0.56). The 'tox-bits' based variant accurately predicted sequences in the Easy and highest accuracy of prediction for sequences in all test datasets. All prediction methods showed PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>Kaplan et al. 2007). Performance measurements are reported in Table3and comparison of F1-scores and MCC values across all datasets is presented in Figures2, S3/A and S3/B. Finally, the 'ToxClassifier' was assessed in a blinded experiment that used as input a set of protein sequences derived from the venom gland transcriptome of the Amazonian rain forest pit</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Apweiler 2001).</ns0:cell></ns0:row><ns0:row><ns0:cell>Performance of calibrated 'ToxClassifier' meta-classifier was evaluated on a test set</ns0:cell></ns0:row><ns0:row><ns0:cell>comprising 25% of the animal genome data not used for calibration; these results were compared</ns0:cell></ns0:row><ns0:row><ns0:cell>to naiveBLAST and OneBLAST conventional methods and to ClanTox server for animal toxin</ns0:cell></ns0:row><ns0:row><ns0:cell>prediction (</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016)Manuscript to be reviewed Computer Science require manual evaluation using additional tools, for example, InterProScan (Zdobnov &amp;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>ToxClassifier training scheme was conservative, using only well-annotated toxins from UniProtKB/ToxProt database as positives, and while this might lead to a somewhat lower positive prediction rate (due to missing likely toxins which are not annotated as such), it does serve to minimise the false positive rate.Notably, predictions were less accurate on some genome datasets, especially Conus snail proteins, with low performance metrics observed for all tested annotation methods. This discrepancy was likely caused by the assumption that sequences deposited in the UniProtKB/SwissProt-ToxProt sequence are bona fide toxins, while sequences in the UniProtKB/SwissProt and TrEMBL databases without 'toxin' or 'venom' keywords are not toxins. Given that toxin activity is attributed to most sequences without biological validation, it is</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='6'>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>specific</ns0:cell><ns0:cell>animal</ns0:cell><ns0:cell cols='2'>origins.</ns0:cell><ns0:cell>For</ns0:cell><ns0:cell cols='2'>example,</ns0:cell><ns0:cell>SpiderP</ns0:cell><ns0:cell>(Wong</ns0:cell><ns0:cell>et</ns0:cell><ns0:cell>al.</ns0:cell><ns0:cell>2013)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>while ClanTox (Kaplan et al. 
2007) (http://www.clantox.cs.huji.ac.il/tech.php) was trained only</ns0:cell></ns0:row><ns0:row><ns0:cell>on</ns0:cell><ns0:cell>an</ns0:cell><ns0:cell cols='2'>ion-channel</ns0:cell><ns0:cell>toxin</ns0:cell><ns0:cell /><ns0:cell>dataset</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>PredCSF</ns0:cell><ns0:cell>(Fan</ns0:cell><ns0:cell>et</ns0:cell><ns0:cell>al.</ns0:cell><ns0:cell>2016)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>(http://www.csbio.sjtu.edu.cn/bioinf/PredCSF/) is conotoxin specific. In addition, the reported</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>training set sizes are low (for example ClanTox was trained on ~600 ion channel toxins; the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>ToxinPred toxin positive training set is 1805 sequences, while as of 11th May 2016, the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>UniProtKB/SwissProt-ToxProt database contained ~6500 sequences). None of the currently</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>available machine learning methods also gives a comparison with other currently used accepted</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>bioinformatics annotation methods. When compared to ToxClassifier and conventional</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>annotation tools (Tables</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='12'>likely that the training datasets almost certainly excluded a number of toxin sequences and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>included some yet unknown toxins as non-toxic. Another limitation of the 'ToxClassifier' lies in</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>the inherent bias of the training sets; an underrepresentation in sequences from certain animal</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>lineages, particularly the basal Metazoa, e.g. Cnidaria, could lead to incorrect assignment and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>suspicious quality of existing annotation of conotoxins is a reason to treat prediction on this</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>protein class with caution. To elevate these problems, 'ToxClassifier' has been designed to also</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>report sequences suspected to have closest homology to underrepresented taxa as 'suspicious</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>toxin' and recommends manual annotation with other tools, such as InterProScan (Zdobnov &amp;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Apweiler 2001).</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='11'>Use of machine learning for toxin prediction has been attempted before and a range of</ns0:cell></ns0:row></ns0:table><ns0:note>such tools exists; however, most of the available tools are heavily specialised for toxins of PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11574:1:1:NEW 17 Aug 2016)Manuscript to be reviewed (http://www.arachnoserver.org/spiderP.html) is a predictor for spider toxins while ToxinPred(Gupta et al. 
2013) (http://crdd.osdd.net/raghava/toxinpred/) predicts only small peptide toxins;</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparative accuracy values for the classification of protein sequences as toxins or non-toxins using 6 different annotation models trained on 3 separate classifier learning machines.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Classification scores for:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Annotatio n Model</ns0:cell><ns0:cell>Classifie r</ns0:cell><ns0:cell>Accuracy (Positive dataset) toxin</ns0:cell><ns0:cell>Accuracy (Easy dataset) non-toxin</ns0:cell><ns0:cell>Accuracy (Moderate dataset) non-toxin</ns0:cell><ns0:cell>Accuracy (Hard dataset) non-toxin</ns0:cell><ns0:cell cols='2'>PPV NPV</ns0:cell><ns0:cell cols='4'>Test Set Summary Sens. Spec. F1-value MCC</ns0:cell></ns0:row><ns0:row><ns0:cell>TBSim</ns0:cell><ns0:cell>GBM</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GLM</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.57</ns0:cell></ns0:row><ns0:row><ns0:cell>BIF</ns0:cell><ns0:cell>GBM</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GLM</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>SToxA</ns0:cell><ns0:cell>GVM</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>0.73</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>SVM</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>SToxB</ns0:cell><ns0:cell>GBM</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GLM</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.12</ns0:cell></ns0:row><ns0:row><ns0:cell>TBEa</ns0:cell><ns0:cell>GBM</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GLM</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row><ns0:row><ns0:cell>TBEb</ns0:cell><ns0:cell>GBM</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SVM</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GLM</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparative balanced accuracy values for classification of protein sequences as toxins or non-toxins using different annotation methodologies based on BLAST or HMMER alignments.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Classification scores for:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/><ns0:cell /></ns0:row><ns0:row><ns0:cell>Annotation Model</ns0:cell><ns0:cell>Tool</ns0:cell><ns0:cell>Accuracy (Positive toxin dataset)</ns0:cell><ns0:cell>Accuracy (Easy non-toxin dataset)</ns0:cell><ns0:cell>Accuracy (Moderat e non-toxin dataset)</ns0:cell><ns0:cell>Accuracy (Hard non-dataset) toxin</ns0:cell><ns0:cell>PPV</ns0:cell><ns0:cell cols='3'>Test Set Summary statistics V value NP Sens. Spec. F1-</ns0:cell><ns0:cell>MCC</ns0:cell></ns0:row><ns0:row><ns0:cell>naiveBLAST</ns0:cell><ns0:cell>BLAST</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell cols='2'>0.90 0.86 0.46</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>oneBLAST</ns0:cell><ns0:cell>BLAST</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell cols='2'>0.86 0.99 0.89</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>triBLAST</ns0:cell><ns0:cell>BLAST</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell cols='2'>0.87 0.97 0.78</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>hmmerToxBits</ns0:cell><ns0:cell>HMME R</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell cols='2'>0.91 0.87 0.48</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell>hmmerVenom</ns0:cell><ns0:cell>HMME R</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.23</ns0:cell><ns0:cell cols='2'>0.65 0.84 0.34</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.45</ns0:cell><ns0:cell>0.38</ns0:cell></ns0:row><ns0:row><ns0:cell>twinHmmerPfam</ns0:cell><ns0:cell>HMME R</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell cols='2'>0.56 0.98 0.78</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>twinHmmerToxBi ts</ns0:cell><ns0:cell>HMME R</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell cols='2'>0.85 0.92 0.59</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>ClanTox server</ns0:cell><ns0:cell>ML</ns0:cell><ns0:cell>0.66</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell cols='2'>0.66 0.95 0.65</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.66</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>ToxinPred server*</ns0:cell><ns0:cell>ML</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell cols='2'>0.55 0.98 0.82</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.66</ns0:cell><ns0:cell>0.63</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Dr Jaume Bacardit Academic Editor PeerJ Computer Science 16th October 2016 Dear Bacardit, Re: SREP-15-34058A We are grateful for the review of our manuscript and for the opportunity to provide revision. We provide major revision of the manuscript to address all of the concerns raised by the reviewers and technical points from PeerJ staff. These changes are highlighted in a marked revision as highlighted text and as comment balloons to indicate our response to each of the referees’ comments. We now address each of the comments in detail: Reviewer 1 (Jose Izarzugaza) Results: Reviewer Comment 1: In line 111, the authors state that 'any blastp hits with duplicated FASTA sequences in the Uniprot/SwissProt-ToxProt database were discarded'. Do the authors allow for any variability or do they refer to identical (100%) sequences here? Similarly, the authors should indicate how they handle internal homology in the training sets. Response: Identical FASTA sequences are considered as duplicates and have been removed from the datasets, including FASTA entries with different headers and identical sequences. Because toxic and physiological proteins often have very close sequence and, as far as we are aware there are no methods to differentiate toxin and toxin-like physiological proteins based on sequence similarity, a decision was made not to cluster data based on sequence similarity. Whilst potentially overtraining is a concern, no overtraining was observed in the learning curves and, the meta-classifier calibration and testing on further ‘animal’ training and test datasets should, therefore, minimise potential impact; in addition, the final classifier was also tested with totally new sequences to minimise potential overtraining bias. The Methods and Discussion have been updated to clarify the removal of duplication strategy used. Reviewer Comment 2: The manuscript lacks a detailed description of the creation of the k-folds. My concern is that the authors might be incurring in overtraining if similar (i.e. non-identical globally but closely identical in the tox-bits) signals stemming from closely related proteins are considered in the training and the evaluation sets. And of course, it hinders reproducibility. Response: Training was performed on training sets of 75% of input sequences, with 10 internal bootstraps; the Methodology and Result sections have been updated to clarify these details; also see response to Comment 1 for details on handling internal homology. Reviewer Comment 3: The threshold for the selection of valid results is decided after the training, and therefore, possibly introducing overfitting. The authors correctly incorporate a leave-one-out approach in the 25-fold cross validation and separate the dataset into training and evaluation. That would be a valid approach if there was no parameter selection in the assessment. As the threshold (i.e <3 non-toxic, 4-5 pot. toxic, 6> toxic) is used afterwards, the distribution of the cross-validation should include three sets: training, test and evaluation. A similar reasoning applies to the selection of the 9 internal methods for classification, as feature selection occurred after the training. Response: The Reviewer is correct; the threshold for selection of valid results is indeed a cause for concern and potential source of overtraining. 
To elevate these concerns, we have re-examined our selection thresholds by separating the animal genome data into a randomly selected training set (75% data) and test set (25% data); only the training set was used for meta-classifier calibration and results have been updated to reflect test-set only prediction quality. Figure 1 was redrawn to represent only the calibration data and, the test-set data is reported in a new table (Table 3). Considering the meta-classifier was tested on data not used for calibration and on totally new sequences (Table 4), and as only redundant or oversimplified models initially introduced as ‘proof of principle testing’ were dropped from the classifier, it is unlikely that selection of internal classifiers is now a source of overtraining. We thank the Reviewer for his insight and comments. Reviewer comment 4: Although the authors correctly include commonly used statistics for the evaluation of machine learning approaches in the Supplementary Note 1B, the numbers for these are not provided and the results are discussed mainly in terms of accuracy. In the problem at hand it is interesting to now the accuracy of prediction on the positive set but also the ability to recall sequences. Similarly, Figure 1 should be reconsidered. Probably providing together specificity and sensitivity (i.e. a ROC curve). The legend could still be used to display the performance for each cut-off valies and the values for the naiveBLAST and oneBLAST methods. From the observation of the figure, it is not clear to me how the performance was calculated in the manuscript. Again, the all statistics measuring performance are needed. In particular, those focusing on the positive (i.e. toxic proteins) part: precision, recall, f-score and MCC to cite some examples. From the observation of the figure, it seems that the authors present a method very good at predicting the negative class (non-toxic, majority class) but with low recall in the positive class (toxic). In general, comparisons in the manuscript should be done with respect to precision (accuracy on the positive class) rather than overall accuracy as the prediction of non-toxic proteins is not relevant in this case and constitutes a big majority of the training set. The same applies to Table 1. Please, report all statistics for all the methods to facilitate comparison. Or even include them in Figure 1 for direct comparison. In Table 3, please, provide the individual scores. Response: While we agree that a positive prediction of toxins is more difficult than a prediction of non-toxins, false positive hits are still a major problem in toxin prediction mainly due to the size of datasets. Indeed, false positive hits are a major shortcoming of ‘standard’ annotation methods such as the oneBLAST approach, as even 99.5% correct prediction of true negatives will still result in ~100 false positive hits in an average animal genome (more than the currently accepted number of true toxins in venomous snakes). However, we fully agree that listed scores are an oversimplification and have updated Tables 1-3 and text with appropriate measures for positive and negative predictions. Figure 1 was redrawn to show detailed performance metrics obtained on the training set used for calibration of the meta-classifier and, other classifier metrics were removed to clarify these metrics are training set only and not used for comparison with other classifiers. 
The newly introduced Table 4 displays comparison of test set statistics for the calibrated meta-classifier and other annotation methods, while the newly introduced Figure 2 shows a comparison of tested classifiers for each of test protein sets. Again, we thank the Reviewer for his insight and comments. Reviewer 1, Validity of the findings Reviewer comment 5: In the paragraph starting in line 246 the authors describe other approaches that, although limited in scope, address a similar problem. However, the performance of these specific method is not critically compared to the performance of the ToxClassifier presented here. The readership of the manuscript would definitely benefit from a detailed benchmark with these in addition to the other sequence based generic approaches. Response: We fully agree with the Reviewer and have performed benchmarks of listed methods using our positive and negative datasets where possible; the spiderP server is currently unavailable and the PredCSF server accepts only single sequence and is designed to work only with conotoxins and was, therefore, not tested. Results have been added to Tables 2 and 3 and are expanded in the Discussion section. Reviewer 1, Comments for the Author Reviewer comment 6: The Methods section needs improvement to reach publication standards. I would suggest that the information in the Supplementary Notes is expanded and included into the Methods section. In particular, the sections referring to the different annotation systems and the statistics used for the evaluation of the methods constitute a core part of the interpretation of the results and should be described carefully in the methods sections. I mentioned above some problems with respect to the generation of the training and evaluation sets in the k-fold cross validation approach used. This should constitute a section in the Methods as well. Response: The Methods section has been revised and expanded to include detailed descriptions of annotation systems, statistics and machine learning model details. Reviewer comment 7: In line 49, the sentence 'Thus, remote .... approach alone' might need rewording. The meaning is not clear to me in its current form. Response: The sentence has been reworded for clarity. Reviewer 2 (Anonymous) Experimental design The paper presents the results of the use of standard machine learning classification methods to distinguish between toxic and non-toxic peptides and proteins. Reviewer 2, comment 1: The experiments are well designed and the databases are constructed in a principled way. However, there is a lack of comparison with other relevant programs aimed at the same goal which are described in the paper but not used as a baseline to evaluate the validity of the proposed approach. Response: We fully agree with the Reviewer and have included appropriate comparisons with other relevant tools into Tables 2 and 3. Also see response to Reviewer 1, comment 4 Reviewer 2, comment 2: From the computational point of view not sufficient information is given about how the models were trained or how the hyper-parameters of the models where chosen. This would make the reproduction to the experiments troublesome. Response: The Methods section has been expanded to include details on model construction and training; we also provide scripts for feature extraction and datasets used for model training as well as trained models in downloadable material to facilitate examination of our results and data, to help with future reproduction. 
Also see response to Reviewer 1 comment 5 and comment 6. Reviewer 2, Validity of the findings Reviewer 2, comment 3: Although the experiments and results are valuable, I think the novelty of the paper is weak. The application of standard machine learning methods to toxic/non-toxic protein classification is interesting but it is not a clear innovation over other existing programs that use other similar techniques. I think that the author should justify the contribution of their paper to the field in addition to the seemingly good results achieved. Response: Aside from providing ‘good results’ and ‘valuable experiments and results’, our work demonstrates problems with currently used toxin detection and annotation methods. We feel that toxin annotation is clearly in need of more standardisation and quality control, hence this manuscript should be considered ground-breaking in venom research, not only for providing a new tool for toxin detection that can be easily used by non-bioinformaticians and computer scientists, but also for bringing attention to currently existing problems with toxin annotation methodology. However, we agree that our contribution might not be clear enough; therefore we have included detailed comparisons with other tools / methods and expanded the Discussion to re-enforce the importance of the results. Also see responses to reviewer 2, comment 1 and reviewer 1 comment 4. Technical changes from PeerJ staff: # Remove Supplemental Files from Manuscript Source File Response: Done # Supplemental Files Supplemental Information (figures, tables, data, movies, etc.) Files should be uploaded as separate files under the Supplemental Files section and referenced in the manuscript. Response: Done # Standard Manuscript Sections .Please ensure that you are following the PeerJ standard sections order. Response: The manuscript sections are now structured according to PeerJ standard sections order. # References In the reference section, please provide the full author name lists for any references with et al. Response: References are reformatted to display full author names. # Tables 1) In addition to any tables embedded directly in your text manuscript, please also upload the tables in separate Word documents. 2) The file should be named using the table number: Table1.doc, Table2.doc. Response: Tables are provided as separate Word documents named Table1.docx to Table4.docx # Figures 1) Please use numbers to name your files, example: Fig1.eps, Fig2.png. 2) Please combine any figures with multiple parts into single, labelled, figure files. Ex: Figs 1A and 1B should be one figure grouping the parts either next to each other or one on top of the other and only labeled “A” and “B” on the respective figure parts. Each figure with multiple parts should label each part alphabetically (e.g. A, B, C) and all parts should be submitted together in one file. 3) In addition to providing low res files embedded in the manuscript, please upload your high res figures in either EPS, PNG, JPG (photographs only) or PDF (vector PDF's only) as primary files. 4) Figures should be at least 900 by 900 pixels (without unnecessary white space around them). Response: High resolution figures are provided as extra PNG files named Fig1 etc. # Manuscript Source File 1) Please provide the clean unmarked source file (e.g. .DOCX, .DOC, .ODT) with no tracked changes shown, all tracked changes accepted and tracked changes turned off. 2) Please do continue to include low res files embedded in the manuscript. 
3) Please upload the manuscript file in the Revised Manuscript & Primary Files section. 4) If you uploaded a PDF because of formatting problems, please provide the source file as a Supplemental File and we will mark it as the correct file type as necessary if the manuscript is accepted. Response: Done Please contact me if you require any further information. Yours sincerely Paul F Long Professor of Marine Biotechnology & Therapeutics King’s College London Tel/fax: 020 7484 4842 (direct) E-mail: paul.long@kcl.ac.uk "
Here is a paper. Please give your review comments after reading it.
361
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The rapid advanced technological development alongside the Internet with its cutting-edge applications has positively impacted human society in many aspects. Nevertheless, it equally comes with the escalating privacy and critical cybersecurity concerns that can lead to catastrophic consequences, such as overwhelming the current network security frameworks. Consequently, both the industry and academia have been tirelessly harnessing various approaches to design, implement and deploy intrusion detection systems (IDSs) with event correlation frameworks to help mitigate some of these contemporary challenges. There are two common types of IDS: signature and anomalybased IDS. Signature-based IDS, specifically, Snort works on the concepts of rules.</ns0:p><ns0:p>However, the conventional way of creating Snort rules can be very costly and error-prone. Also, the massively generated alerts from heterogeneous anomaly-based IDSs is a significant research challenge yet to be addressed. Therefore, this paper proposed a novel Snort Automatic Rule Generator (SARG) that exploits the network packet contents to automatically generate efficient and reliable Snort rules with less human intervention. Furthermore, we evaluated the effectiveness and reliability of the generated Snort rules, which produced promising results. In addition, this paper proposed a novel Security Event Correlator (SEC) that effectively accepts raw events (alerts) without prior knowledge and produces a much more manageable set of alerts for easy analysis and interpretation. As a result, alleviating the massive false alarm rate (FAR) challenges of existing IDSs. Lastly, we have performed a series of experiments to test the proposed systems. It is evident from the experimental results that SARG-SEC has demonstrated impressive performance and could significantly mitigate the existing challenges of dealing with the vast generated alerts and the labor-intensive creation of Snort rules.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The advent of the Internet has come with the cost of wide-scale adoption of innovative technologies such as cloud computing <ns0:ref type='bibr' target='#b1'>(Al-Issa et al., 2019)</ns0:ref>, artificial intelligence <ns0:ref type='bibr' target='#b63'>(Miller, 2019)</ns0:ref>, the Internet of Things (IoT) (J. H. <ns0:ref type='bibr' target='#b75'>Park, 2019)</ns0:ref>, and vast ranges of web-based applications. Therefore, leading to considerable security and privacy challenges of managing these cuttingedge applications using traditional security and privacy protection mechanisms such as firewall, anti-virus, virtual private networks (VPNs), and anti-spyware <ns0:ref type='bibr' target='#b62'>(Meryem &amp; Ouahidi, 2020)</ns0:ref>. 
However, due to the vast range of competitive solutions such as higher efficiencies, scalability, reduced costs, computing power, and most importantly, the delivery of services, these technologies continue to revolutionize various aspects of our daily lives drastically, for instance, in the health care systems, research industry, government and private sectors, and most importantly, the global business landscape <ns0:ref type='bibr' target='#b105'>(Xue &amp; Xin, 2016)</ns0:ref>.</ns0:p><ns0:p>Moreover, countless types of cyber-attacks have evolved dramatically since the inception of the Internet and the swift growth of ground-breaking technologies. For example, social engineering or phishing <ns0:ref type='bibr' target='#b47'>(Kushwaha et al., 2017)</ns0:ref>, zero-day attack <ns0:ref type='bibr' target='#b38'>(Jyothsna &amp; Prasad, 2019)</ns0:ref>, malware attack <ns0:ref type='bibr' target='#b61'>(McIntosh et al., 2019)</ns0:ref>, denial of service (DoS) <ns0:ref type='bibr' target='#b101'>(Verma &amp; Ranga, 2020)</ns0:ref>, data breaches <ns0:ref type='bibr' target='#b88'>(Stefanova, 2018)</ns0:ref>, and unauthorized access of confidential and valuable resources <ns0:ref type='bibr' target='#b81'>(Saleh et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Additionally, according to the authors of <ns0:ref type='bibr' target='#b73'>(Papastergiou et al., 2020)</ns0:ref>, a nation's competitive edge in the global market and national security is currently driven by harnessing these efficient, productive, and highly secure leading-edge technologies with intelligent and dynamic means of timely detection and prevention of cyberattacks. Nevertheless, irrespective of the tireless efforts of security experts in defense mechanisms, hackers have always found ways to get away with targeted resources from valuable and most trusted sources worldwide by launching versatile, sophisticated, and automated cyber-attacks. As a result, causing tremendous havoc to governments, businesses, and even individuals <ns0:ref type='bibr' target='#b82'>(Sarker et al., 2020)</ns0:ref>. For instance, the authors of <ns0:ref type='bibr' target='#b14'>(Dama&#353;evi&#269;ius et al., 2021)</ns0:ref> intriguingly review various cyber-attacks and their consequences. Firstly, the paper highlights the estimated 6 trillion USD of cyber-crimes by 2021 and the diverse global ground-breaking cyber-crimes that could lead to the worldwide loss of 1 billion USD. Finally, the paper highlights a whopping 1.5 trillion USD of cyberattack revenues resulting from two to five million computers compromised daily. Furthermore, according to published statistics of AV-TEST Institute in Germany, during the year 2019, there were more than 900 million malicious executables identified among the security community, predicted to grow in subsequent years. Similarly, the past decade has witnessed devastating financial losses due to the disheartening cyberattacks and crimes, which significantly affect governments, organizations, and individuals, such as a data breach costing 8.19 million USD for the United States and 3.9 million USD on an average. 
Moreover, the Congressional Research Service of the USA has highlighted that cybercrime-related incidents have cost the global economy an annual loss of 400 billion USD.</ns0:p><ns0:p>Similarly, based on the 2017 Data breach statistics and the Symantec Internet Security Threat Report, 2016 alone recorded more than a whopping 3 billion zero-day attacks and an approximate 9 billion stolen data records since 2013 <ns0:ref type='bibr' target='#b41'>(Khraisat et al., 2019)</ns0:ref>. In addition, according to the authors of <ns0:ref type='bibr' target='#b31'>(Grammatikis et al., 2021)</ns0:ref>, the energy sector in Ukraine suffered catastrophic coordinated cyberattacks (APT) that led to a significant blackout affecting more than 225,000 people. They also highlighted similar alarming APT threats against the Electrical Power and Energy Systems, such as DragonFly, TRITON, and Crashoverride. Such scary incidents and security weaknesses can cause devasting consequences, such as benign disruption and sabotage that could significantly threaten individual lives and the global economy, leading to national security threats. Consequently, it is essential for security experts to design, implement and deploy robust and efficient cybersecurity frameworks to alleviate the current and subsequent alarming losses for the government and private sectors. Additionally, it is an urgent and crucial challenge to effectively identify the increasing cyber incidents and cautiously protect these relevant applications from such cybercrimes.</ns0:p><ns0:p>Therefore, the last few decades have witnessed Intrusion Detection Systems (IDSs) increasing in popularity due to their inherent ability to detect an intrusion or malicious activities in real-time. Consequently, making IDSs critical applications to safeguard numerous networks from malicious activities <ns0:ref type='bibr' target='#b16'>(Dang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b62'>Meryem &amp; Ouahidi, 2020)</ns0:ref>. Finally, James P. Anderson claimed credit for the inception of the IDS concept in his paper written in 1980 <ns0:ref type='bibr' target='#b5'>(Anderson, 1980)</ns0:ref>, highlighting various methods of enhancing computer security threat monitoring and surveillance.</ns0:p><ns0:p>Intrusion Detection is the procedure of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusion,' similarly, an intrusion is an attempt to bypass the security mechanisms of a network or a computer system, thereby compromising the Confidentiality, Integrity, and Availability (CIA) <ns0:ref type='bibr' target='#b39'>(Kagara &amp; Md Siraj, 2020)</ns0:ref>.</ns0:p><ns0:p>Moreover, an IDS is any piece of hardware or software program that monitors diverse malicious activities within computer systems and networks based on network packets, network flow, system logs, and rootkit analysis <ns0:ref type='bibr' target='#b9'>(Bhosale et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b52'>H. Liu &amp; Lang, 2019)</ns0:ref>. Misused detection (knowledge or signature-based) and anomaly-based methods are the two main approaches to detecting intrusions within computer systems or networks. 
Nevertheless, the past decade has witnessed the rapid rise of the hybrid-based technique, which typically exploits the advantages of the two methods mentioned above to yield a more robust and effective system <ns0:ref type='bibr' target='#b81'>(Saleh et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Misused IDS (MIDS) is a technique where specific signatures of well-known attacks are stored and eventually mapped with real-time network events to detect an intrusion or intrusive activities. The MIDS technique is reliable and effective and usually gives excellent detection accuracy, particularly for previously known intrusions. Nevertheless, this approach is questionable due to its inability to detect novel attacks. Also, it requires more time to analyze and process the massive volume of data in the signature databases <ns0:ref type='bibr' target='#b43'>(Khraisat et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b55'>Lyu et al., 2021)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b32'>(Jabbar &amp; Aluvalu, 2018)</ns0:ref> presented an exceptional high-level SIDS architecture, which includes both distributed and centralized modules that effectively enhanced the protection of IoT networks against internal and external threats. Furthermore, the authors exploit the Cooja simulator to implement a DoS attack scenario on IoT devices that rely on version number modification and 'Hello' flooding. Finally, the paper's findings claimed that these attacks might influence specific IoT devices' reachability and power consumption.</ns0:p><ns0:p>In contrast, the anomaly-based detection method relies on a predefined network behavior as the crucial parameter for identifying anomalies and commonly operates on statistically substantial network packets. For instance, incoming network packets or transactions are accepted within the predefined network behavior. Otherwise, the anomaly detection system triggers an alert of anomaly <ns0:ref type='bibr' target='#b39'>(Kagara &amp; Md Siraj, 2020)</ns0:ref>. It is essential to note that the main design idea of the anomaly detection method is to outline and represent the usual and expected standard behavior profile through observing activities and then defining anomalous activities by their degree of deviation from the expected behavior profile using statistical-based, knowledge-based, and machine learning-based methods <ns0:ref type='bibr' target='#b38'>(Jyothsna &amp; Prasad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Khraisat et al., 2020)</ns0:ref>. The acceptable network behavior can be learned using the predefined network conditions, more like blocklists or allowlists that determine the network behavior outside a predefined acceptable range. For instance, 'detect or trigger an alert if ICMP traffic becomes greater than 10% of network traffic' when it is regularly only 8%.</ns0:p><ns0:p>Finally, the anomaly-based approach provides a broader range of advantages such as solid generalizability, the ability to determine internal malicious activities, and a higher detection rate of new attacks such as the zero-day attack. Nevertheless, the most profound challenge is the need for these predefined baselines and the substantial number of false alarm rates (FAR) resulting from the fluctuating cyber-attack landscape <ns0:ref type='bibr' target='#b20'>(Einy et al., 2021)</ns0:ref>. 
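Although Snort itself is signature-based rather than anomaly-based, its built-in detection_filter option can approximate the simple rate-threshold idea sketched above. The following hand-written rule is only an illustration; the SID, count, and time window are arbitrary values chosen for this sketch and are not taken from any published ruleset:

alert icmp any any -> $HOME_NET any (msg:"ICMP rate unusually high - possible flood"; itype:8; detection_filter:track by_dst, count 500, seconds 60; classtype:attempted-dos; sid:1000010; rev:1;)

Unlike a genuine anomaly detector, such a rule fires on an absolute packet rate rather than on a learned baseline or a proportion of total traffic, which is precisely why anomaly-based methods are needed to complement signature-based ones.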
For instance, <ns0:ref type='bibr' target='#b27'>(Fitni &amp; Ramli, 2020)</ns0:ref> intelligently used Logistic regression, decision tree, and gradient boosting to propose an optimized and effective anomaly-based ensemble classifier. They evaluate their model using the original 80 features of the CSE-CIC-IDS2018 dataset, with 80% for the training phase and the remaining 20% for testing the model. Moreover, they utilized Chi-square with Spearman's rank correlation to efficiently select the most relevant 23 features from the original features. As a result, they present impressive findings such as 98.8% accuracy, 98.8%, 97.1%, and 97.9% precision, recall, and F1-score.</ns0:p><ns0:p>The hybrid-based intrusion detection systems (HBIDS) exploit the functionality of MIDS to detect well-known attacks and flag novel attacks using the anomaly method. High detection rate, accuracy, and fewer false alarm rates are some of the main advantages of this approach <ns0:ref type='bibr' target='#b43'>(Khraisat et al., 2020)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b43'>(Khraisat et al., 2020)</ns0:ref> suggested an efficient and lightweight hybridbased IDS that mitigates the security vulnerabilities of the Internet of Energy (IoE) and lessens the considerable amount of time required to design effective and optimized IDSs for IoE platforms. Furthermore, the authors intelligently exploit the combined strengths of K-means and Support Vector Machine and utilize the centroids of K-means for an exceptional training procedure, which significantly enhances the process of training and testing the Support Vector Machine without compromising classification performance. Moreover, they selected the best value of 'k' and fine-tuned the SVM for best anomaly detection. Finally, the findings claimed that the proposed solution has drastically reduced the overall detection time and impressive performance accuracy of 99.9% compared to current cutting-edge approaches.</ns0:p><ns0:p>Additionally, the two classical IDS implementation methods are Network Intrusion Detection Systems (NIDS) <ns0:ref type='bibr' target='#b64'>(Mirsky et al., 2018)</ns0:ref> and Host-based IDS (HIDS) <ns0:ref type='bibr' target='#b8'>(Aung &amp; Min, 2018)</ns0:ref>. A HIDS detection method monitors and detects internal attacks using the data from audit sources and host systems like firewall logs, database logs, application system audits, window server logs, and operating systems <ns0:ref type='bibr' target='#b41'>(Khraisat et al., 2019)</ns0:ref>. In contrast, NIDS is an intrusion detection approach that analyses and monitors the entire traffic of computer systems or networks based on flow or packet-based and tries to detect and report anomalies. For example, the distributed denial of service (DDoS), denial of service (DoS), and other suspicious activities like internal illegal access or external attacks <ns0:ref type='bibr' target='#b68'>(Niyaz et al., 2015)</ns0:ref>. Unlike HIDS, NIDS usually protects an entire network from internal or external intrusions. However, such a process can be very timeconsuming, high computational cost, and very inefficient, especially in most current cutting-edge technologies with high-speed communication systems.</ns0:p><ns0:p>Nevertheless, this approach still has numerous advantages <ns0:ref type='bibr' target='#b10'>(Bhuyan et al., 2014)</ns0:ref>. For instance, it is more resistant to attacks compared to HIDS. 
Furthermore, it monitors and analyses the complete network's traffic if appropriately located in a network, leading to a high detection probability. Also, NIDS is platform-independent, enabling it to work on any platform without requiring much modification. Finally, NIDS does not add any overhead to the network traffic <ns0:ref type='bibr' target='#b72'>(Othman et al., 2018)</ns0:ref>. Snort is a classic example of NIDS <ns0:ref type='bibr' target='#b79'>(Sagala, 2015)</ns0:ref>. Irrespective of the availability of other signature-based NIDS, such as Suricata and Zeek (Bro) <ns0:ref type='bibr'>(Ali, Shah, &amp; Issac, 2018)</ns0:ref>, this work adopted Snort because it is the leading open-source NIDS with active and excellent community support. Likewise, it is easy to install and run with readily available online resources. Furthermore, although Suricata can use Snort's rulesets, it is prone to false positives and demands intensive system and network resources, so we chose Snort for the proposed solutions.</ns0:p><ns0:p>Snort is a classical open-source NIDS that has the unique competence of performing packet logging and real-time network traffic analysis within computer systems and networks using content searching, matching, and protocol analysis <ns0:ref type='bibr' target='#b0'>(Aickelin et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b96'>Tasneem et al., 2018)</ns0:ref>. It has therefore contributed significantly to the protection of some major commercial networks. Snort is a signature-based IDS that detects malicious live Internet or network traffic utilizing the predefined Snort rules, commonly applied in units of the packet header, statistical information (packet size), and payload information. Thus, it offers a high detection rate and accuracy <ns0:ref type='bibr' target='#b96'>(Tasneem et al., 2018)</ns0:ref>. However, it cannot detect novel attacks, and it requires expert knowledge to create and update rules frequently, which is both costly and error-prone.</ns0:p><ns0:p>Additionally, it has a vast range of functions and can be configured into three major modes: network intrusion detection, sniffer, and packet logger mode. The IDS mode monitors network traffic against a set of predefined Snort rules for any malicious attacks. In contrast, the packet logger mode logs network packets to disk for further action by the system or network administrator. Finally, the sniffer mode enables the easy sniffing of network packets and displays them on the console <ns0:ref type='bibr' target='#b79'>(Sagala, 2015)</ns0:ref>.</ns0:p><ns0:p>Similarly, IDSs have emerged as popular security frameworks that have significantly minimized various cutting-edge cyber-attacks over the past decade. Anomaly-based IDS has gained considerable traction among network and system administrators for monitoring and protecting their networks against malicious attempts, and it has achieved phenomenal success, especially in detecting novel or zero-day attacks <ns0:ref type='bibr' target='#b94'>(Tama et al., 2019)</ns0:ref>. However, it comes with the costly negative consequence of generating thousands or millions of false-positive alerts, or colossal amounts of complex data for humans to process and make timely decisions <ns0:ref type='bibr' target='#b85'>(Sekharan &amp; Kandasamy, 2018)</ns0:ref>.
As a result, administrators ignore these massive alerts, which creates room for potential malicious attacks against highly valued and sensitive information within a given system or network.</ns0:p><ns0:p>Accordingly, the past years have seen a growing interest in designing and developing network security management frameworks from academia and industry, which involves analyzing and managing the vast amount of data from heterogeneous devices, commonly referred to as event correlation. Event correlation has significantly mitigated modern cyberattacks challenges using its unique functionality of efficiently and effectively analyzing and making timely decisions from massive heterogeneous data (G. <ns0:ref type='bibr' target='#b90'>Suarez-Tangil et al., 2009)</ns0:ref>. Consequently, the past years have seen many researchers and professionals exploit the efficiency of event correlation techniques to address the problems mentioned earlier <ns0:ref type='bibr' target='#b17'>(Dwivedi &amp; Tripathi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>Ferebee et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b92'>Guillermo Suarez-Tangil et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Nevertheless, this field of research is at its infant stage as minimal work is done to address these issues. Likewise, according to the authors' knowledge, none of the above solutions provides comprehensive solutions that address the manual creation of Snort rules and the event correlation as a single solution.</ns0:p><ns0:p>Based on the above challenges, this paper proposed two effective and efficient approaches to address the problems associated with the manual creation of Snort rules and mitigating the excessive false alarm rates generated by current IDSs. First, we present an automatic rule creation technique that focuses on packet header and payload information. Generally, we need to find standard features by examining all the network traffic to create a rule. Nonetheless, this method is inefficient and requires much time to complete the rule, and the accuracy of the rules made is variable according to the interpreters' ability. Consequently, various automatic rule (signature) generation methods have been proposed <ns0:ref type='bibr' target='#b79'>(Sagala, 2015)</ns0:ref>.</ns0:p><ns0:p>However, most of these methods are used between two specific strings, which is still challenging for creating reliable and effective Snort rules. Therefore, we present a promising algorithm based on the content rules, enabling the automatic and easy creation of Snort rules using packet contents. Secondly, we equally proposed a novel model that efficiently and effectively correlates and prioritizes IDS alerts based on the severity using various features of a network packet. Moreover, the proposed system does not need prior knowledge while comparing two different alerts to measure the similarity in diverse attacks. The following are an overview of the main contributions of this research work:</ns0:p><ns0:p>o The authors proposed an optimized and efficient Snort Automatic Rule Generator (SARG) that automatically generates reliable Snort rules based on content features.</ns0:p><ns0:p>o Similarly, we present a novel Security Event (alert) Correlator (SEC) that drastically and effectively minimized the number of alerts received for convenient interpretation. 
o Finally, the proposed approach has recorded an acceptable number of alerts, which directly correlates with significantly mitigating the challenges of false alarm rates.</ns0:p><ns0:p>The rest of the paper is organized as follows: Section 2 discusses important background concepts. Similarly, Section 3 explains the materials and methods of the proposed system. Next, section 4 highlights the results and discussions of the proposed approach. Finally, the paper concludes in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>ESSENTIAL CONCEPTS</ns0:head><ns0:p>This section presents brief essential concepts that support the work in this research paper, which will provide readers with the necessary knowledge to appreciate this research and similar results better.</ns0:p></ns0:div> <ns0:div><ns0:head>Synopsis of HIDS and NIDS</ns0:head><ns0:p>The two conventional IDS implementation methods are HIDS <ns0:ref type='bibr' target='#b36'>(Jose et al., 2018)</ns0:ref>and NIDS <ns0:ref type='bibr' target='#b12'>(Bul'ajoul et al., 2015)</ns0:ref>. Host-based IDS is software that is usually mounted on host machines to monitor and detect any malicious activities, for example, the inappropriate use of internal resources of a computer system or networks by authorized users and numerous kinds of external attacks. As a result, HIDS is robust in detecting some malicious activities compared to NIDS (M. <ns0:ref type='bibr'>Liu et al., 2019)</ns0:ref>. Moreover, host-based IDS are installed on various hardware categories such as workstations and servers, enabling effective and reliable analysis and evaluations of the traffics transmitted over these hosts for possible malicious network packets or unfitting internal user behavior. Furthermore, the primary objective of a classical host-based IDS is to detect malicious activities, thereby making them passive devices. As a result, they generally achieve a reasonable detection rate on attack vectors such as escalation of user privileges, mounting undesired software applications, attempting to run essential suspended services or systems, and unauthorized access and login attempts <ns0:ref type='bibr' target='#b103'>(Vokorokos &amp; Bal&#225;&#381;, 2010)</ns0:ref>.</ns0:p><ns0:p>Finally, some of the crucial advantages of a host-based IDS are; ability to detect internal malicious attempts that might elude a NIDS, the freedom of accessing already decrypted data compared to NIDS, and the ability to monitor and detect advanced persistent threats (APT). In contrast, some of its disadvantages are; firstly, they are expensive as it requires lots of management efforts to mount, configure and manage. It is also vulnerable to specific DoS attacks and uses many storage resources to retain audit records to function correctly <ns0:ref type='bibr' target='#b29'>(Gaddam &amp; Nandhini, 2017;</ns0:ref><ns0:ref type='bibr' target='#b84'>Saxena et al., 2017)</ns0:ref>.</ns0:p><ns0:p>The authors of <ns0:ref type='bibr' target='#b6'>(Arrington et al., 2016)</ns0:ref> use the innovative strength of machine learning to present an interesting host-based intrusion detection system (HIDS). They utilized artificial immune systems to design the host-based IDS efficiently. 
Furthermore, the authors intelligently use the concepts of behavioral modeling intrusion detection systems (BMIDS) as the focal metric for determining acceptable behaviors, which achieves a reasonable detection rate by filtering out the noise within the environment.</ns0:p><ns0:p>Network-based IDS is a hardware or software application that can sniff packets of network traffic to identify malicious attack attempts and update the system or network administrator with an alert, making it a passive device or software. Therefore, it is necessary to pair it with an intrusion prevention system (IPS) for an effective and reliable security measure. Also, network-based IDS significantly complements the functionality of host-based IDS by enabling the detection of both anomaly and signature-based intrusions and equally serves as a crucial point of protection for both the incoming and outgoing network traffic of an organization <ns0:ref type='bibr' target='#b12'>(Bul'ajoul et al., 2015)</ns0:ref>. The central idea of any classical network-based IDS is using rulesets to identify and alert on malicious attempts. The majority of NIDSs come with pre-installed rules, which can be extended with customized rules to target specific attacks to suit a given network or computer system <ns0:ref type='bibr' target='#b71'>(Ojugo et al., 2012)</ns0:ref>.</ns0:p><ns0:p>For example, creating a rule for a possible probing attack and saving it in the local.rules file of Snort IDS will ensure an alert is raised whenever an intrusion that matches the rule is initiated (minimal illustrative examples are shown below). The practical identification and alerting of policy violations, traffic from suspicious or unknown sources and destinations, port scanning, and other common malicious attempts are some of the essential functionalities of a classical network-based IDS. However, the bulk of network-based IDSs require costly hardware with expensive enterprise solutions, making them hard to acquire <ns0:ref type='bibr' target='#b21'>(Elrawy et al., 2018)</ns0:ref>.
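As an illustration only, the following are hand-written example entries for local.rules with arbitrary SIDs and thresholds; they are not output of the proposed SARG. The first targets a probing attack, and the second matches a specific payload content:

alert tcp any any -> $HOME_NET any (msg:"Possible TCP SYN port scan"; flags:S; detection_filter:track by_src, count 20, seconds 10; classtype:attempted-recon; sid:1000001; rev:1;)

alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Suspicious HTTP payload - directory traversal attempt"; flow:to_server,established; content:"../"; classtype:web-application-attack; sid:1000002; rev:1;)

The first rule raises an alert when a single source sends many bare SYN packets to hosts on the protected network within a short window, a common sign of port scanning, while the second matches a byte pattern in the packet payload; payload content options of this kind are the building blocks on which the content-based rules discussed later in this paper rely.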
Snort is a typical example of an open-source network-based intrusion detection system, and the following sections summarize Snort and its components.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> illustrates a standard representation of a HIDS and NIDS architecture. As discussed earlier, both architectures provide unique functionalities in detecting malicious activities within computer systems and networks. For instance, Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows a malicious user (attacker) who initiated a DDoS attack against one of the internal servers within the LAN. However, due to the internal security mechanisms, packets are inspected by the firewall as the first layer of protection. Interestingly, some malicious packets can bypass the firewall due to the cutting-edge attack mechanisms, necessitating NIDS <ns0:ref type='bibr' target='#b12'>(Bul'ajoul et al., 2015)</ns0:ref>. Therefore, the NIDS receives the packets and does a further packet inspection. If there are any malicious activities, the packets are blocked and returned to the firewall. Then, the firewall will drop the packets or notify the network administrator, depending on the implemented policies. It is crucial to note that the same applies to the outgoing packets from the LAN to the WAN.</ns0:p><ns0:p>In contrast, the scenario depicted at the top of Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows a regular user requesting web services. Initially, the request goes via the same process as explained above. Then, if the NIDS qualifies the requests, it is sent to the server through the network switch to the webserver. Furthermore, the server responds with the required services, which goes through the same process as the reverse. However, this process is not shown in the diagram. Lastly, Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> also presents the architecture of HIDS as labeled on individual devices, which administers further packet inspection within the host <ns0:ref type='bibr' target='#b103'>(Vokorokos &amp; Bal&#225;&#381;, 2010)</ns0:ref>, thereby enhancing the security level of a given network or computer system.</ns0:p></ns0:div> <ns0:div><ns0:head>Summary of Snort and its components</ns0:head><ns0:p>Snort is a popular and influential cross-platform lightweight network-based IDS with multiple packet tools. It is among the most renowned signature-based intrusion prevention and detection systems with various strengths and weaknesses. The power of Snort lies in the use of rules, some of which are preloaded, but we can also design customized rules to merely send alerts or block specific network traffic when they meet the specified criteria. Additionally, alerts can be sent to a console or displayed on a graphical user interface, but they can also be logged to a file for future or further analysis. Finally, Snort also enables the configuration options of logging alerts to databases such as MSQL and MongoDB or sending an email to a specified responsible person if there are alerts or suspicious attempts <ns0:ref type='bibr' target='#b4'>(Ali, Shah, Khuzdar, et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Moreover, Snort has three running modes: sniffer, packet logger, and network-based IDS mode <ns0:ref type='bibr' target='#b96'>(Tasneem et al., 2018)</ns0:ref>. The sniffer mode is run from the command line mode, and its primary function is just inspecting the header details of packets and printing it on the console. 
For instance, ./snort -vd will instruct Snort to display packet data along with its headers. The packet logger mode inspects packets and logs them into a file in the root directory, which can then be viewed with tcpdump, Snort, or other applications for further analysis. For example, ./snort -dev -l ./directory_name will prompt Snort to enter packet logger mode and log the packets into the given directory; if directory_name does not exist, Snort exits and throws an error. Finally, the network-based IDS mode uses the embedded rules to determine any potential intrusive activity within a given network. Snort does this with the help of the network interface card (NIC) running in promiscuous mode to intercept and analyze real-time network traffic.

For instance, the command ./snort -dev -l ./log -h 172.162.1.0/24 -c snort.conf will prompt Snort to log network packets that trigger the specified rules in snort.conf, where snort.conf is the configuration file that applies all the rules to the incoming packets to flag any malicious attempt (Tasneem et al., 2018). It is equally important to note that Snort offers real-time packet logging, content matching, and searching with protocol analysis, which plays a significant role in mitigating attacks against identified loopholes. It also has the advantage of serving as a prevention tool instead of a mere monitor, which significantly helps minimize well-known attacks by rejecting, dropping, or blocking suspicious network traffic (Ali, Shah, Khuzdar, et al., 2018).

However, Snort does come with some notable shortcomings. For example, the lack of a polished GUI makes using Snort somewhat difficult. Additionally, very high traffic volumes can compromise its reliable and functional operation, which motivates the use of PF_RING and Hyperscan to reinforce Snort's functionality for efficiency and reliability. Caution also needs to be exercised when creating Snort rules to avoid the apparent challenge of high false alarm rates (FAR). Finally, as a signature-based system, it cannot detect zero-day attacks (W. Park & Ahn, 2017).

Snort comprises five logical components that determine and classify potential malicious attacks or any undesired threat against computer systems and networks (Ali, Shah, Khuzdar, et al., 2018). Fig. 2 presents the Snort components working together to monitor and analyze real-time network traffic for possible signs of malicious attempts and to generate alerts based on the specified rules; the following sections summarize these components (Essid et al., 2021; Mishra et al., 2016; Shah & Issac, 2018).

Packet Sniffer-Decoder Engine

The Snort packet capturing engine uses the Data Acquisition library (DAQ) or the libpcap packet-intercepting functionality to capture raw packets of real-time network traffic, as shown in Fig. 2.
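As a brief, hedged illustration of this capture step (not part of Snort itself), the sketch below reads raw packets from a hypothetical capture file and prints basic header fields, assuming the third-party scapy library is available.

from scapy.all import rdpcap, IP, TCP, UDP

# Illustrative only: read raw packets from a pcap file and decode basic header
# fields, loosely mirroring what the sniffer-decoder stage does.
packets = rdpcap("sample.pcap")  # hypothetical capture file
for pkt in packets:
    if not pkt.haslayer(IP):
        continue  # skip non-IP frames
    ip = pkt[IP]
    if pkt.haslayer(TCP):
        proto, sport, dport = "TCP", pkt[TCP].sport, pkt[TCP].dport
    elif pkt.haslayer(UDP):
        proto, sport, dport = "UDP", pkt[UDP].sport, pkt[UDP].dport
    else:
        proto, sport, dport = str(ip.proto), None, None
    print(f"{ip.src}:{sport} -> {ip.dst}:{dport} ({proto})")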
In addition, the decoder engine plays a significant role in parsing the headers of the captured packets to ascertain the transport protocols used within them, such as UDP or TCP. It also finds anomalies within packet headers, such as invalid sizes, suspicious TCP flag combinations, and deviations from the relevant RFCs.

The Preprocessor

Furthermore, the preprocessor, which works as a plugin, facilitates the detailed normalization and analysis of protocols, such as verifying specific anomalies in protocol services like HTTP, SSH, FTP, and IMAP, which ultimately reinforces the reliability and effectiveness of the detection engine. Finally, the preprocessor is also responsible for reconstructing TCP flows, detecting port scanning, and handling fragmented network traffic to ensure an attacker or an intruder does not deceive the detection engine.

The Detection Engine

The detection engine is the most integral component of Snort. It performs the essential task of detecting possible malicious attempts against computer systems or networks by chaining together the sets of rules defined in a configuration file and applying them to incoming network traffic packets. Moreover, the various actions defined during rule creation, such as alert, log, drop, reject, sdrop, pass, activate, and dynamic, will be applied to any incoming suspicious traffic that meets the criteria specified in a given rule. However, it is essential to note that latency is an obvious challenge for Snort, especially when performing all of the above in real time on a heavily loaded network, which can lead to packet drops.

Logging and Alerting Module

This module generates alerts and logs for malicious network traffic or undesired activities within a computer network that meet the criteria of a given rule within the configuration file. The content of these log files or alerts depends significantly on the flagged malicious traffic. Generally, the default location of all the log files is the /var/log/snort folder, which can also be changed using the command line. Finally, it is crucial to note that when malicious network traffic triggers multiple rules, Snort can generate an overwhelming number of alerts.

The Output Plug-ins

Finally, this module directs the generated outputs using a plugin system, which provides users with optimal flexibility and different highly configurable options. For example, users can log the alerts to a file or to a database such as MySQL or MongoDB. It also facilitates sending alerts via UNIX sockets, generating XML reports, sending SNMP traps, and the essential modification of network configurations such as routers and firewalls.

Snort Rule Syntax

Snort utilizes a flexible, lightweight, and straightforward rule language primarily written in a single line, as in versions preceding 1.8. The present Snort versions allow rules to span multiple lines but require a backslash (\) at the end of each line. Generally, Snort rules are composed of two logical portions: the rule header and the rule options (Khurat & Sawangphol, 2019).
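As a minimal, purely illustrative sketch (the sample rule below is hypothetical), the two portions can be separated as follows:

# Split a Snort-style rule string into its two logical portions: header and options.
rule = ('alert tcp any any -> 192.168.10.0/24 80 '
        '(msg:"Suspicious web request"; content:"probe"; sid:1000023; rev:1;)')

header, _, options = rule.partition('(')  # the header ends where the options begin
action, protocol, src_ip, src_port, direction, dst_ip, dst_port = header.split()
option_list = [opt.strip() for opt in options.rstrip(')').split(';') if opt.strip()]

print(action, protocol, src_ip, src_port, direction, dst_ip, dst_port)
print(option_list)  # ['msg:"Suspicious web request"', 'content:"probe"', 'sid:1000023', 'rev:1']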
The rule header comprises the specified actions, protocol, addresses of source and destination, and port numbers. In contrast, the rule option encloses the details of the content of the packets responsible for triggering an alert based on the specified actions and the alert message if any incoming malicious traffic meets the rule's criteria.</ns0:p><ns0:p>Furthermore, irrespective of whether rules can be explicitly configured, only a specific rule segment can be customized for performance reasons. For instance, already written rules must be itemized within the configuration file for it to be able to trigger a malicious event that meets its criteria. Generally, Snort rules share all sections of the rule option like the general options, payload detection options, non-payload detection options, and post-detection options. However, it can be specified differently depending on the configuration approach. Finally, Snort rules are commonly applied to the headers of the application, transport, and network layers such as FTP, HTTP, ICMP, IP, UDP, and TCP <ns0:ref type='bibr' target='#b13'>(Chanthakoummane et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b45'>Khurat &amp; Sawangphol, 2019)</ns0:ref>. However, they can also be applied to the packet payload, which is the adopted approach for SARG.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> presents a classical representation of Snort rules <ns0:ref type='bibr' target='#b45'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. The two rules shown in Table <ns0:ref type='table'>1</ns0:ref> denote that an alert will be triggered based on an icmp traffic protocol from any source IP address and port number to any destination IP address and port number if the traffic content contains a probe. Consequently, this will show a message probe attack, and the signature ID of this rule is 1000023. Similarly, the second rule is almost the same as the first, except that the action is 'log' instead of 'alert,' while the destination port number is 80 instead of 'any' number.</ns0:p></ns0:div> <ns0:div><ns0:head>The Rule Header</ns0:head><ns0:p>The Snort rule header comprises essential details such as what to do when a packet matches the criteria of a rule. The default Snort actions are: alert, log, and pass, and it is a required field for every Snort rule, and it defaults to alert if not specified explicitly. Nevertheless, there are additional options for an inline mode like the drop, reject and sdrop. The alert option produces an alert and logs the suspicious packet using a configured alert method. In contrast, the log option only logs the suspicious packets, whereas the pass ignores the packets. Also, activate performs a significant function of starting a dynamic rule and at the same time generates an alert. Finally, the drop option drops the suspicious packets and logs them for further analysis, while the sdrop blocks any suspicious packets. However, it does not log the suspicious packets <ns0:ref type='bibr' target='#b13'>(Chanthakoummane et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b35'>Jeong et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The protocol field within the rule header is required and usually defaults to IP. Nevertheless, it equally supports UDP, TCP, and ICMP options. The source IP is an optional field and, by default, is set to any, as indicated in Table <ns0:ref type='table'>1</ns0:ref>. However, it also supports a single IP address like 172.168.10.102 or a CIDR block like 172.168.10.0/16, which permits a range of IP addresses as an input. 
Likewise, a source port field allows a port number or range of port numbers used to process the content of the incoming packets, and the destination IP and destination port fields behave the same way as the source IP and port fields. Finally, the direction of the monitored traffic is specified using the directional operators (-> and <>), and monitoring from source to destination (->) is the most common practice (Jeong et al., 2020; Khurat & Sawangphol, 2019).

Table 2 presents the rule header components with typical examples. The first example triggers an alert for traffic from any source address with port numbers up to and including 1024, denoted (:1024), going to 192.168.100.1 on ports greater than or equal to 600, denoted (600:). Similarly, the second example logs any traffic not from 172.16.10.0/16, designated with the negation sign (!), from any source port to 172.16.10.251 with port numbers ranging from 1 to 6000, denoted 1:6000. Finally, the last one is almost the same as the above two except for the bidirectional mode, with port numbers ranging over 1:1024 and source and destination network addresses of !192.168.10.0/24 and 192.168.10.0/24. The negation sign (!) matches sources that do not come from the 192.168.10.0/24 network, thereby excluding addresses in the range 192.168.10.1 to 192.168.10.255.

The Rule Options

The rule option unit comprises the detection engine's central functionality yet provides complete ease of use with various strengths and flexibility (Khurat & Sawangphol, 2019). However, this segment is only processed if all the preceding header details have been matched. Furthermore, since this unit generally requires a considerable amount of processing resources and time, it is recommended to limit the scope of the rules to only the necessary fields to enable real-time processing without packet drops. For example, writing overly broad rules should be discouraged because this will send alerts for every packet. Therefore, it is recommended to use only the message, content, and SID fields for writing efficient and reliable rules. A semicolon (;) separates the rule options, while a colon (:) separates each option keyword from its argument. Finally, the rule options comprise four main classes, but this paper only summarizes the general and payload detection options. For instance, the general options have no significant effect during detection and merely provide information about the rule, whereas the payload options inspect the packet data. The subsequent section discusses the various general and payload detection option fields (Jeong et al., 2020; Masumi et al., 2021).

The general rule and payload detection options include numerous parameters available to Snort users for rule creation (Khurat & Sawangphol, 2019). However, only the ones relevant to understanding the work proposed in this paper are selected, and Table 3 presents the selected options with a typical example for each.
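Before turning to the individual option keywords, the header-matching semantics illustrated in Table 2 (port ranges such as :1024 and 600:, and the negation sign applied to a CIDR block) can be sketched as follows. The helper functions are hypothetical and illustrative only, not Snort's implementation.

import ipaddress

def port_matches(spec: str, port: int) -> bool:
    # Supports "any", single ports, and range forms such as ":1024", "600:", "1:6000".
    if spec == "any":
        return True
    if ":" in spec:
        low, _, high = spec.partition(":")
        return (int(low) if low else 0) <= port <= (int(high) if high else 65535)
    return port == int(spec)

def ip_matches(spec: str, address: str) -> bool:
    # A leading "!" negates the match against the given CIDR block.
    negated = spec.startswith("!")
    network = ipaddress.ip_network(spec.lstrip("!"), strict=False)
    inside = ipaddress.ip_address(address) in network
    return not inside if negated else inside

# Example based on the second Table 2 header: log traffic NOT from 172.16.10.0/16
print(ip_matches("!172.16.10.0/16", "10.0.0.5"), port_matches("1:6000", 80))  # True True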
Firstly, the message parameter is a crucial general rule option that enables the alerting and logging engines to print a specified message with an alert or log if a malicious event matches a specific rule using the 'msg' keyword with the desired string. Furthermore, it uses the backslash (\) symbol to escape a character to avoid confusing the rule parser, and it enhances the easy understanding of why the rule was triggered.</ns0:p><ns0:p>For instance, alert <ns0:ref type='bibr'>tcp !192.168.1.0/24 any -&gt; 192.168</ns0:ref>.1.0/24 80 (content:!'GET'; msg: 'Suspicious web request, NO match for GET request. ';). Snort will fire the message 'Suspicious web request, NO match for GET request.' if the above rule is triggered. It can also be written like, alert <ns0:ref type='bibr'>tcp !192.168.1.0/24 any -&gt; 192.168</ns0:ref>.1.0/24 80 (content:!'|47 45 54|'; msg: 'Suspicious web request, NO match for GET request.';), which replaces the content with the ASCI values of the GET keyword represented as 47 for G, 45 for E and 54 for T.</ns0:p><ns0:p>Secondly, the Snort ID option uses the sid keyword to enable the unique identification of Snort rules, which plays a significant role in rule management, such as editing or disabling a particular rule. It also allows the output plugins to quickly identify rules, commonly used along with the rev keyword. Lastly, the sid keyword is an excellent tool for tracking the alerts and logs from the generating rules. Moreover, the values of 0-99 are for future use, 100-999,999 for the rules included in the Snort distributions. Finally, greater than or equal to 1,000,000 (&gt;=1,000,000) is for the local rules as indicated in Table <ns0:ref type='table'>3</ns0:ref>. Also, similar to the functions of Snort ID, the rev keyword enables the unique identification of revision numbers of Snort rules, which permits the descriptions and signatures to be efficiently distinguished and replaced during updates, and equally used by the output modules for easy identification. Therefore, it should be used alongside the sid keyword, and Table 3 present a typical example with revision number 3.</ns0:p><ns0:p>Finally, the generator ID, commonly denoted gid, identifies the Snort part that generates the event when a particular rule is triggered. The gid is optional and not recommended for general rule writing. If not specified during rule writing, it will automatically be set to the default value of 1, and the rule will be classified as a general rule subsystem.</ns0:p><ns0:p>In contrast, particular preprocessors and decoders are designated with numerous gids above 100. However, using values starting at 1,000,000 and above is recommended to avoid conflict with the specified gids within Snort. The severity levels of rules are assigned using the priority tag, which overrides the default priority of the classtype rule defined by the config classification. This feature is significantly helpful as it enables the escalation of high-risk from low-risk events and uses one (1) as the highest priority, as indicated in the example presented in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>The content parameter is equally an essential feature that facilitates writing Snort rules that would explicitly search for specific content within the packet payload and trigger an alert or log if there is a match. 
This approach uses the computationally expensive Boyer-Moore patternmatching function to perform tests against the contents of the packets, which is successful if the content string matches any of the payload content. However, it is essential to note that this is case-sensitive and can be a mixture of both simple text and binary, generally enclosed with the pipe character (|) represented as bytecodes. Also, the negation (!) modifier allows us to write rules that will trigger alerts for packets that do not contain these contents, such as the example given earlier.</ns0:p><ns0:p>The offset keyword allows the author to define a start point for searching a pattern within packets, which modifies the content keyword in the rule and uses an integer as an argument. For instance, alert <ns0:ref type='bibr'>tcp !192.168.1.0/24 any -&gt; 192.168.1.0/24 80 (content: 'probe-attack'; offset:4; depth:20;)</ns0:ref> will start searching for the phrase probe-attack after 4 bytes from the start of the data. Finally, the depth option enables the specification of how deep Snort should search for a particular pattern within the packet. It also modifies the content keyword with a minimum value of 1 and a maximum of 65535.</ns0:p></ns0:div> <ns0:div><ns0:head>Snort Configurations and Rule Files</ns0:head><ns0:p>Snort provides a rich scope of customizable configuration options significant to any administrator for efficient and effective deployment and day-to-day operations. Since Snort consists of vast configuration options, this section will only highlight the necessary options to understand this work easily. Generally, the snort.conf contains all the Snort configurations, and it includes the various customizable settings and additional custom-made rules. Moreover, Snort uses this configuration file during each startup stage. The snort.conf is a sample and default configuration file shipped with the Snort distribution. However, users can use the -c command line switch to specify any name for the configuration file, such as /opt/snort/Snort -c /opt/snort/myconfiguration.conf. Nevertheless, snort.conf is the conventional name adopted by many users <ns0:ref type='bibr' target='#b23'>(Erlacher &amp; Dressler, 2020)</ns0:ref>. Regardless, the configuration file can also be saved in the home directory as .snortrc but using the configuration file name as a command-line is the common practice with advantages. For example, preprocessor configuration, variable definition, rule configuration include files, config parameters, and output module configuration. Furthermore, the functionality and flexibility of adding customized rules within any Snort distribution are important because they allow users to write user-specific rules and update thirdparty rules without overwriting the local rules. Likewise, the configuration file also enables using variables for convenience during rule writing, such as defining a variable for HOME_NET within the configuration file like var <ns0:ref type='bibr'>HOME_NET 192.168.10</ns0:ref>.0/24. The preceding example enables the use of HOME_NET within various rules, and when a change is needed, the rule writer only needs to change the variable's value instead of changing all the written rules. 
For instance, var HOME_NET [192.168.1.0/24,192.168.32.64/26] and var EXTERNAL_NET any (Mishra et al., 2016).

Lastly, the rules configuration also enables Snort users to create numerous customized rules using the variables within the configuration files and to add them to the snort.conf file. The general convention is to keep different Snort rules in a text file and include them within snort.conf using the include keyword, such as include $RULE_PATH/myrules.rules, which pulls the rules within myrules.rules into snort.conf during the next start of Snort. Also, users can place the Snort commenting syntax (#) in front of a specified rule, or of a rule file within the Snort configuration file, to manually disable that rule or the entire class of rules.

Similarly, Snort is only as good as its rules are current, which requires regularly checking for new community rules to update the existing ones. Nevertheless, downloading a new set of rules comes with the cost of overwriting the existing ones; therefore, caution needs to be exercised when updating Snort rules. Snort parses all rules at startup, which enables any newly added rules. However, if there are any errors within the newly added rules, Snort will exit with an error, necessitating the correct and consistent writing of Snort rules (Erlacher & Dressler, 2020; Mishra et al., 2016).

Figs. 3 and 4 show some fundamental concepts of the Snort configuration and rule files. For instance, Figs. 3(A), 3(B), and 3(C) show active and disabled Snort rules created using variables such as EXTERNAL_NET and HOME_NET. Similarly, Fig. 3(C) demonstrates the various rule files that can be modified, as well as how new rule files can be added using the include keyword. Lastly, Figs. 4(A) and 4(B) present the successful and unsuccessful validation of the Snort configuration, as discussed earlier.

Synopsis of Alert Correlation

Alert correlation is a systematic multi-component process that effectively analyzes alerts from various intrusion detection systems. Its sole objective is to provide a more concise and high-level view of a given computer system or network (Valeur et al., 2017). Moreover, alert correlation is a logical process that facilitates prioritizing IDS alerts based on the severity of the attack or the organizational security policy. Furthermore, it efficiently transforms and merges the vast number of alerts into compact reports. Additionally, it plays a significant role in helping network administrators effectively and efficiently differentiate between relevant and irrelevant attacks within a given network, resulting in reliable and secure networks. Finally, it is a meaningful technique for addressing the colossal number of raw alerts generated by IDSs of various capabilities by aggregating and clustering these alerts to deliver a valuable, compressed interpretation of a given network from an intrusion viewpoint (Valeur et al., 2017).

MATERIALS AND METHODS

This section briefly discusses the methodologies and procedures used to design and implement the proposed Snort Automatic Rule Generator using a sequential pattern algorithm and the Security Event Correlation, together abbreviated SARG-SEC. It is worth noting that the term event is used interchangeably with the alerts generated during intrusion detection.

Snort Automatic Rule Generation (SARG)

Although a significant body of literature automates rule generation with other approaches and remarkable success (Li et al., 2006; Ojugo et al., 2012; Sagala, 2015), there is still a need for an effective and optimized auto-rule generator. To the authors' knowledge, none of the existing literature uses the approach proposed in this paper. Therefore, this paper presents an automated method for generating content-based Snort rules from collected traffic to fill this research gap. The following sections provide a concise description of the design and implementation procedures.

Content Rule Extraction Algorithm

Snort rules can specify various components, but this research targets rules consisting only of header information and payload information. The steps demonstrated in the examples below are performed to generate the Snort rules automatically. First, for each host, the traffic of the application and service to be analyzed, or of the malicious code, is collected. Moreover, packets with the same transmission direction are combined to form a single sequence in a single flow. Finally, the sequence is used as an input to the sequential pattern algorithm to extract the content (Pham et al., 2018).

Action Protocol SourceIP SourcePort -> DestinationIP DestinationPort (Payload; offset:6; depth:24)

The sequential pattern algorithm finds candidate contents of increasing length, starting from candidate contents of length 1 in the input sequence, and finally extracts the contents having a certain level of support. Nevertheless, the exclusive use of packet or traffic contents for rule creation can lead to significant false positives, thereby undermining the effectiveness of the created rules and increasing the potential risks to computer systems and networks. Therefore, additional information is analyzed and described in the rule to enable efficient and effective rules against malicious attacks. As a result, this paper uses the header information of the network traffic or packets as supplementary information alongside the content, thereby significantly enhancing the reliability of the automatically generated rules at the end of the process.
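The sketch below is a simplified, illustrative stand-in for this support-based content extraction (detailed further in Algorithms 1 and 2 below), not the exact implementation: candidate contents are grown from length one, kept when their host support (supporting hosts divided by total hosts) meets the minimum support, and finally pruned when contained in a longer retained content.

# Illustrative sketch only: Apriori-style extraction of payload contents that meet
# a minimum host support (support = supporting hosts / total hosts).
def extract_contents(sequences, min_support):
    # sequences: list of (host_id, payload_string) pairs forming the SequenceSet
    hosts = {h for h, _ in sequences}

    def support(content):
        return len({h for h, s in sequences if content in s}) / len(hosts)

    results = set()
    candidates = {s[i:i + 1] for _, s in sequences for i in range(len(s))}
    while candidates:
        frequent = {c for c in candidates if support(c) >= min_support}
        results |= frequent
        # join step: extend contents that overlap on all but their first/last character
        candidates = {a + b[-1] for a in frequent for b in frequent
                      if len(a) == len(b) and a[1:] == b[:-1]}
    # drop any content contained in a longer retained content
    return {c for c in results if not any(c != d and c in d for d in results)}

# Three hosts with minimum support 0.6: a content must occur on at least two hosts
seqs = [(1, "xxprobexx"), (2, "probe123"), (3, "hello")]
print(extract_contents(seqs, 0.6))  # {'probe'}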
Moreover, by applying the extracted content to input traffic, traffic matching the content is grouped, and the group's standard header and location information is analyzed <ns0:ref type='bibr' target='#b79'>(Sagala, 2015)</ns0:ref>.</ns0:p><ns0:p>Finally, the auto-generated Snort rules are applied to the network equipment with Snort engine capability.</ns0:p></ns0:div> <ns0:div><ns0:head>Sequence Configuration Steps</ns0:head><ns0:p>The sequence is constructed by only extracting the payloads of the packets divided into forward and backward directions of the flow. Moreover, the proposed approach will generate two sequences for flows that consist of two-way communication packets, whereas the unidirectional communication traffic generates a single sequence. Finally, it is worthy to note that a sequence set consists of several sequences denoted S as shown in equation 1, and a single sequence is made up of host ID and string as shown in equation 2. SequenceSet = {S 1 , S 2 , ...., S s }</ns0:p><ns0:p>(1) S i = {host_id, &lt;a 1 a 2 a 3 , ...., a n &gt;}</ns0:p><ns0:p>(2)</ns0:p></ns0:div> <ns0:div><ns0:head>Content Extraction Step</ns0:head><ns0:p>A sequence set and a minimum support map are input in the content extraction step and extract content that satisfies the minimum support. This algorithm improves the Apriori algorithm, which finds sequential patterns in an extensive database to suit the content extraction environment. The Content Set, which is the production of the algorithm, contains several contents (C) as shown in Equation <ns0:ref type='formula'>(</ns0:ref>3), and one content is a contiguous substring of a sequence string as shown in Equation ( <ns0:ref type='formula' target='#formula_0'>4</ns0:ref>). Algorithm 1 and Algorithm 2 illustrate producing a content set that satisfies a predefined minimum map from an input sequence set. Moreover, when Algorithm 1 performed the content extraction, the content of length one (1) is extracted from all sequences of the input sequence set and stored in a content set of length 1 (L 1 ) as demonstrated in (Algo.1 Line: 1-5), the content of length 1 starting with the length is increased by 1. Also, the content of all lengths is extracted and stored in its length content set (L k ) as shown in (Algo.1 Line: 6~20).</ns0:p><ns0:formula xml:id='formula_0'>ContentSet = {C 1 , C 2 , &#8230;, C c } (3) C i = {&lt;a x a x +1 &#8230;. a y &gt; | 1 &#8804; x &#8804; y &#8804; n,}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Support = Number of support hosts / Total number of hosts (5)</ns0:p><ns0:p>It is worthy to note that the method to be used at C (Algo. 1.0 Line: 18) is the process described in Algorithm 2.0. The contents of the set L k-1 are created by comparing the contents of L k . Furthermore, to make the content of the set L k-1 by combining the contents of the set L k , contents of the set with the same length k-2 content excluding first character and k-2 content excluding the last character are possible as shown at (Algo.2.0 Line: 1~7). For example, abcd and bcde are the contents of the set L4. It has the same bcd except for a, and bcd except for e. Therefore, the content abcde of the set L5 can be created by increasing the length by 1 in the same way as above. Extracting and deleting content below the support level are repeated until the desired outcome.</ns0:p><ns0:p>The final step checked for the content inclusion relationship of all lengths extracted. 
If we find the content in the inclusion relationship, the content is deleted from the set as demonstrated in (Algo.1.0). Then, we deliver the final created content set to the next step. Moreover, the SequenceSet consisting of traffic from 3 hosts and minimum support of 0.6 is passed as an input. The minimum support rating of 0.6 means that since the total number of hosts is 3, the content must be observed in traffic generated by at least two hosts. Finally, all length one (1) content is extracted when the algorithm is executed.</ns0:p><ns0:p>Moreover, Algorithm 3.0 would have been given content and packet set when it represents analyzing the location information of the content. The output of this algorithm, offset when matched to a packet in packet set. The matching starts in a bit, byte position, and depth is the matching exit position, meaning the maximum byte position.</ns0:p><ns0:p>The first offset is the maximum size of the packet, and the depth initializes to 0 as shown in (Algorithm 3.0 Line: 1~7), which traverses all packets in packet set and adjusts offset and depth. Moreover, it checks whether the content received matches the packet, and if there exists a match, then the starting byte position is obtained and compared to the current offset. If the value is less than the current offset, the value changes to the current offset. Similarly, in the case of depth, if the value is more significant than the current dept., the current value is obtained using the byte position. Then it changes the value to the current depth (Algo.3 Line: 4~6). Likewise, to analyze the header information of the extracted content, we performed a process similar to the location information.</ns0:p><ns0:p>The analysis steps described above traversed all packets in the packet set and checked for possible matching with the content. If a match exists, it will store the packet's header information. After reviewing all packets, it adds the header information to the content rule if the stored header information has one unique value. Moreover, in IP addresses, the CIDR value is reduced to 32, 24, 16 orders which iterate until the unique value is extracted. Assuming the CIDR value is set to 32, which is the class D IP address range, we will try to find an exclusive value, and if not found, then apply the CIDR value to 24 to find the class C IP address, and this process continues until we obtained the desired values. For instance, if the Destination IP address to which that content is matched is <ns0:ref type='bibr'>CIDR 32,</ns0:ref><ns0:ref type='bibr'>then '111.222.333.1/32' and '111.222.333</ns0:ref>.2/32' are extracted. In contrast, it can also be set to CIDR 24 and extract '111.222.333.0/24'.</ns0:p></ns0:div> <ns0:div><ns0:head>The Schematic Diagram of the Snort Automatic Rule Generator (SARG)</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>5</ns0:ref> presents a comprehensive schematic diagram of SARG. The proposed auto-rule generator (SARG) utilized well-known datasets used by many researchers and security experts to assess various network security frameworks as discussed in <ns0:ref type='bibr' target='#b50'>(Lippmann et al., 2000)</ns0:ref>. Initially, we used different pcap files to simulate live attacks against SARG that facilitates an efficient and effortless generation of reliable Snort rules. It is essential to note that &#960;-1 , &#960;-2 , &#960;-3 ,...,&#960;n , denotes the various utilized pcap files for the auto-rule generation. 
Consequently, SARG relies on the pcap file contents to automatically generate numerous effective Snort rules without any human intervention, and ƛ-1, ƛ-2, ƛ-3, ƛ-4, ..., ƛn represent the various auto-generated Snort rules.

Next, the snort.conf file is updated based on the auto-generated rules. Moreover, any device with a Snort engine, denoted Ѿ, can use these auto-generated rules against incoming traffic, represented as D-1, ..., Dn, to trigger alerts for malicious attempts that meet all the criteria of the rules. Finally, this paper does not document the generated alerts due to the volume of work involved. However, the provided supplementary materials contain the code and pcap files so that any interested researcher can reproduce them.

Overview and Significance of Alert Correlation

Intrusion detection systems have recently witnessed tremendous interest from researchers due to their inherent ability to detect malicious attacks in real time (Vaiyapuri & Binbusayyis, 2020; Zhou et al., 2020). In addition, they have significantly mitigated the security challenges that came with ground-breaking technologies such as the Internet of Things (IoT) (Verma & Ranga, 2020), big data, and artificial intelligence (Topol, 2019). However, regardless of the immense contributions mentioned above, they may still generate thousands of irrelevant alerts daily, which complicates the role of security administrators in distinguishing between essential and nonessential alerts such as false positives.

In addressing these security challenges, specifically for IDSs, the research community has produced a significant body of literature with remarkable achievements in lessening the huge false alarm rates of current cutting-edge systems (Mahfouz et al., 2020; Zhou et al., 2020). Nevertheless, irrespective of these significant achievements, there is still a need for new approaches with greater efficiency and effectiveness to help mitigate the vast false alarm rates of the earlier proposed systems (Jaw & Wang, 2021). Therefore, this paper presents a novel alarm correlation model, which correlates and prioritizes IDS alerts. The following sections provide a succinct description of the design and implementation process of the proposed security event correlation model.

The Proposed Security Events Correlation (SEC)

The proposed correlation engine is a novel model that efficiently and effectively correlates the alerts generated by the IDS, which significantly mitigates the massive false alarm rates. Furthermore, it does not require prior knowledge when comparing different alerts to measure the similarity of various attacks.
Unlike other correlation approaches that usually follow specific standards and procedures (Valeur et al., 2017; Zhang et al., 2019), we propose a model that entails the same processes but implements them differently, as shown in the overview of the proposed model below. The following sections briefly discuss the procedures utilized for the proposed SEC model.

The proposed system involves several phases, namely the monitoring interval, alert preprocessing, alert clustering, correlation, alert prioritization, and the results, as shown in Fig. 6. The following sections detail the various phases of SEC with their respective procedures.

The Monitoring Interval

In summary, this paper defines the monitoring interval as the specific configured time over which the security framework or monitoring model performs the alert correlation. The monitoring interval of SEC is 300 days; this long interval was chosen because more days mean more alerts to correlate and aggregate, leading to more effective and better results.

Alert Preprocessing

The alert preprocessing stage of this work involves two sub-processes, namely feature extraction and feature selection. This phase systematically extracts features and their corresponding values from the observed alerts. The similarity index defined below identifies alerts with the same values for existing features such as attack category, detection time, source IP address, and port number. Consequently, it significantly helps identify new features from the available alerts, thereby overcoming attack duplication. This phase also involves the selection of features for alert correlation. For example, suppose the selected features, such as the attack detection time, category, port number, and source IP address, of one or more alerts are the same. In that case, we use the similarity index to select the relevant alerts and eliminate irrelevant ones such as duplications and similar instances. Thus, Π denotes the similarity between two alerts, Ѡ represents the alert similarity index, and δ denotes an alert.

Π = Ѡ(δa, δb), where δa ≠ δb (6)

Lastly, this phase also involves alert scrubbing, which uses the attack type, detection time, source IP address, and detection port to remove incomplete alert data. This process plays a significant role in providing reliable and consistent data.

Alert Aggregation and Clustering

The existing literature provides various descriptions of alert aggregation, such as considering alerts to be similar if all their attributes match with only a small time difference, while others extend the concept to grouping all alerts with the same root cause by aggregating alerts over various attributes. Moreover, the alert aggregation process effectively reduces the massive number of alerts generated by heterogeneous IDS sensors.
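As a minimal sketch, with hypothetical feature names and not the exact SEC implementation, the similarity-based removal of duplicate alerts in the spirit of Eq. (6) can be expressed as follows:

from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    detect_time: int   # detection time (e.g., epoch seconds)
    category: str      # attack category
    src_ip: str
    dst_port: int

def similar(a: Alert, b: Alert) -> bool:
    # Two distinct alerts are treated as duplicates when the selected features match
    return (a.category, a.src_ip, a.dst_port, a.detect_time) == \
           (b.category, b.src_ip, b.dst_port, b.detect_time)

def aggregate(alerts):
    unique = []
    for alert in alerts:
        if not any(similar(alert, kept) for kept in unique):
            unique.append(alert)
    return unique

raw = [Alert(100, "probe", "10.0.0.5", 80), Alert(100, "probe", "10.0.0.5", 80),
       Alert(160, "dos", "10.0.0.9", 53)]
print(len(aggregate(raw)))  # 2 alerts remain after removing one duplicate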
This phase groups similar alerts based on the features extracted for the similarity index and the current reports. For instance, if the features of alert Ỹ have the same values as alert Ẍ, then this phase automatically converts these two alerts into a single unified alert by removing the duplicate, thereby significantly reducing the number of irrelevant alerts. Finally, the clustered alerts enable an effective and efficient analysis of false positives, leading to reliable and optimized security frameworks.

Lastly, Fig. 7 demonstrates the alert aggregation process. First, we cluster all the generated alerts, check their similarity using the similarity index defined in the previous sections, and remove all the duplicated alerts.

Correlation

Based on the existing literature, the primary objective of alert correlation is to identify underlying connections between alerts to enable the reconstruction of attacks or to minimize the number of irrelevant alerts. Irrespective of these arguments, alert correlation alone does not significantly decrease the number of alerts. The authors of this paper believe this phase serves a substantial role in minimizing the number of attacks using the selected features and specified conditions, which helps to deliver a higher-level interpretation of the attacks that generate the alerts. Furthermore, scenario-based, temporal, statistical, and rule-based correlations are among the most commonly utilized correlation categories in existing research. The approach in this paper fully meets the attributes of the statistical correlation method, which correlates alerts based on their statistical similarity. Finally, this phase consists of the alert correlation process based on selected features such as the IP addresses, attack category, and detection time of the generated alerts. For instance, if the detection times of two independent alerts satisfy the COTIME condition denoted as Ѡ, then the correlation is performed based on the selected features. Fig. 8 presents the correlation engine with the extracted features for alert correlation. The correlation engine uses these features to efficiently correlate alerts that meet the Ѡ condition, leading to a high-level view of the alerts and significantly reducing the burden of analyzing massive numbers of irrelevant alerts.

Alert Prioritization

Alert prioritization is the final phase of the proposed SEC model. It plays a significant role in ranking alerts by severity, thereby helping the administrator identify, and dedicate existing resources to, the most alarming malicious attacks. Although a priority tag is assigned to each Snort rule, as explained earlier, we extend its functionality by embedding alert prioritization within the proposed SEC model. Alert prioritization is classified into high, medium, and low priority.
Firstly, this paper defines the high priority as the alert counts with standard features such as the data fragmentations, source IP address, port number, and attack category.</ns0:p><ns0:p>For instance, if multiple alerts have the same destination port numbers and IP addresses, then we classify these alerts as high-priority alerts. Lastly, the high priority functionality will hugely minimize various challenging cyber-attacks such as DDoS and DoS because it handles the standard techniques utilized in these attacks, like the IP fragmentation attack, which uses the analogy of data fragmentation to attack target systems. Secondly, the medium priority alerts count the number of alerts with shared features such as the same attack category, IP addresses, and destination port number. Finally, low priority alerts are alerts counts that have standard features like IP addresses, attack category, and destination port but with varying values.</ns0:p><ns0:p>The alert prioritization categories with the various standard features discussed above are shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. However, it is essential to note that irrespective of the common features in each category, the above section entails specified conditions that uniquely distinguish them.</ns0:p></ns0:div> <ns0:div><ns0:head>Overview of the Proposed Security Events Correlation (SEC)</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_2'>10</ns0:ref> demonstrates the detailed overview of the proposed SEC. First of all, SEC accepts a collection of raw alerts as an input generated based on a specified monitoring interval set to 300 days to ensure sufficient alerts for practical analysis, thereby leading to reliable results. As explained earlier, the alert preprocessing step accepts these raw alerts and performs feature extraction and selection. Also, the alert scrubbing phase uses predefined conditions denoted as &#926;, to check for alert duplicates. If duplicates exist, the scrubbing process will remove all the copies and send the alerts with no duplicates to the correlation engine. Next, the correlation engine decides if the alerts are single or multiple instances using predefined conditions denoted as &lt;=&#1120;. Finally, the alert prioritization phase prioritizes the alerts using the above three categories, and the output phase presents the relevant alerts as the final results. It is worthy to note that the alert outputs are the actual events after removing the duplicates and the alert correlation, as shown in the equation below.</ns0:p><ns0:formula xml:id='formula_2'>&#8486; = &#960;-&#946;<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>Where &#8486; denotes the final output alerts, and &#960; represents the total number of alerts, and &#946; denotes the correlated alerts.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>This section presents a comprehensive systematic analysis and performance justification of the proposed models (SARG-SEC). For example, it evaluates how well the SARG can efficiently generate standard and reliable Snort rules by executing SARG against live attacks in existing pcap files. 
Finally, this section also highlights the results and performance evaluation of the SEC model, which significantly mitigates the challenge of the vast number of alerts generated by the Snort IDS and by the earlier proposed feature selection and ensemble-based IDS (Jaw & Wang, 2021).

Evaluation of the Generated Snort Rules by the proposed SARG

The authors of (Lippmann et al., 2000) documented an intriguing off-line IDS evaluation dataset that a wide range of security experts and researchers have heavily utilized to assess various security frameworks. Moreover, the paper presents a testbed that used a tcpdump sniffer to produce the pcap files used here to evaluate the proposed solutions (SARG-SEC). Furthermore, the article details the various attacks within the tcpdump files and provides further references and implementation details. Finally, the supplementary pcap files also contain the various attacks utilized to evaluate SARG-SEC. Accordingly, this section highlights the findings of the various conducted experiments to accurately assess the performance of the SARG framework, as demonstrated in the figures below.

The proposed SARG method has achieved decent performance on the auto-generation of Snort rules, as illustrated in Fig. 11. All the findings presented in this section use various pcap files as a simulation of live attacks against the proposed method to generate efficient and effective Snort rules that completely meet all the criteria of the Snort rule syntax, as presented in Table 3. For instance, Fig. 11 shows an auto-generated rule whose content begins with alert udp 10.12. and ends with reference:Packet2Snort; sid:xxxx, rev:1). Based on this auto-generated Snort rule, it is self-evident that SARG has produced optimized Snort rules that meet all the criteria of the Snort rule syntax as discussed in (Khurat & Sawangphol, 2019). For example, the above auto-generated rule produced a descriptive and easy-to-analyze message of (msg: 'Suspicious IP10.12.19.101 and port 49680 detected';), which details the reason or cause of the alert.

Furthermore, SARG auto-generated another similar Snort rule by setting the source IP address and port number to 'any any' and the destination IP address and port number to 10.12.19.1 and 53, respectively. Lastly, the findings presented in Fig. 11 show that SARG auto-generated another impressive and comprehensive Snort rule that meets all standard Snort rule criteria, this time using the HOME_NET variable as the source address, with msg, content, depth, and offset values of (msg: 'Suspicious DNS request for fersite24.xyz. detected'; content:'|01000001000000000000|'; depth:10; offset:2;), respectively. Based on the above performance, the authors conclude that SARG has effectively met all the criteria of creating compelling and optimized Snort rules consisting of numerous general rule parameters and payload detection options (Khurat & Sawangphol, 2019). Moreover, Fig. 12 presents a series of auto-generated Snort rules using several pcap files as live attacks. All the auto-generated Snort rules shown in Fig. 12 fully meet the standards of Snort rule creation, as discussed in much of the existing work (Jeong et al., 2020; Khurat & Sawangphol, 2019). Examples include the use of defined variables like HOME_NET and ASCII values of ('|05|_ldap|04|_tcp|02|dc|06|_msdcs|0C|moondustries|03';) for the 'content:' field of the auto-generated Snort rules. Also, all the automatically generated rules come with auto-generated messages that uniquely and vividly describe the intrusive or abnormal activity whenever the alert is triggered. Consequently, this significantly helps the administrator easily identify why the alert happened and quickly find solutions to mitigate any malicious activity.

Additionally, all the auto-generated Snort rules are saved in local.rules to ensure consistency and avoid duplication of rules within the local.rules files, and they are then included in the snort.conf file. As a result, the administrator has the flexibility to further fine-tune the generated rules to meet the specific needs of a given computer network or system. Thus, irrespective of the additional effort of fine-tuning the auto-generated rules, for example, manually updating the sid value of the generated rules and other fields for specificity, SARG has undoubtedly minimized the time taken for Snort rule creation. Also, it can significantly lessen the financial burden of engaging human experts for Snort rule creation, thereby making the proposed method a meaningful tool that can play a significant role in mitigating escalating cyberattacks.

In conclusion, the results presented in Fig. 11 and Fig. 12 have impressively demonstrated the capability of auto-generated Snort rules, which significantly mitigates the need for costly human capacity in creating Snort rules. However, the approach has downsides that we could not handle, such as automatically generating consistent sid values for the respective rules. As a result, we added a prompt message to remind users to update the sid value after the auto-generation of each Snort rule. Also, owing to a lack of domain expertise, we could not justify why SARG places two contents in a single generated Snort rule. Nevertheless, these challenges had no negative implications during the testing of the generated rules. Regardless, we intend to extend this research to establish means of solving these research challenges.

Analysis of the Single and Multiple Instances Security Event Correlations

Likewise, the following sections present the results of numerous experiments that evaluate the consistency and efficiency of the proposed security event correlator (SEC).
Also, the findings presented in the subsequent sections have demonstrated the promising performances of the SEC model, which could significantly mitigate the substantial challenges of managing the vast alerts generated by heterogeneous IDSs. Firstly, Figs. <ns0:ref type='figure' target='#fig_2'>13(A</ns0:ref>) and 13(B) summarized the results of the experiments based on single and multiple instances. For example, Fig. <ns0:ref type='figure' target='#fig_2'>13(A</ns0:ref>) depicts a single sample of one thousand (1000) alerts used to evaluate the effectiveness of the proposed security event correlator (SEC), which obtained an impressive correlation performance of removing 71 irrelevant alerts due to alert duplication. Even though the 71 removed irrelevant alerts might seem insignificant, the authors believed this is an excellent performance considering the dataset sample. Also, it will be fair to state that processing these irrelevant alerts will waste valuable computational and human resources. Similarly, Fig. <ns0:ref type='figure' target='#fig_2'>13(B</ns0:ref>) shows the findings of multiple instances ranging from 1000 to 4000 alerts as the sample sizes for the individual evaluations. Again, the results demonstrated that SEC had achieved a decent correlation performance on the various sets.</ns0:p><ns0:p>For instance, Fig. <ns0:ref type='figure' target='#fig_2'>13(B</ns0:ref>) shows that out of 4000 alerts, SEC efficiently and effectively compressed it to only 3710 alerts by removing a whopping 290 irrelevant alerts that could have unnecessarily exhausted valuable resources of an organization, such a human and computational resources. Likewise, SEC replicates a similar performance for the 2000 and 1000 alert samples, eliminating a vast 250 and 71 irrelevant alerts for both instances, respectively. Therefore, considering the above significant performance of SEC for both single and multiple instances, we can argue that the proposed method could significantly contribute to the solutions of analyzing and managing the massive irrelevant alerts generated by current IDSs.</ns0:p></ns0:div> <ns0:div><ns0:head>Assessment of Single Instance with Multiple Correlation Time (COTIME)</ns0:head><ns0:p>This section presents numerous experiments based on a single number of inputs with varying correlation times (COTIME(s)). It also further assess the performance and consistency of the proposed security event correlator based on alert prioritization that efficiently demonstrated the number of high, medium, and low priority alerts. Furthermore, it shows the correlation time and the correlated alerts using a fixed sample size of 1000 alerts.</ns0:p><ns0:p>The findings presented in Figs. 14 and 15 summarized the evaluation results obtained from four experiments, highlighting some exciting results. For instance, experiments one and two illustrated in Fig. <ns0:ref type='figure' target='#fig_2'>14</ns0:ref> show that using a fixed sample of 1000 alerts and a COTIME of 40 seconds, SEC efficiently identified 71 alerts as correlated alerts, producing a manageable 929 alerts as final relevant alerts. Similarly, out of the 929 alerts, SEC effectively categorized a considerable 334 alerts as high priority alerts, 209 alerts as medium alerts, and 386 low priority alerts. Likewise, experiment two presented in Fig. 
<ns0:ref type='figure' target='#fig_2'>14</ns0:ref> indicates that increasing the number of COTIME to 60 seconds results in even better findings, such as identifying a massive 117 alerts as correlated alerts, thereby leading to only 883 alerts as the final output alerts. Unlike experiment one, only 163 alerts were categorized as medium priority alerts, while 334 and 386 were recorded for high and low alerts, respectively. Moreover, Fig. <ns0:ref type='figure' target='#fig_2'>15</ns0:ref> further validates that an increase in COTIME is a crucial factor in the performance of the SEC. For example, increasing the COTIME to 90 and 120 seconds results in more correlated alerts like 124 and 128 alerts for 90 and 120 seconds, respectively. Also, experiments three and four have effectively identified 157 and 154 alerts as medium alerts. The final output alerts for the two experiments are 876 and 872 alerts, respectively, with a low priority alert of 385. However, the values for high priority alerts for all these four experiments presented in Figs. 14 and 15 recorded the same values. Nonetheless, we have conducted a series of experiments to validate these similarities.</ns0:p><ns0:p>Similarly, the results illustrated in Figs. <ns0:ref type='figure' target='#fig_2'>16 and 17</ns0:ref> shows that SEC obtained impressive findings such as an increase of 133 and 165 correlated alerts for experiments five and six with a COTIME of 180 and 300 seconds, respectively. Again, this further validates that COTIME significantly correlates with the number of correlated outputs. Moreover, SEC categorized 150 and 131 alerts as medium alerts, while 383 and 374 alerts were low priority alerts with only 867 and 835 final output alerts for experiments five and six. Nevertheless, like the previous experiments, the values for high priority remain as 334 alerts for both experiments, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>16</ns0:ref>.</ns0:p><ns0:p>Finally, and most importantly, Fig. <ns0:ref type='figure' target='#fig_2'>17</ns0:ref> presents some notable and exciting findings that reveal interesting correlations among the chosen factors of the conducted experiments. For instance, experiment seven obtained a massive 474 correlated alerts due to a 600 seconds increase of COTIME. As a result leading to a much proportionate distribution of alerts into various categories like 223 high priority alerts, 227 low priority alerts, a negligible 78 medium alerts with only 526 final output alerts. Similarly, SEC achieves exciting results when COTIME is 900 seconds, such as a manageable final output of 424 alerts dues to the vast 576 correlated alerts. Also, experiment eight presented in Fig. <ns0:ref type='figure' target='#fig_2'>17</ns0:ref> effectively and efficiently categorized the final output into 178, 65, and 181 alerts for high, medium, and low priority, respectively. As a result, it would be fair to conclude that SEC can significantly assist the network or system administrators with a considerably simplified analysis of alerts. Moreover, based on the results presented above, it is self-evident that while COTIME increases, the number of output alerts decreases. 
Therefore, we can conclude that the number of correlated alerts and COTIME are directly proportional to each other.</ns0:p></ns0:div> <ns0:div><ns0:head>Assessment of the Time Factor on Alert Correlations</ns0:head><ns0:p>Similarly, this section meticulously conducted more experiments to evaluate the correlation of time factors and the performance of the proposed unique approach (SEC). Considering that the amount of COTIME has significantly influenced the outcomes of alert correlation achieved by SEC, this section intends to validate this apparent relationship.</ns0:p><ns0:p>The results illustrated in Figs. 18 and 19present the evaluation performance of correlation time against the correlated alerts with some interesting findings. For instance, Fig. <ns0:ref type='figure' target='#fig_2'>18</ns0:ref> shows the time lag or time complexity on static COTIME with multiple instances ranging from 1000 to 4000 alerts. Moreover, time complexity or lag is the time difference between various sample inputs. Similarly, Fig. <ns0:ref type='figure' target='#fig_2'>18</ns0:ref> confirms a continuous increase in the time (COTIME) as the value of the inputs increases, which validates the results presented in Figs. 15, 16, and 17. For instance, increasing the inputs to 1500, 2000, and 2500 alerts has increased COTIME to 42, 45, and 48 seconds, respectively. Likewise, the sample of 3500 and 4000 alerts recorded 52 and 55 seconds of COTIME, respectively. Based on these performances, the authors have concluded that COTIME is a very significant factor in SEC's better performance, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>18</ns0:ref> and preceding evaluations. Moreover, Fig. <ns0:ref type='figure' target='#fig_2'>19</ns0:ref> shows a more compelling assessment of COTIME with correlated alerts. For instance, there is a continuous rise in correlation time and the correlated alerts, like it takes 40 seconds COTIME to correlate 71 correlated alerts. Also, the increase of COTIME to 90, 120, and 182 seconds has achieved 117, 124, and 128 correlated alerts, respectively. Furthermore, the considerable rise of COTIME to 300, 600, and 900 seconds has acquired some interesting findings such as 165, 474, and 576 correlated alerts, which is a significant and impressive performance for SEC. Based on the above results, it will be fair to conclude that a massive increase in COTIME could lead to reliable and effective alert correlation. However, this could also lead to the demand for more processing power and other similar burdens. Irrespective of these challenges, SEC has achieved its objective of significantly minimizing the massive irrelevant alerts generated by heterogeneous IDSs. Also, SEC has enabled easy analysis of alerts, which has always been a considerable challenge for system and network administrators.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Data security has been a massive concern over the past decades due to the considerable hightech progression that has positively influenced human society in many aspects. However, the illegal mining of data due to the vulnerabilities of security mechanisms has enabled malicious users to compromise and exploit the integrity of existing systems, thereby causing colossal havoc to individuals, governments, and even private sectors. Consequently, this leads to the necessity to deliver reliable and effective security mechanisms by harnessing various techniques to design, develop and deploy optimized IDSs, for example, Snort-based IDS. 
Moreover, it is proven that most of the large data leaks are caused by internal employees or users with the necessary credentials, leading to the need for customized security framework solutions that can significantly minimize such persistent challenges. Nonetheless, existing studies have shown that manually creating Snort rules, which is highly stressful, costly, and error-prone, remains a challenge.</ns0:p><ns0:p>Therefore, this paper proposed a practical and inclusive approach comprising a Snort Automatic Rule Generator and a Security Event Correlator, abbreviated SARG-SEC. Firstly, this paper provides a solid and sound theoretical background for both Snort and alert correlation concepts to equip the readership with the essential ideas needed to understand the presented research content. Additionally, this paper designed and deployed an efficient and reliable approach (SARG) to augment the success of Snort and significantly minimize the stress of manually creating Snort rules, thereby enabling network administrators to generate efficient and trustworthy Snort rules quickly. Moreover, SARG utilizes the contents of various pcap files as live attacks to automatically generate effective and optimized Snort rules that meet all the criteria of the Snort rule syntax as described in the existing literature <ns0:ref type='bibr' target='#b45'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. The results presented in this paper have achieved impressive performances in the auto-generation of Snort rules, albeit with little knowledge of how the contents of the rules are generated. Furthermore, the auto-generated Snort rules could serve as a starting point for turning Snort into a content defense method that considerably lessens data leakages.</ns0:p><ns0:p>Moreover, this paper posits an optimized and consistent Security Event Correlator (SEC) that considerably alleviates the current massive challenges of managing the immense alerts engendered by heterogeneous IDSs. This paper evaluated SEC based on single and multiple instances of raw alerts with consistent and impressive results. For example, utilizing a single sample of 1000 alerts and multiple instances of 1000 to 4000 alerts, SEC effectively and efficiently identified 71 and 290 alerts as correlated alerts, thereby meaningfully contributing to the solutions of the colossal challenges of the evaluation and management of the massive irrelevant alerts generated by current IDSs. Furthermore, this paper uses a single instance of 1000 alerts with varying COTIME to further measure the performance and stability of SEC based on alert prioritization that competently revealed the number of high, medium, and low priority alerts. For example, SEC achieved excellent performances on a single instance of 1000 alerts, such as 133 and 165 correlated alerts for experiments five and six with a COTIME of 180 and 300 seconds, respectively. Furthermore, SEC identified 383 and 374 low priority alerts, whereas 150 and 131 alerts were medium alerts, with only 867 and 835 final output alerts for experiments five and six.</ns0:p><ns0:p>Lastly, to further confirm the significant correlation between the number of correlated outputs and COTIME, experiments seven and eight present exciting results that demonstrate some intuitive relationships amongst these chosen factors.
For instance, an increase of COTIME to 600 and 900 seconds shows a much balanced and acceptable alert prioritization into numerous categories like final output alerts of only 526, 227 low priority alerts, 223 high priority alerts, and a negligible 78 medium alerts. Likewise, SEC accomplishes exciting results when COTIME is to 900 seconds. For example, a convenient final output of 424 alerts due to the vast 576 correlated alerts with alert prioritization of 178, 65, and 181 alerts for high, medium, and low priority. Based on the above findings, it would be fair to conclude that SARG-SEC can considerably support or serve as a more simplified alert analysis framework for the network or system administrators. Also, it can serve as an efficient tool for auto-generating Snort rules. As a result, SARG-SEC could considerably alleviate the current challenges of managing the vast generated alerts and the manual creation of Snort rules.</ns0:p><ns0:p>However, notwithstanding the decent performance of the SARG-SEC, it has some apparent downsides that still necessitate some enhancements. For instance, the auto-generation of consistent sid values of the respective rules and why SARG has two contents in a single generated Snort rule. Similarly, we acknowledged that the sample sizes of alerts and the apparent relationship of correlated alerts with COTIME could challenge our findings. Nevertheless, we aim to extend this research to establish means of solving these research challenges. In the future, we intend to: (i) Extend SARG's functionality to effectively and efficiently generate consistent sid values for the auto-generated Snort rules and establish concepts of how SARG auto-generated contents of the rules. (ii) Investigate the apparent relationship of COTIME and correlated alerts, design and implement solutions to how we can correlate a larger sample size of alerts within an acceptable time frame to mitigate the need for unceasing computing resources. (iii) finally evaluate SEC within a live network environment instead of pcap files and autogenerate reliable and efficient Snort rules using the knowledge of the proposed anomaly IDS <ns0:ref type='bibr' target='#b34'>(Jaw &amp; Wang, 2021)</ns0:ref>. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67287:1:0:NEW 7 Jan 2022) Manuscript to be reviewed Computer Science o This paper also provides solid theoretical background knowledge for the readership of the journal to clearly understand the fundamental functions and capabilities of Snort and various correlation methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>demonstrated that SARG has successfully auto-generated a Snort PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 A</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> </ns0:body> "
"Wang Xueming College of Computer Science and Technology, Guizhou University (GZU), Guiyang, Guizhou, China; Mobile No: +8615685100757 January 6th, 2022 Dear Prof. Dr. Kathiravan Srinivasan, We want to take the opportunity to express our sincere gratitude for allowing us to resubmit a revised draft of the manuscript 'A novel hybrid-based approach of Snort Automatic Rule Generator and Security Event Correlation (SARG-SEC)' for consideration by the PeerJ Computer Science. Similarly, we sincerely appreciated the insightful, excellent, comprehensive, and most importantly, valuable feedback, comments, and reviewers' suggestions, which has significantly helped us identify some of the manuscript's drawbacks. As a result, we have entirely changed the manuscript, including the recommendation made by the Editor and various reviewers, thereby enabling an easy understanding and more coherence. We used the Microsoft tracking functionality with 'blue' indicating the changes made and the 'red' is the deletion. Consequently, some of the changes we made are discussed in the subsequent sections: Firstly, we have completed a comprehensive edit and restructuring of the manuscript with additional references for every highlighted background information based on the reviewers' recommendation. Also, we have connected all the required information to enable more coherence as suggested, which has significantly improved the manuscript. We believe this will allow the readers to quickly get all the essential details and references for further reading and easily understand the manuscript. Moreover, we have presented the results by providing new figures with appropriate content for easy understanding. Similarly, we have provided a complete enhanced content of the whole manuscript. The conclusion section offers a summarized point-by-point explanation of the fundamental objectives of the manuscript with the required methods and achieved results. Moreover, we have clearly stated the drawbacks of the manuscript as suggested, and we conclude with future work on how to improve the proposed approach. Lastly, we have comprehensively addressed all the valuable comments and suggestions of the reviewers with a detailed response as discussed below. Therefore, we believe that the manuscript is now suitable for publication in PeerJ Computer Science. Lastly, thank you so much, Prof. Dr. Kathiranvan Srinivasan, for your consideration, valuable time, and meaningful contribution to improving the manuscript. Sincerely, WangXueming Prof. Dr. Wang Xueming of Computer Science and Technology (Guizhou University) On behalf of all authors Reviewer 1 (Anonymous): Basic Reporting: [Reviewer 1's Comments]: The paper is a little bit long and wordy and exaggerated in some sections. [Reviewer 1's Comments]: Literature is insufficient to show the disadvantages of existing automatic snort rules generation and the need for current work filling the gap. [Authors' Response]: Firstly, we would like to thank you for your valuable comments. We agreed that the manuscript is a bit long and wordy. As a result, we have provided a complete edit and restructuring of the paper to simplify the wordy sections for easy readability and ensure we used the appropriate words to avoid exaggerations, as mentioned in the comments. Although we tried very hard to condense the manuscript, it is unfortunate that we could not summarize it any longer. 
Therefore we respectfully request that we maintain the length of the paper because we believe every included content is significant for communicating our findings. Similarly, we downloaded all the recommended papers and carefully read and evaluated the contents. We agree that the initial draft lacks some essential references. However, the current submission includes some crucial references that provide interesting and challenging solutions. The revised submission also consists of the contributions that justify the proposed solutions (SARG-SEC). However, it is essential to note that automatic Snort rule generation is not very popular research compared to cutting-edge solutions like contemporary artificial intelligence challenges. Therefore, the paper could not provide a complete section for the literature review, but it has included all the necessary references. We hope this will reasonably be enough to address the above comments. Experimental Design: [Reviewer 1's Comments]: There is no proper explanation or discussion of attack types under which SARG-SEC is evaluated. [Authors' Response]: Once again, thanks for pointing out such a crucial observation, and we agree it is a reasonable and vital contribution. Unfortunately, based on the length of the manuscript as mentioned above, we omitted the attacks used to evaluate SARG-SEC. However, the revised manuscript now includes a reference (Lippmann et al., 2000) that provided a descriptive and intriguing challenge of documenting the utilized offline IDS dataset used to evaluate the proposed solutions (SARG-SEC). Therefore, interested readers can access the provided references for the various included attacks and how they are generated. We hope this would be sufficient because documenting all the attacks would lengthen the manuscript, which is already a concern for the authors. Validity of the findings: [Reviewer 1's Comments]: Comparison of manual rules results Vs. Auto-generated snort rules results. [Authors' Response]: We agreed it might be beneficial to include the above comments in our manuscript. Nevertheless, considering the volume of the work and the length of the pages, we are humbly requesting for this to be waived because including these contents will make the manuscript to be more than 40 pages, and that could discourage potential readers, thereby compromising our objectives of reaching a broader scope of readership. However, we have already provided supplementary files that include the auto-generated rules and the codes to reproduce the work for interested readers. Similarly, the manual Snort rules can be easily accessible by potential readers on many online platforms. Therefore, we ask for your kind consideration and understanding in waiving the inclusion of this content based on the reasons mentioned. Thank you so much for the time and excellent suggestion. Reviewer 3 (Anonymous): Basic Reporting: [Reviewer 3's Comments]: The introductory part can be further enhanced (Statistics of NIDS and Cyber incidents). [Reviewer 3's Comments]: How the paper is differentiated with respect to other works. [Reviewer 3's Comments]: Provide further clarification of our work. [Authors' Response]: We would like to genuinely thank you for suggesting this improvement because we have realized many drawbacks while enhancing the manuscript. As a result, we have entirely restructured the Introduction with additional references for every highlighted background information, thereby adding more consistency and ease to follow. 
Also, we have provided all the necessary statistics of NIDS and Cyber incidents with examples from excellent pieces of literature. Finally, we have connected all the required information to enable more coherence as suggested, which has significantly improved the manuscript. We believe this will allow the readers to quickly get all the essential details and references for further reading and facilitate an easy understanding of the manuscript. Moreover, unlike many existing research pieces that focus on correlating alerts using specific predefined alerts and attacks with a ready-made tool, we have designed and built our tools or proposed method from scratch to correlate alerts efficiently and effectively. Also, our approach has an alert prioritization functionality, which has been overlooked in many existing pieces of research. Similarly, the auto-generator (SARG) uses packet (pcap files) contents to simulate live attacks to generate efficient Snort rules. According to the authors' knowledge, such an approach has never been presented in any existing works that we have consulted. Lastly, we have presented the results in the revised submission by providing new figures with appropriate content for easy understanding. Also, the descriptions of the various figures have been completely altered regarding a few grammatical and contextual errors. Furthermore, we have provided a detailed overview of the SARG schematic diagram that provides an easy understanding. Hopefully, these improvements will add significant value to the manuscript for readability and easy understanding, thereby further clarifying the findings. Experimental Design: [Reviewer 3's Comments]: Inclusion of similar works based on the indicative references (if applicable). [Reviewer 3's Comments]: Why do we use Snort instead of Suricata or other signature-based NIDS? [Authors' Response]: Once again, thanks for providing the links to such vital references regarding our work. Accordingly, we have downloaded all recommended papers, read and evaluated their contents, and all the articles entail essential scientific findings. Nevertheless, some of them do not fit the context of our research. Regardless, we have satisfactorily included six (6) of the recommended papers that best provide the context of the proposed article. Furthermore, we have to acknowledge that reading these references did help us improve the manuscript and expand our understanding of various concepts related to the manuscript. Therefore, we again thank you so much for raising such excellent points. Irrespective of the availability of other signature-based NIDS, such as Suricata and Zeek (Bro), this work adopted Snort because it is the leading open-source NIDS with active and excellent community support. Likewise, it is easy to install and run with readily available online resources. Furthermore, because Suricata can use Snort's rulesets but is prone to false positives with the need for an intensive system and network resource, we concluded to use Snort for the proposed solutions. Validity of the findings: [Reviewer 3's Comments]: Comparison of SARG-SEC with other works (IDS and SIEM) using some baseline metrics. [Authors' Response]: While we significantly value this suggestion, we respectfully state that SARG-SEC is not a complete IDS or SIEM solution. 
Nonetheless, we completely understand what you mean, and our answers or justifications are as follows: The SARG is an auto-rule generator that facilitates the effortless generation of Snort rules without many human interferences, which is just a component of a signature-based NIDS, specifically Snort. Additionally, we have provided a comprehensive description of all the procedures of auto-generating the Snort rules and the generated rules as discussed within the manuscript, demonstrating the proposed solutions' decent findings. Moreover, unlike the work presented in (Jaw & Wang, 2021), where we use standard baseline metrics such as accuracy, FAR, F1-Score, and detection rate, there are no universal baseline metrics for comparing auto-rule generators. Nonetheless, the provided supplementary materials justify the findings presented in this manuscript. Finally, the objective of SEC was not to provide all the sets of solutions offered by most current SIEM frameworks, instead to design and implement an event correlator which is just a component of contemporary SIEM solutions. Therefore, it will not be possible to coin a based line metrics that would qualify the comparison of SEC with the current SIEM models. Nevertheless, as discussed in the manuscript, SARG-SEC has successfully fulfilled all its objectives of auto-generating effective Snort rules, correlating, and prioritizing the alerts. As a result, we respectfully state that this comment might not be feasible for the proposed solutions of this manuscript. Additional Comments: [Reviewer 3's Comments]: Complete re-check of the entire manuscript for grammatical and typos. [Authors' Response]: We agreed with the need for a complete re-check of the manuscript. Consequently, we have provided an enhanced version for submission that correlates to the whole manuscript's results and content. Furthermore, we presented detailed clarifications of the essential purpose of the manuscript with the utilized methods and achieved findings. Equally, the revised manuscript specified the drawbacks of the manuscript with future work on improving the proposed approaches. Finally, all the grammatical and typos within the manuscript have been taken care of, adding significant value to the revised version. Reviewer 4 (Raghav Verma): Basic Reporting: [Reviewer 4's Comments]: The need for another round of proofreading of the grammar within this article. [Reviewer 4's Comments]: Enhanced the resolutions and quality of figures within the manuscript. [Authors' Response]: Thank you for these comments; it has inspired us to restructure the manuscript entirely. As a result, we have provided a fix for every messy and unclear part of the manuscript as suggested, and we hope it will now be much easier to follow and understand. Finally, we have enhanced all the resolutions of the figures by exporting them in 300 DPI, which immensely improves the quality of the figures within the manuscript for final publication. Likewise, we have also provided quality figures with better color choices than the initial submission. In conclusion, we would like to take the opportunity to express our gratitude to the Editor and the Reviewers for their valuable and meaningful contribution to the manuscript. We look forward to hearing from you very soon. Thank you so much, Prof. Dr. Kathiranvan Srinivasan and all the reviewers. "
Here is a paper. Please give your review comments after reading it.
362
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The rapid advanced technological development alongside the Internet with its cutting-edge applications has positively impacted human society in many aspects. Nevertheless, it equally comes with the escalating privacy and critical cybersecurity concerns that can lead to catastrophic consequences, such as overwhelming the current network security frameworks. Consequently, both the industry and academia have been tirelessly harnessing various approaches to design, implement and deploy intrusion detection systems (IDSs) with event correlation frameworks to help mitigate some of these contemporary challenges. There are two common types of IDS: signature and anomalybased IDS. Signature-based IDS, specifically, Snort works on the concepts of rules.</ns0:p><ns0:p>However, the conventional way of creating Snort rules can be very costly and error-prone. Also, the massively generated alerts from heterogeneous anomaly-based IDSs is a significant research challenge yet to be addressed. Therefore, this paper proposed a novel Snort Automatic Rule Generator (SARG) that exploits the network packet contents to automatically generate efficient and reliable Snort rules with less human intervention. Furthermore, we evaluated the effectiveness and reliability of the generated Snort rules, which produced promising results. In addition, this paper proposed a novel Security Event Correlator (SEC) that effectively accepts raw events (alerts) without prior knowledge and produces a much more manageable set of alerts for easy analysis and interpretation. As a result, alleviating the massive false alarm rate (FAR) challenges of existing IDSs. Lastly, we have performed a series of experiments to test the proposed systems. It is evident from the experimental results that SARG-SEC has demonstrated impressive performance and could significantly mitigate the existing challenges of dealing with the vast generated alerts and the labor-intensive creation of Snort rules.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The advent of the Internet has come with the cost of wide-scale adoption of innovative technologies such as cloud computing <ns0:ref type='bibr' target='#b1'>(Al-Issa et al., 2019)</ns0:ref>, artificial intelligence <ns0:ref type='bibr' target='#b57'>(Miller, 2019)</ns0:ref>, the Internet of Things (IoT) (J. H. <ns0:ref type='bibr' target='#b70'>Park, 2019)</ns0:ref>, and vast ranges of web-based applications. Therefore, leading to considerable security and privacy challenges of managing these cuttingedge applications using traditional security and privacy protection mechanisms such as firewall, anti-virus, virtual private networks (VPNs), and anti-spyware <ns0:ref type='bibr' target='#b56'>(Meryem &amp; Ouahidi, 2020)</ns0:ref>. However, due to the vast range of competitive solutions such as higher efficiencies, scalability, reduced costs, computing power, and most importantly, the delivery of services, these technologies continue to revolutionize various aspects of our daily lives drastically, for instance, in the health care systems, research industry, the global business landscape, government, and private sectors <ns0:ref type='bibr' target='#b98'>(Xue &amp; Xin, 2016)</ns0:ref>.</ns0:p><ns0:p>Moreover, countless types of cyber-attacks have evolved dramatically since the inception of the Internet and the swift growth of ground-breaking technologies. 
For example, social engineering or phishing <ns0:ref type='bibr' target='#b43'>(Kushwaha et al., 2017)</ns0:ref>, zero-day attack <ns0:ref type='bibr' target='#b34'>(Jyothsna &amp; Prasad, 2019)</ns0:ref>, malware attack <ns0:ref type='bibr' target='#b55'>(McIntosh et al., 2019)</ns0:ref>, denial of service (DoS) <ns0:ref type='bibr' target='#b94'>(Verma &amp; Ranga, 2020)</ns0:ref>, unauthorized access of confidential and valuable resources <ns0:ref type='bibr' target='#b76'>(Saleh et al., 2019)</ns0:ref>. Additionally, according to the authors of <ns0:ref type='bibr' target='#b68'>(Papastergiou et al., 2020)</ns0:ref>, a nation's competitive edge in the global market and national security is currently driven by harnessing these efficient, productive, and highly secure leading-edge technologies with intelligent and dynamic means of timely detection and prevention of cyberattacks. Nevertheless, irrespective of the tireless efforts of security experts in defense mechanisms, hackers have always found ways to get away with targeted resources from valuable and most trusted sources worldwide by launching versatile, sophisticated, and automated cyber-attacks. As a result, causing tremendous havoc to governments, businesses, and even individuals <ns0:ref type='bibr' target='#b77'>(Sarker et al., 2020)</ns0:ref>.</ns0:p><ns0:p>For instance, the authors of <ns0:ref type='bibr' target='#b14'>(Dama&#353;evi&#269;ius et al., 2021)</ns0:ref> intriguingly review various cyberattacks and their consequences. Firstly, the paper highlights the estimated 6 trillion USD of cyber-crimes by 2021 and the diverse global ground-breaking cyber-crimes that could lead to the worldwide loss of 1 billion USD. Finally, it highlights a whopping 1.5 trillion USD of cyberattack revenues resulting from two to five million computers compromised daily. Furthermore, according to published statistics of AV-TEST Institute in Germany, during the year 2019, there were more than 900 million malicious executables identified among the security community, and data breach costing 8.19 million USD for the United States, predicted to grow in subsequent years. Moreover, the Congressional Research Service of the USA has highlighted that cybercrime-related incidents have cost the global economy an annual loss of 400 billion USD.</ns0:p><ns0:p>Similarly, 2016 alone recorded more than a whopping 3 billion zero-day attacks and approximately 9 billion stolen data records since 2013 <ns0:ref type='bibr' target='#b37'>(Khraisat et al., 2019)</ns0:ref>. In addition, the energy sector in Ukraine suffered catastrophic coordinated cyberattacks (APT) that led to a significant blackout affecting more than 225,000 people. They also highlighted similar alarming APT threats, such as DragonFly, TRITON, and Crashoverride, that could cause devasting consequences to individual lives and the global economy, thereby leading to national security threats <ns0:ref type='bibr' target='#b29'>(Grammatikis et al., 2021)</ns0:ref>. Accordingly, it is essential for security experts to design, implement and deploy robust and efficient cybersecurity frameworks to alleviate the current and subsequent alarming losses for the government and private sectors. 
Additionally, it is an urgent and crucial challenge to effectively identify the increasing cyber incidents and cautiously protect these relevant applications from such cybercrimes.</ns0:p><ns0:p>Therefore, the last few decades have witnessed Intrusion Detection Systems (IDSs) increasing in popularity due to their inherent ability to detect an intrusion or malicious activities in real-time. Consequently, making IDSs critical applications to safeguard numerous networks from malicious activities <ns0:ref type='bibr' target='#b16'>(Dang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b56'>Meryem &amp; Ouahidi, 2020)</ns0:ref>. Finally, James P. Anderson claimed credit for the inception of the IDS concept in his paper written in 1980 <ns0:ref type='bibr' target='#b4'>(Anderson, 1980)</ns0:ref>, highlighting various methods of enhancing computer security threat monitoring and surveillance.</ns0:p><ns0:p>Intrusion Detection is the procedure of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusion,' similarly, an intrusion is an attempt to bypass the security mechanisms of a network or a computer system, thereby compromising the Confidentiality, Integrity, and Availability (CIA) <ns0:ref type='bibr' target='#b35'>(Kagara &amp; Md Siraj, 2020)</ns0:ref>. Moreover, an IDS is any piece of hardware or software program that monitors diverse malicious activities within computer systems and networks based on network packets, network flow, system logs, and rootkit analysis <ns0:ref type='bibr' target='#b8'>(Bhosale et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b49'>H. Liu &amp; Lang, 2019)</ns0:ref>. Misused detection (knowledge or signature-based) and anomaly-based methods are the two main approaches to detecting intrusions within computer systems or networks. Nevertheless, the past decade has witnessed the rapid rise of the hybrid-based technique, which typically exploits the advantages of the two methods mentioned above to yield a more robust and effective system <ns0:ref type='bibr' target='#b76'>(Saleh et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Misused IDS (MIDS) is a technique where specific signatures of well-known attacks are stored and eventually mapped with real-time network events to detect an intrusion or intrusive activities. The MIDS technique is reliable and effective and usually gives excellent detection accuracy, particularly for previously known intrusions. Nevertheless, this approach is questionable due to its inability to detect novel attacks. Also, it requires more time to analyze and process the massive volume of data in the signature databases <ns0:ref type='bibr' target='#b39'>(Khraisat et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b52'>Lyu et al., 2021)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b30'>(Jabbar &amp; Aluvalu, 2018)</ns0:ref> presented an exceptional high-level SIDS architecture, which includes both distributed and centralized modules that effectively enhanced the protection of IoT networks against internal and external threats. Furthermore, the authors exploit the Cooja simulator to implement a DoS attack scenario on IoT devices that rely on version number modification and 'Hello' flooding. 
Finally, the authors claimed that these attacks might influence specific IoT devices' reachability and power consumption.</ns0:p><ns0:p>In contrast, the anomaly-based detection method relies on a predefined network behavior as the crucial parameter for identifying anomalies and commonly operates on statistically substantial network packets. For instance, incoming network packets or transactions are accepted within the predefined network behavior. Otherwise, the anomaly detection system triggers an alert of anomaly <ns0:ref type='bibr' target='#b35'>(Kagara &amp; Md Siraj, 2020)</ns0:ref>. It is essential to note that the main design idea of the anomaly detection method is to outline and represent the usual and expected standard behavior profile through observing activities and then defining anomalous activities by their degree of deviation from the expected behavior profile using statistical-based, knowledge-based, and machine learning-based methods <ns0:ref type='bibr' target='#b34'>(Jyothsna &amp; Prasad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Khraisat et al., 2020)</ns0:ref>. The acceptable network behavior can be learned using the predefined network conditions, more like blocklists or allowlists that determine the network behavior outside a predefined acceptable range. For instance, 'detect or trigger an alert if ICMP traffic becomes greater than 10% of network traffic' when it is regularly only 8%.</ns0:p><ns0:p>Finally, the anomaly-based approach provides a broader range of advantages such as solid generalizability, the ability to determine internal malicious activities, and a higher detection rate of new attacks such as the zero-day attack. Nevertheless, the most profound challenge is the need for these predefined baselines and the substantial number of false alarm rates resulting from the fluctuating cyber-attack landscape <ns0:ref type='bibr' target='#b19'>(Einy et al., 2021)</ns0:ref>. For instance, <ns0:ref type='bibr' target='#b26'>(Fitni &amp; Ramli, 2020)</ns0:ref> intelligently used logistic regression, decision tree, and gradient boosting to propose an optimized and effective anomaly-based ensemble classifier. The authors claimed impressive findings such as 98.8% performance accuracy, 98.8%, 97.1%, and 97.9% precision, recall, and F1-score.</ns0:p><ns0:p>The hybrid-based intrusion detection systems (HBIDS) exploit the functionality of MIDS to detect well-known attacks and flag novel attacks using the anomaly method. High detection rate, accuracy, and fewer false alarm rates are some of the main advantages of this approach <ns0:ref type='bibr' target='#b39'>(Khraisat et al., 2020)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b39'>(Khraisat et al., 2020)</ns0:ref> suggested an efficient and lightweight hybridbased IDS that mitigates the security vulnerabilities of the Internet of Energy (IoE) within an acceptable time frame. The authors intelligently exploit the combined strengths of K-means and SVM and utilize the centroids of K-means to enhance the process of training and testing the SVM model. 
Moreover, they selected the best value of 'k' and fine-tuned the SVM for best anomaly detection and claimed to have drastically reduced the overall detection time and impressive performance accuracy of 99.9% compared to current cutting-edge approaches.</ns0:p><ns0:p>Additionally, the two classical IDS implementation methods are Network Intrusion Detection Systems (NIDS) <ns0:ref type='bibr' target='#b58'>(Mirsky et al., 2018)</ns0:ref> and Host-based IDS (HIDS) <ns0:ref type='bibr' target='#b7'>(Aung &amp; Min, 2018)</ns0:ref>. A HIDS detection method monitors and detects internal attacks using the data from audit sources and host systems like firewall logs, database logs, application system audits, window server logs, and operating systems <ns0:ref type='bibr' target='#b37'>(Khraisat et al., 2019)</ns0:ref>. In contrast, NIDS is an intrusion detection approach that analyses and monitors the entire traffic of computer systems or networks based on flow or packet-based and tries to detect and report anomalies. For example, the distributed denial of service (DDoS), denial of service (DoS), and other suspicious activities like internal illegal access or external attacks <ns0:ref type='bibr' target='#b62'>(Niyaz et al., 2015)</ns0:ref>. Unlike HIDS, NIDS usually protects an entire network from internal or external intrusions. However, such a process can be very timeconsuming, high computational cost, and very inefficient, especially in most current cutting-edge technologies with high-speed communication systems.</ns0:p><ns0:p>Nevertheless, this approach still has numerous advantages <ns0:ref type='bibr' target='#b9'>(Bhuyan et al., 2014)</ns0:ref>. For instance, it is more resistant to attacks compared to HIDS. Furthermore, it monitors and analyses the complete network's traffic if appropriately located in a network, leading to a high probability of detection rate. Also, NIDS is platform-independent, thus enabling them to work on any platform without requiring much modification. Finally, NIDS does not add any overhead to the network traffic <ns0:ref type='bibr' target='#b66'>(Othman et al., 2018)</ns0:ref>. Snort is a classic example of NIDS <ns0:ref type='bibr' target='#b74'>(Sagala, 2015)</ns0:ref>. Irrespective of the availability of other signature-based NIDS, such as Suricata and Zeek (Bro) <ns0:ref type='bibr'>(Ali, Shah, &amp; Issac, 2018)</ns0:ref>, this work adopted Snort because it is the leading open-source NIDS with active and excellent community support. Likewise, it is easy to install and run with readily available online resources. Furthermore, because Suricata can use Snort's rulesets but is prone to false positives with the need for an intensive system and network resource, we concluded to use Snort for the proposed solutions.</ns0:p><ns0:p>Snort is a classical open-source NIDS that has the unique competence of performing packet logging and real-time network traffic analysis within computer systems and networks using content searching, matching, and protocol analysis <ns0:ref type='bibr' target='#b0'>(Aickelin et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b89'>Tasneem et al., 2018)</ns0:ref>. Therefore, significantly contributing to the protection of some major commercial networks. Snort is a signature-based IDS that detects malicious live Internet or network traffic utilizing the predefined Snort rules, commonly applied in units of packets' header, statistical information (packet size), and payload information. 
Thus, it has a unique feature of high detection rate and accuracy <ns0:ref type='bibr' target='#b89'>(Tasneem et al., 2018)</ns0:ref>. However, it cannot detect novel attacks and, at the same time, it requires expert knowledge to create and update rules frequently, which is both costly and faulty.</ns0:p><ns0:p>Similarly, IDS have emerged as popular security frameworks that significantly minimized various cutting-edge cyber-attacks over the past decade. Anomaly-based IDS has gained quite a buzz among network and system administrators for monitoring and protecting their networks against malicious attempts, which has achieved phenomenal success, especially in detecting and protecting systems and networks against novel or zero-day attacks <ns0:ref type='bibr' target='#b87'>(Tama et al., 2019)</ns0:ref>. However, it comes with costly negative consequences of generating thousands or millions of false-positive alerts or colossal amounts of complex data for humans to process and make timely decisions <ns0:ref type='bibr' target='#b80'>(Sekharan &amp; Kandasamy, 2018)</ns0:ref>. As a result, administrators ignore these massive alerts, which creates room for potential malicious attacks against highly valued and sensitive information within a given system or network.</ns0:p><ns0:p>Accordingly, the past years have seen a growing interest in designing and developing network security management frameworks from academia and industry, which involves analyzing and managing the vast amount of data from heterogeneous devices, commonly referred to as event correlation. Event correlation has significantly mitigated modern cyberattacks challenges using its unique functionality of efficiently and effectively analyzing and making timely decisions from massive heterogeneous data (G. <ns0:ref type='bibr' target='#b83'>Suarez-Tangil et al., 2009)</ns0:ref>. Consequently, the past years have seen many researchers and professionals exploit the efficiency of event correlation techniques to address the problems mentioned earlier <ns0:ref type='bibr' target='#b17'>(Dwivedi &amp; Tripathi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b25'>Ferebee et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b85'>Guillermo Suarez-Tangil et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Nevertheless, this field of research is at its infant stage as minimal work is done to address these issues. Likewise, according to the authors' knowledge, none of the above solutions provides comprehensive solutions that address the manual creation of Snort rules and the event correlation as a single solution. Based on the above challenges, this paper proposed two effective and efficient approaches to address the problems associated with the manual creation of Snort rules and mitigating the excessive false alarm rates generated by current IDSs. First, we present an automatic rule creation technique that focuses on packet header and payload information. Generally, we need to find standard features by examining all the network traffic to create a rule. Nonetheless, this method is inefficient and requires much time to complete the rule, and the accuracy of the rules made is variable according to the interpreters' ability. Therefore, various automatic rule (signature) generation methods have been proposed <ns0:ref type='bibr' target='#b74'>(Sagala, 2015)</ns0:ref>.</ns0:p><ns0:p>However, most of these methods are used between two specific strings, which is still challenging for creating reliable and effective Snort rules. 
Therefore, we present a promising algorithm based on the content rules, enabling the automatic and easy creation of Snort rules using packet contents. Secondly, we equally proposed a novel model that efficiently and effectively correlates and prioritizes IDS alerts based on the severity using various features of a network packet. Moreover, the proposed system does not need prior knowledge while comparing two different alerts to measure the similarity in diverse attacks. The following are an overview of the main contributions of this research work:</ns0:p><ns0:p>o The authors proposed an optimized and efficient Snort Automatic Rule Generator (SARG) that automatically generates reliable Snort rules based on content features.</ns0:p><ns0:p>o Similarly, we present a novel Security Event (alert) Correlator (SEC) that drastically and effectively minimized the number of alerts received for convenient interpretation.</ns0:p><ns0:p>o This paper also provides solid theoretical background knowledge for the readership of the journal to clearly understand the fundamental functions and capabilities of Snort and various correlation methods.</ns0:p><ns0:p>o Finally, the proposed approach has recorded an acceptable number of alerts, which directly correlates with significantly mitigating the challenges of false alarm rates.</ns0:p><ns0:p>The rest of the paper is organized as follows: Section 2 discusses important background concepts. Similarly, Section 3 explains the materials and methods of the proposed system. Next, section 4 highlights the results and discussions of the proposed approach. Finally, the paper concludes in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>ESSENTIAL CONCEPTS</ns0:head><ns0:p>This section presents brief essential concepts that support the work in this research paper, which will provide readers with the necessary knowledge to appreciate this research and similar results better.</ns0:p></ns0:div> <ns0:div><ns0:head>Synopsis of HIDS and NIDS functionalities</ns0:head><ns0:p>The crucial advantages of a host-based IDS are; the ability to detect internal malicious attempts that might elude a NIDS, the freedom to access already decrypted data compared to NIDS, and the ability to monitor detect advanced persistent threats (APT). In contrast, some of its disadvantages are; firstly, they are expensive as it requires lots of management efforts to mount, configure and manage. It is also vulnerable to specific DoS attacks and uses many storage resources to retain audit records to function correctly (M. <ns0:ref type='bibr'>Liu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b79'>Saxena et al., 2017)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b5'>(Arrington et al., 2016)</ns0:ref> use the innovative strength of machine learning such as artificial immune systems to present an interesting host-based IDS. Finally, they claimed to have achieved a reasonable detection rate through rescinding out the noise within the environment.</ns0:p><ns0:p>The central idea of any classical NIDS is using rulesets to identify and alert malicious attempts. The majority of the NIDS comes with pre-installed rules that can be modified to target specific attacks <ns0:ref type='bibr' target='#b65'>(Ojugo et al., 2012)</ns0:ref>. For example, creating a rule for a possible probing attack and saving it in the local.rules of Snort IDS will ensure an alert is raised whenever an intrusion is initiated that matches the rule. 
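As a hedged illustration of what such a local.rules entry could look like (the message text, thresholds, and sid below are placeholders chosen for exposition, not rules shipped with Snort), a probing attempt such as a ping sweep might be flagged as follows:

```
# Hypothetical local.rules entry: raise an alert when a single source sends
# ten or more ICMP echo requests to the protected network within five seconds.
alert icmp any any -> $HOME_NET any (msg:"Possible ICMP probe (ping sweep)"; \
    itype:8; detection_filter:track by_src, count 10, seconds 5; \
    classtype:attempted-recon; sid:1000101; rev:1;)
```

Once an entry of this kind is saved in local.rules and local.rules is included in snort.conf, Snort raises the alert whenever traffic matches the rule, exactly as described above.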
Furthermore, the essential functionalities of a classical NIDS are: practical identification and alerting of policy violations, suspicious unknown sources, destination network traffics, port scanning, and other common malicious attempts. However, the bulk of NIDS requires costly hardware with expensive enterprise solutions, making it hard to acquire <ns0:ref type='bibr' target='#b20'>(Elrawy et al., 2018)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b63'>(Nyasore et al., 2020)</ns0:ref> presented an intriguing challenge of evaluating the overlap among various rules in two rulesets of the Snort NIDS. However, the work failed to assess the distinction between diverse rulesets explicitly.</ns0:p><ns0:p>Consequently, the work presented in <ns0:ref type='bibr' target='#b82'>(Sommestad et al., 2021)</ns0:ref> provides an interesting empirical analysis of the detection likelihood of 12 Snort rulesets against 1143 misuse attempts to evaluate their effectiveness on a signature-based IDS. Similarly, they listed certain features as the determining factor of the detection probability. Finally, they claimed impressive results such as a significant 39% raise of priority-1-alerts against the misuse attempts and 69-92% performance accuracy for various rulesets.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> illustrates a standard representation of a HIDS and NIDS architecture with unique functionalities in detecting malicious activities. For instance, Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows a malicious user (attacker) who initiated a DDoS attack against one of the internal servers within the LAN. However, due to the internal security mechanisms, packets are inspected by the firewall as the first layer of protection. Interestingly, some malicious packets can bypass the firewall due to the cutting-edge attack mechanisms, necessitating NIDS <ns0:ref type='bibr' target='#b12'>(Bul'ajoul et al., 2015)</ns0:ref>. Therefore, the NIDS receives the packets and does a further packet inspection. If there are any malicious activities, the packets are blocked and returned to the firewall. Then, the firewall will drop the packets or notify the network administrator, depending on the implemented policies. It is crucial to note that the same applies to the outgoing packets from the LAN to the WAN.</ns0:p><ns0:p>In contrast, the scenario depicted at the top of Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows a regular user requesting web services. Initially, the request goes via the same process as explained above. Then, if the NIDS qualifies the requests, it is sent to the server through the network switch to the webserver. Furthermore, the server responds with the required services, which goes through the same process as the reverse. However, this process is not shown in the diagram. Lastly, Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> also presents the architecture of HIDS as labeled on individual devices, which administers further packet inspection within the host <ns0:ref type='bibr' target='#b96'>(Vokorokos &amp; Bal&#225;&#381;, 2010)</ns0:ref>, thereby enhancing the security level of a given network or computer system.</ns0:p></ns0:div> <ns0:div><ns0:head>Summary of Snort and its components</ns0:head><ns0:p>Snort is a popular and influential cross-platform lightweight signature-based network intrusion detection and prevention system with multiple packet tools. 
The power of Snort lies in the use of rules, some of which are preloaded, but users can also design customized rules to merely send alerts or block specific network traffic when the specified criteria are met. Additionally, alerts can be sent to a console or displayed on a graphical user interface, but they can also be logged to a file for further analysis. Finally, Snort also provides configuration options for logging alerts to databases such as MySQL and MongoDB or sending an email to a specified responsible person if there are alerts or suspicious attempts <ns0:ref type='bibr' target='#b3'>(Ali, Shah, Khuzdar, et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Moreover, Snort has three running modes: sniffer, packet logger, and network-based IDS mode <ns0:ref type='bibr' target='#b89'>(Tasneem et al., 2018)</ns0:ref>. The sniffer mode is run from the command line, and its primary function is simply to inspect packet headers and print them on the console. For instance, ./Snort -vd will instruct Snort to display packet data with its headers. The packet logger mode inspects packets and logs them into a file in the root directory, which can then be viewed using tcpdump, Snort, or other applications for further analysis. For example, ./Snort -dev -l ./directory_name will prompt Snort to enter packet logger mode and log the packets in the given directory; if directory_name does not exist, Snort will exit and throw an error. Finally, the network-based IDS mode utilizes the embedded rules to determine any potential intrusive activities within a given network. Snort does this with the help of the network interface card (NIC) running in promiscuous mode to intercept and analyze real-time network traffic. For instance, the command ./Snort -dev -l ./log -h 172.162.1.0/24 -c snort.conf will prompt Snort to log network packets that trigger the specified rules in snort.conf, where snort.conf is the configuration file that applies all the rules to the incoming packets to detect any malicious attempt <ns0:ref type='bibr' target='#b89'>(Tasneem et al., 2018)</ns0:ref>.</ns0:p><ns0:p>It is equally important to note that Snort also offers real-time packet logging, content matching, and searching with protocol analysis. It also has the advantage of serving as a prevention tool instead of just a monitoring tool <ns0:ref type='bibr' target='#b3'>(Ali, Shah, Khuzdar, et al., 2018)</ns0:ref>. However, it does come with some notable shortcomings. For example, the lack of a polished graphical user interface makes using Snort somewhat difficult. Additionally, very high traffic volumes can compromise the reliable operation of Snort, which motivates the use of PF_RING and Hyperscan to reinforce Snort for efficiency and reliability. Finally, and most importantly, caution needs to be exercised in creating Snort rules to avoid the challenge of a high false alarm rate (FAR) (W. <ns0:ref type='bibr' target='#b71'>Park &amp; Ahn, 2017)</ns0:ref>. Snort comprises five logical components that determine and classify potential malicious attacks or any undesired threat against computer systems and networks <ns0:ref type='bibr' target='#b3'>(Ali, Shah, Khuzdar, et al., 2018)</ns0:ref>.
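For orientation, the three invocation modes described above can be wrapped in a few lines of Python; the binary path and log directory are assumptions that would need to match a local installation, while the command lines themselves mirror the examples quoted in the text.

import subprocess

SNORT = "./Snort"  # assumed path to the Snort binary

# Command lines mirroring the examples quoted above.
MODES = {
    "sniffer":       [SNORT, "-vd"],
    "packet_logger": [SNORT, "-dev", "-l", "./log"],
    "nids":          [SNORT, "-dev", "-l", "./log",
                      "-h", "172.162.1.0/24", "-c", "snort.conf"],
}

def run_mode(name: str) -> int:
    """Launch Snort in the requested mode and return its exit code."""
    return subprocess.run(MODES[name]).returncode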
Finally, interested readers can refer to the following references for the details of the Snort components <ns0:ref type='bibr' target='#b24'>(Essid et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b60'>Mishra et al., 2016;</ns0:ref><ns0:ref type='bibr'>Shah &amp; Issac, 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Snort Rule Syntax</ns0:head><ns0:p>Snort utilizes a flexible, lightweight, and straightforward rule language; in versions prior to 1.8, each rule had to be written on a single line. Current Snort versions allow a rule to span multiple lines, provided a backslash (\) is added at the end of each line. Generally, Snort rules are composed of two logical parts: the rule header and the rule options <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. All Snort rules share the same rule-option sections, namely the general options, payload detection options, non-payload detection options, and post-detection options, although these can be specified differently depending on the configuration approach. Finally, Snort rules are generally applied to protocol headers at the application, transport, and network layers, such as FTP, HTTP, ICMP, IP, UDP, and TCP <ns0:ref type='bibr' target='#b13'>(Chanthakoummane et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b41'>Khurat &amp; Sawangphol, 2019)</ns0:ref>. However, they can also be applied to the packet payload, which is the approach adopted for SARG.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> presents a classical representation of Snort rules <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. The first rule shown in Table <ns0:ref type='table'>1</ns0:ref> triggers an alert for ICMP traffic from any source IP address and port number to any destination IP address and port number if the traffic payload contains the string 'probe'. When triggered, it displays the message 'probe attack', and the signature ID of this rule is 1000023. Similarly, the second rule is almost the same as the first, except that the action is 'log' instead of 'alert' and the destination port number is 80 instead of 'any'.</ns0:p></ns0:div> <ns0:div><ns0:head>The Rule Header</ns0:head><ns0:p>The Snort rule header comprises the specified action, protocol, source and destination addresses, and port numbers. The basic Snort actions are alert, log, and pass; the action is a required field for every Snort rule and defaults to alert if not specified explicitly. Additionally, drop, reject, and sdrop are available as options for inline mode <ns0:ref type='bibr' target='#b13'>(Chanthakoummane et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b41'>Khurat &amp; Sawangphol, 2019)</ns0:ref>.</ns0:p><ns0:p>The protocol field within the rule header is required and usually defaults to IP; it also supports the UDP, TCP, and ICMP options. The source IP is an optional field and, by default, is set to any, as indicated in Table <ns0:ref type='table'>1</ns0:ref>. However, it also supports a single IP address such as 172.168.10.102 or a CIDR block such as 172.168.10.0/16, which specifies a range of IP addresses. Likewise, the source port field allows a single port number or a range of port numbers, and the destination IP and port fields behave in the same way as the source IP and port fields.
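To make the two-part anatomy above concrete, the following sketch assembles the first Table 1 rule from its header fields and an option dictionary; it is an illustration of the syntax only, using the action, protocol, content, and SID quoted in the text.

def build_rule(action, proto, src_ip, src_port, dst_ip, dst_port, **options):
    """Compose a Snort rule string from its header fields and option keywords."""
    header = f"{action} {proto} {src_ip} {src_port} -> {dst_ip} {dst_port}"
    body = " ".join(f'{k}:{v};' for k, v in options.items())
    return f"{header} ({body})"

# First rule from Table 1: alert on any ICMP traffic whose payload contains "probe".
rule = build_rule("alert", "icmp", "any", "any", "any", "any",
                  msg='"probe attack"', content='"probe"', sid=1000023, rev=1)
print(rule)
# alert icmp any any -> any any (msg:"probe attack"; content:"probe"; sid:1000023; rev:1;)

Changing the action to log and the destination port to 80 reproduces the second Table 1 rule in the same way.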
Finally, the direction of the monitored traffic is specified using the directional operators (-&gt;, and &lt;-), and the monitoring of source to destination (-&gt;) is the most common practice <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> presents the rule header components with typical examples. For example, the first example will trigger an alert for traffic from any source address with various port numbers up to and including 1024 denoted (:1024), which is going to 192.168.100.1, and ports that are greater than and including port 600 denoted (600:). Finally, the second and last examples of Table <ns0:ref type='table'>2</ns0:ref> are almost the same as the first except for the change in specific fields.</ns0:p></ns0:div> <ns0:div><ns0:head>The Rule Options</ns0:head><ns0:p>The rule option unit comprises the detection engine's central functionalities yet provides complete ease of use with various strengths and flexibility <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. However, this segment can only be processed if all the preceding details have been matched. Furthermore, since this unit generally requires a vast amount of processing resources and time, it is recommended to limit the scope of the rules using only the necessary fields to enable real-time processing without packet drops. Therefore it is recommended to only use the message, content, and SID field for writing efficient and reliable rules. The semicolon (;) separates the rule options while the option keywords are separated using the colon (:). Finally, this unit comprises four main classes, but this paper will only summarize the general and payload detection options. For instance, the general options have no significant effect during detection but merely provide statistics about the rule, whereas the payload inspects the packet data. The general rule and payload detection options include numerous parameters available to Snort users for rule creation <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. However, only the relevant ones are selected to understand the work proposed in this paper, and Table <ns0:ref type='table'>3</ns0:ref> presents the selected options with a typical example for each.</ns0:p></ns0:div> <ns0:div><ns0:head>Snort Configurations and Rule Files</ns0:head><ns0:p>Snort provides a rich scope of customizable configuration options for effective deployment and day-to-day operations. Since Snort consists of vast configuration options, this section will only highlight the necessary options to understand this work easily. Generally, the snort.conf contains all the Snort configurations, and it includes the various customizable settings and additional custom-made rules. The snort.conf is a sample and default configuration file shipped with the Snort distribution. However, users can use the -c command line switch to specify any name for the configuration file, such as /opt/snort/Snort -c /opt/snort/myconfiguration.conf. Nevertheless, snort.conf is the conventional name adopted by many users <ns0:ref type='bibr' target='#b22'>(Erlacher &amp; Dressler, 2020)</ns0:ref>. Likewise, the configuration file can also be saved in the home directory as .snortrc but using the configuration file name as a command-line is the common practice with advantages. 
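Building on the configuration conventions just described, a minimal sketch of adding a custom rules file and asking Snort to validate the resulting configuration could look as follows; the file paths are placeholders, and the -T self-test switch is assumed to be available in the installed Snort version.

import subprocess
from pathlib import Path

CONF = Path("/etc/snort/snort.conf")          # assumed configuration path
INCLUDE = "include $RULE_PATH/myrules.rules"  # convention described in the text

def ensure_include_and_check() -> bool:
    """Add the include line if missing, then run Snort's configuration self-test."""
    text = CONF.read_text() if CONF.exists() else ""
    if INCLUDE not in text:
        CONF.write_text(text.rstrip("\n") + "\n" + INCLUDE + "\n")
    # Snort exits with a non-zero code if any newly added rule is malformed.
    result = subprocess.run(["snort", "-T", "-c", str(CONF)])
    return result.returncode == 0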
Moreover, it also enables using variables for convenience during rule writing, such as defining a variable for HOME_NET within the configuration file like var <ns0:ref type='bibr'>HOME_NET 192.168.10</ns0:ref>.0/24. The preceding example enables the use of HOME_NET within various rules, and when a change is needed, the rule writer only needs to change the variable's value instead of changing all the written rules. For instance, var <ns0:ref type='bibr'>HOME_NET [192.168.1.0/24,</ns0:ref><ns0:ref type='bibr'>192.168.32.64/26]</ns0:ref> and var EXTERNAL_NET any <ns0:ref type='bibr' target='#b60'>(Mishra et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Lastly, the rules configuration also enables the Snort users to create numerous customized rules using the variables within the configuration files and add them to the snort.conf file. The general convention is to have different Snort rules in a text file and include them within the snort.conf using the include keyword like include $RULE_PATH/myrules.rules, which permits the inclusion of the rules within myrules.rules to the snort.conf file during the next start of snort. Also, users can use the Snort commenting syntax (#) in front of a specified rule or the rule file within the Snort configuration file to manually disable the Snort rule or the entire class of rules.</ns0:p><ns0:p>Similarly, Snort parsed all the newly added rules during a Snort startup to activate any newly added rules. However, if there are any errors within the newly added rules, Snort will exit with an error, necessitating the correct and consistent writing of Snort rules <ns0:ref type='bibr' target='#b22'>(Erlacher &amp; Dressler, 2020;</ns0:ref><ns0:ref type='bibr' target='#b60'>Mishra et al., 2016)</ns0:ref>. Finally, Figs. S1 and S2 are the standard representations of the Snort configuration and rule files.</ns0:p></ns0:div> <ns0:div><ns0:head>Synopsis of Alert Correlation</ns0:head><ns0:p>Alert Correlation is a systematic multi-component process that effectively analyzes alerts from various intrusion detection systems. Its sole objective is to provide a more concise and PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67287:2:0:NEW 30 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science high-level view of a given computer system or network, and it can prioritize IDSs alerts based on the severity of the attack. Additionally, it plays a significant role for the network administrators to effectively and efficiently differentiate between relevant and irrelevant attacks within a given network, resulting in reliable and secured networks <ns0:ref type='bibr' target='#b93'>(Valeur et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head><ns0:p>The section briefly discusses the methodologies and procedures used to design and implement the proposed Snort Automatic Rule Generator using a sequential pattern algorithm and the Security Event Correlation, abbreviated SARG-SEC. 
It is worthy to note that the term event is the same as alerts generated during intrusion detection.</ns0:p></ns0:div> <ns0:div><ns0:head>Snort Automatic Rule Generation (SARG)</ns0:head><ns0:p>Irrespective of the significant numbers of available literature that use other approaches of automating rule generation with remarkable success <ns0:ref type='bibr' target='#b45'>(Li et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b65'>Ojugo et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sagala, 2015)</ns0:ref>, there is still a need for an effective and optimized auto-rule generator. According to the authors' knowledge, none of the existing literature uses the proposed approach in this paper. Therefore, this paper presents an automation method of generating content-based Snort rules from collected traffic to fill this research gap. The following sections provide a concise description of the design and implementation procedures.</ns0:p></ns0:div> <ns0:div><ns0:head>Content Rule Extraction Algorithm</ns0:head><ns0:p>Snort rules can indicate various components, but this research targets rules that only include header and payload information. The steps demonstrated in the examples below are performed to generate the Snort rules automatically. For each first host, the application and service to be analyzed or the traffic of malicious code is collected. Moreover, packets with the same transmission direction are combined to form a single sequence in a single flow. Finally, the sequence is used as an input of the sequential pattern algorithm to extract the content <ns0:ref type='bibr' target='#b73'>(Pham et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Action Protocol SourceIP SourceProt -&gt; DestinationIP DestinationPort (Payload) <ns0:ref type='bibr'>offset:6;</ns0:ref><ns0:ref type='bibr'>depth:24)</ns0:ref> The sequential pattern algorithm finds the candidate content while increasing the length starting from the candidate content whose length is 1 in the input sequence and finally extracts the content having a certain level of support. Nevertheless, the exclusive use of only packet or traffic contents for rule creation can lead to significant false positives, thereby undermining the effectiveness of the created rules and increasing the potential risks against computer systems and networks. Therefore, additional information is analyzed and described in the rule to enable efficient and effective rules against malicious attacks. As a result, this paper used the content and header information of the network traffic or packets as supplementary information, thereby significantly enhancing the reliability of the automatically generated rules at the end of the process. Moreover, by applying the extracted content to input traffic, traffic matching the content is grouped, and the group's standard header and location information is analyzed <ns0:ref type='bibr' target='#b74'>(Sagala, 2015)</ns0:ref>. Finally, the auto-generated Snort rules are applied to the network equipment with Snort engine capability.</ns0:p></ns0:div> <ns0:div><ns0:head>Sequence Configuration Steps</ns0:head><ns0:p>The sequence is constructed by only extracting the payloads of the packets divided into forward and backward directions of the flow. Moreover, the proposed approach will generate two sequences for flows that consist of two-way communication packets, whereas the unidirectional communication traffic generates a single sequence. 
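Before the sequence set is formalized in equations (1) and (2) below, a minimal sketch of this per-flow, per-direction construction may help; the packet representation (a tuple of host id, direction, and payload bytes) is an assumption made for illustration, and each host is treated as a single flow for simplicity.

from collections import defaultdict

def build_sequences(packets):
    """Concatenate payloads per (host, direction) into sequences.

    `packets` is an iterable of (host_id, direction, payload) tuples, where
    direction is 'forward' or 'backward'; a two-way flow therefore yields two
    sequences and a one-way flow yields one.
    """
    buffers = defaultdict(bytearray)
    for host_id, direction, payload in packets:
        buffers[(host_id, direction)] += payload
    return [{"host_id": host_id, "sequence": bytes(seq)}
            for (host_id, _direction), seq in buffers.items()]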
Finally, it is worthy to note that a sequence set consists of several sequences denoted S as shown in equation 1, and a single sequence is made up of host ID and string as shown in equation 2. SequenceSet = {S 1 , S 2 , ...., S s }</ns0:p><ns0:p>(1) S i = {host_id, &lt;a 1 a 2 a 3 , ...., a n &gt;}</ns0:p><ns0:p>(2)</ns0:p></ns0:div> <ns0:div><ns0:head>Content Extraction Step</ns0:head><ns0:p>A sequence set and a minimum support map are input in the content extraction step and extract content that satisfies the minimum support. This algorithm improves the Apriori algorithm, which finds sequential patterns in an extensive database to suit the content extraction environment. The Content Set, which is the production of the algorithm, contains several contents (C) as shown in Equation <ns0:ref type='formula'>(</ns0:ref>3), and one content is a contiguous substring of a sequence string as shown in Equation ( <ns0:ref type='formula' target='#formula_0'>4</ns0:ref>). Algorithm 1 and Algorithm 2 illustrate producing a content set that satisfies a predefined minimum map from an input sequence set. Moreover, when Algorithm 1 performed the content extraction, the content of length one (1) is extracted from all sequences of the input sequence set and stored in a content set of length 1 (L 1 ) as demonstrated in (Algo.1 Line: 1~5), the content of length 1 starting with the length is increased by 1. Also, the content of all lengths is extracted and stored in its length content set (L k ) as shown in (Algo.1 Line: 6~20).</ns0:p><ns0:formula xml:id='formula_0'>ContentSet = {C 1 , C 2 , &#8230;, C c } (3) C i = {&lt;a x a x +1 &#8230;. a y &gt; | 1 &#8804; x &#8804; y &#8804; n,}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Support = Number of support hosts / total number of hosts (5)</ns0:p><ns0:p>It is worthy to note that the method to be used at C (Algo. 1.0 Line: 18) is the process described in Algorithm 2.0. The contents of the set L k-1 are created by comparing the contents of L k . Furthermore, to make the content of the set L k-1 by combining the contents of the set L k , contents of the set with the same length k-2 content excluding first character and k-2 content excluding the last character are possible as shown at (Algo.2.0 Line: 1~7). For example, abcd and bcde are the contents of the set L4. It has the same bcd except for a, and bcd except for e. Therefore, the content abcde of the set L5 can be created by increasing the length by 1 in the same way as above. Extracting and deleting content below the support level are repeated until the desired outcome.</ns0:p><ns0:p>The final step checked for the content inclusion relationship of all lengths extracted. If we find the content in the inclusion relationship, the content is deleted from the set as demonstrated in (Algo.1.0). Then, we deliver the final created content set to the next step. Moreover, the SequenceSet consisting of traffic from 3 hosts and minimum support of 0.6 is passed as an input. The minimum support rating of 0.6 means that since the total number of hosts is 3, the content must be observed in traffic generated by at least two hosts. Finally, all length one (1) content is extracted when the algorithm is executed.</ns0:p><ns0:p>Furthermore, Algorithm 3.0 would have been given content and packet set when it represents analyzing the location information of the content. The output of this algorithm is offset when matched to a packet in a packet set. 
The matching starts in a bit, byte position, and depth is the matching exit position, meaning the maximum byte position.</ns0:p><ns0:p>The first offset is the maximum size of the packet, and the depth initializes to 0 as shown in (Algorithm 3.0 Line: 1~7), which traverses all packets in packet set and adjusts offset and depth. Moreover, it checks whether the content received matches the packet, and if there exists a match, then the starting byte position is obtained and compared to the current offset. If the value is less than the current offset, the value changes to the current offset. Similarly, in the case of depth, if the value is more significant than the current dept., the current value is obtained using the byte position. Then it changes the value to the current depth (Algo.3 Line: 4~6). Likewise, to analyze the header information of the extracted content, we performed a process similar to the location information.</ns0:p><ns0:p>The analysis steps described above traversed all packets in the packet set and checked for possible matching with the content. If a match exists, it will store the packet's header information. After reviewing all packets, it adds the header information to the content rule if the stored header information has one unique value. Additionally, in IP addresses, the CIDR value is reduced to 32, 24, 16 orders which iterate until the unique value is extracted. Assuming the CIDR value is set to 32, which is the class D IP address range, we will try to find an exclusive value, and if not found, then apply the CIDR value to 24 to find the class C IP address, and this process continues until we obtained the desired values. For instance, if the Destination IP address to which that content is matched is <ns0:ref type='bibr'>CIDR 32,</ns0:ref><ns0:ref type='bibr'>then '111.222.333.1/32' and '111.222.333</ns0:ref>.2/32' are extracted. In contrast, it can also be set to CIDR 24 and extract '111.222.333.0/24'.</ns0:p></ns0:div> <ns0:div><ns0:head>The Schematic Diagram of the Snort Automatic Rule Generator (SARG)</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>2</ns0:ref> presents a comprehensive schematic diagram of SARG. The proposed auto-rule generator (SARG) utilized well-known datasets used by many researchers and security experts to assess various network security frameworks as discussed in <ns0:ref type='bibr' target='#b46'>(Lippmann et al., 2000)</ns0:ref>. Initially, we used different pcap files to simulate live attacks against SARG that facilitates an efficient and effortless generation of reliable Snort rules. It is essential to note that &#960;-1 , &#960;-2 , &#960;-3 ,...,&#960;n , denotes the various utilized pcap files for the auto-rule generation. Consequently, SARG relies on the pcap file contents to automatically generate numerous effective Snort rules without any human intervention, and &#411;-1 , &#411;-2 , &#411;-3 , &#411;-4 ,...,&#411;n represents the various auto-generated Snort rules. Next, the Snort.conf file is updated based on the auto-generated rules. In addition, any device with a Snort engine denoted as &#1150; can use these auto-generated rules against incoming traffic represented as D-1 ,..., Dn to trigger alerts for malicious attempts that meet all the criteria of the rules. Finally, this paper does not document the generated alerts due to the volume of the work. 
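For readers who want a concrete picture of the extraction core before consulting the supplementary code, the sketch below combines the two steps described above: candidate byte substrings are grown while they stay above the minimum support (0.6 in the three-host example), and each surviving content is then scanned against the packet set to derive its offset and depth. The data structures (sequences as produced by the earlier sketch, packets as payload byte strings) are simplifying assumptions, not the exact Algorithms 1-3 of the paper.

def extract_contents(sequences, min_support=0.6):
    """Grow byte substrings length by length, keeping those seen in enough hosts."""
    hosts = {s["host_id"] for s in sequences}

    def support(sub):
        supporting = {s["host_id"] for s in sequences if sub in s["sequence"]}
        return len(supporting) / len(hosts)

    current = {s["sequence"][i:i + 1]
               for s in sequences for i in range(len(s["sequence"]))}
    current = {c for c in current if support(c) >= min_support}
    kept = set(current)
    while current:
        # L(k) -> L(k+1): extend each surviving substring by one observed byte
        longer = {s["sequence"][i:i + len(c) + 1]
                  for c in current for s in sequences
                  for i in range(len(s["sequence"]) - len(c))
                  if s["sequence"][i:i + len(c)] == c}
        current = {c for c in longer if support(c) >= min_support}
        kept |= current
    # discard any content fully contained in a longer kept content
    return {c for c in kept if not any(c != d and c in d for d in kept)}

def analyze_location(content, packets):
    """Offset/depth analysis: offset = earliest match start, depth = latest match end."""
    offset, depth = max(len(p) for p in packets), 0
    for payload in packets:
        pos = payload.find(content)
        if pos >= 0:
            offset = min(offset, pos)
            depth = max(depth, pos + len(content))
    return offset, depth

With three hosts and min_support=0.6, a content must appear in traffic from at least two hosts to survive, matching the worked example given above.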
However, the provided supplementary materials contain the code and pcap files with which any interested researcher can reproduce the results.</ns0:p></ns0:div> <ns0:div><ns0:head>Overview and Significance of Alert Correlation</ns0:head><ns0:p>Intrusion detection systems have recently attracted tremendous interest from researchers due to their inherent ability to detect malicious attacks in real time <ns0:ref type='bibr' target='#b92'>(Vaiyapuri &amp; Binbusayyis, 2020;</ns0:ref><ns0:ref type='bibr' target='#b101'>Zhou et al., 2020)</ns0:ref>. In addition, they have significantly mitigated the security challenges that came with ground-breaking technologies such as the Internet of Things (IoT) <ns0:ref type='bibr' target='#b94'>(Verma &amp; Ranga, 2020)</ns0:ref>, big data, and artificial intelligence <ns0:ref type='bibr' target='#b91'>(Topol, 2019)</ns0:ref>. However, despite these contributions, an IDS may still generate thousands of irrelevant alerts daily, which complicates the task of security administrators in distinguishing between essential and nonessential alerts such as false positives.</ns0:p><ns0:p>In addressing these security challenges, specifically for IDSs, the research community has produced a substantial body of literature with remarkable achievements in reducing the high false alarm rates of current cutting-edge systems <ns0:ref type='bibr' target='#b54'>(Mahfouz et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b101'>Zhou et al., 2020)</ns0:ref>. Nevertheless, irrespective of these achievements, there is still a need for more efficient and effective approaches to mitigate the high false alarm rates of earlier proposed systems <ns0:ref type='bibr' target='#b32'>(Jaw &amp; Wang, 2021)</ns0:ref>. Therefore, this paper presents a novel alert correlation model that correlates and prioritizes IDS alerts. The following sections provide a succinct description of the design and implementation process of the proposed security event correlation model.</ns0:p></ns0:div> <ns0:div><ns0:head>The Proposed Security Events Correlation (SEC)</ns0:head><ns0:p>The SEC model efficiently and effectively correlates the alerts generated by the IDS, which significantly mitigates the massive false alarm rates. Furthermore, it does not need prior knowledge while comparing different alerts to measure their similarity across various attacks. Unlike other correlation approaches that usually follow specific standards and procedures <ns0:ref type='bibr' target='#b93'>(Valeur et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b99'>Zhang et al., 2019)</ns0:ref>, we propose a model that entails the same processes but with different approaches, as shown in the overview of the proposed model below.</ns0:p><ns0:p>The overview of the proposed system involves various phases, namely the monitoring interval, alert preprocessing, alert clustering, correlation, alert prioritization, and the results, as shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. The following sections detail the various phases of SEC with their respective procedures.</ns0:p></ns0:div> <ns0:div><ns0:head>The Monitoring Interval</ns0:head><ns0:p>This paper defines the monitoring interval as the time window configured on the security framework or monitoring model performing the alert correlation. The monitoring interval of SEC is 300 days.
It is essential to note that the long interval of 300 days was chosen because more days mean more alerts to correlate and aggregate, leading to more effective and better-supported results.</ns0:p></ns0:div> <ns0:div><ns0:head>Alert Preprocessing</ns0:head><ns0:p>The alert preprocessing stage of this work involves two sub-processes, namely feature extraction and feature selection. This phase systematically extracts features and their corresponding values from the observed alerts. The similarity index defined below identifies alerts that share the same values for existing features such as attack category, detection time, source IP address, and port number. Consequently, it significantly helps identify new features from the available alerts, thereby overcoming attack duplication. This phase also involves the selection of features for alert correlation. For example, suppose the selected features, such as the attack detection time, category, port number, and source IP address, are the same for one or more alerts. In that case, we use the similarity index to select the relevant alerts and eliminate irrelevant ones such as duplicates and similar instances. Thus, Π denotes the similarity between two alerts, Ѡ represents the alert similarity index, and δ denotes an alert: Π = Ѡ(δa, δb), where δa ≠ δb (6)</ns0:p><ns0:p>Lastly, this section also involves alert scrubbing, which uses the attack type, detection time, source IP address, and detection port to remove incomplete alert data. This process plays a significant role in providing reliable and consistent data.</ns0:p></ns0:div> <ns0:div><ns0:head>Alert Aggregation and Clustering</ns0:head><ns0:p>The existing literature has provided various descriptions of alert aggregation, such as considering alerts to be similar if all their attributes match apart from a small time difference. In contrast, others extended the concept to grouping all alerts with the same root cause by aggregating alerts over various attributes. Moreover, this phase groups similar alerts based on the similarity index of the extracted features. For instance, if the features of alert Ỹ have the same values as alert Ẍ, then this phase will automatically merge these two alerts into a single unified alert by removing the duplicated alert, thereby significantly reducing the number of irrelevant alerts. Finally, the clustered alerts enable an effective and efficient analysis of false positives, leading to reliable and optimized security frameworks.</ns0:p><ns0:p>Lastly, Fig. <ns0:ref type='figure'>4</ns0:ref> demonstrates the alert aggregation process. Firstly, we cluster all the generated alerts, check their similarity using the similarity index defined in the previous sections, and remove all the duplicated alerts.</ns0:p></ns0:div> <ns0:div><ns0:head>Correlation</ns0:head><ns0:p>Based on the existing literature, the primary objective of alert correlation is to identify underlying connections between alerts in order to reconstruct attacks or avoid massive numbers of irrelevant alerts. Furthermore, scenario-based, temporal, statistical, and rule-based correlations are among the most commonly utilized correlation categories in existing research. The approach in this paper meets the attributes of the statistical correlation method, which correlates alerts based on their statistical similarity.
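The feature-based similarity test and duplicate removal described above can be sketched as follows; the alert representation (a small dictionary keyed by the four named features) is an assumption made for illustration, not the paper's internal data model.

MATCH_FEATURES = ("attack_category", "detection_time", "src_ip", "src_port")

def similar(alert_a, alert_b, features=MATCH_FEATURES):
    """Two distinct alerts are similar when all selected feature values match."""
    return alert_a is not alert_b and all(alert_a[f] == alert_b[f] for f in features)

def aggregate(alerts, features=MATCH_FEATURES):
    """Collapse alerts with identical feature values into a single representative."""
    seen, unique = set(), []
    for alert in alerts:
        key = tuple(alert[f] for f in features)
        if key not in seen:
            seen.add(key)
            unique.append(alert)
    # the number of removed (correlated) alerts is len(alerts) - len(unique),
    # i.e. the β in the paper's Ω = π - β relation
    return unique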
Finally, this phase consists of the alert correlation process based on selected features like the IP addresses, attack category, and the detected time of the generated alerts. For example, if the detection time of two independent alerts satisfies the condition of COTIME denoted as &#1120;, then the correlation is performed based on selected features. Fig. <ns0:ref type='figure'>5</ns0:ref> presents the correlation engine with the extracted features for alert correlation. The correlation engine used these features to efficiently correlate alerts that meet the conditions of the &#1120;.</ns0:p></ns0:div> <ns0:div><ns0:head>Alert Prioritization</ns0:head><ns0:p>Alert prioritization is the final phase of the proposed SEC model. It plays a significant role in prioritizing the alert's severity, thereby helping the administrator identify or dedicate existing resources to the most alarming malicious attacks. Although a priority tag is assigned to each Snort rule, as explained earlier, we intend to extend its functionality by embedding an alert prioritization within the proposed SEC model. Alert prioritization is classified into high, medium, and low priority. Firstly, this paper defines the high priority as the alert counts with standard features such as the data fragmentations, source IP address, port number, and attack category.</ns0:p><ns0:p>For instance, if multiple alerts have the same destination port numbers and IP addresses, then we classify these alerts as high-priority alerts. Lastly, the high priority functionality will hugely minimize various challenging cyber-attacks such as DDoS and DoS because it handles the standard techniques utilized in these attacks, like the IP fragmentation attack, which uses the analogy of data fragmentation to attack target systems. Secondly, the medium priority alerts count the number of alerts with shared features such as the same attack category, IP addresses, and destination port number. Finally, low priority alerts are alert counts with standard features like IP addresses, attack category, and destination port but varying values.</ns0:p><ns0:p>The alert prioritization categories with the various standard features discussed above are shown in Fig. <ns0:ref type='figure'>6</ns0:ref>. However, it is essential to note that irrespective of the common features in each category, the above section entails specified conditions that uniquely distinguish them.</ns0:p></ns0:div> <ns0:div><ns0:head>Overview of the Proposed Security Events Correlation (SEC)</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>7</ns0:ref> demonstrates the detailed overview of the proposed SEC. First of all, SEC accepts a collection of raw alerts as an input generated based on a specified monitoring interval set to 300 days to ensure sufficient alerts for practical analysis. Additionally, the alert preprocessing step accepts these raw alerts and performs feature extraction and selection. Furthermore, the alert scrubbing phase uses predefined conditions denoted as &#926;, to check for alert duplicates. If duplicates exist, the scrubbing process will remove all the copies and send the alerts with no duplicates to the correlation engine. Next, the correlation engine decides if the alerts are single or multiple instances using predefined conditions denoted as &lt;=&#1120;. Finally, the alert prioritization phase prioritizes the alerts using the above three categories, and the output phase presents the relevant alerts as the final results. 
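A compact, simplified reading of the COTIME condition and the three priority classes described above is sketched below; the matching conditions are deliberately reduced from the prose (the full conditions also involve fields such as data fragmentation), so this is illustrative rather than a faithful re-implementation, and detection times are assumed to be numeric timestamps.

def within_cotime(alert_a, alert_b, cotime_seconds):
    """COTIME check: two alerts are correlation candidates when their
    detection times fall within the configured window."""
    return abs(alert_a["detection_time"] - alert_b["detection_time"]) <= cotime_seconds

def prioritize(alert, correlated_group):
    """Rough high/medium/low assignment relative to an alert's correlated group."""
    same_destination = [a for a in correlated_group
                        if a is not alert
                        and a["dst_ip"] == alert["dst_ip"]
                        and a["dst_port"] == alert["dst_port"]]
    if same_destination:
        return "high"
    same_category = [a for a in correlated_group
                     if a is not alert
                     and a["attack_category"] == alert["attack_category"]
                     and a["src_ip"] == alert["src_ip"]]
    if same_category:
        return "medium"
    return "low"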
It is worthy to note that the alert outputs are the actual events after removing the duplicates and the alert correlation, as shown in the equation below.</ns0:p><ns0:formula xml:id='formula_1'>&#8486; = &#960;-&#946;<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>Where &#8486; denotes the final output alerts, &#960; represents the total number of alerts, and &#946; denotes the correlated alerts.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>This section presents a comprehensive systematic analysis and performance justification of the proposed models (SARG-SEC). For example, it evaluates how well the SARG can efficiently generate standard and reliable Snort rules by executing SARG against live attacks in existing pcap files. Finally, this section will also highlight the results and performance evaluation of the (SEC) model that significantly mitigates the challenges of the vast alerts generated by the Snort IDS and the earlier proposed feature selection and ensemble-based IDS <ns0:ref type='bibr' target='#b32'>(Jaw &amp; Wang, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of the Generated Snort Rules by the proposed SARG</ns0:head><ns0:p>The authors of <ns0:ref type='bibr' target='#b46'>(Lippmann et al., 2000)</ns0:ref> provided a descriptive and intriguing challenge of documenting an off-line IDS dataset that a wide range of security experts and researchers heavily utilized to assess various security frameworks. Moreover, the paper presents a testbed that used a tcpdump sniffer to produce pcap files used to evaluate the proposed solutions (SARG-SEC). Finally, the supplementary pcap files also entail the various attacks utilized to evaluate SARG-SEC. Accordingly, this section highlights the findings of the various conducted experiments to accurately assess the performance of the SARG framework, as demonstrated in the figures below.</ns0:p><ns0:p>The proposed SARG method has achieved decent performances on the auto-generation of Snort rules, as illustrated in Fig. <ns0:ref type='figure'>8</ns0:ref>. All the findings presented in this section use various pcap files as a simulation of live attacks against the proposed method to generate efficient and effective Snort rules that completely meet all the criteria of the Snort rule syntax, as presented in Table <ns0:ref type='table'>3</ns0:ref>. For instance, Fig. <ns0:ref type='figure'>8</ns0:ref> demonstrated that SARG has successfully auto-generated a Snort rule with the following content: alert udp 10.12.19.101 49680 -&gt; any any (msg: 'Suspicious IP10.12.19.101 and port 49680 detected'; reference:Packet2Snort; classtype:trojan-activity, sid:xxxx, rev:1). Based on the above auto-generated Snort rule, it is self-evident that SARG has produced optimized Snort rules that meet all the criteria of Snort rule syntax as discussed in <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. For example, the above auto-generated rule has produced a descriptive and easy to analyze message of (msg: 'Suspicious IP10.12.19.101 and port 49680 detected ';), which details the reason or cause of the alert. Furthermore, SARG auto-generated another similar Snort rule by setting the source IP address and port number to 'any any' and the destination IP address and port number as 10.12.19.1 and 53, respectively. Lastly, the findings presented in Fig. 
<ns0:ref type='figure'>8</ns0:ref> shows that SARG has auto-generated an impressive and comprehensive Snort rule that meets all standard Snort rule criteria. However, the auto-generated alert uses the HOME_NET variable as the source address and a msg, content, dept, and offset values of (msg: 'Suspicious DNS request for fersite24.xyz. detected'; content:' |01000001000000000000|'; depth:10; offset:2;), respectively. Based on the above performances, the authors concluded that SARG has effectively met all the criteria of creating compelling and optimized Snort rules consisting of numerous general rule parameters and payload detection options <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>.</ns0:p><ns0:p>Moreover, Fig. <ns0:ref type='figure'>9</ns0:ref> presents a series of auto-generated Snort rules using several pcap files as live attacks. All the auto-generated Snort rules shown in Fig. <ns0:ref type='figure'>9</ns0:ref> have completely demonstrated to meet all the standards of Snort rule creation, as discussed in much existing work <ns0:ref type='bibr' target='#b33'>(Jeong et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b41'>Khurat &amp; Sawangphol, 2019)</ns0:ref>. For instance, the use of defined variables like the HOME_NET and ASCI values of ('|05|_ldap|04|_tcp|02|dc|06|_msdcs|0C|moondustries|03';) for the values of 'content:' field of the auto-generated Snort rules. Also, all the automatically generated rules came with autogenerated messages that uniquely and vividly describe the intrusive or abnormal activity whenever the alert is triggered. Consequently, this will significantly help the administrator to be able to easily identify why the alert happened and quickly find solutions to mitigate any malicious activities.</ns0:p><ns0:p>Additionally, all the auto-generated Snort rules are saved in the local.rules to ensure consistency and avoid duplications of rules within the local.rules files, and then included in the snort.conf file. As a result, it provides the administrator with the flexibility to further finetune the generated rules to meet the specific needs of a given computer network or system. Thus, irrespective of the additional efforts of fine-tuning the auto-generated rules, for example, manually updating the sid value of the auto-generated rules and other fields for specificity, SARG has undoubtedly minimized the time taken for Snort rule creation. Also, it can significantly lessen the financial burden of human experts for Snort rule creation, thereby making the proposed method a meaningful tool that would play a significant role in mitigating the escalating cyberattacks.</ns0:p><ns0:p>In conclusion, the results presented in Fig. <ns0:ref type='figure'>8</ns0:ref> and Fig. <ns0:ref type='figure'>9</ns0:ref> have impressively demonstrated the capability of auto-generating Snort rules, which considerably mitigates the need for costly human capacity in creating Snort rules. However, it has downsides that we could not handle, such as automatically generating the consistent sid values of the respective rules. As a result, we came up with a prompt message to remind the users to update the sid value after the autogeneration of the Snort rule. Also, based on the lack of expert domain, we could not justify why SARG has two contents in a single generated Snort rule. Nevertheless, these challenges have no negative implications while validating the auto-generated rules. 
Regardless, we intend to extend this research to establish means of solving these research challenges.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of the Single and Multiple Instances Security Event Correlations</ns0:head><ns0:p>Likewise, the following sections present the results of numerous experiments that evaluate the consistency and efficiency of the proposed security event correlator (SEC). Also, the findings presented in the subsequent sections have demonstrated the promising performances of the SEC model, which could significantly mitigate the substantial challenges of managing the vast alerts generated by heterogeneous IDSs. Firstly, Figs. 10 and 11 summarized the results of the experiments based on single and multiple instances. For example, Fig. <ns0:ref type='figure' target='#fig_0'>10</ns0:ref> depicts a single sample of one thousand (1000) alerts used to evaluate the effectiveness of the proposed security event correlator (SEC), which obtained an impressive correlation performance of removing 71 irrelevant alerts due to alert duplication. Even though the 71 removed irrelevant alerts might seem insignificant, the authors believed this is an excellent performance considering the dataset sample. Also, it will be fair to state that processing these irrelevant alerts will waste valuable computational and human resources. Similarly, Fig. <ns0:ref type='figure' target='#fig_0'>11</ns0:ref> shows the findings of multiple instances ranging from 1000 to 4000 alerts as the sample sizes for the individual evaluations. Again, the results demonstrated that SEC had achieved a decent correlation performance on the various sets.</ns0:p><ns0:p>For instance, Fig. <ns0:ref type='figure' target='#fig_0'>11</ns0:ref> shows that out of 4000 alerts, SEC efficiently and effectively compressed it to only 3710 alerts by removing a whopping 290 irrelevant alerts that could have unnecessarily exhausted an organization's valuable resources, such as human and computational resources. Likewise, SEC replicates a similar performance for the 2000 and 1000 alert samples, eliminating a vast 250 and 71 irrelevant alerts for both instances, respectively. Therefore, considering the above significant performance of SEC for both single and multiple instances, we can argue that the proposed method could significantly contribute to the solutions of analyzing and managing the massive irrelevant alerts generated by current IDSs.</ns0:p></ns0:div> <ns0:div><ns0:head>Assessment of Single Instance with Multiple Correlation Time (COTIME)</ns0:head><ns0:p>This section presents numerous experiments based on a single number of inputs with varying correlation times (COTIME(s)). It also further assess the performance and consistency of the proposed security event correlator based on alert prioritization that efficiently demonstrated the number of high, medium, and low priority alerts. Furthermore, it shows the correlation time and the correlated alerts using a fixed sample size of 1000 alerts.</ns0:p><ns0:p>The findings presented in Figs. 12 and 13 summarized the evaluation results obtained from four experiments, highlighting some exciting results. For instance, experiments one and two illustrated in Fig. <ns0:ref type='figure' target='#fig_0'>12</ns0:ref> show that using a fixed sample of 1000 alerts and a COTIME of 40 seconds, SEC efficiently identified 71 alerts as correlated alerts, producing a manageable 929 alerts as final relevant alerts. 
Similarly, out of the 929 alerts, SEC effectively categorized a considerable 334 alerts as high priority alerts, 209 alerts as medium alerts, and 386 low priority alerts. Likewise, experiment two presented in Fig. <ns0:ref type='figure' target='#fig_0'>12</ns0:ref> indicates that increasing the number of COTIME to 60 seconds results in even better findings, such as identifying a massive 117 alerts as correlated alerts, thereby leading to only 883 alerts as the final output alerts. Unlike experiment one, only 163 alerts were categorized as medium priority alerts, while 334 and 386 were recorded for high and low alerts, respectively. Moreover, Fig. <ns0:ref type='figure' target='#fig_0'>13</ns0:ref> further validates that an increase in COTIME is a crucial factor in the performance of the SEC. For example, increasing the COTIME to 90 and 120 seconds results in more correlated alerts like 124 and 128 alerts for 90 and 120 seconds, respectively. Also, experiments three and four have effectively identified 157 and 154 alerts as medium alerts. The final output alerts for the two experiments are 876 and 872 alerts, respectively, with a low priority alert of 385. However, the values for high priority alerts for all these four experiments presented in Figs. 12 and 13 recorded the same values. Nonetheless, we have conducted a series of experiments to validate these similarities.</ns0:p><ns0:p>Similarly, the results illustrated in Figs. <ns0:ref type='figure' target='#fig_0'>14 and 15</ns0:ref> shows that SEC obtained impressive findings such as an increase of 133 and 165 correlated alerts for experiments five and six with a COTIME of 180 and 300 seconds, respectively. Again, this further validates that COTIME significantly correlates with the number of correlated outputs. Moreover, SEC categorized 150 and 131 alerts as medium alerts, while 383 and 374 alerts were low priority alerts with only 867 and 835 final output alerts for experiments five and six. Nevertheless, like the previous experiments, the values for high priority remain as 334 alerts for both experiments, as shown in Fig. <ns0:ref type='figure' target='#fig_0'>14</ns0:ref>.</ns0:p><ns0:p>Finally, and most importantly, Fig. <ns0:ref type='figure' target='#fig_0'>15</ns0:ref> presents some notable and exciting findings that reveal interesting correlations among the chosen factors of the conducted experiments. For instance, experiment seven obtained a massive 474 correlated alerts due to a 600 seconds increase of COTIME. As a result, it resulted in a much proportionate distribution of alerts into various categories like 223 high priority alerts, 227 low priority alerts, a negligible 78 medium alerts with only 526 final output alerts. Similarly, SEC achieves exciting results when COTIME is 900 seconds, such as a manageable final output of 424 alerts dues to the vast 576 correlated alerts. Also, experiment eight presented in Fig. <ns0:ref type='figure' target='#fig_0'>15</ns0:ref> effectively and efficiently categorized the final output into 178, 65, and 181 alerts for high, medium, and low priority, respectively. As a result, it would be fair to conclude that SEC can significantly assist the network or system administrators with a considerably simplified analysis of alerts. Moreover, based on the results presented above, it is self-evident that while COTIME increases, the number of output alerts decreases. 
Therefore, we can conclude that the number of correlated alerts and COTIME are directly proportional to each other.</ns0:p></ns0:div> <ns0:div><ns0:head>Assessment of the Time Factor on Alert Correlations</ns0:head><ns0:p>Similarly, this section meticulously conducted more experiments to evaluate the correlation of time factors and the performance of the proposed unique approach (SEC). Considering that the amount of COTIME has significantly influenced the outcomes of alert correlation achieved by SEC, this section intends to validate this apparent relationship.</ns0:p><ns0:p>The results illustrated in Figs. 16 and 17 present the evaluation performance of correlation time against the correlated alerts with some interesting findings. For instance, Fig. <ns0:ref type='figure' target='#fig_0'>16</ns0:ref> shows the time lag or time complexity on static COTIME with multiple instances ranging from 1000 to 4000 alerts. Moreover, time complexity or lag is the time difference between various sample inputs. Similarly, Fig. <ns0:ref type='figure' target='#fig_0'>16</ns0:ref> confirms a continuous increase in the time (COTIME) as the value of the inputs increases, which validates the results presented in Figs. 13, 14, and 15. For instance, increasing the inputs to 1500, 2000, and 2500 alerts has increased COTIME to 42, 45, and 48 seconds, respectively. Likewise, the sample of 3500 and 4000 alerts recorded 52 and 55 seconds of COTIME, respectively. Based on these performances, the authors have concluded that COTIME is a crucial factor in SEC's better performance, as shown in Fig. <ns0:ref type='figure' target='#fig_0'>16</ns0:ref> and preceding evaluations.</ns0:p><ns0:p>Moreover, Fig. <ns0:ref type='figure' target='#fig_0'>17</ns0:ref> shows a more compelling assessment of COTIME with correlated alerts. For instance, there is a continuous rise in correlation time and the correlated alerts, like it takes 40 seconds COTIME to correlate 71 correlated alerts. Also, the increase of COTIME to 90, 120, and 182 seconds has achieved 117, 124, and 128 correlated alerts, respectively. Furthermore, the considerable rise of COTIME to 300, 600, and 900 seconds has achieved some interesting findings such as 165, 474, and 576 correlated alerts, which is a significant and impressive performance for SEC. Based on the above results, it will be fair to conclude that a massive increase in COTIME could lead to reliable and effective alert correlation. However, this could also lead to the demand for more processing power and other similar burdens. Irrespective of these challenges, SEC has achieved its objective of significantly minimizing the massive irrelevant alerts generated by heterogeneous IDSs. Also, SEC has enabled easy analysis of alerts, which has always been a considerable challenge for system and network administrators.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Data security has been a massive concern over the past decades due to the considerable hightech progression that has positively influenced human society in many aspects. However, the illegal mining of data due to the vulnerabilities of security mechanisms has enabled malicious users to compromise and exploit the integrity of existing systems, thereby causing colossal havoc to individuals, governments, and even private sectors. Consequently, this leads to the necessity to deliver reliable and effective security mechanisms by harnessing various techniques to design, develop and deploy optimized IDSs, for example, Snort-based IDS. 
Nonetheless, existing studies have shown that manually creating Snort rules, which is highly stressful, costly, and error-prone, remains a challenge. Therefore, this paper proposed a practical and inclusive approach comprising a Snort Automatic Rule Generator and a Security Event Correlator, abbreviated SARG-SEC. Firstly, this paper provides a solid theoretical background on both Snort and alert correlation concepts to give the readership the essential ideas needed to understand the presented research. Additionally, this paper presents an efficient and reliable approach (SARG) to augment the success of Snort, which significantly reduces the effort of manually creating Snort rules. Moreover, SARG utilizes the contents of various pcap files as live attacks to automatically generate optimized and effective Snort rules that meet all the criteria of the Snort rule syntax as described in the existing literature <ns0:ref type='bibr' target='#b41'>(Khurat &amp; Sawangphol, 2019)</ns0:ref>. The results presented in this paper show impressive performance in the auto-generation of Snort rules, albeit with limited insight into how the contents of the rules are generated. Furthermore, the auto-generated Snort rules could serve as a starting point for turning Snort into a content defense method that considerably lessens data leakage.</ns0:p><ns0:p>Moreover, this paper posits an optimized and consistent Security Event Correlator (SEC) that considerably alleviates the current challenges of managing the immense number of alerts generated by heterogeneous IDSs. This paper evaluated SEC on single and multiple instances of raw alerts with consistent and impressive results. For example, utilizing a single sample of 1000 alerts and multiple instances of 1000 to 4000 alerts, SEC effectively and efficiently identified 71 and 290 alerts, respectively, as correlated alerts. Furthermore, this paper uses a single instance of 1000 alerts with varying COTIME to further measure the performance and stability of SEC based on alert prioritization, which competently revealed the number of high, medium, and low priority alerts. For example, SEC achieved excellent performance on a single instance of 1000 alerts, with 133 and 165 correlated alerts for experiments five and six with a COTIME of 180 and 300 seconds, respectively. Furthermore, SEC identified 383 and 374 low priority alerts, whereas 150 and 131 alerts were medium priority alerts, with only 867 and 835 final output alerts for experiments five and six.</ns0:p><ns0:p>Lastly, experiments seven and eight further confirm the significant correlation between the number of correlated outputs and COTIME, presenting results that demonstrate intuitive relationships amongst the chosen factors. For instance, increasing COTIME to 600 and 900 seconds yields a much more balanced and acceptable alert prioritization across the categories; at 600 seconds, for example, there are only 526 final output alerts, comprising 223 high priority alerts, a negligible 78 medium priority alerts, and 227 low priority alerts. Based on the above findings, it would be fair to conclude that SARG-SEC can considerably support, or serve as, a simplified alert analysis framework for network or system administrators. Also, it can serve as an efficient tool for auto-generating Snort rules.
As a result, SARG-SEC could considerably alleviate the current challenges of managing the vast number of generated alerts and the manual creation of Snort rules.</ns0:p><ns0:p>However, notwithstanding the decent performance of SARG-SEC, it has some apparent downsides that still require enhancement, for instance, the auto-generation of consistent sid values for the respective rules and an explanation of why SARG places two content options in a single generated Snort rule. Similarly, we acknowledge that the sample sizes of alerts and the apparent relationship between correlated alerts and COTIME could challenge our findings. Nevertheless, we aim to extend this research to establish means of solving these research challenges. In the future, we intend to: (i) extend SARG's functionality to efficiently generate consistent sid values for the auto-generated Snort rules and establish how SARG automatically generates the contents of the rules; (ii) investigate the apparent relationship between COTIME and correlated alerts and present solutions for correlating a larger sample of alerts within an acceptable time frame to mitigate the need for ever-increasing computing resources; and (iii) evaluate SEC within a live network environment instead of pcap files and auto-generate reliable and efficient Snort rules using the knowledge of the proposed anomaly IDS <ns0:ref type='bibr' target='#b32'>(Jaw &amp; Wang, 2021)</ns0:ref>. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> </ns0:body> "
" Wang Xueming College of Computer Science and Technology, Guizhou University (GZU), Guiyang, Guizhou, China; Mobile No: +8615685100757 January 30th, 2022 Dear Prof. Dr. Kathiravan Srinivasan, We want to take the opportunity to express our sincere gratitude for allowing us to resubmit a revised draft of the manuscript 'A novel hybrid-based approach of Snort Automatic Rule Generator and Security Event Correlation (SARG-SEC)' for consideration by the PeerJ Computer Science. Similarly, we sincerely appreciated the insightful, stress-free, and thorough review process, which has significantly helped us identify some of the manuscript's drawbacks. Likewise, we have made some changes to the manuscript, and we used the Microsoft tracking functionality with 'blue' indicating the changes made and the deletion denoted as 'red.' Accordingly, some of the changes we made are discussed in the subsequent sections: Firstly, due to the length of the manuscript, we have removed some basic or common knowledge and provided references where necessary so that the readership can easily access any information they might need to understand the work presented in the manuscript. However, we made sure that we maintained all the essential required details to enable the readers to get all the vital information quickly. Therefore, we believe the manuscript is now within an acceptable length, attracting more reading from the readership. Similarly, we have provided a complete enhanced manuscript content where necessary. Also, we have delivered improved figures as advised by the publishing officer and the checklist included in the email. Finally, the conclusion section offers a much summarized point-by-point explanation of the fundamental objectives of the manuscript with the required methods and achieved results. Consequently, we believe that the manuscript is now suitable for publication in PeerJ Computer Science. Lastly, thank you so much, Prof. Dr. Kathiranvan Srinivasan, and all the reviewers for your consideration, valuable time, and meaningful contribution to improving the manuscript. Sincerely, WangXueming Prof. Dr. Wang Xueming of Computer Science and Technology (Guizhou University) On behalf of all authors "
Here is a paper. Please give your review comments after reading it.
363
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. It is often the case that only a portion of the underlying network structure is observed in real-world settings. However, most network analysis methods are built on a complete network structure, the natural questions to ask are (a) how well these methods perform with incomplete network structure, (b) which structural observation and network analysis method to choose for a specific task, (c) is it beneficial to complete the missing structure? Methods. In this paper, we consider the incomplete network structure as one random sampling instance from a complete graph, and we choose graph neural networks (GNNs), which have achieved promising results on various graph learning tasks, as the representative of network analysis methods. To identify the robustness of GNNs under graph sampling scenarios, we systemically evaluated six state-of-the-art GNNs under four commonly used graph sampling methods.</ns0:p><ns0:p>[p]Results. We show that GNNs can still be applied under graph sampling scenarios, and simpler GNN models are able to outperform more sophisticated ones in a fairly experimental procedure. More importantly, we find that completing the sampled subgraph does not necessarily improve the performance of downstream tasks; it depends on the dataset itself. Our code is available at https://github.com/weiqianglg/evaluate-GNNs-under-graph-sampling. [p] </ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the last few years, graph neural networks (GNNs) have become standard tools for learning tasks on graphs. By iteratively aggregating information from neighborhoods, GNNs embed each node from its k-hop neighborhood and provides a significant improvement over traditional methods in node classification and link prediction tasks <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Shchur et al. 2018)</ns0:ref>. Powerful representation capabilities have led to GNNs being applied in areas such as social networks, computer vision, chemistry, and biology (Yifan Hou, Jian Zhang, James Cheng, Kaili Ma, Richard T. B. Ma, Hongzhi Chen 2020). However, most GNN models need a complete underlying network structure, which is often unavailable in real-world settings <ns0:ref type='bibr' target='#b30'>(Wei and Hu 2021)</ns0:ref>. Frequently it is the case that only a portion of the underlying network structure is observed, which can be considered as the result of graph sampling <ns0:ref type='bibr' target='#b1'>(Ahmed 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ahmed, Neville, and Kompella 2013;</ns0:ref><ns0:ref type='bibr' target='#b2'>Blagus, &#352;ubelj, and Bajec 2015;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dwivedi et al. 2020;</ns0:ref><ns0:ref type='bibr'>P. Hu and Lau 2013)</ns0:ref>. Graph sampling has become a standard procedure when dealing with massive and time evolving networks <ns0:ref type='bibr' target='#b0'>(Ahmed, Neville, and Kompella 2013)</ns0:ref>. For example, on social networks such as Twitter and Facebook, it is impossible for third-party aggregators to collect complete network data under the restrictions for crawlers, we can only sample them by various different users. Unfortunately, many factors make it difficult to perform multiple graph sampling. 
First, the time consuming, communication networks such as the Internet need hours or days to be probed <ns0:ref type='bibr' target='#b24'>(Ou&#233;draogo and Magnien 2011)</ns0:ref>. Moreover, measuring the network structure is costly, e.g., experiments in biological or chemical networks. Graph sampling scenarios bring an additional challenge for GNNs, and little attention has been paid to the performance of GNN models under graph sampling. In this experimental and analysis paper, we consider the observed incomplete network structure as one random sampling instance from a complete graph , then we address the fundamental &#119866; &#119900; &#119866; problem of GNN performance under graph sampling, in order to lay a solid foundation for future research. Specifically, we investigate the following three questions: Q1: Can we use GNNs if only a portion of the network structure is observed? Q2: Which graph sampling methods and GNN models should we choose? Q3: Can the performance of GNNs be improved if we complete the partial observed network structure?</ns0:p><ns0:p>To answer the above questions, we design a fairly evaluation framework for benchmarking GNNs under graph sampling scenarios by following the principles in <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>. Specifically, we performed a comprehensive evaluation of six prominent GNN models under four different graph sampling methods on eight different datasets with three semi-supervised network learning tasks, i.e., node classification, link prediction and graph classification. The GNN models we implemented include Graph Convolutional Networks (GCN) <ns0:ref type='bibr' target='#b18'>(Kipf and Welling 2017)</ns0:ref>, GraphSage <ns0:ref type='bibr' target='#b12'>(Hamilton, Ying, and Leskovec 2017)</ns0:ref>, MoNet <ns0:ref type='bibr' target='#b23'>(Monti et al. 2017)</ns0:ref>, Graph Attention Network (GAT) <ns0:ref type='bibr' target='#b29'>(Velickovi&#263; et al. 2017)</ns0:ref>, GatedGCN <ns0:ref type='bibr' target='#b3'>(Bresson and Laurent 2017)</ns0:ref>, and Graph Isomorphism Network (GIN) <ns0:ref type='bibr' target='#b31'>(Xu et al. 2018)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#61548; The best GNN model and sampling method are GCN and BFS in small datasets, GAT and RW in medium datasets, respectively. &#61548; In most cases, completing a sampled subgraph is beneficial to improve the performance of GNNs; but completion is not always effective and needs to be evaluated for a specific dataset .</ns0:p><ns0:p>As far as we know, this is the first work to systematically evaluate the impact of graph sampling on GNNs.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Work</ns0:head><ns0:p>In this section, we briefly review related works on graph sampling and GNNs.</ns0:p></ns0:div> <ns0:div><ns0:head>Graph sampling</ns0:head><ns0:p>Graph sampling is a technique to pick a subset of nodes and/or edges from an original graph. The commonly studied sampling methods are node sampling, edge sampling, and traversal-based sampling <ns0:ref type='bibr' target='#b1'>(Ahmed 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ahmed, Neville, and Kompella 2013)</ns0:ref>. In node sampling, nodes are first selected uniformly or according to some centrality, such as degree or PageRank, then the induced subgraph among the selected nodes is extracted. In edge sampling, edges are selected directly or guided by nodes. 
Node sampling and edge sampling are simple and suitable for theoretical analysis, but in many real scenarios we cannot perform them due to various constraints, e.g., the whole graph is unknown (P. <ns0:ref type='bibr' target='#b13'>Hu and Lau 2013)</ns0:ref>. Traversal-based sampling, which extends from seed nodes to their neighborhood, is more practical. Therefore, a group of methods was developed, including breadth-first search (BFS), depth-first search (DFS), snowball sampling (SBS) <ns0:ref type='bibr' target='#b11'>(Goodman 1961)</ns0:ref>, forest fire sampling (FFS) <ns0:ref type='bibr' target='#b20'>(Leskovec, Kleinberg, and Faloutsos 2005)</ns0:ref>, random walk (RW), and Metropolis-Hastings random walk (MHRW). With the numerous graph sampling methods developed, the question of how they impact GNNs still remains to be answered.</ns0:p></ns0:div> <ns0:div><ns0:head>GNNs</ns0:head><ns0:p>After the first GNN model was developed <ns0:ref type='bibr' target='#b4'>(Bruna et al. 2014</ns0:ref>), various GNNs have been exploited in the graph domain. <ns0:ref type='bibr'>GCN simplifies ChebNet (Micha&#235;l Defferrard, Bresson, and Vandergheynst 2016)</ns0:ref> and speeds up graph convolution computation. GAT and MoNet extend GCN by leveraging an explicit attention mechanism <ns0:ref type='bibr' target='#b19'>(Lee et al. 2019)</ns0:ref>. Due to powerful represent capabilities, GNNs have been applied into a wide range of applications including knowledge graphs (Z. Zhang, Cui, and Zhu 2020), molecular graph generation (De Cao and Kipf 2018), graph metric learning and image recognition <ns0:ref type='bibr'>(Kajla et al. 2021;</ns0:ref><ns0:ref type='bibr'>Riba et al. 2021)</ns0:ref>. Recently, graph sampling was investigated in GNNs for scaling to larger graphs and better generalization. Layer sampling techniques have been proposed for efficient mini-batch training. GraphSage performs uniform node sampling on the previous layer neighbors <ns0:ref type='bibr'>(Zeng et al. 2019)</ns0:ref>. GIN extends GraphSage with arbitrary aggregation functions on multiple sets, which is theoretically as powerful as the Weisfeiler-Lehman test of graph isomorphism <ns0:ref type='bibr' target='#b31'>(Xu et al. 2018)</ns0:ref>. In contrast to layer sampling, GraphSAINT constructs mini-batches by directly sampling the training graph, which decouples the sampling from propagation <ns0:ref type='bibr'>(Zeng et al. 2019)</ns0:ref>. However, in most GNNs it is assumed that the underlying network structure is complete without data loss, which is often not the case. In addition, different GNNs are compared in <ns0:ref type='bibr' target='#b7'>(Errica et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b28'>Shchur et al. 2018</ns0:ref>) with regard to node classification and graph classification tasks, respectively, a systematic evaluation of deep GNNs is presented in (W. <ns0:ref type='bibr'>Zhang et al. 2021)</ns0:ref>, and a reproducible framework for benchmarking of GNNs is introduced in <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>. The most related work to ours is <ns0:ref type='bibr' target='#b9'>(Fox and Rajamanickam 2019)</ns0:ref>, in which the robustness of GIN to additional structural noise is studied. Our work focuses on graph sampling that can be considered as a random structure removed from the original network.</ns0:p></ns0:div> <ns0:div><ns0:head>Models</ns0:head><ns0:p>We focus on the robustness of GNNs under graph sampling scenarios. 
As shown in Figure <ns0:ref type='figure'>1</ns0:ref>, &#119866; &#119874; is the partial observed graph from a network , which is often difficult to make complete &#119866; observations. We train GNNs on and then evaluate on three typical learning tasks: node &#119866; &#119874; We follow the principles of <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020</ns0:ref>) and develop a standardized training, validation, and testing procedure for all models for fair comparisons.</ns0:p><ns0:p>In addition, we considered multilayer perceptron (MLP) as a baseline model, which utilizes only node attributes without graph structures. . Statistics for all datasets are shown in Table <ns0:ref type='table'>1</ns0:ref>. We treated all the networks as undirected and only considered the largest connected component, moreover, we ignored edge features in our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Setup</ns0:head><ns0:p>Setups for our experiments are summarized in Table <ns0:ref type='table'>2</ns0:ref>. All datasets were split into training, validation, and testing data. For node classification tasks, Cora, CiteSeer and PubMed were split according to <ns0:ref type='bibr' target='#b32'>(Yang, Cohen, and Salakhutdinov 2016)</ns0:ref>, first of the 10 splits from <ns0:ref type='bibr' target='#b25'>(Pei et al. 2020)</ns0:ref> was picked for Actor, and CLUSTER was split according to <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>; For link prediction tasks, we used a random 70%/10%/20% training/validation/test split for positive edges in all datasets; For graph classification tasks, the splits were derived from <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>.</ns0:p><ns0:p>In GNNs, all models had a linear transform for node attributes before hidden layers. The &#119883; number of hidden layers was set to to avoid over-smoothing for small-scale datasets &#119871; &#119871; = 2 such as Actor, Core, CiteSeer and Pubmed, and we set to for ARXIV and COLLAB, &#119871; = 3 &#119871; = 4</ns0:p><ns0:p>to MNIST, CIFAR10 and CLUSTER. We added residual connections between GNN layers for medium-scale datasets (i.e., ARXIV, COLLAB, MNIST, CIFAR10 and CLUSTER) as suggested by <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>. We chose the hidden dimension and the output dimension that made the number of parameters almost equal for each model. The number of attention heads of GAT was set to 8, and the mean aggregation function in GraphSage was adopted. In MoNet, we set the number of Gaussian kernels to 3, and used the degrees of the adjacency nodes as the input pseudo-coordinates, as proposed in <ns0:ref type='bibr' target='#b23'>(Monti et al. 2017)</ns0:ref>. We used the same training procedure for all GNN models for a fair comparison. Specifically, the maximum number of training epochs was set to 1000, and we adopted Glorot <ns0:ref type='bibr' target='#b10'>(Glorot and Yoshua 2010)</ns0:ref> and zero initialization for the weights and biases, respectively. Also we applied the Adam <ns0:ref type='bibr' target='#b16'>(Kingma and Ba 2015)</ns0:ref> optimizer, and we reduced learning rate with a factor of 0.5 when a validation metric has stopped improving after given reduce patience. Furthermore, we stopped the training procedure early if a) learning rate was less than 1e-5, or b) validation metric did not increase for 100 consecutive epochs, or c) training time was more than 12 hours. 
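In PyTorch terms, the optimizer and stopping rules above translate roughly into the sketch below. The initial learning rate, the reduce patience, the choice of a maximized validation metric, and the helper functions (train_one_epoch, evaluate) are placeholders rather than values or code taken from this paper.

# Sketch of the training schedule described above (values in CAPS are assumed).
import time
import torch

INIT_LR, REDUCE_PATIENCE, STOP_PATIENCE = 1e-3, 25, 100
MAX_EPOCHS, MAX_SECONDS = 1000, 12 * 3600

optimizer = torch.optim.Adam(model.parameters(), lr=INIT_LR)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=REDUCE_PATIENCE)

best_metric, best_epoch, start = -float("inf"), 0, time.time()
for epoch in range(MAX_EPOCHS):
    train_one_epoch(model, optimizer)       # cross-entropy step on the sampled graph
    metric = evaluate(model, split="val")   # task-dependent validation metric
    scheduler.step(metric)                  # halve the LR when the metric stalls
    if metric > best_metric:
        best_metric, best_epoch = metric, epoch

    lr = optimizer.param_groups[0]["lr"]
    if (lr < 1e-5                                   # a) learning rate too small
            or epoch - best_epoch >= STOP_PATIENCE  # b) no improvement for 100 epochs
            or time.time() - start > MAX_SECONDS):  # c) wall-clock limit of 12 hours
        break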
All model parameters were optimized with cross-entropy loss when was sampled.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119866; &#119874;</ns0:head><ns0:p>We implemented all the six models by the Pytorch Geometrics library <ns0:ref type='bibr' target='#b8'>(Fey and Lenssen 2019)</ns0:ref> and the four graph sampling methods based on <ns0:ref type='bibr' target='#b27'>(Rozemberczki, Kiss, and Sarkar 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66500:1:0:CHECK 27 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For each dataset, sampling method, and GNN model, we performed 4 runs with 4 different seeds, then reported the average metric. To answer Q1, we show means, , and standard deviations, , &#120583; &#120575;</ns0:p><ns0:p>of metrics for all datasets with sampling ratio using GCN and MHRW &#119903; = |&#119881; &#119904; | &#8725; |&#119881;| &#8712; [0.1,0.5] (Table <ns0:ref type='table'>3</ns0:ref>). It is worth to mention that the other GNN models and graph sampling methods had similar results. There are a few observations to be made. First, the means, , increase and the &#120583; standard deviations, , decrease as the sampling ratio increases in node classification and graph &#120575; classification tasks, which aligns with our intuition. Second, the performance is acceptable in most single graph datasets when is relatively large, e.g., compared to the complete cases, the &#119903; relative losses are all less than 15% for CiteSeer, Cora, Pubmed, ARXIV &#916; = 1 -&#120583; &#119903; &#8725; &#120583; complete and COLLAB when . This is partly because the nodes in have acquired sufficient &#119903; &#8805; 0.4 &#119866; &#119874; neighborhood structure to accomplish the messaging and aggregation needed by GNNs. Therefore, we can still use GNNs in most single graph datasets under sampling scenarios, as long as the sampling ratio, , is chosen properly. The choice of the appropriate varies depending on &#119903; &#119903; the dataset, sampling method, and GNN model. For example, in order to make on node &#916; &#8804; 10% classification tasks, the sampling ratio should satisfy for Actor and PubMed, for &#119903; &#8805; 0.5 &#119903; &#8805; 0.4</ns0:p><ns0:p>Cora, and for CiteSeer. By contrast, the performance degradation is severe for multi-&#119903; &#8805; 0.1 graph datasets (i.e., CLUSTER, MNIST, CIFAR10), which is mainly due to the fact that independent random sampling destroys the intrinsic association between graphs. Hence, we cannot directly use GNNs with independent random sampling scenarios.</ns0:p><ns0:p>To answer Q2, we show and for all datasets when we fix in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>&#120583; &#120575; &#119903; = 0.3</ns0:formula><ns0:p>According to Table <ns0:ref type='table'>4</ns0:ref>, the best performing GNN model(s) is consistent across different sampling methods for a specific dataset, especially in node classification tasks, e.g., GatedGCN for Actor, GCN for Cora, CiteSeer, and PubMed. The consistency suggests that datasets have a strong preference for a specific GNN model, and there is no silver-bullet GNN for all datasets. Another observation is that, some datasets show a tendency towards sampling methods, e.g., BFS for Actor, RW for ARXIV. 
To compare all GNN models and sampling methods, we consider the relative metric score, as proposed in <ns0:ref type='bibr' target='#b28'>(Shchur et al. 2018)</ns0:ref>. That is, for GNN models, we take the best from four sampling methods as 100% for each dataset, and the score of each model is &#120583; divided by this value, then the results for each model are averaged over all datasets and sampling methods. We also rank GNN models by their performance (1 for best performance, 7 for worst), and compute the average rank for each model. Similarly, we calculate the score of each sampling method. The final scores for GNN models and sampling methods are summarized in Table <ns0:ref type='table'>5</ns0:ref>. These results provide a reference for the selection of sampling methods, and a guidance for sampling-based GNN training like <ns0:ref type='bibr'>GraphSAINT (Zeng et al. 2019</ns0:ref>).</ns0:p><ns0:p>GNNs outperform MLP on average in Table <ns0:ref type='table'>5</ns0:ref>, and this confirms the superiority of GNNs, which combine structural and attribute information, compared to methods that consider only attributes. On small datasets, GCN is the best GNN model , which proves that simple methods often outperforms more sophisticated ones <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Shchur et al. 2018</ns0:ref>). In addition, BFS is found to be the best sampling method for small datasets, partly because it samples node labels more uniformly than other methods. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows a comparison of the Kullback-Leibler divergence between label distributions of training and testing from different sampling methods in PubMed (NC); it can be seen that BFS has a lower score, which leads to better generalization power in GNNs. While on medium datasets, the best GNN model changes to GAT, and the most competitive sampling method are RW and MHRW. This may be due to the fact that RW and MHRW can obtain a more macroscopic structure compared to BFS and FFS.</ns0:p><ns0:p>To answer Q3, we considered the induced subgraph as a completion of . We chose the &#119866; ' &#119874; &#119866; &#119874; preferred GNN model for each dataset, e.g., GatedGCN for Actor, then computed the induced relative metric improvement percent as . On the other hand, Figure <ns0:ref type='figure'>3</ns0:ref> reveals the complexity of datasets under sampling scenarios, which indicates that network completion is not always effective. Some datasets benefit from network completion in all cases, e.g., Cora (NC), ARXIV and MNIST; and there are also some datasets seem to be unaffected by completion, e.g., Pubmed (LP) when</ns0:p><ns0:formula xml:id='formula_1'>(see Figure 3(b)- &#119903; &#8712; {0.3, 0.5} (c))</ns0:formula><ns0:p>; what is more, network completion has side effects on datasets such as COLLAB. The complexity may be partly explained by structure noise in network. It is evident that removing task-irrelevant edges from original structure can improve GNN performance <ns0:ref type='bibr' target='#b21'>(Luo et al. 2021;</ns0:ref><ns0:ref type='bibr'>Zheng et al. 2020)</ns0:ref>. We treat graph sampling as a structural denoising process. If the original network has only a small amount of structure noise, completion restores the informative edges &#119866; removed by sampling, thus improving the GNN performance. 
Whereas if the structure noise is large in , completion weakens the denoising effect of sampling and leads to performance &#119866; degradation.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We focused on the performance of GNNs with partial observed network structure. By treating the incomplete structure as one of the many graphs generated by a certain sampling process, we determined the robustness of GNNs in a statistical way via multiple independent random sampling. Specifically, we performed an empirical evaluation of six state-of-the-art GNNs on three network learning tasks (i.e., node classification, link prediction and graph classification) with four popular graph sampling methods. We confirmed that GNNs can still be applied under graph sampling scenarios in most single graph datasets, but not on multiple graph datasets. And we also identified the best GNN model and sampling method, that is, GCN and BFS for small datasets, GAT and RW for medium datasets. which provides a guideline for future applications. Moreover, we found that network completion can improve GNN performance in most cases, however, specific analysis is needed case by case due to the complexity of datasets under sampling scenarios. Thus, suggesting that completion and denoising should be done with careful </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:10:66500:1:0:CHECK 27 Jan 2022) Manuscript to be reviewed Computer Science Experiments Datasets In our benchmark, we used nine datasets including six social networks (Cora, CiteSeer, PubMed (Yang, Cohen, and Salakhutdinov 2016), Actor (Pei et al. 2020), ARXIV and COLLAB (W. Hu et al. 2020)), two super-pixel networks of images (MNIST, CIFAR10 (Dwivedi et al. 2020)) and one artificial network generated from Stochastic Block Model (CLUSTER (Dwivedi et al. 2020))</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>shows the improvements on all &#120591; = &#120583;' &#119903; /&#120583; &#119903; -1 datasets with . &#119903; &#8712; {0.1, 0.3, 0.5} From Figure 3 it can be seen that network completion can improve performance in most cases. Comparing Figure 3(a), (b) and (c) shows that the induced improvement increases as the &#120591; sampling ratio decreases especially when we perform MHRW or RW, which indicates the &#119903; necessity of network completion when is low. &#120591;</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,178.87,525.00,275.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,42.52,199.12,525.00,371.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>classification, link prediction and graph classification. In this paper, we treat as one of the &#119866; &#119874; many graphs generated by a certain sampling process from a known , consequently we are able &#119866; to determine the robustness of GNNs in a statistical way via multiple independent random sampled . Hu and Lau 2013), and these methods extract connected subgraphs, which is a prerequisite for GNNs. 
In graph sampling, we iteratively pick nodes and edges starting from a random seed node until the cardinality of the sampled node set V_O reaches a given number. We denote the original network as G(V, E, X), where V and E represent the node and edge sets, respectively, and X ∈ ℝ^(|V|×d) denotes the attribute matrix; there is no missing structure in G. The observed or sampled graph is represented by G_O(V_O, E_O, X_O), where V_O ⊆ V and E_O ⊆ E. Apart from the original sampled subgraph G_O(V_O, E_O, X_O), we also induce V_O to form G'_O(V_O, E'_O, X_O), i.e., E'_O = {(u, v) | u, v ∈ V_O, (u, v) ∈ E}. G'_O has the same edges as G between the vertices in V_O; hence, G'_O can be considered as a completion of G_O.</ns0:figDesc><ns0:note>We evaluate six popular GNNs (GCN, GraphSage, GAT, MoNet, GatedGCN and GIN) with four traversal-based graph sampling methods (BFS, FFS, RW, and MHRW). The six GNN models are selected according to performance and popularity; moreover, they cover all three categories of GNN models: isotropic (GCN, GraphSage), anisotropic (GAT, MoNet, GatedGCN) and Weisfeiler-Lehman (GIN) GNNs <ns0:ref type='bibr' target='#b6'>(Dwivedi et al. 2020)</ns0:ref>. We test only traversal-based sampling methods for two reasons: these methods are practical in real settings (P. Hu and Lau 2013), and they extract connected subgraphs, which is a prerequisite for GNNs.</ns0:note></ns0:figure> </ns0:body> "
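As a concrete reading of the sampling setup described above, the following sketch (using networkx, with function and variable names of our own choosing) draws a BFS sample G_O of a given ratio from G and builds the induced completion G'_O. It is illustrative only and not the authors' released code.

# Sketch of the sampling setup: traverse G from a random seed until
# |V_O| = ratio * |V|, keep the traversed edges as G_O, and build the
# induced subgraph G'_O as the "completed" version of G_O.
import random
from collections import deque
import networkx as nx

def bfs_sample(G, ratio, seed=None):
    target = int(ratio * G.number_of_nodes())
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes))
    visited, edges = {start}, []
    queue = deque([start])
    while queue and len(visited) < target:
        u = queue.popleft()
        for v in G.neighbors(u):
            if len(visited) >= target:
                break
            if v not in visited:
                visited.add(v)
                edges.append((u, v))        # only traversed edges are observed
                queue.append(v)
    G_O = nx.Graph(edges)                   # sampled graph G_O
    G_O.add_nodes_from(visited)
    G_O_prime = G.subgraph(visited).copy()  # induced completion G'_O
    return G_O, G_O_prime

# Example: ratio = 0.3 as in Table 4. G'_O adds back every edge of G whose
# endpoints were both sampled, which is what "completion" refers to above.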
"Dear Editors, We would like to thank you for the quick review arrangement. Our manuscript received valuable suggestions and comments from reviewers. We have carefully revised the manuscript to address their concerns. In particular, we added experiments on another five medium-scale datasets, and extended the learning task from one to three. Because of you and the reviewers, the revised article has become better, and readers can obtain more valuable information from it. We believe that the manuscript is now suitable for publication in PeerJ. Best regards. Qiang Wei Reviewer 1 Basic reporting This is an experimental analysis paper for evaluating different baseline GNNs using graph sampling. The motivation of the work seems to inform the community of a experimental benchmark that supplies the trends of using different sampling methods while using several GNN instances. The writing of the paper is clear and the experiments are fairly compared when the models are evaluated on the 4 datasets considered. The code framework is public and from my observation it follows a similar layout/setting as Benchmarking GNNs [Dwivedi et al., 2020] following the principles of fair GNN benchmarking. However, the paper has no technical contribution apart from the aforementioned sentences; and this is obvious from the introduction and setting of the paper. Following are my concerns which may prohibit the usefulness of this paper as a benchmark of sampling methods using different GNNs: * The datasets used are quite small and there is a well-known agreement in the community to move on to more large and complex datasets than Cora, Citeseer, etc. that are used in the paper. See Dwivedi et al., 2020, Open Graph Benchmark: Hu et al., 2020 which describe the need to using other datasets than aforementioned to evaluate GNN trends fairly. It is also clear that this paper is aware of the previous works on GNN benchmarks. Hence, I do not understand the need of making a graph sampling benchmark using the same small datasets! This is quite correct. In the revised version, we have added experiments on another five datasets from Dwivedi et al., 2020 and Hu et al., 2020, i.e., ARXIV, COLLAB, CLUSTER, MNIST and CIFAR10. * The paper follows the setting of Dwivedi et al., 2020 but do not consider GatedGCNs which were best performing in that work. This may be questioning the validity of the paper's insight where it is written that GCN/BFS are generally the best performing (L65), since a previous well performing GNN baseline-GatedGCN is not considered for evaluation in this work. Thanks for the comment. GatedGCN was considered in our revised manuscript. And we found that GCN/BFS are generally the best performing in small datasets, whereas GAT/RW are more suitable in medium datasets. * If the paper's intent was to establish an evaluation trend of different GNN under sampling, it is necessary to consider robustness of experiments to make the results reliable and useful for the community. While the experiments in the paper are fair and unquestionable, it is not necessarily robust. For example, it does not seem the results are reliable if they are not evaluated on more complex and diverse datasets, and also on diverse tasks (such as graph prediction, link prediction, node prediction). This work only considers node prediction on small datasets. Besides the original node classification task, we have added link prediction and graph classification tasks, as suggested. 
And we performed all the three tasks on nine different size datasets from diverse domains. We believe the improved results are more reliable. Experimental design * The research questions are well defined and meaningful. If the work would be more robust, it could fill the research gaps. However, see my concerns above in 'Basic Reporting' which may not make this paper robust. We added another five medium-scale datasets, one extra GNN model and two learning tasks according to the revision. And we believe the revised manuscript has improved in robustness. * Of the experiments considered in the paper, the settings are well defined for reproducibility. Similarly, the methods are sufficiently detailed to replicate. However, the concerns are with the datasets used and the lack of robustness in such an experimental work. Please refer to comments in 'Basic Reporting' section on this. We extended the datasets and the learning tasks as mentioned. Our codes were optimized and reformatted for easier reproduction. Validity of the findings * Of the experiments performed in the paper, the underlying data is provided, but are questionable given the current state and trend of the graph deep learning field in general. In particular, the community is slowly transitioning to evaluate GNNs on more diverse, complex and medium-to-large scale datasets. The datasets used in the paper are small and I believe that the paper is aware of that. Thanks for the comment. Some of the findings of the original paper were extended by experiments on larger data sets and more learning tasks, for example, random sampling on multi-graph datasets leads to severe performance degradation of GNNs. We believe our finding has better generalizability than the original manuscript. Additional comments * In the writing of the paper, several references do not mention date and have “n.d.” instead of the date. For eg. Bruna et al. n.d. It seems this is a minor thing which the paper should have taken care of before submission. We carefully checked all the references and fixed the issue. Reviewer: Nadeem Kajla Basic reporting The authors provide a systematic review of graph-based methods. They tried to identify the performance of graph neural network models for graphlets. Authors performed the evaluation of five Graph Neural Networks. The idea is good and interesting. However I have few comments on this article. Experimental design 1- In experimental setup, author did not mention how many hidden layers are in the network. We have added experimental setups in Table 2. The number of hidden layers is 2 for Actor, Cora, CiteSeer and Pubmed; 3 for ARXIV and COLLAB; 4 for MNIST, CIFAR10 and CLUSTER. 2- What information/features are you extracting in layers of the network. We adopted six GNNs in our experiments, all of them can be formulated as message passing and aggregation. Therefore, the information we extracting in layers is the representation from neighborhood. 3- Please mention the no of training, testing and validation samples you are using in the experiments. Thanks for the comment. In the revised manuscript, we listed the parameters that differed across datasets into categories such as training/validation/testing ratio, GNN model parameters, and training parameters in Table 2. 4- can you elaborate row no 138 'there is no missing edges among...' why ? is a sampled subgraph from , and has the same sampled nodes set as , but also has extra edges between in . An example is shown in figure below, , , and , that is, all edges between are in . 
For clarity, we modify “There is no missing edge among in ” to “ has the same edges as between the vertices in ” at line 151. 5- what are effects you noticed for 1000 epochs for small and large datasets? any visualization graphs etc?? We followed Dwivedi et al., 2020 and set the number of epochs as 1000. In our experiments, we adopted three early stopping mechanisms (at line 190-191), therefore, the actual training epochs were much less than 1000. Typical epochs were 100-300 in our experiments, and we did not notice the differences of epochs between small and medium datasets. We did not add training visualization graphs in our manuscript, but we have added more loggings in our public code, now training details can be easily observed. 6- among five GNNs you did not considered the Siamese GNN, any reason about that? for example 'Riba, Pau, et al. 'Learning graph distances with message passing neural networks.' 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018.' and NI kajla et. al. 'Graph Neural Networks Using Local Descriptions in Attributed Graphs: An Application to Symbol Recognition and Hand Written Character Recognition' Thank you for pointing out these two valuable literatures we missed. Riba et al. proposed an efficient graph distance based on GNN, and Kajla et al. improved and extended it to written character recognition and symbol recognition. We have already added these two literatures as our references (at line 112). According to your revision, we considered more graph learning tasks in our revised manuscript, including node classification, link prediction and graph classification. We believe these three tasks are more common, and we leave other tasks as our future works. Validity of the findings 1- in Abstract row no 27 author say that to complete a sample subgraph does not improve the performance. they add that it depends on the dataset, However I am not clear about the statement. Please elaborate more about it. Thanks for the comment. The original statement was inaccurate, and we considered nine datasets on three tasks with different sampling ratio in our revised version (see line 246-261). The whole experiments for answering Q3 (Can the performance of GNNs be improved if we complete the partial observed network structure?) was redesigned and evaluated. We found that network completion can improve GNN performance in most cases, however, specific analysis is needed case by case due to the complexity of datasets under sampling scenarios. "
Here is a paper. Please give your review comments after reading it.
364
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Crowd counting has been widely studied by de ep learning in recent years. However, due to scale variation caused by perspective distortion, crowd counting is still a challenging task. In this paper, we propose a Densely Connected Multi-scale Pyramid Network (DMPNet) for count estimation and the generation of high-quality density maps. The key component of our network is Multi-scale Pyramid Network (MPN), which can extract multiscale features of the crowd effectively while keeping the resolution of the input feature map and the number of channels unchanged. To increase the information transfer between the network layer, we use dense connections to connect multiple MPNs. In addition, we also design a novel loss function, which can help our model achieve better convergence. To evaluate our method, we conduct extensive experiments on three challenging benchmark crowd counting datasets. Experimental results show that compared with the state-of-the-art algorithms, DMPNet performs well in both parameters and results. Code is available at: https://github.com/lpfworld/DMPNet .</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>With the increase of the world population, crowd counting has been widely applied in video surveillance, crowd analysis, sporting events, and other public security services <ns0:ref type='bibr' target='#b2'>(Chan, Liang &amp; Vasconcelos, 2008;</ns0:ref><ns0:ref type='bibr' target='#b0'>Boominathan, Kruthiventi &amp; Babu 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Cao et al.,2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Xiong et al.,2019)</ns0:ref>. In addition, it is extended to cell or bacterial counts in the medical field and vehicle counts in transportation field <ns0:ref type='bibr' target='#b6'>(Xie, Noble &amp; Zisserman, 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Hu et al., 2020)</ns0:ref>. However, crowd counting still is a challenging task due to scale variations, cluttered backgrounds, and heavy occlusion. Among them, scale variation is the most important research issue, as shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. <ns0:ref type='bibr'>CNN-based (Convolutional Neural Networks, CNN)</ns0:ref> methods have made remarkable progress in crowd counting in recent years. To extract multi-scale features of crowds, researchers designed multi-column or multi-branch networks <ns0:ref type='bibr' target='#b7'>(Sam, Surya &amp; Babu, 2016;</ns0:ref><ns0:ref type='bibr' target='#b8'>Zhang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b12'>Liu, Salzmann &amp; Fua, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Jiang, Zhang &amp; Xu, 2020)</ns0:ref>. However, most networks are limited in their ability to extract multi-scale features due to the similarity of different columns or branches <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b7'>Sam et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b16'>, Zhang et al., 2018)</ns0:ref>. In addition, multi-scale extraction modules in these networks require a lot of computation because of the complexity of the network structure <ns0:ref type='bibr' target='#b16'>(Li, Zhang &amp; Chen, 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Guo et al., 2019)</ns0:ref>. 
Our MPN also adopts a multi-branch structure to ensure multi-scale feature extraction, in which pyramid convolution and group convolution are used to effectively reduce parameters.</ns0:p><ns0:p>The higher resolution feature map contains finer details and the resulting density map is of higher quality, which is helpful for count estimation <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Jia, Antoni &amp; Chan, 2019)</ns0:ref>. To increase receptive fields of networks, pooling operations are adopted. However, the resolution of feature maps generated by the network become smaller, resulting in the loss of crowd image details. To keep the input and output resolutions unchanged, the encoder-decoder structure is usually utilized <ns0:ref type='bibr' target='#b19'>(Jiang et al.,2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Thanasutives et al., 2021)</ns0:ref>. The network of encoder-decoder structure uses encoder to extract input image features and combine them, and then decodes the higher-level features required by these features through a specially designed decoder. Take M-SFANet (Multi-Scale-Aware Fusion Network with Attention mechanism) <ns0:ref type='bibr' target='#b10'>(Thanasutives et al., 2021)</ns0:ref> for example, the encoder of M-SFANet <ns0:ref type='bibr' target='#b10'>(Thanasutives et al., 2021)</ns0:ref> is enhanced with ASSP (Atrous Spatial Pyramid Pooling, ASSP) <ns0:ref type='bibr' target='#b11'>(Chen et al.,2017)</ns0:ref>, which can extract multi-scale features of the target object and fuse large context information. In order to further deal with the scale variation of the input image, they used the context module called CAN (Context Aware Network, CAN) <ns0:ref type='bibr' target='#b12'>(Liu et al.,2019)</ns0:ref> as the decoder. Similar to these works, we keep the input and output resolutions of MPN unchanged to ensure that the final density map generated by DMPNet contains sufficient detailed crowd information.</ns0:p><ns0:p>Different layers of neural network contain different crowd information, but with the increase of network depth, some details are gradually lost. DSNet (Dense Scale Network, DSNet) <ns0:ref type='bibr' target='#b43'>(Dai et al., 2021)</ns0:ref> proposed that using dense connected networks in the field of crowd counting can effectively extract long-distance context information and maximize the retention of network layer information. We follow this operation and connect MPNs with dense connections.</ns0:p><ns0:p>Euclidean loss is the most common loss function in crowd counting SOTA methods, which is based on pixel independence <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Liu et al.,2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Zhang et al.,2020)</ns0:ref>. However, texture features and pixel correlation of different regions in crowd images are different. Euclidean loss ignores the local correlation of the crowd image and does not consider the global counting error of the crowd image <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Dai et al., 2021)</ns0:ref>. 
Therefore, when designing the loss function, we not only consider the local density consistency of the image, but also consider the global counting loss of the image.</ns0:p><ns0:p>In this paper, we propose the Densely connected Multi-scale Pyramid Network (DMPNet) for crowd counting, as shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. The important component Multi-scale Pyramid Network (MPN) consists of Local Pyramid Network (LPN), Global Pyramid Network (GPN), and Multiscale Feature Fusion Network (MFFN). LPN is used to capture small heads and extract multiscale fine-grained features, while GPN is used to capture large heads and global features. They are composed of multiple levels, and each level has filters of different sizes and depths, whose output local and global features are combined by MFFN. To maximize the flow of information between layers of the network, MPNs in the network are densely connected, with each MPN receiving as input the results of MPNs before it. To optimize the loss function, we combine Euclidean loss, density level consistency loss, and MAE loss to improve the performance of DMPNet. Experiments on three datasets (ShanghaiTech Part A and Part B <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref>, UCF-QNRF <ns0:ref type='bibr' target='#b28'>(Idrees et al., 2018)</ns0:ref>, UCF_CC_50 <ns0:ref type='bibr' target='#b29'>(Idrees et al., 2013)</ns0:ref> prove the effectiveness and robustness of the proposed method.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Generally, the existing crowd counting methods can be mainly classified into two categories: traditional methods and CNN-based methods <ns0:ref type='bibr' target='#b30'>(Sindagi &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Gao et al.,2020)</ns0:ref>. In this section, we give a brief review of crowd counting methods and explain the differences between our methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Traditional methods</ns0:head><ns0:p>In early studies, detection-based methods used sliding windows to detect the target, and manually extract features of the human body or specific body parts <ns0:ref type='bibr' target='#b35'>(Wu &amp; Nevatia, 2007;</ns0:ref><ns0:ref type='bibr'>Enzweiler &amp; Gavrila,2009;</ns0:ref><ns0:ref type='bibr' target='#b33'>Felzenszwalb et al., 2010)</ns0:ref>. However, even if only heads or smaller body parts of pedestrians are detected, these methods often fail to make accurate counts of dense crowd scenes due to occlusion and illumination. To improve the performance of crowd counting, feature-based regression methods attempted to extract various features from local image blocks and generate low-level information <ns0:ref type='bibr' target='#b36'>(Chan &amp; Vasconcelos, 2009;</ns0:ref><ns0:ref type='bibr' target='#b38'>Ryan et al.,2009;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ke et al., 2012)</ns0:ref>. <ns0:ref type='bibr' target='#b29'>Idrees et al. (2013)</ns0:ref> tried to fuse the features obtained by Fourier analysis and SIFT (Scaleinvariant feature transform, SIFT) interest points. However, they ignored the scale information. To overcome the problem, density estimation-based method considers the relationship between image features and data regression. <ns0:ref type='bibr' target='#b40'>Victor &amp; Andrew (2010)</ns0:ref> adopted the method of extracting features in local areas and establishing linear mapping between features and density maps. <ns0:ref type='bibr' target='#b41'>Pham et al. 
(2015)</ns0:ref> tried to use random forest regression to get a nonlinear map.</ns0:p></ns0:div> <ns0:div><ns0:head>CNN-based methods</ns0:head><ns0:p>The CNN-based methods can be classified into the multi-column CNN-based methods and the single-column CNN-based methods. The multi-column CNN-based methods use multi-column networks to extract the human head features of different scales and then fuse them to generate density maps. <ns0:ref type='bibr'>Zhang et al. (2016) (Multi-Column Convolutional Neural Network, MCNN)</ns0:ref> proposed to extract features using three-column networks with convolution kernels of different sizes respectively, and fused them through 1x1 convolution. <ns0:ref type='bibr' target='#b7'>Sam et al. (2016)</ns0:ref> <ns0:ref type='bibr'>(Switching Convolutional Neural Network, Switch-CNN)</ns0:ref> proposed to design an additional switch based on MCNN, that is to use the switch to select the most appropriate CNN column for different input images to improve the counting accuracy. Inspired by the image generation methods, <ns0:ref type='bibr' target='#b42'>Viresh et al. (2018)</ns0:ref> (Iterative Crowd Counting CNN, ic-CNN) proposed a two-column networks to gradually refine the obtained low-resolution density map to high-resolution density map. <ns0:ref type='bibr' target='#b30'>Sindagi et al., (2017)</ns0:ref> (Contextual Pyramid CNN, CP-CNN) used global and local feature information to generate density maps for crowd images. <ns0:ref type='bibr'>Zhu et al., (2020)</ns0:ref> <ns0:ref type='bibr'>(Relational Attention Network, RANet)</ns0:ref> proposed to use the stacked hourglass structure in human pose, optimized outputs from each hourglass module with local attention LSA and global attention GSA, and then fused the two features with a relational module. <ns0:ref type='bibr' target='#b14'>Zhu et al. (2019)</ns0:ref> (Multi-Scale Fusion Network with Attention mechanism, SFANet) proposed a dual path multi-scale fusion network architecture with attention mechanism, which contains a VGG as the front-end feature map extractor and a dual path multi-scale fusion networks as the back-end to generate density map. <ns0:ref type='bibr'>Jiang et al. (2020)</ns0:ref> (Attention Scaling Network, ASNet) proposed to use different columns to generate density maps and scale factors, then multiply them by the mask of the region of interest to output multiple attention-based density maps, and add the density maps to obtain a high-quality density map. These methods have a strong ability in extracting multi-scale features and improving the performance of crowd counting. However, they also have some disadvantages: these networks usually have a lot of parameters, and the similarity of networks with different columns results in limited feature extraction ability. In addition, training multiple CNNs at the same time will lead to slower training speed <ns0:ref type='bibr' target='#b16'>(Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Cao et al., 2018;</ns0:ref><ns0:ref type='bibr'>Jiang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The single-column CNN methods try to use the multi-branch structure for optimization, which can extract multi-scale information and effectively reduce parameters <ns0:ref type='bibr' target='#b16'>(Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b12'>Liu et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b16'>Zhang et al. 
(2018)</ns0:ref> (Congested Scene Recognition Network, CSRNet) proposed the network structure of the front and back end, in which the front-end network adopts VGG16 <ns0:ref type='bibr' target='#b27'>(Simonyan &amp; Zisserman, 2014)</ns0:ref>, and the back-end network uses dilated convolution to increase the receptive field and extract multi-scale features. <ns0:ref type='bibr' target='#b1'>Cao et al. (2018)</ns0:ref> (Scale Aggregation Network, SANet) proposed to extract multi-scale features by using convolution containing multiple levels, and the convolution kernel of each level is different in size. At the back end of SANet, the resolution of the feature map is restored to the size of the input image by deconvolution, and the final density map is obtained. <ns0:ref type='bibr' target='#b12'>Liu et al. (2019)</ns0:ref> (Context Aware Network, CAN) proposed a pooling pyramid network to extract multi-scale features and adaptively assign weights to crowd regions of different scales in images. <ns0:ref type='bibr' target='#b46'>Shi et al., (2019)</ns0:ref> (Perspective-Aware CNN, PACNN) proposed a perspective-aware network, which can integrate the perspective information into density regression to provide additional knowledge of scale variations in images. <ns0:ref type='bibr' target='#b47'>Miao et al., (2020)</ns0:ref> (Shallow feature based Dense Attention Network, SDANet) proposed to reduce the influence of background by introducing an attentional model based on shallow features, and to capture multi-scale information through dense connections of hierarchical features. <ns0:ref type='bibr' target='#b10'>Thanasutives et al., (2021)</ns0:ref> (M-SFANet) proposed to use ASPP <ns0:ref type='bibr' target='#b11'>(Chen et al., 2017)</ns0:ref> containing parallel atrous convolutional layers with different sampling rates to enhance the network, which can extract multi-scale features of the target object and incorporate larger context. <ns0:ref type='bibr'>Jiang et al., (2019) (Trellis Encoder-Decoder Network, TEDNet)</ns0:ref> proposed to build multiple decoding paths in different coding stages to aggregate features of different layers. Ma et al., <ns0:ref type='bibr' target='#b24'>(2019)</ns0:ref> (Bayesian Loss, BL) regarded crowd counting as a probability problem, the predicted density map is a probability map, each point represents the probability of existence at the point, and each point of the density map is regarded as the sample observation value.</ns0:p><ns0:p>Our DMPNet is a single-column network with multi-branch, similar to some works <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b12'>Liu, Salzmann &amp; Fua, 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Dai et al., 2021)</ns0:ref>. We differ them from three aspects: (1) Each branch of our convolution kernel is not only different in size, but also different in the number of channels, which improves the ability of feature extraction of similar networks. (2) We use group convolution to process convolution kernels of different sizes, effectively reducing network parameters, and the calculation process is similar to Google MixNet (Mixed Depthwise Convolutional Network, MixNet) <ns0:ref type='bibr' target='#b44'>(Tan &amp; Le, 2019)</ns0:ref>. 
(3) Our DMPNet is an end-to-end architecture, without adding extra perspective maps or attention maps.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head><ns0:p>The basic idea of our approach is to implement an end-to-end network that can capture multiscale features and generate a high-quality density map, to achieve accurate crowd estimation. In this section, we first introduce our proposed DMPNet architecture, then present our loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>DMPNet architecture</ns0:head><ns0:p>Similar to CSRNet <ns0:ref type='bibr' target='#b16'>(Li, Zhang &amp; Chen, 2018)</ns0:ref>, our DMPNet architecture includes a front-end network and a back-end network (see Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). In the front-end network, the first ten layers with three pooling layers of VGG16 are used to extract features from crowd images. Several works have proved that VGG16 achieves a trade-off between accuracy and computation, and is suitable for crowd counting <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr'>Tanjan, Le &amp; Hoai,2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Jia, Antoni &amp; Chan, 2019)</ns0:ref>. In the back-end network, MPNs that can extract multi-scale features are connected in a dense way to improve information flow between layers. The integration between the different layers in the network can also be further retained multi-scale features <ns0:ref type='bibr' target='#b45'>(Huang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b47'>Miao et al., 2020;</ns0:ref><ns0:ref type='bibr'>Amaranageswarao, Deivalakshmi &amp; Ko, 2020)</ns0:ref>. In ablation experiments, we demonstrated the effectiveness of dense connections.</ns0:p></ns0:div> <ns0:div><ns0:head>Multi-scale Pyramid Network (MPN)</ns0:head><ns0:p>MPN consists of three parts: Local Pyramid Network (LPN), Global Pyramid Network (GPN), and Multi-scale Feature Fusion Network (MFFN), illustrated in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. The design principle of MPN is to keep the resolution and channel number of input and output features unchanged, and effectively extract multi-scale features.</ns0:p></ns0:div> <ns0:div><ns0:head>Pyramid Convolution and Group Convolution</ns0:head><ns0:p>Pyramid convolution has been applied to image segmentation, image classification and other fields, and achieved remarkable results <ns0:ref type='bibr'>(Lin et al., 2017;</ns0:ref><ns0:ref type='bibr'>Duta et al., 2020;</ns0:ref><ns0:ref type='bibr'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr'>Richardson et al., 2020)</ns0:ref>. Inspired by this, we propose to apply pyramid convolution to crowd counting. Compared to standard convolution, pyramid convolution is composed of convolution kernels of different sizes and depths in N level, without increasing computational cost and complexity, illustrated in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. Each level of pyramid convolution is computed with all input features. To use different depths of the kernels at each level of pyramid convolution, we do this using group convolution. The input features are divided into four groups, and the convolution kernels are applied separately for each input group, illustrated in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>.</ns0:p><ns0:p>We compare the parameters of standard convolution and group convolution. 
(1) Standard convolution contains a single type of convolution kernel (with height K, width K), and the depth is equal to the number of channels of input features . such convolution kernels and input C 1 C 2 features (with height H, width W) are calculated to get output features (with height , width &#119867; ' &#119882; '</ns0:p><ns0:p>). Therefore, the parameter number of standard convolution is .</ns0:p><ns0:formula xml:id='formula_0'>(2) Group Convolution &#119896; 2 C 1 C 2</ns0:formula><ns0:p>divides the input feature map (with height H, width W) into groups, the depth is equal to the g number of channels of input features &#65292;and then performs convolution calculation within each C 1 group. The convolution kernels (with height K, width K, and the number of channels ) are also C 2 divided into corresponding groups. Each group of convolution generates feature maps (with &#119892; height W', width H', and the number of channels ). Therefore, the parameter number of C 2 /&#119892; group convolution is . The width and height of the output depend</ns0:p><ns0:formula xml:id='formula_1'>&#119896; 2 * ( C 1 &#119892; ) * ( C 2 &#119892; ) * &#119892; = &#119896; 2 C 1 C 2 /&#119892;</ns0:formula><ns0:p>on the convolution step size, and these two values are not considered here. The above calculation results prove that group convolution can generate feature maps with fewer parameters. The more feature maps, the more information that can be encoded for the crowd counting network.</ns0:p></ns0:div> <ns0:div><ns0:head>LPN, GPN, and MFFN</ns0:head><ns0:p>Based on the ability of pyramid convolution and group convolution, we design LPN and MPN to extract local and global features of crowd images, and use MFFN to combine the two, as shown in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(c) MFFN is mainly used for global and local feature fusion (fine-grain and coarse-grain features). Detailed information is shown in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>(c). First, the output of LPN and GPN is combined into the features with 1024 channels as the input of MFFN. Then, through a layer of 3x3 convolution output the features of 256 channels. Finally, we use 1x1 convolution to restore the channel numbers to 512 and obtain feature .</ns0:p></ns0:div> <ns0:div><ns0:head>&#119865; &#119900;</ns0:head></ns0:div> <ns0:div><ns0:head>Loss function</ns0:head><ns0:p>Euclidean loss is the most common loss function in crowd counting. It evaluates the difference between the ground truth and the estimated density map based on pixel independence, without considering the local density correlation of images. However, the local features of the crowd are generally consistent. In addition, Euclidean loss does not consider the counting error of the image <ns0:ref type='bibr' target='#b1'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Dai et al., 2021)</ns0:ref>. Therefore, we combine density-level consistency loss and MAE loss with Euclidean loss in the loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Euclidean loss</ns0:head><ns0:p>Euclidean loss can estimate the pixel-level error between the estimated density map and the ground truth. It is the most common loss function in crowd counting. 
The Euclidean loss function can be defined as follows:

$L_E = \frac{1}{N}\sum_{i=1}^{N}\lVert F(X_i;\theta) - F_i \rVert_2^2$

where $N$ is the size of the training batch and $\theta$ denotes the learnable parameters of DMPNet. $X_i$ is the input image, $F_i$ is the corresponding ground truth density map, and $F(X_i;\theta)$ is the output of DMPNet.

Density level consistency loss

Due to the imbalance of the crowd distribution, the density map has a local correlation, and the density level of different sub-regions is not the same. Therefore, the density map generated by the model should be consistent with the ground truth (Jia, Antoni & Chan, 2019; Jiang et al., 2020). Referring to the setting of (Dai et al., 2021), we divide the density map into sub-regions of different sizes and form pooled representations. Three output sizes are used (1x1, 2x2, 4x4), with 1x1 representing the global density level of the density map and the other two representing the density levels of local regions of different sizes. The density level consistency loss can be defined as follows:

$L_D = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{S}\frac{1}{k_j^2}\lVert P_{ave}(F(X_i;\theta), k_j) - P_{ave}(D_i^{GT}, k_j)\rVert_1$

where $S$ is the number of scale levels, $P_{ave}$ is the average pooling operation, and $k_j$ is the specified output size of the average pooling.

MAE loss

Mean absolute error (MAE) loss measures the difference between the real count and the estimated count. The MAE loss can be defined as follows:

$L_A = \frac{1}{N}\sum_{i=1}^{N}\left| C(I_i) - C_{GT}(I'_i) \right|$

where $I_i$ and $I'_i$ denote the density map generated by DMPNet and the real density map of $X_i$, respectively, and $C(\cdot)$ denotes the sum over all pixels, so that $C(I_i)$ and $C_{GT}(I'_i)$ are the estimated and real counts for $X_i$.

The final loss

The final loss combines $L_E$, $L_D$ and $L_A$, where $\alpha$ and $\beta$ are weighting factors of $L_D$ and $L_A$. According to our experiments, they are set to $10^{-4}$ and $10^{-3}$, respectively:

$L(\Theta) = L_E + \alpha L_D + \beta L_A$
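Since the paper later notes that DMPNet is implemented with PyTorch, the combined objective above can be sketched as a small PyTorch module. This is a minimal illustration rather than the authors' code: the tensor shape (N, 1, H, W) for predicted and ground-truth density maps, the reduction choices, and the class and variable names are assumptions; only the three loss terms, the pooling sizes (1, 2, 4), and the default weights alpha = 1e-4 and beta = 1e-3 follow the description in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DMPLoss(nn.Module):
    """Sketch of L(Theta) = L_E + alpha * L_D + beta * L_A."""

    def __init__(self, alpha=1e-4, beta=1e-3, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.alpha = alpha
        self.beta = beta
        self.pool_sizes = pool_sizes

    def forward(self, pred, gt):
        # pred, gt: (N, 1, H, W) predicted and ground-truth density maps
        n = pred.size(0)

        # L_E: pixel-wise Euclidean loss, averaged over the batch
        l_e = ((pred - gt) ** 2).view(n, -1).sum(dim=1).mean()

        # L_D: density-level consistency over pooled sub-regions (1x1, 2x2, 4x4)
        l_d = 0.0
        for k in self.pool_sizes:
            p = F.adaptive_avg_pool2d(pred, k)
            g = F.adaptive_avg_pool2d(gt, k)
            l_d = l_d + (p - g).abs().view(n, -1).sum(dim=1).mean() / (k * k)

        # L_A: absolute error between estimated and real counts
        l_a = (pred.view(n, -1).sum(dim=1) - gt.view(n, -1).sum(dim=1)).abs().mean()

        return l_e + self.alpha * l_d + self.beta * l_a
```

In training, such a module would simply be called as `DMPLoss()(pred_density, gt_density)` on a batch of density maps.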
EXPERIMENTAL AND DISCUSSION

Training methods

Ground truth generation

The ground truth density map represents an image containing N annotated people. Following the methods in (Zhang et al., 2016; Liu, Salzmann & Fua, 2019; Jiang et al., 2020), we convolve each head annotation $\delta(x - x_i)$ with a Gaussian kernel $G_{\sigma_i}(x)$ (normalized to 1) with parameter $\sigma_i$ to blur it. The ground truth density map can be defined as follows:

$F(x) = \sum_{i=1}^{N}\delta(x - x_i) * G_{\sigma_i}(x), \quad \text{with } \sigma_i = \beta d_i$

where $x_i$ is the position of the $i$-th head annotation, $d_i$ is the average distance to its $k$ nearest neighbors, and $\beta$ is a constant. We set $k = 3$ and $\beta = 0.3$; $\sigma_i$ is the standard deviation of the kernel, and the setups for the different datasets are shown in Table 1.

Training details

Our DMPNet is implemented with the PyTorch framework. It consists of a front-end network with the first 10 layers of VGG16 (Simonyan & Zisserman, 2014) and a back-end network with three densely connected MPNs. The training batch size is 1, optimization uses Adam (Kingma & Ba, 2014) with a learning rate of 5e-6 and a weight decay of 5e-4, and the weights are initialized from a random Gaussian with 0.01 standard deviation. In addition, we augment the images following CSRNet (Li, Zhang & Chen, 2018); to account for illumination changes, we apply gamma and gray-scale transforms following DSNet (Dai et al., 2021).

Evaluation metrics

Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are used to evaluate crowd counting performance (Jia, Antoni & Chan, 2019; Jiang et al., 2020).
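Before the formal definitions immediately below, a short NumPy sketch illustrates how these two metrics are computed over a set of test images; the function name, array names, and example counts are illustrative and not taken from the authors' evaluation script.

```python
import numpy as np

def mae_rmse(estimated_counts, true_counts):
    """Compute MAE and RMSE over a test set.

    estimated_counts: predicted person counts, one per test image
                      (e.g. the sum over each predicted density map).
    true_counts:      ground-truth person counts, one per test image.
    """
    est = np.asarray(estimated_counts, dtype=float)
    gt = np.asarray(true_counts, dtype=float)
    diff = est - gt
    mae = np.mean(np.abs(diff))          # reflects accuracy of the estimates
    rmse = np.sqrt(np.mean(diff ** 2))   # penalizes large errors, reflects robustness
    return mae, rmse

# Example with hypothetical counts:
# mae, rmse = mae_rmse([1012, 356, 78], [995, 340, 81])
```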
MAE and RMSE represent the accuracy and robustness of the network, respectively, and they can be defined as follows:

$MAE = \frac{1}{N}\sum_{i=1}^{N}\left| D_i - D_i^{GT} \right|, \qquad RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left( D_i - D_i^{GT} \right)^2}$

where $N$ is the number of test images, and $D_i$ and $D_i^{GT}$ represent the estimated and actual numbers of people in the $i$-th image, respectively.

Datasets

We evaluate DMPNet on three benchmark crowd counting datasets: ShanghaiTech (Zhang et al., 2016), UCF-QNRF (Idrees et al., 2018), and UCF_CC_50 (Idrees et al., 2013).

Comparison with State-of-the-Art

We evaluate and compare our DMPNet and SOTA methods on three challenging crowd counting datasets. The experimental results are shown in Table 2; DMPNet is in the top two in multiple comparisons. (1) On ShanghaiTech Part A (Zhang et al., 2016), the MAE of our model is 63.7, 7.2% higher than that of the best model, RANet (Zhu et al., 2019), and the RMSE is 98.3, which is the second-best result. On ShanghaiTech Part B (Zhang et al., 2016), MAE and RMSE are 13.4% and 15.6% higher than those of DSNet (Dai et al., 2021) and SDANet (Miao et al., 2020), respectively. The images of Part A come from the Internet and show highly congested scenes, while the images of Part B come from streets captured by fixed cameras with relatively sparse crowds. This indicates that DMPNet performs well in both congested and sparse crowd scenes. (2) On UCF_QNRF (Idrees et al., 2018), although we do not reach the best result, the performance is still good: MAE and RMSE are 98.7 and 179.8, respectively, 15.3% and 18.9% higher than those of M-SFANet (Thanasutives et al., 2021). UCF_QNRF contains many different scenes with more diverse viewpoints and lighting variations, and the large changes in crowd density make the perspective distortion of heads more severe. Our model handles this data well, which shows that it has a certain adaptability to multiple scenes; for crowd images close to real high-density scenes in UCF_QNRF, DMPNet produces more accurate counts. (3) On UCF_CC_50 (Idrees et al., 2013), 5-fold cross-validation is used to evaluate DMPNet, which achieves the second-best MAE and RMSE, 24.7% and 25.3% higher than those of M-SFANet (Thanasutives et al., 2021) and DSNet (Dai et al., 2021), respectively. UCF_CC_50 is a challenging dataset with few samples and low image resolution.
The results on this dataset demonstrate that DMPNet can also achieve good results on small datasets.

The visualization results of our DMPNet are shown in Figure 6, and the quality of the density maps generated by DMPNet and SOTA methods on the ShanghaiTech Part A and Part B datasets (Zhang et al., 2016) is compared in Figure 7. The comparison of the visualization and counting results shows that DMPNet can extract different types of crowd image information, and its density maps are closer to the ground truth and more accurate in counting than those of MCNN (Zhang et al., 2016) and CSRNet (Li, Zhang & Chen, 2018). DMPNet copes well with crowd occlusion, perspective distortion, and scale variation.

Ablation experiments

In this subsection, we perform several ablation experiments covering the Multi-scale Pyramid Network (i.e., LPN, GPN, and LPN+GPN), the connection scheme (i.e., with and without dense connections), and the loss function. Following previous works (Li, Zhang & Chen, 2018; Jiang et al., 2020; Zhang et al., 2020), the ablation experiments are conducted on ShanghaiTech Part A (Zhang et al., 2016).

Effect of LPN and GPN

To verify the effects of LPN and GPN, we adjust the network structure with three different combinations. The results are summarized in Table 3. LPN achieves better results than GPN, with MAE and RMSE 4.4% and 7.1% lower, respectively. When the two networks are used together, the errors are further reduced by 5.4% and 6.7% relative to LPN. These results show that the proposed multi-scale extraction module is effective in capturing both coarse-grained and fine-grained scales.

Effect of dense connections

To verify the effect of dense connections, we compare two structures, one with and one without dense connections; the results are shown in Table 4. Results are significantly better when dense connections are used, with MAE and RMSE decreasing by 6.8% and 9.5%, respectively.
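To make precise what "densely connected MPNs" means in the back-end before the interpretation that follows, here is a minimal PyTorch-style sketch. The MPN block is treated as a black box with 512-channel input and output, as stated in the paper; the `make_mpn` factory and the 1x1 convolutions that squeeze the concatenated features back to 512 channels are assumed implementation details, not specifics taken from the authors' code.

```python
import torch
import torch.nn as nn


class DenselyConnectedBackEnd(nn.Module):
    """Sketch of three MPN blocks joined by DenseNet-style connections.

    Each block sees the concatenation of the front-end features and the
    outputs of all previous blocks, so information from earlier layers is
    retained throughout the back-end.
    """

    def __init__(self, make_mpn, channels=512, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.squeeze = nn.ModuleList()
        for i in range(num_blocks):
            in_ch = channels * (i + 1)          # front-end + i previous outputs
            self.squeeze.append(nn.Conv2d(in_ch, channels, kernel_size=1))
            self.blocks.append(make_mpn(channels))

    def forward(self, x):
        features = [x]                          # start with front-end features
        for squeeze, block in zip(self.squeeze, self.blocks):
            fused = squeeze(torch.cat(features, dim=1))
            out = block(fused)
            features.append(out)                # later blocks also see this output
        return features[-1]
```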
This indicates that dense connection effectively prevents feature loss, increases information flow between different network layers, further enlarges scale diversity, and makes the feature more effective.</ns0:p></ns0:div> <ns0:div><ns0:head>Effect of loss function</ns0:head><ns0:p>To verify the effect of different loss function combinations, we design four different combinations, and the results are shown in Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and MSE decrease by 9.0% and 9.3%, respectively, indicating that the combination of density level consistency loss and MAE loss can help the model to better converge and improve the counting performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Effect of the number of MPN</ns0:head><ns0:p>In order to verify the influence of the number of MPNs on the results, the number of MPNs is gradually increased and dense connections are used in different structures. The results are shown in Table <ns0:ref type='table'>6</ns0:ref>. When the number of N is not greater than 3, the result of crowd counting is better as the number of MPN increases. When N=3, MAE and RMSE are 63.7 and 98.3, respectively. When N=4, the results were 64.4 and 97.7, with no significant improvement. In DMPNet, we use dense connection, so there is no need to set too many MPN numbers, which will cause the increase of parameters and the redundancy of calculation.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we propose a novel end-to-end model called DMPNet for accurate crowd counting and high-quality density map generation. The front-end network of DMPNet is VGG16, and the back-end network is stacked by three densely connected MPNs. As an important component module of DMPNet, MPN can effectively extract multi-scale features while keeping the input and output resolution unchanged. The ability of the network is further enhanced by densely connecting multiple MPNs. In addition, we combine Euclidean loss with density level consistency loss and MAE loss to further improve the effect of the model. Experimental results on three challenging datasets validate the adaptability and robustness of our method in different crowd scenes. Although we deal with scale variation well, we do not eliminate background noise in the crowd density map, which will affect the counting accuracy to some extent. In future work, we will introduce attention mechanism to deal with background noise. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science 1 Datasets Parameter Settings ShanghaiTech Part_A <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref> =4 &#61555; &#119894; ShanghaiTech Part_B <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref> =15 &#61555; &#119894; UCF_QRNF <ns0:ref type='bibr' target='#b28'>(Idrees et al., 2018)</ns0:ref> Geometry-adaptive kernels UCF_CC_50 <ns0:ref type='bibr' target='#b29'>(Idrees et al., 2013)</ns0:ref> Geometry-adaptive kernels Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed In grouping convolution, the input feature map is divided into N groups, and the convolution kernel is also divided into N groups accordingly. The calculation is carried out in the corresponding group. 
Each group will generate a feature map, and a total of N feature maps are generated.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67116:1:1:CHECK 4 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref>.</ns0:p><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:p>The six rows show that: (1) The test images, (2) the ground truth, (3) Density maps produced by MCNN <ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref>, (4) Density maps produced by CSRNet <ns0:ref type='bibr' target='#b16'>(Li, Zhang &amp; Chen, 2018</ns0:ref>), ( <ns0:ref type='formula'>5</ns0:ref>) Density maps produced by DSNet <ns0:ref type='bibr'>(Dai et al., 2020)</ns0:ref>, ( <ns0:ref type='formula'>6</ns0:ref>) Density maps produced by our DMPNet.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>(a) LPN is mainly used for fine-grained feature extraction. Detailed information is shown in Figure5(a). First, we use 1x1 convolution to reduce the channel of to 512.Then, four-level &#119865; &#119868; pyramid convolution with different convolution kernels sizes (9x9,7x7,5x5, and 3x3) is used to extract multi-scale features. The corresponding channel number is 32,64,128,256, and the group convolution size is 16,8,4,1. Finally, we use 1x1 convolution to increase the channel numbers of the four-level features to 512, and the output feature is obtained. All convolution operations &#119865; &#119871; are followed by BN and ReLU.(b) GPN is mainly used for coarse-grained feature extraction. Detailed information is shown in Figure 5(b). The intermediate processing of GPN and LPN is the same, but the difference is that the input feature first goes through a layer of 9x9 adaptive average pool to ensure that &#119865; &#119868; complete global information can be obtained. In addition, to restore the resolution of the output feature map, we use bilinear interpolation for up-sampling to obtain the final output . &#119865; &#119866; PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67116:1:1:CHECK 4 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>) ShanghaiTech: It includes Part A and Part B, with a total 1,198 images and 330,165 annotations. Part A contains 300 training images and 182 testing images for congested crowd scenes, counting from 33 to 3139. Part B contains 400 training images and 316 testing images, for sparse crowd scenes, counting from 9 to 578. (2) UCF-QNRF: It is the largest and most recently released dataset on crowd counting with 1,535 dense crowd images from various websites, counting from 49 to 12865. (3) UCF CC 50: It contains 50 images with 63974 annotations, counting from 94 to 4543. The average number of people in the image is 1280.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Different scales of heads exist in crowd counting datasets.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. 
The architecture of DMPNet for crowd counting and high-quality density map.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Compare the calculation process of Standard Convolution and Pyramid Convolution.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Compare the calculation process of Standard Convolution and Group Convolution.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Three main components of Multi-scale Pyramid Network.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The visualization results and the corresponding counting results of our DMPNet.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Comparison of density maps generated by different SOTA methods on ShanghaiTech Part A and Part B dataset<ns0:ref type='bibr' target='#b8'>(Zhang et al., 2016)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>MSE Loss, as the most common loss function in crowd counting, still plays a major role. However, after density level consistency loss and MAE loss are added, the effect is improved to a certain extent. When both are used, MAE</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67116:1:1:CHECK 4 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The setups for different datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter settings for density maps generated from different datasets.</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
Table 2. Estimation errors (MAE / RMSE) of DMPNet and state-of-the-art methods on the ShanghaiTech Part A and Part B, UCF_QNRF and UCF_CC_50 datasets; "-" indicates that a result was not reported.

Methods      ShanghaiTech Part A   ShanghaiTech Part B   UCF_QNRF          UCF_CC_50
             MAE      RMSE         MAE     RMSE          MAE      RMSE     MAE      RMSE
MCNN         110.2    173.2        26.4    41.3          277.0    426.0    377.6    509.1
Switch-CNN   90.4     135.0        21.6    33.4          228.0    445.0    318.1    439.2
CP-CNN       73.6     106.4        20.1    30.1          -        -        295.8    320.9
ic-CNN       68.5     116.2        10.7    16.0          -        -        260.9    365.5
CSRNet       68.2     115.0        10.6    16.0          -        -        266.1    397.5
SANet        67.0     104.5        8.4     13.6          -        -        258.4    334.9
BL           62.8     101.8        7.7     12.7          88.7     154.8    229.3    308.2
RANet        59.4     102.0        7.9     12.9          111      190      239.8    319.4
SDANet       63.6     101.8        7.8     10.2          -        -        227.6    316.4
SFANet       59.8     99.3         6.9     10.9          100.8    174.5    219.6    316.2
PACNN        66.3     106.4        8.9     13.5          -        -        241.7    320.7
TEDNet       64.2     109.1        8.2     12.8          113      188      249.4    354.5
DSNet        61.7     102.6        6.7     10.5          91.4     160.4    183.3    240.6
M-SFANet     59.69    95.66        6.76    11.89         85.60    151.23   162.33   276.76
DMPNet       63.7     98.3         7.6     11.8          98.7     179.8    202.4    301.5

</ns0:body> "
"Content 1 Introduction...................................................................................................... 1 2 Editor's Decision .......................................................................................... 2 3 Comments from the reviewers ............................................................ 3 3.1 Reviewer 1:Pongpisit Thanasutives ..................................... 3 3.1.1 Basic reporting .............................................................................. 3 3.1.2 Experimental design.................................................................... 7 3.1.3 Validity of the findings ............................................................... 9 3.2 Reviewer 2 .......................................................................................... 11 3.2.1 Basic reporting ............................................................................ 11 3.2.2 Experimental design.................................................................. 13 3.2.3 Validity of the findings ............................................................. 13 3.2.4 Additional comments ................................................................ 14 1 Introduction Thank the reviewers and editors for their affirmation of our work. We would like to express our sincere appreciation for your approval of the revision of our paper and your new helpful comments. Those comments are all valuable and very helpful for revising and improving our paper, as well as the important guiding significance to our researches. We have resolved related issues and adjusted the content of the paper. 2 Editor's Decision My suggestions and comments to the authors are as follows: Question 1: the title could be modified to 'DMPNet: Densely connected multi-scale pyramid networks for crowd counting' Response 1: (1) We revised the title according to the editor's comments. (2) Line1-2 : 'DMPNet: Densely connected multi-scale pyramid networks for crowd counting' Question 2: The abstract should begin with the problem statement, rather than directly talking about the performance. Response 2: (1) Part of our abstract originally intended to discuss the problem of crowd counting from the beginning, but the first sentence is not appropriate, has been revised. (2) Line10-11: Crowd counting has been widely studied by deep learning in recent years. However, due to scale variation caused by perspective distortion, crowd counting is still a challenging task. Question 3: Legends of Fig. 2, 3 and 4 need more explanation. Response 3: (1) In the previous version of the paper, most of our explanations on the content of Figure 2, Figure 3 and Figure 4 were put in the manuscript. Now we have revised it to make readers more clearly understand the content in the figures. (2) Details can be found in Figure 2, Figure 3 and Figure 4. ①Figure 2. The architecture of DMPNet for crowd counting and high-quality density map. It contains VGG16 (Simonyan & Zisserman, 2014) as the frontend network and three MPNs stacked by dense connections as the back-end network. MPN is composed of LPN, GPN and MFFN. It is used to extract human head features at different scales, and the resolution and channel number of input feature maps remain unchanged. ②Figure 3. Compare the calculation process of Standard Convolution and Pyramid Convolution. In pyramid convolution, the input feature map is calculated with convolution kernels of different sizes, and then the obtained feature map is connected by channel as the output feature map. 
The size of convolution kernel is decreasing, and the depth of convolution kernel is increasing. ③Figure 4. Compare the calculation process of Standard Convolution and Group Convolution. In grouping convolution, the input feature map is divided into N groups, and the convolution kernel is also divided into N groups accordingly. The calculation is carried out in the corresponding group. Each group will generate a feature map, and a total of N feature maps are generated. Question 4: In the conclusion section, mention the limitation of the proposed model, and future work. Response 4: (1) Many studies use attention mechanism to filter crowd background noise. In this paper, we focus on solving the problem of scale change and local consistency in crowd counting, and do not deal with the background noise. In the future work, we will introduce the attention mechanism to solve the background noise problem, further improve the counting accuracy and the quality of the crowd density map. (2) Although we deal with scale variation well, we do not eliminate background noise in the crowd density map, which will affect the counting accuracy to some extent. In future work, we will introduce attention mechanism to deal with background noise. 3 Comments from the reviewers 3.1 Reviewer 1:Pongpisit Thanasutives The paper, at a high level, presents a way to cope with the scale variation problem in Crowd Counting research. The authors clearly write about their motivation for coming up with the proposals and also compare the counting performance with state-of-the-art methods on 4 major datasets. The main contributions of the paper are the proposed Multi-scale Pyramid Networks (MPN), which are densely connected for better information retention throughout the deep networks. I shall comment on the basic components of the paper that could be improved as follows: 3.1.1 Basic reporting Question 1: I believe that the very first paper that uses dense connections is “Dense Scale Network for Crowd Counting” (DSNet in Table 2) [1]. Thus, (Dai et. al., 2021) should be mentioned when introducing the dense connections. Response 1: (1) The view put forward by the reviewers is correct, and our study is based on DSNet (Dai et. al., 2021), which includes multi-scale extraction modules (MPN) and loss functions. We don 't have much innovation in the processing of dense connections, but we follow DSNet directly. (2) In order to highlight the contribution of DSNet to proposing dense connections, we give a clear explanation. Lines 59—63 :Different layers of neural network contain different crowd information, but with the increase of network depth, some details are gradually lost. DSNet (Dai et al., 2021) proposed that using dense connected networks in the field of crowd counting can effectively extract long-distance context information and maximize the retention of network layer information. We follow this operation and connect MPNs with dense connections. Question 2: The published year of DSNet is not consistent. (Dai et. al., 2019) or (Dai et. al., 2021)? Response 2: (1) I 'm sorry for this error. The second (Dai et. al., 2021) is correct, and we conducted a consistency review of the publication year of DSNet. Question 3: In crowd counting research, to “keep the input and output resolutions unchanged”, the encoder-decoder structure (See [3] and [5]) is usually utilized. The introduction to the structure is worth adding for better context. 
Response 3: (1) Following the comments of the reviewers, we add the introduction of the research on crowd counting using encoder-decoder structure (Jiang et al.,2019; Thanasutives et al., 2021). (2) Lines 46—58 (3) REFERENCES: [1] Thanasutives, Pongpisit, et al. 'Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting.' 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. [2] Jiang, Xiaolong, et al. 'Crowd counting and density estimation by trellis encoderdecoder networks.' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [3] W. Liu, M. Salzmann, and P. Fua. Context-aware crowd counting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5099– 5108, 2019. 1, 2, 3, 4, 5, 6, 7 [4] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. 2, 3, 7 Question 4: In the RELATED WORK Section, to better support the difference between this work and others, there should be examples of previous papers that add “extra perspective maps or attention maps”, for instance [2],[3], and [4]. Specifically [3] also adopts multiple branches of convolution with different kernel sizes, namely the ASPP structure. Response 4: (1) Following the comments of the reviewer, we added some examples of 'additional perspective or attention map'. In addition, we also added some references according to the comments of other reviewers. (2) Lines:46—56; 114—123; 141—155 Question 5: In Figure 4., please carefully check that #groups = 2 or 4? Based on Figure 4., the incoming feature channels are divided into 2 groups? Response 5: (1) We use groups = 4 in experiments. In Fig. 4, we use groups = 2 to introduce the principle of group convolution more simply. (2) Lines:195--209 Question 6: About describing the configurations (kernel_size, #output_channel, ...) of a convolution operation, It is preferable to use \times over the “*” notation. The “*” notation may be reserved for the convolution operation itself. Response 6: I'm sorry for this kind of error, we have corrected the mathematical notations. Question 7: In Figure 5, Is there any particular reason, the larger kernel_size is, the smaller output channels and the groups are? Is this just to balance computational resources between branches? Response 7: As the reviewer said, we are to balance the computing resources between branches. The goal of LPN and GPN is to use multi-scale kernel to process input without increasing computational complexity or parameters, and to realize multi-scale feature acquisition. Larger kernel_size often requires more computation. So the larger the LPN and GPN are designed as kernel _ size, the smaller the output channels and groups are. Question 8: (Typo) Change “MFFE” to “MFFN”? Response 8: I'm sorry for this kind of error, we have changed “MFFE” to “MFFN”. Question 9: (Typo) What is G (with \theta parameterization) in the density level consistency loss? Should G be F? Response 9: I'm very sorry to have this kind of mistake, the same meaning symbol should be consistent. The reviewer ' s understanding is correct, G is F. Question 10: Is C in the calculation of MAE loss, the summation over all pixels? The definition of C can be made more clear. Do I and D^{GT} notations have the same meaning? As I_{i} cannot appear twice, please revise the MAE loss formulation. 
Response 10: (1) The reviewer ' s understanding is correct. C represents the sum of all pixels. (2) 𝐷𝑖𝐺𝑇 and 𝐼′𝑖 notations have the same meaning. They all represent the real density map of 𝑋𝑖 . (3) We revised the MAE loss formulation. Lines: 258—263. Question 11: \alpha and \beta for weighting the losses are chosen based on what criterion? Experiences? Random? or validation count performance? Response 11: (1) This can be considered to be set based on experience. We learned from the reading of many crowd counting papers that the equilibrium parameters are usually set to 10-4 and 10-3, so we set the fixed parameters in our experiment without further discussion. (2) For example, References [1][2][3] [1] Cao X, Wang Z, Zhao Y, et al. Scale Aggregation Network for Accurate and Efficient Crowd Counting[C]// European Conference on Computer Vision. Springer, Cham, 2018. [2] Ma J, Dai Y, Tan Y P. Atrous Convolutions Spatial Pyramid Network for Crowd Counting and Density Estimation[J]. Neurocomputing, 2019. [3] J Cheng, Chen Z, Zhang X Y, et al. Exploit the potential of Multi-column architecture for Crowd Counting[J]. 2020. (3) If the reviewers believe that the discussion on this part is necessary, we will further modify our content References [1] Dai, Feng, et al. 'Dense scale network for crowd counting.' Proceedings of the 2021 International Conference on Multimedia Retrieval. 2021. [2] Zhu, Liang, et al. 'Dual path multi-scale fusion networks with attention for crowd counting.' arXiv preprint arXiv:1902.01115 (2019). [3] Thanasutives, Pongpisit, et al. 'Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting.' 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. [4] Shi, Miaojing, et al. 'Revisiting perspective information for efficient crowd counting.' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [5] Jiang, Xiaolong, et al. 'Crowd counting and density estimation by trellis encoder-decoder networks.' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [6] Chen, Liang-Chieh, et al. 'Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.' IEEE transactions on pattern analysis and machine intelligence 40.4 (2017): 834-848. 3.1.2 Experimental design Question 1: On defining the evaluation metrics, you can use D^{GT} instead of Z`. The notations need consistency. There are no “bolded” numbers, indicating the best results in any Table. Response 1: (1) In order to maintain the consistency of notations, we modify Z' to 𝐷𝑖𝐺𝑇 when defining the evaluation metrics. (2) We highlighted the results of the top two in bold form. Question 2: Current SOTA methods are incomplete, [2], [3], [4], and [5] are missing. Response 2: (1) We supplement these missing methods, as shown in Table 2. (2) We discussed the supplementary methods. Question 3: In Table 3., it is not clear whether (w/ LPN, w/o GPN) and (w/ GPN, w/o LPN) are trained with MFFN or not? In the 3rd row, MPN should be MFFN? Another typo? Response 3: (1) MFFN was used in our ablation experiments, which was explained in the new version in the title of Table 3, as shown in Table 3. (2) In previous versions, the third line was intended to express that we used the full MPN, which might cause ambiguity. Now it has been modified, as shown in table 3. Question 4: From Table 4., it is still unclear how many blocks of MPN blocks should be densely connected? 
In this paper, It is 3, but how about 1, 2, or even 4? This may be addressed in the ablation study section. Response 4: (1) (2) (3) (4) When the number of MPN is 3, the result is the best. we added new ablation experiments to discuss the effect of the number of MPN. Lines:364—371. In order to verify the influence of the number of MPNs on the results, the number of MPNs is gradually increased and dense connections are used in different structures. The results are shown in Table 6. When the number of N is not greater than 3, the result of crowd counting is better as the number of MPN increases. When N=3, MAE and RMSE are 63.7 and 98.3, respectively. When N=4, the results were 63.4 and 97.7, with no significant improvement. In DMPNet, we use dense connection, so there is no need to set too many MPN numbers, which will cause the increase of parameters and the redundancy of calculation. (5) Table 6 Table 6. The estimation errors of different MPN numbers are compared on ShanghaiTech Part A (Zhang et al., 2016). MPN(n) represents that the network contains n MPNs. Method MAE RMSE 𝑀𝑃𝑁(1) 71.0 111.3 𝑀𝑃𝑁(2) 66.2 103.4 𝑀𝑃𝑁(3) 63.7 98.3 𝑀𝑃𝑁(4) 64.4 97.7 Question 5: This paper recommends the use of the group convolution (over the normal one) but does not compare the number of trainable parameters of the proposed Multi-scale Pyramid Network, when G varies and the case that G = 1 for all the branches. Response 5: (1) When we introduce group convolution, we discuss the number of parameters and computation of group convolution and standard convolution. (2) Lines:190—204。 (3) The size of the input feature map is C∗H∗W, and the number of output feature maps is N. If G groups are to be divided, the number of input feature maps in each group is C/G, the number of output feature maps in each group is N/G, and the size of each convolution kernel is (C/G) ∗ K ∗ K. The total number of convolution kernels is still N, and the number of convolution kernels in each group is N/G. The convolution kernels are only convolved with the input feature maps of the same group, and the total number of parameters of the convolution kernels is N∗(C/G)∗K∗K. It can be seen that the number of total parameters is reduced to the original 1/G, and its connection mode is as shown in the figure above. Group1 has 2 output feature maps, and there are 2 convolution kernels, and the number of channels of each convolution kernel is 4, which is the same as the number of channels of the input map of Group1. The convolution kernel only convolves with the input map of the same group. Instead of convolution with the input feature maps of other groups. References [1] Dai, Feng, et al. 'Dense scale network for crowd counting.' Proceedings of the 2021 International Conference on Multimedia Retrieval. 2021. [2] Zhu, Liang, et al. 'Dual path multi-scale fusion networks with attention for crowd counting.' arXiv preprint arXiv:1902.01115 (2019). [3] Thanasutives, Pongpisit, et al. 'Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting.' 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. [4] Shi, Miaojing, et al. 'Revisiting perspective information for efficient crowd counting.' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [5] Jiang, Xiaolong, et al. 'Crowd counting and density estimation by trellis encoderdecoder networks.' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. 
[6] Chen, Liang-Chieh, et al. 'Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.' IEEE transactions on pattern analysis and machine intelligence 40.4 (2017): 834-848. 3.1.3 Validity of the findings Question 1: (Important) Since this paper would be perceived (if it gets published) as an incrementally improved version of DSNet, some audiences might get confused by the fact that the DMPNet only outperforms DSNet in terms of lowering MSE computed from the ShanghaiTech Part_B dataset. Hence, I advise that you can also compare the number of trainable parameters or inference speed for better justification of your proposed DMPNet. I guess that DMPNet might contain fewer parameters as you employed the group convolution operation. Response 1: (1) Our work is partly based on DSNet, which we explicitly name in the paper. (2) As the reviewer understands, our network has fewer parameters than DSNet, which we also pointed out when introducing group convolution. (3) We revised the content of the paper to highlight our advantages in training parameters. Question 2: As I mentioned above, it is hard to justify that DMPNet has “better in both mean absolute error (MAE) and mean squared error (MSE)” at the current state of the paper. So, there may be doubts whether “MPN can effectively extract multi-scale features” or not? Response 2: (1) It is common to use pyramid structure network to extract multi-scale features, such as [1] [2] [3]. We are also inspired by these works to design MPN. (2) In the ablation experiment, we conducted experiments on different modules in MPN and proved the role of each module, as shown in Table 2. (3) In the visualization work, we reproduce some papers, and it can also be seen that MPN is effective for multi-scale feature extraction. (4) If reviewers still have questions, they can have further discussion. [1] Johnson R, Tong Z. Deep Pyramid Convolutional Neural Networks for Text Categorization[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2017. [2] Mehta S, Rastegari M, Caspi A, et al. ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation[J]. Springer, Cham, 2018. [3] Duta I C, Liu L, Zhu F, et al. Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition[J]. 2020. Question 3: On lines 290-291, the claim that “Our DMPNet has well solved the problems of crowd occlusion, perspective distortion, and scale variations” is a bit exaggerated. It is unclear how exactly you deal with the perspective distortion? Is this solely based on your experimental results of the UCF_QNRF dataset? Response 3: (1) Our discussion is based on the characteristics of the crowd samples in the dataset. It is somewhat exaggerated only according to the experimental results of the UCF_QNRF dataset. We have modified this argument. (2) Lines:315—322 3.2 Reviewer 2 The authors proposed a Densely connected Multi-scale Pyramid Network (DMPNet) based on VGG and Multi-scale Pyramid Network (MPN). The latter consists of three modules that aim at extracting features at different scales. The authors compared performances obtained by DMPNet with other methods at state of the art. 3.2.1 Basic reporting Question 1: Figures and Table should be placed before the bibliography Response 1: (1) We upload files separately according to the periodical layout requirements. There should be more suitable layouts after the papers are accepted. 
(2) If we have any errors in the format of the paper, we will make timely corrections. Question 2: Related work section should include all methods used for comparison. Response 2: (1) We supplement the missing literature in the related work and SOTA methods. (2) lines:103--154 Question 3: Introduce every acronym before using it (e.g., CNN, ASNet, etc.) Response 3: (1) We introduce the abbreviations that appear in the paper when they first appear. (2) For example: (Convolutional Neural Networks, CNN), (Attention Scaling Network, ASNet) Question 4: Viresh etal. (line 97-98) and CP-CNN method (Table 2) are not present in the bibliography Response 4: (1) We have supplemented the missing references in the paper. (2) For example: Viresh R J, Le H, Hoai M. 2018. Iterative Crowd Counting. 15th European Conference on Computer Vision. 278-293. DOI 10.1007/978-3-030-01234-2_17. Sindagi V A, Patel V M. 2017. Generating High-Quality Crowd Density Maps Using Contextual Pyramid CNNs. 2017 IEEE International Conference on Computer Vision (ICCV). 1879-1888. DOI 10.1109/ICCV.2017.206. Question 5: Figures 3 and 4 have the same caption. Response 5: (1) The titles in Fig. 3 and Fig. 4 are different. (2) According to the editing requirements, we modify the title of Figure 3 and Figure 4. ①Figure 3. Compare the calculation process of Standard Convolution and Pyramid Convolution. In pyramid convolution, the input feature map is calculated with convolution kernels of different sizes, and then the obtained feature map is connected by channel as the output feature map. The size of convolution kernel is decreasing, and the depth of convolution kernel is increasing. ②Figure 4. Compare the calculation process of Standard Convolution and Group Convolution. In grouping convolution, the input feature map is divided into N groups, and the convolution kernel is also divided into N groups accordingly. The calculation is carried out in the corresponding group. Each group will generate a feature map, and a total of N feature maps are generated. Question 6: Captions should be improved Response 6: According to the requirements of academic editors, we changed the topic of the paper to 'DMPNet: Densely connected multi-scale pyramid networks for crowd counting'. Question 7: In table 2, no results are highlighted in bold Response 7: We highlighted the results of the top two in bold form. Question 8: Figure 1 presents 'Table 1' in the caption Response 8: (1) We checked the title of figures and tables in the paper, but did not find the mistakes mentioned by the reviewers. (2) If there are other questions, we will continue to revise our paper. Question 9: In the manuscript and Figure 5, the symbol '*' was used. It is necessary to replace it with 'x' (in Latex, \times). Response 9: Reviewer 1 also raised this issue, and We have modified this error Question 10: lines 35-36, 53-54, 103-106, 107-108, 195-196 need references Response 10: We cited the papers involved in these contents. 3.2.2 Experimental design The method is described with sufficient details. Moreover, the authors provide the code. Question 1: The training methods section can be integrated into Experimental and Discussion Response1: Following the comments of reviewers, we integrate the training methods into the experiment and discussion. Question 2: The measure MSE (Section 'Evaluation Metrics') is the 'Root Mean Squared Error' (RMSE) Response 2: (1) We are sorry for this mistake, and we have made revisions. 
(2) See lines: 282--288 and Table 2, 3, 4, 5 Question 3: lines 267-271, 280-282 can be improved Response 3: We modified these contents. Our discussion is based on the characteristics of the crowd samples in the dataset. 3.2.3 Validity of the findings Question 1: The ablation experiment section can be improved by studying these effects on the other considered datasets. Response1: (1) The datasets we selected are benchmark and challenging crowd counting datasets, which are also used in most studies【MCNN】【CSRNet】【DSNet】 【RANet】. (2) If reviewers consider it necessary to conduct experiments on other additional data sets, we will make further improvements. (3) In the ablation experiment, we added and analyzed the effect of the number of MPN. (4) In order to verify the influence of the number of MPNs on the results, the number of MPNs is gradually increased and dense connections are used in different structures. The results are shown in Table 6. When the number of N is not greater than 3, the result of crowd counting is better as the number of MPN increases. When N=3, MAE and RMSE are 63.7 and 98.3, respectively. When N=4, the results were 63.4 and 97.7, with no significant improvement. In DMPNet, we use dense connection, so there is no need to set too many MPN numbers, which will cause the increase of parameters and the redundancy of calculation. (5) Table 6 Table 6. The estimation errors of different MPN numbers are compared on ShanghaiTech Part A (Zhang et al., 2016). MPN(n) represents that the network contains n MPNs. Method MAE RMSE 𝑀𝑃𝑁(1) 71.0 111.3 𝑀𝑃𝑁(2) 66.2 103.4 𝑀𝑃𝑁(3) 63.7 98.3 𝑀𝑃𝑁(4) 64.4 97.7 3.2.4 Additional comments Question 1: There are some typos (e.g., SOAT --> SOTA) Response1: I'm sorry for this mistake, we've improved and re-examined the grammar and writing of the paper. "
Here is a paper. Please give your review comments after reading it.
365
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The use of robots in carrying out various tasks is popular in many industries. In order to carry out a task, a robot has to move from one location to another using shorter, safer and smoother route. For movement, a robot has to know its destination, its previous location, a plan on the path it should take, a method for moving to the new location and a good understanding of its environment. Ultimately, the movement of the robot depends on motion planning and control algorithm. This paper considers a novel solution to the robot navigation problem by proposing a new hybrid algorithm. The hybrid algorithm is designed by combining the ant colony optimization algorithm and kinematic equations of the robot.</ns0:p><ns0:p>The planning phase in the algorithm will find a route to the next step which is collision free and the control phase will move the robot to this new step. Ant colony optimization is used to plan a step for a robot and kinematic equations to control and move the robot to a location. By planning and controlling different steps, the hybrid algorithm will enable a robot to reach its destination. The proposed algorithm will be applied to multiple pointmass robot navigation in a multiple obstacle and line segment cluttered environment. In this paper, we are considering a priori known environments with static obstacles. The proposed motion planning and control algorithm is applied to the tractor-trailer robotic system. The results show a collision and obstacle free navigation to the target. This paper also measures the performance of the proposed algorithm using path length and convergence time, comparing it to a classical motion planning and control algorithm, Lyapunov based control scheme (LbCS). The results show that the proposed algorithm performs significantly better than LbCS including the avoidance of local minima.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Over the years, the use of robots has evolved due to improved capacities and abilities to carry out complex and diverse activities in areas such as manufacturing, logistics, home, travel, health, mining, civil, military, and transportation <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b16'>16,</ns0:ref><ns0:ref type='bibr' target='#b17'>17,</ns0:ref><ns0:ref type='bibr' target='#b28'>28]</ns0:ref>. In almost all cases, a robot must travel from one point to another in order to complete a task. A robot should also avoid collisions and dangerous situations while navigating through its surroundings in order to reach a certain point. This is generally known as findpath or robot navigation problem which invariably has four categories: localization, path planning, motion control, and cognitive mapping <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. This paper focuses on the motion planning and control problem. Due to its usefulness in real-world applications, robot motion planning and control has been a widely researched topic over the past four decades. The primary goal of robot path planning and control is to identify the most efficient and safe route from point A to point B and subsequently control the robot to point B. There are two subtasks in robot motion planning and control. 
(1) plan a path which should be obstacle and collision free, and (2) control the robot to its destination.</ns0:p><ns0:p>Various methods are available in the literature for path planning which can be categorized into classical, heuristic and machine and deep learning. Artificial potential field <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>, cell decomposition <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, road map <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>, and virtual force field <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> are examples of classical or traditional approach. Optimization algorithms such as firefly <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>, ant-colony <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>, and particle swarm <ns0:ref type='bibr' target='#b44'>[43]</ns0:ref> are examples of heuristic approaches.</ns0:p><ns0:p>Algorithms such as neural networks, decision trees, Nave Baiyes, and others are used in machine and PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65340:1:2:NEW 16 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science deep learning techniques for robot path planning <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. This paper will focus on a heuristic approach, in particular, the ant colony optimization. An Ant colony optimization (ACO) is a popular algorithm that has been applied to different problems, including path planning problems in robotics <ns0:ref type='bibr' target='#b2'>[3,</ns0:ref><ns0:ref type='bibr' target='#b30'>30]</ns0:ref>. In this paper, given the nature of the problem, the authors use the ACO algorithm developed by Socha and Dorigo <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> for continuous domain (ACOR); however, there are different variants of ACO algorithms such as fuzzy heuristic based ACO <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>, continuous ACO (CACO) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> and continuous interacting ant colony (CIAC) <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> which have been utilized for different problems and situations. The authors chose the variant of ACO proposed by Socha and Dorigo because it clearly outperformed other continuous ACO variants like CACO, API and CIAC in eight test functions <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> and it follows the original ACO formulation. Furthermore, Socha and Dorigo also compared ACOR with continuous genetic algorithm (CGA), enhanced continuous tabu search (ECTS), enhanced simulated annealing (ESA) and differential evolution <ns0:ref type='bibr'>(DE)</ns0:ref>, where ACOR performed well in one third of the test problems and performed not much worse on other problems <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>.</ns0:p><ns0:p>Once a path is planned the robot can track or follow the planned path to reach its destination. There are different methods in the literature for robot motion control such as kinematics, dynamics, artificial potential field, fuzzy logic and many more <ns0:ref type='bibr' target='#b14'>[15,</ns0:ref><ns0:ref type='bibr' target='#b37'>37,</ns0:ref><ns0:ref type='bibr' target='#b38'>38,</ns0:ref><ns0:ref type='bibr' target='#b46'>45]</ns0:ref>. This paper will focus on kinematic equations of the robots for motion control.</ns0:p><ns0:p>Majority of the studies in the literature carry out motion control in two different phases: (1) path planning, and (2) subsequent robot tracking or control. This paper will address the problem of the motion planning and control differently. 
A robot's next step will be planned and the robot will move to that step.</ns0:p><ns0:p>At the time of the robot's step planning, the obstacles and other robots will be considered and a collision free step will be determined. By repeating this process, the robot will reach its destination . Therefore, there is path planning and control at every step to its destination. This research will introduce a new hybrid algorithm that is a strategic combination of an ant colony optimization algorithm and kinematic equations of robot motion. The purpose of the algorithm is to plan a step (location) for the robot and the robot will use its kinematic equations to move to that step (location). This process is repeated until the final destination is reached. This is the main difference when compared to other algorithms and hybrids as most plan the entire path first and then the robot starts the journey. Selected scenarios will be used to showcase the new hybrid and a performance comparison in terms of path length and convergence time will be made with the Lyapunov based Control Scheme (LbCS), a classical approach for motion control.</ns0:p><ns0:p>The main contributions of this paper are as follows:</ns0:p><ns0:p>1. ACO-Kinematic Algorithm: A new hybrid algorithm is proposed which plans a step for the robot and the robot moves to that step and this process continues until the robot reaches its destination. In literature, according to authors knowledge, there is no hybrid algorithm that plans next step location of robot and controls it to that step location. The proposed algorithm also solves the problem of local minima.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Multi-objective problem formulation:</ns0:head><ns0:p>A new multi-objective problem is formulated in terms of path length and safety of the path. The safety objective is achieved through ant colony optimization and the path length objective is achieved by both, ant colony optimization and kinematic equations.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Application:</ns0:head><ns0:p>The methodology derived for point-mass robots has been successfully applied to tractor-trailer robotic system to show the effectiveness of the proposed algorithm in real life applications.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>Analysis:</ns0:head><ns0:p>The performance of the new hybrid algorithm has been compared to that of lyapunov based control scheme (LbCS) in terms of path length and convergence time. Such a comparison using the LbCS has been carried out for the first time. The analysis show that ACO-Kinematic performed slightly better than LbCS including the avoidance of local minima.</ns0:p><ns0:p>Section 2 presents the literature review on motion planning and control algorithms. The problem statement is discussed in section 3. Sections 4 discusses the ant colony optimization algorithm. The three objectives of motion planning namely, short, safe and smooth path are formulated and discussed in section 5. The proposed algorithm is presented in section 6. The three case studies and example scenarios, including the kinematic equations for the tractor-trailer robot are discussed in section 7. Section 8 presents Manuscript to be reviewed Computer Science three different scenarios to measure performance of the proposed algorithm and the LbCS. 
Finally, the paper concludes in Section 9, discussing its contributions and recommendations for future work.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>In literature, there are mostly instances of path planning and motion control being researched and implemented separately. There are a few algorithms that consider motion planning and control in parallel or simultaneously. The section will outline these algorithms and present a brief comparison with the proposed algorithm.</ns0:p><ns0:p>Firstly, there are algorithms that carry out path planning and then control the robots on these paths, usually known as path-tracking. For example, Saputro et al. implemented trajectory planning and tracking of mobile robots using a map and predictive control <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>. A map of the environment is first created and then the collision free path is searched using A* algorithm. Then a model predictive control is used to track the robot on the reference path. Likewise, Ning et al. implemented a trajectory planning and tracking control scheme for autonomous obstacle avoidance of wheeled inverted pendulum (WIP) vehicles <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>.</ns0:p><ns0:p>Motion planning and trajectory control has been applied to car parking as well <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>, where the authors first find a path by solving a static optimization problem and then use a optimal controller for parking.</ns0:p><ns0:p>The reader is referred to <ns0:ref type='bibr' target='#b4'>[5,</ns0:ref><ns0:ref type='bibr' target='#b8'>9,</ns0:ref><ns0:ref type='bibr' target='#b42'>41,</ns0:ref><ns0:ref type='bibr' target='#b45'>44,</ns0:ref><ns0:ref type='bibr' target='#b47'>46]</ns0:ref> for other examples of such algorithms.</ns0:p><ns0:p>Secondly, there are algorithms that consider path planning and motion control in parallel. <ns0:ref type='bibr'>Wahid et al.</ns0:ref> used artificial potential field method for motion planning and kinematics for the control to design vehicle collision avoidance assistance systems <ns0:ref type='bibr' target='#b43'>[42]</ns0:ref>. Lyapunov based control scheme (LbCS) was developed by Sharma et al. <ns0:ref type='bibr' target='#b33'>[33,</ns0:ref><ns0:ref type='bibr' target='#b34'>34]</ns0:ref> and has been used by many researchers in different scenarios for robot motion planning and control <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b11'>12,</ns0:ref><ns0:ref type='bibr' target='#b22'>22,</ns0:ref><ns0:ref type='bibr' target='#b24'>24,</ns0:ref><ns0:ref type='bibr' target='#b26'>26,</ns0:ref><ns0:ref type='bibr' target='#b28'>28]</ns0:ref>. LbCS is a time-invariant nonlinear method that is used to create velocity or acceleration-based controls enabling a robot to move safely in the workspace while avoiding obstacles. Raj et al. used LbCS to control a system of 1-trailer robots in a cluttered environment including a swarm of boids <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> and navigate car-like robots in 3-dimensional space <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. Prasad et. al used LbCS to derive the acceleration-based controllers for the mobile manipulator in 3-dimensional <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> and to motion control a pair of cylindrical manipulators in a constrained 3-dimensional workspace <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. 
Researchers also used LbCS for controlling quadrotors in different environments <ns0:ref type='bibr' target='#b27'>[27,</ns0:ref><ns0:ref type='bibr' target='#b29'>29,</ns0:ref><ns0:ref type='bibr' target='#b41'>40]</ns0:ref>.</ns0:p><ns0:p>All of the motion planning and control algorithms outlined in the previous paragraph except LbCS first plan the path for the entire journey and then control the motion of the robot on the planned path.</ns0:p><ns0:p>However, the proposed algorithm is different from these algorithms as it plans one step at a time and controls the motion of the robot to that step. LbCS also plans a step and then controls the motion of the robot to that step. Therefore, the proposed algorithm will be compared for performance with the LbCS.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PROBLEM STATEMENT AND OBJECTIVES</ns0:head><ns0:p>Suppose there is an initial position and a target position for each robot in a bounded workspace with several static obstacles of different shapes and sizes. The mobile robots will start at the initial position, avoid obstacles in path and reach their designated target position. Therefore, the main objective of the mobile robots is to reach their target by taking a shorter and safer route (avoid collision with obstacles and other robots).The following assumptions are made to achieve the objective of motion control in this paper: Assumption 1 The obstacles are of circular and rectangular shapes. In some cases, it is also represented as line segments. These obstacles are of different sizes and randomly distributed in the bounded workspace. The obstacles are static with known locations.</ns0:p></ns0:div> <ns0:div><ns0:head>Assumption 2</ns0:head><ns0:p>The kinematic equations are used for the motion of point-mass and tractor-trailer robots. The robots can move in any direction.</ns0:p><ns0:p>Since this paper introduces first of a kind hybrid, more features such as handing uncertainties in the environment and locations of the static obstacles and the effects of noise will be added in the future work.</ns0:p><ns0:p>The research objectives of this paper are as follows:</ns0:p><ns0:p>&#8226; Design and implement a hybrid algorithm composed of a heuristic and a classical method for planning a step location of robot and controlling it to that step, and hence reach a target with a series of such steps. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Apply the hybrid algorithm to a real life application such as tractor-trailer robots.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>ANT COLONY OPTIMIZATION</ns0:head><ns0:p>Ant colony optimization (ACO) was initially developed by Dorigo et al. <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> in the early 90's for combinatorial optimization but later modified for continuous domains <ns0:ref type='bibr' target='#b6'>[7,</ns0:ref><ns0:ref type='bibr' target='#b36'>36]</ns0:ref>. ACO has been inspired from natural ants, which move out of their nest in search of food and move randomly in the surrounding area. Once an ant finds food, it evaluates and carries it on its back. On the way back to the nest, the ant deposits a pheromone trail on the ground which depends on the amount of food and quality. This pheromone trail guides other ants to the food source <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>. 
In this paper, we have used the ant colony optimization for continuous domain (ACOR) proposed by Socha and Dorigo <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>. The original ACO is based on discrete domain where pheromone and heuristic information are used to make different probabilistic choices.</ns0:p><ns0:p>The main idea of ACOR is shifting from a discrete probability distribution to a continuous one that is, a probability density function (PDF). In ACOR, an ant samples a PDF instead of choosing a solution component when compared to ACO in discrete domain. ACOR closely follows the metaheuristic of ACO in discrete domain.</ns0:p><ns0:p>ACO algorithm for continuous domain has two types of population: archive and new. The pheromone information is stored as a solution archive. The solutions in the archive are ordered according to their quality (determined through objective function evaluations). Each solution has an associated weight proportion to the solution quality. ACOR uses pheromone information to make the probabilistic choice.</ns0:p><ns0:p>In the case of robot motion planning and control, the next step for the robot will be determined by the fittest ant (pheromone information) and the robot will move to that step using kinematic equations.</ns0:p><ns0:p>The algorithm is divided into the following phases:</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Initialization Phase</ns0:head><ns0:p>The algorithm starts with the creation of archive solution during the initialization of ants. ACOR utilizes Gaussian kernal which has three vectors of parameters: weights, means and standard deviations. The time complexity for this phase is O(n), where n is the number of ants.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Archive Population Phase</ns0:head><ns0:p>For the archive solution, the means and standard deviations are calculated using equations ( <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_1'>2</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_0'>&#181; = &#181; 1 , ....&#181; n = s 1 , ...., s n ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where &#181; i are the means and s i the archive solutions for i = 1...n, and</ns0:p><ns0:formula xml:id='formula_1'>sd i = &#958; k &#8721; n=1 |s n &#8722; s l | k &#8722; 1 ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where sd i is the standard deviation, k is the archive population, &#958; is the convergence speed of the algorithm and s l is the chosen solution. ACOR stores in archive solution the values of the solutions' n variables and the value of their objective functions. This phase has a time complexity of O(n), where n is the number of ants.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>New Population Creation Phase</ns0:head><ns0:p>This phase involves creation of the new population array which is subsequently randomly initialized. This phase has a time complexity of O(1).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Solution Construction Phase</ns0:head><ns0:p>This phase starts which consists of the Gaussian kernel selection and then generating Gaussian random variable. Gaussian kernel selection is based on roulette wheel selection and probability. The probability is computed based on weights. 
The weights and probability are given in equations ( <ns0:ref type='formula' target='#formula_2'>3</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>): Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>w i = 1 qk &#8730; 2 * &#928; exp (&#8722;0.5 (i &#8722; 1) 2 (qk) 2 ),<ns0:label>(3) 4</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where, w i is the weight for individual solution, q is the intensification factor and k is the archive population, and</ns0:p><ns0:formula xml:id='formula_3'>P i = w n &#8721; k n=1 w n ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where, P i is the probability of individual archive solution, k is the archive population and w n is the weight of individual archive solution.</ns0:p><ns0:p>An ant chooses probabilistically one of the solutions in the archive using equation ( <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>).</ns0:p><ns0:p>The position of the new population is the random variable generated using means and standard deviations from the Archive Population Phase and normally distributed random numbers. This phase has a time complexity of O(n), where n is the number of ants.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Evaluation Phase</ns0:head><ns0:p>The new constructed solution is then evaluated using a fitness function and merged with the archive population. Finally, the total population is sorted to get the best solution and a new set of archive solution.</ns0:p><ns0:p>The time complexity for this phase is O(1).</ns0:p><ns0:p>All phases except the Initialization phase will be repeated until the robot reaches its destination.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>ROBOT MOTION PLANNING AND CONTROL FORMULATION</ns0:head><ns0:p>In this research, the path planning problem of robots is solved by the ACO while their motion controls by the kinematic equations. For the path planning problem, initial artificial ants and objective functions are used. For motion control, the kinematic equations will be used to control a robot to a point (step location) generated by ants. Kinematic equations which are essentially ODE's governing the motion of robot are used to control the motion of the robot. These equations are dependent on the type of the robot.</ns0:p><ns0:p>For example, a point mass robot will have a set of different kinematic equations when compared that of a tractor-trailer robot because the latter may include nonholonomic constraints. This section is further divided into ants representation, multi-objective path planning problem, and the problem formulation of path planning and motion control of robots.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Multi-Objective Path Planning Problem</ns0:head><ns0:p>The robot path planning problem is formulated as the multi-objective problem. The two objectives are obtaining short path and safe path of the robots.</ns0:p><ns0:p>The following is the definition of a point-mass robot adopted from <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>:</ns0:p><ns0:p>Definition 5.1. A jth point-mass P j is a disk of radius rp j &#8805; 0 and is positioned at (x j (t), y j (t)) &#8712; R 2 at time t &#8805; 0. Precisely, the point-mass is the set</ns0:p><ns0:formula xml:id='formula_4'>P j = {(z 1 , z 2 ) &#8712; R 2 : (z 1 &#8722; x j ) 2 + (z 2 &#8722; y j ) 2 &#8804; rp 2 j } for j = 1, 2, . . . , n,</ns0:formula><ns0:p>Definition 5.2. 
The target for P j is a disk of center (p j1 , p j2 ) and radius rt j which is described as</ns0:p><ns0:formula xml:id='formula_5'>T j = (z 1 , z 2 ) &#8712; R 2 : (z 1 &#8722; p j1 ) 2 + (z 2 &#8722; p j2 ) 2 &#8804; rt 2 j (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>)</ns0:formula><ns0:p>for j = 1, 2, . . . , n.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Ant Representation</ns0:head><ns0:p>The motion of each robot will be guided by a colony of ants which are randomly distributed in the working environment. Since we are considering n point-mass robots, there will be n colonies altogether with each colony having n j ants. The following is the definition of a moving ant in the jth colony:</ns0:p><ns0:p>Definition 5.3. The ith ant in the jth colony A i j is a disk of radius ra i j &#8805; 0 and is positioned at (xa i j (t), ya i j (t)) &#8712; R 2 at time t &#8805; 0. Precisely, the ith ant in the jth colony is the set Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_7'>A i j = {(z 1 , z 2 ) &#8712; R 2 : (z 1 &#8722; xa i j ) 2 + (z 2 &#8722; ya i j ) 2 &#8804; ra 2 i j } for i = 1,</ns0:formula></ns0:div> <ns0:div><ns0:head n='5.2.1'>Short Path</ns0:head><ns0:p>A robot will have a target seeking behaviour, that is, a robot should always be at a minimum distance from the target while it navigates through the cluttered environment. Therefore, the total length of the robot's path should be a minimum. Ants will be used to guide the robot to the target. In case of a multi-robot environment, each robot will have a set of ants associated with it for navigation. Since ants are used to plan the path for the robot, the fittest ant will be chosen for the robot's next step location. For each ant in the jth colony, an Euclidean formula shown in equation ( <ns0:ref type='formula' target='#formula_8'>6</ns0:ref>) is used to calculate the distance between the ith ant and the target:</ns0:p><ns0:formula xml:id='formula_8'>d i j = (p j1 &#8722; xa i j ) 2 + (p j2 &#8722; ya i j ) 2<ns0:label>(6)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='5.2.2'>Safe Path</ns0:head><ns0:p>A path is safe if it has no obstacles in it. However, the workspace considered for this research has multiple obstacles, therefore obstacle avoidance becomes necessary. There are two types of obstacles used in this paper: (1) circular obstacles and (2) line segments. The following are the definitions of a circular obstacle and a line segment, adopted from <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>:</ns0:p><ns0:p>Definition 5.4. The lth circular obstacle with center (o l1 , o l2 ) and radius ro l &gt; 0 on the z 1 z 2 plane is described as</ns0:p><ns0:formula xml:id='formula_9'>FO l = (z 1 , z 2 ) &#8712; R 2 : (z 1 &#8722; o l1 ) 2 + (z 2 &#8722; o l2 ) 2 &#8804; ro 2 l ,</ns0:formula><ns0:p>for l = 1, 2, . . . , q.</ns0:p><ns0:p>Definition 5.5. The kth line segment in the z 1 z 2 plane, from the point (a k1 , b k1 ) to the point (a k2 , b k2 ) is the set</ns0:p><ns0:formula xml:id='formula_10'>LO k = (z 1 , z 2 ) &#8712; R 2 : (z 1 &#8722; a k1 &#8722; &#955; k (a k2 &#8722; a k1 )) 2 + (z 2 &#8722; b k1 &#8722; &#955; k (b k2 &#8722; b k1 )) 2 = 0 , where &#955; k &#8712; [0, 1], k = 1, 2, . . . 
, m.</ns0:formula><ns0:p>For a circular obstacle, Euclidean formula shown in equation ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) is used to calculate the shortest distance between ith ant in the j colony and the lth obstacle:</ns0:p><ns0:formula xml:id='formula_11'>d1 i jl = (o l1 &#8722; xa i j ) 2 + (o l2 &#8722; ya i j ) 2<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>for i = 1, 2, . . . , n j , j = 1, 2, . . . , n and l = 1, 2, . . . , q. Since there are many obstacles, the distance between each ant and the obstacles will be calculated. The sum of the distances between ith ant in the jth colony and obstacles is given by:</ns0:p><ns0:formula xml:id='formula_12'>f 1 i j = q &#8721; l=1 d1 i jl .<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>To avoid a line segment, the distance between an ant and several points on that line segment are calculated, and the point generating minimum distance is considered. This technique is known as minimum distance technique (MDT) that has been adopted from <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>. Avoiding the closest point on a line segment at any given time will result in avoiding the entire line segment. Again, the Euclidean formula is used to calculate the distance of an ant with a point on the line segment:</ns0:p><ns0:formula xml:id='formula_13'>d2 i jk = (a k1 + &#955; i jk (a k2 &#8722; a k1 ) &#8722; xa i j ) 2 + (b k1 + &#955; i jk (b k2 &#8722; b k1 ) &#8722; ya i j ) 2 ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where &#955; i jk &#8712; [0, 1], i = 1, 2, . . . , n j , j = 1, 2, . . . , n and k = 1, 2, . . . , m. Like circular obstacles, there are many line segments, therefore the distance a point on the line segment and an ant will be calculated for each line segment and will be summed as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_14'>f 2 i j = m &#8721; k=1 d2 i jk .<ns0:label>(10</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In a multi-robot environment, the inter-collisions between robots must be avoided. Since ants determine the next step location of a robot, each ant in the jth colony must avoid other robots which is considered as a moving obstacle. The Euclidean distance as shown in equation ( <ns0:ref type='formula' target='#formula_15'>11</ns0:ref>) is used to avoid robots from colliding with each other:</ns0:p><ns0:formula xml:id='formula_15'>d3 i jh = (xa i j &#8722; x h ) 2 + (ya i j &#8722; y h ) 2 ,<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>for i = 1, 2, . . . , n j , j = 1, 2, . . . , n, h = 1, 2, . . . , n, h = j.</ns0:p><ns0:p>The sum of the distances between each ant and different robots is given by:</ns0:p><ns0:formula xml:id='formula_16'>f 3 i j = n &#8721; h=1 h = j d3 i jh<ns0:label>(12)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head n='5.3'>Problem Formulation</ns0:head><ns0:p>The problem is the minimization optimization problem which finds the optimal path for mobile robots in a cluttered environment. The fitness equation is designed by summing all the objective functions defined in this section:</ns0:p><ns0:formula xml:id='formula_17'>f i j = a. 1 f 1 i j + b. 1 f 2 i j + c. 1 f 3 i j + d.d i j<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>which is the fitness of ith ant for robot j and a, b, c and d are control parameters. The ant having the minimum f i j will be the fittest ant in the jth colony. It means that the ant is located at a safe distance from obstacles and at a minimum distance from the target. 
The jth robot will move to the fittest ant of the jth colony and this process will continue until the jth robot reaches its target. The control parameters a, b and c are the fitting parameters that decide path safety. With a high value of parameter a the ants will avoid the stationary circular obstacles from the greater distance. Similarly, a small value of b will mean that the ants will avoid the line segments from a closer distance which can compromise safety. However, when there is a decrease in the value of a, b and c, the chances of collision with obstacles, line segments and other robots are high. Likewise, a high value of d will minimize the path length and a minimum value will maximize the path length. Therefore, a proper selection of these parameters decides the success of the objective function in planning the next step for a robot.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.4'>Motion Control</ns0:head><ns0:p>The kinematic equations are used to control the robot to the step location generated ants. The kinematic equations will be used to generate smooth path for the robots in between the nodes.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>PROPOSED ALGORITHM</ns0:head><ns0:p>The proposed new algorithm is a hybrid of the ant colony optimization and kinematic equations, named ACO-Kinematic. The choice of ACO variant was made from the results of <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> where they showed that it outperformed other ACO variants and was equivalent to some heuristic algorithms for continuous domain.</ns0:p><ns0:p>The robot's next step will be planned using the ant colony optimization algorithm and the robot will move to that step using its kinematic equations. In the literature, there are various hybrid models but according to the authors knowledge, there is none of this kind. The ACO-Kinematic pseudocode is shown in Algorithm 1.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>RESULTS</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> shows the initial parameters of the ACO-Kinematic algorithm used in the three case studies discussed in the following subsections. Parameters 1 -8 are used for path planing while parameters 9 -10 are used for motion control. Safety parameters a, b and c will be used by users to fine tune avoidance of obstacles, line segments and other robots, respectively. Convergence parameter d determines the time it will take for a robot to reach its destination. Faster convergence means compromising the safety of the robot while the slower convergence means that the operation can be costly. Therefore, the parameters in this research have been adjusted with precaution. In this research the authors have deployed the brute force method to generate the values of the control parameters. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='7.1'>Case Study 1: Single Point-Mass Robot and Multiple Obstacles</ns0:head><ns0:p>The proposed algorithm has been used to navigate the point-mass robot from source to destination in a cluttered environment. 
The kinematic equations governing the motion of a point-mass robot from its initial position (x 0 , y 0 ) to another point</ns0:p><ns0:formula xml:id='formula_18'>(p 1 , p 2 ) are &#7819; = &#945; 1 (p 1 &#8722; x), &#7823; = &#945; 2 (p 2 &#8722; y)<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>where &#945; 1 and &#945; 2 are positive real numbers.</ns0:p><ns0:p>Figures <ns0:ref type='figure'>1 and 2</ns0:ref> show two scenarios where the point-mass robot avoids circular and line obstacles to reach its target. The path of the point-mass robot consists of points that have been generated by the ants.</ns0:p><ns0:p>The robot moves from one step to another using its kinematic equations.</ns0:p></ns0:div> <ns0:div><ns0:head n='7.2'>Case Study 2: Multiple Point-Mass Robots and Multiple Obstacles</ns0:head><ns0:p>The proposed algorithm has been used to plan and control motion of multiple point-mass robots in a multiple obstacles (circular and rectangular shapes) environment. Figure <ns0:ref type='figure'>3</ns0:ref> shows the paths of three point-mass robots. The first robot (R1) has a initial position of <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b46'>45)</ns0:ref> and the target placed at <ns0:ref type='bibr' target='#b46'>(45,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref>. The second robot (R2) is placed at the initial position <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref> and has to reach the target at <ns0:ref type='bibr' target='#b46'>(45,</ns0:ref><ns0:ref type='bibr' target='#b46'>45)</ns0:ref>. The initial and target positions for the third robot (R3) are <ns0:ref type='bibr' target='#b46'>(45,</ns0:ref><ns0:ref type='bibr' target='#b25'>25)</ns0:ref> and <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b25'>25)</ns0:ref>. The three robots start journey from their initial positions and have a goal to achieve, that is, to reach their target positions safely by taking a shortest route moving from one step to another. The three robots avoid all obstacles in their paths and also avoid colliding with each other. Figure <ns0:ref type='figure'>4</ns0:ref> shows the first robot (R1) and the second robot (R2) avoiding each other. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='7.3'>Application: Tractor-Trailer Robot</ns0:head><ns0:p>The proposed algorithm has been used for motion planning and control of a tractor-trailer robot system.</ns0:p><ns0:p>We consider a non-standard tractor-trailer robot which comprises of a rear wheel driven car-like vehicle and a hitched two-wheeled passive trailer attached to the rear axel of the vehicle (Figure <ns0:ref type='figure'>5</ns0:ref>).</ns0:p><ns0:p>Let (x, y) represent the cartesian coordinates of the tractor robot, &#952; 0 be its orientation with respect to the x-axis, while &#966; gives the steering angle with respect to its longitudinal axis. Similarly, let &#952; 1 denote the orientation of the trailer with respect to the x-axis. 
Letting L and L t be the lengths of the mid-axle of the tractor and trailer, respectively, the motion of the tractor-trailer robot is governed by the following kinematic equations <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> </ns0:p><ns0:formula xml:id='formula_19'>&#7819; = v cos &#952; 0 &#8722; v 2 tan &#966; sin &#952; 0 , &#7823; = v sin &#952; 0 + v 2 tan &#966; cos &#952; 0 , &#952;0 = v L tan &#966; , &#952;1 = v Lt sin(&#952; 0 &#8722; &#952; 1 ) &#8722; c L tan &#966; cos(&#952; 0 &#8722; &#952; 1 ) , &#63740; &#63732; &#63732; &#63741; &#63732; &#63732; &#63742;<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>where v and &#966; which are the translational velocity and the steering angle, respectively, of the tractor robot,</ns0:p><ns0:p>given as <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> </ns0:p><ns0:formula xml:id='formula_20'>v = &#945; (p 2 &#8722; y) 2 + (p 1 &#8722; x) 2 , &#966; = 7 9 tan &#8722;1 &#958; + &#946; cos |&#952; 0 &#8722; &#952; 1 |</ns0:formula><ns0:p>where &#945; is a positive real number and &#946; = max{0, 0.5</ns0:p><ns0:formula xml:id='formula_21'>&#8722; cos |&#952; 1 &#8722; &#952; 0 |} &#8226; sign(&#952; 1 &#8722; &#952; 0 ). Note that (p 1 , p 2 )</ns0:formula><ns0:p>is the next step for the robot generated by ant colony optimization and (x, y) is the current position of the robot. &#958; is obtained by numerically solving the differential equation &#958;</ns0:p><ns0:formula xml:id='formula_22'>= (p 2 &#8722; y) cos &#952; 0 &#8722; (p 1 &#8722; x) sin &#952; 0 (x &#8722; p 1 ) 2 + (y &#8722; p 2 ) 2 + 0.01 &#8722; atan2(p 2 &#8722; y, p 1 &#8722; x) + &#952; 0 , &#958; (0) = atan2((p 2 &#8722; y(0), (p 1 &#8722; x(0)) &#8722; &#952; 0 (0)</ns0:formula><ns0:p>Figures <ns0:ref type='figure'>6 and 7</ns0:ref> shows the trajectories of one tractor-trailer robot in two different scenarios. <ns0:ref type='table' target='#tab_5'>2</ns0:ref> shows the initial and target positions for each robot.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> shows that R1 is avoiding R3 by stopping and waiting for R3 to pass. R1 has successfully avoided R3 and R1 resumes its journey towards the target, as shown in Figure <ns0:ref type='figure' target='#fig_7'>9</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head n='8'>DISCUSSION</ns0:head><ns0:p>In this section, the performance of the proposed ACO-Kinematic algorithm will be compared with the Lyapunov-based Control Scheme (LbCS), which is a popular potential field-based method used to solve motion planning and control problem <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. The performance will be measured in terms of path length and convergence time. Both algorithms have convergence and safety parameters. For LbCS, a larger convergence parameter value increases convergence time whereas a smaller convergence parameter value will decrease the convergence time. For ACO-Kinematic, a larger convergence parameter value decreases convergence time whereas a smaller convergence parameter value will increase the convergence time.</ns0:p><ns0:p>Note that a quicker time to converge can affect a robot's safety. Therefore, the parameters need to be adjusted. There is no method in literature to adjust these parameters apart from the brute-force method.</ns0:p><ns0:p>The authors have also used the brute-force method to obtain optimal parameters for the two algorithms that have been used to measure performance as shown in tables 3 and 4. 
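Both algorithms under comparison ultimately move the robot by integrating its kinematic model toward a planned step. As a minimal sketch of this control phase for the point-mass model of equation (14), the routine below Euler-integrates the robot toward the step location (p1, p2) proposed by the fittest ant; α1 = α2 = 0.1 follows Table 1, while the time step, stopping tolerance and iteration cap are our own assumptions rather than values taken from the paper.

```python
import numpy as np

def move_to_step(x, y, p1, p2, alpha1=0.1, alpha2=0.1, dt=0.05, tol=0.05, max_iters=10000):
    """Euler-integrate xdot = alpha1*(p1 - x), ydot = alpha2*(p2 - y) until (x, y)
    is within tol of the planned step (p1, p2). Returns the traversed positions."""
    path = [(x, y)]
    for _ in range(max_iters):
        if np.hypot(p1 - x, p2 - y) < tol:
            break
        x += dt * alpha1 * (p1 - x)
        y += dt * alpha2 * (p2 - y)
        path.append((x, y))
    return path

# One control step toward a nearby waypoint returned by the planning phase.
trajectory = move_to_step(5.0, 5.0, 6.2, 5.8)
print(len(trajectory), trajectory[-1])
```

The tractor-trailer case of equation (15) would integrate the full nonholonomic system in the same step-by-step loop.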
The LbCS equations and parameters for the point-mass robot and tractor-trailer has been used from <ns0:ref type='bibr' target='#b40'>[39]</ns0:ref> and <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>, respectively. The parameter values depend on the type of the robot which is also shown in the two tables. The convergence parameters d for ACO-Kinematic, and delta1 and delta2 for LbCS have the same values for both robots. Other parameters may differ in their values and are dependent on the type of the robots.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> shows the average path lengths and the time it takes for a robot to reach the destination using Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the destination while the point-mass robot with ACO-Kinematic took 124.99s to cover the path length 65.35cm. The best-case path length for ACO-Kinematic was 64.5cm but it took the robot 138.14s to reach the destination. The worst-case path length for ACO-Kinematic was 66.28cm and had a time of 101.28s.</ns0:p><ns0:p>For Scenario 3, the point-mass robots was replaced by a non-standard tractor-trailer robotic systems.</ns0:p><ns0:p>Figures <ns0:ref type='figure'>16 and 17</ns0:ref> shows the trajectories of the tractor-trailer robot for ACO-Kinematic and LbCS, respectively. The tractor-trailer robot with the LbCS had a path length of 68.34cm and time of 240.92s.</ns0:p><ns0:p>The robot with ACO-Kinematic had the average path length of 66.27cm and time of 195.79s. The ACO-Kinematic also provided the best-case path length of 62.99cm with a time of 230.02s while the worst-case path length was 68.06cm with a time of 238.05s.</ns0:p><ns0:p>Overall, in all 3 scenarios, ACO-Kinematic was able to achieve the shorter path in a time lesser than that of LbCS.</ns0:p><ns0:p>Figures <ns0:ref type='figure' target='#fig_7'>18 and 19</ns0:ref> show the trajectories of the point-mass robots using ACO-Kinematic and LbCS for Scenario 4. While, the point-mass robot controlled by ACO-Kinematic was able to reach the target, the point-mass robot through LbCS controllers could not. This is because the LbCS system had entered into the local minima, which is one issue that most artificial potential field methods face. The ACO-Kinematic was able to solve the problem of local minima and hence a better performer than a traditional motion planning and control algorithms like LbCS.</ns0:p></ns0:div> <ns0:div><ns0:head n='9'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this paper, a unique approach is proposed for solving motion planning and control problems. In and the robot moves to that step using its kinematic equations. The algorithm is inherently capable of making the robot avoid obstacles and other robots while moving from initial to the target position. In the hybrid algorithm, the ACO plans the robot's next step while the kinematic equations controls the robot to that step location. In the authors' belief, this is the first time a hybrid algorithm of this kind is proposed for motion planning and control problem. The algorithm solves a multi-objective problem that consists of finding the safest and shortest path, successfully using kinematic equations to move the robot from one</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>3 / 21 PeerJ</ns0:head><ns0:label>321</ns0:label><ns0:figDesc>Comput. Sci. 
</ns0:figDesc></ns0:figure>
Scenario 3 -A tractor-trailer robot's path generated by LbCS with initial position (5, 45) and target position (45, 45).</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Initialize robots initial and target positions Initialize the weights and selection probabilities While (robots current position &lt; target position) do For all Ants in Archive Size do Calculate means End For For all Ants in Archive Size do Calculate standard deviation End For Create New Population Array (Sample Size) For all Ants in Sample Size do Construct solution based on Gaussian Kernal Evaluate new solutions End For Merge main population (archive) and new population (sample size) Rank ants and find the new best position Move the robot from current position to the new best position using ACO-Kinematic Parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Kinematic equations</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Update current position to new best</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>End While</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Post-processing the results and visualization;</ns0:cell></ns0:row><ns0:row><ns0:cell>No.</ns0:cell><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Population Size (Archive)</ns0:cell><ns0:cell>300</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Sample Size</ns0:cell><ns0:cell>10000</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Deviation-Distance Ratio</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Intensification Factor</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>a</ns0:cell><ns0:cell>0.18</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>b</ns0:cell><ns0:cell>0.18</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>c</ns0:cell><ns0:cell>0.18</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>d</ns0:cell><ns0:cell>0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>&#945; 1</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>&#945; 2</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row></ns0:table><ns0:note>7/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65340:1:2:NEW 16 Jan 2022) Manuscript to be reviewed Computer Science Algorithm 1 ACO-Kinematic Objective function f(x) Initialize population of ants (Archive size) 8/21 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65340:1:2:NEW 16 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Robots' initial and target positions.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Robot Initial Position Target Position</ns0:cell></ns0:row><ns0:row><ns0:cell>R1</ns0:cell><ns0:cell>(5, 45)</ns0:cell><ns0:cell>(45, 5)</ns0:cell></ns0:row><ns0:row><ns0:cell>R2</ns0:cell><ns0:cell>(5, 25)</ns0:cell><ns0:cell>(45, 25)</ns0:cell></ns0:row><ns0:row><ns0:cell>R3</ns0:cell><ns0:cell>(5, 5)</ns0:cell><ns0:cell>(45, 45)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>ACO-Kinematic parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Parameter Point-mass robot Tractor-trailer</ns0:cell><ns0:cell>Range</ns0:cell><ns0:cell>Range Source</ns0:cell></ns0:row><ns0:row><ns0:cell>a</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.01 -1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>c</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>0.01 -1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>d</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.0001 -0.01</ns0:cell><ns0:cell>[20]</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>LbCS parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Parameter Point-mass robot Tractor-trailer Range</ns0:cell></ns0:row><ns0:row><ns0:cell>beta</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0.1 -2</ns0:cell></ns0:row><ns0:row><ns0:cell>beta1</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.1 -2</ns0:cell></ns0:row><ns0:row><ns0:cell>beta3</ns0:cell><ns0:cell>n/a</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.1 -2</ns0:cell></ns0:row><ns0:row><ns0:cell>beta4</ns0:cell><ns0:cell>n/a</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.1 -2</ns0:cell></ns0:row><ns0:row><ns0:cell>gamma</ns0:cell><ns0:cell>n/a</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.01 -2</ns0:cell></ns0:row><ns0:row><ns0:cell>delta1</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10 -20</ns0:cell></ns0:row><ns0:row><ns0:cell>delta2</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10 -20</ns0:cell></ns0:row></ns0:table><ns0:note>14/21PeerJ Comput. Sci. 
</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average path lengths and convergence time of two motion planning and control algorithms.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Algorithms</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>LbCS</ns0:cell><ns0:cell cols='2'>ACO-Kinematic</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Scenario Time (s) Path Length (cm) Time (s) Path Length (cm)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>251.02</ns0:cell><ns0:cell>58.57</ns0:cell><ns0:cell>208.72</ns0:cell><ns0:cell>57.98</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>309.44</ns0:cell><ns0:cell>66.3</ns0:cell><ns0:cell>124.99</ns0:cell><ns0:cell>65.35</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>240.92</ns0:cell><ns0:cell>68.34</ns0:cell><ns0:cell>195.79</ns0:cell><ns0:cell>66.27</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Paper Review Response Paper title: ACO-Kinematic: a hybrid first off the starting block Journal: ID-65340 / PeerJ Computer Science Reviewer 1 (Anonymous) Basic reporting 1. When several titles are cited in the text it is better to be in increasing order (lines 33,53, 103, 114) Response: The authors thank the reviewer for providing valuable suggestions. This has been corrected in the entire paper. 2. Use same format, when several titles are cited : [a] [b] [c] or [a,b,c] (line 103) Response: The paper has been corrected to ensure it has the same citation format. 3. line 134. Section 4 is too short. the section 3 can become 'problem statement and objectives' Response: The problem statement and objective sections have been combined under a new section “Problem Statement and Objectives”. 4. line 140. After mentioning Dorigo include citation of his work. Response: The citation of Dorigo’s work has been included after mentioning Dorigo. 5. line 141. The first who proposed variant of ACO for continues optimisation is Patrick Siarry (Continuous interacting ant colony algorithm based on dense heterarchy, J Dréo, P Siarry, Future Generation Computer Systems 20 (5), 841-856, 2004), cite him together with [32] Response: The suggested ACO variant has been duly cited in the Introduction. 6. line 169. Remove section 6. Kinematic equations are discussed in section 9. Response: This section has been deleted. Some of the contents have been integrated to Robot MPC Formulation section. 7. line 207. equation (13). This equation do not show the path length and minimisation of this function do not give the shorter path. The objective function must be the path length with end point the target. The collision avoiding can be constrain. Safety avoiding of obstacles can be solved defining some safety distance from obstacles. Response: The authors would like to thank the reviewer for this comment. We have corrected one statement in the paper and have now included in section “Robot MPC Formulation “ an explanation of the control parameters which are present in the objective function: The problem is the minimization optimization problem which finds the optimal path for mobile robots in a cluttered environment. The control parameters a, b and c are the fitting parameters that decide path safety. With a high value of parameter a the ants will avoid the stationary circular obstacles from the greater distance. Similarly, a small value of b will mean that the ants will avoid the line segments from a closer distance which can compromise safety. However, when there is a decrease in the value of a, b and c, the chances of collision with obstacles, line segments and other robots are high. Likewise, a high value of d will minimize the path length and a minimum value will maximize the path length. Therefore, a proper selection of these parameters decides the success of the objective function in planning the next step for a robot. Experimental design The paper needs new experiments after rewriting the objectives and constraints. Response: The objective function has been checked properly and it remains the same. An explanation is provided in Comment 7 above regarding the objective function. Validity of the findings The findings will be valid after recalculating with new objective function. Response: Since there is no change in the objective function as explained in Comment 7 above, the findings remain the same. 
Reviewer 2 (Mohammad Shokouhifar) This paper presents a combined technique based on ant colony optimization (ACO) algorithm and kinematic equations, named ACO-Kinematic, to solve the robot navigation problem in static environments. In this method, ACO is used to find a collision-free route to the next step, while kinematic equations are used to control and move the robot to the new selected step. The paper can be considered for publication in PeerJ Computer Science, if the following minor and major comments would be carefully addressed: 1) The main limitation of this study is to utilize the ACO-Kinematic algorithm for path planning of the mobile robot in static environments, i.e., considering only static obstacles with known location. Although the authors correctly addressed it as a limitation of their work in Conclusion, there is still a major issue: What about the static obstacles with unknown locations? Is the robot does not aware of the location of obstacles (even static), it cannot run ACO prior to find the full path for the mobile robot. Response: The authors would like to thank the reviewer for his valuable suggestions. In response to this comment, we are having static environment. We have added in the document that we are considering a priori known environments with static obstacles. The static obstacles with unknown locations will be part of our future work. 2) In the original ACO, pheromone and heuristic information (as available) are used to calculate the probability of the different choices (i.e., next steps in your study) using the ACO selection rule. Please provide more details about how next steps are selected via ACO? Why you did not consider heuristic information in your model? For example, the distance to the obstacles and angle to the target can be used as very informative heuristic information not only to speed-up the algorithm, but also to improve the solution quality. There is a fuzzy heuristic based ACO (combining fuzzy heuristic information and pheromone): “FH-ACO: Fuzzy heuristic-based ant colony optimization for joint virtual network function placement and routing”, recently published in Applied Soft Computing, 107, 107401. It utilized multi-criteria fuzzy heuristic as the heuristic information to guide ACO in finding better solutions, and utilizes a multi-criteria fuzzy heuristic model as well as pheromone to construct the full path. You should mention this paper in Introduction or Literature Review, and discuss why you did not consider heuristic information in ACO selection rule? Response: The authors have now clearly explained the reasons behind choosing the variant of ACO for this research. The authors have also listed the paper, FH-ACO, in the introduction section with other variants of ACO. The authors have also properly explained the variant of ACO used in the paper and its mechanics in Section 4. The variant used in the paper is for continuous domain and it only considers pheromone information to make a probabilistic choice. This has been described in Section 4 of the paper. Also at the moment, the role of heuristics has been taken up by the objective function. Since this is the first hybrid on motion planning and control, further features will be added to it in future work, including adding heuristic information as part of the ACO selection rule. 3) How uncertainties are handled in your model? You should have a plan to handle uncertainties of the environment, or even discuss about it as a limitation of your work in Conclusion. 
Even for static obstacles, what is your plan if there are uncertainties in the location of the obstacles? Response: The authors have added the uncertainties of the environment and of the location of the obstacles as features to be addressed in future work, as stated in the Conclusion. The authors have also mentioned these uncertainties in the problem statement section. 4) Please provide a time complexity analysis for the different phases of the ACO-Kinematic. Response: The time complexity analysis for the different phases has been added in section 4. 5) The results of the ACO-Kinematic should be compared and justified with recently published heuristic- or metaheuristic-based path planning techniques. Response: The authors had already compared the results of ACO-Kinematic with the Lyapunov function (APF method), since the problem tackled in the paper also considers motion control. In addition, the authors have now presented in the introduction and in section 6 the reasons behind choosing this variant of ACO over other variants and other heuristic or metaheuristic algorithms. 6) Finally, the paper should be carefully double-checked to be free of errors. Response: The authors have carefully checked the paper to ensure it is free of errors. "
Here is a paper. Please give your review comments after reading it.
366
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the growth that social networks have experienced in recent years, it is entirely impossible to moderate content manually. Thanks to the different existing techniques in natural language processing, it is possible to generate predictive models that automatically classify texts into different categories. However, a weakness has been detected concerning the language used to train such models. This work aimed to develop a predictive model based on BERT, capable of detecting racist and xenophobic messages in tweets written in Spanish. A comparison was made with different Deep Learning models. A total of five predictive models were developed, two based on BERT and 3 using other deep learning techniques, CNN, LSTM and a model combining CNN+LSTM techniques. After exhaustively analyzing the results obtained by the different models, it was found that the one that got the best metrics was BETO, a BERT-based model trained only with texts written in Spanish. The results of our study show that the BETO model achieves a precision of 85.22% compared to the 82.00% precision of the mBERT model. The rest of the models obtained between 79.34% and 80.48% precision. On this basis, it has been possible to justify the vital importance of developing native transfer learning models for solving Natural Language Processing (NLP) problems in Spanish. Our main contribution is the achievement of promising results in the field of racism and hates speech in Spanish by applying different deep learning techniques.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the growth that social networks have experienced in recent years, it is entirely impossible to moderate content manually. Thanks to the different existing techniques in natural language processing, it is possible to generate predictive models that automatically classify texts into different categories. However, a weakness has been detected concerning the language used to train such models. This work aimed to develop a predictive model based on BERT, capable of detecting racist and xenophobic messages in tweets written in Spanish. A comparison was made with different Deep Learning models.</ns0:p><ns0:p>A total of five predictive models were developed, two based on BERT and 3 using other deep learning techniques, CNN, LSTM and a model combining CNN+LSTM techniques. After exhaustively analyzing the results obtained by the different models, it was found that the one that got the best metrics was BETO, a BERT-based model trained only with texts written in Spanish. The results of our study show that the BETO model achieves a precision of 85.22% compared to the 82.00% precision of the mBERT model. The rest of the models obtained between 79.34% and 80.48% precision. On this basis, it has been possible to justify the vital importance of developing native transfer learning models for solving Natural Language Processing (NLP) problems in Spanish. 
Our main contribution is the achievement of promising results in the field of racism and hates speech in Spanish by applying different deep learning techniques.</ns0:p></ns0:div> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>In recent years, the use of social networks such as Twitter, Facebook or Instagram, as well as other community forums, is generating hateful conversations among users <ns0:ref type='bibr' target='#b15'>(Del Vigna et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b52'>Watanabe et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b54'>Zhang and Luo, 2019)</ns0:ref>. The fact that people can comment anonymously is one of the factors that leads to the spread of hate speech <ns0:ref type='bibr' target='#b4'>(Barlett, 2015)</ns0:ref>.</ns0:p><ns0:p>Within hate crimes, some countries are particularly interested in studying racist and xenophobic crimes <ns0:ref type='bibr' target='#b42'>(Sayan, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Rodr&#237;guez Maeso, 2018)</ns0:ref>. For example, in the UK, hate speech towards different Muslim and other immigrant communities has increased <ns0:ref type='bibr' target='#b3'>(Anonymous, 2017;</ns0:ref><ns0:ref type='bibr' target='#b48'>Travis, 2017)</ns0:ref>. A link was found between the rise of racism and the exit from the EU or the Manchester bombings. In other countries, such as Spain, the Congress of Deputies has recently approved the 'Non-legislative motion on preventing the spread of hate speech in the digital space' (Congreso de los Diputados of Spanish Government, 2020).</ns0:p><ns0:p>In short, the fight against racism and xenophobia are issues of international concern. Some studies are currently working on the classification of texts for the detection of racism or xenophobia <ns0:ref type='bibr' target='#b26'>(Kumari et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b1'>Al-Hassan and Al-Dossari, 2019;</ns0:ref><ns0:ref type='bibr' target='#b2'>Alotaibi and Abul Hasanat, 2020;</ns0:ref><ns0:ref type='bibr' /> PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Plaza- <ns0:ref type='bibr' target='#b33'>Del-Arco et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b10'>Chaudhry, 2015;</ns0:ref><ns0:ref type='bibr' target='#b25'>Konstantinidis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b43'>Sazzed, 2021)</ns0:ref>. To do this, they have used different techniques in the field of artificial intelligence and, more specifically, within natural language processing. One of the most widely used techniques for this purpose is text classification on different datasets obtained from some social networks, Twitter being one of the most widely used <ns0:ref type='bibr' target='#b40'>(Saha et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b19'>Garcia and Berton, 2021)</ns0:ref>.</ns0:p><ns0:p>In order to classify texts using machine learning techniques and, more specifically, deep learning, techniques are often used such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Hierarchical Attention Network (HAN). Within these techniques, Long Short-Term and, more specifically, Bi-directional Long Short-Term (BiLSTM) are the most responsive. Analysing the latest developments in this regard are the BERT-based approaches.</ns0:p><ns0:p>However, one of the barriers encountered in detecting racist and xenophobic tweets is language. 
There are deep learning models that detect racist or xenophobic texts in English. Nevertheless, so far, no pretrained model has been found with texts in Spanish and there are studies showing that classification models trained with native language texts give better results <ns0:ref type='bibr' target='#b20'>(Guti&#233;rrez-Fandi&#241;o et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b36'>Pomares-Quimbaya et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b23'>Kamal et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b45'>Sharma et al., 2022;</ns0:ref><ns0:ref type='bibr' target='#b50'>Velankar et al., 2021)</ns0:ref>. Therefore, the main objectives of this research are those shown below:</ns0:p><ns0:p>&#8226; Obtain a set of texts in Spanish related to racism and xenophobia. Preprocess the data set and label said texts in a binary category, giving a value of 1 if the message is xenophobic or racist, and 0 when it is not.</ns0:p><ns0:p>&#8226; Apply different deep learning models and determine which of them is the one with the highest accuracy and precision.</ns0:p><ns0:p>In this paper, we propose a dataset related to racism and xenophobia in the Spanish language and we propose different models based on deep learning techniques to detect racist or xenophobic texts.</ns0:p><ns0:p>The paper is organized as follows. Related work is presented in the section 2. The methodology of the different proposed techniques is detailed in section 3. In section 4 the setup of different deep learning techniques and the results, showing a comparative between the different techniques are explained. Finally, results have been discussed in section 5 and we conclude in section 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Within the state of the art, two different branches can be analysed, the Spanish language datasets publicly available on the web and the natural language processing techniques most commonly used in text classification.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Existing Datasets</ns0:head><ns0:p>While it is possible to find a large number of datasets in English that are properly labelled for the detection of racism, and especially hate speech, it is extremely difficult to find datasets in Spanish with these same characteristics.</ns0:p><ns0:p>A total of two Spanish datasets related to hate speech has been found, and it is completely impossible to find one related solely and exclusively to racism/xenophobia. Nevertheless, both datasets have been explored: HaterNet <ns0:ref type='bibr' target='#b30'>(Pereira-Kohatsu et al., 2019)</ns0:ref> and HatEval <ns0:ref type='bibr' target='#b5'>(Basile et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Other datasets were found that also contained texts in Spanish, but were not exclusively in Spanish, as in the case of the PHARM project, where the dataset is composed of texts in Spanish, Italian and Greek <ns0:ref type='bibr' target='#b51'>(Vrysis et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.1'>HaterNet</ns0:head><ns0:p>The first of the two datasets has been obtained thanks to HaterNet <ns0:ref type='bibr' target='#b30'>(Pereira-Kohatsu et al., 2019)</ns0:ref>, an intelligent system used by the National Office for the Fight against Hate Crimes, belonging to the Spanish Ministry of Interior <ns0:ref type='bibr'>(Ministerio Interior, 2019)</ns0:ref>. This dataset has a total of 6,000 labelled tweets indicating the presence or not of hate speech. 
Among the hate speech, there are, unsurprisingly, some tweets in which racism can be seen. However, they represent a rather low percentage of the total number of labelled tweets, which in practice makes the dataset invalid for training the model proposed in this project.</ns0:p></ns0:div> <ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Not only are there tweets on topics that differ completely from racism, but the labelling criteria are also somewhat confusing. For these reasons, this dataset has been completely discarded for use in the training and subsequent validation of the predictive model.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.2'>HatEval</ns0:head><ns0:p>The second dataset has been obtained thanks to HatEval <ns0:ref type='bibr' target='#b5'>(Basile et al., 2019)</ns0:ref>, a competition organised by SemEval (International Workshop on Semantic Evaluation), whose main objective was to develop a predictive model for the detection of anti-female and anti-immigration hate speech on Twitter. In order to access this dataset, it is necessary to request permission from one of its creators as shown in the repository https://github.com/msang/hateval.</ns0:p><ns0:p>The dataset consists of two different sets of tweets. The first one contains tweets written in English, which makes it completely unusable for the purpose of this project. The second one, in turn, contains 5,000 tweets written in Spanish in the training set and 1,600 in the test set.</ns0:p><ns0:p>Of these 5,000 tweets, a total of 1,971 are related to racism, which makes the set, at least initially, valid.</ns0:p><ns0:p>As regards the structure of the dataset, for each tuple, a text followed by three binary variables can be found. The first, called HS, denotes whether the tweet expresses hate speech or not. The second, known as TR, indicates whether the victim is a single person or a group of people. Finally, the variable called AG indicates whether the tweet is expressed in an aggressive way or not.</ns0:p><ns0:p>After looking closely at the dataset, it was determined that 987 (50,07%) tweets out of 1,971 were mislabelled. There are tweets from some people denouncing racism that are labelled as racist. The same goes for some comments that are clearly ironic. On other occasions, tweets that have nothing to do with racism (in South America the word 'sudaca' is used to refer to the subcontinent itself) or that are simply not racist are labelled as hate crimes.</ns0:p><ns0:p>Because of all these labelling failures, it has been determined that this dataset is also not valid for the purpose of this project.</ns0:p><ns0:p>Therefore, after having thoroughly analysed both datasets, it has been concluded that it is strictly necessary to build a suitable dataset. 
The whole process of building this dataset can be found in section 3.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>Text classification</ns0:head><ns0:p>The techniques that can be used to create models for the automatic classification of a text are very varied.</ns0:p><ns0:p>However, it is possible to group them into three main types of techniques: conventional machine learning, deep learning and transfer learning.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.1'>Conventional Machine Learning</ns0:head><ns0:p>Among the various conventional machine learning techniques used in text classification, and especially in the detection of hate speech on the Internet, support vector machines (SVM) and logistic regression stand out.</ns0:p><ns0:p>SVMs <ns0:ref type='bibr' target='#b12'>(Cortes and Vapnik, 1995)</ns0:ref> are extremely accurate and effective in classifying texts <ns0:ref type='bibr' target='#b0'>(Ahmad et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hasan et al., 2019)</ns0:ref>. One of their main advantages is that, unlike most other techniques, they perform particularly well on small training sets. Commonly, when using this technique, text features are obtained by applying TF-IDF or word embeddings, or another technique with similar functionality such as part-of-speech (POS) tagging.</ns0:p><ns0:p>Logistic regression <ns0:ref type='bibr' target='#b27'>(Lakshmi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Br Ginting et al., 2019)</ns0:ref> is a type of regression analysis used to predict the outcome of a categorical variable given a set of independent variables. As with SVMs, it is also necessary to extract text features with one of the aforementioned techniques before training begins. Unlike SVMs, it performs significantly worse on small training sets.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.2'>Deep learning</ns0:head><ns0:p>Deep learning is a specific branch of machine learning whose main difference is that, unlike traditional machine learning algorithms, it is capable of continuing to learn as it receives more and more data, without stagnating.</ns0:p><ns0:p>Among the deep learning techniques most commonly used in text classification, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) stand out.</ns0:p><ns0:p>Convolutional neural networks <ns0:ref type='bibr' target='#b28'>(Nedjah et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Roy et al., 2020)</ns0:ref> are a type of neural network that, combined with supervised learning, process layers by mimicking the human eye, allowing them to differentiate different features in the received inputs. Although they were specifically designed for computer vision, they have been shown to perform excellently on textual classification problems as well.</ns0:p><ns0:p>When extracting text features, a stage prior to training the network, it is common to use word embeddings.</ns0:p><ns0:p>Recurrent neural networks <ns0:ref type='bibr' target='#b29'>(Paetzold et al., 2019)</ns0:ref> are a type of neural network in which the connections between nodes form a directed graph along a temporal sequence. Among the different variants of this type of network, the Long Short-Term Memory Network (LSTM) <ns0:ref type='bibr' target='#b47'>(Talita and Wiguna, 2019;</ns0:ref><ns0:ref type='bibr' target='#b6'>Bisht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b55'>Zhao et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Zhang et al., 2021)</ns0:ref> stands out, as it was specifically designed to avoid the problem of long-term dependency.
As with CNNs, a correct extraction of text features prior to the learning period of the network, which is carried out by means of word embeddings, is of vital importance.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.3'>Transfer learning</ns0:head><ns0:p>Transfer learning is a machine learning technique in which the knowledge gained from carrying out a certain task is stored and then applied to a related problem. It is especially used for building models where a small amount of data is available for training and evaluation.</ns0:p><ns0:p>Within NLP tasks, the first attempted use of transfer learning was the creation of embeddings from large datasets such as Wikipedia. Although this was a major breakthrough in NLP, especially when training on very small datasets, there were still problems in differentiating the context in which words are written.</ns0:p><ns0:p>To solve this problem, a number of context-based pre-trained models such as Embedding from Language Models (ELMO) <ns0:ref type='bibr' target='#b31'>(Peters et al., 2018)</ns0:ref> and Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b16'>(Devlin et al., 2019)</ns0:ref> have emerged, and a model very similar to the latter will be used in this project.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.3'>Detecting racism and hate speeches in Spanish</ns0:head><ns0:p>As regards the detection of racism, to date there is no model in Spanish developed for this specific purpose.</ns0:p><ns0:p>However, research has been carried out on hate speech in Spanish.</ns0:p><ns0:p>The most important study on hate speech in Spanish is <ns0:ref type='bibr' target='#b33'>(Plaza-Del-Arco et al., 2020)</ns0:ref>. It compares the performance of various NLP models in classifying tweets in the HatEval dataset. These models include SVM, linear regression (LR), Naive Bayes (NB), decision trees (DT), LSTM and a lexicon-based classifier.</ns0:p><ns0:p>Of these, the best performing was an Ensemble Voting Classifier, which combines the output of several classifiers when making a prediction.</ns0:p><ns0:p>The second study <ns0:ref type='bibr' target='#b13'>(del Arco et al., 2021)</ns0:ref>, by the same authors, compares the performance of other NLP models when classifying tweets from the HaterNet and HatEval datasets. These models include deep learning models such as LSTM, Bidirectional Long Short-Term Memory Networks (Bi-LSTM) and CNN, as well as transfer learning models such as mBert, XLM and BETO <ns0:ref type='bibr' target='#b31'>(Peters et al., 2018)</ns0:ref>, all of which are based on BERT. The best performer on almost all metrics was BETO.</ns0:p><ns0:p>Finally, the study <ns0:ref type='bibr' target='#b30'>(Pereira-Kohatsu et al., 2019)</ns0:ref> presents HaterNet, an intelligent system that identifies and monitors the evolution of hate speech on Twitter. In this study, in addition to building the aforementioned HaterNet dataset, a series of comparisons are also made between different text classification models. 
These models include LDA, QDA, Random Forest, Ridge Logistic Regression, SVM and an LSTM combined with an MLP, the latter being the best performing.</ns0:p><ns0:p>After having analysed the different existing models, and having verified that there are no models specifically developed to detect racism in texts written in Spanish, and that the hate speech detection models have all been trained on the same two datasets, which have numerous mislabelled tweets, the development of new models for this purpose is more than justified.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.4'>Summary of the literature review</ns0:head><ns0:p>A summary of the literature review is shown in the table 1, specifying the important results found in each study analysed, as well as the weaknesses found in relation to the objectives presented in our study.</ns0:p></ns0:div> <ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_1'>2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:ref> Manuscript to be reviewed Computer Science The dataset are biased because the subsets were mislabelled.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>METHODOLOGY 193</ns0:head><ns0:p>This section explains the methodology used in this research. In figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> it is possible to see the complete Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Dataset</ns0:head><ns0:p>Having found that neither of the two public datasets in Spanish labelling the presence or absence of hate speech was sufficiently valid for training the proposed model, a proprietary dataset was constructed that could be fully adapted to the problem posed.</ns0:p><ns0:p>The Tweepy library <ns0:ref type='bibr' target='#b38'>(Roesslein, 2020)</ns0:ref> has been used to build this dataset. This library makes use of the Twitter API. The search patterns used for the construction of the dataset.</ns0:p><ns0:p>A keyword search was performed. These words are shown in table 2, as well as their English translation:</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Spanish keywords used to collect dataset and their translation to English language. As far as data privacy is concerned, it is demonstrated that Cambridge Analytica is still alive and we can export people's behavioural characteristics without their consent just by acquiring publicly available data <ns0:ref type='bibr' target='#b32'>(Pitropakis et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b24'>Kandias et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b22'>Isaak and Hanna, 2018)</ns0:ref>. This information, being public and anonymized, is exempt from the request for approval by an ethics committee <ns0:ref type='bibr' target='#b17'>(Eysenbach and Till, 2001)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Preprocessing and labelling data</ns0:head><ns0:p>After analysing the different articles in which predictive models are generated, it was decided to make use of a subset of data consisting of 2,000 tweets to be labelled in the two categories to be classified: racist tweets and non-racist tweets. 
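As a rough illustration of how the cleaning steps enumerated later in this section could be implemented, the following sketch uses Python with nltk (one of the libraries reported in the Hardware and Software section). The function name, the regular expressions and the processing order are assumptions for illustration only, not the authors' actual code.

import re
import unicodedata
import nltk

nltk.download("stopwords", quiet=True)

def strip_accents(text: str) -> str:
    # Decompose accented characters and drop the combining marks.
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")

# De-accent the stopword list as well, so it matches the de-accented tweets.
STOPWORDS = {strip_accents(w) for w in nltk.corpus.stopwords.words("spanish")}

def clean_tweet(text: str) -> str:
    text = text.lower()                          # convert to lowercase
    text = re.sub(r"https?://\S+", " ", text)    # remove URLs
    text = re.sub(r"@\w+", " ", text)            # remove user names
    text = strip_accents(text)                   # remove accents
    text = re.sub(r"[^a-z\s]", " ", text)        # remove unnecessary characters
    tokens = [t for t in text.split() if t not in STOPWORDS]  # remove stopwords
    return " ".join(tokens)                      # collapses unnecessary spaces

print(clean_tweet("Ejemplo de tuit con enlace https://t.co/xyz y @usuario"))

Steps that require human judgement, such as discarding extremely short, ironic or unintelligible tweets, were handled manually during labelling rather than by rules of this kind.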
The subset of data chosen was balanced, with 52% of tweets labelled as non-racist and 48% of tweets labelled as racist.</ns0:p><ns0:p>From the initial set of 26,143 tweets, a pre-selection of 2,000 tweets was made by 4 students of computer engineering interpreting 50% of them as belonging to each category: racist or xenophobic tweets and tweets that were neither racist nor xenophobic. Subsequently, labelling was carried out manually by eight experts in the field of psychology. The set of 2,000 tweets was divided into 4 subsets of 500 tweets that were labelled by 2 people individually. The pairs of experts then compared their labels and decided which label to assign to tweets where there might be some doubt as to whether or not they belonged to a category. Subsequently, a clustering phase of all labelled tweets was carried out.</ns0:p><ns0:p>Before generating this subset, a pre-processing of the data was carried out, which included the following tasks:</ns0:p><ns0:p>&#8226; Removal of extremely short tweets in which it is totally impossible to identify the presence or absence of racism.</ns0:p><ns0:p>&#8226; Removal of ironic tweets.</ns0:p><ns0:p>&#8226; Removal of tweets that are excessively badly written, making them difficult to understand or simply using invalid characters repeatedly.</ns0:p><ns0:p>&#8226; Conversion of all text to lowercase.</ns0:p><ns0:p>&#8226; Removal of URLs.</ns0:p></ns0:div> <ns0:div><ns0:head>6/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; Elimination of unnecessary spaces.</ns0:p><ns0:p>&#8226; Elimination of user names.</ns0:p><ns0:p>&#8226; Elimination of unnecessary characters.</ns0:p><ns0:p>&#8226; Elimination of accents.</ns0:p><ns0:p>&#8226; Elimination of stopwords.</ns0:p><ns0:p>The following criteria were also taken into account when selecting the subset:</ns0:p><ns0:p>&#8226; Variety of terms in the dataset. While the filters used to capture the tweets are extremely varied, in practice some terms are much more popular than others. For example, the terms 'inmigraci&#243;n'</ns0:p><ns0:p>(immigration) or patera (small boat) appear in a much higher number of tweets than the term 'negra de mierda' (fucking nigga). Therefore, if this factor is not taken into account when tagging tweets,</ns0:p><ns0:p>we end up with a data set that is too poor with data that is too similar.</ns0:p><ns0:p>&#8226; Variety of meanings within the same term. Although many of the terms do not leave any doubt about their meaning, some of them may have different meanings which, if they are not both included, the behaviour of the model in the future when receiving input from these terms would be considerably</ns0:p><ns0:p>reduced. An example of words to which this criterion applies directly is: 'mora' (moor), 'mena'</ns0:p><ns0:p>(unaccompanied foreign minor) or 'panchito' (spic).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>BERT Models</ns0:head><ns0:p>BERT <ns0:ref type='bibr' target='#b16'>(Devlin et al., 2019)</ns0:ref> makes use of Transformers <ns0:ref type='bibr' target='#b49'>(Vaswani et al., 2017)</ns0:ref>, a deep learning model</ns0:p><ns0:p>proposed by researchers at Google and the University of Toronto in 2017 that has particular application in the field of natural language processing.</ns0:p><ns0:p>Like Recurrent Neural Networks (RNN), Transformers is designed to work with sequential data. 
In a similar way to humans, it is capable of processing natural language, serving to carry out tasks as diverse as translation or text classification. However, unlike RNNs, Transformers do not require sequential data, text in this case, to be processed in order.</ns0:p><ns0:p>This means that, when receiving a text as input, it is not necessary to process the beginning of the text before the end, which allows for much greater parallelisation and, therefore, reduces training times considerably.</ns0:p><ns0:p>Transformers have been designed using the concept of the attention mechanism, which itself was designed to memorise long sentences in machine translation tasks.</ns0:p><ns0:p>At the architectural level, it is based on an encoder-decoder architecture in which the encoders consist of a set of encoding layers that iteratively process the input layer by layer. In turn, the decoders consist of a set of decoding layers that do the same at the output of the encoder.</ns0:p><ns0:p>Thus, when Transformers receive a text, it passes through a stack of encoders. The output obtained at the last encoder is passed to each of the decoders that make up the stack of decoders, resulting in a final output. A very high-level representation of Transformers can be found in the figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>Each encoder consists of two main components, an attention mechanism called self-attention and a feed forward neural network.</ns0:p><ns0:p>The encoder receives a list of numeric vectors as input that flows through the self-attention layer.</ns0:p><ns0:p>This layer helps the encoder to take into account the rest of the words in the sentence before encoding each word. Although the example in the figure above is very simple, the encoder, before encoding the word 'Thinking', would take into account that this word is accompanied by the word 'Machines' when generating the z1 vector, just as it would do the same with the word 'Machines' when generating the z2 vector.</ns0:p><ns0:p>The output obtained for each of the input vectors, a list of z vectors in this case, is passed through a pre-fed neural network, which will generate an output for each z input vector. This output, which in the figure is represented as r, becomes the input to the next encoder, which will perform the same process as described above.</ns0:p><ns0:p>Finally, it should be noted that each encoder has a series of residual connections that prevents the output of one layer from being processed by the next layer. These mechanisms are particularly relevant in Manuscript to be reviewed neural networks that have many hidden layers, allowing certain layers to be left unused if necessary. In addition, it should be noted that a normalisation process is applied to the output of each layer.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Since its publication, it has become the best existing solution for a large number of NLP problems.</ns0:p><ns0:p>The most special feature of the model is the way in which it deals with the meaning of words depending on the context in which they are used, using an architecture based on Transformers, which will be explained in due course.</ns0:p><ns0:p>BERT's first differentiating factor is the difference from older embedding generation techniques such as Word2Vec, Glove or FastText.</ns0:p><ns0:p>The embeddings generated using the aforementioned techniques are context-independent, i.e. 
each word has a unique vector that defines it whatever the context in which it is used. This means that all the different meanings of a given word are combined into a single vector.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Metrics to evaluate results</ns0:head><ns0:p>To evaluate the results obtained in each of the models developed, 3 different metrics have been used: Precision (P), Recall (R) and F1-score (F1).</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1'>Precision</ns0:head><ns0:p>Precision Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.3'>F1-score</ns0:head><ns0:p>F1-score is a metric used to calculate the effectiveness of a classifier by taking into account its accuracy and recall values. F1 assumes that the two metrics used are of equal importance in calculating effectiveness.</ns0:p><ns0:p>If one of them is more important than the other, a different formula F &#946; would have to be used. The formula used to calculate this metric is as follows, where P equals the precision value and R equals the recall value:</ns0:p><ns0:formula xml:id='formula_0'>F 1 = 2 * P * R P + R (3)</ns0:formula></ns0:div> <ns0:div><ns0:head n='4'>EXPERIMENTS AND RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Experimental Setup</ns0:head><ns0:p>As part of the development of this project, a total of 5 different predictive models have been developed.</ns0:p><ns0:p>Two are transfer learning models and three are deep learning models.</ns0:p><ns0:p>Within the transfer learning models, a model based on BETO has been developed, which is very similar to BERT trained in Spanish, and a model based on mBERT, a BERT checkpoint that can be used on texts in a large number of different languages. With respect to deep learning models, a model has been developed that implements a convolutional neural network, another model that implements a recurrent neural network (LSTM) and a last model that fuses a convolutional neural network with another recurrent neural network.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.1'>BETO model</ns0:head><ns0:p>BETO <ns0:ref type='bibr' target='#b8'>(Ca&#241;ete et al., 2020)</ns0:ref> is a transfer learning model that has been trained in the same way as BERT, except that it has been trained using texts written in Spanish instead of English, including Wikipedia entries, subtitles of series and movies, and even news. The specific technique with which it has been trained is called Whole Word Masking, a technique very similar to the Masked Language Model in which instead of hiding random tokens, it ensures that the hidden tokens always constitute a word, which means that if a token corresponding to a sub-word is hidden, the rest of the tokens that make up the whole word are also hidden. If we make a comparison between BETO and the different BERT models, we could say that it is a model extremely similar to the base BERT, given that it has 12 layers of encoders in its architecture.</ns0:p><ns0:p>Within BETO, there are two different models, one that has been trained using words containing both upper and lower case letters (original texts therefore) and another one that has been trained using only words written in lower case letters (texts are processed to comply with this premise). Although both models offer great results, depending on the problem to be solved, it is more appropriate to use one rather than the other. 
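For orientation only, loading one of the BETO checkpoints for binary classification with the Hugging Face transformers library could look like the sketch below. The model identifier shown ('dccuchile/bert-base-spanish-wwm-uncased') is the publicly released BETO checkpoint; treating it as the exact artefact used here is an assumption, since the paper does not name it explicitly.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed public BETO checkpoint; the cased variant is 'dccuchile/bert-base-spanish-wwm-cased'.
MODEL_NAME = "dccuchile/bert-base-spanish-wwm-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Two labels: 0 = non-racist, 1 = racist. The classification head is randomly
# initialised and still needs fine-tuning on the labelled tweets.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

inputs = tokenizer("ejemplo de tuit a clasificar", truncation=True, max_length=64, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # predicted class of the not-yet-fine-tuned model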
In this particular case, given that the dataset used is made up of tweets (where people are not at all careful about the way they express themselves), the uncased model is the one that performs better. After carrying out a large number of tests with both models, it has been found that the results obtained by this model are between 0.5% and 1% better, which justifies the selection of this model.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.2'>mBERT model</ns0:head><ns0:p>mBERT (Multilingual BERT) <ns0:ref type='bibr' target='#b16'>(Devlin et al., 2019)</ns0:ref> is a transfer learning model that has been trained on Wikipedia texts written in 104 different languages, including Spanish. Like BETO, it also has an identical architecture to the BERT base, with 12 layers of encoders, and two different models, one trained with texts containing both lowercase and uppercase letters and the other trained with texts containing only words written in lowercase letters. Contrary to BETO, after several tests, it has been found that the cased model performs slightly better than the uncased model. However, due to some problems in the tokenisation process, it has been necessary to convert all text to lower case even though the selected model is case-sensitive.</ns0:p><ns0:p>Multilingual models frequently encounter the common problem of language detection. This type of model does not usually have any mechanism or system that allows the detection of the language in which the texts that make up the input are written, which means that on many occasions the tokenizer makes mistakes when dividing the texts into tokens. If we add to this the fact that this type of model does not have mechanisms that allow words from different languages with the same meaning to be represented in a similar way in the vector space, everything seems to indicate that the results obtained after training these</ns0:p></ns0:div> <ns0:div><ns0:head>9/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science models are worse than those obtained by native models, that is, those designed to work with texts written in a single language (as is the case of BETO in Spanish or BERT in English).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.3'>CNN model</ns0:head><ns0:p>This model is made up of the following layers:</ns0:p><ns0:p>&#8226; Embedding layer.</ns0:p><ns0:p>&#8226; 3 convolutional layers each followed by a MaxPooling layer. 256, 128 and 64 filters. MaxPooling of 2x2.</ns0:p><ns0:p>&#8226; Final output layer with a single neuron, in charge of classifying the sample.</ns0:p><ns0:p>The schematic architecture of the model is shown in figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.4'>LSTM model</ns0:head><ns0:p>This model consists of the following layers:</ns0:p><ns0:p>&#8226; Embedding layer.</ns0:p><ns0:p>&#8226; LSTM layer with 64 units.</ns0:p><ns0:p>&#8226; 2 fully connected layers with dropout between them.</ns0:p><ns0:p>&#8226; Final output layer with a single neuron, in charge of classifying the sample.</ns0:p><ns0:p>The schematic architecture of the model is shown in figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. 
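As an illustration of the LSTM architecture just described, a minimal Keras sketch is given below. The vocabulary size, embedding dimension and dense-layer widths are assumptions, since the paper only fixes the 64 LSTM units, the dropout value and the training hyperparameters later reported in Table 4.

import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # assumed dimension of the pre-trained word embeddings
MAX_LEN = 35         # tweets in the dataset are roughly 3 to 35 words long

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM, input_length=MAX_LEN),  # embedding layer
    layers.LSTM(64),                                                # LSTM layer with 64 units
    layers.Dense(64, activation="relu"),                            # first fully connected layer
    layers.Dropout(0.5),                                            # dropout between the dense layers
    layers.Dense(32, activation="relu"),                            # second fully connected layer
    layers.Dense(1, activation="sigmoid"),                          # single output neuron
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()

Training would additionally attach the EarlyStopping callback mentioned in the parameter optimisation section, stopping once the validation metrics begin to decay.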
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.5'>CNN + LSTM model</ns0:head><ns0:p>This model consists of the following layers:</ns0:p><ns0:p>&#8226; Embedding layer.</ns0:p><ns0:p>&#8226; Convolutional layer with 64 filters.</ns0:p><ns0:p>&#8226; LSTM layer with 128 filters.</ns0:p><ns0:p>&#8226; Final output layer with a single neuron, in charge of classifying the sample.</ns0:p><ns0:p>The schematic architecture of the model is shown in figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Transfer Learning Parameter Optimisation (BETO and mBERT)</ns0:head><ns0:p>In order to ensure that the results obtained by the transfer learning models are as high as possible, a series of tests have been carried out in which the performance of both models has been tested as a function of the value taken by the various hyperparameters.</ns0:p><ns0:p>Although the number of hyperparameters that can be customised is really high, in practice it is totally impossible to try to modify all of them given the exponential growth experienced in the number of combinations. Therefore, it has been necessary to select those parameters that have a greater relevance in the behaviour of the model <ns0:ref type='bibr' target='#b46'>(Sun et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The parameters selected were as follows:</ns0:p><ns0:p>&#8226; Type of model: Cased or uncased.</ns0:p><ns0:p>&#8226; Number of epochs: 2, 4 and 8. The number of epochs dictates the number of times the model will process the entire training set.</ns0:p><ns0:p>&#8226; Batch size: 8, 16, 32 and 64. This parameter indicates the number of samples to be processed by the model until an internal update of the model weights is performed.</ns0:p><ns0:p>&#8226; Optimiser: Adam and Adafactor. Mechanisms used to manage the update of the model weights.</ns0:p><ns0:p>&#8226; Learning rate: 0.00002, 0.00003 and 0.00004. Parameter that determines the step size when performing an update in the model with a view to approaching a minimum in a loss function.</ns0:p><ns0:p>As the dataset is made up of tweets, the length of which is generally limited to between 3 and 35 words, the maximum sequence size is not so relevant. However, if we were working with other types of text, it would be another parameter to be modified.</ns0:p><ns0:p>The different possible combinations tested (a total of 144) and the best parameters found for each of the two models can be found in the table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. <ns0:ref type='bibr'>[2e-5, 3e-5, 4e-5]</ns0:ref> 4e-5 4e-5</ns0:p><ns0:p>The most optimal configurations for both models are practically identical, with the exception of the model type and number of epochs. 
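To make the search above concrete, one possible way of wiring the best transfer learning configuration of Table 3 into a fine-tuning run with the Hugging Face Trainer API is sketched below; `model` is assumed to be the sequence classification model loaded earlier, and `train_ds` / `val_ds` are assumed to be already tokenised dataset splits, neither of which is specified in the paper.

from transformers import Trainer, TrainingArguments

# Best configuration found for BETO (Table 3); mBERT differs only in using the cased
# checkpoint and 4 epochs. Trainer's default optimiser is AdamW, a variant of the
# Adam optimiser selected in the search.
args = TrainingArguments(
    output_dir="beto-racism",
    num_train_epochs=8,
    per_device_train_batch_size=8,
    learning_rate=4e-5,
    evaluation_strategy="epoch",
    logging_steps=50,
)

trainer = Trainer(
    model=model,             # assumed: BETO loaded for 2-label classification
    args=args,
    train_dataset=train_ds,  # assumed: tokenised training split
    eval_dataset=val_ds,     # assumed: tokenised validation split
)
trainer.train()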
While in the case of BETO it is the uncased model that obtains the best results, in mBERT it is the cased model that obtains the best results, although the difference is minimal.</ns0:p><ns0:p>In turn, in relation to the number of epochs, BETO obtains better results with 8 and mBERT with 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Optimisation of deep learning parameters (CNN, LSTM and CNN + LSTM)</ns0:head><ns0:p>In order to ensure that the results obtained by the deep learning models are as high as possible, a series of tests have been carried out in which the performance of the models has been tested depending on the value of certain strategically selected parameters.</ns0:p><ns0:p>As with BERT, the number of parameters that could be modified is very high, and even higher in this case, since factors such as the number of hidden neurons in each layer greatly influence the performance of the models. Although several tests have been carried out on both the structure of the different models and the number of hidden neurons in each layer, it is more important to focus on the remaining parameters that have been modified.</ns0:p><ns0:p>The selected parameters were the following:</ns0:p><ns0:p>&#8226; Batch size: 16, 32, 64, 128.</ns0:p><ns0:p>&#8226; Dropout: 0.25, 0.5.</ns0:p><ns0:p>&#8226; Optimizer: Adam, SGD.</ns0:p><ns0:p>&#8226; Activation function: Relu, Tanh.</ns0:p><ns0:p>&#8226; Learning rate: 0.01, 0.02, 0.001, 0.002.</ns0:p><ns0:p>On numerous occasions, the number of epochs is also a parameter to be modified. However, in all the models pertaining to this project, we have chosen to use a mechanism called EarlyStopping, a regularisation mechanism used to avoid overfitting which consists of stopping training when the control metrics on the validation set begin to decay.</ns0:p><ns0:p>The different possible combinations tested (a total of 64 in the CNN model and 128 in the LSTM and CNN+LSTM models) and the best parameters for each of the three models can be found in table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>. The optimal configurations for the three models are very similar, sharing values for the optimiser, activation function and learning rate. Dropout, which is not present in the proposed CNN architecture, is identical in the other two models.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Hardware and Software used for the experiments</ns0:head><ns0:p>To perform the pre-processing of the tweets and apply the deep learning techniques, a Jupyter notebook running Python 3.6 was used on a computer with the following characteristics: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, 32.0GB RAM and an NVIDIA GeForce RTX 2080 6GB graphics card.</ns0:p><ns0:p>The spaCy and nltk libraries were used to pre-process the text content.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Results</ns0:head><ns0:p>The following table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> shows the results obtained after numerous runs of the various predictive models.</ns0:p><ns0:p>The runtime column refers to the time spent training and validating the model. BETO offers the best results in every metric calculated, obtaining a wide advantage over the rest of the models.
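For completeness, the per-class and macro-averaged precision, recall and F1 values of Table 5, as well as the confusion matrices discussed next, can be derived from a model's predictions with scikit-learn; the `y_true` and `y_pred` lists below are placeholder labels for illustration, not the actual test-set outputs.

from sklearn.metrics import classification_report, confusion_matrix

# Placeholder labels: 0 = non-racist, 1 = racist.
y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

# Per-class and macro-averaged precision, recall and F1-score.
print(classification_report(y_true, y_pred, target_names=["Non-racist", "Racist"], digits=4))
# Rows: true class, columns: predicted class.
print(confusion_matrix(y_true, y_pred))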
The confusion matrices of the two best-performing models are shown in figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>As regards to the most important metric of all, Macro F1-score, in which all classes have the same importance in determining the effectiveness of the model, BETO obtains 85.14%, improving by more than 3 points over the other transfer learning model developed (mBERT) and between 5 and 6 points over the deep learning models. Several conclusions can be drawn from these results.</ns0:p></ns0:div> <ns0:div><ns0:head>14/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:p>. Confusion matrices of the best performing models (BETO and mBERT).</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CRITICAL EVALUATION AND COMPARISON OF THE RESULTS</ns0:head><ns0:p>Firstly, comparing the two transfer learning models, it can be seen that the native model (BETO) performs substantially better than the multilingual model (mBERT). The main reasons for this difference between the two models are as follows:</ns0:p><ns0:p>&#8226; Vocabulary difference between the two models: As BETO was trained with texts written in Spanish and mBERT with texts written in 104 different languages, the number of Spanish words in the two models differs greatly, which means that the percentage of words in the dataset present in the models' vocabulary of the models is also very different. As these are transfer learning models, where a generic model is used to solve a specific problem, the difference in the word coverage of the dataset has a significant impact on the models' results.</ns0:p><ns0:p>&#8226; Difference in the tokenisers: While BETO has a tokeniser that is solely and exclusively responsible for tokenising texts written in Spanish, mBERT has a generic tokeniser that does not even have a mechanism for detecting the language in which the text it is processing is written. This means that on numerous occasions tokenisation in mBERT is carried out erroneously by recognising Spanish words as words that exist in other languages.</ns0:p><ns0:p>With respect to the deep learning models, although they perform considerably worse than the transfer learning models (especially BETO), it should be noted that the results obtained are really good if we take into account the simplicity of the models developed and the small amount of data in the training set (only 1,400 samples).</ns0:p><ns0:p>Among the factors that influence the poorer performance of these models are the following:</ns0:p><ns0:p>&#8226; Single embeddings for several meanings of the same word: Although pre-trained embeddings are used which have a coverage of 82.4% of the words (an exceptionally good figure), all meanings of the same word are represented by the same numeric vector. This means that, for words in this situation, their numeric vector is influenced by all the different meanings of the word, which causes the accuracy of the model to be reduced slightly.</ns0:p><ns0:p>&#8226; Context independence: Unlike transfer learning models, deep learning models do not have such an effective mechanism for representing words based on their context (RNNs have one, but it is nowhere near as effective as that implemented by BERT-based systems). 
This often makes it harder for such models to take a word's specific context into account, which, in addition to the single embedding per word, leads to poorer model results.</ns0:p><ns0:p>Taking into account all of the above, it is worth highlighting that BETO is, without a doubt, the model that best solves the problem posed in this research, also demonstrating how important it is to develop native transfer learning models, as they obtain better results than multilingual models.</ns0:p><ns0:p>In our research, BETO was the best performing model. Deep learning models generally operate as black boxes, so in some cases it is difficult to reason why some perform better or worse. However, our performance results are consistent with those obtained in other studies <ns0:ref type='bibr' target='#b13'>(del Arco et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b20'>Guti&#233;rrez-Fandi&#241;o et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b36'>Pomares-Quimbaya et al., 2021)</ns0:ref> in which BETO was also the best performing model.</ns0:p><ns0:p>Although BETO and mBERT have very similar architectures, BETO was trained on Spanish data and mBERT was pre-trained on 104 languages. In this case, as in other articles that also make use of these models <ns0:ref type='bibr' target='#b20'>(Guti&#233;rrez-Fandi&#241;o et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b36'>Pomares-Quimbaya et al., 2021)</ns0:ref>, it is evident that, for the present problem, the model trained specifically on the same language as the dataset offers better results.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>In this research, the two objectives initially set have been completed:</ns0:p><ns0:p>&#8226; Messages on the Twitter platform containing words related to racism were obtained. A subsample of 2,000 messages was labelled, resulting in a balanced dataset.</ns0:p><ns0:p>&#8226; Different predictive models were generated using NLP techniques. These models were based on deep learning (CNN, LSTM and CNN+LSTM) and on transfer learning models (BERT). The best performing model was based on BERT, namely BETO, with a precision of 85.22%.</ns0:p><ns0:p>This fact justifies the need for the development of native transfer learning models. Having been trained with texts written in a single language instead of dozens of languages, the vocabulary of native models is far superior to that of multilingual models, which translates into greater effectiveness in the vast majority of situations.</ns0:p><ns0:p>This research shows preliminary results that need to be further investigated by improving some weaknesses, e.g. the size of the dataset used. Future research will increase the size of the dataset to achieve a more robust validation of the model presented in this article. Following more robust validation, it is intended to add new languages and to integrate these models into web applications that can be useful to society. Other limitations include, as in other studies that present text classification models, the need to retrain the model to include new terms generated by society over time.
In addition to this, it is proposed to implement the model in an application that helps to automatically detect racism in different websites and to present results and validation of the complete system.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Workflow of the research conducted.</ns0:figDesc><ns0:graphic coords='6,164.27,485.97,368.47,171.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. High-level representation of Transformers.</ns0:figDesc><ns0:graphic coords='9,206.79,63.78,283.46,228.54' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>is a metric used to calculate what percentage of the positive samples (which in this particular case equals the samples labelled as racist) have been properly classified. The formula used to calculate this metric is as follows, where c is equal to the class (0 = Non-racist, 1 = Racist), TP = True positive, FP = False positive and FN = False negative: turn, is a metric used to calculate what percentage of the samples classified as positive have been properly classified. The formula used to calculate this metric is as follows, where c is equal to the class (0 = Non-racist, 1 = Racist), TP = True positive and FN = False negative: Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. CNN architecture scheme</ns0:figDesc><ns0:graphic coords='11,277.65,227.33,141.74,359.81' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. LSTM architecture scheme</ns0:figDesc><ns0:graphic coords='12,277.65,199.44,141.74,436.64' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. CNN + LSTM architecture scheme</ns0:figDesc><ns0:graphic coords='13,263.48,203.42,170.08,325.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,150.09,63.77,396.87,152.48' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>are a type of neural network that, combined with supervised learning, process layers by mimicking the human eye, allowing them to differentiate different features in the received inputs. Although they were specifically designed for</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022)</ns0:cell><ns0:cell>3/18</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the objectives and weaknesses of the literature review conducted.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>(Pereira-Kohatsu et al., 2019)</ns0:cell><ns0:cell>HaterNet: dataset of 6,000 labelled tweets as hate</ns0:cell><ns0:cell>Tweets labelled as racist, but not racist. Very small</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>speech or not. 
LDA, QDA, Random Forest, Ridge</ns0:cell><ns0:cell>percentage of tweets are related to racism, which is</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Logistic Regression, SVM and an LSTM combined</ns0:cell><ns0:cell>one of the topics that interests us for this work.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>with an MLP applied to HaterNet.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Basile et al., 2019)</ns0:cell><ns0:cell>HateEval: two datasets, one in english and the sec-</ns0:cell><ns0:cell>Only 1,971 tweets are related to racism and 987 of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ond one in Spanish. Spanish dataset has 5,000 la-</ns0:cell><ns0:cell>this subset were mislabelled. Tweets from some</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>belled tweets in the training set.</ns0:cell><ns0:cell>people dnouncing racism that are labelled as racist.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Nedjah et al., 2019; Roy</ns0:cell><ns0:cell>Convolutional neural networks applied to different</ns0:cell><ns0:cell>CNN applied to different problems, not only for</ns0:cell></ns0:row><ns0:row><ns0:cell>et al., 2020)</ns0:cell><ns0:cell>problems.</ns0:cell><ns0:cell>text classification and not focused on racism and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>xenophobia.</ns0:cell></ns0:row><ns0:row><ns0:cell>(Paetzold et al., 2019)</ns0:cell><ns0:cell>Theory about recurrent neural networks (RNN).</ns0:cell><ns0:cell>Usually, training the BERT model from scratch on</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>similar dataset could produce much better result</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(Sany et al., 2022; Shahri et al., 2020).</ns0:cell></ns0:row><ns0:row><ns0:cell>(Talita and Wiguna, 2019;</ns0:cell><ns0:cell>Long Short-Term Memory Network (LSTM) ap-</ns0:cell><ns0:cell>Not all of them are focused on text classification.</ns0:cell></ns0:row><ns0:row><ns0:cell>Bisht et al., 2020; Zhao et al.,</ns0:cell><ns0:cell>plied to different problems.</ns0:cell><ns0:cell>Those that are, do not make comparisons with</ns0:cell></ns0:row><ns0:row><ns0:cell>2020; Zhang et al., 2021)</ns0:cell><ns0:cell /><ns0:cell>BERT.</ns0:cell></ns0:row></ns0:table><ns0:note>ReferenceFindings Weaknesses<ns0:ref type='bibr' target='#b0'>(Ahmad et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hasan et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Lakshmi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Br Ginting et al., 2019)</ns0:ref> Studies that corroborate good results in the task of text classification using conventional machine learning techniques (SVMs, Logistic regression).These articles are not focused on obtaining predictive models for classifying categories of racism and xenophobia in Spanish texts. (Plaza-Del-Arco et al., 2020) SVM, linear regression (LR), Naive Bayes (NB), decision trees (DT), LSTM and a lexicon-based classifier applied to a dataset composed by tweets related to xenophobia and misogyny. The tweets are labelled with value 1 if speaks about xenophobia or misogyny. This may bias the results of the model. The best model was 74.2% of F1score. 
(del Arco et al., 2021) LSTM, Bidirectional Long Short-Term Memory Networks (Bi-LSTM) and CNN, mBert, XLM and BETO applied to HatEval and HaterNet.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The best transfer learning hyperparameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameter</ns0:cell><ns0:cell>Options</ns0:cell><ns0:cell>BETO</ns0:cell><ns0:cell>mBERT</ns0:cell></ns0:row><ns0:row><ns0:cell>Model type</ns0:cell><ns0:cell>[cased, uncased]</ns0:cell><ns0:cell>uncased</ns0:cell><ns0:cell>cased</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>[2, 4, 8]</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>[8, 16, 32, 64]</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>[Adam, Adafactor]</ns0:cell><ns0:cell>Adam</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The best deep learning hyperparameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameter</ns0:cell><ns0:cell>Options</ns0:cell><ns0:cell cols='3'>CNN LSTM CNN+LSTM</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>[16, 32, 64, 128]</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Dropout</ns0:cell><ns0:cell>[0.25, 0.5]</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>[Adam, SGD]</ns0:cell><ns0:cell cols='2'>Adam Adam</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>Activation function</ns0:cell><ns0:cell>[Relu, Tanh]</ns0:cell><ns0:cell>Relu</ns0:cell><ns0:cell>Relu</ns0:cell><ns0:cell>Relu</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate</ns0:cell><ns0:cell>[1e-2, 2e-2, 1e-3, 2e-3]</ns0:cell><ns0:cell>2e-3</ns0:cell><ns0:cell>2e-3</ns0:cell><ns0:cell>2e-3</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Results obtained for all models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell /><ns0:cell>Non-racist</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Racist</ns0:cell><ns0:cell /><ns0:cell cols='3'>macro-averaged</ns0:cell><ns0:cell>runtime</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>P(%) R(%) F1(%) P(%) R(%) F1(%) P(%) R(%) F1(%)</ns0:cell><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>BETO</ns0:cell><ns0:cell>84.28</ns0:cell><ns0:cell>87.30</ns0:cell><ns0:cell>85.76</ns0:cell><ns0:cell>86.17</ns0:cell><ns0:cell>82.94</ns0:cell><ns0:cell>84.52</ns0:cell><ns0:cell>85.22</ns0:cell><ns0:cell>85.12</ns0:cell><ns0:cell>85.14</ns0:cell><ns0:cell>1,230s</ns0:cell></ns0:row><ns0:row><ns0:cell>mBERT</ns0:cell><ns0:cell>83.28</ns0:cell><ns0:cell>81.11</ns0:cell><ns0:cell>82.18</ns0:cell><ns0:cell>80.73</ns0:cell><ns0:cell>82.94</ns0:cell><ns0:cell>81.82</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>82.02</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>1,129s</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell>80.13</ns0:cell><ns0:cell>81.43</ns0:cell><ns0:cell>80.78</ns0:cell><ns0:cell>80.21</ns0:cell><ns0:cell>78.84</ns0:cell><ns0:cell>79.52</ns0:cell><ns0:cell>80.17</ns0:cell><ns0:cell>80.14</ns0:cell><ns0:cell>80.15</ns0:cell><ns0:cell>840s</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM</ns0:cell><ns0:cell>78.90</ns0:cell><ns0:cell>84.04</ns0:cell><ns0:cell>81.39</ns0:cell><ns0:cell>82.05</ns0:cell><ns0:cell>76.45</ns0:cell><ns0:cell>79.15</ns0:cell><ns0:cell>80.48</ns0:cell><ns0:cell>80.24</ns0:cell><ns0:cell>80.27</ns0:cell><ns0:cell>844s</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CNN+LSTM 77.58</ns0:cell><ns0:cell>83.39</ns0:cell><ns0:cell>80.38</ns0:cell><ns0:cell>81.11</ns0:cell><ns0:cell>74.74</ns0:cell><ns0:cell>77.80</ns0:cell><ns0:cell>79.34</ns0:cell><ns0:cell>79.07</ns0:cell><ns0:cell>79.09</ns0:cell><ns0:cell>938s</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68906:1:0:CHECK 21 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Editor's Decision MAJOR REVISIONS This is an interesting manuscript, so please consider the reviewers' comments in order to improve the quality. [# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peerreview process should only be included if the authors are in agreement that they are relevant and useful #] Answer: First of all, we would like to thank the organization and the three reviewers for their valuable contributions to this paper, which we have found very helpful. Their detailed comments have enabled us to improve both the scientific content of the paper and its presentation. The changes made in the manuscript are described in detail below. General remarks: Our revision has taken full account of the comments made by the three reviewers. Our main goal, at all times has been to improve the scientific contribution of the paper, its readability and its overall presentation. We have also carried out an extensive revision of recent literature and the other changes proposed by the reviewers. Our individual responses to each one of the reviewers’ comments are set out below. 1 Reviewer 1: Basic reporting The authors of this paper have focused on a very important aspect of our society, the increasing popularity of racism and xenophobia over social media. More specifically, the authors focus on the Twitter platform and the Spanish language where they use three different deep learning models, CNN, LSTM, and BERT with the last one being the most effective during experimentation. The paper is well structured and well written. This is another important paper in the area that utilizes NLP and behavioural characteristics, showcasing existing problems. However, there are a few issues that the reviewer would like to raise with the authors. The contributions of the paper are three, the assembly of the datasets, the experimentations on top of the datasets, and last the critical comparison and analysis. Answer: We are very grateful to the reviewer for his general views on our manuscript. Below we answer each and every one of your suggestions for improvement of this manuscript. A table at the end of the related literature can help the authors summarize their novelties and the potential reader understand them. Answer: Attending your suggestion, we have summarized the literature review adding this table to the manuscript. 2 Experimental design The hardware and software specifications of the testbed environment are not mentioned and they should be for the replicability of the results. Answer: We are very grateful for this appreciation. We have added this information to the manuscript: Validity of the findings The discussion section should be renamed to critical evaluation and comparison of the results. In the same section, the authors should further explain their results, not just demonstrate the numbers. They should clarify why each methodology had different results and how they can be used potentially in an automated environment. 3 Answer: We are very grateful to the reviewer for this suggestion. We have renamed the section as indicated and modified the content. We hope we have succeeded in explaining the results and clarifying the methodology. We have added this information to the manuscript: There is no mention of introduced overhead if any between the different deep learning models. Answer: We are very grateful to the reviewer for this suggestion. We have added a new column with the runtime of the different experiments. 
We have added this information to the manuscript: Additional comments There is no mention of privacy. This work clearly demonstrates that Cambridge Analytica is still alive and we can export people's behavioural characteristics without their consent just by acquiring publicly available data (e.g. https://doi.org/10.1109/MC.2018.3191268 , https://doi.org/10.1109/UICATC.2013.12 , https://doi.org/10.3390/make2030011 ). We very much appreciate these additional comments from the reviewer. In response to this suggestion, we have added the following text to the manuscript: In addition to this, in our research we are not dealing with patient data, but with publicly available information on a web platform, which is why this study is considered exempt from IRB approval because: - Public information from a web platform, in this case Twitter, is used. - When labelling the tweets, columns that could help identify the user who wrote the message were removed. Ultimately, only labelled texts were used to train the models and it is not possible to personally identify their authors. In this document, https://research.cofc.edu/administration/documents/policies-documents/IRB_SNSguidance.pdf , you can see this text about studies with Twitter data: "It is important to understand what is considered Public Observation when observing online spaces. If the research activity consists of Public Observation then the IRB application may qualify for Exempt review. For IRB consideration, only webpages that are accessible without a user login are considered Public Observation. Some social media, such as Twitter, is entirely public, as tweets can be accessed without having an account or being logged in. Other social media, like Facebook, are a mixture of public and private. Facebook consists of public pages and private groups, so it is important to include what the privacy settings are in the IRB application, if review is required." Reviewer 2: Basic reporting This manuscript proposes several deep learning models to classify Spanish texts on xenophobia and racism. Among the objectives of the research presented are to generate a dataset of Spanish-language tweets labelled as xenophobic or racist, or not. To carry out this research, the authors have compiled their own dataset which they have shared with the rest of the scientific community and, on this, they have applied CNN, LSTM and BERT models. The manuscript is well-structured, state-of-the-art, scientifically justified, reproducible and novel. The research focuses on generating models with a Spanish dataset which, a priori, is often difficult to justify. However, the authors mention other similar research such as HaterNet and HatEval, highlighting the possible weaknesses of these experiments and thus justifying the experiments presented in this manuscript. Answer: We are very grateful to the reviewer for his general views on our manuscript. Below we answer each and every one of your suggestions for improvement of this manuscript. Experimental design At the methodological level, the authors mention the methodology applied, how they obtained the dataset and, in addition, they have shared the Jupyter notebooks containing the code of the models generated and the results obtained. Validity of the findings The authors need to address some minor weaknesses before the manuscript can be considered for publication in a journal. 1. Although CNN, LSTM and BERT techniques are indicated in the title, CNN and LSTM techniques are not mentioned in the abstract.
It is recommended that the authors modify the abstract by adding this information and other interesting information regarding the final results obtained, for example, the % accuracy obtained in the models. We very much appreciate these additional comments from the reviewer. In response to this suggestion we have added the following text to the abstract of the manuscript: 6 2. To improve the interpretability of the results, authors are advised to add some of the figures of the confusion matrices corresponding to the models that obtained the best results. We thank the reviewer for this suggestion. In response, we have added the confusion matrices of the two best performing models: 3. On the other hand, they should revise the manuscript and modify some concepts that appear written in two different ways, for example 'tagged' and 'labelled'. Answer: We are sorry for any errors in the text. We have carried out a thorough reading and a native English speaker has checked and corrected the errors. 7 Reviewer 3: Basic reporting This paper evaluates different approaches to detect hate speech motivated by racism or xenophobia in Spanish using Twitter data. The structure of the manuscript is clear and there is an extensive literature review. The authors justifies the need of an adapted model for supervised text classification in Spanish to detect hateful messages particularly aimed at migrants or non-caucasian people. Answer: We are very grateful to the reviewer for his general views on our manuscript. Below we answer each and every one of your suggestions for improvement of this manuscript. I consider however that the rational is still poor and needs to create arguments in the next directions: -If the problem is language-based, what other developments have shown that native models for hate speech detection are better that general ones? This is, showing examples of attempts in languages different from English that have a good performance (and not only in Spanish). -Compare the results of this paper to the best performances obtained in other languages This would make the manuscript more useful for international audiences. Answer: We greatly appreciate this suggestion on your part. We believe that it has enabled us to improve the manuscript significantly and to attract more interest from international readers. We have added the following text to the manuscript: Related to the comparison of the results with the best performances obtained in other languages, we are very grateful for the suggestion. However, we believe that comparing the f1-score metric obtained in a model trained on a dataset with those obtained in our paper on a different dataset would not be a valid comparison. What we think that would be appropriate would be to compare the accuracy obtained by the models trained on the same dataset, since, if the dataset is biased, the metrics may also be biased. We note this suggestion and take it as future work that would form part of another research project with other objectives than the present one. Regarding previous attempts, the authors mention HaterNet and HatEval, but are missing 8 Pharm. This project develops deep learning models to classify tweets with hate speech towards migrants and refugees in Spanish, Greek and Italian. In Spanish, they manually labelled more than 12,000 tweets finding 1,390 hateful messages to build the classification model with RNN. This approach does not use BERT, but still gets good metrics (f1score=0.87 for Spanish). 
The classification interface (http://pharm-interface.usal.es) offers more information (probably the training corpus under request too) and these two papers: Vrysis, L.; Vryzas, N.; Kotsakis, R.; Saridou, T.; Matsiola, M.; Veglis, A.; Arcila, C.; Dimoulas, C. (2021). A Web Interface for Analyzing Hate Speech. Future Internet, 13(3), 80. https://doi.org/10.3390/fi13030080 Arcila, C., Sánchez, P., Quintana, C., Amores, J. & Blanco, D. (2022, Online preprint). Hate speech and social acceptance of migrants in Europe: Analysis of tweets with geolocation. Comunicar, 71. We are grateful to have been made aware of the PHARM project through your comment. We have considered the information provided and have added one of the 2 proposed references, as we have not been able to find the preprint you suggested in the second position. In relation to your comment regarding the result obtained by the RNN model, which, without using BERT, offers an f1-score of 0.87, it should be noted that a metric must be evaluated with a model on the same dataset, so it would be interesting to see what result BERT offers for the same data that were used to train that RNN. On the other hand, many studies support the fact that BERT offers better results than RNN for problems related to text classification [1-4]. [1] M. P. Shahri, K. Lyon, J. Schearer, y I. Kahanda, «DeepPPPred: An Ensemble of BERT, CNN, and RNN for Classifying Co-mentions of Proteins and Phenotypes», dec. 2020. doi: 10.1101/2020.09.18.304329. [2] F. Harrag, M. Debbah, K. Darwish, y A. Abdelali, «BERT Transformer model for Detecting Arabic GPT2 AutoGenerated Tweets», arXiv:2101.09345 [cs], Jan. 2021, Accessed: 23th January de 2022. [Online]. Available in: http://arxiv.org/abs/2101.09345 [3] M. M. H. Sany, M. Keya, S. A. Khushbu, A. S. A. Rabby, y A. K. M. Masum, «An Opinion Mining of Text in COVID-19 Issues along with Comparative Study in ML, BERT & RNN», arXiv:2201.02119 [cs], Jan. 2022, Accessed: 23th January de 2022. [Online]. Available in: http://arxiv.org/abs/2201.02119 [4] Q. Chen, «Stock Movement Prediction with Financial News using Contextualized Embedding from BERT», arXiv:2107.08721 [cs, q-fin], jul. 2021, Accessed: 23th January de 2022. [Online]. Available in: http://arxiv.org/abs/2107.08721 On the other hand, what is the goal of point 2.2? Why is it called 'Sentiment analysis'? Do the authors refer probably to text classification? The title is confusing since SA is not the scope of this paper. 9 Answer: We are grateful for this appreciation and apologise for it. We have indeed mixed up the concepts of Sentiment analysis and text classification. In response to this, we have modified the term in the manuscript. Experimental design The method is correctly included in the paper and shows that the research is in line with the scope of the Journal. The models are well described, although I am not sure if most of the details are necessary for a Computer Science specialized audience (i.e. description of what BERT is or definition of te evaluation metrics). Thank you for your comment. Due to the structure of some of the papers published in this journal [1-6], we have decided to keep this information in order to reach a broader audience. We hope you will understand our decision. [1] J. Hassan, M. A. Tahir, y A. Ali, «Natural language understanding of map navigation queries in Roman Urdu by joint entity and intent determination», PeerJ Comput. Sci., vol. 7, p. e615, jul. 2021, doi: 10.7717/peerj-cs.615. [2] S. Renjit y S. 
Idicula, «Natural language inference for Malayalam language using language agnostic sentence representation», PeerJ Comput. Sci., vol. 7, p. e508, may 2021, doi: 10.7717/peerj-cs.508. [3] C.-R. Ko y H.-T. Chang, «LSTM-based sentiment analysis for stock price forecast», PeerJ Comput. Sci., vol. 7, p. e408, mar. 2021, doi: 10.7717/peerj-cs.408. [4] E. M. Aboelela, W. Gad, y R. Ismail, «The impact of semantics on aspect level opinion mining», PeerJ Comput. Sci., vol. 7, p. e558, jun. 2021, doi: 10.7717/peerj-cs.558. [5] B. R. Bhamare y J. Prabhu, «A supervised scheme for aspect extraction in sentiment analysis using the hybrid feature set of word dependency relations and lemmas», PeerJ Comput. Sci., vol. 7, p. e347, feb. 2021, doi: 10.7717/peerj-cs.347. [6] R. S. Wagh y D. Anand, «Legal document similarity: a multi-criteria decision-making perspective», PeerJ Comput. Sci., vol. 6, p. e262, mar. 2020, doi: 10.7717/peerj-cs.262. The training corpus seem to be labelled by specialists in Psychology and has specific examples of racism and xenophobia. This is a good value of the paper. Though, the reliability of this manual classification is not reported (i.e. inter-coder reliability measure), which seriously compromise the quality of the ad hoc data set. This is extremely important since this test can tell if different coders were really considering the same as hate, and then validated the qualitative category. We are pleased that you appreciate the work done on the labelling of the dataset. Regarding your doubt about the reliability of the manual classification, a pairwise validation of each subset of 500 tweets was performed and, after completion, the 8 psychology experts pooled the labels of the 2,000 tweets. We have added this information to the manuscript and hope that the procedure is completely clear and is considered correct. The added text is shown below: 10 In addition, there is a relevant privacy concern in the Twitter data. In Zenodo, the tweets include the usernames and the given label (hate/ no hate). This does not meet the ethical and data protection standards for Twitter analysis. Thank you for your appreciation. We have removed the column identifying the user who wrote each tweet. In addition, we have added text related to data privacy suggested by reviewer 1 in subsection 3.1. The added text is shown below: Validity of the findings The findings are relevant for the hate speech detection field, since they can be used to build better models in different languages. In special, the use of BETO and the verification of its enormous advantage can help other researchers and practitioners to create better models in the future. We are glad that you find the findings relevant. My concern is the validation of the results in other different data. This is, for example, collecting new Twitter data (within another timeframe or word filters) and validate the obtained models. I consider this external validation phase extremely important for ML models. We appreciate the reviewer's suggestion and understand the importance of validating the model with other data. However, the complete procedure of obtaining and labelling data to test the efficacy of the model on the data involves work that would form part of a continuation of the research presented in the manuscript. The main objective of this manuscript is to show the good results obtained and, from here, to open another line of work to further improve the model and increase confidence in it. 
This way of working and showing results for further work coincides with other articles such as, for example, the study by Zhou et al. [1] who, although they collected 123,977 tweets, labelled a subsample of 2,219 and applied deep learning techniques, and other studies like [2] that used a similar number of tweets, 2,095, to train BERT and ML models. However, we also wanted to point out that we are testing different BERT models against another problem, also collecting an original dataset. So far we have been able to present an initial study at a conference [3] and we currently have a more extended study under review in another journal. We very much appreciate this suggestion and have modified the lines of future work as shown below: [1] Zhou S, Zhao Y, Bian J, Haynos AF, Zhang R. Exploring Eating Disorder Topics on Twitter: Machine Learning Approach. JMIR Medical Informatics. 2020;8(10):e18273. doi:10.2196/18273 [2] Roitero, K., Bozzato, C., Mea, V.D., Mizzaro, S., & Serra, G. (2020). Twitter goes to the Doctor: Detecting Medical Tweets using Machine Learning and BERT. SIIRH@ECIR. [3] J. A. Benítez-Andrades et al., «BERT Model-Based Approach For Detecting Categories of Tweets in the Field of Eating Disorders (ED)», in 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Jun. 2021, pp. 586-590. doi: 10.1109/CBMS52027.2021.00105. Regarding the conclusions, I think that the limitations and future research should be better developed. Is just the size of the dataset the only limitation? What specific applications can generate this model? How can the model deteriorate in time with the use on new words/sentences? How often should this model be re-trained to be really useful in real-life applications? We appreciate your suggestion and have added more information to the conclusions regarding limitations and future work. We hope we have improved this section. The added text is shown below: "
Here is a paper. Please give your review comments after reading it.
367
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Due to ever-evolving software developments processes, companies are motivated to develop desired quality products quickly and effectively. Industries are now focusing on the delivery of configurable systems to provide several services to a wide range of customers by making different configurations in a single largest system. Nowadays, component-based systems are highly demanded due to their capability of reusability and restructuring of existing components to develop new systems. Moreover, product line engineering is the major branch of the component-based system for developing a series of systems. Software product line engineering (SPLE) provides the ability to design several software modifications according to customer needs in a cost-effective manner.</ns0:p><ns0:p>Researchers are trying to tailor the software product line (SPL) process that integrates agile development technologies to overcome the issues faced during the execution of the SPL process such as delay in product delivery, restriction to requirements change, and exhaustive initial planning. The selection of suitable components, the need for documentation, and tracing back the user requirements in the agile-integrated product line (APL) models still need to improve. Furthermore, configurable systems demand the selected features to be the least dependent. In this paper, a hybrid APL model, quality enhanced application product line engineering (QeAPLE) is proposed that provides support for highly configurable systems (HCS) by evaluating the dependency of features before making the final selection. It also has a documentation and requirement traceability function to ensure that the product meets the desired quality. Two-fold assessments are undertaken to validate the suggested model, with the proposed model being deployed on an active project. After that, we evaluated the proposed model performance and effectiveness using after implementing it in a real-world environment and compared the results with an existing method using statistical analysis. The results of the experimental</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Software development is a complex activity that involves knowledge management, fast product development, a competitive market, multiple industrial aspects, and quick advancement in technologies <ns0:ref type='bibr' target='#b5'>(Clarke et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b15'>Giray, 2021)</ns0:ref>. As a means of dealing with all these complexities, using resources efficiently, and establishing control, software development organizations mostly select those methods that help in the execution of the software product development process within a given time. There are many methods available for software development which includes traditional software development life cycles like the waterfall method. The main problem with these methods is that they are not flexible to changes and required more time for documentation and initial planning. This significantly disturbs the timeto-market and may result failing of the software product. On the other hand, agile ensures shorter releases, faster functionality delivery and feedback, timely delivery, and increases quality <ns0:ref type='bibr' target='#b10'>(Dove, Schindel &amp; Hartney, 2017;</ns0:ref><ns0:ref type='bibr'>Camacho et al., 2021)</ns0:ref>. 
Development carried out with agile improves the pace of adaptability and delivery, which is essential to satisfy market demands <ns0:ref type='bibr' target='#b27'>(Kl&#252;nder et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Software product line (SPL) engineering supports reusable common software resources by following a predefined architecture and plan. The reuse of different predefined features enables product tailoring to make it fit customer needs <ns0:ref type='bibr' target='#b1'>(Aggarwal &amp; Mani, 2019;</ns0:ref><ns0:ref type='bibr'>Camacho et al., 2021;</ns0:ref><ns0:ref type='bibr'>Al-Hawari, Najadat &amp; Shatnawi, 2021)</ns0:ref>. SPL has become a vital paradigm for companies as it favors usability, cost, productivity, quality, and time <ns0:ref type='bibr'>(Krueger &amp; Clements, 2017</ns0:ref><ns0:ref type='bibr'>, 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chac&#243;n-Luna et al., 2019;</ns0:ref><ns0:ref type='bibr'>Bolander &amp; Clements, 2021)</ns0:ref>. Variability is the capacity of the product framework to be changed, re-configured, expanded, and arranged for use in a particular context, and has therefore become a central concern for researchers and practitioners <ns0:ref type='bibr'>(Krueger &amp; Clements, 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Carvalho et al., 2019;</ns0:ref><ns0:ref type='bibr'>Wu et al., 2021;</ns0:ref><ns0:ref type='bibr'>Ali et al., 2021a)</ns0:ref>. SPL aims to provide a time-efficient and cost-effective methodology for HCS by reusing its assets <ns0:ref type='bibr' target='#b8'>(Dintzner, van Deursen &amp; Pinzger, 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Carvalho et al., 2019;</ns0:ref><ns0:ref type='bibr'>Ter Beek et al., 2020)</ns0:ref>. Usually, standalone products adopt the whole variability model, yet most of their features differ <ns0:ref type='bibr' target='#b0'>(Abal et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Nowadays, agile software development and SPL have become more popular in the software development industry, and both approaches are established ways of developing software <ns0:ref type='bibr' target='#b23'>(Hohl et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hayashi &amp; Aoyama, 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Aggarwal &amp; Mani, 2019;</ns0:ref><ns0:ref type='bibr'>Oriol et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Kasauli et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref>. The agile manifesto provides a better architecture for SPL when its methods are integrated with SPLE <ns0:ref type='bibr' target='#b4'>(Chac&#243;n-Luna et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kl&#252;nder et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref>. Recently, many researchers have investigated both paradigms <ns0:ref type='bibr' target='#b16'>(Haidar, Kolp &amp; Wautelet, 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hayashi &amp; Aoyama, 2018;</ns0:ref><ns0:ref type='bibr'>Krueger &amp; Clements, 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kl&#252;nder et al., 2019)</ns0:ref> because both share common goals such as customer satisfaction, limiting costs, reduced time to market, quality, and improved software productivity <ns0:ref type='bibr' target='#b17'>(Hanssen &amp; Faegri, 2008;</ns0:ref><ns0:ref type='bibr' target='#b1'>Aggarwal &amp; Mani, 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kl&#252;nder et al., 2019)</ns0:ref>.
After combining both methods, researchers named the result agile product line engineering (APLE) <ns0:ref type='bibr' target='#b24'>(Hohl et al., 2018)</ns0:ref>. APLE, a hybrid process model with mutual benefits, satisfies customers through their common objectives and needs. Moreover, SPL handles variability identification, variability management, and the selection of features; agile, on the other hand, only needs the requirements to deliver the required product <ns0:ref type='bibr'>(Mohan, Ramesh &amp; Sugumaran, 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abal et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chac&#243;n-Luna et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref>. These approaches are correspondingly categorized as reactive and proactive software engineering approaches. Hence, both approaches have the same objective of improving software development efficiency.</ns0:p><ns0:p>The main issues are dynamic variation and configuration, which cause the irrelevant selection of components, and weak variability management for reuse and restructuring due to a lack of documentation and component repository management during APLE-based HCS development. Therefore, the objective of this research is to address the issues identified from the existing literature and described in this section, such as the adoption of automatic documentation of the initial document and the code. Moreover, it covers the selection of components or features so as to reduce the dependency between features, the assurance of the quality of the final product variant through test-driven development and requirement tracing functionality, and, finally, the configuration of both processes to be suitable for HCS development.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Research Contributions</ns0:head><ns0:p>To overcome the mentioned problems, we develop and present a hybrid process model preserving the benefits of both, i.e., Agile-SPL and HCS. The contributions of this paper are as follows: &#61623; A significant review of the literature has been carried out to understand the existing studies on agile SPLE and HCS. The review showed that there is a need to compare a component-based QeAPLE model in terms of commonalities and variabilities management in HCS with the existing method.</ns0:p><ns0:p>We also evaluated the performance of participants using the QeAPLE model as compared to existing methods. The existing model used for the comparative analysis was selected from the literature, i.e., the Arkendi model <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref>.</ns0:p><ns0:p>&#61623; The QeAPLE model provides guidelines and directions for researchers and industrialists during dynamic variability management and the selection of components for reuse and restructuring in APLE during HCS development.</ns0:p><ns0:p>The rest of the paper is structured as follows: Section 2 discusses the literature review and the research gap identified in the existing work. Section 3 provides the details of the proposed process model and its components, describes its functioning, and states its pre- and post-conditions. Section 4 describes the evaluation of the proposed model and a comparison of the experimental results with the existing method.
Finally, Section 5 concludes the research work and provides possible future directions.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Related Work</ns0:head><ns0:p>Several research studies in the literature tend to integrate agile software development with product line engineering to gain the benefits of both processes. <ns0:ref type='bibr' target='#b23'>(Hohl et al., 2016)</ns0:ref> conducted a qualitative study on integrating the agile process with SPLs, which helps organizations incorporate end-user changes rapidly and launch the software to the market in a timely fashion. Furthermore, they observed that the development procedure can be improved by transparency, cooperation, adaptability, productivity inside the developers' group, and software quality grounded in reuse within the profit range. A highly configurable system requires the integration of features that are least dependent upon each other and as modular as possible <ns0:ref type='bibr'>(Meinicke et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abal et al., 2018;</ns0:ref><ns0:ref type='bibr'>Ter Beek et al., 2020)</ns0:ref>. The agile SPL model should be capable of providing the product with such characteristics. The quality of HCS is difficult to analyze because of the multiple variations of a single product. Consequently, a comprehensive testing mechanism is required to achieve product quality <ns0:ref type='bibr'>(Parejo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abal et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Kasauli et al., 2021)</ns0:ref>. <ns0:ref type='bibr'>(Yoder, 2002)</ns0:ref> provided a tailoring approach to manage the new variant according to the product line variant, after which integration and delivery of the final variant are carried out using an agile development process. The main limitation of this approach is that the documentation and component selection parts are not clearly described. It also does not address HCS. Similarly, in <ns0:ref type='bibr' target='#b12'>(Ghanam &amp; Maurer, 2010)</ns0:ref>, the researchers altered the variation integration mechanism using a code refactoring method. The main problem with the proposed method was that the mechanism is not optimized for the selection of independent features. <ns0:ref type='bibr'>(Carbon et al., 2008)</ns0:ref> improved the integration by adding test-driven development (TDD). This ensures the quality of the new variant; however, it provides no mechanism to check component dependency or to develop the configuration system at the time of feature selection. The existing literature provides a comprehensive solution for the adoption of the integrated APLE model <ns0:ref type='bibr' target='#b16'>(Haidar, Kolp &amp; Wautelet, 2017;</ns0:ref><ns0:ref type='bibr' target='#b19'>Hayashi, Aoyama &amp; Kobata, 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hohl et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b16'>(Haidar, Kolp &amp; Wautelet, 2017)</ns0:ref> proposed a comprehensive model for the agile product line engineering process; still, it does not support the selection of features or components that would make the software highly modular. The proposed model not only provides test-driven development for quality assurance but also provides insights into documentation and variation management.
The key issues in this approach are the neglect of feature selection before using the features in TDD, and incompatibility with HCS. To solve these issues, a comprehensive method is required that not only selects the least dependent component but also delivers automatic documentation along with requirement analysis for better variation management. The focus of this research is the execution of the comprehensive steps required to use agile techniques iteratively. The approach used is reactive and considers both application engineering (AE) and domain engineering (DE). The main limitation of this research work is that it does not address the quality of the end product, nor does it discuss support for highly configurable systems. Similarly, the other works discussed above <ns0:ref type='bibr'>(Yoder, 2002;</ns0:ref><ns0:ref type='bibr'>Carbon et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b12'>Ghanam &amp; Maurer, 2010)</ns0:ref> share the same issues in their contributions. <ns0:ref type='bibr' target='#b24'>(Hohl et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref> identified that the application engineering process does not provide detailed feedback to the domain engineering phase, which is mainly responsible for version management. The researchers improved the APLE process by making it semi-automatic and allowing the application engineering process to send feedback to the domain engineering process. The main limitation of this existing approach is that it can neither improve end-product quality nor check feature dependency while selecting the features for the new product. To improve the APLE model, a scoping mechanism for the APLE process was proposed <ns0:ref type='bibr'>(da Silva, 2012)</ns0:ref>. It allows improved version management and provides better version control. The main limitation of this study is that the process is not favorable for HCS, as HCS requires an improved feature selection process that first checks feature dependencies; moreover, it does not address achieving end-product quality. Another work aimed to get the most useful information from the application engineering process to aid the domain engineering process (Tian, 2014), determining that domain engineering requires much information to improve version control. The main drawback of this mechanism is that it does not ensure the quality of the end product, and it does not consider feature dependency checks while selecting features for the new product. <ns0:ref type='bibr'>(O'Leary et al., 2012)</ns0:ref> mainly focused on the application engineering part of APLE rather than domain engineering. The main aim of the proposed mechanism was to ensure product quality by improving the testing of the product. Its main limitation is that it does not address version control, and its feature selection process is also faulty and needs much improvement. <ns0:ref type='bibr' target='#b2'>(Cardoso et al., 2012)</ns0:ref> identified the need for an APLE model to produce a security surveillance system and proposed an APLE model for security surveillance system production. The main limitation of this research work is that it does not properly focus on the application engineering process and tends to achieve quality by test-driven development.
Moreover, feature dependency also needs to be analyzed while configuring features to make a new product. Similarly, <ns0:ref type='bibr' target='#b0'>(Abal et al., 2018)</ns0:ref> proposed an APLE framework for large production units and industries. The researchers identified that the existing APLE models are only configured for small and medium enterprises and needed to be re-tailored for large industries. The proposed framework does not support achieving product quality, and it does not identify feature dependency when selecting features for a new product variant. In another work, <ns0:ref type='bibr' target='#b24'>(Hohl et al., 2018)</ns0:ref> performed an analysis for proposing an APLE model for automobile variants. The researchers found that the application engineering process for automobiles is very important compared to the domain engineering process. To provide a comprehensive APLE model, they first identified the appropriate recommendations and then, based on these recommendations, proposed a novel model for the automobile industry. The main problem with the proposed mechanism is that it does not support quality assurance and variability management; moreover, the feature dependency check is also missing.</ns0:p><ns0:p>Improvement of version management is also an important aspect. <ns0:ref type='bibr' target='#b12'>(Ghanam &amp; Maurer, 2010)</ns0:ref> mainly focused on improving version management for the APLE process. The main improvement they introduced was a refactoring process that provides classified information for each of the versions. The main drawbacks are that the quality check of the product is ignored and feature dependency is neglected while selecting the components for a new product variant. Besides version management, improving the APLE process to make initial planning faster is also desired. Different studies identified possible improvements in the APLE process and improved the initial planning of the product; they also performed quality checking of the work and highlighted that the proposed mechanism is not able to provide comprehensive version management and feature dependency checks. Apart from providing APLE models for the automotive industry and surveillance camera production units, the literature identified the need for an APLE process for enterprise systems, which are relatively complex to handle. Researchers proposed an APLE model for enterprise industries <ns0:ref type='bibr' target='#b10'>(Dove, Schindel &amp; Hartney, 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hohl et al., 2017;</ns0:ref><ns0:ref type='bibr'>Kl&#252;nder, Hohl &amp; Schneider, 2018;</ns0:ref><ns0:ref type='bibr'>Uysal &amp; Mergen, 2021)</ns0:ref>. The main limitation of this research work is that it does not check feature dependency while selecting components for new products. <ns0:ref type='bibr' target='#b19'>(Hayashi, Aoyama &amp; Kobata, 2017;</ns0:ref><ns0:ref type='bibr'>Kl&#252;nder, Hohl &amp; Schneider, 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref> integrated the APLE process, typically comprising Scrum as an iterative application engineering process. The main limitation of this approach is that it does not provide much feedback, nor is there any automatic documentation module.
Furthermore, there is a strong need to maintain version control, which depends on the feedback coming from the application engineering process. <ns0:ref type='bibr'>(da Silva et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kl&#252;nder et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Kasauli et al., 2021;</ns0:ref><ns0:ref type='bibr'>Camacho et al., 2021)</ns0:ref> emphasized that there is currently no APLE model that completely provides all the details of the integrated process. To address this problem, researchers proposed a new, fully comprehensive APLE model with all the steps required to produce a new variant iteratively. Still, the lack of a dependency check while selecting the features remains a limitation. A transformation model for converting production from a traditional SPLE process to an agile SPLE process is also significant. <ns0:ref type='bibr'>(Kl&#252;nder, Hohl &amp; Schneider, 2018)</ns0:ref> proposed a new transformation model that helps the industry follow the APLE model for the production of new variants. The main limitation of this approach is that no version control or quality achievement module is defined. Furthermore, a feature dependency check is also a must, and it is missing in the proposed approach.</ns0:p><ns0:p>These features are very useful, and hence they are more user-centric. The model is built on a merge algorithm to make the feature model more comprehensive and efficient. The main limitation of this approach is that the model does not provide for the quality achievement of the final product. Moreover, the model also fails to provide feature dependency and analysis checks before features are integrated into the new product. <ns0:ref type='bibr'>(da Silva et al., 2014)</ns0:ref> presented a new agility-based approach for scoping the product line details. These details are gathered using communication and interviews with the customer, with a strong focus on user involvement to help the developer deliver a product of the required quality. The main limitation of this approach is that it does not address the quality of the product, and feature dependency is not checked while selecting the features for the new product variant.</ns0:p><ns0:p>The key issues in these approaches are the neglect of feature selection before validation, irrelevant variations, and incompatibility with HCS. Therefore, variation identification, variation management, and mapping are important to manage version control and the relevant selection of reuse components, with proper documentation, repository management, and valid identification of test cases for the selected reuse components. To solve these issues, a comprehensive method is required that not only selects the least dependent component but also delivers automatic documentation along with requirement analysis for better variation management. A summary of the literature review is given in Tab. 1. Therefore, the proposed model resolves the identified problems through correct variation identification, accurate dependency evaluation of selected components, and validation of reuse components for variation in a new product.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Proposed Process Model</ns0:head><ns0:p>A novel agile-enabled software product line engineering model is introduced, based on the scrum process presented in <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref> and the frameworks proposed in <ns0:ref type='bibr'>(Mellado, Fern&#225;ndez-Medina &amp; Piattini, 2010)</ns0:ref>. This model provides support for the configuration and development of highly configurable systems. The architectural representation of the proposed model is shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The proposed model is explained in detail, with valid component selection along with its algorithm and prototype, based on variability management using reusability and user feedback. Therefore, the proposed model bridges the gaps from user requirements identification to validation in a system based on reuse and restructuring, with complete documentation to manage variability. This helps in managing the complexity and resources of HCS while developing a series of HCS products from requirements to validation.</ns0:p><ns0:p>Thus, the proposed model is composed of two processes, as in any other SPLE process, i.e., domain engineering and application engineering. Domain engineering controls the development and maintenance of the domain and its related product development aspects, such as designs, features, and variability management; all aspects of the domain are managed in this process. On the other hand, the application engineering process controls the application-related tasks and aspects. The analysis of application strategies such as business goals and marketing strategies is also considered, after which the design, implementation, and testing of the software variant are carried out in this process. The main components of the proposed model include dependency evaluation, variation management, documentation, and traceability testing. The problems identified in previous models include the lack of documentation for component selection and test suite case pick-up along with the end-user requirements. These requirements help the developers provide software of the desired quality by tracing the requirements back to ensure the existence of all the functional and non-functional requirements in the system.</ns0:p><ns0:p>Moreover, the proposed model provides the classification of identified variations and commonalities based on their dependencies. These dependencies provide the list of dependent features for the selected component. A detailed discussion of the components of the proposed model is given below:</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Main Entities of Proposed Process Model</ns0:head><ns0:p>This section discusses the entities that are part of the proposed model. These entities are important for understanding the complete working of the proposed mechanism. We used QeAPLE as a basic tool for component selection and validation. For task allocation, design and development work synchronization, team coordination and communication, and documentation version management, we used a Team Foundation Server repository together with a prototype repository to align all the activities of the proposed model.</ns0:p></ns0:div>
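The paper does not prescribe a concrete schema for the domain assets, test case, and documentation repositories managed by these tools. The following is a minimal, hypothetical sketch (all names and fields are illustrative, not taken from the QeAPLE tool) of how a reusable component entry could be recorded so that the later selection, dependency evaluation, and traceability steps have the data they need.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComponentRecord:
    """One reusable asset in the domain repository (illustrative fields only)."""
    component_id: str
    features: List[str]            # features the component realizes
    depends_on: List[str]          # ids of components it requires
    test_case_ids: List[str]       # linked entries in the test case repository
    requirement_ids: List[str]     # requirements it traces back to
    version: str = "1.0"

@dataclass
class DomainRepository:
    """A trivial in-memory stand-in for the domain assets repository."""
    components: List[ComponentRecord] = field(default_factory=list)

    def add(self, record: ComponentRecord) -> None:
        self.components.append(record)

# Example usage with an invented asset
repo = DomainRepository()
repo.add(ComponentRecord(
    component_id="payment-gateway",
    features=["online payment", "refund"],
    depends_on=["user-auth"],
    test_case_ids=["TC-101", "TC-102"],
    requirement_ids=["REQ-7"],
))
```

Keeping dependencies, test case links, and requirement links in one record is what makes the dependency check and requirement tracing described below straightforward to automate.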
<ns0:div><ns0:head n='3.1.1'>Application Requirements</ns0:head><ns0:p>When a new product or one of its variants is going to be developed, the very first activity is requirement gathering. These requirements are the instructions from the end-user or from the market that must be incorporated in the software to be developed. For correctness and completeness, we consider the diverse perspectives of stakeholders and involve them during requirements analysis and prioritization. Whenever new requirements are gathered from the users, they are checked against the domain assets repository following case-based reasoning steps, i.e., identify new requirements based on domain expert review and experience, find similar requirements for reuse and restructuring from the repository, modify requirements according to the new system, and refine non-similar requirements to obtain complete and correct requirements. This improves the relevant selection of components for reuse and restructuring with high productivity. After the selection of the components and features, the components are checked for their dependency, and the component with the least dependency is selected from the list of identified components against each requirement.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2'>Common Reference Architecture</ns0:head><ns0:p>Any company offering or maintaining an SPLE process has a generic architecture that includes all the core functionalities. These functionalities or features are then tailored according to the requirements of the end-user to make a new variant of the existing domain. This helps the developers tackle the new product more efficiently. The architecture is also used for the identification of commonalities and variations for the new product. These variations are expressed in the form of classes and stored in the documentation of that particular product.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.3'>Variation and Commonalities Identification</ns0:head><ns0:p>When the requirements for the new product variants are received from the end-users, they are passed to the generic domain architecture and the product domain version control. From these modules, the variations and commonalities with respect to the previous versions are identified. The identification of these variations is very important, as they give an identity to the various versions of the product domain.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.4'>Component Selection</ns0:head><ns0:p>According to the received requirements, the components need to be selected from the database of domain assets. These lists of components are then further sorted into a single component list. These components carry information about the features related to the product domain, and these features may be reused in every variant corresponding to that product domain. We used the steps of case-based reasoning, which were adopted from the study <ns0:ref type='bibr'>(Ali, Iqbal &amp; Hafeez, 2018;</ns0:ref><ns0:ref type='bibr'>Ali et al., 2021b)</ns0:ref>. The interfaces of the QeAPLE prototype tool are depicted in Fig. <ns0:ref type='figure'>2</ns0:ref> and Fig. <ns0:ref type='figure'>3</ns0:ref>. These interfaces of the prototype describe the functionalities of component selection after the identification of changes in HCS-based SPL systems, using the case-based reasoning steps explained earlier with the involvement of experts and stakeholders. A minimal illustrative sketch of this selection step is given below.</ns0:p></ns0:div>
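The paper describes component selection as case-based retrieval of assets that match the new requirements, followed by the dependency check of the next subsection. The exact similarity measure is not specified, so the sketch below is only an illustration under stated assumptions: hypothetical component names, a keyword-overlap similarity as a stand-in for the case matching, and the number of declared dependencies as the selection criterion.

```python
from typing import Dict, List, Optional

def similarity(requirement: str, features: List[str]) -> float:
    """Assumed keyword-overlap (Jaccard) similarity between a requirement and component features."""
    req_words = set(requirement.lower().split())
    feat_words = set(" ".join(features).lower().split())
    if not req_words or not feat_words:
        return 0.0
    return len(req_words & feat_words) / len(req_words | feat_words)

def select_component(requirement: str,
                     repository: List[Dict],
                     threshold: float = 0.2) -> Optional[Dict]:
    """Retrieve matching candidates, then prefer the one with the fewest dependencies."""
    candidates = [c for c in repository
                  if similarity(requirement, c["features"]) >= threshold]
    if not candidates:
        return None  # no similar case found: treat the requirement as new
    return min(candidates, key=lambda c: len(c["depends_on"]))

# Example usage with two invented assets
repo = [
    {"id": "report-basic", "features": ["generate sales report"], "depends_on": ["db"]},
    {"id": "report-pro", "features": ["generate sales report", "export"], "depends_on": ["db", "export", "auth"]},
]
print(select_component("generate a sales report for managers", repo))  # -> report-basic
```

Both candidates match the requirement here, and the least dependent one is returned, which mirrors the rule stated at the end of Section 3.1.1.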
<ns0:div><ns0:head n='3.1.5'>Dependency Evaluation</ns0:head><ns0:p>This is one of the major parts of the proposed process model in this research work. This module provides details about the dependency of the component best suited to the requirements on the other components selected for the software. The main objective of this module is to resolve the dependency of the most suitable component or feature: it finds the suitable component with the least dependency among the assets and then forwards that component to the next phase.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.6'>Component Testing</ns0:head><ns0:p>The selected components then undergo a testing phase before they are integrated to form the final product. The tests are selected from the test suites, a large repository, for the retesting of the components. The main objective of this module is to ensure the desired quality of the product. According to the requirement and the component, the suitable test suite is extracted and applied to the component. If the component does not conform to the required functionalities, it is rejected; otherwise, it is selected for integration. A minimal sketch of this accept/reject step is given after the repository description below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.7'>Test Suite Cases Repository</ns0:head><ns0:p>This is another repository for the particular product domain. It is mainly composed of the test cases corresponding to the components of the product domain. These test cases are classified according to the level of the end-users' non-functional requirements and the type of functionality they cover. These test cases are selected on the go when a new component needs to enter the product. The interface of the QeAPLE prototype tool is shown in Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>.</ns0:p></ns0:div>
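Taken together, Sections 3.1.6 and 3.1.7 amount to looking up the test cases registered for a component, filtering them by the required non-functional level, running them, and rejecting the component if any of them fails. The following is a hedged, minimal sketch of that accept/reject step; the repository layout, the `run` callables, and the NFR levels are illustrative assumptions rather than the QeAPLE tool's actual interface.

```python
from typing import Dict, List

# Hypothetical test case repository: component id -> list of test cases.
# Each test case declares the NFR level it targets and a callable that runs it.
TestCase = Dict[str, object]
TEST_REPOSITORY: Dict[str, List[TestCase]] = {
    "payment-gateway": [
        {"id": "TC-101", "nfr_level": "basic", "run": lambda: True},
        {"id": "TC-102", "nfr_level": "strict", "run": lambda: True},
    ],
}

def accept_component(component_id: str, required_nfr_level: str) -> bool:
    """Return True only if every applicable test case for the component passes."""
    cases = [tc for tc in TEST_REPOSITORY.get(component_id, [])
             if required_nfr_level == "any" or tc["nfr_level"] == required_nfr_level]
    if not cases:
        return False  # no evidence of conformance: treat the component as rejected
    return all(tc["run"]() for tc in cases)

# Example: the component is integrated only if it passes its selected tests.
if accept_component("payment-gateway", "strict"):
    print("integrate component")
else:
    print("reject component and arrange a replacement")
```

The rejected-component branch corresponds to the flow described next, where failed modules are turned back and replacements are arranged.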
This will help the developer to maintain the software and provide a valid update according to the market needs and requirements.</ns0:p><ns0:p>For the selection of the most suitable components and features that conform to the new requirements for the new product variant, the domain asset repository is used. In that repository, the most suitable components are filtered out among the lists of the components. After the selection of the most suitable components and features, the selected components are provided to the dependency checker module that confirms the dependency of the selected module. This process continues in the iteration, and each component with the least dependency is finally selected at this stage.</ns0:p><ns0:p>After the selection of the least dependent components and features, the next step is the integration of these components to provide the desired software. But before the integration of these components, there is a phase where these selected components are get tested using the predefined test cases. These test cases are provided by the test case repository. This repository provides the test cases based on not only the functional properties of the product but also encounter the non-function aspect of the new product variant. Thus, it ensures the desired quality of the product. Afterward, the tested components are allowed to integrate while misfit or failed modules are again turned back and for them, replacement is arranged.</ns0:p><ns0:p>After the completion of the product, the used components and their corresponding test cases are stored in the documentation that is maintained for that particular product variant.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Experimental Evaluation</ns0:head><ns0:p>This section provides a discussion about the empirical evaluation of the proposed model. For that, an experiment is conducted in which the proposed approach is evaluated. The evaluation is made regarding the ease with which the proposed approach can understand and adapt by the practitioners, expected effort required to execute the proposed model, quality achievement of end-product achieved by using the proposed model, complexity reduced by the model for maintenance of end-product variant and improved version management for variants. The experimental details, conducted for the validation of the proposed approach are discussed below.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Experiment Design</ns0:head><ns0:p>The main objective of this evaluation is to know how it affects the development process of SPL; the experiment is conducted to compare the proposed model with one that is closely related to our approach <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref>. The reason to select a single model for comparison is that mostly followed and adopted by researchers and industrialists respectively. And have lacked some of the main features in the selected model relevant HCS variability management by mapping requirements and validation activities after the identification of a relevant selection of components.</ns0:p><ns0:p>In this experiment, the proposed process model is used by the treatment group and the previously proposed process model e.g. <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref> by the control group. The comparison of both models will allow a better understanding of the improvement of the proposed model with the previous one. 
The selection of the previously proposed approach is based on the following reasons.</ns0:p><ns0:p>Practical Relevance: The process model proposed in <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref> resembles the proposed process model in the sense that it also integrates agile practices into the application engineering (AE) phase of the SPL process. The comparison therefore provides validation of the practitioners' perspective on adopting the proposed process model.</ns0:p></ns0:div> <ns0:div><ns0:head>Time Limitation:</ns0:head><ns0:p>There are other SPLE-based frameworks and models, but due to time constraints, this research work is confined to a comparison with only one of them.</ns0:p><ns0:p>The Goal, Research Questions, and Hypotheses: The goal of this experiment is the comparison of the proposed process model with one of the existing process models <ns0:ref type='bibr'>(Mollahoseini Ardakani, Hashemi &amp; Razzazi, 2018)</ns0:ref> based on the ease of understandability, required effort, desired quality achievement, maintenance complexity, and improved version management metrics. Depending on these comparison scales, the following research questions are derived.</ns0:p><ns0:p>RQ1: Is the ease of adoption and understandability improved? RQ2: Is the effort required to execute the different phases reduced? RQ3: Is the desired quality of the product variant achieved? RQ4: Are the maintenance cost and effort of the developed product minimized? RQ5: Is the variation management of the product improved?</ns0:p><ns0:p>The next step is the formulation of the hypotheses to be accepted or rejected based on the experimental results. The null hypotheses of the experiment state that there is no difference between the two process models with respect to ease of understanding, required effort, desired quality achievement, maintenance complexity, and version manageability. The null hypotheses for the defined research questions are given in Tab. 2.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2: Null hypotheses</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.1'>Independent and Dependent Variables</ns0:head><ns0:p>In any empirical experiment, two types of variables are defined, i.e., dependent variables and independent variables. In this study, the dependent variables capture the treatment applied in the experiment, and the effect of that treatment is measured through the independent variables. In this experiment, the dependent variables are dependency evaluation while selecting the components, automatic initial documentation of user stories, traceability-oriented testing of the end-product, and dependency-matrix-based version management of components. The independent variables are ease of adaptability and understanding, required effort, ability to achieve the desired quality of the product variant, maintenance complexity, and version management of the product variants. The selected dependent and independent variables are shown in Tab. 3 and Tab.
4 respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 3: Dependent variables</ns0:head></ns0:div> <ns0:div><ns0:head>Table 4: Independent variables</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.2'>Experiment Case</ns0:head><ns0:p>A case is a contemporary phenomenon studied within its real-life context <ns0:ref type='bibr' target='#b11'>(Geogy &amp; Dharani, 2016)</ns0:ref>. In this research work, the case is a course project conducted at COMSATS University Islamabad, Pakistan with two groups of students. These students have studied courses covering coding, software architecture, and agile methodologies, and have some knowledge of product line engineering processes and HCS. To remove bias, all the students were given dedicated classes on SPL and HCS. Each group is composed of 30 students; the first 30 students form group A and the other 30 students form group B. Group A is the control group while group B is the treatment group. A control group is used to measure the effect of change when the newly proposed approach is applied to the treatment group. Group A applied the existing method to the given project requirements for new HCS product development based on APLE with complete information about previous versions. Similarly, group B developed the product following the steps of the proposed model. All participants were trained in the method they applied during the development of the HCS to obtain a high-quality product. After the training, the students applied their methods based on APLE to HCS development. Further, 15 subgroups were formed in each group, i.e., 2 students per subgroup. Each subgroup was given the same domain project idea of developing and maintaining an inventory system product line. Group A followed the existing process model to manage the domain and to generate a new variant, while group B used the proposed process model to develop and maintain the product line and its corresponding variant.</ns0:p><ns0:p>Summarizing the above discussion, the case is the activity performed in this experiment to assess the worth of the proposed process model based on the metrics selected as the independent variables mentioned below: &#61623; Ease of adaptability and understanding &#61623; Required effort &#61623; Ability to achieve desired quality product variant &#61623; Maintenance complexity &#61623; Version management of the product variants</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.3'>Experimental Process</ns0:head><ns0:p>The main steps of the experiment are described in Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>. In the first step, the project requirements are collected from the students and the team of the selected organization and transferred to every member using various tools like Microsoft Teams, Cooja, etc., to provide the basic details about the tasks they must perform. The reason for adopting several communication methods instead of a single platform is that the team and students participating in the experiment were distributed across locations, had different communication languages, and used different media for communication. After providing them with the required knowledge, the total number of 60 students was divided into two groups of 30 students each, labeled A (control group) and B (treatment group).
The next step after the division of the group is the provision of the details about the existing SPLE process and model to the control group and the proposed process model to the treatment group. After all the initial setup and provision of details, students are allowed to develop and maintain an inventory management system as a domain product and to allow the extraction of the various product variants. The domain development and maintenance are lengthy tasks. So, to provide the students with ease, an already developed domain product was taken as a test-bed. This domain product line is provided by a software company named Alachisoft located in Islamabad. After that, each group was asked to provide a new product variant from the domain assets using both models. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.1.4'>Participants</ns0:head><ns0:p>There are some constraints during the selection of the participants for the software experiment. It is difficult to receive relevant outcomes if the experiment has insufficient participants and if the sample is not representative enough, then test effects can be debated. <ns0:ref type='bibr'>(Ro &amp; Kubickova, 2013)</ns0:ref> suggest that in various disciplines students are used as an experimental subject and lots of debates are taking place for many years among the scientific community of using the student as a research subject. It is an extended debate in the research network for treating students as subjects in case studies and experiments. Participants selected for the execution of the experiment were third-year students who have studied agile development methods, software engineering, and software architecture. Along with that these students also have special courses for the knowledge of SPLE and HCS. The required tasks for the execution of the experiment are provided to the students in the fall semester from Sept 16, 2019, to Nov 28, 2019.</ns0:p><ns0:p>To remove the biasness, the selection of the students was made randomly, and it is ensured that all the students have the approximately same skill set. According to the setup, the control group experimented, using the existing process model, and the treatment group experimented using the proposed process model. For the evaluation of the skill level and experience of the students selected as participants, a questioner was used. Most of the participants PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68239:1:0:NEW 2 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>undergo their BS final projects. Among 60, 32 students were involved in industrial projects, 18 students performed excellence in their bachelor's degree and were awarded medals. Furthermore, these students were also asked if any of them has undergone any open-source project. In which 6 students admitted that they have performed open-source projects. Finally, the students were asked to mention their level of expertise between beginner, mediator, and experience in software engineering. Among them, 26 students went with beginners, 22 students said they are a mediator, and the remaining 12 students go with experienced. Student demographic information is shown in Tab. 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 5: Student demographics information</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1.5'>Algorithm</ns0:head><ns0:p>The purpose of the algorithm is to identify the parameters like selecting suitable components. 
This algorithm helps practitioners in the selection of less dependent components. A prototype tool was developed for QeAPLE in which this algorithm is implemented. The requirements are the input to the algorithm, and the list of the least dependent modules is its output. At the initial stage, the dependent variables are initialized to null values. The steps of the algorithm are mentioned below:</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Analysis of Experimental Data</ns0:head><ns0:p>This section discusses the statistical analysis of the data gathered from the experiment through a questionnaire filled in by the students. The questionnaire helps in collecting and analyzing data after the experiment to evaluate the effectiveness of the proposed model and the performance of the participants of both groups using the proposed and existing models. The effectiveness of the proposed model was used to analyze whether the problems identified in the literature were resolved. Similarly, the performance of the participants helps in assessing their satisfaction level in terms of understandability, effort, time, and cost. To evaluate the results, a quantitative analysis procedure is adopted. The analysis of the data starts with a normality check. For this purpose, several tests, including Q-Q plots (qqnorm, qqline), the Shapiro-Wilk test, and the Anderson-Darling test, are executed. The p-values obtained from these tests are shown in Tab. 6. As the p-value is less than the significance level, the data are not normally distributed. So, to validate such data, the Mann-Whitney U test is executed for the comparison of the independent variables <ns0:ref type='bibr' target='#b14'>(Ghasemi &amp; Zahediasl, 2012)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1'>RQ1: Easy to Adapt and Understand</ns0:head><ns0:p>The experimental data obtained for ease of understandability and adaptability are not normally distributed, as shown in Tab. 6. Therefore, to test the hypothesis formulated for RQ1, the Mann-Whitney U test is applied, and to find the direction of change, the A12 test is applied <ns0:ref type='bibr'>(Narasimhan et al., 1986)</ns0:ref>. The results of these tests are described in Tab. 6. As shown in Tab. 6, the p-values for group A and group B are 0.00032 and 0.0019 for the ease-of-understanding variable. Furthermore, the graphical representation of these results is shown in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. According to the results of the test, there is a significant difference between the existing and the proposed process models with respect to ease of understandability and adaptability, which shows the superiority of the proposed process model over the previous one. Along with A12, the mean values were also calculated from the questionnaires filled in by the subjects, which also supports the argument about the advantage of the proposed process model. Finally, the null hypothesis formulated for RQ1 is rejected and, as a result, the alternative hypothesis is accepted.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.2'>RQ2: Expected Effort</ns0:head><ns0:p>To calculate the effort required to follow the process model, the total time consumed in executing the model is selected as the parameter. The time required for each activity is calculated and then summed to get the overall time.
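The statistical pipeline used throughout Section 4.2 (normality check, Mann-Whitney U, the A12 direction of change, and Cohen's D magnitude) can be reproduced in Python roughly as follows; the two score arrays are made-up placeholders and do not correspond to the study's data.

```python
import numpy as np
from scipy import stats

group_a = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.2])  # control (existing model)
group_b = np.array([4.2, 4.0, 4.5, 3.9, 4.4, 4.1])  # treatment (proposed model)

# Normality check (Shapiro-Wilk); a small p-value means "not normal",
# which motivates the non-parametric Mann-Whitney U test below.
print("Shapiro p:", stats.shapiro(group_a).pvalue, stats.shapiro(group_b).pvalue)

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print("Mann-Whitney U p:", p_value)

# Vargha-Delaney A12: probability that a value from B exceeds one from A
# (0.5 = no difference; values above 0.5 favour the treatment group).
greater = sum(b > a for a in group_a for b in group_b)
ties = sum(b == a for a in group_a for b in group_b)
a12 = (greater + 0.5 * ties) / (len(group_a) * len(group_b))
print("A12:", a12)

# Cohen's d for the magnitude of the difference (pooled standard deviation).
pooled = np.sqrt(((len(group_a) - 1) * group_a.var(ddof=1) +
                  (len(group_b) - 1) * group_b.var(ddof=1)) /
                 (len(group_a) + len(group_b) - 2))
print("Cohen's d:", (group_b.mean() - group_a.mean()) / pooled)
```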
After the execution of the experiment, the subjects were asked to fill in the questionnaire to give their opinions. After getting the responses from the subjects, the normality test was applied, which showed that the data are not normally distributed. To evaluate the hypothesis proposed for RQ2, the non-parametric Mann-Whitney U test was applied to the data.</ns0:p><ns0:p>The result obtained from the statistical tests is shown in Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref> and describes the time required to complete the different tasks. To find the direction of the significance for both process models, the A12 test is applied; its result is shown in Tab. 6. To find the magnitude of the difference, the Cohen's D test is applied; its result is also shown in Tab. 6. The Cohen's D results show that there is a medium difference between the two process models. Finally, the results of the experiment reject the null hypothesis and thus the alternative hypothesis is accepted.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.3'>RQ3: Better Quality Achievement</ns0:head><ns0:p>To calculate the degree to which the desired quality of the product variant is achieved for both process models, the specifications of the parameters were collected and shown to the practitioners, who filled in the questionnaire after reviewing the requirements for the product and the new product variant. To check the normality of the data, the normality test was applied. To evaluate the hypothesis proposed for RQ3, the non-parametric test was applied to the data; the resulting p-value is shown in Tab. 6. The result obtained from the statistical tests is shown in Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. Furthermore, to find the direction of the significance, the A12 test is applied, which shows that the proposed process model is more effective than the existing one. After finding the direction, the next check was the evaluation of the magnitude of the difference between the two process models. For this purpose, the Cohen's D test was applied, which indicates a medium difference between both models. Therefore, the null hypothesis is rejected, and the alternative hypothesis is accepted. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.4'>RQ4: Maintenance Complexity</ns0:head><ns0:p>To evaluate the complexity of maintaining and updating the product, every group was asked to make some changes in the newly developed product variant. They first needed to identify the corresponding change, then select the proper component, and finally perform the testing and integration. The evaluation parameter selected for the validation of maintenance complexity was the total time taken by the groups to maintain or incorporate updates in the newly developed product variant. To obtain the statistical data, the questionnaire was filled in by the subjects, and the total time taken for the incorporation of updates was recorded, as shown in Fig. 8. The incorporation of the practitioners' advice is important here to acknowledge the accuracy with which the updates are performed in the developed system. The mean values gathered from the test are then checked for normality.
The normality test provides the information that the data are not normally distributed; thus, the non-parametric test is used for the evaluation of the hypothesis.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 8: Maintenance complexity</ns0:head><ns0:p>After checking the normality of the data, the Mann-Whitney U test was applied, and its result is shown in Tab. 6. This shows that there is a difference between the two approaches, as the p-value is less than 0.05. To find the direction of the difference, the A12 test is applied, which shows that the proposed process model is better than the existing model. Further, to check the magnitude of the difference, the Cohen's D test is applied, which shows that there is a medium difference between the two process models. Based on this analysis, the null hypothesis proposed for RQ4 is rejected and the alternative hypothesis is accepted.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.5'>RQ5: Improved Version Management</ns0:head><ns0:p>The values obtained from the experiment were then checked for normality. The normality test shows that the experimental data are not normally distributed. To check and validate the hypothesis, the non-parametric Mann-Whitney U test is performed on the experimental data; its result is shown in Tab. 6. As the results describe, the p-value is lower than 0.05, which means there is a difference between the two process models. To check the direction of the change, the A12 test is applied; A12 shows that the proposed process model is better than the existing process model. To check the magnitude of the difference, the Cohen's D test is applied, which shows that there is a medium difference between the two process models.</ns0:p><ns0:p>Based on these findings, the null hypothesis proposed for RQ5 is rejected and, as a result, the alternative hypothesis is accepted.</ns0:p><ns0:p>The whole experiment is based on the questionnaire attached in Appendix A. For the reliability of the questionnaire, we performed a reliability statistical analysis by applying a reliability test to check data bias and accuracy. For the reliability test, we used the SPSS 23 tool and extracted the results automatically. The participants' information and the results of the statistical tests are given in Tab. 6.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Threats to Validity</ns0:head><ns0:p>This section discusses the threats to the validity of the experiment according to the guidelines provided in <ns0:ref type='bibr' target='#b21'>(Heck &amp; Zaidman, 2018;</ns0:ref><ns0:ref type='bibr'>Lindohf et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b26'>Kiani et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.1'>Construct Validity</ns0:head><ns0:p>The main focus of this threat is the ability to measure the required quantities operationally and without error. In this experiment, the main objective is to measure the efficiency of both process models; therefore, the same evaluation factors are defined for both models. Furthermore, the subjects were told that this activity would play no role in the grading of any course, so that it would not cause any bias. To keep the experimental hypotheses private, the information about them was kept hidden from the subjects to avoid any bias toward the researcher. Hence, error and bias were avoided during the experiment while using both methods.
The participants of the proposed model and existing model were fully trained before the execution of methods during development of HCS.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.2'>Internal Validity</ns0:head><ns0:p>The main aim of this threat is the problem of biasness caused by the casual relationship between the experiment subject and the researcher. To make a clear evaluation of the proposed model, the experiment is done very carefully by providing all the necessary tutorials and labs to the experiment subject. Furthermore, to overcome the biasness, complete random groups were designed and further the students were advised to actively participate without being afraid of any grade manipulation. To ensure the complete presence of the students they are also asked to further provide their values and opinion about how the process can be improved further. The participants performance was not influenced with any type of relations and participants of both groups separately performed development activities without knowing each other's in different times and environments.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.3'>External Validity</ns0:head><ns0:p>The main concern of this threat is the generalization of the results concluded from the experiment. The experiment was conducted using the students belonging to COMSATS University. Therefore, the participants used for the execution of the experiment are not professionals. The reason behind the selection of students as an experimental subject lies in the least availability of professionals from the industry. Furthermore, most of the empirical research in software engineering uses student and experimental subjects for the execution of the experiment. Finally, the nature of the experiment doesn't require the professional to be part of the experiment.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.4'>Conclusion Validity</ns0:head><ns0:p>Violating the statistical test assumption may result in a conclusion not much accurate. The experimental data is on an interval scale that could be a risk for statistical tests for the achievement of better results. The non-parametric Mann-Whitney U test is used for making these assumptions. Our sample size fulfills the criteria for the statistical test but is not too large because of large sample size increases the power of the test.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Conclusion and Future Work</ns0:head><ns0:p>Many software development process models are described in the literature that tends to join the SPL and APL to provide the comprehensive end product variant in large industries. These process models lack the proper documentation, not ensuring the quality of the components and details about the selection of the features based on the required specification. To address these problems, a hybrid APL model, QeAPLE is proposed that provides support for HCS by evaluating the dependency of features before making the final selection. It provides a comprehensive way for the selection of the components that are least dependent upon each other. 
Moreover, it also provides well-detailed documentation along with the testing of the selected components to clinch the quality of software and sparing time of the post-testation of the released product variant.</ns0:p><ns0:p>The main augmentation of this research effort comprises of:</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>The presentation of innovatory knowledge about the agile, SPL, and their integration for the development of systems especially for HCS systems.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>The proposition of the new hybrid process model allows the incorporation of SPL and agile processes together with the development support for HCS using the least dependent component selection.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>The evaluation of the proposed approach using the use case study and practitioner close-ended interviews along with the empirical evaluation executed using students as subjects.</ns0:p><ns0:p>The possible future works could be:</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>The main future direction could be the shortness of the time taken for the selection of the components.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>Could be the introduction of AI technology result in better selection of component that is least dependent and highly effective for the required requirements of a variant.</ns0:p></ns0:div> <ns0:div><ns0:head>Funding Statement:</ns0:head><ns0:p>The work reported in this manuscript was supported by the National Natural Science Foundation of China under Grant 61672080.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>-based system model consisting of SPLbased features and developed under agile methodology to improve the quality of HCS during verification identification, version control, and management for reuse of components during development. &#61623; To improve quality and productivity of HCS for SPL based component-based system a QeAPLE Model is proposed for APLE for HCS based SPL to manage variabilities and relevant selection of components depending on user feedback and reusability for identification, managing, and selection of variation and their relevant components for reuse and version control. &#61623; To automate the QeAPLE model developed a prototype based on the designed algorithm for the correct and relevant selection of components for reuse to manage variability during the development of SPL-based HCS products. The implemented in a real-world environment to evaluate the performance of prototype and practice theory into practice. &#61623; To evaluate the effectiveness of the proposed model, an empirical study is performed by the practitioners with the help of the prototype in the real scenario for a practical implication of the QeAPLE model. &#61623; After that performed a comparative analysis in an empirical study to evaluate the effectiveness of the PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:68239:1:0:NEW 2 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: QeAPLE model for HCS</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 :Figure 3 :</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2: App prototype 1 Figure 3: App prototype 2</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: App prototype 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>In this research work, a case is a course project conducted at COMSATS University Islamabad, Pakistan with two PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68239:1:0:NEW 2 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Experimental process</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Mean comparison for the ease of understandability</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68239:1:0:NEW 2 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Expected effort</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,359.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of literature review</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>3 Quality Ensured Agile Product Line Engineering Process Model (QeAPLE)</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>P Value</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Response Letter to Reviewers Comments Original Article Title: “Towards a component-based system model to improve the quality of highly configurable systems” To: Re: Response to reviewers Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. Best regards, Reviewer 1 (Anonymous) 1. Basic reporting Needs thorough proofread Response: Thank you for suggestion. The updated manuscript thoroughly proofread. 2. Additional comments The work Towards a component-based system model to improve the quality of highly configurable systems is the current work and better presented in most of the aspects. However, authors can further improve their work in the following sections, 1. The abstract can be improved with the significance of the results. Response: Dear Reviewer, thank you for your comment. We have incorporated accordingly within the manuscript and highlighted with red color in abstract. 2. The introduction requires modification such as being concise. Response: 3. Further authors may elaborate on the research contribution in the introduction. Response: Dear Reviewer, thank you for your comment. We have incorporated accordingly within the manuscript and highlighted with red color. 4. Provided literature is better but authors may add a bit more on the literature. Response: Dear Reviewer, thank you for your comment. We have incorporated accordingly within the manuscript and highlighted with green color. 5. The proposed framework is not readable easily. Response: Dear Reviewer, thank you for your comment. We have incorporated accordingly within the manuscript. 6. Authors may elaborate about the data set they used. Response: Dear Reviewer, thank you for your comment. We have incorporated accordingly within the manuscript and highlighted changes with dark red color. 7. A few of the references need attention to complete the required information. Response: Dear Reviewer, thank you for highlighting. We have corrected references accordingly within the manuscript and highlighted with purple color. 8. Few sentences need careful attention, they are required to be rephrased such as ' Hence, results show that the easy of ease of understanding and adaptability, required effort, high-quality achievement, and version management is significantly improved i.e., more the 50 per cent as compared to exiting method i.e., less than 50 per cent' Response: Dear Reviewer, thank you for highlighting. We have corrected and rephrases sentences accordingly within the manuscript and highlighted with pink color. 9. Please carefully check the sentence structure ' da Silva et al. [35] presented a 185 new ability-based approach for scoping the product line details'. Response: Dear Reviewer, thank you for highlighting. We have checked sentence structure and incorporated changes accordingly where applicable in the manuscript. 10. Few references are not relevant like 39, 40. Response: Dear Reviewer, thank you for highlighting. We have corrected references and remove irrelevant references within the manuscript. 11. Authors please carefully use words Hypotheses and Hypothesis. Response: Dear Reviewer, thank you for highlighting. We have incorporated accordingly within the manuscript. Reviewer 2 (Mamoona Humayun) 1. There are few unusual long statements that may exceed three lines. A thoroughly English language revision is a must, where many English language, spelling errors, grammar and punctuation errors are found. 
Response: Dear Reviewer, thank you very much for suggestions. We have thoroughly proofread the updated manuscript. 2. The Abstract should clarify the evaluation criteria of the Component-Based System Model and the main findings and results of the evaluation quantitatively. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript and highlighted the changes with red color in abstract section. 3. The suggested title should be clear and enlightening, and should reflect the aim and approach of the study. Response: Dear Reviewer, thank you very much for appreciation. 4. A section should be added between the introduction and proposed model sections to clarify and emphasize the main contributions and its scientific justification and applicability, in this study. Response: Dear Reviewer, thank you very much for suggestions. We have elaborated the comments in the literature review section in the updated manuscript. 5. There are some grammatical and typo mistakes of English in paper and need some minor adjustments. Response: Dear Reviewer, thank you for highlighting. We have incorporated accordingly within the manuscript where applicable. 6. There are some formatting issues in caption of figures and tables, please address them. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript and highlighted the changes with dark teal color. 7. Enhance the pictures’ size for better presentation and readability. Response: Dear Reviewer, we appreciate your kind comment, picture size increases as per your suggestions. 8. Additional comments The “Research Problem” section must be under the heading of Introduction for better understanding of the proposed research. “Literature Study” section is not exhaustive to narrate existing work in the identified domain so it is better to include 2 or 3 more paragraphs including their references. Include more references focusing on highly configurable systems through software product line. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly and highlighted the changes within the manuscript. Reviewer 3 (Anonymous) Some comments given below which should be addressed in the final submission. 1) Correct spelling exiting into existing Line 28 and line 30. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript. 2) Author should adopt one method, i.e., either use abbreviation and full text for all or only use abbreviation except first time. For example, SPLE, HCS, QeAPLE etc. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript and highlighted the changes with aqua color. 3) Complete sentence at line 90. State-of-the-art ….. what next? Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript. 4) Author should read full paper for required English grammar corrections. Response: Dear Reviewer, thank you very much for suggestions. We have incorporated accordingly within the manuscript. "
Here is a paper. Please give your review comments after reading it.
368
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>IMDb being one of the popular online databases for movies and personalities, provides wide range of movie reviews from millions of users. This provides a diverse and large dataset to analyze users' sentiments about various personalities and movies. Despite being helpful to provide the critique of movies, the reviews on internet movie database (IMDB) can not be read as whole and requires automated tool to provide insight. This study provides the implementation of various machine learning models to measure the polarity of the sentiment presented in user reviews on IMDB website. For this purpose, the reviews are first preprocessed to remove redundant information and noise and then various classification models like support vector machines (SVM), Na&#239;ve Bayes classifier, random forest and gradient boosting classifier are used to predict the sentiment of these reviews.</ns0:p><ns0:p>The objective is to find the optimal process and approach to attain the highest accuracy with best generalization. Various feature engineering approaches such as term frequencyinverse document frequency (TF-IDF), bag of words, global vectors for word representations and Word2Vec are applied along with the hyperparameter tuning of the classification models to enhance the classification accuracy. Experimental results indicate that the SVM obtains the highest accuracy when used with TF-IDF features and achieves an accuracy of 89.55%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 36</ns0:head><ns0:p>Social media has become an integral part of human lives in recent times. People want to share their 37 opinions, ideas, comments, and daily life events on social media. In modern times, social media is used 38 for showcasing one's esteem and prestige by posting photos, text, and video clips, etc. The rise and wide 39 usage of social media platforms and microblogging websites provide the opportunity to share as you like 40 where people share their opinions on trending topics, politics, movie reviews, etc. Shared opinions on 41 social networking sites are generally known as short texts (ST) concerning the length of the posted text search engines as queries. Apart from being inspiring, the ST contains users' sentiments about a specific personality, topic, or movie and can be leveraged to identify the popularity of the discussed item. The process of mining the sentiment from the texts is called sentiment analysis (SA) and has been regarded as a significant research area during the last few years <ns0:ref type='bibr' target='#b19'>Hearst (2003)</ns0:ref>. Sentiments given on social media platforms like Twitter, Facebook, etc. can be used to analyze the perception of people about a personality, service, or product, as well as, used to predict the outcome of various social and political campaigns.</ns0:p><ns0:p>Thus, SA helps to increase the popularity and followers of political leaders, as well as, other important personalities. Many large companies like Azamon, Apple, and Google use the reviews of their employees to analyze the response to various services and policies. 
In the business sector, companies use SA to derive new strategies based on customer feedback and reviews <ns0:ref type='bibr' target='#b18'>Hand and Adams (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Alpaydin (2020)</ns0:ref>.</ns0:p><ns0:p>Besides the social media platforms, several websites serve as a common platform for discussions about social events, sports, and movies, etc., and the internet movie database (IMDB) is one of the websites that offer a common interface to discuss movies and provide reviews. Reviews are short texts that generally express an opinion about movies or products. These reviews play a vital role in the success of movies or sales of the products <ns0:ref type='bibr' target='#b1'>Agarwal and Mittal (2016)</ns0:ref>. People generally look into blogs, review sites like IMDB to know the movie cast, crew, reviews, and ratings of other people. Hence it is not only the word of mouth that brings the audience to the theaters, reviews also play a prominent role in the promotion of the movies. SA on movie reviews thus helps to perform opinion summarization by extracting and analyzing the sentiments expressed by the reviewers <ns0:ref type='bibr' target='#b20'>Ikonomakis et al. (2005)</ns0:ref>. Being said that the reviews contain valuable and very useful content, the new user can't read all the reviews and perceive the positive or negative sentiment. The use of machine learning approaches proves to ease this difficult task by automatically classifying the sentiments of these reviews. Sentiment classification involves three types of approaches including the supervised machine learning approach, using the semantic orientation of the text and use of SentiWordNet based libraries <ns0:ref type='bibr' target='#b48'>Singh et al. (2013a)</ns0:ref>.</ns0:p><ns0:p>Despite being several approaches presented, several challenges remain unresolved to achieve the best possible accuracy for sentiment analysis. For example, a standard sequence for preprocessing steps is not defined and several variations are used which tend to show slightly different accuracy. Bag of words (BoW) is widely used for sentiment analysis, however, Bow loses word order information. Investigating the influence of other feature extraction approaches is of significant importance. Deep learning approaches tend to show better results than the traditional machine learning classifiers, but the extent of their better performance is not defined. This study uses various machine learning classifiers to perform sentiment analysis on the movie reviews and makes the following contributions &#8226; This study proposes a methodology to perform the sentiment analysis on the movie reviews taken from the IMDB website. The proposed methodology involves preprocessing steps and various machine learning classifiers along with several feature extraction approaches.</ns0:p><ns0:p>&#8226; Both simple and ensemble classifiers are tested with the methodology including decision trees (DT), random forest (RF), gradient boosting classifier (GBC), and support vector machines (SVM). In addition, a deep learning model is used to evaluate its performance in comparison to traditional machine learning classifiers.</ns0:p><ns0:p>&#8226; Four feature extraction techniques are tested for their efficacy in sentiment classification. 
Feature extraction approaches include term frequency-inverse document frequency (TF-IDF), BoW, global vectors (GloVe) for word representations, and Word2Vec.</ns0:p><ns0:p>&#8226; Owing to the influence of the contradictions in users' sentiments in the reviews and assigned labels on the sentiment classification accuracy, in addition to the standard dataset, TextBlob annotated dataset is also used for experiments.</ns0:p><ns0:p>&#8226; The performance of the selected classifiers is analyzed using accuracy, precision, recall, and F1 score. Additionally, the results are compared with several state-of-the-art approaches to sentiment analysis.</ns0:p><ns0:p>The rest of this paper is organized as follows. Section 2 discusses few research works which are closely related to the current study. The selected dataset, machine learning classifiers, and preprocessing procedure, and the proposed methodology are described in Section 3. Results are discussed in Section 4 and finally, Section 5 concludes the paper with possible directions for future research.</ns0:p></ns0:div> <ns0:div><ns0:head>2/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>A large amount of generated data on social media platforms on Facebook, and Twitter, etc. are generating new opportunities and challenges for the researchers to fetch useful and meaningful information to thrive business communities and serve the public. As a result, multidimensional research efforts have been performed for sentiment classification and analysis. Various machine learning and deep learning approaches have been presented in the literature in this regard. Few research works which is related to the current study are discussed here; we divide the research works into two categories: machine learning approaches and deep learning approaches.</ns0:p><ns0:p>The use of machine learning algorithms have been accelerated in several domains including image processing, object detection and natural language processing tasks, etc. <ns0:ref type='bibr' target='#b5'>Ashraf et al. (2019a)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Khalid et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Ashraf et al. (2019b)</ns0:ref>. For example, The study <ns0:ref type='bibr' target='#b17'>Hakak et al. (2021)</ns0:ref> The authors implement several machine learning classification models for sentiment classification of IMDB reviews into positive and negative sentiments in <ns0:ref type='bibr' target='#b37'>Pang et al. (2002a)</ns0:ref>. For this purpose, a dataset containing 752 negative reviews and 1,301 positive reviews from the IMDB website is used. The research aims at finding the suitable model with the highest F1 score and best generalization. Various combinations of features and hyperparameters are used for training the classifiers for better accuracy. K-fold cross-validation is used for evaluating the performance of the classifiers. Naive Bayes tend to achieve higher accuracy of 89.2% than the SVM classifier which achieves 81.0% accuracy.</ns0:p><ns0:p>Similarly, the study <ns0:ref type='bibr' target='#b48'>Singh et al. (2013a)</ns0:ref> conducts experimental work on performance evaluation of the SentiWordNet approach for classification of movie reviews. 
The SentiWordNet approach is implemented with different variations of linguistic features, scoring schemes, and aggregation thresholds. For evaluation, two large datasets of movie reviews are used that contain the posts on movies about revolutionary changes in Libya and Tunisia. The performance of the SentiWordNet approach is compared with two machine learning approaches including NB and SVM for sentiment classification. The comparative performance of the SentiWordNet and machine learning classifiers show that both NB and SVM perform better than all the variations of SentiWordNet.</ns0:p><ns0:p>A hybrid method is proposed in <ns0:ref type='bibr' target='#b49'>Singh et al. (2013b)</ns0:ref> where the features are extracted by using both statistical and lexicon methods. In addition, various feature selection methods are applied such as Chi-Square, correlation, information gain, and regularized locality preserving indexing (RLPI) for the features extraction. It helps to map the higher dimension input space to the lower dimension input space. Features from both methods are combined to make a new feature set with lower dimension input space. SVM, NB, K-nearest neighbor (KNN), and maximum entropy (ME) classifiers are trained using the IMDb movie review dataset. Results indicate that using hybrid features of TF and TF-IDF) with Lexicon features gives better results.</ns0:p><ns0:p>The authors propose an ensemble approach to improve the accuracy of sentiment analysis in <ns0:ref type='bibr' target='#b31'>Minaee et al. (2019)</ns0:ref>. The ensemble model comprises convolutional neural network (CNN) and bidirectional long short term memory (Bi-LSTM) networks and the experiments are performed on IMDB review and Stanford sentiment treebank v2 (SST2) datasets. The ensemble is formed using the predicted scores of the two models to make the final classification of the sentiment of the reviews. Results indicate that the ensemble approach performs better than the state-of-the-art approaches and achieves an accuracy of 90% to classify the sentiment from reviews.</ns0:p><ns0:p>The authors investigate the use of three deep learning classifiers including multilayer perceptron, CNN, and LSTM for sentiment analysis in <ns0:ref type='bibr' target='#b3'>Ali et al. (2019)</ns0:ref>. Besides, experiments are also carried using a hybrid model CNN-LSTM for sentiment classification, and the performance of these models is compared with support vector machines and Naive Bayes. Multilayer Perceptron (MLP) is developed as a baseline for other networks' results. LSTM network, CNN, and CNN-LSTM are applied on the IMDB dataset consisting of 50,000 movies reviews. The word2vec is applied for word embedding. Results indicate that higher accuracy of 89.2% can be achieved from the hybrid model CNN-LSTM. Individual classifiers show a lower accuracy of 86.74%, 87.70%, and 86.64% for MLP, CNN, and LSTM, respectively. Manuscript to be reviewed Computer Science order to produce a single final output. Various regularization techniques, network structures, and kernel sizes are used to generate five different models for classification. The designed models can predict the sentiment polarity of IMDB reviews with 89% or higher accuracy.</ns0:p><ns0:p>The study <ns0:ref type='bibr' target='#b21'>Jain and Jain (2021a)</ns0:ref> has experimented on the IMDB review dataset using the deep learning models for sentiment classification. 
It uses a convolutional neural network (CNN) and long short term memory (LSTM) with different activation functions. The highest accuracy of 0.883 is achieved with CNN using the ReLU activation function. Similarly, <ns0:ref type='bibr' target='#b34'>Nafis and Awang (2021)</ns0:ref> proposes a hybrid approach for IMDB review classification using TF-IDF and SVM. The approach called SVM-RFE uses important feature selection to train the SVM model. The important features used to train the SVM helps in boosting the performance of SVM and it achieves an accuracy score of 89.56% for IMDB reviews sentiment classification.</ns0:p><ns0:p>The study <ns0:ref type='bibr' target='#b22'>Jain and Jain (2021b)</ns0:ref> uses a machine learning approach for IMDB reviews classification.</ns0:p><ns0:p>The study performs preprocessing of data and proposes a feature selection technique using association rule mining (ARM). Results show that Naive Bayes (NB) outperforms all other used models by achieving a 0.784 accuracy score using the proposed features. The study <ns0:ref type='bibr' target='#b40'>Qaisar (2020)</ns0:ref> presents an approach using LSTM for IMDB review sentiment classification. LSTM achieves an 0.899 accuracy score on the IMDB dataset. Along the same lines, <ns0:ref type='bibr' target='#b47'>Shaukat et al. (2020)</ns0:ref> performs experiments on the IMDB reviews dataset using a supervised machine learning approach. The study proposed neural network can achieve a 0.91 accuracy score.</ns0:p><ns0:p>From the above-discussed research works, it can be inferred that supervised machine and deep learning approaches show higher performance than lexicon-based approaches. Additionally, the accuracy offered by machine learning approaches can be further improved. This study focus on using several machine learning classifiers for this purpose, in addition to three feature extraction, approaches for enhanced classification performance. This study contributes to filling the literature gap which is accuracy and efficiency for IMDB review sentiment classification using state-of-the-art techniques. </ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head><ns0:p>This section describes the dataset used for the experiments, machine learning classifiers selected for review classification, as well as, the proposed methodology and its working principles.</ns0:p></ns0:div> <ns0:div><ns0:head>4/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Data description</ns0:head><ns0:p>This study uses the 'IMDB Reviews' from Kaggle which contains users' reviews about movies. The dataset has been largely used for text mining and consists of reviews of 50,000 movie reviews of which approximately 25,000 reviews belong to the positive and negative classes, respectively. Table <ns0:ref type='table'>2</ns0:ref> shows samples of reviews from both negative and positive classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2. Description of IMDB dataset variables</ns0:head><ns0:p>Review Label Gwyneth Paltrow is absolutely great in this mo.. 0 I own this movie. Not by choice, I do. I was r. 0 Well I guess it supposedly not a classic becau.. 1 I am, as many are, a fan of Tony Scott films. .. 0 I wish 'that '70s show' would come back on tel.. 
1</ns0:p></ns0:div> <ns0:div><ns0:head>TextBlob</ns0:head><ns0:p>TextBlob is a lexicon-based technique that we used to annotate the dataset with sentiments as negative and positive. TextBlob finds the polarity score for each word and then sums up these polarity scores to find the sentiment <ns0:ref type='bibr' target='#b44'>Saad et al. (2021)</ns0:ref>. TextBlob assigns a polarity score between -1 and 1. A polarity score greater than 0 shows the positive sentiment, a polarity score less than 0 shows a negative sentiment while a 0 score indicates that the sentiment is neutral. In the dataset used in this study, 23 neutral sentiments are found after applying TextBlob. Pertaining to the low number of neutral sentiments which can cause class imbalance, only negative and positive sentiments are used for experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>Feature Engineering Methods</ns0:head><ns0:p>Identification of useful features from the data is an important step for the better training of machine learning classifiers. The formation of secondary features from the original features enhances the efficiency of machine learning algorithmsOghina et al. ( <ns0:ref type='formula'>2012</ns0:ref>). It is one of the critical factors to increase the authenticity of the learning algorithm and boost its performance. The desired accuracy can be achieved by excluding the meaningless and redundant data. Less quantity of meaningful data is better than having a large quantity of meaningless data <ns0:ref type='bibr' target='#b39'>Prabowo and Thelwall (2009)</ns0:ref>. So, feature engineering is the process of extracting meaningful features from raw data which helps in the learning process of algorithms and increases its efficiency and consistencyLee et al. <ns0:ref type='bibr'>(2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Bag of Words</ns0:head><ns0:p>The BoW is simple to use and easy to implement for finding the features from raw text data. Many language modeling and text classification problems can be solved using the BoW features. In Python, the BoW is implemented using the CountVectorizer. BoW counts the occurrences of a word in the given text and formulates a feature vector of the whole text comprising of the counts of each unique word in the text. Each unique word is called 'token' and the feature vector is the matrix of these tokens <ns0:ref type='bibr' target='#b29'>Liu et al. (2008)</ns0:ref>. Despite being simple, BoW often surpasses many complicated feature engineering approaches in performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Term Frequency-Inverse Document Frequency</ns0:head><ns0:p>TF-IDF is another feature engineering method that is used to extract features from raw data. It is mostly deployed in areas like text analysis and music information retrieval <ns0:ref type='bibr' target='#b29'>Yu (2008)</ns0:ref>. 
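As a concrete illustration of the TextBlob labelling rule and the bag-of-words features described above, a minimal sketch is shown below; the two example reviews are invented placeholders, not items from the dataset.

```python
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer

reviews = ["A wonderful, moving film with great acting.",
           "Dull plot and terrible pacing, a waste of time."]

# TextBlob polarity in [-1, 1]: >0 -> positive (1), <0 -> negative (0),
# ==0 -> neutral (dropped in this study because of class imbalance).
labels = []
for text in reviews:
    polarity = TextBlob(text).sentiment.polarity
    labels.append(1 if polarity > 0 else 0 if polarity < 0 else None)

# Bag of words: each unique token becomes a column of term counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
print(labels)             # e.g. [1, 0]
print(X.toarray().shape)  # (2, vocabulary size)
```

The TF-IDF weighting described next replaces these raw counts with weights that also account for how rare a term is across the corpus.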
In this approach, weights are assigned to every term in a document based on term frequency and inverse document frequency <ns0:ref type='bibr' target='#b35'>Neethu and Rajasree (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>Biau and Scornet (2016)</ns0:ref>.</ns0:p><ns0:p>Terms having higher weights are supposed to be more important than terms having lower weights.</ns0:p><ns0:p>The weight for each term is based on the following</ns0:p><ns0:formula xml:id='formula_0'>W i, j = T F i, j ( N D f ,t )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where T F i, j is the number of occurrences of term t in document d, D f ,t is the number of documents having the term t and N is the total number of documents in the dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>5/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>TF-IDF is a kind of scoring measurement approach which is widely used in summarization and information retrieval. TF calculates the frequency of a token and gives higher importance to more common tokens in a given documentVishwanathan and Murty <ns0:ref type='bibr'>(2002)</ns0:ref>. On the other hand, IDF calculates the tokens which are rare in a corpus. In this way, if uncommon words appear in more than one document, they are considered meaningful and important. In a set of documents D, IDF weighs a token x using the following</ns0:p><ns0:formula xml:id='formula_1'>IDF(x) = N/n(x)</ns0:formula><ns0:p>(2)</ns0:p><ns0:p>Where n(x) denotes frequency of x in D and N/n(x) denotes the inverse frequency. TF-IDF is calculated using TF and IDF a</ns0:p><ns0:formula xml:id='formula_2'>T F &#8722; IDF = T F &#215; IDF (3)</ns0:formula><ns0:p>TF-IDF is applied to calculate the weights of important terms and the final output of TF-IDF is in the form of a weight matrix. Values gradually increase to the count in TF-IDF but are balanced with the frequency of the word in dataset Zhang et al. <ns0:ref type='bibr'>(2008)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Supervised machine learning Models</ns0:head><ns0:p>Several machine learning classifiers have been selected for evaluating the classification performs in this study. A brief description of each of these classifiers is provided in the following sections.</ns0:p></ns0:div> <ns0:div><ns0:head>Random Forest</ns0:head><ns0:p>Rf is based on combining multiple decision trees on various subsamples of the dataset to improve classification accuracy. These subsamples are the combination of randomly selected features which are the size of the original dataset to form a bootstrap dataset. The average of predictions from these models is used to obtain a model with low variance. Information gain ratio and Gini index are the most frequently used feature selection parameters to measure the impurity of feature <ns0:ref type='bibr' target='#b0'>Agarwal et al. (2011)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>&#8721; &#8721; j =i ( f (C i , T ) |T | )( f (C j , T ) |T | ) (4)</ns0:formula><ns0:p>where f (C i ,T ) |T | indicates the probability of being a member of class C i .</ns0:p><ns0:p>The decision trees are not pruned upon traversing each new training data set. 
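To make the impurity measure of Eq. (4) concrete, the short sketch below computes it for a node directly from the class labels it contains; the function name and the toy labels are illustrative assumptions rather than code from the study.

from collections import Counter

def gini_impurity(labels):
    # Eq. (4): sum over pairs of distinct classes of the product of their
    # estimated membership probabilities f(Ci, T) / |T|.
    total = len(labels)
    probabilities = [count / total for count in Counter(labels).values()]
    return sum(p_i * p_j
               for i, p_i in enumerate(probabilities)
               for j, p_j in enumerate(probabilities)
               if i != j)

# A node holding three positive (1) and one negative (0) review label.
print(gini_impurity([1, 1, 1, 0]))   # 2 * 0.75 * 0.25 = 0.375

A pure node (all labels identical) has an impurity of 0, which is why splits that reduce this quantity are preferred while the trees are grown.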
The user can define the number of features and number of trees on each node and set the values of other hyperparameters to increase the classification accuracy <ns0:ref type='bibr' target='#b9'>Biau and Scornet (2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Naive Bayes</ns0:head><ns0:p>NB is another approach for text classification that depends upon the prediction of independent features/variables. The features/variables are not supposed to affect each other rather assumed to behave separately. Despite its simplicity, NB is easy to implement on large datasets and shows better performance than complex prediction techniques. Naive Bayes classifier is based on Bayesian theorem which calculates the posterior probability of a class and use it to calculate the maximum likelihood probability of a sample for a particular class using Singh et al. ( <ns0:ref type='formula'>2013b</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_4'>P(C i |X) = P(X|C i )P(C i ) P(X) (5)</ns0:formula><ns0:p>NB assumes that the variables are not dependent on one another and work separately, which results in</ns0:p><ns0:formula xml:id='formula_5'>P(X|C i ) &#8776; n &#8719; k=1 P(x k |C i )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>It indicates that for a given sample x, a class C i is given to the sample which gets the highest posterior probability for the sample <ns0:ref type='bibr' target='#b15'>Goel et al. (2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>6/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Decision Tree</ns0:p><ns0:p>DT is one of the most commonly used models for classification and prediction problems <ns0:ref type='bibr' target='#b42'>Rustam et al. (2019)</ns0:ref>. DT is a simple and powerful tool to understand data features and infer decisions. The decision trees are constructed by repeatedly dividing the data according to split criteria. There are three types of nodes in a decision tree: root, internal, and leaf. The root node has no incoming but zero or more outgoing edges, the internal node has exactly one incoming but two or more outgoing edges while the leaf node has one incoming while no outgoing edge <ns0:ref type='bibr' target='#b7'>Bakshi et al. (2016)</ns0:ref>; <ns0:ref type='bibr'>Tan et al. (2006)</ns0:ref>. Nodes and edges represent features and decisions of a decision tree, respectively. A decision tree can be binary or non-binary depending upon the leaves of a node. The gain ratio is one of the commonly used split criteria for DT.</ns0:p><ns0:formula xml:id='formula_6'>Gain ratio = &#8710; in f o Split Info (7)</ns0:formula><ns0:p>where split info is defined as</ns0:p><ns0:formula xml:id='formula_7'>Split Info = &#8722; k &#8721; i=1 P(v i )log 2 P(v i ) (8)</ns0:formula><ns0:p>where k indicates the total number of splits for DT which is hyperparameter tuned for different datasets to elevate the performance. DT is non-parametric, computationally inexpensive, and shows better performance even when the data have redundant attributes.</ns0:p></ns0:div> <ns0:div><ns0:head>Support Vector Machines</ns0:head><ns0:p>Originally proposed by Cortes and Vapnik in 1995 for binary classification, SVM is expanded for multiclass classification <ns0:ref type='bibr' target='#b12'>Cortes and Vapnik (1995)</ns0:ref>. 
SVM is widely used approach for non-linear classification, regression and outlier detection <ns0:ref type='bibr' target='#b8'>Bennett and Campbell (2000)</ns0:ref>. SVM has the additional advantage of examining the relationship theoretically and performs distinctive classification than many complex approaches like neural networks <ns0:ref type='bibr' target='#b1'>Agarwal and Mittal (2016)</ns0:ref>. SVM separates the classes by distinguishing the optimal isolating lines called hyperplane by maximizing the distance between the classes' nearest points <ns0:ref type='bibr' target='#b35'>Neethu and Rajasree (2013)</ns0:ref>. Different kernels can be used with SVM to accomplish better performance such as radial, polynomial, neural and linear <ns0:ref type='bibr' target='#b16'>Guzman and Maalej (2014)</ns0:ref>. SVM is preferred for several reasons including the lack of local minimal, structural risk minimization principle, and developing more common classification ability <ns0:ref type='bibr'>Visa et al. (2011)</ns0:ref>; <ns0:ref type='bibr'>Vishwanathan and Murty (2002)</ns0:ref>.</ns0:p><ns0:p>For optimizing the performance of the machine learning models used in this study, several hyperparameters have been fine tuned. A list of the parameters providing the highest performance is provided in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Hyperparameters used for optimizing the performance of models.</ns0:p><ns0:p>Model Hyperparameters RF N estimators=300, random state=50, max depth=300 SVC kernel= 'linear' , C=3.0, random state=50 DT random state=50, max depth=300 GBC N estimators=300, random state=50, max depth=300, learning rate=0.2</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed methodology</ns0:head><ns0:p>With the growing production of movies over the last two decades, a large number of opinions and reviews are posted on various social media platforms and websites. Such reviews are texts that show explicit opinions about a film or product. These opinions play an important part in the success of film or sales of the products <ns0:ref type='bibr' target='#b1'>Agarwal and Mittal (2016)</ns0:ref>. People search blogs, and evaluation sites like IMDB to get the likes and dislikes of other people about films, the cast, and team, etc. but it is very difficult to read every review and comment. Evaluation of these sentiments becomes beneficial to assisting people in this task. Sentiments expressed in such reviews are important regarding the evaluation of the movies and their Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>crew. Automatic sentiment analysis with higher accuracy is extremely important in this regard and this study follows the same direction and proposes an approach to perform the sentiment analysis of movie reviews. In addition, since the contradictions in the expressed sentiments in movie reviews and their assigned labels can not be ignored, this study additionally uses lexicon-based TextBlob to determine the sentiments. Two sets of experiments are performed using the standard dataset and TextBlob annotated dataset to fill in the research gap as previous studies do not consider the contradictions in the sentiments and assigned labels. Figure <ns0:ref type='figure'>1</ns0:ref> shows the flow of the steps carried out for sentiment classification. 
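Assuming the four classifiers are the scikit-learn implementations (an assumption, since the library is not named explicitly in the text), the hyperparameter settings listed in Table 3 could be expressed as follows:

from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Classifier configurations taken from Table 3.
models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=50, max_depth=300),
    "SVM": SVC(kernel="linear", C=3.0, random_state=50),
    "DT": DecisionTreeClassifier(random_state=50, max_depth=300),
    "GBC": GradientBoostingClassifier(n_estimators=300, random_state=50,
                                      max_depth=300, learning_rate=0.2),
}

Each model in this dictionary is later fitted on the extracted feature matrix and evaluated on the held-out test set.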
Punctuation is removed from IMDB text reviews because punctuation does not add any value to text analysis <ns0:ref type='bibr' target='#b16'>Guzman and Maalej (2014)</ns0:ref>. Sentences are more readable for human due to punctuation, however, it is difficult for a machine to distinguish punctuation from other characters. Punctuation distorts a model's efficiency to distinguish between entropy, punctuation, and other characters <ns0:ref type='bibr' target='#b29'>Liu et al. (2008)</ns0:ref>.</ns0:p><ns0:p>So, punctuation is removed from the text in pre-processing to reduce the complexity of the feature space.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> shows the text of a sample review, before and after the punctuation has been removed.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. Text from sample review before and after punctuation removal.</ns0:p></ns0:div> <ns0:div><ns0:head>Before puncutation removal</ns0:head><ns0:p>After punctuation removal @Gwyneth Paltrow is absolutely... !!!great in this movie.</ns0:p><ns0:p>Gwyneth Paltrow is absolutely great in this movie I own this movie. This is number 1 movie. . . I didn't like by choice, I do. I own this movie This is number 1 movie I didnt like by choice I do I wish 'that '70s show' would come back on tel. I wish that 70s show would come back on tel.</ns0:p><ns0:p>Once the punctuation is removed, the next step is to find numerical values and remove them as they are not valuable for text analysis. Numerical values are used in the reviews as an alternative to various English words to reduce the length of reviews and ease of writing the review. For example, 2 is used for 'to' and numerical values are used instead of counting like 1 instead of 'one'. Such numerals are convenient for humans to interpret, yet offer no help in the training of machine learning classifiers. Table <ns0:ref type='table'>5</ns0:ref> shows text from sample reviews after the numeric values are removed. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Sample text from movie reviews after removing numeric values.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Numeric Removal Gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>Gwyneth Paltrow is absolutely great in this movie. I own this movie This is number 1 movie I didnt like by choice I do.</ns0:p><ns0:p>I own this movie This is number movie I didnt like by choice I do I wish that 70s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>I wish that s show would come back on tel</ns0:head><ns0:p>In the subsequent step of numbers removal, all capital letters are converted to lower form. Machine learning classifiers can not distinguish between lower and upper case letters and consider them as different letters <ns0:ref type='bibr' target='#b43'>Rustam et al. (2021)</ns0:ref>. For example, 'Health', and 'health' are considered as two separate words if conversion is not performed from uppercase to lowercase. It may reduce the significance of most occurred terms and degrade the performance <ns0:ref type='bibr' target='#b28'>Liu et al. (2010)</ns0:ref>. It increases the complexity of the feature space and reduces the performance of classifiers. So, converting the upper case letters to lower form helps in increasing the training efficiency of the classifiers. 
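The punctuation removal, digit removal, and case conversion described above could be implemented with a few lines of standard Python; the helper below is an illustrative sketch, not the study's actual code.

import re
import string

def clean_review(text):
    # Remove punctuation, then numeric values, then lowercase the text.
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"\d+", "", text)
    return text.lower()

print(clean_review("@Gwyneth Paltrow is absolutely... !!!great in this movie."))
# -> gwyneth paltrow is absolutely great in this movie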
Table <ns0:ref type='table'>6</ns0:ref> shows the text after the case is changed for the reviews.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref>. Sample output of the review text after changing the case of review text.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Case Lowering Gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>gwyneth Paltrow is absolutely great in this movie. I own this movie This is number movie I didnt like by choice I do.</ns0:p><ns0:p>i own this movie this is number movie i didnt like by choice I do I wish that s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>i wish that s show would come back on tel</ns0:head><ns0:p>Stemming is an important step in pre-processing because eliminating affixes from words and changing them into their root form is very helpful to enhance the efficiency of a model <ns0:ref type='bibr' target='#b15'>Goel et al. (2016)</ns0:ref>. For example, 'help', 'helped', and 'helping' are altered forms of 'help', however, machine learning classifiers consider them as two different words <ns0:ref type='bibr' target='#b49'>Singh et al. (2013b)</ns0:ref>. Stemming changes these different forms of words into their root form. Stemming is implemented using the PorterStemmer library of Python <ns0:ref type='bibr' target='#b38'>Pang et al. (2002b)</ns0:ref>. Table <ns0:ref type='table'>7</ns0:ref> shows the sample text of review before and after stemming.</ns0:p><ns0:p>Table <ns0:ref type='table'>7</ns0:ref>. Text from sample review before and after stemming.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Stemming gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>gwyneth Paltrow is absolute great in this movie i own this movie this is number movie i didnt like by choice I do.</ns0:p><ns0:p>i own this movie this is number movie i didnt like by choice I do i wish that s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>i wish that s show would come back on tel</ns0:head><ns0:p>The last step in the preprocessing phase is the removal of stop words. Stop words have no importance concerning the training of the classifiers. Instead, they increase the feature vector size and reduce the performance. So they must be removed to decrease the complexity of feature space and boost the training of classifiers. Table <ns0:ref type='table' target='#tab_1'>8</ns0:ref> shows the text of the sample review after the stopwords have been removed.</ns0:p><ns0:p>After the preprocessing is complete, feature extraction takes place where BoW, TF-IDF, and GloVe are used. Feature space for the sample reviews is given in Table <ns0:ref type='table' target='#tab_6'>11 and 13</ns0:ref> gwyneth paltrow Absolute great movie choice found Very bad</ns0:p><ns0:formula xml:id='formula_8'>1 1 1 1 1 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 1 0 0 1 1</ns0:formula><ns0:p>trained on the training set while the test set is used to evaluate the performance of the trained models. For evaluating the performance, standard well-known parameters are used such as accuracy, precision, recall, and F1 score. </ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Parameters</ns0:head><ns0:p>Performance evaluation of the classifiers requires evaluation metrics for which accuracy, precision, recall, and F1 score are selected concerning their wide use. The introduction of the confusion matrix is necessary to define the mathematical formulas for these evaluation metrics. 
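Concretely, the 75/25 split and the four metrics named above could be wired together as in the following self-contained sketch (scikit-learn assumed, toy data in place of the real reviews); the formal definitions of these metrics are given below.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy corpus standing in for the preprocessed IMDB reviews (illustrative only).
texts = ["absolute great movie", "bad boring movie", "great cast great story",
         "worst film ever", "loved every minute", "waste of time"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

features = TfidfVectorizer().fit_transform(texts)

# 75/25 train-test split as described above.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=50)

model = SVC(kernel="linear", C=3.0, random_state=50)   # settings from Table 3
model.fit(X_train, y_train)
predicted = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, predicted))
print("precision:", precision_score(y_test, predicted))
print("recall   :", recall_score(y_test, predicted))
print("F1 score :", f1_score(y_test, predicted))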
The confusion matrix as shown in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> can be considered as an error matrix that indicates four quantities. The confusion matrix shows true positive (TP), false positive (FP), true negative (TN), and false-negative (FN). Each row of the matrix represents the predicted labels while each column represents actual labels <ns0:ref type='bibr' target='#b25'>Landy and Szalay (1993)</ns0:ref>.</ns0:p><ns0:p>TP indicates that the classifier predicted the review as positive and the original label is also positive.</ns0:p><ns0:p>A review is TN if it belongs to the negative class and the real outcome is also negative. In the FP case, the review is predicted as positive, but the original label is negative. Similarly, a review is called FN if it belongs to the positive class but the classifier predicted it as negative <ns0:ref type='bibr' target='#b41'>Rokach and Maimon (2005)</ns0:ref>.</ns0:p><ns0:p>Accuracy is a widely used evaluation metrics and indicates the ratio of true predictions to the total predictions. It has a maximum value of 1 for 100 percent correct prediction and the lowest value of 0 for 0 percent prediction. Accuracy can be defined as Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Recall = T P T P + FN (11)</ns0:head><ns0:p>F1 score is considered an important parameter to evaluate the performance of a classifier and has been regarded as more important than precision and recall. It defines how precise and robust is the classifier by incorporating precision and recall <ns0:ref type='bibr' target='#b11'>Bruce et al. (2002)</ns0:ref>. F1 score value varies between 0 and 1 where 1</ns0:p><ns0:p>shows the perfect performance of the classifier. F1 score is defined as</ns0:p><ns0:formula xml:id='formula_9'>F1Score = 2 &#215; precision &#215; recall precision + recall (12)</ns0:formula></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>This study uses four machine learning classifiers to classify movie reviews into positive and negative reviews, such as DT. SVM, RF, and GBC. Four feature extraction approaches are utilized including TF-IDF, Bow, Word2Vec, and GloVe on the selected dataset to extract the features. Results for these feature extraction approaches are discussed separately. Similarly, the influence of TextBlob annotated data on the classification accuracy is analyzed. The contradictions in the sentiments expressed in the reviewers and the assigned sentiments cannot be ignored, so TextBlob is used to annotate the labels.</ns0:p><ns0:p>Several experiments are performed using the standard, as well as, the TextBlob annotated dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Results using BoW Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>11</ns0:ref> shows the classification accuracy of the machine learning classifiers when BoW features to train and test the classifiers. Results indicate that SVM can achieve an accuracy of 0.87 with BoW features.</ns0:p><ns0:p>Overall, the performance of all the classifiers is good except for DT whose accuracy is 0.72. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Results using TF-IDF Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_6'>13</ns0:ref> contains the accuracy results for the classifiers using the TF-IDF features. 
It shows that the performance of the SVM has been elevated with an accuracy of 0.89 which is 2.29% higher than that of using BoW features. Unlike BoW which counts only the frequency of terms, TF-IDF also records the importance of terms by assigning higher weights to rare terms. So, the performance is improved when TF-IDF features are used as compared to BoW features. </ns0:p></ns0:div> <ns0:div><ns0:head>Classifiers Results using GloVe Features</ns0:head><ns0:p>Experimental results using GloVe features are shown in Table <ns0:ref type='table' target='#tab_8'>15</ns0:ref> for the selected classifiers. Results</ns0:p><ns0:p>suggest that the performance of all the classifiers has been degraded when trained and tested on GloVe features. Glove features are based on the global word-to-word co-occurrence and count the co-occurred terms from the entire corpus. GloVe model is traditionally used with deep learning models where it helps to better recognize the relationships between the given samples of the dataset. In machine learning models, its performance is poor than that of TF-IDF features <ns0:ref type='bibr' target='#b13'>Dessi et al. (2020)</ns0:ref>. SVM and RF outperform other models using GloVE features.</ns0:p></ns0:div> <ns0:div><ns0:head>Results using Word2Vec Features</ns0:head><ns0:p>Performance elevation metrics for all the classifiers using the Word2Vec features are given in Table <ns0:ref type='table' target='#tab_9'>16</ns0:ref>.</ns0:p><ns0:p>Results indicate that the performance of the classifiers is somehow better when trained and tested on Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The comparison between machine learning models results on original dataset sentiment with BoW, TF-IDF, GloVe, and Word2Vec features are shown in Figure <ns0:ref type='figure' target='#fig_11'>4</ns0:ref>. SVM is significant with all features and achieved the best score with BoW, TF-IDF, and Word2Vec. This significant performance of SVM is because of its linear architecture and binary classification problem. SVM is more significant on linear data for binary classification with its linear kernel as shown in this study. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Results on TextBlob annotated dataset</ns0:head><ns0:p>The contradictions in the users' expressed sentiments in the reviews and assigned labels can influence the sentiment classification accuracy of the models. To resolve this issue, TextBlob annotated data are used for the performance evaluation of the models. Results suggest that Textblob annotated dataset gives more accurate target sentiment with the proposed approach. Similarly, the performance of machine learning models is also improved.</ns0:p></ns0:div> <ns0:div><ns0:head>Results using BoW Features</ns0:head><ns0:p>The performance of models with BoW and TextBlob sentiments are shown in Table <ns0:ref type='table' target='#tab_10'>17</ns0:ref>. Results indicate that SVM achieved its highest accuracy of the study 0.92 with TextBlob sentiments and BoW features.</ns0:p><ns0:p>While the performance of other models such as RF, GBC, and DT has also been improved. The reason for the performance elevation is that TextBlob given sentiments have a high correlation with the sentiments in the reviews. 
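The re-annotation step behind these results can be illustrated with a short sketch that assigns a label from the TextBlob polarity score using the thresholds described earlier (positive above 0, negative below 0, neutral reviews dropped); the toy reviews and helper name are illustrative only.

from textblob import TextBlob
import pandas as pd

# Two toy reviews standing in for the IMDB data (illustrative only).
reviews = pd.DataFrame({"review": ["absolutely great movie",
                                   "a very bad and boring movie"]})

def textblob_label(text):
    # Polarity lies in [-1, 1]; its sign determines the assigned sentiment.
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0:
        return 1      # positive
    if polarity < 0:
        return 0      # negative
    return None       # neutral, removed before training

reviews["textblob_label"] = reviews["review"].apply(textblob_label)
reviews = reviews.dropna(subset=["textblob_label"])
print(reviews)

The resulting labels replace the original ones, and the same feature extraction and classifiers are then applied to the re-annotated dataset.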
</ns0:p></ns0:div> <ns0:div><ns0:head>Results using TF-IDF Features</ns0:head><ns0:p>The performance of models with TF-IDF features and TextBlob sentiments are shown in Table <ns0:ref type='table' target='#tab_11'>18</ns0:ref>. SVM achieves its highest accuracy score of 0.92 with TextBlob sentiment and TF-IDF features. While other models such as RF, GBC, and DT repeat their performances with TF-IDF features. </ns0:p></ns0:div> <ns0:div><ns0:head>Results using GloVe Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_13'>19</ns0:ref> shows the performance comparison of models using GloVe features and TextBlob sentiments and it indicates that using the performance using GloVe features and TextBlob sentiments is better as compared to their performance on the original sentiments and GloVe features. These results show the significance of TextBlob with the proposed approach. Compared to the performance on the original dataset, the accuracy of the models has been improved significantly when used with TextBlob assigned sentiments. For example, the highest accuracy with GloVe features and TextBlob sentiment is 0.81 which was only 0.75 on the original sentiments. However, the performance of the machine learning models is inferior to that of BoW and TF-IDF. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Results using Word2Vec Features</ns0:head><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Performance of Deep Learning Models</ns0:head><ns0:p>To compare the performance of the proposed approach with the latest deep learning approach, experiments have been performed using several deep learning models. Manuscript to be reviewed to that of state-of-the-art approaches. The use of SVM with TF-IDF and BoW using lexicon technique provides an accuracy of 92% which is better than the state-of-the-art approaches.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>With an ever-growing production of cinema movies, web series, and television dramas, a large number of reviews can be found on social platforms and movies websites like IMDB. Sentiment analysis of such reviews can provide insights about the movies, their team, and cast to millions of viewers. This study proposes a methodology to perform sentiment analysis on the movie reviews using supervised machine learning classifiers to assist the people in selecting the movies based on the popularity and interest of the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Current study excludes the neutral class due to low number of samples and experiments are performed using positive and negative classes. Consequently, the accuracy may have been higher that three classes.</ns0:p><ns0:p>Similarly, probable class imbalance by adding neutral class samples is not investigated and is left for future. We intend to perform further experiments using movie reviews datasets from other sources in the future. Furthermore, the study on finding the contradictions in the sentiments expressed in the reviews and the assigned labels is also under consideration.</ns0:p><ns0:p>Financial disclosure</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>uses a machine learning approach for fake new classification. Study proposes a feature selection technique and an ensemble classifier using three machine learning classifiers including DT, RF, and extra tree classifier. 
The proposed model achieves good accuracy score on the 'Liar dataset' as compared to the ISOT dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Similarly, an ensemble classifier is proposed in Yenter and Verma (2017) which comprises CNN and LSTM networks. The model aims at the word-level classification of the IMDB reviews. The output of the CNN network is fed into an LSTM before being concatenated and sent to a fully connected layer in 3/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .Figure 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1. The work flow of proposed methodology for movie review classification.</ns0:figDesc><ns0:graphic coords='9,186.52,160.88,324.02,129.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>for BoW and TF-IDF features, respectively. Experiments are performed with the standard dataset, as well as, the TextBlob annotated dataset to analyze the performance of the machine learning and proposed models. The data are split into training and testing sets in a 75 to 25 ratio. Machine learning classifiers are 9/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>the accuracy of predicting the positive cases. It shows what proportion of the positively predicted cases are originally positive. It is defined as Precision = T P T P + FP (10) Recall calculates the ratio of correct positive cases to the total positive cases. To get the ratio, the total number of TP is divided by the sum of TP and FN as follows 10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Confusion matrix.</ns0:figDesc><ns0:graphic coords='12,240.52,63.78,216.00,114.94' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>F1</ns0:head><ns0:label /><ns0:figDesc>score indicates that its value is the same with both positive and negative classes for all the classifiers, except for GBC who has F1 scores of 0.86 and 0.85 for positive and negative classes, respectively. Precision values are slightly different for positive and negative classes; for example, SVM has a precision of 0.88 and 0.90 for positive and negative classes. Similarly, although precision, recall, and F1 score of DT are the lowest but the values for positive and negative classes are almost the same. An equal number of the training samples in the dataset makes a good fit for the classifiers, and their accuracy and F1 scores are in agreement. 11/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>F1</ns0:head><ns0:label /><ns0:figDesc>score is the same for positive and negative classes for all classifiers which indicates the good fit of the modes on the training data. 
On the other hand, precision for positive and negative classes is slightly different. For example, GBS has a precision of 0.84 and 0.87 while SVM has a precision of 0.88 and 0.90 for positive and negative classes, respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Performance comparison between machine learning models using original dataset and BoW, TF-IDF, GloVe, Word2Vec features.</ns0:figDesc><ns0:graphic coords='14,150.52,448.83,396.02,231.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Similarly, the performance of other models such as RF, GBC, and DT has also been improved with the TextBlob sentiments. The primary reason for comparatively lower accuracy with the original sentiments is the contradiction in the expressed sentiments and the assigned labels. Using TextBlob the assigned sentiments have a high correlation to the sentiments given in the users' reviews.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance comparison between machine learning models using TextBlob dataset and BoW, TF-IDF, GloVe, Word2Vec features.</ns0:figDesc><ns0:graphic coords='16,150.52,358.73,396.02,231.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>For this purpose, state-of-the-art deep learning models are used such as LSTM Mujahid et al. (2021), CNN-LSTM Jamil et al. (2021), and Gated Recurrent Unit (GRU). The architecture of used deep learning models is provided in Figure 6. These models are used with the TextBlob annotated dataset owing to the superior results on the dataset from machine learning models. 15/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. LSTM, CNN-LSTM, and GRU architectures .</ns0:figDesc><ns0:graphic coords='17,150.52,63.78,396.01,213.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>machine learning algorithms including DT, RF, GBC, and SVM are utilized for sentiment analysis that is trained on the dataset preprocessed through a series of steps. Moreover, four feature extraction approaches including BoW, TF-IDF, GloVe, and Word2Vec are investigated for their efficacy in extracting the meaningful and effective features from the reviews. Results indicate that SVM achieves the highest accuracy among all the classifiers with an accuracy of 89.55% when trained and tested using TF-IDF features. The performance using BoW features is also good with an accuracy of 87.25%. Contrary to BoW which counts the occurrence of unique tokens, TF-IDF also records the importance of rare terms by assigning a higher weight to rare terms and perform better than BoW. However, the performance of the classifiers is greatly affected by GloVe and Word2Vec features which suggest that word embedding does not work well with the movie review dataset. 
For improving the performance of models and reducing the influence of contradictions found in the expressed sentiments and assigned labels, lexicon-based TextBlob is used for data annotation. Experimental results on TextBlob annotated dataset indicates that SVM achieves the highest accuracy of 92% with TF-IDF features. Compared to the standard dataset, the TextBlob assigned sentiments have a high correlation with the users' expressed sentiments. The performance of machine learning models is slightly lower than machine learning models with the highest accuracy of 0.90 by the CNN-LSTM. Despite the equal number of positive and negative reviews used for training, the prediction accuracy for the positive and negative classes is different. Precision, recall, and F1 score indicate the models have a good fit, and performance evaluation metrics are in agreement.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comprehensive summary of research works discussed in the related work.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Aim</ns0:cell></ns0:row><ns0:row><ns0:cell>Singh et al. (2013a)</ns0:cell><ns0:cell>Lexicon-Based</ns0:cell><ns0:cell>SentiWordNet</ns0:cell><ns0:cell>Movie review classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Singh et al. (2013b)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>RLPI , Hybrid Fea-</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell>tures, KNN</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Yenter and Verma (2017) Deep Learning</ns0:cell><ns0:cell>CNN LSTM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Giatsoglou et al. (2017)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>BoW-DOUBLE and</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell>Average emotion-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>DOUBLE</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mathapati et al. (2018)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Ali et al. (2019)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>Multilayer percep-</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tron, CNN and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Minaee et al. (2019)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>IMDB review and Stanford</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>sentiment treebank v2 (SST2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Qaisar (2020)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Shaukat et al. 
(2020)</ns0:cell><ns0:cell>Deep &amp; Machine</ns0:cell><ns0:cell>NN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Learning</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jain and Jain (2021a)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Nafis and Awang (2021)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell cols='2'>SVM + (SVM-RFE) IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jain and Jain (2021b)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>NB+ ARM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Sample reviews before and after the stop words removal.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Input Data</ns0:cell><ns0:cell>After Stopwords Removal</ns0:cell></ns0:row><ns0:row><ns0:cell>gwyneth Paltrow is absolutely great in</ns0:cell><ns0:cell>gwyneth Paltrow absolute great movie</ns0:cell></ns0:row><ns0:row><ns0:cell>this movie.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>i own this movie this is number movie i</ns0:cell><ns0:cell>own movie number movie didnt like</ns0:cell></ns0:row><ns0:row><ns0:cell>didnt like by choice I do.</ns0:cell><ns0:cell>choice do</ns0:cell></ns0:row><ns0:row><ns0:cell>i wish that s show would come back on</ns0:cell><ns0:cell>wish show would come back tel</ns0:cell></ns0:row><ns0:row><ns0:cell>tel.</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>BoW features from the preprocessed text of sample reviews.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>TF-IDF features from the preprocessed text of sample reviews.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='9'>gwyneth paltrow Absolute great movie choice found Very bad</ns0:cell></ns0:row><ns0:row><ns0:cell>0.47</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell cols='2'>0.28 0.25</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Accuracy of the selected models with BoW features.Performance of the classifiers is given in Table12in terms of precision, recall, and F1 score. 
The</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Classifier Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Performance evaluation metrics using BoW features. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Precision 0.71 0.72 Pos. DT Model</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>Recall 0.72 0.71</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>F1 score 0.72 0.72</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.85 0.87</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.88 0.84</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.83 0.87</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.88 0.82</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.86 0.88</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.88 0.86</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.87 0.87</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Accuracy of models with TF-IDF features.Results for precision, recall, and F1 score are given in Table14. Experimental results indicate that the</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Classifier Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.71</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Performance evaluation metrics using TF-IDF features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='6'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. 
Recall F1 score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.72 0.71</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.70 0.72</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.71 0.71</ns0:cell><ns0:cell>0.71</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.84 0.87</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.88 0.83</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.88 0.90</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.90 0.88</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.89 0.89</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>SVM performs better for text classification than other supervised learning models, especially in the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>case of large datasets as this algorithm is derived from the theory of structural risk minimization Mouthami</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. (2013).</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Performance of classifiers using GloVe features. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Precision Pos. DT Model Accuracy 0.65 0.64 0.65 0.65</ns0:cell><ns0:cell>Recall 0.64 0.65 0.65</ns0:cell><ns0:cell>F1 Score 0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.75 0.74 0.74</ns0:cell><ns0:cell>0.72 0.77 0.74</ns0:cell><ns0:cell>0.73 0.75 0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.66 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Word2Vec features in comparison with GloVe features results. The performance of classifiers is not</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>significant using Word2Vec features in comparison to the results of the classifiers using BoW and TF-IDF</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>features. SVM achieved the highest accuracy of 0.88 with Word2Vec features as compared to other</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>models because Word2Vec gives linear features set which is more suitable for SVM as compared to RF,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>GBC, and DT.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Performance evaluation of classifiers using Word2Vec features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. 
Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.80 0.80 0.80</ns0:cell><ns0:cell>0.80 0.80 0.80</ns0:cell><ns0:cell>0.80 0.80 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.88 0.88 0.88</ns0:cell><ns0:cell>0.88 0.88 0.88</ns0:cell><ns0:cell>0.88 0.88 0.88</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Performance evaluation of classifiers using BoW features on TextBlob annotated dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.85 0.61 0.73</ns0:cell><ns0:cell>0.87 0.57 0.72</ns0:cell><ns0:cell>0.87 0.59 0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.84 0.90 0.87</ns0:cell><ns0:cell>0.98 0.47 0.72</ns0:cell><ns0:cell>0.90 0.62 0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.85 0.70 0.78</ns0:cell><ns0:cell>0.92 0.55 0.73</ns0:cell><ns0:cell>0.98 0.62 0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Performance evaluation of classifiers using TF-IDF features on TextBlob annotated dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.85 0.62 0.73</ns0:cell><ns0:cell>0.87 0.58 0.72</ns0:cell><ns0:cell>0.86 0.60 0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.85 0.88 0.87</ns0:cell><ns0:cell>0.98 0.51 0.74</ns0:cell><ns0:cell>0.91 0.65 0.78</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.86 0.73 0.79</ns0:cell><ns0:cell>0.92 0.57 0.75</ns0:cell><ns0:cell>0.89 0.64 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.92 0.88 0.90</ns0:cell><ns0:cell>0.96 0.78 0.87</ns0:cell><ns0:cell>0.94 0.82 0.88</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head /><ns0:label /><ns0:figDesc>Table20shows the performance of machine learning models with Word2Vec features and Textblob</ns0:figDesc><ns0:table /><ns0:note>sentiments. SVM achieves significantly better accuracy with Word2Vec features as compared to GloVe features. It gives a 0.88 accuracy score which is more than GloVe features but lower than BoW and TF-IDF features. RF and GBC achieve 0.79 and 0.70 accuracy scores, respectively. 
The performance of DT is degraded when used with Word2Vec features.The comparison between machine learning model results onTextBlob sentiment dataset with BoW, TF-IDF, GloVe, and Word2Vec features is given in Figure 5. SVM obtains better results with TextBlob 14/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:1:0:NEW 23 Sep 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Performance evaluation of classifiers using GloVe features on TextBlob annotated dataset. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Precision Pos. DT Model Accuracy 0.72 0.81 0.47 0.64</ns0:cell><ns0:cell>Recall 0.81 0.48 0.64</ns0:cell><ns0:cell>F1 Score 0.81 0.48 0.64</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.71 0.81 0.76</ns0:cell><ns0:cell>0.94 0.39 0.67</ns0:cell><ns0:cell>0.87 0.51 0.69</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.81 0.47 0.64</ns0:cell><ns0:cell>0.81 0.48 0.65</ns0:cell><ns0:cell>0.81 0.48 0.64</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.83 0.71 0.77</ns0:cell><ns0:cell>0.93 0.46 0.70</ns0:cell><ns0:cell>0.88 0.56 0.72</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Performance evaluation of classifiers using Word2Vec features on TextBlob annotated dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.78 0.87 0.83</ns0:cell><ns0:cell>0.99 0.24 0.62</ns0:cell><ns0:cell>0.87 0.48 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.78 0.87 0.83</ns0:cell><ns0:cell>0.99 0.24 0.62</ns0:cell><ns0:cell>0.87 0.38 0.63</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.90 0.83 0.87</ns0:cell><ns0:cell>0.95 0.71 0.83</ns0:cell><ns0:cell>0.92 0.77 0.84</ns0:cell></ns0:row></ns0:table><ns0:note>sentiments using BoW and TF-IDF features as compared to GloVe and Word2Vec features.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 21 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Performance analysis of deep learning models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='5'>Accuracy Class Precision Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.78</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CNN-LSTM 
0.90</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.88</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GRU</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_16'><ns0:head>Performance Analysis with State-of-The-Art Approaches Performance</ns0:head><ns0:label /><ns0:figDesc>analysis has been carried out to analyze the performance of the proposed approach with other state-of-the-art approaches that utilize the IMDB movie reviews analysis. Comparison results are provided in Table22. Results indicate that the proposed methodology can achieve competitive results</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 22 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Performance analysis of the proposed methodology.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year Reference</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>2016 Sahu and Ahuja (2016)</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2017 Yenter and Verma (2017) CNN+LSTM</ns0:cell><ns0:cell>0.895</ns0:cell></ns0:row><ns0:row><ns0:cell>2017 Giatsoglou et al.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"1 Response to Comments Manuscript ID: CS-2021:06:61877 Title: Classification of movie reviews using term frequency-inverse document frequency and optimized machine learning algorithms Authors: Omar Boutkhoum, Mohamed Hanine, Mohamed Nabil, Fatima EL Barakaz, Ernesto Lee*, Furqan Rustam, and Imran Ashraf* Dear Editor, Thank you very much for allowing us to revise the manuscript. We would like to thank the editor and all the reviewers for their valuable comments and suggestions. Based on the feedback, we have revised our manuscript. The detailed modifications to address reviewers’ comments are provided in the following. For clarity, we have marked our responses in blue. Whenever we copy a paragraph from the manuscript here, we mark it as a magenta color. Answers to Comments I. R EVIEWER 1 Comment 1: 1- Most relevant papers (only relevant and state-of-the-art) should be qualitatively and quantitatively analyzed with what gaps were left in these works and what this work is proposed to overcome those gaps/challenges. Response: Thank you very much. As suggested by the worthy reviewer, Related Work section has been updated by adding several recent research works from 2020 and 2021. Kindly see Lines 149 165 of the revised manuscript. The study Jain and Jain (2021b) uses a machine learning approach for IMDB reviews dataset using the deep learning models for sentiment classification. It uses a convolutional neural network (CNN) and long short term memory (LSTM) with different activation functions. The highest accuracy of 0.883 is achieved with CNN using the ReLU activation function. Similarly, Nafis and Awang (2021) proposes a hybrid approach for IMDB review classification using TF-IDF and SVM. The approach called SVMRFE uses important feature selection to train the SVM model. The important features used to train the 2 SVM helps in boosting the performance of SVM and it achieves an accuracy score of 89.56% for IMDB reviews sentiment classification. The study Jain and Jain (2021b) uses a machine learning approach for IMDB reviews classification. The study performs preprocessing of data and proposes a feature selection technique using association rule mining (ARM). Results show that Naive Bayes (NB) outperforms all other used models by achieving a 0.784 accuracy score using the proposed features. The study Qaisar (2020) presents an approach using LSTM for IMDB review sentiment classification. LSTM achieves an 0.899 accuracy score on the IMDB dataset. Along the same lines, Shaukat et al. (2020) performs experiments on the IMDB reviews dataset using a supervised machine learning approach. The study proposed neural network can achieve a 0.91 accuracy score. Comment 2: 2- Experimental should be rigorously analyzed both theoretically and visually with proper justification of obtained results as well as with potential comparative studies. Response: The authors highly appreciate the reviewers positive feedback and suggestions. Results section has been revised substantially. Performance comparison with the state-of-the-art studies has been included as well. Kindly see Page 12–16 of the revised manuscript and Page 13–17 of the highlighted manuscript for additional results. Comment 3: Can add future work also Response: Thank you very much for your suggestions. Future work has been included in Conclusion section. Kindly see Page 17–18 of the highlighted manuscript. II. R EVIEWER 2 Comment 1: The authors have to summarize the findings/gaps from recent literature in the form of a table. 
Response: We would like to thank the reviewer for the suggestion; we have updated the manuscript by adding a summary table in the literature review.

Comment 2: Some of the recent works on NLP and feature extraction such as the following can be discussed: "An ensemble machine learning approach through effective feature extraction to classify fake news", "Analysis of dimensionality reduction techniques on big data".

Response: As advised by the worthy reviewer, the suggested research works have been added in the manuscript on Page 3 of the revised manuscript. Kindly see the attached highlighted version of the manuscript.

Table 1: Comprehensive summary of research works discussed in the related work.
Reference | Approach | Model | Aim
Singh et al. (2013a) | Lexicon-Based | SentiWordNet | Movie review classification
Singh et al. (2013b) | Machine Learning | RLPI, Hybrid Features, KNN | IMDB reviews classification
Yenter and Verma (2017) | Deep Learning | CNN LSTM | IMDB reviews classification
Giatsoglou et al. (2017) | Machine Learning | BoW-DOUBLE and Average emotion-DOUBLE | IMDB reviews classification
Mathapati et al. (2018) | Deep Learning | CNN | IMDB reviews classification
Ali et al. (2019) | Deep Learning | Multilayer perceptron, CNN and LSTM | IMDB reviews classification
Minaee et al. (2019) | Deep Learning | Bi-LSTM | IMDB review and Stanford sentiment treebank v2 (SST2)
Qaisar (2020) | Deep Learning | LSTM | IMDB reviews classification
Shaukat et al. (2020) | Deep & Machine Learning | NN | IMDB reviews classification
Jain and Jain (2021a) | Deep Learning | CNN | IMDB reviews classification
Nafis and Awang (2021) | Machine Learning | SVM + (SVMRFE) | IMDB reviews classification
Jain and Jain (2021b) | Machine Learning | NB + ARM | IMDB reviews classification

Comment 3: In the proposed methodology section, the authors should clearly map the proposed work with the limitations of the existing works and discuss how the proposed work overcomes them.

Response: Thank you very much for your valuable suggestions. The Proposed Methodology section has been updated accordingly. Kindly see Page 7 of the revised manuscript.

Comment 4: Present a detailed analysis on the results obtained.

Response: The authors highly appreciate the reviewer's positive feedback and suggestions. The Results section has been revised substantially. Performance comparison with the state-of-the-art studies has been included as well. Kindly see Pages 12–16 of the revised manuscript and Pages 13–17 of the highlighted manuscript for additional results.

Comment 5: Discuss about the limitations of the current work.

Response: We added a paragraph discussing the limitations of our study in the Conclusion section. Kindly see Pages 17–18 of the revised manuscript.

III. REVIEWER 3

Comment 1: Literature review is very superficial and the last paper that is reviewed has been published in the year 2019.

Response: Thank you very much. As suggested by the worthy reviewer, the Related Work section has been updated by adding several recent research works from 2020 and 2021. Kindly see Lines 149–165 of the revised manuscript.

Comment 2: Literature review is not guided, and authors have made very generic assumptions.

Response: Thank you for your valuable suggestions. We modified the Related Work section by incorporating more relevant works. Furthermore, Table 1 has been added to provide a comprehensive overview of the discussed works.

Comment 3: There isn't any significant contribution in terms of methodology or results. 
Response: : we have updated the Proposed Methodology section to indicate how the proposed methodology is used to fill the research gap. Also the points additionally considered are highlighted. Kindly see Page 8 Lines 287–295 of the highlighted version. Comment 4: Authors need to justify the results, why a particular model should yield better results. Response: The authors highly appreciate the reviewers positive feedback and suggestions. Results section has been revised substantially. Performance comparison with the state-of-the-art studies has been included as well. Kindly see Page 12–16 of the revised manuscript and Page 13–17 of the highlighted manuscript for additional results. IV. R EVIEWER 4 Comment 1: The third contribution mentions the use of deep learning in this work which is not correct since the results only show four traditional ML algorithms. Response: Thank you very much for pointing out this limitation. Results for deep learning models including LSTM, CNN-LSTM and GRU have been added in Table 20 of the revised manuscript. Kindly see Page 12–16 of the revised manuscript and Page 13–17 of the highlighted manuscript for additional results. 5 Comment 2: Authors should specify the type of decision tree used in this paper and also other parameters such as maximum depth of the tree or number of features to consider when looking for the best split. Authors should also specify the hyperparameters used and the kernels used in this paper. Response: We have incorporated Table 3 in the revised manuscript to show the list of important parameters used to optimize the performance. Comment 3:The results in table 15 show comparable results in previous work. Hence, the impact and novelty of this work are not there. The author should indicate the strength of this work. Response: The reviewer has raised an important point here. The methodology has been revised to improve the performance. Performance comparison given in Table 22 indicates the performance of the proposed approach is better than state-of-the-art approaches with 0.92 accuracy. Comment 4: The use of ”optimized machine learning algorithms” term in the title does not reflect the real research outcome. The authors should at least describe the optimization process involved in this work. Response: Thank you very much for your feedback. Regarding optimization two important points are considered; pipeline for preparing the data for training and models’ hyperparameters fine tuning. Table 3 in the revised manuscript shows the list of important parameters used to optimize the performance. Comment 5: In terms of the evaluation validity, how the authors ensure the similar evaluation method were used by all the papers in Table 15? Response: The papers used for performance comparison utilize the same dataset which is used in current study. Each of these works aims at increasing the classification accuracy no matter which model is used. So the central point for performance analysis is the classification accuracy which is reported by all these works. Consequently, accuracy is common parameter for all these works. Thank you very much. Sincerely, Authors. "
Here is a paper. Please give your review comments after reading it.
369
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>IMDb being one of the popular online databases for movies and personalities, provides wide range of movie reviews from millions of users. This provides a diverse and large dataset to analyze users' sentiments about various personalities and movies. Despite being helpful to provide the critique of movies, the reviews on internet movie database (IMDB) can not be read as whole and requires automated tool to provide insight. This study provides the implementation of various machine learning models to measure the polarity of the sentiment presented in user reviews on IMDB website. For this purpose, the reviews are first preprocessed to remove redundant information and noise and then various classification models like support vector machines (SVM), Na&#239;ve Bayes classifier, random forest and gradient boosting classifier are used to predict the sentiment of these reviews.</ns0:p><ns0:p>The objective is to find the optimal process and approach to attain the highest accuracy with best generalization. Various feature engineering approaches such as term frequencyinverse document frequency (TF-IDF), bag of words, global vectors for word representations and Word2Vec are applied along with the hyperparameter tuning of the classification models to enhance the classification accuracy. Experimental results indicate that the SVM obtains the highest accuracy when used with TF-IDF features and achieves an accuracy of 89.55%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 36</ns0:head><ns0:p>Social media has become an integral part of human lives in recent times. People want to share their 37 opinions, ideas, comments, and daily life events on social media. In modern times, social media is used 38 for showcasing one's esteem and prestige by posting photos, text, and video clips, etc. The rise and wide 39 usage of social media platforms and microblogging websites provide the opportunity to share as you like 40 where people share their opinions on trending topics, politics, movie reviews, etc. Shared opinions on 41 social networking sites are generally known as short texts (ST) concerning the length of the posted text search engines as queries. Apart from being inspiring, the ST contains users' sentiments about a specific personality, topic, or movie and can be leveraged to identify the popularity of the discussed item. The process of mining the sentiment from the texts is called sentiment analysis (SA) and has been regarded as a significant research area during the last few years <ns0:ref type='bibr' target='#b22'>(Hearst, 2003)</ns0:ref>. Sentiments given on social media platforms like Twitter, Facebook, etc. can be used to analyze the perception of people about a personality, service, or product, as well as, used to predict the outcome of various social and political campaigns.</ns0:p><ns0:p>Thus, SA helps to increase the popularity and followers of political leaders, as well as, other important personalities. Many large companies like Azamon, Apple, and Google use the reviews of their employees to analyze the response to various services and policies. 
In the business sector, companies use SA to derive new strategies based on customer feedback and reviews <ns0:ref type='bibr' target='#b21'>(Hand and Adams, 2014;</ns0:ref><ns0:ref type='bibr' target='#b4'>Alpaydin, 2020)</ns0:ref>.</ns0:p><ns0:p>Besides the social media platforms, several websites serve as a common platform for discussions about social events, sports, and movies, etc., and the internet movie database (IMDB) is one of the websites that offer a common interface to discuss movies and provide reviews. Reviews are short texts that generally express an opinion about movies or products. These reviews play a vital role in the success of movies or sales of the products <ns0:ref type='bibr' target='#b2'>(Agarwal and Mittal, 2016)</ns0:ref>. People generally look into blogs, review sites like IMDB to know the movie cast, crew, reviews, and ratings of other people. Hence it is not only the word of mouth that brings the audience to the theaters, reviews also play a prominent role in the promotion of the movies. SA on movie reviews thus helps to perform opinion summarization by extracting and analyzing the sentiments expressed by the reviewers <ns0:ref type='bibr' target='#b23'>(Ikonomakis et al., 2005)</ns0:ref>. Being said that the reviews contain valuable and very useful content, the new user can't read all the reviews and perceive the positive or negative sentiment. The use of machine learning approaches proves to ease this difficult task by automatically classifying the sentiments of these reviews. Sentiment classification involves three types of approaches including the supervised machine learning approach, using the semantic orientation of the text, and use of SentiWordNet based libraries <ns0:ref type='bibr' target='#b49'>(Singh et al., 2013a)</ns0:ref>.</ns0:p><ns0:p>Despite being several approaches presented, several challenges remain unresolved to achieve the best possible accuracy for sentiment analysis. For example, a standard sequence for preprocessing steps is not defined and several variations are used which tend to show slightly different accuracy. Bag of words (BoW) is widely used for sentiment analysis, however, BoW loses word order information. Investigating the influence of other feature extraction approaches is of significant importance. Deep learning approaches tend to show better results than the traditional machine learning classifiers, but the extent of their better performance is not defined. This study uses various machine learning classifiers to perform sentiment analysis on the movie reviews and makes the following contributions</ns0:p><ns0:p>&#8226; This study proposes a methodology to perform the sentiment analysis on the movie reviews taken from the IMDB website. The proposed methodology involves preprocessing steps and various machine learning classifiers along with several feature extraction approaches.</ns0:p><ns0:p>&#8226; Both simple and ensemble classifiers are tested with the methodology including decision trees (DT), random forest (RF), gradient boosting classifier (GBC), and support vector machines (SVM). In addition, a deep learning model is used to evaluate its performance in comparison to traditional machine learning classifiers.</ns0:p><ns0:p>&#8226; Four feature extraction techniques are tested for their efficacy in sentiment classification. 
Feature extraction approaches include term frequency-inverse document frequency (TF-IDF), BoW, global vectors (GloVe) for word representations, and Word2Vec.</ns0:p><ns0:p>&#8226; Owing to the influence of the contradictions in users' sentiments in the reviews and assigned labels on the sentiment classification accuracy, in addition to the standard dataset, TextBlob annotated dataset is also used for experiments.</ns0:p><ns0:p>&#8226; The performance of the selected classifiers is analyzed using accuracy, precision, recall, and F1 score. Additionally, the results are compared with several state-of-the-art approaches to sentiment analysis.</ns0:p><ns0:p>The rest of this paper is organized as follows. Section 2 discusses a few research works which are closely related to the current study. The selected dataset, machine learning classifiers, and preprocessing procedure, and the proposed methodology are described in Section 3. Results are discussed in Section 4 and finally, Section 5 concludes the paper with possible directions for future research.</ns0:p></ns0:div> <ns0:div><ns0:head>2/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>A large amount of generated data on social media platforms on Facebook, and Twitter, etc. are generating new opportunities and challenges for the researchers to fetch useful and meaningful information to thrive business communities and serve the public. As a result, multidimensional research efforts have been performed for sentiment classification and analysis. Various machine learning and deep learning approaches have been presented in the literature in this regard. Few research works which is related to the current study are discussed here; we divide the research works into two categories: machine learning approaches and deep learning approaches.</ns0:p><ns0:p>The use of machine learning algorithms has been accelerated in several domains including image processing, object detection and natural language processing tasks, etc. <ns0:ref type='bibr' target='#b5'>(Ashraf et al., 2019a;</ns0:ref><ns0:ref type='bibr' target='#b26'>Khalid et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b6'>Ashraf et al., 2019b)</ns0:ref>. For example, The study <ns0:ref type='bibr' target='#b20'>(Hakak et al., 2021)</ns0:ref> uses a machine learning approach for the fake new classification. The study proposes a feature selection technique and an ensemble classifier using three machine learning classifiers including DT, RF, and extra tree classifier. The proposed model achieves a good accuracy score on the 'Liar dataset' as compared to the ISOT dataset.</ns0:p><ns0:p>The authors implement several machine learning classification models for sentiment classification of IMDB reviews into positive and negative sentiments in <ns0:ref type='bibr' target='#b38'>(Pang et al., 2002a)</ns0:ref>. For this purpose, a dataset containing 752 negative reviews and 1,301 positive reviews from the IMDB website is used. The research aims at finding the suitable model with the highest F1 score and best generalization. Various combinations of features and hyperparameters are used for training the classifiers for better accuracy. K-fold cross-validation is used for evaluating the performance of the classifiers. 
Naive Bayes tend to achieve higher accuracy of 89.2% than the SVM classifier which achieves 81.0% accuracy.</ns0:p><ns0:p>Similarly, the study <ns0:ref type='bibr' target='#b49'>(Singh et al., 2013a)</ns0:ref> conducts experimental work on performance evaluation of the SentiWordNet approach for classification of movie reviews. The SentiWordNet approach is implemented with different variations of linguistic features, scoring schemes, and aggregation thresholds. For evaluation, two large datasets of movie reviews are used that contain the posts on movies about revolutionary changes in Libya and Tunisia. The performance of the SentiWordNet approach is compared with two machine learning approaches including NB and SVM for sentiment classification. The comparative performance of the SentiWordNet and machine learning classifiers show that both NB and SVM perform better than all the variations of SentiWordNet.</ns0:p><ns0:p>A hybrid method is proposed in <ns0:ref type='bibr' target='#b50'>(Singh et al., 2013b)</ns0:ref> where the features are extracted by using both statistical and lexicon methods. In addition, various feature selection methods are applied such as Chi-Square, correlation, information gain, and regularized locality preserving indexing (RLPI) for the features extraction. It helps to map the higher dimension input space to the lower dimension input space.</ns0:p><ns0:p>Features from both methods are combined to make a new feature set with lower dimension input space. SVM, NB, K-nearest neighbor (KNN), and maximum entropy (ME) classifiers are trained using the IMDb movie review dataset. Results indicate that using hybrid features of TF and TF-IDF) with Lexicon features gives better results.</ns0:p><ns0:p>The authors propose an ensemble approach to improve the accuracy of sentiment analysis in <ns0:ref type='bibr' target='#b33'>(Minaee et al., 2019)</ns0:ref>. The ensemble model comprises convolutional neural network (CNN) and bidirectional long short term memory (Bi-LSTM) networks and the experiments are performed on IMDB review and Stanford sentiment treebank v2 (SST2) datasets. The ensemble is formed using the predicted scores of the two models to make the final classification of the sentiment of the reviews. Results indicate that the ensemble approach performs better than the state-of-the-art approaches and achieves an accuracy of 90% to classify the sentiment from reviews.</ns0:p><ns0:p>The authors investigate the use of three deep learning classifiers including multilayer perceptron, CNN, and LSTM for sentiment analysis in <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>. Besides, experiments are also carried using a hybrid model CNN-LSTM for sentiment classification, and the performance of these models is compared with support vector machines and Naive Bayes. Multilayer Perceptron (MLP) is developed as a baseline for other networks' results. LSTM network, CNN, and CNN-LSTM are applied on the IMDB dataset consisting of 50,000 movies reviews. The word2vec is applied for word embedding. Results indicate that higher accuracy of 89.2% can be achieved from the hybrid model CNN-LSTM. 
Individual classifiers show a lower accuracy of 86.74%, 87.70%, and 86.64% for MLP, CNN, and LSTM, respectively.</ns0:p><ns0:p>Similarly, an ensemble classifier is proposed in <ns0:ref type='bibr' target='#b56'>(Yenter and Verma, 2017)</ns0:ref> The study <ns0:ref type='bibr' target='#b24'>(Jain and Jain, 2021a)</ns0:ref> conducts experiments using the IMDB review dataset with deep learning models for sentiment classification. It uses a convolutional neural network (CNN) and long short-term memory (LSTM) with different activation functions. The highest accuracy of 0.883 is achieved with CNN using the ReLU activation function. Similarly, <ns0:ref type='bibr' target='#b35'>(Nafis and Awang, 2021)</ns0:ref> proposes a hybrid approach for IMDB review classification using TF-IDF and SVM. The approach called SVM-RFE uses important feature selection to train the SVM model. Feature selection helps in boosting the performance of SVM and increases the accuracy to 89.56% for IMDB reviews sentiment classification. The study <ns0:ref type='bibr' target='#b16'>(Giatsoglou et al., 2017)</ns0:ref> proposed an approach for sentiment analysis using a machine learning model.</ns0:p><ns0:p>A hybrid feature vector is proposed by combining word2vec and BoW technique and experiments are performed using four datasets containing online user reviews in Greek and English language. In a similar fashion, <ns0:ref type='bibr' target='#b32'>(Mathapati et al., 2018)</ns0:ref> performs sentiment analysis on IMDB reviews using a deep learning approach. The study used a CNN and LSTM recurrent neural network to obtain significant accuracy on the IMDB reviews dataset.</ns0:p><ns0:p>The study <ns0:ref type='bibr' target='#b25'>(Jain and Jain, 2021b</ns0:ref>) uses a machine learning approach for IMDB reviews classification.</ns0:p><ns0:p>The study performs preprocessing of data and proposes a feature selection technique using association rule mining (ARM). Results show that Naive Bayes (NB) outperforms all other used models by achieving a 0.784 accuracy score using the proposed features. The study <ns0:ref type='bibr' target='#b41'>(Qaisar, 2020)</ns0:ref> presents an approach using LSTM for IMDB review sentiment classification. LSTM achieves an 0.899 accuracy score on the IMDB dataset. Along the same lines, <ns0:ref type='bibr' target='#b48'>(Shaukat et al., 2020)</ns0:ref> performs experiments on the IMDB reviews dataset using a supervised machine learning approach. The study proposed neural network can achieve a 0.91 accuracy score.</ns0:p><ns0:p>From the above-discussed research works, it can be inferred that supervised machine and deep learning approaches show higher performance than lexicon-based approaches. Additionally, the accuracy offered by machine learning approaches can be further improved. This study focus on using several machine learning classifiers for this purpose, in addition to three feature extraction, approaches for enhanced classification performance. This study contributes to filling the literature gap which is accuracy and efficiency for IMDB review sentiment classification using state-of-the-art techniques. 
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head><ns0:p>This section describes the dataset used for the experiments, machine learning classifiers selected for review classification, as well as, the proposed methodology and its working principles.</ns0:p></ns0:div> <ns0:div><ns0:head>Data description</ns0:head><ns0:p>This study uses the 'IMDB Reviews' from Kaggle which contains users' reviews about movies <ns0:ref type='bibr'>(dat, 2018)</ns0:ref>.</ns0:p><ns0:p>The dataset has been largely used for text mining and consists of reviews of 50,000 movie reviews of which approximately 25,000 reviews belong to the positive and negative classes, respectively. Table <ns0:ref type='table'>2</ns0:ref> shows samples of reviews from both negative and positive classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2. Description of IMDB dataset variables</ns0:head><ns0:p>Review Label Gwyneth Paltrow is absolutely great in this mo.. 0 I own this movie. Not by choice, I do. I was r. 0 Well I guess it supposedly not a classic becau.. 1 I am, as many are, a fan of Tony Scott films. .. 0 I wish 'that '70s show' would come back on tel.. 1</ns0:p></ns0:div> <ns0:div><ns0:head>TextBlob</ns0:head><ns0:p>TextBlob is a Python library that we used to annotate the dataset with new sentiments <ns0:ref type='bibr' target='#b31'>(Loria, 2018)</ns0:ref>.</ns0:p><ns0:p>TextBlob finds the polarity score for each word and then sums up these polarity scores to find the sentiment.</ns0:p><ns0:p>TextBlob assigns a polarity score between -1 and 1. A polarity score greater than 0 shows the positive sentiment, a polarity score less than 0 shows a negative sentiment while a 0 score indicates that the sentiment is neutral. In the dataset used in this study, 23 neutral sentiments are found after applying TextBlob. Pertaining to the low number of neutral sentiments which can cause class imbalance, only negative and positive sentiments are used for experiments. Contradiction in Textblob annotated label and original dataset label is shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Contradiction in Textblob and original dataset labels.</ns0:p></ns0:div> <ns0:div><ns0:head>Review</ns0:head><ns0:p>Textblob Original movie makers always author work mean yes things condensed sake viewer interest look anne green gables wonderful job combining important events cohesive whole simply delightful believe chose combine three novels together anne avonlea dreadful mess look missed paul irving little elizabeth widows windy poplars anne college years heaven sake delightful meet priscilla rest redmond gang kevin sullivan taken things one movie time instead jumbling together combining characters events way movie good leave novels montgomery beautiful work something denied movie let seeing successful way brough anne green gables life</ns0:p></ns0:div> <ns0:div><ns0:head>Positive Negative</ns0:head></ns0:div> <ns0:div><ns0:head>Feature Engineering Methods</ns0:head><ns0:p>Identification of useful features from the data is an important step for the better training of machine learning classifiers. The formation of secondary features from the original features enhances the efficiency of machine learning algorithms <ns0:ref type='bibr' target='#b37'>(Oghina et al., 2012)</ns0:ref>. It is one of the critical factors to increase the accuracy of the learning algorithm and boost its performance. 
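As a brief illustration of the TextBlob annotation step described above, the polarity-based labelling can be sketched as follows. This is a minimal sketch assuming the standard TextBlob Python API; the threshold logic (greater than 0 positive, less than 0 negative, exactly 0 neutral) follows the description in the text, and the few neutral reviews are dropped before training, as noted.

```python
from textblob import TextBlob

def annotate(review_text):
    # TextBlob aggregates per-word polarity into a single score in [-1, 1]
    polarity = TextBlob(review_text).sentiment.polarity
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"  # the few neutral reviews are excluded before training

print(annotate("Gwyneth Paltrow is absolutely great in this movie"))
```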
The desired accuracy can be achieved by excluding the meaningless and redundant data. Less quantity of meaningful data is better than having a large quantity of meaningless data <ns0:ref type='bibr' target='#b40'>(Prabowo and Thelwall, 2009)</ns0:ref>. So, feature engineering is the process of extracting meaningful features from raw data which helps in the learning process of algorithms and increases its efficiency and consistency <ns0:ref type='bibr' target='#b28'>(Lee et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>5/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>Bag of Words</ns0:head><ns0:p>The BoW is simple to use and easy to implement for finding the features from raw text data. Many language modeling and text classification problems can be solved using the BoW features. In Python, the BoW is implemented using the CountVectorizer. BoW counts the occurrences of a word in the given text and formulates a feature vector of the whole text comprising of the counts of each unique word in the text. Each unique word is called 'token' and the feature vector is the matrix of these tokens <ns0:ref type='bibr' target='#b30'>(Liu et al., 2008)</ns0:ref>. Despite being simple, BoW often surpasses many complicated feature engineering approaches in performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Term Frequency-Inverse Document Frequency</ns0:head><ns0:p>TF-IDF is another feature engineering method that is used to extract features from raw data. It is mostly deployed in areas like text analysis and music information retrieval <ns0:ref type='bibr' target='#b57'>(Yu, 2008)</ns0:ref>. In this approach, weights are assigned to every term in a document based on term frequency and inverse document frequency <ns0:ref type='bibr' target='#b36'>(Neethu and Rajasree, 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Biau and Scornet, 2016)</ns0:ref>. Terms having higher weights are supposed to be more important than terms having lower weights. The weight for each term is based on the equation 1.</ns0:p><ns0:formula xml:id='formula_0'>W i, j = T F t,d ( N D t )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where T F t,d is the number of occurrences of term t in document d, D f ,t is the number of documents having the term t and N is the total number of documents in the dataset.</ns0:p><ns0:p>TF-IDF is a kind of scoring measurement approach which is widely used in summarization and information retrieval. TF calculates the frequency of a token and gives higher importance to more common tokens in a given document <ns0:ref type='bibr' target='#b54'>(Vishwanathan and Murty, 2002)</ns0:ref>. On the other hand, IDF calculates the tokens which are rare in a corpus. In this way, if uncommon words appear in more than one document, they are considered meaningful and important. In a set of documents D, IDF weighs a token x using the equation 2.</ns0:p><ns0:formula xml:id='formula_1'>IDF(x) = N/n(x) (2)</ns0:formula><ns0:p>Where n(x) denotes frequency of x in D and N/n(x) denotes the inverse frequency. TF-IDF is calculated using TF and IDF as shown in equation 3.</ns0:p><ns0:formula xml:id='formula_2'>T F &#8722; IDF = T F &#215; IDF (3)</ns0:formula><ns0:p>TF-IDF is applied to calculate the weights of important terms and the final output of TF-IDF is in the form of a weight matrix. 
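To make the two count-based representations concrete, a minimal sketch is given below using the three preprocessed sample reviews from Table 9. CountVectorizer is named in the text for BoW; using scikit-learn's TfidfVectorizer for the TF-IDF weighting is an assumption made here for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

reviews = ["gwyneth paltrow absolute great movie",
           "own movie number movie didnt like choice do",
           "wish show would come back tel"]   # preprocessed samples from Table 9

bow = CountVectorizer()
X_bow = bow.fit_transform(reviews)        # term-count matrix (cf. Table 10)

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(reviews)    # TF-IDF weight matrix (cf. Table 11)

print(bow.get_feature_names_out())
print(X_tfidf.toarray().round(2))
```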
Values gradually increase to the count in TF-IDF but are balanced with the frequency of the word in dataset <ns0:ref type='bibr' target='#b58'>(Zhang et al., 2008)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Word2Vec</ns0:head><ns0:p>Word2Vec is one of the widely used NLP techniques for feature extraction in text mining that transforms text words into vectors <ns0:ref type='bibr' target='#b55'>(Wang et al., 2016)</ns0:ref>. Given a corpus of text, Word2Vec uses a neural network model for learning word associations. Each unique word has an associated list of numbers called 'vector'. The cosine similarity of the vectors represents the semantic similarity between the words that are represented by vectors.</ns0:p></ns0:div> <ns0:div><ns0:head>GloVe</ns0:head><ns0:p>GloVe from Global Vectors is an unsupervised model used to obtain words' vector representation <ns0:ref type='bibr' target='#b10'>(Bhoir et al., 2017)</ns0:ref>. The vector representation is obtained by mapping the words in a space such that the distance between the words represents the semantic similarity. Developed at Stanford, GloVe can determine the similarity between words and arrange them in the vectors. The output matrix by the GloVe gives vector space of word with linear substructure.</ns0:p></ns0:div> <ns0:div><ns0:head>Supervised machine learning Models</ns0:head><ns0:p>Several machine learning classifiers have been selected for evaluating the classification performs in this study. A brief description of each of these classifiers is provided in the following sections. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Random Forest</ns0:head><ns0:p>Rf is based on combining multiple decision trees on various subsamples of the dataset to improve classification accuracy. These subsamples are the combination of randomly selected features which are the size of the original dataset to form a bootstrap dataset. The average of predictions from these models is used to obtain a model with low variance. Information gain ratio and Gini index are the most frequently used feature selection parameters to measure the impurity of feature <ns0:ref type='bibr' target='#b1'>(Agarwal et al., 2011)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>&#8721; &#8721; j =i ( f (C i , T ) |T | )( f (C j , T ) |T | ) (4)</ns0:formula><ns0:p>where f (C i ,T ) |T | indicates the probability of being a member of class C i .</ns0:p><ns0:p>The decision trees are not pruned upon traversing each new training data set. The user can define the number of features and number of trees on each node and set the values of other hyperparameters to increase the classification accuracy <ns0:ref type='bibr' target='#b11'>(Biau and Scornet, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Gradient Boosting Classifier</ns0:head><ns0:p>GBC is an ensemble classifier used for classification tasks with enhanced accuracy base on boosting <ns0:ref type='bibr' target='#b7'>(Ayyadevara, 2018)</ns0:ref>. It combines many weak learners sequentially to reduce the error gradually. This study uses the GBC with decision tree as a weak learner. GBC performance depends on the loss function and mostly the logarithmic loss function is used for classification. In addition, weak learners and adaptive components are important parameters of GBC. The hyperparameters setting of GBC used in this study is shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. 
GBC is used with 300 n estimators indicating that 300 weak learners (decision trees) are combined under boosting method and each tree is restricted to 300 max depth. The learning rate is set to 0.2 which helps to reduce model overfitting <ns0:ref type='bibr' target='#b44'>(Rustam et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Decision Tree</ns0:head><ns0:p>DT is one of the most commonly used models for classification and prediction problems. DT is a simple and powerful tool to understand data features and infer decisions. The decision trees are constructed by repeatedly dividing the data according to split criteria. There are three types of nodes in a decision tree: root, internal, and leaf. The root node has no incoming but zero or more outgoing edges, the internal node has exactly one incoming but two or more outgoing edges while the leaf node has one incoming while no outgoing edge <ns0:ref type='bibr' target='#b8'>(Bakshi et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Tan et al., 2006)</ns0:ref>. Nodes and edges represent features and decisions of a decision tree, respectively. A decision tree can be binary or non-binary depending upon the leaves of a node. The gain ratio is one of the commonly used split criteria for DT.</ns0:p><ns0:formula xml:id='formula_4'>Gain ratio = &#8710; in f o Split Info (5)</ns0:formula><ns0:p>where split info is defined as in equation 6.</ns0:p><ns0:formula xml:id='formula_5'>Split Info = &#8722; k &#8721; i=1 P(v i )log 2 P(v i ) (6)</ns0:formula><ns0:p>where k indicates the total number of splits for DT which is hyperparameter tuned for different datasets to elevate the performance. DT is non-parametric, computationally inexpensive, and shows better performance even when the data have redundant attributes.</ns0:p></ns0:div> <ns0:div><ns0:head>Support Vector Machines</ns0:head><ns0:p>Originally proposed by Cortes and Vapnik in 1995 for binary classification, SVM is expanded for multi-class classification <ns0:ref type='bibr' target='#b14'>(Cortes and Vapnik, 1995)</ns0:ref>. SVM is a widely used approach for non-linear classification, regression, and outlier detection <ns0:ref type='bibr' target='#b9'>(Bennett and Campbell, 2000)</ns0:ref>. SVM has the additional advantage of examining the relationship theoretically and performs distinctive classification than many complex approaches like neural networks <ns0:ref type='bibr' target='#b2'>(Agarwal and Mittal, 2016)</ns0:ref>. SVM separates the classes by distinguishing the optimal isolating lines called hyperplane by maximizing the distance between the classes' nearest points <ns0:ref type='bibr' target='#b36'>(Neethu and Rajasree, 2013)</ns0:ref>. Different kernels can be used with SVM to Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>accomplish better performance such as radial, polynomial, neural, and linear <ns0:ref type='bibr' target='#b19'>(Guzman and Maalej, 2014)</ns0:ref>.</ns0:p><ns0:p>SVM is preferred for several reasons including the lack of local minimal, structural risk minimization principle, and developing more common classification ability <ns0:ref type='bibr' target='#b53'>(Visa et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b54'>Vishwanathan and Murty, 2002)</ns0:ref>.</ns0:p><ns0:p>For optimizing the performance of the machine learning models used in this study, several hyperparameters have been fine-tuned according to experience from the literature on text classification tasks. 
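To make the model configuration concrete, the four classifiers can be instantiated as sketched below. The sketch assumes scikit-learn implementations; the GBC settings follow the values quoted above and the SVM uses the linear kernel reported in the results, while the remaining values are placeholders standing in for the Table 4 entries rather than the exact settings used in the study.

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

models = {
    # GBC: 300 weak learners, learning rate 0.2, maximum depth 300, as stated above
    "GBC": GradientBoostingClassifier(n_estimators=300, learning_rate=0.2, max_depth=300),
    # Placeholder settings for the Table 4 entries of the other models
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "DT": DecisionTreeClassifier(max_depth=30, random_state=0),
    "SVM": SVC(kernel="linear", C=1.0),
}

for name, model in models.items():
    print(name, type(model).__name__)
```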
A list of the parameters and corresponding values used for experiments in this study is provided in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head>Proposed methodology</ns0:head><ns0:p>With the growing production of movies over the last two decades, a large number of opinions and reviews are posted on various social media platforms and websites. Such reviews are texts that show explicit opinions about a film or product. These opinions play an important part in the success of film or sales of the products <ns0:ref type='bibr' target='#b2'>(Agarwal and Mittal, 2016)</ns0:ref>. People search blogs, and evaluation sites like IMDB to get the likes and dislikes of other people about films, the cast, and team, etc. but it is very difficult to read every review and comment. Evaluation of these sentiments becomes beneficial to assisting people in this task. Sentiments expressed in such reviews are important regarding the evaluation of the movies and their crew. Automatic sentiment analysis with higher accuracy is extremely important in this regard and this study follows the same direction and proposes an approach to perform the sentiment analysis of movie reviews. In addition, since the contradictions in the expressed sentiments in movie reviews and their assigned labels can not be ignored, this study additionally uses lexicon-based TextBlob to determine the sentiments. Two sets of experiments are performed using the standard dataset and TextBlob annotated dataset to fill in the research gap as previous studies do not consider the contradictions in the sentiments and assigned labels. Figure <ns0:ref type='figure'>1</ns0:ref> Punctuation is removed from IMDB text reviews because punctuation does not add any value to text analysis <ns0:ref type='bibr' target='#b19'>(Guzman and Maalej, 2014)</ns0:ref>. Sentences are more readable for humans due to punctuation, however, it is difficult for a machine to distinguish punctuation from other characters. Punctuation distorts a model's efficiency to distinguish between entropy, punctuation, and other characters <ns0:ref type='bibr' target='#b30'>(Liu et al., 2008)</ns0:ref>.</ns0:p><ns0:p>So, punctuation is removed from the text in pre-processing to reduce the complexity of the feature space.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> shows the text of a sample review, before and after the punctuation has been removed.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Text from sample review before and after punctuation removal.</ns0:p></ns0:div> <ns0:div><ns0:head>Before puncutation removal</ns0:head><ns0:p>After punctuation removal @Gwyneth Paltrow is absolutely... !!!great in this movie.</ns0:p><ns0:p>Gwyneth Paltrow is absolutely great in this movie I own this movie. This is number 1 movie. . . I didn't like by choice, I do. I own this movie This is number 1 movie I didnt like by choice I do I wish 'that '70s show' would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>I wish that 70s show would come back on tel</ns0:head><ns0:p>Once the punctuation is removed, the next step is to find numerical values and remove them as they are not valuable for text analysis. Numerical values are used in the reviews as an alternative to various English words to reduce the length of reviews and ease of writing the review. For example, 2 is used for 'to' and numerical values are used instead of counting like 1 instead of 'one'. 
Such numerals are convenient for humans to interpret, yet offer no help in the training of machine learning classifiers. Table <ns0:ref type='table'>6</ns0:ref> shows text from sample reviews after the numeric values are removed. Table <ns0:ref type='table'>6</ns0:ref>. Sample text from movie reviews after removing numeric values.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Numeric Removal Gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>Gwyneth Paltrow is absolutely great in this movie I own this movie This is number 1 movie I didnt like by choice I do. I own this movie This is number movie I didnt like by choice I do I wish that 70s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>I wish that s show would come back on tel</ns0:head><ns0:p>In the subsequent step of numbers removal, all capital letters are converted to lower form. Machine learning classifiers can not distinguish between lower and upper case letters and consider them as different letters. For example, 'Health', and 'health' are considered as two separate words if conversion is not performed from uppercase to lowercase. It may reduce the significance of most occurred terms and degrade the performance <ns0:ref type='bibr' target='#b29'>(Liu et al., 2010)</ns0:ref>. It increases the complexity of the feature space and reduces the performance of classifiers. So, converting the upper case letters to lower form helps in increasing the training efficiency of the classifiers. Table <ns0:ref type='table'>7</ns0:ref> shows the text after the case is changed for the reviews.</ns0:p><ns0:p>Stemming is an important step in pre-processing because eliminating affixes from words and changing them into their root form is very helpful to enhance the efficiency of a model <ns0:ref type='bibr' target='#b17'>(Goel et al., 2016)</ns0:ref>. For example, 'help', 'helped', and 'helping' are altered forms of 'help', however, machine learning classifiers consider them as two different words <ns0:ref type='bibr' target='#b50'>(Singh et al., 2013b)</ns0:ref>. Stemming changes these different forms of Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>7</ns0:ref>. Sample output of the review text after changing the case of review text.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Case Lowering Gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>gwyneth paltrow is absolutely great in this movie I own this movie This is number movie I didnt like by choice I do.</ns0:p><ns0:p>i own this movie this is number movie i didnt like by choice i do I wish that s show would come back on tel. i wish that s show would come back on tel words into their root form. Stemming is implemented using the PorterStemmer library of Python <ns0:ref type='bibr' target='#b39'>(Pang et al., 2002b)</ns0:ref>. Table <ns0:ref type='table'>8</ns0:ref> shows the sample text of review before and after stemming.</ns0:p><ns0:p>Table <ns0:ref type='table'>8</ns0:ref>. 
Text from sample review before and after stemming.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Stemming gwyneth Paltrow is absolutely great in this movie.</ns0:p><ns0:p>gwyneth paltrow is absolute great in this movie i own this movie this is number movie i didnt like by choice I do.</ns0:p><ns0:p>i own this movie this is number movie i didnt like by choice i do i wish that s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>i wish that s show would come back on tel</ns0:head><ns0:p>The last step in the preprocessing phase is the removal of stop words. Stop words have no importance concerning the training of the classifiers. Instead, they increase the feature vector size and reduce the performance. So they must be removed to decrease the complexity of feature space and boost the training of classifiers. Table <ns0:ref type='table'>9</ns0:ref> shows the text of the sample review after the stopwords have been removed. Table <ns0:ref type='table'>9</ns0:ref>. Sample reviews before and after the stop words removal.</ns0:p></ns0:div> <ns0:div><ns0:head>Input Data</ns0:head><ns0:p>After Stopwords Removal gwyneth Paltrow is absolutely great in this movie. gwyneth paltrow absolute great movie i own this movie this is number movie i didnt like by choice I do. own movie number movie didnt like choice do i wish that s show would come back on tel.</ns0:p></ns0:div> <ns0:div><ns0:head>wish show would come back tel</ns0:head><ns0:p>After the preprocessing is complete, feature extraction takes place where BoW, TF-IDF, and GloVe are used. Feature space for the sample reviews is given in Tables <ns0:ref type='table' target='#tab_4'>10 and 11</ns0:ref> for BoW and TF-IDF features, respectively. Experiments are performed with the standard dataset, as well as, the TextBlob annotated dataset to analyze the performance of the machine learning and proposed models. No. absolute back choice come didnt do great gwyneth like</ns0:p><ns0:formula xml:id='formula_6'>1 1 0 0 0 0 0 1 1 0 2 0 0 1 0 1 1 0 0 1 3 0 1 0 1 0 0 0 0 0 No. movie</ns0:formula><ns0:p>number own paltrow show tel wish would</ns0:p><ns0:formula xml:id='formula_7'>1 1 0 0 1 0 0 0 0 2 2 1 1 0 0 0 0 0 3 0 0 0 0 1 1 1 1</ns0:formula><ns0:p>The Table <ns0:ref type='table' target='#tab_0'>11</ns0:ref>. TF-IDF features from the preprocessed text of sample reviews.</ns0:p><ns0:p>No. absolute back choice come didnt do great gwyneth like 1 0.467351 0.000000 0.000000 0.000000 0.000000 0.000000 0.467351 0.467351 0.000000 2 0.000000 0.000000 0.346821 0.000000 0.346821 0.346821 0.000000 0.000000 0.346821 3 0.000000 0.408248 0.000000 0.408248 0.000000 0.000000 0.000000 0.000000 0.000000 No. movie number own paltrow show tel wish would 1 0.355432 0.000000 0.000000 0.467351 0.000000 0.000000 0.000000 0.000000 2 0.527533 0.346821 0.346821 0.000000 0.000000 0.000000 0.000000 0.000000 3 0.000000 0.000000 0.000000 0.000000 0.408248 0.408248 0.408248 0.408248</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Parameters</ns0:head><ns0:p>Performance evaluation of the classifiers requires evaluation metrics for which accuracy, precision, recall, and F1 score are selected concerning their wide use. The introduction of the confusion matrix is necessary to define the mathematical formulas for these evaluation metrics. The confusion matrix as shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> can be considered as an error matrix that indicates four quantities. 
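Before detailing these four quantities, the preprocessing pipeline walked through above (punctuation removal, numeric removal, lowercasing, Porter stemming, and stop-word removal) can be summarized in code. The sketch below assumes NLTK's PorterStemmer and English stop-word list; note that actual Porter stems (e.g., "absolut", "movi") may differ slightly from the illustrative forms shown in Tables 8 and 9.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(review):
    review = re.sub(r"[^\w\s]", " ", review)   # remove punctuation
    review = re.sub(r"\d+", " ", review)       # remove numeric values
    review = review.lower()                    # convert to lower case
    tokens = [stemmer.stem(t) for t in review.split() if t not in stop_words]
    return " ".join(tokens)                    # stemmed text without stop words

print(preprocess("@Gwyneth Paltrow is absolutely... !!!great in this movie."))
```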
The confusion matrix shows true positive (TP), false positive (FP), true negative (TN), and false-negative (FN). Each row of the matrix represents the actual labels while each column represents predicted labels <ns0:ref type='bibr' target='#b27'>(Landy and Szalay, 1993)</ns0:ref>. TP indicates that the classifier predicted the review as positive and the original label is also positive.</ns0:p><ns0:p>A review is TN if it belongs to the negative class and the real outcome is also negative. In the FP case, the review is predicted as positive, but the original label is negative. Similarly, a review is called FN if it belongs to the positive class but the classifier predicted it as negative <ns0:ref type='bibr' target='#b42'>(Rokach and Maimon, 2005)</ns0:ref>.</ns0:p><ns0:p>Accuracy is a widely used evaluation metrics and indicates the ratio of true predictions to the total predictions. It has a maximum value of 1 for 100 percent correct prediction and the lowest value of 0 for 0 percent prediction. Accuracy can be defined as Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_8'>Recall = T P T P + FN (9)</ns0:formula><ns0:p>F1 score is considered an important parameter to evaluate the performance of a classifier and has been regarded as more important than precision and recall. It defines how precise and robust is the classifier by incorporating precision and recall <ns0:ref type='bibr' target='#b13'>(Bruce et al., 2002)</ns0:ref>. F1 score value varies between 0 and 1 where 1</ns0:p><ns0:p>shows the perfect performance of the classifier. F1 score is defined as</ns0:p><ns0:formula xml:id='formula_9'>F1Score = 2 &#215; precision &#215; recall precision + recall (10)</ns0:formula></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>This study uses four machine learning classifiers to classify movie reviews into positive and negative reviews, such as DT. SVM, RF, and GBC. Four feature extraction approaches are utilized including TF-IDF, BoW, Word2Vec, and GloVe on the selected dataset to extract the features. Results for these feature extraction approaches are discussed separately. Similarly, the influence of TextBlob annotated data on the classification accuracy is analyzed. The contradictions in the sentiments expressed in the reviewers and the assigned sentiments cannot be ignored, so TextBlob is used to annotate the labels.</ns0:p><ns0:p>Several experiments are performed using the standard, as well as, the TextBlob annotated dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Results using BoW Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_5'>12</ns0:ref> shows the classification accuracy of the machine learning classifiers when BoW features to train and test the classifiers. Results indicate that SVM can achieve an accuracy of 0.87 with BoW features.</ns0:p><ns0:p>Overall, the performance of all the classifiers is good except for DT whose accuracy is 0.72. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Results using TF-IDF Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_7'>14</ns0:ref> contains the accuracy results for the classifiers using the TF-IDF features. It shows that the performance of the SVM has been elevated with an accuracy of 0.89 which is 2.29% higher than that of using BoW features. 
Unlike BoW which counts only the frequency of terms, TF-IDF also records the importance of terms by assigning higher weights to rare terms. So, the performance is improved when TF-IDF features are used as compared to BoW features. SVM performs better for text classification than other supervised learning models, especially in the case of large datasets as this algorithm is derived from the theory of structural risk minimization <ns0:ref type='bibr' target='#b34'>(Mouthami et al., 2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Classifiers Results using GloVe Features</ns0:head><ns0:p>Experimental results using GloVe features are shown in Table <ns0:ref type='table' target='#tab_9'>16</ns0:ref> for the selected classifiers. Results</ns0:p><ns0:p>suggest that the performance of all the classifiers has been degraded when trained and tested on GloVe features. Glove features are based on the global word-to-word co-occurrence and count the co-occurred terms from the entire corpus. GloVe model is traditionally used with deep learning models where it helps to better recognize the relationships between the given samples of the dataset. In machine learning models, its performance is poor than that of TF-IDF features <ns0:ref type='bibr' target='#b15'>(Dessi et al., 2020)</ns0:ref>. SVM and RF outperform other models using GloVE features. </ns0:p></ns0:div> <ns0:div><ns0:head>Results using Word2Vec Features</ns0:head><ns0:p>Performance elevation metrics for all the classifiers using the Word2Vec features are given in Table <ns0:ref type='table' target='#tab_0'>17</ns0:ref>.</ns0:p><ns0:p>Results indicate that the performance of the classifiers is somehow better when trained and tested on Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Word2Vec features in comparison with GloVe features results. The performance of classifiers is not significant using Word2Vec features in comparison to the results of the classifiers using BoW and TF-IDF features. SVM achieved the highest accuracy of 0.88 with Word2Vec features as compared to other models because Word2Vec gives linear features set which is more suitable for SVM as compared to RF, GBC, and DT.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>17</ns0:ref>. Performance evaluation of classifiers using Word2Vec features. </ns0:p></ns0:div> <ns0:div><ns0:head>Model Accuracy Precision</ns0:head></ns0:div> <ns0:div><ns0:head>Results on TextBlob annotated dataset</ns0:head><ns0:p>The contradictions in the users' expressed sentiments in the reviews and assigned labels can influence the sentiment classification accuracy of the models. To resolve this issue, TextBlob annotated data are used for the performance evaluation of the models. Results suggest that Textblob annotated dataset gives more accurate target sentiment with the proposed approach. Similarly, the performance of machine learning models is also improved.</ns0:p></ns0:div> <ns0:div><ns0:head>Results using BoW Features</ns0:head><ns0:p>The performance of models with BoW and TextBlob sentiments are shown in Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the performance elevation is that TextBlob given sentiments have a high correlation with the sentiments in the reviews. 
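For clarity about how the scores in these tables are obtained, the general experimental procedure (75/25 train/test split, training on the extracted features, and computing accuracy, precision, recall, and F1) is sketched below for SVM with TF-IDF features. This is a minimal sketch assuming scikit-learn; it illustrates the procedure rather than reproducing the exact configuration of the study.

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_svm(reviews, labels):
    # 75/25 split of the preprocessed reviews, as described in the methodology
    X_train_txt, X_test_txt, y_train, y_test = train_test_split(
        reviews, labels, test_size=0.25, random_state=0)
    vec = TfidfVectorizer()
    X_train = vec.fit_transform(X_train_txt)   # fit on training reviews only
    X_test = vec.transform(X_test_txt)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="weighted")
    return {"accuracy": accuracy_score(y_test, y_pred),
            "precision": precision, "recall": recall, "f1": f1}
```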
</ns0:p></ns0:div> <ns0:div><ns0:head>Results using GloVe Features</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_14'>20</ns0:ref> shows the performance comparison of models using GloVe features and TextBlob sentiments and it indicates that using the performance using GloVe features and TextBlob sentiments is better as compared to their performance on the original sentiments and GloVe features. These results show the significance of TextBlob with the proposed approach. Compared to the performance on the original dataset, the accuracy of the models has been improved significantly when used with TextBlob assigned sentiments. For example, the highest accuracy with GloVe features and TextBlob sentiment is 0.81 which was only 0.75 on the original sentiments. However, the performance of the machine learning models is inferior to that of BoW and TF-IDF. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the performance of other models such as RF, GBC, and DT has also been improved with the TextBlob sentiments. The primary reason for comparatively lower accuracy with the original sentiments is the contradiction in the expressed sentiments and the assigned labels. Using TextBlob the assigned sentiments have a high correlation to the sentiments given in the users' reviews. </ns0:p></ns0:div> <ns0:div><ns0:head>Performance of Deep Learning Models</ns0:head><ns0:p>To compare the performance of the proposed approach with the latest deep learning approach, experiments have been performed using several deep learning models. For this purpose, state-of-the-art deep learning models are used such as LSTM, CNN-LSTM, and Gated Recurrent Unit (GRU). The architecture of used deep learning models is provided in Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>. These models are used with the TextBlob annotated dataset Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>owing to the superior results on the dataset from machine learning models. to that of state-of-the-art approaches. The use of SVM with TF-IDF and BoW using lexicon technique provides an accuracy of 92% which is better than the state-of-the-art approaches.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistical T-test</ns0:head><ns0:p>The T-test is performed in this study to show the statistical significance of the proposed approach. The T-test is applied to SVM results with the proposed approach and original dataset. The output from the T-test favors either null hypothesis or alternative hypothesis.</ns0:p><ns0:p>&#8226; Accept Null Hypothesis: This means that the compared results are statistically equal. 
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b16'>(Giatsoglou et al., 2017)</ns0:ref> BoW-DOUBLE and Average emotion-DOUBLE 0.83 2018 <ns0:ref type='bibr' target='#b32'>(Mathapati et al., 2018)</ns0:ref> CNN 0.89 2019 <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref> CNN+LSTM 0.89 2019 <ns0:ref type='bibr' target='#b12'>(Bodapati et al., 2019)</ns0:ref> LSTM + DNN 0.885 2020 <ns0:ref type='bibr' target='#b52'>(Tripathi et al., 2020)</ns0:ref> TF-IDF +LR 0.891 2020 <ns0:ref type='bibr' target='#b41'>(Qaisar, 2020)</ns0:ref> LSTM 0.899 2020 <ns0:ref type='bibr' target='#b48'>(Shaukat et al., 2020)</ns0:ref> NN 0.91 2021 <ns0:ref type='bibr' target='#b24'>(Jain and Jain, 2021a)</ns0:ref> CNN 0.883 2021 <ns0:ref type='bibr' target='#b35'>(Nafis and Awang, 2021)</ns0:ref> SVM + (SVM-RFE) 0.895 2021 <ns0:ref type='bibr' target='#b25'>(Jain and Jain, 2021b)</ns0:ref> NB+ ARM 0.784 2021 Proposed SVM + Lexicon +BoW &amp; TF-IDF 0.92</ns0:p><ns0:p>&#8226; Reject Null Hypothesis: This means that the compared results are not statistically equal. </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>With an ever-growing production of cinema movies, web series, and television dramas, a large number of reviews can be found on social platforms and movies websites like IMDB. Sentiment analysis of such reviews can provide insights about the movies, their team, and cast to millions of viewers. This study proposes a methodology to perform sentiment analysis on the movie reviews using supervised machine learning classifiers to assist the people in selecting the movies based on the popularity and interest of the reviews. Four machine learning algorithms including DT, RF, GBC, and SVM are utilized for sentiment analysis that is trained on the dataset preprocessed through a series of steps. Moreover, four feature extraction approaches including BoW, TF-IDF, GloVe, and Word2Vec are investigated for their efficacy in extracting the meaningful and effective features from the reviews. Results indicate that SVM achieves the highest accuracy among all the classifiers with an accuracy of 89.55% when trained and tested using TF-IDF features. The performance using BoW features is also good with an accuracy of 87.25%. Contrary to BoW which counts the occurrence of unique tokens, TF-IDF also records the importance of rare terms by assigning a higher weight to rare terms and perform better than BoW. However, the performance of the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science current study excludes the neutral class due to a low number of samples and experiments are performed using positive and negative classes. Consequently, the accuracy may have been higher as compared to that with three classes. Similarly, probable class imbalance by adding neutral class samples is not investigated and is left for the future. We intend to perform further experiments using movie reviews datasets from other sources in the future. Furthermore, the study on finding the contradictions in the sentiments expressed in the reviews and the assigned labels is also under consideration.</ns0:p><ns0:p>Financial disclosure</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>which comprises CNN and LSTM networks. The model aims at the word-level classification of the IMDB reviews. The output of the CNN network is fed into an LSTM before being concatenated and sent to a fully connected layer to 3/21 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021) Manuscript to be reviewed Computer Science produce a single final output. Various regularization techniques, network structures, and kernel sizes are used to generate five different models for classification. The designed models can predict the sentiment polarity of IMDB reviews with 89% or higher accuracy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .Figure 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1. The work flow of proposed methodology for movie review classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>data are split into training and testing sets in a 75 to 25 ratio. Machine learning classifiers are 10/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021) Manuscript to be reviewed Computer Science trained on the training set while the test set is used to evaluate the performance of the trained models. For evaluating the performance, standard well-known parameters are used such as accuracy, precision, recall, and F1 score.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Confusion matrix.</ns0:figDesc><ns0:graphic coords='12,240.52,343.45,216.00,114.94' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>the accuracy of predicting the positive cases. It shows what proportion of the positively predicted cases is originally positive. It is defined as Precision = T P T P + FP (8) Recall calculates the ratio of correct positive cases to the total positive cases. To get the ratio, the total number of TP is divided by the sum of TP and FN as follows 11/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>F1</ns0:head><ns0:label /><ns0:figDesc>score indicates that its value is the same with both positive and negative classes for all the classifiers, except for GBC who has F1 scores of 0.86 and 0.85 for positive and negative classes, respectively.Precision values are slightly different for positive and negative classes; for example, SVM has a precision of 0.88 and 0.90 for positive and negative classes. Similarly, although precision, recall, and F1 score of DT are the lowest but the values for positive and negative classes are almost the same. An equal number of the training samples in the dataset makes a good fit for the classifiers, and their accuracy and F1 scores are in agreement.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>. W avg. Pos. Neg. W avg. Pos. Neg. W avg. machine learning models results on original dataset sentiment with BoW, TF-IDF, GloVe, and Word2Vec features are shown in Figure4. SVM is significant with all features and achieved the best score with BoW, TF-IDF, and Word2Vec. This significant performance of SVM is because of its linear architecture and binary classification problem. SVM is more significant on linear data for binary classification with its linear kernel as shown in this study.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Performance comparison between machine learning models using original dataset and BoW, TF-IDF, GloVe, Word2Vec features.</ns0:figDesc><ns0:graphic coords='15,150.52,314.90,396.02,231.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance comparison between machine learning models using TextBlob dataset and BoW, TF-IDF, GloVe, Word2Vec features.</ns0:figDesc><ns0:graphic coords='17,150.52,239.11,396.02,231.38' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. LSTM, CNN-LSTM, and GRU architectures .</ns0:figDesc><ns0:graphic coords='18,150.52,87.16,396.01,213.20' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>classifiers is greatly affected by GloVe and Word2Vec features which suggest that word embedding does not work well with the movie review dataset. For improving the performance of models and reducing the influence of contradictions found in the expressed sentiments and assigned labels, lexicon-based TextBlob is used for data annotation. Experimental results on TextBlob annotated dataset indicates that SVM achieves the highest accuracy of 92% with TF-IDF features. Compared to the standard dataset, the TextBlob assigned sentiments have a high correlation with the users' expressed sentiments. The performance of machine learning models is slightly lower than machine learning models with the highest accuracy of 0.90 by the CNN-LSTM. Despite the equal number of positive and negative reviews used for training, the prediction accuracy for the positive and negative classes is different. Precision, recall, and F1 score indicate the models have a good fit, and performance evaluation metrics are in agreement. The 18/21 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comprehensive summary of research works discussed in the related work.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Aim</ns0:cell></ns0:row><ns0:row><ns0:cell>(Singh et al., 2013a)</ns0:cell><ns0:cell>Lexicon-Based</ns0:cell><ns0:cell>SentiWordNet</ns0:cell><ns0:cell>Movie review classification</ns0:cell></ns0:row><ns0:row><ns0:cell>(Singh et al., 2013b)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>RLPI , Hybrid Fea-</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell>tures, KNN</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>(Yenter and Verma, 2017) Deep Learning</ns0:cell><ns0:cell>CNN LSTM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>(Giatsoglou et al., 2017)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>BoW-DOUBLE and</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell>Average emotion-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>DOUBLE</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Mathapati et al., 2018)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>(Ali et al., 2019)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>Multilayer percep-</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tron, CNN and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Minaee et al., 2019)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>IMDB review and Stanford</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>sentiment treebank v2 (SST2)</ns0:cell></ns0:row><ns0:row><ns0:cell>(Qaisar, 2020)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>(Shaukat et al., 2020)</ns0:cell><ns0:cell>Deep &amp; Machine</ns0:cell><ns0:cell>NN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Learning</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Jain and Jain, 2021a)</ns0:cell><ns0:cell>Deep Learning</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell>(Nafis and Awang, 2021)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell cols='2'>SVM + (SVM-RFE) IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Jain and Jain, 2021b)</ns0:cell><ns0:cell>Machine Learn-</ns0:cell><ns0:cell>NB+ ARM</ns0:cell><ns0:cell>IMDB reviews classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:cell><ns0:cell>4/21</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Hyperparameters used for optimizing the performance of models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Hyperparameters</ns0:cell><ns0:cell /><ns0:cell>Values range used for tuning</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>n estimators=300,</ns0:cell><ns0:cell>random state=50,</ns0:cell><ns0:cell>n estimators = {50 to 500}, random state =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth=300</ns0:cell><ns0:cell /><ns0:cell>{2 to 60}, max depth = {50 to 500}</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell cols='2'>kernel= 'linear' , C=3.0, random state=50</ns0:cell><ns0:cell>kernel= {'linear', 'poly', 'sigmoid'} ,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>C={1.0 to 5.0}, random state={2 to 60}</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell cols='2'>random state=50, max depth=300</ns0:cell><ns0:cell>random state = {2 to 60}, max depth = {50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>to 500}</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>n estimators=300,</ns0:cell><ns0:cell>random state=50,</ns0:cell><ns0:cell>n estimators = {50 to 500}, random state =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>max depth=300, learning rate=0.2</ns0:cell><ns0:cell>{2 to 60}, max depth = {50 to 500}, learn-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ing rate = {0.1 to 0.8}</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>BoW features from the preprocessed text of sample reviews.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Accuracy of the selected models with BoW features.Performance of the classifiers is given in Table13in terms of precision, recall, and F1 score. The</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Classifier Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Performance evaluation metrics using BoW features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='6'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. 
Recall F1 score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.71 0.72</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.72 0.71</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.72 0.72</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.85 0.87</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.88 0.84</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.83 0.87</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.88 0.82</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.86 0.88</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.88 0.86</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.87 0.87</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row></ns0:table><ns0:note>12/21PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Accuracy of models with TF-IDF features.Results for precision, recall, and F1 score are given in Table15. Experimental results indicate that the F1 score is the same for positive and negative classes for all classifiers which indicates the good fit of the modes on the training data. On the other hand, precision for positive and negative classes is slightly different. For example, GBS has a precision of 0.84 and 0.87 while SVM has a precision of 0.88 and 0.90 for positive and negative classes, respectively.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Classifier Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.71</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Performance evaluation metrics using TF-IDF features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='6'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.72 0.71</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.70 0.72</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.71 0.71</ns0:cell><ns0:cell>0.71</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.84 0.87</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.88 0.83</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86 0.85</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.88 0.90</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.90 0.88</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.89 0.89</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Performance of classifiers using GloVe features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. 
Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.64 0.65 0.65</ns0:cell><ns0:cell>0.64 0.65 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.75 0.74 0.74</ns0:cell><ns0:cell>0.72 0.77 0.74</ns0:cell><ns0:cell>0.73 0.75 0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell><ns0:cell>0.65 0.66 0.65</ns0:cell><ns0:cell>0.65 0.65 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell><ns0:cell>0.75 0.75 0.75</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Results indicatethat SVM achieved its highest accuracy of the study 0.92 with TextBlob sentiments and BoW features.While the performance of other models such as RF, GBC, and DT has also been improved. The reason for</ns0:figDesc><ns0:table /><ns0:note>14/21PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Performance evaluation of classifiers using BoW features on TextBlob annotated dataset.The performance of models with TF-IDF features and TextBlob sentiments are shown in Table19. SVM achieves its highest accuracy score of 0.92 with TextBlob sentiment and TF-IDF features. While other models such as RF, GBC, and DT repeat their performances with TF-IDF features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.85 0.61 0.73</ns0:cell><ns0:cell>0.87 0.57 0.72</ns0:cell><ns0:cell>0.87 0.59 0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.84 0.90 0.87</ns0:cell><ns0:cell>0.98 0.47 0.72</ns0:cell><ns0:cell>0.90 0.62 0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.85 0.70 0.78</ns0:cell><ns0:cell>0.92 0.55 0.73</ns0:cell><ns0:cell>0.98 0.62 0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell><ns0:cell>0.94 0.84 0.89</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Results using TF-IDF Features</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Performance evaluation of classifiers using TF-IDF features on TextBlob annotated dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. 
Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.85 0.62 0.73</ns0:cell><ns0:cell>0.87 0.58 0.72</ns0:cell><ns0:cell>0.86 0.60 0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.85 0.88 0.87</ns0:cell><ns0:cell>0.98 0.51 0.74</ns0:cell><ns0:cell>0.91 0.65 0.78</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.86 0.73 0.79</ns0:cell><ns0:cell>0.92 0.57 0.75</ns0:cell><ns0:cell>0.89 0.64 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.92 0.88 0.90</ns0:cell><ns0:cell>0.96 0.78 0.87</ns0:cell><ns0:cell>0.94 0.82 0.88</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Performance evaluation of classifiers using GloVe features on TextBlob annotated dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Model Accuracy</ns0:cell><ns0:cell cols='3'>Precision Pos. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg. Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.81 0.47 0.64</ns0:cell><ns0:cell>0.81 0.48 0.64</ns0:cell><ns0:cell>0.81 0.48 0.64</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.71 0.81 0.76</ns0:cell><ns0:cell>0.94 0.39 0.67</ns0:cell><ns0:cell>0.87 0.51 0.69</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.81 0.47 0.64</ns0:cell><ns0:cell>0.81 0.48 0.65</ns0:cell><ns0:cell>0.81 0.48 0.64</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.83 0.71 0.77</ns0:cell><ns0:cell>0.93 0.46 0.70</ns0:cell><ns0:cell>0.88 0.56 0.72</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Results using Word2Vec Features</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_15'><ns0:head /><ns0:label /><ns0:figDesc>Table21shows the performance of machine learning models with Word2Vec features and Textblob sentiments. SVM achieves significantly better accuracy with Word2Vec features as compared to GloVe features. It gives a 0.88 accuracy score which is more than GloVe features but lower than BoW and TF-IDF features. RF and GBC achieve 0.79 and 0.70 accuracy scores, respectively. The performance of DT is degraded when used with Word2Vec features.The comparison between machine learning model results on TextBlob sentiment dataset with BoW, TF-IDF, GloVe, and Word2Vec features is given in Figure5. SVM obtains better results with TextBlob sentiments using BoW and TF-IDF features as compared to GloVe and Word2Vec features. Similarly,</ns0:figDesc><ns0:table /><ns0:note>15/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 21 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Performance evaluation of classifiers using Word2Vec features on TextBlob annotated dataset. Neg. W avg. Pos. Neg. W avg. Pos. Neg. W avg.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Precision Pos. 
DT Model Accuracy 0.69 0.78 0.87 0.83</ns0:cell><ns0:cell>Recall 0.99 0.24 0.62</ns0:cell><ns0:cell>F1 Score 0.87 0.48 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.78 0.87 0.83</ns0:cell><ns0:cell>0.99 0.24 0.62</ns0:cell><ns0:cell>0.87 0.38 0.63</ns0:cell></ns0:row><ns0:row><ns0:cell>GBC</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell><ns0:cell>0.80 0.44 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.90 0.83 0.87</ns0:cell><ns0:cell>0.95 0.71 0.83</ns0:cell><ns0:cell>0.92 0.77 0.84</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 22 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Performance analysis of deep learning models</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='5'>Accuracy Class Precision Recall F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.87</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.78</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CNN-LSTM 0.90</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.88</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Neg.</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>GRU</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>Pos.</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Avg.</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_19'><ns0:head>Performance Analysis with State-of-The-Art Approaches Performance</ns0:head><ns0:label /><ns0:figDesc>analysis has been carried out to analyze the performance of the proposed approach with other state-of-the-art approaches that utilize the IMDB movie reviews analysis. Comparison results are provided in Table23. 
Results indicate that the proposed methodology can achieve competitive results</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_21'><ns0:head>Table 23 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Performance analysis of the proposed methodology.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year Reference</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>2016 (Sahu and Ahuja, 2016)</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2017 (Yenter and Verma, 2017) CNN+LSTM</ns0:cell><ns0:cell>0.895</ns0:cell></ns0:row><ns0:row><ns0:cell>2017</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_22'><ns0:head>Table 24 .</ns0:head><ns0:label>24</ns0:label><ns0:figDesc>Statistical T-test output values The output values of T-test in terms of T-statistic and critical Value are shown in Table 24. T-test infers that the null hypothesis can be rejected in favor of the alternative hypothesis because the T-statistic value is less than the critical value indicating that the compared values are statistically different from each other.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Student T-test output parameters Output value</ns0:cell></ns0:row><ns0:row><ns0:cell>T-statistic</ns0:cell><ns0:cell>-0.182</ns0:cell></ns0:row><ns0:row><ns0:cell>Critical Value</ns0:cell><ns0:cell>0.000</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='21'>/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:61877:2:0:NEW 17 Dec 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
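To make the Statistical T-test step above concrete, the following is a minimal sketch of how such a paired comparison can be run with SciPy. The per-run accuracy arrays are hypothetical placeholders for the SVM scores obtained with the proposed (TextBlob-annotated) labels and with the original labels; they are not the values behind Table 24.

```python
# Minimal sketch of the T-test comparing SVM results on the proposed
# (TextBlob-annotated) labels against results on the original labels.
# The score arrays below are hypothetical placeholders, not values from Table 24.
from scipy.stats import ttest_rel

svm_scores_proposed = [0.92, 0.91, 0.93, 0.92, 0.90]   # hypothetical per-run accuracies
svm_scores_original = [0.89, 0.88, 0.90, 0.89, 0.87]   # hypothetical per-run accuracies

t_statistic, p_value = ttest_rel(svm_scores_proposed, svm_scores_original)
print(f"T-statistic: {t_statistic:.3f}, p-value: {p_value:.3f}")

# The null hypothesis (the compared results are statistically equal) is rejected
# when the p-value falls below the chosen significance level, e.g. 0.05.
alpha = 0.05
print("Reject null hypothesis" if p_value < alpha else "Accept null hypothesis")
```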
"1 Response to Comments Manuscript ID: CS-2021:06:61877 Title: Classification of movie reviews using term frequency-inverse document frequency and optimized machine learning algorithms Authors: Muhammad Zaid Naeem, Furqan Rustam, Arif Mehmood, Mui-zzud-din, Imran Ashraf* and Gyu Sang Choi* Dear Editor, Thank you very much for allowing us to revise the manuscript. We would like to thank the editor and all the reviewers for their valuable comments and suggestions. Based on the feedback, we have revised our manuscript. The detailed modifications to address reviewers’ comments are provided in the following. For clarity, we have marked our responses in blue. Whenever we copy a paragraph from the manuscript here, we mark it as a magenta color. Answers to Comments I. R EVIEWER 5 Comment 1: In section ”Related Work” authors present a wide research in the context of the gap of knowledge that they want to fill. However, two of the works in Table 1 are not reviewed in the text above: Giatgoglou et al (2017) and Mathapati et al (2018). Response: Thank you very much. As suggested by the worthy reviewer, Related Work section has been updated. Kindly see Lines 162 167 of the revised manuscript. The study (Giatsoglou et al., 2017) proposed an approach for sentiment analysis using a machine learning model. A hybrid feature vector is proposed by combining word2vec and BoW technique and experiments are performed using four datasets containing online user reviews in Greek and English language. In a similar fashion, (Mathapati et al., 2018) performs sentiment analysis on IMDB reviews using a deep learning approach. The study used a CNN and LSTM recurrent neural network to obtain significant accuracy on the IMDB reviews dataset. 2 Comment 2: I thank you for providing the code, however your code file need more comments to be useful to future readers. For example, the introduction doesn’t match with the code (Multi-layer Perceptron is not used, according to the paper). Besides, there aren’t any comments about the kind of features engineering method used in each point. Response: We would like to thank the reviewer for highlighting this issue in code and we updated the code according to reviewer concerns. Revised code is provided with this response file. Comment 3: The way of citing the bibliographic references is not correct the most of the times. The cites must be between parentheses if order to make the text more readable. For example, in line 48, instead of ”...as a significant research are during the last few years Hearst (2003)” should be ”...as a significant research are during the last few years (Hearst, 2003)” The kind of cite, using the name of the author outside of the parentheses, should only be used if the name of author is part of the sentence. For example: Singh et al (2013) conduct experimental work about... Response: Thank you very much for your suggestions. The citation format has been changed and now citations are enclosed in () within the manuscript text. Comment 4: a) Bag of Words is abbreviate as BoW, but sometimes it is written as Bow. Authors should revise the text to homogenize this acronym and others (RF) Response: We would like to thank the reviewer for pointing out such errors. We checked the use of BoW for consistency and revised accordingly. Comment 5: Some blank spaces are needed in lines 196 (between algorithms and Oghina), 201 (between consistency and Lee) Response: As advised by the worthy reviewer, the suggested changes are made in the revised manuscript. 
Comment 6: Equations must be referenced with their number between parentheses instead of saying ”the following”. For example, in lines 219 and 220, TF-IDF is calculated using TF and IDF as (3). Response: Thank you very much for your valuable suggestions. Equations are now properly referred. Comment 7: The sample reviews of the Table 5, 6, 7 and 8 contain point at the end of the sentences, when these were removed in Table 4. Response: The authors highly appreciate the reviewers positive feedback and suggestions. Sample text in Tables have been corrected. Comment 8: In Table 6, the first sample review after case lowering yet contains a ”P”, In line 325, it is indicated that Tables 11 and 13 show BoW and TF-IDF features, but the tables are 9 and 10. 3 Response: It has been corrected as pointed out. Comment 9: In line 181, bibliography reference to the data set is missing Response: Thank you very much. The reference for the dataset has been added. Comment 10: A bibliography reference to TextBlow is missing. It is a Python Library for processing textual data that provides a simple API for diving into common natural language processing (NLP) tasks such as sentiment analysis between others. It is not a lexicon-based technique. Response: We would like to thank the reviewer for highlighting this issue in manuscript and we updated the manuscript according to reviewer’s concern and also added citation for TextBlob. Comment 11: TextBlob is used to annotate the dataset with sentiments (negative or positive) in order to tackle with ”the contradiction in the user sentiments in the reviews and assigned labels” How do the authors know that classification of the reviews is contradictory and TextBlob result is better? Authors must explain this point. Response: Reviewer’s is concern is genuine. For clarification we added Table 3 in the revised manuscript which contains the ’original’ label from the dataset and label from the Textblob. Original label is ’Negative’ while Textblob assigned label is ’positive’. Textblob assigned labels are more reliable as is confirmed by the higher classification accuracy from the models when used with Textblob labeled dataset. Table 3: Contradiction in Textblob and original dataset labels. Review movie makers always author work mean yes things condensed sake viewer interest look anne green gables wonderful job combining important events cohesive whole simply delightful believe chose combine three novels together anne avonlea dreadful mess look missed paul irving little elizabeth widows windy poplars anne college years heaven sake delightful meet priscilla rest redmond gang kevin sullivan taken things one movie time instead jumbling together combining characters events way movie good leave novels montgomery beautiful work something denied movie let seeing successful way brough anne green gables life Textblob Positive Original Negative Comment 12: In line 197, what does authenticity of a learning algorithm mean? or it is a mistake and it should be ”accuracy”. Response: The authors are grateful for your useful feedback. It was a typo that has been corrected. 4 Comment 13: In subsection ”Feature Engineering Methods” only Bag of Words and TF-IDF are explained. Authors should also explain and reference the other two methods used in their study (Global Vector and Word2Vec). Response: Thank you very much. Word2Vec and GloVe have been discussed on Page 6 of the revised manuscript. 
Word2Vec Word2Vec is one of the widely used NLP techniques for feature extraction in text mining that transforms text words into vectors (Wang et al., 2016). Given a corpus of text,Word2Vec uses a neural network model for learning word associations. Each unique word has an associated list of numbers called ’vector’. The cosine similarity of the vectors represents the semantic similarity between the words that are represented by vectors. GloVe GloVe from Global Vectors is an unsupervised model used to obtain words’ vector representation (Bhoir et al., 2017). The vector representation is obtained by mapping the words in a space such that the distance between the words represents the semantic similarity. Developed at Stanford, GloVe can determine the similarity between words and arrange them in the vectors. The output matrix by the GloVe gives vector space of word with linear substructure. Comment 14: Authors should revise the equation (1) and its explanation in line 217. In it TFij appears, but the explanation talk about term ”t” and document ”d”. Response: We have modified Equation (1) as suggested by the reviewer. Comment 15: In subsection ”Supervised machine learning Models”, author explain Naive Bayes, but in the Introduction and in the results, they used Gradient Boosting Classifier. Since gradient boosting classifiers are a group of machine learning algorithms that combine many weak learning models together to create a strong predictive model, which weak learning models does the gradient boosting classified used combine? Response: We appreciate reviewer’s time and effort for careful review of the article. We replaced the Naive Bayes section with description of Gradient Boosting classifier on Page 7 of the revised manuscript. Gradient Boosting Classifier GBC is an ensemble classifier used for classification tasks with enhanced accuracy base on boosting (Ayyadevara, 2018). It combines many weak learners sequentially to reduce the error gradually. This study uses the GBC with decision tree as a weak learner. GBC performance depends on the loss function and mostly the logarithmic loss function is used for classification. In addition, weak learners and adaptive components are important parameters of GBC. The hyperparameters setting of GBC used in this study is shown in Table 4. GBC is used with 300 n estimators indicating that 300 weak 5 learners (decision trees) are combined under boosting method and each tree is restricted to 300 max depth. The learning rate is set to 0.2 which helps to reduce model overfitting (Rustam et al., 2020). Comment 16: The title of the paper contains ”optimized machine learning algorithms” and Table 3 shows the hyperparameters used for optimizing the performance of models. Author should explain how and why they have chosen these parameters to ensure the use of optimized machine learning algorithms. Response: Thank you very much for your feedback. Regarding optimization of machine learning models, different values for hyperparameters are obtained from the literature review. However, since different values have been used from different studies, we defined values range, e.g., ’n estimator=50 to 500’ to analyze the performance of a model with different values of parameters. In addition, the importance of different parameters for different models is also analyzed. Value ranges for different parameters used during the fin-tuning process are provided in Table 4 of the revised manuscript. Table 4: Hyperparameters used for optimizing the performance of models. 
Model RF Hyperparameters n estimators=300, random state=50, max depth=300 SVM kernel= ‘linear’ random state=50 DT random state=50, max depth=300 GBC n estimators=300, random state=50, max depth=300, learning rate=0.2 , C=3.0, Values range used for tuning n estimators = {50 to 500}, random state = {2 to 60}, max depth = {50 to 500} kernel= {‘linear’, ‘poly’, ‘sigmoid’} , C={1.0 to 5.0}, random state={2 to 60} random state = {2 to 60}, max depth = {50 to 500} n estimators = {50 to 500}, random state = {2 to 60}, max depth = {50 to 500}, learning rate = {0.1 to 0.8} Comment 17: The BoW features showed in Table 9 don’t match with those in the preprocessed text of sample reviews in Table 8. TF-IDF values in Table 10 are miscalculated. If a word have a a frequency of 0 in a document, its TF-IDF can’t be upper 0 Response: We updated the manuscript to rectify these errors. Comment 18: In Figure 3, rows of the matrix represent actual labels and columns represent predicted labels. However, in lines 337 and 338 the opposite appears. Response: Description for Figure 3 has been updated. Comment 19: The results are not very conclusive and the analysis of them it is merely a reading of the table with no discussion or a deep explanation. 6 Response: Discussions of the results have been expanded. The performance of machine learning models is good when used with TF-IDF features extracted from the original dataset and SVM outperforms with a significant 0.89 accuracy score. TF-IDF generates a weighted feature set as compared to BoW, GloVe and Word2Vec features which helps to improve the accuracy of learning models. On the other hand, the accuracy of DT is reduced by 1% from 72% to 71% because DT is a rule-based model that performs well on simple term frequency as compared to weighted features. Weighted features introduce complexity in DT learning process. SVM performs well because TF-IDF provides linear feature set with binary class which is more suitable for SVM that performs better being the linear model. Performance of machine learning models is improved with Textblob data annotation. It is so because the contradictions in the labels of several tweets and the sentiments actually expressed in the tweets, can be resolved by Textblob as shown in Table 3. Machine learning models perform well with TF-IDF and BoW features and SVM otains the highest accuracy of 0.92 accuracy score indicating that Textblob annotation is more correlated with text features as compared to original labels. Comment 20: There are not statics test to see if the differences are significant. Response: Statistical T-test has been added on Page 17 of the revised manuscript. Statistical T-test The T-test is performed in this study to show the statistical significance of the proposed approach. The T-test is applied to SVM results with the proposed approach and original dataset. The output from the T-test favors either null hypothesis or alternative hypothesis. • Accept Null Hypothesis: This means that the compared results are statistically equal. • Reject Null Hypothesis: This means that the compared results are not statistically equal. Table 5: Statistical T-test output values Student T-test output parameters T-statistic Critical Value Output value -0.182 0.000 The output values of T-test in terms of T-statistic and critical Value are shown in Table 5. 
T-test infers that the null hypothesis can be rejected in favor of the alternative hypothesis because the T-statistic value is less than the critical value indicating that the compared values are statistically different from each other. Thank you very much. 7 Sincerely, Authors. "
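As a compact illustration of the pipeline described in the responses to Comments 11, 15, and 16 — TextBlob-based re-annotation, TF-IDF features, a 75/25 split, and the SVM settings listed in Table 4 — a minimal sketch follows. The sample reviews and the polarity threshold (polarity >= 0 mapped to positive) are assumptions made only for illustration; the study itself uses the full IMDB review dataset.

```python
# Minimal sketch: TextBlob re-annotation, TF-IDF features, and the SVM
# hyperparameters of Table 4 (linear kernel, C=3.0, random_state=50).
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical preprocessed reviews standing in for the IMDB dataset.
reviews = [
    "wonderful movie delightful story excellent cast",
    "beautiful work simply delightful good job",
    "great acting wonderful direction excellent film",
    "good story beautiful scenes delightful ending",
    "dreadful mess terrible acting awful script",
    "boring film bad story terrible direction",
    "awful movie dreadful plot bad acting",
    "terrible film boring scenes awful ending",
]

# Lexicon-based annotation with TextBlob (Comment 11): the polarity sign decides
# the label; the >= 0 threshold is an illustrative assumption.
labels = ["positive" if TextBlob(r).sentiment.polarity >= 0 else "negative" for r in reviews]

# TF-IDF features and the 75/25 train-test split used in the paper.
X = TfidfVectorizer().fit_transform(reviews)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=50, stratify=labels)

# SVM configured as in Table 4.
svm = SVC(kernel="linear", C=3.0, random_state=50)
svm.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```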
Here is a paper. Please give your review comments after reading it.
371
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper proposes a pattern-based stock trading system using ANN-based deep learning and utilizing the results to analyze and forecast highly volatile stock price patterns. Three highly volatile price patterns containing at least a record of the price hitting the daily ceiling in the recent trading days are defined. The implications of each pattern are briefly analyzed using chart examples. The training of the neural network was conducted with stock data filtered in three patterns and trading signals were generated using the prediction results of those neural networks. Using data from the KOSPI and KOSDAQ markets, It was found that that the proposed pattern-based trading system can achieve better trading performances than domestic and overseas stock indices. The significance of this study is the development of a stock price prediction model that exceeds the market index to help overcome the continued freezing of interest rates in Korea. Also, the results of this study can help investors who fail to invest in stocks due to the information gap.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Predicting stock prices has long been of interest to many related fields including economics, mathematics, physics, and computer science. There is an ongoing debate about whether or not it is possible to predict stock prices and, if it is possible, how much these predictions can outperform the market. However, the field of AI (artificial intelligence) has recently reported price forecasting techniques with the application of various machine learning techniques that show a significant level of statistical confidence <ns0:ref type='bibr' target='#b21'>(Hsu, 2011;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hadavandi et al., 2010;</ns0:ref><ns0:ref type='bibr'>G.Armano et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b13'>Ding et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jiang, 2021)</ns0:ref>. A number of studies building intelligent trading systems have also been conducted based on the results of these AI price forecasting techniques <ns0:ref type='bibr' target='#b35'>(Lin et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b28'>O et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b41'>Song et al., 2020)</ns0:ref>. <ns0:ref type='bibr'>O et al. (2004, 2006)</ns0:ref> concluded that trading performance can be additionally improved by training and utilizing independent predictors for different stock price patterns. Most of the existing stock price prediction technical analyses have based the input features on the moving average (MA) stock price, which can effectively express the recent trends of price fluctuations. For example, the MACD (moving average convergence divergence) utilizes the difference between the long term and short-term moving average to represent the convergence and divergence of the moving average values. <ns0:ref type='bibr'>O et al. (2006)</ns0:ref> performed pattern-defined predictions using patterns related to a crossover, reversal into an uptrend, and reversal into a downtrend among 5-day, 10-day and 20-day moving average lines. However, all of the MA-based technical indicators, including the MACD, have a 'time-lag' limitation because buy and sell signals are mostly generated after price trends have already been developed. 
This study will attempt to predict highly volatile stock price patterns by introducing the concept of the 'upper limit price,' which is defined independently from the moving average. This study will also utilize Japanese candlestick indicators and more short-term technical indicators to complement the time-lag problem. In the Japanese candlestick indicator, a candlestick summarizes the intraday variations of a stock price, expressing the differences between the opening price, highest, lowest, and closing prices, through which the most recent price fluctuations can be summarized more closely. According to various empirical analyses of the Korean stock market, the Korean stock market shows market inefficiency due to information asymmetry <ns0:ref type='bibr' target='#b34'>(Lee, 2007;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bark, 1991)</ns0:ref>. Although market inefficiency is lower than that of larger foreign stock markets, there is still an issue of the information gap. Therefore, this paper suggest that special investment and analysis information to overcome market inefficiency. However, since technical analysis indicators are price-and chart-based information that many people already know, a new chart analysis technique is needed. The efficient market hypothesis asserted by <ns0:ref type='bibr' target='#b14'>Fama (1965)</ns0:ref> is rejected if it exceeds the market rate of return using specific information. This study proposes a new 'highly volatile stock price pattern' that does not yet exist in technical analysis. Using the pattern proposed in this paper, it is possible to develop a predictive model that exceeds the market return. The results of this study are in conflict with the efficient market hypothesis. In summary, this study assumes market inefficiency in the Korean stock market and provides new information that is expected to affect price fluctuations. The highly volatile stock price pattern is defined by the relationship between the 'upper limit' and stock price in the Korean stock market. Through fund simulation it was found that investors can obtain efficient returns through a deep learning stock price prediction model using highly volatile patterns. It was also found that it was difficult to predict stock prices by only analyzing simple charts such as moving averages, so a definition of a particular pattern of variation is needed.. This pattern can be found when there are stock prices that show an upper limit. This pattern can also appear over various periods of time even when the chart shows the characteristics of a random walk.</ns0:p><ns0:p>The experiment was conducted based on related research that the Korean stock market shows more trend changes than random work characteristics <ns0:ref type='bibr' target='#b1'>(Aggarwal, 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ryoo &amp; Smith, 2002;</ns0:ref><ns0:ref type='bibr' target='#b3'>Ayadi &amp; Pyun, 1994)</ns0:ref>. Studies related to stock market analysis from China, India, and Mongolia, which are similar to Korea, were also unable to prove the random walk theory according to the stock market <ns0:ref type='bibr' target='#b18'>(Han et al., 2019;</ns0:ref><ns0:ref type='bibr'>Damdindori et al. 2016)</ns0:ref>. The present study will show that the proposed stock trading system can achieve better trading performances than domestic and overseas stock indices by performing pattern-specific training on highly volatile stock price patterns. 
The experiments were conducted on the stock variation data extracted from the price data of approximately 2,000 stocks listed in the KOSPI and KOSDAQ markets. The limitations of existing studies and the strengths of this study are as follows.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations of existing studies</ns0:head><ns0:p>1. Limitations of Profitability Verification: Existing studies suggest only accuracy or prediction error using stock price prediction models. However, the actual performance of the stock price prediction model requires return verification.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Limitations of using simple input features:</ns0:head><ns0:p>Existing studies have used very simple input features such as simple price features, volume, price rate of change and so on.</ns0:p><ns0:p>Simple input features are limited and using more advanced input features can greatly influence the performance of predictive models.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Limitations on prediction accuracy:</ns0:head><ns0:p>Most of the predictive models in existing studies have not achieved high accuracy.</ns0:p></ns0:div> <ns0:div><ns0:head>Strengths of this study</ns0:head><ns0:p>This study conducted a clear performance verification through yield comparison with several indices. In addition, although a simple deep neural network was used, advanced input features improved the performance of the stock price prediction model. Although there are time series models such as LSTM, this study uses high-volatile pattern filtering input features. As a result, stock price data of a portion without the corresponding pattern is partially filtered. Therefore, the continuity of time series data is insignificant. As a result, deep learning helped us identify the importance of input features such as high volatility patterns in predicting stock price. The deep neural network we used is a rather simple structure, but its sufficient performance has been verified. The novelty of this study is summarized as follows. &#61548; Second, it enables investors to make comfortable investments without daily data analysis. By using the neural network model, a lot of data can be handled at once, and the prediction results are reliable results that have been verified through a sufficient period of fund simulation.</ns0:p><ns0:p>&#61548; Third, advanced filtering technology enables sufficient stock price prediction even in a simple deep learning model. Most of the existing studies focus on the structure of the model when conducting deep learning stock price prediction studies. For example, performance is compared using multiple models for the same data. 
However, in addition to the issue of selecting the structure of the model, the composition of data and the importance of filtering algorithms are proposed through this study.</ns0:p><ns0:p>This paper is organized as follows: 'Related Works' describes related studies; 'Background' describes the knowledge used prior to the main methodology of this study; in 'Materials &amp; Methods,' the moving average-based patterns typically utilized in existing studies will be introduced, the formal definition of the three highly volatile stock price patterns will be presented, and the meanings of the individual patterns will be described through chart demonstrations; in 'Experiments,' the input features and the target value of the neural network learning system will be described; in 'Results of experiments,' the experimental results will be presented; and, finally, future research directions will be suggested in the 'Conclusion.'</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works</ns0:head><ns0:p>Researches related to stock price prediction have traditionally been conducted using ARIMA <ns0:ref type='bibr'>(Benvenuto D et al, 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Ariyo et al, 2014)</ns0:ref>, Regression <ns0:ref type='bibr' target='#b31'>(Refenes A et al, 1994;</ns0:ref><ns0:ref type='bibr' target='#b22'>Yang H et al, 2002)</ns0:ref>, and Bayesian <ns0:ref type='bibr' target='#b38'>(Pella J &amp; Masuda M, 2001)</ns0:ref> to reflect the characteristics of time series data. ARIMA is a statistical model widely used in the financial sector. However, it is a model that is used exclusively for short-term predictions, and has the disadvantage that it is difficult to confirm long-term investment performance. Since stock price prediction is closely related to profits, direct investment in a short-term verified model can lead to risks. In addition, for the reason that the amount of stock data accumulated from the past is very vast, a model that can handle large amounts of data is needed. In this regard, there is also a study result that predicts the index using the ARIMA model has an accuracy of up to 38% <ns0:ref type='bibr' target='#b12'>(Devi et al, 2013)</ns0:ref>. However, the prediction accuracy was very low to carry out actual investments using this model. Baysian is a model that can perform classification based on probabilistic theory and used to predict stock prices in the past. However, with the recent development of artificial neural networks, it is widely used as a comparative model. Baysian is also evaluated as not suitable for mid or long-term prediction, such as the ARIMA model. In related studies using the Bayesian model, predictions were performed with up to 78% accuracy <ns0:ref type='bibr' target='#b25'>(Malagrino et al, 2018)</ns0:ref>. This was better performance than the ARIMA model, but it showed lower values than the neural network model to be described below. Above all, these existing predictive models often misinterpret information due to underfitting and overfitting problems, so they often do not help much in decision-making activities for stock price prediction. In addition, it has already been proven that neural networks perform better than traditional methods <ns0:ref type='bibr' target='#b10'>(Bustos &amp; Pomares-Quimbaya, 2020)</ns0:ref>. For the above reasons, neural network-based technique has been used a lot in recent stock price prediction. 
Representatively, there is a study using Elliott Wave Indicator as a stock price prediction study using neural networks <ns0:ref type='bibr' target='#b32'>(Lakshminarayanan et al, 2006)</ns0:ref>. This study used data based on technical analysis, and the model accuracy was 93.83%. This study used its own technical indicators and made predictions for five stocks. Based on these findings, it is possible to assume that more advanced input features can lead to improved model performance. Several other papers similar to this paper used machine learning techniques such as regression and SVM, models such as CNN and LSTM, or ensemble techniques to predict stock prices <ns0:ref type='bibr' target='#b30'>(Oncharoen &amp; Vateekul, 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Liu &amp; Wang, 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jiang, 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b11'>Cao &amp; Wang (2019)</ns0:ref> attempted to predict the stock index of various countries using the CNN application model. Only historical data was used as input features, and the down-sampling and convolution techniques belonging to CNN were mainly used to improve stock price prediction performance. As a result, they suggested that the CNN-SVM mixed model had the best performance. However, simple input features were used, and the exact performance and return of the model were not verified, so it was difficult to confirm the actual performance of the CNN-SVM model. Another study <ns0:ref type='bibr' target='#b16'>(Guang et al., 2019)</ns0:ref> looked at the rate of the return of the stock price forecast model. This model is not a period-specific return, but an absolute return that does not take into account investments and assets that may vary from person to person. In addition, the profits earned have no comparison with the stock index, which can be regarded as a rate of return due to market rise. A similar study <ns0:ref type='bibr' target='#b37'>(Pang et al., 2020)</ns0:ref> included word-embedding techniques such as LSTM (Long Short-Term Memory). Here stock data, which is a time series characteristic, is referred to as a stock vector, and is used as an input feature. The model they developed tried to predict the Shanghai Stock Index and showed an accuracy of about 57%, but the study did not have a comparison of returns through prediction. High volatile stock price prediction model derived higher accuracy than the model presented by Pang et al. In addition, only a few studies <ns0:ref type='bibr' target='#b15'>(Feng et al., 2018;</ns0:ref><ns0:ref type='bibr'>Araujo et al., 2019)</ns0:ref> using various in-depth models have both derived returns and evaluated their performance, and most studies cannot guarantee a clear return because they provide no comparison with the stock index. Recent evidence suggests that input feature selection is very important in model learning. The selection of key input features can lead to improved <ns0:ref type='bibr' target='#b19'>(Hooshmand &amp; Gad, 2020)</ns0:ref>. 
Accordingly, in this paper, the study was conducted with a focus on data composition and novelty rather than the structure of the model.</ns0:p></ns0:div> <ns0:div><ns0:head>Background (Moving Average)-based patterns &#119924;&#119912;</ns0:head><ns0:p>Before examining the high volatile stock price pattern using the Japanese candlestick indicator, which is the focus in this study, the meaning and limitations of the four MA-based patterns presented in related studies are examined, taking the 'divergence' pattern and 'reversal' pattern of MA (Moving Average) as examples. <ns0:ref type='bibr'>(O et al., 2006)</ns0:ref> that defined and used the patterns based on the moving average. The moving average is the average stock price over a period of time and is used to summarize stock price trends. Moving average is also denoted by in which 5, 10, 20 &#119872;&#119860; days and so on are used as the window of time; as an example, the 5-day moving average of stock at trading day can be calculated as follows:</ns0:p><ns0:p>&#119904; &#119905;</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#119872;&#119860;5 &#119904; &#119905; = 1 5 &#8721; 4 &#119896; = 0 &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -&#119896;</ns0:formula><ns0:p>In the same way as above, the volume can also be calculated as the moving average of the volume, and the 5-day volume moving average of stock trading day can be counted as follows:</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>&#119881;&#119872;&#119860;5 &#119904; &#119905; = 1 5 &#8721; 4 &#119896; = 0 &#119907;&#119900;&#119897;&#119906;&#119898;&#119890; &#119904; &#119905; -&#119896;</ns0:formula><ns0:p>where is the closing price of the trading day The slope of the line connecting a moving &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; &#119905;.</ns0:p><ns0:p>average to another moving average is denoted as and can be calculated using the following &#119866;&#119903;&#119886;&#119889; Equation (3):</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_2'>&#119866;&#119903;&#119886;&#119889;5 &#119904; &#119905; = &#119872;&#119860;5 &#119904; &#119905; -&#119872;&#119860;5 &#119904; &#119905; -1 &#119872;&#119860;5 &#119904; &#119905;</ns0:formula><ns0:p>Equation (4) defines the training target set that corresponds to the divergence pattern. &#119863; &#119887;&#119890;&#119886;&#119903; Divergence refers to when the short-term moving average is located relatively below the longerterm moving averages, resulting from the continuation of the downward trend of the stock price for a considerable period of time. Equation (4) represents that the 5-day moving average is smaller than the 10-day moving average and the 10-day moving average is smaller than the 20day moving average: (4)</ns0:p><ns0:formula xml:id='formula_3'>&#119863; &#119887;&#119890;&#119886;&#119903; = {(&#119909; &#119904;,&#119905; , &#119900; &#119904;,&#119905; )| &#119872;&#119860;5 &#119904; &#119905; &lt; &#119872;&#119860;10 &#119904; &#119905; &lt; &#119872;&#119860;20 &#119904; &#119905; , &#119904; &#8712; &#120572;, &#119905; &#8712; &#120573;}</ns0:formula><ns0:p>where is a vector of the input feature, and is the target value representing the price &#119909; &#119904;,&#119905; &#119900; &#119904;,&#119905; fluctuation after the occurrence of the pattern; and are the entire stock set and the entire &#120572; &#120573; trading day set, respectively. 
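A minimal pandas sketch of Equations (1)–(4) is given below, assuming the daily data of a single stock are held in a DataFrame with 'close' and 'volume' columns; the column names and the synthetic example data are assumptions for illustration. The boolean mask at the end selects the trading days belonging to the divergence set of Equation (4).

```python
# Minimal sketch of Equations (1)-(4) for a single stock s, assuming a DataFrame
# `df` indexed by trading day with 'close' and 'volume' columns (the column
# names are assumptions, not taken from the paper).
import numpy as np
import pandas as pd

def add_ma_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for n in (5, 10, 20):
        out[f"MA{n}"] = out["close"].rolling(n).mean()      # Eq. (1): n-day price MA
        out[f"VMA{n}"] = out["volume"].rolling(n).mean()    # Eq. (2): n-day volume MA
        # Eq. (3): slope of the n-day moving average, normalised by its current level.
        out[f"Grad{n}"] = (out[f"MA{n}"] - out[f"MA{n}"].shift(1)) / out[f"MA{n}"]
    return out

def divergence_mask(df: pd.DataFrame) -> pd.Series:
    # Eq. (4): the divergence (bear) pattern, MA5 < MA10 < MA20 on the same day.
    return (df["MA5"] < df["MA10"]) & (df["MA10"] < df["MA20"])

# Example with synthetic data; the real experiments iterate over all stocks.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "close": 10_000 + rng.normal(0, 100, 120).cumsum(),
    "volume": rng.integers(50_000, 200_000, 120),
})
features = add_ma_features(df)
print(features[divergence_mask(features)].tail())
```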
Figure <ns0:ref type='figure'>1</ns0:ref> is an example of the charts corresponding to a divergence pattern. The description of the graph in Figure <ns0:ref type='figure'>1</ns0:ref> is as follows. Sixteen trading days from A to B correspond to the divergence pattern. In this case, relatively steep price rises were shown around point B; but, in general, even if a rebound was to appear, the rise would not be that large. Equation ( <ns0:ref type='formula'>5</ns0:ref> Manuscript to be reviewed Computer Science downtrend to an uptrend; trading day A, B, C in Figure <ns0:ref type='figure'>2</ns0:ref> show reversal to uptrend patterns. Among these, A shows the case where the 5-day line reversed to an uptrend due to the sharp &#119872;&#119860; rise of the stock price. This case shows the weakness of MA as a prediction indicator because the price had already risen before the trading signal was issued; the phenomenon of generating the signals after the price movement has already occurred is called 'time-lag'. This MA-based pattern arises very frequently so it has the strength of using a large amount of training data, but time lag decreases this pattern's predictive ability.</ns0:p><ns0:formula xml:id='formula_4'>&#119863; &#119879;&#119880; = {(&#119909; &#119904;,&#119905; , &#119900; &#119904;,&#119905; )| (&#119866;&#119903;&#119886;&#119889;5 &#119904; &#119905; -1 &lt; 0 &amp;&amp; &#119866;&#119903;&#119886;&#119889;5 &#119904; &#119905; &gt; 0) (5) || (&#119866;&#119903;&#119886;&#119889;10 &#119904; &#119905; -1 &lt; 0&amp;&amp; &#119866;&#119903;&#119886;&#119889;10 &#119904; &#119905; &gt; 0), || (&#119866;&#119903;&#119886;&#119889;20 &#119904; &#119905; -1 &lt; 0&amp;&amp; &#119866;&#119903;&#119886;&#119889;20 &#119904; &#119905; &gt; 0), &#119904; &#8712; &#120572;, &#119905; &#8712; &#120573;}</ns0:formula></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>Highly volatile stock price patterns</ns0:head><ns0:p>This section describes a technique for filtering data showing a high volatile stock price pattern on a stock chart. There are a total of three filtering algorithms, and using them, data with high fluctuation patterns form a cluster. Machine learning-based algorithms such as k-means were not necessarily used when creating clusters. This is because the time it takes to create a cluster is very short compared to similar studies <ns0:ref type='bibr' target='#b4'>(Alguliyev et al, 2019)</ns0:ref>.</ns0:p><ns0:p>In the pattern-based stock trading system, multiple independent predictors are trained on the data clustered in line with the stock price patterns and employed in the final trading. In this paper, the highly volatile stock price patterns will be defined and employed as a way to achieve more predictability as an extension of pattern-based prediction techniques. Korean stock markets set the legal limits of price fluctuations in a day; both the KOSPI and KOSDAQ markets apply &#177; 15% of the previous day's closing price to the price fluctuation limits. In general, the 'upper limit' refers to when the closing price of a particular trading day is closest to a 15% rise from the previous day's closing price, or about a 14% rise in the price depending on the price band of the item. 
In Equation ( <ns0:ref type='formula'>6</ns0:ref>), represents the closing price at the upper limit of stock s on &#119878;&#119886;&#119899;&#119892;&#8462;&#119886;&#119899; &#119904; &#119905; trading day t and is defined as follows:</ns0:p><ns0:formula xml:id='formula_5'>= (6) &#119878;&#119886;&#119899;&#119892;&#8462;&#119886;&#119899; &#119904; &#119905; { &#119905;&#119903;&#119906;&#119890; &#119894;&#119891;(&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; &#8805; 1.14 &#215; &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -1 ) &#119891;&#119886;&#119904;&#119897;&#119890; &#119900;&#119905;&#8462;&#119890;&#119903;&#119908;&#119894;&#119904;&#119890;</ns0:formula><ns0:p>where refers to the closing price. The three highly volatile stock price patterns are defined &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; based on the definition of the upper limit. If the price of a particular stock has risen to the upper limit, the price volatility of the subsequent trading days is bound to be expanded. The rise of the price to the upper limit should be accompanied by large transaction volume because a short-term surge and plunge may occur due to the collective psychology of the trading public. The highly volatile stock price patterns in this paper deal only with the case where price adjustments were made 1 to 2 trading days after the upper limit price appeared. The time target used for predictions was when both the first rising wave, represented by the rising to the upper limit price, and the first falling wave, represented by the adjustment, have been completed so extreme volatilities have been relaxed. High volatile stock price patterns were found by chart analysis experts who analyzed charts over the years. This pattern is actually mainly seen in mid-to low-priced stocks in the Korean stock market. Investors have to look directly at a vast amount of data to utilize this pattern for real investment. However, if the prediction model using deep learning is properly defined, it is easy to predict when the price rises after a certain pattern, and even check whether it can actually make profits.</ns0:p></ns0:div> <ns0:div><ns0:head>Adjustment pattern with one candlestick after consecutive upper limits ( ) &#119953;&#120783;</ns0:head><ns0:p>An adjustment pattern with 1 candlestick after consecutive upper limits is when the first rising wave is so strong that the upper limit prices appear consecutively; it can be calculated using the equation below <ns0:ref type='bibr'>(7)</ns0:ref>. represents the open price of stock s on trading day t. Figure <ns0:ref type='figure'>3</ns0:ref> shows &#119900;&#119901;&#119890;&#119899; &#119904; &#119905; an example of where (a) is a normal example in which a pattern occurs and the upper limit &#119901;1 &#119901;1</ns0:p><ns0:p>price appears next day, and (b) is a counter example where the empty rectangular, called a negative candlestick, represents when the closing price is lower than the opening price, meaning that the price became lower after intraday trading. The positive candlestick filled with the grey color is the opposite. The last condition of the condition statement means that the candlestick in the pattern formation for the day should be a negative candlestick or should be a positive candlestick of less than 5% of the difference between the opening price and the closing price. If the positive candlestick is larger by a difference of more than 5%, the price is considered adjusted. 
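The upper-limit flag of Equation (6) and the p1 filter built on it can be sketched as follows. Equation (7) itself is not reproduced above, so the p1 condition is inferred from the written description (two consecutive upper limits followed by a negative candlestick or a positive candlestick whose body is under 5%) and from the analogous Equation (8); it should therefore be read as an assumption rather than the authors' exact rule.

```python
import pandas as pd

def sanghan(close: pd.Series) -> pd.Series:
    """Equation (6): True when the close is at (or near) the daily upper limit,
    i.e. at least a 14% rise over the previous day's close."""
    return close >= 1.14 * close.shift(1)

def p1_mask(open_: pd.Series, close: pd.Series) -> pd.Series:
    """Inferred Equation (7): upper limits on days t-2 and t-1, then a negative
    or small (<5%) positive candlestick on day t."""
    limit = sanghan(close)
    small_body = (close - open_) / close < 0.05   # negative or small positive body
    return (limit.shift(2, fill_value=False)
            & limit.shift(1, fill_value=False)
            & small_body)
```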
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>An adjustment pattern with one candlestick after a single upper limit is when the upper limit price condition is replaced by the single upper limit price; Figure <ns0:ref type='figure'>4</ns0:ref> shows a normal example and a counter example of the case. Due to the relatively weaker rising intensity, it is estimated that the ratio of normal examples is likely to be rather lower than p1 patterns.</ns0:p><ns0:p>(8)</ns0:p><ns0:formula xml:id='formula_6'>&#119901;2 &#119904; &#119905; = { &#119905;&#119903;&#119906;&#119890; &#119894;&#119891;(&#119878;&#119886;&#119899;&#119892;&#8462;&#119886;&#119899; &#119904; &#119905; -2 = &#119891;&#119886;&#119897;&#119904;&#119890; &#119886;&#119899;&#119889; &#119878;&#119886;&#119899;&#119892;&#8462;&#119886;&#119899; &#119904; &#119905; -1 = &#119905;&#119903;&#119906;&#119890; &#119886;&#119899;&#119889; (&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; ) &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; &lt; 0.05) &#119891;&#119886;&#119897;&#119904;&#119890; &#119900;&#119905;&#119890;&#119903;&#119908;&#119894;&#119904;&#119890;</ns0:formula></ns0:div> <ns0:div><ns0:head>Adjustment pattern with two candlesticks after upper limits ( ) &#119953;&#120785;</ns0:head><ns0:p>An adjustment pattern with two candlesticks after the upper limits is the case in which the price has been adjusted over two trading days after the upper limit price, or when a negative candlestick or a small positive candlestick (less than 5 % in size) forms after or is formed.</ns0:p></ns0:div> <ns0:div><ns0:head>&#119901;1 &#119901;2</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the examples of . The last candlestick on each chart of (a) and (b) is the price &#119901;3 fluctuation immediately after the pattern occurs.</ns0:p><ns0:p>(9)</ns0:p><ns0:formula xml:id='formula_7'>&#119901;3 &#119904; &#119905; = { &#119905;&#119903;&#119906;&#119890; &#119894;&#119891;(&#119878;&#119886;&#119899;&#119892;&#8462;&#119886;&#119899; &#119904; &#119905; -1 = &#119905;&#119903;&#119906;&#119890; &#119886;&#119899;&#119889; (&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -1 -&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; -1 ) &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -1 &lt; 0.05 &#119886;&#119899;&#119889; (&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; ) &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; &lt; 0.05 &#119891;&#119886;&#119897;&#119904;&#119890; &#119900;&#119905;&#119890;&#119903;&#119908;&#119894;&#119904;&#119890;</ns0:formula><ns0:p>These examples of the three patterns examined above imply that price fluctuations after the pattern occurs can vary depending on the slope of the moving average and the form of the candlestick. As an example, the under tail is attached to the last candlestick in all the patterns shown in the three normal examples. This means that the stock ended with the emergence of buying powers leading the rebound in price after trading hours and is more likely to be bullish the next day. 
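Similarly, the p2 and p3 filters of Equations (8) and (9) can be written as boolean masks over daily open and close series. The indexing follows the equations as printed; note that the prose description of p3 (adjustment over two days after the upper limit) could also place the upper limit at day t-2, so the exact subscripts here are an assumption rather than a definitive reading.

```python
import pandas as pd

def p2_mask(open_: pd.Series, close: pd.Series) -> pd.Series:
    """Equation (8): no upper limit on day t-2, a single upper limit on day t-1,
    then a negative or small (<5%) positive candlestick on day t."""
    limit = close >= 1.14 * close.shift(1)        # Equation (6)
    small_body = (close - open_) / close < 0.05
    return (~limit.shift(2, fill_value=True)
            & limit.shift(1, fill_value=False)
            & small_body)

def p3_mask(open_: pd.Series, close: pd.Series) -> pd.Series:
    """Equation (9) as printed: an upper limit on day t-1 plus small-bodied
    candlesticks on both day t-1 and day t (two days of adjustment)."""
    limit = close >= 1.14 * close.shift(1)        # Equation (6)
    small_body = (close - open_) / close < 0.05
    return (limit.shift(1, fill_value=False)
            & small_body.shift(1, fill_value=False)
            & small_body)
```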
Since the various factors act in combination on the direction of the stock price after the appearance of the pattern, the neural network training that will be described in the next section is needed to utilize these patterns in the trading system.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiments Input features configuration for neural network</ns0:head><ns0:p>In order to train the neural networks for future price predictions for each pattern presented in the previous sections, the input feature set constructing an input vector was used for the input to &#119961; &#119956;, &#119957; the neural network from the training data and the target value corresponding to the desired output was defined. Disparity, representing the distance between and the current price, is denoted &#119924;&#119912; as and the from the 5-day line can be calculated using the following equation: &#119915;&#119946;&#119956;&#119953; &#119915;&#119946;&#119956;&#119953; &#119924;&#119912; Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(10)</ns0:p><ns0:formula xml:id='formula_8'>&#119863;&#119894;&#119904;&#119901;5 &#119904; &#119905; = &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -&#119872;&#119860;5 &#119904; &#119905; -1 &#119872;&#119860;5 &#119904; &#119905;</ns0:formula><ns0:p>Apart from the moving average line, the input features relating to Japanese candlesticks include:</ns0:p><ns0:p>(rate of change) in the trading day price compared to the previous day, , (upper &#119929;&#119914; &#119913;&#119952;&#119941;&#119962; &#119932;&#119930; shadow), and (lower shadow). These are defined in equations ( <ns0:ref type='formula' target='#formula_18'>11</ns0:ref>) through ( <ns0:ref type='formula' target='#formula_13'>14</ns0:ref>), respectively: &#119923;&#119930; (11)</ns0:p><ns0:formula xml:id='formula_9'>&#119877;&#119862; &#119904; &#119905; = 100x &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -1 &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>)</ns0:formula><ns0:formula xml:id='formula_11'>&#119861;&#119900;&#119889;&#119910; &#119904; &#119905; = 100x &#119900;&#119901;&#119890;&#119899; &#119904; &#119905; -&#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; -1 min (&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; , &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; ) (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>)</ns0:formula><ns0:formula xml:id='formula_13'>&#119880;&#119878; &#119904; &#119905; = 100x &#8462;&#119894;&#119892;&#8462; &#119904; &#119905; -max (&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; , &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; ) max (&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; , &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; )<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>&#119871;&#119878; &#119904; &#119905; = 100x min (&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; , &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; ) -&#119897;&#119900;&#119908; &#119904; &#119905; min (&#119900;&#119901;&#119890;&#119899; &#119904; &#119905; , &#119888;&#119897;&#119900;&#119904;&#119890; &#119904; &#119905; )</ns0:formula><ns0:p>where are the opening, highest, and lowest price of the trading day 
$t$. Here $open^{s}_{t}$, $high^{s}_{t}$, and $low^{s}_{t}$ denote the opening, highest, and lowest prices of stock $s$ on trading day $t$.

The input vector $x_{s,t}$ of each predictor, including these indicators, is as follows:

$x_{s,t} = (RC^{s}_{t}, RC^{s}_{t-1}, RC^{s}_{t-2}, Body^{s}_{t}, Body^{s}_{t-1}, Body^{s}_{t-2}, US^{s}_{t}, LS^{s}_{t}, Grad5^{s}_{t}, Grad10^{s}_{t}, Grad20^{s}_{t}, Disp5^{s}_{t}, Disp10^{s}_{t}, Disp20^{s}_{t}, VGrad5^{s}_{t}, VGrad10^{s}_{t}, VGrad20^{s}_{t}, VDisp5^{s}_{t}, VDisp10^{s}_{t}, VDisp20^{s}_{t})$ (15)

where $VGrad$ is the slope of the volume moving average line and $VDisp$ is the difference between the volume moving average and the total volume; these two indicators are calculated by substituting the total volume for the closing price in the equations for $Grad$ and $Disp$. Each input feature is normalized to a value between 0 and 1 before use.

Experimental process and environment

The first step of the stock price prediction process takes data from the stock database for a certain period of time. The data exhibiting the highly volatile patterns are then filtered and the input features are calculated. The calculated results are saved as text files and used for deep learning. The text files consist of training, validation, testing, and fund simulation sets, and the model finally outputs the dates and stocks that are expected to rise by more than 10%. The stock database stores daily KOSPI/KOSDAQ data from October 1990 and is updated every day. Therefore, even if the composition of the stock index changes, the changed information is added to the database, so the changed index can still be predicted. A prediction model using the neural network structure shown in Figure 6 below was trained using the 20 input features and binary classified target vectors.
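A short sketch of how the 20 input features of Equation (15) could be assembled from daily OHLCV data is given below. Because Equations (10)-(14) are partly garbled in the source, the exact numerators (for example, whether Body uses the previous day's close) follow the printed token order and should be treated as assumptions, and the min-max scaling is only one possible reading of "normalized as a value between 0 and 1".

```python
import pandas as pd

def candle_and_ma_features(df: pd.DataFrame) -> pd.DataFrame:
    """Build the 20 features of Equation (15) from columns open/high/low/close/volume."""
    o, h, l, c, v = df["open"], df["high"], df["low"], df["close"], df["volume"]
    feat = pd.DataFrame(index=df.index)

    oc_max = pd.concat([o, c], axis=1).max(axis=1)
    oc_min = pd.concat([o, c], axis=1).min(axis=1)

    # Equations (11)-(14): rate of change, body, upper shadow, lower shadow
    rc = 100 * (c - c.shift(1)) / c                 # Eq. (11)
    body = 100 * (o - c.shift(1)) / oc_min          # Eq. (12) as printed (assumption)
    feat["US"] = 100 * (h - oc_max) / oc_max        # Eq. (13)
    feat["LS"] = 100 * (oc_min - l) / oc_min        # Eq. (14)

    # Three most recent days of RC and Body (t, t-1, t-2)
    for k in range(3):
        feat[f"RC_{k}"] = rc.shift(k)
        feat[f"Body_{k}"] = body.shift(k)

    # Equations (3) and (10) for price, and their volume counterparts, over 5/10/20 days
    for n in (5, 10, 20):
        ma, vma = c.rolling(n).mean(), v.rolling(n).mean()
        feat[f"Grad{n}"] = (ma - ma.shift(1)) / ma
        feat[f"Disp{n}"] = (c - ma.shift(1)) / ma       # Eq. (10) as printed (assumption)
        feat[f"VGrad{n}"] = (vma - vma.shift(1)) / vma
        feat[f"VDisp{n}"] = (v - vma.shift(1)) / vma

    # Min-max scaling so every feature lies in [0, 1] (one possible normalization)
    return (feat - feat.min()) / (feat.max() - feat.min())
```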
The target vector uses a binary classification format, and if the price rises by more than 10% within 5 days, it will be marked [0,1] otherwise [1,0]. Experiments will be conducted on a desktop with 18.04 versions of Ubuntu with RTX 3070 9GB graphics card. The model was constructed using Keras in Tensorflow 2.0 <ns0:ref type='bibr' target='#b0'>(Abadi et al., 2016)</ns0:ref> and the hidden layer was composed of three hidden layers. In addition, the use of detailed parameters is as follows. Tanh was used as an activation function for each layer, and the learning rate was set to 0.01. In addition, the dropout ratio was set to 0.5.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiment Results</ns0:head></ns0:div> <ns0:div><ns0:head>Performance evaluation of prediction model</ns0:head><ns0:p>This section will present the experiment results of applying the proposed trading system to the prices of 2,268 stocks listed on the KOSDAQ and KOSPI markets of the Korean Stock Exchange. The entire dataset used in the experiment is divided into 4 sub-sets; the details are shown in Table <ns0:ref type='table'>1</ns0:ref>. In other related studies, there are no results of measuring returns through the prediction model. In studies related to stock price prediction, not only the accuracy of the prediction model but also the measurement of the rate of profit should be conducted at the same time. In this paper, a fund simulation dataset is additionally configured for this purpose.</ns0:p><ns0:p>Training, verification, and test data are used exclusively for determining prediction models. Only the data that was not used to generate the prediction model was used for fund simulation. This process is for cross-validation and accurate rate of profit measurement. This study was aimed only at predicting the domestic stock market, so only data from the KOSPI and KOSDAQ were used. . The total number of KOSPI and KOSDAQ stocks in Korea is about 2,436. In this study, data were collected from October 2017 to December 2020 based on KOSPI and KOSDAQ stocks, the total number of data used for training, verification, and testing, and additional fund simulations exceeded 2 million. Because of the large amount of data, other analytical techniques and theories were needed to add data from overseas stock markets.</ns0:p><ns0:p>The experiments showed that as training progressed, loss decreased and accuracy increased, as shown in Figure <ns0:ref type='figure'>7</ns0:ref>. The loss calculation used MSE, and the equation is as follows. Where is &#119925; the number of samples, is the predicted value and is real value. Mean square error is defined &#119927; &#119912; as the variance between predicted and actual values <ns0:ref type='bibr' target='#b26'>(Namasudra et al, 2021)</ns0:ref>.</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_18'>) &#119872;&#119878;&#119864; = 1 &#119873; &#8721; &#119873; 1 (&#119875; &#119894; -&#119860; &#119894; ) 2<ns0:label>11</ns0:label></ns0:formula><ns0:p>Test datasets showed a slight increase in loss and a slight decrease in accuracy with each epoch. However, we can see significantly better numerical results than the training process.</ns0:p><ns0:p>The evaluation of the prediction model is shown in Table <ns0:ref type='table'>2</ns0:ref>. The model's evaluation was performed on two criteria: accuracy and F1 score. Accuracy is a metric for the classification model as a percentage of the total predictions performed. 
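Before turning to the F1 score, the training configuration described above (Keras on TensorFlow 2.0, three hidden layers with tanh activations, a dropout ratio of 0.5, a learning rate of 0.01, MSE loss, and [1,0]/[0,1] targets) can be summarized in a minimal sketch. The hidden-layer widths, the optimizer type, and the output activation are not specified in the text and are assumptions here, so this is illustrative rather than the authors' exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_model(n_features: int = 20) -> tf.keras.Model:
    """Three hidden tanh layers with dropout and a 2-unit head for the
    [1,0] / [0,1] target, trained with the MSE loss of Equation (11)."""
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="tanh"),    # hidden widths are assumed
        layers.Dropout(0.5),
        layers.Dense(32, activation="tanh"),
        layers.Dropout(0.5),
        layers.Dense(16, activation="tanh"),
        layers.Dropout(0.5),
        layers.Dense(2, activation="softmax"),  # output activation assumed
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.01),  # optimizer type assumed
                  loss="mse",
                  metrics=["accuracy"])
    return model

# Usage (X: normalized feature matrix, y: one-hot labels of shape (N, 2)):
# model = build_model()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=128)
```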
The F1 score is the harmonic mean of The experiments showed that the stock price prediction model using highly volatile stock price patterns finally showed an accuracy of 96.23% and an F1 score of 0.9638. This result is slightly lower or better compared to other classification models <ns0:ref type='bibr' target='#b2'>(Agrawal et al, 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Ndichu et al, 2020)</ns0:ref> not stock price prediction models. However, in the case of stock price forecasts, this figure can be said to be a good result due to high uncertain volatility. The predictors by pattern were constructed using the training data by pattern, and the optimum trading policies were selected by performing the integrated multiple simulation presented in <ns0:ref type='bibr' target='#b34'>Lee (2007)</ns0:ref>, applying the 'trading policy selection set' to each predictor. Here, the integrated multiple simulations refer to the technique to find the optimal trading policies best suited to a given predictive neural network. For example, when a prediction is performed on the fund simulation set, stocks and dates that will rise more than 10% are derived. The stocks to rise consist of those with a neural network threshold of more than 0.5. It was found that the optimum trading policy had a 20% in profit realization rate, -12% in stop loss rate, and a holding period of 19 days. The results of this experiment were compared with similar studies using other filtering algorithms to derive the results shown in Table <ns0:ref type='table'>3</ns0:ref>. Highly volatile filtering algorithm defines the pattern of fluctuations in stock prices using the concept of upper limits. In comparison, the remaining three algorithms are 'Resisted plunge filtering', 'Nosedive filtering', and 'Rise stock filtering', respectively. 'Resisted plunge' refers to the type in which an ascending stock drops for a short period. 'Nosedive' literally means a slump. In this case, it is to collect stocks that have shown a period of collapse. 'Rise stock' filtering represented the long-term upward trend <ns0:ref type='bibr' target='#b40'>(Song, Y et al, 2018)</ns0:ref>. Experiments were conducted using the same data and model structure. The experimental results showed that Resisted plunge filtering achieved 72.39% accuracy, Nosedive filtering achieved 75.11% accuracy, and Rise filtering achieved 64.93% accuracy. In contrast, it can be seen that the highly volatile pattern filtering algorithm is 96.23%, which is higher than the accuracy of the other filtering algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of Fund simulation</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67714:1:1:NEW 27 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As a result of the fund simulation using optimal trading policies, it was able to earn better profits than the domestic stock index during the same period. Industrial indicators were not predicted separately because they were both included in the KOSPI and KOSDAQ already as this data contains all stock data by industry. As a consequence, the number of stock trades was small because the stock price pattern showing a highly volatile pattern did not appear much in the fluctuation pattern of all stocks. However, when a highly volatile pattern occurs, It was found that there is a high probability that a subsequent upward pattern will appear. 
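To make the fund-simulation procedure concrete, the sketch below applies the selected trading policy (profit realization at +20%, stop loss at -12%, maximum holding period of 19 trading days) to positions opened on days where the network's "rise" output exceeds the 0.5 threshold. It is a simplified illustration of those stated rules, not the authors' simulator, and it ignores transaction costs and slippage.

```python
import pandas as pd

TAKE_PROFIT, STOP_LOSS, MAX_HOLD, THRESHOLD = 0.20, -0.12, 19, 0.5

def simulate_trade(close: pd.Series, buy_idx: int) -> float:
    """Realized return of one trade opened at position `buy_idx` under the
    +20% / -12% / 19-day trading policy."""
    entry = close.iloc[buy_idx]
    for day in range(1, MAX_HOLD + 1):
        if buy_idx + day >= len(close):
            break
        ret = close.iloc[buy_idx + day] / entry - 1.0
        if ret >= TAKE_PROFIT or ret <= STOP_LOSS:
            return ret                    # profit realization or stop loss
    # otherwise exit at the end of the holding period (or the last available day)
    return close.iloc[min(buy_idx + MAX_HOLD, len(close) - 1)] / entry - 1.0

# Signals: buy where the model's predicted rise probability exceeds 0.5
# buy_days = [i for i, p in enumerate(rise_prob) if p > THRESHOLD]
# returns = [simulate_trade(close, i) for i in buy_days]
```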
In conclusion, highly volatile stock price patterns play a big role in predicting stock price rise. Additionally, to supplement the stock price prediction performance of the high volatile model proposed in this paper, fund simulation results were compared it with the Nikkei 225, NYSE, and NASDAQ, which are representative stock indices in Japan and the United States. The results are shown in a graph with the domestic stock index in Figure <ns0:ref type='figure'>8</ns0:ref>. Even when other indices were lowered, high volatile stock price prediction model showed a steady upward graph. Finally, this model can earn the highest cumulative return. Figure <ns0:ref type='figure'>8</ns0:ref> shows the percentage of returns from each asset. The reasons that the trading system proposed in the paper achieved better trading performance than the domestic and overseas stock indices are: first, including the microscopic price change processes of the most recent three days in the training input features helped train the neural network training; second, defining the scope of each pattern by using more strengthened constraints than the moving average pattern seems to have contributed to the improvement of learning performance as well as the ultimate trading performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper constructed a pattern-based stock trading system which learned data corresponding to the three highly volatile stock price patterns and utilized that data for trading. The highly volatile stock price pattern can be observed over a long period of time and almost guarantees a short-term rise after the pattern occurs. The significance of this study is the development of a stock price prediction model that exceeds market indices to overcome the continued freezing of interest rates in Korea, Japan, and the U.S. Also, the results of this study can help investors who fail to invest in stocks due to the information gap. If special analysis techniques and indicators such as high volatility patterns are proven to be effective through this research method, individual investors can use these methods in the future. In addition, a number of other patterns of variation can be added to expand the model, and if a positive return is proven, anyone can use the fund simulation for their own investments. Additional studies will have to be conducted to achieve much better trading performances through microscopic analysis and classification of other highly volatile stock price patterns not used in this paper. Improving the input feature set used in this paper, and reflecting the variations of the periods in the target values may also help achieve better results. In addition, performance changes will be measured by applying the NAR Neural Network Time Series (NAR-NNTS), a recently studied model <ns0:ref type='bibr' target='#b26'>(Namasudra et al, 2021)</ns0:ref> that is suitable for data with uncertainty in future studies. Finally, this study was conducted only using Korean stock data as the Korean stock market is very different from overseas stock markets such as those in the United States. However, in order to highlight the strengths of prediction model by performing cross-comparison with overseas stock indices, comparisons with three indices were added. Price restrictions such as upper and lower limits are usually used in Asia including China, Japan, and Thailand. This paper identified patterns inspired by Japanese candle charts and compared them with the Japanese stock index. 
In addition, it completed the yield comparison by adding comparison with overseas stock markets such as the U.S. As a result, the proposed model showed a higher return than the overseas market growth rate during the same period.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>&#61548;</ns0:head><ns0:label /><ns0:figDesc>First, an advanced filtering technology that can improve the performance of the stock price prediction model was proposed. The pattern of stock price fluctuations proposed in this paper was created based on the results of analyzing domestic stock charts. Since it is PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67714:1:1:NEW 27 Jan 2022) Manuscript to be reviewed Computer Science a pattern generated based on the actual chart, it reflects the market well and shows excellent performance compared to the existing moving average-based pattern.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>) defines the training target set that corresponds to the reversal to uptrend &#119863; &#119879;&#119880; pattern. Reversal to uptrend means that one of the moving average lines reverses from a PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67714:1:1:NEW 27 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>one candlestick after single upper limit ( ) &#119953;&#120784; PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67714:1:1:NEW 27 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>precision and recall(Chakraborty et al, 2020). The equation for accuracy and f1 score is as follows.(12)&#119860;&#119888;&#119888;&#119906;&#119903;&#119886;&#119888;&#119910; = 100x &#119879;&#119903;&#119906;&#119890; &#119875;&#119900;&#119904;&#119894;&#119905;&#119894;&#119907;&#119890;&#119904; + &#119879;&#119903;&#119906;&#119890; &#119873;&#119890;&#119892;&#119886;&#119905;&#119894;&#119907;&#119890;&#119904; &#119879;&#119903;&#119906;&#119890; &#119875;&#119900;&#119904;&#119894;&#119905;&#119894;&#119907;&#119890;&#119904; + &#119879;&#119903;&#119906;&#119890; &#119873;&#119890;&#119892;&#119886;&#119905;&#119894;&#119907;&#119890;&#119904; + &#119865;&#119886;&#119897;&#119904;&#119890; &#119875;&#119900;&#119904;&#119894;&#119905;&#119894;&#119907;&#119890; + &#119865;&#119886;&#119897;&#119904;&#119890; &#119873;&#119890;&#119892;&#119886;&#119905;&#119894;&#119907;&#119890;&#119904;</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,339.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,354.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,321.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,236.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,180.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,225.75' type='bitmap' /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67714:1:1:NEW 27 Jan 2022)Manuscript to be reviewed</ns0:note> </ns0:body> "
"Response to Reviewers’ Comments Dear Reviewers and Editor, We are resubmitting this paper (title : Development of a stock trading system based on a neural network using highly volatile stock price patterns) after a careful review of your opinion. We hope this paper is satisfactory to you and will lead to your positive decision. We deeply appreciate your attention on this paper. We have carefully considered your review, we added references, formulas, and detailed experiments along with the overall revision of the content. we supplemented the contribution of our paper by comparing it with papers related to deep learning stock price prediction. Details and answers are below. For the reviewer 1. Point 1. How the experimental environment has been developed? / There must be a discussion on how the results are generated. Technical details are missing. Response 1: According to your opinion, I added information about the environment and stock database. This can be found in the 'Experimental Process and Environment', which adds the research process and detailed description. The contents are as follows. The first of the stock price prediction process takes data from the stock database for a certain period of time. Thereafter, the data of the part showing the high volatile pattern is filtered and the input feature is calculated. The calculated result is saved as a text file and is used for deep learning. The text file consists of training, validation, testing, and fund simulation files, and finally the model outputs dates and stocks that are expected to rise by more than 10%. The stock database stores daily KOSPI/KOSDAQ data from October 1990 and it updates data every day. Therefore, even if the composition of the stock index changes, the changed information is newly added to the database, so it is possible to predict the changed index. You can find it line 370~390. Point 2. Why the dataset is divided into four parts? Response 2: I added an explanation to line 400 to 409 about the reason for configuring the dataset into four-parts. In other related studies, there are no results of measuring returns through the prediction model. In studies related to stock price prediction, not only the accuracy of the prediction model but also the measurement of the rate of profit should be conducted at the same time. In this paper, a fund simulation dataset is additionally configured for this purpose. Training, verification, and test data are used exclusively for determining prediction models. Only the data that was not used to generate the prediction model was used for fund simulation. This process is for cross-validation and accurate rate of profit measurement. Point 3. How the accuracy is 96.23%? Response 3: The model's evaluation was performed on two criteria: accuracy and F1 score. Accuracy is a metric for the classification model as a percentage of the total predictions performed. The F1 score is the harmonic mean of precision and recall. You can find it line 427~441. Point 4. The proposed scheme must be compared with at least two existing schemes. Response 4: I added the comparison result between the proposed scheme and other three pattern filtering schemes. The experiment was conducted using the same data and model structure. The experimental results showed that other schemes did not achieve comparable accuracy than the proposed highly volatile pattern filtering. You can find it line 451~463 Point 5. 
Motivations of the paper are not clear/ Contributions must be represented point-wise/  It is hard to identify the novelty of the proposed work. Response 5: “Strengths of this study” was clearly changed to clarify the motivation and contribution of this paper. Each contribution is expressed in a bullet and we also revealed why we did not use the latest deep learning model. The novelty of this study is summarized as follows. • First, an advanced filtering technology that can improve the performance of the stock price prediction model was proposed. The pattern of stock price fluctuations proposed in this paper was created based on the results of analyzing domestic stock charts. Since it is a pattern generated based on the actual chart, it reflects the market well and shows excellent performance compared to the existing moving average-based pattern. • Second, it enables investors to make comfortable investments without daily data analysis. By using the neural network model, a lot of data can be handled at once, and the prediction results are reliable results that have been verified through a sufficient period of fund simulation. • Third, advanced filtering technology enables sufficient stock price prediction even in a simple deep learning model. Most of the existing studies focus on the structure of the model when conducting deep learning stock price prediction studies. For example, performance is compared using multiple models for the same data. However, in addition to the issue of selecting the structure of the model, the composition of data and the importance of filtering algorithms are proposed through this study. You can find it line 123~149. Point 6. In the 'Related Works' section, the existing schemes must be discussed one by one. Delete the limitations from this section and add them in the Introduction section. Response 6: I actively reflected your advice, shifted the limitations of the existing research in the introduction, and described the strengths of our research. And in 'Related works', the explanation of each study and comparison with our study were conducted one by one. You can find it line 162~197. Point 7. How the price pattern is chosen. Response 7: High volatile stock price patterns were found by chart analysis experts who analyzed charts over the years. This pattern is actually mainly seen in mid- to low-priced stocks in the Korean stock market. Investors have to look directly at a vast amount of data to utilize this pattern for real investment. However, if the prediction model using deep learning is properly defined, it is easy to predict when the price rises after a certain pattern, and even check whether it can actually make profits. Point 8. The proposed scheme is unstructured. divide the proposed scheme in many sub-sections, and then, discuss the entire proposed scheme under each sub-section. The organization of the paper is poor. Add section number. Response 8: We changed the configuration by adding details section of contents. For example, I added the 'Input features configuration for natural network' section (line 352). Second, we added 'Experimental process and environment' so that anyone can know the details of the study (line 377). Third, the section was constructed by separating the main methodology from the contents of MA (moving average line). This can be found in the 'background' section (line 220). Finally, the results of the prediction experiment and the results of the rate of profits were described separately. This can be found on line 397~487. Point 9. 
Equations and figures are not represented properly. All the key terms must be defined. Response 9: For the equation presented in the paper, I added undefined key terms and accuracy equation. For example, Equation (2) was added to line 234, and a description of the formula such as not described in Equation (7) was added. In addition, we added formulas and explanations for accuracy and f1 score, which are the metrics used to evaluate the model. This can be found from Line 275 to Line 435. Finally, we added a loss function equation used for model training and testing Point 10. The English language is very poor. Never use I, we, or our in a Research Article. Response 10: We revised the expressions of I, we, and our that were inappropriately included in the paper as a whole. Typically, it can be found on lines 76 and 81. Point 11. Important references are missing. Add the following references. Response 11: Among the references you recommended, I have referred to almost everything related to our study. However, in the case of “Enhanced neural network based univariate time series forecasting model”, Distributed and Parallel Databases, 2021. DOI: 10.1007/s10619-021-07364-9”, the search result could not be found, so it could not be referenced. In addition, “Fast and secure data accessing by using DNA computing for the cloud environment, IEEE Transactions on Services Computing, 2020. DOI: 10.1109/TSC.2020.3046471” and “Securing multimedia by using DNA based encryption in the cloud computing environment, ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 3s, 2020. DOI: https://doi.org/10.1145/3392665” were excluded because they could not find a connection with our study. For the reviewer 2. Point 1. Each stock market index uses proprietary methods to determine which companies or investments to include and that can change over time. How is that handled here? Response 1: The stock database stores daily KOSPI/KOSDAQ data from October 1990 and it updates data every day. Therefore, even if the composition of the stock index changes, the changed information is newly added to the database, so it is possible to predict the changed index. You can find it line 384~386. Point 2: This is a time series prediction problem, why are we not using a variation of LSTM’s? LSTM’s almost always guarantee better results than DNN. Authors should experiment with modern methods rather than leaving it at DNN Response 2: There is a time series-specific model such as LSTM, but in this study, the time series of data is insignificant by the high-volatile pattern filtering algorithm. For this reason, a deep learning model such as LSTM considering time series is not used. In addition, this paper proposed that using advanced filtering such as high volatile price patterns can create a stock price prediction model with high predictive performance. You can find it line 162~218. Moreover, “Strengths of this study” was clearly changed to clarify the motivation and contribution of this paper. Each contribution is expressed in a bullet and we also revealed why we did not use the latest deep learning model. The novelty of this study is summarized as follows. • First, an advanced filtering technology that can improve the performance of the stock price prediction model was proposed. The pattern of stock price fluctuations proposed in this paper was created based on the results of analyzing domestic stock charts. 
Since it is a pattern generated based on the actual chart, it reflects the market well and shows excellent performance compared to the existing moving average-based pattern. • Second, it enables investors to make comfortable investments without daily data analysis. By using the neural network model, a lot of data can be handled at once, and the prediction results are reliable results that have been verified through a sufficient period of fund simulation. • Third, advanced filtering technology enables sufficient stock price prediction even in a simple deep learning model. Most of the existing studies focus on the structure of the model when conducting deep learning stock price prediction studies. For example, performance is compared using multiple models for the same data. However, in addition to the issue of selecting the structure of the model, the composition of data and the importance of filtering algorithms are proposed through this study. You can find it line 123~149. For the reviewer 3. Point 1. Provide the complete implementation process? Response 1: According to your opinion, I added information about the environment and stock database. This can be found in the 'Experimental Process and Environment', which adds the research process and detailed description. The contents are as follows. The first of the stock price prediction process takes data from the stock database for a certain period of time. Thereafter, the data of the part showing the high volatile pattern is filtered and the input feature is calculated. The calculated result is saved as a text file and is used for deep learning. The text file consists of training, validation, testing, and fund simulation files, and finally the model outputs dates and stocks that are expected to rise by more than 10%. The stock database stores daily KOSPI/KOSDAQ data from October 1990 and it updates data every day. Therefore, even if the composition of the stock index changes, the changed information is newly added to the database, so it is possible to predict the changed index. You can find it line 377~395. Point 2. Does any specific reason for datasets is divided into more parts? Response 2: I added an explanation to line 403 to 409 about the reason for configuring the dataset into four-parts. In other related studies, there are no results of measuring returns through the prediction model. In studies related to stock price prediction, not only the accuracy of the prediction model but also the measurement of the rate of profit should be conducted at the same time. In this paper, a fund simulation dataset is additionally configured for this purpose. Training, verification, and test data are used exclusively for determining prediction models. Only the data that was not used to generate the prediction model was used for fund simulation. This process is for cross-validation and accurate rate of profit measurement. Point 3. How was 96.23% accuracy arrived at, what was the measuring method used to arrive at 96.23% accuracy? / Research work is silent in recommending a particular method/process for measuring accuracy. Response 3: The model's evaluation was performed on two criteria: accuracy and F1 score. Accuracy is a metric for the classification model as a percentage of the total predictions performed. The F1 score is the harmonic mean of precision and recall. You can find it line 427~441. Point 4. The proposed scheme must be compared with at least ‘four’ traditional methods. 
Response 4: I added the comparison result between the proposed scheme and other three pattern filtering schemes. The experiment was conducted using the same data and model structure. The experimental results showed that other schemes did not achieve comparable accuracy than the proposed highly volatile pattern filtering. You can find it line 451~463 Point 5. Motivation of the paper was not clear./ Need to add more technical novelty details. Response 7: “Strengths of this study” was clearly changed to clarify the motivation and contribution of this paper. Each contribution is expressed in a bullet and we also revealed why we did not use the latest deep learning model. The novelty of this study is summarized as follows. • First, an advanced filtering technology that can improve the performance of the stock price prediction model was proposed. The pattern of stock price fluctuations proposed in this paper was created based on the results of analyzing domestic stock charts. Since it is a pattern generated based on the actual chart, it reflects the market well and shows excellent performance compared to the existing moving average-based pattern. • Second, it enables investors to make comfortable investments without daily data analysis. By using the neural network model, a lot of data can be handled at once, and the prediction results are reliable results that have been verified through a sufficient period of fund simulation. • Third, advanced filtering technology enables sufficient stock price prediction even in a simple deep learning model. Most of the existing studies focus on the structure of the model when conducting deep learning stock price prediction studies. For example, performance is compared using multiple models for the same data. However, in addition to the issue of selecting the structure of the model, the composition of data and the importance of filtering algorithms are proposed through this study. You can find it line 123~149. First Author mailing addresses are as follows: Jangmin Oh School of Computer Science and Information, Sungshin Women’s University, 10 Dongsomun-ro 20da-gil Seongbuk-gu Seoul, 136-051, Korea, jangmin.oh@sungshin.ac.kr Email : jangmin.oh@sungshin.ac.kr Thank you very much for your consideration. I hope to hear from you soon. Sincerely, Jangmin Oh "
Here is a paper. Please give your review comments after reading it.
372
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Single sign-on (SSO) enables users to authenticate across multiple related but independent systems using a single username and password. While the number of higher education institutions adopting SSO continues to grow, little is known about the academic community's security awareness regarding SSO. This paper aims to examine the security awareness of SSO across various demographic groups within a single higher education institution based on their age, gender, and academic roles. Additionally, we investigate some psychological factors (i.e., privacy concerns and personality traits) that may influence users' level of SSO security awareness. Using survey data collected from 283 participants (faculty, staff, and students) and analyzed using a hierarchical linear regression model, we discovered a generational gap, but no gender gap, in security awareness of SSO. Additionally, our findings confirm that students have a significantly lower level of security awareness than faculty and staff. Finally, we discovered that privacy concerns have no effect on SSO security awareness on their own. Rather, they interact with the user's personality traits, most notably agreeableness and conscientiousness. The findings of this study lay the groundwork for future research and interventions aimed at increasing cybersecurity awareness among users of various demographic groups as well as closing any existing gaps between them.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Single sign-on (SSO) is a cybersecurity measure that enables the use of a single username and password to authenticate the same user across multiple related but independent network, computer, or information systems without having users enter their authentication credentials multiple times. SSO enables end users to increase their productivity by significantly reducing the time required for authentication processes <ns0:ref type='bibr' target='#b25'>(James et al., 2020)</ns0:ref>. On the other hand, it may also result in financial savings for the institution that implements it, for example, through cost savings in their information technology expenditures <ns0:ref type='bibr' target='#b8'>(Chinitz, 2000;</ns0:ref><ns0:ref type='bibr' target='#b13'>Gellert et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Lane &amp; Marie, 2010)</ns0:ref>. However, it was not until the late 2000s that SSO adoption became more widespread in a wide variety of organizations and enterprises, mainly due to performance issues <ns0:ref type='bibr' target='#b33'>(Lane &amp; Marie, 2010)</ns0:ref> or insecure implementation <ns0:ref type='bibr' target='#b3'>(Bai et al., 2013)</ns0:ref> in its early days. Nonetheless, SSO adoption was not uniform across sectors and regions of the world. Even in the early 2010s, some people still refused to adopt SSO because they did not perceive an urgent need for it although that perception changed as SSO's design and implementation improved <ns0:ref type='bibr' target='#b45'>(Sun et al., 2011)</ns0:ref>.</ns0:p><ns0:p>There are numerous types of information systems in the academic community, particularly in higher education institutions, including learning management systems (LMS), academic information systems (AIS), management information systems (MIS), and payroll services. 
In general, there are at least three distinct academic roles (i.e., students, faculty, and staff), each of which usually has a different type and level of access to the institution's information systems. Historically, colleges and universities required users to have separate accounts for each system. This situation resulted in significant frustration for the users and increased support costs for the institution. Implementing SSO resolves that issue by allowing users to log in to all systems using the same username and password.</ns0:p><ns0:p>Along with the conveniences that SSO provides, there is an arguably greater risk associated with the fact that that same account now has access to everything the user has access to. If attackers gain access to an SSO account, they have the potential to cause additional damage, not just to the user whose SSO account was compromised, but also to other users and the institution itself. Even more so now that many universities have been forced to fully embrace online education in the aftermath of the COVID-19 pandemic, forcing everyone, including those with limited online experience, to quickly adapt to this digital transformation, the risk has increased. As a result, safeguarding SSO accounts is becoming increasingly critical, even if not all users are aware of such issues.</ns0:p><ns0:p>Numerous studies have been conducted on security awareness, including in the academic community. However, little is known about the use of SSO in the academic community in general, and even more specifically about users' security awareness regarding their SSO accounts. This study aims to determine the level of security awareness among members of the academic community regarding SSO accounts. We are particularly interested in examining the psychological factors within individuals that can help predict their level of security awareness, specifically their privacy concerns and personalities, and determining whether there is any interaction between them. Additionally, we would like to determine whether the level of awareness varies by demographic characteristics (e.g., age, gender) and academic roles (i.e., student, faculty, and staff). To be more precise, the term 'staff' here refers to administrative personnel, including librarians and technical support personnel, as opposed to non-administrative personnel such as janitorial and security personnel.</ns0:p><ns0:p>The following section discusses our theoretical framework, beginning with the relationship between security awareness, our dependent variable, and each of the independent variables, which include demographic variables, academic roles, familiarity with SSO, privacy concerns, and Big-Five personality traits. Following that, we discuss the research design in detail, including information about our participants, research instruments, and the data analysis procedures. Finally, we present the statistical analysis results before discussing the key findings and concluding by restating the main takeaways and identifying future research directions.</ns0:p></ns0:div> <ns0:div><ns0:head>Theoretical Framework</ns0:head></ns0:div> <ns0:div><ns0:head>Demographics and Security Awareness</ns0:head><ns0:p>Among demographic variables, gender and age have been identified as significant factors that differentiate cyber security behaviors among users. 
For example, <ns0:ref type='bibr' target='#b1'>Anwar et al (2017)</ns0:ref> discovered that female users are more likely than male users to have behaviors that increase the likelihood of becoming a victim of cybercrimes. For example, they tend to reuse the same passwords across multiple social media accounts, open email attachments from unknown people, and click peculiar short URLs posted on the Internet. Meanwhile, <ns0:ref type='bibr' target='#b19'>Grimes et al (2010)</ns0:ref> discovered that older users are less familiar with cyber security measures (e.g., keeping their passwords private) and are less knowledgeable about cyber security risks (e.g., having difficulty in recognizing phishings, computer viruses, and spams). In another study, <ns0:ref type='bibr' target='#b39'>Pratama &amp; Firmansyah (2021)</ns0:ref> revealed that females and older users were less likely to be aware of, let alone adopt, two-factor authentication (2FA), making them particularly vulnerable to cyber security threats. Taking these findings into account, we hypothesize that: H1: Females are less aware of SSO security H2: Older people are less aware of SSO security It is worth highlighting that by no means do we assume that being female and older in and of itself then make people less aware of SSO security. Rather, in this study we examine whether such associations, which does not necessarily mean causation, as shown in the literature between the respected demographic variables and security awareness still exist and if they are also true in the case of SSO security. Such significant findings will expose demographic gaps needing to be addressed by future research, for instance, on why the gaps keep occurring and how to close them.</ns0:p><ns0:p>Most studies in cybersecurity awareness and behaviors in the academic community tend to focus on either students <ns0:ref type='bibr' target='#b12'>(Farooq et al., 2015;</ns0:ref><ns0:ref type='bibr'>Ngoqo &amp; Flowerday, 2015;</ns0:ref><ns0:ref type='bibr'>Zwilling et al., 2022)</ns0:ref> or faculty/staff <ns0:ref type='bibr' target='#b47'>(Yerby &amp; Floyd, 2018)</ns0:ref> only. In one study involving both academic roles, faculty/staff reported higher security behaviors than students <ns0:ref type='bibr' target='#b17'>(Gratian et al., 2018)</ns0:ref>. Taking that into account and due to the nature of the role that faculty and staff usually have more systems and data to access within an academic institution, and thus more to lose than students should their SSO accounts be compromised, we hypothesize that: H3: Students are less aware of SSO security than faculty and staff SSO Familiarity and Security Awareness SSO adoption in higher education is relatively recent in comparison to other industrial and commercial organizations. In this particular institution where the study was conducted, the SSO system, managed and operated directly by the university's Board of Information Systems, was implemented using Shibboleth, an open source SSO system based on Security Assertion Markup Language (SAML) protocol. The SSO was not fully implemented university-wide until 2019, just a few months before the onset of the COVID-19 pandemic. 
Taking that into account, we hypothesize that: H4: Familiarity with SSO positively predicts SSO security awareness Privacy Concerns and Security Awareness Individuals' concerns over what, when, and how their private information is shared with others when using information technology products and services have been widely discussed in the literature <ns0:ref type='bibr' target='#b37'>(Petronio &amp; Child, 2020)</ns0:ref>. For instance, multiple studies have revealed that the privacy paradox, the discrepancy between stated privacy attitudes and actual privacy behaviors when using these technologies, does exist in various contexts and across cultures <ns0:ref type='bibr' target='#b0'>(Aleisa et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Barth et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b28'>Kokolakis, 2017)</ns0:ref>. Some argue that this phenomenon can be explained by privacy calculus, which reflects the trade-off between anticipated risks and expected benefits associated with letting go of some private information <ns0:ref type='bibr' target='#b14'>(Goad et al., 2021)</ns0:ref>. When the expected benefits outweigh the anticipated risks, users tend to compromise their privacy; otherwise, they tend to withhold it. These arguments suggest that privacy concerns lead to more cautious decisions about whether to use information technology related products or services, including SSO implementations, as pointed out by some studies in the literature <ns0:ref type='bibr' target='#b9'>(Cho et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Heckle &amp; Lutters, 2007)</ns0:ref>. Bringing all those findings to the current study's context, we predict that: H5: Privacy concerns positively predict SSO security awareness Big-Five Personality and Security Awareness PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Earlier psychological research established a link between cyber security behavior and the Big-Five personality, the characteristics of which are summarized in Table 1 <ns0:ref type='bibr' target='#b15'>(Gosling et al., 2003)</ns0:ref>. For instance, <ns0:ref type='bibr' target='#b41'>Russell et al (2017)</ns0:ref> found negative correlations between emotional instability (neuroticism) and secure cyber behaviors (e.g., using protection software against malware and viruses) and between conscientiousness and insecure cyber behaviors (e.g., using unsecured wireless networks), whereas <ns0:ref type='bibr' target='#b43'>Shappie et al (2020)</ns0:ref> revealed that, in addition to conscientiousness, agreeableness and openness positively predict cybersecurity behaviors (e.g., keeping anti-virus software up to date). Meanwhile, <ns0:ref type='bibr' target='#b27'>Kennison &amp; Chan-Tin (2020)</ns0:ref> reported rather different results: extraversion, agreeableness, and emotional stability -not openness nor conscientiousness-explain why some users are prone to commit risky cyber behaviors (e.g., not signing out of a shared computer, sharing a password with someone else) while others are not.</ns0:p><ns0:p>It appears that the relationships between Big-Five personality traits and cybersecurity awareness vary across contexts and depend on the indicators measured in the study.
However, in terms of SSO security awareness, and also by taking into consideration the Indonesian context as a collectivist country, we argue that being extraverted and agreeable is associated with lower SSO security awareness. Users having these traits are more likely to share their passwords with someone else, either voluntarily out of trust or when asked by others they respect or fear due to social status. On the other hand, we argue that being conscientious, emotionally stable, and open is associated with higher SSO security awareness. Users having these traits are arguably more cautious in their decision making and thus will avoid risky behavior with their SSO accounts. Thus, our hypotheses are as follows:</ns0:p><ns0:p>H6: Extraversion negatively predicts SSO awareness H7: Agreeableness negatively predicts SSO awareness H8: Conscientiousness positively predicts SSO awareness H9: Emotional stability positively predicts SSO awareness H10: Openness positively predicts SSO awareness Furthermore, since past studies reported significant correlations between agreeableness and privacy concerns <ns0:ref type='bibr' target='#b30'>(Korzaan &amp; Boswell, 2008)</ns0:ref>, and between conscientiousness and privacy concerns <ns0:ref type='bibr' target='#b26'>(Junglas et al., 2008)</ns0:ref>, we thus expect the aforementioned variables will interact with each other in predicting SSO security awareness. As such, our two final hypotheses are as follows:</ns0:p><ns0:p>H11: Agreeableness interacts with privacy concerns in predicting SSO security awareness H12: Conscientiousness interacts with privacy concerns in predicting SSO security awareness</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Participants</ns0:head><ns0:p>After obtaining approval from the Directorate of Research and Community Services within the university (No: 01.A/DirDPPM/70/DPPM/I/2021), we sent out a link to an online survey through broadcast email and WhatsApp messages to the academic community at one of the largest private universities in Indonesia, which has approximately 23,000 students and 1,000 faculty and staff. Between April 16 and May 4, 2021, a total of 283 participants ranging from 17 to 59 years of age (M = 26.63, SD = 10.23) completed the survey after providing their consents. The questionnaire was delivered in Bahasa Indonesia (see Supplementary Materials). To ensure eligibility and avoid duplication, all participants were required to use their SSO accounts to access the survey. More information about the demographics of respondents is available in Table <ns0:ref type='table'>2</ns0:ref>. Measures Apart from the three demographic variables (i.e., gender, age, and academic role), there are three independent variables (i.e., SSO familiarity, privacy concerns, and Big-Five personality) and one dependent variable (SSO account security awareness) in this study. Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> summarizes the variables of interest along with their respective measurement items that we developed for this study as follows: SSO Familiarity. We developed three items to measure how well users are familiar with the SSO system in their university. The three items cover their overall knowledge of SSO along with its features and risk. 
We then aggregated the three items to calculate a composite score of SSO familiarity in the range of 0 to 100.</ns0:p><ns0:p>Privacy Concerns. We adopted privacy concerns scales developed by <ns0:ref type='bibr' target='#b6'>Buchanan et al (2007)</ns0:ref> to measure user privacy concerns in this study. Specifically, we included only five items related to user accounts. We also calculated a composite score of privacy concerns in the range of 0 to 100 by aggregating all five items.</ns0:p><ns0:p>Big-Five Personality. We adopted the Ten Item Personality Inventory (TIPI), a very brief measure of the Big-Five personality domains <ns0:ref type='bibr' target='#b15'>(Gosling et al., 2003)</ns0:ref>, which has been widely used by many researchers in need of short personality measures in the past two decades. Specifically, we used the one that has been translated and validated in Bahasa Indonesia <ns0:ref type='bibr' target='#b21'>(Hanif, 2018)</ns0:ref> in this study.</ns0:p><ns0:p>SSO Account Security Awareness. We developed five items for each one of the Knowledge, Attitude, and Behavior dimension, resulting in a total of 15 items to measure SSO Account Security Awareness in this study by adapting the Human Aspects of the Information Security Questionnaire (HAIS-Q) <ns0:ref type='bibr' target='#b36'>(Parsons et al., 2017)</ns0:ref>. We then calculated a composite score of SSO account security awareness in the range of 0 to 100 by using the weighted average method (30% for Knowledge, 20% for Attitude, and 50% for Behavior) to be classified further into three categories, i.e., 'Poor' (&lt; 60), 'Average' (60-79.99), and 'Good' (&#8805; 80) as recommended by <ns0:ref type='bibr' target='#b31'>Kruger &amp; Kearney (2006)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Analysis</ns0:head><ns0:p>We employed hierarchical linear regression in R 3.6.3 to analyze the data. As illustrated in Figure <ns0:ref type='figure'>1</ns0:ref>, we conducted three steps of regression analysis with some additional independent variables in each model. In the first regression, we included only SSO familiarity and privacy concerns in addition to the three demographic variables (i.e., gender, age, and academic role) as the predictors. Next, we introduced the Big-Five personality variables to the model in the second regression. Finally, we added the interaction terms between privacy concerns and two out of five Big-Five personality variables (i.e., agreeableness and conscientiousness) in the third regression following the link between them as shown in the literature <ns0:ref type='bibr' target='#b34'>(Osatuyi, 2015)</ns0:ref>. Additionally, we ran several diagnostic tests on the regression model (i.e., Residual Plot, Normal Q-Q Plot, Scale-Location Plot, and Cook's distance) to look for potential outliers where we identified three influential cases that we then omitted prior to repeating the hierarchical regression analysis. The dataset along with the corresponding R code for analysis are available on our GitHub repository 1 .</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>The summary statistics are provided in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> for the dependent variable and in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> for the independent variables. 
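Before turning to these results, the scoring rubric and the three-step hierarchical regression described in the Measures and Data Analysis subsections can be made concrete with a short sketch. The authors implemented the analysis in R (available in their GitHub repository); the following is a hypothetical Python analogue using pandas and statsmodels, included only to make the 30/20/50 weighting, the Kruger &amp; Kearney cut-offs, and the interaction terms explicit. Column names such as `awareness`, `privacy`, and `agreeableness` are illustrative, not taken from the authors' dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

def awareness_score(knowledge, attitude, behavior):
    """Weighted composite on a 0-100 scale: 30% Knowledge, 20% Attitude, 50% Behavior."""
    return 0.30 * knowledge + 0.20 * attitude + 0.50 * behavior

def awareness_category(score):
    """Kruger & Kearney (2006) cut-offs: Poor (<60), Average (60-79.99), Good (>=80)."""
    if score < 60:
        return "Poor"
    return "Average" if score < 80 else "Good"

def hierarchical_models(df: pd.DataFrame):
    """Three nested OLS models mirroring the three hierarchical regression steps."""
    base = "awareness ~ gender + age + role + familiarity + privacy"
    big5 = (" + extraversion + agreeableness + conscientiousness"
            " + emotional_stability + openness")
    interactions = " + privacy:agreeableness + privacy:conscientiousness"
    m1 = smf.ols(base, data=df).fit()
    m2 = smf.ols(base + big5, data=df).fit()
    m3 = smf.ols(base + big5 + interactions, data=df).fit()
    # Influence diagnostics, e.g. Cook's distance: m3.get_influence().cooks_distance
    return m1, m2, m3

# Example: one respondent's composite score and category
score = awareness_score(knowledge=70, attitude=65, behavior=72)
print(score, awareness_category(score))
```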
As can be seen, the average SSO security awareness score for all participants in this study is 69.31 out of 100, which falls into the 'Average' category according to the rubric by <ns0:ref type='bibr' target='#b31'>Kruger &amp; Kearney (2006)</ns0:ref>. When considering each individual measurement item, the mean for the majority of items is indeed between 60 and 79.99. Certain items relating to password reuse (K1, A1), password length (K4, A4), and the use of incognito mode on a shared device (B5) are classified as 'Poor', while others relating to account sharing (K2, A2, B2) and password complexity (K3) are classified as 'Good'. Applying the same categorization to SSO familiarity (i.e., 80.86 out of 100) and privacy concerns (i.e., 85.90 out of 100), however, means they both fall into the 'Good' category.</ns0:p><ns0:p>The scatterplots in Figure <ns0:ref type='figure'>2</ns0:ref> illustrate how SSO security scores vary by demographic variables. As can be seen, the SSO security awareness scores and age tend to form a negative linear relationship. This relationship is broadly consistent across genders and academic roles.</ns0:p><ns0:p>Additionally, the dumbbell plots in Figure <ns0:ref type='figure'>3</ns0:ref> indicate that SSO security awareness is relatively consistent across genders, but not across academic roles. Students consistently demonstrated significantly lower levels of knowledge, attitude, and behavior regarding SSO account security compared to faculty and staff. Apart from their attitude score, which is much lower and closer to the students' score, staff scored fairly close to faculty in terms of knowledge and behavior, resulting in no significant differences in total score between the two.</ns0:p><ns0:p>Following that, Table <ns0:ref type='table'>6</ns0:ref> summarizes the results of the hierarchical regression analysis. As can be seen, all independent variables, with the exception of gender, were found to be statistically significant in the first regression, and they remained significant in the second regression after the addition of the Big-Five personality traits as independent variables in the model. While only one of the five personality traits (i.e., extraversion) was found to be statistically significant in the second regression, the addition of interaction terms between some traits (i.e., agreeableness and conscientiousness) and privacy concerns in the third regression altered the result. As it turned out, statistical significance was found for all but one of the Big-Five personality traits (i.e., openness). With the addition of these interaction terms, another significant finding emerged: privacy concerns were no longer significant predictors of SSO security awareness on their own. Instead, they interact with agreeableness and conscientiousness, as depicted in Figure <ns0:ref type='figure'>4</ns0:ref>.
Furthermore, as shown in Table <ns0:ref type='table'>6</ns0:ref>, these interaction terms are the two strongest predictors in the final model based on their standardized coefficients.</ns0:p><ns0:p>Taking all of the preceding findings into account, Table <ns0:ref type='table'>7</ns0:ref> summarizes the results of the hypothesis tests and Figure <ns0:ref type='figure'>5</ns0:ref> illustrates the final model based on those findings.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head></ns0:div> <ns0:div><ns0:head>Demographics and SSO Security Awareness</ns0:head><ns0:p>As illustrated in Figure <ns0:ref type='figure'>2</ns0:ref> and further confirmed in Table <ns0:ref type='table'>6</ns0:ref>, the generational gap remains present in this study. Older users have lower security awareness than their younger colleagues. While the argument from prior research that older people did not get the same exposure to digital literacy, including cyber security measures, as younger people did <ns0:ref type='bibr' target='#b19'>(Grimes et al., 2010)</ns0:ref> may still hold, this finding raises a more serious issue in this study's context. In Indonesia, as may also be true in many other countries, age arguably correlates with job seniority. Putting this into the context of SSO, therefore, the older the users, the more systems and information are at risk should any cybersecurity incidents happen. Linking this with our previous argument that users in the same educational setting should arguably receive similar exposure, it could be the case that the programs used by the IT department to introduce SSO technology and its security may not yet adequately address older users. In other words, they seem to work effectively only for younger generations. The fact that SSO familiarity significantly predicts SSO awareness further supports our argument.</ns0:p><ns0:p>Interestingly, our analysis results confirmed all demographic hypotheses except for gender. As illustrated in Figure <ns0:ref type='figure'>3</ns0:ref>, the security awareness scores are nearly identical between males and females. Moreover, the association between gender and SSO security awareness remains non-significant after putting psychological factors as well as their interaction terms with privacy concerns into the equation, as shown in Table <ns0:ref type='table'>6</ns0:ref>. These unexpected findings contradict past research reporting that female users tend to be less aware of cyber security measures <ns0:ref type='bibr' target='#b1'>(Anwar et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Pratama &amp; Firmansyah, 2021)</ns0:ref>. Considering that this study takes place within a single higher education institution, it could be the case that both male and female users have already been exposed to similar levels of SSO usage within their institution. Ergo, such gender distinctions have no bearing on their security awareness. The absence of statistically significant differences in SSO security awareness between males and females in this study is encouraging because it demonstrates that organizations can rely on both male and female users having the same level of SSO security awareness.
It could also be attributed to the organization's success in educating users regardless of their gender.</ns0:p><ns0:p>On the other hand, past research indicating a gender gap in cybersecurity awareness either took place in a workplace in which participants' chances of being exposed to such cyber measures might vary <ns0:ref type='bibr' target='#b1'>(Anwar et al., 2017)</ns0:ref> or drew its participants from different places altogether <ns0:ref type='bibr' target='#b39'>(Pratama &amp; Firmansyah, 2021)</ns0:ref>. Those having IT related backgrounds and working directly with external clients might be more aware of cyber threats compared to those having no IT backgrounds and working with internal clients only. Considering that females are still underrepresented in IT related jobs <ns0:ref type='bibr' target='#b40'>(Richter, 2021)</ns0:ref>, this could be the main reason why such a gender gap existed in past research. As such, we argue that any study revealing a gender disparity in cybersecurity awareness should delve deeper into the reasons for it rather than attributing it simply to gender.</ns0:p><ns0:p>As also illustrated in Figure <ns0:ref type='figure'>3</ns0:ref> and confirmed in Table <ns0:ref type='table'>6</ns0:ref>, we found that students are less aware of SSO security than staff and faculty, who share a similar level of SSO security awareness. On one hand, this may be because students have fewer things to lose if such incidents happen. As <ns0:ref type='bibr' target='#b39'>Pratama &amp; Firmansyah (2021)</ns0:ref> argue, how sensitive people are to cyber threats and how well they adhere to cyber security measures is directly proportional to the magnitude of their potential losses should such incidents occur. Students, arguably, have less to lose in regard to their SSO account. On the contrary, faculty and academic staff have a plethora of sensitive data at risk, ranging from financial and salary information to any other private or confidential data, both to them as users and to their institution. As such, it is unsurprising that faculty and staff are more cognizant of SSO account security than students are. On the other hand, the fact that students are significantly less aware of SSO account security also raises the suspicion that some students might intentionally share their SSO accounts. While it would be unethical to suspect all students, given that academic cheating is perceived as collaboration in collectivistic cultures such as Indonesia <ns0:ref type='bibr' target='#b23'>(Jamaluddin et al., 2021)</ns0:ref>, the possibility that a few students misuse their SSO accounts by intentionally sharing them with each other for academic dishonesty cannot be ruled out.</ns0:p></ns0:div> <ns0:div><ns0:head>Privacy Concerns, Big-Five Personality, and SSO Security Awareness</ns0:head><ns0:p>As expected, privacy concerns are positively associated with SSO security awareness, at least in the first two regressions, as shown in Table <ns0:ref type='table'>6</ns0:ref>. Holding all other variables constant, the more concerned users are about their privacy, the more aware they are of SSO security. Interestingly, when we factor in their interaction with the Big-Five personality constructs, this association is no longer significant. In other words, the extent to which privacy concerns may affect users' SSO security awareness is determined by their personality traits. Users with a high degree of
agreeableness (i.e., warm, not critical) are generally less aware of SSO security, but this would change if they also had increased privacy concerns. On the contrary, users who are naturally conscientious (i.e., organized, cautious) tend to have a high level of security awareness regarding their SSO accounts regardless of their privacy concerns, even if the latter can help those with a low degree of conscientiousness improve their security awareness. In this regard, regardless of their level of privacy concerns, it is the users' personality that naturally compels them to be more circumspect and critical, thereby increasing their awareness of the risks associated with their SSO accounts.</ns0:p><ns0:p>In contrast to the two aforementioned traits, extraversion and emotional stability account for SSO security awareness in ways that go beyond privacy concerns. In this regard, the more extraverted users are, the less aware they are of SSO security. By contrast, users who are emotionally stable are more likely to be aware of SSO security. Interestingly, even after controlling for demographic and privacy concerns variables, only the openness trait has no significant association with SSO security awareness. Our attempt to determine whether there are any interactions between these three characteristics and privacy concerns, which revealed none, confirms that these findings are robust.</ns0:p><ns0:p>Altogether, these findings suggest the need for a tailored approach when designing interventions to increase users' SSO security awareness. For example, an intervention emphasizing privacy risk may work best for users with a high degree of agreeableness but be less effective for users with a high degree of conscientiousness. For users with a high degree of emotional stability, it may be better to teach them about SSO security measures directly. Particular attention should be paid to users with extraverted traits, given the negative association with security awareness in the model. Some conventional interventions may not work as effectively for them as they do for users with other personality traits, and further behavioral interventions may be needed. We suggest that future research explore this area further to shed light on the different types of education and interventions that work best for different personality traits.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Our study discovered unique relationships between SSO security awareness, demographic characteristics, privacy concerns, and personality traits. The degree to which users are aware of SSO security varies according to their demographic characteristics and is determined in part by their personality traits, some of which (i.e., agreeableness and conscientiousness) interact with their level of privacy concerns. The absence of gender disparities in SSO security awareness in this study suggests that it is possible to close the gap under the right circumstances, including but not limited to education and policy. It also suggests that closing generational gaps may be a greater challenge than closing gender gaps in cybersecurity awareness.</ns0:p></ns0:div> <ns0:div><ns0:head>Future Work</ns0:head><ns0:p>PeerJ Comput. Sci.
reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This study lays the groundwork for future research and interventions aimed at increasing user awareness of SSO security and closing any existing gaps between different demographic groups of users, particularly in higher education settings. An experimental study examining various types of intervention on SSO security awareness is one way to accomplish this. With the growing adoption of SSO by colleges and universities worldwide, addressing this issue of SSO security awareness is becoming increasingly important. Also, to gain a more holistic understanding of security awareness and practices surrounding SSO, we propose that researchers conduct similar studies in other parts of the world, taking into account cultural differences that may affect cybersecurity awareness, particularly regarding SSO security.</ns0:p></ns0:div> <ns0:div><ns0:head>Variable</ns0:head><ns0:p>Frequency Percentage Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Gender</ns0:head><ns0:p>Computer Science including friends or colleagues, is not prohibited. 3. A combination of uppercase, lowercase, numbers, and special characters is a must when choosing password, including for the university's SSO account. 4. Using a password that is 8 characters long or shorter is not prohibited. 5. When signing-in to the university account through the SSO system on a device that is not my own, using the incognito or private mode in the web browser is necessary. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>K</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,525.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,324.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,199.12,525.00,324.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,286.53,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Variables of interest and measurement items</ns0:cell></ns0:row><ns0:row><ns0:cell>* reverse items were inverted prior to calculation</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Variables of interest and measurement items</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Variable Code Familiarity with SSO</ns0:head><ns0:label /><ns0:figDesc>1. I know what the university's SSO account is. 2. I know what systems and data are accessible with my university's SSO account. 3. I am aware of the risk of negative impacts if my university's SSO account is used by other people. Are you concerned about people online not being who they say they are? 5. 
Are you concerned that an email you send may be read by someone else besides the person you sent it to?</ns0:figDesc><ns0:table><ns0:row><ns0:cell>F</ns0:cell></ns0:row><ns0:row><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>F2</ns0:cell></ns0:row><ns0:row><ns0:cell>F3</ns0:cell></ns0:row><ns0:row><ns0:cell>Privacy concerns 1. In general, how concerned are you about your privacy while you are using the internet? 2. Are you concerned about online organizations not being who they claim they are? 3. Are you concerned about online identity theft? 4. Pr Pr1 Pr2 Pr3 Pr4</ns0:cell></ns0:row><ns0:row><ns0:cell>Pr5</ns0:cell></ns0:row><ns0:row><ns0:cell>Knowledge</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Using the same password for the university's SSO account and other</ns0:cell></ns0:row><ns0:row><ns0:cell>personal accounts like social media is not prohibited.</ns0:cell></ns0:row><ns0:row><ns0:cell>2. Sharing my password for the university's SSO account to other people,</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>I share my password for the university's SSO account with friends or colleagues at the university. 3. I use a combination of uppercase, lowercase, numbers, and special characters for all my passwords, including the university's SSO account. 4. I always use passwords that are more than 8 characters long, including for the university's SSO account. 5. I hardly ever use incognito or private mode in the web browser when signing into the university's SSO account on a device that is not my own. Summary statistics of the dependent variable</ns0:figDesc><ns0:table><ns0:row><ns0:cell>K1r*</ns0:cell></ns0:row><ns0:row><ns0:cell>K2r*</ns0:cell></ns0:row><ns0:row><ns0:cell>K3</ns0:cell></ns0:row><ns0:row><ns0:cell>K4r*</ns0:cell></ns0:row><ns0:row><ns0:cell>K5</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Summary statistics of the independent variablesTable 5. Summary statistics of the independent variables Faculty member is used as the reference category; numbers reported are unstandardized coefficients (B), standard errors of unstandardized coefficients (SE B), standardized coefficients (&#946;), and p-values (p); the bold and blue numbers denote statistically significant values (p &lt; .05);</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64514:1:0:NEW 30 Jan 2022)</ns0:note> <ns0:note place='foot' n='1'>https://github.com/ahmadrafie/ssostudy</ns0:note> </ns0:body> "
"January 30, 2022 Dear Editor, We would like to express our gratitude to you and all reviewers for their constructive comments on the manuscript, which we have revised to address their concerns. Specifically, we provided our response to your and their comments and suggestions directly beneath them in blue. We believe that this manuscript is now ready for publication in PeerJ Computer Science. Dr. Ahmad R. Pratama Assistant Professor of Informatics On behalf of all authors Editor's Decision Add a paper structure paragraph at the end of the Introduction Section. Add the future research directions section. Ensure your bibliography is up-to-date. We broke down the Introduction section into two sections: Introduction (line 36) and Theoretical Framework (line 88). We added a new paragraph at the end of the new Introduction section (line 80) to lay out the paper structure as requested. Likewise, we added a new subsection called Future Work (line 385) at the end of the Conclusion section. Our bibliography has been updated to include additional references published in reputable journals and conference proceedings within the last three years, as well as some papers suggested by the reviewers. Reviewer 1 Basic reporting The paper is well written and is easy to follow. The results are reported with professional diagrams and tables. Thank you for the compliment. Improvement suggestion: 1. The figures should be vector graphics. I suggest the authors regenerate them into pdf or ps graphics Thank you for the suggestion. All figures have been regenerated as PDF files per PeerJ guidelines. 2. Citations from top security conferences (e.g., NDSS, IEEE S&P) are largely missing. The authors are suggested to look through literatures starting with the following paper: AuthScan: Automatic Extraction of Web Authentication Protocols from Implementations Many of our references come from reputable computer science, computer network, and computer security journals, such as Computers & Security, Information & Computer Security, Journal of Computer Information Systems, and Information Security Journal. Nevertheless, we acknowledge that we only had one reference from top security conferences, such as IEEE Trustcom/BigDataSE/I​SPA (Farooq et al., 2015) (line 116). That said, we agreed to incorporate the suggested paper from NDSS (Bai et al., 2013) in the Introduction section (line 46) along with another one (Heckle & Lutters, 2007) from ACM Symposium on Usable Privacy and Security (SOUPS) as suggested by the other reviewer in the Theoretical Framework section (line148), both of which are highly relevant to our study. Experimental design The paper manages to make clear the hypotheses, and design use studies to validate them. I am very happy to see the experiments well follow the ethics. Thank you for the nice words. Improvement suggestion: 1. The privacy concerns (Table 3) in the questionnaire seem not complete. For example, the relying party (RP) may access user's data on identify provider (IDP). It would be good if the authors may consider to include these. Considering the data collection was already done, adding a new construct for privacy concerns is not feasible for the current study. Regardless, the issue with the relying party is irrelevant for this university's SSO implementation, as neither the authentication server nor the services provided by the university requires a third party. 
All of them are deployed, managed, and operated by the university's Board of Information Systems as we explained in the SSO Familiarity subsection (line 127-131). Validity of the findings The conclusions are well stated, and backed by the user studies. Thank you. Reviewer: Lokesh Ramamoorthi Basic reporting Thank you so much for the editor for giving me an opportunity to review this article. Very well written article in Professional English. The research is clear and unambiguous, and very relevant to current topic of information security. SSO is an area where a lot of security attacks can happen in organizations. Educational sector often overlooks security and privacy of information. This article is timely to elaborate with appropriate data about the security mindset of participants in the educational institution. Thank you for the nice words. We are glad that you enjoyed reading this paper. Please to verify spacing and formatting issues before sending the final copy. We have addressed all spacing and formatting issues in the revised manuscript. Experimental design The design and reserach fits within the Information Security domain The research is well defined and attached sumary and survey results fills in the idenfied knowledge gap of SSO. Thank you Validity of the findings Data has been well analysed and conclusions are linked to original research questions. Thank you Reviewer 3 Basic reporting There are many papers in the usability security literature that focus on 2FA evaluations, for example Das “Why Johnny Doesn’t Use Two Factor A Two-Phase Usability Study of the FIDO U2F Security key - https://link.springer.com/chapter/10.1007%2F978-3-662-58387-6_9 and since the paper mentions 2FA a couple more well known references could be included. There is a similar paper by the same principal author related to older users and this could provide a complement to the Pratama and Firmansyah paper mention on lines 85-86. We mentioned 2FA only once in this paper, when discussing the relationship between demographic factors and security awareness in general (line 100). We read the suggested 2FA paper (Das et al., 2018), but believe it focuses more on the hardware side of 2FA, in contrast to the other 2FA paper we used as a reference (Pratama & Firmansyah, 2021), which discusses the user side in detail. Having said that, we chose not to cite it in our paper in order to stay on topic. The authors mention privacy concerns and it would be good to make reference to https://cups.cs.cmu.edu/soups/2007/posters/p173_heckle.pdf Heckle and Lutters, SOUPS 2007. On the other hand, the other suggested papers about privacy concerns (Heckle & Lutters, 2007), is an excellent addition to our references, particularly because it discusses the relationship between privacy and SSO implementation. We added this paper, along with another more recent one that we discovered (Cho et al., 2020), to the Theoretical Framework section, particularly in the Privacy Concerns and Security Awareness subsection (line 148). The level of the grammar and flow of the paper was good. The strength of the paper is the way that they have drawn broadly from the literature since there is not a great deal of literature on SSO evaluations in higher education Thank you for the compliment. Experimental design It would have been interesting to see a hypothesis which took into account whether the subjects they were studying had an impact on their awareness, for example if they were from an arts background or a science background. 
This related to line 142 so it would be interesting why they chose that hypothesis and now one which was linked to their academic backgrounds? Should the participants consist of students and faculty only, their background would be a great additional variable in the model. However, it could not work that way because we also had (administrative) staff in this study. So for example in Figure 3 the security awareness was compared to behaviour attitude and knowledge. I would like to have seen more explanation as to why this aspect was not expanded on more to include the type of faculty they were related to. If some of the staff were professional services staff then it would have been good to have split the categorisation of people more finely. It was not entirely clear what the difference was between students, faculty and staff were on line 73. We have added more information in the second last paragraph of the introduction section to clarify what kind of staff participated in this study (line 77-79). Splitting our participants into more categories would require more samples for the analysis to be conducted, which is why we focused on their age, gender, and academic roles only for this study. The methodology did not indicate on line 188 what the size of the overall population was and what were the limitations of the survey. Did everyone complete all the answers and so were the answers normalised? We have added this information in our Participants subsection (line 197 and line 200-202). Validity of the findings Thank you for providing the tables at the back of the articles but in the main body of the discussion no specific numbers were presented which related to the tables. I would have expected the discussion section to make reference to the tables. The commentary in the discussion did match the tables but the narrative would have been strengthened with more specific referencing to the tables. We acknowledge this problem, and thus we have added references to tables and figures in the discussion section (line 289, 303, 306, 325, and 344). Reviewer: Scott Debb Basic reporting Background information is covered pretty well. Please see attached PDF for specific comments though. Thank you. We have made some necessary changes as suggested in the comments within the attached PDF, such as: 1. Removing redundancy (line 37-40). 2. Brief explanation on how SSO can help institutions save money (line 42-43) 3. Different roles of users in academic institutions (line 50-54) 4. And other writing style suggestions (line 59, 62-65, 68-71, 126-127, 144-145, 154-155, 159-161) Experimental design Rationale and procedures seems good. Please see attached PDF for specific comments about how some of the DV's and IV's are being reported though. We are pleased that you believe the paper's rationale and procedures seem good. We have clarified the way we reported our DV and IVs, both within the manuscript and in the corresponding tables. As already mentioned in the manuscript, all measurement items are available in Table 3 (line 208). Validity of the findings Findings seem to be on target. Suggest modifying some of the figures and note that two figures are not mentioned in text. Also, some of the implications that pertain directly to personality (Big 5) are not well developed and could be made stronger. Please see attached PDF for specific comments though. We agreed that combining Fig.3 and Fig.4 into a single figure (now Fig.3, first mentioned in line 265) was a good idea. 
Similarly, we combined Fig.5 and Fig.6 (now Fig. 4, first mentioned in line 281); these were incorrectly referred to as Fig.3 and Fig.4 in the previous version of the manuscript, resulting in confusion as if they were not mentioned in the text. Having said that, the revised manuscript contains five figures in total rather than the seven in the previous version of the manuscript. We have strengthened the arguments for the implications that directly relate to the Big-Five personality traits and their interactions with privacy concerns by making a few minor changes in the discussion section (line 343-374). Additional comments Please see attached PDF for specific comments. One of the biggest issues is to make sure the article gets proofread by a native English speaker to ensure grammatical consistency. Each of the three authors is fluent in English. One was born in an English-speaking country, while the other two have spent more than seven years each studying for their master's and doctoral degrees in English-speaking countries. We even double-checked our manuscript using premium grammar checkers and had it read and evaluated by our colleagues who are native English speakers, none of whom mentioned grammatical errors as issues, let alone one of the biggest ones, in this paper. On the basis of those premises, and as evidenced by the compliments of the other reviewers, i.e., 'The paper is well written and is easy to follow' (Reviewer 1), 'Very well written article in Professional English' (Reviewer 2), and 'The level of the grammar and flow of the paper was good' (Reviewer 3), we are quite confident in the English in this paper. Having said that, while we acknowledge that the previously submitted manuscript contained some minor issues of a kind that even native English speakers cannot guarantee to avoid, we respectfully disagree with the assertion that this was one of the biggest issues with the manuscript. "
Here is a paper. Please give your review comments after reading it.
373
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Two cross-sectional studies investigated the effects of competition and cooperation with virtual players on exercise performance in an immersive Virtual Reality (VR) cycle exergame. Study One examined the effects of: (1) self-competition whereby participants played the exergame while competing against a replay of their previous exergame session (Ghost condition), and (2) playing the exergame with a virtual trainer present (Trainer condition) on distance travelled and calories expended while cycling. Study Two examined the effects of (1) competition with a virtual trainer system (Competitive condition) and ( <ns0:ref type='formula'>2</ns0:ref>) cooperation with a virtual trainer system (Cooperative condition). Post exergame enjoyment and motivation were also assessed.</ns0:p><ns0:p>The results of Study One showed that the trainer system elicited a lesser distance travelled than when playing with a ghost or on one's own. These results also showed that competing against a ghost was more enjoyable than playing on one's own or with a the virtual trainer. There was no significant difference between the participants' rated enjoyment and motivation and their distance travelled or calories burned. The findings of Study Two showed that the competitive trainer elicited a greater distance travelled and caloric expenditure, and was rated as more motivating. As in study one, enjoyment and motivation were not correlated with distance travelled and calories burned.</ns0:p><ns0:p>Conclusion: Taken together, these results demonstrate that a competitive experience in exergaming is an effective tool to elicit higher levels of exercise from the user, and can be achieved through virtual substitutes for another human player.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Regular exercise is instrumental to the maintenance of physical, mental and psychological health, and to achieving increased longevity <ns0:ref type='bibr' target='#b12'>(Lee and Paffenbarger, 2000;</ns0:ref><ns0:ref type='bibr' target='#b16'>Nelson et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b26'>Warburton et al., 2006)</ns0:ref>.</ns0:p><ns0:p>At least 150 minutes of moderate-intensity exercise, or 75 minutes of high-intensity exercise each week is recommended to attain health-related benefits <ns0:ref type='bibr' target='#b5'>(Garber et al., 2011)</ns0:ref>. However, a significant number of people fail to initiate or maintain regular exercise at the recommended levels <ns0:ref type='bibr' target='#b7'>(Hagstr&#246;mer et al., 2007;</ns0:ref><ns0:ref type='bibr'>ACSM, 1991)</ns0:ref>. Furthermore, for individuals who are prescribed exercise to address medical conditions, adherence is often low <ns0:ref type='bibr' target='#b10'>(Jones et al., 2005)</ns0:ref>.</ns0:p><ns0:p>Mounting evidence suggests that exergames are a promising means for increasing physical activity in otherwise sedentary individuals <ns0:ref type='bibr' target='#b25'>(Warburton et al., 2007)</ns0:ref>. Introducing the gameplay components of a video game to exercise has been shown to increase both exercise motivation and performance <ns0:ref type='bibr' target='#b17'>(Peng and Crouse, 2013;</ns0:ref><ns0:ref type='bibr' target='#b21'>Song et al., 2010)</ns0:ref>. 
Competition and cooperation are elements that appear in traditional video games and play a significant role in the enjoyment of players and their choice of games <ns0:ref type='bibr' target='#b24'>(Vorderer et al., 2003)</ns0:ref>. This makes these factors ideal targets for investigation in exergaming, and past research has shown that competition and cooperation both have an influence on exercise performance, motivation, and enjoyment <ns0:ref type='bibr' target='#b17'>(Peng and Crouse, 2013;</ns0:ref><ns0:ref type='bibr' target='#b22'>Staiano et al., 2013)</ns0:ref>.</ns0:p><ns0:p>While competition and cooperation are useful features for an exergame, they have the downside of normally requiring the presence of another person. Virtual players, such as AI opponents or replays ('Ghosts'), can provide a substitute for human partners. Because the behaviour of a virtual player can be controlled by the game, virtual players offer the possibility of a multiplayer experience customised to be most motivating for the player, or one that guides the player to exercise at an intensity suitable for the intended exercise outcome.</ns0:p><ns0:p>We present two exergaming systems with which we investigated the use of virtual players to provide a competitive or cooperative experience. The first is a 'ghost-replay' system, in which the player is able to record play sessions and then compete against either their own recordings or the recordings of other players. In such a replay system, the user should always be motivated to improve, by focusing on beating the ghost of their last attempt. The second is an AI player in the form of a virtual 'trainer' system, which adapts to the fitness level of the user. We present two user studies. The first compares the ghost replay system with a simple AI trainer. The second utilises a more advanced trainer system, based on the design of the first training system but allowing for differing behaviour profiles. This second study compares two trainer profiles: one that competes with the player and one that cooperates with them.</ns0:p><ns0:p>Using these two studies, this paper aims to answer the following research questions: R1 How does self-competition provided by a ghost replay system influence the user's enjoyment, motivation, or exercise performance during play of a virtual reality exergame? R2 How does playing with a competitive or cooperative trainer system influence the user's enjoyment, motivation, or exercise performance during play of a virtual reality exergame? R3 How does the competitive inclination of the user influence the effectiveness of a competitive or cooperative trainer system on the user?</ns0:p><ns0:p>Based on existing research in this area discussed in the next section, we hypothesise that self-competition via the ghost replay system should increase both the user's enjoyment and motivation in the exergame, as well as their overall exercise performance. We hypothesise less of an effect for a trainer system than the ghost replay system, but expect that a trainer system will be more effective when aligned with the user's personality.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>There has been increasing interest in exergaming research over the last decade.
The research suggests that exergames have the potential to motivate otherwise sedentary individuals to exercise <ns0:ref type='bibr' target='#b25'>(Warburton et al., 2007)</ns0:ref>, but that they can also suffer from limited adherence similar to regular exercise <ns0:ref type='bibr' target='#b15'>(Mestre et al., 2011)</ns0:ref>.</ns0:p><ns0:p>Our focus here is on competitive and social factors that can hopefully improve adherence and motivate users to increase their physical activity, and on past examples of virtual training systems designed to motivate players.</ns0:p><ns0:p>Competitive and cooperative factors have been shown to influence motivation when playing exergames.</ns0:p><ns0:p>Peng and Crouse (2013) compared three conditions in an exergame: single player versus a pre-test score, cooperation in the same physical space, and parallel competition in separate physical spaces. Their results indicate that parallel competition in separate physical spaces is particularly effective as it provides the highest enjoyment, physical activity, and motivation for future play. Interestingly, they found that cooperation was more enjoyable and motivating than solitary play, but solitary play led to greater levels of physical exertion.</ns0:p><ns0:p>Competitive factors can influence exercise performance as well as motivation. <ns0:ref type='bibr' target='#b21'>Song et al. (2010)</ns0:ref> looked at the effects of competitive exergame gameplay on performance and motivation in individuals with competitive and non-competitive personalities. While competition increased exercise performance in players with both personality types, non-competitive players reported lower enjoyment of the game than competitive players, and were less likely to engage in voluntary additional play. This is an important consideration for exergame design: while increased performance may offer short-term benefits, the potential long-term decrease in motivation (e.g., reduced adherence to an exercise program) for players who are not competitive likely outweighs these benefits in the long run.</ns0:p><ns0:p>Cooperative gameplay has also shown benefits in exergaming. In a study conducted by <ns0:ref type='bibr' target='#b22'>Staiano et al. (2013)</ns0:ref>, participants played the Nintendo Wii Active game over a period of 20 weeks. Participants were assigned to either a cooperative or a competitive gameplay condition. Participants assigned to the cooperative condition lost significantly more weight than participants in the control and competitive conditions. The authors attributed the greater effectiveness of the cooperative condition to the social factors involved: as participants worked together to earn points they provided increased support and motivation for one another.</ns0:p></ns0:div> <ns0:div><ns0:head>2/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10549:1:0:REVIEW 13 Aug 2016)</ns0:p></ns0:div> <ns0:div><ns0:head>Manuscript to be reviewed</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There is evidence that the use of Virtual Training systems (Virtual Trainers) influences users' motivation and exercise adherence, and may avoid some of the downsides of traditional multiplayer gaming. 
In particular, situations in which an individual feels stigmatised can affect exercise motivation negatively by increasing anxiety and avoidant behaviours <ns0:ref type='bibr' target='#b11'>(Lantz et al., 1997)</ns0:ref>.</ns0:p><ns0:p>Current research on virtual trainers has focused on the use of a trainer separated from the gameplay. <ns0:ref type='bibr' target='#b9'>Ijsselsteijn et al. (2006)</ns0:ref> studied an exergame in which a virtual coach provided users with regular feedback about their heart rate. The trainer was a virtual human female character that was displayed in the corner of the screen. The feedback was provided in the form of pre-recorded voice cues and corresponding text shown in a speech bubble above the coach, e.g. 'Your heart rate is too low. Cycle faster.' The trainer lowered tension surrounding performance and player control, while not affecting enjoyment. The results also indicated that greater immersion in the game is linked with increased motivation.</ns0:p><ns0:p>The direct instructions used in the aforementioned study by <ns0:ref type='bibr' target='#b9'>Ijsselsteijn et al. (2006)</ns0:ref> have potential downsides. <ns0:ref type='bibr' target='#b8'>Hepler et al. (2012)</ns0:ref> report that the effectiveness of these prompts and cues may rely on the personality and past behaviour of the user. For example, a user with a history of sedentary behaviour may ignore an instruction such as 'cycle faster'. Furthermore, the user's interpretation of feedback can have a significant effect on how it motivates the user. If the feedback is interpreted as controlling, the user may not be inclined to respond to it <ns0:ref type='bibr' target='#b2'>(Deci and Ryan, 1985)</ns0:ref>. As a consequence, cues to exercise harder when the current level of exertion is insufficient should not be presented in a way that may be perceived as controlling, as this is likely detrimental to motivation. <ns0:ref type='bibr' target='#b13'>Li et al. (2014)</ns0:ref> also examined the use of a virtual training system for active video games. In their system, the user's bodily motion was detected using a Kinect 3D sensor, and the user gained points by mimicking the motions shown on screen by the trainer. While this system had a limited degree of gamification, the research indicates that training in an immersive virtual environment is motivating. <ns0:ref type='bibr' target='#b28'>Wilson and Brooks (2013)</ns0:ref> compared training with a virtual trainer in an exergame to training with a certified human trainer in a traditional exercise program. While the levels of exertion (measured by heart rate and rate of perceived exertion) are higher with a human trainer, the results showed no significant difference in exercise adherence between the two trainer types.</ns0:p><ns0:p>In a similar study, <ns0:ref type='bibr' target='#b4'>Feltz et al. (2014)</ns0:ref> had participants completing exercises either alone, partnered with a human, partnered with a human-like virtual player, or partnered with a non-human-like virtual player. The partners were designed to appear more capable than the participant at the exercise task. In similar results to Wilson and Brooks, exercise performance was higher with the human partner than the virtual partners, but all partnered conditions showed higher performance than the solitary condition.</ns0:p><ns0:p>These two studies suggest that a properly designed virtual trainer could be suitable as a longer-term motivational tool for exercise. 
Such a trainer would also likely improve health outcomes by encouraging a greater degree of exercise performance.</ns0:p><ns0:p>While there has been a moderate amount of research on competition and cooperation in exergames, this research has been heavily focused on the use of these factors with other human players. Similarly, while there has been a moderate amount of research on virtual trainers, the training systems in existing research have little gamification and do not look at competition or cooperation.</ns0:p></ns0:div> <ns0:div><ns0:head>STUDY 1: SELF-COMPETITION VS SIMPLE TRAINER Methods</ns0:head><ns0:p>A cross-sectional within-subjects study was conducted to examine the effects of competition and cooperation using different representations of another player in an immersive virtual reality (VR) exergame.</ns0:p><ns0:p>Specifically, the study examined exergame performance in three conditions: (1) solitary play in the exergame with no virtual player 'Default Condition', (2) a ghost condition whereby participants played the exergame while competing against a replay of their performance in the first condition 'Ghost condition', and (3) playing the exergame with a virtual trainer present 'Trainer condition'. The main outcome variables were distance travelled on the Exercycle, calories expended on the Exercycle, and rate of perceived exertion (RPE). In addition, the study explored participants' responses to self-report measures of enjoyment and motivation, following the completion of each condition.</ns0:p><ns0:p>A total of 22 individuals participated in the study. Three participants withdrew from the study due to suffering from discomfort related to the use of the Oculus Rift during the session. The remaining 19 (15 male, 4 female, mean age: 31.5, standard deviation: 9.2) were able to complete it. Informed consent was obtained from all participants, and the study was approved by the University of Auckland Human</ns0:p></ns0:div> <ns0:div><ns0:head>3/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2016:05:10549:1:0:REVIEW 13 Aug 2016)</ns0:ref> Manuscript to be reviewed All participants completed the control condition first in order to provide data to be used during the Ghost condition. The order of the Ghost and Trainer conditions were then randomized so as to counterbalance potential order effects.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>Design</ns0:head><ns0:p>For this research, we extended an existing exergame described in <ns0:ref type='bibr' target='#b19'>Shaw et al. (2015)</ns0:ref>. This exergame was chosen as it elicited high intensity exercise from the users, and was rated by the users as enjoyable. In this exergame, the user cycles along a procedurally generated track, avoiding obstacles and collecting bonuses, in an effort to maximise their score. The speed at which the user moves is governed by the rate at which they pedal on the exercycle. A 3D camera tracks their movements, allowing them to steer by leaning from side to side. The game is presented to the user via an Oculus Rift Head Mounted Display (HMD), providing them with an immersive experience.</ns0:p><ns0:p>We extended this exergame, adding a replay system and a simple virtual trainer system. The base gameplay was also slightly modified, changing obstacles to slow the player and penalize their score, rather than causing them to replay a section. 
This was necessary in order to avoid divergence between the ghost replay system described below, and the user's current play session.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows a screenshot of the exergame, and illustrates some of the gameplay elements.</ns0:p><ns0:p>The exergame allows for playback of a participant's previous attempts through a 'ghost racer' system, in which the participant sees a non-interactive replay of the past attempt on the track as they play. This offers encouragement to exercise harder in order to beat their previous attempt. When users are lagging behind their 'ghost', they are also able to see points where their previous run failed to avoid obstacles, and thus they may be able to adapt their play to avoid more obstacles. The player's ghost has the same appearance as the trainers (shown in figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>): a simplified figure on a bike. When close to the player, the ghost and trainers are semi-transparent, increasing in opacity as they move further away. This is to prevent them from blocking the players view of obstacles or bonuses and becoming a potential source of frustration.</ns0:p><ns0:p>The simple virtual trainer system behaves in a similar fashion to the ghost system. When the player begins to move, another 'player' appears and travels along the track with them. However, rather than showing previous behaviour, the trainer attempts to show optimal behaviour, both in terms of gameplay and exercise. With regard to gameplay, the trainer chooses an optimal path through the track, avoiding all obstacles. With regard to exercise, the trainer adjusts its speed to guide the user towards an ideal exercise heart rate, as explained below.</ns0:p></ns0:div> <ns0:div><ns0:head>4/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2016:05:10549:1:0:REVIEW 13 Aug 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>First, the current heart rate of the user, as measured by the handlebar sensors, is used to estimate a relative heart rate, that is, a percentage of the user's expected maximum heart rate based on their age. This is done using Tanaka et al.'s regression equation 208 &#8722; 0.7 &#215; age <ns0:ref type='bibr' target='#b23'>(Tanaka et al., 2001)</ns0:ref>. The trainer attempts to set a speed suitable for keeping the user's heart rate at the level associated with moderate to vigorous exertion, that is, 64% -90% of their expected maximum heart rate <ns0:ref type='bibr' target='#b5'>(Garber et al., 2011)</ns0:ref>. If the user's heart rate is below 64% ('low heart rate'), the trainer increases its speed, requiring the user to work harder to catch up. If their heart rate exceeds 90% of their maximum ('high heart rate'), it decreases its speed, allowing them to exert less effort to keep pace. While the user's heart rate is in the target zone ('average heart rate'), the trainer stays a short distance in front of the user providing a target to follow in order to motivate them.</ns0:p></ns0:div> <ns0:div><ns0:head>Procedure</ns0:head><ns0:p>Participants completed a pre-test questionnaire to provide general demographic data: their age, gender, and baseline self-report measures of the typical number of hours spent exercising and playing video games each week. 
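(Aside: the age collected in this questionnaire is the only personal input needed for the Tanaka-based heart rate zones used by the trainer described in the Design section above. A minimal sketch of that speed rule follows; the function names and the 1.1/0.9 speed multipliers are illustrative assumptions, since the paper states only that the trainer speeds up below the 64% threshold and slows down above 90%.)

```python
def max_heart_rate(age):
    # Tanaka et al. (2001): expected maximum heart rate = 208 - 0.7 * age.
    return 208 - 0.7 * age

def simple_trainer_speed(player_speed, player_hr, age,
                         low=0.64, high=0.90,
                         speed_up=1.1, slow_down=0.9):
    """Return the trainer's speed for one update of the simple trainer.

    Below 64% of the estimated maximum heart rate the trainer pulls ahead
    faster, above 90% it eases off, and inside the target zone it matches
    the player, sitting a short distance in front as a target to follow.
    The 1.1/0.9 multipliers are illustrative, not the game's actual values.
    """
    relative_hr = player_hr / max_heart_rate(age)
    if relative_hr < low:          # 'low heart rate' zone
        return player_speed * speed_up
    if relative_hr > high:         # 'high heart rate' zone
        return player_speed * slow_down
    return player_speed            # 'average heart rate': keep pace just ahead

# Example: a 30-year-old pedalling at 20 km/h with a heart rate of 110 bpm.
print(simple_trainer_speed(20.0, 110, 30))   # -> 22.0 (trainer speeds up)
```

In the running game this rule would be evaluated continuously against the heart rate readings from the handlebar sensors.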
Participants were then given a written outline of the test procedure and conditions, and written instructions on how to play the exergame. They were assisted with adjusting the exercycle and motion tracking equipment.</ns0:p><ns0:p>Participants first completed the ten-minute 'Control' condition, followed by the 'Ghost' and 'Trainer' conditions in a counterbalanced order, separated by five-minute breaks. In the control condition, the participant's play attempt was recorded to provide the ghost for the later Ghost condition. During the break following each condition, participants were given a sheet listing the different exertion levels on the Borg RPE scale <ns0:ref type='bibr' target='#b1'>(Borg, 1982)</ns0:ref>, and asked to rate their level of exertion. They were also given a post-condition questionnaire in which they were asked to rate how enjoyable and motivating they found the condition, and were invited to give feedback about the Ghost and Trainer systems, and about the exergame in general.</ns0:p></ns0:div> <ns0:div><ns0:head>Measures</ns0:head></ns0:div> <ns0:div><ns0:head>Distance</ns0:head><ns0:p>For each condition, the distance travelled in kilometers on the exercycle was assessed as the total kilometers travelled at the end of each exercise session. This was measured from the exercycle's output.</ns0:p></ns0:div> <ns0:div><ns0:head>Calories Expended</ns0:head><ns0:p>The total kilocalories expended on the exercycle as the total Calories expended at the end of each exercise session. This was measured from the exercycle's output.</ns0:p></ns0:div> <ns0:div><ns0:head>Rate of Perceived Exertion: (RPE)</ns0:head><ns0:p>The RPE scale <ns0:ref type='bibr' target='#b1'>(Borg, 1982)</ns0:ref> is a brief self-administered rating scale that was designed to measure an individual's subjective rating of exercise intensity. At the end of each exercise session, participants rated their perception of effort or 'how hard they felt they had worked' during each exercise session, using a scale ranging from '6' (least exertion) through to '20' (most exertion).</ns0:p></ns0:div> <ns0:div><ns0:head>Enjoyment</ns0:head><ns0:p>Enjoyment was assessed with one item. Participants were asked to rate the statement 'I enjoyed playing the exergame' on a seven-point Likert rating scale. Ratings ranged from 1(Strongly disagree) to 7 (Strongly agree).</ns0:p></ns0:div> <ns0:div><ns0:head>Motivation</ns0:head><ns0:p>Motivation was assessed with one item. Participants were asked to rate the statement 'I found the exergame motivating' on a seven-point Likert rating scale. Ratings ranged from 1(Strongly disagree) to 7 (Strongly agree).</ns0:p></ns0:div> <ns0:div><ns0:head>Data Analyses</ns0:head><ns0:p>The normality and sphericity assumptions of repeated measures analysis of variance (RM-ANOVA) were tested with the Shapiro-Wilk test and Mauchly's sphericity test respectively. With a p-value threshold of 0.05, the normality assumption holds for the measures of distance travelled, calories expended, and RPE, but does not hold for the measures of enjoyment and motivation. The sphericity assumption holds for all measures except motivation.</ns0:p></ns0:div> <ns0:div><ns0:head>5/14</ns0:head><ns0:p>PeerJ Comput. Sci. 
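As a concrete, hedged illustration of these assumption checks and of the omnibus and post-hoc tests described in the next paragraph, the sketch below uses the pingouin library (our choice; the authors do not state which analysis software they used) on a hypothetical long-format table with 'Participant', 'Condition', and outcome columns.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per condition,
# with columns 'Participant', 'Condition' (Control/Ghost/Trainer) and the
# outcome measures. File name and column names are illustrative only.
df = pd.read_csv("study1_long.csv")

# Assumption checks described above.
print(pg.normality(df, dv="Distance", group="Condition"))        # Shapiro-Wilk per condition
print(pg.sphericity(df, dv="Distance", subject="Participant",
                    within="Condition"))                         # Mauchly's sphericity test

# Omnibus and post-hoc tests described in the following paragraph.
print(pg.rm_anova(data=df, dv="Distance", within="Condition",
                  subject="Participant"))                        # repeated-measures ANOVA
print(pg.pairwise_tests(data=df, dv="Distance", within="Condition",
                        subject="Participant", padjust="bonf"))  # Bonferroni post-hocs
print(pg.friedman(data=df, dv="Enjoyment", within="Condition",
                  subject="Participant"))                        # non-parametric outcome
```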
reviewing PDF | (CS-2016:05:10549:1:0:REVIEW 13 Aug 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science RM-ANOVA with post-hoc Bonferroni tests were conducted to examine the effects of the Control, Ghost, and Trainer conditions on distances travelled, calories expended, and rate of perceived exertion.</ns0:p><ns0:p>Due to the non-normally distributed data, the effects of the three conditions on enjoyment and motivation were examined with Friedman tests.</ns0:p><ns0:p>Pearson correlation analyses were used to examine the association between the participant information gathered during the pre-test, and the measures listed above.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the means and standard deviations of the various measures across the three conditions.</ns0:p><ns0:p>The results of a RM-ANOVA showed that there is a significant main effect Condition on distance travelled (F = 15.28, p &lt;.001). The results of the post-hoc Bonferroni tests showed that distance travelled in the Trainer condition was significantly lower than in the Control condition (p = .001), and the Ghost condition (p &lt;.001). There was no significant difference between the Control and Ghost conditions.</ns0:p><ns0:p>There was also a significant main effect Condition on calories expended (F = 4.64, p = 0.16). However, the results of the post-hoc Bonferroni tests did not show any pairwise significances.</ns0:p><ns0:p>There was a significant main effect of Condition on RPE (F = 3.79, p = .032). The results of the post-hoc Bonferroni tests showed that the RPE rating for the Ghost condition is significantly higher than for the Control condition (p = 0.01). There were no significant difference between the Trainer condition and either of the other two conditions.</ns0:p><ns0:p>The results of a Friedman test showed a significant main effect Condition on enjoyment across the three conditions (p = .016). The Ghost condition was significantly more enjoyable than the Control and Trainer conditions.</ns0:p><ns0:p>The results of a Friedman test showed no significant main effect Condition on motivation across the three conditions (p = .370).</ns0:p><ns0:p>The results of a Pearson correlation showed that enjoyment of a condition, and level of motivation in that condition have no significant correlation with distance travelled in the condition.</ns0:p><ns0:p>The use of player recordings of past performance to encourage self-competition shows significant promise to encourage users to exercise via an exergame, particularly if they enjoy competition. Verbal and qualitative feedback from the participants indicated that being able to see and beat their previous attempt was highly enjoyable during the Ghost condition. This study failed to show benefits for the use of a multiplayer-style virtual trainer system, however that may be due to flaws in the trainer system discussed further below.</ns0:p><ns0:p>It is not too surprising to see that the Ghost condition did not encourage players to exercise significantly harder than in the Control condition. Unless turning around to look directly backwards, players would only see their ghost when it was ahead. Thus they would only receive motivation from the ghost to speed up when doing worse than it.</ns0:p><ns0:p>Attitudes towards the trainer system were less positive. Overall, it was not significantly more enjoyable than playing in the absence of another player. 
From verbal and written feedback, several of the participants who did not find the trainer system motivating found that it reduced their enjoyment of the exergame, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science citing unrealistic behaviour: 'The trainer system moved strange'. We suspect this may be related to the framing of the trainer system. While the ghost system was clearly competitive, the trainer system had no particular framing as either competitive or cooperative. If the trainer was ahead, it would show participants an optimal performance, but participants were able to push themselves above the target heart rate and pull ahead; 'competing' with it.</ns0:p><ns0:p>The trainer system was extremely effective at avoiding obstacles, and often navigated through obstacles with superhuman dexterity. When this occurred, participants tended to react negatively, stating that they felt that the trainer was 'cheating', and was not helping them as it was not showing them an optimal path that they were capable of following.</ns0:p><ns0:p>It should however be noted that while the Ghost condition was better received than both the Control and Trainer conditions, the overall participant response to all three of the conditions was generally positive, with the mean enjoyability and motivation ratings still being high. The exergame in general was regarded as enjoyable and motivating, and the Trainer system did not detract from that.</ns0:p></ns0:div> <ns0:div><ns0:head>STUDY 2: COMPETITIVE VS COOPERATIVE TRAINER Methods</ns0:head><ns0:p>A cross-sectional within-subjects study was conducted to examine the effects of competition and cooperation with a virtual trainer in the exergame environment. The effects of: (1) solitary play in the exergame with no virtual player (Default Condition), (2) competition whereby participants played the exergame while competing against a virtual trainer with a competitive behaviour profile, and (3) cooperation whereby participants played the exergame working with a virtual trainer with a cooperative behaviour profile on distance travelled and calories expended while cycling. Participants were recruited through open advertisement. A total of 28 individuals participated in the study, of which 25 (21 male, 4 female, mean age: 24.3, standard deviation: 9.2) were able to complete it. Informed consent was obtained from all participants, and the study was approved by the University of Auckland Human Participants Ethics Committee (reference number: 8450). The study took place between the 20th of July and the 21st of September, 2015.</ns0:p></ns0:div> <ns0:div><ns0:head>Design</ns0:head><ns0:p>The virtual trainer system described in Study 1 was fairly limited in its capability to interact with the user.</ns0:p><ns0:p>For the second study, we designed and implemented a more advanced virtual trainer (shown in Figure Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>2) based on the evaluation of the initial trainer design. The full design of the advanced trainer system is detailed in <ns0:ref type='bibr' target='#b20'>Shaw et al. (2016)</ns0:ref>.</ns0:p><ns0:p>As research discussed earlier in this paper indicates, competition as part of an exergame can affect different users very differently depending on how competitive they are. The behaviour of the first trainer system was not clearly framed as either competitive or cooperative. 
The advanced trainer system was designed to be customizable for either competition or cooperation in order to appeal to different personality types. In order to do that, the advanced trainer implements two behaviour profiles: a competitive profile and a cooperative one. While the competitive trainer profile is programmed to challenge and race against the player, the cooperative trainer profile attempts to help the player achieve a higher score.</ns0:p><ns0:p>Similar to the previous trainer, the advanced trainer always chooses a path that is close to optimal for scoring points and attempts to avoid obstacles. The trainer looks ahead to avoid obstacles in the distance.</ns0:p><ns0:p>For example, if there is an obstacle in the centre of the track, and beyond that is one on the left side, the trainer will choose to go right when avoiding the central obstacle. As such, a user can follow the trainer and potentially achieve a higher score. However, the trainer only looks 40 metres ahead when planning its path. Beyond this point the flat nature of the track (visible in Figures <ns0:ref type='figure' target='#fig_1'>1 and 2</ns0:ref>) and resolution of the Oculus Rift make it difficult for a human to clearly make out obstacles. Limiting the trainer's perception to this distance helps to keep its navigational ability on par with that of a human.</ns0:p><ns0:p>A moderate number of participants in Study 1 (6 of 19) mentioned the unrealistic agility of the first trainer system as something that they did not like about it. For this reason, the lateral movement speed of the advanced trainer was capped at the maximum achievable via the motion controls used by the human player.</ns0:p></ns0:div> <ns0:div><ns0:head>Competitive Trainer</ns0:head><ns0:p>The advanced trainer modifies its behaviour based on the heart rate of the user, considering the same low, average, and high heart rate zones as the simple trainer (see Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>). For the competitive trainer profile, when the player's heart rate is too low, the trainer's speed will increase up to 1.3 times that of the player.</ns0:p><ns0:p>When in the average heart rate zone, the trainer's speed will approximately match that of the player. And when the player is in the high heart rate zone, the trainer's speed will drop down to 0.7 times that of the player.</ns0:p><ns0:p>The speed of the trainer also takes into consideration the distance from the player. If the player spends an extended time in the low heart rate zone, it is important that the trainer does not get too far ahead, otherwise it may be demotivating, or at least no longer motivating if the trainer has moved out of the player's view. Similarly, it is important that the trainer does not fall too far behind if the player is spending an extended period of time above the target heart rate zone, otherwise the benefit of the trainer will be lost even if the player's heart rate falls back into the target zone. Because the game is presented via HMD, clamping the trainer's distance means that the player is always able to look over their shoulder and see the trainer following them.</ns0:p><ns0:p>While in the target zone, the speed variation means that the trainer behaves as a human player of similar abilities, in that it occasionally pulls slightly ahead and occasionally falls slightly behind. This means that the user is always being made aware of the presence of the trainer and is encouraged to compete and stay ahead. 
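A minimal sketch of this competitive speed rule is given below, before the within-zone positioning behaviour is described next; the 1.3 and 0.7 multipliers and the 64%-90% zone come from the description above, while the clamp distance and the size of the in-zone variation are illustrative assumptions.

```python
import random

LOW, HIGH = 0.64, 0.90   # target heart rate zone (Garber et al., 2011)
MAX_GAP = 30.0           # metres; illustrative clamp, not the game's actual value

def competitive_trainer_speed(player_speed, relative_hr, gap_to_player):
    """Speed of the competitive trainer for one update step.

    relative_hr is the player's heart rate as a fraction of their estimated
    maximum; gap_to_player is positive when the trainer is ahead of the player.
    """
    if relative_hr < LOW:
        speed = player_speed * 1.3          # push a lagging player to work harder
    elif relative_hr > HIGH:
        speed = player_speed * 0.7          # ease off when the player is over-exerting
    else:
        # In the target zone: roughly match the player, drifting slightly ahead
        # and behind so the race always feels contestable.
        speed = player_speed * random.uniform(0.95, 1.05)

    # Clamp the gap so the trainer never disappears far ahead or falls far behind.
    if gap_to_player > MAX_GAP:
        speed = min(speed, player_speed)
    elif gap_to_player < -MAX_GAP:
        speed = max(speed, player_speed)
    return speed
```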
As the user tends towards the upper end of the target heart rate zone, the trainer spends more time behind the user, while at the lower end of the zone it spends more time ahead of the user.</ns0:p></ns0:div> <ns0:div><ns0:head>Cooperative Trainer</ns0:head><ns0:p>The cooperative trainer is designed to cooperate with the player, providing assistance to the player in achieving the goals of maximising their score and maintaining an ideal target heart-rate. The cooperative trainer always gives the player a target to focus on, much like a lead cyclist in real-world group cycling activities. Following the cooperative trainer helps a player to follow a near optimal path along the track, avoiding all obstacles. Within this game, cooperation is one-directional: the trainer cooperates with the player, but the player is not providing meaningful assistance to the trainer.</ns0:p><ns0:p>The cooperative trainer uses a similar heart rate based mechanism to the competitive trainer, but it always sits in front of the player, regardless of speed. If the user is in the low heart rate zone, the trainer maintains a position well ahead of but clearly visible to the player. In the high heart rate zone the trainer stays only barely ahead of the player. And in the average heart rate zone, the trainer varies its position in a Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science similar fashion to the competitive trainer, but within the bounds given by its positions when the player is in the high or low heart rate zones. (see Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Procedure</ns0:head><ns0:p>Participants completed a pre-test questionnaire to provide general demographic data: their age, gender, and baseline self-report measures of the typical number of hours spent exercising and playing video games each week. As part of the pre-experiment questionnaire, participants also filled out the Sport Orientation Questionnaire (SOQ) <ns0:ref type='bibr' target='#b6'>(Gill and Deeter, 1988)</ns0:ref> and the Task and Ego Orientation in Sport Questionnaire (TEOSQ) <ns0:ref type='bibr' target='#b3'>(Duda, 1989)</ns0:ref>. These are validated and commonly used questionnaires that provide five personality metrics related to competitiveness in sporting activities: competitiveness, goal orientation, and winning orientation from the SOQ, and task and ego orientation from the TEOSQ.</ns0:p><ns0:p>Following the questionnaires, participants were then given a written outline of the test procedure and conditions, and written instructions on how to play the exergame. The investigator assisted participants with adjusting the exercycle and calibrated the motion-tracking equipment.</ns0:p><ns0:p>Participants completed the three conditions in a counterbalanced order determined by the method of Latin Squares. Prior to each condition, the participants were given a verbal explanation of that condition, including an explanation of the trainer's behaviour. The three conditions were separated by five-minute breaks. 
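(For completeness, a hedged sketch of the cooperative trainer's positioning rule described above is included here before the procedure continues; the concrete lead distances are our assumptions, since the design specifies only 'well ahead', 'barely ahead', and intermediate behaviour within the target zone.)

```python
LOW, HIGH = 0.64, 0.90   # target heart rate zone, as for the competitive profile

# Illustrative lead distances in metres; the paper does not give exact values.
FAR_LEAD, NEAR_LEAD = 25.0, 3.0

def cooperative_trainer_lead(relative_hr):
    """Distance (metres) the cooperative trainer keeps ahead of the player."""
    if relative_hr < LOW:
        return FAR_LEAD                      # well ahead: encourage the player to speed up
    if relative_hr > HIGH:
        return NEAR_LEAD                     # barely ahead: let the player ease off
    # Vary between the two bounds while the player is inside the target zone.
    t = (relative_hr - LOW) / (HIGH - LOW)
    return FAR_LEAD + t * (NEAR_LEAD - FAR_LEAD)
```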
During the break following each condition, participants were given a post-condition questionnaire in which they were asked to rate how enjoyable and motivating they found the condition, and were invited to give general feedback about the trainer (where applicable) and the exergame in general.</ns0:p></ns0:div> <ns0:div><ns0:head>Measures</ns0:head></ns0:div> <ns0:div><ns0:head>Distance</ns0:head><ns0:p>For each condition, the distance travelled in kilometers on the exercycle was assessed as the total kilometers travelled at the end of each exercise session. This was measured from the exercycle's output.</ns0:p></ns0:div> <ns0:div><ns0:head>Calories Expended</ns0:head><ns0:p>The total kilocalories expended on the exercycle as the total Calories expended at the end of each exercise session. This was measured from the exercycle's output.</ns0:p></ns0:div> <ns0:div><ns0:head>Enjoyment</ns0:head><ns0:p>Enjoyment was assessed with one item. Participants were asked to rate the statement 'I enjoyed playing the exergame' on a seven-point Likert rating scale. Ratings ranged from 1(Strongly disagree) to 7 (Strongly agree).</ns0:p></ns0:div> <ns0:div><ns0:head>Motivation</ns0:head><ns0:p>Motivation was assessed with one item. Participants were asked to rate the statement 'I found the exergame motivating' on a seven-point Likert rating scale. Ratings ranged from 1(Strongly disagree) to 7 (Strongly agree).</ns0:p></ns0:div> <ns0:div><ns0:head>Data Analyses</ns0:head><ns0:p>The normality and sphericity assumptions of repeated measures analysis of variance (RM-ANOVA) were tested with the Shapiro-Wilk test and Mauchly's sphericity test respectively. With a p-value threshold of 0.05, the normality assumption holds for the distance and calories measures, but does not hold for the enjoyment and motivation measures. The sphericity assumption holds for all measures except motivation.</ns0:p><ns0:p>RM-ANOVA with post-hoc Bonferroni tests were conducted to examine the effects of the Control, Competitive, and Cooperative conditions on distances travelled and calories expended.</ns0:p><ns0:p>Due to the non-normally distributed data, the effects of the three conditions on enjoyment and motivation were examined with Friedman tests.</ns0:p><ns0:p>Pearson correlation analyses were used to examine the association between the participant information gathered pre-test, and the measures listed above.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the means and standard deviations of the various measures across the three conditions.</ns0:p><ns0:p>The results of a RM-ANOVA showed that there was a significant main effect Condition on distance travelled (F = 7.4, p = 0.002). The results of the post-hoc Bonferroni-corrected tests showed that distance travelled in the Competitive condition was significantly higher than in the Control condition (p = .047), Manuscript to be reviewed</ns0:p><ns0:p>Computer Science and the Cooperative condition (p = .006). There was no significant difference between the Control and Cooperative conditions.</ns0:p><ns0:p>There was a significant main effect Condition on calories expended (F = 6.63, p = 0.003). 
The results of the post-hoc Bonferroni tests showed that calorific expenditure in the Competitive condition was significantly higher than in the Control condition (p = .022), and the Cooperative condition (p = .011).</ns0:p><ns0:p>There was no significant difference between the Control and Cooperative conditions.</ns0:p><ns0:p>The results of a Friedman test did not show a significant difference in enjoyment across the three conditions (p = .756).</ns0:p><ns0:p>The results of a Friedman test showed a significant difference in motivation across the three conditions (p = .027). The Competitive condition was significantly more motivating than the Control and Cooperative conditions.</ns0:p><ns0:p>Enjoyment of a condition, and level of motivation in that condition showed no significant correlation with distance travelled in the condition.</ns0:p><ns0:p>Task orientation as measured by the TEOSQ does not show any significant correlation with any other measurement.</ns0:p><ns0:p>Ego orientation as measured by the TEOSQ shows a moderate positive correlation with distance travelled in the baseline condition (r=0.41, p &lt;0.05), but no significant correlation with distance in the other conditions, or with calories burned in any of the conditions. It shows a moderate negative correlation with motivation in the cooperative condition (r = -0.41, p &lt;0.05), but no significant correlation with motivation in the other two conditions. There was no significant correlation between ego orientation and enjoyment of any of the conditions.</ns0:p><ns0:p>None of the SOQ measurements showed any significant correlation with distance travelled or calories burned in any of the conditions. They also failed to show any significant correlation with reported enjoyment or motivation in any of the conditions.</ns0:p><ns0:p>Time spent on regular exercise and time spent playing video games do not appear to be correlated with the personality metrics measured by the SOQ and TEOSQ. No significant correlations are shown between these lifestyle factors and the personality traits. Surprisingly, for the participants in our study, there was also no significant correlation between time spent exercising and exercise performance in the exergame (distance travelled and calories burned measures). However, time spent on regular exercise did show a moderate negative correlation with rated enjoyment of the baseline condition (r = -0.41, p &lt;0.05).</ns0:p><ns0:p>Unsurprisingly, enjoyment and motivation for each of the conditions were closely related, with a moderate positive correlation between enjoyment and motivation of the default condition (r = 0.49, p &lt;0.05), and the competitive condition (r = 0.46, p &lt;0.05). This was most noticeable in the cooperative condition, with a strong positive correlation between enjoyment and motivation (r = 0.77, p &lt;0.001).</ns0:p><ns0:p>The competitive trainer provided a fairly interactive experience for the participants. Regardless of whether they were ahead of the trainer or behind it, the variations in the trainer's speed caused the race to always appear to be in a situation where the lead could be taken by either player. The cooperative trainer however, seemed less interactive. While its behaviour was more strictly defined as cooperative than that of the trainer in study 1, simply receiving assistance the trainer is an experience of limited interactivity. 
This is reflected in our results with regard to distance travelled, calories burned, and the motivation rating.</ns0:p><ns0:p>It is interesting to note that the cooperative trainer did not prove to be particularly more effective for showing them an ideal path and giving them a target to focus on, the player cannot affect the trainer beyond their heart rate changing its speed. Thus players can get the feeling of being helped, but not of helping. The moderate negative correlation between ego orientation and motivation when playing with the cooperative trainer is interesting. In this case, it may be because the non-ego oriented players found receiving assistance with their gameplay to be motivating.</ns0:p><ns0:p>Our results show an interesting contrast with the findings of <ns0:ref type='bibr' target='#b21'>Song et al. (2010)</ns0:ref>. Like their results, our results do not show non-competitive individuals performing worse in the competitive condition. However, their results showed reduced enjoyment and motivation for non-competitive individuals in a competitive experience. Our results, however, do not show that. This may be because in our case, the competition is against a virtual trainer, rather than another human. Further, in our study the player's opponent is designed to match their fitness level, thus the competition is always fair. These factors likely reduce the negative aspects of a competitive experience for non-competitive individuals.</ns0:p><ns0:p>Verbal and written feedback about the trainers' behaviour was consistent with that given in Study 1:</ns0:p><ns0:p>when the trainers avoided obstacles in a fashion that the participants perceived as inhuman, the participants reacted negatively. Despite the fact that the agility of the trainer was reduced to a human-like level in Study 2, a Chi-Square test showed no significant difference in the number of participants mentioning 'unrealistic' or 'cheating' movements in their open feedback (Study 1 N: 6, Study 2 N: 9, p = 0.76).</ns0:p><ns0:p>Player perceptions of the behaviour of virtual trainers and players would be an interesting future research area.</ns0:p></ns0:div> <ns0:div><ns0:head>LIMITATIONS</ns0:head><ns0:p>These two studies suffer from some limitations in their experimental design and procedure. In Study 1, the need for a dataset to be used by the ghost replay system meant that the default condition could not be counterbalanced with the other two conditions. Additionally, as mentioned above if a participant was beating their ghost, they would have to look behind in order to see it and compare their performance.</ns0:p><ns0:p>However, if they pulled too far ahead the ghost could end up too distant to see.</ns0:p><ns0:p>In Study 2, the user's rate of perceived exertion was not measured. As such, we are unable to see how this is influenced by the competitive or cooperative trainers.</ns0:p><ns0:p>Our use of the SOQ and TEOSQ in Study 2 to analyse the participant's personality may be a limiting factor when considering the effectiveness of our trainer systems. The SOQ and TEOSQ are designed to measure personality traits in a sporting context. While an individual may be generally competitive, it is not unreasonable to assume that they could be competitive in a sporting context but not when playing video games, or vice versa. 
Furthermore, the physical component of our exergame: cycling on an exercycle, is not itself a sporting activity.</ns0:p><ns0:p>In both studies, participants were experiencing a novel technology: the Oculus Rift HMD. It is possible that some of the participants in the study chose to participate in order to access this technology, and this may have influenced their perception towards the exergame. As consumer grade HMDs become more available, the risk of this will hopefully decrease for future work.</ns0:p><ns0:p>Both studies recruited participants through open advertisement, and in both cases a larger number of males than females responded. This may be due to greater male interest in playing an exergame, which is consistent with the fact that more males than females play video games <ns0:ref type='bibr' target='#b14'>(Lucas and Sherry, 2004;</ns0:ref><ns0:ref type='bibr' target='#b27'>Williams et al., 2008)</ns0:ref>. As such, the findings of these studies may be biased towards individuals inclined to play video games.</ns0:p><ns0:p>In both studies, the 'enjoyment' and 'motivation' constructs were only measured with a single item in the post-condition questionnaires. This reduces their reliability in assessing the opinions of the participants.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We have presented a set of systems for an immersive VR exergame that attempt to provide the benefits of a multiplayer experience with regard to the use of competition and cooperation as a motivational tool.</ns0:p><ns0:p>Our results indicate that competition is a useful tool in exergaming, but do not show that that is necessarily the case for cooperation. Virtual players, either a replay or an AI trainer provide an effective Manuscript to be reviewed</ns0:p><ns0:p>Computer Science substitute for a human player in order to increase the motivation of the user, and can increase the user's exercise performance. However a cooperative virtual player appears no more effective than solitary play.</ns0:p><ns0:p>Interestingly, our results do not indicate an influence for the personality of the player on what kind of virtual trainer system they prefer.</ns0:p><ns0:p>Using the user's heart rate as a tool for governing the behaviour of a virtual trainer appears an effective means of balancing the trainer's performance such that the user exercises at a worthwhile intensity.</ns0:p><ns0:p>There are two main implications that our results hold for the design of virtual players for use in exergaming systems. Firstly, our studies indicate that a more interactive experience leads to greater exercise intensity, likely through greater player investment in the experience. Secondly, the experience should be clearly competitive or cooperative as an experience with unclear orientation may be less effective than solitary play.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Sample screenshot of the exergame showing cannons, sandpits, and a bonus on the track.</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.58,206.79' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Screenshot of the exergame showing the trainer in front of the player. The red colour indicates that the player's heart rate is in the high zone. 
Visible at the top of the image is an overhead beam, which the trainer is ducking underneath.</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.58,223.35' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10549:1:0:REVIEW 13 Aug 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Colour and position of the competitive trainer relative to the player.</ns0:figDesc><ns0:graphic coords='11,141.73,74.88,413.58,332.16' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Colour and position of the cooperative trainer relative to the player.</ns0:figDesc><ns0:graphic coords='11,141.73,459.12,413.58,236.09' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of distance traveled (km), calories burned, rate of perceived exertion (RPE), and rated enjoyment and motivation in Study 1. = 19. For each outcome assessment, means sharing a letter in their superscript are not significantly different at the .05 level. Significant mean differences at the 0.5 level are indicated by a &gt;b.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Control</ns0:cell><ns0:cell /><ns0:cell>Ghost</ns0:cell><ns0:cell /><ns0:cell>Trainer</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>3.98 a</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>4.02 a</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>3.57 b</ns0:cell><ns0:cell>0.79</ns0:cell></ns0:row><ns0:row><ns0:cell>Calories</ns0:cell><ns0:cell>78.74 a</ns0:cell><ns0:cell cols='4'>11.57 79.16 a 15.32 73.63 a</ns0:cell><ns0:cell>14.16</ns0:cell></ns0:row><ns0:row><ns0:cell>RPE</ns0:cell><ns0:cell>15.74 b</ns0:cell><ns0:cell>1.29</ns0:cell><ns0:cell cols='2'>16.79 a 1.62</ns0:cell><ns0:cell>16.37 b</ns0:cell><ns0:cell>1.89</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Enjoyment 5.58 b</ns0:cell><ns0:cell>1.31</ns0:cell><ns0:cell>6.16 a</ns0:cell><ns0:cell>1.21</ns0:cell><ns0:cell>5.37 b</ns0:cell><ns0:cell>1.68</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Motivation 5.68 a</ns0:cell><ns0:cell>1.49</ns0:cell><ns0:cell>6.11 a</ns0:cell><ns0:cell>1.15</ns0:cell><ns0:cell>5.68 a</ns0:cell><ns0:cell>1.53</ns0:cell></ns0:row><ns0:row><ns0:cell>Notes: N</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of distance traveled (km), calories burned, enjoyment, and motivation in Study 2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Control</ns0:cell><ns0:cell /><ns0:cell>Competitive</ns0:cell><ns0:cell /><ns0:cell>Cooperative</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>4.22 b</ns0:cell><ns0:cell cols='2'>0.66 4.46 a</ns0:cell><ns0:cell cols='2'>0.76 4.06 b</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>Calories</ns0:cell><ns0:cell>64.88 
b</ns0:cell><ns0:cell cols='2'>6.71 67.96 a</ns0:cell><ns0:cell cols='2'>8.59 63.92 b</ns0:cell><ns0:cell>7.57</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Enjoyment 5.52 a</ns0:cell><ns0:cell cols='2'>1.00 5.24 a</ns0:cell><ns0:cell cols='2'>1.69 5.56 a</ns0:cell><ns0:cell>1.42</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Motivation 5.00 b</ns0:cell><ns0:cell cols='2'>1.35 6.16 a</ns0:cell><ns0:cell cols='2'>0.75 5.12 b</ns0:cell><ns0:cell>1.76</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Notes: N = 25. For each outcome assessment, means sharing a letter in their</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>superscript are not significantly different at the .05 level. Significant mean differences</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>at the .05 level are indicated by a &gt;b.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Department of Computer Science The University of Auckland 38 Princes St Auckland 1010 lsha074@aucklanduni.ac.nz August 13, 2016 Dear John Carroll and Reviewers, Thank you for giving us the opportunity to revise and resubmit this manuscript. We appreciate the insightful and valuable comments provided by all of you and have substantially modified the manuscript to address your comments. In the cases where we respectfully disagree with your assessment, we have provided an explanation of why. We address your comments (italicised) below. In the case of all comments about minor typos and formatting issues, we have fixed them but have not added a specific response. Reviewer 1: Due to the experimental nature of the study in a controlled laboratory setting, the Introduction section of the paper should be rewritten to include a 'Research Questions' statement and the associated 'Hypotheses' substantiated by prior research. The current literature review section should be separated into a standalone Related Work section. Also, it'd be great for the authors to state the primary contributions upfront at the end of the Introduction section to better guide the readers. We agree that these are all suitable changes and we have updated the paper accordingly. In study 1, the participant assignments were 'randomized' (line 121) whereas study 2 follows a 'Latin Square' (line 317) To clarify, in study 1 participants first completed the control condition in order to produce data that could be used by the ghost condition, and then completed the ghost and trainer conditions in alternating order (which would be equivalent to a “Latin Square” approach for only two items). In study 2, the order of all three conditions was determined by the “Latin Square” method. We have updated the paper to indicate this more clearly. Study 1 contains 3 conditions (e.g., control, ghost, and trainer; line 161-166) but study 2 contains also 3 conditions but is mentioned as '2' conditions (line 250-254) which is inconsistent with Table 3's caption. Agreed – this was unclear and we have fixed it to clearly indicate that there were three conditions in each study. Study 2 did not measure RPE. We agree, this is a limitation, and we have added this to a limitations section of the paper. The paper does not provide a correlation table for study 1, but provides a correlation table for study 2 (though the addition of this table serves very little purpose due to the general lack of significant correlations). As you point out, there are few significant correlations. We have removed the correlation table for study 2 and simply refer to any correlations in the text. The authors could've done a dimension reduction to reduce SOQ, especially since none of the covariates correlate with any of the DVs. We agree, this could have been interesting. We performed a Prinicipal Component Analysis to reduce the SOQ, but unfortunately this did not result in correlations with any of the DVs either. Significant relationships should be denoted in Table 1 and Table 2 (e.g., using superscripts a, b, c given a > b > c) Agreed – we have added superscripts to these tables. The paper contains several unsubstantiated claims. For example, statements in Line 232233, 402-403, and 411-414 need to be justified with statistical analysis--need to provide N, posthoc comparisons, and clustering results (e.g., k-mean). Without stats, the claims are completely unsubstantiated. We agree, and after doing statistical analysis we have made the necessary adjustments or removals. 
The most significant issues with this paper is that the argument of competitive or cooperative is mischaracterized given the way the experimental conditions are implemented. Technically, paper experiments with 4 conditions--control (twice), ghost, basic trainer (inhuman agility), trainer-gamified (simulating more naturalistic AI-bot, which the authors classify as 'competitive'), and trainer-unbeatable (supposed to simulate 'cooperative' condition but fails to do so; see my comments later). By definition, 'cooperation' is a situation in which the players work together toward a common goal, but the 'cooperative' condition that is implemented in the experiment is one in which the AI-bot *always* wins without any cooperative element (see Figure 4 and line 398-401). This can also explain why enjoyment/motivation of competitive > control & cooperative (see Table 2). In the 'cooperative' condition, the players always 'lose' by definition. The way in which cooperative condition is implemented in this work is better described as a 'tutorial' or an 'assisted/cheat mode' in which optimal path is hinted. Again, there's no cooperative element in this implementation, which breaks the framing of competitive vs. cooperative framing of the paper. Thank you for this feedback. We agree that one definition of cooperation is “a situation in which the players work together toward a common goal.” The competitive vs cooperative framing of Study 2 was designed using the definition of cooperation as “offering assistance or willingness to assist” (Collins English Dictionary). Adopting this definition of cooperation, the cooperative elements included into the VR trainer included direction on how to avoid obstacles and achieve a higher score, forewarning for upcoming obstacles obscured by nearby features, and guidance on staying within the ideal heart rate zone. Cooperation where both the player and the AI share a common goal would be interesting and worthwhile research, but we believe that that is something that would merit a paper on its own. Such cooperation would be out of the scope of this paper. That said, we acknowledge that in traditional multiplayer video games, cooperation of the sort you describe is common. We have updated our description of the cooperate trainer to more clearly describe the manner of cooperation that exists in this game. We must also respectfully disagree that the cooperative condition is one where the player will always “lose”. At no point is the experience with the cooperative trainer framed as a competition; in this condition there is no concept of “win” or “lose”. As in traditional cooperative games, the fact that one’s partner may be positioned in the front does not mean that one is being defeated by their partner. The system should implement a feedback mechanism (e.g., with a distance or progress bar visual indicator indicating the gap in distance or audio feedback) so that the players can still know the AI's progress even if the AI falls behind (line 229). For the ghost, we agree that this would be useful to the player. While the player was able to look behind and see the ghost if they were not too far ahead, unlike the competitive trainer in study 2, the ghost was able to fall far behind if the player greatly outperformed their previous play attempt. We have updated the paper to acknowledge this limitation. 
In the case of the competitive trainer, due to the fact that the trainer is not permitted to fall too far behind, the player is always able to look over their shoulder to see the AI trailing them. Also, by virtue of the fact that the player is keeping ahead of the trainer, they know that they are exercising at the high end of the target zone, or above it. The design section has been updated to clarify this. The 2 studies can be treated as 1 and the analysis can be done across the 4 conditions stated above. It appears that the ghost condition may outperform any of the AI-bot conditions (comparing with self or with ghost conditions of others in similar profile; see Quantified Self literature). This may be a stronger argument that the current framing of the paper. Thank you for this feedback and suggestions. Although both studies examined competition and cooperation, different aspects of these constructs were investigated in each study. Study one examined the effects of self-competition whereas competition against a “virtual other player” was examined in study 2. Because the division of the trainer in study 2 into a competitive and a cooperative version was based on the fact that the trainer in study 1 was not clearly framed as one or the other, we feel study 2 makes more sense as a follow-up to study 1. Future work needs to account for novel effect and halo effect that could be present in the described studies. We agree – in particular the novelty of the Oculus Rift may have had an influence on how participants reacted to the experience. We have acknowledged this in our limitations section. Reviewer 2: And a tiny error: line 110, authors mention three condition while there are only two. To clarify, there were three conditions (control, ghost, and trainer). We have updated this section to state this correctly. Are 5 minutes enough to separate two sessions. I'm not expert in sport but it seems that this duration is short and even the control condition can decrease the fitness and then the performances of users. In the same line, even if different orders are proposed to different users, it could be interesting to test a possible order effect on the data. We agree, in an ideal situation we would be able to let the participant’s heart rate fully recover to rest prior to starting each condition. However, past research indicates that this takes a substantial amount of time – impractical for a user study (Javorka, M., Zila, I., Balhárek, T., & Javorka, K.. (2002). Heart rate recovery after exercise: relations to heart rate variability and complexity. Brazilian Journal of Medical and Biological Research, 35(8), 991-1000). That said, the aforementioned paper does show that a large portion of heart rate recovery happens within the first five minutes of rest. Why do you evaluate subjective feeling with only one question? Generally, a questionnaire includes some redundant questions and an evaluation of the coherency of the different answers to these question is evaluated. We agree that this is a limitation in the paper, and have acknowledged it in the limitations section. That said, we suspect that the use of multi-item measurements for these constructs would produce similar results. I would like to know which condition is used to record the ghost (is the user already with a trainer or not?). This point could modify the behavior of the ghost and then, the results. Whatever the condition is, it is important to mention it and also to discuss the impact of the recording of the ghost on the results. 
The ghost was recorded in the control condition in study 1 (this was always the first condition performed by the participant). As such the ghost shows the behaviour of the player in the game when there is no other player present. We have updated the paper to clearly state that this is the case. To finish, it is always more easy to compare conditions. However, independent conditions measurement can be surprising. It could be interesting that the authors give their opinion on this point. What were the results if they had test only one condition without comparison? We agree that it would have been very interesting to measure conditions individually. We suspect that had the measurements been individual, we would have seen fairly similar results, particularly in the case of the two competitive conditions. We wonder if participants might have given the other conditions higher ratings for motivation and enjoyment had they not also experienced or been about to experience another condition they enjoyed or expected to enjoy more. Reviewer 3: The introduction and motivation to the study can be better argued by reviewing more related studies and explaining the gaps in current research. It is unclear why the 2 studies need to manipulate the social factors of competition and cooperation, and encourage the player to maintain an appropriate level of exercise. Why are the study conditions considered in the 2 studies? We agree – the motivation part of the introduction has been rewritten, and in the separate related work section added at Reviewer 1’s request, we have identified areas where existing research on competition, cooperation, and virtual trainers is lacking. Why is the exergame selected? One of the main reasons was practical: as we developed the initial version of the game, it was the best match for our available equipment, and we had the source code and associated knowledge. That said, our past research with the game indicated that it was effective at eliciting desirable levels of exercise, and was found to be generally enjoyable by players. We have updated the paper to indicate this fact. Why are the 3 conditions tested? What hypotheses are formulated? We are interested in the two basic types of virtual player: replays and AI players, and in both competition and cooperation. For study 1, this led to a comparison of these two types. As playing with a replay is inherently going to be a competitive experience, study two focused on AI players split into competition and cooperation. We have added these motivations to the paper. We have also added hypotheses to the introduction, based on related work. Explain the sampling profiles. Two studies had more males. Could the study findings be biased? The sampling profiles come from the distribution of individuals who responded to advertisement of the study. We suspect that the greater quantity of males is due to a higher percentage of males playing video games. This may mean that the results are more applicable to gamers than to the general population, but individuals interested in playing video games are an ideal target for an exergame based intervention. We have acknowledged these facts in the limitations section. Why are the measurement items selected? In the case of the 'enjoyment' and 'motivation' constructs, it is a one-item measurement. Is this sufficient? We agree that this is a limitation in the paper, and have acknowledged it in the limitations section. 
That said, we suspect that the use of multi-item measurements for these constructs would produce similar results. It would have been better to combine the 2 studies into a single study and the same subjects be tested under the conditions, so the findings can be more accurately interpreted. We agree that it would be useful to have a single study in which the same subjects experienced the conditions of both of the studies we presented in this paper. Such a study would have some practical concerns however. We generally found that doing three ten-minute sessions of the exergame was very tiring for most people. Requiring participants to complete more than three sessions would likely have increased our drop-out rate. The necessity of breaks between the conditions would also mean that the user study sessions would be very long. We thank you all for the time you have taken to review this paper. We believe the comments provided by the three reviewers have helped us to improve the paper. We hope that with the changes we have made it is now suitable for publication in PeerJ. Kind regards on behalf of all of the authors, Lindsay Shaw "
Here is a paper. Please give your review comments after reading it.
374
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>We investigate and analyze methods to violence detection in this study to completely disassemble the present condition and anticipate the emerging trends of violence discovery research. In this systematic review, we provide a comprehensive assessment of the video violence detection problems that have been described in state-of-the-art researches. This work aims to address the problems as state-of-the-art methods in video violence detection, datasets to develop and train real-time video violence detection frameworks, discuss and identify open issues in the given problem. In this study, we analyzed 80 research papers that have been selected from 154 research papers after identification, screening, and eligibility phases. As the research sources, we used five digital libraries and three high ranked computer vision conferences that were published between 2015 and 2021. We begin by briefly introducing core idea and problems of videobased violence detection; after that, we divided current techniques into three categories based on their methodologies: conventional methods, end-to-end deep learning-based methods, and machine learning-based methods. Finally, we present public datasets for testing video based violence detectionmethods' performance and compare their results. In addition, we summarize the open issues in violence detection in videoand evaluate its future tendencies.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>INTRODUCTION</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Surveillance and anomaly detection have become more important as the quantity of video data has grown rapidly <ns0:ref type='bibr'>(Feng et. al., 2021)</ns0:ref>. When compared to regular activity, such aberrant occurrences are uncommon. As a result, creating automated video surveillance systems for anomaly detection has become a need to reduce labor and time waste. Detecting abnormalities in films is a difficult job since the term 'anomaly' is often imprecise and poorly defined <ns0:ref type='bibr' target='#b112'>(Yang et. al., 2018)</ns0:ref>. They differ greatly depending on the conditions and circumstances in which they occur. Bicycling on a standard route, for example, is a typical activity, but doing so in a walk-only lane should be noted as unusual. The uneven internal occlusion is a noteworthy, yet difficult to explain characteristic of the abnormal behavior. Furthermore, owing to its large dimensionality, resolution, noise, and rapidly changing events and interactions, video data encoding and modeling are more challenging. Other difficulties include lighting changes, perspective shifts, camera movements, and so on <ns0:ref type='bibr' target='#b114'>(Yazdi et. al., 2018)</ns0:ref>. Violence detection is one of the most crucial elements of video-based anomaly detection <ns0:ref type='bibr'>(Khan et. al., 2019)</ns0:ref>. The usage of video cameras to monitor individuals has become essential due to the rise in security concerns across the globe, and early detection of these violent actions may significantly minimize the dangers. A violence detection system's primary goal is to identify some kind of aberrant behavior that fits under the category of violence <ns0:ref type='bibr'>(Mabrouk et. al., 2019)</ns0:ref>. 
If an event's conduct differs from what one anticipates, it is considered violent. A person striking, kicking, lifting the other person, and so on are examples of such anomalies <ns0:ref type='bibr' target='#b73'>(Shao et. al., 2017)</ns0:ref>. Item in an unusual place, odd motion patterns such as moving in a disorganized way, abrupt motions, fallen objects are all examples of violent occurrences <ns0:ref type='bibr' target='#b51'>(Munn et. al., 2018)</ns0:ref>. Since human monitoring of the complete video stream is impractical owing to the repetitive nature of the work and the length of time required, automated identification of violent events in real-time is required to prevent such incidents <ns0:ref type='bibr' target='#b91'>(Tripathi et. al., 2018)</ns0:ref>. Many scholars considered various methods to improve violence detection performance. Using a comprehensive literature review, various techniques of detecting violence from surveillance camera videos are examined and addressed in depth in this study. The main goal of this review is to provide an in-depth, systematic overview of the techniques for detecting violence in video. Various techniques of detecting violence in video and aggressive behavior have been developed during the past decade. These techniques must be classified, analyzed, and summarized. To perform a systematic literature review, we created basic search phrases to find the most relevant studies on the detection of violent behavior accessible in five digital libraries as ScienceDirect, IEEEXplore digital library, Springer, Wiley, Scopus databases, and well-known conferences in the computer vision area as Conference on Computer Vision and Pattern Recognition (CVPR), International Conference on Computer Vision (ICCV), and European Conference on Computer Vision (ECCV). Research highlights of this systematic review are described as follows:</ns0:p><ns0:p>&#61623; Review of state-of-the-art violence detection methods highlighting their originality, key features, and limitations. The remainder of this review is split into five parts. The research methodology of the current review is described in Section II. The fundamental idea and main concepts of violence detection in videos are discussed in Section III. The methods of violence detection in videos are explored in-depth in Section IV. Video feature descriptors and their significance are described in Section V. Worldwide datasets to train the models for violence detection are discussed in Section VI. Evaluation metrics that were used to test violence detection methods are described in Section VII. Challenges and open issues in violence detection are discussed in Section VIII. The last section concludes the review by discussing trends, future perspectives, and open problems of violence detection.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>RESEARCH METHODOLOGY</ns0:head><ns0:p>Both qualitative and quantitative analytic techniques were integrated and used in this systematic literature review <ns0:ref type='bibr' target='#b61'>(Ramzan et. al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Lejmi et. al., 2019)</ns0:ref>. Table <ns0:ref type='table'>1</ns0:ref> demonstrates inclusion and exclusion criterias of the collected studies. We included two criteria as inclusion and six criteria as exclusion criteria.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 1. Inclusion and exclusion criteria.</ns0:head><ns0:p>Fig. 
<ns0:ref type='figure' target='#fig_1'>1</ns0:ref> illustrates the stages of the review as well as the number of articles that were included and eliminated. For the collection of articles, search terms and query strings were determined. The query string combines the search terms 'Video' and 'Violence detection' with the logical operator 'AND' between them. The Science Direct, IEEE Xplore, Springer, and Wiley libraries, as well as the 'Scopus' abstract and citation database, were utilized. In addition, we collected publications from the CVPR, ICCV, and ECCV conferences. Articles published between 2015 and 2021 were considered in our study. 58 records were eliminated during the screening step, and 16 records were eliminated during the eligibility stage due to a lack of full texts or duplication. As a result of the article collection, 80 research articles were included in the systematic literature review, in which we posed four research questions. Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> demonstrates the research questions and their motivations.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Data Analysis</ns0:head><ns0:p>In this subsection, we provide a general analysis of the obtained results. Fig. <ns0:ref type='figure'>2</ns0:ref> demonstrates the year-wise distribution of the papers dedicated to violence detection. Fig. <ns0:ref type='figure'>2a</ns0:ref> illustrates the distribution of violence detection papers from January 2016 to November 2021. As the figure shows, interest in the given problem increases year by year. Fig. <ns0:ref type='figure'>2b</ns0:ref> presents the distribution of the methods applied in the selected papers. As illustrated in the figure, machine learning methods were popular for the video violence detection problem in 2016-2017. Moreover, we can observe a decrease in the use of conventional methods and an increasing trend toward deep learning-based techniques. Fig. <ns0:ref type='figure'>3</ns0:ref> shows the percentage share of each method. SVM is consistently applied in the detection of violence in video, accounting for 24% of all methods used. Conventional methods, which were used from 2015 to 2018, account for about one fifth of all the applied methods. In machine learning, four algorithms are frequently used in violence detection: k-nearest neighbors (2%), adaptive boosting (4%), random forest (7%), and k-means (2%). The increasing use of deep learning techniques in video-based violence detection can be associated with the growing computational performance of hardware; 43% of all the applied methods use deep learning for the violence detection problem. Convolutional neural networks are the most frequently applied method for the given problem. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>CONCEPTS</ns0:head><ns0:p>The main objective of a violence detection system is to identify events in real time so that hazardous situations may be avoided. It is, nevertheless, essential to comprehend certain key principles. Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> depicts the fundamental stages of video-based violence detection methods.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1'>Action Recognition</ns0:head><ns0:p>Action recognition is a technology that can identify human actions. Human activities are categorized into four groups based on the intricacy of the acts and the number of bodily parts engaged in the action.
Gestures, actions, interactions, and group activities are the four categories <ns0:ref type='bibr' target='#b1'>(Aggarwal et. al., 2011)</ns0:ref>. A gesture is a series of motions performed with the hands, head, or other body parts to convey a certain message. A single person's actions are a compilation of numerous gestures. Interactions are a set of human activities involving at least two people. When two actors are involved, one should be a human and the other may be a human or an object. When there are more than two participants and one or more interacting objects, group activities involve a mix of gestures, actions, or interactions <ns0:ref type='bibr' target='#b1'>(Aggarwal et. al., 2011)</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Violence Detection</ns0:head><ns0:p>The detection of violence is a specific issue within the larger topic of action recognition. The goal of violence detection is to identify, automatically and efficiently, whether or not violence happens within a short amount of time. Automatic video identification of human activities has grown more essential in recent years for applications such as video surveillance, human-computer interaction, and content-based video retrieval <ns0:ref type='bibr' target='#b59'>(Poppe, 2010;</ns0:ref><ns0:ref type='bibr'>Sun &amp; Liu, 2013)</ns0:ref>. In any case, detecting violence is a tough task in and of itself, since the notion of violence is subjective. Because it possesses features that distinguish it from generic acts, violence detection is a significant problem not just at the application level but also at the research level.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.'>CLASSIFICATION OF VIOLENCE DETECTION TECHNIQUES</ns0:head><ns0:p>In everyday life, violence is defined as suspicious occurrences or actions. The use of computer vision to recognize such actions in surveillance cameras has become a popular issue in the area of action recognition <ns0:ref type='bibr' target='#b54'>(Naik &amp; Gopalakrishna, 2017)</ns0:ref>. Scientists have presented various approaches and methods for detecting violent or unusual occurrences, citing the fast rise in crime rates as motivation for more efficient identification. Various methods for detecting violence have been developed in the past few years. Based on the classifier employed, violence detection methods are divided into three categories: violence detection using machine learning, violence detection using SVM, and violence detection using deep learning. Because SVM and deep learning are extensively employed in computer vision, they are categorized separately. The accompanying tables explain the specifics of each technique. The techniques are given in the order in which they were developed. The object detection methodology and the feature extraction method of each study are also discussed.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Violence Detection Using Machine Learning Techniques</ns0:head><ns0:p>In this subsection, we review violence detection techniques that applied classical machine learning techniques; a minimal code sketch of the general pipeline they share is given below.
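The following minimal sketch (a hypothetical example with assumed thresholds, feature choices, file names, and helper names, not the implementation of any surveyed paper) derives simple motion-blob statistics from frame differencing and feeds them to a Random Forest classifier in Python.

# Minimal sketch: hand-crafted motion-blob features + Random Forest classifier.
# All thresholds, file names, and feature choices are illustrative assumptions.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_blob_features(video_path, diff_threshold=25):
    """Summarize the inter-frame motion blobs of one clip as a fixed-length vector."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    blob_counts, blob_areas = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                      # frame differencing
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        blob_counts.append(n - 1)                           # ignore the background label
        blob_areas.append(stats[1:, cv2.CC_STAT_AREA].sum() if n > 1 else 0)
        prev = gray
    cap.release()
    return np.array([np.mean(blob_counts), np.std(blob_counts),
                     np.mean(blob_areas), np.max(blob_areas)])

# Training on pre-labelled clips (paths and labels are placeholders).
X = np.stack([motion_blob_features(p) for p in ['fight1.avi', 'normal1.avi']])
y = np.array([1, 0])                                        # 1 = violent, 0 = non-violent
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

The surveyed methods replace these toy statistics with richer descriptors (e.g. MoSIFT, ViF, or LHOG/LHOF features) and much larger labelled datasets, but the overall extract-then-classify structure is the same.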
In Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, we summarize different classification techniques for violence detection in videos by indicating the object detection, feature extraction, and classification approaches, the applicability of the methods to different types of scenes, and their evaluation parameters when used on different datasets. Further, we describe each technique in detail.</ns0:p><ns0:p>In the field of computer vision, action recognition has now become a relevant research area. Nevertheless, most research has concentrated on relatively basic activities such as clapping, walking, running, and so on. The identification of particular events with immediate practical application, such as fighting or general violent conduct, has received much less attention. In certain situations, such as prisons, mental institutions, or even camera phones, video surveillance may be very helpful. A new technique for detecting violent sequences was suggested by <ns0:ref type='bibr'>Gracia et al.</ns0:ref> To distinguish between fighting and non-fighting episodes, features derived from motion blobs are utilized. The proposed method was assessed using three different datasets: the 'Movies' dataset with 200 video clips <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref>, the 'Hockey fight' dataset that consists of 1000 video clips <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, and the UCF-101 dataset of realistic action videos collected from YouTube <ns0:ref type='bibr' target='#b143'>(Soomro et. al., 2012)</ns0:ref>. The proposed method was compared with five other related methods: Bag of Words (BoW) <ns0:ref type='bibr'>(Wang et. al., 2021)</ns0:ref> using scale-invariant feature transform (MoSIFT) <ns0:ref type='bibr' target='#b13'>(Chen &amp; Hauptmann, 2009)</ns0:ref> and STIP <ns0:ref type='bibr' target='#b95'>(Ushapreethi &amp; GG, 2020)</ns0:ref> features, the Violent Flows (ViF) method <ns0:ref type='bibr' target='#b21'>(Souza &amp; Pedrini, 2017)</ns0:ref>, the Local Motion method <ns0:ref type='bibr'>(Zhang et. al., 2019)</ns0:ref>, and the variant v-1 and variant v-2 methods that applied KNN, AdaBoost, and Random Forest classifiers. Although the proposed technique falls short in terms of performance, it has a much quicker calculation time, making it suitable for practical uses. Automatically detecting aggressive behaviors in video surveillance situations such as train stations, schools, and mental institutions is critical. Previous detection techniques, on the other hand, often extract descriptors around spatiotemporal interest points or statistical characteristics in motion areas, resulting in a restricted capacity to identify video-based violent activities efficiently. <ns0:ref type='bibr'>Wang et. al.</ns0:ref> present a new technique for detecting violent sequences to solve this problem <ns0:ref type='bibr' target='#b124'>(Zhou et. al., 2017)</ns0:ref>. To begin, the motion areas are divided into segments based on the distribution of optical flow fields. Second, two types of low-level characteristics are extracted to describe the appearance and dynamics of violent behaviors in the motion areas. The Local Histogram of Oriented Gradient (LHOG) descriptor <ns0:ref type='bibr' target='#b17'>(Dalal and Triggs, 2005)</ns0:ref> derived from RGB pictures and the LHOF descriptor <ns0:ref type='bibr' target='#b18'>(Dalal et.
al., 2006)</ns0:ref> extracted from optical flow images are the suggested low-level features. Finally, to remove duplicate information, the collected features are coded using the Bag of Words (BoW) model, and a specific-length vector is produced for each video clip. Finally, SVM is used to classify the video-level vectors. The suggested detection technique outperforms the prior approaches in three difficult benchmark datasets, according to experimental findings. We chose to start at a basic level to describe what is often present in film with violent human behaviors: jerky and unstructured motion, that is due to the fact that aggressive occurrences are difficult to quantify owing to their unpredictability and sometimes need high-level interpretation. In order to capture its structure and distinguish the unstructured movements, a new problemspecific Rotation-Invariant feature modeling MOtion Coherence (RIMOC) was suggested <ns0:ref type='bibr' target='#b66'>(Ribeiro et. al., 2016)</ns0:ref>. It is based on eigenvalues calculated locally and densely from second-order statistics of Histograms of Optical Flow vectors from successive temporal instants, then embedded into a spheric Riemannian manifold. In a poorly supervised way, the proposed RIMOC feature is utilized to develop statistical models of normal coherent movements. Events with irregular mobility may be identified in space and time using a multi-scale approach combined with an inference-based method, making them ideal candidates for aggressive events. There is no special dataset available for violence and aggressive behavior detection. A big dataset is produced for this goal, which comprises of sequences from two distinct sites: an in-lab fake train and a genuine underground railway line, real train, and then four datasets are formed: fake train, real train, real train station, and real-life settings. These datasets are used in the trials, and the findings indicate that the suggested approach outperforms all state-of-the-art methods in terms of ROC per frame and falsepositive rate. Yao et al. present a multiview fight detection technique based on optical flow statistical features and random forest <ns0:ref type='bibr' target='#b113'>(Yao et. al., 2021)</ns0:ref>. This technique may provide fast and reliable information to cyber-physical monitoring systems. Motion Direction Inconsistency (MoDI) and Weighted Motion Direction Inconsistency (WMoDI), two new descriptors, are developed to enhance the performance of current techniques for films with various filming perspectives and to address misjudgment on nonfighting activities like jogging and chatting. The motion regions are first marked using the YOLO V3 method, and then the optical flow is calculated to retrieve descriptors. Finally, Random Forest is utilized to classify data using statistical descriptor features. The experiments were performed using CASIA Action Dataset <ns0:ref type='bibr' target='#b145'>(Wang et. al., 2007)</ns0:ref> and the UT-Interaction Dataset <ns0:ref type='bibr' target='#b121'>(Zhang et. al., 2017)</ns0:ref>. All films of fighting, as well as 15 additional videos in five categories, were chosen from the CASIA Action Dataset. The findings demonstrated that the proposed approach improves violence detection accuracy and reduces the incidence of missing and false alarms, and it is robust against films with various shooting perspectives.</ns0:p><ns0:p>Fast face detection <ns0:ref type='bibr' target='#b149'>(Arceda et. 
al., 2016)</ns0:ref> is developed to accomplish the objective of identifying faces in violent videos to improve security measures. For the initial step of violent scene identification, the authors utilized the ViF descriptor <ns0:ref type='bibr'>(Ke&#231;eli et. al., 2017)</ns0:ref> in conjunction with Horn-Schunck <ns0:ref type='bibr'>(Ke&#231;eli et. al., 2017)</ns0:ref>. Then, to enhance the video quality, the non-adaptive interpolation super-resolution algorithm was used, followed by the firing of the Kanade-Lucas-Tomasi (KLT) face detector <ns0:ref type='bibr'>(Alex Net, 2021)</ns0:ref>. The authors used CUDA to parallelize the superresolution and face detection algorithms in order to achieve a very fast processing time. The Boss Dataset <ns0:ref type='bibr' target='#b149'>(Bas et. al., 2011)</ns0:ref> was utilized in the tests, as well as a violence dataset based on security camera footage. Face detection yields encouraging results in terms of area under the curve (AUC) and accuracy. For years, computer vision researchers have been exploring how to identify violence. Prior studies, on the other hand, are either shallow, as in the categorization of short-clips and the single scenario, or undersupplied, as in the single modality and multimodality based on hand-crafted characteristics. To address this issue, the XD-Violence dataset, a large-scale and multi-scene dataset with a total length of 217 hours and 4754 untrimmed films with audio signals and poor labels was proposed <ns0:ref type='bibr' target='#b150'>(Wu et. al., 2020)</ns0:ref>. Then, to capture different relations among video snippets and integrate features, a neural network with three parallel branches was proposed: a holistic branch that catches long-range dependencies using similarity prior, a localized branch that captures local positional relations utilizing proximity prior, and a score branch that dynamically captures the closeness of predicted score. In addition, to fulfill the requirements of online detection, the proposed approach incorporates an approximator. Authors use the frame-level precision-recall curve (PRC) and corresponding area under the curve (average precision, AP) <ns0:ref type='bibr' target='#b150'>(Wu et. al., 2020)</ns0:ref> instead of the receiver operating characteristic curve (ROC) and corresponding AUC <ns0:ref type='bibr' target='#b118'>(Yoganand &amp; Kavida, 2018;</ns0:ref><ns0:ref type='bibr'>Xie et. al., 2016)</ns0:ref> because AUC typically shows an optimistic result when dealing with class-imbalanced data, whereas PRC and AP focus on positive samples (violence). The proposed approach beats other state-of-the-art algorithms in the publicly available dataset created by authors. Furthermore, numerous experimental findings indicate that multimodal (audio-visual) input and modeling connections have a beneficial impact. Most conventional activity identification techniques' motion target detection and tracking procedures are often complex, and their applicability is limited. To solve this problem, a fast method of violent activity recognition is introduced which is based on motion vectors <ns0:ref type='bibr'>(Xie et. al., 2016)</ns0:ref>. First and foremost, the motion vectors were directly retrieved from compressed video segments. The motion vectors' characteristics in each frame and between frames were then evaluated, and the Region Motion Vectors (RMV) descriptor was produced. 
To classify the RMV to identify aggressive situations in movies, a SVM classifier with radial basis kernel function was used in the final step. In order to evaluate the proposed method, the authors created VVAR10 dataset that consists of 296 positive samples and 277 negative samples by sorting video clips from UCF sports <ns0:ref type='bibr'>(Xie et. al., 2016)</ns0:ref>, UCF50 <ns0:ref type='bibr' target='#b62'>(Reddy &amp; Shah, 2013)</ns0:ref>, HMDB51 <ns0:ref type='bibr' target='#b33'>(Kuehne et. al., 2011)</ns0:ref> datasets. Experiments have shown that the proposed method can detect violent scenes with 96.1% accuracy in a short amount of time. That is why the proposed method can be used in embedded systems.</ns0:p><ns0:p>Most of the research in the field of action recognition has concentrated on people identification and monitoring, loitering, and other similar activities, while identification of violent acts or conflicts has received less attention. Local spatiotemporal feature extractors have been explored in previous studies; nevertheless, they come with the overhead of complicated optical flow estimates. Despite the fact that the temporal derivative is a faster alternative to optical flow, it produces a low-accuracy and scale-dependent result when used alone <ns0:ref type='bibr' target='#b39'>(Li et. al., 2018)</ns0:ref>. As a result, a cascaded approach of violence detection was suggested <ns0:ref type='bibr'>(Febin et. al., 2020)</ns0:ref>, based on motion boundary SIFT (MoBSIFT) and a movement filtering method. The surveillance films are examined using a movement filtering algorithm based on temporal derivatives in this approach, which avoids feature extraction for most peaceful activities. Only filtered frames may be suitable for feature extraction. Motion boundary histogram (MBH) is retrieved and merged with SIFT <ns0:ref type='bibr' target='#b42'>(Lowe, 2004)</ns0:ref> and histogram of optical flow feature to create MoBSIFT descriptor. The models were trained using MoBSIFT and MPEG Flow (MF) <ns0:ref type='bibr' target='#b31'>(Kantorov &amp; Laptev, 2014)</ns0:ref> descriptors using AdaBoost, RF, and SVM classifiers. Because of its great tolerance to camera motions, the suggested MoBSIFT surpasses current techniques in terms of accuracy. The use of movement filtering in conjunction with MoBSIFT has also been shown to decrease time complexity.</ns0:p><ns0:p>In computer vision, Lagrangian theory offers a comprehensive set of tools for evaluating nonlocal, long-term motion information. Authors propose a specialized Lagrangian method for the automatic identification of violent situations in video footage based on this theory <ns0:ref type='bibr'>(Senst et. al., 2017)</ns0:ref>. The authors propose a new feature based on a spatio-temporal model that utilizes appearance, background motion correction, and long-term motion information and leverages Lagrangian direction fields. They use an expanded bag-of-words method in a late-fusion way as a classification strategy on a per-video basis to guarantee suitable spatial and temporal feature sizes. Experiments were conducted in three datasets as 'Hockey Fight' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, 'Violence in Movies' <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref>, 'Violent Crowd' <ns0:ref type='bibr'>(Hassner et. al., 2012)</ns0:ref>, and 'London Metropolitan Police (London Riots 2011)' <ns0:ref type='bibr' target='#b14'>(Cheng &amp; Williams, 2012)</ns0:ref> datasets. 
Multiple public benchmarks and non-public, real-world data from the London Metropolitan Police are used to verify the proposed system. Experimental results demonstrated that the implementation of Lagrangian theory is a useful feature in aggressive action detection and the classification efficiency rose over the state-of-the-art techniques like two-stream convolutional neural network (CNN, ConvNet), ViF, HoF+BoW with STIP, HOG+BoW with STIP, etc. in terms of accuracy and ROC-AUC measure.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Violence Detection Techniques Using SVM</ns0:head><ns0:p>The methods for detecting violence using the SVM as a classifier are described in-depth here. A collection of SVM based violent incident recognition methods is shown in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>. SVM is a supervised learning method that is used to tackle classification issues. We display data on (number features) dimension space in SVM and distinguish between two groups. SVM is a popular technique in computer vision since it is robust and takes quantitative information into account. It is used to do binary classification jobs. Kernel is the foundation of SVM. Kernel is a function that transforms data into a high-dimensional space in which the issue may be solved. The lack of transparency in the findings is a significant drawback of SVM <ns0:ref type='bibr' target='#b6'>(Auria &amp; Moro, 2007)</ns0:ref>. SVM-based techniques for detecting violence are now described in full separately.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 4. Violence Detection Techniques Using SVM.</ns0:head><ns0:p>A new method for identifying school violence was proposed <ns0:ref type='bibr' target='#b116'>(Ye et. al., 2020)</ns0:ref>. This technique uses the KNN algorithm to identify foreground moving objects and then uses morphological processing methods to preprocess the identified targets. Then, to optimize the circumscribed rectangular frame of moving objects, a circumscribed rectangular frame integrating technique was proposed. To explain the distinctions between school violence and everyday activities, rectangular frame characteristics and optical-flow features were retrieved. To decrease the feature dimension, the Relief-F and Wrapper algorithms were applied. SVM is used as a classifier, and 5-fold crossvalidation was conducted. The results show 94.4 percent precision and 89.6 percent accuracy. In order to improve recognition performance, a DT-SVM two-layer classifier is created. Authors utilized boxplots to identify certain DT layer characteristics that can differentiate between everyday activities and physical violence. The SVM layer conducted categorization for the remaining activities. The accuracy of this DT-SVM classifier was 97.6 percent, while the precision was 97.2 percent, indicating a considerable increase. Surveillance systems are grappling with how to identify violence. However, it has not received nearly as much attention as action recognition. Existing vision-based techniques focus mostly on detecting violence and make little attempt to pinpoint its location. To tackle this problem, Zhange et. al. presented a quick and robust method for identifying and localizing violence in surveillance situations to address this issue <ns0:ref type='bibr' target='#b123'>(Zhang et al., 2016)</ns0:ref>. 
A Gaussian Model of Optical Flow (GMOF) is suggested for this purpose in order to extract potential violent areas, which are adaptively modeled as a departure from the usual crowd behavior seen in the picture. Following that, each video volume is subjected to violence detection by intensively sampling the potential violent areas. The authors also propose a new descriptor called the Orientation Histogram of Optical Flow (OHOF), which is input into a linear SVM for classification to differentiate violent events from peaceful ones. Experimental results on violence video datasets like 'Hockey' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, 'BEHAVE' <ns0:ref type='bibr' target='#b11'>(Blunsden &amp; Fisher, 2010)</ns0:ref>, 'CAVIAR' <ns0:ref type='bibr'>(Caviar, 2004)</ns0:ref> have shown the superiority of the proposed methodology over the state-of-the-art descriptors like MoSIFT and SIFT, HOG, HOF, and Combination of HOG and HOF (HNF), in terms of detection accuracy, AUC-ROC, and processing performance, even in crowded scenes. With more and more surveillance cameras deployed nowadays, the market need for smart violence detection is steadily increasing, despite the fact that it is still a challenging subject in the study. In order to recognize violence in videos in a realistic manner, a novel feature extraction method named Oriented VIolent Flows (OViF) was proposed by Gao et. al., <ns0:ref type='bibr' target='#b139'>(Gao et. al., 2016)</ns0:ref> . In statistical motion orientations, the proposed method fully exploits the motion magnitude change information. The features are selected using AdaBoost, and the SVM classifier is subsequently trained on the features. Experiments are carried out on the 'Hockey' and 'Violent-Flow' <ns0:ref type='bibr'>(Xu et. al., 2018)</ns0:ref> datasets to assess the new approach's performance. The findings indicate that the suggested technique outperforms the baseline methods LTP and ViF in terms of accuracy and AUC. Furthermore, feature and multi-classifier combination methods have been shown to help improve the performance of the violence detector. The experiment results demonstrate that the combination of ViF and OViF using AdaBoost with a combination of Linear-SVM surpasses the state-of-the-art on the Violent-Flows database. The final best violence detection rates are 87.50% and 88.00% on Hockey Fight and Violent-Flows separately using ViF + OViF with Adaboost + SVM. One of the most important stages in the development of machine learning applications is data representation. Data representation that is efficient aids in better classification across classes. Deepak et. al. investigate Spatio-Temporal Autocorrelation of Gradients (STACOG) as a handmade feature for extracting violent activity characteristics from surveillance camera videos <ns0:ref type='bibr' target='#b140'>(Deepak et. al., 2020)</ns0:ref>. The proposed strategy is divided into two stages: (1) Extraction of STACOG based Features features; (2) Discriminative learning of violent/non-violent behaviors using an SVM Classifier. Two well-known datasets were used to test the proposed approach. The Hockey fight dataset <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref> contains 1000 video clips and the Crowd Violence Dataset. The proposed 'STACOG features + SVM' model shown 91.38% accuracy in violence detection overcoming state-of-the-art methods like HOF+BoW, HNF+BoF, ViF+SVM, BiLSTM, GMOF, and others. 
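The SVM-based detectors reviewed in this subsection share a common structure: a fixed-length, clip-level motion descriptor (ViF, OViF, OHOF, STACOG, and similar) followed by an SVM, often with an RBF kernel. The sketch below is a hedged illustration of that shared structure using synthetic placeholder descriptors and illustrative hyperparameters; it is not the implementation of any specific paper above.

# Minimal sketch: clip-level motion descriptors -> RBF-kernel SVM with cross-validation.
# The descriptors here are random stand-ins for ViF/OViF/STACOG-style feature vectors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # 200 clips, one 128-D descriptor per clip
y = rng.integers(0, 2, size=200)         # 1 = violent, 0 = non-violent

model = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
print('5-fold accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))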
In video processing, aggression detection is critical, and a surveillance system that can operate reliably in an academic environment has become a pressing requirement. To solve this problem, a novel framework for an automatic real-time video-based surveillance system is proposed <ns0:ref type='bibr' target='#b2'>(Al-Nawashi et. al., 2017)</ns0:ref>. The proposed system is divided into three phases during the development process. The first stage is preprocessing stage that includes abnormal human activity detection and content-based image retrieval (CBIR) in the event that the system identifies unusual student behavior. In the first stage, students are registered by entering their personal data including first name, second name, birthday, course, student id card, and photos. The entered data is stored in a central database for conducting a search when abnormal actions are detected. The video is then turned into frames in the second step. Motion objects are detected using a temporal-differencing method, and motion areas are identified using the Gaussian function. Furthermore, a form model based on the OMEGA equation is employed as a filter for identified items, whether human or nonhuman. SVM is used to classify human behaviors into normal and abnormal categories. When a person engages in abnormal behavior, the system issues an automated warning. It also adds a method to get the identified item from the database using CBIR for object detection and verification. Finally, a software-based simulation using MATLAB is performed, and experimental findings indicate that the system performs simultaneous tracking, semantic scene learning, and abnormality detection in an academic setting without the need of humans. Kamoona et. al. proposes a model-based method for anomaly identification for surveillance video. There are two stages to the system <ns0:ref type='bibr' target='#b27'>(Kamoona et. al., 2019)</ns0:ref>. Multiple handcrafted features have been presented on this platform. Deep learning techniques have also been used to extract spatialtemporal characteristics from video data, such as C3D features <ns0:ref type='bibr' target='#b84'>(Sultani et. al., 2018)</ns0:ref>, as well as anomaly detection using SVM. The next phase is behavior modeling. In this phase, SVM is trained using a Bag of Visual Word (BOVW) to learn the typical behavior representation. <ns0:ref type='bibr'>Song et. al.</ns0:ref> proposes a new framework for high-level activity analysis based on late fusion and multi-independent temporal perception layers, which is based on late fusion <ns0:ref type='bibr' target='#b79'>(Song et. al., 2018)</ns0:ref>. It is possible to manage the temporal variety of high-level activities using this approach. Multitemporal analysis, multi-temporal perception layers, and late fusion are all part of the framework. Based on situation graph trees (SGT) and SVM, authors create two kinds of perception layers (SVMs). Through a phase of late fusion, the data from the multi-temporal perception layers are fused into an activity score. To test the proposed method, the framework is applied to the detection of violent events by visual observation. The experiments are conducted applying three well-known databases: BEHAVE <ns0:ref type='bibr' target='#b11'>(Blunsden &amp; Fisher, 2010)</ns0:ref>, NUS-HGA <ns0:ref type='bibr' target='#b128'>(Zhuang et. al., 2017)</ns0:ref>, and a number of YouTube videos depicting real-life situations. 
The tests yielded an accuracy of 70.2% (SVM), and 87.2% (SGT) in different datasets, demonstrating how the proposed multi-temporal technique outperforms single-temporal approaches. <ns0:ref type='bibr' target='#b97'>(Vashistha et. al., 2018)</ns0:ref> utilized Linear SVM to categorize incoming video as violent or nonviolent, extracting important characteristics like centroid, direction, velocity, and dimensions. Their approach took into account two feature vectors, i.e. ViF and the Local Binary Pattern (LBP). Because calculating LBP or ViF individually it takes less time than combining these feature vectors, their study found that combining LBP and ViF did not offer substantial direction for future development.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Violence Detection Techniques Using Deep Learning</ns0:head><ns0:p>The methods for detecting violence that utilizes deep learning algorithms in the suggested frameworks are described in-depth here. Convolutional Neural Netowks (CNN) <ns0:ref type='bibr'>(Zhang et. al., 2019)</ns0:ref> and its imrovemebts are widely used in violence detection in video. Table <ns0:ref type='table'>5</ns0:ref> shows a collection of recognition techniques that are based on deep learning. Neural networks are the foundation of deep learning. Using additional convolutional layers, the method is utilized to categorize the violent recognition based on the data set and retrieved features. Now, techniques for detecting violence that utilize deep learning algorithms are discussed in depth individually. While most studies have focused on the issue of action recognition, fighting detection has received much less attention. This skill may be very valuable. To build complicated handmade characteristics from inputs, most techniques require on domain expertise. Deep learning methods, on the other hand, may operate directly on raw inputs and extract necessary features automatically. As a result, Ding et. al. created a new 3D ConvNets approach for video violence detection, which does not need any previous information <ns0:ref type='bibr' target='#b141'>(Ding et. al., 2014)</ns0:ref>. The convolution on the collection of video frames is computed using a 3D CNN, and therefore motion information is retrieved from the input data. The back-propagation technique is used to obtain gradients and the model has been trained to apply supervised learning. Experimental validation was carried out in the context of the 'Hockey fights' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref> dataset to assess the approach. The findings indicate that the approach outperforms manual features in terms of performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 5. Violence detection using deep learning techniques.</ns0:head><ns0:p>Campus violence is a worldwide social phenomenon that is the most dangerous sort of school bullying occurrence. There are various possible strategies to identify campus violence as AI and remote monitoring capabilities advance, such as video-based techniques. Ye et. al. combine visual and audio data for campus violence detection <ns0:ref type='bibr' target='#b117'>(Ye et. al., 2021)</ns0:ref>. Role-playing is used for campus violence data collection, and 4096-dimension feature vectors are extracted from every 16 frames of video frames. For feature extraction and classification, the 3D CNN is used, and overall precision of 92.00 percent is attained. 
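As a rough illustration of the 3D-convolution idea used by these detectors (a generic sketch with assumed input sizes and layer widths, not the exact architecture of Ding et. al. or Ye et. al.), a stack of RGB frames can be classified in PyTorch as follows.

# Minimal sketch of a 3D CNN for clip-level violence classification (PyTorch).
# Input: clips shaped (batch, channels, frames, height, width); all sizes are assumptions.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                             # halves frames, height, and width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):
        x = self.features(clips).flatten(1)
        return self.classifier(x)                        # logits for a softmax classifier

model = Tiny3DCNN()
clips = torch.randn(4, 3, 16, 112, 112)                  # 4 clips of 16 RGB frames each
logits = model(clips)                                     # shape: (4, 2)

Real systems of this kind use deeper stacks of 3D convolutions and are trained with a cross-entropy loss on labelled violent and non-violent clips.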
Three speech emotion datasets are used to extract mel-frequency cepstral coefficients (MFCCs) as acoustic features: the CASIA dataset <ns0:ref type='bibr' target='#b145'>(Wang et. al., 2007)</ns0:ref> that has 960 samples, the Finnish emotional dataset <ns0:ref type='bibr' target='#b96'>(Vaaras et. al., 2021)</ns0:ref> that consists of 132 samples, and the Chinese emotional dataset <ns0:ref type='bibr' target='#b81'>(Poria et. al., 2018)</ns0:ref>. An enhanced Dempster-Shafer (D-S) algorithm is proposed to handle the problem of evidence dispute. As a result, recognition accuracy reached 97%. To address the issue of large-scale visual place identification, the NetVLAD architecture is presented, where the goal is to rapidly and correctly identify the location of a supplied query image <ns0:ref type='bibr' target='#b5'>(Arandjelovic et. al., 2016)</ns0:ref>. NetVLAD is a CNN-based approach for weakly supervised place recognition. In this work, three major contributions are presented. First, for the location recognition problem, a CNN architecture is created that can be trained in a direct end-to-end way. The central feature of this approach, NetVLAD, is a novel generalized VLAD layer inspired by the widely used image representation 'Vector of Locally Aggregated Descriptors' (VLAD). The layer may be easily integrated into any CNN model and trained using backpropagation. The second contribution is the construction of a training method based on a novel weakly supervised ranking loss, to learn architectural parameters in an end-to-end way using Google Street View Time Machine pictures showing the same locations over time. Finally, using the Pittsburgh <ns0:ref type='bibr' target='#b87'>(Torii et. al., 2013)</ns0:ref> and Tokyo 24/7 <ns0:ref type='bibr' target='#b88'>(Torii et. al., 2015)</ns0:ref> datasets, the authors demonstrated that the proposed architecture outperforms non-learned image representations and off-the-shelf CNN descriptors on two difficult place recognition benchmarks, and outperforms current state-of-the-art image representations on standard image retrieval benchmarks. A real-time violence detection system is presented <ns0:ref type='bibr' target='#b133'>(Fenil et. al., 2019)</ns0:ref>, which analyzes large amounts of streaming data and recognizes aggression using a human intelligence simulation. The system's input is a massive quantity of real-time video feeds from various sources, which are analyzed using the Spark framework. The frames are split and the characteristics of individual frames are retrieved using the HOG function in the Spark framework. The frames are then labeled based on characteristics such as the violence model, human component model, and negative model, which are trained using the BDLSTM network for violent scene detection. The data may be accessed in both directions via the bidirectional LSTM; as a result, the output is produced in the context of both past and future data. The violent interaction dataset (VID) is used to train the network, which contains 2314 movies with 1077 fights and 1237 no-fights. The authors also generated a dataset of 410 video episodes with neutral scenes and 409 video episodes with violence. The accuracy of 94.5% in detecting violent behavior validates the model's performance and demonstrates the system's robustness. <ns0:ref type='bibr'>Mu et.
al.</ns0:ref> are presented a violent scene identification method based on acoustic data from video <ns0:ref type='bibr'>(Mu et. al., 2016)</ns0:ref>. CNN in two ways: as a classifier and as a deep acoustic feature extractor. To begin, the 40-dimensional Mel Filter-Bank (MFB) is used as the CNN's input feature. The video is then divided into little pieces. To investigate the local features, MFB features are split into three feature maps. Then CNN is utilized to represent features. The CNN-based features are applied to construct SVM classifiers. Then the violent scene detection process is applied to each frame of video. After that, detection is generated by applying maximum or minimum pooling to the segment level. Experiments are conducted using the MediaEval dataset <ns0:ref type='bibr'>(Demarty et. al., 2014)</ns0:ref>, and the findings indicate that the proposed approach outperforms the fundamental techniques in terms of average precision: audio alone, visual solely, and audio learned fusion and visual.</ns0:p><ns0:p>A new deep violence detection approach based on handcrafted techniques' distinctive characteristics was presented <ns0:ref type='bibr' target='#b48'>(Mohtavipour et. al., 2021)</ns0:ref>. These characteristics are linked to appearance, movement speed, and representative images, and they are supplied to a CNN as spatial, temporal, and spatiotemporal streams. With each frame, the spatial stream teaches the neural network how to recognize patterns in the surroundings. With a modified differential magnitude of optical flow, the temporal stream included three successive frames to learn motion patterns of aggressive behavior. Furthermore, the authors developed a discriminative feature with a new differential motion energy picture in the spatio-temporal stream to make violent behaviors more understandable. By combining the findings of several streams, this method includes many elements of aggressive conduct. The proposed CNN network was trained using three datasets: Hockey <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, Movie <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref>, and ViF <ns0:ref type='bibr' target='#b64'>(Rota et. al., 2015)</ns0:ref>. The proposed method beat state-of-the-art approaches in terms of accuracy and processing time. Sudhakaran and Lanz proposed a deep neural network for detecting violent scenes in videos <ns0:ref type='bibr'>(Sudhakaran &amp; Lanz, 2017)</ns0:ref>. To extract frame-level characteristics from a video, a CNN is applied. The frame-level characteristics are then accumulated using LSTM that uses a convolutional gate. The CNN, in combination with the ConvLSTM, can capture localized spatio-temporal characteristics, allowing for the analysis of local motion in the video. The paper also proposed feeding the model neighboring frame differences as input, pushing it to encode the changes in the video. In terms of recognition accuracy, the presented feature extraction process is tested on three common benchmark datasets as 'Hockey' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, 'Movies' <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref>, and 'Violent-Flows' <ns0:ref type='bibr'>(Xu et. al., 2018)</ns0:ref>. Findings were compared to those produced using state-ofthe-art methods. 
It was discovered that the suggested method had a promising capacity for identifying violent films prevailing state-of-the-art methods as three streams + LSTM, ViF, and ViF+OViF.</ns0:p><ns0:p>To identify violent behaviors of a single person, an ensemble model of the Mask RCNN and LSTM was proposed <ns0:ref type='bibr' target='#b53'>(Naik &amp; Gopalakrishna, 2021)</ns0:ref>. Initially, human key points and masks were extracted, and then temporal information was captured. Experiments have been performed in datasets as Weizmann <ns0:ref type='bibr' target='#b10'>(Blank et. al., 2005)</ns0:ref>, KTH <ns0:ref type='bibr' target='#b67'>(Schuldt et. al., 2004)</ns0:ref>, and own Dataset respectively. The results demonstrated that the proposed model outperforms individual models showing a violence detection accuracy rate of 93.4% in its best result. The proposed approach is more relevant to the industry, which is beneficial to society in terms of security. Typical approaches depend on hand-crafted characteristics, which may be insufficiently discriminative for the job of recognizing violent actions. Inspired by the good performance of deep learning-based approaches, propose a novel method for human violent behavior detection in videos by incorporating trajectory and deep CNN, that includes the advantage of hand-crafted features and deep-learned features <ns0:ref type='bibr'>(Meng et. al., 2021)</ns0:ref>. To assess the proposed method, tests on two distinct violence datasets are performed: 'Hockey Fights' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref> and 'Crowd Violence' <ns0:ref type='bibr' target='#b80'>(Song et. al., 2019)</ns0:ref> dataset. On these datasets, the findings show that the proposed approach outperforms state-of-the-art methods like HOG, HOF, ViF, and others.</ns0:p><ns0:p>Rend&#243;n-Segador et. al. present a new approach for determining whether a video has a violent scene or not, based on an adapted 3D DenseNet, for a multi-head self-attention layer, and a bidirectional ConvLSTM module that enables encoding relevant spatio-temporal features <ns0:ref type='bibr' target='#b63'>(Rend&#243;n-Segador et. al., 2021)</ns0:ref>. In addition, an ablation analysis of the input frames is carried out, comparing dense optical flow and neighboring frames removal, as well as the effect of the attention layer, revealing that combining optical flow and the attention mechanism enhances findings by up to 4.4 percent. The experiments were performed using four datasets, exceeding state-of-the-art methods, reducing the number of network parameters needed (4.5 million), and increasing its efficiency in test accuracy (from 95.6 percent on the most complex dataset to 100 percent on the simplest), and inference time (from 95.6 percent on the most complex dataset to 100 percent on the simplest dataset) (less than 0.3 s for the longest clips). Human action recognition has become a major research topic in computer vision. Tasks like violent conduct or fights have been researched less, but they may be helpful in a variety of surveillance video situations such as jails, mental hospitals, or even on a personal mobile phone. Their broad applicability piques interest in developing violence or fight detectors. The main feature of the detectors is efficiency, which implies that these methods should be computationally quick. 
Although handcrafted spatio-temporal characteristics attain excellent accuracy for both appearance and motion, extraction of certain features remains prohibitive for practical uses. For the first time, the deep learning paradigm is applied to a job using a 3D CNN that accepts the whole video stream as input. However, motion characteristics are critical for this job, and utilizing full video as input causes noise and duplication in the learning process. A hybrid feature 'handcrafted/learned' framework was developed for this purpose <ns0:ref type='bibr'>(Serrano et. al., 2018)</ns0:ref>. The technique attempts to get an illustrative picture from the video sequence used as an input for feature extraction, using Hough forest as a classifier. 2D CNN is then utilized to categorize that picture and determine the sequence's conclusion. Experiments are carried out on three violence detection datasets as 'Hockey' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, 'Movie' <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref>, and 'Behavior' <ns0:ref type='bibr' target='#b130'>(Zhou et. al., 2018)</ns0:ref>. The findings show that the suggested approach outperforms the various handmade and deep learning methods in terms of accuracies and standard deviations. Two-stream CNN architecture, as well as an SVM classifier, is proposed <ns0:ref type='bibr' target='#b108'>(Xia et. al., 2018)</ns0:ref>. Feature extraction, training, and label fusion are the three phases of the method. Each stream CNN employs an Imagenet VGG-f architecture that has been pre-trained. The first stream collects visual information from successive frame differences, whereas the second stream extracts motion data. Then, using sight and motion information, two SVM classifiers are trained. Finally, a label fusion technique is used to get the detection result. The primary benefit of this technique is that it takes very little time to process. However, since this technique can not identify aggressive behaviors amongst individuals at close range, it is difficult to detect violence in large groups. Accattoli used two-stream CNNs in a similar way <ns0:ref type='bibr' target='#b0'>(Accattoli et. al., 2020)</ns0:ref>. To capture long temporal information, they suggest combining CNNs with better trajectories. To extract geographical and temporal information, they utilize two VGG-19 networks. Video frames are used to extract spatial information, while dense optical flow pictures are used to retrieve temporal information.</ns0:p><ns0:p>In smart cities, schools, hospitals, and other surveillance domains, an improved security system is required for the identification of violent or aberrant actions in order to prevent any casualties that may result in social, economic, or environmental harm. For this purpose, a three-staged end-toend deep learning violence detection system is presented <ns0:ref type='bibr'>(Ullah et. al., 2019)</ns0:ref>. To minimize and overcome the excessive processing of use-less frames, people are first identified in the surveillance video stream using a lightweight CNN model. Second, a 16-frame sequence containing identified people is sent to 3D CNN, which extracts the spatiotemporal characteristics of the sequences and feeds them to the Softmax classifier. 
The authors also used open visual inference and neural networks optimization tools created by Intel to optimize the 3D CNN model, which transforms the training model into intermediate representation and modifies it for optimum execution at the end platform for the ultimate prediction of violent behavior. When violent behavior is detected, an alarm is sent to the closest police station or security agency so that immediate preventative measures may be taken. The datasets 'Violent Crowd' <ns0:ref type='bibr'>(Hassner et. al., 2012)</ns0:ref>, 'Hockey' <ns0:ref type='bibr' target='#b135'>(Nievas et. al., 2011)</ns0:ref>, and 'Violence in Movies' <ns0:ref type='bibr'>(Bermejo et. al., 2012)</ns0:ref> are used in the experiments. The experimental findings show that the proposed approach outperforms state-of-the-art algorithms such as ViF, AdaBoost, SVM, Hough Forest, and 2D CNN, sHOT, and others in terms of accuracy, precision, recall, and AUC.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>VIDEO FEATURES AND DESCRIPTORS</ns0:head><ns0:p>This section goes through the feature descriptors that violence detection papers utilized in their research as well as other recent state-of-the-art descriptors. The fundamental components for detecting activity from the video are video features. The dataset and characteristics collected from video to evaluate the pattern of activity have a direct impact on the methodology's accuracy. For example, in combat situations, the movement of various objects increases faster. The movement of objects in a typical setting is normal and not too rapid. The direction of item movement in relation to time and space is also utilized to investigate unusual occurrences. Table <ns0:ref type='table'>6</ns0:ref> lists all of the features that were utilized in the research. A number of scholars, such as Lam et al. <ns0:ref type='bibr' target='#b36'>(Lejmi et. al., 2019)</ns0:ref>, have worked hard to identify fights and physical violence. Previous studies <ns0:ref type='bibr' target='#b1'>(Aggarwal and Ryoo, 2011;</ns0:ref><ns0:ref type='bibr' target='#b59'>Poppe, 2010)</ns0:ref> used blood or explosions as signals of violence, but these cues are seldom alarming. One study recently developed a feature that offers strong multimodal audio and visual signals by first combining the audio and visual characteristics and then exposing the combined multi-modal patterns statistically <ns0:ref type='bibr'>(Sun &amp; Liu, 2013)</ns0:ref>. Multiple kernel learning is used to increase the multimodality of movies by combining visual and audio data <ns0:ref type='bibr' target='#b54'>(Naik &amp; Gopalakrishna, 2017)</ns0:ref>. Audio-based techniques, on the other hand, are always constrained in real life due to the lack of an audio channel. The problem of detecting violent interactions is basically one of action recognition. The objective is to extract characteristics that may describe the sequences throughout the battle using computer vision technology. Handcrafted features and learning features are the two types of features available. Table <ns0:ref type='table'>6</ns0:ref>. Video features were used in the selected studies.</ns0:p><ns0:p>Hand-crafted features. Human-designed features are referred to as hand-crafted features. In action recognition, Space-Time Interest Points (STIPs) <ns0:ref type='bibr'>(Gracia et. al., 2015)</ns0:ref> and Improved Dense Trajectories (iDTs) <ns0:ref type='bibr' target='#b9'>(Bermejo et. al., 2011)</ns0:ref> are often employed. 
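For illustration, the following hedged sketch (a hypothetical helper built on OpenCV's Farneback dense optical flow) turns a pair of grayscale frames into a simple histogram of flow orientations, the kind of low-level motion statistic on which these hand-crafted descriptors build.

# Minimal sketch: dense optical flow -> orientation histogram (a HOF-like descriptor).
# The bin count and Farneback parameters are illustrative assumptions.
import cv2
import numpy as np

def flow_orientation_histogram(prev_gray, curr_gray, bins=8):
    """Histogram of optical-flow directions, weighted by flow magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])   # magnitude and angle (radians)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)    # normalize so clips of any length are comparable

Descriptors such as HOF, ViF, and OViF aggregate statistics of this kind over space and time before passing them to a classifier.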
<ns0:ref type='bibr'>Deniz et al.</ns0:ref> proposed a new approach for detecting violent sequences that utilizes severe acceleration patterns as the primary characteristic and applies the Radon transform on the power spectrum of successive frames to identify violent sequences <ns0:ref type='bibr'>(Nievas et. al., 2011). Yang et. al.</ns0:ref> presented additional characteristics derived from motion blobs between successive frames to identify combat and non-fight sequences recently <ns0:ref type='bibr' target='#b112'>(Yang et. al., 2018)</ns0:ref>. A robust and understandable method based on motion statistical characteristics from optical flow pictures was presented in similar works <ns0:ref type='bibr' target='#b114'>(Yazdi et. al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b143'>Soomro et. al., 2012)</ns0:ref>. <ns0:ref type='bibr'>Zhang et al.</ns0:ref> use a GMOF to identify potential violence areas and a linear SVM with OHOF input vectors to discover fight regions <ns0:ref type='bibr'>(Zhang et. al., 2019)</ns0:ref>. This kind of approach based on hand-crafted features is simple and effective for a small-scale dataset, but when used to a large dataset, its deficiencies are exposed, resulting in slow training times, large memory consumption, and inefficient execution. Learning features. Deep neural networks learn features, which are referred to as learning features. Physical violence detection based on deep learning has made significant progress because of the increase in computing power brought on by GPUs and the gathering of large-scale training sets. Two-stream ConvNets were created <ns0:ref type='bibr'>(Wang et. al., 2021)</ns0:ref> and comprise spatial and temporal nets that use the ImageNet dataset <ns0:ref type='bibr' target='#b13'>(Chen &amp; Hauptmann, 2009)</ns0:ref> for pre-training and optical flow to explicitly capture motion information. <ns0:ref type='bibr'>Tran et al. used 3D ConvNets (De Souza, 2017)</ns0:ref> trained on a large-scale supervised dataset to learn both appearance and motion characteristics. Zhang et al. recently used long-range temporal structure (LTC) neural networks to train a movie and found that LTC-CNN models with increasing temporal extents enhanced action identification accuracy <ns0:ref type='bibr'>(Zhang et al., 2019)</ns0:ref>. However, because of the computational complexity, these techniques are restricted to a video frame rate of no more than 120 frames. The temporal segment network <ns0:ref type='bibr' target='#b73'>(Shao et. al., 2017)</ns0:ref> used a sparse temporal sampling approach with video-level supervision to learn valid information from the whole action video, attaining state-of-the-art performance on the two difficult datasets HMDB51 (69.4 percent) and UCF101 (94.2 percent).</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Histogram of Oriented Gradients (HOG)</ns0:head><ns0:p>HOGs are a feature descriptor for object identification and localization that can compete with DNN's performance <ns0:ref type='bibr' target='#b17'>(Dalal &amp; B. Triggs, 2005)</ns0:ref>. The gradient direction distribution is utilized as a feature in HOG. 
Because the brightness of corners and edges vary greatly, calculating the gradient together with the directions may assist in the detection of this knowledge from the images.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>Histogram of Optical Flow (HOF)</ns0:head><ns0:p>A pattern of apparent motion of objects, surfaces, and edges is produced as a result of the relative motion between an observer and a scene. This process is called Optical Flow. The histogram of oriented optical flow (HOF) <ns0:ref type='bibr' target='#b18'>(Dalal et. al., 2006)</ns0:ref> is an optical flow characteristic that depicts the series of events at each point in time. It is scale-invariant and unaffected by motion direction.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>SPACE -Time Interest Points</ns0:head><ns0:p>Laptev and Lindeberg and Laptev proposed the space-temporal interest point detector by expanding the Harris detector. A second-moment matrix is generated for each spatiotemporal interest point after removing points with high gradient magnitude using a 3D Harris corner detector <ns0:ref type='bibr' target='#b35'>(Laptev and Lindeberg, 2004;</ns0:ref><ns0:ref type='bibr' target='#b34'>Laptev, 2005)</ns0:ref>. This descriptor's characteristics are used to describe the spatiotemporal, local motion, and appearance information in volumes. Space-Time Interest Points (STIP) is a space-time extension of the Harris corner detection operator. The measured interest spots have a significant degree of intensity fluctuation in space and non-constant mobility in time. These important sites may be found on a variety of geographical and temporal scales. Then, for 3D video patches in the vicinity of the recognized STIPs, HOG, HOF, and a combination of HOG and HOF called HNF feature vectors are retrieved. These characteristics may be utilized to recognize motion events with high accuracy, and they are resistant to changes in pattern size, frequency, and velocity.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.4'>MoSIFT</ns0:head><ns0:p>MoSIFT <ns0:ref type='bibr' target='#b13'>(Chen &amp; Hauptmann, 2009</ns0:ref>) is an extension of the popular SIFT <ns0:ref type='bibr' target='#b42'>(Lowe, 2004)</ns0:ref> image descriptor for video. The standard SIFT extracts histograms of oriented gradients in the image. The 256-dimensional MoSIFT descriptor consists of two portions: a standard SIFT image descriptor and an analogous HOF, which represents local motion. These descriptors are extracted only from regions of the image with sufficient motion. The MoSIFT descriptor has shown better performance in recognition accuracy than other state-of-the-art descriptors <ns0:ref type='bibr'>(Chen and Hauptmann, 2020)</ns0:ref> but the approach is significantly more computationally expensive than STIP.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.5'>Violence Flow Descriptor</ns0:head><ns0:p>The violence flow, which utilizes the frequencies of discrete values in a vectorized form, is an essential feature descriptor. This is different from other descriptors in that instead of assessing magnitudes of temporal information, the magnitudes are compared for each, resulting in much more meaningful measurements in terms of the previous frame <ns0:ref type='bibr'>(Zhang et. al., 2019)</ns0:ref>. 
<ns0:div><ns0:head n='5.6'>Bag-of-Words (BoW)</ns0:head><ns0:p>The Bag-of-Words (BoW) method, which originated in the text retrieval community <ns0:ref type='bibr'>(Laptev, 2004)</ns0:ref>, has lately gained popularity for image <ns0:ref type='bibr' target='#b37'>(Lewis, 1998)</ns0:ref> and video comprehension <ns0:ref type='bibr' target='#b16'>(Csurka et. al., 2004)</ns0:ref>. Each video sequence is represented as a histogram over a collection of visual words in this method, which results in a fixed-dimensional encoding that can be analyzed with a conventional classifier. The cluster centers produced via k-means clustering across a large collection of sample low-level descriptors usually serve as the lexicon of visual words learned in a training phase <ns0:ref type='bibr' target='#b41'>(Lopes et. al., 2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.7'>Motion boundary histograms</ns0:head><ns0:p>By measuring derivatives independently for the horizontal and vertical components of the optical flow, <ns0:ref type='bibr'>Dalal et al. developed the MBH descriptor (Dalal et. al., 2006)</ns0:ref> for human detection. The relative motion between pixels is encoded by the descriptor. Because MBH depicts the gradient of the optical flow, information regarding changes in the flow field (i.e. motion boundaries) is preserved while locally constant camera motion is eliminated. MBH is more resistant to camera motion than optical flow, making it better at action detection.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.8'>Vector of Locally Aggregated Descriptors</ns0:head><ns0:p>Soltanian et al. presented a state-of-the-art VLAD descriptor <ns0:ref type='bibr' target='#b76'>(Soltanian et. al., 2019)</ns0:ref>. VLAD differs from the BoW image descriptor in that it records the differences between the cluster centers and the descriptors assigned to them, rather than merely the number of SIFTs assigned to each cluster. It inherits parts of the original SIFT descriptor's invariances, such as in-plane rotational invariance, and is tolerant to additional changes like image scaling and clipping. VLAD retrieval systems typically do not utilize the original local descriptors, which is another variation from the conventional BoW method. These are employed in BoW systems for spatial verification and reranking <ns0:ref type='bibr' target='#b26'>(J&#233;gou et. al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b24'>Jegou et. al., 2008)</ns0:ref>, but for extremely big image datasets, they need too much storage to be kept in memory on a single machine. VLAD is comparable to the earlier Fisher vectors <ns0:ref type='bibr' target='#b60'>(Philbin et. al., 2007)</ns0:ref> in that they both store features of the SIFT distribution assigned to a cluster center. VLAD is built from regions extracted from an image using an affine-invariant detector and characterized with the 128-D SIFT descriptor. Each descriptor is then assigned to the nearest cluster of a vocabulary of size k (where k is typically 64 or 256 so that clusters are quite coarse). The residuals (vector differences between descriptors and cluster centers) are accumulated for each of the k clusters, and the k 128-D sums of residuals are concatenated into a single (k x 128)-D descriptor <ns0:ref type='bibr' target='#b60'>(Philbin et. al., 2007)</ns0:ref>. VLAD is comparable to other residual descriptors such as Fisher vectors <ns0:ref type='bibr' target='#b4'>(Arandjelovic &amp; Zisserman, 2013)</ns0:ref> and super-vector coding <ns0:ref type='bibr' target='#b57'>(Perronnin &amp; Dance, 2007)</ns0:ref>. A minimal sketch of the VLAD aggregation step is given below.</ns0:p></ns0:div>
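The following sketch (our illustration) shows the VLAD aggregation step using scikit-learn's k-means; the random arrays merely stand in for real 128-D local descriptors (e.g., SIFT) extracted from training videos and from one query image.

import numpy as np
from sklearn.cluster import KMeans

train_desc = np.random.rand(5000, 128)   # placeholder for local descriptors from training data
query_desc = np.random.rand(300, 128)    # placeholder for descriptors of one image/clip

k = 64                                   # coarse visual vocabulary size
kmeans = KMeans(n_clusters=k, n_init=10).fit(train_desc)
centers = kmeans.cluster_centers_

# VLAD: sum the residuals (descriptor - assigned center) per cluster, then concatenate
assignments = kmeans.predict(query_desc)
vlad = np.zeros((k, query_desc.shape[1]))
for i in range(k):
    members = query_desc[assignments == i]
    if len(members):
        vlad[i] = (members - centers[i]).sum(axis=0)

vlad = vlad.flatten()
vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))   # power normalization
vlad = vlad / (np.linalg.norm(vlad) + 1e-12)   # L2 normalization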
<ns0:div><ns0:head n='5.9'>MoBSIFT</ns0:head><ns0:p>The MoBSIFT descriptor is a combination of the MoSIFT and MBH descriptors. The two main stages of the MoBSIFT method are interest point identification and feature description. Once interest points are detected, the video is reduced to a small set of interest points, and a feature description is computed locally around them. By combining the MBH <ns0:ref type='bibr' target='#b18'>(Dalal, et. al., 2006)</ns0:ref> with the movement filtering technique, the MoSIFT descriptor <ns0:ref type='bibr' target='#b13'>(Chen and Hauptmann, 2009)</ns0:ref> is improved in both accuracy and computational complexity. Camera motion is a significant issue for any system since motion data is considered an essential signal in action detection. MBH is thought to be a useful feature for avoiding the effects of camera motion.</ns0:p><ns0:p>It is suggested that movement filtering be used to decrease complexity by excluding most non-violent videos from the expensive feature extraction <ns0:ref type='bibr' target='#b125'>(Zhou et. al., 2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>DATASETS</ns0:head><ns0:p>The real-world datasets that are utilized to evaluate the proposed violence detection methods are described in this section. The specifics of all datasets related to violence detection are summarized in Table <ns0:ref type='table' target='#tab_3'>7</ns0:ref>. </ns0:p></ns0:div> <ns0:div><ns0:head n='7.'>EVALUATION PARAMETERS</ns0:head><ns0:p>In this section, we describe the evaluation parameters that are used to test the performance of physical violence detection systems. To evaluate the effectiveness of classification algorithms, the following indicators are usually used: accuracy, completeness (also called True Positive Rate, TPR), and F-measure. After classification, it is possible to obtain four types of results: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Let us explain the meaning of these terms. True Positive: its value represents the number of violent instances that have been correctly classified as violent. False Negative: its value represents the number of violent videos that have been misclassified as normal. False Positive: its value represents the number of normal instances that have been misclassified as violent. True Negative: its value represents the number of normal instances that have been correctly classified as normal. Then, to estimate the accuracy, the following definition and formula are used.</ns0:p></ns0:div> <ns0:div><ns0:head>Accuracy</ns0:head><ns0:p>Accuracy is the ratio of the number of correct predictions to the total number of test samples. It tells us whether a model is being trained correctly and how it may perform in general. Nevertheless, it works well only if each class has a roughly equal number of samples.</ns0:p><ns0:formula xml:id='formula_0'>Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)</ns0:formula></ns0:div> <ns0:div><ns0:head>Precision</ns0:head><ns0:p>Precision tells how often a prediction is correct when the model predicts positive.
So precision measures the portion of positive identifications in a prediction set that were actually correct.</ns0:p><ns0:formula xml:id='formula_1'>Precision = TP / (TP + FP) (2)</ns0:formula></ns0:div> <ns0:div><ns0:head>Recall</ns0:head><ns0:p>Recall is the number of correct positive results divided by the number of all relevant samples, i.e., Recall = TP / (TP + FN), so recall represents the proportion of actual positives that were identified correctly. </ns0:p></ns0:div> <ns0:div><ns0:head>AUC-ROC</ns0:head><ns0:p>When predicting probabilities, the greater the true positive rate (TPR) we can obtain at a lower false positive rate (FPR), the better the quality of the classifier. Therefore, we can introduce the following metric that evaluates the quality of a classifier that outputs the probability of an object belonging to the positive class:</ns0:p><ns0:formula xml:id='formula_2'>AUC = ∫₀¹ TPR dFPR (5)</ns0:formula><ns0:p>This value is the area under the ROC curve; AUC takes values in [0, 1]. The ROC curve is a graphical tool for evaluating the accuracy of binary classification models. It allows one to find the optimal balance between sensitivity and specificity of the model, which corresponds to the point of the ROC curve closest to the coordinate (0, 1), at which sensitivity and specificity are equal to 1 and there are no false-positive and false-negative classifications.</ns0:p></ns0:div> <ns0:div><ns0:head>False alarm</ns0:head><ns0:p>False Alarm (FA): the proportion of normal (negative) samples that are incorrectly classified as violent.</ns0:p><ns0:formula xml:id='formula_4'>FA = FP / (TN + FP) (6)</ns0:formula></ns0:div> <ns0:div><ns0:head>Missing alarm</ns0:head><ns0:p>Missing Alarm (MA): the proportion of violent (positive) samples that are missed, i.e., misclassified as normal.</ns0:p><ns0:formula xml:id='formula_5'>MA = FN / (TP + FN) (7)</ns0:formula><ns0:p>A minimal sketch of computing these evaluation measures from predicted and true labels is given below.</ns0:p></ns0:div>
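The following sketch (our illustration, with made-up labels and scores, and assuming scikit-learn is available) shows how the four confusion-matrix counts and the metrics defined above can be computed.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth (1 = violent, 0 = normal) and classifier scores
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)           # Eq. (1)
precision = tp / (tp + fp)                           # Eq. (2)
recall = tp / (tp + fn)                              # true positive rate
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall
auc = roc_auc_score(y_true, y_score)                 # Eq. (5), area under the ROC curve
false_alarm = fp / (tn + fp)                         # Eq. (6)
missing_alarm = fn / (tp + fn)                       # Eq. (7)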
<ns0:div><ns0:head n='8.'>CHALLENGES TO VIOLENCE DETECTION IN VIDEO</ns0:head><ns0:p>Detecting aggressive conduct is difficult due to the many challenges faced when capturing moving people. The major problems that need to be addressed are discussed below.</ns0:p></ns0:div> <ns0:div><ns0:head n='8.1'>Dynamic Illumination Variations</ns0:head><ns0:p>When studying images with shifting light, which is a prevalent feature of realistic surroundings, tracking becomes problematic. When collecting video at night, outdoor CCTV cameras are exposed to environmental lighting variations, which might result in low contrast, making content interpretation complicated. <ns0:ref type='bibr'>Zhou,</ns0:ref><ns0:ref type='bibr'>et al. [43]</ns0:ref> showed improved robustness to light fluctuation. The authors employed the LHOG descriptor, which is derived from a cluster of nodes, with LHOG features computed from the color space. In this research, two ways to deal with light variation were utilized: first, the authors cut the block pitch in half, resulting in a half-block overlap; the normalization process is then carried out per LHOG block. The motion magnitude images are used to derive the LHOF, which captures temporal information. Furthermore, the adaptive background subtraction technique [2] provides a dependable way of dealing with illumination variations, as well as with repeated and long-term scenario alterations.</ns0:p></ns0:div> <ns0:div><ns0:head n='8.2'>Motion Blur</ns0:head><ns0:p>This is a difficult challenge for optical flow-based motion estimation: parts of the human body such as the head, arms, legs, elbows, and shoulders are natural feature points that create a distinctive abstraction of different stances. <ns0:ref type='bibr'>Deniz et al. proposed</ns0:ref> an approach for monitoring high accelerations that does not require tracking <ns0:ref type='bibr'>[3]</ns0:ref>. According to the researchers, extreme acceleration causes visual blur, making tracking less accurate or impossible. Camera movement may, in fact, produce image blur. They used a deconvolution pre-processing step to get rid of it. To deduce the global motion between each pair of successive frames, the phase correlation approach is applied first. If global motion is identified, the predicted direction and length of the displacement are utilized to create a point spread function and deconvolve the next frame using the Lucy-Richardson iterative deconvolution technique.</ns0:p></ns0:div> <ns0:div><ns0:head n='8.3'>Presence of a Non-stationary Background</ns0:head><ns0:p>Since low-resolution videos often feature background movement caused by camera movement or changes in illuminance, noise correction is required. Whereas the amplitude of the optical flow vector is a very powerful signal for detecting the degree of movement, and the flow direction may offer additional motion information, [8] uses the optical flow technique for motion analysis. The optical flow between each pair of adjacent frames is computed using a background-motion-resilient technique. The backdrop normally moves at a consistent pace in response to the camera movement, whereas human actions are more prone to move in irregular patterns. This enables noise from background movement to be filtered. A 3x3 Gaussian kernel is used to minimize noise, histogram equalization is used to disperse pixel intensities across a greater contrast range, and background subtraction utilizing a Mixture of Gaussians is used to exclude items not linked to the actors; a minimal sketch of such a pipeline is given below. The overwhelming background components may obstruct action recognition performance. To address this problem, we must consider both localizing action samples and reducing the effect of the background video. Because of the camera movement, <ns0:ref type='bibr'>Wang et al. [34]</ns0:ref> noted a significant degree of horizontal movement in the backdrop. They proposed adding warped optical flow as an input modality, inspired by enhanced dense trajectories <ns0:ref type='bibr'>[32]</ns0:ref>. By estimating the homography matrix and accounting for camera motion, they were able to extract these flows. This helps to concentrate on the performer by removing the background motion. The temporal segment network architecture receives additional aggregating capabilities in <ns0:ref type='bibr'>[33]</ns0:ref>. This is a good way to emphasize crucial pieces while reducing ambient noise.</ns0:p></ns0:div>
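The following sketch (our illustration, with a hypothetical file name) shows a simple OpenCV pipeline of the kind described above: Gaussian smoothing, histogram equalization, and Mixture-of-Gaussians background subtraction to suppress background pixels before motion features are extracted.

import cv2

cap = cv2.VideoCapture("street_camera.avi")
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                             detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)          # 3x3 Gaussian kernel to reduce noise
    gray = cv2.equalizeHist(gray)                     # spread intensities over a wider range
    mask = backsub.apply(gray)                        # foreground mask from the MoG model
    moving = cv2.bitwise_and(gray, gray, mask=mask)   # keep only moving (actor-related) pixels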
<ns0:div><ns0:head n='8.4'>Non-professionally produced content</ns0:head><ns0:p>The volume of non-professionally generated material has expanded dramatically in recent years. People can take images and videos everywhere and at any time owing to the widespread use of cameras and cellphones. Social media and online forums are used to disseminate the material. Only a small amount of research has been done to explore the scene structure of non-professionally generated material, despite the fact that journalists often use amateur content, particularly in news coverage on TV channels and news sites on the Internet, for example, if no professional team was present at an incident or if one was sent but missed a potentially interesting event. We see possibilities for a new discipline of social media-focused video scene identification algorithms. Videos published on social media sharing sites are typically brief, contain handheld camera movements, and the quality of two recordings of the same scene might vary greatly. The new problem is not recognizing scenes in these short clips, but rather identifying a scene (situation) that occurred in real life among a large number of videos on a social media sharing platform that incorporates footage from many sources and of varying quality. By mixing footage from different individuals, a scene may be displayed from several perspectives, exposing more information and thus providing a better experience for the viewer. <ns0:ref type='bibr'>Chu et al. [14,</ns0:ref><ns0:ref type='bibr'>15] and Del Fabro et al. [18,</ns0:ref><ns0:ref type='bibr'>19]</ns0:ref> both suggest interesting ideas that are connected to this problem.</ns0:p></ns0:div> <ns0:div><ns0:head n='8.5'>Few publicly available datasets</ns0:head><ns0:p>Because this is a relatively new study topic, there are far fewer publicly accessible datasets for violence detection in video. Furthermore, the data imbalance between positive and negative samples hinders the training of supervised models. The lack of ground truth data as well as the ambiguous character of the anomalies makes it difficult to develop end-to-end trainable deep learning models <ns0:ref type='bibr'>(Lloyd et. al., 2018)</ns0:ref>. The modeling is further confounded by the large variation within positive instances (anomalous occurrences may include a wide range of distinct instances, despite the fact that most training data is restricted) <ns0:ref type='bibr' target='#b126'>(Zhu et. al., 2018)</ns0:ref>. As a result, there is a need for appropriate standards to assess the methods employed for violence detection in videos <ns0:ref type='bibr' target='#b15'>(Constantin et. al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='8.6'>Computational and time cost</ns0:head><ns0:p>In general, the feature representation phase of video violence detection, which is both computationally expensive and time-consuming, is a significant barrier to the deployment of violence detection in real applications <ns0:ref type='bibr' target='#b15'>(Constantin et. al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b58'>Popoola &amp; Wang, 2012)</ns0:ref>. As a result, the majority of current algorithms for detecting violence in videos have significant time and space complexity costs, which makes them unsuitable for practical use <ns0:ref type='bibr'>(Vu et. al., 2020)</ns0:ref>. Denser and deeper DNN models are therefore required for improved feature extraction and description. In addition, simpler and more effective methods for detecting violence are required.
However, the high-dimensional structure coupled with non-local changes between frames increases the complexity of the algorithms used to identify video abnormalities.</ns0:p></ns0:div> <ns0:div><ns0:head n='9.'>DISCUSSION</ns0:head><ns0:p>Despite considerable advancements in the video-based physical violence detection area, certain restrictions remain, making it more complex and demanding. In reality, selecting the features that characterize a moving object is a challenging task since it has a major impact on the description and analysis of the behavior. It is problematic, for instance, to describe the action when the scene's backdrop changes often or when new items appear unexpectedly in the scene. Furthermore, the appearance of the moving object may be affected by a variety of variables, including clothes (dress, suit, footwear, etc.) and scene location (outdoor/indoor, etc.). In order to collect meaningful information regarding an object's behavior, features that are resistant to scene modifications (rotation, occlusion, blurring, cluttered backdrops, etc.) and less susceptible to changes in the object's appearance must be used. Furthermore, most algorithms for detecting violence assume that the moving object is located in front of the camera. In fact, though, the point of view is arbitrary. Some studies utilize several cameras to record various perspectives of the moving object and then combine them to circumvent this constraint. Even if such methods are efficient and provide excellent results, they are complex, time-consuming, and unsuitable for practical uses. On the other hand, depending on the context in which the action is done, as well as the time and location of the action, the observed behavior may have many interpretations. Hugging, for example, is a common everyday activity for most people, whereas hitting, kicking, or using fighting techniques is considered aberrant conduct that must trigger an alarm. To address the constraints stated above, the suggested methods utilize a massive quantity of training data that covers all admissible situations. To cope with huge amounts of data, cloud computing has become popular, since it enables sophisticated algorithms, such as deep learning, to operate efficiently on larger datasets. In reality, because of their deep structures, the usage of deep learning techniques has exploded in order to achieve considerably more learning capacity. Among the 80 reviewed papers, 4 articles (5%) are classified as review articles <ns0:ref type='bibr' target='#b74'>(Shidik et. al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b61'>Ramzan et. al., 2019;</ns0:ref><ns0:ref type='bibr'>K. Pawar and V. Attar, 2018;</ns0:ref><ns0:ref type='bibr' target='#b82'>Sreenu and Durai, 2019)</ns0:ref>. The first of these reviews covers 220 papers published between 2010 and 2019. It reviews methods, frameworks, and techniques in video-based intelligent violence detection systems. While it asserts that its primary contribution is a thorough overview of the state of the art of intelligent video surveillance, several significant shortcomings should be highlighted: its main focus is directed to video surveillance applications such as protection, object detection, activity recognition, traffic control, and disaster and accident monitoring, which are far from the specific area of violence detection.
Next review <ns0:ref type='bibr' target='#b61'>(Ramzan et. al., 2019)</ns0:ref> explores violence detection techniques using machine learning methods. The review is not systematic as it was written in a free form. Authors classify videobased violence detection into three categories as machine learning based, SVM based and convolutional neural network based violence detection techniques. These strategies are discussed in depth, as well as their advantages and disadvantages. Additionally, thorough tables detail the datasets and video attributes that are employed in all procedures and play a critical part in the violence detection process. This article aims at studying and analyzing deep learning techniques for video-based anomalous activity detection. As outcome of the study, the graphical taxonomy has been put forth based on kinds of anomalies, level of anomaly detection, and anomaly measurement for anomalous activity detection. The focus has been given on various anomaly detection frameworks having deep learning techniques as their core methodology. Deep learning approaches from both the perspectives of accuracy oriented anomaly detection and real-time processing oriented abnormality detection are compared. This study also sheds light upon research issues and challenges, application domains, benchmarked dataset and future directions in the domain of deep learning based anomaly detection (K. Pawar and V. Attar, 2018). The next section of the evaluation delves further into object identification, action recognition, crowd analysis, and ultimately violence detection in a crowd setting <ns0:ref type='bibr' target='#b82'>(Sreenu and Durai, 2019)</ns0:ref>. The article superficially examines research in the indicated areas without a deep disclosure of the methods used. Authors present four main approaches as a classification of abnormal behavior: Hiden Markov Model, Gaussian Mixture Model (GMM), optical flow, and STT. Also, 13 datasets for the violence detection were shown. Nevertheless, as the purpose of the paper is related to different domains, the presented datasets also dedicated to different applications of deep learning in video surveillance. There are only four popular datasets in violence detection as CAVIAR, BEHAVE, Movie, and Hockey dataset. The rest are not directly related to the detection of violence. Therefore, to the authors' knowledge, in the present literature, there is still a lack of a more formal and objective systematic review that is specifically focused on 'Violence Detection Techniques in Video Surveillance Security Systems' and analyses it from multiple perspectives. The contributions that are made in this work, as mentioned in Section I, can be used to address this issue.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>With the increasing development of surveillance cameras in many areas of life to watch human behavior, the need for systems that automatically identify violent occurrences increases proportionally. Violent action detection has become a prominent subject in computer vision attracting new researchers. Many academics have suggested various methods for detecting such actions in videos. The primary aim of this systematic review is to examine the most recent studies in the field of violence detection. The various types of video violence detection techniques, which perform using machine learning, SVM, and deep learning were examined in this study. First, we looked at the most common techniques for extracting and describing features. 
Furthermore, all datasets and video characteristics were utilized in all techniques, as well as those that play a critical part in the identification process are documented in thorough tables. The accuracy of object identification, feature extraction, and classification methods, as well as the dataset utilized, are all factors. Following that, we gave a thorough review of descriptors for violence detection. In addition, we discussed the most difficult datasets and assessment criteria for video violence detection methods. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>&#61623;</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022) Manuscript to be reviewed Computer Science &#61623; Study of ranking and importance of video feature descriptors for detecting violence in video. Exploration of datasets and evaluation criteria for violence detection in video. &#61623; Discussion of limitations, challenges, and open issues of the video-based violence detection problem.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Systematic literature review flowchart.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3. Distribution of violence detection methods</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Fundamental stages of video-based violence detection.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022) F1 Score is a measure of the test's accuracy It is the Harmonic Mean between precision and recall. The value of the F1 Score can be between 0 and 1. When the F1 score is equal to 1, the model is considered to work perfectly.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Intelligence video surveillance was considered superficially without delving into the essence of the methods used. Moreover, the paper included only journal papers into their study that excludes state-of-the-art researches of high value conferences as ICCV [], ECCV[], ICML [], CVPR []. Thirdly, main accent PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Finally, we discussed challenges, open issues, and future directions for violence detection in video. Our research may help to highlight the strategies and procedures for detecting violent behavior from surveillance videos. Demarty, C. H., Ionescu, B., Jiang, Y. G., Quang, V. L., Schedl, M., &amp; Penet, C. (2014, June). Benchmarking violent scenes detection in movies. In 2014 12th International Workshop on Content/doi.org/10.1109/CBMI.2014.6849827 Ding, C., Fan, S., Zhu, M., Feng, W., &amp; Jia, B. (2014, December). Violence detection in video by using 3D convolutional neural networks. In International Symposium on Visual Computing (pp. 551-558). Springer, Cham, DOI: https://doi.org/10.1007/978-3-319-14364-4_53 Enzweiler, M., Eigenstetter, A., Schiele, B., &amp; Gavrila, D. M. (2010, June). Multi-cue pedestrian classification with partial oc-clusion handling. In 2010 IEEE computer society conference on computer /dx.doi.org/10.1109%2FCVPR.2010.5540111 Febin, I. P., Jayasree, K., &amp; Joy, P. T. (2020). 
Violence detection in videos for an intelligent surveillance system using MoBSIFT and movement filtering algorithm. Pattern Analysis and Applications, 23(2), 611-623, DOI: https://doi.org/10.1007/s10044-019-00821-3 Fehrman, B., &amp; McGough, J. (2014, April). Handling occlusion with an inexpensive array of cameras. In 2014 Southwest Symposium on Image Analysis and Interpretation (pp. 105-108). IEEE, DOI: https://doi.org/10.1109/SSIAI.2014.6806040 Feng, J., Liang, Y., &amp; Li, L. (2021). Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpreta-bility. Computational Intelligence and Neuroscience, 2021, DOI: https://doi.org/10.1155/2021/7367870 Fenil, E., Manogaran, G., Vivekananda, G. N., Thanjaivadivel, T., Jeeva, S., &amp; Ahilan, A. (2019). Real time violence detection framework for football stadium comprising of big data analysis and deep learning through bidirectional LSTM. Computer Networks, 151, 191-200, DOI: https://doi.org/10.1016/j.comnet.2019.01http://groups.inf.ed.ac.uk/vision/CAVIAR/CAVIARDATA1/. Accessed 12 August 2021, DOI: https://doi.org/10.1109/TCSVT.2020.2991191 Fu, E. Y., Leong, H. V., Ngai, G., &amp; Chan, S. (2015, December). Automatic fight detection based on motion analysis. In 2015 IEEE International Symposium on Multimedia (ISM) (pp. 57-60). IEEE, DOI: https://doi.org/10.1109/ISM.2015.98 Gao, Y., Liu, H., Sun, X., Wang, C., &amp; Liu, Y. (2016). Violence detection using oriented violent flows. Image and vision com-puting, 48, 37-41, DOI: https://doi.org/10.1016/j.imavis.2016.01.006 Hara, K., Kataoka, H., &amp; Satoh, Y. (2017). Learning spatio-temporal features with 3d residual networks for action recognition. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 3154-3160). Hassner, T.; Itcher, Y.; Kliper-Gross, O. Violent flows: Real-time detection of violent crowd behavior. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Providence, RI, USA, 16-21 June 2012; pp. 1-6, DOI: https://doi.org/10.1109/CVPRW.2012.6239348</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 Fundamental</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,327.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 . Research questions and their motivations.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 . Violence detection techniques that use machine learning.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>that has 370 samples.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 7 . Datasets for Violence Detection in Video.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 . 
Violence detection techniques that use machine learning.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Graci a et. al., 2015</ns0:cell><ns0:cell>Motion Blob acceleration measure vector method for detection of fast fighting from video</ns0:cell><ns0:cell>Ellipse detection method</ns0:cell><ns0:cell>An algorithm to find the acceleratio n</ns0:cell><ns0:cell>Spatio-n temporal features use for classificatio</ns0:cell><ns0:cell>Both crowded and less crowded</ns0:cell><ns0:cell>Accurac y about 90%</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhou et. al., 2018</ns0:cell><ns0:cell>FightNet for Violent Interaction Detection</ns0:cell><ns0:cell>Temporal Segment Network</ns0:cell><ns0:cell>Image acceleratio n</ns0:cell><ns0:cell>Softmax</ns0:cell><ns0:cell>Both crowded and uncrowde d</ns0:cell><ns0:cell>97% in Hockey, 100% in Movies dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>Ribeir o et. al., 2016</ns0:cell><ns0:cell>RIMOC method focuses on speed and direction of an object on the base of HOF</ns0:cell><ns0:cell>Covarianc e Matrix method STV based</ns0:cell><ns0:cell>Spatio-temporal vector method (STV)</ns0:cell><ns0:cell>STV uses supervised learning</ns0:cell><ns0:cell>Both crowded and uncrowde d</ns0:cell><ns0:cell>For normal situation 97% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>Yao et. al., 2021</ns0:cell><ns0:cell>Multiview fight detection method</ns0:cell><ns0:cell>YOLO-V3 network</ns0:cell><ns0:cell>Optical flow</ns0:cell><ns0:cell>Random Forest</ns0:cell><ns0:cell>Both crowded and uncrowde d</ns0:cell><ns0:cell>97.66% accuracy , 97.66 F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Two step</ns0:cell><ns0:cell>Vif object</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Lower</ns0:cell></ns0:row><ns0:row><ns0:cell>Arced a et. al., 2016</ns0:cell><ns0:cell>detection of violent and faces in video by using ViF descriptor and normalization</ns0:cell><ns0:cell>recognitio n CUDA method and KLT face</ns0:cell><ns0:cell>Horn shrunk method for Histogram</ns0:cell><ns0:cell>Interpolation classificatio n</ns0:cell><ns0:cell>Less crowded</ns0:cell><ns0:cell>frame rate 14% too high rate of 35% fs/s</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>algorithms.</ns0:cell><ns0:cell>detector</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>97%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wu et. al., 2020</ns0:cell><ns0:cell>HL-Net to simultaneously capture long-range relations relations and local distance</ns0:cell><ns0:cell>HLC approxima tor</ns0:cell><ns0:cell>CNN based model</ns0:cell><ns0:cell>Weak supervision</ns0:cell><ns0:cell>Both d scene Crowded uncrowde and</ns0:cell><ns0:cell>78.64%</ns0:cell></ns0:row><ns0:row><ns0:cell>Xie et. al., 2016</ns0:cell><ns0:cell>SVM method for theory frames recognition based on statistical</ns0:cell><ns0:cell>Vector method normalizat ion</ns0:cell><ns0:cell>Macro for features block technique</ns0:cell><ns0:cell>Region for video motion and descriptor</ns0:cell><ns0:cell>Crowded</ns0:cell><ns0:cell>96.1% accuracy</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 . 
Violence Detection Techniques Using SVM.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>A Video-Based</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>DT-SVM</ns0:cell><ns0:cell>Motion Co-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ye et. al., 2020</ns0:cell><ns0:cell>School Violence</ns0:cell><ns0:cell>occurrence Feature</ns0:cell><ns0:cell>Optical flow extraction</ns0:cell><ns0:cell>Crowded</ns0:cell><ns0:cell>97.6%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Detecting</ns0:cell><ns0:cell>(MCF)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Algorithm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>GMOF</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Zhang et. al., 2016</ns0:cell><ns0:cell>framework with tracking and detection</ns0:cell><ns0:cell>Gaussian Mixture model</ns0:cell><ns0:cell>OHFO for optical flow extraction</ns0:cell><ns0:cell>Crowded</ns0:cell><ns0:cell>82%-89% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>module</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gao et. al., 2016</ns0:cell><ns0:cell>Violence detection using Oriented ViF</ns0:cell><ns0:cell>Optical Flow method</ns0:cell><ns0:cell>Combination descriptor of ViF and OViF</ns0:cell><ns0:cell>Crowded</ns0:cell><ns0:cell>90%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>91.38%</ns0:cell></ns0:row><ns0:row><ns0:cell>Deepak et. al., 2020</ns0:cell><ns0:cell>Autocorrelation of gradients based violence detection</ns0:cell><ns0:cell>Motion boundary histograms</ns0:cell><ns0:cell>Frame based feature extraction</ns0:cell><ns0:cell>Crowded</ns0:cell><ns0:cell>accuracy in Crowd Violence; 90.40% in Hockey</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>dataset</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Framework</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>includes</ns0:cell><ns0:cell>Optical flow</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>preprocessing,</ns0:cell><ns0:cell>and temporal</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Al-Nawashi et. al., 2017</ns0:cell><ns0:cell>detection of activity and image retrieval. It identifies the abnormal event</ns0:cell><ns0:cell>difference for object detection method for CBIR</ns0:cell><ns0:cell>Gaussian function for analysis video future</ns0:cell><ns0:cell>Less crowded</ns0:cell><ns0:cell>97% accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>and image from</ns0:cell><ns0:cell>retrieving</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>data-based</ns0:cell><ns0:cell>images.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>images.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sparsity-Based</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>64.7% F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Kamoona et. 
al., 2019</ns0:cell><ns0:cell>Naive Bayes Approach for Anomaly Detection in</ns0:cell><ns0:cell>Sparsity-Based Na&#168;&#305;ve Bayes</ns0:cell><ns0:cell>C3D feature extraction</ns0:cell><ns0:cell>Both crowded and uncrowded</ns0:cell><ns0:cell>score; 52.1% precision; 85.3% recall in UCF</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Real</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>dataset</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65791:1:1:NEW 4 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Al-Farabi Kazakh National University, Almaty, Kazakhstan 050040, Al-Farabi, 71, Almaty, Kazakhstan January 29th, 2022 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Dr. Batyrkhan S. Omarov Associate Professor of Information Systems Department, Al-Farabi Kazakh National University On behalf of all authors. Reviewer 1 (Anonymous) Basic reporting no comment Experimental design no comment Validity of the findings no comment Additional comments This paper provided a survey by the assessment of video violence detection problems that occurs in state-of-the-arts for both qualitative and quantitative studies in terms of procedure, datasets, and performance indicators. Some challenges and future directions are also covered. The topic seems to be very interesting, however, I have some major concerns about the paper that will further enhance the paper quality and its body structure. 1. The wording style of the abstract is too sloppy and instead of actual contents description, the background studies is largely covered. Authors need to make the abstract more compact and representative for the whole paper contents. Answer: Dear Reviewer, thank you for your advice. We changed the abstract by removing actual contents description and including the paper contributions. 2. A reader gets confuse when face the sections without numbering that is an important aspect of paper body. Resolve this issue. Answer: By considering the recommendation of the reviewer, we numbered each section and subsection. 3. I did not find any novelty after going through the contribution lists that are already the practice of existing surveys such as methods coverage, their features extraction sets, datasets, etc. Authors are suggested to clearly highlight their contributions and mention why this survey is needed if there already exist several violence detection surveys. Answer: In this study, our goal was to review of state-of-the-art researches for violence detection problem. In this paper we reviewed the papers for articles for last 7 years. We put four research questions and tried to answer to those questions in our study. In addition, we overviewed other review papers in this domain and compared the proposed systematic literature review. 4. I did not find the any visual statistical information of year-wise violence detection papers distribution. Authors need to include the details of paper coverage from each year, present their taxonomy, broadly categorize them into machine learning or conventional techniques and deep learning by investigating each year for the category. Similarly, authors are suggested to present visual or tabular representation of working flow of the survey for the ease of readers. Next, I did not most recent violence detection literature that authors need to includes such as: An intelligent system for complex violence pattern analysis and detection. International Journal of Intelligent Systems. 2021 Jul 5 AI assisted Edge Vision for Violence Detection in IoT based Industrial Surveillance Networks. IEEE Transactions on Industrial Informatics. 2021 Sep 29 Violence detection and face recognition based on deep learning. Pattern Recognition Letters. 2021. Answer: We included Figure 2 and Figure 3 for visual illustration of the obtained results. 
Figure 2a presents year-wise distribution of violence detection papers, Figure 2b demonstrates violence detection papers by applied methods, Figure 3 illustrates distribution of methods that used in violence detection papers. Moreover, we included the suggested papers into our study. 5. Challenges and future directions are not well-structured as they are explained in wordy manners. It is suggested to explain each challenge in separate small section. Answer: By recommendation of the reviewer, we explained each challenge in a separate small division in Section 8. 6. Finally, I recommend to consider the English proficiency in terms of spelling and grammatical corrections. Answer: Yes, we acknowledge that we are not native English speakers. By considering a recommendation of the reviewer, our manuscript was proofread by local English correction center. Reviewer 2 (Pedro Palos-Sanchez) Basic reporting This systematic review provide a comprehensive assessment of the video violence detection problems that have been described in state-of-the-art researches. This paper deals with an interesting topic especially in context of video violence. Therefore, it requires revisions to improve the quality of work. 1. More suitable title should be selected for the article. Please use different terms in the 'Title' and the 'Keywords'. The abstract should be ordered by answering questions such as Originality of the manuscript, Objectives, Method (indicate how many papers you located in the different stages of the SLR and how many you were finally left with) and Finding. Answer: Dear Reviewer. Thank you very much for the feedback of our manuscript. We sincerely appreciate your effort and review. By considering the reviewer’s recommendation we used different terms in Title and Keywords. Moreover, we changed the Abstract of the systematic analysis by considering observer’s recommendations. Experimental design 2. I suggest a new Table with SLR and bibliometric analysis have been used in ``several previous research papers''. Answer: Dear Reviewer, we included Table 1 for explanation of Inclusion/Exclusion criteria of selected papers, in Figure 1 we outlined illustrated SLR flowchart, in Table 2 shown Research Questions of the review. 3. A flowchart should be added to the article to show the research methodology. In this sense, I propose to update the methodology as this paper does: B. Kitchenham and S. Charters, 'Guidelines for performing sandstematic literature reviews in software engineering version 2.3', Engineering, vol. 45, no. 5, pp. 1051, 2007. In this paper you will see that the flowchart in figure 2 presents the Number of articles in each stage after applying the inclusion and exclusion criteria. Abarca, V. M. G., Palos-Sanchez, P. R., & Rus-Arias, E. (2020). Working in virtual teams: a systematic literature review and a bibliometric analysis. IEEE Access, 8, 168923-168940. Answer: Dear Reviewer, we included flowchart of the SLR in Figure 1 by indicating number of articles in each stage after applying the inclusion and exclusion criteria. Validity of the findings 4. Consider the length of the conclusions. Answer: The conclusion has been changed. Now, the conclusion consists of 214 words. 5. Add DOI for all references. Answer: The proposed correction by the reviewer has been introduced. 6. It is suggested to include a SLR protocol and question research. Answer: We inserted four research questions for further research of the SLR. In addition, we included SLR protocol as a supplementary material. 7. 
It is suggested to compare the results of the present research with similar systematic literature reviews that were done before. Answer: In the Discussion section, we added an overview of previously published review papers and compared them with the proposed SLR. Additional comments The manuscript is of interest and examines a problem of great interest. However, the methodology applied needs to be improved. Answer: Thank you very much for your comments and suggestions. We attempted to revise everything by using the reviewers' comments and suggestions. As a result of these corrections and the revision, the paper has improved considerably. "
Here is a paper. Please give your review comments after reading it.
375
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Scientific data management plays a key role in the reproducibility of scientific results. To reproduce results, not only the results but also the data and steps of scientific experiments must be made findable, accessible, interoperable, and reusable. Tracking, managing, describing, and visualizing provenance helps in the understandability, reproducibility, and reuse of experiments for the scientific community. Current systems lack a link between the data, steps, and results from the computational and non-computational processes of an experiment. Such a link, however, is vital for the reproducibility of results. We present a novel solution for the end-to-end provenance management of scientific experiments. We provide a framework, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), which allows scientists to capture, manage, query and visualize the complete path of a scientific experiment consisting of computational and noncomputational data and steps in an interoperable way. CAESAR integrates the REPRODUCE-ME provenance model, extended from existing semantic web standards, to represent the whole picture of an experiment describing the path it took from its design to its result. ProvBook, an extension for Jupyter Notebooks, is developed and integrated into CAESAR to support computational reproducibility. We have applied and evaluated our contributions to a set of scientific experiments in microscopy research projects.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Reproducibility of results is vital in every field of science. The scientific community is interested in the results of experiments that are accessible, reproducible, and reusable. Recent surveys conducted among researchers show the existence of a problem in reproducing published results in different disciplines <ns0:ref type='bibr' target='#b6'>(Baker, 2016;</ns0:ref><ns0:ref type='bibr' target='#b62'>Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref>. Recently, there is a rapidly growing awareness in scientific disciplines on the importance of reproducibility. As a consequence, measures are being taken to make the data used in the publications FAIR (Findable, Accessible, Interoperable, Reusable) <ns0:ref type='bibr'>(Wilkinson et al., 2016)</ns0:ref>. However, this is too little too late: These measures are usually taken at the point in time when papers are being published. These measures do not include the management and description of several trials of an experiment, negative results from these trials, dependencies between the data and steps, etc. However, many challenges, particularly for data management, are faced by scientists much earlier in the scientific cycle. If they are not addressed properly and in a timely manner, they often make it impossible to provide truly FAIR data at the end. We, therefore, argue, that scientists need support from the very beginning of an experiment in handling the potentially large amounts of heterogeneous research data and its derivation.</ns0:p><ns0:p>A key factor to support scientific reproducibility is the provenance information that tells about the origin or history of the data. 
Recording and analysis of provenance data of a scientific experiment play a vital role in helping scientists know the methods and steps taken to generate an output and reproduce their own results or other scientists' results <ns0:ref type='bibr' target='#b64'>(Taylor &amp; Kuyatt, 1994)</ns0:ref>. In addition to the preservation of data and results, the datasets and the metadata need to be collected and organized in a structured way from the beginning of the experiments. At the same time, information should be represented and expressed in an interoperable way so that scientists can understand the data and results. Therefore, we need to start addressing this issue at the stage when the data is created. Thus, scientific research data management needs to start at an early stage of the research lifecycle to play a vital role in this context.</ns0:p><ns0:p>In this paper, we aim to provide end-to-end provenance capture and management of scientific experiments to support reproducibility. To define our aim, we define the main research question, which structures the remainder of the article: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? To address the research question, we aim to create a conceptual model using semantic web technologies to describe a complete path of a scientific experiment. We design and develop a provenance-based semantic framework to populate this model, collect information about the experimental data and results along with the settings, runs, and execution environment, and visualize them. The main contribution of this paper is the framework for the end-to-end provenance management of scientific experiments, called CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), integrated with ProvBook <ns0:ref type='bibr' target='#b61'>(Samuel &amp; K&#246;nig-Ries, 2018c</ns0:ref>) and the REPRODUCE-ME data model <ns0:ref type='bibr' target='#b55'>(Samuel, 2019a)</ns0:ref>. CAESAR supports computational reproducibility using our tool ProvBook, which is designed and developed to capture, store, compare and track the provenance of results of different executions of Jupyter Notebooks. The complete path of a scientific experiment interlinking the computational and non-computational data and steps is semantically represented using the REPRODUCE-ME data model.</ns0:p><ns0:p>In the following sections, we provide a detailed description of our findings. We start with an overview of the current state of the art ('Related Work'). We describe the experimental methodology used in the development of CAESAR ('Materials &amp; Methods'). In the 'Results' section, we describe CAESAR and its main modules. We describe the evaluation strategies and results in the 'Evaluation' section. In the 'Discussion' section, we discuss the implications of our results and the limitations of our approach. We conclude the article by highlighting our major findings in the 'Conclusion' section.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Scientific data management plays a key role in knowledge discovery, data integration, and reuse. Preservation of digital objects has long been studied in the digital preservation community.
Some works give more importance to software and business process conservation <ns0:ref type='bibr' target='#b40'>(Mayer et al., 2012)</ns0:ref>, while other works focus on scientific workflow preservation <ns0:ref type='bibr' target='#b7'>(Belhajjame et al., 2015)</ns0:ref>. We focus our approach more on the data management solutions for scientific data, including images. <ns0:ref type='bibr' target='#b20'>Eliceiri et al. (2012)</ns0:ref> provide a list of biological imaging software tools. BisQue is an open-source, server-based software system that can store, display and analyze images <ns0:ref type='bibr' target='#b35'>(Kvilekval et al., 2010)</ns0:ref>. OMERO, developed by the Open Microscopy Environment <ns0:ref type='bibr'>(OME)</ns0:ref>, is another open-source data management platform for imaging metadata primarily for experimental biology <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. It has a plugin architecture with a rich set of features, including analyzing and modifying images. It supports over 140 image file formats using BIO-Formats <ns0:ref type='bibr' target='#b37'>(Linkert et al., 2010)</ns0:ref>. OMERO and BisQue are the two closest solutions that meet our requirements in the context of scientific data management. A general approach to document experimental metadata is provided by the CEDAR workbench <ns0:ref type='bibr' target='#b23'>(Gonc &#184;alves et al., 2017)</ns0:ref>. It is a metadata repository with a web-based tool that helps users to create metadata templates and fill in the metadata using those templates. However, these systems do not directly provide the features to fully capture, represent and visualize the complete path of a scientific experiment and support computational reproducibility and semantic integration.</ns0:p><ns0:p>Several tools have been developed to capture complete computational workflows to support reproducibility in the context of scientific workflows, scripts, and computational notebooks. Scientific Workflows, which are a complex set of data processes and computations, are constructed with the help of a Scientific Workflow Management System (SWfMS) <ns0:ref type='bibr' target='#b19'>(Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b38'>Liu et al., 2015)</ns0:ref>. Different SWfMSs have been developed for different uses cases and domains <ns0:ref type='bibr' target='#b45'>(Oinn et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b1'>Altintas et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b19'>Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b63'>Scheidegger et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b22'>Goecks et al., 2010)</ns0:ref>. Most of the SWfMSs provide provenance support by capturing the history of workflow executions. These systems focus on the computational steps of an experiment and do not link the results to the experimental metadata. Despite the provenance modules present in these systems, there are currently many challenges in the context of reproducibility of scientific workflows <ns0:ref type='bibr' target='#b70'>(Zhao et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cohen-Boulakia et al., 2017)</ns0:ref>. Workflows created by different scientists are difficult for others to understand or re-run in a different environment, resulting in workflow decays <ns0:ref type='bibr' target='#b70'>(Zhao et al., 2012)</ns0:ref>. 
The lack of interoperability between scientific workflows and the steep learning curve required by scientists are some of the limitations according to the study of different SWfMSs <ns0:ref type='bibr' target='#b15'>(Cohen-Boulakia et al., 2017)</ns0:ref>. The Common Workflow Language <ns0:ref type='bibr' target='#b2'>(Amstutz et al., 2016)</ns0:ref> is an initiative to overcome the lack of interoperability of workflows. Though there is a learning curve associated with adopting workflow languages, this ongoing work aims to make computational methods reproducible, portable, maintainable, and shareable.</ns0:p><ns0:p>Many tools have been developed to capture the provenance of results from scripts at different levels of granularity <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b18'>Davison, 2012;</ns0:ref><ns0:ref type='bibr' target='#b44'>Murta et al., 2014;</ns0:ref><ns0:ref type='bibr'>McPhillips et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Burrito <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012</ns0:ref>) captures provenance at the operating system level and provides a user interface for documenting and annotating the provenance of non-computational processes. <ns0:ref type='bibr' target='#b13'>Carvalho et al. (Carvalho et al., 2016)</ns0:ref> present an approach to convert scripts into reproducible Workflow Research Objects. However, it is a complex process that requires the close involvement of scientists and curators with extensive knowledge of the workflow and script programming in every step of the conversion. The lack of documentation of computational experiments along with their results, and the limited ability to reuse parts of code, are some of the issues hindering reproducibility in script-based environments. In recent years, computational notebooks have gained widespread adoption because they enable computational reproducibility and allow users to share code along with documentation. Jupyter Notebook <ns0:ref type='bibr' target='#b33'>(Kluyver et al., 2016)</ns0:ref>, which was formerly known as the IPython notebook, is a widely used computational notebook that provides an interactive environment supporting over 100 programming languages with millions of users around the world. Even though it supports reproducible research, recent studies by <ns0:ref type='bibr' target='#b54'>Rule et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b50'>Pimentel et al. (2019)</ns0:ref> point out the need for provenance support in computational notebooks. Overwriting and re-execution of cells in any order can lead to the loss of results from previous trials. Some research works have attempted to track provenance from computational notebooks <ns0:ref type='bibr' target='#b27'>(Hoekstra, 2014;</ns0:ref><ns0:ref type='bibr' target='#b51'>Pimentel et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Carvalho et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b34'>Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Kery &amp; Myers, 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Petricek et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Head et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b66'>Wenskovitch et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b65'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Macke et al., 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b51'>Pimentel et al.
(2015)</ns0:ref> propose a mechanism to capture and analyze the provenance of Python scripts inside IPython Notebooks using noWorkflow <ns0:ref type='bibr' target='#b49'>(Pimentel et al., 2017)</ns0:ref>. PROV-O-Matic <ns0:ref type='bibr' target='#b27'>(Hoekstra, 2014)</ns0:ref> is another extension for earlier versions of IPython Notebooks that saves the provenance traces to a Linked Data file using PROV-O. In recent approaches, custom Jupyter kernels are developed to trace runtime user interactions and automatically manage the lineage of cell execution <ns0:ref type='bibr' target='#b34'>(Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Macke et al., 2021)</ns0:ref>. However, some of these approaches do not capture the execution history of computational notebooks, require changes to the code by the user, or are limited to Python scripts. In our approach, the provenance tracking feature is integrated within the notebook, so users do not need to change their scripts or learn a new tool. We also make the provenance information available in an interoperable way.</ns0:p><ns0:p>There is a gap in the current state-of-the-art systems: they do not interlink the data, the steps, and the results of both the computational and non-computational processes of a scientific experiment. We bridge this gap by developing a framework that captures provenance, provides semantic integration of experimental data, and supports computational reproducibility. Hence, it is important to extend the current tools and, at the same time, reuse their rich features to support the reproducibility and understandability of scientific experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Requirement Analysis</ns0:head><ns0:p>The prerequisite to developing an end-to-end provenance management platform arises from the requirements collected from interviews we conducted with scientists working in the Collaborative Research Center (CRC) ReceptorLight, as well as a workshop conducted to foster reproducible science (BEXIS2, 2017). These scientists come from different disciplines, including Biology, Computer Science, Ecology, and Chemistry. We also conducted an exploratory study to understand scientific experiments and the research practices of scientists related to reproducibility <ns0:ref type='bibr' target='#b62'>(Samuel &amp; König-Ries, 2021)</ns0:ref>. The detailed insights from these meetings and the survey helped us design, develop, and evaluate CAESAR. The platform was developed as part of the ReceptorLight project. Each module developed in CAESAR went through different stages: understanding requirements and use cases, designing the model, developing a prototype, testing and validating the prototype, and finally evaluating the work.</ns0:p><ns0:p>Several doctoral students and researchers from different disciplines were involved in each stage of the work as the end-users. We used the feedback received from the domain researchers at each phase to improve the framework.</ns0:p><ns0:p>We describe here a summary of the insights of the interviews on the research practices of scientists.</ns0:p><ns0:p>A scientific experiment consists of non-computational and computational data and steps. Computational data is generated from computational tools like computers, software, scripts, etc.
Activities in the laboratory like preparation of solutions, setting up the experimental execution environment, manual interviews, observations, etc., are examples of non-computational activities. Measures taken to reproduce a non-computational step are different than those for a computational step. The reproducibility of a non-computational step depends on various factors like the availability of experiment materials (e.g., animal cells or tissues) and instruments, the origin of the materials (e.g., distributor of the reagents), human and machine errors, etc. Hence, non-computational steps need to be described in sufficient detail for their reproducibility <ns0:ref type='bibr' target='#b30'>(Kaiser, 2015)</ns0:ref>.</ns0:p><ns0:p>The conventional way of recording the experiments in hand-written lab notebooks is still in use in biology and medicine. This creates a problem when researchers leave projects and join new projects. To understand the previous work conducted in a research project, all the information regarding the project, including previously conducted experiments along with the trials, analysis, and results, must be available to the new researchers. This information is also required when scientists are working on big collaborative projects. In their daily research work, a lot of data is generated and consumed through computational and non-computational steps of an experiment. Different entities like devices, procedures, protocols, settings, computational tools, and execution environment attributes are involved in experiments. Several people play various roles in different steps and processes of an experiment. The outputs of some noncomputational steps are used as inputs to the computational steps. Hence, an experiment must not only be linked to its results but also to different entities, people, activities, steps, and resources. Therefore, the complete path towards the results of an experiment must be shared and described in an interoperable manner to avoid conflicts in experimental outputs.</ns0:p><ns0:p>Design and Development Our aim was to design a provenance-based semantic framework for the end-to-end management of scientific experiments, including the computational and non-computational steps. To achieve our aim, we focused on the following modules: provenance capture, representation, management, comparison, and visualization. We used an iterative and layered approach in the design and development of CAESAR. We first investigated the existing frameworks that capture and store the experimental metadata and the data for the provenance capture module. We further narrowed down our search to imaging-based data management systems due to the extensive use of images and instruments in the experimental workflows in the ReceptorLight project. Based on our requirements, we selected OMERO as the underlying framework for the development of CAESAR. Very active development community ensuring a continued effort to improve the system, a faster release cycle, a well-documented API to write own tools, and the ability to extend the web interface with plugins provided additional benefits to OMERO. However, they lack in providing the provenance support of experimental data, including the computational processes, and also lack in semantically representing the experiments. 
We designed and developed CAESAR by extending OMERO to capture the provenance of scientific experiments.</ns0:p><ns0:p>For the provenance representation module, we use semantic web technologies to describe the heterogeneous experimental data in a machine-readable way and link them with other datasets on the web. We developed the REPRODUCE-ME data model and ontology by extending the existing web standards PROV-O <ns0:ref type='bibr' target='#b36'>(Lebo et al., 2013)</ns0:ref> and P-Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>. The REPRODUCE-ME Data Model is a generic data model for the representation of scientific experiments with their provenance information. An Experiment is the central point of the REPRODUCE-ME data model. The model consists of eight components: Data, Agent, Activity, Plan, Step, Setting, Instrument, and Material. The ontology is developed from the competency questions collected from the scientists in the requirement analysis phase <ns0:ref type='bibr' target='#b55'>(Samuel, 2019a)</ns0:ref>.</ns0:p><ns0:p>It extends PROV-O to represent all agents, activities, and entities involved in an experiment. It extends P-Plan to represent the steps, the input and output variables, and the complete path taken from an input to an output of an experiment. Using the REPRODUCE-ME ontology, we can describe and semantically query the information for scientific experiments, the input and output associated with an experiment, execution environment attributes, experiment materials, steps, the execution order of steps and activities, the agents involved and their roles, script/Jupyter Notebook executions, instruments and their settings, etc. The ontology also contains classes and properties from the OME data model <ns0:ref type='bibr' target='#b46'>(OME, 2016)</ns0:ref>, which describe the elements responsible for the image acquisition process in a microscope.</ns0:p><ns0:p>For the provenance management module, we use a PostgreSQL database and the Ontology-based Data Access (OBDA) approach <ns0:ref type='bibr' target='#b52'>(Poggi et al., 2008)</ns0:ref>. OBDA is an approach to access various data sources using ontologies and mappings. The details of the structure of the underlying data sources are isolated from the users using a high-level global schema provided by ontologies. It helps to efficiently access a large amount of data from different sources and avoid replication of data that is already available in relational databases. It also provides many high-quality services to domain scientists without requiring them to worry about the underlying technologies. There are different widely used applications involving large data sources that use OBDA <ns0:ref type='bibr' target='#b11'>(Brüggemann et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Kharlamov et al., 2017)</ns0:ref>. As the image metadata in OMERO and the experimental data in CAESAR are already stored in the PostgreSQL database, we investigated effective ways to represent the provenance information of scientific experiments without duplicating the data.</ns0:p><ns0:p>Based on this, we chose the OBDA approach to represent this data semantically and at the same time avoid replication of data.
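To give a concrete impression of such REPRODUCE-ME descriptions, the following minimal Python sketch builds a tiny provenance graph with rdflib for a hypothetical experiment. The namespace IRI used here for REPRODUCE-ME and all resource names are assumptions for illustration only; the class and property names follow the terms used in this article.

from rdflib import Graph, Literal, Namespace, RDF

# Assumed namespace IRIs; check the published REPRODUCE-ME ontology for the exact IRI.
REPR = Namespace("https://w3id.org/reproduceme#")
PROV = Namespace("http://www.w3.org/ns/prov#")
PPLAN = Namespace("http://purl.org/net/p-plan#")
EX = Namespace("http://example.org/caesar/")          # placeholder namespace for the data

g = Graph()
g.bind("repr", REPR)
g.bind("prov", PROV)
g.bind("p-plan", PPLAN)

experiment = EX["experiment-42"]                      # hypothetical experiment
g.add((experiment, RDF.type, REPR.Experiment))
g.add((experiment, PROV.wasAttributedTo, EX["alice"]))
g.add((EX["alice"], RDF.type, PROV.Agent))

step = EX["step-1"]                                   # one step of the experiment plan
g.add((step, RDF.type, PPLAN.Step))
g.add((step, PPLAN.isStepOfPlan, experiment))

setting = EX["laser-setting"]                         # an instrument setting
g.add((setting, RDF.type, REPR.Setting))
g.add((setting, RDF.value, Literal("488 nm, 5 mW")))

print(g.serialize(format="turtle"))

A graph like this can be serialized to Turtle and queried with SPARQL in the same way as the virtual RDF graph described next.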
To access the various databases in CAESAR, we used Ontop <ns0:ref type='bibr' target='#b12'>(Calvanese et al., 2017)</ns0:ref> for OBDA. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop's native mapping language. We used federation for the OMERO and ReceptorLight databases provided by the rdf4j SPARQL Endpoint 1 . We used the Protege plugin provided by Ontop to write the mappings. A virtual RDF graph is created in OBDA using the ontology with the mappings <ns0:ref type='bibr' target='#b12'>(Calvanese et al., 2017)</ns0:ref>. SPARQL, the standard query language in the semantic web community, is used to query the provenance graph. We used the approach where the RDF graphs are kept virtual and queried only during query execution. The virtual approach helps avoid the materialization cost and provides the benefits of the matured relational database systems. However, there are some limitations in this approach using Ontop due to unsupported functions and data type.</ns0:p><ns0:p>To support computational reproducibility in CAESAR, we focused on providing the management of the provenance of the computational parts of an experiment. Computational notebooks, which have gained widespread attention because of their support for reproducible research, motivated us to look into this direction. These notebooks, which are extensively used and openly available, provide various features to run and share the code and results. To provide users access to computational environment and resources, JupyterHub 2 is installed in CAESAR. JupyterHub provides a customizable and scalable way to serve Jupyter notebook for multiple users. In spite of the support for reproducible research, the provenance information of the execution of these notebooks was missing. To further support reproducibility in these notebooks, we developed ProvBook, an extension of Jupyter Notebooks, to capture the provenance information of their executions. The design of ProvBook is kept simple so that it can be used by researchers irrespective of their disciplines. We added the support to compare the differences in executions of the notebooks by different authors. We also extended the REPRODUCE-ME ontology to describe the computational experiments, including scripts and notebooks <ns0:ref type='bibr' target='#b59'>(Samuel &amp; K&#246;nig-Ries, 2018a)</ns0:ref>, which was missing in the current state of the art.</ns0:p><ns0:p>For the provenance visualization module, we focused on visualizing the complete path of an experiment by linking the non-computational and computational data and steps. Providing users with a complete picture of an experiment and tracking its provenance are our two goals in designing the visualization component in CAESAR. To do so, we integrated the REPRODUCE-ME ontology and ProvBook in CAESAR. We developed visualization modules to provide a complete story of an experiment starting from its design to its publication. The visualization module, Project Dashboard, provides a complete overview of all the experiments conducted in a research project <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. We later developed the ProvTrack module to track the provenance of individual scientific experiments. The underlying technologies are transparent to scientists based on these approaches. We followed a Model-View-Controller architecture pattern for the development of CAESAR. We implement the webclient in the Django-Python framework and the Dashboard in ReactJs. 
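As an illustration of querying such a virtual RDF graph at query time, the following sketch sends a small SPARQL query from Python using SPARQLWrapper. The endpoint URL and the REPRODUCE-ME namespace IRI are placeholders rather than the actual CAESAR configuration.

from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8080/rdf4j-server/repositories/caesar")  # placeholder URL
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX repr: <https://w3id.org/reproduceme#>
    PREFIX prov: <http://www.w3.org/ns/prov#>
    SELECT ?experiment ?agent WHERE {
        ?experiment a repr:Experiment ;
                    prov:wasAttributedTo ?agent .
    } LIMIT 10
""")
for binding in endpoint.query().convert()["results"]["bindings"]:
    print(binding["experiment"]["value"], binding["agent"]["value"])

Because the graph is kept virtual, Ontop translates such SPARQL queries into SQL over the underlying relational databases at execution time.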
Java is used to implement the new services extended by OMERO.server.</ns0:p><ns0:p>We use D3 JavaScript library <ns0:ref type='bibr'>(D3.js, 2021)</ns0:ref> for the rendering of provenance graphs in the ProvTrack.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We present CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), an end-to-end semantic-based provenance management platform. It is extended from OMERO <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. With the integration of the rich features provided by OMERO and our provenance-based extensions, CAESAR provides a platform to support scientists to describe, preserve and visualize their experimental data by providing the linking of the datasets with the experiments along with the execution environment and images. It provides extensive features, including the semantic representation of experimental data using the REPRODUCE-ME ontology and computational reproducibility support using ProvBook. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> 1 https://rdf4j.org/ depicts the architecture of CAESAR. We describe briefly the core modules of CAESAR required for the end-to-end provenance management.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Capture.</ns0:head><ns0:p>This module provides a metadata editor with a very rich set of features allowing the scientists to easily record all the data of the non-computational steps performed in their experiments as well as the protocols, the materials, etc. This metadata editor is a form-based provenance capture system. It provides the feature to document the experimental metadata and interlink with other experiment databases. An Experiment form is the key part of this system that documents all the information about an experiment. This data includes the temporal and spatial properties, the experiment's research context, and other general settings used in the experiment. The materials and other resources used in an experiment are added as new templates and linked to the experiment. The templates are added as a service as well as a database table in CAESAR and are also available as API, thus allowing the remote clients to use them.</ns0:p><ns0:p>The user and group management provided by OMERO is adopted in CAESAR to manage users in groups and provides roles for these users. The restriction and modification of data are managed using the roles and permissions that are assigned to the users belonging to a group. The data is made available between the users in the same group in the same CAESAR server. Members of other groups can share the data based on the group's permission level. A user can be assigned any of the role of Administrator, Group Owner or Group Member. An Administrator controls all the settings of the groups. A Group Owner has the right to add other members to the group. A Group Member is a standard user in the group. There are also various permission levels in the system. Private is the most restrictive permission level, thus providing the least collaboration level with other groups in the system. A private group owner can view and control the members and data of the members within a group. While a private group member can view and control only his/her data. The Read-only is an intermediate permission level. 
In addition to their group, the group owners can read and perform some annotations on members' data from other groups.</ns0:p><ns0:p>The group members don't have permission to annotate the datasets from other groups, thus providing CAESAR adopts this role and permission levels to control the access and modification of provenance information of experiments. A Principal Investigator (PI) can act as a group owner and students as group members in a private group. PIs can access students' stored data and decide which data can be used to share with other collaborative groups. A Read-only group can serve as a public repository where the original data and results for the publications are stored. A Read-annotate group is suitable for collaborative teams to work together for a publication or research. Every group member is trusted and given equal rights to view and access the data in a Read-write group, thus providing a very collaborative environment. This user and group management paves the way for collaboration among teams in research groups and institutes before the publication is made available online.</ns0:p><ns0:p>This module also allows scientists to interlink the dependencies of many materials, samples, input files, measurement files, images, standard operating procedures, and steps to an experiment. Users can also attach files, scripts, publications, or other resources to any steps in an experiment form. These resources can be annotated as an input to a step or intermediate result of a step. Another feature of this module is to help the scientists to reuse resources rather than do it from scratch, thus enabling a collaborative environment among teams and avoiding replicating the experimental data. This is possible by sharing the descriptions of the experiments, standard operating procedures, and materials with the team members within the research group. Scientists can reference the descriptions of the resources in their experiments.</ns0:p><ns0:p>Version management plays a key role in data provenance. In a collaborative environment, where the experimental data are shared among the team members, it is important to know the modifications made by the members of the system and track the history of the outcome of an experiment. This module provides version management of the experimental metadata by managing all the changes made in the description of the experimental data. The plugin allows users to view the version history and compare two different versions of an experiment description. The file management system in CAESAR stores all files and index them to the experiment, which is annotated as input data, measurement data, or other resources. The user can organize the input data, measurement data, or other resources in a hierarchical structure based on their experiments and measurements using this plugin.</ns0:p><ns0:p>CAESAR also provides a database of Standard Operating Procedures. These procedures in life-sciences provide a set of step-by-step instructions to carry out a complex routine. In this database, the users can store the protocols, procedures, scripts, or Jupyter Notebooks based on their experiments, which have multiple non-computational and computational steps. The users can also link these procedures to the step in an experiment where they were used. 
The plugin also provides users the facility to annotate the experimental data with terms from other ontologies like GO <ns0:ref type='bibr'>(Ashburner et al., 2000)</ns0:ref>, CMPO <ns0:ref type='bibr' target='#b29'>(Jupp et al., 2016)</ns0:ref>, etc. in addition to REPRODUCE-ME Ontology.</ns0:p><ns0:p>If a user is restricted to make modifications to other member's data due to permission level, the plugin provides a feature called Proposal to allow users to propose changes or suggestions to the experiment. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Plate, Screen, Experiment, Experimenter, ExperimenterGroup, Instrument, Image, StructuredAnnotations, and ROI. Each class provides a rich set of features, including how they are used in an experiment. The Experiment, which is a subclass of Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>, links all the provenance information of a scientific experiment together. The REPRODUCE-ME ontology is used to map the relational data in the OMERO and the ReceptorLight database using Ontop. The provenance information of computational notebooks is also semantically represented and is combined with other experimental metadata, thus providing the context of the results. This helps the scientists and machines to understand the experiments along with their context. There are around 800 mappings to create the virtual RDF graph. All the mappings are publicly available. We refer the authors to the publication for the complete documentation of the REPRODUCE-ME ontology <ns0:ref type='bibr' target='#b56'>(Samuel, 2019b)</ns0:ref> and the database schema is available in the Supplementary file.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Reproducibility</ns0:head><ns0:p>The introduction of computational notebooks, which allow scientists to share the code along with the documentation, is a step towards computational reproducibility. Scientists widely use Jupyter Notebooks to perform several tasks, including image processing and analysis. As the experimental data and images are contained in the CAESAR itself, another requirement is to provide a computational environment for scientists to include the scripts that analyze the data stored in the platform. To create a collaborative research environment for the scientists working with images and Jupyter Notebooks, JupyterHub is installed and integrated with CAESAR. This allows the scientists to directly access the images and datasets stored in CAESAR using the API and perform data analysis or processing on them using Jupyter Notebooks. The new images and datasets created in the Jupyter Notebooks can then be uploaded and linked to the original experiments to CAESAR using the APIs. To capture the provenance traces of the computational steps in CAESAR, we introduce ProvBook, an extension of Jupyter notebooks to provide provenance support <ns0:ref type='bibr' target='#b60'>(Samuel &amp; K&#246;nig-Ries, 2018b)</ns0:ref>. It is an easy-to-use framework for the scientists and developers for the efficient capture, comparison, and visualization of the provenance data of different executions of a notebook over time. To capture the provenance of computational steps and support computational reproducibility, ProvBook is installed in JupyterHub and integrated with CAESAR. We briefly describe the modules provided by ProvBook.</ns0:p><ns0:p>Capture, Management, and Representation. 
This module captures and stores the provenance of the execution of cells of Jupyter Notebooks over the course of time. A Jupyter Notebook, which is stored as a JSON file format, is a dictionary with the following keys: metadata and cells. The metadata is a dictionary that contains information about the notebook, its cells, and outputs. The cell contains information of all cells, including the source, the type of the cell, and its metadata. As Jupyter notebooks allow the addition of custom metadata to its content, the provenance information captured by the ProvBook is added to the metadata of each cell of the notebook in the JSON format. ProvBook captures the provenance information, which includes the start and end time of each execution, the total time it took to run the code cell, the source code, and the output obtained during that particular execution. The execution time for a computational task was added as part of the provenance metadata in a Notebook since it is important to check the performance of the task. The start and end time also act as an indicator of the execution order of the cells. Users can execute cells in any order, so adding the start and end time helps them check when a particular cell was last executed. The users can make any changes to the parameters and source code in each cell till they arrive at their expected result. This helps the user to track the history of all the executions to see which parameters were changed and how the results were derived.</ns0:p><ns0:p>ProvBook also provides a module that converts the computational notebooks along with the provenance information of their executions and execution environment attributes into RDF. This provenance information is represented using the REPRODUCE-ME ontology. ProvBook allows the user to export the notebook in RDF as a turtle file either from the user interface of the notebook or using the command line. The users can share a notebook and its provenance in RDF and convert it back to a notebook. The reproducibility service provided by ProvBook converts the provenance graph back to a computational notebook along with its provenance. The Jupyter notebooks and the provenance information captured by ProvBook in RDF are then linked to the provenance of the experimental metadata in CAESAR. along with the original results. ProvBook provides a feature that helps scientists to compare the results of different executions of a Jupyter Notebook performed by the same or different agents. ProvBook provides a provenance difference module to compare the different executions of each cell of a notebook, thus helping the users either to (1) repeat and confirm/refute their results or (2) reproduce and confirm/refute others results. The start time of different executions collected is used to distinguish between the two executions. A dropdown menu is provided to select two executions based on their starting time. After the two executions are selected, the difference in the input and the output of these executions are shown side by side. The users can select their own execution and compare the results with the original experimenter's execution of the Jupyter Notebook as well. Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> shows the differences between the source and output of two different code cell execution. Any differences in the source or output are highlighted for the user to distinguish the change. 
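The listing below is a minimal sketch of how these recorded executions can be read back and compared outside the notebook interface. The metadata key ("provenance") and the per-run field names are assumptions about ProvBook's layout, and the comparison uses difflib only as a simple stand-in for the nbdime-based difference module described below.

import difflib
import nbformat

nb = nbformat.read("analysis.ipynb", as_version=4)    # hypothetical notebook file

for index, cell in enumerate(nb.cells):
    if cell.cell_type != "code":
        continue
    runs = cell.metadata.get("provenance", [])        # assumed metadata key
    for run in runs:
        print(index, run.get("start_time"), run.get("end_time"), run.get("execution_time"))
    if len(runs) >= 2:                                 # compare the two most recent executions
        diff = difflib.unified_diff(
            runs[-2].get("source", "").splitlines(),
            runs[-1].get("source", "").splitlines(),
            fromfile="previous run", tofile="latest run", lineterm="")
        print("\n".join(diff))

Reading the provenance directly from the cell metadata in this way also shows why the notebook remains a self-contained artefact: the execution history travels with the .ipynb file itself.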
The provenance difference module is developed by extending the nbdime (Project Jupyter, 2021) library from the Project Jupyter. The nbdime tools provide the ability to compare notebooks and also a three-way merge of notebooks with auto-conflict resolution. ProvBook calls the API from the nbdime to see the difference between the provenance of two executions of a notebook code cell.</ns0:p><ns0:p>Using nbdime, ProvBook provides diffing of notebooks based on the content and also renders image-diffs correctly. This module in CAESAR help scientists to compare the results of the executions of different users.</ns0:p><ns0:p>Visualization. Efficient visualization is important to make meaningful interpretations of the data. The provenance information of each cell captured by ProvBook is visualized below every input cell. In this provenance area, a slider is provided so that the user can drag to view the history of the different executions of the cell. This area also provides the user with the ability to track the history and compare the current results with several previous results and see the difference that occurred. The user can visualize the provenance information of a selected cell or all cells by clicking on the respective buttons in the toolbar. The user can also clear the provenance information of a selected cell or all cells if required. This solution tries to address the problem of having larger provenance information than the original notebook data.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Visualization.</ns0:head><ns0:p>We have described above the visualization of provenance of cells in Jupyter Notebooks. In this section, we look at visualization of overall scientific experiments. We present two modules for the visualization of the provenance information of scientific experiments captured, stored, and semantically represented in CAESAR: Dashboard and ProvTrack. The experimental data provided by scientists through the metadata editor, the metadata extracted from the images and instruments, and the details of the computational steps collected from ProvBook together are integrated, linked, and represented using the REPRODUCE-ME ontology. All this provenance data, stored as linked data, form the basis for the complete path of a scientific experiment and is visualized in CAESAR. Dashboard. This visualization module aggregates all the data related to an experiment and project in a single place. The user is provided with two views: one at the project level and another at the experiment level. The Dashboard at the project level provides a unified view of a research project containing multiple experiments by different agents. When a user selects a project, then the Project Dashboard is activated while the Experiment Dashboard is activated when a dataset is selected. The Dashboard is composed of several panels. Each panel provides a detailed view of a particular component of an experiment. The data inside a panel is displayed in tables. The panels are arranged in a way that they provide the story of an experiment. The detailed description of each panel is provided <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Users can also search and filter the data based on keywords inside a table in the panel.</ns0:p><ns0:p>ProvTrack. This visualization module provides users an interactive way to track the provenance of experimental results. The provenance of experiments is provided using a node-link representation, thus, helping the user to backtrack the results. 
Users can drill-down each node to get more information and attributes. This module which is developed independently, is integrated into CAESAR. The provenance graph is based on the data model represented by REPRODUCE-ME ontology. To get the complete path of a scientific experiment, the SPARQL endpoint is queried. Several SPARQL queries are made, and the results are combined to display this complete path and increase the system's performance. Figure <ns0:ref type='figure'>3</ns0:ref> shows the visualization of the provenance of an experiment using ProvTrack. The user is provided with the option to select an experiment whose provenance needs to be tracked.</ns0:p><ns0:p>The provenance graph is visualized in the right panel when the user selects an experiment. Each node in the provenance graph is colored based on its type like prov:Entity, prov:Agent, prov:Activity, p-plan:Step, p-plan:Plan and p-plan:Variable. The user can expand the provenance graph by opening up all nodes using the Expand All button next to the help menu. Using the Collapse All button, users can collapse the provenance graph to one node which is the Experiment. The property relationship between two nodes is shown when a user hovers on an edge. The help menu provides the user with the meaning of each color in the graph. The path from the user-selected node to the first node (Experiment) is highlighted to show the relationship of each node with the Experiment and also to see where the node is the provenance graph.</ns0:p><ns0:p>ProvTrack also provides an Infobox of the selected node of an experiment. This additional information about the selected node is displayed as a key-value pair. The keys in the Infobox are either the object or data properties of the REPRODUCE-ME ontology that is associated with the node that the user has Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>model. It provides a dropdown to search not only the nodes but also the edges. This comes handy when the provenance graph is very large.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>We evaluate different aspects of our work based on our main research question that it is possible to capture, represent, manage and visualize a complete path taken by a scientist in an experiment including the computational and non-computational steps to derive a path towards experimental results. We first evaluate our data model which is integrated in CAESAR to describe the complete path of a scientific experiment using competency questions. <ns0:ref type='bibr' target='#b62'>Samuel &amp; K&#246;nig-Ries (2021)</ns0:ref> provides the results of a user-based survey to evaluate whether the terms added in the REPRODUCE-ME ontology are required to describe the provenance of a scientific experiment. We then perform data and user-based evaluation of ProvBook in CAESAR in different scenarios of computational reproducibility. We then evaluate CAESAR in general based on the users' perspective and provide the results of the user study. Scientists are involved in the evaluation and also being the users of our system.</ns0:p></ns0:div> <ns0:div><ns0:head>Competency question based evaluation</ns0:head><ns0:p>In competency question based evaluation, ontologies are used in systems to produce good results on a given task <ns0:ref type='bibr' target='#b9'>(Brank et al., 2005)</ns0:ref>. 
Hence, we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase. We used the REPRODUCE-ME ontology to answer the competency questions using the scientific experiments documented in CAESAR for its evaluation. The evaluation was done on a server (installed with CentOS Linux 7 and with x86-64 architecture) hosted at the University Hospital Jena. Scientists from the B1 and A4 projects of ReceptorLight documented experiments using confocal patch-clamp fluorometry (cPCF), Förster Resonance Energy Transfer (FRET), PhotoActivated Localization Microscopy (PALM) and direct Stochastic Optical Reconstruction Microscopy (dSTORM) as part of their daily work. In 23 projects, a total of 44 experiments were recorded and uploaded with 373 microscopy images generated from different instruments with various settings using either the desktop client or webclient of CAESAR (accessed 21 April 2019). We also used the Image Data Repository (IDR) datasets (IDR, 2021) with around 35 imaging experiments <ns0:ref type='bibr' target='#b69'>(Williams et al., 2017)</ns0:ref> for our evaluation to ensure that the REPRODUCE-ME ontology can be used to describe other types of experiments as well. The scientific experiments, along with the steps, experiment materials, settings, and standard operating procedures, were described with the REPRODUCE-ME ontology via OBDA. We created a knowledge base of different types of experiments from these two sources. The competency questions, which were translated into SPARQL queries by computer scientists, were executed on our knowledge base, consisting of linked data in CAESAR. The domain experts evaluated the correctness of the answers to these competency questions. We present here one competency question with the corresponding SPARQL query and part of the results obtained on running it against the knowledge base. The result of each query is a long list of values; hence, we show only the first few rows. This query retrieves the complete path for an experiment.</ns0:p><ns0:p>Listing 1. What is the complete path taken by a scientist for an experiment?
SELECT DISTINCT * WHERE {
  ?experiment a repr:Experiment ;
              prov:wasAttributedTo ?agent ;
              repr:hasDataset ?dataset ;
              prov:generatedAtTime ?generatedAtTime .
  ?agent repr:hasRole ?role .
  ?dataset prov:hadMember ?image .
  ?instrument p-plan:correspondsToVariable ?image ;
              repr:hasPart ?instrumentpart .
  ?instrumentpart repr:hasSetting ?setting .
  ?plan p-plan:isSubPlanOfPlan ?experiment .
  ?variable p-plan:isVariableOfPlan ?plan .
  ?step p-plan:isStepOfPlan ?experiment .
  OPTIONAL { ?step p-plan:isPrecededBy ?previousStep } .
  { ?Input p-plan:isInputVarOf ?step ;
           rdf:type ?InputType .
    OPTIONAL { ?Input repr:name ?InputName } . }
  UNION
  { ?Output p-plan:isOutputVarOf ?step ;
            rdf:type ?OutputType .
    OPTIONAL { ?Output repr:name ?OutputName } .
    OPTIONAL { ?Output repr:isAvailableAt ?outputUrl } .
    OPTIONAL { ?Output repr:reference ?OutputReference .
               ?OutputReference rdf:value ?OutputReferenceValue } }
}</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> shows part of the result for a particular experiment called 'Focused mitotic chromosome condensation screen using HeLa cells'. Here, we queried the experiment with its associated agents and their role, the plans and steps involved, the input and output of each step, the order of steps, and the instruments and their setting.
We see that all these elements are now linked to the computational and non-computational steps to describe the complete path. This query can be further expanded by querying for additional information, such as the materials, publications, external resources, and methods used in each step of an experiment. It is possible to query for all the elements mentioned in the REPRODUCE-ME Data Model.</ns0:p><ns0:p>We also observed that for certain experiments which did not provide the complete data for some elements, the query returned null. The query therefore needs to be tweaked with the OPTIONAL keyword to still obtain results. Another thing that we noticed during the evaluation is that the results are spread across several rows in the table. In the Dashboard, when we show these results, the filter option provided in the table helps the user to search for particular columns. The results of the SPARQL queries were manually compared using the Dashboard and ProvTrack. Their correctness was evaluated by the domain experts <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Each of the competency questions addressed different elements of the REPRODUCE-ME Data Model. The competency questions, the RDF data used for the evaluation, the SPARQL queries, and their results are publicly available <ns0:ref type='bibr' target='#b57'>(Samuel, 2021)</ns0:ref>.</ns0:p><ns0:p>For the data- and user-based evaluation of ProvBook, the following use cases and factors were considered. 1. Repeatability: The computational experiment is repeated in the same environment by the same experimenter. This is performed to confirm the final results from the previous executions. We used an example script that applies the eigenfaces algorithm and an SVM to a face recognition dataset using scikit-learn 3 . This script provides a computational experiment that extracts information from a publicly available image dataset using machine learning techniques. We used this example script to show the different use cases of our evaluation and also how ProvBook handles different output formats like image, text, etc., as these formats are important for the users of CAESAR. We use Original Author to refer to the author who is the first author of the notebook and User 1 and User 2 to the authors who used the original notebook to reproduce the results.</ns0:p><ns0:p>For the user-based evaluation of CAESAR, the participating scientists were given training on CAESAR and its workflow for documenting experimental data; the trainings were done throughout the years 2016-2018 (17.06.2016, 19.07.2016, 07.06.2017, 09.04.2018 and 16.06.2018). As part of these trainings, scientists were asked to upload their experimental data to CAESAR. At the time of evaluation, the participants were provided with the system with real-life scientific experiment data as mentioned in the competency question evaluation subsection. None of the questions in the study was mandatory. The questionnaire along with the responses is available in the Supplementary file.</ns0:p><ns0:p>In the first section of the study, we asked how the features in CAESAR help in improving the participants' daily research work. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results. 83% of the participants either strongly agreed or agreed that it helps to visualize all the experimental data and results effectively, while 17% of them disagreed. In the next section, we asked about the perceived usefulness of CAESAR. 60% of the users considered CAESAR user-friendly, while 40% of them had a neutral response. 40% of the participants agreed that CAESAR is easy to learn to use and 60% had a neutral response.
The reason mentioned in the responses was that CAESAR provides a lot of features and that they found it a little difficult to follow. However, all the participants strongly agreed or agreed that CAESAR is useful for scientific data management and provides a collaborative environment among teams.</ns0:p><ns0:p>In the last section, we evaluated each feature provided by CAESAR, focusing on the important visualization modules. ProvTrack was strongly liked or liked by all the participants. For the Dashboard, 80% of them either strongly liked or liked it, while 20% of them had a neutral response. 60% of the users strongly liked or liked ProvBook, while the other 40% had a neutral response. The reason for the neutral responses was that these participants were new to scripting. We also asked the participants to provide overall feedback on CAESAR, along with its positive aspects and the things to improve. We obtained three responses to this question, which are available in the Supplementary file.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Provenance plays a key role in supporting the reproducibility of results, which is an important concern in data-intensive science. Through CAESAR, we aimed to provide a data management platform for capturing, semantically representing, comparing, and visualizing the provenance of scientific experiments, including their computational and non-computational aspects. CAESAR is used and deployed in the CRC ReceptorLight project, where scientists work together to understand the function of membrane receptors and develop high-end light microscopy techniques. In the competency question-based evaluation, we focused on answering the questions using the experimental provenance data provided by scientists from the research projects, which was then managed and semantically described in CAESAR. Answering the competency questions using SPARQL queries shows that some experiments documented in CAESAR had missing provenance data for some of the elements of the REPRODUCE-ME Data Model, such as time and settings. In addition, the query for finding the complete path of a scientific experiment returns many rows in the table. Therefore, the response time could exceed the normal query response time and result in a server error from the SPARQL endpoint in cases where the experiment has various inputs and outputs with several executions. Currently, the scientists from the life sciences do not have knowledge of Semantic Web technologies and are not familiar with writing their own SPARQL queries. Hence, we did not perform any user study on writing SPARQL queries to answer competency questions. However, scientists must be able to see the answers to these competency questions and explore the complete path of a scientific experiment. To overcome this issue, we split the queries and combined their results in ProvTrack.
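The following minimal sketch illustrates this split-and-merge idea (it is not ProvTrack's actual implementation): the bindings returned by two smaller SPARQL queries, shown here as hypothetical values in the SPARQL JSON result format, are combined into one record per experiment.

# Hypothetical bindings from two smaller queries (one for agents, one for steps).
agent_bindings = [
    {"exp": {"value": "http://example.org/caesar/experiment-42"},
     "agent": {"value": "http://example.org/caesar/alice"}},
]
step_bindings = [
    {"exp": {"value": "http://example.org/caesar/experiment-42"},
     "step": {"value": "http://example.org/caesar/step-1"}},
    {"exp": {"value": "http://example.org/caesar/experiment-42"},
     "step": {"value": "http://example.org/caesar/step-2"}},
]

merged = {}
for name, bindings in (("agents", agent_bindings), ("steps", step_bindings)):
    for row in bindings:
        record = merged.setdefault(row["exp"]["value"], {"agents": set(), "steps": set()})
        record[name].add(row[name[:-1]]["value"])      # "agents" -> key "agent", "steps" -> key "step"

for experiment, record in merged.items():
    print(experiment, sorted(record["agents"]), sorted(record["steps"]))

Issuing several focused queries keeps each response small, while the client-side merge still yields the complete path per experiment.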
The visualization modules, Dashboard, and ProvTrack, which use SPARQL and linked data in the background, provides the visualization of the provenance graph of each scientific experiment. The entities, agents, activities, steps, and plans in ProvTrack are grouped to help users visualize the complete path of an experiment.</ns0:p><ns0:p>The results of the data and user-based evaluation of ProvBook in CAESAR show how it helps in supporting computational reproducibility. We see that each item added in the provenance information in Jupyter Notebooks helps users track the results even in different execution environments. The input, output, starting and ending time, and execution time for each trial from each experimenter helps in tracking the provenance of the computational experiments. The Jupyter Notebooks shared along with the provenance information of their executions helps to compare the original intermediate and final results with the results from the new trials executed in the same or different environment. We see that it not only helps in reproducibility but also with repeatability. This helps track the intermediate and negative results, and the input and the output from different trials are not lost. The execution environmental attributes of the computational experiments along with their results, help to understand their complete path. We also see that we could describe the relationship between the results, the execution environment, and the executions that generated the results of a computational experiment in an interoperable way using the REPRODUCE-ME ontology.</ns0:p><ns0:p>The user-based evaluation of CAESAR aimed to see how the users find CAESAR useful concerning the features it provides. We targeted both the regular users and the users who are new to the system. Even though we had a small group of participants, they either agreed or liked its features. The survey results in <ns0:ref type='bibr' target='#b62'>(Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref> had shown that newcomers face difficulty in finding, accessing, and reusing data in a team. A strong agreement was seen among the participants that CAESAR helps to preserve data for the newcomers to understand the ongoing work in the team. This understanding of the ongoing work in the team comes from the linking of experimental data and results. This is achieved using the The limitation of our evaluation is the small number of user participation. Hence, we cannot make any statistical conclusion on the usefulness of the system itself. However, CAESAR is planned to be used and extended for another large research project, Microverse 4 , which will allow for a more scalable user evaluation. One part of the provenance capture module depends on the scientists to document their experimental data. Even though the metadata from the images captures the execution environment and the devices' settings, the need for human annotations to the experimental datasets is significant. Besides this limitation, the mappings for the ontology-based data access required some manual curation. This can affect when the database is extended for other experiment types.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, we presented CAESAR. It provides a collaborative framework for the management of scientific experiments, including the computational and non-computational steps. The provenance of the scientific experiments is captured and semantically represented using the REPRODUCE-ME ontology. 
ProvBook helps the user capture, represent, manage, visualize and compare the provenance of different executions of computational notebooks. The computational data and steps are linked to the non-computational data and steps to represent the complete path of the experimental workflow. The visualization modules of CAESAR provides users to view the complete path and backtrack the provenance of results. We applied our contributions together in the ReceptorLight project to support the end-to-end provenance management from the beginning of an experiment to its end. There are several possibilities to extend and improve CAESAR. We expect that this approach can be extended to different types of experiments in diverse scientific disciplines. Reproducibility of non-computational parts of an experiment is our future line of work. We can reduce the query time for the SPARQL queries in the project dashboard and ProvTrack by taking several performance measures. CAESAR could be extended to serve as a public data repository providing DOIs to the experimental data and provenance information. This would help the scientific community to track the complete path of the provenance of the results described in the scientific publications.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of CAESAR. The data management platform consists of modules for provenance capture, representation, storage, comparison, and visualization. It also includes several additional services including API access, and SPARQL Endpoint.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:1:1:NEW 23 Aug 2021) Manuscript to be reviewed Computer Science only viewing and reading possibilities. The Read-annotate permission level provides a more collaborative option where the group owners and members can view the other groups' members as well as read and annotate their data. The Read-write permission level allows the group members to read and write data just like their own group.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Those suggestions are received by the owner of the experiment as proposals. The user can either accept the proposal and add it to the current experimental data or reject and delete the proposal. The plugin provides autocompletion of data to fasten the process of documentation. For example, based on the CAS number of the chemical provided by the user, the molecular weight, mass, structural formulas are fetched from the CAS registry and populated in the Chemical database. The plugin also provides additional data from the external servers for other materials like Protein, Plasmid, and Vector. The plugin also autofills the data about the authors and other publication details based on the DOI/PubMedId of the publications.It also provides a virtual keyboard to aid the users in documenting descriptions with special characters, chemical formulas, or symbols.Provenance Management and Representation.We use a PostgreSQL database in OMERO as well as in CAESAR. The OMERO database consists of 145 tables and the ReceptorLight database consists of 35 tables in total. We use the REPRODUCE-ME ontology to model and describe the experiments and their provenance in CAESAR. 
The database model and its schema consist of important classes which are based on the REPRODUCE-ME Data Model <ns0:ref type='bibr' target='#b55'>(Samuel, 2019a)</ns0:ref>. For the data management of images, the main classes include Project, Dataset, Folder,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Comparison.Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The difference between the input and output of two different executions of a code cell in ProvBook. Deleted elements are marked in red, newly added or created elements are marked in green.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3. ProvTrack: Tracking Provenance of Scientific Experiments</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>clicked. Links are provided to the keys to get their definitions from the web. The path of the selected node from the Experiment node is also displayed on top of the left panel. The Search panel provides the users with the ability to search for any entities in the graph defined by the REPRODUCE-ME data</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A part of the results for the competency question</ns0:figDesc><ns0:graphic coords='13,141.74,63.78,413.55,226.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Data and user-based evaluation of ProvBook in CAESAR. We also evaluated how ProvBook supports computational reproducibility using Jupyter Notebooks in CAESAR. We used data and user-based evaluation focusing on how ProvBook performs with different aspects of usability, performance, and scalability in addition to reproducibility. The evaluation was done taking into consideration different use cases and factors. They are:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>2. Reproducibility: The computational experiment is reproduced in a different environment by a different experimenter. In this case, the results of the Jupyter notebook in the original environment are compared with the results from a different experimenter executed in a different environment. 3. The input, output, execution time and the order in two different executions of a notebook. 4. Provenance difference of the results of a notebook. 5. Performance of ProvBook with respect to time and space.
6. The environmental attributes in the execution of a notebook. 7. The complete path taken by a computational experiment, with the sequence of steps in the execution of a notebook and the input parameters and intermediate results in each step required to generate the final output.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>the original notebook to reproduce results. The notebook was first saved without any outputs. Later, the notebook was executed by two different users. The notebook was run in three different environments. The first run of the eigenfaces Jupyter Notebook gave a ModuleNotFoundError for User 1. Several runs were attempted to solve the issue. However, for User 2, only the first run gave a ModuleNotFoundError. This was resolved by installing the scikit-learn module. But for User 1, installing the module still did not solve the issue. The problem occurred because of the version change of the scikit-learn module. The original Jupyter Notebook used version 0.16 of scikit-learn, while User 1 used version 0.20.0 and User 2 used 0.20.3. The classes and functions from the cross validation, grid search, and learning curve modules were placed into a new model selection module starting from scikit-learn 0.18. Several other changes were made in the script which used these functions. User 1 made the necessary changes to work with the new versions of the scikit-learn module; hence, User 2 did not have to change the scripts. Using ProvBook, Users 1 and 2 could track the changes and compare the original script with the new one, which worked on both users' systems. The results are available in the Supplementary file.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>User-based evaluation of CAESAR. We performed a user-based evaluation of CAESAR. The purpose of this study was to evaluate the usefulness of CAESAR and its different modules. Seven participants were invited for the survey, of which six responded to the questions. The participants of this evaluation were the scientists of the ReceptorLight project who use CAESAR in their daily work. In addition to them, biology students who work closely with microscopy images and are not part of the ReceptorLight project participated in this evaluation. The scientists from the ReceptorLight project were given training on CAESAR and its workflow for documenting experimental data. Apart from the internal meetings, the trainings were done throughout the years from 2016 to 2018.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>visualization of the overall view of the experimental data. The results from the study show that among the two visualization modules, ProvTrack was preferred over the Dashboard by scientists. Even though both serve different purposes (the Dashboard for an overall view of the experiments conducted in a project and ProvTrack for backtracking the results of one experiment), the users preferred the provenance graph to be visualized with detailed information on clicking. The survey shows that the visualization of the experimental data and results using ProvTrack, supported by the REPRODUCE-ME ontology, helps the scientists without worrying about the underlying technologies.
All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results.</ns0:figDesc></ns0:figure> <ns0:note place='foot' n='3'>https://scikit-learn.org/0.16/datasets/labeled_faces.html</ns0:note> </ns0:body> "
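The version conflict reported in the ProvBook reproducibility evaluation above comes down to the scikit-learn reorganization introduced in release 0.18, where the cross_validation, grid_search, and learning_curve modules were merged into model_selection and later removed. The eigenfaces notebook itself is not reproduced here, so the snippet below is only a hedged sketch of the kind of import change User 1 had to make to run the original 0.16-era code under scikit-learn 0.20.x; the parameter grid is illustrative.

# Imports as written against scikit-learn 0.16 (the version used in the original notebook).
# These modules were deprecated in 0.18 and removed in 0.20, so they now raise ImportError:
#   from sklearn.cross_validation import train_test_split
#   from sklearn.grid_search import GridSearchCV
#   from sklearn.learning_curve import learning_curve

# Equivalent imports for scikit-learn >= 0.18 (the 0.20.x versions used by Users 1 and 2):
from sklearn.model_selection import train_test_split, GridSearchCV, learning_curve
from sklearn.svm import SVC

# The call sites keep their signatures, so only the import lines need to change, e.g.:
param_grid = {"C": [1e3, 5e3, 1e4], "gamma": [0.0001, 0.0005, 0.001]}
clf = GridSearchCV(SVC(kernel="rbf", class_weight="balanced"), param_grid)

Tracking exactly this kind of edit across executions is what the per-cell provenance records captured by ProvBook make visible to both users.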
"Dear Editor, Thank you for your message “Decision on your PeerJ Computer Science submission: 'A collaborative semantic-based provenance management platform for reproducibility' (#CS-2021:06:62007:0:1:REVIEW)” from 5 July, 2021. We thank you for your work and the reviewers for their thoughtful and thorough suggestions and comments, which have helped us to improve significantly the quality of the manuscript. Please find below our point-by-point responses in text with bold and italic formatting. The submission is updated accordingly and further includes the editorial changes requested by the PeerJ team. We look forward to hearing from you! Best Regards, Sheeba Samuel and Birgitta König-Ries Editor's Comments Reviewer 1 Reviewer 2 Reviewer 3 Editor's Decision Major Revisions Overall, all reviewers agree that the research sounds good. However, they think that related work and experimental design and reporting need to be more elaborated. One of the reviewers asked to make the contributions of this article clear in the sentences. All reviewers also suggested some reorganization in the paper. Please consider these comments in your new version. [# PeerJ Staff Note: Please ensure that all review and editorial comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter. Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] Thank you for your feedback and for pointing out the directions to prepare a rebuttal letter. We have addressed the reviewers’ comments and made the corresponding changes in the manuscript as well. Reviewer 1 (Joao Felipe Pimentel) Basic reporting Strengths: S1. The paper shows the context (scientific experiments) and motivates the need for collecting computational and noncomputational provenance from the early stages of the experiments to support their reproducibility. S2. Figures are relevant, high quality, well-labeled, and described. The architecture figure provides a good overview of the approach that is helpful for following the description. The other three figures provide examples of analyses that CAESAR supports. Response: Thank you for your positive comments. Your feedback has helped us to significantly improve this manuscript. Weaknesses: W1. The related work section seems outdated. The paper states 'Only a few research works have attempted to track provenance from computational notebooks (Hoekstra 2014; Pimentel et al., 2015; Carvalho et al., 2017)', but there are newer approaches that also track provenance from notebooks: - Koop, David, and Jay Patel. 'Dataflow notebooks: encoding and tracking dependencies of cells.' 9th {USENIX} Workshop on the Theory and Practice of Provenance (TaPP 2017). 2017. - Kery, Mary Beth, and Brad A. Myers. 'Interactions for untangling messy history in a computational notebook.' 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 2018. - Petricek, Tomas, James Geddes, and Charles Sutton. 'Wrattler: Reproducible, live and polyglot notebooks.' 10th {USENIX} Workshop on the Theory and Practice of Provenance (TaPP 2018). 2018. - Head, Andrew, et al. 
'Managing messes in computational notebooks.' Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019. - Wenskovitch, John, et al. 'Albireo: An Interactive Tool for Visually Summarizing Computational Notebook Structure.' 2019 IEEE Visualization in Data Science (VDS). IEEE, 2019. - Wang, Jiawei, et al. 'Assessing and Restoring Reproducibility of Jupyter Notebooks.' 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2020. Response: We have updated the related work and added the references pointed out by the reviewer. W2. Some older approaches are also missing in the related work: approaches that collect provenance at the OS level usually are able to interlink data, steps, and results from computational and non-computational processes, once the noncomputational processes are stored in the computer. However, more notably, Burrito collects the OS provenance and provides a GUI for documenting the non-computational processes and annotating the provenance: - Guo, Philip J., and Margo I. Seltzer. 'Burrito: Wrapping your lab notebook in computational infrastructure.' (2012). Response: We have updated the related work and added the reference pointed out by the reviewer. W3. The 'Design and Development' subsection of 'Background' is confusing. On one hand, it is dense and enters into implementation details of the approach that are later explained in the 'Results' section. On the other, it is too shallow and does not explain important concepts and tools used by the paper. For a better comprehension of the paper, the background section could answer the following questions: - What are the main features of REPRODUCE-ME? How does it compare to other provenance ontologies such as PROV and P-Plan? - What is ODBA? What ate the benefits of using it? Response: We have now explained the important concepts and tools used by the paper in the ‘Materials & Methods’ section, including the main features of REPRODUCE-ME (line 190-205) and OBDA (line 206-226). W4. The structure of the paper does not conform to the Peerj standards (https://peerj.com/about/author-instructions/#standard-sections): it has a 'Background' section instead of a 'Materials & Methods' section. Additionally, a big part of the 'Results' section describes the approach in detail instead of describing the experimental results. Both the 'Background' and the approach proposal could be in the 'Materials & Methods' section and solve W3 partially. Response: We have renamed the ‘Background’ section to ‘Materials & Methods’ section according to PeerJ standards (line 139). Combining the reviewers’ comments on W3 and W4, we have added the background in the ‘Materials & Methods’ section. We restructured the ‘Results’ section and moved details related to panels and schema to Supplementary File. We point out the readers to previous publications for more details regarding the ontology and database schema. W5. While the raw data was supplied (great!), the cited page (Samuel, 2019b) is generic for multiple research projects, and finding the raw data associated with this specific paper requires navigating through some links to reach the GitHub repository with the data (https://github.com/Sheeba-Samuel/CAESAREvaluation). This repository could be cited directly. Additionally, there is no description on the repository describing the structure and the data, making it hard to validate. Response: We have corrected the cited page to point directly to the correct link. 
We have added a description in the repository describing the structure and the data. Experimental design Strength: S3. The supplemental files are well described and allow the replication of the usability experiment. Weaknesses: W6. The originality of the paper evaluation is not clear. It seems that CAESAR was introduced and partially evaluated in (Samuel et al., 2018), where the first half of the evaluation occurred (the definition and evaluation of the competency questions). However, the usability evaluation seems original. The paper should state clearly what is original in this paper and how does it improve from (Samuel et al., 2018). Response: In this paper, we focus more on the architecture of CAESAR and how it provides end-to-end provenance management of scientific experiments, including computational and non-computational steps with separate modules for provenance capture, representation, comparison, and visualization. We also focus here on the visualization module ProvTrack for visualizing the complete path of a scientific experiment, which is not mentioned in the (Samuel et al., 2018) paper. We also focus on the evaluation of ProvBook in CAESAR and user evaluation results which are not mentioned in the (Samuel et al., 2018) paper. In lines 57-64 and 246-247, we state the distinction between the papers. W7. The paper does not report the results of the competency query evaluation. It indicates that each question addressed different elements of the REPRODUCE-ME Data Model, but it does not present the questions nor the results. (Disregard this weakness if the evaluation is really part of another paper. In this case, reinforce it in line 634). Response: The REPRODUCE-ME data model and the ontology along with the results of the competency query evaluation, is part of a separate paper that is under review in another journal. Hence, we enforce this in the Discussion section. W8. The research questions are not well defined. While the main motivation of the paper is based on reproducibility, the current evaluations seem to assess understandability and usability, instead. Response: We address the reviewer’s concern by restructuring the introduction section and adding the research question, aims, and the main contribution of the paper. We have also restructured the evaluation to clearly distinguish what each evaluation strategy does. Validity of the findings Weaknesses: W9. The paper should indicate the threats to the validity of the experiments. Given the size of the population, it is likely that the experiment has an external threat to validity with statistical results that are not sound. Additionally, the usage of CAESAR did not occur in a controlled environment, which also leads to a threat to internal validity. Response: We agree with the reviewer’s comment. We add a separate limitation paragraph to address the limitations of the evaluation and the system itself. W10. The conclusion claims that the approach addresses understandability, reproducibility, and reuse of scientific experiments, but the experiments do no support these claims. Response: We address the reviewer’s comment by making changes in the conclusion. We address what the system does and provide the future lines of work. Reviewer 2 Basic reporting The report is mostly there but I think could have some additional support. 
The paper states: 'However, this is too little too late: These measures are usually taken at the point in time when papers are being published' Give some examples of what these measure are that are insufficient. Response: We have added some examples of these measures in line 34. In Line 117 - 'To the best of our knowledge, no work has been done to track the provenance of results generated from the execution of these notebooks and make available this provenance information in an interoperable way'. I think this claim of novelty is not necessary to make so bold. For example, I think NBSafety (http://www.vldb.org/pvldb/vol14/p1093-macke.pdf) and Vizier https://vizierdb.info are both related work that are in the same area as the reported tool. Here I think the important thing is that PeerJ is not about novelty but rather if the work is sound. The particular work has its own perspective and is useful but I think it's unnecessary to make these claims of research novelty. At least provide some deeper justification. Response: When ProvBook was introduced in 2018, there were not many approaches that provided provenance support in Jupyter Notebooks. However, this has been changed in recent years. We agree with the reviewer’s comments and have updated the related works and added the references suggested by reviewers 1 and 2. Lastly, when introducing the FAIR principles say what they stand for. Response: We have added details of what the FAIR principles stand for in line 32. Experimental design The experimental design and reporting need to be better explained. In section Background Evaluation - you seem to describe a number of different evaluation strategies. I had trouble figuring out the various strategies and telling them a part. It would be beneficial to distinguish each of the evaluation strategies and make it clear what approach you used in each of them. Maybe simply labeling them would help. For example, you seem to have an application evaluation, a competence question based evaluation, user based evaluation? The lack of organization of the material is also present in the presentation of the evaluation results. Please clearly describe and distinguish which evaluation strategies were used and the specific evaluation results there were. For example, it was unclear why a notebook with face recognition (line 574) was being used for evaluation. Likewise, I'm not sure if a user survey study of 6 participants is enough when the other evaluation approaches are not clearly demarcated. Be careful of claims made in the system description without evidence. For example 'The provenance information is stored in CAESAR to query this data from different sources efficiently.' These claims when made need to be substantiated. Response: We have added labels for each type of evaluation to distinguish between the evaluation strategies and their results as suggested. We have reorganized the Evaluation section. We have moved the evaluation subsection from the Background (Materials & Methods) section. We have provided justification for the claims made in the manuscript (e.g. line 560-564) or removed the claims which are not substantiated. We have also added limitations of our evaluation and the system itself. One of the limitations includes the small number of participants in the userbased evaluation. 
Validity of the findings I believe the majority of the findings are valid given that this is a report of a software platform but given the reporting on the method it was hard to determine whether the evaluation results in their totality showed what the authors said it shows. Response: We have reorganized the paper based on the reviewers’ suggestions. This includes adding the main research question and the aim. We have also removed any claims which cannot be substantiated. The findings are focused on the semantic-based data management platform and how we link the computational and noncomputational aspects of scientific experiments. Competency question-based evaluation focuses on semantically describing the complete path of a scientific experiment. The data and user-based evaluation of ProvBook in CAESAR focuses on the support of computational reproducibility. The user-based evaluation of CAESAR focuses on the usefulness of CAESAR and its modules. Comments for the author In general, a nice contribution of the system but I the reporting around the evaluation needs to be clearer for the reader. Also, the claim of novelty I don't think needs to be made and instead the focus should be on the contribution of the system and validating the claims of the system. Response: Thank you for your positive comments. We have reorganized the paper based on reviewers’ comments. Please see the above comments. We have also made more explicit the contribution of the system and validating the claims of the system. Reviewer 3 Basic reporting The paper's overall structure is fine, but the writing should be improved. There is a lot of passive voice used that should be changed. For example, 'it is also required to represent and express this information in an...' could be restated as 'information should be represented and expressed in an...' Also, there are places where 'got' is used that could use more precise language (e.g. obtained). I also would advise against using quotation marks for defining terms (e.g. 'provenance') as that brings a connotation that is likely not intended; use italics or bold instead. In general, the paper is wordy, and sentences or phrases can be shortened or even omitted. In addition, the results of the evaluation seem to be discussed twice (once in the Evaluation subsection [p. 11] and again in the Discussion section [p. 14]). The paper is professional, the raw survey results are shared, and the source code is available. There are some places where terms could be defined earlier (e.g. FAIR in the introdution is not explained, JupyterHub [p. 8]). Response: We address the comment by improving the writing. We have removed passive voice sentences (e.g., line 46) and replaced words with more precise language (e.g., got → obtained). We have added italics for defining terms (e.g., Provenance, Experiment). We have removed some sentences which can be wordy and shortened/omitted sentences. We have moved all the evaluation related explanation in a separate section ‘Evaluation’ to avoid confusion. We have added explanation for the terms like FAIR, JupyterHub, REPRODUCE-ME, and OBDA. Experimental design The paper is a research paper, detailing a framework and the evaluation of it. The framework, CAESAR, seeks to help users create and maintain reproducible experimental workflows. The introduction lays out the reasons why reproducibility is important and how provenance can help, and the contributions of CAESAR, ProvBook, and REPRODUCE-ME. 
Much of the paper provides details on the architectures of these frameworks which is expected. To me, the paper does not do well enough describing the experimental design. The paper states that competency questions were used to evaluate CAESAR, then discusses data and user-based evaluation of ProvBook, and finally discusses a user-based evaluation. It is not clear to me what the competency questions are nor what 'plugged the ontology in CAESAR by using these competency questions' means. I was unsure if the paragraphs describing evaluation were related or separate experiments. It is also unclear what the user-based evaluation measured. Users had to upload data, but nothing else was mandatory? The number of users (6) is too low to make significant statistical conclusions, and the evaluation seems entirely subjective. Response: We acknowledge the reviewers’ concerns. As a result, we have reorganized the paper, especially the evaluation section. We have distinguish between different evaluation strategies to avoid confusion why each evaluation has been done. We have added labels for each type of evaluation to distinguish between the evaluation strategies and their results. We have added the limitations of the evaluation and the system. Validity of the findings I don't think the conclusions are overstated but rather the conclusions seem quite limited. Due to some issues with the experimental design (see 2), I think it is difficult to judge the conclusions. It is important for users to 'like' a system, but it would be helpful to know more about the use cases where the framework has been used. Response: We address the reviewer’s concern by adding a brief explanation of the use case where the framework has been used and where it will be used (lines 619 and 669). We have restructured the Discussion section to include the implications of our evaluation and system, and also their limitations. We have also restructured the conclusions by focusing on CAESAR based on its functionality and evaluation and the future lines of work. Comments for the author This paper describes a lot of work in the areas of reproducibility and provenance which are important areas for computational science. The paper describes how domain scientists in biological imaging face particular challenges and how CAESAR, ProvBook, and REPRODUCE-ME were designed to help them. It seems like there has been a lot of work done which is important to both the bioimaging community and the broader community interested in reproducibility and provenance. As a paper describing these frameworks, it goes into great detail about the particular implementation, which is important documentation, but is not particularly useful for readers who do not use this specific system. There is certainly a tension between not providing enough detail on how the system works and writing user documentation in a research paper, but I feel the paper has too much of the latter. The long lists describing panels and the REPRODUCEME schema should be moved to supplemental material. The threads related to the core problems of supporting reproducibility in the bioimaging workflows are sometimes lost in the details of the system. I would strongly encourage adding a running example to the text to help readers understand specific use cases and how the frameworks help users to address them. In this context, specific features can be detailed. This should also lead to greater coherence among the three pieces that are detailed in the paper. 
Response: Thank you for your positive comments. As suggested by the reviewer, we have removed the long list of panel descriptions and pointed the readers to (Samuel et al., 2018). Since ProvTrack is a new module that was introduced later and is a new contribution, we keep the information regarding the ProvTrack visualization module but shorten its description. We also move the REPRODUCE-ME schema description to the supplementary material and point the readers to the ontology documentation for more details. The use case of the framework is driven by the requirement analysis phase, which is described in the Materials & Methods section. "
Here is a paper. Please give your review comments after reading it.
376
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Scientific data management plays a key role in the reproducibility of scientific results. To reproduce results, not only the results but also the data and steps of scientific experiments must be made findable, accessible, interoperable, and reusable. Tracking, managing, describing, and visualizing provenance helps in the understandability, reproducibility, and reuse of experiments for the scientific community. Current systems lack a link between the data, steps, and results from the computational and non-computational processes of an experiment. Such a link, however, is vital for the reproducibility of results. We present a novel solution for the end-to-end provenance management of scientific experiments. We provide a framework, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), which allows scientists to capture, manage, query and visualize the complete path of a scientific experiment consisting of computational and noncomputational data and steps in an interoperable way. CAESAR integrates the REPRODUCE-ME provenance model, extended from existing semantic web standards, to represent the whole picture of an experiment describing the path it took from its design to its result. ProvBook, an extension for Jupyter Notebooks, is developed and integrated into CAESAR to support computational reproducibility. We have applied and evaluated our contributions to a set of scientific experiments in microscopy research projects.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Reproducibility of results is vital in every field of science. The scientific community is interested in the results of experiments that are accessible, reproducible, and reusable. Recent surveys conducted among researchers show the existence of a problem in reproducing published results in different disciplines <ns0:ref type='bibr' target='#b6'>(Baker, 2016;</ns0:ref><ns0:ref type='bibr' target='#b65'>Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref>. Recently, there is a rapidly growing awareness in scientific disciplines on the importance of reproducibility. As a consequence, measures are being taken to make the data used in the publications FAIR (Findable, Accessible, Interoperable, Reusable) <ns0:ref type='bibr' target='#b71'>(Wilkinson et al., 2016)</ns0:ref>. However, this is too little too late: These measures are usually taken at the point in time when papers are being published. These measures do not include the management and description of several trials of an experiment, negative results from these trials, dependencies between the data and steps, etc. However, many challenges, particularly for data management, are faced by scientists much earlier in the scientific cycle. If they are not addressed properly and in a timely manner, they often make it impossible to provide truly FAIR data at the end. Therefore, we argue that scientists need support from the very beginning of an experiment in handling the potentially large amounts of heterogeneous research data and its derivation.</ns0:p><ns0:p>A key factor to support scientific reproducibility is the provenance information that tells about the origin or history of the data. 
Recording and analysis of provenance data of a scientific experiment play a vital role for scientists to know the methods and steps taken to generate the output, to reproduce own results or other scientist's results <ns0:ref type='bibr' target='#b68'>(Taylor &amp; Kuyatt, 1994)</ns0:ref>. In addition to the preservation of data and results, the datasets and the metadata need to be collected and organized in a structured way from the beginning of the experiments. At the same time, information should be represented and expressed in an interoperable PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science way so that scientists can understand the data and results. Therefore, we need to start addressing this issue at the stage when the data is created. Thus, scientific research data management needs to start at the earlier stage of the research lifecycle to play a vital role in this context.</ns0:p><ns0:p>In this paper, we aim to provide end-to-end provenance capture and management of scientific experiments to support reproducibility. To define our aim, we define the main research question, which structure the remainder of the article: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? To address the research question, we create a conceptual model using semantic web technologies to describe a complete path of a scientific experiment. We design and develop a provenance-based semantic framework to populate this model, collect information about the experimental data and results along with the settings, runs, and execution environment and visualize them. The main contribution of this paper is the framework for the end-to-end provenance management of scientific experiments, called CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility) integrated with ProvBook <ns0:ref type='bibr' target='#b64'>(Samuel &amp; K&#246;nig-Ries, 2018c</ns0:ref>) and REPRODUCE-ME data model <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. CAESAR supports computational reproducibility using our tool ProvBook, which is designed and developed to capture, store, compare and track the provenance of results of different executions of Jupyter Notebooks. The complete path of a scientific experiment interlinking the computational and non-computational data and steps is semantically represented using the REPRODUCE-ME data model.</ns0:p><ns0:p>In the following sections, we provide a detailed description of our findings. We start with an overview of the current state-of-the-art ('Related Work'). We describe the experimental methodology used in the development of CAESAR ('Materials &amp; Methods'). In the 'Results' section, we describe CAESAR and its main modules. We describe the evaluation strategies and results in the 'Evaluation' section. In the 'Discussion' section, we discuss the implications of our results and the limitations of our approach. We conclude the article by highlighting our major findings in the 'Conclusion' section.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Scientific data management plays a key role in knowledge discovery, data integration, and reuse. The preservation of digital objects has been studied for long in the digital preservation community. 
Some works give more importance to software and business process conservation <ns0:ref type='bibr' target='#b41'>(Mayer et al., 2012)</ns0:ref>, while other works focus on scientific workflow preservation <ns0:ref type='bibr' target='#b7'>(Belhajjame et al., 2015)</ns0:ref>. We focus our approach more on the data management solutions for scientific data, including images. <ns0:ref type='bibr' target='#b21'>Eliceiri et al. (2012)</ns0:ref> provide a list of biological imaging software tools. BisQue is an open-source, server-based software system that can store, display and analyze images <ns0:ref type='bibr' target='#b36'>(Kvilekval et al., 2010)</ns0:ref>. OMERO, developed by the Open Microscopy Environment <ns0:ref type='bibr'>(OME)</ns0:ref>, is another open-source data management platform for imaging metadata primarily for experimental biology <ns0:ref type='bibr'>(Allan et al., 2012)</ns0:ref>. It has a plugin architecture with a rich set of features, including analyzing and modifying images. It supports over 140 image file formats using BIO-Formats <ns0:ref type='bibr' target='#b38'>(Linkert et al., 2010)</ns0:ref>. OMERO and BisQue are the two closest solutions that meet our requirements in the context of scientific data management. A general approach to document experimental metadata is provided by the CEDAR workbench <ns0:ref type='bibr' target='#b25'>(Gonc &#184;alves et al., 2017)</ns0:ref>. It is a metadata repository with a web-based tool that helps users to create metadata templates and fill in the metadata using those templates. However, these systems do not directly provide the features to fully capture, represent and visualize the complete path of a scientific experiment and support computational reproducibility and semantic integration.</ns0:p><ns0:p>Several tools have been developed to capture complete computational workflows to support reproducibility in the context of scientific workflows, scripts, and computational notebooks. <ns0:ref type='bibr' target='#b49'>Oliveira et al. (2018)</ns0:ref> survey the current state of the art approaches and tools that support provenance data analysis for workflow-based computational experiments. Scientific Workflows, which are a complex set of data processes and computations, are constructed with the help of a Scientific Workflow Management System (SWfMS) <ns0:ref type='bibr' target='#b20'>(Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b39'>Liu et al., 2015)</ns0:ref>. Different SWfMSs have been developed for different uses cases and domains <ns0:ref type='bibr' target='#b48'>(Oinn et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b3'>Altintas et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b20'>Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b66'>Scheidegger et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b23'>Goecks et al., 2010)</ns0:ref>. Most SWfMSs provide provenance support by capturing the history of workflow executions.</ns0:p><ns0:p>These systems focus on the computational steps of an experiment and do not link the results to the experimental metadata. Despite the provenance modules present in these systems, there are currently many challenges in the context of reproducibility of scientific workflows <ns0:ref type='bibr' target='#b73'>(Zhao et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b16'>Cohen-Boulakia et al., 2017)</ns0:ref>. 
Workflows created by different scientists are difficult for others to understand or re-run in Manuscript to be reviewed Computer Science a different environment, resulting in workflow decays <ns0:ref type='bibr' target='#b73'>(Zhao et al., 2012)</ns0:ref>. The lack of interoperability between scientific workflows and the steep learning curve required by scientists are some of the limitations according to the study of different SWfMSs <ns0:ref type='bibr' target='#b16'>(Cohen-Boulakia et al., 2017)</ns0:ref>. The Common Workflow Language <ns0:ref type='bibr' target='#b4'>(Amstutz et al., 2016)</ns0:ref> is an initiative to overcome the lack of interoperability of workflows.</ns0:p><ns0:p>Though there is a learning curve associated with adopting workflow languages, this ongoing work aims to make computational methods reproducible, portable, maintainable, and shareable.</ns0:p><ns0:p>Many tools have been developed to capture the provenance of results from the scripts at different levels of granularity <ns0:ref type='bibr' target='#b26'>(Guo &amp; Seltzer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b19'>Davison, 2012;</ns0:ref><ns0:ref type='bibr' target='#b46'>Murta et al., 2014;</ns0:ref><ns0:ref type='bibr'>McPhillips et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Burrito <ns0:ref type='bibr' target='#b26'>(Guo &amp; Seltzer, 2012</ns0:ref>) captures provenance at the operating system level and provides a user interface for documenting and annotating the provenance of non-computational processes. <ns0:ref type='bibr' target='#b13'>Carvalho et al. (Carvalho et al., 2016)</ns0:ref> present an approach to convert scripts into reproducible Workflow Research Objects. However, it is a complex process that requires extensive involvement of scientists and curators with extensive knowledge of the workflow and script programming in every step of the conversion. The lack of documentation of computational experiments along with their results and the ability to reuse parts of code are some of the issues hindering reproducibility in script-based environments. In recent years, computational notebooks have gained widespread adoption because they enable computational reproducibility and allow users to share code along with documentation. Jupyter Notebook <ns0:ref type='bibr' target='#b34'>(Kluyver et al., 2016)</ns0:ref>, which was formerly known as the IPython notebook, is a widely used computational notebook that provides an interactive environment supporting over 100 programming languages with millions of users around the world. Even though it supports reproducible research, recent studies by <ns0:ref type='bibr' target='#b57'>Rule et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>Pimentel et al. (2019)</ns0:ref> point out the need for provenance support in computational notebooks. Overwriting and re-execution of cells in any order can lead to the loss of results from previous trials. 
Some research works have attempted to track provenance from computational notebooks <ns0:ref type='bibr' target='#b28'>(Hoekstra, 2014;</ns0:ref><ns0:ref type='bibr' target='#b54'>Pimentel et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Carvalho et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b32'>Kery &amp; Myers, 2018;</ns0:ref><ns0:ref type='bibr' target='#b51'>Petricek et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Head et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b70'>Wenskovitch et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b69'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Macke et al., 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b54'>Pimentel et al. (2015)</ns0:ref> propose a mechanism to capture and analyze the provenance of python scripts inside IPython Notebooks using noWorkflow <ns0:ref type='bibr' target='#b52'>(Pimentel et al., 2017)</ns0:ref>. PROV-O-Matic <ns0:ref type='bibr' target='#b28'>(Hoekstra, 2014)</ns0:ref> is another extension for earlier versions of IPython Notebooks to save the provenance traces to Linked Data file using PROV-O. In recent approaches, custom Jupyter kernels are developed to trace runtime user interactions and automatically manage the lineage of cell execution <ns0:ref type='bibr' target='#b35'>(Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b40'>Macke et al., 2021)</ns0:ref>. However, some of these approaches do not capture the execution history of computational notebooks, require changes to the code by the user, and are limited to Python scripts. In our approach, the provenance tracking feature is integrated within a notebook, so there is no need for users to change the scripts and learn a new tool. We also make available the provenance information in an interoperable way.</ns0:p><ns0:p>There exists a gap in the current state-of-the-art systems as they do not interlink the data, the steps, and the results from both the computational and non-computational processes of a scientific experiment. We bridge this gap by developing a framework to capture the provenance, provide semantic integration of experimental data and support computational reproducibility. Hence, it is important to extend the current tools and at the same time, reuse their rich features to support the reproducibility and understandability of scientific experiments.</ns0:p><ns0:p>We describe here a summary of the insights of the interviews on the research practices of scientists. A scientific experiment consists of non-computational and computational data and steps. Computational tools like computers, software, scripts, etc., generate computational data. Activities in the laboratory like preparing solutions, setting up the experimental execution environment, manual interviews, observations, etc., are examples of non-computational activities. Measures taken to reproduce a non-computational step are different than those for a computational step. 
The reproducibility of a non-computational step depends on various factors like the availability of experiment materials (e.g., animal cells or tissues) and instruments, the origin of the materials (e.g., distributor of the reagents), human and machine errors, etc.</ns0:p><ns0:p>Hence, non-computational steps need to be described in sufficient detail for their reproducibility <ns0:ref type='bibr' target='#b31'>(Kaiser, 2015)</ns0:ref>.</ns0:p><ns0:p>The conventional way of recording the experiments in hand-written lab notebooks is still in use in biology and medicine. This creates a problem when researchers leave projects and join new projects. To understand the previous work conducted in a research project, all the information regarding the project, including previously conducted experiments along with the trials, analysis, and results, must be available to the new researchers. This information is also required when scientists are working on big collaborative projects. In their daily research work, a lot of data is generated and consumed through the computational and non-computational steps of an experiment. Different entities like devices, procedures, protocols, settings, computational tools, and execution environment attributes are involved in experiments. Several people play various roles in different steps and processes of an experiment. The outputs of some noncomputational steps are used as inputs to the computational steps. Hence, an experiment must not only be linked to its results but also to different entities, people, activities, steps, and resources. Therefore, the complete path towards the results of an experiment must be shared and described in an interoperable manner to avoid conflicts in experimental outputs.</ns0:p></ns0:div> <ns0:div><ns0:head>Design and Development</ns0:head><ns0:p>We aim to design a provenance-based semantic framework for the end-toend management of scientific experiments, including the computational and non-computational steps. To achieve our aim, we focused on the following modules: provenance capture, representation, management, comparison, and visualization. We used an iterative and layered approach in the design and development of CAESAR. We first investigated the existing frameworks that capture and store the experimental metadata and the data for the provenance capture module. We further narrowed down our search to imaging-based data management systems due to the extensive use of images and instruments in the experimental workflows in the ReceptorLight project. Based on our requirements, we selected OMERO as the underlying framework for developing CAESAR. Very active development community ensuring a continued effort to improve the system, a faster release cycle, a well-documented API to write own tools, and the ability to extend the web interface with plugins provided additional benefits to OMERO. However, they lack in providing the provenance support of experimental data, including the computational processes, and also lack in semantically representing the experiments. We designed and developed CAESAR by extending OMERO to capture the provenance of scientific experiments.</ns0:p><ns0:p>For the provenance representation module, we use semantic web technologies to describe the heterogeneous experimental data as machine-readable and link them with other datasets on the web. 
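To give a flavour of what such a machine-readable description can look like, the following is a minimal sketch built with rdflib in Python. The p-plan: and prov: terms are the standard ones from P-Plan and PROV-O; the repr: namespace IRI and the repr:name property used here are placeholders for illustration and are not claimed to be the exact terms of the REPRODUCE-ME ontology introduced in the next paragraph. The experiment, step, and image identifiers are likewise invented.

from rdflib import Graph, Literal, Namespace, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
PPLAN = Namespace("http://purl.org/net/p-plan#")
REPR = Namespace("https://example.org/reproduce-me#")   # placeholder namespace
EX = Namespace("https://example.org/lab/")               # placeholder experiment data

g = Graph()
for prefix, ns in [("prov", PROV), ("p-plan", PPLAN), ("repr", REPR), ("ex", EX)]:
    g.bind(prefix, ns)

experiment = EX["experiment/42"]
g.add((experiment, RDF.type, PPLAN.Plan))                 # an experiment described as a plan
g.add((experiment, REPR.name, Literal("FRET measurement, trial 3")))

preparation = EX["step/solution-preparation"]             # non-computational step
analysis = EX["step/notebook-image-analysis"]             # computational step
for step in (preparation, analysis):
    g.add((step, RDF.type, PPLAN.Step))
    g.add((step, PPLAN.isStepOfPlan, experiment))
g.add((analysis, PPLAN.isPrecededBy, preparation))        # execution order of the two steps

image = EX["image/4711"]                                   # an output entity linked to its step
g.add((image, RDF.type, PROV.Entity))
g.add((image, PPLAN.isOutputVarOf, analysis))

print(g.serialize(format="turtle"))

Serialised as Turtle, a graph of this shape is the kind of linked, machine-readable record that the REPRODUCE-ME data model described next standardises across whole experiments.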
We develop the REPRODUCE-ME data model and ontology by extending existing web standards, PROV-O <ns0:ref type='bibr' target='#b37'>(Lebo et al., 2013)</ns0:ref> and P-Plan <ns0:ref type='bibr' target='#b22'>(Garijo &amp; Gil, 2012)</ns0:ref>. The REPRODUCE-ME Data Model is a generic data model for representing scientific experiments with their provenance information. An Experiment is considered as the central point of the REPRODUCE-ME data model. The model consists of eight components: Data, Agent, Activity, Plan, Step, Setting, Instrument, Material. We developed the ontology from the competency questions collected from the scientists in the requirement analysis phase <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. It is extended from PROV-O to represent all agents, activities, and entities involved in an experiment. It extends from P-Plan to represent the steps, the input and output variables, and the complete path taken from an input to an output of an experiment. Using the REPRODUCE-ME ontology, we can describe and semantically query the information for scientific experiments, input and output associated with an experiment, execution environmental attributes, experiment materials, steps, the execution order of steps and activities, agents involved and their roles, script/Jupyter Notebook executions, instruments, and their settings, etc. The ontology also consists of classes and properties, which describe the elements responsible for the image acquisition process in a microscope, from OME data model <ns0:ref type='bibr' target='#b50'>(OME, 2016)</ns0:ref>.</ns0:p><ns0:p>For the provenance management module, we use PostgreSQL database and Ontology-based Data Access (OBDA) approach <ns0:ref type='bibr' target='#b55'>(Poggi et al., 2008)</ns0:ref>. OBDA is an approach to access the various data sources using Manuscript to be reviewed Computer Science ontologies and mappings. The details of the structure of the underlying data sources are isolated from the users using a high-level global schema provided by ontologies. It helps to efficiently access a large amount of data from different sources and avoid replication of data that is already available in relational databases. It also provides many high-quality services to domain scientists without worrying about the underlying technologies. There are different widely-used applications involving large data sources that use OBDA <ns0:ref type='bibr' target='#b11'>(Br&#252;ggemann et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b33'>Kharlamov et al., 2017)</ns0:ref>. As the image metadata in OMERO and the experimental data in CAESAR are already stored in the PostgreSQL database, we investigated the effective ways to represent scientific experiments' provenance information without duplicating the data.</ns0:p><ns0:p>Based on this, we selected to use the OBDA approach to represent this data semantically and at the same time avoid replication of data. To access the various databases in CAESAR, we used Ontop <ns0:ref type='bibr' target='#b12'>(Calvanese et al., 2017)</ns0:ref> for OBDA. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop's native mapping language. We used federation for the OMERO and ReceptorLight databases provided by the rdf4j SPARQL Endpoint 1 . We used the Protege plugin provided by Ontop to write the mappings. 
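For readers unfamiliar with Ontop's native mapping syntax, the sketch below shows the general shape of a single mapping, written out from Python so it can be stored next to the ontology. The relational table and column names and the repr: terms are hypothetical and do not reflect the actual OMERO or ReceptorLight schema; the roughly 800 real mappings used in CAESAR are published separately.

# Illustrative Ontop mapping in its native .obda syntax; table, column, and property
# names are placeholders, not the real CAESAR/OMERO schema.
EXAMPLE_OBDA = """
[PrefixDeclaration]
repr:  https://example.org/reproduce-me#
xsd:   http://www.w3.org/2001/XMLSchema#

[MappingDeclaration] @collection [[
mappingId  experiment-basic
target     <https://example.org/lab/experiment/{id}> a repr:Experiment ; repr:name {name}^^xsd:string .
source     SELECT id, name FROM experiment
]]
"""

with open("caesar-example.obda", "w", encoding="utf-8") as f:
    f.write(EXAMPLE_OBDA)

Each mapping pairs a SQL query over the relational store (source) with a Turtle-like template (target); Ontop evaluates the collection of such mappings to expose the databases as the virtual RDF graph described next.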
A virtual RDF graph is created in OBDA using the ontology with the mappings <ns0:ref type='bibr' target='#b12'>(Calvanese et al., 2017)</ns0:ref>. SPARQL, the standard query language in the semantic web community, is used to query the provenance graph. We used the approach where the RDF graphs are kept virtual and queried only during query execution. The virtual approach helps avoid the materialization cost and provides the benefits of the matured relational database systems. However, there are some limitations in this approach using Ontop due to unsupported functions and data type.</ns0:p><ns0:p>To support computational reproducibility in CAESAR, we focused on providing the management of the provenance of the computational parts of an experiment. Computational notebooks, which have gained widespread attention because of their support for reproducible research, motivated us to look into this direction. These notebooks, which are extensively used and openly available, provide various features to run and share the code and results. We installed JupyterHub 2 in CAESAR to provide users access to computational environment and resources. JupyterHub provides a customizable and scalable way to serve Jupyter notebook for multiple users. In spite of the support for reproducible research, the provenance information of the execution of these notebooks was missing. To further support reproducibility in these notebooks, we developed ProvBook, an extension of Jupyter Notebooks, to capture the provenance information of their executions. We keep the design of ProvBook simple so that it can be used by researchers irrespective of their disciplines. We added the support to compare the differences in executions of the notebooks by different authors. We also extended the REPRODUCE-ME ontology to describe the computational experiments, including scripts and notebooks <ns0:ref type='bibr' target='#b62'>(Samuel &amp; K&#246;nig-Ries, 2018a)</ns0:ref>, which was missing in the current state of the art.</ns0:p><ns0:p>For the provenance visualization module, we focused on visualizing the complete path of an experiment by linking the non-computational and computational data and steps. Our two goals in designing the visualization component in CAESAR are providing users with a complete picture of an experiment and tracking its provenance. To do so, we integrated the REPRODUCE-ME ontology and ProvBook in CAESAR. We developed visualization modules to provide a complete story of an experiment starting from its design to publication. The visualization module, Project Dashboard, provides a complete overview of all the experiments conducted in a research project <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. We later developed the ProvTrack module to track the provenance of individual scientific experiments. The underlying technologies are transparent to scientists based on these approaches. We followed a Model-View-Controller architecture pattern for the development of CAESAR. We implement the webclient in the Django-Python framework and the Dashboard in ReactJs. Java is used to implement the new services extended by OMERO.server.</ns0:p><ns0:p>We use the D3 JavaScript library <ns0:ref type='bibr'>(D3.js, 2021)</ns0:ref> for the rendering of provenance graphs in the ProvTrack.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We present CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), an end-to-end semantic-based provenance management platform. 
It is extended from OMERO <ns0:ref type='bibr'>(Allan et al., 2012)</ns0:ref>. With the integration of the rich features provided by OMERO and our provenance-based extensions, CAESAR provides a platform to support scientists to describe, preserve and visualize their experimental data by linking the datasets with the experiments along with the execution environment and images. It provides extensive features, including the semantic representation of experimental data using the REPRODUCE-ME ontology and computational reproducibility support using ProvBook. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> depicts the architecture of CAESAR. We describe briefly the core modules of CAESAR required for the end-to-end provenance management.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Capture.</ns0:head><ns0:p>This module provides a metadata editor with a rich set of features allowing the scientists to easily record all the data of the non-computational steps performed in their experiments and the protocols, the materials, etc. This metadata editor is a form-based provenance capture system. It provides the feature to document the experimental metadata and interlink with other experiment databases. An Experiment form is the key part of this system that documents all the information about an experiment. This data includes the temporal and spatial properties, the experiment's research context, and other general settings used in the experiment. The materials and other resources used in an experiment are added as new templates and linked to the experiment. The templates are added as a service as well as a database table in CAESAR and are also available as API, thus allowing the remote clients to use them.</ns0:p><ns0:p>The user and group management provided by OMERO is adopted in CAESAR to manage users in groups and provides roles for these users. The restriction and modification of data are managed using the roles and permissions that are assigned to the users belonging to a group. The data is made available between the users in the same group in the same CAESAR server. Members of other groups can share the data based on the group's permission level. A user can be assigned any of the role of Administrator, Group Owner or Group Member. An Administrator controls all the settings of the groups. A Group Owner has the right to add other members to the group. A Group Member is a standard user in the group. There are also various permission levels in the system. Private is the most restrictive permission level, thus providing the least collaboration level with other groups in the system. A private group owner can view and control the members and data of the members within a group; While a private group member can Manuscript to be reviewed Computer Science group, the group owners can read and perform some annotations on members' data from other groups.</ns0:p><ns0:p>The group members don't have permission to annotate the datasets from other groups, thus providing only viewing and reading possibilities. The Read-annotate permission level provides a more collaborative option where the group owners and members can view the other groups' members as well as read and annotate their data. The Read-write permission level allows the group members to read and write data just like their own group.</ns0:p><ns0:p>CAESAR adopts this role and permission levels to control the access and modification of provenance information of experiments. 
A Principal Investigator (PI) can act as a group owner and students as group members in a private group. PIs can access students' stored data and decide which data can be used to share with other collaborative groups. A Read-only group can serve as a public repository where the original data and results for the publications are stored. A Read-annotate group is suitable for collaborative teams to work together for a publication or research. Every group member is trusted and given equal rights to view and access the data in a Read-write group, thus providing a very collaborative environment. This user and group management paves the way for collaboration among teams in research groups and institutes before the publication is made available online.</ns0:p><ns0:p>This module also allows scientists to interlink the dependencies of many materials, samples, input files, measurement files, images, standard operating procedures, and steps to an experiment. Users can also attach files, scripts, publications, or other resources to any steps in an experiment form. They can annotate these resources as an input to a step or intermediate result of a step. Another feature of this module is to help the scientists to reuse resources rather than do it from scratch, thus enabling a collaborative environment among teams and avoiding replicating the experimental data. This is possible by sharing the descriptions of the experiments, standard operating procedures, and materials with the team members within the research group. Scientists can reference the descriptions of the resources in their experiments.</ns0:p><ns0:p>Version management plays a key role in data provenance. In a collaborative environment, where the experimental data are shared among the team members, it is important to know the modifications made by the members of the system and track the history of the outcome of an experiment. This module provides version management of the experimental metadata by managing all the changes made in the description of the experimental data. The plugin allows users to view the version history and compare two different versions of an experiment description. The file management system in CAESAR stores all files and index them to the experiment, which is annotated as input data, measurement data, or other resources. The user can organize the input data, measurement data, or other resources in a hierarchical structure based on their experiments and measurements using this plugin.</ns0:p><ns0:p>CAESAR also provides a database of Standard Operating Procedures. These procedures in life-sciences provide a set of step-by-step instructions to carry out a complex routine. In this database, the users can store the protocols, procedures, scripts, or Jupyter Notebooks based on their experiments, which have multiple non-computational and computational steps. The users can also link these procedures to the step in an experiment where they were used. The plugin also provides users the facility to annotate the experimental data with terms from other ontologies like GO <ns0:ref type='bibr' target='#b5'>(Ashburner et al., 2000)</ns0:ref>, CMPO <ns0:ref type='bibr' target='#b30'>(Jupp et al., 2016)</ns0:ref>, etc. 
in addition to REPRODUCE-ME Ontology.</ns0:p><ns0:p>If a user is restricted to make modifications to other members' data due to permission level, the plugin</ns0:p></ns0:div> <ns0:div><ns0:p>The database model and its schema consist of important classes which are based on the REPRODUCE-ME Data Model <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. For the data management of images, the main classes include Project, Dataset, Folder, Plate, Screen, Experiment, Experimenter, ExperimenterGroup, Instrument, Image, StructuredAnnotations, and ROI. Each class provides a rich set of features, including how they are used in an experiment. The Experiment, which is a subclass of Plan <ns0:ref type='bibr' target='#b22'>(Garijo &amp; Gil, 2012)</ns0:ref>, links all the provenance information of a scientific experiment together. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop. The provenance information of computational notebooks is also semantically represented and is combined with other experimental metadata, thus providing the context of the results. This helps the scientists and machines to understand the experiments along with their context. There are around 800 mappings to create the virtual RDF graph. All the mappings are publicly available. We refer the readers to the publication for the complete documentation of the REPRODUCE-ME ontology <ns0:ref type='bibr' target='#b59'>(Samuel, 2019b)</ns0:ref>; the database schema is available in the Supplementary file.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Reproducibility</ns0:head><ns0:p>The introduction of computational notebooks, which allow scientists to share the code along with the documentation, is a step towards computational reproducibility. Scientists widely use Jupyter Notebooks to perform several tasks, including image processing and analysis. As the experimental data and images are contained in CAESAR itself, another requirement is to provide a computational environment for scientists to include the scripts that analyze the data stored in the platform. To create a collaborative research environment for the scientists working with images and Jupyter Notebooks, JupyterHub is installed and integrated with CAESAR. This allows the scientists to directly access the images and datasets stored in CAESAR using the API and perform data analysis or processing on them using Jupyter Notebooks. The new images and datasets created in the Jupyter Notebooks can then be uploaded and linked to the original experiments in CAESAR using the APIs. To capture the provenance traces of the computational steps in CAESAR, we introduce ProvBook, an extension of Jupyter notebooks to provide provenance support <ns0:ref type='bibr' target='#b63'>(Samuel &amp; K&#246;nig-Ries, 2018b)</ns0:ref>. It is an easy-to-use framework for scientists and developers to efficiently capture, compare, and visualize the provenance data of different executions of a notebook over time. To capture the provenance of computational steps and support computational reproducibility, ProvBook is installed in JupyterHub and integrated with CAESAR. We briefly describe the modules provided by ProvBook.</ns0:p><ns0:p>Capture, Management, and Representation. This module captures and stores the provenance of the execution of Jupyter Notebook cells over the course of time.
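As a brief aside before the capture module is described in detail: the JupyterHub integration mentioned above means a notebook can pull its input data straight out of the platform. The sketch below uses the OMERO Python bindings (omero-py) with placeholder host, credentials, and image id; it illustrates the access path only and is not code shipped with CAESAR.

from omero.gateway import BlitzGateway   # omero-py, the OMERO Python bindings

# Placeholder connection details for the OMERO server underneath CAESAR.
conn = BlitzGateway("username", "password", host="caesar.example.org", port=4064)
conn.connect()

image = conn.getObject("Image", 4711)     # placeholder image id
print(image.getName(), image.getSizeX(), image.getSizeY())

# Load one plane as a numpy array for analysis inside the notebook (z, c, t indices).
plane = image.getPrimaryPixels().getPlane(0, 0, 0)

conn.close()

Whatever analysis the notebook then performs on data loaded this way is what the capture module records cell by cell, as described next.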
<ns0:p>Capture, Management, and Representation. This module captures and stores the provenance of the execution of Jupyter Notebook cells over the course of time. A Jupyter Notebook, stored in the JSON file format, is a dictionary with the following keys: metadata and cells. The metadata is a dictionary that contains information about the notebook, its cells, and outputs. The cells entry contains information on all cells, including the source, the type of the cell, and its metadata. As Jupyter Notebooks allow the addition of custom metadata to their content, the provenance information captured by ProvBook is added to the metadata of each cell of the notebook in JSON format. ProvBook captures the provenance information, including the start and end time of each execution, the total time it took to run the code cell, the source code, and the output obtained during that particular execution. The execution time of a computational task is added as part of the provenance metadata in a notebook since it is important for checking the performance of the task. The start and end times also act as an indicator of the execution order of the cells. Users can execute cells in any order, so adding the start and end time helps them check when a particular cell was last executed. The users can change the parameters and source code in each cell until they arrive at their expected result. This helps the user to track the history of all the executions to see which parameters were changed and how the results were derived.</ns0:p><ns0:p>ProvBook also provides a module that converts the computational notebooks along with the provenance information of their executions and execution environment attributes into RDF. The REPRODUCE-ME ontology represents this provenance information. ProvBook allows the user to export the notebook in RDF as a Turtle file either from the user interface of the notebook or using the command line. The users can share a notebook and its provenance in RDF and convert it back to a notebook. The reproducibility service provided by ProvBook converts the provenance graph back to a computational notebook along with its provenance. The Jupyter Notebooks and the provenance information captured by ProvBook in RDF are then linked to the provenance of the experimental metadata in CAESAR. Comparison. To reproduce an experiment by an agent different from the original one and to confirm the original experimenter's results, it is necessary to have the provenance information of its input and steps along with the original results. ProvBook provides a feature that helps scientists compare the results of different executions of a Jupyter Notebook performed by the same or different agents. ProvBook provides a provenance difference module to compare the different executions of each cell of a notebook, thus helping the users either to (1) repeat and confirm/refute their own results or (2) reproduce and confirm/refute others' results. ProvBook uses the start times of the executions to distinguish between two executions. We provide a dropdown menu to select two executions based on their starting time.</ns0:p><ns0:p>After the two executions are selected by the user, the differences in the input and the output of these executions are shown side by side. The users can select their own execution and compare the results with the original experimenter's execution of the Jupyter Notebook. Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> shows the differences between the source and output of two different code cell executions. ProvBook highlights any differences in the source or output for the user to distinguish the change.
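Before turning to how the difference module is implemented, the following minimal sketch illustrates the general idea of storing such execution provenance in a cell's metadata using the standard nbformat library. The metadata key 'provenance' and the record fields are hypothetical placeholders for illustration; ProvBook's actual metadata layout may differ.

import datetime
import nbformat

# Load an existing notebook (placeholder file name).
nb = nbformat.read("analysis.ipynb", as_version=4)

for cell in nb.cells:
    if cell.cell_type != "code":
        continue
    # Jupyter allows custom metadata per cell, so a record of one execution
    # can be appended without breaking the notebook format.
    record = {
        "start_time": datetime.datetime.now().isoformat(),  # when the run started
        "end_time": datetime.datetime.now().isoformat(),    # when the run finished
        "source": cell.source,                               # code as it was executed
        "outputs": list(cell.get("outputs", [])),            # captured outputs
    }
    cell.metadata.setdefault("provenance", []).append(record)

# Write the notebook back; the provenance history travels with the .ipynb file.
nbformat.write(nb, "analysis.ipynb")

In ProvBook, records of this kind are written automatically after each cell execution, which is what makes the comparison of different runs described above possible.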
The provenance difference module is developed by extending the nbdime (Project Jupyter, 2021) library from the Project Jupyter. The nbdime tools provide the ability to compare notebooks and also a three-way merge of notebooks with auto-conflict resolution.</ns0:p><ns0:p>ProvBook calls the API from the nbdime to see the difference between the provenance of two executions of a notebook code cell. Using nbdime, ProvBook provides diffing of notebooks based on the content and renders image-diffs correctly. This module in CAESAR helps scientists to compare the results of the executions of different users.</ns0:p><ns0:p>Visualization. Efficient visualization is important to make meaningful interpretations of the data. The provenance information of each cell captured by ProvBook is visualized below every input cell. In this provenance area, a slider is provided so that the user can drag to view the history of the different executions of the cell. This area also provides the user with the ability to track the history and compare the current results with several previous results and see the difference that occurred. The user can visualize the provenance information of a selected cell or all cells by clicking on the respective buttons in the toolbar. The user can also clear the provenance information of a selected cell or all cells if required. This solution tries to address the problem of having larger provenance information than the original notebook data.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Visualization.</ns0:head><ns0:p>We have described above the visualization of the provenance of cells in Jupyter Notebooks. In this section, we look at the visualization of overall scientific experiments. We present two modules for the visualization of the provenance information of scientific experiments captured, stored, and semantically represented in CAESAR: Dashboard and ProvTrack. The experimental data provided by scientists through the metadata editor, the metadata extracted from the images and instruments, and the details of the computational steps collected from ProvBook together are integrated, linked, and represented using the REPRODUCE-ME ontology. All this provenance data, stored as linked data, form the basis for the complete path of a scientific experiment and is visualized in CAESAR. Dashboard. This visualization module aggregates all the data related to an experiment and project in a single place. We provide users with two views: one at the project level and another at the experiment level. The Dashboard at the project level provides a unified view of a research project containing multiple experiments by different agents. When a user selects a project, the Project Dashboard is activated, while the Experiment Dashboard is activated when a dataset is selected. The Dashboard is composed of several panels. Each panel provides a detailed view of a particular component of an experiment. The data inside a panel is displayed in tables. The panels are arranged in a way that they provide the story of an experiment.</ns0:p><ns0:p>A detailed description of each panel is provided <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Users can also search and filter the data based on keywords inside a table in the panel.</ns0:p><ns0:p>ProvTrack. This visualization module provides users with an interactive way to track the provenance of experimental results. The provenance of experiments is provided using a node-link representation, thus, helping the user to backtrack the results. 
Users can drill down into each node to get more information and attributes. This module, which was developed independently, is integrated into CAESAR. The provenance graph is based on the data model represented by the REPRODUCE-ME ontology. We query the SPARQL endpoint to get the complete path of a scientific experiment. We split this into several SPARQL queries and combine their results to display the complete path and to improve the system's performance. Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> shows the visualization of the provenance of an experiment using ProvTrack. CAESAR allows users to select an experiment to track its provenance.</ns0:p><ns0:p>The provenance graph is visualized in the right panel when the user selects an experiment. Each node in the provenance graph is colored based on its type, such as prov:Entity, prov:Agent, prov:Activity, p-plan:Step, p-plan:Plan and p-plan:Variable. The user can expand the provenance graph by opening up all nodes using the Expand All button next to the help menu. Using the Collapse All button, users can collapse the provenance graph to one node, which is the Experiment. ProvTrack shows the property relationship between two nodes when a user hovers over an edge. The help menu provides the user with the meaning of each color in the graph. The path from the user-selected node to the first node (Experiment) is highlighted to show the relationship of each node with the Experiment and also to show where the node is in the provenance graph.</ns0:p><ns0:p>ProvTrack also provides an Infobox for the selected node of an experiment. It displays the additional information about the selected node as key-value pairs. The keys in the Infobox are either the object or data properties of the REPRODUCE-ME ontology that are associated with the node that the user has clicked.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>We evaluate different aspects of our work based on our main research question: is it possible to capture, represent, manage, and visualize the complete path taken by a scientist in an experiment, including the computational and non-computational steps, to derive a path towards experimental results? As the main research question has a broad scope and it is challenging to evaluate every part within the limited time and resources, we break the main research question into smaller parts. We divide the evaluation into three parts based on the smaller questions and discuss their results separately. In the first part, we address the question of capturing and representing the complete path of a scientific experiment, which includes both computational and non-computational steps. For this, we evaluate the role of CAESAR in capturing the non-computational data and the role of ProvBook in capturing the computational data. The role of the REPRODUCE-ME ontology in semantically representing the complete path of a scientific experiment is evaluated over the knowledge base in CAESAR through a competency question-based evaluation. In the second part, we address the question of supporting reproducibility by capturing and representing the provenance of computational experiments. For this, we address the role of ProvBook in terms of reproducibility, performance, and scalability. We focus on evaluating ProvBook both as a stand-alone tool and integrated with CAESAR. In the third part, we address the question of representing and visualizing the complete path of scientific experiments to the users of CAESAR.
For this, we performed an evaluation by conducting a user-based study to get a general impression of the tool and used this as feedback to improve it. Scientists from within and outside the project were involved in all three parts of our evaluation as the system's users as well as participants. <ns0:ref type='bibr' target='#b9'>Brank et al. (2005)</ns0:ref> point out different methods of evaluating ontologies. In application-based evaluation, the ontology under evaluation is used in an application/system to produce good results on a given task. Answering competency questions over a knowledge base is one of the approaches to testing ontologies <ns0:ref type='bibr' target='#b47'>(Noy et al., 2001)</ns0:ref>. Here, we applied the ontology in an application system and answered the competency questions over a knowledge base. Hence, we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase. We used the REPRODUCE-ME ontology to answer the competency questions using the scientific experiments documented in CAESAR for its evaluation. We did the evaluation on a server (installed with CentOS Linux 7 and with x86-64 architecture) hosted at the University Hospital Jena. Scientists from the B1 and A4 projects of ReceptorLight documented experiments using confocal patch-clamp fluorometry (cPCF), F&#246;rster Resonance Energy Transfer (FRET), PhotoActivated Localization Microscopy (PALM) and direct Stochastic Optical Reconstruction Microscopy (dSTORM) as part of their daily work. In 23 projects, a total of 44 experiments were recorded and uploaded with 373 microscopy images generated from different instruments with various settings using either the desktop client or the web client of CAESAR (Accessed 21 April 2019). We also used the Image Data Repository (IDR) datasets (IDR, 2021) with around 35 imaging experiments <ns0:ref type='bibr' target='#b72'>(Williams et al., 2017)</ns0:ref> for our evaluation to ensure that the REPRODUCE-ME ontology can be used to describe other types of experiments as well.</ns0:p></ns0:div> <ns0:div><ns0:head>Competency question-based evaluation</ns0:head><ns0:p>The description of the scientific experiments, along with the steps, experiment materials, settings, and standard operating procedures, using the REPRODUCE-ME ontology is available in CAESAR to its users and the evaluation participants. We created a knowledge base of different types of experiments from these two sources. The competency questions, which were translated into SPARQL queries by computer scientists, were executed on our knowledge base, consisting of linked data in CAESAR. The domain experts evaluated the correctness of the answers to these competency questions. We present here one competency question with the corresponding SPARQL query and part of the results obtained by running it against the knowledge base. The result of each query is a long list of values; hence, we show only the first few rows. This query is responsible for getting the complete path of an experiment.</ns0:p><ns0:p>Listing 1. What is the complete path taken by a scientist for an experiment?</ns0:p><ns0:p>SELECT DISTINCT * WHERE {
  ?experiment a repr:Experiment ;
      prov:wasAttributedTo ?agent ;
      repr:hasDataset ?dataset ;
      prov:generatedAtTime ?generatedAtTime .
  ?agent repr:hasRole ?role .
  ?dataset prov:hadMember ?image .
  ?instrument p-plan:correspondsToVariable ?image ;
      repr:hasPart ?instrumentpart .
  ?instrumentpart repr:hasSetting ?setting .
  ?plan p-plan:isSubPlanOfPlan ?experiment .
  ?variable p-plan:isVariableOfPlan ?plan .
  ?step p-plan:isStepOfPlan ?experiment .
  OPTIONAL { ?step p-plan:hasInputVar ?inputVar ;
      p-plan:hasOutputVar ?outputVar ;
      p-plan:isPrecededBy ?previousStep . }
}</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> shows part of the result for a particular experiment called 'Focused mitotic chromosome condensation screen using HeLa cells'. Here, we queried the experiment with its associated agents and their roles, the plans and steps involved, the input and output of each step, the order of steps, and the instruments and their settings. We see that all these elements are now linked to the computational and non-computational steps to describe the complete path. We can further expand this query by asking for additional information like the materials, publications, external resources, methods, etc., used in each step of an experiment. It is possible to query for all the elements mentioned in the REPRODUCE-ME Data Model.</ns0:p><ns0:p>The domain experts reviewed, manually compared, and evaluated the correctness of the results from the queries using the Dashboard. Since the domain experts do not possess knowledge of SPARQL, they did this verification process with the help of the Dashboard. This helped them get a complete view of the provenance of scientific experiments. In this verification process, the domain experts observed that the query returned null for certain experiments that did not provide complete data for some elements. So the computer scientists from the project tweaked the query to include the OPTIONAL keyword to get results from the query. Even after tweaking the query, the results from these competency questions were not complete. Missing data is seen especially in the non-computational part of an experiment rather than in its computational part. This shows that the metadata entered by the scientists in CAESAR is not complete and requires continuous manual annotation. Another thing that we noticed during the evaluation is that the results are spread across several rows in the table. In the Dashboard, when we show these results, the filter option provided in the table helps the user to search for particular columns. The domain experts manually compared the results of the SPARQL queries using Dashboard and ProvTrack and evaluated their correctness <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Each competency question addressed the different elements of the REPRODUCE-ME Data Model.
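To give an impression of how such competency questions can be executed programmatically, the following minimal sketch runs a reduced version of a query like Listing 1 against a SPARQL endpoint using the SPARQLWrapper library. The endpoint URL is a placeholder, and the prefix declarations for repr: and p-plan: are assumptions; a concrete CAESAR installation defines its own endpoint and namespace IRIs.

from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint of a hypothetical CAESAR installation.
endpoint = SPARQLWrapper("https://caesar.example.org/sparql")

endpoint.setQuery("""
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX repr:   <https://w3id.org/reproduceme#>

SELECT DISTINCT ?experiment ?agent ?step WHERE {
  ?experiment a repr:Experiment ;
              prov:wasAttributedTo ?agent .
  ?step p-plan:isStepOfPlan ?experiment .
}
LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["experiment"]["value"], row["agent"]["value"], row["step"]["value"])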
The competency questions, the RDF data used for the evaluation, the SPARQL queries, and their results are publicly available <ns0:ref type='bibr' target='#b60'>(Samuel, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Data and user-based evaluation of ProvBook</ns0:head><ns0:p>In this section, we address how ProvBook supports computational reproducibility as a stand-alone tool and also integrated with CAESAR. We first evaluate the role of ProvBook in supporting computational reproducibility using Jupyter Notebooks. We did the evaluation taking into consideration the following use cases and factors:</ns0:p><ns0:p>1. Repeatability: The computational experiment is repeated in the same environment by the same experimenter. We performed this to confirm the final results from the previous executions.</ns0:p><ns0:p>2. Reproducibility: The computational experiment is reproduced in a different environment by a different experimenter. In this case, we compare the results of the Jupyter Notebook in the original environment with the results from a different experimenter executed in a different environment.</ns0:p><ns0:p>3. The input, output, execution time, and execution order in two different executions of a notebook.</ns0:p><ns0:p>4. Provenance difference of the results of a notebook.</ns0:p><ns0:p>5. Performance of ProvBook with respect to time and space.</ns0:p><ns0:p>6. The environment attributes in the execution of a notebook.</ns0:p><ns0:p>7. The complete path taken by a computational experiment, i.e., the sequence of steps in the execution of a notebook with the input parameters and intermediate results in each step required to generate the final output.</ns0:p><ns0:p>We used an example Jupyter Notebook that applies the eigenface algorithm and an SVM to a face recognition task using scikit-learn 3 . This script provides a computational experiment that uses machine learning techniques to extract information from a publicly available image dataset. We used this example script to show the different use cases of our evaluation and how ProvBook handles different output formats like images, text, etc. These formats are also important for the users of CAESAR. We use Original Author to refer to the first author of the notebook and User 1 and User 2 to refer to the authors who used the original notebook to reproduce the results.</ns0:p><ns0:p>Concerning run time, the difference in the run time of each cell with and without ProvBook was negligible. Concerning space, the size of the Jupyter Notebook with the provenance information of several executions was larger than the original notebook. As stated in <ns0:ref type='bibr' target='#b15'>(Chapman et al., 2008)</ns0:ref>, the size of the provenance information can grow larger than the actual data. In the following scenario, we evaluated the semantic representation of the provenance of computational notebooks by integrating ProvBook with CAESAR. Listing 2 shows the SPARQL query for the complete path of a computational notebook with the input parameters and intermediate results in each step required to generate the final output. It also queries the sequence of steps in its execution. We can expand this query to get information on the experiment in CAESAR which uses a notebook. The results of the evaluation are available in the Supplementary file.</ns0:p><ns0:p>User-based evaluation of CAESAR. This evaluation addresses a smaller part of the main research question.
Here, we focus on the visualization of the complete path of scientific experiments to the users of CAESAR. We conducted a user-based study to get the general impression of the tool and used this as feedback to improve the tool. This study aimed to evaluate the usefulness of CAESAR and its different modules, particularly the visualization module. We invited seven participants for the survey, of which six participants responded to the questions. The evaluation participants were the scientists of the ReceptorLight project who use CAESAR in their daily work. In addition to them, other biology students, who closely work with microscopy images and are not part of the ReceptorLight project, participated in this evaluation. We provided an introduction of the tool to all the participants and provided a test system to explore all the features of CAESAR. The scientists from the ReceptorLight project were given training on CAESAR and its workflow on documenting experimental data. Apart from the internal meetings, we provided the training from <ns0:ref type='bibr'>2016-2018 (17.06.2016, 19.07.2016, 07.06.2017, 09.04.2018, and 16.06.2018)</ns0:ref>.</ns0:p><ns0:p>We asked scientists to upload their experimental data to CAESAR as part of this training. At the evaluation time, we provided the participants with the system with real-life scientific experiment data as mentioned in the competency question evaluation subsection. The participants were given the system to explore all the features of CAESAR. As our goal of this user survey was to get feedback from the daily users and new users and improve upon their feedback, we let the participants answer the relevant questions. As a result, the user survey was not anonymous, and none of the questions in the user survey was mandatory. However, only 1 participant who was not part of the ReceptorLight project did not answer all the questions. The questionnaire and the responses are available in the Supplementary file.</ns0:p><ns0:p>In the first section of the study, we asked how the features in CAESAR help improve their daily research work. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results. 83% of the participants either strongly agreed or agreed that it helps to visualize all the experimental data and results effectively, while 17% disagreed on that. In the next section, we asked about the perceived usefulness of CAESAR. 60% of the users consider CAESAR user-friendly, while 40% had a neutral response. 40% of the participants agreed that CAESAR is easy to learn to use, and 60% had a neutral response. The participants provided additional comments to this response that CAESAR offers many features, and they found it a little difficult to follow. However, all the participants strongly agreed or agreed that CAESAR is useful for scientific data management and provides a collaborative environment among teams.</ns0:p><ns0:p>In the last section, we evaluated each feature provided by CAESAR by focusing on the important visualization modules. Here, we showed a real-life scientific experiment with Dashboard and ProvTrack views. We asked the participant to explore the various information, including the different steps and materials used in the experiment. Based on their experience, we asked the participants about the likeability of the different modules. 
ProvTrack was strongly liked or liked by all the participants. For the Dashboard, 80% of them either strongly liked or liked it, while 20% had a neutral response. 60% of the users strongly liked or liked ProvBook, while the other 40% had a neutral response. The reason for the neutral response was that they were new to scripting. We also asked the participants to provide overall feedback on CAESAR along with its positive aspects and the things to improve. We obtained three responses to this question, which are available in the Supplementary File.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Provenance plays a key role in supporting the reproducibility of results, which is an important concern in data-intensive science. Through CAESAR, we aimed to provide a data management platform for capturing, semantically representing, comparing, and visualizing the provenance of scientific experiments, including their computational and non-computational aspects. CAESAR is used and deployed in the CRC ReceptorLight project, where scientists work together to understand the function of membrane receptors and develop high-end light microscopy techniques. In the competency question-based evaluation, we focused on answering the questions using the experimental provenance data provided by scientists from the research projects, which was then managed and semantically described in CAESAR. Answering the competency questions using SPARQL queries shows that some experiments documented in CAESAR had missing provenance data for some of the elements of the REPRODUCE-ME Data Model, such as time, settings, etc. We see that CAESAR requires continuous user involvement and interaction in documenting the non-computational parts of an experiment. Reproducing an experiment is currently not feasible unless every step in CAESAR is machine-controlled. In addition to that, the output of the query for finding the complete path of a scientific experiment results in many rows in the table. Therefore, the response time could exceed the normal query response time and result in a server error from the SPARQL endpoint in some cases where the experiment has various inputs and outputs with several executions. Currently, scientists from the life sciences do not have the knowledge of Semantic Web technologies and are not familiar with writing their own SPARQL queries. Hence, we did not perform any user study on writing SPARQL queries to answer competency questions. However, scientists must be able to see the answers to these competency questions and explore the complete path of a scientific experiment. To overcome this issue, we split the queries and combined their results in ProvTrack. The visualization modules, Dashboard and ProvTrack, which use SPARQL and linked data in the background, visualize the provenance graph of each scientific experiment. ProvTrack groups the entities, agents, activities, steps, and plans to help users visualize the complete path of an experiment. In the data and user-based evaluation, we see the role of ProvBook as a stand-alone tool to capture the provenance history of computational experiments described using Jupyter Notebooks.</ns0:p><ns0:p>We see that each item added in the provenance information in Jupyter Notebooks, like the input, output, and starting and ending time, helps users track the provenance of results even in different execution environments.
The Jupyter Notebooks shared along with the provenance information of their executions help users to compare the original intermediate and final results with the results from new trials executed in the same or a different environment. Through ProvBook, the intermediate and negative results and the input and the output from different trials are not lost. The execution environment attributes of the computational experiments, along with their results, help to understand their complete path. We also see that we could describe the relationship between the results, the execution environment, and the executions that generated the results of a computational experiment in an interoperable way using the REPRODUCE-ME ontology. The knowledge capture of computational experiments using notebooks and scripts is ongoing research, and many research questions are yet to be explored. ProvBook currently does not extract semantic information from the cells. This includes information like the libraries used, the variables and functions defined, the input parameters and output of a function, etc. In CAESAR, we currently link a whole cell as a step of a notebook which is linked to an experiment. Hence, the fine-grained provenance information of a computational experiment is currently missing and thus not linked to an experiment to get the complete path of a scientific experiment.</ns0:p><ns0:p>The user-based evaluation of CAESAR aimed to see how useful the users find CAESAR with respect to the features it provides. We targeted both regular users and users new to the system. As we had a small group of participants, we could not make general conclusions from the study. However, the study participants either agreed with or liked its features. The survey results in <ns0:ref type='bibr' target='#b65'>(Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref> had shown that newcomers face difficulty in finding, accessing, and reusing data in a team. We see an agreement among the participants that CAESAR helps preserve data for the newcomers to understand the ongoing work in the team. This understanding of the ongoing work in the team comes from the linking of experimental data and results. The results from the study show that among the two visualization modules, ProvTrack was preferred over Dashboard by scientists. Even though both serve different purposes (Dashboard for an overall view of the experiments conducted in a Project and ProvTrack for backtracking the results of one experiment), the users preferred the provenance graph to be visualized with detailed information on clicking. The survey shows that the visualization of the experimental data and results using ProvTrack, supported by the REPRODUCE-ME ontology, helps the scientists without requiring them to worry about the underlying technologies. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results. However, a visualization evaluation is required to properly test the Dashboard and ProvTrack views, which will help us to determine the usability of visualizing the complete path of a scientific experiment.</ns0:p><ns0:p>A limitation of our evaluation is the small number of participants. Hence, we cannot make any statistical conclusion on the system's usefulness. However, CAESAR is planned to be used and extended for another large research project, Microverse 4 , which will allow for a more scalable user evaluation. One part of the provenance capture module depends on the scientists to document their experimental data.</ns0:p><ns0:p>Even though the metadata from the images captures the execution environment and the devices' settings, the need for human annotations to the experimental datasets is significant.
Besides this limitation, the mappings for the ontology-based data access required some manual curation. This can become an issue when the database is extended to other experiment types.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, we presented CAESAR. It provides a collaborative framework for the management of scientific experiments, including the computational and non-computational steps. The provenance of the scientific experiments is captured and semantically represented using the REPRODUCE-ME ontology. ProvBook helps the user capture, represent, manage, visualize and compare the provenance of different executions of computational notebooks. CAESAR links the computational data and steps to the non-computational data and steps to represent the complete path of the experimental workflow.</ns0:p><ns0:p>The visualization modules of CAESAR allow users to view the complete path and backtrack the provenance of results. We applied our contributions together in the ReceptorLight project to support end-to-end provenance management from the beginning of an experiment to its end. There are several possibilities to extend and improve CAESAR. We expect this approach to be extended to different types of experiments in diverse scientific disciplines. Reproducibility of the non-computational parts of an experiment is our future line of work. We can reduce the query time for the SPARQL queries in the project dashboard and ProvTrack by taking several performance measures. CAESAR could be extended to serve as a public data repository providing DOIs for the experimental data and provenance information. This would help the scientific community to track the complete path of the provenance of the results described in scientific publications. Currently, CAESAR requires continuous user involvement and interaction, especially through the different non-computational steps of an experiment. The integration of persistent identifiers for physical samples and materials into scientific data management can lower the effort of user involvement. However, at this stage, reproducibility is not a one-button solution: reproducing an experiment is not feasible unless every step is machine-controlled.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of CAESAR. The data management platform consists of modules for provenance capture, representation, storage, comparison, and visualization. It also includes several additional services, including API access and a SPARQL endpoint.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>view and control only his/her data. The Read-only is an intermediate permission level. In addition to their 6/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>provides a feature called Proposal to allow users to propose changes or suggestions to the experiment. As a result, the experiment owner receives those suggestions as proposals.
The user can either accept the proposal and add it to the current experimental data or reject and delete the proposal. The plugin provides autocompletion of data to fasten the process of documentation. For example, based on the CAS number of the chemical provided by the user, the molecular weight, mass, structural formulas are fetched from the CAS registry and populated in the Chemical database. The plugin also provides additional data from the external servers for other materials like Protein, Plasmid, and Vector. The plugin also autofills the data about the authors and other publication details based on the DOI/PubMedId of the publications. It also provides a virtual keyboard to aid the users in documenting descriptions with special characters, chemical formulas, or symbols. Provenance Management and Representation. We use a PostgreSQL database in OMERO as well as in CAESAR. The OMERO database consists of 145 tables, and the ReceptorLight database consists of 35 tables in total. We use the REPRODUCE-ME ontology to model and describe the experiments and their provenance in CAESAR. The database model 7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The difference between the input and output of two different execution of a code cell in ProvBook. Deleted elements are marked in red, newly added or created elements are marked in green.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. ProvTrack: Tracking Provenance of Scientific Experiments</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>clicked. Links are provided to the keys to get their definitions from the web. It also displays the path of the selected node from the Experiment node on top of the left panel. The Search panel allows the users to 10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021) Manuscript to be reviewed Computer Science search for any entities in the graph defined by the REPRODUCE-ME data model. It provides a dropdown to search not only the nodes but also the edges. This comes handy when the provenance graph is very large.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A part of results for the competency question</ns0:figDesc><ns0:graphic coords='13,141.74,220.73,413.55,226.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:2:0:NEW 13 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>used the original notebook to reproduce results. We first saved the notebook without any outputs. Later, two different users executed this notebook in three different environments (Ubuntu 18.10 with Python 3, Ubuntu 18.04 with Python 2 and 3, Fedora with Python 3). Both users used ProvBook in their Jupyter Notebook environment. The first run of the eigenfaces Jupyter Notebook gave ModuleNotFoundError for User 1. User 1 attempted several runs to solve the issue. For User 1 installing the missing scikitlearn module still did not solve the issue. The problem occurred because of the version change of the scikit-learn module. The original Jupyter Notebook used 0.16 version of scikit-learn. 
While User 1 used version 0.20.0, User 2 used version 0.20.3. The classes and functions from the cross validation, grid search, and learning curve modules were moved into a new model selection module starting from scikit-learn 0.18. User 1 made several changes in the parts of the script which used these functions so that it worked with the newer versions of the scikit-learn module. We provided this changed notebook, along with the provenance information captured by ProvBook in User 1's notebook environment, to User 2. For User 2, only the first run gave a ModuleNotFoundError. User 2 resolved this issue by installing the scikit-learn module. User 2 did not have to change the scripts, as User 2 could see the provenance history of the executions from the original author, User 1, and his own execution. Using ProvBook, User 1 could track the changes and compare them with the original script, while User 2 could compare the changes with the executions from the original author, User 1, and his own execution. We also performed tests to see the input, output, and run time in different executions in different environments. The files in the Supplementary information provide the information on this evaluation by showing the difference in the execution time of the same cell in a notebook in different execution environments. ProvBook clearly shows the role of different execution environments in computational experiments. We evaluated the provenance capture and difference module in ProvBook with different output types, including images. We also evaluated the performance of ProvBook with respect to space and time. 3 https://scikit-learn.org/0.16/datasets/labeled_faces.html</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Listing 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Complete path for a computational notebook experiment
SELECT DISTINCT * WHERE {
  ?step p-plan:isStepOfPlan ?notebook .
  ?notebook a repr:Notebook .
  ?execution p-plan:correspondsToStep ?step ;
      repr:executionTime ?executionTime .
  ?step p-plan:hasInputVar ?inputVar ;
      p-plan:hasOutputVar ?outputVar ;
      p-plan:isPrecededBy ?previousStep .
}</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc></ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc></ns0:figDesc></ns0:figure> </ns0:body> "
"Dear Editor, Thank you for your message “Decision on your PeerJ Computer Science submission: 'A collaborative semantic-based provenance management platform for reproducibility' (#CS-2021:06:62007:1:1:REVIEW)” from 26 October, 2021. We thank you for your work and the reviewers for their thoughtful and thorough suggestions and comments, which have helped us to improve significantly the quality of the manuscript. Please find below our point-by-point responses in text with bold and italic formatting. The submission is updated accordingly and further includes the editorial changes requested by the PeerJ team. We look forward to hearing from you! Best Regards, Sheeba Samuel and Birgitta König-Ries Editor's Comments Reviewer 1 Reviewer 2 Reviewer 3 Editor's Decision Major Revisions Reviewer #3 pointed out some questions about the experimental design of the article. I suggest that authors consider all reviewers' comments, especially the comments of reviewer #3. Reviewer 2 has suggested specific references. You may add them if you believe they are especially relevant. However, I do not expect you to include these citations, and if you do not include them, this will not influence my decision. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] Thank you for your feedback. We have addressed all the reviewers’ comments and made the corresponding changes in the manuscript as well. Reviewer 1 (Joao Felipe Pimentel) Experimental design The research question ('it is possible to capture, represent, manage and visualize a complete path taken by a scientist in an experiment including the computation and non-computation steps to derive a path towards experimental results') is too broad, and the evaluation does not seem to be completely related to it. The evaluation had three parts, which considered distinct aspects: - Competency question-based evaluation: the competency questions evaluate the possibility of capturing and representing the complete path. Arguably, the system also had to manage it, but there is no visualization evaluation in this part. - Data-based evaluation: this evaluation was related to reproducibility, performance, and scalability, which are not part of the main research question. - User-based evaluation: this evaluation was related the representing and visualizing the complete path to users for a usability study. The paper could break the main research question into smaller questions for each of these parts and discuss the results and implications separately. Response: We have addressed the reviewer’s comment by breaking the research questions into smaller questions in the evaluation section. We agree that the main research question has a broad scope and we could not evaluate all the components of this question. We clearly divide the evaluation into three parts based on the smaller research questions. Additional comments The paper evolved well, and the authors addressed most of my comments appropriately. 
However, the paper still has a weakness with the experimental design. Response: Based on Reviewer 1 and 3 feedback, we have added more information on the experiment design. This includes adding information on the competency question-based evaluation, and evaluation of ProvBook sections. We have corrected all the passive voice sentences throughout the manuscript to reflect who and how things are done. In the discussion section, we also reflect the evaluations that need to be done in the future. We have also removed or rephrased the sentences which provide strong statements on the conclusions which do not follow from our evaluations. Please check our response to Reviewer 3 for the changes that we have made to the experimental design. Reviewer 2 Basic reporting The paper addresses the concerns I had. One minor addition might be a reference to a survey on provenance. Like Wellington Oliveira, Daniel De Oliveira, and Vanessa Braganholo. 2018. Provenance Analytics for Workflow-Based Computational Experiments: A Survey. <i>ACM Comput. Surv.</i> 51, 3, Article 53 (July 2018), 25 pages. DOI:https://doi.org/10.1145/3184900 or Herschel, M., Diestelkämper, R. & Ben Lahmar, H. A survey on provenance: What for? What form? What from?. The VLDB Journal 26, 881–906 (2017). https://doi.org/10.1007/s00778-017-0486-1 Response: We have updated the related work and added the reference pointed out by the reviewer in the introduction of tools for supporting computational workflows in line 90. Reviewer 3 Basic reporting The text is improved, and much of the passive voice has been addressed, but there are still a few places where the text could be improved. Specifically, in the evaluation: - 'The evaluation was done' Who did this evaluation? - 'Several runs were attempted to solve the issue'. Who attempted these runs? User 1? - 'This was resolved by installing the sci-kit-learn module.' Was this User 2, or did ProvBook do this? Also a few other spots in the text: - 'We, therefore, argue', actually like without commas or to swap order to 'Therefore, we argue' - 'we aim to create a conceptual model' -> 'we create a conceptual model' - 'We use D3 JavaScript library' -> 'the' - 'obtained 3 responses' => 'received three responses' (got doesn't always translate to obtained) Response: We have corrected the above sentences as pointed by the reviewers. In addition, we have gone through the complete manuscript to address the passive voice in sentences. Experimental design I didn't see Brank et al. define competency questions in the survey paper so was still a bit lost about how 'produce good results' (line 560-561) is actually evaluated. Reading further and the 2018 paper on 'The Story of an Experiment', it sounds like domain experts reviewed the results from the queries to verify them. There are some details about some queries not working and requiring OPTIONAL, but should we assume that the domain experts were finally happy with all competency queries? In any case, moving this domain expert verification statement up earlier would help clarify who is doing the evaluation. Response: We have rephrased the starting of the competency question-based evaluation section to reflect the approach from Brank et al. and Noy et al. Brank et al. point out different methods of evaluating ontologies. In one such approach, the ontology under evaluation is used in an application/system to produce good results on a given task. Answering the competency questions over a knowledge base is one of the approaches to testing ontologies (Noy et al. 
2001). Here, we applied the ontology in an application system and answered the competency questions over a knowledge base. We have added the explanation in Line 485, which was missing in the earlier manuscript. We have moved the domain expert verification statement up earlier in Line 549. We have added more explanation of how the domain experts reviewed the results from the competency questions in Lines 549-557. Continuous involvement with the domain experts helped tweak the SPARQL queries and improve the results. This is also added as part of the Discussion section. I still don't understand the 'Data and user-based evaluation of ProvBook in CAESAR' section. The general factors seem reasonable to care about, but I don't understand how this was *evaluated*. Also, the passive voice (see Basic Reporting) makes this harder to understand. Who did the evaluation? How was this evaluated? The description of how notebooks were used and updated is interesting, but the focus should be on ProvBook. The one sentence here is 'Using ProvBook, Users 1 and 2 could track the changes and compare the original script with the new one' How did this help them during the study? Was one user able to use ProvBook and the other not able to use it? That type of study would allow conclusions that ProvBook improved reproducibility because, for example, User 2 took less time to reproduce the original notebook. Response: We have changed the sentences addressing the passive voice in the whole manuscript, especially in the evaluation section, to make it clear who carried out each part of the evaluation and how it was done. In addition to that, we have added additional evaluations for ProvBook. We have evaluated ProvBook as a stand-alone tool by considering the different use cases and factors mentioned in Lines 568-578. User 2 was able to track and compare the changes, as User 2 had all the provenance history of the executions of the notebook captured by ProvBook from the original author and User 1. This helped User 2 to take less time to reproduce the original notebook. The different use case evaluations are provided as supplementary file images. We also provide some insights on the evaluation of ProvBook with respect to space and time. In addition to that, we have also added the semantic representation of the provenance of computational notebooks by integrating ProvBook with CAESAR. The result of the SPARQL query for getting the complete path of a computational experiment is provided in the supplementary file. In the User-based evaluation of CAESAR, I also was left wondering how the 'None of the questions were mandatory' piece is addressed. Were the participants required to use the tool for a certain amount of time? Are the 'questions' the competency questions or the survey questions? If the survey questions were not required to be answered, why? If the competency questions were not required, how did you verify whether the users used the tool for its intended purpose? Response: The 'questions' referred to in this section are the user-based survey questions. As our goal of this user survey was to get feedback from the daily users and new users of CAESAR and improve upon their feedback, we let the participants answer the questions that are relevant to them. As a result, the user survey was not anonymous, and none of the questions in the user survey was mandatory. Only 1 participant (a new system user) did not finish all the questions, but the rest of them answered every question.
The answering of competency questions and the involvement of domain experts in reviewing them are mentioned in Lines 485-564. Validity of the findings Unfortunately, I still feel that the conclusions are rather limited, and it is unclear how they follow from the evaluations as described. This is likely due to experimental design notes above, but the discussion section still states conclusions that are not in evidence: 'The results of the data and user-based evaluation of ProvBook in CAESAR show how it helps in supporting computational reproducibility.' I suspect that ProvBook in CAESAR does help support computational reproducibility, but I don't see how what is described in that evaluation subsection validates that. Response: We have rephrased and added details in both the discussion and conclusion sections to resonate with the evaluation. We have added more information on the experiment design. This includes adding information on the competency question-based evaluation, and evaluation of ProvBook sections. We have corrected all the passive voice sentences throughout the manuscript to reflect who and how things are done. We also point out the current limitations of the evaluation, which will be addressed in our future work. We have also removed or rephrased the sentences which provide strong statements on the conclusions which do not follow from our evaluations. We have added more evaluation for ProvBook to show how the provenance history of executions helps users capture and compare the changes made in a notebook. In the discussion section, we also note down the limitations of each evaluation part. Based on our current evaluation, we have added the statement in the discussion that ‘CAESAR requires continuous user involvement and interaction, especially through different non-computational steps of an experiment and at this stage, we cannot see reproducibility as a one-button solution in CAESAR where reproducing an experiment is not feasible unless every step is machine-controlled’. We have added these points in the Conclusion section. "
Here is a paper. Please give your review comments after reading it.
377
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Scientific data management plays a key role in the reproducibility of scientific results. To reproduce results, not only the results but also the data and steps of scientific experiments must be made findable, accessible, interoperable, and reusable. Tracking, managing, describing, and visualizing provenance helps in the understandability, reproducibility, and reuse of experiments for the scientific community. Current systems lack a link between the data, steps, and results from the computational and non-computational processes of an experiment. Such a link, however, is vital for the reproducibility of results. We present a novel solution for the end-to-end provenance management of scientific experiments. We provide a framework, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), which allows scientists to capture, manage, query and visualize the complete path of a scientific experiment consisting of computational and noncomputational data and steps in an interoperable way. CAESAR integrates the REPRODUCE-ME provenance model, extended from existing semantic web standards, to represent the whole picture of an experiment describing the path it took from its design to its result. ProvBook, an extension for Jupyter Notebooks, is developed and integrated into CAESAR to support computational reproducibility. We have applied and evaluated our contributions to a set of scientific experiments in microscopy research projects.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Reproducibility of results is vital in every field of science. The scientific community is interested in the way so that scientists can understand the data and results. Therefore, we need to start addressing this issue at the stage when the data is created. Thus, scientific research data management needs to start at the earlier stage of the research lifecycle to play a vital role in this context.</ns0:p><ns0:p>In this paper, we aim to provide end-to-end provenance capture and management of scientific experiments to support reproducibility. To define our aim, we define the main research question, which structure the remainder of the article: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? To address the research question, we create a conceptual model using semantic web technologies to describe a complete path of a scientific experiment. We design and develop a provenance-based semantic framework to populate this model, collect information about the experimental data and results along with the settings, runs, and execution environment and visualize them. The main contribution of this paper is the framework for the end-to-end provenance management of scientific experiments, called CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility) integrated with ProvBook <ns0:ref type='bibr' target='#b63'>(Samuel &amp; K&#246;nig-Ries, 2018b</ns0:ref>) and REPRODUCE-ME data model <ns0:ref type='bibr' target='#b57'>(Samuel, 2019a)</ns0:ref>. CAESAR supports computational reproducibility using our tool ProvBook, which is designed and developed to capture, store, compare and track the provenance of results of different executions of Jupyter Notebooks. 
The complete path of a scientific experiment interlinking the computational and non-computational data and steps is semantically represented using the REPRODUCE-ME data model.</ns0:p><ns0:p>In the following sections, we provide a detailed description of our findings. We start with an overview of the current state-of-the-art ('Related Work'). We describe the experimental methodology used in the development of CAESAR ('Materials &amp; Methods'). In the 'Results' section, we describe CAESAR and its main modules. We describe the evaluation strategies and results in the 'Evaluation' section. In the 'Discussion' section, we discuss the implications of our results and the limitations of our approach. We conclude the article by highlighting our major findings in the 'Conclusion' section.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Scientific data management plays a key role in knowledge discovery, data integration, and reuse. The preservation of digital objects has been studied for long in the digital preservation community. Some works give more importance to software and business process conservation <ns0:ref type='bibr' target='#b41'>(Mayer et al., 2012)</ns0:ref>, while other works focus on scientific workflow preservation <ns0:ref type='bibr' target='#b5'>(Belhajjame et al., 2015)</ns0:ref>. We focus our approach more on the data management solutions for scientific data, including images. <ns0:ref type='bibr' target='#b20'>Eliceiri et al. (2012)</ns0:ref> provide a list of biological imaging software tools. BisQue is an open-source, server-based software system that can store, display and analyze images <ns0:ref type='bibr' target='#b36'>(Kvilekval et al., 2010)</ns0:ref>. OMERO, developed by the Open Microscopy Environment <ns0:ref type='bibr'>(OME)</ns0:ref>, is another open-source data management platform for imaging metadata primarily for experimental biology <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. It has a plugin architecture with a rich set of features, including analyzing and modifying images. It supports over 140 image file formats using BIO-Formats <ns0:ref type='bibr' target='#b38'>(Linkert et al., 2010)</ns0:ref>. OMERO and BisQue are the two closest solutions that meet our requirements in the context of scientific data management. A general approach to document experimental metadata is provided by the CEDAR workbench <ns0:ref type='bibr' target='#b23'>(Gonc &#184;alves et al., 2017)</ns0:ref>. It is a metadata repository with a web-based tool that helps users to create metadata templates and fill in the metadata using those templates. However, these systems do not directly provide the features to fully capture, represent and visualize the complete path of a scientific experiment and support computational reproducibility and semantic integration.</ns0:p><ns0:p>Several tools have been developed to capture complete computational workflows to support reproducibility in the context of scientific workflows, scripts, and computational notebooks. <ns0:ref type='bibr' target='#b48'>Oliveira et al. (2018)</ns0:ref> survey the current state of the art approaches and tools that support provenance data analysis for workflow-based computational experiments. Scientific Workflows, which are a complex set of data processes and computations, are constructed with the help of a Scientific Workflow Management System (SWfMS) <ns0:ref type='bibr' target='#b19'>(Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b39'>Liu et al., 2015)</ns0:ref>. 
Different SWfMSs have been developed for different use cases and domains <ns0:ref type='bibr' target='#b47'>(Oinn et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b1'>Altintas et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b19'>Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b65'>Scheidegger et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b22'>Goecks et al., 2010)</ns0:ref>. Most SWfMSs provide provenance support by capturing the history of workflow executions.</ns0:p><ns0:p>These systems focus on the computational steps of an experiment and do not link the results to the experimental metadata. Despite the provenance modules present in these systems, there are currently many challenges in the context of reproducibility of scientific workflows <ns0:ref type='bibr' target='#b71'>(Zhao et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cohen-Boulakia et al., 2017)</ns0:ref>. Workflows created by different scientists are difficult for others to understand or re-run in a different environment, resulting in workflow decays <ns0:ref type='bibr' target='#b71'>(Zhao et al., 2012)</ns0:ref>. The lack of interoperability between scientific workflows and the steep learning curve required by scientists are some of the limitations according to the study of different SWfMSs <ns0:ref type='bibr' target='#b15'>(Cohen-Boulakia et al., 2017)</ns0:ref>. The Common Workflow Language <ns0:ref type='bibr' target='#b2'>(Amstutz et al., 2016)</ns0:ref> is an initiative to overcome the lack of interoperability of workflows.</ns0:p><ns0:p>Though there is a learning curve associated with adopting workflow languages, this ongoing work aims to make computational methods reproducible, portable, maintainable, and shareable.</ns0:p><ns0:p>Many tools have been developed to capture the provenance of results from the scripts at different levels of granularity <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b18'>Davison, 2012;</ns0:ref><ns0:ref type='bibr' target='#b45'>Murta et al., 2014;</ns0:ref><ns0:ref type='bibr'>McPhillips et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Burrito <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012</ns0:ref>) captures provenance at the operating system level and provides a user interface for documenting and annotating the provenance of non-computational processes. <ns0:ref type='bibr' target='#b11'>Carvalho et al. (2016)</ns0:ref> present an approach to convert scripts into reproducible Workflow Research Objects. However, it is a complex process that requires extensive involvement of scientists and curators with in-depth knowledge of the workflow and script programming in every step of the conversion. The lack of documentation of computational experiments along with their results and the limited ability to reuse parts of code are some of the issues hindering reproducibility in script-based environments. In recent years, computational notebooks have gained widespread adoption because they enable computational reproducibility and allow users to share code along with documentation. Jupyter Notebook <ns0:ref type='bibr' target='#b32'>(Kluyver et al., 2016)</ns0:ref>, which was formerly known as the IPython notebook, is a widely used computational notebook that provides an interactive environment supporting over 100 programming languages with millions of users around the world.
Even though it supports reproducible research, recent studies by <ns0:ref type='bibr' target='#b56'>Rule et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b52'>Pimentel et al. (2019)</ns0:ref> point out the need for provenance support in computational notebooks. Overwriting and re-execution of cells in any order can lead to the loss of results from previous trials. Some research works have attempted to track provenance from computational notebooks <ns0:ref type='bibr' target='#b26'>(Hoekstra, 2014;</ns0:ref><ns0:ref type='bibr' target='#b53'>Pimentel et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>Carvalho et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b33'>Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Kery &amp; Myers, 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Petricek et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Head et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b68'>Wenskovitch et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b67'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Macke et al., 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b53'>Pimentel et al. (2015)</ns0:ref> propose a mechanism to capture and analyze the provenance of python scripts inside IPython Notebooks using noWorkflow <ns0:ref type='bibr' target='#b51'>(Pimentel et al., 2017)</ns0:ref>. PROV-O-Matic <ns0:ref type='bibr' target='#b26'>(Hoekstra, 2014)</ns0:ref> is another extension for earlier versions of IPython Notebooks to save the provenance traces to a Linked Data file using PROV-O. In recent approaches, custom Jupyter kernels are developed to trace runtime user interactions and automatically manage the lineage of cell execution <ns0:ref type='bibr' target='#b33'>(Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b40'>Macke et al., 2021)</ns0:ref>. However, some of these approaches do not capture the execution history of computational notebooks, require changes to the code by the user, and are limited to Python scripts. In our approach, the provenance tracking feature is integrated within a notebook, so there is no need for users to change the scripts and learn a new tool. We also make available the provenance information in an interoperable way.</ns0:p><ns0:p>There exists a gap in the current state-of-the-art systems as they do not interlink the data, the steps, and the results from both the computational and non-computational processes of a scientific experiment. We bridge this gap by developing a framework to capture the provenance, provide semantic integration of experimental data and support computational reproducibility. Hence, it is important to extend the current tools and at the same time, reuse their rich features to support the reproducibility and understandability of scientific experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head><ns0:p>We describe here a summary of the insights from the interviews on the research practices of scientists. A scientific experiment consists of non-computational and computational data and steps. Computational tools like computers, software, scripts, etc., generate computational data. Activities in the laboratory like preparing solutions, setting up the experimental execution environment, manual interviews, observations, etc., are examples of non-computational activities. Measures taken to reproduce a non-computational step are different from those for a computational step.
The reproducibility of a non-computational step depends on various factors like the availability of experiment materials (e.g., animal cells or tissues) and instruments, the origin of the materials (e.g., distributor of the reagents), human and machine errors, etc.</ns0:p><ns0:p>Hence, non-computational steps need to be described in sufficient detail for their reproducibility <ns0:ref type='bibr' target='#b29'>(Kaiser, 2015)</ns0:ref>.</ns0:p><ns0:p>The conventional way of recording the experiments in hand-written lab notebooks is still in use in biology and medicine. This creates a problem when researchers leave projects and join new projects. To understand the previous work conducted in a research project, all the information regarding the project, including previously conducted experiments along with the trials, analysis, and results, must be available to the new researchers. This information is also required when scientists are working on big collaborative projects. In their daily research work, a lot of data is generated and consumed through the computational and non-computational steps of an experiment. Different entities like devices, procedures, protocols, settings, computational tools, and execution environment attributes are involved in experiments. Several people play various roles in different steps and processes of an experiment. The outputs of some non-computational steps are used as inputs to the computational steps. Hence, an experiment must not only be linked to its results but also to different entities, people, activities, steps, and resources. Therefore, the complete path towards the results of an experiment must be shared and described in an interoperable manner to avoid conflicts in experimental outputs.</ns0:p></ns0:div> <ns0:div><ns0:head>Design and Development</ns0:head><ns0:p>We aim to design a provenance-based semantic framework for the end-to-end management of scientific experiments, including the computational and non-computational steps. To achieve our aim, we focused on the following modules: provenance capture, representation, management, comparison, and visualization. We used an iterative and layered approach in the design and development of CAESAR. We first investigated the existing frameworks that capture and store the experimental metadata and the data for the provenance capture module. We further narrowed down our search to imaging-based data management systems due to the extensive use of images and instruments in the experimental workflows in the ReceptorLight project. Based on our requirements, we selected OMERO as the underlying framework for developing CAESAR. A very active development community ensuring a continued effort to improve the system, a faster release cycle, a well-documented API to write our own tools, and the ability to extend the web interface with plugins were additional benefits of OMERO. However, OMERO lacks provenance support for experimental data, including the computational processes, and does not semantically represent the experiments. We designed and developed CAESAR by extending OMERO to capture the provenance of scientific experiments.</ns0:p><ns0:p>For the provenance representation module, we use semantic web technologies to describe the heterogeneous experimental data as machine-readable and link them with other datasets on the web.
We develop the REPRODUCE-ME data model and ontology by extending the existing web standards PROV-O <ns0:ref type='bibr' target='#b37'>(Lebo et al., 2013)</ns0:ref> and P-Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>. The REPRODUCE-ME Data Model is a generic data model for representing scientific experiments with their provenance information. An Experiment is considered the central point of the REPRODUCE-ME data model. The model consists of eight components: Data, Agent, Activity, Plan, Step, Setting, Instrument, and Material. We developed the ontology from the competency questions collected from the scientists in the requirement analysis phase <ns0:ref type='bibr' target='#b57'>(Samuel, 2019a)</ns0:ref>. It is extended from PROV-O to represent all agents, activities, and entities involved in an experiment. It extends from P-Plan to represent the steps, the input and output variables, and the complete path taken from an input to an output of an experiment. Using the REPRODUCE-ME ontology, we can describe and semantically query the information for scientific experiments, the input and output associated with an experiment, execution environment attributes, experiment materials, steps, the execution order of steps and activities, agents involved and their roles, script/Jupyter Notebook executions, instruments, and their settings, etc. The ontology also consists of classes and properties, which describe the elements responsible for the image acquisition process in a microscope, from the OME data model <ns0:ref type='bibr' target='#b49'>(OME, 2016)</ns0:ref>.</ns0:p><ns0:p>For the provenance management module, we use a PostgreSQL database and the Ontology-based Data Access (OBDA) approach <ns0:ref type='bibr' target='#b54'>(Poggi et al., 2008)</ns0:ref>. OBDA is an approach to access the various data sources using ontologies and mappings. The details of the structure of the underlying data sources are isolated from the users using a high-level global schema provided by ontologies. It helps to efficiently access a large amount of data from different sources and avoid replication of data that is already available in relational databases. It also provides many high-quality services to domain scientists without requiring them to worry about the underlying technologies. There are different widely-used applications involving large data sources that use OBDA <ns0:ref type='bibr' target='#b9'>(Br&#252;ggemann et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b31'>Kharlamov et al., 2017)</ns0:ref>. As the image metadata in OMERO and the experimental data in CAESAR are already stored in the PostgreSQL database, we investigated effective ways to represent the provenance information of scientific experiments without duplicating the data.</ns0:p><ns0:p>Based on this, we chose the OBDA approach to represent this data semantically and at the same time avoid replication of data. To access the various databases in CAESAR, we used Ontop <ns0:ref type='bibr' target='#b10'>(Calvanese et al., 2017)</ns0:ref> for OBDA. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop's native mapping language. We used federation for the OMERO and ReceptorLight databases provided by the rdf4j SPARQL Endpoint 1 . We used the Protege plugin provided by Ontop to write the mappings.
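To give an impression of what these mappings look like, here is a minimal sketch in Ontop's native mapping syntax; the table and column names, the IRI template, and the repr: terms and namespace IRI are illustrative assumptions and not an excerpt of the roughly 800 mappings used in CAESAR.

[PrefixDeclaration]
repr:  https://w3id.org/reproduceme#
data:  http://example.org/caesar/data/
xsd:   http://www.w3.org/2001/XMLSchema#

[MappingDeclaration] @collection [[
mappingId  experiment-basic
target     data:experiment/{uid} a repr:Experiment ; repr:hasName {name}^^xsd:string .
source     SELECT uid, name FROM receptorlight_experiment
]]

A mapping of this kind tells Ontop how to expose the rows of a relational table as RDF triples, so the experimental data stays in PostgreSQL and is only translated into triples at query time.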
A virtual RDF graph is created in OBDA using the ontology with the mappings <ns0:ref type='bibr' target='#b10'>(Calvanese et al., 2017)</ns0:ref>. SPARQL, the standard query language in the semantic web community, is used to query the provenance graph. We used the approach where the RDF graphs are kept virtual and queried only during query execution. The virtual approach helps avoid the materialization cost and provides the benefits of the matured relational database systems. However, there are some limitations in this approach using Ontop due to unsupported functions and data type.</ns0:p><ns0:p>To support computational reproducibility in CAESAR, we focused on providing the management of the provenance of the computational parts of an experiment. Computational notebooks, which have gained widespread attention because of their support for reproducible research, motivated us to look into this direction. These notebooks, which are extensively used and openly available, provide various features to run and share the code and results. We installed JupyterHub 2 in CAESAR to provide users access to computational environment and resources. JupyterHub provides a customizable and scalable way to serve Jupyter notebook for multiple users. In spite of the support for reproducible research, the provenance information of the execution of these notebooks was missing. To further support reproducibility in these notebooks, we developed ProvBook, an extension of Jupyter Notebooks, to capture the provenance information of their executions. We keep the design of ProvBook simple so that it can be used by researchers irrespective of their disciplines. We added the support to compare the differences in executions of the notebooks by different authors. We also extended the REPRODUCE-ME ontology to describe the computational experiments, including scripts and notebooks <ns0:ref type='bibr' target='#b62'>(Samuel &amp; K&#246;nig-Ries, 2018a)</ns0:ref>, which was missing in the current state of the art.</ns0:p><ns0:p>For the provenance visualization module, we focused on visualizing the complete path of an experiment by linking the non-computational and computational data and steps. Our two goals in designing the visualization component in CAESAR are providing users with a complete picture of an experiment and tracking its provenance. To do so, we integrated the REPRODUCE-ME ontology and ProvBook in CAESAR. We developed visualization modules to provide a complete story of an experiment starting from its design to publication. The visualization module, Project Dashboard, provides a complete overview of all the experiments conducted in a research project <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. We later developed the ProvTrack module to track the provenance of individual scientific experiments. The underlying technologies are transparent to scientists based on these approaches. We followed a Model-View-Controller architecture pattern for the development of CAESAR. We implement the webclient in the Django-Python framework and the Dashboard in ReactJs. Java is used to implement the new services extended by OMERO.server.</ns0:p><ns0:p>We use the D3 JavaScript library <ns0:ref type='bibr'>(D3.js, 2021)</ns0:ref> for the rendering of provenance graphs in the ProvTrack.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We present CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), an end-to-end semantic-based provenance management platform. 
It is extended from OMERO <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. With the integration of the rich features provided by OMERO and our provenance-based extensions, CAESAR provides a platform to support scientists to describe, preserve and visualize their experimental data by linking the datasets with the experiments along with the execution environment and images. It provides extensive features, including the semantic representation of experimental data using the REPRODUCE-ME ontology and computational reproducibility support using ProvBook. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> depicts the architecture of CAESAR. We describe briefly the core modules of CAESAR required for the end-to-end provenance management.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Capture.</ns0:head><ns0:p>This module provides a metadata editor with a rich set of features allowing the scientists to easily record all the data of the non-computational steps performed in their experiments and the protocols, the materials, etc. This metadata editor is a form-based provenance capture system. It provides the feature to document the experimental metadata and interlink it with other experiment databases. An Experiment form is the key part of this system that documents all the information about an experiment. This data includes the temporal and spatial properties, the experiment's research context, and other general settings used in the experiment. The materials and other resources used in an experiment are added as new templates and linked to the experiment. The templates are added as a service as well as a database table in CAESAR and are also available as an API, thus allowing remote clients to use them.</ns0:p><ns0:p>The user and group management provided by OMERO is adopted in CAESAR to manage users in groups and to provide roles for these users. The restriction and modification of data are managed using the roles and permissions that are assigned to the users belonging to a group. The data is made available between the users in the same group on the same CAESAR server. Members of other groups can share the data based on the group's permission level. A user can be assigned any of the roles of Administrator, Group Owner or Group Member. An Administrator controls all the settings of the groups. A Group Owner has the right to add other members to the group. A Group Member is a standard user in the group. There are also various permission levels in the system. Private is the most restrictive permission level, thus providing the least collaboration level with other groups in the system. A private group owner can view and control the members and data of the members within a group, while a private group member can view and control only his/her data. Read-only is an intermediate permission level. In addition to their group, the group owners can read and perform some annotations on members' data from other groups.</ns0:p><ns0:p>The group members don't have permission to annotate the datasets from other groups, thus providing only viewing and reading possibilities. The Read-annotate permission level provides a more collaborative option where the group owners and members can view the other groups' members as well as read and annotate their data.
The Read-write permission level allows the group members to read and write data just like their own group.</ns0:p><ns0:p>CAESAR adopts this role and permission levels to control the access and modification of provenance information of experiments. A Principal Investigator (PI) can act as a group owner and students as group members in a private group. PIs can access students' stored data and decide which data can be used to share with other collaborative groups. A Read-only group can serve as a public repository where the original data and results for the publications are stored. A Read-annotate group is suitable for collaborative teams to work together for a publication or research. Every group member is trusted and given equal rights to view and access the data in a Read-write group, thus providing a very collaborative environment. This user and group management paves the way for collaboration among teams in research groups and institutes before the publication is made available online.</ns0:p><ns0:p>This module also allows scientists to interlink the dependencies of many materials, samples, input files, measurement files, images, standard operating procedures, and steps to an experiment. Users can also attach files, scripts, publications, or other resources to any steps in an experiment form. They can annotate these resources as an input to a step or intermediate result of a step. Another feature of this module is to help the scientists to reuse resources rather than do it from scratch, thus enabling a collaborative environment among teams and avoiding replicating the experimental data. This is possible by sharing the descriptions of the experiments, standard operating procedures, and materials with the team members within the research group. Scientists can reference the descriptions of the resources in their experiments.</ns0:p><ns0:p>Version management plays a key role in data provenance. In a collaborative environment, where the experimental data are shared among the team members, it is important to know the modifications made by the members of the system and track the history of the outcome of an experiment. This module provides version management of the experimental metadata by managing all the changes made in the description of the experimental data. The plugin allows users to view the version history and compare two different versions of an experiment description. The file management system in CAESAR stores all files and index them to the experiment, which is annotated as input data, measurement data, or other resources. The user can organize the input data, measurement data, or other resources in a hierarchical structure based on their experiments and measurements using this plugin.</ns0:p><ns0:p>CAESAR also provides a database of Standard Operating Procedures. These procedures in life-sciences provide a set of step-by-step instructions to carry out a complex routine. In this database, the users can store the protocols, procedures, scripts, or Jupyter Notebooks based on their experiments, which have multiple non-computational and computational steps. The users can also link these procedures to the step in an experiment where they were used. The plugin also provides users the facility to annotate the experimental data with terms from other ontologies like GO <ns0:ref type='bibr' target='#b3'>(Ashburner et al., 2000)</ns0:ref>, CMPO <ns0:ref type='bibr' target='#b28'>(Jupp et al., 2016)</ns0:ref>, etc. 
in addition to the REPRODUCE-ME ontology.</ns0:p><ns0:p>If a user is restricted from making modifications to other members' data due to the permission level, the plugin provides a feature called Proposal to allow users to propose changes or suggestions to the experiment. As a result, the experiment owner receives those suggestions as proposals. The user can either accept the proposal and add it to the current experimental data or reject and delete the proposal. The plugin provides autocompletion of data to speed up the process of documentation. For example, based on the CAS number of the chemical provided by the user, the molecular weight, mass, and structural formulas are fetched from the CAS registry and populated in the Chemical database. The plugin also provides additional data from the external servers for other materials like Protein, Plasmid, and Vector. The plugin also autofills the data about the authors and other publication details based on the DOI/PubMedId of the publications. It also provides a virtual keyboard to aid the users in documenting descriptions with special characters, chemical formulas, or symbols.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Management and Representation.</ns0:head><ns0:p>We use a PostgreSQL database in OMERO as well as in CAESAR. The OMERO database consists of 145 tables, and the ReceptorLight database consists of 35 tables in total. We use the REPRODUCE-ME ontology to model and describe the experiments and their provenance in CAESAR. The database model and its schema consist of important classes which are based on the REPRODUCE-ME Data Model <ns0:ref type='bibr' target='#b57'>(Samuel, 2019a)</ns0:ref>. For the data management of images, the main classes include Project, Dataset, Folder, Plate, Screen, Experiment, Experimenter, ExperimenterGroup, Instrument, Image, StructuredAnnotations, and ROI. Each class provides a rich set of features, including how they are used in an experiment. The Experiment, which is a subclass of Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>, links all the provenance information of a scientific experiment together. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop. The provenance information of computational notebooks is also semantically represented and is combined with other experimental metadata, thus providing the context of the results. This helps the scientists and machines to understand the experiments along with their context. There are around 800 mappings to create the virtual RDF graph. All the mappings are publicly available. We refer the readers to the publication for the complete documentation of the REPRODUCE-ME ontology <ns0:ref type='bibr' target='#b59'>(Samuel, 2019b)</ns0:ref>; the database schema is available in the Supplementary file.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Reproducibility</ns0:head><ns0:p>The introduction of computational notebooks, which allow scientists to share the code along with the documentation, is a step towards computational reproducibility. Scientists widely use Jupyter Notebooks to perform several tasks, including image processing and analysis. As the experimental data and images are contained in CAESAR itself, another requirement is to provide a computational environment for scientists to include the scripts that analyze the data stored in the platform. To create a collaborative research environment for the scientists working with images and Jupyter Notebooks, JupyterHub is installed and integrated with CAESAR. This allows the scientists to directly access the images and datasets stored in CAESAR using the API and perform data analysis or processing on them using Jupyter Notebooks. The new images and datasets created in the Jupyter Notebooks can then be uploaded to CAESAR and linked to the original experiments using the APIs. To capture the provenance traces of the computational steps in CAESAR, we introduce ProvBook, an extension of Jupyter notebooks to provide provenance support <ns0:ref type='bibr' target='#b63'>(Samuel &amp; K&#246;nig-Ries, 2018b)</ns0:ref>. It is an easy-to-use framework for scientists and developers to efficiently capture, compare, and visualize the provenance data of different executions of a notebook over time. To capture the provenance of computational steps and support computational reproducibility, ProvBook is installed in JupyterHub and integrated with CAESAR. We briefly describe the modules provided by ProvBook.</ns0:p><ns0:p>Capture, Management, and Representation. This module captures and stores the provenance of the execution of Jupyter Notebook cells over the course of time.
A Jupyter Notebook, stored as a JSON file format, is a dictionary with the following keys: metadata and cells. The metadata is a dictionary that contains information about the notebook, its cells, and outputs. The cell contains information on all cells, including the source, the type of the cell, and its metadata. As Jupyter notebooks allow the addition of custom metadata to its content, the provenance information captured by the ProvBook is added to the metadata of each cell of the notebook in the JSON format. ProvBook captures the provenance information, including the start and end time of each execution, the total time it took to run the code cell, the source code, and the output obtained during that particular execution. The execution time for a computational task was added as part of the provenance metadata in a Notebook since it is important to check the performance of the task. The start and end times also act as an indicator of the execution order of the cells. Users can execute cells in any order, so adding the start and end time helps them check when a particular cell was last executed. The users can change the parameters and source code in each cell until they arrive at their expected result. This helps the user to track the history of all the executions to see which parameters were changed and how the results were derived.</ns0:p><ns0:p>ProvBook also provides a module that converts the computational notebooks along with the provenance information of their executions and execution environment attributes into RDF. The REPRODUCE-ME ontology represents this provenance information. ProvBook allows the user to export the notebook in RDF as a turtle file either from the user interface of the notebook or using the command line. The users can share a notebook and its provenance in RDF and convert it back to a notebook. The reproducibility service provided by ProvBook converts the provenance graph back to a computational notebook along with its provenance. The Jupyter notebooks and the provenance information captured by ProvBook in RDF are then linked to the provenance of the experimental metadata in CAESAR. After the two executions are selected by the user, the difference in the input and the output of these executions are shown side by side. The users can select their own execution and compare the results with the original experimenter's execution of the Jupyter Notebook. Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> shows the differences between the source and output of two different code cell execution. ProvBook highlights any differences in the source or output for the user to distinguish the change. The provenance difference module is developed by extending the nbdime (Project Jupyter, 2021) library from the Project Jupyter. The nbdime tools provide the ability to compare notebooks and also a three-way merge of notebooks with auto-conflict resolution.</ns0:p><ns0:p>ProvBook calls the API from the nbdime to see the difference between the provenance of two executions of a notebook code cell. Using nbdime, ProvBook provides diffing of notebooks based on the content and renders image-diffs correctly. This module in CAESAR helps scientists to compare the results of the executions of different users.</ns0:p><ns0:p>Visualization. Efficient visualization is important to make meaningful interpretations of the data. The provenance information of each cell captured by ProvBook is visualized below every input cell. 
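As a rough illustration of how this per-cell provenance could be consumed programmatically, the following sketch reads a notebook with nbformat and prints the stored execution records; the metadata key 'provenance' and the record field names are assumptions made for illustration, not ProvBook's documented storage format.

# Minimal sketch: list the per-cell execution history assumed to be stored by ProvBook
# under a 'provenance' key in each code cell's metadata (key and field names assumed).
import nbformat

nb = nbformat.read("analysis.ipynb", as_version=4)  # hypothetical notebook file
for i, cell in enumerate(nb.cells):
    if cell.cell_type != "code":
        continue
    runs = cell.get("metadata", {}).get("provenance", [])
    print(f"cell {i}: {len(runs)} recorded execution(s)")
    for run in runs:
        # assumed fields: start time, end time, and total run time of one execution
        print("   ", run.get("start_time"), "->", run.get("end_time"),
              "| elapsed:", run.get("execution_time"))

Such a script only inspects the notebook file itself; the records it lists are the same kind of information that ProvBook renders below each input cell and exports to RDF.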
In this provenance area, a slider is provided so that the user can drag to view the history of the different executions of the cell. This area also provides the user with the ability to track the history and compare the current results with several previous results and see the difference that occurred. The user can visualize the provenance information of a selected cell or all cells by clicking on the respective buttons in the toolbar. The user can also clear the provenance information of a selected cell or all cells if required. This solution tries to address the problem of having larger provenance information than the original notebook data.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Visualization.</ns0:head><ns0:p>We have described above the visualization of the provenance of cells in Jupyter Notebooks. In this section, we look at the visualization of overall scientific experiments. We present two modules for the visualization of the provenance information of scientific experiments captured, stored, and semantically represented in CAESAR: Dashboard and ProvTrack. The experimental data provided by scientists through the metadata editor, the metadata extracted from the images and instruments, and the details of the computational steps collected from ProvBook together are integrated, linked, and represented using the REPRODUCE-ME ontology. All this provenance data, stored as linked data, form the basis for the complete path of a scientific experiment and is visualized in CAESAR. Dashboard. This visualization module aggregates all the data related to an experiment and project in a single place. We provide users with two views: one at the project level and another at the experiment level. The Dashboard at the project level provides a unified view of a research project containing multiple experiments by different agents. When a user selects a project, the Project Dashboard is activated, while the Experiment Dashboard is activated when a dataset is selected. The Dashboard is composed of several panels. Each panel provides a detailed view of a particular component of an experiment. The data inside a panel is displayed in tables. The panels are arranged in a way that they provide the story of an experiment.</ns0:p><ns0:p>A detailed description of each panel is provided <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Users can also search and filter the data based on keywords inside a table in the panel.</ns0:p><ns0:p>ProvTrack. This visualization module provides users with an interactive way to track the provenance of experimental results. The provenance of experiments is provided using a node-link representation, thus, helping the user to backtrack the results. Users can drill-down each node to get more information and attributes. This module which is developed independently, is integrated into CAESAR. The provenance graph is based on the data model represented by REPRODUCE-ME ontology. We query the SPARQL endpoint to get the complete path of a scientific experiment. We make several SPARQL queries, and the results are combined to display this complete path and increase the system's performance. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the visualization of the provenance of an experiment using ProvTrack. CAESAR allows users to select an experiment to track its provenance.</ns0:p><ns0:p>The provenance graph is visualized in the right panel when the user selects an experiment. 
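Under the hood, a client module can issue these split queries against the SPARQL endpoint and merge the bindings before rendering the graph; the following is a minimal sketch of that pattern, where the endpoint URL and the sub-queries are placeholders rather than ProvTrack's actual queries.

# Minimal sketch of splitting provenance retrieval into several SPARQL queries
# and merging the bindings; endpoint URL and queries are placeholders.
import itertools
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8080/rdf4j-server/repositories/caesar"  # assumed URL

def run(query):
    client = SPARQLWrapper(ENDPOINT)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

sub_queries = [
    "PREFIX p-plan: <http://purl.org/net/p-plan#> SELECT ?step WHERE { ?step a p-plan:Step } LIMIT 100",
    "PREFIX prov: <http://www.w3.org/ns/prov#> SELECT ?agent WHERE { ?agent a prov:Agent } LIMIT 100",
]
bindings = list(itertools.chain.from_iterable(run(q) for q in sub_queries))
print(len(bindings), "bindings merged for the provenance graph")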
Each node in the provenance graph is colored based on its type, such as prov:Entity, prov:Agent, prov:Activity, p-plan:Step, p-plan:Plan and p-plan:Variable. The user can expand the provenance graph by opening up all nodes using the Expand All button next to the help menu. Using the Collapse All button, users can collapse the provenance graph to one node, which is the Experiment. ProvTrack shows the property relationship between two nodes when a user hovers on an edge. The help menu provides the user with the meaning of each color in the graph. The path from the user-selected node to the first node (Experiment) is highlighted to show the relationship of each node with the Experiment and also to see where the node is in the provenance graph.</ns0:p><ns0:p>ProvTrack also provides an Infobox of the selected node of an experiment. It displays the additional information about the selected node as key-value pairs. The keys in the Infobox are either the object or data properties of the REPRODUCE-ME ontology that are associated with the node that the user has selected.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>We evaluate different aspects of our work based on our main research question: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? As the main research question has a broad scope and it is challenging to evaluate every part within the limited time and resources, we break the main research question into smaller parts. We divide the evaluation into three parts based on the smaller questions and discuss their results separately. In the first part, we address the question of capturing and representing the complete path of a scientific experiment which includes both computational and non-computational steps. For this, we use the non-computational data captured in CAESAR and the computational data captured using ProvBook. The role of the REPRODUCE-ME ontology in semantically representing the complete path of a scientific experiment is evaluated on the knowledge base in CAESAR using a competency question-based evaluation. In the second part, we address the question of supporting reproducibility by capturing and representing the provenance of computational experiments. For this, we address the role of ProvBook in terms of reproducibility, performance, and scalability. We focused on evaluating ProvBook as a stand-alone tool and also integrated with CAESAR. In the third part, we address the question of representing and visualizing the complete path of scientific experiments to the users of CAESAR. For this, we performed an evaluation by conducting a user-based study to get a general impression of the tool and use this as feedback to improve the tool. Scientists from within and outside the project were involved in all the three parts of our evaluation as the system's users as well as participants. <ns0:ref type='bibr' target='#b7'>Brank et al. (2005)</ns0:ref> point out different methods of evaluating ontologies. In application-based evaluation, the ontology under evaluation is used in an application/system to produce good results on a given task. Answering the competency questions over a knowledge base is one of the approaches to testing ontologies <ns0:ref type='bibr' target='#b46'>(Noy et al., 2001)</ns0:ref>.
Here, we applied the ontology in an application system and answered the competency questions over a knowledge base. Hence, we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase. We used the REPRODUCE-ME ontology to answer the competency questions using the scientific experiments documented in CAESAR for its evaluation. We also used imaging experiments <ns0:ref type='bibr' target='#b70'>(Williams et al., 2017)</ns0:ref> for our evaluation to ensure that the REPRODUCE-ME ontology can be used to describe other types of experiments as well. The description of the scientific experiments, along with the steps, experiment materials, settings, and standard operating procedures, using the REPRODUCE-ME ontology is available in CAESAR to its users and the evaluation participants.</ns0:p></ns0:div> <ns0:div><ns0:head>Competency question-based evaluation</ns0:head><ns0:p>The scientific experiments that used Jupyter Notebooks are linked with the provenance of these notebooks captured using ProvBook in CAESAR. This information described using the REPRODUCE-ME ontology is also available in CAESAR for the participants (Listings 2). We created a knowledge base of different types of experiments from these two sources. The competency questions, which were translated into SPARQL queries by computer scientists, were executed on our knowledge base, consisting of linked data in CAESAR. The domain experts evaluated the correctness of the answers to these competency questions.</ns0:p><ns0:p>We present here one competency question with the corresponding SPARQL query and part of the results obtained on running it against the knowledge base. The result of each query is a long list of values; hence, we show only the first few rows from them. This query is responsible for getting the complete path for an experiment; among others, it contains the following triple patterns: ?dataset prov:hadMember ?image . ?instrument p-plan:correspondsToVariable ?image ; repr:hasPart ?instrumentpart . ?instrumentpart repr:hasSetting ?setting . ?plan p-plan:isSubPlanOfPlan ?experiment . Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref> shows the part of the result for a particular experiment called 'Focused mitotic chromosome condensation screen using HeLa cells'. Here, we queried the experiment with its associated agents and their roles, the plans and steps involved, the input and output of each step, the order of steps, and the instruments and their settings. We see that all these elements are now linked to the computational and non-computational steps to describe the complete path. We can further expand this query by asking for additional information like the materials, publications, external resources, methods, etc., used in each step of an experiment. It is possible to query for all the elements mentioned in the REPRODUCE-ME Data Model.
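To give a fuller impression of what such a competency-question query can look like, the sketch below expands the triple patterns above into a complete SELECT query; the prefix IRIs and the property names that do not appear in those patterns are assumptions based on PROV-O, P-Plan, and the REPRODUCE-ME documentation, not the exact query used in the evaluation.

PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX repr:   <https://w3id.org/reproduceme#>

SELECT ?experiment ?agent ?plan ?step ?input ?output ?image ?instrumentpart ?setting
WHERE {
  ?experiment a repr:Experiment .
  OPTIONAL { ?experiment prov:wasAttributedTo ?agent . }
  OPTIONAL { ?plan p-plan:isSubPlanOfPlan ?experiment .
             ?step p-plan:isStepOfPlan ?plan .
             OPTIONAL { ?input  p-plan:isInputVarOf  ?step . }
             OPTIONAL { ?output p-plan:isOutputVarOf ?step . } }
  OPTIONAL { ?dataset prov:hadMember ?image .
             ?instrument p-plan:correspondsToVariable ?image ;
                         repr:hasPart ?instrumentpart .
             ?instrumentpart repr:hasSetting ?setting . }
}

The OPTIONAL blocks correspond to the query tweak discussed in the next paragraph for experiments with incomplete metadata.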
In this verification process, the domain experts observed that the query returned null for certain experiments that did not provide the complete data for some elements. So the computer scientists from the project tweaked the query to include the OPTIONAL keyword to get the results from the query. Even after tweaking the query, the results from these competency questions were not complete. Missing data is seen more often in the non-computational part of an experiment than in its computational part. This shows that the metadata entered by the scientists in CAESAR is not complete and requires continuous manual annotation. Another thing that we noticed during the evaluation is that the results are spread across several rows in the table. In the Dashboard, when we show these results, the filter option provided in the table helps the user to search for particular columns.</ns0:p><ns0:p>The domain experts manually compared the results of SPARQL queries using Dashboard and ProvTrack and evaluated their correctness <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Each competency question addressed the different elements of the REPRODUCE-ME Data Model. The competency questions, the RDF data used for the evaluation, the SPARQL queries, and the answers to these queries are publicly available <ns0:ref type='bibr' target='#b60'>(Samuel, 2021)</ns0:ref>.</ns0:p><ns0:p>The data provides ten competency questions which are translated to SPARQL queries. The answers to these SPARQL queries provide information on the steps, plans, methods, standard operating procedures, instruments and their settings, materials used in each step, agents directly and indirectly involved, and the temporal parameters of an experiment.</ns0:p></ns0:div> <ns0:div><ns0:head>Data and user-based evaluation of ProvBook</ns0:head><ns0:p>In this section, we address how ProvBook supports computational reproducibility as a stand-alone tool and also when integrated with CAESAR. To address the second part of the main research question, we evaluate the role of ProvBook in supporting computational reproducibility using Jupyter Notebooks. We did the evaluation taking into consideration different use cases and factors. They are:</ns0:p><ns0:p>1. Repeatability: The computational experiment is repeated in the same environment by the same experimenter. We performed this to confirm the final results from the previous executions.</ns0:p><ns0:p>2. Reproducibility: The computational experiment is reproduced in a different environment by a different experimenter. In this case, we compare the results of the Jupyter notebook in the original environment with the results from a different experimenter executed in a different environment.</ns0:p><ns0:p>3. The input, output, execution time and the order in two different executions of a notebook. 4. Provenance difference of the results of a notebook. 5. Performance of ProvBook with respect to time and space.</ns0:p><ns0:p>6. The environmental attributes in the execution of a notebook. 7. Complete path taken by a computational experiment with the sequence of steps in the execution of a notebook with input parameters and intermediate results in each step required to generate the final output.</ns0:p><ns0:p>We used an example Jupyter Notebook which applies the eigenface algorithm and an SVM to a face recognition task using scikit-learn 3 .
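A condensed sketch of this kind of notebook cell is shown below; it follows the widely known scikit-learn eigenfaces example (PCA for dimensionality reduction followed by an SVM classifier) and illustrates the style of computation whose provenance was tracked, but it is not the exact notebook used in the evaluation.

# Condensed sketch of an eigenface + SVM face recognition cell
# (illustrative only; not the exact evaluation notebook).
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)  # downloads the LFW image dataset
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=42)

pca = PCA(n_components=150, whiten=True).fit(X_train)          # eigenfaces
clf = SVC(kernel="rbf", class_weight="balanced", C=1000, gamma=0.005)
clf.fit(pca.transform(X_train), y_train)

y_pred = clf.predict(pca.transform(X_test))
print(classification_report(y_test, y_pred, target_names=faces.target_names))

Re-running such a cell with different parameters produces the text and image outputs whose history ProvBook records for each execution.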
This script provides a computational experiment to extract information from an image dataset that is publicly available using machine learning techniques. We used this example script to show the different use cases of our evaluation and how ProvBook handles different output formats like image, text, etc. These formats are also important for the users of CAESAR.</ns0:p><ns0:p>We use Original Author to refer to the first author of the notebook and User 1 and User 2 to refer to the authors who used the original notebook to reproduce results. We first saved the notebook without any outputs. Later, two different users executed this notebook in three different environments. The results show that ProvBook can handle different output types, which Jupyter Notebooks support. We also evaluated the performance of ProvBook with respect to space and time. The difference in the run time of each cell with and without ProvBook was negligible. Concerning space, the size of the Jupyter Notebook with provenance information of several executions was more than the original notebook. It is stated in <ns0:ref type='bibr' target='#b14'>(Chapman et al., 2008)</ns0:ref> that the size of the provenance information can grow larger than the actual data. In the following scenario, we evaluated the semantic representation of the provenance of computational notebooks by integrating ProvBook with CAESAR. Listings 2 shows the SPARQL query of the complete path for a computational notebook with input parameters and intermediate results in each step required to generate the final output. It also queries the sequence of steps in its execution. We can expand this query to get information on the experiment in CAESAR which uses a notebook. The result of this query, which is available in the Supplementary file, shows the steps, the execution of each step with their total run time, the input and output data of each run, and the order of execution of steps of a notebook.</ns0:p></ns0:div> <ns0:div><ns0:head>User-based evaluation of CAESAR</ns0:head><ns0:p>This evaluation addresses a smaller component of the third part of the main research question. Here, we focus on the visualization of the complete path of scientific experiments to the users of CAESAR. We conducted a user-based study to get the general impression of the tool and used this as feedback to improve the tool. This study aimed to evaluate the usefulness of CAESAR and its different modules, particularly the visualization module. We invited seven participants for the survey, of which six participants responded to the questions. The evaluation participants were the scientists of the ReceptorLight project who use CAESAR in their daily work. In addition to them, other biology students, who closely work with microscopy images and are not part of the ReceptorLight project, participated in this evaluation. We provided an introduction of the tool to all the participants and provided a test system to explore all the features of CAESAR. The scientists from the ReceptorLight project were given training on CAESAR and its workflow on documenting experimental data. Apart from the internal meetings, we provided the training from <ns0:ref type='bibr'>2016-2018 (17.06.2016, 19.07.2016, 07.06.2017, 09.04.2018, and 16.06.2018)</ns0:ref>. We asked scientists to upload their experimental data to CAESAR as part of this training.</ns0:p><ns0:p>At the evaluation time, we provided the participants with a system containing real-life scientific experiment data as mentioned in the competency question evaluation subsection.
The participants were given the system to explore all the features of CAESAR. As our goal of this user survey was to get feedback from the daily users and new users and improve upon their feedback, we let the participants answer the relevant questions. As a result, the user survey was not anonymous, and none of the questions in the user survey was mandatory. However, only 1 participant who was not part of the ReceptorLight project did not answer all the questions. The questionnaire and the responses are available in the Supplementary file.</ns0:p><ns0:p>In the first section of the study, we asked how the features in CAESAR help improve their daily research work. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results. 83% of the participants either strongly agreed or agreed that it helps to visualize all the experimental data and results effectively, while 17% disagreed on that. In the next section, we asked about the perceived usefulness of CAESAR. 60% of the users consider CAESAR user-friendly, while 40% had a neutral response. 40% of the participants agreed that CAESAR is easy to learn to use, and 60% had a neutral response. The participants provided additional comments to this response that CAESAR offers many features, and they found it a little difficult to follow. However, all the participants strongly agreed or agreed that CAESAR is useful for scientific data management and provides a collaborative environment among teams.</ns0:p><ns0:p>In the last section, we evaluated each feature provided by CAESAR by focusing on the important visualization modules. Here, we showed a real-life scientific experiment with Dashboard and ProvTrack views. We asked the participant to explore the various information, including the different steps and materials used in the experiment. Based on their experience, we asked the participants about the likeability of the different modules. ProvTrack was strongly liked or liked by all the participants. For the Dashboard, 80% of them either strongly liked or liked, while 20% had a neutral response. 60% of the users strongly liked or liked ProvBook, while the other 40% had a neutral response. The reason for the neutral response was that they were new to scripting. We also asked to provide the overall feedback of CAESAR along with its positive aspects and the things to improve. We obtained three responses to this question which are available in the Supplementary File.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Provenance plays a key role in supporting the reproducibility of results, which is an important concern in data-intensive science. Through CAESAR, we aimed to provide a data management platform for capturing, semantically representing, comparing, and visualizing the provenance of scientific experiments, including their computational and non-computational aspects. CAESAR is used and deployed in the CRC ReceptorLight project, where scientists work together to understand the function of membrane receptors and develop high-end light microscopy techniques. In the competency question-based evaluation, we focused on answering the questions using the experimental provenance data provided by scientists from the research projects, which was then managed and semantically described in CAESAR. 
Answering the competency questions using SPARQL queries shows that some experiments documented in CAESAR had missing provenance data on some of the elements of the REPRODUCE-ME Data Model like time, settings, etc. We see that CAESAR requires continuous user involvement and interaction in documenting non-computational parts of an experiment. Reproducing an experiment is currently not feasible unless every step in CAESAR is machine-controlled. In addition to that, the output of the query for finding the complete path of the scientific experiment results in many rows in the table. Therefore, the response time could exceed the normal query response time and result in a server error from the SPARQL endpoint in some cases where the experiment has various inputs and outputs with several executions. Currently, scientists from the life sciences do not have the knowledge of Semantic Web technologies and are not familiar with writing their own SPARQL queries. Hence, we did not perform any user study on writing SPARQL queries to answer competency questions. However, scientists must be able to see the answers to these competency questions and explore the complete path of a scientific experiment. To overcome this issue, we split the queries and combined their results in ProvTrack. The visualization modules, Dashboard and ProvTrack, which use SPARQL and linked data in the background, visualize the provenance graph of each scientific experiment. ProvTrack groups the entities, agents, activities, steps, and plans to help users visualize the complete path of an experiment. In the data and user-based evaluation, we see the role of ProvBook as a stand-alone tool to capture the provenance history of computational experiments described using Jupyter Notebooks.</ns0:p><ns0:p>We see that the items added to the provenance information in Jupyter Notebooks ensure that the input and the output from different trials are not lost. The execution environment attributes of the computational experiments, along with their results, help to understand their complete path. We also see that we could describe the relationship between the results, the execution environment, and the executions that generated the results of a computational experiment in an interoperable way using the REPRODUCE-ME ontology. The knowledge capture of computational experiments using notebooks and scripts is ongoing research, and many research questions are yet to be explored. ProvBook currently does not extract semantic information from the cells. This includes information like the libraries used, variables and functions defined, input parameters and output of a function, etc. In CAESAR, we currently link a whole cell as a step of a notebook which is linked to an experiment. Hence, the fine-grained provenance information of a computational experiment is currently missing and thus not linked to an experiment to get the complete path of a scientific experiment.</ns0:p><ns0:p>The user-based evaluation of CAESAR aimed to see how useful the users find CAESAR concerning the features it provides. We targeted both the regular users and the new users of the system. As we had a small group of participants, we could not make general conclusions from the study. However, the study participants either agreed with the statements or liked the features. The survey results in <ns0:ref type='bibr' target='#b64'>(Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref> had shown that newcomers face difficulty in finding, accessing, and reusing data in a team.
We see an agreement among the participants that CAESAR helps preserve data for newcomers to understand the ongoing work in the team. This understanding of the ongoing work comes from the linking of experimental data and results. The results from the study show that, of the two visualization modules, ProvTrack was preferred over the Dashboard by scientists. Even though both serve different purposes (the Dashboard for an overall view of the experiments conducted in a project and ProvTrack for backtracking the results of one experiment), the users preferred the provenance graph to be visualized with detailed information shown on clicking. The survey shows that the visualization of the experimental data and results using ProvTrack, supported by the REPRODUCE-ME ontology, helps the scientists without them having to worry about the underlying technologies. As reported above, all the participants either strongly agreed or agreed with the statements on organizing, preserving, searching, and linking experimental data in CAESAR. However, a dedicated visualization evaluation is required to properly test the Dashboard and ProvTrack views, which will help us to determine the usability of visualizing the complete path of a scientific experiment.</ns0:p><ns0:p>The main limitation of our evaluation is the small number of participants. Hence, we cannot draw any statistical conclusion on the system's usefulness. However, CAESAR is planned to be used and extended for another large research project, Microverse 5 , which will allow for a more scalable user evaluation. One part of the provenance capture module depends on the scientists to document their experimental data.</ns0:p><ns0:p>Even though the metadata from the images captures the execution environment and the devices' settings, the need for human annotations to the experimental datasets is significant. Besides this limitation, the mappings for the ontology-based data access required some manual curation. This can become an issue when the database is extended to other experiment types.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, we presented CAESAR. It provides a collaborative framework for the management of scientific experiments, including the computational and non-computational steps. The provenance of the scientific experiments is captured and semantically represented using the REPRODUCE-ME ontology. ProvBook helps the user capture, represent, manage, visualize and compare the provenance of different executions of computational notebooks. CAESAR links the computational data and steps to the non-computational data and steps to represent the complete path of the experimental workflow.</ns0:p><ns0:p>The visualization modules of CAESAR allow users to view the complete path and backtrack the provenance of results. We applied our contributions together in the ReceptorLight project to support the end-to-end provenance management from the beginning of an experiment to its end. There are several possibilities to extend and improve CAESAR. We expect this approach to be extended to different types of experiments in diverse scientific disciplines. Reproducibility of the non-computational parts of an experiment is our future line of work. 
We can reduce the query time for the SPARQL queries in the project Dashboard and ProvTrack by taking several performance measures. CAESAR could be extended to serve as a public data repository providing DOIs for the experimental data and provenance information. This would help the scientific community to track the complete path of the provenance of the results described in scientific publications. Currently, CAESAR requires continuous user involvement and interaction, especially through the different non-computational steps of an experiment. The integration of persistent identifiers for physical samples and materials into scientific data management can lower the effort of user involvement. However, at this stage, reproducibility is not a one-button solution: reproducing an experiment is not feasible unless every step is machine-controlled.</ns0:p></ns0:div><ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of CAESAR. The data management platform consists of modules for provenance capture, representation, storage, comparison, and visualization. It also includes several additional services, including API access and a SPARQL endpoint.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>provides a feature called Proposal to allow users to propose changes or suggestions to the experiment. As a result, the experiment owner receives those suggestions as proposals. The user can either accept the proposal and add it to the current experimental data or reject and delete the proposal. The plugin provides autocompletion of data to speed up the process of documentation. For example, based on the CAS number of the chemical provided by the user, the molecular weight, mass, and structural formula are fetched from the CAS registry and populated in the Chemical database. The plugin also provides additional data from the external servers for other materials like Protein, Plasmid, and Vector. The plugin also autofills the data about the authors and other publication details based on the DOI/PubMedId of the publications. It also provides a virtual keyboard to aid the users in documenting descriptions with special characters, chemical formulas, or symbols. Provenance Management and Representation. We use a PostgreSQL database in OMERO as well as in CAESAR. The OMERO database consists of 145 tables, and the ReceptorLight database consists of 35 tables in total. We use the REPRODUCE-ME ontology to model and describe the experiments and their provenance in CAESAR. The database model</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The difference between the input and output of two different executions of a code cell in ProvBook. Deleted elements are marked in red, newly added or created elements are marked in green.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
ProvTrack: Tracking Provenance of Scientific Experiments</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>clicked. Links are provided to the keys to get their definitions from the web. It also displays the path of the selected node from the Experiment node on top of the left panel. The Search panel allows the users to search for any entities in the graph defined by the REPRODUCE-ME data model. It provides a dropdown to search not only the nodes but also the edges. This comes in handy when the provenance graph is very large.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>did the evaluation on a server (running CentOS Linux 7 on an x86-64 architecture) hosted at the University Hospital Jena. To address the first part of the main research question, scientists from the B1 and A4 projects of ReceptorLight documented experiments using confocal patch-clamp fluorometry (cPCF), F&#246;rster Resonance Energy Transfer (FRET), PhotoActivated Localization Microscopy (PALM) and direct Stochastic Optical Reconstruction Microscopy (dSTORM) as part of their daily work. In 23 projects, a total of 44 experiments were recorded and uploaded with 373 microscopy images generated from different instruments with various settings using either the desktop client or webclient of CAESAR (Accessed 21 April 2019). We also used the Image Data Repository (IDR) datasets (IDR, 2021) with around 35</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Listing 1. What is the complete path taken by a scientist for an experiment? SELECT DISTINCT * WHERE { ?experiment a repr:Experiment ; prov:wasAttributedTo ?agent ; repr:hasDataset ?dataset ; prov:generatedAtTime ?generatedAtTime . ?agent repr:hasRole ?role .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A part of the results for the competency question</ns0:figDesc><ns0:graphic coords='13,141.74,269.08,413.55,226.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>(Ubuntu 18.10 with Python 3, Ubuntu 18.04 with Python 2 and 3, Fedora with Python 3). Both users used ProvBook in their Jupyter Notebook environment. The first run of the eigenfaces Jupyter Notebook gave a ModuleNotFoundError for User 1. User 1 attempted several runs to solve the issue. For User 1, installing the missing scikit-learn module still did not solve the issue. The problem occurred because of the version change of the scikit-learn module. The original Jupyter Notebook used version 0.16 of scikit-learn, while User 1 used version 0.20.0 and User 2 used 0.20.3. The classes and functions from the cross_validation, grid_search, and learning_curve modules were moved into the new model_selection module starting from scikit-learn 0.18. User 1 made several other changes in the script which used these functions, along the lines of the sketch below. 
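As an illustration of this kind of change (a sketch of the scikit-learn 0.16 to 0.18+ reorganisation described above, not a reproduction of User 1's actual edits), an import-compatibility shim keeps such a notebook runnable under both old and new versions; the module and function names are scikit-learn's own, everything else is illustrative:

# scikit-learn >= 0.18: cross_validation, grid_search and learning_curve were merged into model_selection.
try:
    from sklearn.model_selection import train_test_split, GridSearchCV, learning_curve
except ImportError:
    # scikit-learn 0.16/0.17, as used in the original eigenfaces notebook.
    from sklearn.cross_validation import train_test_split
    from sklearn.grid_search import GridSearchCV
    from sklearn.learning_curve import learning_curve

With such a shim in place, the rest of the notebook can call train_test_split, GridSearchCV, and learning_curve unchanged, regardless of the installed scikit-learn version.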
User 1 also made the necessary changes to work with the new versions of the scikit-learn module. We provided this changed notebook, along with the provenance information captured by ProvBook in User 1's notebook environment, to User 2. For User 2, only the first run gave a ModuleNotFoundError. User 2 resolved this issue by installing the scikit-learn module. User 2 did not have to change the scripts, as User 2 could see the provenance history of the executions from the original author, User 1, and his own execution. Using ProvBook, User 1 could track the changes and compare them with the original script, while User 2 could compare the changes with the executions from the original author, User 1, and his own execution. Since this example is based on a small number of users, an extensive user-based evaluation is required to conclude that ProvBook is responsible for the improved performance in these use cases. However, we expect ProvBook to play a role in helping users in such use cases to track the provenance of experiments. We also performed tests to see the input, output, and run time in different executions in different environments. The files in the Supplementary Information provide the details of this evaluation by showing the difference in the execution time of the same cell in a notebook in different execution environments. One of the cells in the evaluation notebook downloads a set of preprocessed images from Labeled Faces in the Wild (LFW) 4 , which contains the training data for the face recognition study. The execution of this cell took around 41.3 ms in the first environment (Ubuntu 18.10), 2 min 35 s in the second environment (Ubuntu 18.04), and 3 min 55 s in the third environment (Fedora). The different execution environments play a role in computational experiments, which is shown with the help of ProvBook. We also show how one change in a previous cell of the notebook resulted in a difference in the intermediate result in two different executions by two different users in two different environments. We evaluated the provenance capture and difference module in ProvBook with different output types, including images. 3 https://scikit-learn.org/0.16/datasets/labeled_faces.html</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Listing 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Complete path for a computational notebook experiment SELECT DISTINCT * WHERE { ?step p-plan:isStepOfPlan ?notebook . ?notebook a repr:Notebook . ?execution p-plan:correspondsToStep ?step ; repr:executionTime ?executionTime . ?step p-plan:hasInputVar ?inputVar ; p-plan:hasOutputVar ?outputVar ; p-plan:isPrecededBy ?previousStep . }</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>output, starting and ending time, helps users track the provenance of results even in different execution environments. 
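As a minimal sketch of how such an export could be inspected programmatically, the query from Listing 2 can be run with rdflib over a ProvBook RDF export; the file name is hypothetical, the pplan prefix below stands for the p-plan prefix used in Listing 2, and the REPRODUCE-ME namespace IRI is an assumption to be checked against the ontology documentation:

from rdflib import Graph

g = Graph()
g.parse("notebook_provenance.ttl", format="turtle")  # hypothetical ProvBook RDF export

QUERY = """
PREFIX pplan: <http://purl.org/net/p-plan#>
PREFIX repr: <https://w3id.org/reproduceme#>
SELECT ?step ?executionTime WHERE {
  ?step pplan:isStepOfPlan ?notebook .
  ?execution pplan:correspondsToStep ?step ;
             repr:executionTime ?executionTime .
}
"""

# List how long each recorded execution of each notebook cell (step) took.
for step, execution_time in g.query(QUERY):
    print(step, execution_time)
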
The Jupyter Notebooks shared along with the provenance information of their executions help users to compare the original intermediate and final results with the results from the new trials executed in the same or a different environment. Through ProvBook, the intermediate and negative results
"Dear Editor, Thank you for your message “Decision on your PeerJ Computer Science submission: 'A collaborative semantic-based provenance management platform for reproducibility' (#CS-2021:06:62007:2:0:REVIEW)” from 17 January, 2022. We thank you for your work and the reviewers for their thoughtful and thorough suggestions and comments, which have helped us to improve significantly the quality of the manuscript. Please find below our point-by-point responses in text with bold and italic formatting. The submission is updated accordingly and further includes the editorial changes requested by the PeerJ team. We look forward to hearing from you! Best Regards, Sheeba Samuel and Birgitta König-Ries Editor's Comments Reviewer 1 Reviewer 3 Editor's Decision Minor Revisions The reviewers agree that the paper improved a lot, but they still request a few minor issues to be improved. We have addressed all the reviewers’ comments and made the corresponding changes in the manuscript as well. Reviewer 1 (João Pimentel) Basic Reporting As a minor issue, the main research 'question' in line 466 is not phrased as a question Response: We rephrased the main research question as a question in line 466, as mentioned earlier in the introduction. Experimental Design The paper improved in reporting the experiment design, but it still can improve the wording on the experiment section: The paper divides the main question into 3 parts in the first paragraph of the Evaluation, but it does not reference these parts explicitly. For instance, the paper states that in the first part, they 'address the question of capturing and representing the complete path of a scientific experiment' and for that, they evaluate: - the role of CAESAR in capturing the non-computational data - the role of ProvBook in capturing the computational data - the role of the REPRODUCE-ME ontology in semantically representing the complete path ... using the competency question-based evaluation Then, the next subsection is called 'Competency question-based evaluation', indicating that it only covers the 3rd item (the role of the REPRODUCE-ME ontology). Moreover, by reading this subsection, it seems that it in fact does not cover the role of ProvBook in capturing the computational data (which is somewhat discussed in the next subsection), but it does have a discussion about the role of CAESAR in capturing the noncomputational data. Response: In each subection of the evaluation, we make a reference to each part of the main research question which is mentioned in the beginning of the evaluation section. We have improved the wording in the experiment section, especially in the first part of the main research question in the evaluation section to reflect the description of the ‘Competency-question based evaluation’ subsection. Lines 502-507 addresses the role of CAESAR and ProvBook in providing the data to be represented using the REPRODUCE-ME ontology. We also make a reference to the Listings 2 in Line 507 to show that the computational steps are also represented using the ontology. Reviewer 3 Basic Reporting The text is much improved, and the reduction in passive voice makes the text easier to read. I would recommend a better inline summary of results in the evaluation section. I'm not sure why the reader is expected to go to a different file to read about them. I realize that having the open, raw results available is great for those that want to poke around, but I think a summary for readers is useful. 
- L563: 'The competency questions, the RDF data used for the evaluation, the 564 SPARQL queries, and their results are publicly available (Samuel, 2021).' (What are the results?) - L604, 'The files in the Supplementary information provides the information on this evaluation by showing the difference in the execution time of the same cell in a notebook in different execution environments.' (What do they show?) - L630 'The results of the evaluation are available in the Supplementary file.' (What are the results?) The user-based evaluation subsection does have results in the manuscript. Typo in L581: 'uses face recognition example' -> 'uses a facial recognition example' Response: We have added an inline summary of the results from the files that are not directly provided in the manuscript. This covers lines 563, 604, and 630, as mentioned by the reviewer. We have corrected the typo in line 581. Experimental Design The additional information about the use case in the 'Data and user-based evaluation of ProvBook' section (L581) is helpful, but I would argue this is more of an example and probably should be tagged as such. We don't know, for example, that User 2 needed to look at User 1's provenance to fix the error. We also don't know if User 2 is a more experienced Python programmer, or even if User 2's environment (Fedora) made this easier. Because n=1, I don't think we can conclude that ProvBook is the reason for improved performance, but this use case is instructive in understanding why we should expect ProvBook to help. Response: As suggested by the reviewer, we have added a remark in line 613 that, because of the small user study, we cannot conclude that ProvBook is responsible for the increased performance, only that it helps track experiments in such use cases. "
Here is a paper. Please give your review comments after reading it.
378
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Scientific data management plays a key role in the reproducibility of scientific results. To reproduce results, not only the results but also the data and steps of scientific experiments must be made findable, accessible, interoperable, and reusable. Tracking, managing, describing, and visualizing provenance helps in the understandability, reproducibility, and reuse of experiments for the scientific community. Current systems lack a link between the data, steps, and results from the computational and non-computational processes of an experiment. Such a link, however, is vital for the reproducibility of results. We present a novel solution for the end-to-end provenance management of scientific experiments. We provide a framework, CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), which allows scientists to capture, manage, query and visualize the complete path of a scientific experiment consisting of computational and noncomputational data and steps in an interoperable way. CAESAR integrates the REPRODUCE-ME provenance model, extended from existing semantic web standards, to represent the whole picture of an experiment describing the path it took from its design to its result. ProvBook, an extension for Jupyter Notebooks, is developed and integrated into CAESAR to support computational reproducibility. We have applied and evaluated our contributions to a set of scientific experiments in microscopy research projects.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Reproducibility of results is vital in every field of science. The scientific community is interested in the way so that scientists can understand the data and results. Therefore, we need to start addressing this issue at the stage when the data is created. Thus, scientific research data management needs to start at the earlier stage of the research lifecycle to play a vital role in this context.</ns0:p><ns0:p>In this paper, we aim to provide end-to-end provenance capture and management of scientific experiments to support reproducibility. To define our aim, we define the main research question, which structure the remainder of the article: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? To address the research question, we create a conceptual model using semantic web technologies to describe a complete path of a scientific experiment. We design and develop a provenance-based semantic framework to populate this model, collect information about the experimental data and results along with the settings, runs, and execution environment and visualize them. The main contribution of this paper is the framework for the end-to-end provenance management of scientific experiments, called CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility) integrated with ProvBook <ns0:ref type='bibr' target='#b63'>(Samuel &amp; K&#246;nig-Ries, 2018b</ns0:ref>) and REPRODUCE-ME data model <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. CAESAR supports computational reproducibility using our tool ProvBook, which is designed and developed to capture, store, compare and track the provenance of results of different executions of Jupyter Notebooks. 
The complete path of a scientific experiment interlinking the computational and non-computational data and steps is semantically represented using the REPRODUCE-ME data model.</ns0:p><ns0:p>In the following sections, we provide a detailed description of our findings. We start with an overview of the current state-of-the-art ('Related Work'). We describe the experimental methodology used in the development of CAESAR ('Materials &amp; Methods'). In the 'Results' section, we describe CAESAR and its main modules. We describe the evaluation strategies and results in the 'Evaluation' section. In the 'Discussion' section, we discuss the implications of our results and the limitations of our approach. We conclude the article by highlighting our major findings in the 'Conclusion' section.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Scientific data management plays a key role in knowledge discovery, data integration, and reuse. The preservation of digital objects has been studied for long in the digital preservation community. Some works give more importance to software and business process conservation <ns0:ref type='bibr' target='#b40'>(Mayer et al., 2012)</ns0:ref>, while other works focus on scientific workflow preservation <ns0:ref type='bibr' target='#b5'>(Belhajjame et al., 2015)</ns0:ref>. We focus our approach more on the data management solutions for scientific data, including images. <ns0:ref type='bibr' target='#b20'>Eliceiri et al. (2012)</ns0:ref> provide a list of biological imaging software tools. BisQue is an open-source, server-based software system that can store, display and analyze images <ns0:ref type='bibr' target='#b35'>(Kvilekval et al., 2010)</ns0:ref>. OMERO, developed by the Open Microscopy Environment <ns0:ref type='bibr'>(OME)</ns0:ref>, is another open-source data management platform for imaging metadata primarily for experimental biology <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. It has a plugin architecture with a rich set of features, including analyzing and modifying images. It supports over 140 image file formats using BIO-Formats <ns0:ref type='bibr' target='#b37'>(Linkert et al., 2010)</ns0:ref>. OMERO and BisQue are the two closest solutions that meet our requirements in the context of scientific data management. A general approach to document experimental metadata is provided by the CEDAR workbench <ns0:ref type='bibr' target='#b23'>(Gonc &#184;alves et al., 2017)</ns0:ref>. It is a metadata repository with a web-based tool that helps users to create metadata templates and fill in the metadata using those templates. However, these systems do not directly provide the features to fully capture, represent and visualize the complete path of a scientific experiment and support computational reproducibility and semantic integration.</ns0:p><ns0:p>Several tools have been developed to capture complete computational workflows to support reproducibility in the context of scientific workflows, scripts, and computational notebooks. <ns0:ref type='bibr' target='#b47'>Oliveira et al. (2018)</ns0:ref> survey the current state of the art approaches and tools that support provenance data analysis for workflow-based computational experiments. Scientific Workflows, which are a complex set of data processes and computations, are constructed with the help of a Scientific Workflow Management System (SWfMS) <ns0:ref type='bibr' target='#b19'>(Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b38'>Liu et al., 2015)</ns0:ref>. 
Different SWfMSs have been developed for different uses cases and domains <ns0:ref type='bibr' target='#b46'>(Oinn et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b1'>Altintas et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b19'>Deelman et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b65'>Scheidegger et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b22'>Goecks et al., 2010)</ns0:ref>. Most SWfMSs provide provenance support by capturing the history of workflow executions.</ns0:p><ns0:p>These systems focus on the computational steps of an experiment and do not link the results to the experimental metadata. Despite the provenance modules present in these systems, there are currently many challenges in the context of reproducibility of scientific workflows <ns0:ref type='bibr' target='#b71'>(Zhao et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cohen-Boulakia et al., 2017)</ns0:ref>. Workflows created by different scientists are difficult for others to understand or re-run in Manuscript to be reviewed Computer Science a different environment, resulting in workflow decays <ns0:ref type='bibr' target='#b71'>(Zhao et al., 2012)</ns0:ref>. The lack of interoperability between scientific workflows and the steep learning curve required by scientists are some of the limitations according to the study of different SWfMSs <ns0:ref type='bibr' target='#b15'>(Cohen-Boulakia et al., 2017)</ns0:ref>. The Common Workflow Language <ns0:ref type='bibr' target='#b2'>(Amstutz et al., 2016)</ns0:ref> is an initiative to overcome the lack of interoperability of workflows.</ns0:p><ns0:p>Though there is a learning curve associated with adopting workflow languages, this ongoing work aims to make computational methods reproducible, portable, maintainable, and shareable.</ns0:p><ns0:p>Many tools have been developed to capture the provenance of results from the scripts at different levels of granularity <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b18'>Davison, 2012;</ns0:ref><ns0:ref type='bibr' target='#b44'>Murta et al., 2014;</ns0:ref><ns0:ref type='bibr'>McPhillips et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Burrito <ns0:ref type='bibr' target='#b24'>(Guo &amp; Seltzer, 2012</ns0:ref>) captures provenance at the operating system level and provides a user interface for documenting and annotating the provenance of non-computational processes. <ns0:ref type='bibr' target='#b12'>Carvalho et al. (Carvalho et al., 2016)</ns0:ref> present an approach to convert scripts into reproducible Workflow Research Objects. However, it is a complex process that requires extensive involvement of scientists and curators with extensive knowledge of the workflow and script programming in every step of the conversion. The lack of documentation of computational experiments along with their results and the ability to reuse parts of code are some of the issues hindering reproducibility in script-based environments. In recent years, computational notebooks have gained widespread adoption because they enable computational reproducibility and allow users to share code along with documentation. Jupyter Notebook <ns0:ref type='bibr'>(Kluyver et al., 2016)</ns0:ref>, which was formerly known as the IPython notebook, is a widely used computational notebook that provides an interactive environment supporting over 100 programming languages with millions of users around the world. 
Even though it supports reproducible research, recent studies by <ns0:ref type='bibr' target='#b55'>Rule et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b51'>Pimentel et al. (2019)</ns0:ref> point out the need for provenance support in computational notebooks. Overwriting and re-execution of cells in any order can lead to the loss of results from previous trials. Some research works have attempted to track provenance from computational notebooks <ns0:ref type='bibr' target='#b26'>(Hoekstra, 2014;</ns0:ref><ns0:ref type='bibr' target='#b52'>Pimentel et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>Carvalho et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b34'>Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Kery &amp; Myers, 2018;</ns0:ref><ns0:ref type='bibr' target='#b49'>Petricek et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Head et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b68'>Wenskovitch et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b67'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Macke et al., 2021)</ns0:ref>. <ns0:ref type='bibr' target='#b52'>Pimentel et al. (2015)</ns0:ref> propose a mechanism to capture and analyze the provenance of python scripts inside IPython Notebooks using noWorkflow <ns0:ref type='bibr' target='#b50'>(Pimentel et al., 2017)</ns0:ref>. PROV-O-Matic <ns0:ref type='bibr' target='#b26'>(Hoekstra, 2014)</ns0:ref> is another extension for earlier versions of IPython Notebooks to save the provenance traces to Linked Data file using PROV-O. In recent approaches, custom Jupyter kernels are developed to trace runtime user interactions and automatically manage the lineage of cell execution <ns0:ref type='bibr' target='#b34'>(Koop &amp; Patel, 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Macke et al., 2021)</ns0:ref>. However, some of these approaches do not capture the execution history of computational notebooks, require changes to the code by the user, and are limited to Python scripts. In our approach, the provenance tracking feature is integrated within a notebook, so there is no need for users to change the scripts and learn a new tool. We also make available the provenance information in an interoperable way.</ns0:p><ns0:p>There exists a gap in the current state-of-the-art systems as they do not interlink the data, the steps, and the results from both the computational and non-computational processes of a scientific experiment. We bridge this gap by developing a framework to capture the provenance, provide semantic integration of experimental data and support computational reproducibility. Hence, it is important to extend the current tools and at the same time, reuse their rich features to support the reproducibility and understandability of scientific experiments.</ns0:p><ns0:p>We describe here a summary of the insights of the interviews on the research practices of scientists. A scientific experiment consists of non-computational and computational data and steps. Computational tools like computers, software, scripts, etc., generate computational data. Activities in the laboratory like preparing solutions, setting up the experimental execution environment, manual interviews, observations, etc., are examples of non-computational activities. Measures taken to reproduce a non-computational step are different than those for a computational step. 
The reproducibility of a non-computational step depends on various factors like the availability of experiment materials (e.g., animal cells or tissues) and instruments, the origin of the materials (e.g., distributor of the reagents), human and machine errors, etc.</ns0:p><ns0:p>Hence, non-computational steps need to be described in sufficient detail for their reproducibility <ns0:ref type='bibr' target='#b29'>(Kaiser, 2015)</ns0:ref>.</ns0:p><ns0:p>The conventional way of recording the experiments in hand-written lab notebooks is still in use in biology and medicine. This creates a problem when researchers leave projects and join new projects. To understand the previous work conducted in a research project, all the information regarding the project, including previously conducted experiments along with the trials, analysis, and results, must be available to the new researchers. This information is also required when scientists are working on big collaborative projects. In their daily research work, a lot of data is generated and consumed through the computational and non-computational steps of an experiment. Different entities like devices, procedures, protocols, settings, computational tools, and execution environment attributes are involved in experiments. Several people play various roles in different steps and processes of an experiment. The outputs of some noncomputational steps are used as inputs to the computational steps. Hence, an experiment must not only be linked to its results but also to different entities, people, activities, steps, and resources. Therefore, the complete path towards the results of an experiment must be shared and described in an interoperable manner to avoid conflicts in experimental outputs.</ns0:p></ns0:div> <ns0:div><ns0:head>Design and Development</ns0:head><ns0:p>We aim to design a provenance-based semantic framework for the end-toend management of scientific experiments, including the computational and non-computational steps. To achieve our aim, we focused on the following modules: provenance capture, representation, management, comparison, and visualization. We used an iterative and layered approach in the design and development of CAESAR. We first investigated the existing frameworks that capture and store the experimental metadata and the data for the provenance capture module. We further narrowed down our search to imaging-based data management systems due to the extensive use of images and instruments in the experimental workflows in the ReceptorLight project. Based on our requirements, we selected OMERO as the underlying framework for developing CAESAR. Very active development community ensuring a continued effort to improve the system, a faster release cycle, a well-documented API to write own tools, and the ability to extend the web interface with plugins provided additional benefits to OMERO. However, they lack in providing the provenance support of experimental data, including the computational processes, and also lack in semantically representing the experiments. We designed and developed CAESAR by extending OMERO to capture the provenance of scientific experiments.</ns0:p><ns0:p>For the provenance representation module, we use semantic web technologies to describe the heterogeneous experimental data as machine-readable and link them with other datasets on the web. 
We develop the REPRODUCE-ME data model and ontology by extending existing web standards, PROV-O <ns0:ref type='bibr' target='#b36'>(Lebo et al., 2013)</ns0:ref> and P-Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>. The REPRODUCE-ME Data Model is a generic data model for representing scientific experiments with their provenance information. An Experiment is considered as the central point of the REPRODUCE-ME data model. The model consists of eight components: Data, Agent, Activity, Plan, Step, Setting, Instrument, Material. We developed the ontology from the competency questions collected from the scientists in the requirement analysis phase <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. It is extended from PROV-O to represent all agents, activities, and entities involved in an experiment. It extends from P-Plan to represent the steps, the input and output variables, and the complete path taken from an input to an output of an experiment. Using the REPRODUCE-ME ontology, we can describe and semantically query the information for scientific experiments, input and output associated with an experiment, execution environmental attributes, experiment materials, steps, the execution order of steps and activities, agents involved and their roles, script/Jupyter Notebook executions, instruments, and their settings, etc. The ontology also consists of classes and properties, which describe the elements responsible for the image acquisition process in a microscope, from OME data model <ns0:ref type='bibr' target='#b48'>(OME, 2016)</ns0:ref>.</ns0:p><ns0:p>For the provenance management module, we use PostgreSQL database and Ontology-based Data Access (OBDA) approach <ns0:ref type='bibr' target='#b53'>(Poggi et al., 2008)</ns0:ref>. OBDA is an approach to access the various data sources using Manuscript to be reviewed Computer Science ontologies and mappings. The details of the structure of the underlying data sources are isolated from the users using a high-level global schema provided by ontologies. It helps to efficiently access a large amount of data from different sources and avoid replication of data that is already available in relational databases. It also provides many high-quality services to domain scientists without worrying about the underlying technologies. There are different widely-used applications involving large data sources that use OBDA <ns0:ref type='bibr' target='#b9'>(Br&#252;ggemann et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b31'>Kharlamov et al., 2017)</ns0:ref>. As the image metadata in OMERO and the experimental data in CAESAR are already stored in the PostgreSQL database, we investigated the effective ways to represent scientific experiments' provenance information without duplicating the data.</ns0:p><ns0:p>Based on this, we selected to use the OBDA approach to represent this data semantically and at the same time avoid replication of data. To access the various databases in CAESAR, we used Ontop <ns0:ref type='bibr'>(Calvanese et al., 2017)</ns0:ref> for OBDA. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop's native mapping language. We used federation for the OMERO and ReceptorLight databases provided by the rdf4j SPARQL Endpoint 1 . We used the Protege plugin provided by Ontop to write the mappings. A virtual RDF graph is created in OBDA using the ontology with the mappings <ns0:ref type='bibr'>(Calvanese et al., 2017)</ns0:ref>. 
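Purely as an illustration of the kind of triples the mappings expose in the virtual RDF graph (this is a hand-built sketch, not Ontop's mapping syntax; the resource identifiers and the REPRODUCE-ME namespace IRI are placeholders, while the class and property names follow the REPRODUCE-ME and PROV-O vocabulary used elsewhere in the paper), a minimal description of one experiment could look as follows:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
REPR = Namespace("https://w3id.org/reproduceme#")  # assumed IRI, for illustration only

g = Graph()
experiment = URIRef("http://example.org/caesar/experiment/42")  # hypothetical identifiers
dataset = URIRef("http://example.org/caesar/dataset/7")
agent = URIRef("http://example.org/caesar/experimenter/alice")

# Triples of this shape are what the mappings expose over the relational tables.
g.add((experiment, RDF.type, REPR.Experiment))
g.add((experiment, REPR.hasDataset, dataset))
g.add((experiment, PROV.wasAttributedTo, agent))
g.add((experiment, PROV.generatedAtTime, Literal("2019-04-21T10:00:00", datatype=XSD.dateTime)))
g.add((agent, REPR.hasRole, Literal("Principal Investigator")))

print(g.serialize(format="turtle"))

In CAESAR itself, triples of this shape are not materialized but generated on demand by Ontop from the OMERO and ReceptorLight tables.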
SPARQL, the standard query language in the semantic web community, is used to query the provenance graph. We used the approach where the RDF graphs are kept virtual and queried only during query execution. The virtual approach helps avoid the materialization cost and provides the benefits of the matured relational database systems. However, there are some limitations in this approach using Ontop due to unsupported functions and data type.</ns0:p><ns0:p>To support computational reproducibility in CAESAR, we focused on providing the management of the provenance of the computational parts of an experiment. Computational notebooks, which have gained widespread attention because of their support for reproducible research, motivated us to look into this direction. These notebooks, which are extensively used and openly available, provide various features to run and share the code and results. We installed JupyterHub 2 in CAESAR to provide users access to computational environment and resources. JupyterHub provides a customizable and scalable way to serve Jupyter notebook for multiple users. In spite of the support for reproducible research, the provenance information of the execution of these notebooks was missing. To further support reproducibility in these notebooks, we developed ProvBook, an extension of Jupyter Notebooks, to capture the provenance information of their executions. We keep the design of ProvBook simple so that it can be used by researchers irrespective of their disciplines. We added the support to compare the differences in executions of the notebooks by different authors. We also extended the REPRODUCE-ME ontology to describe the computational experiments, including scripts and notebooks <ns0:ref type='bibr' target='#b62'>(Samuel &amp; K&#246;nig-Ries, 2018a)</ns0:ref>, which was missing in the current state of the art.</ns0:p><ns0:p>For the provenance visualization module, we focused on visualizing the complete path of an experiment by linking the non-computational and computational data and steps. Our two goals in designing the visualization component in CAESAR are providing users with a complete picture of an experiment and tracking its provenance. To do so, we integrated the REPRODUCE-ME ontology and ProvBook in CAESAR. We developed visualization modules to provide a complete story of an experiment starting from its design to publication. The visualization module, Project Dashboard, provides a complete overview of all the experiments conducted in a research project <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. We later developed the ProvTrack module to track the provenance of individual scientific experiments. The underlying technologies are transparent to scientists based on these approaches. We followed a Model-View-Controller architecture pattern for the development of CAESAR. We implement the webclient in the Django-Python framework and the Dashboard in ReactJs. Java is used to implement the new services extended by OMERO.server.</ns0:p><ns0:p>We use the D3 JavaScript library <ns0:ref type='bibr'>(D3.js, 2021)</ns0:ref> for the rendering of provenance graphs in the ProvTrack.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We present CAESAR (CollAborative Environment for Scientific Analysis with Reproducibility), an end-to-end semantic-based provenance management platform. It is extended from OMERO <ns0:ref type='bibr' target='#b0'>(Allan et al., 2012)</ns0:ref>. 
With the integration of the rich features provided by OMERO and our provenance-based extensions, CAESAR provides a platform to support scientists to describe, preserve and visualize their experimental data by linking the datasets with the experiments along with the execution environment and images. It provides extensive features, including the semantic representation of experimental data using the REPRODUCE-ME ontology and computational reproducibility support using ProvBook. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> depicts the architecture of CAESAR. We describe briefly the core modules of CAESAR required for the end-to-end provenance management.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Capture.</ns0:head><ns0:p>This module provides a metadata editor with a rich set of features allowing the scientists to easily record all the data of the non-computational steps performed in their experiments and the protocols, the materials, etc. This metadata editor is a form-based provenance capture system. It provides the feature to document the experimental metadata and interlink with other experiment databases. An Experiment form is the key part of this system that documents all the information about an experiment. This data includes the temporal and spatial properties, the experiment's research context, and other general settings used in the experiment. The materials and other resources used in an experiment are added as new templates and linked to the experiment. The templates are added as a service as well as a database table in CAESAR and are also available as API, thus allowing the remote clients to use them.</ns0:p><ns0:p>The user and group management provided by OMERO is adopted in CAESAR to manage users in groups and provides roles for these users. The restriction and modification of data are managed using the roles and permissions that are assigned to the users belonging to a group. The data is made available between the users in the same group in the same CAESAR server. Members of other groups can share the data based on the group's permission level. A user can be assigned any of the role of Administrator, Group Owner or Group Member. An Administrator controls all the settings of the groups. A Group Owner has the right to add other members to the group. A Group Member is a standard user in the group. There are also various permission levels in the system. Private is the most restrictive permission level, thus providing the least collaboration level with other groups in the system. A private group owner can view and control the members and data of the members within a group; While a private group member can view and control only his/her data. The Read-only is an intermediate permission level. In addition to their</ns0:p></ns0:div> <ns0:div><ns0:head>6/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:06:62007:4:0:NEW 13 Feb 2022)</ns0:ref> Manuscript to be reviewed Computer Science group, the group owners can read and perform some annotations on members' data from other groups.</ns0:p><ns0:p>The group members don't have permission to annotate the datasets from other groups, thus providing only viewing and reading possibilities. The Read-annotate permission level provides a more collaborative option where the group owners and members can view the other groups' members as well as read and annotate their data. 
The Read-write permission level allows the group members to read and write data just like their own group.</ns0:p><ns0:p>CAESAR adopts this role and permission levels to control the access and modification of provenance information of experiments. A Principal Investigator (PI) can act as a group owner and students as group members in a private group. PIs can access students' stored data and decide which data can be used to share with other collaborative groups. A Read-only group can serve as a public repository where the original data and results for the publications are stored. A Read-annotate group is suitable for collaborative teams to work together for a publication or research. Every group member is trusted and given equal rights to view and access the data in a Read-write group, thus providing a very collaborative environment. This user and group management paves the way for collaboration among teams in research groups and institutes before the publication is made available online.</ns0:p><ns0:p>This module also allows scientists to interlink the dependencies of many materials, samples, input files, measurement files, images, standard operating procedures, and steps to an experiment. Users can also attach files, scripts, publications, or other resources to any steps in an experiment form. They can annotate these resources as an input to a step or intermediate result of a step. Another feature of this module is to help the scientists to reuse resources rather than do it from scratch, thus enabling a collaborative environment among teams and avoiding replicating the experimental data. This is possible by sharing the descriptions of the experiments, standard operating procedures, and materials with the team members within the research group. Scientists can reference the descriptions of the resources in their experiments.</ns0:p><ns0:p>Version management plays a key role in data provenance. In a collaborative environment, where the experimental data are shared among the team members, it is important to know the modifications made by the members of the system and track the history of the outcome of an experiment. This module provides version management of the experimental metadata by managing all the changes made in the description of the experimental data. The plugin allows users to view the version history and compare two different versions of an experiment description. The file management system in CAESAR stores all files and index them to the experiment, which is annotated as input data, measurement data, or other resources. The user can organize the input data, measurement data, or other resources in a hierarchical structure based on their experiments and measurements using this plugin.</ns0:p><ns0:p>CAESAR also provides a database of Standard Operating Procedures. These procedures in life-sciences provide a set of step-by-step instructions to carry out a complex routine. In this database, the users can store the protocols, procedures, scripts, or Jupyter Notebooks based on their experiments, which have multiple non-computational and computational steps. The users can also link these procedures to the step in an experiment where they were used. The plugin also provides users the facility to annotate the experimental data with terms from other ontologies like GO <ns0:ref type='bibr' target='#b3'>(Ashburner et al., 2000)</ns0:ref>, CMPO <ns0:ref type='bibr' target='#b28'>(Jupp et al., 2016)</ns0:ref>, etc. 
in addition to REPRODUCE-ME Ontology.</ns0:p><ns0:p>If a user is restricted to make modifications to other members' data due to permission level, the plugin Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and its schema consist of important classes which are based on the REPRODUCE-ME Data Model <ns0:ref type='bibr' target='#b58'>(Samuel, 2019a)</ns0:ref>. For the data management of images, the main classes include Project, Dataset, Folder, Plate, Screen, Experiment, Experimenter, ExperimenterGroup, Instrument, Image, StructuredAnnotations, and ROI. Each class provides a rich set of features, including how they are used in an experiment. The Experiment, which is a subclass of Plan <ns0:ref type='bibr' target='#b21'>(Garijo &amp; Gil, 2012)</ns0:ref>, links all the provenance information of a scientific experiment together. We use the REPRODUCE-ME ontology to map the relational data in the OMERO and the ReceptorLight database using Ontop. The provenance information of computational notebooks is also semantically represented and is combined with other experimental metadata, thus providing the context of the results. This helps the scientists and machines to understand the experiments along with their context. There are around 800 mappings to create the virtual RDF graph. All the mappings are publicly available. We refer the authors to the publication for the complete documentation of the REPRODUCE-ME ontology <ns0:ref type='bibr' target='#b59'>(Samuel, 2019b)</ns0:ref> and the database schema is available in the Supplementary file.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational Reproducibility</ns0:head><ns0:p>The introduction of computational notebooks, which allow scientists to share the code along with the documentation, is a step towards computational reproducibility. Scientists widely use Jupyter Notebooks to perform several tasks, including image processing and analysis. As the experimental data and images are contained in the CAESAR itself, another requirement is to provide a computational environment for scientists to include the scripts that analyze the data stored in the platform. To create a collaborative research environment for the scientists working with images and Jupyter Notebooks, JupyterHub is installed and integrated with CAESAR. This allows the scientists to directly access the images and datasets stored in CAESAR using the API and perform data analysis or processing on them using Jupyter Notebooks. The new images and datasets created in the Jupyter Notebooks can then be uploaded and linked to the original experiments to CAESAR using the APIs. To capture the provenance traces of the computational steps in CAESAR, we introduce ProvBook, an extension of Jupyter notebooks to provide provenance support <ns0:ref type='bibr' target='#b63'>(Samuel &amp; K&#246;nig-Ries, 2018b)</ns0:ref>. It is an easy-to-use framework for scientists and developers to efficiently capture, compare, and visualize the provenance data of different executions of a notebook over time. To capture the provenance of computational steps and support computational reproducibility, ProvBook is installed in JupyterHub and integrated with CAESAR. We briefly describe the modules provided by ProvBook.</ns0:p><ns0:p>Capture, Management, and Representation. This module captures and stores the provenance of the execution of Jupyter Notebooks cells over the course of time. 
A Jupyter Notebook, stored as a JSON file format, is a dictionary with the following keys: metadata and cells. The metadata is a dictionary that contains information about the notebook, its cells, and outputs. The cell contains information on all cells, including the source, the type of the cell, and its metadata. As Jupyter notebooks allow the addition of custom metadata to its content, the provenance information captured by the ProvBook is added to the metadata of each cell of the notebook in the JSON format. ProvBook captures the provenance information, including the start and end time of each execution, the total time it took to run the code cell, the source code, and the output obtained during that particular execution. The execution time for a computational task was added as part of the provenance metadata in a Notebook since it is important to check the performance of the task. The start and end times also act as an indicator of the execution order of the cells. Users can execute cells in any order, so adding the start and end time helps them check when a particular cell was last executed. The users can change the parameters and source code in each cell until they arrive at their expected result. This helps the user to track the history of all the executions to see which parameters were changed and how the results were derived.</ns0:p><ns0:p>ProvBook also provides a module that converts the computational notebooks along with the provenance information of their executions and execution environment attributes into RDF. The REPRODUCE-ME ontology represents this provenance information. ProvBook allows the user to export the notebook in RDF as a turtle file either from the user interface of the notebook or using the command line. The users can share a notebook and its provenance in RDF and convert it back to a notebook. The reproducibility service provided by ProvBook converts the provenance graph back to a computational notebook along with its provenance. The Jupyter notebooks and the provenance information captured by ProvBook in RDF are then linked to the provenance of the experimental metadata in CAESAR. After the two executions are selected by the user, the difference in the input and the output of these executions are shown side by side. The users can select their own execution and compare the results with the original experimenter's execution of the Jupyter Notebook. Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> shows the differences between the source and output of two different code cell execution. ProvBook highlights any differences in the source or output for the user to distinguish the change. The provenance difference module is developed by extending the nbdime (Project Jupyter, 2021) library from the Project Jupyter. The nbdime tools provide the ability to compare notebooks and also a three-way merge of notebooks with auto-conflict resolution.</ns0:p><ns0:p>ProvBook calls the API from the nbdime to see the difference between the provenance of two executions of a notebook code cell. Using nbdime, ProvBook provides diffing of notebooks based on the content and renders image-diffs correctly. This module in CAESAR helps scientists to compare the results of the executions of different users.</ns0:p><ns0:p>Visualization. Efficient visualization is important to make meaningful interpretations of the data. The provenance information of each cell captured by ProvBook is visualized below every input cell. 
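To make this concrete, the following sketch reads back the kind of per-cell record this mechanism produces; the notebook file name and the metadata key and field names are illustrative assumptions, not ProvBook's exact schema:

import json

# Read a notebook that has been executed with ProvBook enabled ("analysis.ipynb" is hypothetical).
with open("analysis.ipynb", encoding="utf-8") as f:
    notebook = json.load(f)

for index, cell in enumerate(notebook.get("cells", [])):
    if cell.get("cell_type") != "code":
        continue
    # ProvBook stores its records inside the cell metadata; the key and field names
    # below are illustrative placeholders.
    runs = cell.get("metadata", {}).get("provenance", [])
    for run in runs:
        print(index,
              run.get("start_time"),
              run.get("end_time"),
              run.get("execution_time"),
              len(run.get("outputs", [])))

Because the records live in the cell metadata, they travel with the .ipynb file whenever the notebook is shared.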
In this provenance area, a slider is provided so that the user can drag to view the history of the different executions of the cell. This area also provides the user with the ability to track the history and compare the current results with several previous results and see the difference that occurred. The user can visualize the provenance information of a selected cell or all cells by clicking on the respective buttons in the toolbar. The user can also clear the provenance information of a selected cell or all cells if required. This solution tries to address the problem of having larger provenance information than the original notebook data.</ns0:p></ns0:div> <ns0:div><ns0:head>Provenance Visualization.</ns0:head><ns0:p>We have described above the visualization of the provenance of cells in Jupyter Notebooks. In this section, we look at the visualization of overall scientific experiments. We present two modules for the visualization of the provenance information of scientific experiments captured, stored, and semantically represented in CAESAR: Dashboard and ProvTrack. The experimental data provided by scientists through the metadata editor, the metadata extracted from the images and instruments, and the details of the computational steps collected from ProvBook together are integrated, linked, and represented using the REPRODUCE-ME ontology. All this provenance data, stored as linked data, form the basis for the complete path of a scientific experiment and is visualized in CAESAR. Dashboard. This visualization module aggregates all the data related to an experiment and project in a single place. We provide users with two views: one at the project level and another at the experiment level. The Dashboard at the project level provides a unified view of a research project containing multiple experiments by different agents. When a user selects a project, the Project Dashboard is activated, while the Experiment Dashboard is activated when a dataset is selected. The Dashboard is composed of several panels. Each panel provides a detailed view of a particular component of an experiment. The data inside a panel is displayed in tables. The panels are arranged in a way that they provide the story of an experiment.</ns0:p><ns0:p>A detailed description of each panel is provided <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Users can also search and filter the data based on keywords inside a table in the panel.</ns0:p><ns0:p>ProvTrack. This visualization module provides users with an interactive way to track the provenance of experimental results. The provenance of experiments is provided using a node-link representation, thus, helping the user to backtrack the results. Users can drill-down each node to get more information and attributes. This module which is developed independently, is integrated into CAESAR. The provenance graph is based on the data model represented by REPRODUCE-ME ontology. We query the SPARQL endpoint to get the complete path of a scientific experiment. We make several SPARQL queries, and the results are combined to display this complete path and increase the system's performance. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the visualization of the provenance of an experiment using ProvTrack. CAESAR allows users to select an experiment to track its provenance.</ns0:p><ns0:p>The provenance graph is visualized in the right panel when the user selects an experiment. 
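Under the hood, this right-panel graph is assembled from the results of SPARQL queries against the endpoint. The following client-side sketch illustrates the kind of call involved; the endpoint URL and the repr namespace URI are assumptions for illustration, and the query is a simplified stand-in rather than one of the project's actual (split and recombined) queries, for which Listings 1 and 2 give real examples.

# Hedged sketch: fetching part of one experiment's provenance for rendering.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8080/sparql")  # hypothetical endpoint URL
# The repr namespace URI below is an assumption for this sketch.
sparql.setQuery("""
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX repr:   <https://w3id.org/reproduceme#>

SELECT ?step ?agent ?dataset WHERE {
  ?experiment a repr:Experiment ;
              prov:wasAttributedTo ?agent ;
              repr:hasDataset ?dataset .
  ?step p-plan:isStepOfPlan ?experiment .
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["step"]["value"], row["agent"]["value"], row["dataset"]["value"])

Each returned binding would contribute a node or an edge of the provenance graph.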
Each node in the provenance graph is colored based on its type like prov:Entity, prov:Agent, prov:Activity, p-plan:Step, p-plan:Plan and p-plan:Variable. The user can expand the provenance graph by opening up all nodes using the Expand All button next to the help menu. Using the Collapse All button, users can collapse the provenance graph to one node, which is the Experiment. ProvTrack shows the property relationship between two nodes when a user hovers on an edge. The help menu provides the user with the meaning of each color in the graph. The path from the user-selected node to the first node (Experiment) is highlighted to show the relationship of each node with the Experiment and also to see where the node is in the provenance graph.</ns0:p><ns0:p>ProvTrack also provides an Infobox of the selected node of an experiment. It displays the additional information about the selected node as a key-value pair. The keys in the Infobox are either the object or data properties of the REPRODUCE-ME ontology that is associated with the node that the user has </ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>We evaluate different aspects of our work based on our main research question: How can we capture, represent, manage and visualize a complete path taken by a scientist in an experiment, including the computational and non-computational steps to derive a path towards experimental results? As the main research question has a broad scope and it is challenging to evaluate every part within the limited time and resources, we break the main research questions into smaller parts. We divide the evaluation into three parts based on the smaller questions and discuss their results separately. In the first part, we address the question of capturing and representing the complete path of a scientific experiment which includes both computational and non-computational steps. For this, we use the non-computational data captured in CAESAR and the computational data captured using ProvBook. The role of the REPRODUCE-ME ontology in semantically representing the complete path of a scientific experiment is evaluated using the knowledge base in CAESAR using the competency question-based evaluation. In the second part, we address the question of supporting reproducibility by capturing and representing the provenance of computational experiments. For this, we address the role of ProvBook in terms of reproducibility, performance, and scalability. We focused on evaluating ProvBook as a stand-alone tool and also with the integration with CAESAR. In the third part, we address the question of representing and visualizing the complete path of scientific experiments to the users of CAESAR. For this, we performed an evaluation by conducting a user-based study to get general impression of the tool and use this as a feedback to improve the tool. Scientists from within and outside the project were involved in all the three parts of our evaluation as system's users as well as participants. <ns0:ref type='bibr' target='#b7'>Brank et al. (2005)</ns0:ref> point out different methods of evaluating ontologies. In application-based evaluation, the ontology under evaluation is used in an application/system to produce good results on a given task. Answering the competency questions over a knowledge base is one of the approaches to testing ontologies <ns0:ref type='bibr' target='#b45'>(Noy et al., 2001)</ns0:ref>. 
Here, we applied the ontology in an application system and answered the competency questions over a knowledge base. Hence, we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase. We used the REPRODUCE-ME ontology to answer the competency questions using the scientific experiments documented in CAESAR for its evaluation. We also used imaging experiments <ns0:ref type='bibr' target='#b70'>(Williams et al., 2017)</ns0:ref> for our evaluation to ensure that the REPRODUCE-ME ontology can be used to describe other types of experiments as well. The description of the scientific experiments, along with the steps, experiment materials, settings, and standard operating procedures, using the REPRODUCE-ME ontology is available in CAESAR to its users and the evaluation participants.</ns0:p></ns0:div> <ns0:div><ns0:head>Competency question-based evaluation</ns0:head><ns0:p>The scientific experiments that used Jupyter Notebooks are linked with the provenance of these notebooks captured using ProvBook in CAESAR. This information described using the REPRODUCE-ME ontology is also available in CAESAR for the participants (Listing 2). We created a knowledge base of different types of experiments from these two sources. The competency questions, which were translated into SPARQL queries by computer scientists, were executed on our knowledge base, consisting of linked data in CAESAR. The domain experts evaluated the correctness of the answers to these competency questions.</ns0:p><ns0:p>We present here one competency question with the corresponding SPARQL query and part of the results obtained on running it against the knowledge base. The result of each query is a long list of values; hence, we show only the first few rows. This query (Listing 1) is responsible for getting the complete path for an experiment; its WHERE clause further contains the graph patterns ?dataset prov:hadMember ?image . ?instrument p-plan:correspondsToVariable ?image ; repr:hasPart ?instrumentpart . ?instrumentpart repr:hasSetting ?setting . ?plan p-plan:isSubPlanOfPlan ?experiment . Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref> shows part of the result for a particular experiment called 'Focused mitotic chromosome condensation screen using HeLa cells'. Here, we queried the experiment with its associated agents and their role, the plans and steps involved, the input and output of each step, the order of steps, and the instruments and their settings. We see that all these elements are now linked to the computational and non-computational steps to describe the complete path. We can further expand this query by asking for additional information like the materials, publications, external resources, methods, etc., used in each step of an experiment. It is possible to query for all the elements mentioned in the REPRODUCE-ME Data Model.</ns0:p><ns0:p>The domain experts reviewed, manually compared, and evaluated the correctness of the results from the queries using the Dashboard. Since the domain experts are not familiar with SPARQL, they did this verification process with the help of the Dashboard. This helped them get a complete view of the provenance of scientific experiments.
In this verification process, the domain experts observed that the query returned null for certain experiments that did not provide the complete data for some elements. So the computer scientists from the project tweaked the query to include the OPTIONAL keyword to get the results from the query. Even after tweaking the results, the results from these competency questions were not complete. Missing data is especially seen in the non-computational part of an experiment than its computational part. This shows that the metadata entered by the scientists in CAESAR is Manuscript to be reviewed Computer Science not complete and requires continuous manual annotation. Another thing that we noticed during the evaluation is that the results are spread across several rows in the table. In the Dashboard, when we show these results, the filter option provided in the table helps the user to search for particular columns.</ns0:p><ns0:p>The domain experts manually compared the results of SPARQL queries using Dashboard and ProvTrack and evaluated their correctness <ns0:ref type='bibr'>(Samuel et al., 2018)</ns0:ref>. Each competency question addressed the different elements of the REPRODUCE-ME Data Model. The competency questions, the RDF data used for the evaluation, the SPARQL queries, and the answers to these queries are publicly available <ns0:ref type='bibr' target='#b60'>(Samuel, 2021)</ns0:ref>.</ns0:p><ns0:p>The data provides ten competency questions which are translated to SPARQL queries. The answers to these SPARQL queries provide information on the steps, plans, methods, standard operating procedures, instruments and their settings, materials used in each step, agents directly and indirectly involved, and the temporal parameters of an experiment. To summarize, the domain scientists found the answers to the competency questions satisfactory with the additional help of computer scientists tweaking the SPARQL queries. However, the answers to the competency questions showed the missing provenance information, which needs user input in CAESAR.</ns0:p></ns0:div> <ns0:div><ns0:head>Data and user-based evaluation of ProvBook</ns0:head><ns0:p>In this section, we address how ProvBook supports computational reproducibility as a stand-alone tool and also with the integration with CAESAR. To address the second part of the main research question, we evaluate the role of ProvBook in supporting computational reproducibility using Jupyter Notebooks. We did the evaluation taking into consideration different use cases and factors. They are:</ns0:p><ns0:p>1. Repeatability: The computational experiment is repeated in the same environment by the same experimenter. We performed this to confirm the final results from the previous executions.</ns0:p><ns0:p>2. Reproducibility: The computational experiment is reproduced in a different environment by a different experimenter. In this case, we compare the results of the Jupyter notebook in the original environment with the results from a different experimenter executed in a different environment.</ns0:p><ns0:p>3. The input, output, execution time and the order in two different executions of a notebook. 4. Provenance difference of the results of a notebook. 5. Performance of ProvBook with respect to time and space.</ns0:p><ns0:p>6. The environmental attributes in the execution of a notebook. 7. 
Complete path taken by a computational experiment with the sequence of steps in the execution of a notebook with input parameters and intermediate results in each step required to generate the final output.</ns0:p><ns0:p>We used an example Jupyter Notebook which uses a face recognition example where eigenface and SVM algorithms from scikit-learn 3 are applied. This script provides a computational experiment to extract information from an image dataset that is publicly available using machine learning techniques. We used this example script to show the different use cases of our evaluation and how ProvBook handles different output formats like image, text, etc. These formats are also important for the users of CAESAR.</ns0:p><ns0:p>We use Original Author to refer to the author who is the first author of the notebook and User 1 and User 2 to the authors who used the original notebook to reproduce results. We first saved the notebook without any outputs. Later, two different users executed this notebook in three different environments Using ProvBook, User 1 could track the changes and compare with the original script, while User 2 could compare the changes with the executions from the original author, User 1 and his own execution.</ns0:p><ns0:p>Since this example is based on a small number of users, an extensive user-based evaluation is required to conclude the role of ProvBook for the increased performance in these use cases. However, we expect that ProvBook play a role in helping users in such use cases to track the provenance of experiments. The results show that ProvBook can handle different output types, which Jupyter Notebooks support. We also evaluated the performance of ProvBook with respect to space and time. The difference in the run time of each cell with and without ProvBook was negligible. Concerning space, the size of the Jupyter Notebook with provenance information of several executions was more than the original notebook. This is stated in <ns0:ref type='bibr' target='#b14'>(Chapman et al., 2008)</ns0:ref> that the size of the provenance information can grow more than the actual data. In the following scenario, we evaluated the semantic representation of the provenance of computational notebooks by integrating ProvBook with CAESAR. Listings 2 shows the SPARQL query of the complete path for a computational notebook with input parameters and intermediate results in each step required to generate the final output. It also queries the sequence of steps in its execution. We can expand this query to get information on the experiment in CAESAR which uses a notebook. The result of this query, which is available in the Supplementary file, shows the steps, the execution of each step with their total run time, the input and output data of each run, and the order of execution of steps of a notebook.</ns0:p><ns0:p>User-based evaluation of CAESAR This evaluation addresses a smaller component of the third part of the main research question. Here, we focus on the visualization of the complete path of scientific experiments to the users of CAESAR. We conducted a user-based study to get the general impression of the tool and used this as feedback to improve the tool. This study aimed to evaluate the usefulness of CAESAR and its different modules, particularly the visualization module. We invited seven participants for the survey, of which six participants responded to the questions. The evaluation participants were the scientists of the ReceptorLight project who use CAESAR in their daily work. 
In addition to them, other biology students, who closely work with microscopy images and are not part of the ReceptorLight project, participated in this evaluation. We provided an introduction of the tool to all the participants and provided a test system to explore all the features of CAESAR. The scientists from the ReceptorLight project were given training on CAESAR and its workflow on documenting experimental data. Apart from the internal meetings, we provided the training from <ns0:ref type='bibr'>2016-2018 (17.06.2016, 19.07.2016, 07.06.2017, 09.04.2018, and 16.06.2018)</ns0:ref>. We asked scientists to upload their experimental data to CAESAR as part of this training.</ns0:p><ns0:p>At the evaluation time, we provided the participants with the system with real-life scientific experiment data as mentioned in the competency question evaluation subsection. The participants were given the system to explore all the features of CAESAR. As our goal of this user survey was to get feedback from the daily users and new users and improve upon their feedback, we let the participants answer the relevant questions. As a result, the user survey was not anonymous, and none of the questions in the user survey was mandatory. However, only 1 participant who was not part of the ReceptorLight project did not answer all the questions. The questionnaire and the responses are available in the Supplementary file.</ns0:p><ns0:p>In the first section of the study, we asked how the features in CAESAR help improve their daily research work. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results. 83% of the participants either strongly agreed or agreed that it helps to visualize all the experimental data and results effectively, while 17% disagreed on that. In the next section, we asked about the perceived usefulness of CAESAR. 60% of the users consider CAESAR user-friendly, while 40% had a neutral response. 40% of the participants agreed that CAESAR is easy to learn to use, and 60% had a neutral response. The participants provided additional comments to this response that CAESAR offers many features, and they found it a little difficult to follow. However, all the participants strongly agreed or agreed that CAESAR is useful for scientific data management and provides a collaborative environment among teams.</ns0:p><ns0:p>In the last section, we evaluated each feature provided by CAESAR by focusing on the important visualization modules. Here, we showed a real-life scientific experiment with Dashboard and ProvTrack views. We asked the participant to explore the various information, including the different steps and materials used in the experiment. Based on their experience, we asked the participants about the likeability of the different modules. ProvTrack was strongly liked or liked by all the participants. For the Dashboard, 80% of them either strongly liked or liked, while 20% had a neutral response. 60% of the users strongly liked or liked ProvBook, while the other 40% had a neutral response. The reason for the neutral response was that they were new to scripting. We also asked to provide the overall feedback of CAESAR along with its positive aspects and the things to improve. 
We obtained three responses to this question which are available in the Supplementary File.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Provenance plays a key role in supporting the reproducibility of results, which is an important concern in data-intensive science. Through CAESAR, we aimed to provide a data management platform for capturing, semantically representing, comparing, and visualizing the provenance of scientific experiments, including their computational and non-computational aspects. CAESAR is used and deployed in the CRC ReceptorLight project, where scientists work together to understand the function of membrane receptors and develop high-end light microscopy techniques. In the competency question-based evaluation, we focused on answering the questions using the experimental provenance data provided by scientists from the research projects, which was then managed and semantically described in CAESAR. Answering the competency questions using SPARQL queries shows that some experiments documented in CAESAR had missing provenance data on some of the elements of the REPRODUCE-ME Data Model like time, settings, etc. We see that CAESAR requires continuous user involvement and interaction in documenting non-computational parts of an experiment. Reproducing an experiment is currently not feasible unless every step in CAESAR is machine-controlled. In addition to that, the output of the query for finding the complete path of the scientific experiment results in many rows in the table. Therefore, the response time could exceed the normal query response time and result in server error from the SPARQL endpoint in some cases where the experiment has various inputs and outputs with several executions. Currently, scientists from the life sciences do not have the knowledge of Semantic Web technologies and are not familiar with writing their SPARQL queries. Hence, we did not perform any user study on writing SPARQL queries to answer competency questions. However, scientists must be able to see the answers to these competency questions and explore the complete path of a scientific experiment. To overcome this issue, we split the queries and combined their results in ProvTrack. The visualization modules, Dashboard, and ProvTrack, which use SPARQL and linked data in the background, visualize the provenance graph of each scientific experiment. ProvTrack groups the entities, agents, activities, steps, and plans to help users visualize the complete path of an experiment. In the data and user-based evaluation, we see the role of Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>ProvBook as a stand-alone tool to capture the provenance history of computational experiments described using Jupyter Notebooks.</ns0:p><ns0:p>We see that each item added in the provenance information in Jupyter Notebooks, like the input, output, starting and ending time, helps users track the provenance of results even in different execution environments. The Jupyter Notebooks shared along with the provenance information of their executions helps users to compare the original intermediate and final results with the results from the new trials executed in the same or different environment. Through ProvBook, the intermediate and negative results and the input and the output from different trials are not lost. The execution environmental attributes of the computational experiments along with their results, help to understand their complete path. 
We also see that we could describe the relationship between the results, the execution environment, and the executions that generated the results of a computational experiment in an interoperable way using the REPRODUCE-ME ontology. The knowledge capture of computational experiments using notebooks and scripts is ongoing research, and many research questions are yet to be explored. ProvBook currently does not extract semantic information from the cells. This includes information like the libraries used, variables and functions defined, input parameters and output of a function, etc. In CAESAR, we currently link a whole cell as a step of a notebook which is linked to an experiment. Hence, the fine provenance information of a computational experiment is currently missing and thus not linked to an experiment to get the complete path of a scientific experiment.</ns0:p><ns0:p>The user-based evaluation of CAESAR aimed to see how the users find CAESAR useful concerning the features it provides. We targeted both the regular users and the new users to the system. As we had a small group of participants, we could not make general conclusions from the study. However, the study participant either agreed or liked its features. The survey results in <ns0:ref type='bibr' target='#b64'>(Samuel &amp; K&#246;nig-Ries, 2021)</ns0:ref> had shown that newcomers face difficulty in finding, accessing, and reusing data in a team. We see an agreement among the participants that CAESAR helps preserve data for the newcomers to understand the ongoing work in the team. This understanding of the ongoing work in the team comes from the linking of experimental data and results. The results from the study show that among the two visualization modules, ProvTrack was preferred over Dashboard by scientists. Even though both serve different purposes (Dashboard for an overall view of the experiments conducted in a Project and the ProvTrack for backtracking the results of one experiment), the users preferred the provenance graph to be visualized with detailed information on clicking. The survey shows that the visualization of the experimental data and results using ProvTrack supported by the REPRODUCE-ME ontology helps the scientists without worrying about the underlying technologies. All the participants either strongly agreed or agreed that CAESAR enables them to organize their experimental data efficiently, preserve data for the newcomers, search all the data, provide a collaborative environment and link the experimental data with results.</ns0:p><ns0:p>However, a visualization evaluation is required to properly test the Dashboard and ProvTrack views, which will help us to determine the usability of visualizing the complete path of a scientific experiment.</ns0:p><ns0:p>The limitation of our evaluation is the small number of user participation. Hence, we cannot make any statistical conclusion on the system's usefulness. However, CAESAR is planned to be used and extended for another large research project, Microverse 5 , which will allow for a more scalable user evaluation. One part of the provenance capture module depends on the scientists to document their experimental data.</ns0:p><ns0:p>Even though the metadata from the images captures the execution environment and the devices' settings, the need for human annotations to the experimental datasets is significant. Besides this limitation, the mappings for the ontology-based data access required some manual curation. 
This can affect when the database is extended for other experiment types.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, we presented CAESAR. It provides a collaborative framework for the management of scientific experiments, including the computational and non-computational steps. The provenance of the scientific experiments is captured and semantically represented using the REPRODUCE-ME ontology. ProvBook helps the user capture, represent, manage, visualize and compare the provenance of different executions of computational notebooks. CAESAR links the computational data and steps to the non-computational data and steps to represent the complete path of the experimental workflow.</ns0:p><ns0:p>The visualization modules of CAESAR provides users to view the complete path and backtrack the Manuscript to be reviewed Computer Science provenance of results. We applied our contributions together in the ReceptorLight project to support the end-to-end provenance management from the beginning of an experiment to its end. There are several possibilities to extend and improve CAESAR. We expect this approach to be extended to different types of experiments in diverse scientific disciplines. Reproducibility of non-computational parts of an experiment is our future line of work. We can reduce the query time for the SPARQL queries in the project dashboard and ProvTrack by taking several performance measures. CAESAR could be extended to serve as a public data repository providing DOIs to the experimental data and provenance information. This would help the scientific community to track the complete path of the provenance of the results described in the scientific publications. Currently, CAESAR requires continuous user involvement and interaction, especially through different non-computational steps of an experiment. The integration of persistent identifiers for physical samples and materials into scientific data management can lower the effort of user involvement. However, at this stage, reproducibility is not a one-button solution where reproducing an experiment is not feasible unless every step is machine-controlled.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:4:0:NEW 13 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:4:0:NEW 13 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>1Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of CAESAR. The data management platform consists of modules for provenance capture, representation, storage, comparison, and visualization. It also includes several additional services including API access, and SPARQL Endpoint.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>provides a feature called Proposal to allow users to propose changes or suggestions to the experiment. As a result, the experiment owner receives those suggestions as proposals. The user can either accept the proposal and add it to the current experimental data or reject and delete the proposal. The plugin provides autocompletion of data to fasten the process of documentation. For example, based on the CAS number of the chemical provided by the user, the molecular weight, mass, structural formulas are fetched from the CAS registry and populated in the Chemical database. 
The plugin also provides additional data from the external servers for other materials like Protein, Plasmid, and Vector. The plugin also autofills the data about the authors and other publication details based on the DOI/PubMedId of the publications. It also provides a virtual keyboard to aid the users in documenting descriptions with special characters, chemical formulas, or symbols. Provenance Management and Representation. We use a PostgreSQL database in OMERO as well as in CAESAR. The OMERO database consists of 145 tables, and the ReceptorLight database consists of 35 tables in total. We use the REPRODUCE-ME ontology to model and describe the experiments and their provenance in CAESAR. The database model 7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:4:0:NEW 13 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The difference between the input and output of two different execution of a code cell in ProvBook. Deleted elements are marked in red, newly added or created elements are marked in green.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. ProvTrack: Tracking Provenance of Scientific Experiments</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>clicked. Links are provided to the keys to get their definitions from the web. It also displays the path of the selected node from the Experiment node on top of the left panel. The Search panel allows the users to 10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:06:62007:4:0:NEW 13 Feb 2022) Manuscript to be reviewed Computer Science search for any entities in the graph defined by the REPRODUCE-ME data model. It provides a dropdown to search not only the nodes but also the edges. This comes handy when the provenance graph is very large.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>did the evaluation on a server (installed with CentOS Linux 7 and with x86-64 architecture) hosted at the University Hospital Jena. To address the first part of the main research question, scientists from B1 and A4 projects of ReceptorLight documented experiments using confocal patch-clamp fluorometry (cPCF), F&#246;rster Resonance Energy Transfer (FRET), PhotoActivated Localization Microscopy (PALM) and direct Stochastic Optical Reconstruction Microscopy (dSTORM) as part of their daily work. In 23 projects, a total of 44 experiments were recorded and uploaded with 373 microscopy images generated from different instruments with various settings using either the desktop client or webclient of CAESAR (Accessed 21 April 2019). We also used the Image Data Repository (IDR) datasets (IDR, 2021) with around 35</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:06:62007:4:0:NEW 13 Feb 2022) Manuscript to be reviewed Computer Science Listing 1. What is the complete path taken by a scientist for an experiment? SELECT DISTINCT * WHERE { ? e x p e r i m e n t a r e p r : E x p e r i m e n t ; p r o v : w a s A t t r i b u t e d T o ? a g e n t ; r e p r : h a s D a t a s e t ? d a t a s e t ; p r o v : g e n e r a t e d A t T i m e ? g e n e r a t e d A t T i m e . ? a g e n t r e p r : h a s R o l e ? 
role .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A part of the results for the competency question</ns0:figDesc><ns0:graphic coords='13,141.74,269.08,413.55,226.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>Ubuntu 18.10 with Python 3, Ubuntu 18.04 with Python 2 and 3, Fedora with Python 3). Both users used ProvBook in their Jupyter Notebook environment. The first run of the eigenfaces Jupyter Notebook gave a ModuleNotFoundError for User 1. User 1 attempted several runs to solve the issue. For User 1, installing the missing scikit-learn module still did not solve the issue. The problem occurred because of the version change of the scikit-learn module. The original Jupyter Notebook used version 0.16 of scikit-learn, while User 1 used version 0.20.0 and User 2 used 0.20.3. The classes and functions from the cross validation, grid search, and learning curve modules were placed into a new model selection module starting from scikit-learn 0.18. User 1 made several other changes in the script which used these functions and made the necessary changes to work with the new version of the scikit-learn module. We provided this changed notebook, along with the provenance information captured by ProvBook in User 1's notebook environment, to User 2. For User 2, only the first run gave a ModuleNotFoundError. User 2 resolved this issue by installing the scikit-learn module. User 2 did not have to change the scripts, as User 2 could see the provenance history of the executions from the original author, User 1, and his own execution.</ns0:figDesc></ns0:figure> <ns0:note place='foot' n='3'>https://scikit-learn.org/0.16/datasets/labeled_faces.html</ns0:note> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>We also performed tests to see the input, output, and run time in different executions in different environments. The files in the Supplementary information provide the information on this evaluation by showing the difference in the execution time of the same cell in a notebook in different execution environments. One of the cells in the evaluation notebook downloads a set of preprocessed images from Labeled Faces in the Wild (LFW) 4 which contains the training data for the face recognition study. The execution of this cell took around 41.3ms in the first environment (Ubuntu 18.10), 2 min 35s in the second environment (Ubuntu 18.04), and 3 min 55s in the third environment (Fedora). The different execution environments play a role in computational experiments, which is shown with the help of ProvBook. We also show how one change in a previous cell of the notebook resulted in a difference in the intermediate result in two different executions by two different users in two different environments. We evaluated the provenance capture and difference module in ProvBook with different output types, including images.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Listing 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Complete path for a computational notebook experiment SELECT DISTINCT * WHERE { ?step p-plan:isStepOfPlan ?notebook . ?notebook a repr:Notebook . ?execution p-plan:correspondsToStep ?step ; repr:executionTime ?executionTime . ?step p-plan:hasInputVar ?inputVar ; p-plan:hasOutputVar ?outputVar ; p-plan:isPrecededBy ?previousStep . }</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>5</ns0:head><ns0:label /><ns0:figDesc>https://microverse-cluster.de</ns0:figDesc></ns0:figure> <ns0:note place='foot' n='4'>http://vis-www.cs.umass.edu/lfw/</ns0:note> </ns0:body> "
"Dear Editor, Thank you for your message “Decision on your PeerJ Computer Science submission: 'A collaborative semantic-based provenance management platform for reproducibility' (#CS-2021:06:62007:3:0:REVIEW) from 9 February, 2022. We thank you for your work and the reviewers for their thoughtful and thorough suggestions and comments, which have helped us to improve significantly the quality of the manuscript. Please find below our point-by-point responses in text with bold and italic formatting. The submission is updated accordingly and further includes the editorial changes requested by the PeerJ team. We look forward to hearing from you! Best Regards, Sheeba Samuel and Birgitta König-Ries Editor's Decision Minor Revisions One of the reviewers is still asking for a statement summarizing the results of the evaluation. Please take a look at the revision and resubmit your paper. Response: We have addressed the reviewer 3 comments and made the corresponding changes in the manuscript as well. Reviewer 1 I understand that the authors addressed all of my previous comments, and it is ready for publication. Response: We thank you for your thoughtful and thorough suggestions and comments. Reviewer 3 Basic reporting With respect to the results, I think my previous comments may have misled. I care about what the evaluation **shows**, not specifically what is contained in the supplemental files. The paper states 'we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase'; 'The domain experts reviewed, manually compared, and evaluated the correctness of the results from the queries using the Dashboard'; and 'The domain experts manually compared the results of SPARQL queries using Dashboard and ProvTrack and evaluated their correctness.' I see the text regarding issues with null results and modifications, but I don't see any statement summarizing the results of the evaluation. Did the domain experts find that after the modifications, all the competency questions were satisfactorily answered? If so, please state that. If not, it seems like having details about what was problematic would be helpful. (If I'm missing how the competency evaluation works, perhaps clarifying that in the text would also help. Evaluation to me implies there are results that give an indication of how well something works.) Response: The results of the evaluation are explained in lines 553-572. To summarize the results, we have added an additional statement summarizing the result of this evaluation in line 572-575. 'We used an example Jupyter Notebook which uses a face recognition example applying eigenface algorithm and SVM using scikit-learn' is still not quite right to my ears. Perhaps '... example where eigenface and SVM algorithms from scikit-learn are applied.' Response: We have rephrased the sentence in line 593. "
Here is a paper. Please give your review comments after reading it.
379
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Multi-view clustering (MVC) is a mainstream task that aims to divide objects into meaningful groups from different perspectives. The quality of data representation is the key issue in MVC. A comprehensive meaningful data representation should be with the discriminant characteristics in a single view and the correlation of multiple views.</ns0:p></ns0:div> <ns0:div><ns0:head>Considering this, a novel framework called Dynamic Guided Metric Representation</ns0:head><ns0:p>Learning for Multi-View Clustering (DGMRL-MVC) is proposed in this paper, which can cluster multi-view data in a learned latent discriminated embedding space. Specifically, in the framework, the data representation can be enhanced by multi-steps. Firstly, the class separability is enforced with Fisher Discriminant Analysis (FDA) within each single view, while the consistence among different views is enhanced based on Hilbert-Schmidt independence criteria (HSIC). Then the 1st enhanced representation is obtained. In the second step, a dynamic routing mechanism is introduced, in which the location or direction information is added to fulfil the expression. After that, a generalized canonical correlation analysis (GCCA) model is used to get the final ultimate common discriminated representation. The learned fusion representation can substantially improve multi-view clustering performance. Experiments validated the effectiveness of the proposed method for clustering tasks.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Multi-view data are extremely common in many applications, each individual view and the correlation of multiple views have their specific property for a particular knowledge discovery task. Multi-view data often contain diversity as well as consistent information that should be exploited and fused. Therefore, considering the uniqueness, complementary, and correlation of each view, it is essential to study how to fuse multi-view data efficiently. Recently, increasing research efforts have been made in multi-view learning, where multi-view clustering (MVC) forms a mainstream task that aims to divide subjects into meaningful groups from different perspectives by learning the multi-view information (e.g. <ns0:ref type='bibr'>Yang &amp; Wang,2018)</ns0:ref>. However, a better clustering representation can be learned by simultaneously analyzing the discriminant characteristics of a single view and the correlations among multiple views. Therefore, a priority for most MVC methods is to find a feasible and direct way to explore the underlying data cluster latent fusion representation by multiple views to obtain the final ideal clustering results.</ns0:p><ns0:p>Many research work about MVC have been investigated in the last decades, in which most of them proposed to effectively consider rich information from multiple views (e.g. Wang, Yang &amp;Liu,2020). For instance, co-training (e.g. Xia, Yang &amp; Yu, 2020) and co-regularized (e.g. <ns0:ref type='bibr' target='#b3'>Kang et al.,2020)</ns0:ref> spectral clustering have been proposed to minimize the disagreement between each pair of views (e.g. <ns0:ref type='bibr'>Chen et al.,2020 )</ns0:ref>. However, clustering performance is easily affected by the poor quality of original views in this kind of method. 
Multi-view subspace clustering uses the unified shared feature representation of each view to obtain consistent clustering results from multiple views (e.g. <ns0:ref type='bibr' target='#b5'>Huang et al.,2020;</ns0:ref><ns0:ref type='bibr'>Zhao, Ding &amp; Fu, 2017)</ns0:ref>. Typical models also include subspace learning-based and non-negative matrix factorization (NMF)-based models. In the past decade, numerous machine learning technologies have been investigated to determine the scope of combining multiple views. However, these methods lack the ability to mine the latent unified representation and to learn the non-linear correlation of views. To address the second limitation, many researchers proposed multi-kernel and CCA-based clustering methods. The multi-kernel method uses predefined kernels for each view and combines these kernels in linear or non-linear mode. Nevertheless, the complex relationships make it difficult to represent data fusion. To solve this problem, canonical correlation analysis (CCA) (e.g. <ns0:ref type='bibr'>Haldorai &amp;Ramu,2020;</ns0:ref><ns0:ref type='bibr'>Hotelling,1936)</ns0:ref> and kernel CCA are commonly used in multi-view clustering. Rasiwasia et al. proposed mean CCA and cluster CCA. Blaschko (e.g. <ns0:ref type='bibr'>Rasiwasia et al.,2008)</ns0:ref> projected the data across different views by KCCA and used k-means to cluster projected samples. We can find that the computational cost of the CCA and KCCA models is high (e.g. <ns0:ref type='bibr' target='#b11'>Wang, Li &amp;Huang, 2017)</ns0:ref> </ns0:p><ns0:note type='other'>.</ns0:note></ns0:div> <ns0:div><ns0:head>Related work</ns0:head><ns0:p>Related work on multi-view clustering methods can be divided into two categories: the common matrix framework (spectral clustering, subspace clustering, and non-negative matrix factorization clustering) and view fusion methods (multi-kernel clustering and DCCA-based methods). This section will review multi-view clustering methods from these two technological categories.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Common matrix framework</ns0:head><ns0:p>Methods of this type have the commonality that they share the similar structure to combine multiple views.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.1'>Multi-view spectral clustering</ns0:head><ns0:p>Multi-view spectral clustering assumes that all the views share the same or similar eigenvector matrices, in which co-training spectral clustering and co-regularized spectral clustering are the two representative methods.</ns0:p><ns0:p>1) Co-training spectral clustering. These algorithms are investigated under the assumption of consensus among multiple views and trained alternately to maximize the consistency of the two distinct views. Three main assumptions are made: (1) each view is sufficient for the learning task;</ns0:p><ns0:p>(2) the views are conditionally independent given the class labels; and (3) the objective functions export the same predictions for co-occurring features with high probability in both views. Overall, most co-training methods are semi-supervised learning.</ns0:p><ns0:p>2) Co-regularized spectral clustering. Zhu (e.g. Zhu, Zhang &amp;He, 2019) proposed coregularized approach. The main idea of this kind of method is to minimize the distinction between the predictor functions of two views acting as one part of an objective function. They used graphic Laplacian eigenvectors to play a role much like that of predictor functions in a semi-supervised learning scenario. 
Inspired by the previous work (e.g. Zhu, Zhang & He, 2019; Chen et al., 2020), later methods automatically learned the weights of different views from the data.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.2'>Multi-view subspace clustering</ns0:head><ns0:p>In practice, multi-view data can be sampled from multiple subspaces. A subspace learning model can learn a new and unified representation or a latent space for multi-view data. The unified representation or latent space can then be directly used for the clustering task. In addition, a subspace clustering model can deal with high-dimensional data. The approach is to find the underlying subspaces and then to cluster. Wang (e.g. <ns0:ref type='bibr' target='#b18'>Wang et al.,2015)</ns0:ref> proposed a model to measure correlation consensus in multiple views. Unlike Wang, Zhao (e.g. <ns0:ref type='bibr'>Zhao et al.,2017</ns0:ref>) used a deep semi-nonnegative matrix factorization to perform clustering. This kind of method focuses on mining the inherent structure from the multiple subspaces of the views, and the clustering performance is heavily dependent on the affinity matrix. Therefore, some works used deep networks to learn the inter-view specific features based on classical subspace clustering methods. The recent multi-view representation learning model (MRL) (e.g. <ns0:ref type='bibr' target='#b14'>Zheng et al., 2020</ns0:ref>) focuses on mining the specific characteristics within each view and then learning the fusion representation based on the maximum correlation of multiple views. In all, how to mine the hidden differences between views has been a research topic in recent years.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.3'>Multi-view non-negative matrix factorization clustering</ns0:head><ns0:p>Given a non-negative matrix X ∈ ℝ^{d × n}, non-negative matrix factorization (NMF) seeks two non-negative matrices W ∈ ℝ^{d × p} and V ∈ ℝ^{p × n} whose product approximates the original matrix X:</ns0:p><ns0:formula xml:id='formula_1'>X ≈ WV^T,</ns0:formula><ns0:p>where W is the basis matrix and V is the indicator matrix.</ns0:p><ns0:p>Due to its non-negativity constraints, NMF has emerged as a latent feature learning method. To combine multi-view information in an NMF framework, many variants of NMF have also been proposed. Kalayeh (e.g. Kalayeh, Idrees & Shah, 2014) presented a weighted extension of multi-view NMF to address the aforementioned drawbacks. Liu (e.g. Liu et al., 2020) used the cross-entropy loss function to constrain the objective function better.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2'>View fusion methods</ns0:head><ns0:p>View fusion methods are also commonly used to learn multi-view information. However, their learning objective is to directly combine the views for clustering in different modes. Hence, they focus on how to combine multiple views.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.1'>Multi-view multi-kernel clustering</ns0:head><ns0:p>Multi-kernel learning was originally used to combine views by means of a kernel and was widely used to deal with multi-view data. The common approach is to define a kernel for each view and then combine these kernels in a convex combination (e.g. Sellami & Alaya, 2021; Lu et al., 2020; Nithya et al., 2020). Therefore, one main problem of this kind of method is to choose the kernel functions and the optimal combination of the kernels.
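As a concrete illustration of this convex-combination strategy, the following minimal sketch builds one kernel per view and hands their weighted sum to an off-the-shelf clustering routine. The two-view data, the fixed weights, and the cluster count are made up for illustration, and spectral clustering on the precomputed kernel stands in for the kernel k-means variants discussed next; this is not an implementation of any specific method cited here.

# Minimal sketch of a convex combination of per-view kernels (illustrative only).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 20)), rng.normal(size=(100, 35))]  # toy two-view data
weights = np.array([0.6, 0.4])                 # convex: non-negative, sums to 1

kernels = [rbf_kernel(X_v) for X_v in views]           # one RBF kernel per view
K = sum(w * K_v for w, K_v in zip(weights, kernels))   # combined kernel

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(K)

In the weighted variants mentioned below, the combination weights themselves would be learned jointly with the clustering rather than fixed in advance.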
Cui et al. performed a k-means analysis on the kernel space that simultaneously found the best cluster labels, the cluster kernels, and the optimal combination of multiple kernels (e.g. <ns0:ref type='bibr' target='#b25'>Cui et al.,2007)</ns0:ref>. Liu (e.g. <ns0:ref type='bibr' target='#b26'>Liu et al.,2013)</ns0:ref> extended k-means clustering into Hilbert space, denoting the data matrices as kernel matrices that are then combined for fusion. In addition, considering the differences among views, methods with weighted combinations of kernels have also been studied.</ns0:p><ns0:p>However, multi-view data are often incomplete. To address this issue, researchers integrated kernel imputation and clustering into a unified learning procedure (e.g. <ns0:ref type='bibr' target='#b27'>Monney et al.,2020)</ns0:ref>. Besides, other methods use a direct combination of features to perform multi-view clustering, like those in (e.g. Xu et al., 2016; Nie et al., 2017).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.2'>Multi-view CCA-based methods clustering</ns0:head><ns0:p>For multi-view data clustering, it is reasonable to combine the data directly. However, it is hard to fuse different data types, and high dimensionality and noise are difficult to handle. The CCA method is used to combine the views directly, and CCA-based techniques are used for multi-view clustering fusion. Chaudhuri (e.g. <ns0:ref type='bibr' target='#b30'>Chaudhuri et al.,2009 )</ns0:ref> projected the data into a lower-dimensional space using CCA and then clustered. Subsequently, further CCA-based methods were proposed, such as KCCA (e.g. Wang, Li & Huang, 2017), DCCA (e.g. Andrew, Arora & Bilmes, 2013), DGCCA (e.g. <ns0:ref type='bibr' target='#b33'>Benton et al., 2017)</ns0:ref> and MRL (e.g. <ns0:ref type='bibr' target='#b14'>Zheng et al., 2020)</ns0:ref>. Overall, although DCCA-based methods are now used in multi-view representation, multi-view clustering with DCCA-based methods remains a direction worthy of further study. In contrast to these methods, the learned representation will directly influence the effectiveness of the clustering task. Thus, based on existing multi-view clustering learning methods, the intention is to design a novel model to learn a comprehensive, meaningful data representation with the discriminant characteristics of each single view and the correlation of all views.</ns0:p><ns0:p>In short, existing methods still have the following challenges: (1) Common matrix methods focus on learning a shared similar structure and ignore the hidden differences between views. (2) View fusion methods consider the unique features of each view and the correlation of all views; however, their strategy consists of two steps, where the first step is the self-representation of each view independently and the second step focuses on fusing the views. This two-step collaboration does not serve the clustering task well.</ns0:p></ns0:div> <ns0:div><ns0:head>The proposed model</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Motivation</ns0:head><ns0:p>Existing CCA-based multi-view clustering approaches that can deal with multi-view data learn the correlation of two views and perform a clustering task at the same time. Despite appealing performance, they still have some limitations. First, the quality of data representation is the key issue in MVC, and in real-world applications, directly fusing multi-view data is difficult and often ineffective.
Second, the discriminant characteristics in a single view and the correlation of multiple views should all be considered. Considering this, a novel framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering (DGMRL-MVC) is proposed in this paper, which can cluster multi-view data in a learned latent discriminated embedding space. Specifically, in the framework, the data representation can be enhanced by multi-steps. The model combines the effectiveness of FDA-HSIC and the dynamic routing learning of views embedded in the model. Hence, it can mine the class separability of single view, the consistence among different views, discriminant characteristics in a single view and a latent discriminated embedding for clustering task.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Problem formulation</ns0:head><ns0:p>Given </ns0:p><ns0:formula xml:id='formula_2'>&#119889; 2 &#119872; (&#119909; 1 ,&#119909; 2 ) = &#8214;&#119909; 1 -&#119909; 2 &#8214; 2 &#119872; = (&#119909; 1 -&#119909; 2 ) &#119879; &#119872;(&#119909; 1 -&#119909; 2 )</ns0:formula><ns0:p>where, the Mahalanobis matrix M is constrained to be symmetric positive-definite to ensure its validity. Then, as for dynamic guided deep learning, it is used to introduces location or direction information to enhance the single view representation. In the last sub-model, the fusion representation G of multiple views for the clustering task is created. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>: Illustration of the proposed DGMRL-MVC model.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Framework 3.3.1 Network Architecture</ns0:head></ns0:div> <ns0:div><ns0:head>1) Inter-intra representation based on FDA-HSIC</ns0:head><ns0:p>The Fisher-HSIC Multi-View Metric Learning (FISH-MML) system is proposed in the work (e.g. <ns0:ref type='bibr' target='#b34'>Zhang et al.,2018)</ns0:ref>. This method is based on FDA and HSIC, which are simple, yet rather effective. Inspired by Zhang's work, an effort was made to add FDA-HSIC into the proposed framework for multi-view clustering representation learning.</ns0:p><ns0:p>Intra-view separability. 
The starting point is the definitions of between-class and total scatter matrices in the view: vector corresponding to and is defined as follows:</ns0:p><ns0:formula xml:id='formula_3'>&#119907; &#119905;&#8462;<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>&#119878; (&#119907;) &#119887; = 1 &#119899; &#8721; &#119898; &#119895; = 1 &#119899; &#119895; (&#120583; (&#119907;) &#119895; -&#120583; (&#119907;) )(&#120583; (&#119907;) &#119895; -&#120583; (&#119907;) ) &#119879; , ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>&#119878; (&#119907;) &#119905; = 1 &#119899; &#8721; &#119899; &#119894; = 1 (&#119911; (&#119907;) &#119894; -&#120583; (&#119907;) )(&#119911; (&#119907;) &#119894; -&#120583; (&#119907;) ) &#119879; ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>&#120583; (&#119907;) &#119895; = 1 &#119899; &#119895; &#8721; &#119899; &#119895; &#119894; = 1 &#119911; &#119895;(&#119907;) &#119894; ,&#120583; (&#119907;) = 1 &#119899; &#8721; &#119899; &#119895; &#119894; = 1</ns0:formula><ns0:formula xml:id='formula_7'>&#119909; &#119895; &#119894; (&#119907;)<ns0:label>(5)</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>&#119911; &#119895; &#119894; (&#119907;) = &#119875; (&#119907;) &#119909; &#119895; &#119894; (&#119907;) ,</ns0:formula><ns0:p>where the symmetric positive semi-definite matrix ( is the</ns0:p><ns0:formula xml:id='formula_9'>&#119872; = &#119823; T &#119823; and &#119823; &#8712; &#8477; &#119896; &#215; &#119889; k &#8805; &#119903;&#119886;&#119899;&#119896;(M))</ns0:formula><ns0:p>new space.</ns0:p><ns0:p>The trace operator for and conditioned on is then optimized by the following</ns0:p><ns0:formula xml:id='formula_10'>S (v) b S (v) t &#119823; (v)</ns0:formula><ns0:p>optimization function:</ns0:p><ns0:p>max</ns0:p><ns0:formula xml:id='formula_12'>{&#119875; (&#119907;) } &#119881; &#119907; = 1 &#8721; &#119881; &#119907; = 1 &#119879;&#119903;(&#119826; (&#119907;) &#119887; ;&#119823; (&#119907;) ) -&#120574;&#119879;&#119903;(&#119826; (&#119907;) &#119905; ;&#119823; (v)</ns0:formula><ns0:p>),</ns0:p><ns0:p>where is a tunable parameter to balance the two terms involved. Therefore, the above &#120574; optimization function is focused on seeking metrics that jointly maximize separability.</ns0:p><ns0:p>Inter-view consistency. The next step is to explore the complementarity information from multi-views using HSIC (e.g. <ns0:ref type='bibr' target='#b35'>Gretton et al.,2005)</ns0:ref>. Based on HSIC, the function can be defined as: <ns0:ref type='bibr' target='#b6'>(7)</ns0:ref> HSCI(&#119833; (&#119907;) ,&#119833;</ns0:p><ns0:formula xml:id='formula_13'>(&#119908;) ) = ( &#119899; -1 ) -2 tr(&#119818; &#119907; &#119815;&#119818; &#119908; &#119815;), ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>&#119818; (&#119907;) = &#119833; (&#119907;)&#119879; &#119833; (&#119907;) = &#119831; (&#119907;)&#119879; &#119823; (&#119907;)&#119879; &#119823; (&#119907;) &#119831; (&#119907;) ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>&#119818; (&#119908;) = &#119833; (&#119908;)&#119879; &#119833; (&#119908;) = &#119831; (&#119908;)&#119879; &#119823; (&#119908;)&#119879; &#119823; (&#119908;) &#119831; (&#119908;) where and are the Gram matrices from the two views parameterized by the projections &#119818; (&#119907;) &#119818; (&#119908;) and . 
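As a concrete reading of Eqs. (7)–(9), the empirical HSIC between two projected views with a linear kernel can be sketched as follows. This is a minimal NumPy illustration under the assumption that each view is stored with one column per sample; it is not the authors' implementation.

```python
import numpy as np

def linear_hsic(Zv, Zw):
    """Empirical HSIC between two projected views, Eqs. (7)-(9), linear kernel.

    Zv, Zw: projected features of shape (k_v, n) and (k_w, n), i.e. one
    column per sample (shapes are an assumption for this sketch).
    """
    n = Zv.shape[1]
    Kv = Zv.T @ Zv                       # linear Gram matrix, (n, n)
    Kw = Zw.T @ Zw
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(Kv @ H @ Kw @ H) / (n - 1) ** 2
```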
In the model, the dependency between and is enhanced by maximizing the</ns0:p><ns0:formula xml:id='formula_15'>&#119823; (&#119907;) &#119823; (&#119908;) &#119818; (&#119907;) &#119818; (&#119908;)</ns0:formula><ns0:p>HSIC function, and is the Gram matrix that ensures zero mean in the feature space.</ns0:p><ns0:p>&#119815; Optimization. The FDA-HSIC optimization function can be described as: max</ns0:p><ns0:formula xml:id='formula_16'>{&#119875; (&#119907;) } &#119881; &#119907; = 1 &#119881; &#8721; &#119907; = 1</ns0:formula><ns0:p>&#119879;&#119903;(&#119826; (&#119855;) &#119835; ;&#119927; (&#119907;) ) + &#120582; 1 &#119879;&#119903;(&#119826; (&#119855;) &#119853; ;&#119927; (&#119907;) ) + &#120582; 2 &#8721; &#119907; &#8800; &#119908; HSIC(&#119823; (&#119907;) &#119831; (&#119907;) ,&#119823; (&#119908;) &#119831; (&#119908;) ) = max</ns0:p><ns0:formula xml:id='formula_17'>{&#119875; (&#119907;) } &#119881; &#119907; = 1 &#8721; &#119881; &#119907; = 1 &#119879;&#119903;(&#119927; (&#119907;) (&#119808; + &#120582; 1 &#119809; + &#120582; 2 &#119810;)&#119823; (&#119907;)T = max {&#119875; (&#119907;) } &#119881; &#119907; = 1 &#119881; &#8721; &#119907; = 1 &#119905;&#119903;(&#119823; (&#119907;) &#119811;&#119823; (&#119907;)&#119879; )<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>&#119904;.&#119905;. &#119823; (&#119907;) &#119823; (&#119907;)T = &#119816;, v = 1,2,...,V, where .</ns0:p><ns0:formula xml:id='formula_18'>&#120582; 1 &gt; 0 &#119886;&#119899;&#119889; &#120582; 2 &gt; 0</ns0:formula><ns0:p>2) Deep representation based on dynamic guided learning The output of first submodel is fed into this submodel. is the view. In this process, &#119823; (&#119907;) &#119823; (&#119907;) &#119907; t&#8462; the learning of each view is independent. Inspired by Sabour's work (e.g. Sabour,Frosst &amp;Hinton,2017), this submode adopt the dynamic guided mechanism to learn the location or direction information of view. Taking the first view as example, we first divide into several &#119823; (1) &#119823; (1) capsules </ns0:p><ns0:formula xml:id='formula_19'>= [&#119959; &#120783; ,&#8230;,&#119959; &#119947; ] ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>v &#119844; &#120784; = squas&#8462;(s k 2 ) = &#8214;s k 2 &#8214; 2 1 + &#8214;s k 2 &#8214; 2 s k 2 &#8214;s k 2 &#8214;</ns0:formula><ns0:p>3) Shared representation based on deep generalized canonical correlation analysis This sub-model learns the ultimate common fusion representation from all views. In this &#119814; process, minimized loss function as follows: minimize </ns0:p><ns0:formula xml:id='formula_21'>U &#119907; &#8712; &#8477; d &#119907; &#215; r , &#119866; &#8712; &#8477; r &#215; N V &#8721; &#119907; = 1 &#8214;&#119814; -&#119828; &#119827; &#119855; &#119823; * (&#119959;) &#8214; 2 F , ,<ns0:label>(15)</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>&#119810; &#119959;&#119959; = &#119823; * (&#119959;) (&#119823; * (&#119959;) ) &#119827; &#8712; &#8477; d &#119907; * d &#119907; &#119823; * (&#119959;) = &#119823; * (&#119959;)&#119827; &#119810; -&#120783; &#119959;&#119959; &#119823; * (&#119959;) &#8712; &#8477; N * N</ns0:formula><ns0:p>projection matrix. In optimization process, we consider maximize the sum of correlations between fusion representation and single view. 
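Before turning to the reconstruction error, the squash nonlinearity of Eq. (14), taken from the dynamic routing of Sabour, Frosst & Hinton (2017), can be sketched as follows (a minimal illustration; the capsule shapes are assumptions on our part):

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squash nonlinearity from Eq. (14): shrinks short vectors toward 0
    and long vectors toward unit length while preserving direction.

    s: capsule input vector(s); the last axis is the vector dimension.
    """
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)
```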
The reconstruction error as follows:</ns0:p><ns0:formula xml:id='formula_23'>&#119814; V &#8721; &#119907; = 1 &#8214;&#119814; -&#119828; &#119827; &#119959; &#119823; * (&#119959;) &#8214; 2 F = V &#8721; &#119907; = 1 &#8214;&#119814; -&#119814;&#119823; * (&#119959;)&#119827; &#119810; -&#120783; &#119959;&#119959; &#119823; * (&#119959;) &#8214; 2 F ,(16) = &#119903;&#119817; -Tr(&#119814;&#119820;&#119814; &#119879; )</ns0:formula><ns0:p>where the positive semi-definite matrix of each view is , is the top rows of . To where .</ns0:p><ns0:formula xml:id='formula_24'>L = &#8721; r i = 1 &#955; i (&#119820;)</ns0:formula></ns0:div> <ns0:div><ns0:head>Experiments</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Datasets</ns0:head><ns0:p>We select four datasets come from real-world to verify model. The views of these datasets include network, text, IDs and image.</ns0:p><ns0:p>&#61548;Football. This dataset is collected as previously described in Greene D (2013). It is the Twitter active data of 248 English Premier League football players and clubs. This dataset contains nine views and 20 class labels. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Evaluation metrics</ns0:head><ns0:p>The effect of our model is evaluated by comparing it with baselines in terms of clustering precision, recall, F1, RI, Normalized Mutual Information(NMI) and Silhouette_score. These evaluation measures are defined as follows:</ns0:p><ns0:p>, ( <ns0:ref type='formula'>18</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_25'>&#119901;r&#119890;&#119888;&#119894;&#119904;&#119894;&#119900;&#119899; = TP TP + FP (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_26'>) &#119903;&#119890;&#119888;&#119886;&#119897;&#119897; = &#119879;&#119875; &#119879;&#119875; + &#119865;&#119873; , ,<ns0:label>(20)</ns0:label></ns0:formula><ns0:formula xml:id='formula_27'>&#119865;1 = 2&#119875;&#119877; &#119875; + &#119877; ,<ns0:label>(21)</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>&#119877;&#119868; = &#119879;&#119875; + &#119879;&#119873; &#119879;&#119875; + &#119865;&#119875; + &#119865;&#119873; + &#119879;&#119873;</ns0:formula><ns0:p>where TP and FP are positive and negative samples predicted as positive classes, TN and FN are positive and negative samples predicted as negative classes.</ns0:p><ns0:p>, <ns0:ref type='bibr' target='#b21'>(22)</ns0:ref> &#119873;&#119872;&#119868;(&#119883;;&#119884;) =</ns0:p></ns0:div> <ns0:div><ns0:head>2&#119868;(&#119883;;&#119884;) &#119867;(&#119883;) + &#119867;(&#119884;)</ns0:head><ns0:p>where .</ns0:p><ns0:p>&#119868; ( &#119883;;&#119884; ) = &#8721; &#119909; &#8721; &#119910; &#119901; ( &#119909;,&#119910; ) &#119897;&#119900;&#119892; &#119901;(&#119909;,&#119910;) &#119901;(&#119909;)&#119901;(&#119910;) ,&#119867; ( &#119883; ) =-&#8721; &#119894; &#119901;(&#119909; &#119894; )&#119897;&#119900;&#119892;&#119901;(&#119909; &#119894; ) The Silhouette_score for a sample is: ,</ns0:p><ns0:p>&#119887; -&#119886; max (&#119886;,&#119887;) here is the distance between a sample and the nearest cluster of which the sample is not a w &#119887; member. Values close to 1 indicate that a sample has been assigned to a better cluster because a different cluster is more similar.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Baselines</ns0:head><ns0:p>The following methods were compared in the experiments described in this section: &#61548; DCCA: The method realizes the deep learning applied in CCA. 
However, DCCA can only learn nonlinear mapping of two views (e.g. <ns0:ref type='bibr' target='#b12'>Andrew, Arora &amp; Bilmes, 2013)</ns0:ref>.</ns0:p><ns0:p>&#61548;DGCCA: Although this model overcame the view number limitation of DCCA, it ignores mining the specific characteristics of multiple views (e.g. <ns0:ref type='bibr' target='#b33'>Benton et al., 2017)</ns0:ref>.</ns0:p><ns0:p>&#61548;FISH-MML: A Fish-HSIC multi-view metric learning method, which is simple, yet rather effective. The model can learn intra-view separability and inter-view correlation (e.g. <ns0:ref type='bibr' target='#b14'>Zheng et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#61548;Dynamic guided representation-MVC: To better verify the validity of the model with FDA-FISH, only the dynamic guided representation was added to the proposed model (e.g. <ns0:ref type='bibr' target='#b14'>Zheng et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> shows the differences among these methods. In summary, the effectiveness of the proposed DGMRL-MVC model was directly evaluated by comparing it with newest DCCA-based multi-view learning methods. The significance of the dynamic guided representation mechanism in the proposed model was further verified by comparison with FISH-MML. To provide a further comprehensive analysis of the stability and performance of the proposed multi-view representation learning model, the dynamic guided representation-MVC was also chosen as a baseline.</ns0:p></ns0:div> <ns0:div><ns0:head>Table1: Differences between baselines and the proposed method 4.5 Results</ns0:head><ns0:p>The experimental results answered the following questions:</ns0:p></ns0:div> <ns0:div><ns0:head>(RQ1) How good is the performance improvement of the proposed model with dynamic guided representation? (DGMRL-MVC vs. FISH-MML)</ns0:head><ns0:p>FISH-MML provided the best multi-view metric learning with FDA-FISH. The proposed DGMRL-MVC model was first compared with FISH-MML. The results showed that DGMRL-MVC significantly outperformed FISH-MML on all four datasets for the multi-view clustering task. Table <ns0:ref type='table'>2</ns0:ref> presents the comprehensive comparison results. It is obvious that the proposed model achieved the best performance on nearly all datasets under all metrics. Take the Handwritten dataset with six views as an example, the improvement of DGMRL-MVC over FISH-MML was about 42.2%, 60.4%, 46%, 60.3% and 19.5% in terms of precision, recall, F1, RI, and NMI respectively. This proved that the dynamic guided representation used in the proposed method is effective for multi-view clustering. Our model has been enhanced in two aspects, i.e., inter-intra relations and dynamic routing learning embedded into each view self-representation, and fusion representation with maximum correlation.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2: Improvement performance of dynamic guided representation on different views. The best results are highlighted in bold. (RQ2) How good is the performance of DGMRL-MVC compared with other DCCA-based methods?</ns0:head><ns0:p>The latent DCCA-based representation was compared with the proposed algorithm on a clustering task. DCCA and DGCCA were chosen as baselines. In the dynamic guided representation-MVC model, FDS-FISH was not present, and only dynamic guided representation and generalized canonical correlation analysis were added. 
As shown in Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref>, the results show that DGMRL-MVC significantly outperformed DGCCA and DCCA on all datasets. In addition, the performance using the proposed discriminated representation was generally better than that using dynamic guided representation-MVC. Overall, our model achieved comprehensive results because it adds the FDA-HSIC and dynamic guided modules, which distill more separable intra-view and specific intrinsic features, resulting in a high-quality latent discriminated embedding.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 3: Comparison with the latest methods with multiple views. The best results are highlighted in bold. (RQ3) Is the discriminated fusion representation good on the clustering task?</ns0:head><ns0:p>As shown in Tables <ns0:ref type='table' target='#tab_5'>4</ns0:ref> and 5, the visualization is consistent with the clustering results. In this experiment, the Silhouette_score was also chosen as the evaluation indicator. Tables <ns0:ref type='table' target='#tab_5'>4</ns0:ref> and 5 reveal the advantage of the proposed model when the dynamic guided and FDA-HSIC modules are used simultaneously. The Silhouette_score for clustering was higher than that of the FISH-MML model. Besides, the Silhouette_score of the full-view method was higher than that of the 2-view method, which proves the necessity of the multi-view fusion representation. Our model learns better by using full views and can effectively mine the features of multiple views. The results empirically show that clustering with full views is more robust than clustering with two views. </ns0:p></ns0:div> <ns0:div><ns0:head>(RQ4) How does performance vary w.r.t. the parameter n_clusters?</ns0:head><ns0:p>Parameter analysis was conducted on the Silhouette_score by varying the number of clusters in [2, 3, 5, 7, 8, 9, 10] (Handwritten and Wikipedia datasets) and [2, 5, 8, 10, 13, 15, 18, 20] (Football and Pascal datasets). Figures <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>, 3, 4 and 5 plot the results in terms of Silhouette_score when different numbers of clusters are used on the four datasets. The proposed DGMRL-MVC showed the best performance on three of the datasets, but it was slightly worse than the other methods on the Handwritten dataset. Combining this with the previous experimental results, we analyze the reason for the result in Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>. This dataset differs from the others: its views are six types of features extracted from the same picture, i.e., Fourier coefficients of the character shapes, profile correlations, Karhunen-Loève coefficients, pixel averages in 2 &#215; 3 windows, Zernike moments, and morphological features. Because the original data views differ greatly and are only weakly correlated, our model weakens the original relationship of the views.
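For reference, the sweep behind Figures 2–5 can be reproduced with a sketch such as the one below, where G denotes a learned fused representation with one row per sample; the use of scikit-learn's KMeans and silhouette_score is our assumption, not a statement about the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def silhouette_sweep(G, cluster_grid=(2, 3, 5, 7, 8, 9, 10), seed=0):
    """Silhouette_score vs. n_clusters on a learned representation G of
    shape (n_samples, dim); the grid mirrors the one used for the
    Handwritten/Wikipedia datasets."""
    scores = {}
    for k in cluster_grid:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(G)
        scores[k] = silhouette_score(G, labels)
    return scores
```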
The value of the Silhouette_score reflects the distance between samples within the same cluster. In detail, although the clustering evaluation metrics of our model are higher than those of the baselines, samples lie closer to the cluster boundaries on this kind of dataset. </ns0:p></ns0:div> <ns0:div><ns0:head>(RQ5) Ablation experiment</ns0:head><ns0:p>Finally, ablation experiments were used to further verify the effectiveness of the model. We compared the learning performance of the model under three settings. Specifically, Step-A uses the inter-intra representation only, Step-B fuses the representations directly, and Step-C uses dynamic routing together with the fusion representation. From the results in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, we find that the performance of Step-B is lower than that of Step-C. The main reason is that fusion learning only targets the maximum correlation between views and lacks learning of view-specific differences, which is not conducive to feature expression for clustering tasks. At the same time, the performance difference between Step-B and Step-A is small, with only some indicators slightly higher than Step-A, indicating that in the fusion learning process, learning the specific intrinsic features with dynamic routing plays the most important role. Our full method outperforms the other sub-models, which shows that, on the basis of strengthening the learning of view relationships, adding the subspace fusion representation can effectively improve the clustering performance of the model. </ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper has proposed a novel framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering (DGMRL-MVC). The proposed method clusters multi-view data in a learned latent discriminated embedding space built from multiple views through a multi-step enhanced representation. Within this framework, intra-view class separability and the complex correlations among different views were explored using FDA-HSIC. Then direction information is added to enhance the single-view representation through a dynamic routing deep learning mechanism. Finally, the common discriminated representation is obtained by generalized canonical correlation analysis over the multiple views, which directly improves MVC performance. Experiments on four multi-view datasets have demonstrated the effectiveness of the proposed method for clustering tasks.</ns0:p><ns0:p>Multi-view data exist everywhere and there are still many challenging problems, such as data alignment from heterogeneous sources (e.g.
<ns0:ref type='bibr'>Wang</ns0:ref> (&#119891; 1 (&#119883; 1 ;&#120579; 1 ),&#119891; 2 (&#119883; 2 ;&#120579; 2 )) Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_30'>DGCCA Deep, multi-view &#215; &#215; Minimize &#119880; &#119895; &#8712; &#8477; &#119889; &#119895; &#215; &#119903; , &#119866; &#8712; &#8477; &#119903; &#215; &#119873; &#119869; &#8721; &#119895; = 1 &#8214;&#119814; -&#119828; &#119827; &#119895; &#119891; &#119895; (&#119883; &#119895; )&#8214; 2 &#119865; FISH-MML Deep, multi-view &#215; &#8730; &#119898;&#119886;&#119909; {&#119872; (&#119907;) } &#119881; &#119907; = 1 &#119878;({&#119820; (&#119907;) } &#119881; &#119907; = 1 ) + &#120582;&#119862;({&#119820; (&#119907;) } &#119881; &#119907; = 1 ) Dynamic guided representation- MVC Deep, multi-view &#8730; &#215; Minimize &#119880; &#119895; &#8712; &#8477; &#119889; &#119895; &#215; &#119903; , &#119866; &#8712; &#8477; &#119903; &#215; &#119873; &#119869; &#8721; &#119895; = 1 &#8214;&#119814; -&#119828; &#119879; &#119895; &#119822; * &#119843; &#8214; 2 &#119865; DGMRL-MVC Deep, multi-view &#8730; &#8730; minimize U &#119907; &#8712; &#8477; d &#119907; &#215; r , &#119866; &#8712; &#8477; r &#215; N V &#8721; &#119907; = 1</ns0:formula><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Besides, existing CCA-based clustering methods are only focus on mining the linear correlations. With the wide application of deep learning, the deep network has been applied in CCA model. Such as, Deep CCA (DCCA) (e.g. Andrew, Arora &amp; Bilmes, 2013), Deep Generalized CCA (DGCCA) (e.g. Benton et al., 2017). However, DCCA model can only learn two views. Although DGCCA overcame the limitation of DCCA, it ignores mining the specific characteristics of multiple views. Subsequently, the newest multi-view representation learning model (MRL) (e.g. Zheng et al., 2020) based on DGCCA is proposed. This model is focus on mining the specific characteristics of inner-view and then learning the fusion representation based on the maximum correlation of multiple views. Recently, multi-view representation based on clustering has been viewed as the problem of learning a meaningful representation of data. Therefore, how to design model for multiview clustering is an intriguing direction. In summary, an effective mechanism is to learn a comprehensive meaningful representation with the discriminant characteristics in a single view and the correlation of multiple views. DCCAbased multi-view clustering methods are rarely used, but they have room for improvement with deep networks to mine nonlinear correlation and high-level fusion representations. With the aim of addressing the limitation of DCCA-based MVC methods, this paper proposes a unique novel framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering (DGMRL-MVC) that has not been investigated already in previous works on this topic. The main contributions of this work can be summarized as follows: &#61548; A multi-step enhanced representation is proposed for multi-view clustering, which consists of inter-intra learning, deep learning and latent space mapping. The proposed model can jointly learn a latent discriminated embedding. 
&#61548; On the basis of learning intra-view class separability and inter-view consistency by Fisher Discriminant Analysis -with Hilbert-Schmidt Independence Criteria (FDA-HSIC) metric learning, a dynamic guided deep learning method is used that introduces location or direction information to enhance the single view representation. &#61548; An ultimate common representation is obtained by generalized canonical correlation analysis (GCCA) model for multiple views. &#61548; Experiments on four real-world multi-view datasets have validated the effectiveness of the proposed method for clustering tasks. The rest of this paper is organized as follows: Section 2 presents a review of related work. The next section introduces the proposed DGMRL-MVC model. Section 4 presents the datasets, experimental settings, and experimental results. Finally, Section 5 draws conclusions.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>illustrates the architecture of the proposed DGMRL-MVC model for multi-view clustering. This architecture effectively improves fusion representation and overcomes the limitations of traditional DCCA-based multi-view clustering learning. The model consists of three modules: inter-intra representation based on Fisher discriminant analysis and the Hilbert-Schmidt independence criteria, together called FDA-FISH metric learning&#65307;deep representation based on dynamic guided deep learning; shared representation based on deep generalized canonical correlation analysis. The first module learns intra-view separability and inter-view consistency.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022) Manuscript to be reviewed Computer Science &#61548;Wikipedia. This dataset is collected as previously described in N.Rasiwasia et al. (2020) &amp; J.Costa P et al. (2014).It come from Wikipedia's featured articles. This dataset contains two views and 10 categories. &#61548;Handwritten. This dataset is collected through https://archive.ics.uci.edu/ml/datasets/Mult-Iple+Features. It contains six views and 10 categories. &#61548;Pascal. This dataset is collected as previously described in C.Rashtchian et al. (2010). It contains six views and 20 categories. 4.2 Experimental settings In the experiments, we choose 80% of data randomly to train and other 20% to test. In FDA-HSCI, the parameter tuning was as in Zhang's work (e.g. Zhang et al.,2018). The maximum training iterations and learning rate as in Beton et al. 
(2017).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>&#951;</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Parameter analysis on n_clusters in terms of Silhouette_score on Handwritten dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameter analysis on n_clusters in terms of Silhouette_score on Wikipedia dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Parameter analysis on n_clusters in terms of Silhouette_score on Football dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Parameter analysis on n_clusters in terms of Silhouette_score on Pascal dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Ablation experiments on four multi-view datasets.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>, Zheng &amp; L,2017), unified multi view fusion model design and optimization (e.g. Tang et al,2020), and et al. In the future, we will do more work on multi view data fusion.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 1 Illustration</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,320.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,279.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,338.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,336.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,111.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>is the abbreviation of between, is the abbreviation of total, is the number of classes,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#119887; is the number of samples belonging to class , &#119905; C j &#120583; (&#119907;) &#119895; class , and &#119899; &#119895; are the sample means of the view for all the views. are the sample means of the &#119898; is the projected feature view for &#119907; &#119905;&#8462; &#119862; &#119895; &#120583; (&#119907;) &#119907; &#119905;&#8462; &#119911; &#119895; &#119894; (&#119907;)</ns0:cell></ns0:row></ns0:table><ns0:note>&#119911; (&#119907;) &#119894; PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022) Manuscript to be reviewed Computer Science where</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 : Visualization of clustering result with FISH-MML on Pascal dataset. The best results are highlighted in bold. Table 5: Visualization of clustering result with DGMRL-MVC (ours) on Pascal dataset. 
The best results are highlighted in bold.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison with latest methods with multiple views. The best results are highlighted in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Number of views</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>RI</ns0:cell><ns0:cell>NMI</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DCCA</ns0:cell><ns0:cell /><ns0:cell>0.10111</ns0:cell><ns0:cell>0.10000</ns0:cell><ns0:cell>0.10047</ns0:cell><ns0:cell>0.10000</ns0:cell><ns0:cell>0.57354</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGCCA</ns0:cell><ns0:cell /><ns0:cell>0.08222</ns0:cell><ns0:cell>0.05400</ns0:cell><ns0:cell>0.06244</ns0:cell><ns0:cell>0.10800</ns0:cell><ns0:cell>0.54150</ns0:cell></ns0:row><ns0:row><ns0:cell>Handwritten</ns0:cell><ns0:cell>FISH-MML</ns0:cell><ns0:cell>6 (full views)</ns0:cell><ns0:cell>0.09961</ns0:cell><ns0:cell>0.11099</ns0:cell><ns0:cell>0.10493</ns0:cell><ns0:cell>0.11100</ns0:cell><ns0:cell>0.09643</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dynamic guided representation-MVC</ns0:cell><ns0:cell /><ns0:cell>0.09835</ns0:cell><ns0:cell>0.11650</ns0:cell><ns0:cell>0.10402</ns0:cell><ns0:cell>0.11650</ns0:cell><ns0:cell>0.52245</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGMRL-MVC</ns0:cell><ns0:cell /><ns0:cell>0.14166</ns0:cell><ns0:cell>0.17800</ns0:cell><ns0:cell>0.15315</ns0:cell><ns0:cell>0.17800</ns0:cell><ns0:cell>0.11521</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DCCA</ns0:cell><ns0:cell /><ns0:cell>0.09245</ns0:cell><ns0:cell>0.08467</ns0:cell><ns0:cell>0.08289</ns0:cell><ns0:cell>0.09090</ns0:cell><ns0:cell>0.50819</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGCCA</ns0:cell><ns0:cell /><ns0:cell>0.10329</ns0:cell><ns0:cell>0.10567</ns0:cell><ns0:cell>0.10267</ns0:cell><ns0:cell>0.10400</ns0:cell><ns0:cell>0.54222</ns0:cell></ns0:row><ns0:row><ns0:cell>Wikipedia</ns0:cell><ns0:cell>FISH-MML representation-MVC Dynamic guided</ns0:cell><ns0:cell>2 (full views)</ns0:cell><ns0:cell>0.08957 0.16876</ns0:cell><ns0:cell>0.10149 0.14224</ns0:cell><ns0:cell>0.09313 0.13550</ns0:cell><ns0:cell>0.10032 0.17027</ns0:cell><ns0:cell>0.51780 0.59470</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGMRL-MVC</ns0:cell><ns0:cell /><ns0:cell>0.19480</ns0:cell><ns0:cell>0.18393</ns0:cell><ns0:cell>0.22795</ns0:cell><ns0:cell>0.18110</ns0:cell><ns0:cell>0.59851</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DCCA</ns0:cell><ns0:cell /><ns0:cell>0.05625</ns0:cell><ns0:cell>0.04909</ns0:cell><ns0:cell>0.05200</ns0:cell><ns0:cell>0.05241</ns0:cell><ns0:cell>0.21037</ns0:cell></ns0:row><ns0:row><ns0:cell>Football</ns0:cell><ns0:cell>DGCCA FISH-MML</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0.01404 0.05679</ns0:cell><ns0:cell>0.01885 0.05885</ns0:cell><ns0:cell>0.01603 0.01761</ns0:cell><ns0:cell>0.02016 0.06048</ns0:cell><ns0:cell>0.21256 0.22956</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dynamic guided representation-MVC</ns0:cell><ns0:cell>(full views)</ns0:cell><ns0:cell>0.10516</ns0:cell><ns0:cell>0.06449</ns0:cell><ns0:cell>0.05430</ns0:cell><ns0:cell>0.06451</ns0:cell><ns0:cell>0.26470</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGMRL-MVC</ns0:cell><ns0:cell 
/><ns0:cell>0.07540</ns0:cell><ns0:cell>0.06867</ns0:cell><ns0:cell>0.06647</ns0:cell><ns0:cell>0.07258</ns0:cell><ns0:cell>0.28367</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DCCA</ns0:cell><ns0:cell /><ns0:cell>0.02123</ns0:cell><ns0:cell>0.04200</ns0:cell><ns0:cell>0.02814</ns0:cell><ns0:cell>0.04200</ns0:cell><ns0:cell>0.03363</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGCCA</ns0:cell><ns0:cell /><ns0:cell>0.04716</ns0:cell><ns0:cell>0.06600</ns0:cell><ns0:cell>0.05351</ns0:cell><ns0:cell>0.06600</ns0:cell><ns0:cell>0.09261</ns0:cell></ns0:row><ns0:row><ns0:cell>Pascal</ns0:cell><ns0:cell>FISH-MML</ns0:cell><ns0:cell>6 (full views)</ns0:cell><ns0:cell>0.07616</ns0:cell><ns0:cell>0.06700</ns0:cell><ns0:cell>0.06606</ns0:cell><ns0:cell>0.06700</ns0:cell><ns0:cell>0.23351</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Dynamic guided representation-MVC</ns0:cell><ns0:cell /><ns0:cell>0.06619</ns0:cell><ns0:cell>0.07200</ns0:cell><ns0:cell>0.04782</ns0:cell><ns0:cell>0.07200</ns0:cell><ns0:cell>0.15842</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>DGMRL-MVC</ns0:cell><ns0:cell /><ns0:cell>0.08315</ns0:cell><ns0:cell>0.07500</ns0:cell><ns0:cell>0.07342</ns0:cell><ns0:cell>0.07500</ns0:cell><ns0:cell>0.48847</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Visualization of clustering result with FISH-MML on Pascal dataset. The best results are highlighted in bold.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Visualization of clustering result with DGMRL-MVC (ours) on Pascal dataset. The best results are highlighted in bold</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022) Manuscript to be reviewed PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68875:1:1:NEW 28 Jan 2022)</ns0:note> </ns0:body> "
" TAIYUAN UNIVERSITY College of Information and Computer OF TECHNOLOGY Taiyuan University of Technology No.79 West Street Yingze Taiyuan Shanxi China 030024 Tel:+86 13994220268 http://www2017.tyut.edu.cn tyut66666@163.com January 26th,2022 Dear Editors We thank the editors and reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular all of the code we wrote is available and we have included multiple links throughout the paper to the appropriate dataset repositories. We believe that the manuscript is now suitable for publication in PeerJ.  Dr. Tingyi Zheng College of Information and Computer On behalf of all authors. Editor (Pengcheng Liu) The paper was evaluated by three reviewers who provided thorough comments and feedback. All reviewers find that the paper contains effort to conduct point estimation tasks. The reviewers give some additional comments and suggestions on how the paper could be improved. Please see their full comments for that. I suggest the authors highlight all modifications in the revised version and answer point-by-point the reviewer's comments. Thank editors and reviewers comments, we have seen carefully and revised according to the comments. It is also expected that the authors can clearly highlight a unique novel aspect that has not been investigated already in previous works on this topic. Thank you for your constructive comment. We completely agree. We have added the highlight sentence in the revised paper line 77-79. Reviewer1 (Peng Bao) Basic reporting This paper proposes a framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering, which is clearly and unambiguously expressed. The literature references is sufficient. The structure, figures and tables are profession. Experimental datasets are shared. However, the writing could be further improved. More explanations about the experimental comparison should be added, besides reporting the best results. Moreover, well-designed ablation studies could be further enhance the effectiveness of the proposed framework. Therefore, I recommend this paper to be accepted after a minor revision. Thank you for your positive comments. Experimental design Overall, the experiments are well-designed and meaningful. Several comments are listed as follows: 1. More explanations about the experimental comparison should be added, besides reporting the best results. Thank you for your constructive comment. We completely agree. We have added the explanations in each part of the experiment in the revised paper line 356-359, 383-385 and 396-406. 2. Well-designed ablation studies could be further enhance the effectiveness of the proposed framework. Thank you for your constructive comment. We completely agree. We have added the ablation experiment in revised paper line 412-425. Additional comments The written of the manuscript should be carefully checked and improved. Several examples of typos are listed here: 1. In the abstract, make sure whether '... 1rt ' is a mistake; Thank you very much for your careful review and sorry for my carelessness. We have corrected “1rt” to “1st” in the revised paper line 23. 2. Line 259-260, and et al. Thank you very much for your careful review and sorry for my carelessness. We have corrected the content and format of this paragraph in the revised paper line 267-268. 
Reviewer2 (Anonymous) Basic reporting This paper proposed a method with multiple learning steps to cluster multi-view data including inter-intra learning, dynamic guided deep learning and shared representation. It proposed an effective mechanism that considers the discriminant characteristics in a single view and the correlation between views simultaneously. Specially, it combines the effectiveness of FDA-HSIC and dynamic routing learning, and solves issues that occurs with existing methods. In the end, experiments on four datasets demonstrated the effectiveness of the proposed method for clustering tasks. Thank you for your positive comments. This paper needs to be improved in the following aspects: 1. In the figure 2 of experiments part, the silhouette_score of this method is lower than the baselines, authors should give more detailed analysis on this. Thank you for your constructive comment. We completely agree. We have added the analysis in the revised paper line 396-406. 2. There are some grammatical and spelling mistakes that need to polish.  For example: what are the variables t and b in formula 2 and 3? Is “Deep representation based on and dynamic guided deep learning ” correct ? Thank you very much for your careful review and sorry for my carelessness. We have added explanations for variables t and b in the revised paper line 228. We have corrected “Deep representation based on and dynamic guided deep learning” to “Deep representation based on dynamic guided learning” in the revised paper line 254. 3. In Section 4.3, the reference of baseline algorithms should be added. We completely agree. We have added the reference of each baseline algorithm in the revised paper line 328,330 and 332. 4. In related work, a paragraph to summarize the main technical challenges of related work should be added. Thank you for your constructive comment. We completely agree. We have added the summary content of main technical challenges to the last paragraph of related work in the revised paper line 176-181. Experimental design Experiments are well designed. Datasets and experimental results are presented clear. Thank you for your positive comments. Validity of the findings Conclusion are well stated. Thank you for your positive comments. Reviewer3 (Anonymous) Basic reporting This paper presents an interesting study on multi-view clustering method. In particular, the authors have proposed a novel framework called DGMRL-MVC, which can cluster multi-view data in a learned latent discriminated embedding space. The data representation can be enhanced by multi-steps. And the approach is proposed to address issues that arise with existing methods, through considering the discriminant characteristics in a single view and the correlation of multiple views rather than using this information from a single view. The effectiveness of the proposed approach has been validated experimentally by compared baseline methods on four multi-view datasets. The results show some improvements of the performance in terms of precision, recall, F1, RI, NMI and Silhouette_score, in comparison with existing methods. This paper needs to be improved in the following aspects: 1)The proposed approach is learning the embedding space for clustering. Therefore, authors should add some relevant details on multi-view subspace clustering in the Section 2.1.2. Thank you for your constructive comment. We completely agree. We have added the more relevant content in the revised paper line 127-134. 
2)Please check the last sentence in the Section 2.2.2 and Section 3.3.1. Thank you very much for your careful review and sorry for my carelessness. We have checked and modified these sentences in the revised paper line181 and 288. Experimental design 1)In Experiments, the authors state that the evaluation measures include precision, recall, F1, RI, NMI. In order to make the effect clearer to reader,authors should add the formulas. Thank you for your constructive comment. We completely agree. In the revised manuscript, we have added the formulas for showing how the values of evaluation measures on line 307-324. 2)In the Section 4.3, Table 1 shows the differences among baselines. Authors should add the more relevant details (e.g., optimization function). Thank you for your constructive comment. We completely agree. We have added the optimization function in Table 1. Additional comments 1)Please check the abbreviation of the model in Fig.2 and Fig.3. Thank you very much for your careful review and sorry for my carelessness. We have modified the abbreviation of the model in Figure2,3,4 and 5. 2)Please standardize the punctuation after the formula. Thank you very much for your careful review and sorry for my carelessness. We have unified checked and modified the punctuation after the formula in the revised paper. "
Here is a paper. Please give your review comments after reading it.
380
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper proposes a storage-efficient ensemble classification to overcome the low inference accuracy of binary neural networks (BNNs). When external power is enough in a dynamic powered system, classification results can be enhanced by aggregating outputs of multiple BNN classifiers. However, memory requirements for storing multiple classifiers are a significant burden in the lightweight system. The proposed scheme shares the filters from a trained convolutional neural network (CNN) model to reduce storage requirements in the binarized CNNs instead of adopting the fully independent classifier. While several filters are shared, the proposed method only trains unfrozen learnable parameters in the retraining step. We compare and analyze the performances of the proposed ensemblebased systems depending on various ensemble types and BNN structures on CIFAR datasets. Our experiments conclude that the proposed method using the filter sharing can be scalable with the number of classifiers and effective in enhancing classification accuracy. With binarized ResNet-20 and ReActNet-10 on the CIFAR-100 dataset, the proposed scheme can achieve 56.74% and 70.29% Top-1 accuracies with ten BNN classifiers, which enhances performance by 7.6% and 3.6% compared with that using a single BNN classifier.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Deep Neural Networks (DNNs) have been attractive in many fields, including computer vision, data On the other hand, if an external power source or energy harvesting environment is available, the system can switch into high-speed mode by consuming enough power. If the system can have different trained models depending on its power source status, it can provide high performance when power is enough.</ns0:p><ns0:p>However, lightweight systems could not contain different types of models due to their storage limitations.</ns0:p><ns0:p>An ensemble-based system can improve the performance of CNNs by averaging the classification results from different models <ns0:ref type='bibr' target='#b14'>(Hansen and Salamon, 1990)</ns0:ref>. Each model acts as a single base classifier, and the combined prediction of multiple base classifiers is provided from the ensemble-based system with CNNs. In the same manner, ensemble BNNs can obtain better classification results using multiple models <ns0:ref type='bibr' target='#b42'>(Vogel et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b50'>Zhu et al., 2019)</ns0:ref>, which increase the regularization of target solutions and enhance the inference accuracy. The ensemble in <ns0:ref type='bibr' target='#b42'>Vogel et al. (2016)</ns0:ref> stores weights of base classifiers derived from a BNN model by applying the stochastic rounding to each real-valued weight multiple times.</ns0:p><ns0:p>The study in <ns0:ref type='bibr' target='#b50'>Zhu et al. (2019)</ns0:ref> shows the trade-offs on the number of classifiers with BNNs. The methods in <ns0:ref type='bibr' target='#b42'>Vogel et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Zhu et al. (2019)</ns0:ref> increase the inference accuracy using multiple base classifiers, so that memory requirements are proportional to the number of weight files in base classifiers. 
They could not be suitable for the embedded system with limited storage resources.</ns0:p><ns0:p>Our study focuses on a method for overcoming the storage cost limitation required by BNN ensembles.</ns0:p><ns0:p>The storage costs required by BNN ensemble models can be reduced by sharing the filters in convolutional layers. While base classifiers share the filter weights from the convolutional layers of a pretrained BNN model, the affine parameters of the batch normalization layer and weights of the fully connected layer are only retrained for ensemble-based systems.</ns0:p><ns0:p>We summarize our contributions as follows:</ns0:p><ns0:p>&#8226; We propose an ensemble method based on shared filter weights that reduce the amount of storage required for ensemble-based systems.</ns0:p><ns0:p>&#8226; In the proposed ensemble system, we show the scalability with the number of base classifiers in terms of classification accuracy.</ns0:p><ns0:p>&#8226; We adopt various ensemble methods and compare the details of each method in our evaluations. Details of experimental environments are described to apply the proposed method based on several BNNs. Notably, with a binarized ResNet-20 model <ns0:ref type='bibr' target='#b39'>(Rastegari et al., 2016)</ns0:ref> on the CIFAR-100 dataset <ns0:ref type='bibr' target='#b26'>(Krizhevsky et al., 2014)</ns0:ref>, the proposed scheme can achieve 56.74% Top-1 accuracy with ten BNN classifiers, which enhances the performance by 7.58% compared with that using a single BNN model. For the state-of-the-art <ns0:ref type='bibr'>ReActNet-10 (Liu et al., 2020)</ns0:ref> on the CIFAR-100 dataset, our method produces Top-1 accuracy improvements up to 3.6%.</ns0:p><ns0:p>In our experiments, we apply different ensemble schemes to binarized ResNets from XNOR-Net <ns0:ref type='bibr' target='#b39'>(Rastegari et al., 2016)</ns0:ref>, Bi-Real-Net <ns0:ref type='bibr' target='#b33'>(Liu et al., 2018)</ns0:ref>, and ReActNet <ns0:ref type='bibr' target='#b32'>(Liu et al., 2020)</ns0:ref>. When base classifiers share filter weights, our experimental data shows performance enhancements. The fusion scheme is the best of ensembles when using binarized ResNet-20. The bagging scheme shows good performance enhancements when using binarized ResNet-18. Our method are evaluated with Bi-Real-Net <ns0:ref type='bibr' target='#b33'>(Liu et al., 2018)</ns0:ref> and ReActNet <ns0:ref type='bibr' target='#b32'>(Liu et al., 2020)</ns0:ref>. Notably, although the ensemble-based systems share filter weights, they can show the scalability with the number of base classifiers. This paper is structured as follows: in preliminary section, we introduce several related works and explain BNNs in detail, as well as the various ensemble methods. Then, this paper describes our motivation and the proposed ensemble-based systems. Finally, experimental results and analysis show that the proposed method improves the inference accuracy with scalability.</ns0:p></ns0:div> <ns0:div><ns0:head>PRELIMINARIES Low-cost neural networks</ns0:head><ns0:p>Particularly for low-cost edge devices, the ultimate design goal is to create low-cost neural network models. 
Low-cost neural network models have a small number of multiply-accumulate operations, which makes them efficient in terms of storage and computational costs, as well as easy to deploy on the edge.</ns0:p><ns0:p>Besides, lightweight data formats and their operations for low-cost neural networks have been developed along with training methods based on them.</ns0:p></ns0:div> <ns0:div><ns0:head>2/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Pruning is a well-studied method for reducing both computational and storage costs of DNNs by reducing the number of multiply-accumulate operations. In the early stages of applying pruning on DNN models, connections can be pruned based on the lowest saliency <ns0:ref type='bibr' target='#b29'>(LeCun et al., 1990)</ns0:ref>. The saliency term comes from computing the Hessian matrix or inverse Hessian matrix for every parameter, as shown in the Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS) methods, respectively <ns0:ref type='bibr' target='#b29'>(LeCun et al., 1990;</ns0:ref><ns0:ref type='bibr' target='#b15'>Hassibi and Stork, 1993)</ns0:ref>. However, for DNN models such as AlexNet <ns0:ref type='bibr' target='#b27'>(Krizhevsky et al., 2012)</ns0:ref> and VGG-16 <ns0:ref type='bibr' target='#b3'>(Chatfield et al., 2014)</ns0:ref>, it is not plausible to compute the Hessian matrix or inverse Hessian matrix for every parameter. In the deep compression method of <ns0:ref type='bibr' target='#b13'>Han et al. (2015)</ns0:ref> so that a certain threshold can be used to remove connections below a certain threshold. Although floating-point operations for convolutional layers could be dramatically compressed, the implementation of pruning DNNs require special libraries such as Sparse Basic Linear Algebra Subprograms (BLAS) <ns0:ref type='bibr' target='#b13'>(Han et al., 2015)</ns0:ref> or special hardware accelerators to deal with sparse matrices <ns0:ref type='bibr' target='#b12'>(Han et al., 2016)</ns0:ref>. Methods in <ns0:ref type='bibr' target='#b30'>Li et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Luo et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>He et al. (2019)</ns0:ref> prune filters of CNNs so that those do not require specialized libraries or hardware blocks. However, due to their coarse granularity of compression, the ratio of reduced floating-point operations cannot be significant.</ns0:p><ns0:p>Quantization methods reduce overall costs by adopting lightweight data formats and their operations.</ns0:p><ns0:p>Mainly, quantization focuses on how many bits are needed to represent DNN model parameters. When quantizing DNN models, weights, activations, or inputs from real-valued format (e.g., 32-bit floating point) are converted into one or several lower-precision formats. Notably, 16-bit fixed-point <ns0:ref type='bibr' target='#b11'>(Gupta et al., 2015)</ns0:ref>, 8-bit fixed-point <ns0:ref type='bibr' target='#b13'>(Han et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b46'>Wu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wang et al., 2018)</ns0:ref>, and logarithmic <ns0:ref type='bibr' target='#b37'>(Miyashita et al., 2016)</ns0:ref>, etc. are adopted in existing quantization CNN models. The hardware complexity of the multiply-accumulate operation in convolution can decrease depending on the degree of quantization. 
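As a generic illustration of the idea (not the specific fixed-point or logarithmic schemes cited above), symmetric uniform weight quantization can be sketched as follows; the bit width and the "fake-quantize" return format are assumptions for illustration.

```python
import numpy as np

def quantize_symmetric(w, num_bits=8):
    """Symmetric uniform quantization of a weight tensor to `num_bits`
    (a generic illustration, not the exact schemes cited above).
    Returns the dequantized ("fake-quantized") weights and the scale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax + 1e-12
    w_int = np.clip(np.round(w / scale), -qmax, qmax)
    return w_int * scale, scale
```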
</ns0:p></ns0:div> <ns0:div><ns0:head>Binary neural networks</ns0:head><ns0:p>In CNNs, as the numbers of layers and channels increase, there are large memory requirements for storing model parameters. Also, tremendous multiply-accumulate operations in convolutions need large computational power consumption. A low-cost embedded inference system could not have enough memory units to store 32-bit floating-point (fp32) model parameters. Besides, parallel fp32 units are not equipped considering its low-cost implementation. The quantization in CNNs reduces memory requirements and power consumption by adopting inaccurate data computation <ns0:ref type='bibr' target='#b4'>(Courbariaux et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hubara et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b22'>Hubara et al., , 2017;;</ns0:ref><ns0:ref type='bibr' target='#b39'>Rastegari et al., 2016)</ns0:ref>. Notably, BNNs can quantize both weights and activations of CNNs into &#8722;1 and +1 in their forward paths, significantly decreasing storage resource requirements for saving parameters. The multiply-accumulate operation in convolutions can be approximated using XNOR and bit count operations, thus reducing computational resources. For example, in <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>, a BNN can achieve &#8764; 32&#215; memory efficiency and &#8764; 58&#215; speedup on a single core device, compared with its corresponding fp32-based CNN.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>, the dot product in a convolution is approximated as follow: We define the real-valued activation and weight filter as X and W. In convolutional layers of BNNs, the dot product is approximated between X, W &#8712; R n such that X &#8890; W &#8776; &#947;H &#8890; &#945;B = kH &#8857; B, where H, B &#8712; {+1, &#8722;1} n and &#945;, &#947; &#8712; R + . In other words, H and B denote the binary activation and filter. Terms &#945; and &#947; are scaling factors for weights and activations, respectively. Symbol &#8857; denotes the dot product of vectors H and B using XNOR-bitcount operations. Element-wise matrix product is obtained by multiplying vector H &#8857; B with k = &#947;&#945;. Whereas term k is deterministically calculated in <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>, k &#8712; K is a learnable affine parameter in <ns0:ref type='bibr' target='#b2'>Bulat and Tzimiropoulos (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Kim (2021)</ns0:ref>. The dot product for reducing quantization errors can be optimized as: (1)</ns0:p><ns0:p>The binarization of X using sign() function is applied to deterministically quantize each feature x &#8712; X in the binary activation layer. A binary activation is formulated as:</ns0:p><ns0:formula xml:id='formula_0'>x &#8712; X, sign(x) = +1 i f x &#8805; 0, &#8722;1 else. (<ns0:label>2</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>The derivative of sign() function contains the &#948; function, so it is approximated in the backward path</ns0:p><ns0:p>during training a BNN model. BNNs in <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Courbariaux et al. (2015)</ns0:ref> adopt the straight-through-estimator approach in the approximation of its derivative. 
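A minimal PyTorch sketch of this deterministic binarization with a straight-through estimator is given below; it follows Eq. (2) and the estimator in Eq. (5) described next, but it is our illustration rather than the reference implementation of the cited works.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() binarization (Eq. 2) with a straight-through estimator:
    the gradient passes through unchanged where |x| <= 1 (Eq. 5)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out = torch.ones_like(x)
        out[x < 0] = -1.0          # sign(): +1 if x >= 0, -1 otherwise
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)
```

In a binarized convolutional layer, `BinarizeSTE.apply(x)` would be applied to the incoming activations (and, analogously, to the latent real-valued weights) during the forward pass.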
Unlike the conventional convolutional layer, the binarized activation is the input to the convolutional layer.</ns0:p><ns0:p>The batch normalization (BN) <ns0:ref type='bibr' target='#b23'>(Ioffe and Szegedy, 2015)</ns0:ref> layer is placed before the binary activation layer. The BN is formulated as:</ns0:p><ns0:formula xml:id='formula_2'>x̂_i → (x_i − µ_β) / √(σ_β² + ε),<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where µ_β and σ_β² are the mini-batch mean and variance for a channel. Iterative mini-batches from the previous layer's outputs are used in the training of µ_β and σ_β². A convolution output x_i denotes each element in a channel. A small constant ε prevents the division by zero. After the normalization, the BN layer scales and shifts the normalized feature x̂_i into x_i in a channel, which can be equated as:</ns0:p><ns0:formula xml:id='formula_3'>x_i → λ x̂_i + β,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where the affine parameters λ and β are learnable during the CNN training.</ns0:p><ns0:p>The BN layer can change the range of the convolution output distribution, and the adjusted convolution output is used as the input to the binary activation layer. In Fig. 1 (b), the BN layer is located before the BinActive layer to adjust features with its learnable parameters. The features binarized from the BinActive layer go towards the BinConv layer. It is noted that the output format of the BinConv layer becomes fp32 after its scaling.</ns0:p><ns0:p>During BNN training, the fp32 format is generally adopted to update the weights of the binarized convolutional layer in the backward path. Besides, the parameters of the BN layer also use the fp32 format. Because the derivative of the sign(x) function is the δ function, the derivative of the sign(x) function in the binary activation layer should be approximated. Generally, a straight-through estimator <ns0:ref type='bibr' target='#b39'>(Rastegari et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b5'>Courbariaux et al., 2016)</ns0:ref> for this approximation is formulated as follows:</ns0:p><ns0:formula xml:id='formula_4'>∂sign(x)/∂x = 1 if −1 ≤ x ≤ 1, 0 otherwise.<ns0:label>(5)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Ensemble-based systems</ns0:head><ns0:p>Ensemble methods, also known as ensemble learning, are a powerful tool to improve deep neural network models and to provide better model generalization. An ensemble-based system is defined as the implementation of ensemble learning. An ensemble-based system combines multiple machine learning algorithms or multiple different models to produce better performance than those adopting a single algorithm or model. An ensemble-based system consists of multiple base estimators that are combined to form a strong estimator <ns0:ref type='bibr' target='#b14'>(Hansen and Salamon, 1990)</ns0:ref>. Ensemble methods include fusion, voting, bagging, gradient boosting, and many others. Finding the best model that yields the least error in the search space is an important problem in statistical learning, which is the goal of machine learning. This is hard work because datasets are always smaller than the search space. Researchers have discovered a way to mitigate this by using various ensemble methods. Voting reduces the risk of selecting a bad model. Bagging and boosting with different starting points result in a better approximation.
Fusion expands the model's function space <ns0:ref type='bibr' target='#b8'>(Ganaie et al., 2021)</ns0:ref>. Furthermore, when homogeneous estimators of the same type are built with a different activation function or initialization method, a homogeneous ensemble can increase diversity while decreasing correlation <ns0:ref type='bibr' target='#b35'>(Maguolo et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In CNNs, estimators are mainly used for classification, so base classifiers are combined to produce one strong classifier for predicting the class of a given sample. There are different libraries that help to build ensemble methods with state-of-the-art DNNs. In this paper, we use the Ensemble-Pytorch library <ns0:ref type='bibr' target='#b47'>(Xu, 2020)</ns0:ref>, which is an open-source library that supports different types of ensemble methods.</ns0:p><ns0:p>Fusion and voting are basic ensemble-based schemes. In the fusion-based ensemble, the averaged prediction from base classifiers is used to calculate the training loss. When M base classifiers {e_1, e_2, ..., e_m, ..., e_M} are adopted, the output can be o_i = (1/M) Σ_{m=1}^{M} o_i^m for a given sample s_i. In the fusion, when y_i is the target output for s_i, its loss function is L(o_i, y_i). On a data batch D, its training loss can be</ns0:p><ns0:formula xml:id='formula_5'>(1/D) Σ_{i=1}^{D} L(o_i, y_i).</ns0:formula><ns0:p>In the bagging-based ensemble <ns0:ref type='bibr' target='#b1'>(Breiman, 1996)</ns0:ref>, the subsampling with replacement produces multiple datasets to train each base classifier. In <ns0:ref type='bibr' target='#b47'>Xu (2020)</ns0:ref>, different data batches for each base classifier are sampled with replacement and used to train the base classifiers independently. In the snapshot ensemble, multiple base classifiers are obtained from the different parameters collected for a single model during its training.</ns0:p><ns0:p>Notably, the base classifiers here are classifiers for the image classification. Generally, when base classifiers are adopted, the averaging of base classifiers is performed on the predicted probabilities of target classes via a softmax function as:</ns0:p><ns0:formula xml:id='formula_6'>softmax(z_i^j) = e^{z_i^j} / Σ_{k=1}^{K} e^{z_k^j},<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where term K is the number of classes. Term z_i^j is an element of the input vector z^j of the j-th base classifier, so z^j = (z_1^j, ..., z_K^j) ∈ R^K. In inference, predictions from the base classifiers are averaged.</ns0:p><ns0:p>There have been several ensemble-based systems using BNNs. The ensemble-based system in <ns0:ref type='bibr' target='#b42'>Vogel et al. (2016)</ns0:ref> applies stochastic rounding to a real-valued weight to get its binary weight. The stochastic rounding of a weight w can be performed in Eq. (<ns0:ref type='formula'>7</ns0:ref>) as:</ns0:p><ns0:formula xml:id='formula_7'>sr(w) = ⌊w⌋ with probability 1 − (w − ⌊w⌋), ⌊w⌋ + 1 with probability w − ⌊w⌋.<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>This system performs a kind of soft voting with base classifiers that contain different binary weight files from one high-precision neural network. Each inference evaluation with a binary weight file could be considered as a base classifier, and the ensemble-based system averages prediction probabilities to enhance the classification accuracy.
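As a small illustration of the probability averaging in Eq. (6) and of the stochastic rounding in Eq. (7), the sketch below averages softmax outputs over M base classifiers and rounds a real-valued weight stochastically. The `classifiers` argument is assumed to be any iterable of modules returning class logits; this is a generic sketch, not the Ensemble-Pytorch API itself.

```python
import torch
import torch.nn.functional as F

def soft_vote(classifiers, x):
    """Average the per-classifier softmax probabilities (Eq. (6)) and pick the class."""
    probs = [F.softmax(clf(x), dim=1) for clf in classifiers]
    avg = torch.stack(probs, dim=0).mean(dim=0)          # mean over the M base classifiers
    return avg.argmax(dim=1)                             # predicted class per sample

def stochastic_round(w: torch.Tensor) -> torch.Tensor:
    """Eq. (7): round up with probability w - floor(w), otherwise round down."""
    low = torch.floor(w)
    return low + (torch.rand_like(w) < (w - low)).float()
```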
Although this ensemble-based system lowers the classification variance of the aggregated classifiers <ns0:ref type='bibr' target='#b42'>(Vogel et al., 2016)</ns0:ref>, its target system should store multiple weight files. Binary</ns0:p><ns0:p>Ensemble Neural Network (BENN) <ns0:ref type='bibr' target='#b50'>(Zhu et al., 2019)</ns0:ref> adopts bagging and boosting strategies to obtain multiple models to be aggregated in <ns0:ref type='bibr' target='#b50'>Zhu et al. (2019)</ns0:ref>. These sophisticated ensemble methods improve inference accuracy, but multiple weights should be stored as well, which is a significant burden when using BNNs.</ns0:p></ns0:div> <ns0:div><ns0:head>PROPOSED METHOD Motivations</ns0:head><ns0:p>BNNs reduce both storage requirements and computation time using binary weights and related binary bitwise operations. Therefore, BNNs are suitable for power-hungry embedded systems. However, their inference accuracies are degraded compared with those of real-valued CNNs.</ns0:p><ns0:p>A power-hungry embedded system has different power source status. For example, if an external power source is connected, enough power is available to the system. Sometimes, even if a system consumes more power, it requires high inference accuracy. If a system has energy harvesting equipment, the system can have other power source statuses. When the system can gather enough power, there is no need to continue the low power mode. Depending on its power source status, it is necessary to provide adequate inference accuracy by adjusting the amount of computation.</ns0:p><ns0:p>We note that the ensemble-based system using BNNs can provide this trade-off. In the ensemble-based system using BNNs, multiple BNN models are aggregated to produce better classification results, where each trained model can act as a base classifier. Whereas only one BNN model can be used in a low power mode, multiple BNN models can be aggregated when power source is enough. When using multiple BNN models, multiple sets of model parameters should be stored for providing multiple base classifiers.</ns0:p><ns0:p>However, if storage resources are limited in the system, this ensemble-based system is not applicable.</ns0:p><ns0:p>Except for the voting-based ensemble, the ensemble-based systems introduced in Preliminaries cannot train base classifiers independently. In other words, when varying the number of aggregated classifiers in the ensembles, each ensemble gets a different set of learnable model parameters.</ns0:p><ns0:p>Although there are significant advantages of BNNs, the increase in storage requirements for the ensemble-based system can limit their applications. We are motivated that if parameters are shared between BNN models, the storage resource limitation can be alleviated in the ensemble. We consider the characteristics of CNNs to determine which parameters are shared. In image classification using CNNs, convolutional filter weights are automatically learned during their training process. Each filter extracts the abstract meaning from features, which are connected by performing hierarchical convolutions. We think that classifiers share the abstraction tools. 
Therefore, filters could be shared as the tools extracting the abstract meaning from features.</ns0:p><ns0:p>We define the binary weight filter B_m from the m-th base classifier, where m ∈ {i ∈ N | i ≤ M}. When the filter weights are reused, B_m = B. Term H_m denotes the binary activation from the m-th base classifier. Thus, the dot product for a base classifier is optimized as:</ns0:p><ns0:formula xml:id='formula_8'>α, B, γ_m, H_m = argmin_{α, B, γ_m, H_m} ‖X_m ⊙ W − γ_m αH_m ⊙ B‖.<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>On the other hand, the parameters of BN layers in the same position are different in base classifiers. Formally,</ns0:p><ns0:formula xml:id='formula_9'>x_i^m → λ^m x̂_i^m + β^m,</ns0:formula><ns0:p>where x_i^m ∈ X_m, and the parameters λ^m and β^m for e_m are learnable in the retraining process (Algorithm 1: Training of ensemble-based system using BNNs).</ns0:p><ns0:p>For better classification results, we can choose which filters are shared or not. Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> illustrates basic blocks of the binarized ResNet <ns0:ref type='bibr' target='#b17'>(He et al., 2016)</ns0:ref>. In the binarized ResNet, the basic blocks are stacked, maintaining a pyramid structure. A non-zero stride can reduce the resolution of output features by their height and width. When the number of channels is doubled in the ResNet, stride = 2 is adopted in the convolution.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> (a), a basic block contains the shortcut summing the input features to the output of the last BN layer. The heights and widths of input and output features are the same, respectively. The basic block of Fig. <ns0:ref type='figure' target='#fig_7'>3 (b)</ns0:ref> contains two BinConv layers. In this case, it is possible that only one of them can share its filter weights in an ensemble-based system. In Fig. <ns0:ref type='figure' target='#fig_7'>3 (b)</ns0:ref>, when stride = 2, the height and width of output features are half of those of input features. A 1 × 1 exact Conv layer can be used in the shortcut when shrinking the feature dimension. In another ensemble-based system, filter weights for this 1 × 1 Conv layer may not be shared between base classifiers.</ns0:p><ns0:p>Notes for Table 1: a) When the number of output channels and the width and height of output features are denoted as c_out, w_out, and h_out, the output size is calculated as c_out × w_out × h_out. b) Terms denote the weight filter size w × h, the number of input channels c_in, and the stride used in the first convolutional layer of the basic block. When stride = 2, c_out = c_in × 2. c) Memory requirements for storing weights.</ns0:p></ns0:div> <ns0:div><ns0:head>HARDWARE ANALYSIS</ns0:head></ns0:div> <ns0:div><ns0:head>Storage resource requirements</ns0:head><ns0:p>The filter sharing in the proposed ensemble-based systems reduces the storage resource requirements. For example, the binarized ResNet-20 structure for the CIFAR dataset is shown in Fig. <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>. The conv1 and fully-connected linear layers adopt precise fp32 operations like <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>. The layer1, layer2, and layer3 blocks contain six 3 × 3 BinConv layers, respectively, where a basic block of the dotted box contains two BinConv layers. Each basic block contains the shortcut, which is described as the round red arrow.
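A conceptual sketch of the sharing scheme just described is given below: every base classifier e_m reuses one frozen stack of convolutional filters but keeps its own BN affine parameters (λ^m, β^m) and its own linear layer. Plain nn.Conv2d layers and an unclipped straight-through sign stand in for the binarized operators, and the depth, channel sizes, and M = 4 are arbitrary; this illustrates the idea only and is not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedConvs(nn.Module):
    """Filters shared by all base classifiers (stand-ins for the binarized convolutions)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(16, 32, 3, stride=2, padding=1, bias=False)

def ste_sign(x):
    # forward: sign(x); backward: identity (an unclipped straight-through surrogate)
    return x + (torch.sign(x) - x).detach()

class BaseClassifier(nn.Module):
    """One base classifier e_m: shared filters plus its own BN and linear parameters."""
    def __init__(self, shared: SharedConvs, num_classes: int = 100):
        super().__init__()
        self.shared = shared                              # the same module object for every e_m
        self.bn1 = nn.BatchNorm2d(16)                     # per-classifier lambda^m, beta^m
        self.bn2 = nn.BatchNorm2d(32)
        self.linear = nn.Linear(32, num_classes)          # per-classifier last layer

    def forward(self, x):
        x = ste_sign(self.bn1(self.shared.conv1(x)))      # conv -> BN -> binary activation -> next conv
        x = self.bn2(self.shared.conv2(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.linear(x)

shared = SharedConvs()
for p in shared.parameters():
    p.requires_grad = False                               # Freeze(BinConv): shared filters are not retrained

classifiers = [BaseClassifier(shared) for _ in range(4)]  # M = 4 base classifiers
x = torch.randn(2, 3, 32, 32)
probs = torch.stack([F.softmax(c(x), dim=1) for c in classifiers]).mean(dim=0)  # fusion-style averaging
```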
The dotted round red arrows mean 1 &#215; 1 exact convolutional layers used as the shortcut for shrinking the feature dimension with stride = 2. Finally, the average pooling layer (denoted as Avg pooling) averages the final convolutional outputs. The linear layer has full connections to all averaged outputs to produce the final classification result.</ns0:p><ns0:p>Table 1 lists the output size, layer description, and storage requirements. The weight size of each binarized convolutional layer can be calculated as c in &#215; w &#215; h &#215; c out bits. On the other hand, the weight size of the first fp32 convolutional layer (conv1) is calculated as c in &#215; w &#215; h &#215; c out &#215; 32 bits. In the linear layer, c out can be the same as the number of classes. The storage requirements for the linear layer increase with the number of classes. For example, the storage requirements of the linear layer for the CIFAR-100 dataset can be 204,800 bits, providing 100 image classes.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational resources and power consumption</ns0:head><ns0:p>The ensemble-based system using these homogeneous base classifiers can increase computations proportional to the number of base classifiers in the inference stage. Although filter weights are shared, each base classifier follows the same binarized CNN structure, performing the same number of multiply-accumulate operations when using fusion, voting, bagging, and boosting schemes. For example, in Sagartesla (2020); <ns0:ref type='bibr' target='#b24'>Kim (2021)</ns0:ref>, the binarized ResNet-18 for the CIFAR-10 dataset requires 58.6 &#215; 10 7 floating-point operations (FLOPs). When M base classifiers perform M &#215; 58.6 &#215; 10 7 FLOPs. Power consumption is also proportional to the number of base classifiers. For example, in <ns0:ref type='bibr' target='#b10'>Guo et al. (2021)</ns0:ref>, the estimated power consumption of the XNOR-Net model for ImageNet dataset <ns0:ref type='bibr' target='#b7'>(Deng et al., 2009)</ns0:ref> is 1.92 mJ. In this case, if M base classifiers are adopted, power consumption can be M&#215;1.92 mJ.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS AND ANALYSIS Binarized ResNets on CIFAR datasets</ns0:head><ns0:p>We evaluated the binarized ResNet-20 and ResNet-18 models on the CIFAR datasets. The reasons why these models and datasets were explained as follows. The skip connection or shortcut originated from</ns0:p><ns0:p>ResNet <ns0:ref type='bibr' target='#b17'>(He et al., 2016)</ns0:ref> has been used in many prominent CNN models. Because the shortcut can pass the exact features to the next layer without information loss, BNNs using the shortcut could outperform the binarized plain CNN model. In <ns0:ref type='bibr' target='#b39'>Rastegari et al. (2016)</ns0:ref>, compared with the binarized plain CNN model, the performance of the binarized ResNet model was better. The outstanding benefits of the binarized ResNet model were considered in our experiments. Whereas the binarized ResNet-20 model was a simple lightweight BNN in Fig. <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, the binarized ResNet-18 model required additional computational and storage resources. In the CIFAR dataset <ns0:ref type='bibr' target='#b26'>(Krizhevsky et al., 2014)</ns0:ref>, 50,000 32 &#215; 32 training color images and 10,000 32 &#215; 32 test color images are prepared in the training and inference, respectively. 
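The bit counts in Table 1 follow directly from the c_in × w × h × c_out rule stated above and can be reproduced with the short script below. Grouping each 1 × 1 fp32 shortcut into the stage whose channels it doubles is my reading of the table; with that reading the totals match Table 1 exactly.

```python
def conv_bits(c_in, k, c_out, bits_per_weight):
    # weight storage of one convolutional layer: c_in * w * h * c_out * bits
    return c_in * k * k * c_out * bits_per_weight

def stage_bits(c_in, c_out, blocks=3):
    """One stage of the binarized ResNet-20: `blocks` basic blocks with two binarized 3x3
    convolutions each; when the channel count doubles, a 1x1 fp32 shortcut is added."""
    total = conv_bits(c_in, 3, c_out, 1) + conv_bits(c_out, 3, c_out, 1)   # first basic block
    if c_out != c_in:
        total += conv_bits(c_in, 1, c_out, 32)                             # exact (fp32) 1x1 shortcut
    total += (blocks - 1) * 2 * conv_bits(c_out, 3, c_out, 1)              # remaining basic blocks
    return total

print(conv_bits(3, 3, 16, 32))   # conv1 (fp32):             13,824 bits
print(stage_bits(16, 16))        # layer1 (binarized):        13,824 bits
print(stage_bits(16, 32))        # layer2 (binarized):        67,072 bits
print(stage_bits(32, 64))        # layer3 (binarized):       268,288 bits
print(64 * 10 * 32)              # linear, 10 classes (fp32): 20,480 bits
```

Replacing the 10-class linear layer with a 100-class one (64 × 100 × 32) gives the 204,800 bits quoted above for the CIFAR-100 case.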
After finishing retraining, trained models were evaluated with the test images. Whereas the CIFAR-10 dataset contains 10 different classes, the CIFAR-100 dataset has 100 classes. Because more sophisticated classification is needed, the classification accuracy on the CIFAR-100 dataset could be lower than that on the CIFAR-10 when adopting the same BNN structure.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluations of ensembles with binarized ResNets</ns0:head><ns0:p>An Manuscript to be reviewed</ns0:p><ns0:p>Computer Science applied in the inference. We ran the training for 400 epochs with a batch size of 256. This training adopted the default Adam optimizer (Kingma and Ba, 2014) with momentum=0.9. It was known that the error of binarized operations could be helpful in regularizing BNN models so that the regularization decaying updated weights was not adopted by zero weight decay. The learning rate was changed depending on the poly policy so that the learning rate lr decreased by base lr &#215; (1 &#8722; iteration epochs ). The term base lr means the starting learning rate, which was initialized as 0.001. For the CIFAR-100 dataset, the dropout <ns0:ref type='bibr' target='#b41'>(Srivastava et al., 2014)</ns0:ref> layer with dropout = 0.5 was inserted just before the linear fully-connected layer. Parametric</ns0:p><ns0:p>Rectified Linear Unit (PReLU) <ns0:ref type='bibr' target='#b16'>(He et al., 2015)</ns0:ref> Several ensemble schemes were experimented with using the Ensemble Pytorch library in <ns0:ref type='bibr' target='#b47'>Xu (2020)</ns0:ref>.</ns0:p><ns0:p>In these evaluations, all filter weights of the binarized convolutional layers in each base classifier were initialized using the trained initial weights and then frozen during the training process. Regardless of the number of base classifiers M, the filter weights were shared between classifiers. On the other hand, the learnable parameters in the BN layer and weights of the linear layers were retrained in the ensemble.</ns0:p><ns0:p>The retraining of this ensemble was performed for 200 epochs with a batch size of 256 and adopted the Adam optimizer with momentum=0.9. The starting learning rate was initialized as 0.01. Experimental results of the boosting and snapshot ensembles showed significantly degraded Top-1 accuracies. For example, when M = 2, the boosting scheme achieved 35.68% Top-1 accuracy. Therefore, we did not consider the boosting and snapshot ensemble schemes. Instead, the fusion, voting, bagging schemes were evaluated. Our codes have been available at https://github.com/analog75/ensemble bnn.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>5</ns0:ref> (a) illustrates Top-1 inference accuracies of ensemble-based systems using trained binarized</ns0:p><ns0:p>ResNet-20 on the CIFAR-100 dataset. While accuracies of the fusion ensemble scheme were proportional to M, those of the voting and bagging schemes cannot increase with M &#8805; 4. Whereas real-valued ResNet-20 can produce 64.26% Top-1 accuracy, the binarized ResNet-20 achieved 49.16% Top-1 accuracy. On the other hand, the fusion scheme got 56.74% Top-1 accuracy when M = 10. Although base classifiers shared several filter weights, these evaluation results showed that the ensemble scheme can enhance the inference accuracy.</ns0:p><ns0:p>In other experiments, the binarized ResNet-18 on the CIFAR-100 dataset was evaluated to know the effectiveness of ensemble schemes on more complex BNN model. 
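The poly learning-rate policy quoted above, lr = base_lr × (1 − iteration/epochs) with base_lr = 0.001 and zero weight decay, can be written as a LambdaLR schedule as sketched below. The placeholder model and the per-epoch stepping are assumptions for illustration; note that PyTorch's Adam exposes betas rather than a momentum argument, so the exact optimizer call in the authors' code may differ.

```python
import torch

model = torch.nn.Linear(10, 2)                 # placeholder model standing in for the BNN
epochs, base_lr = 400, 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=0.0)

# poly policy: lr = base_lr * (1 - current_epoch / total_epochs)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer,
                                              lr_lambda=lambda e: 1.0 - e / epochs)

for epoch in range(epochs):
    # ... one pass over the training batches would run here ...
    scheduler.step()                           # decay the learning rate once per epoch
```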
Whereas real-valued ResNet-18 achieved 75.61% Top-1 accuracy, Top-1 accuracy from the trained binarized ResNet-18 was 68.59%. Base classifiers shared the weights of the convolutional layers in the ensemble. The learnable parameters in the BN layer and weights of the linear layers were retrained in the ensemble. The retraining of this ensemble was performed for 200 epochs with a batch size of 256 except for the fusion scheme. Besides, the Adam optimizer was adopted with momentum=0.9. When retraining the ensembles, two Tesla V100 graphic processing units (GPU) of Nvidia <ns0:ref type='bibr' target='#b6'>(Cui et al., 2018)</ns0:ref> were used in our GPU-based workstation. When M &#8805; 8 in the fusion ensemble method, GPU memory requirements exceeded the maximum resources of our workstation with a batch size of 256. Therefore, for the fusion ensemble scheme, the binarized ResNet-18 was evaluated with a batch size of 128.</ns0:p><ns0:p>These ensemble schemes showed enhanced performances in the experimental results compared with the results with the binarized ResNet-20. The bagging ensemble scheme explicitly showed the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science scalability with M, providing small performance enhancements. When M = 10, the bagging ensemble scheme achieved 70.21% Top-1 inference accuracy. Compared with the evaluations using ResNet-20, the enhancements using the ensemble schemes were not significant in those using ResNet-18. The initial weights of ResNet-18 produced higher accuracy than those of ResNet-20, meaning that the margin that can be improved using ensemble schemes was small in the binarized ResNet-18.</ns0:p><ns0:p>The CIFAR-10 dataset was adopted in ResNet-20 to know the effect of the number of classes. Although images of the CIFAR dataset were the same, the numbers of classes for the CIFAR-10 were only 10. In the data augmentation, from each 40 &#215; 40 padded image, 32 &#215; 32 input image was cropped and randomly flipped in the horizontal direction. The training ran for 400 epochs with a batch size of 256. We used the Adam optimizer with momentum=0.9. The regularization decaying updated weights was not adopted.</ns0:p><ns0:p>The initial learning rate is 0.001, changing with the poly policy. For the CIFAR-10 dataset, this training did not insert the dropout layers. The PReLU <ns0:ref type='bibr' target='#b16'>(He et al., 2015)</ns0:ref> was used in the activation of each basic block.</ns0:p><ns0:p>In the initialization, a trained binarized ResNet-20 model was adopted, where the binarized ResNet-20 model achieved 84.06% Top-1 accuracy for the CIFAR-10 dataset. Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref> illustrates Top-1 inference accuracies of ensemble-based systems using trained binarized ResNet-20 on the CIFAR-10 dataset. In the evaluations, the fusion scheme got 85.30% Top-1 inference accuracy with M = 2, and 86.99% Top-1 inference accuracy was achieved with M = 10. Compared with the case for the CIFAR-100 dataset, the enhancements with increasing M were not significant so that we concluded that accuracy margins to be improved were small. However, the evaluation results showed the scalability with M. 
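The augmentation described for the CIFAR training above (padding each 32 × 32 image to 40 × 40, randomly cropping back to 32 × 32, and randomly flipping horizontally, with no augmentation at inference) corresponds to the standard torchvision pipeline sketched below; the dataset path is an arbitrary example.

```python
import torchvision
import torchvision.transforms as T

train_tf = T.Compose([
    T.RandomCrop(32, padding=4),     # pad the 32x32 image to 40x40, then crop a 32x32 patch
    T.RandomHorizontalFlip(),        # random flip in the horizontal direction
    T.ToTensor(),
])
test_tf = T.ToTensor()               # the augmentation above is not applied in the inference

train_set = torchvision.datasets.CIFAR10('./data', train=True, download=True, transform=train_tf)
test_set = torchvision.datasets.CIFAR10('./data', train=False, download=True, transform=test_tf)
```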
Besides, about 3%</ns0:p><ns0:p>Top-1 inference accuracy was enhanced with M = 10, compared with the inference only using the initial BNN model.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison on different configurations of weight sharing</ns0:head><ns0:p>We compared different configurations of ensembles depending on shared weights, proving that the ensemble using shared weights enhanced accuracies.</ns0:p><ns0:p>The other configurations are described as:</ns0:p><ns0:p>&#8226; no shared: M base classifiers did not share weights in any layers.</ns0:p><ns0:p>&#8226; fusion: base classifiers shared the weights of all convolutional layers and did not share the weights of the linear layer.</ns0:p><ns0:p>&#8226; all frozen: base classifiers shared the weights of all convolutional and linear layers.</ns0:p><ns0:p>&#8226; C 1 : except for the first convolutional layer, base classifiers shared the weights of all convolutional and linear layers.</ns0:p><ns0:p>&#8226; C 2 : except for 1 &#215; 1 convolutional layers used as shortcuts and linear layer, base classifiers shared the weights of all convolutional layers.</ns0:p><ns0:p>&#8226; C 3 : except for the last convolutional and linear layers, base classifiers shared the weights of all convolutional layers.</ns0:p></ns0:div> <ns0:div><ns0:head>12/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Whereas all weights of convolutional and linear layers were shared in all frozen configuration, the affine parameters of BN layers were trainable, producing the accuracy enhancements proportional to M. The memory requirements for storing multiple affine parameters in the ensemble were tiny so that the costs for increasing storage resource requirements were negligible. When the weights of the first convolutional layer were not shared in C 2 configuration, there were no performance enhancements over the fusion and C 1 configurations. We concluded that when base classifiers did not share the filters of the first convolutional layer, the performance enhancements could be negligible in the ensemble-based system. On the other hand, C 2 and C 3 configurations had slight enhancements over the fusion configuration. However, Fig. <ns0:ref type='figure' target='#fig_14'>7</ns0:ref> (b) shows that storage resource requirements of the C 2 and C 3 increased sharply with M.</ns0:p></ns0:div> <ns0:div><ns0:head>Ensembles with Bi-Real-Net and ReActNet on CIFAR dataset</ns0:head><ns0:p>Among several recent BNN structures, Bi-Real-Net <ns0:ref type='bibr' target='#b33'>(Liu et al., 2018)</ns0:ref> was one of the outstanding works, proposing the shortcut for each convolutional layer. We modified the original code of Bi-Real-Net for the evaluation on the CIFAR-100 dataset. Data augmentation process was the same as the method mentioned for the binarized ResNet-18. Like the binarized ResNet-18, Bi-Real-Net-18 contained 16 binarized convolutional layers. The first convolutional and linear layers got real-valued (fp32) features as inputs for achieving high classification accuracy. The downsampling was performed with 1 &#215; 1 real-valued convolutional layer per four binarized convolutional layers. The training process for producing the initial weights was as follows:</ns0:p><ns0:p>The training was performed for 200 epochs with a batch size of 256. The Adam optimizer was applied with momentum=0.9. 
Regularization methods using the dropout layer and non-zero weight decay were not adopted. The initial learning rate is 0.001, changing with poly policy. With the trained weights, Bi-Real-Net-18 can achieve 63.97% Top-1 final accuracy on setups mentioned above.</ns0:p><ns0:p>Fusion, soft voting, and bagging schemes were applied to evaluate ensemble-based systems using Bi-Real-Net-18. The affine parameters of the BN layers and weights of linear layer for each base classifier were retrained. During the retraining with base classifiers, the initial learning rate was 0.001, and the learning rate was changed according to poly policy. The retraining was performed for 200 epochs with a batch size of 256. The Adam optimizer with momentum=0.9 was used. Like the cases using binarized ResNet-18, the batch size was setup as 128 for fusion schemes.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_16'>8</ns0:ref> illustrates Top-1 inference accuracies of ensemble-based systems using trained Bi-Real-Net-18 on the CIFAR-100 dataset. In the evaluations, the voting and bagging schemes performed better than the fusion scheme. The number of channels in each layer of Bi-Real-Net-18 is equal to that of ResNet-18.</ns0:p></ns0:div> <ns0:div><ns0:head>13/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science results showed the scalability with M. The performance improvement slowed down as M increased, but the ensemble schemes always provided better performance over the case only using one classifier.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed ensemble-based system using shared filter weights can reduce storage resource requirements and shows the scalability with the number of base classifiers. In different scenarios of filter sharings, the trained ensembles can enhance the classification accuracies and show trade-off relationships between storage requirements and classification performance. We evaluate the proposed method on the-state-ofthe-art BNN models and describe the detailed training process, proving that our storage-efficient ensemble can enhance classification accuracies. Most of all, it is concluded that the proposed method can provide a scalable solution and extend applications of ensemble-based systems using BNN models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>In post-training quantization, a DNN model is trained based on a real-valued format for producing its pretrained model. Then, quantized parameters of the pretrained model are adopted for realizing low-cost inference stages. However, the training loss does not consider quantization errors during training so that highly quantized model could not provide acceptable classification results. On the other hand, quantization-aware training considers quantization errors during training steps. Although training loss from the quantization errors requires long training time, its trained model can provide better classification results compared with those using post-training quantization. BNNs produce highly quantized models by only using 1-bit to represent DNN parameters. Generally, quantization errors from their binarization are considered during training. 
In the following, BNNs are fully explained.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Layer connection for convolutional layers: (a) conventional fp32 (b) BNN.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1 illustrates the layer connection for a convolutional layer, where Conv and BinConv denote the conventional fp32-based convolutional and binarized convolutional layers, respectively. Besides, the term BN and BinActive mean the BN and binary activation layers. In a conventional fp32-based CNN of Fig. 1 (a), the convolutional layer outputs go towards the next BN layer, and the BN layer outputs go towards the activation layer such as ReLU. Fig. 1 (b) shows the layer connection of a binarized convolution in Rastegari et al. (2016). The BN layer is 4/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The training loss is used to update parameters of all base classifiers. When using a voting-based ensemble, each base classifier is created independently. With M base classifiers, a base classifier e m can be trained without considering other base classifiers. In inference, hard voting gathers so-called votes from base classifiers and selects the class with majority vote. For example, let us assume that there are two classes ({dog, cat}) in a dataset and three base classifiers ({e 1 , e 2 , e 3 }) in a voting-based ensemble. When classifiers e 1 and e 2 classify a sample into dog, dog is voted on the classification. Even if classifier e 3 votes for cat, dog is selected by a majority vote. On the other hand, soft voting sums the prediction probabilities from classifiers and then averages the summed values. The class with a high probability is selected in soft voting. For example, let us assume that a base classifier e m outputs its predicting probabilities of classifying a sample for a set of classes {dog, cat}, which is formulated as P(e m ) = {P dog , P cat }. When two base classifiers have P(e 1 ) = {0.7, 0.3} and P(e 2 ) = {0.2, 0.8}, their averaged probabilities can be P avg = { 0Therefore, the sample is classified into cat in this soft voting.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>, different data batches for each base classifier are sampled with replacement and used to train base classifiers independently. The boosting-based ensemble trains base classifier sequentially, where a base classifier is trained considering the errors from the previously trained base classifier. In the snapshot ensemble (Huang et al., 2017), unlike other ensemble-based systems, only one model exists, and the model parameters are collected at each minima during its training. Therefore, multiple base classifier are obtained from the 5/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Conceptual figure of proposed ensemble-based system using BNNs.</ns0:figDesc><ns0:graphic coords='8,183.09,63.78,330.86,241.30' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>where x m i &#8712; X m . Parameters &#955; m and &#946; m for e m are learnable in the retraining process. 
The scaling and shifting with parameters &#955; m and &#946; m for each channel can adjust the normalized features to optimize the ensemble-based system in a base classifier. In a pretrained model, when an activation value x i becomes close to zero, its quantization error is maximized, which could produce large bias and variance in a classifier. If other base classifiers produce smaller quantization errors for their activations, it is assured that the problem of the maximized quantization error can be mitigated. Besides, the dot products for each base classifier maintain their optimization shown in Eq. (8). Besides, each base classifier has different learnable weights of last fully connected layer, adjusting the accumulation of the final features.Algorithm 1 formally describes the retraining process mentioned above. A pretrained model BNN pretrained is used to initialize multiple M base classifiers BNN base (M). A function Freeze(BinConv) prevents up-7/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>1:Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Basic Blocks of Binarized ResNet (He et al., 2016): (a) stride = 1; (b) stride = 2.</ns0:figDesc><ns0:graphic coords='9,203.77,252.93,289.51,219.44' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Binarized ResNet-20 structure for CIFAR dataset.</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.87,232.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>ensemble using the binarized ResNet-20 model was experimented on the CIFAR-100 dataset to know the performance and scalability of the proposed ensemble-based system. Firstly, the binarized ResNet-20 was evaluated on the CIFAR-100 dataset. The initial weights of this model were trained, which is denoted as BNN pretrained in Algorithm 1. The training setup for producing the initial weights was as follows: In the data augmentation of input images, a 32 &#215; 32 input image was randomly cropped from a 40 &#215; 40 padded image and randomly flipped in the horizontal direction. However, this data augmentation above was not 10/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63467:1:1:NEW 11 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Top-1 inference accuracies of ensemble schemes using binarized ResNet models on CIFAR-100 dataset: (a) binarized ResNet-20; (b) binarized ResNet-18.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.58,123.37' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>layers were used in the activation like Gu et al. (2019); Phan et al. (2020); Martinez et al. (2020).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Top-1 inference accuracies of ensemble schemes using binarized ResNet-20 models on CIFAR-10 dataset.</ns0:figDesc><ns0:graphic coords='13,245.13,63.78,206.76,114.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. 
Top-1 inference accuracies and storage requirements of different configurations of ensembles using binarized ResNet-20 on CIFAR-100 dataset: (a) Top-1 inference accuracy (b) storage resource requirements.</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.58,153.03' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure7illustrates Top-1 final accuracies and storage resource requirements depending on the configurations of shared weights using the fusion ensemble scheme. In no shared with M = 10, Top-1 accuracy reached up to 65.9%, showing that large storage resources can increase performance in the ensemble. When the weights of all convolutional and linear layers were shared in the all frozen configuration, Top-1 final accuracies showed the scalability with M. Notably, when M = 10, Top-1 final accuracy was 53.23%, increasing by 4%, compared with 49.16% Top-1 accuracy from the initial weights.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Top-1 inference accuracies of ensemble schemes using Bi-Real-Net-18 on CIFAR-100 dataset.</ns0:figDesc><ns0:graphic coords='15,245.13,63.78,206.76,114.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Top-1 inference accuracies of ensemble schemes using ReActNet-10 on CIFAR-100 dataset.</ns0:figDesc><ns0:graphic coords='15,245.13,213.11,206.76,114.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9illustrates Top-1 inference accuracies of ensemble-based systems using trained ReActNet-10 on the CIFAR-100 dataset. In the evaluations, Top-1 inference accuracies were enhanced by 3.6% &#8722; 1.7%. Notably, the bagging scheme got 70.29% Top-1 inference accuracy with M = 10, which enhanced 3.6% Top-1 final accuracy. Like results of the binarized ResNet-18 in Fig.5, the bagging scheme was the best in our evaluations, where 70.29% Top-1 inference accuracy was achieved with M = 10. 
The evaluation</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Details of binarized ResNet-20 model and storage resource requirements</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Block Name</ns0:cell><ns0:cell>Output Size a</ns0:cell><ns0:cell>Layer Description</ns0:cell></ns0:row></ns0:table><ns0:note>b Storage Requirements (bits) c conv1 16 &#215; 32 &#215; 32 3 &#215; 3, 3, stride = 1 13,824 layer1 16 &#215; 32 &#215; 32 binarized 3 &#215; 3, 16, stride = 1 13,824 layer2 32 &#215; 16 &#215; 16 binarized 3 &#215; 3, 16, stride = 2 67,072 layer3 64 &#215; 8 &#215; 8 binarized 3 &#215; 3, 32, stride = 2 268,288 average pooling 1 &#215; 1 &#215; 64 8 &#215; 8 average pooling linear 10 (CIFAR-10 ) 1 &#215; 1, 64, no stride 20,480</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Details of binarized ResNet-18 model and storage resource requirements</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Block Name a</ns0:cell><ns0:cell>Output Size a</ns0:cell><ns0:cell>Layer Description</ns0:cell><ns0:cell>Storage Requirements (bits)</ns0:cell></ns0:row><ns0:row><ns0:cell>conv1</ns0:cell><ns0:cell>64 &#215; 32 &#215; 32</ns0:cell><ns0:cell>3 &#215; 3, 3, stride = 1</ns0:cell><ns0:cell>5.53E+4</ns0:cell></ns0:row><ns0:row><ns0:cell>layer1</ns0:cell><ns0:cell>64 &#215; 32 &#215; 32</ns0:cell><ns0:cell>binarized 3 &#215; 3, 64, stride = 1</ns0:cell><ns0:cell>1.47E+5</ns0:cell></ns0:row><ns0:row><ns0:cell>layer2</ns0:cell><ns0:cell>128 &#215; 16 &#215; 16</ns0:cell><ns0:cell>binarized 3 &#215; 3, 64, stride = 2</ns0:cell><ns0:cell>7.78E+5</ns0:cell></ns0:row><ns0:row><ns0:cell>layer3</ns0:cell><ns0:cell>256 &#215; 8 &#215; 8</ns0:cell><ns0:cell cols='2'>binarized 3 &#215; 3, 128, stride = 2 3.11E+6</ns0:cell></ns0:row><ns0:row><ns0:cell>layer4</ns0:cell><ns0:cell>512 &#215; 4 &#215; 4</ns0:cell><ns0:cell cols='2'>binarized 3 &#215; 3, 256, stride = 2 1.25E+7</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>average pooling 1 &#215; 1 &#215; 64</ns0:cell><ns0:cell>8 &#215; 8 average pooling</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>linear</ns0:cell><ns0:cell cols='2'>10 (CIFAR-10) 1 &#215; 1, 512, no stride</ns0:cell><ns0:cell>1.64E+5</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>a Block layer1, layer2, layer3, and layer4 contains two basic blocks, respectively.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Table 2 lists details of binarized ResNet-18 model and storage resource requirements. Compared with the binarized ResNet-20 in Table2, the storage requirements of the binarized ResNet-18 were about 47 times more than those of the binarized ResNet-20. Besides, the binarized ResNet-18 model required 13.5 times the computational resources.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Response to the reviewers Thank you for offering the opportunity to revise the paper titled as A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks. The paper has been revised with careful study in aspects of paper format, language, terminology, conveyed meaning, and grammar. Several typos and missing information have been corrected. In addition, the revised paper has addressed academic editor and all reviewers’ comments. Reviewer 1 Point 1.1 — I suggest to add it to github “The source code for reproducing the experiments will be available upon publication of the manuscript” Reply: We submitted a repository link to data; data has been available at a Github repository linked as https://github.com/analog75/ensemble bnn. According to your concerns, the code has been available for reproducing our experiments, which has been added in the revised version as: (line 363) Our codes have been available at https://github.com/analog75/ensemble bnn. Point 1.2 — The contribution is no introduced clearly on the theoretical analysis; also if very good results are reported. If feasible, add some more details on the theoretical analysis of your approach. Reply: Thanks for your recommendation. We think that the key point of our contribution in terms of the theoretical analysis is to show that it is possible that the ensemble system using a binarized model can produce significant performance enhancements even if filter weights in convolutional layers are shared. When each base classifier has its own scaling and shifting before binarized activation layer, the dot product can be optimized to minimize its quantization error. Although filter weights are shared between base classifiers, all the classifiers can be optimized. When activation value becomes close to zero, the quantization error is maximized, which could produce large bias and variance with a classifier. In ensemble-based systems, when other base classifiers produce smaller quantization errors, which could be helpful for reducing bias and variance by averaging. This explanation has been added in the revised version as follows: (lines 259-269) We define the binary weight filter Bm from m-th base classifier, where m ∈ {i ∈ N|i ≤ M }. When the filter weights are reused, Bm = B. Term Hm denotes the binary activation from m-th base classifier. Thus, the dot product for a base classifier is optimized as: α, B, γ m , Hm = argmin ∥Xm ⊙ W − γ m αHm ⊙ B∥. (8) α,B,γ m ,Hm On the other hand, the parameters of BN layers in the same position are different in base m m m m m m and β m for em are classifiers. Formally, xm i → λ x̂i + β , where xi ∈ X . Parameters λ 1 learnable in the retraining process. The scaling and shifting with parameters λm and β m for each channel can adjust the normalized features to optimize the ensemble-based system in a base classifier. In a pretrained model, when an activation value xi becomes close to zero, its quantization error is maximized, which could produce large bias and variance in a classifier. If other base classifiers produce smaller quantization errors for their activations, it is assured that the problem of the maximized quantization error can be mitigated. Besides, the dot products for each base classifier maintain their optimization shown in Eq. (8). Point 1.3 — You should better review the literature on ensemble of classifiers, e.g. 
https://arxiv.org/abs/1802.03518 https://doi.org/10.1016/j.eswa.2020.114048 https://arxiv.org/pdf/2104.02395.pdf Reply: In agreement with your concerns, the detail explanation of ensemble classifiers and previous works have been added as follows: (lines 166-179) Ensemble methods, also known as ensemble learning, are a powerful tool to improve deep neural network models and providing better model generalization. An ensemble-based system is defined as the implementation of ensemble learning. An ensemble-based system combines multiple machine learning algorithms or multiple different models to produce better performance than those adopting a single algorithm or model. An ensemble-based system consists of multiple base estimators that are combined to form a strong estimator (Hansen and Salamon, 1990). Ensemble methods include fusion, voting, bagging, gradient boosting, and many others. Finding the best model that yields the least error in the search space is an important problem in statistical learning, which is the goal of machine learning. This is a hard work because datasets are always smaller than the search space. Researchers have discovered a way to mitigate this by using various ensemble methods. Voting reduces the risk of selecting a bad model. Bagging and boosting with different starting points result in a better approximation. Fusion expands model’s function space (Ganaie et al., 2021). Furthermore, when homogeneous estimators of the same type are built with a different activation function or initialization method, homogeneous ensemble can increase diversity while decreasing correlation (Maguolo et al., 2021). Point 1.4 — Some typos, E.g. row 153 “BNN model can be used in a low poer” -¿ “...power” Reply: According to your concerns, the typo has been revised as: (lines 233-234) Whereas only one BNN model can be used in a low power mode, Besides, other typos have been corrected after revising this manuscripts. Point 1.5 — You run many experiments and you have reported many results, this is appreciated, anyway please better stress the novelty of your method respect the literature on pruning and 2 quantization approaches: https://www.sciencedirect.com/science/article/pii/S0031320321000868 https://arxiv.org/pdf/2103.13630.pdf Reply: Thanks for your comments. We think that BNNs are the most quantized format of implementing neural networks because both weights and activations in convolutional layers are binarized. The pruning method can reduce required computational resources. However, for achieving acceptable performance, retraining is repeated in multiple step, which can be burden due to its long training time. Besides, special libraries and hardware accelerators are required to deal with sparse matrices. These explanations of pruning and quantization approaches have been added in new Low-cost neural network subsection as follows: (lines 91-127) Particularly for low-cost edge devices, the ultimate design goal is to create low-cost neural network models. Low-cost neural network models have a small number of multiply-accumulate operations, which makes them efficient in terms of storage and computational costs, as well as easy to deploy on the edge. Besides, lightweight data formats and their operations for low-cost neural networks have been developed along with training methods based on them. Pruning is a well-studied method for reducing both computational and storage costs of DNNs by reducing the number of multiply-accumulate operations. 
In the early stages of applying pruning on DNN models,The saliency term comes from computing the Hessian matrix or inverse Hessian matrix for every parameter, as shown in the Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS) methods, respectively (LeCun et al., 1990; Hassibi and Stork, 1993). However, for DNN models such as AlexNet (Krizhevsky et al., 2012) and VGG-16 (Chatfield et al., 2014), it is not plausible to compute the Hessian matrix or inverse Hessian matrix for every parameter. In the deep compression method of Han et al. (2015) so that a certain threshold can be used to remove connections below a certain threshold. Although floating-point operations for convolutional layers could be dramatically compressed, the implementation of pruning DNNs require special libraries such as Sparse Basic Linear Algebra Subprograms (BLAS) (Han et al., 2015) or special hardware accelerators to deal with sparse matrices (Han et al., 2016). Methods in Li et al. (2016); Luo et al. (2017); He et al. (2019) prune filters of CNNs so that those do not require specialized libraries or hardware blocks. However, due to their coarse granularity of compression, the ratio of reduced floating-point operations cannot be significant. Quantization methods reduce overall costs by adopting lightweight data formats and their operations. Mainly, quantization focuses on how many bits are needed to represent DNN model parameters. When quantizing DNN models, weights, activations, or inputs from real-valued format (e.g., 32-bit floating point) are converted into one or several lower-precision formats. Notably, 16-bit fixed-point (Gupta et al., 2015), 8-bit fixed-point (Han et al., 2015; Wu et al., 2018; Wang et al., 2018), and logarithmic (Miyashita et al., 2016), etc. are adopted in existing quantization CNN models. The hardware complexity of the multiply-accumulate operation in convolution can decrease depending on the degree of quantization. In post-training quantization, a DNN model is trained based on a real-valued format for producing its pretrained model. Then, quantized parameters of the pretrained model are adopted for realizing low-cost inference stages. However, the training loss does not consider quantization errors during training so that highly quantized model could not provide acceptable classification results. On the other hand, quantization-aware training considers quantization errors during training steps. Although training loss from the quantization errors requires long training time, its trained model can provide 3 better classification results compared with those using post-training quantization. BNNs produce highly quantized models by only using 1-bit to represent DNN parameters. Generally, quantization errors from their binarization are considered during training. In the following, BNNs are fully explained. Thanks for your brilliant comments. We assure that your idea will be helpful for inceasing the quality of this paper. Reviewer 2 Point 2.1 — The last word on line 153 is wrong. Suggest to examine each word and sentence carefully. Reply: According to your concerns, the typo has been revised as: (lines 233-234) Whereas only one BNN model can be used in a low power mode, Besides, other typos have been corrected after revising this manuscripts. Point 2.2 — Figure 5(b) does not contained data when in the fusion ensemble due to the limitation of GPU resources. I suggest you to supplement the result when, in order to ensure the integrity of the experiment. 
You can use smaller batch size or use CPU for retraining. Reply: Thanks for your sharp comments. In agreement with your recommendation, we have used smaller batch size for the fusion scheme with ResNet-18 when performing retraining with our ensemble systems. The explanation and Fig. 5 have been modified as follows: (lines 377-381) When retraining the ensembles, two Tesla V100 graphic processing units (GPU) of Nvidia (Cui et al., 2018) were used in our GPU-based workstation. When M ≥ 8 in the fusion ensemble method, GPU memory requirements exceeded the maximum resources of our workstation with a batch size of 256. Therefore, for the fusion ensemble scheme, the binarized ResNet-18 was evaluated with a batch size of 128. Besides, the evaluations for Bi-Real-Net-18 had the same problem. We have adopted the batch size of 128 for fusion schemes as follows: (lines 451-452) Like the cases using binarized ResNet-18, the batch size was setup as 128 for fusion schemes. 4 Figure 5: Top-1 inference accuracies of ensemble schemes using binarized ResNet models on CIFAR100 dataset: (a) binarized ResNet-20; (b) binarized ResNet-18. Point 2.3 — Fusion, voting, and bagging schemes were applied to evaluate ensemble-based systems. However, there is no comparison between these methods in the paper. Reply: Thanks for your recommendation. For better understanding, we have explained more theoretical backgrounds of ensemble methods. Notably, detail descriptions and formuations of fusion, voting, and bagging and their comparison have been added in the Ensemble-based systems subsection as follows: (lines 184-203) Fusion and voting are basic ensemble-based schemes. In the fusion-based ensemble, the averaged prediction from base classifiers is used to calculate the training loss. When M base 1 PM m for a given classifiers {e1 , e2 , ..., em , ..., eM } are adopted, the output can be oi = M m=1 oi sample si . In the fusion, when yi is P the target output for si , its loss function is L(oi , yi ). On data 1 batch D, its training loss can be D D i=1 L(oi , yi ). The training loss is used to update parameters of all base classifiers. When using a voting-based ensemble, each base classifier is created independently. With M base classifiers, a base classifier em can be trained without considering other base classifiers. In inference, hard voting gathers so-called votes from base classifiers and selects the class with majority vote. For example, let us assume that there are two classes ({dog, cat}) in a dataset and three base classifiers ({e1 , e2 , e3 }) in a voting-based ensemble. When classifiers e1 and e2 classify a sample into dog, dog is voted on the classification. Even if classifier e3 votes for cat, dog is selected by a majority vote. On the other hand, soft voting sums the prediction probabilities from classifiers and then averages the summed values. The class with a high probability is selected in soft voting. For example, let us assume that a base classifier em outputs its predicting probabilities of classifying a sample for a set of classes {dog, cat}, which is formulated as P (em ) = {Pdog , Pcat }. When two base classifiers have P (e1 ) = {0.7, 0.3} and P (e2 ) = {0.2, 0.8}, their averaged probabilities can be Pavg = { 0.7+0.2 , 0.3+0.8 }. Therefore, the sample is classified into cat in this soft voting. 2 2 In the bagging-based ensemble (Breiman, 1996), the subsampling with replacement produces multiple datasets to train each base classifier. 
In Xu (2020), different data batches for each base classifier are sampled with replacement and used to train base classifiers independently. Point 2.4 — Your introduction at lines 296-310 needs more detail. I suggest that you can add 5 Figure 6: Top-1 inference accuracies of ensemble schemes using binarized ResNet-20 models on CIFAR-10 dataset. figure to describe the experiment result. Reply: For the evaluation using binarized ResNet-20 models on CIFAR-10 dataset, Fig. 6 has been added to show our experimental environments graphically as follows: Thanks for your brilliant comments. We assure that your idea will be helpful for inceasing the quality of this paper. Reviewer 3 Point 3.1 — In the abstract, add the value of classification accuracy to support the theory. Reply: In the end of abstract, we have added numerical values to conclude our performance in terms of classification accuracies as follows: (lines 21-26) Our experiments conclude that the proposed method using the filter sharing can be scalable with the number of classifiers and effective in enhancing classification accuracy. With binarized ResNet-20 and ReActNet-10 on the CIFAR-100 dataset, the proposed scheme can achieve 56.74% and 70.29% Top-1 accuracies with ten BNN classifiers, which enhances performance by 7.6% and 3.6% compared with that using a single BNN classifier. Point 3.2 — The authors show the results of their proposed method with the state-of-the-art model (ResNet), I would suggest showing the results of the ResNet on the same dataset without the use of the proposed method. Reply: Thanks for your recommendation. We added the evaluation results using original BNN classifiers. Besides, under similar experimental environments, the real-valued ResNet models were trained and evaluated. The experimental data of real-valued ResNet-20 on the CIFAR-100 dataset has been added as: 6 (lines 366-367) Whereas real-valued ResNet-20 can produce 64.26% final Top-1 accuracy, the binarized ResNet-20 achieved 49.16% Top-1 accuracy. The experimental data of ResNet-18 on the CIFAR-100 dataset has been added as: (lines 372-373) Whereas real-valued ResNet-18 achieved 75.61% final Top-1 accuracy, Top-1 final accuracy from the trained binarized ResNet-18 was 68.59%. Point 3.3 — Training parameters are required to mention. Reply: The hyperparameters of CNN training for the binarized ResNet on the CIFAR-100 dataset were described in this manuscript as follows: (lines 340-352) The initial weights of this model were trained, which is denoted as BN Npretrained in Algorithm 1. The training setup for producing the initial weights was as follows: In the data augmentation of input images, a 32 × 32 input image was randomly cropped from a 40 × 40 padded image and randomly flipped in the horizontal direction. However, this data augmentation above was not applied in the inference. We ran the training for 400 epochs with a batch size of 256. This training adopted the default Adam optimizer (Kingma and Ba, 2014) with momentum=0.9. It was known that the error of binarized operations could be helpful in regularizing BNN models so that the regularization decaying updated weights was not adopted by zero weight decay. The learning rate was changed depending on the poly policy so that the learning rate lr decreased by baselr × (1 − iteration epochs ). The term baselr means the starting learning rate, which was initialized as 0.001. 
For the CIFAR-100 dataset, the dropout (Srivastava et al., 2014) layer with dropout = 0.5 was inserted just before the linear fully-connected layer. Parametric Rectified Linear Unit (PReLU) (He et al., 2015) layers were used for the activation, as in Gu et al. (2019); Phan et al. (2020); Martinez et al. (2020). (lines 358-359) The retraining of this ensemble was performed for 200 epochs with a batch size of 256 and adopted the Adam optimizer with momentum=0.9. The starting learning rate was initialized as 0.01. In similar ways, the hyperparameters for Bi-Real-Net (lines 443-451) and ReActNet-10 (lines 466-479) on the CIFAR-100 dataset have been described in the revised version. Point 3.4 — Paper code with a nice demo is important to upload on any public platform. Reply: We have submitted a repository link to the data, including our code; the data is available at a GitHub repository linked as https://github.com/analog75/ensemble bnn, which has been added in the revised version as: (line 363) Our code is available at https://github.com/analog75/ensemble bnn. Point 3.5 — I would suggest citing the following reference when referring to CNNs, so that a new reference from 2021 can be used: https://link.springer.com/article/10.1186/s40537-021-00444-8 Reply: According to your recommendation, the review paper has been added in the revised version as follows: (lines 28-31) Deep Neural Networks (DNNs) have succeeded in achieving significant performance enhancements in many fields, including computer vision, speech recognition, natural language processing, etc. Notably, CNNs show many outstanding performances in applications in the field of computer vision (Alzubaidi et al., 2021). Point 3.6 — Comparison with state-of-the-art is necessary to add on the same used dataset. Reply: In agreement with your concerns, evaluations with a state-of-the-art BNN model have been added in the revised version. ReActNet-10 has been adopted to show the evaluation results with a state-of-the-art BNN model, including its original result, as follows: (lines 482-487) In the evaluations, Top-1 inference accuracies were enhanced by 1.7%–3.6%. Notably, the bagging scheme got 70.29% Top-1 inference accuracy with M = 10, which enhanced the final Top-1 accuracy by 3.6%. Like the results of the binarized ResNet-18 in Fig. 5, the bagging scheme was the best in our evaluations, where 70.29% Top-1 inference accuracy was achieved with M = 10. The evaluation results showed the scalability with M. The performance improvement slowed down as M increased, but the ensemble schemes always provided better performance over the case using only one classifier. Point 3.7 — Explain more on the research gap of previous methods. Reply: Thanks for your recommendation. In the revised version, the problem with existing ensemble methods has been additionally explained as follows: (lines 210-219) There have been several ensemble-based systems using BNNs. The ensemble-based system in Vogel et al. (2016) applies stochastic rounding to a real-valued weight to get its binary weight. The stochastic rounding of a weight w can be performed as in Eq. (7):

sr(w) = \begin{cases} \lfloor w \rfloor & \text{with probability } 1 - (w - \lfloor w \rfloor), \\ \lfloor w \rfloor + 1 & \text{with probability } w - \lfloor w \rfloor. \end{cases} \quad (7)

This system performs a kind of soft voting with base classifiers that contain different binary weight files from one high-precision neural network.
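A small sketch of the stochastic rounding of Eq. (7) may help; the NumPy helper below is our own illustration rather than code from Vogel et al. (2016):

```python
# Stochastic rounding sr(w): round down with probability 1 - (w - floor(w)),
# round up otherwise. Repeated draws give different rounded weight files,
# which matches the ensemble construction described above.
import numpy as np

def stochastic_round(w, rng=np.random.default_rng()):
    frac = w - np.floor(w)                      # w - floor(w)
    round_up = rng.random(np.shape(w)) < frac   # up with probability w - floor(w)
    return np.floor(w) + round_up

w = np.array([0.2, 0.5, 0.8])
# In expectation, stochastic_round(w) equals w itself.
print(np.mean([stochastic_round(w) for _ in range(10000)], axis=0))
```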
Each inference evaluation with a binary weight file could be considered as a base classifier, The ensemble-based system averages prediction probabilities to enhance the classification accuracy. Although this ensemble-based system lowers the classification variance of the aggregated classifiers (Vogel et al., 2016), its target system should store multiple weight files. Binary Ensemble Neural Network (BENN) (Zhu et al., 2019) adopts bagging and boosting strategies to obtain multiple models to be aggregated in Zhu et al. (2019). These sophisticated ensemble methods improve inference accuracy, but multiple weights should be stored as well, which is a significant burden when using BNNs. Besides, we have explained more theoretical backgrounds of ensemble methods. (lines 180-199) Fusion and voting are basic ensemble-based schemes. In the fusion-based ensemble, the averaged prediction from base classifiers is used to calculate the training loss. When M base 1 PM 1 2 m M classifiers {e , e , ..., e , ..., e } are adopted, the output can be oi = M m=1 oi m for a given sample si . In the fusion, when yi is P the target output for si , its loss function is L(oi , yi ). On data D 1 batch D, its training loss can be D i=1 L(oi , yi ). The training loss is used to update parameters of all base classifiers. When using a voting-based ensemble, each base classifier is created independently. With M base classifiers, each base classifier em can be trained without considering other base classifiers. In inference, hard voting gathers so-called votes from base classifiers and selects the class with majority vote. For example, let us assume that there are two classes ({dog, cat}) in a dataset and three base classifiers ({e1 , e2 , e3 }) in a voting-based ensemble. When classifiers e1 and e2 classify a sample into dog, class dog is voted on the classification. Even if classifier e3 votes for cat, class dog is selected by a majority vote. On the other hand, soft voting sums the prediction probabilities from classifiers and then averages the summed values. The class with a high probability is selected in soft voting. For example, let us assume that a base classifier em outputs its predicting probabilities of classifying a sample for a set of class {dog, cat}, which is formulated as P (em ) = {Pdog , Pcat }. When two base classifiers have P (e1 ) = {0.7, 0.3} and P (e2 ) = {0.2, 0.8}, their averaged probabilities can be Pavg = { 0.7+0.2 , 0.3+0.8 }. Therefore, the sample is classified into class cat in this soft voting. 2 2 In the bagging-based ensemble (Breiman, 1996), the subsampling with replacement produces multiple datasets to train each base classifier. In Xu (2020), different data batches for each base classifier are sampled with replacement and used to train base classifiers independently. Point 3.8 — The contributions of the article have to be clear for the readers, I would suggest making them as bullet points at the end of the introduction. Reply: Thanks for your sharp comments. In agreement with your recommendation, our contributios are summarized in Introduction section as follows: (lines 66-77) We summarize our contributions as follows: 9 ˆ We propose an ensemble method based on shared filter weights that reduce the amount of storage required for ensemble-based systems. ˆ In the proposed ensemble system, we show the scalability with the number of base classifiers in terms of classification accuracy. ˆ We adopt various ensemble methods and compare the details of each method in our evaluations. 
Details of experimental environments are described to apply the proposed method based on several BNNs. Notably, with a binarized ResNet-20 model (Rastegari et al., 2016) on the CIFAR-100 dataset (Krizhevsky et al., 2014), the proposed scheme can achieve 56.74% Top-1 accuracy with ten BNN classifiers, which enhances the performance by 7.58% compared with that using a single BNN model. For the state-of-the-art ReActNet-10 (Liu et al., 2020) on the CIFAR-100 dataset, our method produces Top-1 accuracy improvements of up to 3.6%. Thanks for your brilliant comments. We assure you that your ideas will be helpful in increasing the quality of this paper. "
Here is a paper. Please give your review comments after reading it.
381
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Legged robots are better able to adapt to different terrains compared with wheeled robots.</ns0:p><ns0:p>However, traditional motion controllers suffer from extremely complex dynamics properties. Reinforcement learning (RL) helps to overcome the complications of dynamics design and calculation. In addition, the high autonomy of the RL controller results in a more robust response to complex environments and terrains compared with traditional controllers. However, RL algorithms are limited by the problems of convergence and training efficiency due to the complexity of the task. Learn and outperform the reference motion (LORM), an RL based framework for gait controlling of biped robot is proposed leveraging the prior knowledge of reference motion. The proposed trained agent outperformed the reference motion and existing motion-based methods. The RL environment was finely crafted for optimal performance, including the pruning of state space and action space, reward shaping, and design of episode criterion. Several improvements were implemented to further improve the training efficiency and performance including: random state initialization (RSI), the noise of joint angles, and a novel improvement based on symmetrization of gait. To validate the proposed method, the Darwin-op robot was set as the target platform and two different tasks were designed: (I) Walking as fast as possible and (II) Tracking specific velocity. In task (I), the proposed method resulted in the walking velocity of 0.488m/s, with a 5.8 times improvement compared with the original traditional reference controller. The directional accuracy improved by 87.3%. The velocity performance achieved 2x compared with the rated max velocity and more than 8x compared with other recent works. To our knowledge, our work achieved the best velocity performance on the platform Darwin-op. In task (II), the proposed method achieved a tracking accuracy of over 95%. Different environments are introduced including plains, slopes, uneven terrains, and walking with external force, where the robot was expected to maintain walking stability with ideal speed and little direction deviation, to validate the performance and robustness of the proposed method.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 32</ns0:head><ns0:p>Autonomous robots have been used to perform different tasks and help to reduce workloads. Robotic arms 33 are widely used in factories, significantly improving productivity. For those non-moving robotics, motion 34 planning is to control all the joints of the robot to achieve the target position. Currently, various motion 35 controlling and optimization methods have been proposed and widely used <ns0:ref type='bibr' target='#b39'>(Sucan et al., 2012)</ns0:ref> <ns0:ref type='bibr'>(Huda 36 et al., 2020)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Ratliff et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b21'>(Liu and Liu, 2021)</ns0:ref>. Another family of robots, mobile robots, are widely 37 used to perform tasks including serving, rescue, and medical treatment. For mobile robots, the controlling 38 task includes both motion and moving, which increases the complexity of the design. Wheeled robots have 39 the simplest moving pattern. However, wheeled robots are not capable of complex outdoor environments 40 where the ground may slope or be uneven. 
Biped robots have more lifelike movements and can walk 41 on more complex terrains, showing great potential in nursing, rescue, and other various applications. inverted pendulum is one of the most adapted models which has inspired many algorithms such as those of <ns0:ref type='bibr' target='#b32'>Pratt et al. (2006)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>Lee and Goswami (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>Li et al. (2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b14'>Kajita et al. (2010)</ns0:ref>. Furthermore, the Zero Momentum Point (ZMP) <ns0:ref type='bibr' target='#b40'>(Vukobratovic et al., 2012)</ns0:ref> was proposed as a way to control different humanoid robots. In addition to walking on the plain, some traditional methods were proposed to solve the gait planning on slopes, uneven terrains, or with the disturbance of external force. <ns0:ref type='bibr' target='#b15'>(Kim et al., 2007</ns0:ref>) <ns0:ref type='bibr' target='#b26'>(Morisawa et al., 2012)</ns0:ref> <ns0:ref type='bibr' target='#b45'>(Yi et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b46'>(Yi et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b38'>(Smaldone et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b26'>Morisawa et al. (2012)</ns0:ref> proposed an improved balance control algorithm based on the Capture Point (CP) controller, enabling the robot to walk on uneven terrain. <ns0:ref type='bibr' target='#b45'>Yi et al. (2016)</ns0:ref> proposed an algorithm enabling the robot to walk on unknown uneven terrain while minimizing the modification to the original gait on plain based on online terrain estimation. Based on similar models for balance, <ns0:ref type='bibr' target='#b5'>Gong et al. (2019)</ns0:ref> proposed an algorithm enabling the robot Cassie to ride a segway. However, there exist some disadvantages of traditional methods. Firstly, traditional methods highly rely on dynamics and mathematical models for both the robot and the terrain, requiring large amounts of time and labor for designers. When the type of the robot or the property of the terrain is changed, the model needs to be re-designed. This disadvantage also results in the lack of adaptivity of the gait controller. Furthermore, the performance of the human-designed model is also limited by the prior knowledge and experience of the designers, which cannot fully explore the potential of the robot.</ns0:p><ns0:p>Reinforcement learning (RL) is suitable for controlling tasks where the agent can be trained within the designed environment as a controller for different tasks <ns0:ref type='bibr' target='#b0'>(Arulkumaran et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Gullapalli et al., 1994)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Johannink et al., 2019)</ns0:ref>. In those tasks, the trained model can instruct the controlled servos and adjust to the change of environment simultaneously, which results in better adaptivity and autonomy. 
In addition, many works have proved that exploration during the training process enables the controller to obtain skills beyond human knowledge, resulting in much better performance <ns0:ref type='bibr' target='#b37'>(Silver et al., 2017)</ns0:ref>.</ns0:p><ns0:p>With the development of efficient, tiny neural networks and the hardware accelerators, reinforcement learning can easily be implemented on robotic devices with embedded processors <ns0:ref type='bibr' target='#b48'>(Zhang et al., 2021</ns0:ref>) <ns0:ref type='bibr' target='#b22'>(Meng et al., 2020)</ns0:ref>. While the motion controlling tasks can be solved by reinforcement learning with high performance, gait controlling of legged robots remains a challenge due to the complexity. Different algorithms are used to solve the biped tasks from OpenAI Gym <ns0:ref type='bibr' target='#b1'>(Brockman et al., 2016)</ns0:ref> by implementing pure reinforcement learning to control the joints directly <ns0:ref type='bibr' target='#b9'>(Heess et al., 2017)</ns0:ref>. However, when these algorithms are used in the gait controlling for a real robot with much more complex dynamics property, the convergence and performance suffer. In addition, the trained model may also have a nonhuman-like gait, which is usually unacceptable. Thus training frameworks leveraging prior knowledge have become an important point of study for RL-based gait controllers. One solution is to combine reinforcement learning with the traditional models <ns0:ref type='bibr' target='#b18'>(Lin et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b31'>(Phaniteja et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b12'>(Jiang et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b4'>). Gil et al. (2019)</ns0:ref> proposed a multi-level algorithm to train the RL-based controller, by finding stable poses that meet the condition of ZMP. Then a motion sequence composed of the poses is trained to form a gait cycle. <ns0:ref type='bibr' target='#b19'>Liu et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b12'>Jiang et al. (2020)</ns0:ref> used reinforcement learning to optimize the parameters of the dynamics models improving the performance. However, the used traditional models, such as static walking patterns, largely limit the performance in velocity. And in many works, the joints are not directly controlled by the reinforcement learning algorithms, thus the flexibility and adaptivity of reinforcement learning decrease. Some knowledge including the symmetry of locomotion can also improve the performance of the trained model <ns0:ref type='bibr' target='#b47'>(Yu et al., 2018)</ns0:ref>. Other works utilize existing gait as the prior knowledge. <ns0:ref type='bibr' target='#b42'>Xi and Chen (2020)</ns0:ref> proposed a hybrid reinforcement learning framework to keep the center of pressure (CoP) close to the reference gait, achieving an adaptive gait on both static and dynamic platforms. Imitation-learning-based algorithms such as GAIL <ns0:ref type='bibr' target='#b10'>(Ho and Ermon, 2016)</ns0:ref> are also widely used to improve the training efficiency and overcome the problem of nonhuman-like motion. 
Imitation learning can mimic the motion of humans or robots controlled by traditional algorithms in much fewer training iterations with considerable performance and convergence <ns0:ref type='bibr' target='#b28'>(Peng et al., 2018)</ns0:ref> <ns0:ref type='bibr' target='#b30'>(Peng et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b43'>(Xie et al., 2018)</ns0:ref> <ns0:ref type='bibr' target='#b44'>(Xie et al., 2019)</ns0:ref>. However, imitation learning seeks to master the given motion, and the performance of the trained model is similar to expert data, with the potential of robot unexplored. <ns0:ref type='bibr' target='#b43'>Xie et al. (2018)</ns0:ref> proposed an imitation-learning-based algorithm for the biped control with different velocities.</ns0:p><ns0:p>However, The reference action is artificially modified for different velocities, and the feasibility cannot be guaranteed. Thus the robot failed to achieve the highest velocity. Meanwhile, the agent is trained only on plain, with the adaptivity and robustness of reinforcement learning not fully explored. We proposed a novel framework for biped control known as Learn and Outperform the Reference Motion (LORM) to Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>fully explore the robot's potential and utilize the prior data from the reference motion.</ns0:p><ns0:p>The main contributions of this paper include:</ns0:p><ns0:p>1. LORM, an RL-based controlling framework, was proposed for biped gait. Using both simulation environment state and reference state, the RL agent can not only mimic the reference motion but also explore a much better policy by combining the environment reward and imitation reward. LORM uses one gait cycle of the reference motion without modification for different tasks and terrains, which can be collected by various methods such as motion capture. With the assistance of the reference motion, LORM has much better training efficiency and performance than the pure RL method. Instead of simply imitating, in the tasks of walking as fast as possible and tracking specific velocity, LORM significantly outperformed the reference motion with better performance in velocity, direction, and robustness on different terrains. By the simple reward shaping, LORM can obtain different velocities from the same reference motion. With the full exploration by reinforcement learning, our max velocity on the plain is more than 2x compared with the rated highest speed of the official document of the robot, whose gait is generated by traditional methods. Compared with other proposed traditional or reinforcement learning control algorithms, the velocity performance is even more outstanding, with more than 7x improvement.</ns0:p><ns0:p>For other complex terrains, the proposed algorithm also has novel performance, with approximately 4x improvement on slopes and uneven terrains. To our knowledge, LORM achieves the highest velocity on the platform robot Darwin-op <ns0:ref type='bibr' target='#b8'>(Ha et al., 2013)</ns0:ref>. In addition to the high max velocity, the velocity can be adjusted at the training stage flexibly, providing more choices for different tasks and environments.</ns0:p><ns0:p>2. An RL environment with finely crafted state space, action space, reward, and criterion was proposed to allow the RL agent to learn how to control the robot in an expected manner with high performance.</ns0:p><ns0:p>3. Several tricks were introduced into the framework. 
Random State Initialization (RSI) and Gaussian noise were introduced into the training process. We proposed an improvement fully leveraging the symmetry of the gait cycle to further improve the performance and training efficiency.</ns0:p><ns0:p>4. Sufficient validation environments were constructed in Webots <ns0:ref type='bibr' target='#b23'>(Michel, 2004)</ns0:ref> to evaluate the performance and robustness of the proposed method and can be used to evaluate other methods and robots.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Reinforcement learning</ns0:head></ns0:div> <ns0:div><ns0:head>Overall theory</ns0:head><ns0:p>Reinforcement learning aims to generate an agent to provide proper actions according to the observations from the environment. A reinforcement learning structure is composed of an agent to be trained and the environment for training and evaluating. As shown in Figure1, the agent repeatedly interacts with the environment in RL training, according to the value function generated by a neural network <ns0:ref type='bibr' target='#b24'>(Mnih et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b25'>(Mnih et al., 2015)</ns0:ref> or a simple Q-value table that stores the expected reward for each combination of the state and action. In trajectory T (from the beginning state to the end state of the environment), at the t-th time step, the agent receives observation o t from the environment, which is based on current state s t of environment and outputs action a t accordingly. The environment receives the action and gets into the next state s t+1 and outputs reward r t for the action and new observation o t+1 . After some iterations of interaction, the agent will update its weights to get a higher total reward R total = n &#8721; t=1 r t , where n stands for the number of steps in one trajectory. This may be achieved by either getting higher reward in every step or by trying to maintain more steps before the end of the trajectory. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Policy gradient</ns0:head><ns0:p>Policy gradient algorithms are one mainstream family of reinforcement learning algorithms that are capable of complex continuous tasks. Policy gradient algorithms generate optional actions directly without one function or table indicating the reward for different actions. The output of the network is the probability distribution of actions.</ns0:p><ns0:formula xml:id='formula_0'>p(a|s, &#952; ) = P[A t = a|S t = s, &#952; t = &#952; ] (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>where &#952; is the current weights of the neural network. The agent will choose the action according to the probability distribution at every step. 
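As an illustration of Equation (1) and the interaction loop described above, a minimal sketch with a Gaussian policy over the joint commands is given below. The network sizes, the Gym-style env handle, and the helper names are our assumptions; only the 71-D state and 12-D action follow the dimensions used later in this paper:

```python
# A stochastic policy outputs a distribution p(a | s, theta); the agent samples
# a_t from it at every step and the per-step rewards are summed into R_total.
import torch

class GaussianPolicy(torch.nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, act_dim))
        self.log_std = torch.nn.Parameter(torch.zeros(act_dim))

    def distribution(self, obs):
        # p(a | s, theta): a Gaussian whose mean is produced by the network.
        return torch.distributions.Normal(self.body(obs), self.log_std.exp())

policy = GaussianPolicy(obs_dim=71, act_dim=12)   # 71-D state, 12-D action bias

def run_episode(env, policy):
    """env is assumed to be a Gym-style wrapper around the simulator."""
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        dist = policy.distribution(torch.as_tensor(obs, dtype=torch.float32))
        action = dist.sample()                    # a_t ~ p(. | s_t, theta)
        obs, reward, done, _ = env.step(action.numpy())
        total_reward += reward                    # R_total = sum_t r_t
    return total_reward
```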
Thus the probability P(&#964;|&#952; ) of trajectory &#964; = (s 1 , a 1 , &#8226;&#8226;&#8226;, s T , a T ) and the total reward can be calculated as:</ns0:p><ns0:formula xml:id='formula_2'>P(&#964;|&#952; ) = p(s 1 ) T &#8719; t=1 p &#952; (a t |s t , &#952; )p(s t+1 |s t , a t ) (2) R(&#964;) = &#8721; T t=1 r t (3)</ns0:formula><ns0:p>Given N different trajectories &#964; 1 , &#964; 2 , ...&#964; N sampled by policy &#960; with weights &#952; , the expectation of the total reward for one trajectory can be approximated as:</ns0:p><ns0:formula xml:id='formula_3'>R&#952; = &#8721; &#964; R(&#964;)P(&#964;|&#952; ) &#8776; 1 N N &#8721; n=1 R(&#964; n )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The training will optimize the weights &#952; of policy &#960; to obtain the highest expected reward.</ns0:p><ns0:formula xml:id='formula_4'>&#952; * = arg max &#952; R&#952; (5)</ns0:formula><ns0:p>During the gradient ascent of backward propagation, the probabilities of actions bringing more rewards will increase while other probabilities are reduced. The gradient can be calculated as:</ns0:p><ns0:formula xml:id='formula_5'>&#8711; R&#952; &#8776; 1 N N &#8721; n=1 R(&#964; n )&#8711; log P(&#964; n |&#952; ) = 1 N N &#8721; n=1 R(&#964; n ) T n &#8721; t=1 &#8711; log p(a n t |s n t , &#952; ) = 1 N N &#8721; n=1 T n &#8721; t=1 R(&#964; n )&#8711; log p(a n t |s n t , &#952; )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Advantage function A &#952; (s t , a t ) is introduced replacing R(&#964;) to further improve the training efficiency:</ns0:p><ns0:formula xml:id='formula_6'>&#8711; R&#952; &#8776; 1 N N &#8721; n=1 T n &#8721; t=1 A &#952; (s t , a t )&#8711; log p(a n t |s n t , &#952; ) (7) A &#952; (s t , a t ) = G t &#8722; b (8) G t = &#8721; T n t &#8242; =t &#947; t &#8242; &#8722;t r n t &#8242; (9)</ns0:formula><ns0:p>where G t is the discounted accumulated reward, &#947; is the discounted coefficient, and b is the regularizer which will be further discussed in the next subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>4/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Actor-critic structure</ns0:p><ns0:p>Actor-critic structure introduces one critic that calculates the value function V &#952; (s) of the state s indicating whether the state is good or bad. The value function is used in the calculation of the advantage function.</ns0:p><ns0:p>By the prediction of critic, the advantage function of each state can be calculated respectively without the information of the whole trajectory, as shown in Equation( <ns0:ref type='formula' target='#formula_7'>10</ns0:ref>). Here r t +V &#952; (s t+1 ) acts as the discounted accumulated reward G t and V &#952; (s t ) acts as regulizer b.</ns0:p><ns0:formula xml:id='formula_7'>A &#952; (s t , a t ) = r t +V &#952; (s t+1 ) &#8722;V &#952; (s t )<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>The critic is a neural network in deep reinforcement learning, which will be trained together with the agent (called actor) during the training process. 
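The two advantage estimates above can be sketched directly. The plain-Python helpers below are illustrative only; the discounted variant of Eq. (10) with an explicit γ factor is our assumption:

```python
# Monte-Carlo advantage A_t = G_t - b with discounted return G_t (Eqs. 8-9),
# and the one-step actor-critic form A_t = r_t + V(s_{t+1}) - V(s_t) (Eq. 10).
def discounted_returns(rewards, gamma):
    """G_t = sum_{t'>=t} gamma^(t'-t) * r_t', computed backwards over a trajectory."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def mc_advantages(rewards, gamma, baseline):
    return [g - baseline for g in discounted_returns(rewards, gamma)]

def td_advantage(reward, value_s, value_s_next, gamma=1.0):
    # Eq. (10) uses r_t + V(s_{t+1}) - V(s_t); including gamma here is a common
    # discounted variant (our choice).
    return reward + gamma * value_s_next - value_s
```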
The loss for critic can be calculated by temporal-difference (TD) approach:</ns0:p><ns0:formula xml:id='formula_8'>Loss = V &#952; &#8722; &#947;V &#952; (s t+1 ) &#8722; r t (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>Proximal policy gradient</ns0:p><ns0:p>The sampling is time-consuming in the training of reinforcement learning. However, the sampled trajectories expired after the update of policy network &#960; and new trajectories need to be sampled based on the new policy, which resulted in low trajectory utilization and training efficiency. Thus importance sampling is introduced in TRPO <ns0:ref type='bibr' target='#b34'>(Schulman et al., 2015a)</ns0:ref> and PPO <ns0:ref type='bibr' target='#b36'>(Schulman et al., 2017)</ns0:ref> to use the trajectories repeatedly, improving the training efficiency.</ns0:p><ns0:p>Importance sampling is widely used to estimate the expectation where the sampling of the distribution is difficult to obtain. Given one distribution x &#8764; p(x), the expectation f (x) can be estimated as Equation( <ns0:ref type='formula' target='#formula_10'>12</ns0:ref>), where x i is sampled from p(x).</ns0:p><ns0:formula xml:id='formula_10'>E x&#8764;p [ f (x)] = 1 n &#8721; i f (x i )<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>However, when the sample of x i is difficult to access, importance sampling can be used by introducing another distribution x &#8764; q(x) with a relatively small difference with p(x). The expectation f (x) with distribution p(x) can be calculated as Equation( <ns0:ref type='formula' target='#formula_11'>13</ns0:ref>), where x i is sampled from q(x). Based on importance sampling, PPO uses one neural network to interact with the environment collecting samplings and another neural network to update weights, which is called an off-policy algorithm. The training efficiency and sample utilization are largely improved. In addition, the difference of distribution p(x) and q(x) should be small to ensure the accuracy.</ns0:p><ns0:formula xml:id='formula_11'>E x&#8764;p [ f (x)] = E x&#8764;q [ f (x) p(x) q(x) ] = 1 n &#8721; i f (x i ) p(x i ) q(x i )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>PPO maintains two actors &#960; &#952; (Pi) and &#960; &#952; &#8242; (OldPi) with the same network structure. The actor &#960; &#952; is used to update weights while the &#960; &#952; &#8242; is used to interact with the environment sampling the training data.</ns0:p><ns0:p>The gradient for optimization is calculated based on importance sampling:</ns0:p><ns0:formula xml:id='formula_12'>&#8711; R&#952; = E &#964;&#8764;p &#952; (&#964;) [R(&#964;)&#8711; log p &#952; (&#964;)] = E &#964;&#8764;p &#952; &#8242; (&#964;) [ p &#952; (&#964;) p &#952; &#8242; (&#964;) R(&#964;)&#8711; log p &#952; (&#964;)]<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>PPO maintains one critic neural network for state evaluation. Introducing advantage function to replace R(&#964;), the gradient is further calculated as:</ns0:p></ns0:div> <ns0:div><ns0:head>5/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science (16)</ns0:p><ns0:p>The clip coefficient &#949; is furthermore introduced to limit the update step, which can keep the difference between Pi and OldPi small enough for importance sampling. 
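As a brief illustration of the critic's TD loss in Equation (11) and the importance-sampling ratio between Pi and OldPi, consider the following sketch; the tensor names and the squared form of the TD error are our choices, and the actual training relies on the PPO implementation from OpenAI Baselines:

```python
# Critic TD loss (Eq. 11) and the ratio p_theta / p_theta' used in Eqs. (13)-(14).
import torch

def critic_td_loss(value_s, value_s_next, reward, gamma):
    # V(s_t) - gamma * V(s_{t+1}) - r_t, squared here so it can be minimised.
    td_error = value_s - gamma * value_s_next.detach() - reward
    return (td_error ** 2).mean()

def importance_ratio(logp_new, logp_old):
    # p_theta(a_t|s_t) / p_theta'(a_t|s_t), computed from log-probabilities for
    # numerical stability; OldPi's log-probs are treated as constants.
    return torch.exp(logp_new - logp_old.detach())
```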
The final objective function for gradient ascent is calculated as Equation( <ns0:ref type='formula' target='#formula_13'>17</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_13'>J &#952; k PPO (&#952; ) = &#8721; (s t ,a t ) min( p &#952; (a t |s t ) p &#952; k (a t |s t ) A &#952; k (s t , a t ), clip( p &#952; (a t |s t ) p &#952; k (a t |s t ) , 1 &#8722; &#949;, 1 + &#949;)A &#952; k (s t , a t ))<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science To fully leverage the reference motion, our method generates the target action based on both the output of agent &#948; a and corresponding reference motion &#226;. Different from earlier works generating target action by RL agent directly, the output action bias &#948; a of RL agent is not the final target angles of the joints. In this case, the goal of the RL agent is to make a suitable adjustment to the reference action according to the current state. The final 12-D target action a explicitly indicating the target angles of joints is calculated as the sum of reference action and output bias of the RL agent:</ns0:p><ns0:formula xml:id='formula_14'>a = &#226; + &#948; a (<ns0:label>18</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>)</ns0:formula></ns0:div> <ns0:div><ns0:head>Reward function</ns0:head><ns0:p>Inspired by <ns0:ref type='bibr' target='#b28'>Peng et al. (2018)</ns0:ref>, the reward function of LORM is divided as task reward and imitation reward: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_16'>R = w I R I + w T R T (19a) w I + w T = 1 (19b)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Task reward</ns0:head><ns0:p>Task reward, including velocity reward and deviation penalty, was used to prompt the agent to interact properly with the environment and complete the task. The velocity reward encouraged the robot to walk at the expected velocity stably. Two different velocity rewards were designed according to the tasks. For the task of walking as fast as possible, the velocity reward was calculated as shown in Equation( <ns0:ref type='formula' target='#formula_17'>20</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_17'>r sim = 50 &#215; min(x pos &#8722; xpos , L max )<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>where x pos denotes the position in the forward direction at the current time-step while xpos denotes the position at the last time-step. L max is introduced as the ceiling of the reward to guarantee stability. The velocity reward for the task of tracking specific velocity is calculated as shown in Equation(21a):</ns0:p><ns0:formula xml:id='formula_18'>r sim = exp[&#8722;10 5 &#215; (x pos &#8722; xpos &#8722; x t ) 2 ] (<ns0:label>21a</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>)</ns0:formula><ns0:formula xml:id='formula_20'>x t = v t &#215; t (21b)</ns0:formula><ns0:p>where x pos and xpos have the same meaning as Equation( <ns0:ref type='formula' target='#formula_17'>20</ns0:ref>). x t is the expected forward distance in one time step, which can be converted with target velocity v t by Equation( <ns0:ref type='formula' target='#formula_18'>21b</ns0:ref>), where t denotes the length of one time step.</ns0:p><ns0:p>The deviation penalty is implemented to guarantee the accuracy of direction. When the robot deviates from the target direction, the total reward will decrease. 
The deviation penalty is defined as follows:</ns0:p><ns0:formula xml:id='formula_21'>Loss y = 0, &#8722;20&#176;&#8804; pitch &#8804; 20&#176;, &#8722;1, else. (<ns0:label>22</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>)</ns0:formula><ns0:p>The deviation is not significant when the pitch of the robot is in the range from &#8722;20&#176;to 20&#176;, thus the penalty is zero. When the absolute value exceeds this range, the penalty is set to &#8722;1 to command the robot to adjust the direction. The total environment reward is the sum of velocity reward and deviation penalty: The reward for joints is calculated as shown in Equation( <ns0:ref type='formula' target='#formula_24'>25</ns0:ref>), where qi t and q i t respectively stand for the angles of the i-th joint of the reference and the simulation environment, w it is the weight indicating the importance of the i-th joint. Different joints contribute differently to the walk. For instance, the joints of ankles are more important than joints of shoulders. Thus the weights of left and right shoulder are set to 0.05 while other joints of legs are 0.09, and add up to 1.</ns0:p><ns0:formula xml:id='formula_23'>R</ns0:formula><ns0:formula xml:id='formula_24'>r t I = exp[&#8722; &#8721; i w it (|| qi t &#8722; q i t || 2 )]<ns0:label>(25)</ns0:label></ns0:formula><ns0:p>The reward for the center of mass is calculated as shown in Equation( <ns0:ref type='formula' target='#formula_25'>26</ns0:ref>), where q j c and q j c denotes the reference and simulation position of the center of mass at the current time-step.</ns0:p></ns0:div> <ns0:div><ns0:head>10/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_25'>r c I = exp[&#8722; &#8721; j (|| q j c &#8722; q j c || 2 )]<ns0:label>(26)</ns0:label></ns0:formula><ns0:p>The reward for the Euler angle is calculated as shown in Equation( <ns0:ref type='formula' target='#formula_26'>27</ns0:ref>), where q j o and q j o denotes the reference and simulation Euler angle at the current time-step.</ns0:p><ns0:formula xml:id='formula_26'>r o I = exp[&#8722; &#8721; k (|| qk o &#8722; q k o || 2 )]<ns0:label>(27)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Criterion for done</ns0:head><ns0:p>The robot interacts with its environment during each training period collecting the states, actions and rewards with which to update network weights. When the episode ends, signal Done is set as 1 to reset the world and begin a new episode. If the criterion for Done is unsuitable, the robot will collect information of poor conditions, such as falling down, deviating, or marching in unnatural manners like crawling.</ns0:p><ns0:p>These unexpected data will mislead the RL agent during the training, especially at the beginning of the training process. A strict criterion for signal Done is introduced to maintain the purity of the experience pool, which will end the episode in advance before the bad experience is collected. The criterion is shown in Equation( <ns0:ref type='formula'>28</ns0:ref>). If the current time-step t exceeds the max time-step number T limited , the episode will end automatically. 
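Before the remaining termination conditions are listed, a compact sketch of how the combined reward of Equations (19)-(27) could be evaluated at one time step is given below. The function signature and the default values of L_max, w_I, and w_T are placeholders, while the constants 50, 10^5, the ±20° band, and the 0.6/0.3/0.1 imitation weights follow the text:

```python
# Illustrative single-step LORM reward, combining task and imitation terms.
import numpy as np

def lorm_reward(x_pos, x_pos_prev, pitch_deg,
                q_joints, q_joints_ref, joint_w,
                com, com_ref, euler, euler_ref,
                w_imitation=0.5, w_task=0.5,
                track_distance=None, l_max=0.02):
    # Task reward: Eq. (20) for walking as fast as possible, or Eq. (21a) when a
    # target forward distance per time step (x_t) is tracked instead.
    if track_distance is None:
        r_sim = 50.0 * min(x_pos - x_pos_prev, l_max)   # l_max value is assumed
    else:
        r_sim = np.exp(-1e5 * (x_pos - x_pos_prev - track_distance) ** 2)
    loss_y = 0.0 if -20.0 <= pitch_deg <= 20.0 else -1.0   # deviation penalty, Eq. (22)
    r_task = r_sim + loss_y                                 # Eq. (23)

    # Imitation reward: joints, centre of mass, and Euler angles (Eqs. 25-27),
    # combined with the 0.6/0.3/0.1 weights given in the text.
    r_joint = np.exp(-np.sum(joint_w * (np.asarray(q_joints_ref) - np.asarray(q_joints)) ** 2))
    r_com = np.exp(-np.sum((np.asarray(com_ref) - np.asarray(com)) ** 2))
    r_euler = np.exp(-np.sum((np.asarray(euler_ref) - np.asarray(euler)) ** 2))
    r_imitation = 0.6 * r_joint + 0.3 * r_com + 0.1 * r_euler

    return w_imitation * r_imitation + w_task * r_task      # Eq. (19a)

# Joint weights from the text: 0.05 for the two shoulders and 0.09 for the ten
# leg joints, which sum to 1.
joint_w = np.array([0.05, 0.05] + [0.09] * 10)
```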
Other stop conditions include: (a) The height of center of mass h com is lower than the </ns0:p><ns0:formula xml:id='formula_27'>Criterion = &#63729; &#63732; &#63732; &#63732; &#63732; &#63732; &#63732; &#63730; &#63732; &#63732; &#63732; &#63732; &#63732; &#63732; &#63731; True t &gt; T limit True h com &lt; H limit True roll &gt; 45&#176;or roll &lt; &#8722;45&#176;T rue yaw &gt; 45&#176;or yaw &lt; &#8722;45&#176;F<ns0:label>alse</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Random state initialization</ns0:head><ns0:p>The environment state is initialized at the beginning of every episode. Traditional RL training is based on fixed state initialization (FSI), resetting the environment to a fixed beginning state. In such methods, agent will learn the policy serially. For example, in the walking task, the robot will learn to stand stable at the beginning, then learn the next sub-task of walking. However, at the beginning of the training, the robot falls directly in most episodes, thus the latter sub-tasks cannot be trained, resulting in a low training efficiency. To make full use of the reference motion and improve the training efficiency, the random state initialization (RSI) is introduced into training. This method has been widely adopted <ns0:ref type='bibr' target='#b28'>(Peng et al., 2018</ns0:ref>) <ns0:ref type='bibr' target='#b27'>(Nair et al., 2018)</ns0:ref>. At the beginning of one episode, the environment is initialized as one random phase of the reference gait cycle. In this way, the agent learns earlier and later sub-tasks simultaneously with a much higher efficiency.</ns0:p></ns0:div> <ns0:div><ns0:head>Symmetrization of actions and states</ns0:head><ns0:p>The pattern of gait is defined as the cycling switch of the swing leg and instance leg between the two legs.</ns0:p><ns0:p>In the first half of the cycle, the left leg is the swing leg and in the second half of the cycle the right leg is the swing leg. The two halves are symmetrical, which provides useful prior knowledge for the training. </ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Parameter selection</ns0:head><ns0:p>In the LORM framework, different parameters were tested and selected to achieve the best performance. The results of the different methods for tracking specific speeds are shown in Figure11. The tracked forward distance in one time step is shown as x t = 0.01m in (a) and x t = 0.005m in (b), which can also be represented as 0.313m/s and 0.156m/s. In (a), the method with symmetrization kept the speed of 0.301m/s with an accuracy of 96.3%, which has 5.6% improvement over the method without symmetrization. In (b), the method with symmetrization kept the speed of 0.163m/s with an accuracy of 95.8%, which has 6.8% improvement than method without symmetrization.</ns0:p></ns0:div> <ns0:div><ns0:head>14/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Performance and robustness in different environments</ns0:head><ns0:p>In this subsection, all improvements were introduced into the method to achieve the final result. 
To validate the performance and robustness of the proposed method, different walking environments were constructed including plain, slope, uneven terrain, and external force.</ns0:p></ns0:div> <ns0:div><ns0:head>Walking on plains</ns0:head><ns0:p>The position and orientation during the walking procedure are shown in Figure13, where the task was to walk as fast as possible. The robot achieved an average speed of 0.488m/s, with 22.48m forward distance in 1,440 time steps. The velocity performance achieved a 5.8 time improvement compared with the reference motion generated by the traditional controller. The absolute deviation was 0.31m and the relative deviation was 1.4%. The relative deviation decreased 87.3% compared with the original reference motion, which is 11.0%. In addition, the max Euler angle deviation from the expected direction (euler_y in Figure13(b)) was 16.2&#176;, however, the robot corrected the direction automatically and simultaneously to maintain accuracy while the reference motion kept the deviated direction with no adjustment. The comparison of velocity performance is shown in Table6. The compared five works include: the official document of Darwin-op, <ns0:ref type='bibr' target='#b17'>Li et al. (2016)</ns0:ref>, <ns0:ref type='bibr' target='#b4'>Gil et al. (2019)</ns0:ref>, and two different algorithms in <ns0:ref type='bibr' target='#b42'>Xi and Chen (2020)</ns0:ref>, in which the target platform is the same as or similar with our platform. Our work and the first two compared works are tested on the platform Darwin-op, while the last three works on the platform NAO. For</ns0:p><ns0:p>Xi and Chen (2020), the highest velocity among various cases is selected as the absolute velocity on the plain. Normalization of the velocity is introduced to ensure the precision of the comparison, which divides the absolute velocity by the height of the controlled robot. The normalized velocity of our work is 2 times compared with the rated max velocity, and approximately 10 times compared with other algorithms. The result shows that our work fully explored the potential of the robot, getting better performance compared with traditional algorithms and other reinforcement learning algorithms combined with dynamics models.</ns0:p><ns0:p>To our knowledge, we have achieved the highest velocity on the platform Darwin-op.</ns0:p><ns0:p>For the task of tracking specific speed, the comparison of the actual speed and target speed is shown in Figure14, where the equivalent target speed (forward distance in one time step) is x t = 0.01m in (a) and</ns0:p><ns0:p>x t = 0.005m in (b). The tracking accuracy achieved above 95% in both conditions. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The sequence of walking uphill is shown in Figure15, where the slope was 0.1rad. The task was tracking specific velocity, rather than walking as fast as possible, to ensure the stability of walking. The robot kept the velocity of 0.295m/s, with an accuracy of 94.0% compared with the target velocity of 0.313m/s.</ns0:p><ns0:p>The directional error was adjusted simultaneously, with a max value of 25.4&#176;. The sequence of walking downhill is shown in Figure16, where the slope was 0.1rad and the task was tracking specific velocity.</ns0:p><ns0:p>The robot maintained a velocity of 0.278m/s, with an accuracy of 88.8% compared with the target velocity of 0.313m/s. The slight drop in accuracy was caused by the high velocity when walking downhill, which results in greater difficulty to keep balance. 
The velocity performance outperforms the rated max velocity on plain though the velocity is limited to around 0.313m/s during the training. The comparison of velocity performance on slopes is shown in Table7, where the compared works come from <ns0:ref type='bibr' target='#b42'>Xi and Chen (2020)</ns0:ref>.</ns0:p><ns0:p>The velocity performance is more than 5 times compared with the reference work. The robot maintained the directional error within 18&#176;in the tested 1,440 time steps, which is better than the original reference motion. In addition, the robot adjusted its pose before losing balance to maintain a stable walking gait, which is the benefit of RL based controller.</ns0:p></ns0:div> <ns0:div><ns0:head>17/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science t=0.1s t=0.4s t=0.7s</ns0:note><ns0:p>t=1.0s t=2.1s t=2.5s Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Uneven terrain</ns0:head><ns0:p>To further evaluate the robustness of the proposed method, an environment with the uneven ground was Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Variety of capable environments</ns0:head><ns0:p>Our method is capable in the widest range of environments including: plains, slopes, uneven terrains and walking with random disturbance of external force. The comparison of capable environments is shown in Table10, where the compared works are those mentioned in the previous comparisons. </ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>A novel framework for biped gait controlling was proposed based on reinforcement learning. The advantages of LORM include: 1. Better performance in velocity and direction. Our work achieved the best velocity performance on the platform Darwin-op to our knowledge. In addition, the robot can adjust to the environments to maintain balance and a stable velocity and direction simultaneously. proficiently. In addition, various methods were introduced to simplify the task and make full use of the prior knowledge of reference motion, improving the performance of the proposed method further. To validate the proposed method, different tasks and environments were designed, which can also be used in the validation of other methods or robots in the future. The experiment was conducted on the Darwin-op platform, which adapts servos as joints instead of a hydraulic actuator. Thus the gait controller of different small or medium-sized robots can also be trained with the proposed method with adjustment of training parameters. Thus, the design of the gait controller is much more efficient, costing less time and manpower.</ns0:p><ns0:p>The proposed framework has been implemented on gait controlling. Additional tasks and actions such as rolling, jumping will be researched with the proposed framework in future studies. By fusing different actions into one single RL controller, the robot will act more naturally with an easy and fluent change between different motions.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
Basic reinforcement learning framework.</ns0:figDesc><ns0:graphic coords='4,272.72,598.90,151.50,107.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>), where k denotes the number of current iteration. The overall structure of PPO is shown in Figure2. In this work, PPO from OpenAI Baselines (Dhariwal et al., 2017) is implemented as the training algorithm with hyperparameters as shown in Table1. The neural network structure of Pi and OldPi is shown in Figure3, and is composed of one input layer, one output layer, and two hidden layers. The algorithm also introduced generalized advantage estimation (GAE) (Schulman et al., 2015b) in the calculation of advantage function, improving the training performance.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Framework of PPO</ns0:figDesc><ns0:graphic coords='7,183.67,407.71,329.70,298.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Neural network structure of Pi and OldPi</ns0:figDesc><ns0:graphic coords='8,151.48,63.78,394.08,207.84' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. General Framework of LORM</ns0:figDesc><ns0:graphic coords='9,212.87,63.78,271.30,181.44' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Joint angles of reference motion in one cycle.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>R</ns0:head><ns0:label /><ns0:figDesc>T denotes the task reward, indicating the performance of the walking trajectory, while R I denotes the imitation reward, indicating the similarity between the state s of the simulation environment and the reference state &#349; at the same phase. w T and w I are the weights balancing the influence of each reward, which add up to 1. 9/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>T = r sim + Loss y (23) Imitation reward Imitation reward R I encourages the robot to imitate the reference motion in each phase of the gait cycle. In this case, the policy is constrained by the prior information obtained from the reference, which significantly improves the training efficiency. Imitation is especially important at the beginning of the training process where little knowledge is accumulated and the reward is almost zero in every training epoch. The imitation reward is composed of imitation rewards for joints, the center of mass, and Euler angles: weights of the three parts of the reward. Based on sufficient experiments, weights are set as follows: w t I = 0.6, w c I = 0.3 , w o I = 0.1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>threshold H limit . (b) The roll angle exceeds the threshold. (c) The yaw angle exceeds the threshold. These three conditions indicate the robot has lost balance and the experience is misleading to the training.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Three main methods were introduced into the framework to improve the training efficiency and the performance of the trained model: a) Random state initialization. b) Symmetrization of actions and states. 
c) Noise in training.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>The symmetrization of reference and the symmetrization of state and action are proposed to improve the training efficiency and performance.Symmetrization of reference:The gait cycle of reference motion is not symmetric, decreasing the performance of the trained agent, as shown in Figure5, where the curves of AnkleR and AnkleL are not perfectly symmetrical. To achieve absolute symmetry, the phases 9-17 are derived based on the phases11/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)Manuscript to be reviewed Computer Science 0-8, leveraging the symmetry of the two halves of the gait cycle. The resulting gait cycle has a perfect mirror symmetry in corresponding phase pairs: 0-9, 1-10, and so on.Symmetrization of state and action:The state space is composed of the phase information (1D), the simulation state (35D) and the reference state (35D). The robot only needs to learn the first half of the gait cycle due to the symmetry of the gait cycle, then the second half can be generated according to symmetry. We propose the compression of state space and action space to accelerate the training process based on this innovation as shown in Figure6. At phases 0-8, the states are unchanged and are sent to the RL agent directly, while states at phases 9-17 are processed into the symmetry state as follows: For the simulation state, mirror symmetry to the 20-dimension joint angles was obtained replacing the original ones. Other dimensions remained unchanged. Then the processed symmetry state was sent to the RL agent. The 20-dimension joint angles of the reference state is also replaced by the symmetric ones. Thus, all states indicating joint angles are symmetric. For states at symmetric phase pairs, such as phases 0 and 9, 1 and 10, the output action generated by the RL agent will be highly similar. At phase 0-8, the output of the RL agent will be sent to the adder directly. At phase 9-17, the output of the RL agent will be recovered by symmetrization firstly. Then the symmetry output will be sent into the adder. The final target angles for the joints are the sum of the output action and the reference angles for both the first and second halves of gait cycle.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flowchat of state symmetrization</ns0:figDesc><ns0:graphic coords='13,147.40,281.37,402.24,123.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Comparison of different weights w I , w T</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Comparison of length and reward in one episode between FSI and RSI</ns0:figDesc><ns0:graphic coords='14,150.39,558.34,194.51,129.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Speed comparison of different symmetrization methods</ns0:figDesc><ns0:graphic coords='15,218.92,202.58,259.34,172.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 11 .Figure 12 .</ns0:head><ns0:label>1112</ns0:label><ns0:figDesc>Figure 11. 
Comparison of tracking accuracy between methods with and without symmetrization</ns0:figDesc><ns0:graphic coords='16,146.56,63.88,198.02,148.67' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Position and orientation during walking procedure</ns0:figDesc><ns0:graphic coords='17,150.39,558.34,194.51,129.60' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Comparison of real velocity and tracked velocity</ns0:figDesc><ns0:graphic coords='18,146.56,288.61,198.25,148.81' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 15 .Figure 16 .</ns0:head><ns0:label>1516</ns0:label><ns0:figDesc>Figure 15. The sequence of walking uphill</ns0:figDesc><ns0:graphic coords='19,281.53,180.03,134.44,90.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>constructed in Webots as shown in Figure17. The walking sequence is shown in Figure18. For the task of walking as fast as possible, the robot achieved a max velocity of 0.453m/s, with a max position deviation of 0.234m, and a max direction deviation of 17.5&#176;. The comparison of velocity performance is shown in Table8. The three compared works come from<ns0:ref type='bibr' target='#b19'>Liu et al. (2018)</ns0:ref>,<ns0:ref type='bibr' target='#b45'>Yi et al. (2016), and</ns0:ref><ns0:ref type='bibr' target='#b26'>Morisawa et al. (2012)</ns0:ref>. The velocity performance is largely improved compared with the reference works. For the task of tracking specific velocity, the robot maintained the velocity of 0.299m/s, with an accuracy of 95.5% compared with the target velocity 0.313m/s. The max position deviation was 0.385m and the max directional deviation was 21.4&#176;. The trained robot was able to adapt well to the uneven ground and complete both tasks with high quality.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>FigureFigure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 17. Uneven terrain</ns0:figDesc><ns0:graphic coords='20,198.76,556.38,299.52,149.92' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 19 .Figure 20 .Figure 21 .Figure 22 .</ns0:head><ns0:label>19202122</ns0:label><ns0:figDesc>Figure 19. Sequence of walking on series of slopes</ns0:figDesc><ns0:graphic coords='22,150.73,276.33,128.68,85.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>2. Compared with pure RL-based methods, the training efficiency and convergence are largely improved by leveraging the supervision of reference motion. 3. Compared with imitation learning based methods, LORM significantly outperforms expert data instead of simply imitating. 4. LORM is capable of different environments including: plain, slopes, uneven terrains, and walking with the disturbance of external force. A training environment was expertly crafted for the RL agent to learn the expected gait manner</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,168.52,162.50,360.00,144.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,168.52,357.15,360.00,144.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>. 
The reference state is recorded in the same format, thus has the same 35 dimensions as the environment state.The assistant state is one integer ranging from 0 to 17, indicating the current phase of the gait cycle, which has 18 phases in total. Thus the gait cycle is a finite state machine (FSM) with 18 states. The total dimension of state space is 35 + 35 + 1 = 71.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Environment state</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>Dimension</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>Base Position</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>The xyz position of the base of robot</ns0:cell></ns0:row><ns0:row><ns0:cell>Orientation</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>The Euler angle of the robot</ns0:cell></ns0:row><ns0:row><ns0:cell>Velocity</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>The current xyz velocity</ns0:cell></ns0:row><ns0:row><ns0:cell>Angular velocity</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>The current xyz angular velocity</ns0:cell></ns0:row><ns0:row><ns0:cell>Position of center of mass</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>The xyz position of the center of mass</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Joints</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PelvR</ns0:cell><ns0:cell>PelvL</ns0:cell><ns0:cell>Joints on hips with two DoF</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>LegUpperR LegUpperL (Different from human, one another DoF is eliminated)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>LegLowerR LegLowerL</ns0:cell><ns0:cell>Joints on knees with one DoF (The same as human)</ns0:cell></ns0:row><ns0:row><ns0:cell>AnkleR</ns0:cell><ns0:cell>AnkleL</ns0:cell><ns0:cell>Joints on feet with two DoF</ns0:cell></ns0:row><ns0:row><ns0:cell>RootR</ns0:cell><ns0:cell>RootL</ns0:cell><ns0:cell>(The same as human)</ns0:cell></ns0:row></ns0:table><ns0:note>after pruning Name Description ShoulderR ShoulderL Joints on shoulders with one DoF (Degree of Freedom) (Different from human, one another DoF is eliminated)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Speed of different methods</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Average forward distance</ns0:cell><ns0:cell>Average</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>in one time step (m)</ns0:cell><ns0:cell>speed (m/s)</ns0:cell></ns0:row><ns0:row><ns0:cell>Original reference motion</ns0:cell><ns0:cell>0.0023</ns0:cell><ns0:cell>0.072</ns0:cell></ns0:row><ns0:row><ns0:cell>LORM without symmetrization</ns0:cell><ns0:cell>0.0117</ns0:cell><ns0:cell>0.365</ns0:cell></ns0:row><ns0:row><ns0:cell>LORM with</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>symmetrization in reference</ns0:cell><ns0:cell>0.0131</ns0:cell><ns0:cell>0.411</ns0:cell></ns0:row><ns0:row><ns0:cell>motion</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>LORM with</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>symmetrization in reference motion</ns0:cell><ns0:cell>0.0156</ns0:cell><ns0:cell>0.488</ns0:cell></ns0:row><ns0:row><ns0:cell>and state &amp; action</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 
.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Direction deviations of different methods</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Evaluating</ns0:cell><ns0:cell>Evaluating</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>environment(a)</ns0:cell><ns0:cell>environment(b)</ns0:cell></ns0:row><ns0:row><ns0:cell>Training without noise</ns0:cell><ns0:cell>0.44m</ns0:cell><ns0:cell>0.70m</ns0:cell></ns0:row><ns0:row><ns0:cell>Training with noise</ns0:cell><ns0:cell>0.17m</ns0:cell><ns0:cell>0.31m</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of velocity performance on plain</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell>Rated max velocity of Darwin-op</ns0:cell><ns0:cell>Li's</ns0:cell><ns0:cell>Gil's</ns0:cell><ns0:cell>Xi's</ns0:cell><ns0:cell>Model-based in Xi's</ns0:cell></ns0:row><ns0:row><ns0:cell>Absolute velocity (m/s)</ns0:cell><ns0:cell>0.488</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell cols='3'>0.06 0.0615 0.069</ns0:cell><ns0:cell>0.061</ns0:cell></ns0:row><ns0:row><ns0:cell>Height of robot (m)</ns0:cell><ns0:cell>0.454</ns0:cell><ns0:cell>0.454</ns0:cell><ns0:cell>0.454</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>Normalized velocity (s &#8722;1 )</ns0:cell><ns0:cell>1.075</ns0:cell><ns0:cell>0.529</ns0:cell><ns0:cell cols='3'>0.132 0.106 0.119</ns0:cell><ns0:cell>0.105</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison of velocity performance on slopes</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell>Xi's</ns0:cell><ns0:cell>Model-based in Xi's</ns0:cell></ns0:row><ns0:row><ns0:cell>Uphill</ns0:cell><ns0:cell>Absolute velocity (m/s) Height of robot (m)</ns0:cell><ns0:cell cols='2'>0.295 0.037 0.454 0.58</ns0:cell><ns0:cell>0.028 0.58</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Normalized velocity (s &#8722;1 )</ns0:cell><ns0:cell cols='2'>0.612 0.119</ns0:cell><ns0:cell>0.105</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell>Xi's</ns0:cell><ns0:cell>Model-based in Xi's</ns0:cell></ns0:row><ns0:row><ns0:cell>Downhill</ns0:cell><ns0:cell>Absolute velocity (m/s) Height of robot (m)</ns0:cell><ns0:cell cols='2'>0.278 0.069 0.454 0.58</ns0:cell><ns0:cell>0.061 0.58</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Normalized velocity (s &#8722;1 )</ns0:cell><ns0:cell cols='2'>0.612 0.119</ns0:cell><ns0:cell>0.105</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of velocity performance on uneven terrain</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Ours Liu's</ns0:cell><ns0:cell>Yi's</ns0:cell><ns0:cell>Morisawa's</ns0:cell></ns0:row><ns0:row><ns0:cell>Absolute velocity (m/s)</ns0:cell><ns0:cell cols='2'>0.453 0.022 0.05</ns0:cell><ns0:cell>0.267</ns0:cell></ns0:row><ns0:row><ns0:cell>Height of robot (m)</ns0:cell><ns0:cell>0.454 0.58</ns0:cell><ns0:cell>1.61</ns0:cell><ns0:cell>1.54</ns0:cell></ns0:row><ns0:row><ns0:cell>Normalized velocity (s &#8722;1 )</ns0:cell><ns0:cell cols='2'>0.998 0.038 0.031</ns0:cell><ns0:cell>0.173</ns0:cell></ns0:row><ns0:row><ns0:cell>Series of slopes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell cols='4'>A series of slopes combining plain-uphill-plain-downhill-plain was designed to further validate the</ns0:cell></ns0:row></ns0:table><ns0:note>adaptability of the proposed method. The walking sequence is shown in Figure19. The robot automatically adjusted to the gait policy to adapt different slopes and perform better. For instance, the angles of AnkleR and AnkleL during walking are shown in Figure20. The angle of AnkleR is higher during walking uphill and lower downhill. Correspondingly, the angle of AnkleL is lower during walking uphill and higher during the downhill. Both two ankles lift higher when the robot is walking uphill and lift lower when downhill.20/25PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of performance with external force</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell cols='2'>Liu's Smaldone's</ns0:cell></ns0:row><ns0:row><ns0:cell>Max force in x-axis (N)</ns0:cell><ns0:cell>[-10,10]</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>[-3,6]</ns0:cell></ns0:row><ns0:row><ns0:cell>Max force in y-axis (N)</ns0:cell><ns0:cell>[-6,6]</ns0:cell><ns0:cell>&#8764; 6.5</ns0:cell><ns0:cell>[-1,6]</ns0:cell></ns0:row><ns0:row><ns0:cell>Absolute velocity (m/s)</ns0:cell><ns0:cell>0.363</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>Height of robot (m)</ns0:cell><ns0:cell>0.454</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>Normalized velocity (s &#8722;1 )</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.055</ns0:cell><ns0:cell>0.086</ns0:cell></ns0:row></ns0:table><ns0:note>22/25PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Comparison of capable environments Ours Li's Gil's Xi's Liu's Yi's Morisawa's Smaldone's</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Plains</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell></ns0:row><ns0:row><ns0:cell>Slopes</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>&#10003;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Uneven terrains</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>External force</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#10003;</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>&#10003;</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='15'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='25'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64352:1:0:NEW 7 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
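The state and action symmetrization summarized in the figure text above (an 18-phase gait cycle treated as a finite state machine; phases 0-8 passed to the agent unchanged, phases 9-17 mirrored; final joint targets formed by adding the policy's residual action to the reference angles) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the left/right joint index pairing, the slice holding the 20 joint angles, and all function names are hypothetical placeholders.

import numpy as np

N_PHASES = 18                       # the gait cycle is an 18-state finite state machine
JOINT_SLICE = slice(0, 20)          # assumed location of the 20 joint angles in the state vector
MIRROR_PAIRS = [(0, 10), (1, 11)]   # hypothetical (right_index, left_index) pairs; the real
                                    # pairing follows the Darwin-op joint ordering of Table 3

def mirror_joints(q):
    """Return the left/right mirrored copy of a joint-angle vector."""
    out = q.copy()
    for r, l in MIRROR_PAIRS:
        out[r], out[l] = q[l], q[r]
    return out

def symmetrize_state(phase, sim_state, ref_state):
    """Phases 0-8: states pass through unchanged. Phases 9-17: the joint angles of
    both the simulation state and the reference state are replaced by their mirror."""
    if phase < N_PHASES // 2:
        return sim_state, ref_state
    sim_sym, ref_sym = sim_state.copy(), ref_state.copy()
    sim_sym[JOINT_SLICE] = mirror_joints(sim_state[JOINT_SLICE])
    ref_sym[JOINT_SLICE] = mirror_joints(ref_state[JOINT_SLICE])
    return sim_sym, ref_sym

def joint_targets(phase, action, ref_angles):
    """For phases 9-17 the policy output is mirrored back before the adder; the final
    target angles are the sum of the residual action and the reference angles."""
    if phase >= N_PHASES // 2:
        action = mirror_joints(action)
    return action + ref_angles

Whether any lateral joints additionally require a sign flip when mirroring depends on the joint-angle conventions of the robot model and is left out of this sketch.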
"Response to Comments from Editor and Reviewers Dear Editor of PeerJ Computer Science: The authors would like to thank the editor and the reviewers for their valuable comments and suggestions. The list of review comments from the editor and reviewers is really long, but all these review comments really give us a chance to re-consider every aspect of this work, from the content designing to the essay writing, and other detailed aspects. We have paid special attention to those issues raised by more than one reviewers, and then made some significant changes after careful consideration. Extra measurements have been done to validate the changes. The manuscript has been revised according to the careful consideration of the reviewers’ comments. The major changes in the revised manuscript are as follows: • Sufficient comparisons are added to validate the performance of the proposed method. • Sufficient introduction of related works are added. • Detailed description of proximal policy optimization is added. • The manuscript is carefully proofread to avoid typo mistakes. Kind personal regards and thanks to the editor and reviewers. Sincerely Editor 1. The manuscript is well-written with an interesting topic. However, you need to compare your study to existing methods to highlight the contribution of the proposed algorithm. Response: Thanks for the critical comments to further improve our work. There are some papers with related topic about the gait controlling of biped robots. Even though more than half of them showed the results without quantitative analysis, we managed to make comparison between our method with the latest, and best works setting the velocity performance as the metric. Sufficient comparisons have been made for various environments, including plain, slopes, uneven terrains and walking with the disturbance of external force. In each comparison, we provide sufficient data from related works to validate the performance of our method. To our knowledge, our method have achieved the best performance on the targeted platform robot, Darwin-op. And our method has the widest range of environments, while other related works usually solve one or two environments. Change(s): We have added comparison in Results -> Performance and robustness in different environments . (a) In Walking on plains, we added the comparison shown in Table6. The absolute velocity and the normalized velocity is compared to validate the velocity performance. The normalized velocity is calculated as the ratio of absolute velocity and the height of the experiment robot. We compare our method with the rated max velocity of Darwin-op, and four methods from three latest works[1][2][3]. [1] Li X, Li Y, Cui X. Kinematic analysis and gait planning for a DARwIn-OP Humanoid Robot[C]//2016 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2016: 1442-1447. [2] Gil C R, Calvo H, Sossa H. Learning an efficient gait cycle of a biped robot based on reinforcement learning and artificial neural networks[J]. Applied Sciences, 2019, 9(3): 502. [3] Xi A, Chen C. Walking Control of a Biped Robot on Static and Rotating Platforms Based on Hybrid Reinforcement Learning[J]. IEEE Access, 2020, 8: 148411-148424. (b) In Walking on slopes, we added the comparison shown in Table7. Both walking uphill and walking downhill are compared with the same metric as (a). The compared data comes from two different methods in one latest work[1]. [1] Xi A, Chen C. 
Walking Control of a Biped Robot on Static and Rotating Platforms Based on Hybrid Reinforcement Learning[J]. IEEE Access, 2020, 8: 148411-148424. (c) In Uneven terrain, we added the comparison shown in Table8 with the same metric as (a). Three representative works are compared here[1][2][3]. [1] Liu C, Ning J, Chen Q. Dynamic walking control of humanoid robots combining linear inverted pendulum mode with parameter optimization[J]. International Journal of Advanced Robotic Systems, 2018, 15(1): 1729881417749672. [2] Yi J, Zhu Q, Xiong R, et al. Walking algorithm of humanoid robot on uneven terrain with terrain estimation[J]. International Journal of Advanced Robotic Systems, 2016, 13(1): 35. [3] Morisawa M, Kajita S, Kanehiro F, et al. Balance control based on capture point error compensation for biped walking on uneven terrain[C]//2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012). IEEE, 2012: 734-740. (d) In External force, we added the comparison shown in Table9 with the same metric as (a). In addition, we added the metric of max force in x-axis and y-axis to compare the robustness of different methods. [1] Liu C, Ning J, Chen Q. Dynamic walking control of humanoid robots combining linear inverted pendulum mode with parameter optimization[J]. International Journal of Advanced Robotic Systems, 2018, 15(1): 1729881417749672. [2] Smaldone F M, Scianca N, Modugno V, et al. Gait generation using intrinsically stable MPC in the presence of persistent disturbances[C]//2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids). IEEE, 2019: 651-656. (e) We added one subsection Variety of capable environments to show the robustness of our methods. While most previous methods solve one or two environments, our method has the widest range of capable environments. The comparison is shown in Table10. The compared works are those mentioned above. 2. Additionally, the typos should be corrected via careful proof reading before re-submission. Response: Thanks for the critical comments to further improve our work. We apologize for those mistakes. We have actually required for an editing service provided by PeerJ before the first submission of the manuscript, and it really helped a lot. Change(s): We have corrected the typo mistakes carefully including those in text and figures. Other detailed text changes for improvement are in accordance with the suggestions from Reviewer1. Reviewer: 1 1. This paper proposes a biped control framework based on reinforcement learning. The expert trajectory of the traditional controller is introduced to accelerate the training. And the exploration of reinforcement learning ensures the final model outperforms the expert instead of simply imitating the expert. To improve the training efficiency and performance, some improvements are also introduced into the framework. The method is validated by various experiments, including two tasks (walking as fast as possible & tracking specific velocity) and several different environments (plain, up-hill, down-hill and uneven floor). The work is conducted well and is promising in different control tasks. Both the framework and the tricks are inspiring for other works. Response: Thanks for the comments very much. We will further dig in the field in the future work. 2. The description of the PPO algorithm is not detailed enough. The meaning of lambda and gamma in Table1 and their usage are not illustrated. Response: Thanks for the valuable suggestion. We apologize for the problem. 
Change(s): We have re-written the section Methods -> Reinforcement learning totally to give a brief and clear introduction of reinforcement learning and PPO algorithms. The missed explanation of lambda and gamma is also added. 3. The experiments are rich for the LORM models. However, the performance of reference motion should be illustrated to show the improvement or difference between the proposed algorithm and reference motion. Response: Thanks for the valuable suggestion. Change(s): We have added the performance of direction accuracy in the subsection Results -> Walking on plains. And the performance of velocity is given in Table4. 4. Some typo problems including: a. Some paragraphs have indentation while others do not. For example Line 295 and Line 297; Line 273 and Line 276. b. Please avoid capital letters in sentences. For example, Line 15 “ Learn and Outperform the Reference Motion (LORM), an RL based framework ... “ in the abstract. c. Equation 17: f(x) = ..., however, there is no variable x or function f(x). I think it should be Criterion = ... . d. The names of curves in Figures can be polished, for example, Fig11. And the captions can be used in figures to make the meaning of curves more clear. e. The language should be further polished. Especially in the subsection “Symmetrization of actions and states”, I wonder whether it can be more clear? Though the description is understandable, it takes time to read and understand it. Response: Thanks a lot for the detailed reviewing and valuable suggestions. Change(s): a. The indentation is controlled by the latex. Currently the first paragraph of one section has indentation while other paragraphs does not. Hopefully, this format may be edited and changed when the paper is being published. b. We have corrected it and checked to ensure there is no similar mistake. c. We have corrected it and checked to ensure there is no similar mistake. d. For Figure7a, we corrected the label from “Timitate” to “imitate”, which was a spelling mistake. We also exchanged the colors of the curves to make it consistent with Figure7(b). For Figure 11(a), 11(b), 12(a), 12(b), 14(a) and 14(b), we shortened the length of legends because the detailed description can found in the main text. In addition, we gave some more descriptions for the figures that may be misleading, such as Figure5. e. We have managed to edit the paragraph to make it more clear. The changes is shown below. 5. In addition, I have two questions to be answered by the authors: a. In the subsection “Symmetrization of actions and states”, why only the angles of joints are symmetrized while other observations keep unchanged? b. The input observation of the agent contains many items which can be obtained in simulation software (base position, position of centre of mass). However, is it possible to obtain them in the real world? Response: Thank you for the questions. For the first question, according to our experiments, the states for the joints play the most important role in the biped control. Thus the symmetrization is introduced for them. There is not significant changes within the allowable error range by changing other observations. For the second question, The angles of joints can be read back from the servos of the robot. The observation of position and orientation can be collected by IMU or SLAM system. Reviewer: 2 1. An RL based framework for gait controlling of biped robot is proposed in this paper to overcome the complications of dynamics design and calculation. 
The results validated the efficiency and the advantages of the proposed method. Response: Thank you for the comments. The gait control of biped robots is a hard problem with high complexity of dynamics. Thus we designed the reinforcement-learning-based control system to reduce the complication of the design and improve the robustness of the controller in different environments. We have designed different environments to validate our method, which can also be used for other related works. We will continue digging in the field of biped control in the future research. 2. As the proposed method is claimed to be a novel method, there should be more literature discussion in the introduction part to clarify the state-of-art of the field and thus the novelty of the paper. Response: Thank you for the suggestion, which gives great improvement to our manuscript. Changes: We have totally re-written the introduction part adding sufficient introduction of the current researches in biped controlling. The introduction of related works is generally divided into the traditional methods and the reinforcement-learning-based methods, both of which introduce many latest and state-of-art works. As shown below, all the texts in blue are newly added, introducing sufficient related works. 3. In the result and discussion part, it is better to compare and validate the result with published works to make it more convincing. Response: Thanks for the critical comments to further improve our work. There are some papers with related topic about the gait controlling of biped robots. Even though more than half of them showed the results without quantitative analysis, we managed to make comparison between our method with the latest, and best works setting the velocity performance as the metric. Sufficient comparisons have been made for various environments, including plain, slopes, uneven terrains and walking with the disturbance of external force. In each comparison, we provide sufficient data from related works to validate the performance of our method. To our knowledge, our method have achieved the best performance on the targeted platform robot, Darwin-op. And our method has the widest range of environments, while other related works usually solve one or two environments. Change(s): We have added comparison in Results -> Performance and robustness in different environments . (f) In Walking on plains, we added the comparison shown in Table6. The absolute velocity and the normalized velocity is compared to validate the velocity performance. The normalized velocity is calculated as the ratio of absolute velocity and the height of the experiment robot. We compare our method with the rated max velocity of Darwin-op, and four methods from three latest works[1][2][3]. [3] Li X, Li Y, Cui X. Kinematic analysis and gait planning for a DARwIn-OP Humanoid Robot[C]//2016 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2016: 1442-1447. [4] Gil C R, Calvo H, Sossa H. Learning an efficient gait cycle of a biped robot based on reinforcement learning and artificial neural networks[J]. Applied Sciences, 2019, 9(3): 502. [3] Xi A, Chen C. Walking Control of a Biped Robot on Static and Rotating Platforms Based on Hybrid Reinforcement Learning[J]. IEEE Access, 2020, 8: 148411-148424. (g) In Walking on slopes, we added the comparison shown in Table7. Both walking uphill and walking downhill are compared with the same metric as (a). The compared data comes from two different methods in one latest work[1]. 
[1] Xi A, Chen C. Walking Control of a Biped Robot on Static and Rotating Platforms Based on Hybrid Reinforcement Learning[J]. IEEE Access, 2020, 8: 148411-148424. (h) In Uneven terrain, we added the comparison shown in Table8 with the same metric as (a). Three representative works are compared here[1][2][3]. [3] Liu C, Ning J, Chen Q. Dynamic walking control of humanoid robots combining linear inverted pendulum mode with parameter optimization[J]. International Journal of Advanced Robotic Systems, 2018, 15(1): 1729881417749672. [4] Yi J, Zhu Q, Xiong R, et al. Walking algorithm of humanoid robot on uneven terrain with terrain estimation[J]. International Journal of Advanced Robotic Systems, 2016, 13(1): 35. [3] Morisawa M, Kajita S, Kanehiro F, et al. Balance control based on capture point error compensation for biped walking on uneven terrain[C]//2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012). IEEE, 2012: 734-740. (i) In External force, we added the comparison shown in Table9 with the same metric as (a). In addition, we added the metric of max force in x-axis and y-axis to compare the robustness of different methods. [3] Liu C, Ning J, Chen Q. Dynamic walking control of humanoid robots combining linear inverted pendulum mode with parameter optimization[J]. International Journal of Advanced Robotic Systems, 2018, 15(1): 1729881417749672. [4] Smaldone F M, Scianca N, Modugno V, et al. Gait generation using intrinsically stable MPC in the presence of persistent disturbances[C]//2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids). IEEE, 2019: 651-656. (j) We added one subsection Variety of capable environments to show the robustness of our methods. While most previous methods solve one or two environments, our method has the widest range of capable environments. The comparison is shown in Table10. The compared works are those mentioned above. 4. There are some typo and grammar errors in the paper, please give it a proofreading for the language check. Response: Thanks for the critical comments to further improve our work. We apologize for those mistakes. We have actually required for an editing service provided by PeerJ before the first submission of the manuscript, and it really helped a lot. Change(s): We have corrected the typo mistakes carefully including those in text and figures. 5. The results are sufficient enough to validate the aim of the paper. However, more discussions are expected to emphasize the novelty and significance of the method. Response: Thanks for the suggestion. We have re-written the introduction part which has better emphasize on our novelty. And we have added sufficient comparisons to validate the significance of our method. The detailed changes have been mentioned above. "
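As a quick sanity check of the comparison metric used throughout the added tables, normalized velocity is simply absolute velocity divided by robot height; the snippet below (helper name ours) reproduces the Table 6 values for the proposed method and for the rated Darwin-op maximum.

def normalized_velocity(abs_velocity_m_s, robot_height_m):
    """Normalized velocity (1/s) = absolute walking velocity / robot height."""
    return abs_velocity_m_s / robot_height_m

# Proposed method on the plain (Table 6): 0.488 m/s at a robot height of 0.454 m.
print(round(normalized_velocity(0.488, 0.454), 3))   # 1.075, matching the table
# Rated max velocity of Darwin-op: 0.24 m/s / 0.454 m = 0.529, also as tabulated.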
Here is a paper. Please give your review comments after reading it.
382
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in factual information to be widely spread, which misleads readers and exerts adverse effects on society. Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection. Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in factual information to be widely spread, which misleads readers and exerts adverse effects on society.</ns0:p><ns0:p>Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection.</ns0:p><ns0:p>Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>As Internet technology gradually matures, online social networking (OSN) has become the spiritual ecology. Since OSN information is open and easily accessible, social networking software such as Weibo, Twitter, and Facebook have become the primary sources for millions of global users to receive news and information. 
They serve as essential approaches for Internet users to express their opinions. However, the authenticity of published information cannot be detected without supervision. Such social networking software has become the source of public opinion in hot events and news media.</ns0:p><ns0:p>For example, during the tenure of Barack Obama as the US President, a tweet from the 'so-called' Associated Press said, 'Two explosions occurred in the White House, and US President Barack Obama was injured.' Three minutes after the tweet was sent, the US stock index plunged like a 'roller coaster,' and the market value of the US stock market evaporated by 200 billion US dollars within a short period, which tremendously affected both the stock and bond futures. Soon after, the Associated Press issued a statement saying that its Twitter account had been hacked, and that tweet proved to be false news.</ns0:p><ns0:p>Therefore, it is of great necessity to automatically detect social media rumors in the early stage, and this technology will be extensively applied with the rapid development of social networks.</ns0:p><ns0:p>Nowadays, online rumors are no longer in the single form of texts. Instead, they are often in multiple modalities that combine images and texts. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the cases of rumors in the Twitter dataset, displaying the texts and images of each tweet. In Figure <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>, the news is fake based on the images and texts; it is hard to identify whether the news in Figure <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref> is true or not, but the images are fake; we cannot determine the authenticity of the news in Figure <ns0:ref type='figure' target='#fig_0'>1C</ns0:ref> based on the images, but we can confirm that the information is false according to the texts. In current studies, the features of images and texts are mostly fused through feature concentration and averaging results. Nevertheless, this single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random.</ns0:p><ns0:p>This paper introduces a new end-to-end framework to solve the above problems. This framework is known as the multi-modal affine fusion network (MAFN). In the proposed model, employing affine fusion, we fused the features of images and texts to reduce the semantic gap and better capture the semantic correlation between images and texts. Entity recognition was introduced to improve the semantic understanding of texts and enhance the ability of rumor detection models. MAFN can gain multi-modal knowledge representation by processing posts on social media to detect rumors effectively. 
This paper makes the following three contributions:</ns0:p><ns0:p>&#8226; We proposed the multi-modal affine fusion network (MAFN) combined with entity recognition for the first time better to capture the semantic correlation between images and texts.</ns0:p><ns0:p>&#8226; The proposed MAFN model enriched the semantic information of text with entity recognition, and entity recognition was fused with the extracted textual features to improve the semantic comprehension of text.</ns0:p><ns0:p>&#8226; Experiments show that the MAFN model proposed in this paper can effectively identify rumors on Weibo and Twitter datasets and is superior to currently available multi-modal rumor detection models.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In early research on rumor detection, <ns0:ref type='bibr' target='#b3'>Castillo et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b15'>Kwon et al. (2013)</ns0:ref>, the rumor detection model was mainly established based on the differences between the features of rumors and factual information. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to identify rumors on Sina Weibo. However, it is time and energy-consuming to design these features manually, and the language patterns are highly dependent on specific time and knowledge in corresponding fields. Therefore, these features cannot be correctly understood. (2017) detected rumors by fusing the image and textural features of posts using the RNN combined with the attention mechanism. However, multi-modal features still depend highly on specific events in the dataset, which will weaken the model's generalization ability. Therefore, Wang et al. <ns0:ref type='bibr' target='#b24'>Wang et al. (2018)</ns0:ref> put forward the EANN model that connected the visual features and textual features of posts in series and applied the event discriminator to remove specific features of events and learn the shared features of rumor events. Experiments show that this method can detect many events that are difficult to distinguish in a single modality. This model extracts adequate information and essential features from highly repeated texts, which solves the problems of excessive redundancy of texts in the data to be tested and weak information links between remote sites.</ns0:p><ns0:p>According to Dhruv K et al., <ns0:ref type='bibr' target='#b14'>Khattar et al. (2019)</ns0:ref>, a single fusion method cannot effectively represent the posts. So, they used the encoder and decoder to extract the features of images and texts and learned across modalities with the help of Gaussian distribution. Liu et al. <ns0:ref type='bibr' target='#b13'>Jinshuo et al. (2020)</ns0:ref> put the text vector, the text vector in the image, and the image vector together, and then processed them using Gaussian distribution to get a new fusion vector to discover the association between the two modalities of hidden representation. Besides learning the text representation of posts, Zhang et al. <ns0:ref type='bibr' target='#b27'>Zhang et al. 
(2019)</ns0:ref> retrieved external knowledge to supplement the semantic representation of short posts and used conceptual knowledge as additional evidence to improve the performance of the rumor detection model.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This paper introduced the four modules of the proposed MAFN model in this Section, i.e., the entity recognition enhanced textual feature extractor, the visual feature extractor, the multi-modal affine fuser, and the rumor classifier. Furthermore, we described the integration of the proposed modules to represent and detect rumors.</ns0:p><ns0:p>We instantiated tweets on Weibo and Twitter. The total tweets were expressed as S = {t 1 ,t 2 , . . . ,t n }, and each tweet was expressed as t = {T,E,V}, where T denotes the text content of the tweets, E represents the entity content extracted from the tweets, and V stands for the visual content matched with the tweets. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>L = {L 1 , L 2 , . . . , L m }</ns0:formula><ns0:p>Computer Science Bert is a natural language processing model with the transformer bidirectional encoder representation as to the core, which can better extract the text context representation bidirectionally. By inputting the sequential vocabulary of the words in the tweets, the words were first embedded into the vector. The dimension of the ith word in the sentence is denoted by m, which is expressed as W i &#8712; R m , and by inputting it into the sentence, S, it can be expressed as:</ns0:p><ns0:formula xml:id='formula_1'>S = [W 0,W 1,W 2, . . . ,W p] (1)</ns0:formula><ns0:p>Where, S &#8712; R m * p , p denotes the total number of words, W 0 denotes [CLS], and W p represents <ns0:ref type='bibr'>[SEP]</ns0:ref>. By inputting the complete texts of tweets into the Bert model, we obtained the feature vector of the given sentence as</ns0:p><ns0:formula xml:id='formula_2'>S f = [W f 0,W f 1,W f 2, . . . ,W f p]</ns0:formula><ns0:p>Then the sentence feature vectors S f n were given to the two fully connected layers. The above steps 138 can be defined as follows:</ns0:p><ns0:formula xml:id='formula_3'>139 Rt &#8242; = &#963; (W f t2&#8226;&#963; (W f t1 &#8226;S f + bt1) + bt2) (2)</ns0:formula><ns0:p>Where W f t1 denotes the weight matrix of the first fully connected layer with activation function,</ns0:p></ns0:div> <ns0:div><ns0:head>140</ns0:head><ns0:p>W f t2 represents the weight matrix of the second fully connected layer with activation function, and bt1 141 and bt2 are the bias terms.</ns0:p></ns0:div> <ns0:div><ns0:head>142</ns0:head><ns0:p>The attention-based neural network can better obtain relatively long dependencies in sentences. The self-attention mechanism is a kind of attention mechanism that associates different positions of a single sequence to calculate the representation of the same sequence. To enable the model to learn the correlation 4/12 between the current word and the other parts of the sentence, we added the self-attention mechanism after the fully connected layer, the process of which was expressed as follows:</ns0:p><ns0:formula xml:id='formula_4'>Attsel f = so f tmax[QT &#8226; KT &#8868; / &#8730; m]&#8226;V T (3)</ns0:formula><ns0:p>Where, QT = Rt &#8242; &#215; WQT , KT =Rt &#8242; &#215; WKT , V T = Rt &#8242; &#215; WV T .WQT , WKT , WV T denote the three matrices learned by Q, K, and V, respectively. 
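A minimal PyTorch-style sketch of Eqs. (2)-(3), i.e., the two fully connected layers over the Bert sentence features followed by scaled dot-product self-attention, is given below; the residual sum of Eq. (4) is described next. The 768-dimensional Bert output, the hidden size k = 32, the choice of sigmoid for the unspecified activation σ, and the use of the feature dimension inside the square root are illustrative assumptions, not the authors' exact settings.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextResidualSelfAttention(nn.Module):
    """Sketch of the textual branch: Eq. (2) two fully connected layers over the
    Bert features, Eq. (3) scaled dot-product self-attention over the sequence."""
    def __init__(self, bert_dim=768, k=32):
        super().__init__()
        self.fc1 = nn.Linear(bert_dim, k)       # W_ft1, b_t1
        self.fc2 = nn.Linear(k, k)              # W_ft2, b_t2
        self.w_q = nn.Linear(k, k, bias=False)  # W_QT
        self.w_k = nn.Linear(k, k, bias=False)  # W_KT
        self.w_v = nn.Linear(k, k, bias=False)  # W_VT

    def forward(self, s_f):                     # s_f: (batch, seq_len, bert_dim) from Bert
        rt_prime = torch.sigmoid(self.fc2(torch.sigmoid(self.fc1(s_f))))           # Eq. (2)
        qt, kt, vt = self.w_q(rt_prime), self.w_k(rt_prime), self.w_v(rt_prime)
        att = F.softmax(qt @ kt.transpose(-2, -1) / math.sqrt(qt.size(-1)), dim=-1) @ vt  # Eq. (3)
        return att + rt_prime                   # residual connection, Eq. (4) described below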
To make the model automatically recognize the importance of each word, degrade unimportant features to their original features, and process essential features using the self-attention mechanism, we used the residual connection to extract the features better.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the architecture of a residual self-attention. A building block was defined as:</ns0:p><ns0:formula xml:id='formula_5'>Rt = Attsel f + Rt &#8242; (4)</ns0:formula><ns0:p>Where, R t denotes the eventually extracted text representation, R t &#8712; R k .</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. The architecture of a residual self-attention.</ns0:p></ns0:div> <ns0:div><ns0:head>Extraction of Entity Representation</ns0:head><ns0:p>Named entity recognition identifies person names, place names, and organization names in a corpus.</ns0:p><ns0:p>It was assumed that the combination of entity tagging and text coding in a post could supplement the semantic representation of the short text of the post in a certain way so that the model could identify rumors and non-rumors more accurately. Explosion AI developed spacy, a team of computer scientists and computational linguists in Berlin, and its named entity recognition model was pre-trained on OntoNotes 5, a sizeable authoritative corpus. In this paper, Spacy was applied to train the two datasets and extract the entities of posts. There were 18 kinds of identifiable entities.</ns0:p><ns0:p>First of all, we identified the recognizable word W i as the entity e &#8712; E s in every sentence S = Rv &#8712; R k . The equation for extracting image features was defined as follows:</ns0:p><ns0:formula xml:id='formula_6'>Rv &#8242; = W f v2 &#8226;&#963; (BN(W f v1 &#8226;Rvgg + bv1)) + bv2 (6) Rv = Dropout(&#963; (BN(Rv &#8242; )))<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Where, Rvgg represents the visual features extracted from the network in the pre-trained model VGG19, &#963; is the activation function, W f v1 denotes the weight matrix of the first fully connected layer with the activation function, and bv1 and bv2 are the bias terms.</ns0:p></ns0:div> <ns0:div><ns0:head>Multi-modal Affine Fuser</ns0:head><ns0:p>Affine transformation transforms into another vector space via linear transformation and translation. Through affine transformation, the multi-modal affine fuser fuses the multi-modal features extracted by the entity recognition enhanced textual feature extractor and the visual feature extractor, the joint representation and visual features of text and entity. It was assumed that the data of the two modalities could be fused more closely and the high-level semantic correlation could be better extracted. The corresponding equation was defined as follows:</ns0:p><ns0:formula xml:id='formula_7'>R c = F R v &#8226;R u + H (R v ) (8)</ns0:formula><ns0:p>Where, Rc is the feature R c &#8712; R k gained after the fusion of all features, and F &#8226; and H &#8226; were fitted by the neural network. After extracting the fused features, in order to get more robust features, we reconnected the fused features with the textual features to obtain the total feature R s . The equation was expressed as:</ns0:p><ns0:formula xml:id='formula_8'>R s = R c &#8853; R t (9)</ns0:formula><ns0:p>Where, &#8853; denotes concatenation.</ns0:p></ns0:div> <ns0:div><ns0:head>6/12</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Rumor Detector</ns0:p><ns0:p>The rumor detector, based on the multi-modal affine fuser, sent the finally obtained multi-modal feature R s to the multilayer perceptron for classification to judge whether the message was a rumor or not. The rumor detector consists of multiple completely connected layers with softmax. The rumor detector was expressed as G(R i s , &#952; ), where &#952; represents all the parameters in the rumor detector, and R i s denotes the multi-modal representation of the case of the ith tweet. The rumor detector was defined as follows:</ns0:p><ns0:formula xml:id='formula_9'>p i = G(R i s , &#952; )<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Where p i denotes the probability that the ith post input by the detector is a rumor, in the process of model training, we selected the cross-entropy function as the loss function, which was expressed as follows:</ns0:p><ns0:formula xml:id='formula_10'>Loss = N &#8721; i=1 &#8722;[L i &#215; log (p i ) + (1 &#8722; L i ) &#215; log(1 &#8722; p i )]<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>Where, L i denotes the tag of the tweet in the i-th group, and N refers to the total number of training samples.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>This section first described the datasets used in the experiment, namely two social media datasets extracted from the real world. Secondly, we briefly compared the results obtained by the most advanced rumor detection method and those gained by the model proposed in this paper. Through the MAFN ablation experiment, we compared the performances of different models.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>To fairly evaluate the performance of the proposed model, we used two standard datasets extracted from the real world to assess the rumor detection framework of the MAFN. These two datasets were composed of rumors and non-rumors collected from Twitter and Weibo, which simulated the natural open environment to some extent. They are currently the only datasets with paired image and text information.</ns0:p></ns0:div> <ns0:div><ns0:head>Weibo Dataset</ns0:head><ns0:p>The Weibo dataset is a dataset proposed by <ns0:ref type='bibr' target='#b12'>Jin Jin et al. (2017)</ns0:ref> for rumor detection. It consists of the data collected by Xinhua News Agency, an authoritative news source in China, and the website of Sina Weibo and the data verified by the official rumor refuting system of Weibo. We preprocessed the dataset using a method similar to that put forward by Jin. First, locality sensitive hashing (LSH) was applied to filter out the same images and then delete irregular images such as very small or very long images to ensure that images in the dataset were of uniform quality. In the last step, the dataset was divided into the training and test sets. The ratio of tweets in training set to those in the test set was 8:2.</ns0:p></ns0:div> <ns0:div><ns0:head>Twitter Dataset</ns0:head><ns0:p>The Twitter dataset <ns0:ref type='bibr' target='#b2'>Boididou et al. (2015)</ns0:ref> was released to verify the task of social media rumor detection.</ns0:p><ns0:p>This dataset contains about 15,000 tweets focusing on 52 different events, and each tweet is composed of texts, images, and videos. 
The ratio of concentrated development set to test set in the dataset is 15:2, with the ratio of rumors to non-rumors being 3:2. Since this paper mainly studies the fusion of texts and images, we filtered out all tweets with videos. The ratio of development set and test set used to train the proposed model is the same as above.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiment Setting</ns0:head><ns0:p>The feature dimension of the images processed by VGG19 was 1000; the image features were extracted and embedded by two linear layers to obtain the feature dimension. After applying Bert and the linear layer were processed, the texts and entities were turned into 32-dimensional vectors. </ns0:p></ns0:div> <ns0:div><ns0:head>Baselines</ns0:head><ns0:p>To verify the performance of the proposed multi-modal rumor detection framework based on knowledge attention fusion, we compared it with the single-modal methods, i.e., Textual and Visual, and five new multi-modal models. Textual and Visual were the subnetworks of the MAFN. The following are relatively new rumor detection methods for the comparative analysis:</ns0:p><ns0:p>&#8226; Neural Talk generates the words that describe images using the potential representations output by the RNN. Using the same structure, we applied the RNN to output the joint representation of images and texts in each step and then fed the representation into the fully connected layer for rumor detection and classification.</ns0:p><ns0:p>&#8226; <ns0:ref type='bibr'>EANN Wang et al. (2018)</ns0:ref>: extracted textual features using Text-CNN, processes image features with VGG19 and then splices the two types of features together. With the features of specific events removed by the event discriminator, the remaining features were input into the fake news detector for classification.</ns0:p><ns0:p>&#8226; MVAE <ns0:ref type='bibr' target='#b14'>Khattar et al. (2019)</ns0:ref>: used the structure of encoder-decoder to extract the image and textual features and conducted cross-modal learning with Gaussian distribution.</ns0:p><ns0:p>&#8226; att- <ns0:ref type='bibr'>RNN Jin et al. (2017)</ns0:ref> uses the RNN combined with the attention mechanism to fuse three modalities, i.e., image, textual, and user features. For a fair comparison, we removed the feature fusion in the user feature part of att-RNN, with the parameters of other parts being the same as those of the original model.</ns0:p><ns0:p>&#8226; <ns0:ref type='bibr'>MSRD Jinshuo et al. (2020)</ns0:ref> obtains a new fusion vector for classification by splicing textual features, textual features in images, and visual features extracted by VGG19 using Gaussian distribution.</ns0:p><ns0:p>&#8226; VQA is applied in the field of visual questioning and answering. Initially a multi-classification task, the image question-and-answer task was changed to a binary classification task. We used a single-layer LSTM with 32 hidden units to detect and classify rumors.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Comparison</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> shows the baseline results of single-modal and multi-modal models as well as the performances of the MAFN on two datasets in terms of the accuracy, precision, recall, and F1 of our rumor detection framework. MAFN performed better than the baseline models. 
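For reference, the fusion and classification head evaluated in these comparisons (Eqs. (8)-(11) of the Methodology) can be sketched as below. The single-linear choices for F(·) and H(·), the elementwise product, and the classifier width are assumptions made for illustration rather than the authors' exact configuration.

import torch
import torch.nn as nn

class AffineFusionHead(nn.Module):
    """Sketch of the multi-modal affine fuser and rumor detector:
    R_c = F(R_v) * R_u + H(R_v) (Eq. 8), R_s = R_c concatenated with R_t (Eq. 9),
    followed by an MLP classifier (Eq. 10)."""
    def __init__(self, k=32):
        super().__init__()
        self.f = nn.Linear(k, k)   # F(.), fitted by a neural network
        self.h = nn.Linear(k, k)   # H(.), fitted by a neural network
        self.classifier = nn.Sequential(nn.Linear(2 * k, k), nn.ReLU(), nn.Linear(k, 2))

    def forward(self, r_u, r_v, r_t):
        r_c = self.f(r_v) * r_u + self.h(r_v)   # affine fusion of visual and text+entity features
        r_s = torch.cat([r_c, r_t], dim=-1)     # concatenation with the textual representation
        return self.classifier(r_s)             # logits; trained with the cross-entropy of Eq. (11)

# e.g. loss = nn.CrossEntropyLoss()(head(r_u, r_v, r_t), labels)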
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Component Analysis</ns0:head><ns0:p>To further analyze the performance of each part of the proposed model and to better describe the necessity of adding entity recognition and affine model, we carried out corresponding ablation experiments. We designed several comparison baselines, including simplified single-modal and multi-modal variants that removed some original models' components. The Weibo dataset contains a greater variety of events without strong specificity, better reflecting the rumors in the real world. Therefore, we ran the newly designed simplified variants on the Weibo dataset. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science from 73.9% to 77.2%. 1.9% also improved the accuracy of image text fusion due to the introduction of entity branches. It was found that entity branches could supplement semantic representation, proving our idea effective. According to the experimental results, if we remove affine fusion, the accuracy of MAFN will decrease by 1.3%, and F1 will also decline by 2.4%. If images and texts are only connected without adding fusion and supplement, the accuracy will be lower. This proves the effectiveness of MAFN in rumor detection. MAFN can achieve more reliable multi-modal representation.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Three forms of rumors on Weibo and Twitter datasets</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Rumors on social media have gradually transformed from text-based to multi-modal rumors that combine both texts and images. Data in different modalities can complement each other. An increasing number of researchers have tried to integrate visual information into rumor detection. Isha et al.<ns0:ref type='bibr' target='#b21'>Singh et al. (2021)</ns0:ref> manually designed textual, and image features in four dimensions, i.e., content, organization, emotions, and manipulation, and eventually fused multiple features to detect rumors.Jin et al. Jin et al. </ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b17'>Ma et al. Ma et al. (2016)</ns0:ref> introduced recurrent neural networks (RNN) to learn hidden representations from the texts of related posts and used LSTM, GRU, and 2-layer GRU to model text sequences, respectively. It was the first attempt to introduce a deep neural network into post-based rumor detection and achieve considerable performance on real datasets, verifying the effectiveness of deep learning-based rumor detection. Yu et al. Yu et al. (2017) used a convolutional neural network (CNN) to obtain critical features and their advanced interactions from the text content of related posts. Nonetheless, CNN is unable to capture long-distance features. Hence, Chen et al. Chen et al. (2019) applied an attention mechanism to the detection of network rumor and proposed a neural network model with deep attention.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>denotes the corresponding rumor and non-rumor tags of tweets. This paper aims to learn a multi-modal fusion classification model F by using the total tweets S and the corresponding tag sets L. Fcan predict rumors on unmarked social media.Figure 2 shows the framework of the proposed model. 
The entity recognition enhanced textual feature extractor and obtained the joint representation R u of text using Bert pre-training and self-attention mechanism. The visual feature extractor used the pretrained model VGG19 to capture visual semantic feature R v . The multi-modal affine fuser fused the joint representation and visual representation to obtain R s , and the rumor classifier was utilized in the end to detect rumors. 3/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The model diagram of the proposed multimodal network MAFN.The yellow part represents the visual feature extractor, the blue part denotes the entity recognition enhanced textual feature extractor, the pink part stands for the multi-modal affine fuser, and the green part refers to the rumor detector.</ns0:figDesc><ns0:graphic coords='5,162.41,63.78,372.23,254.47' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>[</ns0:head><ns0:label /><ns0:figDesc>W 0,W 1,W 2, . . . ,W p] of the tweet, and then obtained the tag L &#8712; {L 1 , L 2 , . . . , L n } corresponding to this entity, where L i is one of the tags {PERSON, LANGUAGE, . . ., LOC}. For instance, to instantiate 5/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022) Manuscript to be reviewed Computer Science a piece of text, we instantiated the entities in the text, as shown in Figure ??EQ4. The extracted entity L European = {NORP}, NORP means nationalities or religions or political groups; L Google = {ORG}, OPG represents companies, agencies, institutions etc.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of entity refining process. Based on the obtained L i , the corresponding entity tags were connected in series to capture semantic features by Bert. E f &#8712; R k , where R k denotes the embedding dimension of tags. By inputting E f into the residual attention mechanism, we gained Re &#8712; R k . In the end, we combined the extracted text representation with the entity representation to obtain the joint representation Ru, Ru &#8712; R k , which was defined as follows: Ru = add Re, Rt (5) Visual Feature Extractor Images in tweets form the input into the visual feature extractor. This proposed framework used the pre-trained model VGG-19 and added two fully connected layers in the last layer to more comprehensively extract the visual features matched with the rumors in the tweet. According to the parameters unchanged after pre-training, VGG-19 adjusted the representation dimension of final visual features to k through two fully connected layers. We added the batch normalization layer and drop-out layer between the two fully connected layers and the activation function to prevent overfitting during the extraction of image representation. The eventually obtained feature of visual representation was expressed as Rv, where</ns0:figDesc><ns0:graphic coords='7,162.41,110.30,372.22,88.49' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>The entire training epochs was 50, and the batch size was 32. Adam served as the model optimizer during the training of the model. The initial learning rate was 0.001, and then lr varied with epoch based on the following equation: lr = 0.001/(1. 
+ 10 * p) * * 0.75 (13)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>'w/o -affine fusion' means removing affine fusion but retaining texts for entity recognition. Images and entity recognition were directly connected in series with the joint representation of texts. 'w/o entity+ affine fusion' removed both entity and affine modules. 'Text-only' refers to the single-text experiment.After pre-training the text using Bert, we connected the texts to the two fully connected layers and then accessed the residual self-attention to detect rumors directly. We conducted it for comparison. 'Entity-Link-Only' results from rumor text detection carried out by only model branch entities. 'w/o image' refers to the experiment without images, but only the combination of texts and entities. Furthermore,Table 2 indicates the performance of the simplified variant of MAFN. The experimental results show the necessity for the model to use affine fusion and enhance entity recognition. With entity-link added, the accuracy of single-modal text classification was increased from 77.4% to 79.9%, and F1 increased 9/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>The single textual model outperformed the single visual model on the Twitter dataset. Although the image features learned by visual features with the help of VGG-19 had better performance in rumor detection, the extraction of textural features was improved by Bert pre-training and residual attention. However, the single-modal model performed much. Among currently available multi-modal models, att-RNN uses LSTM and attention mechanism to process text representation, but it is not as good as EANN, which shows that EANN's event discriminator can better improve the model when it comes to rumor detection. The variational autoencoder proposed by</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MVAE can better discover multi-modal correlation, and it outperforms EANN. MAFN outperformed all</ns0:cell></ns0:row><ns0:row><ns0:cell>baselines in terms of accuracy, precision, and F1, with high accuracy increasing from 82.7% to 84.2% and</ns0:cell></ns0:row><ns0:row><ns0:cell>the F1 score going up from 82.9% to 84.0%. This verifies the effectiveness of MAFN in rumor detection.</ns0:cell></ns0:row><ns0:row><ns0:cell>A similar trend was found on the Weibo dataset. The textual model is superior to the visual model</ns0:cell></ns0:row><ns0:row><ns0:cell>among the single-modal models. The accuracy of single text reaches 77.4%, which verifies the effective-</ns0:cell></ns0:row></ns0:table><ns0:note>ness of Bert pre-training and residual self-attention mechanism in improving semantic representation.Among the multi-modal methods, att-RNN, EANN, and MSRD proposed for this task outperform Neu-ralTalk and VQA, proving the necessity of improving modal fusion. The proposed MAFN achieved the best performance among other state-of-the-art models, with accuracy increasing from 74.5 % to 77.1% and the F1 score rising from 75.8% to 78.7%. This implies that the proposed model can better extract the multi-modal joint representation of images and texts.8/12PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of performances of MAFN and other methods on Twitter and Weibo datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Twitter</ns0:cell><ns0:cell>Textual</ns0:cell><ns0:cell>0.551</ns0:cell><ns0:cell>0.680</ns0:cell><ns0:cell>0.605</ns0:cell><ns0:cell>0.520</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Visual</ns0:cell><ns0:cell>0.512</ns0:cell><ns0:cell>0.655</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>0.505</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NeuralTalk</ns0:cell><ns0:cell>0.610</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell>0.504</ns0:cell><ns0:cell>0.595</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VQA</ns0:cell><ns0:cell>0.631</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell>0.509</ns0:cell><ns0:cell>0.611</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>att-RNN</ns0:cell><ns0:cell>0.664</ns0:cell><ns0:cell>0.749</ns0:cell><ns0:cell>0.615</ns0:cell><ns0:cell>0.676</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MSRD</ns0:cell><ns0:cell>0.685</ns0:cell><ns0:cell>0.725</ns0:cell><ns0:cell>0.636</ns0:cell><ns0:cell>0.678</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EANN</ns0:cell><ns0:cell>0.715</ns0:cell><ns0:cell>0.822</ns0:cell><ns0:cell>0.638</ns0:cell><ns0:cell>0.719</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MVAE</ns0:cell><ns0:cell>0.745</ns0:cell><ns0:cell>0.801</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.758</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MAFN</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell>0.790</ns0:cell><ns0:cell>0.782</ns0:cell><ns0:cell>0.787</ns0:cell></ns0:row><ns0:row><ns0:cell>Weibo</ns0:cell><ns0:cell>Textual</ns0:cell><ns0:cell>0.774</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.739</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Visual</ns0:cell><ns0:cell>0.633</ns0:cell><ns0:cell>0.523</ns0:cell><ns0:cell>0.637</ns0:cell><ns0:cell>0.575</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NeuralTalk</ns0:cell><ns0:cell>0.717</ns0:cell><ns0:cell>0.683</ns0:cell><ns0:cell>0.843</ns0:cell><ns0:cell>0.754</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VQA</ns0:cell><ns0:cell>0.773</ns0:cell><ns0:cell>0.780</ns0:cell><ns0:cell>0.782</ns0:cell><ns0:cell>0.781</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>att-RNN</ns0:cell><ns0:cell>0.779</ns0:cell><ns0:cell>0.778</ns0:cell><ns0:cell>0.799</ns0:cell><ns0:cell>0.789</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MSRD</ns0:cell><ns0:cell>0.794</ns0:cell><ns0:cell>0.854</ns0:cell><ns0:cell>0.716</ns0:cell><ns0:cell>0.779</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MVAE</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.854</ns0:cell><ns0:cell>0.769</ns0:cell><ns0:cell>0.809</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EANN</ns0:cell><ns0:cell>0.827</ns0:cell><ns0:cell>0.847</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.829</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MAFN</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.861</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.840</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Variants 
of the proposed MAFN's performance on Weibo datasets.As shown in Table2, 'w/o -entity' denotes the proposed MAFN without entity recognition module;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>MAFN</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.861</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.840</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o entity</ns0:cell><ns0:cell>0.836</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.826</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o affine fusion</ns0:cell><ns0:cell>0.829</ns0:cell><ns0:cell>0.800</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.816</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o entity+ affine fusion</ns0:cell><ns0:cell>0.819</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.797</ns0:cell></ns0:row><ns0:row><ns0:cell>Text-only</ns0:cell><ns0:cell>0.774</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.739</ns0:cell></ns0:row><ns0:row><ns0:cell>Entity-Link-Only</ns0:cell><ns0:cell>0.549</ns0:cell><ns0:cell>0.429</ns0:cell><ns0:cell>0.529</ns0:cell><ns0:cell>0.474</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o image</ns0:cell><ns0:cell>0.799</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.834</ns0:cell><ns0:cell>0.772</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='10'>/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)</ns0:note> <ns0:note place='foot' n='12'>/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:1:0:NEW 4 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Response Sheet Multi-modal affine fusion network for social media rumor detection Editor Comments Authors are asked to pay more attention in results section, and address each of the reviewers' comments. Response: Thank you for allowing us resubmission, as per your suggestion we have updated the experimental section please see the revised version as track changes. Also, we covered all the reviewers’ comments. Reviewer 1 - In this study, the authors proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method, and it cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection. Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection. 2. Strength - this paper firstly introduced the four modules of the proposed MAFN model, i.e., the entity recognition enhanced textual feature extractor, the visual feature extractor, the multi-modal affine fuser, and the rumor classifier. Secondly, this paper describes how to integrate the four modules to represent and detect rumors. Response: Thank you for your compliment, this can be interpreted as in the revised manuscript: “This paper introduces a new end-to-end framework to solve the above problems. This framework is known as the multi-modal affine fusion network (MAFN). In the proposed model, employing affine fusion, we fused the features of images and texts to reduce the semantic gap and better capture the semantic correlation between images and texts. Entity recognition was introduced to improve the semantic understanding of texts and enhance the ability of rumor detection models”. 3. Weakness - Lack of novelty of research. textual feature extractor -based problem solving is a very common approach in the recent deep learning field, and post-processin g is also difficult to consider as a new algorithm. Response: Thank you for your concern, nowadays, online rumors are no longer in the single form of texts. Instead, they are often in multiple modalities that combine images and texts. In current studies, the features of images and texts are mostly fused through feature concentration and averaging results. Nevertheless, the single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random. Thus, considering above we use preprocessing instead of post processing. - The part about the learning scenario is confusing. A more understandable explanation is needed for training and testing. 
Response: Thank you for your concern, as per your suggestion we have update it in more clear way. Images in tweets form the input into the visual feature extractor. This proposed framework used the pre-trained model VGG-19 and added two fully connected layers in the last layer to more comprehensively extract the visual features matched with the rumors in the tweet. According to the parameters unchanged after pre-training, VGG-19 adjusted the representation dimension of final visual features to k through two fully connected layers. We added the batch normalization layer and drop-out layer between the two fully connected layers and the activation function to prevent overfitting during the extraction of image representation. - The entity resolution feature is quite confusing, and difficult to understand. Response: Thank you for your concern, as per your suggestion we have update it in more clear way. Named entity recognition identifies person names, place names, and organization names in a corpus. It was assumed that the combination of entity tagging and text coding in a post could supplement the semantic representation of the short text of the post in a certain way so that the model could identify rumors and non-rumors more accurately. Explosion AI developed spacy, a team of computer scientists and computational linguists in Berlin, and its named entity recognition model was pre-trained on OntoNotes 5, a sizeable authoritative corpus. In this paper, Spacy was applied to train the two datasets and extract the entities of posts. There were 18 kinds of identifiable entities. 4. Minor comments - There is an error in the reference. I haven't looked at all of them in detail. Response: Thank you for your concern, as per your suggestion, we have updated the reference list, please see the revised version of manuscript. - The manuscript is not well organized. The introduction section must introduce the status and motivation of this work and summarize with a paragraph about this paper. Response: Thank you for your concern; as per your suggestion, we have modified the organization; in the previous version, the intro was divided into two sub-sections, which was confusing, and we changed into single text also, we this categorized the related work and made them more systematic. Please see the revised version of the manuscript. - What are the limitations of the related works Response: Thank you for your concern, Nowadays, online rumors are no longer in the single form of texts. Instead, they are often in multiple modalities that combine images and texts. In current studies, the features of images and texts are mostly fused through feature concentration and averaging results. Nevertheless, this single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random. Thus, considering above we use preprocessing instead of post processing. -Are there any limitations of this carried out study? Response: Thank you for your concern, the limitation of this work can be a facing more symmetrical meaning-based text and useful organs which was not considered. This help to open the new opportunities. -How to select and optimize the user-defined parameters in the proposed model? 
Response: Thank you for your concern, the said detail is given in experimental setup section, and we mentioned the number of parameters, types of parameters and epochs as well. Please see the modified version of manuscript. Reviewer 2 Grammatical mistakes 1. However, without supervision, the authenticity of published information cannot be detected--> should be ... 'However, the authenticity of published information cannot be detected without supervision'. Response: Thank you for highlighting the errors, as per your suggestion we have proofread the entire manuscript and corrected the errors, please see the revised version of manuscript. 2. With the rapid development of the Internet, people obtain abundant information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in real information to be widely spread, which misleads readers and exerts negative effects on society --> should be---> 'With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in real information to be widely spread, which misleads readers and exerts adverse effects on society.' Response: Thank you for highlighting the errors, as per your suggestion we have proofread the entire manuscript and corrected the errors, please see the revised version of manuscript. 3. Established based on the multi-modal affine fuser, the rumor detector sent the finally obtained multi-modal feature--> 'Based on the multi-modal affine fuser, the rumor detector sent the finally obtained multi-modal feature.' Response: Thank you for highlighting the errors, as per your suggestion we have proofread the entire manuscript and corrected the errors, please see the revised version of manuscript. Minor Changes: 1. The author should provide only relevant information related to this paper and reserve more space for the proposed framework. Response: Thank you for your concern; as per your suggestion, we have modified the organization; in the previous version, the intro was divided into two sub-sections, which was confusing, and we changed into single text also, we this categorized the related work and made them more systematic. Please see the revised version of the manuscript. 2. The theoretical perceptive of all models the used for comparison must be included in the literature. Response: Thank you for your concern; as per your suggestion we have added more theoretical perspective in the related work as: the single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random. Thus, considering above we use preprocessing instead of post processing. Please see the updated version of manuscript. 3. What are the real-life use cases of the proposed model? The authors can add a theoretical discussion on the real-life usage of the proposed model. 
Response: Thank you for your concern, there are several real life uses for example in intro we mentioned “During the tenure of Barack Obama as the US President, a tweet from the “so-called” Associated Press said, “Two explosions occurred in the White House, and US President Barack Obama was injured.” Three minutes after the tweet was sent, the US stock index plunged like a “roller coaster,” and the market value of the US stock market evaporated by 200 billion US dollars within a short period, which tremendously affected both the stock and bond futures. Soon after, the Associated Press issued a statement saying that its Twitter account had been hacked, and that tweet proved to be false news. Therefore, it is of great necessity to automatically detect social media rumors in the early stage, and this technology will be extensively applied with the rapid development of social networks.” 4. However, the author should compare the proposed algorithm with other recent works or provide a discussion. Otherwise, it's hard for the reader to identify the novelty and contribution of this work. Response: Thank you for your concern, to verify the performance of the proposed multi-modal rumor detection framework based on knowledge attention fusion, we compared it with the single-modal methods, i.e., Textual and Visual, and five new multi-modal models. Textual and Visual were the subnetworks of the MAFN. Please see the Table 1 for the state-of-the-art comparison. 5. The descriptions given in this proposed scheme are not sufficient that this manuscript only adopted a variety of existing methods to complete the experiment where there are no strong hypotheses and methodical theoretical arguments. Therefore, the reviewer considers that this paper needs more works. Response: Thank you for your concern, we proposed the novel model, which is not existing in the current applications, The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection. Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection. Reviewer 3 The abstract should be reformulated. The abstract is an extremely important and powerful representation of the article. The authors have to clarify what is the novelty of this paper in abstract. Response: Thank you for your concern, as per your suggestion we have modified the abstract as: “With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in factual information to be widely spread, which misleads readers and exerts adverse effects on society. Automatically detecting social media rumors has become a challenge faced by contemporary society. 
To overcome this challenge, we proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection. Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection” • Reduce challenges list as much as you can. Response: Thank you for your concern, sorry we did not get these points which sort of challenges? Are you talking about errors list? • Provide the related works clearly highlight the main gap. Response: Thank you for your concern, as per your suggestion, we updated the related work and dis classified the modality as the detection can be based on single or multi models. • Authors have proposed three algorithms, but I do not understand which one is used in the comparison with state-of-the-art works. Response: Thank you for your concern, the original proposal is based on MAFN, however, to ensure the integrity we proposed several variants of MAFN which were validated on the Weibo data set. Please see Table 2. • Figures 5abcd, have same label which quite confusing what is the point from each figure. Response: Thank you for your concern, actually examples of successfully detecting rumors on Twitter by MAFN. Where A is Right now, in Dresden. Over 30,000 at Pegida Anti-Immigrant and B When the bomb exploded, the man on the roof was the one who caused the panic • Figures looks very fuzzy, and resolution of image is poor. Response: Thank you for your concern, as per your suggestion we have revised the quality of figures. Please see the revised version of figures. • Replace Case Study Performance Visualization with the discussion section as it is very poor and more deep discussion is needed for findings of the study. Response: Thank you for your concern, actually it is sub part of discussion as “a qualitative analysis was performed on MAFN. After analyzing and ranking the examples of rumors successfully classified by MAFN, we selected the best two examples on Twitter and Weibo and showed them in Figure 5 and Figure 6, respectively. Without the support of affine fusion and entity recognition, the examples in Twitter could not be detected. Since the model failed to effectively capture the relationship between texts and images, these examples were misjudged as non-rumors. Insufficient text information and the absence of close connections between information and images are the reasons why the examples in Weibo could not be detected using “w/o entity+ affine fusion.” However, we can identify rumors with affine fusion by judging the image features.” • Conclusion needs to be improved. The most important obtained results should be briefly and clearly mentioned through the support of numerical data in the conclusion. 
Response: Thank you for your concern, as per your suggestion we have modified the conclusion as: “This paper proposed an affine fusion network combined with entity recognition. This network accurately identifies rumors using the affine fusion between the entity recognition joint representation of images and texts. When extracting text representation, we used Bert to generate sentence vector features and learn semantics by extracting knowledge from the outside through entity recognition. Moreover, affine fusion was used for multi-modal fusion to better summarize the invariant features of new events. The Twitter and Weibo datasets experiments show that the proposed model is robust and performs better than the most advanced baselines. In the future, we plan to capture and identify rumor propagation in the field of rumor text and short videos to strengthen the generalization ability of the multi-modal fusion model.” • The details in this manuscript are vague, especially in the depth feature extraction, which is the major defect of this paper. In addition, there is a gap between the experimental condition and the real scene. So this method can not be applied to the real scene effectively. Response: Thank you for your concern, as per your suggestion we have update it in more clear way. Images in tweets form the input into the visual feature extractor. This proposed framework used the pre-trained model VGG-19 and added two fully connected layers in the last layer to more comprehensively extract the visual features matched with the rumors in the tweet. According to the parameters unchanged after pre-training, VGG-19 adjusted the representation dimension of final visual features to k through two fully connected layers. We added the batch normalization layer and drop-out layer between the two fully connected layers and the activation function to prevent overfitting during the extraction of image representation • There are many words and figures about the background/architectures of the proposed networks used in this paper that can be omitted. These proposed networks are in the field for a while and they are known by most likely every researcher in the field. Response: as per your suggestion we have added more theoretical perspective in the related work as: the single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random. Thus, considering above we use preprocessing instead of post processing. Please see the updated version of manuscript. • There is a lack of comparison with other studies in the discussion. I do know that from the “related work” introduced in this paper that most previous study provides a very high accuracy/statistic in a much smaller dataset. The quantitative results are lower if just compare to the numeric values. However, the model in this study could be more robust than other previously published models applying your dataset using other models. Response: Thank you for your concern, to verify the performance of the proposed multi-modal rumor detection framework based on knowledge attention fusion, we compared it with the single-modal methods, i.e., Textual and Visual, and five new multi-modal models. 
Textual and Visual were the subnetworks of the MAFN. Please see Table 1 for the state-of-the-art comparison. "
Here is a paper. Please give your review comments after reading it.
383
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in factual information to be widely spread, which misleads readers and exerts adverse effects on society. Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection. Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>With the rapid development of the Internet, people obtain much information from social media such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with corresponding images are mixed in factual information to be widely spread, which misleads readers and exerts adverse effects on society.</ns0:p><ns0:p>Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we proposed the multimodal affine fusion network (MAFN) combined with entity recognition, a new end-to-end framework that fuses multimodal features to detect rumors effectively. The MAFN mainly consists of four parts: the entity recognition enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity recognition enhanced textual feature extractor is responsible for extracting textual features that enhance semantics with entity recognition from posts. The visual feature extractor extracts visual features. The multimodal affine fuser extracts the three types of modal features and fuses them by the affine method. It cooperates with the rumor detector to learn the representations for rumor detection to produce reliable fusion detection.</ns0:p><ns0:p>Extensive experiments were conducted on the MAFN based on real Weibo and Twitter multimodal datasets, which verified the effectiveness of the proposed multimodal fusion neural network in rumor detection.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>As Internet technology gradually matures, online social networking (OSN) has become the spiritual ecology. Since OSN information is open and easily accessible, social networking software such as Weibo, Twitter, and Facebook have become the primary sources for millions of global users to receive news and information. 
They serve as essential approaches for Internet users to express their opinions. However, the authenticity of published information cannot be detected without supervision. Such social networking software has become the source of public opinion in hot events and news media.</ns0:p><ns0:p>For example, during the tenure of Barack Obama as the US President, a tweet from the 'so-called' Associated Press said, 'Two explosions occurred in the White House, and US President Barack Obama was injured.' Three minutes after the tweet was sent, the US stock index plunged like a 'roller coaster,' and the market value of the US stock market evaporated by 200 billion US dollars within a short period, which tremendously affected both the stock and bond futures. Soon after, the Associated Press issued a statement saying that its Twitter account had been hacked, and that tweet proved to be false news.</ns0:p><ns0:p>Therefore, it is of great necessity to automatically detect social media rumors in the early stage, and this technology will be extensively applied with the rapid development of social networks.</ns0:p><ns0:p>Nowadays, online rumors are no longer in the single form of texts. Instead, they are often in multiple modalities that combine images and texts. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the cases of rumors in the Twitter dataset, displaying the texts and images of each tweet. In Figure <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>, the news is fake based on the images and texts; it is hard to identify whether the news in Figure <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref> is true or not, but the images are fake; we cannot determine the authenticity of the news in Figure <ns0:ref type='figure' target='#fig_0'>1C</ns0:ref> based on the images, but we can confirm that the information is false according to the texts. In current studies, the features of images and texts are mostly fused through feature concentration and averaging results. Nevertheless, this single fusion method fails to represent the posts fully. Firstly, it cannot solve the problem caused by the difference in semantic correlation between texts and images in rumors and non-rumors; secondly, the semantic gap cannot be overcome. Moreover, unlike paragraphs or documents, the texts in posts that are usually short fail to provide enough context information, making our classification fuzzier and more random.</ns0:p><ns0:p>This paper introduces a new end-to-end framework to solve the above problems. This framework is known as the multi-modal affine fusion network (MAFN). In the proposed model, employing affine fusion, we fused the features of images and texts to reduce the semantic gap and better capture the semantic correlation between images and texts. Entity recognition was introduced to improve the semantic understanding of texts and enhance the ability of rumor detection models. MAFN can gain multi-modal knowledge representation by processing posts on social media to detect rumors effectively. 
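To make the affine fusion idea above concrete before the formal description in the Methodology, the fragment below sketches in PyTorch how visual features could modulate the joint text/entity representation and feed a binary rumor classifier. It is an illustrative sketch, not the authors' released implementation: the representation size k = 32, the ReLU activations, and the choice of small two-layer MLPs for the mappings F and H are our assumptions.

import torch
import torch.nn as nn

class AffineFusion(nn.Module):
    """Sketch of R_c = F(R_v) * R_u + H(R_v) followed by R_s = concat(R_c, R_t)."""
    def __init__(self, k=32):
        super().__init__()
        # F(.) and H(.) are fitted by the network; small two-layer MLPs are assumed here.
        self.scale = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, k))
        self.shift = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, k))
        self.detector = nn.Sequential(nn.Linear(2 * k, k), nn.ReLU(), nn.Linear(k, 2))

    def forward(self, r_v, r_u, r_t):
        r_c = self.scale(r_v) * r_u + self.shift(r_v)  # visual features modulate the joint text/entity representation
        r_s = torch.cat([r_c, r_t], dim=-1)            # reconnect the fused feature with the textual feature
        return self.detector(r_s)                      # rumor / non-rumor logits

# Toy usage: 32-dimensional branch outputs for a batch of 8 posts.
fuser = AffineFusion(k=32)
logits = fuser(torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 32))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))

The point of the affine form, as opposed to plain concatenation, is that the visual branch rescales and shifts the joint representation element-wise, which is what the later ablation study credits for part of the accuracy gain.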
This paper makes the following three contributions:</ns0:p><ns0:p>&#8226; We proposed the multi-modal affine fusion network (MAFN) combined with entity recognition for the first time better to capture the semantic correlation between images and texts.</ns0:p><ns0:p>&#8226; The proposed MAFN model enriched the semantic information of text with entity recognition, and entity recognition was fused with the extracted textual features to improve the semantic comprehension of text.</ns0:p><ns0:p>&#8226; Experiments show that the MAFN model proposed in this paper can effectively identify rumors on Weibo and Twitter datasets and is superior to currently available multi-modal rumor detection models.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In early research on rumor detection, <ns0:ref type='bibr' target='#b3'>Castillo et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b15'>Kwon et al. (2013)</ns0:ref>, the rumor detection model was mainly established based on the differences between the features of rumors and factual information. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to identify rumors on Sina Weibo. However, it is time and energy-consuming to design these features manually, and the language patterns are highly dependent on specific time and knowledge in corresponding fields. Therefore, these features cannot be correctly understood. (2017) detected rumors by fusing the image and textural features of posts using the RNN combined with the attention mechanism. However, multi-modal features still depend highly on specific events in the dataset, which will weaken the model's generalization ability. Therefore, Wang et al. <ns0:ref type='bibr' target='#b24'>Wang et al. (2018)</ns0:ref> put forward the EANN model that connected the visual features and textual features of posts in series and applied the event discriminator to remove specific features of events and learn the shared features of rumor events. Experiments show that this method can detect many events that are difficult to distinguish in a single modality. This model extracts adequate information and essential features from highly repeated texts, which solves the problems of excessive redundancy of texts in the data to be tested and weak information links between remote sites.</ns0:p><ns0:p>According to Dhruv K et al., <ns0:ref type='bibr' target='#b14'>Khattar et al. (2019)</ns0:ref>, a single fusion method cannot effectively represent the posts. So, they used the encoder and decoder to extract the features of images and texts and learned across modalities with the help of Gaussian distribution. Liu et al. <ns0:ref type='bibr' target='#b13'>Jinshuo et al. (2020)</ns0:ref> put the text vector, the text vector in the image, and the image vector together, and then processed them using Gaussian distribution to get a new fusion vector to discover the association between the two modalities of hidden representation. Besides learning the text representation of posts, Zhang et al. <ns0:ref type='bibr' target='#b27'>Zhang et al. 
(2019)</ns0:ref> retrieved external knowledge to supplement the semantic representation of short posts and used conceptual knowledge as additional evidence to improve the performance of the rumor detection model.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This paper introduced the four modules of the proposed MAFN model in this Section, i.e., the entity recognition enhanced textual feature extractor, the visual feature extractor, the multi-modal affine fuser, and the rumor classifier. Furthermore, we described the integration of the proposed modules to represent and detect rumors.</ns0:p><ns0:p>We instantiated tweets on Weibo and Twitter. The total tweets were expressed as S = {t 1 ,t 2 , . . . ,t n }, and each tweet was expressed as t = {T,E,V}, where T denotes the text content of the tweets, E represents the entity content extracted from the tweets, and V stands for the visual content matched with the tweets. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>L = {L 1 , L 2 , . . . , L m }</ns0:formula><ns0:p>Computer Science Bert is a natural language processing model with the transformer bidirectional encoder representation as to the core, which can better extract the text context representation bidirectionally. By inputting the sequential vocabulary of the words in the tweets, the words were first embedded into the vector. The dimension of the ith word in the sentence is denoted by m, which is expressed as W i &#8712; R m , and by inputting it into the sentence, S, it can be expressed as:</ns0:p><ns0:formula xml:id='formula_1'>S = [W 0,W 1,W 2, . . . ,W p] (1)</ns0:formula><ns0:p>Where, S &#8712; R m * p , p denotes the total number of words, W 0 denotes [CLS], and W p represents <ns0:ref type='bibr'>[SEP]</ns0:ref>. By inputting the complete texts of tweets into the Bert model, we obtained the feature vector of the given sentence as</ns0:p><ns0:formula xml:id='formula_2'>S f = [W f 0,W f 1,W f 2, . . . ,W f p]</ns0:formula><ns0:p>Then the sentence feature vectors S f n were given to the two fully connected layers. The above steps 138 can be defined as follows:</ns0:p><ns0:formula xml:id='formula_3'>139 Rt &#8242; = &#963; (W f t2&#8226;&#963; (W f t1 &#8226;S f + bt1) + bt2) (2)</ns0:formula><ns0:p>Where W f t1 denotes the weight matrix of the first fully connected layer with activation function,</ns0:p></ns0:div> <ns0:div><ns0:head>140</ns0:head><ns0:p>W f t2 represents the weight matrix of the second fully connected layer with activation function, and bt1 141 and bt2 are the bias terms.</ns0:p></ns0:div> <ns0:div><ns0:head>142</ns0:head><ns0:p>The attention-based neural network can better obtain relatively long dependencies in sentences. The self-attention mechanism is a kind of attention mechanism that associates different positions of a single sequence to calculate the representation of the same sequence. To enable the model to learn the correlation 4/12 between the current word and the other parts of the sentence, we added the self-attention mechanism after the fully connected layer, the process of which was expressed as follows:</ns0:p><ns0:formula xml:id='formula_4'>Attsel f = so f tmax[QT &#8226; KT &#8868; / &#8730; m]&#8226;V T (3)</ns0:formula><ns0:p>Where, QT = Rt &#8242; &#215; WQT , KT =Rt &#8242; &#215; WKT , V T = Rt &#8242; &#215; WV T .WQT , WKT , WV T denote the three matrices learned by Q, K, and V, respectively. 
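As a concrete reading of Eqs. (2)-(4), the following sketch projects token-level BERT features through two fully connected layers and applies single-head scaled dot-product self-attention with a residual add. It assumes the BERT token features S_f (e.g., from the Hugging Face transformers library) are computed upstream, takes ReLU for the unspecified activation, scales by the square root of the projected dimension, and mean-pools the attended tokens into R_t; these choices are ours, not details confirmed by the paper.

import torch
import torch.nn as nn

class TextBranch(nn.Module):
    """Two FC layers over BERT token features (Eq. 2), self-attention (Eq. 3), residual add (Eq. 4)."""
    def __init__(self, bert_dim=768, k=32):
        super().__init__()
        self.fc1 = nn.Linear(bert_dim, k)      # W_ft1, b_t1
        self.fc2 = nn.Linear(k, k)             # W_ft2, b_t2
        self.wq = nn.Linear(k, k, bias=False)  # W_QT
        self.wk = nn.Linear(k, k, bias=False)  # W_KT
        self.wv = nn.Linear(k, k, bias=False)  # W_VT

    def forward(self, s_f):                    # s_f: (batch, seq_len, bert_dim)
        rt_prime = torch.relu(self.fc2(torch.relu(self.fc1(s_f))))  # Eq. (2), activation assumed ReLU
        q, k_, v = self.wq(rt_prime), self.wk(rt_prime), self.wv(rt_prime)
        scores = q @ k_.transpose(1, 2) / (q.size(-1) ** 0.5)       # Q K^T scaled as in Eq. (3)
        att = torch.softmax(scores, dim=-1) @ v
        rt = (att + rt_prime).mean(dim=1)      # residual connection (Eq. 4), then mean pooling (assumed)
        return rt                              # text representation R_t in R^k

# Toy usage: a batch of 2 posts, 16 BERT token vectors each.
branch = TextBranch()
r_t = branch(torch.randn(2, 16, 768))          # -> shape (2, 32)

The residual path lets unimportant positions fall back to their projected features while informative positions are re-weighted by attention, which is the motivation for the residual self-attention block shown in Figure 3.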
To make the model automatically recognize the importance of each word, degrade unimportant features to their original features, and process essential features using the self-attention mechanism, we used the residual connection to extract the features better.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the architecture of a residual self-attention. A building block was defined as:</ns0:p><ns0:formula xml:id='formula_5'>Rt = Attsel f + Rt &#8242; (4)</ns0:formula><ns0:p>Where, R t denotes the eventually extracted text representation, R t &#8712; R k .</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. The architecture of a residual self-attention.</ns0:p></ns0:div> <ns0:div><ns0:head>Extraction of Entity Representation</ns0:head><ns0:p>Named entity recognition identifies person names, place names, and organization names in a corpus.</ns0:p><ns0:p>It was assumed that the combination of entity tagging and text coding in a post could supplement the semantic representation of the short text of the post in a certain way so that the model could identify rumors and non-rumors more accurately. Explosion AI developed spacy, a team of computer scientists and computational linguists in Berlin, and its named entity recognition model was pre-trained on OntoNotes 5, a sizeable authoritative corpus. In this paper, Spacy was applied to train the two datasets and extract the entities of posts. There were 18 kinds of identifiable entities.</ns0:p><ns0:p>First of all, we identified the recognizable word W i as the entity e &#8712; E s in every sentence S = Rv &#8712; R k . The equation for extracting image features was defined as follows:</ns0:p><ns0:formula xml:id='formula_6'>Rv &#8242; = W f v2 &#8226;&#963; (BN(W f v1 &#8226;Rvgg + bv1)) + bv2 (6) Rv = Dropout(&#963; (BN(Rv &#8242; )))<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Where, Rvgg represents the visual features extracted from the network in the pre-trained model VGG19, &#963; is the activation function, W f v1 denotes the weight matrix of the first fully connected layer with the activation function, and bv1 and bv2 are the bias terms.</ns0:p></ns0:div> <ns0:div><ns0:head>Multi-modal Affine Fuser</ns0:head><ns0:p>Affine transformation transforms into another vector space via linear transformation and translation. Through affine transformation, the multi-modal affine fuser fuses the multi-modal features extracted by the entity recognition enhanced textual feature extractor and the visual feature extractor, the joint representation and visual features of text and entity. It was assumed that the data of the two modalities could be fused more closely and the high-level semantic correlation could be better extracted. The corresponding equation was defined as follows:</ns0:p><ns0:formula xml:id='formula_7'>R c = F R v &#8226;R u + H (R v ) (8)</ns0:formula><ns0:p>Where, Rc is the feature R c &#8712; R k gained after the fusion of all features, and F &#8226; and H &#8226; were fitted by the neural network. After extracting the fused features, in order to get more robust features, we reconnected the fused features with the textual features to obtain the total feature R s . The equation was expressed as:</ns0:p><ns0:formula xml:id='formula_8'>R s = R c &#8853; R t (9)</ns0:formula><ns0:p>Where, &#8853; denotes concatenation.</ns0:p></ns0:div> <ns0:div><ns0:head>6/12</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Rumor Detector</ns0:p><ns0:p>The rumor detector, based on the multi-modal affine fuser, sent the finally obtained multi-modal feature R s to the multilayer perceptron for classification to judge whether the message was a rumor or not. The rumor detector consists of multiple completely connected layers with softmax. The rumor detector was expressed as G(R i s , &#952; ), where &#952; represents all the parameters in the rumor detector, and R i s denotes the multi-modal representation of the case of the ith tweet. The rumor detector was defined as follows:</ns0:p><ns0:formula xml:id='formula_9'>p i = G(R i s , &#952; )<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Where p i denotes the probability that the ith post input by the detector is a rumor, in the process of model training, we selected the cross-entropy function as the loss function, which was expressed as follows:</ns0:p><ns0:formula xml:id='formula_10'>Loss = N &#8721; i=1 &#8722;[L i &#215; log (p i ) + (1 &#8722; L i ) &#215; log(1 &#8722; p i )]<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>Where, L i denotes the tag of the tweet in the i-th group, and N refers to the total number of training samples.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>This section first described the datasets used in the experiment, namely two social media datasets extracted from the real world. Secondly, we briefly compared the results obtained by the most advanced rumor detection method and those gained by the model proposed in this paper. Through the MAFN ablation experiment, we compared the performances of different models.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>To fairly evaluate the performance of the proposed model, we used two standard datasets extracted from the real world to assess the rumor detection framework of the MAFN. These two datasets were composed of rumors and non-rumors collected from Twitter and Weibo, which simulated the natural open environment to some extent. They are currently the only datasets with paired image and text information.</ns0:p></ns0:div> <ns0:div><ns0:head>Weibo Dataset</ns0:head><ns0:p>The Weibo dataset is a dataset proposed by <ns0:ref type='bibr' target='#b12'>Jin Jin et al. (2017)</ns0:ref> for rumor detection. It consists of the data collected by Xinhua News Agency, an authoritative news source in China, and the website of Sina Weibo and the data verified by the official rumor refuting system of Weibo. We preprocessed the dataset using a method similar to that put forward by Jin. First, locality sensitive hashing (LSH) was applied to filter out the same images and then delete irregular images such as very small or very long images to ensure that images in the dataset were of uniform quality. In the last step, the dataset was divided into the training and test sets. The ratio of tweets in training set to those in the test set was 8:2.</ns0:p></ns0:div> <ns0:div><ns0:head>Twitter Dataset</ns0:head><ns0:p>The Twitter dataset <ns0:ref type='bibr' target='#b2'>Boididou et al. (2015)</ns0:ref> was released to verify the task of social media rumor detection.</ns0:p><ns0:p>This dataset contains about 15,000 tweets focusing on 52 different events, and each tweet is composed of texts, images, and videos. 
The ratio of concentrated development set to test set in the dataset is 15:2, with the ratio of rumors to non-rumors being 3:2. Since this paper mainly studies the fusion of texts and images, we filtered out all tweets with videos. The ratio of development set and test set used to train the proposed model is the same as above.</ns0:p></ns0:div> <ns0:div><ns0:head>Experiment Setting</ns0:head><ns0:p>The feature dimension of the images processed by VGG19 was 1000; the image features were extracted and embedded by two linear layers to obtain the feature dimension. After applying Bert and the linear layer were processed, the texts and entities were turned into 32-dimensional vectors. </ns0:p></ns0:div> <ns0:div><ns0:head>Baselines</ns0:head><ns0:p>To verify the performance of the proposed multi-modal rumor detection framework based on knowledge attention fusion, we compared it with the single-modal methods, i.e., Textual and Visual, and five new multi-modal models. Textual and Visual were the subnetworks of the MAFN. The following are relatively new rumor detection methods for the comparative analysis:</ns0:p><ns0:p>&#8226; Neural Talk generates the words that describe images using the potential representations output by the RNN. Using the same structure, we applied the RNN to output the joint representation of images and texts in each step and then fed the representation into the fully connected layer for rumor detection and classification.</ns0:p><ns0:p>&#8226; <ns0:ref type='bibr'>EANN Wang et al. (2018)</ns0:ref>: extracted textual features using Text-CNN, processes image features with VGG19 and then splices the two types of features together. With the features of specific events removed by the event discriminator, the remaining features were input into the fake news detector for classification.</ns0:p><ns0:p>&#8226; MVAE <ns0:ref type='bibr' target='#b14'>Khattar et al. (2019)</ns0:ref>: used the structure of encoder-decoder to extract the image and textual features and conducted cross-modal learning with Gaussian distribution.</ns0:p><ns0:p>&#8226; att- <ns0:ref type='bibr'>RNN Jin et al. (2017)</ns0:ref> uses the RNN combined with the attention mechanism to fuse three modalities, i.e., image, textual, and user features. For a fair comparison, we removed the feature fusion in the user feature part of att-RNN, with the parameters of other parts being the same as those of the original model.</ns0:p><ns0:p>&#8226; <ns0:ref type='bibr'>MSRD Jinshuo et al. (2020)</ns0:ref> obtains a new fusion vector for classification by splicing textual features, textual features in images, and visual features extracted by VGG19 using Gaussian distribution.</ns0:p><ns0:p>&#8226; VQA is applied in the field of visual questioning and answering. Initially a multi-classification task, the image question-and-answer task was changed to a binary classification task. We used a single-layer LSTM with 32 hidden units to detect and classify rumors.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Comparison</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> shows the baseline results of single-modal and multi-modal models as well as the performances of the MAFN on two datasets in terms of the accuracy, precision, recall, and F1 of our rumor detection framework. MAFN performed better than the baseline models. 
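The training setup described in the Experiment Setting above (Adam, initial learning rate 0.001, 50 epochs, batch size 32, and the decay rule lr = 0.001/(1 + 10p)^0.75 of Eq. (13)) can be reproduced with a loop of the following shape. This is a minimal sketch: the stand-in linear model and random batch are placeholders for the full network and data loader, and interpreting p as the fraction of training completed is our assumption, since the paper only states that the rate varies with the epoch.

import torch

def decayed_lr(epoch, total_epochs=50, base_lr=0.001):
    # Eq. (13): lr = 0.001 / (1 + 10 * p) ** 0.75, with p assumed to be epoch / total_epochs.
    p = epoch / total_epochs
    return base_lr / (1.0 + 10 * p) ** 0.75

model = torch.nn.Linear(32, 2)                  # placeholder for the full rumor detection network
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(50):                         # 50 epochs, batch size 32, as reported
    for group in optimizer.param_groups:        # apply the schedule of Eq. (13)
        group["lr"] = decayed_lr(epoch)
    features = torch.randn(32, 32)              # placeholder batch of fused 32-dimensional features
    labels = torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()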
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Component Analysis</ns0:head><ns0:p>To further analyze the performance of each part of the proposed model and to better describe the necessity of adding entity recognition and affine model, we carried out corresponding ablation experiments. We designed several comparison baselines, including simplified single-modal and multi-modal variants that removed some original models' components. The Weibo dataset contains a greater variety of events without strong specificity, better reflecting the rumors in the real world. Therefore, we ran the newly designed simplified variants on the Weibo dataset. 'w/o -affine fusion' means removing affine fusion but retaining texts for entity recognition. Images and entity recognition were directly connected in series with the joint representation of texts. 'w/o entity+ affine fusion' removed both entity and affine modules. 'Text-only' refers to the single-text experiment.</ns0:p><ns0:p>After pre-training the text using Bert, we connected the texts to the two fully connected layers and then accessed the residual self-attention to detect rumors directly. We conducted it for comparison. 'Entity-Link-Only' results from rumor text detection carried out by only model branch entities. 'w/o image' refers to the experiment without images, but only the combination of texts and entities. Furthermore, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science from 73.9% to 77.2%. 1.9% also improved the accuracy of image text fusion due to the introduction of entity branches. It was found that entity branches could supplement semantic representation, proving our idea effective. According to the experimental results, if we remove affine fusion, the accuracy of MAFN will decrease by 1.3%, and F1 will also decline by 2.4%. If images and texts are only connected without adding fusion and supplement, the accuracy will be lower. This proves the effectiveness of MAFN in rumor detection. MAFN can achieve more reliable multi-modal representation.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Three forms of rumors on Weibo and Twitter datasets</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Rumors on social media have gradually transformed from text-based to multi-modal rumors that combine both texts and images. Data in different modalities can complement each other. An increasing number of researchers have tried to integrate visual information into rumor detection. Isha et al.<ns0:ref type='bibr' target='#b21'>Singh et al. (2021)</ns0:ref> manually designed textual, and image features in four dimensions, i.e., content, organization, emotions, and manipulation, and eventually fused multiple features to detect rumors.Jin et al. Jin et al. </ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b17'>Ma et al. Ma et al. (2016)</ns0:ref> introduced recurrent neural networks (RNN) to learn hidden representations from the texts of related posts and used LSTM, GRU, and 2-layer GRU to model text sequences, respectively. It was the first attempt to introduce a deep neural network into post-based rumor detection and achieve considerable performance on real datasets, verifying the effectiveness of deep learning-based rumor detection. Yu et al. Yu et al. 
(2017) used a convolutional neural network (CNN) to obtain critical features and their advanced interactions from the text content of related posts. Nonetheless, CNN is unable to capture long-distance features. Hence, Chen et al. Chen et al. (2019) applied an attention mechanism to the detection of network rumor and proposed a neural network model with deep attention.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>denotes the corresponding rumor and non-rumor tags of tweets. This paper aims to learn a multi-modal fusion classification model F by using the total tweets S and the corresponding tag sets L. Fcan predict rumors on unmarked social media.Figure 2 shows the framework of the proposed model. The entity recognition enhanced textual feature extractor and obtained the joint representation R u of text using Bert pre-training and self-attention mechanism. The visual feature extractor used the pretrained model VGG19 to capture visual semantic feature R v . The multi-modal affine fuser fused the joint representation and visual representation to obtain R s , and the rumor classifier was utilized in the end to detect rumors. 3/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The model diagram of the proposed multimodal network MAFN.The yellow part represents the visual feature extractor, the blue part denotes the entity recognition enhanced textual feature extractor, the pink part stands for the multi-modal affine fuser, and the green part refers to the rumor detector.</ns0:figDesc><ns0:graphic coords='5,162.41,63.78,372.23,254.47' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>[</ns0:head><ns0:label /><ns0:figDesc>W 0,W 1,W 2, . . . ,W p] of the tweet, and then obtained the tag L &#8712; {L 1 , L 2 , . . . , L n } corresponding to this entity, where L i is one of the tags {PERSON, LANGUAGE, . . ., LOC}. For instance, to instantiate 5/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022) Manuscript to be reviewed Computer Science a piece of text, we instantiated the entities in the text, as shown in Figure 4. The extracted entity L European = {NORP}, NORP means nationalities or religions or political groups; L Google = {ORG}, OPG represents companies, agencies, institutions etc.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of entity refining process. Based on the obtained L i , the corresponding entity tags were connected in series to capture semantic features by Bert. E f &#8712; R k , where R k denotes the embedding dimension of tags. By inputting E f into the residual attention mechanism, we gained Re &#8712; R k . In the end, we combined the extracted text representation with the entity representation to obtain the joint representation Ru, Ru &#8712; R k , which was defined as follows: Ru = add Re, Rt (5) Visual Feature Extractor Images in tweets form the input into the visual feature extractor. This proposed framework used the pre-trained model VGG-19 and added two fully connected layers in the last layer to more comprehensively extract the visual features matched with the rumors in the tweet. 
According to the parameters unchanged after pre-training, VGG-19 adjusted the representation dimension of final visual features to k through two fully connected layers. We added the batch normalization layer and drop-out layer between the two fully connected layers and the activation function to prevent overfitting during the extraction of image representation. The eventually obtained feature of visual representation was expressed as Rv, where</ns0:figDesc><ns0:graphic coords='7,162.41,110.30,372.22,88.49' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>The entire training epochs was 50, and the batch size was 32. Adam served as the model optimizer during the training of the model. The initial learning rate was 0.001, and then lr varied with epoch based on the following equation: lr = 0.001/(1. + 10 * p) * * 0.75 (13)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>The single textual model outperformed the single visual model on the Twitter dataset. Although the image features learned by visual features with the help of VGG-19 had better performance in rumor detection, the extraction of textural features was improved by Bert pre-training and residual attention. However, the single-modal model performed much. Among currently available multi-modal models, att-RNN uses LSTM and attention mechanism to process text representation, but it is not as good as EANN, which shows that EANN's event discriminator can better improve the model when it comes to rumor detection. The variational autoencoder proposed by</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MVAE can better discover multi-modal correlation, and it outperforms EANN. MAFN outperformed all</ns0:cell></ns0:row><ns0:row><ns0:cell>baselines in terms of accuracy, precision, and F1, with high accuracy increasing from 82.7% to 84.2% and</ns0:cell></ns0:row><ns0:row><ns0:cell>the F1 score going up from 82.9% to 84.0%. This verifies the effectiveness of MAFN in rumor detection.</ns0:cell></ns0:row><ns0:row><ns0:cell>A similar trend was found on the Weibo dataset. The textual model is superior to the visual model</ns0:cell></ns0:row><ns0:row><ns0:cell>among the single-modal models. The accuracy of single text reaches 77.4%, which verifies the effective-</ns0:cell></ns0:row></ns0:table><ns0:note>ness of Bert pre-training and residual self-attention mechanism in improving semantic representation.Among the multi-modal methods, att-RNN, EANN, and MSRD proposed for this task outperform Neu-ralTalk and VQA, proving the necessity of improving modal fusion. The proposed MAFN achieved the best performance among other state-of-the-art models, with accuracy increasing from 74.5 % to 77.1% and the F1 score rising from 75.8% to 78.7%. This implies that the proposed model can better extract the multi-modal joint representation of images and texts.8/12PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of performances of MAFN and other methods on Twitter and Weibo datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Twitter</ns0:cell><ns0:cell>Textual</ns0:cell><ns0:cell>0.551</ns0:cell><ns0:cell>0.680</ns0:cell><ns0:cell>0.605</ns0:cell><ns0:cell>0.520</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Visual</ns0:cell><ns0:cell>0.512</ns0:cell><ns0:cell>0.655</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>0.505</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NeuralTalk</ns0:cell><ns0:cell>0.610</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell>0.504</ns0:cell><ns0:cell>0.595</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VQA</ns0:cell><ns0:cell>0.631</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell>0.509</ns0:cell><ns0:cell>0.611</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>att-RNN</ns0:cell><ns0:cell>0.664</ns0:cell><ns0:cell>0.749</ns0:cell><ns0:cell>0.615</ns0:cell><ns0:cell>0.676</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MSRD</ns0:cell><ns0:cell>0.685</ns0:cell><ns0:cell>0.725</ns0:cell><ns0:cell>0.636</ns0:cell><ns0:cell>0.678</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EANN</ns0:cell><ns0:cell>0.715</ns0:cell><ns0:cell>0.822</ns0:cell><ns0:cell>0.638</ns0:cell><ns0:cell>0.719</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MVAE</ns0:cell><ns0:cell>0.745</ns0:cell><ns0:cell>0.801</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.758</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MAFN</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell>0.790</ns0:cell><ns0:cell>0.782</ns0:cell><ns0:cell>0.787</ns0:cell></ns0:row><ns0:row><ns0:cell>Weibo</ns0:cell><ns0:cell>Textual</ns0:cell><ns0:cell>0.774</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.739</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Visual</ns0:cell><ns0:cell>0.633</ns0:cell><ns0:cell>0.523</ns0:cell><ns0:cell>0.637</ns0:cell><ns0:cell>0.575</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NeuralTalk</ns0:cell><ns0:cell>0.717</ns0:cell><ns0:cell>0.683</ns0:cell><ns0:cell>0.843</ns0:cell><ns0:cell>0.754</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VQA</ns0:cell><ns0:cell>0.773</ns0:cell><ns0:cell>0.780</ns0:cell><ns0:cell>0.782</ns0:cell><ns0:cell>0.781</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>att-RNN</ns0:cell><ns0:cell>0.779</ns0:cell><ns0:cell>0.778</ns0:cell><ns0:cell>0.799</ns0:cell><ns0:cell>0.789</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MSRD</ns0:cell><ns0:cell>0.794</ns0:cell><ns0:cell>0.854</ns0:cell><ns0:cell>0.716</ns0:cell><ns0:cell>0.779</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MVAE</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.854</ns0:cell><ns0:cell>0.769</ns0:cell><ns0:cell>0.809</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EANN</ns0:cell><ns0:cell>0.827</ns0:cell><ns0:cell>0.847</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.829</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MAFN</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.861</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.840</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Variants 
of the proposed MAFN's performance on Weibo datasets.As shown in Table2, 'w/o -entity' denotes the proposed MAFN without entity recognition module;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>MAFN</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.861</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.840</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o entity</ns0:cell><ns0:cell>0.836</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.826</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o affine fusion</ns0:cell><ns0:cell>0.829</ns0:cell><ns0:cell>0.800</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.816</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o entity+ affine fusion</ns0:cell><ns0:cell>0.819</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.797</ns0:cell></ns0:row><ns0:row><ns0:cell>Text-only</ns0:cell><ns0:cell>0.774</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.739</ns0:cell></ns0:row><ns0:row><ns0:cell>Entity-Link-Only</ns0:cell><ns0:cell>0.549</ns0:cell><ns0:cell>0.429</ns0:cell><ns0:cell>0.529</ns0:cell><ns0:cell>0.474</ns0:cell></ns0:row><ns0:row><ns0:cell>w/o image</ns0:cell><ns0:cell>0.799</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.834</ns0:cell><ns0:cell>0.772</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>indicates the performance of the simplified variant of MAFN. The experimental results show the necessity for the model to use affine fusion and enhance entity recognition. With entity-link added, the accuracy of single-modal text classification was increased from 77.4% to 79.9%, and F1 increased</ns0:figDesc><ns0:table /><ns0:note>9/12PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022)</ns0:note></ns0:figure> <ns0:note place='foot' n='10'>/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022)</ns0:note> <ns0:note place='foot' n='12'>/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:68113:2:0:NEW 26 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Response Sheet Multi-modal affine fusion network for social media rumor detection Editor Comments There are some minor things that should be considered before publication, like the organization of the paper. Please check that all tables and figures are cited appropriately in-text. Response: Thank you for allowing us resubmission, as per your suggestion we have updated the organization section and cross verified the table and figure citations. please see the revised version as track changes. Also, we covered all the reviewers’ comments. [# PeerJ Staff Note: On line 160, Figure 4 is written 'Figure ??EQ4' #] Response: Thank you for your concern, as per your suggestion we have corrected that error. Please see the revise version. [# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]. Response: Thank you for your concern, as per your reviewer suggestions, we have proof read the article carefully and solved grammatical and sentence based mistakes, please see the updated version. Reviewer 3 Thanks, although the manuscript is improved, however, there are some minor things that should be considered before publication. - For example, the organization of the paper. All tables and figures should be cited in order from low to high. Response: Thank you for your concern, as per your suggestion we have updated the organization section and cross verified the table and figure citations. please see the revised version as track changes. Additional comments - Some grammatical and punctuation issues exist in the paper. It should be rectified. Response: Thank you for your concern, as per your reviewer suggestions, we have proof read the article carefully and solved grammatical and sentence based mistakes, please see the updated version. - The figure quality still needs enhacments. Response: Thank you for your concern, as per your suggestion, we have improved the quality of figures. Please see the updated version of manuscript. "
Here is a paper. Please give your review comments after reading it.
384
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Remote photoplethysmography (rPPG) aspires to automatically estimate heart rate (HR) variability from videos in realistic environments. A number of effective methods relying on data-driven, model-based and statistical approaches have emerged in the past two decades. They exhibit increasing ability to estimate the blood volume pulse (BVP) signal upon which BPMs (Beats per Minute) can be estimated. Furthermore, learning-based rPPG methods have been recently proposed. The present pyVHR framework represents a multistage pipeline covering the whole process for extracting and analyzing HR fluctuations. It is designed for both theoretical studies and practical applications in contexts where wearable sensors are inconvenient to use. Namely, pyVHR supports either the development, assessment and statistical analysis of novel rPPG methods, either traditional or learning-based, or simply the sound comparison of well-established methods on multiple datasets. It is built up on accelerated Python libraries for video and signal processing as well as equipped with parallel/accelerated ad-hoc procedures paving the way to online processing on a GPU. The whole accelerated process can be safely run in real-time for 30 fps HD videos with an average speedup of around 5. This paper is shaped in the form of a gentle tutorial presentation of the framework.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Heart rate variability can be monitored via photoplethysmography (PPG), an optoelectronic measurement technology first introduced in <ns0:ref type='bibr' target='#b27'>Hertzman (1937)</ns0:ref>, and then largely adopted due to its reliability and noninvasiveness <ns0:ref type='bibr' target='#b6'>(Blazek and Schultz-Ehrenburg, 1996)</ns0:ref>. Principally, this technique captures the amount of reflected light skin variations due to the blood volume changes.</ns0:p><ns0:p>Successively, remote-PPG (rPPG) has been introduced. This is a contactless technique able to measure reflected light skin variations by using an RGB-video camera as a virtual sensor <ns0:ref type='bibr' target='#b68'>(Wieringa et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b30'>Humphreys et al., 2007)</ns0:ref>. Essentially, rPPG techniques leverage on the RGB color traces acquired over time and processed to approximate the PPG signal. As a matter of fact, rPPG has sparked great interest by fostering the opportunity for measuring PPG at distance (e.g. remote health assistance) or in all those cases where contact has to be prevented (e.g. surveillance, fitness, health, emotion analysis) <ns0:ref type='bibr' target='#b0'>(Aarts et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b40'>McDuff et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b52'>Ram&#237;rez et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Boccignone et al., 2020b;</ns0:ref><ns0:ref type='bibr' target='#b54'>Rouast et al., 2017)</ns0:ref>. 
Indeed, the rPPG research field has witnessed a growing number of techniques proposed for making this approach more and more robust and thus viable in contexts facing challenging problems such as subject motion, ambient light changes, low-cost cameras <ns0:ref type='bibr' target='#b34'>(Lewandowska et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b64'>Verkruysse et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b60'>Tarassenko et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b5'>Benezeth et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b65'>Wang et al., 2016</ns0:ref><ns0:ref type='bibr' target='#b67'>Wang et al., , 2015;;</ns0:ref><ns0:ref type='bibr' target='#b49'>Pilz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>de Haan and van Leest, 2014)</ns0:ref>. More recently, alongside the traditional methods listed above, rPPG approaches based on deep learning (DL) have burst into this research field <ns0:ref type='bibr' target='#b13'>(Chen and McDuff, 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Niu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b70'>Yu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b36'>Liu et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b37'>Liu et al., , 2021;;</ns0:ref><ns0:ref type='bibr' target='#b22'>Gideon and Stent, 2021;</ns0:ref><ns0:ref type='bibr' target='#b71'>Yu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>The blossoming of the field and the variety of the proposed solutions raise the issue, for both researchers and practitioners, of a fair comparison among proposed techniques while engaging in the rapid prototyping and the systematic testing of novel methods. Under such circumstances, several reviews and surveys concerning rPPG <ns0:ref type='bibr' target='#b41'>(McDuff et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b55'>Rouast et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Heusch et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b63'>Unakafov, 2018;</ns0:ref><ns0:ref type='bibr' target='#b65'>Wang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b39'>McDuff and Blackford, 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Cheng et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b38'>McDuff, 2021;</ns0:ref><ns0:ref type='bibr' target='#b44'>Ni et al., 2021)</ns0:ref> have conducted empirical comparisons, albeit suffering under several aspects, as discussed in Sec. 1.1.</ns0:p><ns0:p>To promote the development of new methods and their experimental analysis, in <ns0:ref type='bibr' target='#b9'>Boccignone et al. (2020a)</ns0:ref> we proposed pyVHR, a preliminary version of a framework supporting the main steps of the traditional rPPG pulse rate recovery, together with a sound statistical assessment of methods' performance.</ns0:p><ns0:p>Yet, that proposal exhibited some limits, both in terms of code organization, usability, and scalability, and since it was suitable for traditional approaches only. Here we present a new version of pyVHR 1 , with a totally re-engineered code, which introduces several novelties.</ns0:p><ns0:p>First of all, we provide a dichotomous view of remote heart rate monitoring, leading to two distinct classes of approaches: traditional methods (Sec. 3) and DL-based methods (Sec. 4). Moreover, concerning the former, a further distinction is setup, concerning the Region Of Interest (ROI) taken into account, thus providing both holistic and patch-based methods. The former takes into account the whole skin region, extracted from the face captured in subsequent frames. 
Undoubtedly, it is the simplest approach, giving satisfying results when applied on video acquired in controlled contexts. However, in more complex settings the illumination conditions are frequently unstable, giving rise to either high variability of skin tone or shading effects. In these cases the holistic approach is prone to biases altering subsequent analyses.</ns0:p><ns0:p>Differently, the patch-based approach employs and tracks an ensemble of patches sampling the whole face. The rationale behind this choice is twofold. On the one hand, the face regions affected by either shadows or bad lighting conditions can be discarded, thus avoiding uncorrelated measurements with the HR ground-truth. On the other hand, the amount of observations available allows for making the final HR estimate more robust, even through simple statistics (e.g., medians), while controlling the confidence levels.</ns0:p><ns0:p>Second, the framework is agile, covers each stage of the pipeline that instantiates it, and it is easily extensible. Indeed, one can freely embed new methods, datasets or tools for the intermediate steps (see <ns0:ref type='bibr'>Sec. 6</ns0:ref>) such as for instance: face detection and extraction, pre-and post-filtering of RGBs traces or BVPs signals, spectral analysis techniques, statistical methods.</ns0:p><ns0:p>pyVHR can be easily employed for many diverse applications such as anti-spoofing, aliveness detection, affective computing, biometrics. For instance, in Sec. 7 a case study on the adoption of rPPG technology for a Deepfake detection task is presented.</ns0:p><ns0:p>Finally, computations can be achieved in real-time thanks to the NVIDIA GPU (Graphics Processing Units) accelerated code and the use of optimized Python primitives.</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Related Works</ns0:head><ns0:p>In the last decade the rPPG domain has witnessed a flourish of investigations <ns0:ref type='bibr' target='#b41'>(McDuff et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b55'>Rouast et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Heusch et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b63'>Unakafov, 2018;</ns0:ref><ns0:ref type='bibr' target='#b65'>Wang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b39'>McDuff and Blackford, 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Cheng et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b38'>McDuff, 2021;</ns0:ref><ns0:ref type='bibr' target='#b44'>Ni et al., 2021</ns0:ref>). Yet, the problem of a fair and reproducible evaluation has been in general overlooked. It is undeniable that theoretical evaluations are almost infeasible, given the complex operations or transformations each algorithm performs. Nevertheless, empirical comparisons could be very informative if conducted in the light of some methodological criteria <ns0:ref type='bibr' target='#b9'>(Boccignone et al., 2020a)</ns0:ref>. In brief: pre/post processing standardization; reproducible evaluation; multiple dataset testing; rigorous statistical assessment.</ns0:p><ns0:p>To the best of our knowledge, a framework respecting all these criteria was missing until the introduction of the early version of pyVHR <ns0:ref type='bibr' target='#b9'>Boccignone et al. (2020a)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b28'>Heusch et al. 
(2017a)</ns0:ref> a Python collection of rPPG algorithms is presented, without claiming to be complete in the method assessment.</ns0:p><ns0:p>Interestingly, in <ns0:ref type='bibr' target='#b63'>Unakafov (2018)</ns0:ref>, the authors highlight the dependency of the pulse rate estimation on five main steps: ROI-selection, pre-processing, rPPG method, post-processing, pulse rate estimation.</ns0:p><ns0:p>They present a theoretical framework to assess different pipelines in order to find out which combination provides the most precise PPG estimation; results are reported on the DEAP dataset <ns0:ref type='bibr' target='#b32'>(Koelstra et al., 2011)</ns0:ref>.</ns0:p><ns0:p>Unfortunately, no code has been made available.</ns0:p><ns0:p>1 Freely available on GitHub: github.com/phuselab/pyVHR.</ns0:p></ns0:div> <ns0:div><ns0:head>2/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Pilz (2019) a MATLAB toolbox is presented, implementing two newly proposed methods, namely Local Group Invariance (LGI) <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref> and Riemannian-PPGI (SPH) <ns0:ref type='bibr' target='#b48'>(Pilz, 2019)</ns0:ref>, and comparing them to the GREEN channel expectation <ns0:ref type='bibr' target='#b64'>(Verkruysse et al., 2008)</ns0:ref> baseline, and two state-of-the-art methods, i.e. Spatial Subspace Rotation (SSR) <ns0:ref type='bibr' target='#b67'>(Wang et al., 2015)</ns0:ref>, and Projection Orthogonal to Skin (POS) <ns0:ref type='bibr' target='#b65'>(Wang et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b39'>McDuff and Blackford (2019)</ns0:ref> authors propose iPhys, a MATLAB toolbox implementing several methods, such as Green Channel, POS, CHROM <ns0:ref type='bibr' target='#b16'>(De Haan and Jeanne, 2013)</ns0:ref>, ICA <ns0:ref type='bibr' target='#b50'>(Poh et al., 2010)</ns0:ref>, and BCG <ns0:ref type='bibr' target='#b2'>(Balakrishnan et al., 2013)</ns0:ref>. The toolbox is presented as a bare collection of method implementations, without aiming at setting up a rigorous comparison framework on one or more datasets. It is worth noticing that all these frameworks are suitable for traditional methods only. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes at a glance the main differences between pyVHR and the already proposed frameworks. </ns0:p></ns0:div> <ns0:div><ns0:head n='2'>INSTALLATION</ns0:head><ns0:p>The quickest way to get started with pyVHR is to install the miniconda distribution, a lightweight minimal installation of Anaconda Python.</ns0:p><ns0:p>Once installed, create a new conda environment, automatically fetching all the dependencies based on the adopted architecture -with or without GPU -, by one of the following commands: for CPU with GPU support.</ns0:p><ns0:p>The source code for pyVHR can be found on GitHub at https://github.com/phuselab/pyVHR and it is distributed under the GPL-3.0 License. On GitHub, the community can report issues, questions as well as contribute with code to the project. The documentation of the pyVHR framework is available at https://phuselab.github.io/pyVHR/.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PYVHR PIPELINE FOR TRADITIONAL METHODS</ns0:head><ns0:p>In this section, we introduce the pyVHR modules to be referred by traditional rPPG methods. 
They are built on top of both APIs developed for the purpose, and open-source libraries. This pipeline follows a software design strategy that assemble sequential modules or stages, with the output of a stage serving as input to one or more subsequent stages. This responds to the need for the framework to be flexible and extensible in order to be more maintainable and improvable over time with innovative or alternative techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>3/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>The Pipeline Stages</ns0:head><ns0:p>The Pipeline() class implements the sequence of stages or steps that are usually required by the vast majority of rPPG methods proposed in the literature, in order to estimate the BPM of a subject, given a video displaying his/her face. Eventually, going through all these steps in pyVHR is as simple as writing a couple of lines of Python code:</ns0:p><ns0:p>1 from pyVHR.analysis.pipeline import Pipeline Calling the run on video() method of the Pipeline() class starts the analysis of the video provided as argument and produces as output the time step of the estimated BPM and related uncertainty estimate. Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> depicts the predicted BPM on Subject1 of the UBFC 2 dataset <ns0:ref type='bibr' target='#b8'>(Bobbia et al., 2019)</ns0:ref> (blue trajectory). For comparison, the ground truth BPM trajectory (as recorded from a PPG sensor) is reported in red. On the one hand the above-mentioned example witnesses the ease of use of the package by hiding the whole pipeline behind a single function call. On the other hand it may be considered too constraining as hinders the user from exploiting its full flexibility. Indeed, the run on video() method can be thought of as a black box delivering the desired result with the least amount of effort, relying on default parameter setting.</ns0:p><ns0:p>Nevertheless, some users may be interested in playing along with all the different modules composing the pyVHR pipeline and the related parameters. The following sections aim at describing in detail each of such elements. These are shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>(b) and can be recapped as follows:</ns0:p><ns0:p>1. Skin extraction: The goal of this first step is to perform a face skin segmentation in order to extract PPG-related areas; the latter are subsequently collected in either a single patch (holistic approach)</ns0:p><ns0:p>or a bunch of 'sparse' patches covering the whole face (patch-wise approach).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>RGB signal processing:</ns0:head><ns0:p>The patches, either one or more, are coherently tracked and are used to compute the average colour intensities along overlapping windows, thus providing multiple time-varying RGB signals for each temporal window. The multi-stage pipeline for traditional approaches that goes through: windowing and patch collection, RGB trace computation, pre-filtering, the application of an rPPG algorithm estimating a BVP signal, post-filtering and BPM estimate through PSD analysis.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Skin Extraction</ns0:head><ns0:p>The skin extraction step implemented in pyVHR consists in the segmentation of the face region of the subject. 
Typically, the regions corresponding to the eyes and mouth are discarded from the analysis. This can be accomplished by pyVHR in two different ways, denoted as:</ns0:p><ns0:p>1. the Convex-hull extractor, 2. the Face parsing extractor.</ns0:p><ns0:p>The Convex-hull extractor considers the skin region as the convex-hull of a set of the 468 facial fiducial points delivered by the MediaPipe face mesh <ns0:ref type='bibr' target='#b23'>(Google, 2021)</ns0:ref>. The latter provides reliable face/landmark detection and tracking in real-time. From the convex-hulls including the whole face, pyVHR subtracts those computed from the landmarks associated to the eyes and mouth. The resulting mask is employed to isolate the pixels that are generally associated to the skin. An example is shown in the left image of Figure <ns0:ref type='figure'>3</ns0:ref> on a subject of the LG-PPGI dataset <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref> 3 .</ns0:p><ns0:p>Alternatively, the Face parsing extractor computes a semantic segmentation of the subject's face. It produces pixel-wise label maps for different semantic components (e.g., hair, mouth, eyes, nose, etc...), thus allowing to retain only those related to the skin regions. Face semantic segmentation is carried over with BiSeNet <ns0:ref type='bibr' target='#b69'>(Yu et al., 2018)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1'>Holistic Approach</ns0:head><ns0:p>The skin extraction method paves the way to the RGB trace computation which is accomplished in a channel-wise fashion by averaging the facial skin colour intensities. This is referred to as the holistic approach, and within the pyVHR framework it can be instantiated as follows:</ns0:p><ns0:p>1 sig = sig_processing.extract_holistic(videoFileName)</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.2'>Patch-based Approach</ns0:head><ns0:p>In contrast to the holistic approach, the patch-based one takes into account a bunch of localized regions of interest, thus extracting as many RGB traces as patches. Clearly, imaging photoplethysmography in unconstrained settings is sensitive to subjects changing pose, moving their head or talking. This calls for a mechanism for robust detection and tracking of such regions.</ns0:p><ns0:p>To such end, pyVHR again relies on the MediaPipe Face Mesh, which establishes a metric 3D space to infer the face landmark screen positions by a lightweight method to drive a robust and performant tracking. The analysis runs on CPU and has a minimal speed or memory footprint on top of the inference model.</ns0:p><ns0:p>The user can easily select up to 468 patches centered on a subset of landmarks and define them as the set of informative regions on which the subsequent steps of the pipeline are evaluated. An example of landmark extraction and tracking is shown in Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>. Note that eventually, a patch may disappear due to subject's movements, hence delivering only partial or none contribution. It is worth noting how the user is allowed to arbitrarily compose its own set of patches by exploiting pyVHR utility functions. In the example below, three patches have been selected corresponding to the forehead, left and right cheek areas. 
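A minimal sketch of such a selection is given below. The SignalProcessing() class is the one referred to in the text; the specific MediaPipe face-mesh indices (151 for the forehead, 205 and 425 for the cheeks), the set_landmarks() helper name and the extract_patches() argument values are illustrative assumptions rather than prescribed choices.

from pyVHR.extraction.sig_processing import SignalProcessing  # module path assumed

videoFileName = '/path/to/video.mp4'
# Illustrative MediaPipe face-mesh landmark indices: forehead, left cheek, right cheek
patch_landmarks = [151, 205, 425]

sig_processing = SignalProcessing()
sig_processing.set_landmarks(patch_landmarks)                  # helper name assumed
# patch-wise RGB traces for the selected landmarks (argument values assumed)
sig = sig_processing.extract_patches(videoFileName, 'squares', 'mean')
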
Usually, several patches are chosen in order to better control the high variability in the results and to achieve high level of confidence, while making smaller the margin of error.</ns0:p><ns0:p>As for the holistic approach, video loading and patch extraction are handled by few APIs available in the SignalProcessing() class, as shown in the following script. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.3'>RGB Signal Computation</ns0:head><ns0:p>In this step, the skin regions detected and tracked on the subject's face are split in successive overlapping time windows. Next, the RGB traces associated to each region are computed by averaging their colour intensities.</ns0:p><ns0:p>More formally, let us consider an RGB video v &#8712; R w&#215;h&#215;3&#215;T of T frames containing a face, split on P (possibly overlapped) patches. Once the i-th patch has been selected, an RGB signal q i (t) is computed. Denote {p j i (t)} N i j=1 the set of N i pixels belonging to the i-th patch at time t, where p j i (t) &#8712; [0, 255] 3 . Then, q i (t) is recovered by averaging on pixel colour intensities, i.e.,</ns0:p><ns0:formula xml:id='formula_0'>q i (t) = 1 N i N i &#8721; j=1 p j i (t), i = 1, . . . , P.</ns0:formula><ns0:p>In the time-splitting process, fixed an integer &#964; &gt; 0, q i (t) is sliced into K overlapping windows of M = W s F s frames, thus obtaining</ns0:p><ns0:formula xml:id='formula_1'>q k i (t) = q i (t)w (t &#8722; k&#964;F s ) , k = 0, . . . , K &#8722; 1.</ns0:formula><ns0:p>where F s represents the video frame rate, W s the window length in seconds, while w is the rectangular window defined as:</ns0:p><ns0:formula xml:id='formula_2'>w(t) = 1, 0 &#8804; t &lt; M 0, otherwise.<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In order for the signal segments to actually overlap, the overlap inequality &#964; &lt; W s must be verified.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> shows how the above described patch-based split and tracking procedure is put in place.</ns0:p><ns0:p>In pyVHR , the extraction of the windowed RGB signals is computed by the following code snippet.</ns0:p><ns0:p>1 from pyVHR.extraction.utils import sig_windowing, get_fps Notably, beside being able to switch between convex-hull and face parsing, the user can easily change the main process parameters such as the window length and the amount of frame overlapping.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.4'>Methods for BVP estimation</ns0:head><ns0:p>Given that the framework can rely on holistic-wise and patch-wise processing, pyVHR estimates the BVP signal either from a single trace or leveraging on multiple traces. In both cases it employs a wide range of state of the art rPPG methods.</ns0:p><ns0:p>In particular, the windowed RGB traces q k i (t) (i = 1, . . . , P, with P = 1 in the holistic case) of length K are given in input to the rPPG method at hand, which outputs the signals y k i (t) representing the estimated BVP associated to the i-th patch in the k-th time window. 
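In code, this hand-off from windowed RGB traces to a per-patch BVP estimate can be sketched as follows, assuming sig holds the RGB traces extracted above; the sig_windowing argument order (signal, window size in seconds, stride in seconds, frame rate) and the module paths for RGB_sig_to_BVP and cpu_POS are assumptions.

from pyVHR.extraction.utils import sig_windowing, get_fps
from pyVHR.BVP.BVP import RGB_sig_to_BVP        # module path assumed
from pyVHR.BVP.methods import cpu_POS           # module path assumed

fps = get_fps(videoFileName)                    # video frame rate
# 6-second windows with 1-second stride (argument order assumed)
windowed_sig, timesES = sig_windowing(sig, 6, 1, fps)
# each window of each patch is mapped to an estimated BVP signal y_i^k(t)
bvp = RGB_sig_to_BVP(windowed_sig, fps, method=cpu_POS)
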
Currently, the package implements the following methods for the estimation of the pulse signal from the RGB traces: GREEN <ns0:ref type='bibr' target='#b64'>(Verkruysse et al., 2008)</ns0:ref>, CHROM (De Haan and Jeanne, 2013), ICA <ns0:ref type='bibr' target='#b50'>(Poh et al., 2010)</ns0:ref>, LGI <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref>, PBV (de Haan and van Leest, 2014), PCA <ns0:ref type='bibr' target='#b34'>(Lewandowska et al., 2011)</ns0:ref>, POS <ns0:ref type='bibr' target='#b65'>(Wang et al., 2016)</ns0:ref>, SSR <ns0:ref type='bibr' target='#b67'>(Wang et al., 2015)</ns0:ref>. However, the user may define any custom method for estimating BVP by extending the pyVHR.BVP.methods module.</ns0:p><ns0:p>The BVP signal can be estimated in pyVHR as follows:</ns0:p><ns0:p>1 bvp = RGB_sig_to_BVP(windowed_sig, fps, method=cpu_POS)</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> depicts the BVP signals estimated by four different rPPG methods implemented in pyVHR (POS, GREEN, CHROM, PCA), on the same time window using the holistic patch.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.5'>Pre and Post-Filtering</ns0:head><ns0:p>pyVHR offers simple APIs to apply filters on either the RGB traces q i (t) (pre-filtering) or the estimated pulse signal y i (t) (post-filtering). A set of ready to use filters are implemented, namely:</ns0:p><ns0:p>&#8226; Band Pass (BP) filter: filters the input signal using a bandpass N-th order Butterworth filter with a given passband frequency range.</ns0:p><ns0:p>&#8226; Detrending: subtracts offsets or linear trends from time-domain input data.</ns0:p><ns0:p>&#8226; Zero-Mean: Removes the DC component from a given signal.</ns0:p><ns0:p>However, the user can adopt any custom filter complying with the function signature defined in pyVHR.BVP.filters. The following provides an example of how to detrend an RGB trace q i (t):</ns0:p><ns0:formula xml:id='formula_3'>1 filtered_sig = apply_filter(sig, detrend)</ns0:formula><ns0:p>Additionally, a Band-Pass filter can be applied on the estimated BVP signals y i (t) in order to rule out the frequencies that leave outside the feasible range of typical heart rates (which is usually between 40Hz and 200Hz):</ns0:p><ns0:p>1 filtered_bvp = apply_filter(bvp, BPfilter, Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Traditional rPPG algorithms implemented in pyVHR .</ns0:p></ns0:div> <ns0:div><ns0:head>Method Description</ns0:head><ns0:p>GREEN <ns0:ref type='bibr' target='#b64'>(Verkruysse et al., 2008)</ns0:ref> The Green (G) temporal trace is directly considered as an estimate of the BVP signal. 
Usually adopted as a baseline method.</ns0:p><ns0:p>ICA <ns0:ref type='bibr' target='#b50'>(Poh et al., 2010)</ns0:ref> Independent Component Analysis (ICA) is employed to extract the pulse signal via Blind Source Separation of temporal RGB mixtures.</ns0:p><ns0:p>PCA <ns0:ref type='bibr' target='#b34'>(Lewandowska et al., 2011)</ns0:ref> Principal Component Analysis (PCA) of temporal RGB traces is employed to estimate the BVP signal.</ns0:p></ns0:div> <ns0:div><ns0:head>CHROM (De Haan and Jeanne, 2013)</ns0:head><ns0:p>A Chrominance-based method for the BVP signal estimation.</ns0:p><ns0:p>PBV (de Haan and van Leest, 2014)</ns0:p><ns0:p>Computes the signature of blood volume pulse changes to distinguish the pulse-induced color changes from motion noise in RGB temporal traces.</ns0:p><ns0:p>SSR (S2R) <ns0:ref type='bibr' target='#b67'>(Wang et al., 2015)</ns0:ref> Spatial Subspace Rotation (SSR); estimates a spatial subspace of skin-pixels and measures its temporal rotation for extracting pulse signal.</ns0:p><ns0:p>POS <ns0:ref type='bibr' target='#b65'>(Wang et al., 2016)</ns0:ref> Plane Orthogonal to the Skin (POS). Pulse signal extraction is performed via a projection plane orthogonal to the skin tone.</ns0:p><ns0:p>LGI <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref> Local Group Invariance (LGI). Computes a feature representation which is invariant to action and motion based on differentiable local transformations. Manuscript to be reviewed Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.6'>From BVP to BPM</ns0:head><ns0:p>Given the estimated BVP signal, the beats per minute (BPM) associated to a given time window can be easily recovered via analysis of its frequency domain representation. In particular, pyVHR estimates the Power Spectral Density (PSD) of the windowed pulse signal y k i (t) via discrete time Fourier transform (DFT) using the Welch's method. The latter employs both averaging and smoothing to analyze the underlying random process.</ns0:p><ns0:p>Given a sequence y k i (t), call S k i (&#957;) its power spectra (periodogram) estimated via the Welch's method. The BPM is recovered by selecting the normalized frequency associated to the peak of the periodogram:</ns0:p><ns0:formula xml:id='formula_4'>&#957;k i = arg max &#957;&#8712;&#8486; S k i (&#957;) ,</ns0:formula><ns0:p>corresponding to the PSD maxima as computed by Welch's method on the range &#8486; = [39, 240] of feasible BPMs.</ns0:p><ns0:p>The instantaneous BPM associated to the k-th time window (k &#8712; 1, . . . , K) for the i-th patch (i &#8712; 1, . . . , P), is recovered by converting the normalized peak frequency &#957;k i into an actual frequency,</ns0:p><ns0:formula xml:id='formula_5'>&#293;k i = &#957;k i F s L ,</ns0:formula><ns0:p>where F s is the video frame rate and L is the DFT size. Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows the Welch's estimates for the BVP signals of Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>. The peak in the spectrum represents the instantaneous Heart Rate ( &#293;k i ).</ns0:p><ns0:p>(A) POS When multiple patches have been selected (P &gt; 1), the predicted BPM for the k-th time window can be obtained resorting to simple statistical measures. 
Specifically, pyVHR computes the median BPM value of the predictions coming from the P patches.</ns0:p><ns0:p>Formally, call H k the ordered list of P BPM predictions coming from each patch in the k-th time window; then:</ns0:p><ns0:formula xml:id='formula_6'>&#293;k = median(H k ) = H k P&#8722;1 2 if P is odd (H k [ P 2 &#8722;1]+H k [ P 2 ]) 2 if P is even.</ns0:formula><ns0:p>(2) Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Note that if the number of patches P = 1 (i.e. a single patch has been selected or the holistic approach has been chosen), then:</ns0:p><ns0:formula xml:id='formula_7'>&#293;k = H k [0].<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Moreover, when multiple patches have been selected, a measure of variability of the predictions can be computed in order to quantify the uncertainty of the estimation. In particular, pyVHR computes the Median Absolute Deviation (MAD) as a robust measure of statistical dispersion. The MAD is defined as:</ns0:p><ns0:formula xml:id='formula_8'>MAD k = median(|H k &#8722; &#293;k |).<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Clearly, the MAD drops to 0 when P = 1. Computing the BPM from the BVP signal(s) can be easily accomplished in pyVHR as follows:</ns0:p><ns0:p>1 from pyVHR.BPM.BPM import BVP_to_BPM, multi_est_BPM_median The result along with the ground-truth are shown in Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Efficient computation and GPU acceleration</ns0:head><ns0:p>Most of the steps composing the pipeline described above are well suited for parallel computation. For instance, the linear algebra operations involved in the pulse signal recovery from the RGB signal or, more Predicted BPM (blue) for the Subject1 of the UBFC Dataset. The uncertainty is plotted in shaded blue, while the ground-truth is represented by the red line.</ns0:p><ns0:p>generally, the signal processing steps (e.g. filtering, spectral estimation, etc..), not to mention the skin segmentation procedures from high resolution videos.</ns0:p><ns0:p>To such end, pyVHR exploits the massive parallelism of Graphical Processing Units (GPUs). It is worth mentioning that GPUs are not strictly required to run pyVHR code; nevertheless, in some cases, GPU accelerated code allows to run the pipeline in real-time.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref> shows the average per-frame time requirement for getting through the whole pipeline when using the POS method. It is worth noticing that, when using the Holistic approach (or equivalently one single patch), a video frame can be processed in less than 0.025 seconds, regardless of the adopted architecture (either CPU or GPU). This means that the whole pipeline can be safely run in real-time for videos at 30 frames per second (the 30 fps time limit is represented by the dashed green line). Obviously, when multiple patches are employed (in the example of Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref>, P = 100 patches are used), the average time required by CPUs to process a single frame rises up to about 0.12 seconds.</ns0:p><ns0:p>Notably, the adoption of GPU accelerated code allows to run the whole pipeline in real-time, even when using a huge number of patches. 
Indeed, the ratio to CPU time and GPU time, i.e., the speedup defined The maximum RAM usage for 1 minute HD video analysis is 2.5 GB (average is 2 GB); the maximum GPU memory usage for 1 minute HD video analysis is 1.8 GB (average is 1.4 GB).</ns0:p><ns0:p>In the following it is shown how to enable CUDA GPU acceleration on different steps in the Pipeline:</ns0:p><ns0:p>&#8226; </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>PYVHR PIPELINE FOR DEEP-LEARNING METHODS</ns0:head><ns0:p>Recent literature in computer vision has given wide prominence to end-to-end deep neural models and their ability to outperform traditional methods requiring hand-crafted feature design. In this context, learning frameworks for recovering physiological signals were also born <ns0:ref type='bibr' target='#b13'>(Chen and McDuff, 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Niu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b70'>Yu et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b71'>Yu et al., , 2021;;</ns0:ref><ns0:ref type='bibr' target='#b22'>Gideon and Stent, 2021;</ns0:ref><ns0:ref type='bibr' target='#b37'>Liu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b47'>Nowara et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The end-to-end nature of the DL based approaches is reflected by a much simpler pipeline; indeed, these methods typically require as input raw video frames that are processed by the DL architecture at hand and produce either a BVP signal or the estimated heart rate, directly. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science As for the pipeline for traditional methods shown in previous section, pyVHR also defines a sequence of stages that allows to recover the time varying heart rate from a sequence of images displaying a face.</ns0:p><ns0:p>These are detailed in the following.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>The Stages for End-to-end Methods</ns0:head><ns0:p>Given a video displaying a subject face, the DeepPipeline() class performs the necessary steps for the rPPG estimate using a chosen end-to-end DL method. Specifically, the pipeline includes the handling of input videos, the estimation from the sequence of raw frames and, eventually, the pre/post-processing steps.</ns0:p><ns0:p>The following code snippet carries out the above procedure with few statements:</ns0:p><ns0:p>1 from pyVHR.analysis.pipeline import DeepPipeline Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the very same procedure outlined in Section 3.2.6, namely time windowing and spectral estimation.</ns0:p><ns0:p>Eventually, some optional pre/post filtering operations (Section 3.2.5) can be performed.</ns0:p><ns0:p>The following few lines of Python code allow to carry out the above steps explicitly: In order to embed a new DL-method, the code above should be simply modified substituting the function MTTS CAN deep with a new one implementing the method at hand, while respecting the same signature (cfr. Sec. 6).</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>ASSESSMENT OF RPPG METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Does a given rPPG algorithm outperforms the existing ones?</ns0:head><ns0:p>To what extent? Is the difference in performance significantly large? Does a particular post-filtering algorithm cause an increase/drop of performance?</ns0:p><ns0:p>Answering all such questions, calls for a rigorous statistical assessment of rPPG methods. 
As a matter of fact, although the field has recently experienced a substantial gain in interest from the scientific community, it is still missing a sound and reproducible assessment methodology allowing to gain meaningful insights and delivering best practices.</ns0:p><ns0:p>By and large, novel algorithms proposed in the literature are benchmarked on non-publicly available datasets, thus hindering proper reproducibility of results. Moreover, in many cases, the reported results are obtained with different pipelines; this makes it difficult to precisely identify the actual effect of the proposed method on the final performance measurement.</ns0:p><ns0:p>Besides that, the performance assessment mostly relies on basic and common-sense techniques, such as roughly rank new methods with respect to the state-of-the-art. These crude methodologies often make the assessment unfair and statistically unsound. Conversely, a good research practice should not limit to barely report performance numbers, but rather aiming at principled and carefully designed analyses. This is in accordance with the growing quest for statistical procedures in performance assessment in many different fields, including machine learning and computer vision <ns0:ref type='bibr' target='#b18'>(Dem&#353;ar, 2006;</ns0:ref><ns0:ref type='bibr' target='#b4'>Benavoli et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b62'>Torralba and Efros, 2011;</ns0:ref><ns0:ref type='bibr' target='#b24'>Graczyk et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b20'>Eisinga et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In the vein of its forerunner <ns0:ref type='bibr' target='#b9'>(Boccignone et al., 2020a)</ns0:ref>, pyVHR deals with all such problems by means of its statistical assessment module. The design principles can be recapped as follows:</ns0:p><ns0:p>&#8226; Standardized pipeline: When setting up an experiment to evaluate a new rPPG algorithm, the whole pipeline (except the algorithm) should be held fixed.</ns0:p><ns0:p>&#8226; Reproducible evaluation: The evaluation protocol should be reproducible. This entails adopting publicly available datasets and code.</ns0:p><ns0:p>&#8226; Comparison over multiple datasets: In order to avoid dataset bias, the analysis should be conducted on as many diverse datasets as possible.</ns0:p><ns0:p>&#8226; Rigorous statistical assessment: The reported results should be the outcome of proper statistical procedures, assessing their statistical significance.</ns0:p><ns0:p>The workflow of the Statistical Assessment Module is depicted in Figure <ns0:ref type='figure' target='#fig_20'>12</ns0:ref>.</ns0:p><ns0:p>In a nutshell, each video composing a particular rPPG dataset is processed by the pyVHR pipeline as described above. Moreover, the package provides primitives for loading and processing real BVP signals Finally, the estimated ( &#293;k ) and ground truth (h k ) BPM signals are compared with one another exploiting standard metrics (c.f.r Section 5.1). Eventually, statistically rigorous comparisons can be effortlessly performed (c.f.r Section 5.3).</ns0:p><ns0:p>Notably, the many parameters that make up each step of the pipeline (from the ROI selection method to the pre/post filtering operations, passing through the BVP estimation by one or multiple rPPG algorithms) can be easily specified in a configuration (.cfg) file. Setting up a .cfg file allows to design the experimental procedure in accordance with the principles summarized above. 
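As an illustration of the comparison carried out at the end of this workflow (the metrics themselves are formally defined in the next Section), a minimal NumPy sketch, not the pyVHR implementation, comparing a predicted and a reference per-window BPM trace could read as follows.

import numpy as np

def compare_bpm(bpm_est, bpm_ref):
    # bpm_est, bpm_ref: equal-length NumPy arrays of per-window BPM values
    err = bpm_est - bpm_ref
    mae = np.mean(np.abs(err))                    # Mean Absolute Error
    rmse = np.sqrt(np.mean(err ** 2))             # Root Mean Squared Error
    pcc = np.corrcoef(bpm_est, bpm_ref)[0, 1]     # Pearson Correlation Coefficient
    cov = np.cov(bpm_est, bpm_ref, bias=True)[0, 1]
    ccc = 2 * cov / (bpm_est.var() + bpm_ref.var()
                     + (bpm_est.mean() - bpm_ref.mean()) ** 2)  # Concordance Correlation
    return {'MAE': mae, 'RMSE': rmse, 'PCC': pcc, 'CCC': ccc}
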
A brief description of the implemented comparison metrics and the .cfg file specifications are provided in the following Sections.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Metrics</ns0:head><ns0:p>pyVHR provides common metrics to evaluate the performance of one or more rPPG methods in estimating the correct heart rate (BPM) over time. These are briefly recalled here.</ns0:p><ns0:p>In order to measure the accuracy of the BPM estimate &#293;, this is compared to the reference BPM as recovered from contact BVP sensors h. To this end, the reference BVP signal g(t) is splitted into overlapping windows, similarly to the procedure described in Section 3.2.4 for the estimated BVP, thus producing K windowed signals g k (k &#8712; 1, . . . , K). The reference BPM is found via spectral analysis of each window, as described in Section 3.2.6. This yields the K reference BPM h k to be compared to the estimated one &#293;k by adopting any of the following metrics:</ns0:p><ns0:p>Mean Absolute Error (MAE) The Mean Absolute Error measures the average absolute difference between the estimated &#293; and reference BPM h. It is computed as:</ns0:p><ns0:formula xml:id='formula_9'>MAE = 1 K &#8721; k | &#293;k &#8722; h k |.</ns0:formula><ns0:p>Root Mean Squared Error (RMSE). The Root-Mean-Square Error measures the difference between quantities in terms of the square root of the average of squared differences, i.e.</ns0:p><ns0:formula xml:id='formula_10'>RMSE = 1 K &#8721; k ( &#293;k &#8722; h k ) 2 .</ns0:formula><ns0:p>Pearson Correlation Coefficient (PCC). Pearson Correlation Coefficient measures the linear correlation between the estimate &#293; and the ground truth h. It is defined as: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_11'>PCC = &#8721; k ( &#293;k &#8722; &#956;)(h k &#8722; &#181;) &#963; 1 &#963; 2 ,</ns0:formula><ns0:p>here &#956; and &#181; denote the means of the respective signals, while &#963; 1 and &#963; 2 are their standard deviations.</ns0:p><ns0:p>Concordance Correlation Coefficient (CCC). The Concordance Correlation Coefficient <ns0:ref type='bibr' target='#b33'>(Lawrence and Lin, 1989</ns0:ref>) is a measure of the agreement between two quantities. Like Pearson's correlation, CCC ranges from -1 to 1, with perfect agreement at 1. It is defined as:</ns0:p><ns0:formula xml:id='formula_12'>CCC = 2&#963; 12 ( &#956; &#8722; &#181;) 2 + &#963; 2 1 + &#963; 2 2</ns0:formula><ns0:p>where &#956; and &#181; denote the means of the prediceted and reference BPM traces, respectively. Likewise, &#963; 1 and &#963; 2 are their standard deviations, while &#963; 12 is their covariance.</ns0:p><ns0:p>Signal to Noise Ratio (SNR). The SNR (De Haan and Jeanne, 2013) measures the ratio of the power around the reference HR frequency plus the first harmonic of the estimated pulse-signal and the remaining power contained in the spectrum of the estimated BVP. 
Formally it is defined as:</ns0:p><ns0:formula xml:id='formula_13'>SNR = 1 K &#8721; K 10 log 10 &#8721; v U k (v)S k (v) 2 &#8721; v (1 &#8722;U k (v)) S k (v)) 2</ns0:formula><ns0:p>where S k (v) is the power spectral density of the estimated BVP in the k-th time window and</ns0:p><ns0:formula xml:id='formula_14'>U k (v)</ns0:formula><ns0:p>is a binary mask that selects the power contained within &#177;12 BPM around the reference Heart Rate and its first harmonic.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.2'>The configuration (.cfg) file</ns0:head><ns0:p>The .cfg file allows to set up the experimental procedure for the evaluation of models. It is structured into 6 main blocks that are briefly described here:</ns0:p><ns0:p>Dataset. This block contains the information relative to a particular rPPG dataset, namely its name, and its path. Filters. It defines the filtering methods to be eventually used in the pre/post filtering phase. In the following example a band-pass butterworth filter of 6-th order is defined, with a passing band between 40Hz and 240Hz. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>BVP. Sets up the rPPG method to be adopted for the estimation of the BVP signal. Multiple methods can be provided in order to compare them. In this example two methods will be analyzed, namely POS and GREEN (adopting their CPU implementations). Methods. It allows to configure each rPPG method to be analyzed (e.g. eventual parametrs and pre/post filters). The two methods chosen above are configured here. In particular, POS will not employ any pre/post filtering, while for the GREEN method, the above-defined band pass filter will be applied for both pre and post filtering. The experiment on the dataset defined in the .cfg file can be simply launched as:</ns0:p><ns0:p>1 from pyVHR.analysis.pipeline import Pipeline In the above code, the run on dataset method from the Pipeline class, parses the .cfg file and initiates a pipeline for each rPPG method defined in it. The pipelines are used to process each video in the dataset. Concurrently, ground truth BPM data is loaded and comparison metrics are computed w.r.t.</ns0:p><ns0:p>the predictions (cfr. Figure <ns0:ref type='figure' target='#fig_20'>12</ns0:ref>). The results are delivered as a table containing for each method the value of the comparison metrics computed between ground truth and predicted BPM signals, on each video belonging to the dataset, which are then saved to disk. The same considerations hold for the definition of .cfg files associated to DL-based methods. Clearly, in this case the information related to the RGB Signal block are unnecessary.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3'>Significance Testing</ns0:head><ns0:p>Once the comparison metrics have been computed for all the considered methods, the significance of the differences between their performance can be evaluated. In other words, we want to ensure that such difference is not drawn by chance, but it represents an actual improvement of one method over another.</ns0:p><ns0:p>To this end, pyVHR resorts to standard statistical hypothesis testing procedures. Clearly, the results eventually obtained represent a typical repeated measure design, in which two or more pipelines are compared on paired samples (videos). 
A number of statistical tests are available in order to deal with such state of affairs.</ns0:p><ns0:p>In the two populations case, typically, the paired t-test is employed; alternatively some non-parametric versions of the paired t-test are at hand, namely the Sign Test or the Wilcoxon signed ranks Test; in general the latter is preferred over the former due to its higher power. For the same reason it is recommended to adopt the parametric paired t-test instead of the non-parametric Wilcoxon test. However, the use of the paired t-test is subject to the constraint of normality of the populations. If such condition is not met, a non-parametric test should be chosen.</ns0:p><ns0:p>Similarly, with more than two pipelines, repeated measure ANOVA is the parametric test that is usually adopted. Resorting to ANOVA, requires Normality and Heteroskedasticity (equality of variances) conditions to be met. Alternatively, when these cannot be ensured, the Friedman Test is chosen.</ns0:p></ns0:div> <ns0:div><ns0:head>18/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In pyVHR the Normality and Heteroskedasticity conditions are automatically checked via the Shapiro-Wilk Normality test and, depending on the Normality with Levene's test or Bartlett's tests for homogeneity of the data.</ns0:p><ns0:p>In the case of multiple comparisons (ANOVA/Friedman), a proper post-hoc analysis is required in order to establish the pairwise differences among the pipelines. Specifically, the Tukey post-hoc Test is adopted downstream to the rejection of the null hypothesis of ANOVA (the means of the populations are equal), while the Nemenyi post-hoc Test is used after the rejection of the Friedman's null hypothesis of equality of the medians of the samples.</ns0:p><ns0:p>Besides the significance of the differences, it is convenient to report their magnitude, too. The effect size can be computed via the Cohen's d in case of Normal of populations; the Akinshin's &#947; is used otherwise.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.1'>The two populations case</ns0:head><ns0:p>pyVHR automatically handles the above significance testing procedure within the StatAnalysis() class, by relying on the Autorank Python package <ns0:ref type='bibr' target='#b25'>(Herbold, 2020)</ns0:ref>. StatAnalysis() ingests the results produced at the previous step and runs the appropriate statistical test on a chosen comparison metric:</ns0:p><ns0:p>1 from pyVHR.analysis.stats import StatAnalysis The output of the statistical testing procedure is reported as follows:</ns0:p><ns0:p>The Shapiro-Wilk Test rejected the null hypothesis of normality for the populations POS (p &lt; 0.01) and GREEN (p &lt; 0.01). (. . . ) the Wilcoxon's signed rank test has been chosen</ns0:p><ns0:p>to determine the differences in the central tendency; median (MD) and median absolute deviation (MAD) are reported for each population. The test rejected the null hypothesis (p &lt; 0.01) that population POS (MD = 1.344 &#177; 1.256, MAD = 0.688) is not greater than population GREEN (MD = 2.297 &#177; 3.217, MAD = 1.429). Hence, we assume that the median of POS is significantly larger than the median of GREEN with a large effect size (&#947; = &#8722;0.850).</ns0:p><ns0:p>As it can be observed, the appropriate statistical test for two non-normal populations has been properly selected. 
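For readers who wish to see what such an automatic selection boils down to in the two-population case, the following is a simplified SciPy sketch of the decision logic (a caricature of what Autorank performs, not its internals).

from scipy import stats

def compare_two(pop_a, pop_b, alpha=0.05):
    # Shapiro-Wilk normality check on both paired samples
    _, p_a = stats.shapiro(pop_a)
    _, p_b = stats.shapiro(pop_b)
    if p_a > alpha and p_b > alpha:
        res = stats.ttest_rel(pop_a, pop_b)       # parametric paired t-test
        name = 'paired t-test'
    else:
        res = stats.wilcoxon(pop_a, pop_b)        # non-parametric Wilcoxon signed-rank test
        name = 'Wilcoxon signed-rank test'
    return name, res.pvalue

pyVHR users do not need to write this logic themselves, since it is handled automatically by the StatAnalysis() class as shown above.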
The Concordance Correlation Coefficient (CCC) for the method POS turned out to be significantly larger than the CCC of the method GREEN. Besides being significant, such difference is substantial, as witnessed by the large effect size.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.2'>The more-than-two populations case</ns0:head><ns0:p>Suppose now to structure the above .cfg in order to run three methods instead of two. This would be as simple as extending the 'BVP' and 'Methods' blocks as follows:</ns0:p><ns0:p>1 Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>###BVP### 2 [BVP]</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Re-running the statistical analysis would yield the following output:</ns0:p><ns0:p>The Shapiro-Wilk Test rejected the null hypothesis of normality for the populations CHROM (p &lt; 0.01), POS (p &lt; 0.01), and GREEN (p &lt; 0.01). Given that more than two populations are present, and normality hypothesis has been rejected, the non-parametric Friedman test is chosen to inspect the eventual significant differences between the medians of the populations.</ns0:p><ns0:p>The post-hoc Nemenyi test is then used to determine which differences are significant. The Notably, the presence of more than two non-normal populations leads to the choice of the non-parametric</ns0:p><ns0:p>Friedman Test as omnibus test to determine if there are any significant differences between the median values of the populations.</ns0:p><ns0:p>The box-plots showing the distributions of CCC values for all methods on the UBFC dataset is provided in Figure <ns0:ref type='figure' target='#fig_31'>13</ns0:ref>, while the output of the post-hoc Nemenyi test can be visualized through the Critical Difference (CD) diagram <ns0:ref type='bibr' target='#b18'>(Dem&#353;ar, 2006)</ns0:ref> shown in Figure <ns0:ref type='figure' target='#fig_32'>14</ns0:ref>; CD Diagrams show the average rank of each method (higher ranks meaning higher average scores); models whose difference in ranks does not exceed the CD &#945; (&#945; = 0.05) are joined by thick lines and cannot be considered significantly different. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.3.3'>Comparing Deep and Traditional Pipelines</ns0:head><ns0:p>How does a given DL-based rPPG method compares to the above mentioned traditional approaches? The In this case, the Signal-to-Noise Ratio (SNR) has been chosen as comparison metric; Figure <ns0:ref type='figure' target='#fig_33'>15</ns0:ref> qualitatively displays the results of the comparison of the above mentioned traditional methods with the MTTS-CAN DL-based approach <ns0:ref type='bibr' target='#b36'>(Liu et al., 2020)</ns0:ref>. The outcome of the statistical assessment is shown in the CD diagram of Figure <ns0:ref type='figure' target='#fig_35'>16</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>EXTENDING THE FRAMEWORK</ns0:head><ns0:p>Besides assessing built-in methods on public datasets included in the framework, the platform is conceived to allow the addition of new methods or datasets. This way, it is possible to assess a new proposal, comparing it against built-in methods, and testing it on either already included datasets or on new ones, this exploiting all the pre-and post-processing modules made available in pyVHR . 
The framework extension can be achieved following simple steps as described in the subsequent subsections.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.1'>Adding a new method</ns0:head><ns0:p>In this section we show how to add to the pyVHR framework either a new traditional or learning-based method called MY NEW METHOD.</ns0:p><ns0:p>In the first case, to exploit the pyVHR built-in modules the new function should receive as input a signal in the shape produced by the built-in pre-processing modules, together with some other parameters required by the method itself. Specifically, this results in a signature of the form:</ns0:p><ns0:p>1 MY_NEW_METHOD(signal, ** kargs)</ns0:p><ns0:p>where signal is a Numpy array in the form (P, 3, K); P is the number of considered patches (it can be 1 if the holistic approach is used), 3 is the number of RGB Channels and K is the number of frames. ** kargs refers to a dictionary that contains all the parameters required by the method at hand. A proper function implementing an rPPG method must return a BVP signal as a Numpy array of shape (P,K). In case of DL-based method, the new function should receive as input the raw frames as a Numpy array in the form (H,W,3,K), where H,W denote the frame dimensions. The output of the new method could be either a BVP signal or the HR directly.</ns0:p><ns0:p>Accordingly, the signature becomes:</ns0:p><ns0:p>1 MY_NEW_METHOD(frames, fps)</ns0:p><ns0:p>Both for traditional and DL-based method, the function call MY NEW METHOD can now be embedded into the proper Pipeline, and assessed as described earlier. In order to do so, the .cfg file should be tweaked as follows:</ns0:p><ns0:p>1 <ns0:ref type='bibr'>[MY_NEW_METHOD]</ns0:ref> 2 path = 'path/to/module.py'</ns0:p><ns0:formula xml:id='formula_16'>3 name = 'MY_NEW_METHOD' 4 ...</ns0:formula><ns0:p>Moreover, the methods block of the .cfg file is supposed to contain a specific listing describing MY NEW METHOD, providing the path to the Python module encoding the method and its function name.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.2'>Adding a new dataset</ns0:head><ns0:p>Currently pyVHR provides APIs for handling five datasets commonly adopted for the evaluation of rPPG methods, namely LGI-PPGI <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref>, UBFC <ns0:ref type='bibr' target='#b8'>(Bobbia et al., 2019)</ns0:ref>, PURE <ns0:ref type='bibr' target='#b59'>(Stricker et al., 2014)</ns0:ref>, MAHNOB-HCI <ns0:ref type='bibr' target='#b57'>(Soleymani et al., 2011)</ns0:ref>, and COHFACE <ns0:ref type='bibr' target='#b28'>(Heusch et al., 2017a)</ns0:ref>. However, the platform allows to add new datasets favoring the method assessment on new data. A comprehensive list of the datasets that are typically employed for rPPG estimation and evaluation is reported in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>The framework conceives datasets as a hierarchy of classes (see Fig. 
Dataset Subjects Task/Condition pyVHR UBFC1 <ns0:ref type='bibr' target='#b8'>(Bobbia et al., 2019)</ns0:ref> 8 Stationary &#10003; UBFC2 <ns0:ref type='bibr' target='#b8'>(Bobbia et al., 2019)</ns0:ref> 42 Interaction &#10003; PURE <ns0:ref type='bibr' target='#b59'>(Stricker et al., 2014)</ns0:ref> 10 Multiple &#10003; LGI-PPGI <ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref> 25 (6 available) Multiple &#10003; MAHNOB-HCI <ns0:ref type='bibr' target='#b57'>(Soleymani et al., 2011)</ns0:ref> 27 Emotion Elicitation &#10003; COHFACE <ns0:ref type='bibr' target='#b29'>(Heusch et al., 2017b)</ns0:ref> 40 Stationary &#10003; UBFC-Phys <ns0:ref type='bibr' target='#b56'>(Sabour et al., 2021)</ns0:ref> 56 Stress Test &#10007; AFRL <ns0:ref type='bibr' target='#b21'>(Estepp et al., 2014)</ns0:ref> 25 Multiple &#10007; MMSE-HR <ns0:ref type='bibr' target='#b72'>(Zhang et al., 2016)</ns0:ref> 140 Simulating Facial Expressions &#10007; OBF <ns0:ref type='bibr' target='#b35'>(Li et al., 2018)</ns0:ref> 106 Multiple &#10007; VIPL-HR <ns0:ref type='bibr' target='#b45'>(Niu et al., 2018)</ns0:ref> 107 Multiple &#10007; ECG-Fitness <ns0:ref type='bibr'>( &#352;petl&#237;k et al., 2018)</ns0:ref> 17 Physical Activities &#10007; Table <ns0:ref type='table'>3</ns0:ref>. A list of datasets commonly used for rPPG. The left-most column collects the dataset names and introducing papers; second column, the number of subjects involved; third column, the task or condition under which data have been collected (Stationary: subject are asked to sit still; Interaction: emulation of a human-computer interaction scenario via a time sensitive mathematical game; Multiple: more than one condition has been considered while recording subjects, such as Steady, Talking, Head Motion etc; Physical Activities: subjects are recorded while performing activities such as speaking, rowing, exercising on a stationary bike etc; Stress Test: participants are subject to tasks with different levels of difficulty inspired by the Trier Social Stress Test; Emotion Elicitation: participants were shown fragments of movies and pictures apt at eliciting emotional reactions). In the last column, datasets whose handling APIs are currently available in pyVHR have been checked.</ns0:p><ns0:p>The new dataset can then be included in the testing via the .cfg file as described in the paragraph Dataset of Section 5.2. As for the addition of new method, also in case of adding a new dataset the .cfg file should be completed by specifying the path pointing to the new dataset class: </ns0:p></ns0:div> <ns0:div><ns0:head n='7'>CASE STUDY : DEEPFAKE DETECTION WITH PYVHR</ns0:head><ns0:p>DeepFakes are a set of DL based techniques allowing to create fake videos by swapping the face of a person by that of another. This technology has many diverse applications such as expression re-enactment <ns0:ref type='bibr' target='#b3'>(Bansal et al., 2018)</ns0:ref> or video de-identification <ns0:ref type='bibr' target='#b12'>(Bursic et al., 2021)</ns0:ref>. 
However, in recent years the quality of deepfakes has reached tremendous levels of realism, thus posing a series of treats related to the possibility of arbitrary manipulation of identity, such as political propaganda, blackmailing, and fake news <ns0:ref type='bibr' target='#b43'>(Mirsky and Lee, 2021)</ns0:ref>.</ns0:p><ns0:p>As a consequence, efforts have been devoted to the study and the development of methods allowing to discriminate between real and forged videos <ns0:ref type='bibr' target='#b61'>(Tolosana et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Mirsky and Lee, 2021)</ns0:ref>. Interestingly enough, one effective approach is represented by the exploitation of physiological information <ns0:ref type='bibr' target='#b26'>(Hernandez-Ortega et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b15'>Ciftci et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Qi et al., 2020)</ns0:ref> . Indeed, signals originating from biological action such as heart beat, blood flow, or breathing are expected to be (in large part) disrupted after face-swapping.</ns0:p><ns0:p>Therefore, methods such as remote PPG can be adopted in order to evaluate their presence.</ns0:p><ns0:p>In the following, it is shown how pyVHR can be effectively employed to easily perform a DeepFake detection task. To this end, we rely on the FaceForensics++ 4 dataset <ns0:ref type='bibr' target='#b53'>(R&#246;ssler et al., 2019)</ns0:ref> consisting of 1000 original video sequences (mostly frontal face without occlusions) that have been manipulated with four automated face manipulation methods.</ns0:p><ns0:p>Each video, either original or swapped is fed as input to the pyVHR pipeline; then, the estimated BVPs and the predicted BPMs can be analyzed in order to detect DeepFakes. It is reasonable to imagine that the BVP signals estimated on original videos would have much lower complexity if compared with the swapped ones, due to the stronger presence of PPG related information that would be possibly ruled out during swapping procedures. As a consequence, BVP signals from DeepFakes would perhaps exhibit higher levels of noise and hence more complex behaviour.</ns0:p><ns0:p>There exist many ways of measuring the complexity of a signal; here we choose to compute the Fractal Dimension (FD) of BVPs; in particular the Katz's method <ns0:ref type='bibr' target='#b31'>(Katz, 1988)</ns0:ref> is employed.</ns0:p><ns0:p>The FD of the BVP estimated from the i-th patch on the k-th time window (D k i ) can be computed as <ns0:ref type='bibr' target='#b31'>(Katz, 1988)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_17'>D k i =</ns0:formula><ns0:p>log 10 (L/a) log 10 (d/a) ,</ns0:p><ns0:p>where L is the sum of distances between successive points, a is their average, and d is the maximum distance between the first point and any other point of the estimated BVP signal.</ns0:p><ns0:p>The FD associated to a given video can then be obtained via averaging:</ns0:p><ns0:p>4 Available at: https://github.com/ondyari/FaceForensics.</ns0:p></ns0:div> <ns0:div><ns0:head>24/29</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Similarly, one could consider adopting the average Median Absolute Deviation (MAD) of the BPM predictions on a video as a predictor of the presence of DeepFakes:</ns0:p><ns0:formula xml:id='formula_18'>M AD vid = 1 K K &#8721; k=0 MAD k .</ns0:formula><ns0:p>Figure <ns0:ref type='figure' target='#fig_38'>18</ns0:ref> shows how the FaceForensics++ videos lie in the 2-dimensional space defined by the average Fractal Dimension ( FD) of predicted BVPs using the POS method and the average MADs of BPM predictions ( M AD), when considering the original and swapped videos with the FaceShifter method. It is easy to see how adopting these simple statistics on pyVHR's predictions allows to discriminate original videos from DeepFakes. In particular, learning a baseline Linear SVM for the classification of Real vs. Fake videos generated by the FaceShifter method, yields an average 10-fold Cross-Validation Accuracy of 91.41% &#177; 2.05. This result is comparable with state of the art approaches usually adopting much more complex solutions.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In recent years, the rPPG-based pulse rate recovery has attracted much attention due to its promise to reduce invasiveness, while granting higher and higher precision in heart rate estimation. In particular, we have witnessed the proliferation of rPPG algorithms and models that accelerate the successful deployment in areas that traditionally exploited wearable sensors or ambulatory monitoring. These two trends, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; Ease of installation and use.</ns0:p><ns0:p>&#8226; Two distinct pipelines for either traditional or DL-based methods.</ns0:p><ns0:p>&#8226; Holistic or patch processing for traditional approaches.</ns0:p><ns0:p>&#8226; Acceleration by GPU architectures.</ns0:p><ns0:p>&#8226; Ease of extension (adding new methods or new datasets).</ns0:p><ns0:p>The adoption of GPU support allows the whole process to be safely run in real-time for 30 fps HD videos and an average speedup (time seq /time parall ) of around 5.</ns0:p><ns0:p>Besides addressing the challenges of remote Heart Rate monitoring, we also expect that this framework will be useful to researchers and practitioners from various disciplines when dealing with new problems and building new applications leveraging rPPG technology.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>BPM, uncertainty = pipe.run_on_video('/path/to/vid.avi') </ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Prediction example. Predictions on the Subject1 of the UBFC Dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The pyVHR Pipeline at a glance. (a) The multi-stage pipeline of the pyVHR framework for BPM estimate through PSD analysis exploiting end-to-end DL-based methods. (b) The multi-stage pipeline for traditional approaches that goes through: windowing and patch collection, RGB trace computation, pre-filtering, the application of an rPPG algorithm estimating a BVP signal, post-filtering and BPM estimate through PSD analysis.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. 
Landmarks automatically tracked by MediaPipe and correspondent patch tracking on a subject of the LGI-PPGI dataset<ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='7,309.59,593.60,72.63,98.86' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>from pyVHR.extraction.utils import MagicLandmarks 2 3 ldmks_list = [MagicLandmarks.cheek_left_top[16], MagicLandmarks.cheek_right_top[14], MagicLandmarks.forehead_center[1]] 4 sig_processing.set_landmarks(ldmks_list) 5 # set squares patches side dimension 6 sig_processing.set_square_patches_side(28.0) 7 #Extract square patches and compute the RGB trajectories as the channel-wise mean 8 sig = sig_processing.extract_patches(videoFileName, 'squares', 'mean')</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>timesES = sig_windowing(sig, Ws, overlap, fps) </ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Patch tracking within a frame temporal window on a subject of the LGI-PPGI dataset<ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>params={'order':6,'minHz':0.65,'maxHz':4.0,'fps':fps})8/29PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Predicted BVP signals. An example of estimated BVP signals on the same time window by four different methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Estimated PSD. Estimated Power Spectral Densities (PSD) for the BVP signals plotted in Figure 6. The BPM estimate, given by the maxima of the PSD, is represented by the blue dashed line.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 8 Figure 8 .</ns0:head><ns0:label>88</ns0:label><ns0:figDesc>Figure 8 depicts the distribution of predicted BPM in a given time window, when P = 100 patches are employed. The results from different methods are shown for comparison. Note how the median is able to deliver precise predictions, while the MAD represents a robust measure of uncertainty.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>23</ns0:head><ns0:label /><ns0:figDesc>bpmES = BVP_to_BPM(bvp, fps) 4 # median BPM from multiple estimators BPM 5 bpm, uncertainty = multi_est_BPM_median(bpmES)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Comparison of predicted vs Ground truth BPMs using the patch-wise approach.Predicted BPM (blue) for the Subject1 of the UBFC Dataset. The uncertainty is plotted in shaded blue, while the ground-truth is represented by the red line.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure10. Per-frame time requirements. Average time requirements to process one frame by the Holistic and Patches approaches when using CPU vs. GPU accelerated implementations. The green dashed line represents the real-time limit at 30 frames per second (fps).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022) Manuscript to be reviewed Computer Science as time seq /time parall , is about 5. Remarkably, similar gain in performances are observed if adopting any other rPPG method. The result shown in Figure 10 refers to the following hardware configuration: Intel Xeon Silver 4214R 2.40GHz (CPU), NVIDIA Tesla V100S PCIe 32GB (GPU). Similar results were obtained relying on a non-server configuration: Intel Core i7-8700K 4.70 GHz (CPU), NVIDIA GeForce GTX 960 2GB (GPU).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>$</ns0:head><ns0:label /><ns0:figDesc>Figure11shows a screenshot of the GUI during the online analysis of a video. On the top right are presented the video file name, the video FPS, resolution, and a radio button list to select the type of frame displayed. The original or segmented face can be visualized either selecting the Original Video or the Skin option, while the Patches radio button enables the visualization of the patches (in red). The Stop button ends the analysis, and results can be saved on disk by pushing the Save BPMs button.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2(a) depicts at a glance the flow of stages involved in the estimation of heart rate using DL based approaches. Clearly, this gain in simplicity comes at the cost of having to train the model on huge amounts of data, not to mention the issues related to the assessment of the model's generalization abilities. 13/29 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The Graphical User Interface. A screenshot of the Graphical User Interface (GUI) for online video analysis. The plot on the left shows the predicted BPMs, while on the right it is shown the processed video frames (captured with a webcam) with an example of the segmented skin and the tracked patches.</ns0:figDesc><ns0:graphic coords='15,141.73,63.78,413.57,212.66' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>Figure2(a) summarizes the steps involved in a run on video() call on a given input video. As in the pipeline using traditional methods (see Section 3), after a predetermined chain of analysis steps it produces as output the estimated BPM and related timestamps (time).For instance, consider the MTTS-CAN model currently embedded into the DeepPipeline() class; it estimates the rPPG pulse signal from which the BPM computation can be carried out by following</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>from pyVHR.extraction.sig_processing import SignalProcessing 2 from pyVHR.extraction.utils import get_fps 3 from pyVHR.BPM import BVP_to_BPM 4 from pyVHR.utils.errors import BVP_windowing 5 6 sp = SignalProcessing() 7 frames = sp.extract_raw('/path/to/videoFileName') 8 fps = get_fps('/path/to/videoFileName') 9 bvps_pred = MTTS_CAN_deep(frames, fps) winsize = 6 #6 seconds long time window bvp_win, timesES = BVP_windowing(bvp_pred, winsize, fps, stride=1) bpm = BVP_to_BPM(bvp_win, fps)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The Assessment Module at a glance. 
One or more datasets are loaded; videos are processed by the pyVHR pipeline while ground-truth BPM signals are retrieved. Predicted and real BPM are compared with standard metrics and the results are rigorously analyzed via hypothesis testing procedures.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head /><ns0:label /><ns0:figDesc>7</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>minHz':0.65, 'maxHz':4.0, 'fps':'adaptive', 'order':6} 5 RGB Signal. Defines all the parameters for the extraction of the RGB signal (e.g. ROI selection method, temporal windowing size, number and type of patches to be used, etc.Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head /><ns0:label /><ns0:figDesc>4</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_27'><ns0:head /><ns0:label /><ns0:figDesc>pipe.run_on_dataset('/path/to/config.cfg') 5 results.saveResults('/path/to/results.h5')</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_28'><ns0:head>23</ns0:head><ns0:label /><ns0:figDesc>st = StatAnalysis('/path/to/results.h5') 4 # --box plot statistics (medians) 5 st.displayBoxPlot(metric='CCC') 6 7 #testing 8 st.run_stats(metric='CCC')</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_29'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_30'><ns0:head>Friedman</ns0:head><ns0:label /><ns0:figDesc>test rejected the null hypothesis (p &lt; 0.01) of equality of the medians of the populations CHROM (MD = 1.263 &#177; 1.688, MAD = 0.515, MR = 1.385), POS (MD = 1.344 &#177; 1.513, MAD = 0.688, MR = 1.769), and GREEN (MD = 2.297 &#177; 4.569, MAD = 1.429, MR = 2.846). (. . . ) the post-hoc Nemenyi test revealed no significant differences within the following groups: CHROM and POS, while other differences are significant.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_31'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Box Plots showing the CCC values distribution for the POS, CHROM and GREEN methods on the UBFC2 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_32'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Results of the statistical assessment procedure. CD diagram displaying the results of the Nemenyi post-hoc test on the three populations (POS, CHROM and GREEN) of CCC values on the UBFC2 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_33'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Box Plots showing the SNR values distribution for the POS, CHROM, MTTS-CAN and GREEN methods on the UBFC1 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_34'><ns0:head /><ns0:label /><ns0:figDesc>17</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_35'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Results of the statistical assessment procedure. CD diagram displaying the results of the Nemenyi post-hoc test on the four populations (POS, CHROM, MTTS-CAN and GREEN) of SNR values on the UBFC1 dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_36'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. Class diagram of dataset hierarchy of classes.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_38'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18. 
Deepfake detection results. The 1000 FaceForensics++ original videos (blue) and their swapped versions (yellow) represented in the 2-D space of BVP Fractal Dimension vs. BPMs average MAD. The green and red half-spaces are simply learned via a Linear SVM.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_39'><ns0:head /><ns0:label /><ns0:figDesc>combined together, have fostered a new perspective in which advanced video-based computing techniques play a fundamental role in replacing the domain of physical sensing. In this paper, in order to allow the rapid development and the assessment of new techniques, we presented an open and very general framework, namely pyVHR . It allows for a careful study of every step, and no less important, for a sound comparison of methods on multiple datasets. pyVHR is a re-engineered version of the framework presented in Boccignone et al. (2020a) but exhibiting substantial novelties: 25/29 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,141.73,63.78,413.59,179.29' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A comparison of the freely available rPPG frameworks. Check signs mark conditions fulfilled; crosses, those neglected.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Lang.</ns0:cell><ns0:cell cols='4'>Modular Deep-Ready Multi-Data Stat. Assessment</ns0:cell></ns0:row><ns0:row><ns0:cell>pyVHR</ns0:cell><ns0:cell>Python</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10003;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>McDuff and Blackford (2019) MATLAB</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell></ns0:row><ns0:row><ns0:cell>Heusch et al. (2017a)</ns0:cell><ns0:cell>Python</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell></ns0:row><ns0:row><ns0:cell>Pilz (2019)</ns0:cell><ns0:cell>MATLAB</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10007;</ns0:cell><ns0:cell>&#10003;</ns0:cell><ns0:cell>&#10007;</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The pulse signals are optionally passed through a narrow-band filter in order to remove unwanted out-of-band frequency components.6. BPM estimation:A BPM estimate is eventually obtained through simple statistics relying on the apical points of the BVP power spectral densities.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell cols='2'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>patches 5. Post-filtering: holistic ROI Selection</ns0:cell><ns0:cell>ICA, PCA, POS, CHROM, GREEN, PBV, SSR, LGI PBV, SSR, LGI CHROM, GREEN, ICA, PCA, POS,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ICA, PCA, POS, CHROM, GREEN, PBV, SSR, LGI</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>3. Pre-filtering: Optionally, the raw RGB traces are pre-processed via canonical filtering, normaliza-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>tion or de-trending; the outcome signals provide the inputs to any subsequent rPPG method.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>4. 
BVP extraction: The rPPG method(s) at hand is applied to the time-windowed signals, thus</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>producing a collection of heart rate pulse signals (BVP estimates), one for each patch.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 Available at: https://sites.google.com/view/ybenezeth/ubfcrppg.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:cell><ns0:cell>4/29</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Comparison of the two implemented skin extraction methods. Output of the Convex-hull approach (left) and face parsing by BiSeNet (right) on a subject of the LGI-PPGI dataset<ns0:ref type='bibr' target='#b49'>(Pilz et al., 2018)</ns0:ref>.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(A) Convex Hull</ns0:cell><ns0:cell>(B) BiSeNet</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Figure 3. 1 from pyVHR.extraction.sig_processing import SignalProcessing</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>3 sig_processing = SignalProcessing()</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>4 if skin_method == 'convexhull':</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='2'>sig_processing.set_skin_extractor(SkinExtractionConvexHull(target_device))</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>6 elif skin_method == 'faceparsing':</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='2'>sig_processing.set_skin_extractor(SkinExtractionFaceParsing(target_device))</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>, which supports real-time inference speed. One example is shown in the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>right image of Figure 3.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Both extraction methods are handled in pyVHR by the SignalProcessing() class. The following</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>lines of code set-up the extractor with the desired skin extraction procedure:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>3 Available for download at https://github.com/partofthestars/LGI-PPGI-DB</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66981:1:1:NEW 16 Feb 2022)</ns0:cell><ns0:cell>5/29</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
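As a brief usage note, the extractor configured in the snippet above can be chained with the landmark and patch settings introduced earlier; the continuation below simply stitches together calls already shown in the paper (it assumes the sig_processing object and videoFileName defined above, and the parameter values are illustrative, not prescribed defaults).

from pyVHR.extraction.utils import MagicLandmarks

# choose a few skin landmarks (cheeks and forehead) and the square patch size
ldmks_list = [MagicLandmarks.cheek_left_top[16],
              MagicLandmarks.cheek_right_top[14],
              MagicLandmarks.forehead_center[1]]
sig_processing.set_landmarks(ldmks_list)
sig_processing.set_square_patches_side(28.0)
# extract square patches and compute the RGB traces as the channel-wise mean
sig = sig_processing.extract_patches(videoFileName, 'squares', 'mean')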
"Revised version of the paper: February 5, 2022 Old title: “An end-to-end Python Framework for Remote Photoplethysmography” New title: “pyVHR: A Python Framework for Remote Photoplethysmography” Ms. No.: CS-2021:10:66981:0:2:REVIEW Authors: Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Raffaella Lanzarotti, and Edoardo Mortara By taking into account all the points addressed by the Referees we have revised the manuscript. The main improvement concerns the introduction of recent methods based on deep learning. This has required to reorganize the Python code and the manuscript consequently, arriving to a much more complete and up-to-date contribution. Several other changes have been put in place, making the manuscript more readable (addition of tables, section reorganization) and exhaustive (new metric, new comparison). As a consequence, the original manuscript has been extensively rewritten. In particular, Section 1 has been extended introducing also the Deep Learning (DL) models. In addition, a table has been added to facilitate the comparison among the available rPPG frameworks. Section 3 has been reorganized, focusing on the traditional methods included in the framework. Section 4 is completely new, being designated to the DL methods dealt by the framework. Section 5 has been enriched by the SNR metric, and by comparisons between traditional and DL pipelines. Section 6 now explains also how to extend the framework adding a new DL method, and it includes a table summarizing the datasets typically employed for rPPG estimation and evaluation. All the issues raised by Reviewers have been explicitly addressed in the present version of the manuscript. All other minor remarks have also been considered and corrected in the form suggested by the Referees. In the following, it is described how the specific points raised by the Referees have been taken into account. We thank the Associate Editor and the Reviewers for all their very valuable comments and enlightening suggestions, hoping that this version will meet PeerJ Computer Science publication standards. The Authors Reply to Reviewer 1 - Basic reporting Not all abbreviations are explained in text. For e.g. pyVHR, GPU, RGB, BVP ... Please check all abbreviations. R. We thank the Reviewer for the suggestion. We have checked and added the proper definition where missing or not directly understandable - Detailed Comments None Reply to Reviewer 2 - Basic reporting The topic for this study is not new but very interesting, and the methods are clearly described. The proposed system is important for health and biomedical applications. However, some major points are required before any progress. Experimental design. The technical approach isn’t defined in detail (e.g., no mathematical equations and no justifications for design choices). The experimental design was prepared in good style. R. We would like to thank the Reviewer for grasping and appreciating the bulk of our work. Overall, the comments of the Reviewer also motivated us to better clarify the rationale of this manuscript and the main differences with previous works. As to the Experimental Design, the Reviewer subtly captured the main issue we are confronting with. 
Actually, we are not only concerned with the prima facie objective of providing a comprehensive open source Python package - which, from the beginning, includes standardized baseline algorithms accompanied by a handful of commonly used datasets - offering a standardized, easy to use and efficient pipeline that researchers involved in studying rPPG methods can fruitfully adopt and exploit to develop novel methods and compare them with others, and/or add new datasets to test existing algorithm or their own. Our long term goal is hopefully to make at least one step ahead towards a principled experimental design within this field. Clearly, the issue of experimental design in computer vision and machine learning (and rPPG) presents an insidious terrain when compared to classic experimental design as intended in biological/medical or social science realms (roughly, how participants are allocated to the different groups in an experiment and related statistical analyses). Further, as also cogently remarked by another Reviewer, it is a problem hitherto overlooked, by and large (though this problem was early noticed in 1997 in a seminal and lucid paper by Salzberg). Indeed, designing a machine learning experiment is a problem with a lot of pitfalls, in spite of the fact that large public databases are becoming increasingly popular in machine learning and computer vision, also fostered by the widespread adoption of deep learning techniques. This positive trend on common datasets (besides the problems of data bias, effects of closed-world assumption, etc, as recently pointed out by Torralba & Efros) has not been counterbalanced by principled algorithm evaluation protocols and tools, thus leading to comparative studies whose results are at best confusing. Yet, we must sincerely admit that sometimes the problem is worse than stated above, because still many people are sharing a small repository of datasets and repeatedly using the same datasets for experiments (and this is a particular concern in many rPPG studies). A comparative framework designed in this perspective, on the one hand should allow researchers whose main goal is that of “creatively” laying out new algorithms to demonstrate feasibility on an important domain, for instance by comparing to baselines (otherwise, the risk is suppressing creative work, encouraging people instead to focus on narrow studies that make slight incremental changes to previous work). On the other hand, for what concerns comparative studies should be done in a statistically acceptable framework. These studies do not usually propose an entirely new method but most often propose changes to one or more known algorithms and uses comparisons to show where and how the changes will improve performance. Included in the comparative study category are papers that neither introduce a new algorithm nor improve an old one; instead, they consider one or more known algorithms and conduct experiments on known datasets. Further, the adoption of classical tools such as t-test or ANOVA when the issue is that of statistical tests for comparisons of more algorithms on multiple datasets, which is even more essential to typical computer vision studies, might be unsuitable because most of the assumptions behind these tools (normality, etc.) 
are not likely to be satisfied (and even worse, in many papers, simple measures are often reported, e.g., average accuracy in classification studies, giving too much value to “winning” a particular dataset competition, with the surprising outcome, as documented by a handful of commendable studies in such direction - Torralba, Everingham, and others -, that, when evaluated, no statistically significant difference can be found between “treatments”, namely, top-ranked algorithms). In addition, in machine learning and computer vision one has to be aware that often coexist potentially conflicting, but valid, perspectives on what constitutes a good algorithm. The first perspective is that a good learning algorithm should perform well when trained and applied to a single population and experimental setting (but it is not expected to perform well when the resulting model is applied to different populations and settings. These “specialist algorithms” can adapt and specialize to the population at hand. Often, this is the mainstream perspective for assessing prediction algorithms and is consistent with validation procedures performed within studies. Meanwhile, there are good “generalist” algorithms that yield models that may be suboptimal for the training population, or not fully representative of the dataset at hand, but that perform reasonably well across different populations (and one example introduced in the revised manuscript - albeit not for criticism purposes but for illustrative aims- indirectly shows a modest performance of a deep learning algorithm with respect to classic “simple” ones). This latter perspective brings to the multiple dataset comparison problem. Under such circumstances, to make a long story short, the proposed framework is also designed to provide a principled help to rPPG researchers confronting with the question: how does one choose which algorithm to use for solving the problem? From this rationale, also stems the choice of not overloading the manuscript with excessive mathematical details of the algorithms/methods presently embedded within the framework (that researchers within the rPPG field are very likely to be familiar with, and those that are novel to the field can more readily grasp from the many gentle introductions and reviews that have been published over years). More details in the specific answers to comments below. - Detailed comments 1. Please add some of the most important quantitative results to the Abstract and conclusion. • R. Being the focus of the manuscript on the framework/methodology for rPPG analysis from videos, we are clearly not addressing the performance (accuracy of the estimated signal) of the methods currently embedded within pyVHR. However, we can report on the acceleration due to possibility of exploiting GPU architectures within the current version. Thus, following Reviewer’s suggestion, we added a comment on the speedups achieved using GPUs in the final part of the abstract and in the conclusions. 1. In section 1.1, the authors should clearly mention the weakness point of former works (identification of the gaps) and shows the key differences between the different previous methods and the proposed method. A comparative overview table should solve this point. • R. We thank the Reviewer for this suggestion. We have introduced Table 1 (pag 3) illustrating a comparison among freely available rPPG frameworks, which we surmise to seamlessly provide, at a glance, the “placement” of pyVHR with respect to other frameworks. 2. 
The technical approach isn’t defined in detail (e.g., no mathematical equations and no justifications for design choices). • R. We have deliberately chosen to present the pyVHR framework in the form of a gentle, albeit rigorous, tutorial presentation (and the same rationale holds as to the design of the framework), thus avoiding, for the sake of clarity of the manuscript, redundant definitions of basic rPPG concepts. The motivation behind this choice is that the expected audience of the manuscript is likely to be formed by researchers or practitioners that are at ease with the fundamentals of rPPG methods and definitions, markedly for the traditional ones (e.g., temporal and spatial mean of ROIs providing RGB traces, the spectral analysis of BVPs for BMP estimate, and, obviously, related statistical analyses). Meanwhile, most people actually involved in rPPG research do become from a variety of research realms (from biomedical research to HCI and affective computing), not necessarily having a strict computer science background and related technical skills. Thus, in the spirit of PeerJ, one of the few journals that realizes the importance of interdisciplinarity, the scientific value of software contributions, and reproducible results, we have opted for this, hopefully, seamless presentation of the the techniques and the methodology at the core of pyVHR. Such choice we surmise (due to personal interactions with colleagues in biomedical engineering and psychology) to be the most straightforward for researchers engaging with these problems. As to design choices, these have been overall further clarified in this revised manuscript. 3. Please describe your software algorithm in a flowchart or block diagram. • R. In the doubt as to whether to describe the pipe by means of a figure or flowchart, we considered the first to be more explicit. It would be pleonastic to add a flow chart as the figure clearly shows the flow of the main operations inside the pipe. 4. The authors should describe more details for the used dataset in a table instead of mentioning the reference only. • R. Those presently included within pyVHR are some publicly available third-party datasets well-known to the rPPG community. We have however implemented Reviewer’s suggestion in the novel Table 3 - listing main datasets in the field, with their salient features (task/subjects) - since we guessed it would improve the readability of the manuscript. Clearly, further datasets could be easily added, even those that researchers might design along their own experiments. 5. Please state whether the study was conducted in accordance with the Declaration of Helsinki (the author should provide the human ethics protocol number). • R. Helsinki Declaration concerns medical research involving human subjects. Here we are definitely not dealing with such problem. pyVHR is an algorithmic/statistical/software framework and can be used in any context (either medical or not, e.g., natural interaction or affective computing) where the VHR signal needs to be inferred from a video source. In the manuscript we are obviously showing, for practical purposes, examples of how to set up the pipeline with respect to a video dataset or multiple datasets for cross comparison. To such end, we are simply exploiting third-party publicly delivered datasets, that are widely used in the rPPG research field. 
But we have not involved and we are not interested in involving human subjects in our study, it is completely out of the scope of our research field and, markedly, of the submitted manuscript. Reply to Reviewer 3 Basic reporting I commend the authors for this project. It would indeed be very valuable to have an open-source Python library available that allows for consistent comparison of state-of-the-art approaches on multiple datasets. The main weakness of the project and this accompanying paper (in their current state) is that they seem to be stuck in 2018. Although the paper purports to be based on the most recent literature, there exists a whole slew of more modern, state-of-the-art rPPG methods (see detailed comments) and newer, larger datasets (e.g., VIPL-HR, OBF) which are unfortunately not addressed. To be accepted for publication in its current form, the paper needs to at least acknowledge that it focuses on legacy rPPG methods (pre-deep learning period). To make this project considerably more useful for the rPPG research community, the authors would future-proof the framework. This means relaxing the rPPG framework assumptions that existed until 2017 and supporting addition of modern rPPG models to the library, which take an entire stack of (cropped) video frames as input. There are also several other minor issues in the detailed comments below. R. We are grateful to the Reviewer for the general appreciation of our effort first, and, most important, for raising the issue of the urge to consider recent methods based on deep learning techniques. This concern has been seriously taken into account (see detailed replies below). Rather than barely acknowledge that our work focuses on legacy rPPG methods (pre-deep learning period), while accounting, in bibliographic terms, the published work suggested by the referee, we indeed engaged in augmenting the rPPG framework in order to support the addition of modern (deep) rPPG models to the library. One example (MTTS-CAN model) is now available, and most relevant, the means for plugging in and experimenting with novel deep models. Seamlessly, the statistical tools available at the end of the processing pipeline can be use independently of the approach adopted. As to the availability of benchmarks proposed in the literature, it is, as we are all aware, somewhat a sore point. Large or more recent datasets, as those suggested by the Reviewer (VIPL-HR, OBF), are unfortunately (at the moment) not addressed due to the difficulties in their availability. However, the framework is by conception open in this sense. It offers the possibility of operating with any method and any dataset (but, indeed, we are planning to extend the experimental tests to the most relevant datasets in the area, including the two indicated). Detailed comments l.1: I got slightly confused about the usage of the phase “end-to-end” in the title and throughout this paper. It seems to be taken to mean “a method which goes from raw video all the way to to HR estimates” – but then shouldn’t all rPPG methods that estimate a HR from video be described as end-to-end? It could also be seen as also contentious because the phrase has a special meaning when it comes to deep learning. R. We definitely agree. In the revised manuscript, we have crossed out the term “end-to-end” (in the title and in the text in general) as suggested by the Reviewer, because it is indeed true that such term, in the “deep learning era” calls for a specific semantics. 
Instead, we have adopted the term “multi-stage”, which seems more appropriate to describe a process that makes use of multiple computation steps, each being susceptible to be replaced and improved by novel or more efficient techniques. Further, after having seriously taken into account the crucial issue raised by the referee (deep learning approach, cfr, answer below), in this revised version, we now allow the pyVHR framework to embed deep end-to-end models; thus, the generic use of the term would have been even more confusing. l.18: ... analyzing ... R. Done. l.18: Sentence is too long. Break up into two sentences. R. Done. l.20, l.52, l. 241, l. 580, l. 668 & l.123/Figure 2: Today, virtually all novel rPPG methods use machine learning to estimate the pulse signal or the heart rate. It is not obvious how those methods fit into the framework introduced in this paper [...] This framework assumes that the selection of the region of interest is external to the definition of rPPG methods; and that all rPPG methods are defined based on the averaged RGB traces. While this framework is applicable to the methods implemented here, it is not compatible with more modern learning-based rPPG methods that usually regard entire video frames (possibly after face detection) as input [...] The authors does not seem to be aware of new learning-based methods published since 2018 [...] In light of my previous comments, and to future-proof this framework, I think it would be useful to add support for rPPG methods which rely on an entire video (potentially previously cropped to include a face) as input with dimensions [T, H, W, C] [...] As pointed out in earlier comments, framework is not general enough to accommodate recent work in the field. [...] Again, I believe the sequence of steps defined here do not apply “for the vast majority of rPPG methods proposed in the literature” – in fact, they probably only apply for very few methods proposed since 2018. Some examples of recent work that it does not apply to: • DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks. • RhythmNet: End-to-end Heart Rate Estimation from Face via Spatial-temporal Representation. https://arxiv.org/abs/1910.11515 • AutoHR: A Strong End-to-end Baseline for Remote Heart Rate Measurement with Neural Searching. https://arxiv.org/abs/2004.12292 • Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement. https://arxiv.org/abs/2006.03790 • The Benefit of Distraction: Denoising Remote Vitals Measurements using Inverse Attention. https://arxiv.org/abs/2010.07770 • MetaPhys: few-shot adaptation for non-contact physiological measurement. https://arxiv.org/abs/2010.01773 • The Way to My Heart Is Through Contrastive Learning: Remote Photoplethysmography From Unlabelled Video. • PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer. https://arxiv.org/abs/2111.12082 R. Overall, the Reviewer raised a crucial issue, which we have seriously considered. In order to avoid the project being limited to legacy methods only, we decided to engage, along the manuscript revision, in the endeavour of developing in pyVHR a parallel line suitable to embed deep end-to-end methods. This is now available to be used by researchers. 
Consequently, we have inserted in the revised manuscript a new section (Section 4: pyVHR Pipeline for Deep-Learning Methods) which, in analogy to previous Section 3 (pyVHR Pipeline for Traditional Methods), describes how to conduct analyses when deep methods are the researcher’s target (or a comparison between classic and deep techniques). Clearly, as detailed in the reply to another Reviewer, designing a machine learning experiment is a problem with a lot of pitfalls. One such pitfall is whether other researcher can replicate the data presented in such works, an issue that is often overlooked. This problem, as we all know, is cogent when addressing deep learning techniques. Many popular experiments for deciding which algorithms performs better for a given data set lack replicability when the data set is changed. There are many subtle issued involved (to the point that it has been argued that complex parameter tuning and every adjustment should really be statistically considered a separate experiment calling for careful statistical analysis. In parallel, in some cases, even before replicability of experiment, “simple” reproducibility can become an issue; the situation being worsened when open software is lacking, which allows independent use. Lack of pre-trained models or clearly stated modalities on how to carry out the training from scratch is yet another impediment to progress with serious analyses. As a matter of fact, we all experience in daily research, where we try to address real world problems, that deep architectures gaining excellent results on one or even more datasets, exhibit unexpected embarrassing performance on a novel dataset. In this preliminary step we have however engaged in the endeavour of suitably extending the Python framework to such end and managed to embed within the framework, as a practical example, the MTTS-CAN model (as clearly stated in novel Section 4), by relying on pre-training of the model, so to provide the framework and interested researchers with this possibility too. The idea is to extend the “native” inclusion in the framework to as many models as possible, while hoping that this represents one step ahead towards a principled analysis of DL-architecture performance, at least in the rPPG community (but the problem is cogent in general for the DL field). This being understood that any researcher can presently afford it by virtue of the openness the framework was designed for. But we are well aware that it is a long way to go, well beyond the rPPG problem and, obviously, the scope of our manuscript. Eventually, all the papers suggested above, and some others, have been referenced in this revised version. We hope that with the changes made and the improvements as a whole (undoubtedly fostered by the precise and enlightening suggestions of the Reviewer) the framework has gained a higher level of generality. l.70 : real time R. Done. l.118: The introduction for Section 3 confused me – two main building blocks are mentioned here, but the following subsection structure is not organized accordingly. R. Fixed. The organization now reflects the structure of the revised manuscript. l. 312: I like the detailed explanations here. Maybe the authors could use some of the information from here to qualify the statement about real-time applicability made earlier in line 70? R. In this revised version, in the abstract we explicitly state that the whole accelerated process achieves a speedup of about 5 on 30 fps HD videos. l. 
322: In my own investigations, I found that after optimizing the algorithms, at some point the computation required for spatial averaging of the skin regions can become a bottleneck. Have any investigations been made as to how the performance differs when varying the resolution of the original raw video? R. We agree with the Reviewer often finding similar behavior. In case of Full HD lossless video the performance decreases. With the hardware used in our experiments, the pipeline can be processed in real-time at 30 fps. In particular, the resources used to analyze a 1 minute Full HD lossless video are: maximum RAM usage is 2.9 GM (average 2.4 GB); maximum GPU memory usage is 2GB (1.6GB average). Compared to the hardware in our experiments, the CPU utilization for both HD and FULL HD video never exceeded 40% (25% on average). l. 355: Well done on including statistical assessments, these get ignored too often. R. We thank the Reviewer for the appreciation. We really believe that this is an imperative point for the analysis of empirical outcomes, much like in other realms of science. We agree that it is often overlooked, and markedly, we surmise that due to the introduction of deep learning techniques it is even likely to become more sensible. l. 399: Many papers on rPPG also use the SNR (signal to noise ratio) metric. Is there any reason this wasn’t included here? R. Now it has been included and it has also been used for one analysis example. "
Here is a paper. Please give your review comments after reading it.
385
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e. N-grams, and topics. Our models can predict the court's decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In his prescient work on investigating the potential use of information technology in the legal domain, Lawlor surmised that computers would one day become able to analyse and predict the outcomes of judicial decisions <ns0:ref type='bibr' target='#b16'>(Lawlor, 1963)</ns0:ref>. According to Lawlor, reliable prediction of the activity of judges would depend on a scientific understanding of the ways that the law and the facts impact on the relevant decision-makers, i.e. the judges. More than fifty years later, the advances in Natural Language Processing (NLP) and Machine Learning (ML) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.</ns0:p><ns0:p>In this paper, our particular focus is on the automatic analysis of cases of the European Court of Human Rights (ECtHR or Court). The ECtHR is an international court that rules on individual or, much more rarely, State applications alleging violations by some State Party of the civil and political rights set out in the European Convention on Human Rights (ECHR or Convention). Our task is to predict whether a particular Article of the Convention has been violated, given textual evidence extracted from a case, which comprises of specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. Our main hypotheses are that (1) the textual content, and (2) the different parts of a case are important factors that influence the outcome reached by the Court. This hypothesis is corroborated by the results. Our more general aim is to place our work within the larger context of ongoing empirical research in the theory of adjudication about the determinants of judicial decision-making. 
Accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ECHR cases could potentially provide insights on whether judges follow a so-called legal model <ns0:ref type='bibr' target='#b6'>(Grey, 1983)</ns0:ref> of decision making or their behavior conforms to the legal realists' theory <ns0:ref type='bibr' target='#b17'>(Leiter, 2007)</ns0:ref>, according to which judges primarily decide cases by responding to the stimulus of the facts of the case.</ns0:p><ns0:p>We define the problem of the ECtHR case prediction as a binary classification task. We utilise textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b39'>(Vapnik, 1998)</ns0:ref>. We apply a linear kernel function that facilitates the interpretation of models in a straightforward manner. Our models can reliably predict ECtHR decisions with high accuracy, i.e. 79% on average. Results indicate that the 'facts' section of a case best predicts the actual court's decision, which is more consistent with legal realists' insights about judicial decision-making. We also observe that the topical content of a case is an important indicator whether there is a violation of a given Article of the Convention or not.</ns0:p><ns0:p>Previous work on predicting judicial decisions, representing disciplinary backgrounds in political science and economics, has largely focused on the analysis and prediction of judges' votes given non textual information, such as the nature and the gravity of the crime or the preferred policy position of each judge <ns0:ref type='bibr' target='#b11'>(Kort, 1957;</ns0:ref><ns0:ref type='bibr' target='#b21'>Nagel, 1963;</ns0:ref><ns0:ref type='bibr' target='#b9'>Keown, 1980;</ns0:ref><ns0:ref type='bibr' target='#b32'>Segal, 1984;</ns0:ref><ns0:ref type='bibr' target='#b23'>Popple, 1996;</ns0:ref><ns0:ref type='bibr' target='#b14'>Lauderdale and Clark, 2012)</ns0:ref>.</ns0:p><ns0:p>More recent research shows that information from texts authored by amici curiae 1 improves models for predicting the votes of the US Supreme Court judges <ns0:ref type='bibr' target='#b34'>(Sim et al., 2015)</ns0:ref>. Also, a text mining approach utilises sources of metadata about judge's votes to estimate the degree to which those votes are about common issues <ns0:ref type='bibr' target='#b15'>(Lauderdale and Clark, 2014)</ns0:ref>. Accordingly, this paper presents the first systematic study on predicting the decision outcome of cases tried at a major international court by mining the available textual information.</ns0:p><ns0:p>We believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. The system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in submitted cases and prioritise the decision process on cases where violation seems to be very likely. This may improve the significant delay imposed by the court and encourage more applications by individuals who may have been discouraged by the expected time delays.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>European Court of Human Rights</ns0:head><ns0:p>The ECtHR is an international court set up in 1959 by the ECHR. 
The court has jurisdiction to rule on the applications of individuals or sovereign states alleging violations of the civil and political rights set out in the Convention. The ECHR is an international treaty for the protection of civil and political liberties in European democracies committed to the rule of law. The treaty was initially drafted in 1950 by the ten states which had created the Council of Europe in the previous year. Membership in the Council entails becoming party to the Convention and all new members are expected to ratify the ECHR at the earliest opportunity. The Convention itself entered into force in 1953. Since 1949, the Council of Europe and thus the Convention have expanded significantly to embrace forty-seven states in total, with a combined population of nearly 800 million. Since 1998, the Court has sat as a full-time court and individuals can apply to it directly, if they can argue that they have voiced their human rights grievance by exhausting all effective remedies available to them in their domestic legal systems before national courts.</ns0:p></ns0:div> <ns0:div><ns0:head>Case Processing by the Court</ns0:head><ns0:p>The vast majority of applications lodged with the court are made by individuals. Applications are first assessed at a prejudicial stage on the basis of a list of admissibility criteria. The criteria pertain to a number of procedural rules, chief amongst which is the one on the exhaustion of effective domestic remedies. If the case passes this first stage, it can either be allocated to a single judge, who may declare the application inadmissible and strike it out of the Court's list of cases, or be allocated to a Committee or a Chamber. A large number of the applications, according to the court's statistics fail this first admissibility stage. Thus, to take a representative example, according to 1 An amicus curiae (friend of the court) is a person or organisation that offers testimony before the Court in the context of a particular case without being a formal party to the proceedings.</ns0:p></ns0:div> <ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:1:0:NEW 11 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science the Court's provisional annual report for the year 2015 2 , 900 applications were declared inadmissible or struck out of the list by Chambers, approximately 4,100 by Committees and some 78,700 by single judges.</ns0:p><ns0:p>To these correspond, for the same year, 891 judgments on the merits. Moreover, cases held inadmissible or struck out are not reported, which entails that a text-based predictive analysis of them is impossible.</ns0:p><ns0:p>It is important to keep this point in mind, since our analysis was solely performed on cases retrievable through the electronic database of the court, HUDOC 3 . The cases analysed are thus the ones that have already passed the first admissibility stage 4 , with the consequence that the Court decided on these cases' merits under one of its formations.</ns0:p><ns0:p>Case Structure The judgments of the Court have a distinctive structure, which makes them particularly suitable for a text-based analysis. According to Rule 74 of the Rules of the Court 5 , a judgment contains (among other things) an account of the procedure followed on the national level, the facts of the case, a summary of the submissions of the parties, which comprise their main legal arguments, the reasons in point of law articulated by the Court and the operative provisions. 
Judgments are clearly divided into different sections covering these contents, which allows straightforward standardisation of the text and consequently renders possible text-based analysis. More specifically, the sections analysed in this paper are the following:</ns0:p><ns0:p>&#8226; Procedure: This section contains the procedure followed before the Court, from the lodging of the individual application until the judgment was handed down.</ns0:p><ns0:p>&#8226; The Facts: This section comprises all material which is not considered as belonging to points of law, i.e. legal arguments. It is important to stress that the facts in the above sense do not just refer to actions and events that happened in the past as these have been formulated by the Court, giving rise to an alleged violation of a Convention article. The 'Facts' section is divided in the following subsections:</ns0:p><ns0:p>-The Circumstances of the Case: This subsection has to do with the factual background of the case and the procedure (typically) followed before domestic courts before the application was lodged by the Court. This is the part that contains materials relevant to the individual applicant's story in its dealings with the respondent state's authorities. It comprises a recounting of all actions and events that have allegedly given rise to a violation of the ECHR.</ns0:p><ns0:p>With respect to this subsection, a number of crucial clarifications and caveats should be stressed. To begin with, the text of the 'Circumstances' subsection has been formulated by the Court itself. As a result, it should not always be understood as a neutral mirroring of the factual background of the case. The choices made by the Court when it comes to formulations of the facts incorporate implicit or explicit judgments to the effect that some facts are more relevant than others. This leaves open the possibility that the formulations used by the Court may be tailor-made to fit a specific preferred outcome. We openly acknowledge this possibility, but we believe that there are several ways in which it is mitigated. First, the ECtHR has very limited fact-finding powers. This implies that, in the vast majority of cases, it will defer, when summarizing the factual background of a case, to the judgments of domestic courts that have already heard and dismissed the applicants' complaint. While the latter may also reflect assumptions about relevance, they also provide formulations of the facts that have been validated by more than one decision-maker. Second, the Court cannot openly acknowledge any kind of bias on its part. This means that, on their face, summaries of facts found in the 'Circumstances' section have to be at least framed in as neutral and impartial a way as possible. Third, a a cursory examination of many ECtHR cases indicates that, in the vast majority of cases, parties do not seem to dispute the facts themselves, as 2 ECHtR provisional annual report for the year 2015, http://www.echr.coe.int/Documents/Annual_report_ 2015_ENG.pdf 3 HUDOC ECHR Database, http://hudoc.echr.coe.int/ 4 Nonetheless, not all cases that pass this first admissibility stage are decided in the same way. 
While the individual judge's decision on admissibility is final and does not comprise the obligation to provide reasons, a Committee deciding a case may, by unanimous vote, declare the application admissible and render a judgment on its merits, if the legal issue raised by the application is covered by well-established case-law by the Court.</ns0:p><ns0:p>5 Rules of ECtHR, http://www.echr.coe.int/Documents/Rules_Court_ENG.pdf</ns0:p></ns0:div> <ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:1:0:NEW 11 Jul 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science contained in the 'Circumstances' subsection, but only their legal significance (i.e. whether a violation took place or not, given those facts). As a result, the 'Circumstances' subsection contains formulations on which, in the vast majority of cases, disputing parties agree. Last, and in any event, the 'Circumstances' subsection is the closest (even if sometimes crude)</ns0:p><ns0:p>proxy we have to a reliable textual representation of the factual background of a case.</ns0:p><ns0:p>-Relevant Law: This subsection of the judgment contains all legal provisions other than the articles of the Convention that can be relevant to deciding the case. These are mostly provisions of domestic law, but the Court also frequently invokes other pertinent international or European treaties and materials.</ns0:p><ns0:p>&#8226; The Law: The Law section is focused on considering the merits of the case, through the use of legal argument. Depending on the number of issues raised by each application, the section is further divided into subsections that examine individually each alleged violation of some Convention article (see below). However, the Court in most cases refrains from examining all such alleged violations in detail. Insofar as the same claims can be made by invoking more than one article of the Convention, the Court frequently decides only those that are central to the arguments made.</ns0:p><ns0:p>Moreover, the Court also frequently refrains from deciding on an alleged violation of an article, if it overlaps sufficiently with some other violation it has already decided on.</ns0:p><ns0:p>- &#8226; Operative Provisions: This is the section where the Court announces the outcome of the case, which is a decision to the effect that a violation of some Convention article either did or did not take place. Sometimes it is coupled with a decision on the division of legal costs and, much more rarely, with an indication of interim measures, under article 39 of the ECHR.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows extracts of different sections from the Case of Velcheva v. Bulgaria following the structure described above.</ns0:p></ns0:div> <ns0:div><ns0:head>Data</ns0:head><ns0:p>We create a data set 6 consisting of cases related to Articles 3, 6, and 8 of the Convention. We focus on these three articles for two main reasons. First, these articles provided the most data we could automatically scrape. Second, it is of crucial importance that there should be a sufficient number of cases available, in order to test the models. Cases from the selected articles fulfilled both criteria. 
Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> shows the Convention right that each article protects and the number of cases in our data set.</ns0:p><ns0:p>For each article, we first retrieve all the cases available in HUDOC. Then, we keep only those that are in English and parse them following the case structure presented above. We then select an equal number of violation and non-violation cases for each particular article of the Convention. To achieve a balanced number of violation/non-violation cases, we first count the number of cases available in each class. Then, 6 The data set is publicly available for download from https://figshare.com/s/6f7d9e7c375ff0822564 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science we choose all the cases in the smaller class and randomly select an equal number of cases from the larger class. This results to a total of 250, 80 and 254 cases for Articles 3, 6 and 8 respectively.</ns0:p><ns0:p>Finally, we extract the text under each part of the case by using regular expressions, making sure that any sections on operative provisions of the Court are excluded. In this way, we ensure that the models do not use information pertaining to the outcome of the case. We also preprocess the text by lower-casing and removing stop words (i.e. frequent words that do not carry significant semantic information) using the list provided by NLTK 7 .</ns0:p></ns0:div> <ns0:div><ns0:head>Description of Textual Features</ns0:head><ns0:p>We derive textual features from the text extracted from each section (or subsection) of each case. These are either N-gram features, i.e. contiguous word sequences, or word clusters, i.e. abstract semantic topics.</ns0:p><ns0:p>&#8226; N-gram Features: The Bag-of-Words (BOW) model <ns0:ref type='bibr' target='#b30'>(Salton et al., 1975;</ns0:ref><ns0:ref type='bibr' target='#b29'>Salton and McGill, 1983)</ns0:ref> is a popular semantic representation of text used in NLP and Information Retrieval. In a BOW model, a document (or any text) is represented as the bag (multiset) of its words (unigrams) or N-grams without taking into account grammar, syntax and word order. That results to a vector space representation where documents are represented as m-dimensional variables over a set of m N-grams. N-gram features have been shown to be effective in various supervised learning tasks <ns0:ref type='bibr' target='#b0'>(Bamman et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b13'>Lampos and Cristianini, 2012)</ns0:ref>. For each set of cases in our data set, we compute the top-2000 most frequent N-grams where N &#8712; {1, 2, 3, 4}. Each feature represents the normalized frequency of a particular N-gram in a case or a section of a case. This can be considered as a feature matrix, C &#8712; R c&#215;m , where c is the number of the cases and m = 2000. We extract N-gram features for the Procedure (Procedure), Circumstances (Circumstances), Facts (Facts),</ns0:p><ns0:p>Relevant Law (Relevant Law), Law (Law) and the Full case (Full) respectively. Note that the representations of the Facts is obtained by taking the mean vector of Circumstances and Relevant Law. In a similar way, the representation of the Full case is computed by taking the mean vector of all of its sub-parts.</ns0:p><ns0:p>&#8226; Topics: We create topics for each article by clustering together N-grams that are semantically similar by leveraging the distributional hypothesis suggesting that similar words appear in similar contexts. 
We thus use the C feature matrix (see above), which is a distributional representation A representation of a cluster is derived by looking at the most frequent N-grams it contains. The main advantages of using topics (sets of N-grams) instead of single N-grams is that it reduces the dimensionality of the feature space, which is essential for feature selection, it limits overfitting to training data <ns0:ref type='bibr' target='#b12'>(Lampos et al., 2014;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015)</ns0:ref> and also provides a more concise semantic representation. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Classification Model</ns0:head><ns0:p>The problem of predicting the decisions of the ECtHR is defined as a binary classification task. Our goal is to predict if, in the context of a particular case, there is a violation or non-violation in relation to a specific Article of the Convention. For that purpose, we use each set of textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b39'>(Vapnik, 1998)</ns0:ref>. SVMs are a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets <ns0:ref type='bibr' target='#b7'>(Joachims, 2002;</ns0:ref><ns0:ref type='bibr' target='#b41'>Wang and Manning, 2012)</ns0:ref>. We employ a linear kernel since that allows us to identify important features that are indicative of each class by looking at the weight learned for each feature <ns0:ref type='bibr' target='#b4'>(Chang and Lin, 2008)</ns0:ref>. We label all the violation cases as +1, while no violation is denoted by &#8722;1. Therefore, features assigned with positive weights are more indicative of violation, while features with negative weights are more indicative of no violation.</ns0:p><ns0:p>The models are trained and tested by applying a stratified 10-fold cross validation, which uses a heldout 10% of the data at each stage to measure predictive performance. The linear SVM has a regularisation parameter of the error term C, which is tuned using grid-search. For Articles 6 and 8, we use the Article 3 data for parameter tuning, while for Article 3 we use Article 8.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Predictive Accuracy</ns0:head><ns0:p>We compute the predictive performance of both sets of features on the classification of the ECtHR cases. Performance is computed as the mean accuracy obtained by 10-fold cross-validation. Accuracy is computed as follows:</ns0:p><ns0:formula xml:id='formula_0'>Accuracy = TV + T NV V + NV<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where TV and T NV are the number of cases correctly classified that there is a violation an article of the Convention or not respectively. V and NV represent the total number of cases where there is a violation or not respectively. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the accuracy of each set of features across articles using a linear SVM. The rightmost column also shows the mean accuracy across the three articles. In general, both N-gram and topic features achieve good predictive performance. Our main observation is that both language use and topicality are important factors that appear to stand as reliable proxies of judicial decisions. 
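To make the setup just described concrete, the following is a minimal sketch, assuming scikit-learn, of the N-gram feature extraction (top-2000 unigrams to 4-grams represented as normalised frequencies) and the linear-SVM evaluation under stratified 10-fold cross-validation. The toy documents, labels and fixed C value are illustrative stand-ins, not the authors' code: the paper uses NLTK's stop-word list and tunes C by grid search on a held-out Article.

```python
# Hedged sketch of the described pipeline (not the authors' code).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy stand-ins for the extracted case texts and their outcomes.
docs = (["the applicant alleged ill treatment in police custody"] * 20
        + ["the applicant complained about the length of civil proceedings"] * 20)
y = np.array([1] * 20 + [-1] * 20)       # +1 violation, -1 no violation

# Top-2000 most frequent N-grams, N in {1,...,4}, as normalised frequencies
# (the paper removes stop words with NLTK's list; sklearn's built-in list is used here).
vec = CountVectorizer(ngram_range=(1, 4), max_features=2000,
                      lowercase=True, stop_words="english")
X = normalize(vec.fit_transform(docs), norm="l1")

# Linear SVM; in the paper C is tuned by grid search on a held-out Article.
clf = SVC(kernel="linear", C=1.0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean())                      # mean accuracy, i.e. (TV + TNV) / (V + NV)
```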
Therefore, we take a further look into the models by attempting to interpret the differences in accuracy.</ns0:p><ns0:p>We observe that 'Circumstances' is the best subsection to predict the decisions for cases in Articles 6 and 8, with a performance of .82 and .77 respectively. In Article 3, we obtain better predictive accuracy (.70) using the text extracted from the full case ('Full') while the performance of 'Circumstances' is almost comparable (.68). We should again note here that the 'Circumstances' subsection contains information regarding the factual background of the case, as this has been formulated by the Court. The subsection Manuscript to be reviewed</ns0:p><ns0:p>Computer Science therefore refers to the actions and events which triggered the case and gave rise to a claim made by an individual to the effect that the ECHR was violated by some state. On the other hand, 'Full', which is a mixture of information contained in all of the sections of a case, surprisingly fails to improve over using only the 'Circumstances' subsection. This entails that the factual background contained in the 'Circumstances' is the most important textual part of the case when it comes to predicting the Court's decision.</ns0:p><ns0:p>The other sections and subsections that refer to the facts of a case, namely 'Procedure', 'Relevant Law' and 'Facts' achieve somewhat lower performance (.73 cf. .76), although they remain consistently above chance. Recall, at this point, that the 'Procedure' subsection consists only of general details about the applicant, such as the applicant's name or country of origin and the procedure followed before domestic courts.</ns0:p><ns0:p>On the other hand, the 'Law' subsection, which refers either to the legal arguments used by the parties or to the legal reasons provided by the Court itself on the merits of a case consistently obtains the lowest performance (.62). One important reason for this poor performance is that a large number of cases does not include a 'Law' subsection, i.e. 162, 52 and 146 for Articles 3, 6 and 8 respectively. That happens in cases that the Court deems inadmissible, concluding to a judgment of non-violation. In these cases, the judgment of the Court is more summary than in others.</ns0:p><ns0:p>We also observe that the predictive accuracy is high for all the Articles when using the 'Topics' as features, i.e. .78, .81 and .76 for Articles 3, 6 and 8 respectively. 'Topics' obtain best performance in Article 3 and performance comparable to 'Circumstances' in Articles 6 and 8. 'Topics' form a more abstract way of representing the information contained in each case and capture a more general gist of the cases.</ns0:p><ns0:p>Combining the two best performing sets of features ('Circumstances' and 'Topics') we achieve the best average classification performance (.79). The combination also yields slightly better performance for Articles 6 and 8 while performance slightly drops for Article 3. That is .75, .84 and .78 for Articles 3, 6 and 8 respectively. </ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The consistently more robust predictive accuracy of the 'Circumstances' subsection suggests a strong correlation between the facts of a case, as these are formulated by the Court in this subsection, and the decisions made by judges. 
The relatively lower predictive accuracy of the 'Law' subsection could also be Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science observed, many inadmissibility cases do not contain a separate 'Law' subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>Legal Formalism and Realism</ns0:head><ns0:p>These results could be understood as providing some evidence for judicial decision-making approaches according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide appellate cases. Without going into details with respect to a particularly complicated debate that is out of the scope of this paper, we may here simplify by observing that since the beginning of the 20th century, there has been a major contention between two opposing ways of making sense of judicial decision-making: legal formalism and legal realism <ns0:ref type='bibr' target='#b24'>(Posner, 1986;</ns0:ref><ns0:ref type='bibr' target='#b36'>Tamanaha, 2009;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leiter, 2010)</ns0:ref>. Very roughly, legal formalists have provided a legal model of judicial decision-making, claiming that the law is rationally determinate: judges either decide cases deductively, by subsuming facts under formal legal rules or use more complex legal reasoning than deduction whenever legal rules are insufficient to warrant a particular outcome <ns0:ref type='bibr' target='#b25'>(Pound, 1908;</ns0:ref><ns0:ref type='bibr' target='#b8'>Kennedy, 1973;</ns0:ref><ns0:ref type='bibr' target='#b6'>Grey, 1983;</ns0:ref><ns0:ref type='bibr' target='#b22'>Pildes, 1999)</ns0:ref>. On the other hand, legal realists have criticized formalist models, insisting that judges primarily decide appellate cases by responding to the stimulus of the facts of the case, rather than on the basis of legal rules or doctrine, which are in many occasions rationally indeterminate <ns0:ref type='bibr' target='#b19'>(Llewellyn, 1996;</ns0:ref><ns0:ref type='bibr' target='#b31'>Schauer, 1998;</ns0:ref><ns0:ref type='bibr' target='#b1'>Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b17'>Leiter, 2007;</ns0:ref><ns0:ref type='bibr' target='#b20'>Miles and Sunstein, 2008)</ns0:ref>.</ns0:p><ns0:p>Extensive empirical research on the decision-making processes of various supreme and international courts, and especially the US Supreme Court, has indicated rather consistently that pure legal models, especially deductive ones, are false as an empirical matter when it comes to appellate cases. As a result, it is suggested that the best way to explain past appellate decisions and to predict future ones is by placing emphasis on other kinds of empirical variables that affect judges <ns0:ref type='bibr' target='#b1'>(Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b31'>Schauer, 1998)</ns0:ref>. For example, early legal realists had attempted to classify cases in terms of regularities that can help predict outcomes, in a way that did not reflect standard legal doctrine <ns0:ref type='bibr' target='#b19'>(Llewellyn, 1996)</ns0:ref>. 
Likewise, the attitudinal model for the US Supreme Court claims that the best predictors of its decisions are the policy preferences of the Justices and not legal doctrinal arguments <ns0:ref type='bibr' target='#b33'>(Segal and Spaeth, 2002)</ns0:ref>.</ns0:p><ns0:p>In general, and notwithstanding the simplified snapshot of a very complex debate that we just presented, our results could be understood as lending some support to the basic legal realist conception according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide hard appellate cases. In particular, if we accept that the 'Circumstances' subsection, with all the caveats we have already voiced, is a (crude) proxy for non-legal facts and the 'Law' subsection is a (crude) proxy for legal reasons and arguments, the predictive superiority of the 'Circumstances' subsection seems to cohere with extant legal realist treatments of appellate judicial decision making.</ns0:p><ns0:p>However, not more should be read into this than our results allow. First, as we have already stressed at several occasions, the 'Circumstances' subsection is not a neutral statement of the facts of the case.</ns0:p><ns0:p>Second, it is important to underline that the results should also take into account the so-called selection effect <ns0:ref type='bibr' target='#b28'>(Priest and Klein, 1984)</ns0:ref> that pertains to cases judged by the ECtHR as an international court. Given that the largest percentage of applications never reaches the Chamber or, still less, the Grand Chamber, and that cases have already been tried at the national level, it could very well be the case that the set of ECtHR decisions on the merits primarily refers to cases in which the class of legal reasons, defined in a formal sense, is already considered as indeterminate by competent interpreters. This could help explain why judges primarily react to the facts of the case, rather than to legal arguments. Thus, further text-based analysis is needed in order to determine whether the results could generalise to other courts, especially to domestic courts deciding ECHR claims that are placed lower within the domestic judicial hierarchy.</ns0:p><ns0:p>Third, our discussion of the realism/formalism debate is overtly simplified and does not imply that the results could not be interpreted in a sophisticated formalist way. Still, our work coheres well with a bulk of other empirical approaches in the legal realist vein.</ns0:p></ns0:div> <ns0:div><ns0:head>Topic Analysis</ns0:head><ns0:p>The topics further exemplify this line of interpretation and provide proof of the usefulness of the NLP approach. The linear kernel of the SVM model can be used to examine which topics are most important for inferring whether an article of the Convention has been violated or not by looking at their weights w.</ns0:p><ns0:p>Tables 3, 4 and 5 present the six topics for the most positive and negative SVM weights for the articles 3, 6 and 8 respectively. Topics identify in a sufficiently robust manner patterns of fact scenarios that correspond to well-established trends in the Court's case law. <ns0:ref type='bibr' target='#b37'>(Tsarapatsanis, 2015)</ns0:ref>. A representative case here is Oao Plodovaya Kompaniya v. Russia of 7 June 2007. 
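The weight-based ranking behind Tables 3, 4 and 5 can be reproduced, in principle, with a few lines of code. The snippet below is a hedged illustration that assumes a scikit-learn SVM with linear kernel fitted on topic features; the random matrix and toy labels stand in for the real data.

```python
# Hedged sketch: reading off the most predictive topics from a linear SVM's weights.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_topics = rng.rand(40, 30)                   # toy matrix: 40 cases x 30 topic features
y = np.where(X_topics[:, 0] > 0.5, 1, -1)     # toy labels: +1 violation, -1 no violation
clf = SVC(kernel="linear", C=1.0).fit(X_topics, y)

w = clf.coef_.ravel()                         # one signed weight per topic
order = np.argsort(w)
print("most predictive of violation:    topics", order[::-1][:6].tolist())
print("most predictive of no violation: topics", order[:6].tolist())
```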
Consequently, the topics identify independently well-established trends in the case law without recourse to expert legal/doctrinal analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>First, topic 13 in</ns0:head><ns0:p>The above observations require to be understood in a more mitigated way with respect to a (small) number of topics. For instance, most representative cases for topic 8 in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> were not particularly informative. This is because these were cases involving a person's death, in which claims of violations of Article 3 (inhuman and degrading treatment) were only subsidiary: this means that the claims were mainly about Article 2, which protects the right to life. In these cases, the absence of a violation, even if correctly identified, is more of a technical issue on the part of the Court, which concentrates its attention on Article 2 and rarely, if ever, moves on to consider independently a violation of Article 3. This is exemplified by cases such as Buldan v. Turkey of 20 April 2004 and Nuray S &#184;en v. Turkey of 30 March 2004, which were, again, correctly identified.</ns0:p><ns0:p>On the other hand, cases have been misclassified mainly because their textual information is similar to cases in the opposite class. We observed a number of cases where there is a violation having a very similar feature vector to cases that there is no violation and vice versa.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We presented the first systematic study on predicting judicial decisions of the European Court of Human Rights using only the textual information extracted from relevant sections of ECtHR judgments. We framed this task as a binary classification problem, where the training data consists of textual features extracted from given cases and the output is the actual decision made by the judges.</ns0:p><ns0:p>Apart from the strong predictive performance that our statistical NLP framework achieved, we have reported on a number of qualitative patterns that could potentially drive judicial decisions. More specifically, we observed that the information regarding the factual background of the case as this is formulated by the Court in the relevant subsection of its judgments is the most important part obtaining on average the strongest predictive performance of the Court's decision outcome. We suggested that, even if understood as a crude proxy and with all the caveats that we have highlighted, the rather robust correlation between the outcomes of cases and the text corresponding to fact patterns contained in the relevant subsections coheres well with other empirical work on appellate judicial decision-making and backs basic legal realist intuitions.</ns0:p><ns0:p>Finally, we believe that our study opens up avenues for future work, using different kinds of data (e.g. texts of individual applications, or domestic judgments) coming from various sources (e.g. national authorities or law firms).</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Extracts of different parts of the Case of Velcheva v. 
Bulgaria http://hudoc.echr.coe.int/sites/eng/pages/search.aspx?i=001-155099.</ns0:figDesc><ns0:graphic coords='5,170.68,517.15,355.66,137.11' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Alleged Violation of Article x: Each subsection of the judgment examining alleged violations in depth is divided into two sub-sections. The first one contains the Parties' Submissions. The second one comprises the arguments made by the Court itself on the Merits. * Parties' Submissions: The Parties' Submissions typically summarise the main arguments made by the applicant and the respondent state. Since in the vast majority of cases the material facts are taken for granted, having been authoritatively established by domestic courts, this part has almost exclusively to do with the legal arguments used by the parties. * Merits: This subsection provides the legal reasons that purport to justify the specific outcome reached by the Court. Typically, the Court places its reasoning within a wider set of rules, principles and doctrines that have already been established in its past caselaw and attempts to ground the decision by reference to these. It is to be expected, then, that this subsection refers almost exclusively to legal arguments, sometimes mingled with bits of factual information repeated from previous parts.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b38'>Turney and Pantel, 2010)</ns0:ref> of the N-grams given the case as the context; each column vector of the matrix represents an N-gram. Using this vector representation of words, we compute N-gram similarity using the cosine metric and create an N-gram by N-gram similarity matrix. We finally apply spectral clustering (von Luxburg, 2007) -which performs graph partitioning on the similarity matrix -to obtain 30 clusters of N-grams. For Articles 6 and 8, we use the Article 3 data for selecting the number of clusters T , where T = {10, 20, ..., 100}, while for Article 3 we use Article 8. 
Given that the obtained topics are hard clusters, an N-gram can only be part of a single topic.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Articles of the Convention and number of cases in the data set.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Article Human Right</ns0:cell><ns0:cell>Cases</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Prohibits torture and inhuman and degrading treatment</ns0:cell><ns0:cell>250</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Protects the right to a fair trial</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Provides a right to respect for one's 'private and family life, his</ns0:cell><ns0:cell>254</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>home and his correspondence'</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Feature Type</ns0:cell><ns0:cell cols='4'>Article 3 Article 6 Article 8 Average</ns0:cell></ns0:row><ns0:row><ns0:cell>N-grams</ns0:cell><ns0:cell>Full</ns0:cell><ns0:cell>.70 (.10)</ns0:cell><ns0:cell>.82 (.11)</ns0:cell><ns0:cell>.72 (.05)</ns0:cell><ns0:cell>.75</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Procedure</ns0:cell><ns0:cell>.67 (.09)</ns0:cell><ns0:cell>.81 (.13)</ns0:cell><ns0:cell>.71 (.06)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Circumstances .68 (.07)</ns0:cell><ns0:cell>.82 (.14)</ns0:cell><ns0:cell>.77 (.08)</ns0:cell><ns0:cell>.76</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Relevant Law</ns0:cell><ns0:cell>.68 (.13)</ns0:cell><ns0:cell>.78 (.08)</ns0:cell><ns0:cell>.72 (.11)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Facts</ns0:cell><ns0:cell>.70 (.09)</ns0:cell><ns0:cell>.80 (.14)</ns0:cell><ns0:cell>.68 (.10)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Law</ns0:cell><ns0:cell>.56 (.09)</ns0:cell><ns0:cell>.68 (.15)</ns0:cell><ns0:cell>.62 (.05)</ns0:cell><ns0:cell>.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Topics</ns0:cell><ns0:cell>.78 (.09)</ns0:cell><ns0:cell>.81 (.12)</ns0:cell><ns0:cell>.76 (.09)</ns0:cell><ns0:cell>.78</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Topics and Circumstances</ns0:cell><ns0:cell cols='3'>.75 (.10) .84 (0.11) .78 (0.06)</ns0:cell><ns0:cell>.79</ns0:cell></ns0:row></ns0:table><ns0:note>of the different feature types across articles. Accuracy of predicting violation/non-violation of cases across articles on 10-fold cross-validation using an SVM with linear kernel. Parentheses contain the standard deviation from the mean. Accuracy of random guess is .50. Bold font denotes best accuracy in a particular Article or on Average across Articles.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The most predictive topics for Article 3 decisions. Most predictive topics for Article 3, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. 
Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The most predictive topics for Article 6 decisions. Most predictive topics for Article 6, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The most predictive topics for Article 8 decisions. Most predictive topics for Article 8, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table><ns0:note>286 9/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:1:0:NEW 11 Jul 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>amount to inhuman and degrading treatment under Article 3. That is correctly identified as typically not giving rise to a violation 8 . For example, cases 9 such as Kafkaris v. Cyprus ([GC] no. 21906/04, ECHR 2008-I), Hutchinson v. UK (no. 57592/08 of 3 February 2015) and Enea v. Italy ([GC], no. 74912/01, ECHR 2009-IV) were identified as exemplifications of this trend. Likewise, topic 28 in Table5has to do with whether certain choices with regard to the social policy of states can amount to a violation of Article 8. That was correctly identified as typically not giving rise to a violation, in line with the Court's tendency to acknowledge a large margin of appreciation to states in this area<ns0:ref type='bibr' target='#b5'>(Greer, 2000)</ns0:ref>. In this vein, cases such as Aune v. Norway (no. 52502/07 of 28 October 2010) and Ball v. Andorra (Application no. 40628/10 of 11 December 2012) are examples of cases where topic 28 is dominant. Similar observations apply, among other things, to topics 27, 23 and 24. That includes issues with the enforcement of domestic judgments giving rise to a violation of Article 6<ns0:ref type='bibr' target='#b10'>(Kiestra, 2014)</ns0:ref>. Some representative cases are Velskaya v. Russia, of 5 October 2006 and Aleksandrova v. Russia of 6 December 2007. Topic 7 in Table4is related to lower standard of review when property rights are at play</ns0:figDesc><ns0:table /><ns0:note>Table 3 has to do with whether long prison sentences and other detention measures 10/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:1:0:NEW 11 Jul 2016) Manuscript to be reviewed Computer Science can</ns0:note></ns0:figure> </ns0:body> "
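As a companion to the topic construction described in the paper above (cosine similarity between the N-gram column vectors of the case-by-N-gram matrix C, followed by spectral clustering into 30 hard clusters), here is a minimal sketch assuming scikit-learn. The random count matrix and the fixed cluster number are illustrative stand-ins for the paper's data and its tuned value of T.

```python
# Hedged sketch: hard N-gram clusters ("topics") via spectral clustering
# on an N-gram-by-N-gram cosine-similarity matrix.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

rng = np.random.RandomState(0)
C = rng.poisson(1.0, size=(120, 300)).astype(float)   # cases x N-grams (stand-in counts)

S = cosine_similarity(C.T)                            # N-gram by N-gram similarity
labels = SpectralClustering(n_clusters=30, affinity="precomputed",
                            random_state=0).fit_predict(S)

# Represent each topic by its most frequent member N-grams (here, column indices).
freq = C.sum(axis=0)
for t in range(3):                                    # show a few topics
    members = np.where(labels == t)[0]
    top = members[np.argsort(freq[members])[::-1][:5]]
    print("topic", t, ": n-gram columns", top.tolist())
```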
"We would like to thank the editor and the reviewers for their constructive comments. See below our point­by­point response. Reviewer 1 1. The description, under 'Data', of how cases were selected needs clarification in several ways (lines 165­182). First, when it is said that Article 3 'Prohibits torture', are we to understand that the study does not cover the other prohibitions contained in Article 3 (such as the prohibition on inhuman treatment)? Precision is important in legal writing and it is important here. The study does cover the other prohibitions contained in Article 3 (such as inhuman and degrading treatment). Accordingly, we have added the full description of the Articles in Table 1. 2. Second, the number of cases seems much lower than what one would expect, and much lower than what a rudimentary search of the HUDOC database generates. For example, a basic HUDOC search of Article 6, in English, generates over 10,000 cases. Of those, HUDOC indicates there at least 8,000 violation cases and at least 900 non­violation cases. If the methodology at lines 172­177 is replicable, one would expect that this study would have included at least 1,800 cases in its study of Article 6 (being all the cases in the smaller class plus a randomly selected equal number of cases from the larger class). And yet here the number studied is just 80. It is hard to discern the reasons for the discrepancy between the draft article and the results of a basic HUDOC search. At the beginning of our study, we planned to manually develop the data set using experts in the School of Law of the University of Sheffield. We quickly realised that this process was very slow and thus infeasible. Then, we decided to automate the process by devising a “reasonable” common structure in the format of the case reports. This includes the main parts of “Procedure”, “The Facts”, “The Law” and “Operative Provisions” in that order. We strictly filtered out cases that failed to match more than one of these main sections. There are many cases that a different wording is used in the title of a section which makes it difficult to be captured automatically. In addition, we strictly filtered out comments made by the Court keeping only those comments made by the two parties. That sometimes resulted into empty sections. Finally, many case reports retrieved were actually in French even if the selected language was set to English. For these reasons, our dataset contains a smaller number of cases. However, the results obtained are significantly different compared to the random baseline, i.e. 50% accuracy (t­test, p<0.001). Note that we scraped the Hudoc website and matched cases with regular expressions without having access to the actual database (access here means to be able to retrieve data automatically using some sort of database query language such as SQL and not through the website interface). We believe that our results can be seen as a proof of concept and by given access to the actual database we could perform a study that covers all the available cases and articles. That would be an interesting avenue for future work and/or a research grant proposal. 3. Third, the reasons for choosing articles 3, 6, and 8 could be substantiated a bit more. Surely *all* of the ECHR rights may be regarded as 'important human rights that correspond to a variety of interests' (lines 167­168)? Why focus on these three? These Articles seemed to us to provide the most data we could automatically scrape. We have revised the text accordingly. 4. 
The article claims that 'there is a strong correlation between the actual facts of a case and the decisions made by judges' (lines 264­265). I have serious concerns about whether or not this conclusion is substantiated by the data which preceded the Discussion. First, it is unclear what the authors mean by 'the *actual* facts' (line 265). Are the 'actual' facts somehow different from 'the facts'? No, they are not. We have revised the text accordingly. 5. Second, it is not clear what the authors mean when they say 'information available to the judges before they make any comments or decisions' (line 180). Are the authors implying that the judgments contain all the information available to the judges before they made their decision? If so, this would seem to be a misunderstanding of how courts work (surely, at the very least, one would want to look at the full written arguments of the parties, rather than simply the summaries of those written arguments that are contained in judgments?). Our wording was somewhat sloppy. All we meant to say was that the models do not have information pertaining to the operative provisions. We have revised the text accordingly. 6. Third, it seems to me that this study proves, at best, that there is a correlation between the facts *as described in the judgement* and the result of a case. There is a difference between 'the actual facts of a case' and 'the facts as they are described in the judgment of case'. The article does not acknowledge this difference at all. This is a problem. I'm afraid that the authors seem to be under the impression that the facts section of a judgment is an objective scientifically­established recitation of the facts. Unless the authors are aware of ECtHR practice that I am unaware of, this seems dangerously naive. On my understanding, the judgments of the ECtHR are prepared by the judges, their assistants, and the Court Registry. In any court anywhere around the world, including the ECtHR, it would not be unusual in the slightest for the judges, the assistants, or the registry, to frame the facts in light of their full understanding of the case (which would include their view on whether or not there is a violation). Facts sections of judgments are not peer­reviewed scientific papers. They are subjective summaries of the facts, including what the authors think is relevant and what they think is irrelevant. If a judge/judicial assistant/registrar is of the preliminary view that a violation is likely, it would not be at all unusual for them to frame the facts differently than how those same facts would be framed if they were of the view that a violation was unlikely. This raises a problem: the article involves the authors taking the facts section of a delivered judgment, and then predicting whether or not that same judgment will result in a violation or not. This may be useful, but it does not seem to be the same as 'predicting judicial decisions...using only the textual information available to the judges before they make any comments or decisions about a specific case...' (lines 312­313). The model does not seem to provide any capability for ex ante prediction ­­ i.e. it does not allow the result of a judicial decision to be predicted until the facts section of the judgment can be analysed (and the facts section of the judgment cannot be analysed until the judgment is handed down). Surely this limits its utility? 
Perhaps I misunderstand the mathematical value of the study; perhaps I misunderstand the internal workings of the European Court. But even if I am wrong on the maths or on the workings of the Court, the article needs to be considerably clearer about what it is predicting and about the nature of how judgments are written and prepared. Without that, it is hard to attach too much significance to its findings, I'm afraid. We should have made our argument clearer here. We made revisions to the text accordingly, stressing the following points. First, the ECtHR has only limited fact­finding powers, which implies that, in the vast majority of cases, it will defer, when summarizing the facts, to the judgments of domestic courts that have already heard and dismissed the applicants’ complaint. While these can also reflect assumptions about relevance, they also reflect understandings of the facts that have been validated by more than one decision­maker. Second, the Court cannot openly acknowledge any kind of bias on its part. This means that, on their face, summaries of facts have to be at least framed in as neutral a way as possible. Furthermore, a random reading of ECtHR cases indicates that, in the vast majority of cases, parties do not seem to dispute the facts themselves, but merely their legal significance (i.e. whether a violation took place or not, given those facts). Third, it is important to note that the data used by the model are to do with ‘the facts of the case are these are described in the relevant section of the judgment’, as the reviewer correctly suggested. We have revised all pertinent formulations accordingly. Fourth, for our argument to get off the ground all we need is that the text of this section performs differently from the text of other sections. This much has been established by our model. Fifth, the reviewer is right that we should deflate our claim: the model is only a (crude) proxy to different kinds of considerations, and not a perfect representative of these considerations. We have revised all pertinent formulations accordingly. Sixth, this of course leaves open the possibility, noted by the reviewer, that this section is indeed formulated in a way that reflects judges’ understanding of the case, which includes various judgments relating to relevance/irrelevance and, potentially, to biases related to how the case should be decided. We have acknowledged this openly, revising all pertinent formulations. Seventh, and final, point: insofar as the ‘facts’ section of the case is a (crude) proxy, it is an open question whether it could provide a basis for ex ante predictions of judgments. We do not really see any reason why it could not, since it — at the very least — proves the concept that, on the basis of chunks of particular textual information that differ on their face, it can do a relatively good job at predicting outcomes. So the model could have practical utility in this respect. 7. Fourth, the authors claim that their study amounts to support for legal realism over legal formalism (line 322). This may be so, but a much more sophisticated account (than that at lines 28­37) of the debate about realism and formalism would be needed to draw much of a conclusion here. We have (a) moved the relevant points from the introduction to the discussion part and (b) deflated our claims accordingly, with all the usual caveats. Different (sub)sections of a judgment are not to be understood as more than crude proxies (but they are all that we have at this point). 
Again, we believe that the important thing is that the model, given the data, differentiates clearly between (sub)sections of a judgment and that is a significant result in itself. Reviewer 2 8. The research question is clearly defined: it is possible to use text processing and machine learning to predict whether, given a case, there has been a violation of an article in the convention of human rights. The research question is certainly relevant, and the results are interesting for the natural language processing and machine learning community, but it’s unclear how these findings are useful for the law and human rights community, since nothing is mentioned in the paper with regards to this question, for example, 'it would be useful to apply this kind of classifier as a tagging tool for highlighting cases in which violation of human rights are likely to be true? perhaps as a prioritizing or filtering means?' From a more “applied” perspective, we mention in the abstract that “This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions.”. In the revised version we have added that in the introduction as well. Moreover, insofar as different sections of the judgment can be understood as (crude) proxies of the relevance of different kinds of considerations to judicial decision­making, the analysis provides a first step that could be later further tested with text coming from lawyers’ briefs/applications or domestic judgments. The hard part is to have access to that data (and that is why we focused on the ECtHR’s judgments). 9. Topic models: in spectral clustering, the number of topics is input parameter, but nothing is mentioned about how the value of this parameter, in this case 30 topics, was chosen. We tuned this parameter using the same strategy followed to tune the SVM parameters. We have added the description of how we set the value of this parameter in the Classification Model subsection. 10. I was expecting to see experiments using both set of features, bow and topics, but results are only reported for experiments using one set of features at a time. Why are there no experiments that combine both features sets? If the performance was lower than when using the individual features sets, the outcome is still useful for the community and should be reported. We performed experiments combining both sets of features (N­grams Circumstances + Topics) yielding slightly better performance for articles 6 and 8 while performance was slightly lower for article 3. That is 0.75 (0.10), 0.84 (0.11) and 0.78 (0.06). We have updated Table 2 accordingly. 11. Accuracy is reported as the evaluation metric used to measure performance, but nothing is said about how accuracy is calculated, the formula or an explanation would be helpful. I'm assuming accuracy should be understood to mean: the proportion of true outcomes (true positives and true negatives) among the total number of cases. Please, clarify this. That is true. We have added the equation in section “Results and Discussion”. 12. In the discussion section, the authors gave examples how topics aligned with the theme of some of the cases, but it’s hard to understand if those examples are from the dataset used or from another dataset. I figured it out by exploring the dataset itself. A line clarifying this would be helpful. The examples of cases we use in the discussion are from the dataset used in the study. We clarify that in the revised version of the paper. 13. 
Comment something about the cases that were wrongly classified, for example, is there any commonality between the wrongly classified instances? What does it mean for the law community to have more than 20% of its cases wrongly classified? Cases have been misclassified mainly because their textual information is similar to that of cases in the opposite class. We observed a number of violation cases whose feature vectors are very similar to those of non-violation cases, and vice versa. We have added that comment in the Discussion subsection (Results section). 14. The topic of the paper is very interesting and the paper is easy to follow, but I would like to read about how these results can be used by the law community. See point 8. "
Here is a paper. Please give your review comments after reading it.
386
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e. N-grams, and topics. Our models can predict the court's decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In his prescient work on investigating the potential use of information technology in the legal domain, Lawlor surmised that computers would one day become able to analyse and predict the outcomes of judicial decisions <ns0:ref type='bibr' target='#b15'>(Lawlor, 1963)</ns0:ref>. According to Lawlor, reliable prediction of the activity of judges would depend on a scientific understanding of the ways that the law and the facts impact on the relevant decision-makers, i.e. the judges. More than fifty years later, the advances in Natural Language Processing (NLP) and Machine Learning (ML) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.</ns0:p><ns0:p>In this paper, our particular focus is on the automatic analysis of cases of the European Court of Human Rights (ECtHR or Court). The ECtHR is an international court that rules on individual or, much more rarely, State applications alleging violations by some State Party of the civil and political rights set out in the European Convention on Human Rights (ECHR or Convention). Our task is to predict whether a particular Article of the Convention has been violated, given textual evidence extracted from a case, which comprises of specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. Our main hypotheses are that (1) the textual content, and</ns0:p><ns0:p>(2) the different parts of a case are important factors that influence the outcome reached by the Court. These hypotheses are corroborated by the results. 
Our work lends some initial plausibility to a text-based approach with regard to ex ante prediction of ECtHR outcomes on the assumption, defended in later sections, that the text extracted from published judgments of the Court bears a sufficient number of similarities with, and can therefore stand as a (crude) proxy for, applications lodged with the Court as well as for briefs submitted by parties in pending cases. We submit, though, that full acceptance of that reasonable assumption necessitates more empirical corroboration. Be that as it may, our more general aim is to work under this assumption, thus placing our work within the larger context of ongoing empirical research in the theory of adjudication about the determinants of judicial decision-making. Accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ECtHR cases could potentially provide insights on whether judges follow a so-called legal model <ns0:ref type='bibr' target='#b4'>(Grey, 1983)</ns0:ref> of decision making or their behavior conforms to the legal realists' theorization <ns0:ref type='bibr' target='#b18'>(Leiter, 2007)</ns0:ref>, according to which judges primarily decide cases by responding to the stimulus of the facts of the case.</ns0:p><ns0:p>We define the problem of the ECtHR case prediction as a binary classification task. We utilise textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b39'>(Vapnik, 1998)</ns0:ref>. We apply a linear kernel function that facilitates the interpretation of models in a straightforward manner. Our models can reliably predict ECtHR decisions with high accuracy, i.e. 79% on average. Results indicate that the 'facts' section of a case best predicts the actual court's decision, which is more consistent with legal realists' insights about judicial decision-making. We also observe that the topical content of a case is an important indicator whether there is a violation of a given Article of the Convention or not.</ns0:p><ns0:p>Previous work on predicting judicial decisions, representing disciplinary backgrounds in political science and economics, has largely focused on the analysis and prediction of judges' votes given non textual information, such as the nature and the gravity of the crime or the preferred policy position of each judge <ns0:ref type='bibr' target='#b9'>(Kort, 1957;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nagel, 1963;</ns0:ref><ns0:ref type='bibr' target='#b7'>Keown, 1980;</ns0:ref><ns0:ref type='bibr' target='#b33'>Segal, 1984;</ns0:ref><ns0:ref type='bibr' target='#b24'>Popple, 1996;</ns0:ref><ns0:ref type='bibr' target='#b12'>Lauderdale and Clark, 2012)</ns0:ref>.</ns0:p><ns0:p>More recent research shows that information from texts authored by amici curiae 1 improves models for predicting the votes of the US Supreme Court judges <ns0:ref type='bibr' target='#b35'>(Sim et al., 2015)</ns0:ref>. Also, a text mining approach utilises sources of metadata about judge's votes to estimate the degree to which those votes are about common issues <ns0:ref type='bibr' target='#b14'>(Lauderdale and Clark, 2014)</ns0:ref>. Accordingly, this paper presents the first systematic study on predicting the decision outcome of cases tried at a major international court by mining the available textual information.</ns0:p><ns0:p>Overall, We believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. 
The system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely. This may improve the significant delay imposed by the Court and encourage more applications by individuals who may have been discouraged by the expected time delays.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>European Court of Human Rights</ns0:head><ns0:p>The ECtHR is an international court set up in 1959 by the ECHR. The court has jurisdiction to rule on the applications of individuals or sovereign states alleging violations of the civil and political rights set out in the Convention. The ECHR is an international treaty for the protection of civil and political liberties in to a single judge, who may declare the application inadmissible and strike it out of the Court's list of cases, or be allocated to a Committee or a Chamber. A large number of the applications, according to the court's statistics fail this first admissibility stage. Thus, to take a representative example, according to the Court's provisional annual report for the year 2015 2 , 900 applications were declared inadmissible or struck out of the list by Chambers, approximately 4,100 by Committees and some 78,700 by single judges.</ns0:p><ns0:p>To these correspond, for the same year, 891 judgments on the merits. Moreover, cases held inadmissible or struck out are not reported, which entails that a text-based predictive analysis of them is impossible.</ns0:p><ns0:p>It is important to keep this point in mind, since our analysis was solely performed on cases retrievable through the electronic database of the court, HUDOC 3 . The cases analysed are thus the ones that have already passed the first admissibility stage 4 , with the consequence that the Court decided on these cases' merits under one of its formations.</ns0:p><ns0:p>Main Premise Our main premise is that published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the Court and/or briefs submitted by parties with respect to pending cases. Predictive tasks were based on access to the relevant data set. We thus used published judgments as proxies for the material to which we do not have access. This point should be borne in mind when approaching our results. At the very least, our work can be read in the following hypothetical way: if there is enough similarity between the chunks of text of published judgments that we analyzed and that of lodged applications and briefs, as we believe there is, then our approach can be fruitfully used to predict outcomes with these other kinds of texts.</ns0:p><ns0:p>Case Structure The judgments of the Court have a distinctive structure, which makes them particularly suitable for a text-based analysis. According to Rule 74 of the Rules of the Court 5 , a judgment contains (among other things) an account of the procedure followed on the national level, the facts of the case, a summary of the submissions of the parties, which comprise their main legal arguments, the reasons in point of law articulated by the Court and the operative provisions. 
Judgments are clearly divided into different sections covering these contents, which allows straightforward standardisation of the text and consequently renders possible text-based analysis. More specifically, the sections analysed in this paper are the following:</ns0:p><ns0:p>&#8226; Procedure: This section contains the procedure followed before the Court, from the lodging of the individual application until the judgment was handed down.</ns0:p><ns0:p>&#8226; The Facts: This section comprises all material which is not considered as belonging to points of law, i.e. legal arguments. It is important to stress that the facts in the above sense do not just refer to actions and events that happened in the past as these have been formulated by the Court, giving 5 Rules of ECtHR, http://www.echr.coe.int/Documents/Rules_Court_ENG.pdf</ns0:p></ns0:div> <ns0:div><ns0:head>4/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_8'>2016:05:10595:2:1:NEW 17 Sep 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>rise to an alleged violation of a Convention article. The 'Facts' section is divided in the following subsections:</ns0:p><ns0:p>-The Circumstances of the Case: This subsection has to do with the factual background of the case and the procedure (typically) followed before domestic courts before the application was lodged by the Court. This is the part that contains materials relevant to the individual applicant's story in its dealings with the respondent state's authorities. It comprises a recounting of all actions and events that have allegedly given rise to a violation of the ECHR.</ns0:p><ns0:p>With respect to this subsection, a number of crucial clarifications and caveats should be stressed. To begin with, the text of the 'Circumstances' subsection has been formulated by the Court itself. As a result, it should not always be understood as a neutral mirroring of the factual background of the case. The choices made by the Court when it comes to formulations of the facts incorporate implicit or explicit judgments to the effect that some facts are more relevant than others. This leaves open the possibility that the formulations used by the Court may be tailor-made to fit a specific preferred outcome. We openly acknowledge this possibility, but we believe that there are several ways in which it is mitigated. First, the ECtHR has limited fact-finding powers and, in the vast majority of cases, it defers, when summarizing the factual background of a case, to the judgments of domestic courts that have already heard and dismissed the applicants' ECHR-related complaint <ns0:ref type='bibr' target='#b17'>(Leach and Uelac, 2010;</ns0:ref><ns0:ref type='bibr' target='#b16'>Leach, 2013)</ns0:ref>. While domestic courts do not necessarilly hear complaints on the same legal issues as the ECtHR does, by virtue of the rule of exhaustion of domestic remedies they typically have powers to issue judgments on ECHR-related issues. Domestic judgments may also reflect assumptions about the relevance of various events, but they also provide formulations of the facts that have been validated by more than one decision-maker. Second, the Court cannot openly acknowledge any kind of bias on its part. This means that, on their face, summaries of facts found in the 'Circumstances' section have to be at least framed in as neutral and impartial a way as possible. 
As a result, for example, clear displays of impartiality, such as failing to mention certain crucial events, seem rather improbable. Third, a cursory examination of many ECtHR cases indicates that, in the vast majority of cases, parties do not seem to dispute the facts themselves, as contained in the 'Circumstances' subsection, but only their legal significance (i.e. whether a violation took place or not, given those facts).</ns0:p><ns0:p>As a result, the 'Circumstances' subsection contains formulations on which, in the vast majority of cases, disputing parties agree. Last, we hasten to add that the above three kinds of considerations do not logically entail that other forms of non-outright or indirect bias in the formulation of facts are impossible. However, they suggest that, in the absence of access to other kinds of textual data, such as lodged applications and briefs, the 'Circumstances' subsection can reasonably perform the function of a (sometimes crude) proxy for a textual representation of the factual background of a case.</ns0:p><ns0:p>-Relevant Law: This subsection of the judgment contains all legal provisions other than the articles of the Convention that can be relevant to deciding the case. These are mostly provisions of domestic law, but the Court also frequently invokes other pertinent international or European treaties and materials.</ns0:p><ns0:p>&#8226; The Law: The Law section is focused on considering the merits of the case, through the use of legal argument. Depending on the number of issues raised by each application, the section is further divided into subsections that examine individually each alleged violation of some Convention article (see below). However, the Court in most cases refrains from examining all such alleged violations in detail. Insofar as the same claims can be made by invoking more than one article of the Convention, the Court frequently decides only those that are central to the arguments made.</ns0:p><ns0:p>Moreover, the Court also frequently refrains from deciding on an alleged violation of an article, if it overlaps sufficiently with some other violation it has already decided on.</ns0:p><ns0:p>-Alleged Violation of Article x: Each subsection of the judgment examining alleged violations in depth is divided into two sub-sections. The first one contains the Parties' Submissions.</ns0:p><ns0:p>The second one comprises the arguments made by the Court itself on the Merits. This subsection provides the legal reasons that purport to justify the specific outcome reached by the Court. Typically, the Court places its reasoning within a wider set of rules, principles and doctrines that have already been established in its past caselaw and attempts to ground the decision by reference to these. It is to be expected, then, that this subsection refers almost exclusively to legal arguments, sometimes mingled with bits of factual information repeated from previous parts.</ns0:p><ns0:p>&#8226; Operative Provisions: This is the section where the Court announces the outcome of the case, which is a decision to the effect that a violation of some Convention article either did or did not take place. Sometimes it is coupled with a decision on the division of legal costs and, much more rarely, with an indication of interim measures, under article 39 of the ECHR.</ns0:p><ns0:p>Figures <ns0:ref type='figure' target='#fig_3'>1-4</ns0:ref>, show extracts of different sections from the Case of 'Velcheva v. 
Bulgaria' 6 following the structure described above.</ns0:p></ns0:div> <ns0:div><ns0:head>Data</ns0:head><ns0:p>We create a data set 7 consisting of cases related to Articles 3, 6, and 8 of the Convention. We focus on these three articles for two main reasons. First, these articles provided the most data we could automatically scrape. Second, it is of crucial importance that there should be a sufficient number of cases available, in order to test the models. Cases from the selected articles fulfilled both criteria. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the Convention right that each article protects and the number of cases in our data set. For each article, we first retrieve all the cases available in HUDOC. Then, we keep only those that are in English and parse them following the case structure presented above. We then select an equal number of violation and non-violation cases for each particular article of the Convention. To achieve a balanced number of violation/non-violation cases, we first count the number of cases available in each class. Then, we choose all the cases in the smaller class and randomly select an equal number of cases from the larger class. This results to a total of 250, 80 and 254 cases for Articles 3, 6 and 8 respectively.</ns0:p><ns0:p>Finally, we extract the text under each part of the case by using regular expressions, making sure that any sections on operative provisions of the Court are excluded. In this way, we ensure that the models do not use information pertaining to the outcome of the case. We also preprocess the text by lower-casing and removing stop words (i.e. frequent words that do not carry significant semantic information) using the list provided by NLTK 8 .</ns0:p></ns0:div> <ns0:div><ns0:head>Description of Textual Features</ns0:head><ns0:p>We derive textual features from the text extracted from each section (or subsection) of each case. These are either N-gram features, i.e. contiguous word sequences, or word clusters, i.e. abstract semantic topics. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; N-gram Features: The Bag-of-Words (BOW) model <ns0:ref type='bibr' target='#b31'>(Salton et al., 1975;</ns0:ref><ns0:ref type='bibr' target='#b30'>Salton and McGill, 1983)</ns0:ref> is a popular semantic representation of text used in NLP and Information Retrieval. In a BOW model, a document (or any text) is represented as the bag (multiset) of its words (unigrams) or N-grams without taking into account grammar, syntax and word order. That results to a vector space representation where documents are represented as m-dimensional variables over a set of m N-grams. N-gram features have been shown to be effective in various supervised learning tasks <ns0:ref type='bibr' target='#b0'>(Bamman et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Lampos and Cristianini, 2012)</ns0:ref>. For each set of cases in our data set, we compute the top-2000 most frequent N-grams where N &#8712; {1, 2, 3, 4}. Each feature represents the normalized frequency of a particular N-gram in a case or a section of a case. This can be considered as a feature matrix, C &#8712; R c&#215;m , where c is the number of the cases and m = 2000. We extract N-gram features for the Procedure (Procedure), Circumstances (Circumstances), Facts (Facts), Relevant Law (Relevant Law), Law (Law) and the Full case (Full) respectively. 
Note that the representations of the Facts is obtained by taking the mean vector of Circumstances and Relevant Law. In a similar way, the representation of the Full case is computed by taking the mean vector of all of its sub-parts.</ns0:p><ns0:p>&#8226; Topics: We create topics for each article by clustering together N-grams that are semantically similar by leveraging the distributional hypothesis suggesting that similar words appear in similar contexts. We thus use the C feature matrix (see above), which is a distributional representation <ns0:ref type='bibr' target='#b38'>(Turney and Pantel, 2010)</ns0:ref> of the N-grams given the case as the context; each column vector of the matrix represents an N-gram. Using this vector representation of words, we compute N-gram similarity using the cosine metric and create an N-gram by N-gram similarity matrix. We finally apply spectral clustering (von Luxburg, 2007) -which performs graph partitioning on the similarity matrix -to obtain 30 clusters of N-grams. For Articles 6 and 8, we use the Article 3 data for selecting the number of clusters T , where T = {10, 20, ..., 100}, while for Article 3 we use Article 8. Given that the obtained topics are hard clusters, an N-gram can only be part of a single topic.</ns0:p><ns0:p>A representation of a cluster is derived by looking at the most frequent N-grams it contains. The main advantages of using topics (sets of N-grams) instead of single N-grams is that it reduces the dimensionality of the feature space, which is essential for feature selection, it limits overfitting to training data <ns0:ref type='bibr' target='#b10'>(Lampos et al., 2014;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015)</ns0:ref> and also provides a more concise semantic representation.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Model</ns0:head><ns0:p>The problem of predicting the decisions of the ECtHR is defined as a binary classification task. Our goal is to predict if, in the context of a particular case, there is a violation or non-violation in relation to a specific Article of the Convention. For that purpose, we use each set of textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b39'>(Vapnik, 1998)</ns0:ref>. SVMs are a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets <ns0:ref type='bibr' target='#b5'>(Joachims, 2002;</ns0:ref><ns0:ref type='bibr' target='#b41'>Wang and Manning, 2012)</ns0:ref>. We employ a linear kernel since that allows us to identify important features that are indicative of each class by looking at the weight learned for each feature <ns0:ref type='bibr' target='#b2'>(Chang and Lin, 2008)</ns0:ref>. We label all the violation cases as +1, while no violation is denoted by &#8722;1. Therefore, features assigned with positive weights are more indicative of violation, while features with negative weights are more indicative of no violation.</ns0:p><ns0:p>The models are trained and tested by applying a stratified 10-fold cross validation, which uses a heldout 10% of the data at each stage to measure predictive performance. The linear SVM has a regularisation parameter of the error term C, which is tuned using grid-search. 
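To make the pipeline described in the preceding subsections concrete, the sketch below approximates its main steps with scikit-learn. It is illustrative only and is not the authors' code: names such as `case_texts` and `labels` stand in for the parsed section texts and the violation/non-violation outcomes, scikit-learn's English stop-word list replaces the NLTK list used in the paper, and the regularisation parameter C is tuned here by a nested grid search rather than on a different Article's data, as the paper describes next.

```python
# Illustrative sketch only -- not the authors' released code. It approximates
# the feature extraction and classification pipeline described above.
# `case_texts` and `labels` are placeholders for the parsed section texts and
# the violation (+1) / non-violation (-1) outcomes.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.preprocessing import normalize
from sklearn.svm import SVC


def ngram_features(case_texts):
    """Top-2000 N-grams (N = 1..4) as normalised frequencies (feature matrix C)."""
    vectorizer = CountVectorizer(ngram_range=(1, 4), max_features=2000,
                                 lowercase=True, stop_words='english')
    C = vectorizer.fit_transform(case_texts).astype(float)
    return normalize(C, norm='l1'), vectorizer


def topic_features(C, n_topics=30):
    """Hard-cluster N-grams into topics via spectral clustering on the
    N-gram-by-N-gram cosine similarity matrix, then aggregate each case's
    N-gram frequencies per topic."""
    similarity = cosine_similarity(C.T)          # columns of C are N-gram vectors
    clusters = SpectralClustering(n_clusters=n_topics, affinity='precomputed',
                                  random_state=0).fit_predict(similarity)
    T = np.zeros((C.shape[0], n_topics))
    for topic in range(n_topics):
        members = np.flatnonzero(clusters == topic)
        T[:, topic] = np.asarray(C[:, members].sum(axis=1)).ravel()
    return T, clusters


def mean_cv_accuracy(X, labels):
    """Linear SVM with grid-searched C, evaluated by stratified 10-fold CV."""
    model = GridSearchCV(SVC(kernel='linear'),
                         param_grid={'C': [0.01, 0.1, 1, 10, 100]},
                         scoring='accuracy', cv=5)
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, labels, cv=folds, scoring='accuracy')
    return scores.mean(), scores.std()


# Example usage (hypothetical data):
# C, vec = ngram_features(circumstances_texts)
# T, clusters = topic_features(C, n_topics=30)
# print(mean_cv_accuracy(T, labels))
```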
For Articles 6 and 8, we use the Article 3 data for parameter tuning, while for Article 3 we use Article 8.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Predictive Accuracy</ns0:head><ns0:p>We compute the predictive performance of both sets of features on the classification of the ECtHR cases. Performance is computed as the mean accuracy obtained by 10-fold cross-validation. Accuracy is computed as follows:</ns0:p><ns0:formula xml:id='formula_0'>Accuracy = (TV + TNV) / (V + NV)</ns0:formula><ns0:p>where TV and TNV are the numbers of cases correctly classified as a violation of an article of the Convention and as a non-violation, respectively. V and NV represent the total numbers of cases where there is a violation or not, respectively. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the accuracy of each set of features across articles using a linear SVM. The rightmost column also shows the mean accuracy across the three articles. In general, both N-gram and topic features achieve good predictive performance. Our main observation is that both language use and topicality are important factors that appear to stand as reliable proxies of judicial decisions. Therefore, we take a further look into the models by attempting to interpret the differences in accuracy.</ns0:p><ns0:p>We observe that 'Circumstances' is the best subsection to predict the decisions for cases in Articles 6 and 8, with a performance of .82 and .77 respectively. In Article 3, we obtain better predictive accuracy (.70) using the text extracted from the full case ('Full') while the performance of 'Circumstances' is almost comparable (.68). We should again note here that the 'Circumstances' subsection contains information regarding the factual background of the case, as this has been formulated by the Court. The subsection therefore refers to the actions and events which triggered the case and gave rise to a claim made by an individual to the effect that the ECHR was violated by some state. On the other hand, 'Full', which is a mixture of information contained in all of the sections of a case, surprisingly fails to improve over using only the 'Circumstances' subsection. This entails that the factual background contained in the 'Circumstances' is the most important textual part of the case when it comes to predicting the Court's decision.</ns0:p><ns0:p>The other sections and subsections that refer to the facts of a case, namely 'Procedure', 'Relevant Law' and 'Facts', achieve somewhat lower performance (.73 cf. .76), although they remain consistently above chance. Recall, at this point, that the 'Procedure' subsection consists only of general details about the applicant, such as the applicant's name or country of origin and the procedure followed before domestic courts.</ns0:p><ns0:p>On the other hand, the 'Law' subsection, which refers either to the legal arguments used by the parties or to the legal reasons provided by the Court itself on the merits of a case, consistently obtains the lowest performance (.62). One important reason for this poor performance is that a large number of cases do not include a 'Law' subsection, i.e. 162, 52 and 146 for Articles 3, 6 and 8 respectively. That happens in cases which the Court deems inadmissible, concluding with a judgment of non-violation. 
In these cases, the judgment of the Court is more summary than in others.</ns0:p><ns0:p>We also observe that the predictive accuracy is high for all the Articles when using the 'Topics' as features, i.e. .78, .81 and .76 for Articles 3, 6 and 8 respectively.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. The most predictive topics for Article 3 decisions. Most predictive topics for Article 3, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The consistently more robust predictive accuracy of the 'Circumstances' subsection suggests a strong correlation between the facts of a case, as these are formulated by the Court in this subsection, and the decisions made by judges. The relatively lower predictive accuracy of the 'Law' subsection could also be an indicator of the fact that legal reasons and arguments of a case have a weaker correlation with decisions made by the Court. However, this last remark should be seriously mitigated since, as we have already observed, many inadmissibility cases do not contain a separate 'Law' subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>Legal Formalism and Realism</ns0:head><ns0:p>These results could be understood as providing some evidence for judicial decision-making approaches according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide appellate cases. Without going into details with respect to a particularly complicated debate that is out of the scope of this paper, we may here simplify by observing that since the beginning of the 20th century, there has been a major contention between two opposing ways of making sense of judicial decision-making: legal formalism and legal realism <ns0:ref type='bibr' target='#b25'>(Posner, 1986;</ns0:ref><ns0:ref type='bibr' target='#b36'>Tamanaha, 2009;</ns0:ref><ns0:ref type='bibr' target='#b19'>Leiter, 2010)</ns0:ref>. Very roughly, legal formalists have provided a legal model of judicial decision-making, claiming that the law is rationally determinate: judges either decide cases deductively, by subsuming facts under formal legal rules, or use more complex legal reasoning than deduction whenever legal rules are insufficient to warrant a particular outcome <ns0:ref type='bibr' target='#b26'>(Pound, 1908;</ns0:ref><ns0:ref type='bibr' target='#b6'>Kennedy, 1973;</ns0:ref><ns0:ref type='bibr' target='#b4'>Grey, 1983;</ns0:ref><ns0:ref type='bibr' target='#b23'>Pildes, 1999)</ns0:ref>. 
On the other hand, legal realists have criticized formalist models, insisting that judges primarily decide appellate cases by responding to the stimulus of the facts of the case, rather than on the basis of legal rules or doctrine, which are in many occasions rationally indeterminate <ns0:ref type='bibr' target='#b20'>(Llewellyn, 1996;</ns0:ref><ns0:ref type='bibr' target='#b32'>Schauer, 1998;</ns0:ref><ns0:ref type='bibr' target='#b1'>Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leiter, 2007;</ns0:ref><ns0:ref type='bibr' target='#b21'>Miles and Sunstein, 2008)</ns0:ref>.</ns0:p><ns0:p>Extensive empirical research on the decision-making processes of various supreme and international courts, and especially the US Supreme Court, has indicated rather consistently that pure legal models, especially deductive ones, are false as an empirical matter when it comes to cases decided by courts further up the hierarchy. As a result, it is suggested that the best way to explain past decisions of such courts and to predict future ones is by placing emphasis on other kinds of empirical variables that affect judges <ns0:ref type='bibr' target='#b1'>(Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b32'>Schauer, 1998)</ns0:ref>. For example, early legal realists had attempted to classify cases in terms of regularities that can help predict outcomes, in a way that did not reflect standard legal doctrine <ns0:ref type='bibr' target='#b20'>(Llewellyn, 1996)</ns0:ref>. Likewise, the attitudinal model for the US Supreme Court claims that the best predictors of its Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>decisions are the policy preferences of the Justices and not legal doctrinal arguments <ns0:ref type='bibr' target='#b34'>(Segal and Spaeth, 2002)</ns0:ref>.</ns0:p><ns0:p>In general, and notwithstanding the simplified snapshot of a very complex debate that we just presented, our results could be understood as lending some support to the basic legal realist intuition according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide hard cases. In particular, if we accept that the 'Circumstances' subsection, with all the caveats we have already voiced, is a (crude) proxy for non-legal facts and the 'Law' subsection is a (crude) proxy for legal reasons and arguments, the predictive superiority of the 'Circumstances' subsection seems to cohere with extant legal realist treatments of judicial decision-making.</ns0:p><ns0:p>However, not more should be read into this than our results allow. First, as we have already stressed at several occasions, the 'Circumstances' subsection is not a neutral statement of the facts of the case and we have only assumed the similarity of that subsection with analogous sections found in lodged applications and briefs. Second, it is important to underline that the results should also take into account the so-called selection effect <ns0:ref type='bibr' target='#b29'>(Priest and Klein, 1984)</ns0:ref> that pertains to cases judged by the ECtHR as an international court. Given that the largest percentage of applications never reaches the Chamber or, still less, the Grand Chamber, and that cases have already been tried at the national level, it could very well be the case that the set of ECtHR decisions on the merits primarily refers to cases in which the class of legal reasons, defined in a formal sense, is already considered as indeterminate by competent interpreters. 
This could help explain why judges primarily react to the facts of the case, rather than to legal arguments. Thus, further text-based analysis is needed in order to determine whether the results could generalise to other courts, especially to domestic courts deciding ECHR claims that are placed lower within the domestic judicial hierarchy. Third, our discussion of the realism/formalism debate is overtly simplified and does not imply that the results could not be interpreted in a sophisticated formalist way. Still, our work coheres well with a bulk of other empirical approaches in the legal realist vein.</ns0:p></ns0:div> <ns0:div><ns0:head>Topic Analysis</ns0:head><ns0:p>The topics further exemplify this line of interpretation and provide proof of the usefulness of the NLP approach. The linear kernel of the SVM model can be used to examine which topics are most important for inferring whether an article of the Convention has been violated or not by looking at their weights w.</ns0:p><ns0:p>Tables 3, 4 and 5 present the six topics for the most positive and negative SVM weights for the articles 3, 6 and 8 respectively. Topics identify in a sufficiently robust manner patterns of fact scenarios that correspond to well-established trends in the Court's case law.</ns0:p><ns0:p>First, topic 13 in Table <ns0:ref type='table'>3</ns0:ref> 74912/01, ECHR 2009-IV) were identified as exemplifications of this trend. Likewise, topic 28 in Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> has to do with whether certain choices with regard to the social policy of states can amount to a violation of Article 8. That was correctly identified as typically not giving rise to a violation, in line with the Court's tendency to acknowledge a large margin of appreciation to states in this area <ns0:ref type='bibr' target='#b3'>(Greer, 2000)</ns0:ref>. In this vein, cases such as Aune v. Norway (no. 52502/07 of 28 October 2010) and Ball v. Andorra (Application no. <ns0:ref type='table' target='#tab_6'>4</ns0:ref> is related to lower standard of review when property rights are at play <ns0:ref type='bibr' target='#b37'>(Tsarapatsanis, 2015)</ns0:ref>. A representative case here is Oao Plodovaya Kompaniya v. Russia of 7 June 2007. Consequently, the topics identify independently well-established trends in the case law without recourse to expert legal/doctrinal analysis.</ns0:p><ns0:p>The above observations require to be understood in a more mitigated way with respect to a (small) number of topics. For instance, most representative cases for topic 8 in Table <ns0:ref type='table'>3</ns0:ref> were not particularly informative. This is because these were cases involving a person's death, in which claims of violations of Article 3 (inhuman and degrading treatment) were only subsidiary: this means that the claims were mainly about Article 2, which protects the right to life. In these cases, the absence of a violation, even if correctly</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1. Procedure</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The Facts</ns0:figDesc><ns0:graphic coords='4,141.73,246.68,413.57,161.46' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 
The Law</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.56,155.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Decision</ns0:figDesc><ns0:graphic coords='5,141.73,253.88,413.57,159.43' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:2:1:NEW 17 Sep 2016) Manuscript to be reviewed Computer Science * Parties' Submissions: The Parties' Submissions typically summarise the main arguments made by the applicant and the respondent state. Since in the vast majority of cases the material facts are taken for granted, having been authoritatively established by domestic courts, this part has almost exclusively to do with the legal arguments used by the parties. * Merits:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>40628/10 of 11 December 2012) are examples of cases where topic 28 is dominant. Similar observations apply, among other things, to topics 27, 23 and 24. That includes issues with the enforcement of domestic judgments giving rise to a violation of Article 6 (Kiestra, 2014). Some representative cases are Velskaya v. Russia, of 5 October 2006 and Aleksandrova v. Russia of 6 December 2007. Topic 7 in Table</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Articles of the Convention and number of cases in the data set.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Article Human Right</ns0:cell><ns0:cell>Cases</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Prohibits torture and inhuman and degrading treatment</ns0:cell><ns0:cell>250</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Protects the right to a fair trial</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Provides a right to respect for one's 'private and family life, his</ns0:cell><ns0:cell>254</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>home and his correspondence'</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Accuracy of the different feature types across articles. Accuracy of predicting violation/non-violation of cases across articles on 10-fold cross-validation using an SVM with linear kernel. Parentheses contain the standard deviation from the mean. Accuracy of random guess is .50. 
Bold font denotes best accuracy in a particular Article or on Average across Articles.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Feature Type</ns0:cell><ns0:cell cols='4'>Article 3 Article 6 Article 8 Average</ns0:cell></ns0:row><ns0:row><ns0:cell>N-grams</ns0:cell><ns0:cell>Full</ns0:cell><ns0:cell>.70 (.10)</ns0:cell><ns0:cell>.82 (.11)</ns0:cell><ns0:cell>.72 (.05)</ns0:cell><ns0:cell>.75</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Procedure</ns0:cell><ns0:cell>.67 (.09)</ns0:cell><ns0:cell>.81 (.13)</ns0:cell><ns0:cell>.71 (.06)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Circumstances .68 (.07)</ns0:cell><ns0:cell>.82 (.14)</ns0:cell><ns0:cell>.77 (.08)</ns0:cell><ns0:cell>.76</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Relevant Law</ns0:cell><ns0:cell>.68 (.13)</ns0:cell><ns0:cell>.78 (.08)</ns0:cell><ns0:cell>.72 (.11)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Facts</ns0:cell><ns0:cell>.70 (.09)</ns0:cell><ns0:cell>.80 (.14)</ns0:cell><ns0:cell>.68 (.10)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Law</ns0:cell><ns0:cell>.56 (.09)</ns0:cell><ns0:cell>.68 (.15)</ns0:cell><ns0:cell>.62 (.05)</ns0:cell><ns0:cell>.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Topics</ns0:cell><ns0:cell>.78 (.09)</ns0:cell><ns0:cell>.81 (.12)</ns0:cell><ns0:cell>.76 (.09)</ns0:cell><ns0:cell>.78</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Topics and Circumstances</ns0:cell><ns0:cell cols='3'>.75 (.10) .84 (0.11) .78 (0.06)</ns0:cell><ns0:cell>.79</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>features, i.e. .78, .81 and .76 for Articles 3, 6 and 8 respectively. 'Topics' obtain best performance in</ns0:figDesc><ns0:table /><ns0:note>Article 3 and performance comparable to 'Circumstances' in Articles 6 and 8. 'Topics' form a more abstract way of representing the information contained in each case and capture a more general gist of the cases.Combining the two best performing sets of features ('Circumstances' and 'Topics') we achieve the best average classification performance (.79). The combination also yields slightly better performance for Articles 6 and 8 while performance slightly drops for Article 3. That is .75, .84 and .78 for Articles 3, 68/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:2:1:NEW 17 Sep 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The most predictive topics for Article 6 decisions. Most predictive topics for Article 6, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The most predictive topics for Article 8 decisions. Most predictive topics for Article 8, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. 
Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>has to do with whether long prison sentences and other detention measures can amount to inhuman and degrading treatment under Article 3. That is correctly identified as typically not giving rise to a violation 9 . For example, cases 10 such as Kafkaris v. Cyprus ([GC] no. 21906/04, ECHR 2008-I), Hutchinson v. UK (no. 57592/08 of 3 February 2015) and Enea v. Italy ([GC], no.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='2'>ECHtR provisional annual report for the year 2015, http://www.echr.coe.int/Documents/Annual_report_ 2015_ENG.pdf 3 HUDOC ECHR Database, http://hudoc.echr.coe.int/ 4 Nonetheless, not all cases that pass this first admissibility stage are decided in the same way. While the individual judge's decision on admissibility is final and does not comprise the obligation to provide reasons, a Committee deciding a case may, by unanimous vote, declare the application admissible and render a judgment on its merits, if the legal issue raised by the application is covered by well-established case-law by the Court.3/13PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:2:1:NEW 17 Sep 2016)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='9'>European Court of Human Rights (2015): Factsheet on Life Imprisonment, http://www.echr.coe.int/Documents/ FS_Life_sentences_ENG.pdf 10 Note that all the cases used as examples in this section are taken from the data set we used to perform the experiments. 11/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:2:1:NEW 17 Sep 2016)</ns0:note> </ns0:body> "
"We would like to thank the editor and the reviewers for their constructive comments. See below our point­by­point response. Reviewer 1 1. At lines 88­94, the authors suggest the model can ‘be used to rapidly identify cases’, perhaps in ‘submitted cases’. It remains fundamentally unclear to me what text the authors imagine the model will be operating on to make ex ante predictions, given that it works solely on text in completed judgments. Is it not at least worth *identifying* what the text is that the model would operate on in order to make its ‘rapid’ ex ante predictions? Otherwise, the risk is that the argument will appear to be limited to this: ‘Our model allows us to analyse how the facts are summarized in published cases and predict how, later in that very same published case, the case will be resolved’. This may be interesting in itself, but I’m not sure it’s what the authors want the model to do. Our main argument in favour of ex ante predictions of outcomes rests on the premise that there is enough similarity between (at least certain) chunks of text of completed judgments by the Court and other kinds of textual materials, to wit: (a) applications lodged with the Court, (b) briefs submitted by parties with respect to pending cases making ECHR­related legal arguments and, possibly, (c) sections of domestic judgments that touch upon ECHR­related issues (whether an application to the Court has been made or not). Since our predictive NLP ​ approach (and not the specific supervised model ​per se) seems to work reasonably well with text chucks from published judgments issued by the Court, it could also be further tested to assess whether it in fact can generalise to these other kinds of (in our view quite similar) texts. Unfortunately, this is something that we cannot test at the moment, because we do not have access to the data set that will allow us to do so (and we will not have access to such material in the foreseeable future, since the Court does not easily give access to lodged applications or briefs submitted by parties, which would be the most natural candidates). We thus used published judgments as proxies for the material to which we do not have access. At the very least, our work proves the following point (and is therefore at least a ‘proof of concept’ in this limited sense): ​if there is enough similarity between the chunks of text that we analysed and/or applications and briefs, as we believe there is, ​then NLP approaches can be fruitfully used to predict outcomes with a certain degree of reliability (at the very least, with a degree of reliability that appears to exceed by far the random 50% distribution). So, at the very least, our work provides some justification for further research, potentially on a wider scale and with the use of materials to which we do not at the present moment have access. We hasten to add that this kind of research programme requires different kinds of resources than the ones currently at our disposal, since it renders imperative a close cooperation with the Court itself, with all that such a cooperation entails. Moreover, we provided various reasons to lend support to our view according to which the sections of the cases we analysed can ​indeed be used as proxies for these other sorts of materials (i.e. applications and briefs), and we shall get back to these reasons in the remaining sections of our answer. Overall, we do not believe that a wholesale scepticism with regard to such uses of sections of cases is warranted. 
We have incorporated the above clarifications in the article. 2. The new section at lines 140­166 seems problematic, unless I have misunderstood key aspects of it. First, the authors say that the ECtHR has very limited fact­finding powers (true). But the authors then move on, without citing any authority, to say that this means that ‘in the vast majority of cases, [the ECtHR] will defer…to the judgments of domestic courts that have already heard and dismissed the applicants’ complaint’. This is problematic in various ways (not least that it implies that the domestic courts hear and dismiss complaints on the same legal questions that the ECtHR does, which seems to suggest the ECtHR is an appellate court – more on this below). More problematically, the authors’ logic is unclear: why wouldn’t the ECtHR defer to the summary of the facts prepared by (e.g.) the government lawyers? Moreover, even if it were true that the Court defers in this manner ‘in the vast majority of cases’, surely it should not be difficult to find a range of law journal articles and analysis supporting this legal or procedural proposition? And finally, this logic would suggest that any predictive model should look principally or exclusively at the domestic courts' factual summaries, would it not? There are various things to note here. First, we explicitly say that deference to domestic judgments is to do ​only with summaries of the factual background of the case and not with anything else. The exact phrase we had used is this “in the vast majority of cases, it [the ECtHR] will defer, ​when summarizing the factual background of a case, to the judgments of domestic courts” (emphasis added). As a result, we nowhere either said explicitly or implied that domestic courts hear and dismiss complaints on the same legal questions as the ECtHR does. In any event, though, we have taken note of the reviewer’s comments and have revised the text accordingly, to remove any possibility of misunderstanding. Second, we are of the view that the main thing to underline is that the facts of the case, in the vast majority of judgments where the Court does not use its powers of investigation, are ​not in dispute by the parties. Accordingly, for the vast majority of cases, the content of those facts has been fixed by the rules of procedure (and evidence) used at the domestic level by national jurisdictions. Moreover, in the vast majority of cases where the facts are thus fixed, the main question that the Court has to answer is a legal (whether there has been a violation of some ECHR­protected right) and not a factual (what are the facts of this particular case) one. We thus believe that it can be reasonably held that, in view of the above, the question of the exact source used to summarize the facts is of secondary importance. We also wish to stress that ‘reasonably’ in the above sense does not mean ‘bulletproof’ for all intents and purposes. As a matter of empirical fact, we can never be sure about the exact ways in which the Court arrives at summaries of the facts of each case and this is a limitation to our analysis that we have revised our text to take account of. In addition, we have revised the text of our article to provide references to academic work on how registry lawyers prepare summaries of the facts for judges (an issue on which some ­limited­ socio­legal research has been conducted). Again, our argument here depends on certain (in our view) reasonable assumptions. 
We cannot prove these assumptions, since we do not have access to the materials themselves, so readers are invited to read our argument as a hypothetical: ​if these assumptions are met (which we think they do), ​then certain consequences follow. Third, In order to back our claim that the Court mostly defers to domestic courts when determining the factual background of a case, we provided one reference to the leading research on the matter and one more reference to a textbook by a leading legal practitioner of the ECHR. 3. Second, the authors say that ‘the Court cannot openly acknowledge any kind of bias on its part’ and therefore ‘on their face, summaries of facts…have to be at least framed in as neutral…’. The significance of this point is unexplained. Are the authors arguing that the Court does, in reality, prepare neutral summaries? Or that it may well be biased but that it must hide that bias? How does this help the argument about making ex ante predictions of any sort? Moreover, the authors do not seem to appreciate here that outright bias is only one problem: the bigger problem for their argument is the possibility of perfectly rational differential emphasis by the judges/registry/etc of facts that they know will be significant *because they are also involved in reaching the legal conclusions*. Unless I have misunderstood the ECtHR's procedure, in which case I apologize. As we have already mentioned, we are not in a position to either corroborate or refute empirically the proposition to the effect that the fact summaries prepared by the Court are neutral or not. The point we tried to make is merely that, since the Court has to (at least) appear unbiased to both parties in a dispute, it has an interest in presenting the facts of the case in what appears to the parties as being an unbiased way. There is thus an incentive that the facts are characterised in a way that does not (e.g.) hide certain crucial events or misrepresent them. So this, again, is an argument in support of the hypothesis that the chunks of text found in judgments can indeed be reasonably taken to stand as crude proxies for other kinds of texts (lodged applications/briefs of parties), i.e. to the main candidates for textual analysis of ex ante predictions of outcomes to which we do not have access. Moreover, we fully appreciate the possibility of forms of non­outright bias (and we have revised our text accordingly), but, as already stated at various points, we cannot control for the (eventual) presence of that variable, because we cannot compare the various versions of fact summaries due to lack of data. We have revised our text accordingly to make this clearer. 4. Third, the absence of disputes before the ECtHR about the facts does not mean that there cannot be different facts emphasized or prioritized by the judges/registry in light of the analysis that they most likely know will follow. We agree with the reviewer, and we have revised our text accordingly (see lines 160­162 of the revised text)​. 5. Fourth, the authors say ‘the “Circumstances” subsection is the closest (even if sometimes crude) proxy we have to a reliable textual representative of the factual background of a case’. Perhaps this is so, but one may wonder about what ‘reliable’ is worth here without a clearer sense of the model's utility. 
Surely the facts as summarized in government and/or applicant arguments might be worth a look if the goal is to assist the Court/lawyers in making ex ante predictions about how cases will be decided, even if they are not 'reliable' in the way a peer­reviewed paper might be reliable? The reviewer is right, but, as a matter of fact, and as we have already said, we do not have access to these texts and this is the main reason we have used such crude proxies. We submit once again (see our response to the first point) that such access requires other kinds of resources than those we currently have at our disposal. We also want to stress that, in any event, analysis of such further texts, whose obtention requires other kinds of resources, would be a logical step to take once we have ​some evidence, as our article suggests we do, to the effect that an NLP approach can indeed, in principle, be used to predict outcomes with a certain degree of reliability. In any event, we struck out the word ‘reliable’, which we do not think adds something substantial to our argument, given the caveats already in place. 6. At several points (line 325, line 340), the ECtHR seems to be referred to as an appellate court. It is not an appellate court. This means, (1) the range of orders and remedies available to the ECtHR are not those of an appellate court, with consequences for its analysis; (2) the ECtHR will frequently be applying different *legal* tests to those applied by the domestic appellate courts (eg, ‘was there a violation of Art5 or Art6?’ vs ‘was the conviction unsafe?’), and (3) therefore different *facts* or different emphasis on those facts may be of interest to the ECtHR than those that were of interest to the domestic court. We agree with the reviewer on the ‘appellate court’ point and we have revised our text accordingly. On the question of facts, see our response to the first point. It is of course possible that different facts may be of interest to the ECtHR (this is always the case when, e.g. the Court uses its fact­finding powers to investigate whether a Convention right was violated at the domestic level, even if such use is rare) but this just restates the point that the relevant sections we used are only crude and imperfect proxies for the facts of the case. General comment: ​We agree that there are various constraints that curtail the utility of our work. We have quite clearly stated that we do not have access to lodged applications and briefs submitted by parties, nor will we have in the foreseeable future. Under these conditions, we proceeded to a crude emulation of the process, by using instead chunks of text contained in the judgments themselves, that we believe can reasonably stand as crude proxies for the real thing (summaries of facts and legal arguments in applications and briefs) to provide an initial test for an NLP approach. It is precisely, we think, the ​success of this initial test that corroborates the idea that further testing is needed, with recourse to other kinds of data. Again, we stress that access to such data would open avenues for further research and analysis. However that requires the active collaboration with the Court, and therefore a level of trust that exceeds by far what we can work with at the present moment. Reviewer 2 7. The authors commented during the review process that there are some barriers for accessing the data, from the EctHR portal. Besides, the authors mentioned that accessing cases from domestic courts is not straightforward either. 
I would like to see a comment regarding data access issues, perhaps a sentence or two in the conclusion section. Are data access issues stopping or slowing down emerging research such as the one presented in this paper? Should cases in the ECtHR and domestic cases be easily and freely available? If that is the case, is the ECtHR making progress in opening its repositories for the public good? Data access issues do indeed slow down emerging research. To the best of our knowledge, the HUDOC database has not been designed to support large-scale access. It is not possible to download the entire dump as in other large repositories such as Wikipedia. Of course, making the data easily accessible would further enable other studies, considering the vast amounts of text and the rich associated metadata contained in HUDOC. We have added a couple of sentences in the conclusions section discussing the issue. "
Here is a paper. Please give your review comments after reading it.
387
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e. N-grams, and topics. Our models can predict the court's decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In his prescient work on investigating the potential use of information technology in the legal domain, Lawlor surmised that computers would one day become able to analyse and predict the outcomes of judicial decisions <ns0:ref type='bibr' target='#b17'>(Lawlor, 1963)</ns0:ref>. According to Lawlor, reliable prediction of the activity of judges would depend on a scientific understanding of the ways that the law and the facts impact on the relevant decision-makers, i.e. the judges. More than fifty years later, the advances in Natural Language Processing (NLP) and Machine Learning (ML) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.</ns0:p><ns0:p>In this paper, our particular focus is on the automatic analysis of cases of the European Court of Human Rights (ECtHR or Court). The ECtHR is an international court that rules on individual or, much more rarely, State applications alleging violations by some State Party of the civil and political rights set out in the European Convention on Human Rights (ECHR or Convention). Our task is to predict whether a particular Article of the Convention has been violated, given textual evidence extracted from a case, which comprises of specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. Our main hypotheses are that (1) the textual content, and</ns0:p><ns0:p>(2) the different parts of a case are important factors that influence the outcome reached by the Court. These hypotheses are corroborated by the results. 
Our work lends some initial plausibility to a text-based approach with regard to ex ante prediction of ECtHR outcomes on the assumption, defended in later sections, that the text extracted from published judgments of the Court bears a sufficient number of similarities with, and can therefore stand as a (crude) proxy for, applications lodged with the Court as well as for briefs submitted by parties in pending cases. We submit, though, that full acceptance of that reasonable assumption necessitates more empirical corroboration. Be that as it may, our more general aim is to work under this assumption, thus placing our work within the larger context of ongoing empirical research in the theory of adjudication about the determinants of judicial decision-making. Accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ECtHR cases could potentially provide insights on whether judges follow a so-called legal model <ns0:ref type='bibr' target='#b4'>(Grey, 1983)</ns0:ref> of decision making or their behavior conforms to the legal realists' theorization <ns0:ref type='bibr' target='#b20'>(Leiter, 2007)</ns0:ref>, according to which judges primarily decide cases by responding to the stimulus of the facts of the case.</ns0:p><ns0:p>We define the problem of the ECtHR case prediction as a binary classification task. We utilise textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b41'>(Vapnik, 1998)</ns0:ref>. We apply a linear kernel function that facilitates the interpretation of models in a straightforward manner. Our models can reliably predict ECtHR decisions with high accuracy, i.e. 79% on average. Results indicate that the 'facts' section of a case best predicts the actual court's decision, which is more consistent with legal realists' insights about judicial decision-making. We also observe that the topical content of a case is an important indicator whether there is a violation of a given Article of the Convention or not.</ns0:p><ns0:p>Previous work on predicting judicial decisions, representing disciplinary backgrounds in political science and economics, has largely focused on the analysis and prediction of judges' votes given non textual information, such as the nature and the gravity of the crime or the preferred policy position of each judge <ns0:ref type='bibr' target='#b10'>(Kort, 1957;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nagel, 1963;</ns0:ref><ns0:ref type='bibr' target='#b8'>Keown, 1980;</ns0:ref><ns0:ref type='bibr' target='#b35'>Segal, 1984;</ns0:ref><ns0:ref type='bibr' target='#b26'>Popple, 1996;</ns0:ref><ns0:ref type='bibr' target='#b15'>Lauderdale and Clark, 2012)</ns0:ref>.</ns0:p><ns0:p>More recent research shows that information from texts authored by amici curiae 1 improves models for predicting the votes of the US Supreme Court judges <ns0:ref type='bibr' target='#b37'>(Sim et al., 2015)</ns0:ref>. Also, a text mining approach utilises sources of metadata about judge's votes to estimate the degree to which those votes are about common issues <ns0:ref type='bibr' target='#b16'>(Lauderdale and Clark, 2014)</ns0:ref>. Accordingly, this paper presents the first systematic study on predicting the decision outcome of cases tried at a major international court by mining the available textual information.</ns0:p><ns0:p>Overall, We believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. 
The system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely. This may improve the significant delay imposed by the Court and encourage more applications by individuals who may have been discouraged by the expected time delays.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>European Court of Human Rights</ns0:head><ns0:p>The ECtHR is an international court set up in 1959 by the ECHR. The court has jurisdiction to rule on the applications of individuals or sovereign states alleging violations of the civil and political rights set out in the Convention. The ECHR is an international treaty for the protection of civil and political liberties in to a single judge, who may declare the application inadmissible and strike it out of the Court's list of cases, or be allocated to a Committee or a Chamber. A large number of the applications, according to the court's statistics fail this first admissibility stage. Thus, to take a representative example, according to the Court's provisional annual report for the year 2015 2 , 900 applications were declared inadmissible or struck out of the list by Chambers, approximately 4,100 by Committees and some 78,700 by single judges.</ns0:p><ns0:p>To these correspond, for the same year, 891 judgments on the merits. Moreover, cases held inadmissible or struck out are not reported, which entails that a text-based predictive analysis of them is impossible.</ns0:p><ns0:p>It is important to keep this point in mind, since our analysis was solely performed on cases retrievable through the electronic database of the court, HUDOC 3 . The cases analysed are thus the ones that have already passed the first admissibility stage 4 , with the consequence that the Court decided on these cases' merits under one of its formations.</ns0:p><ns0:p>Main Premise Our main premise is that published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the Court and/or briefs submitted by parties with respect to pending cases. Predictive tasks were based on access to the relevant data set. We thus used published judgments as proxies for the material to which we do not have access. This point should be borne in mind when approaching our results. At the very least, our work can be read in the following hypothetical way: if there is enough similarity between the chunks of text of published judgments that we analyzed and that of lodged applications and briefs, then our approach can be fruitfully used to predict outcomes with these other kinds of texts.</ns0:p><ns0:p>Case Structure The judgments of the Court have a distinctive structure, which makes them particularly suitable for a text-based analysis. According to Rule 74 of the Rules of the Court 5 , a judgment contains (among other things) an account of the procedure followed on the national level, the facts of the case, a summary of the submissions of the parties, which comprise their main legal arguments, the reasons in point of law articulated by the Court and the operative provisions. 
Judgments are clearly divided into different sections covering these contents, which allows straightforward standardisation of the text and consequently renders possible text-based analysis. More specifically, the sections analysed in this paper are the following:</ns0:p><ns0:p>&#8226; Procedure: This section contains the procedure followed before the Court, from the lodging of the individual application until the judgment was handed down.</ns0:p><ns0:p>&#8226; The Facts: This section comprises all material which is not considered as belonging to points of law, i.e. legal arguments. It is important to stress that the facts in the above sense do not just refer to actions and events that happened in the past as these have been formulated by the Court, giving Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>rise to an alleged violation of a Convention article. The 'Facts' section is divided in the following subsections:</ns0:p><ns0:p>-The Circumstances of the Case: This subsection has to do with the factual background of the case and the procedure (typically) followed before domestic courts before the application was lodged by the Court. This is the part that contains materials relevant to the individual applicant's story in its dealings with the respondent state's authorities. It comprises a recounting of all actions and events that have allegedly given rise to a violation of the ECHR.</ns0:p><ns0:p>With respect to this subsection, a number of crucial clarifications and caveats should be stressed. To begin with, the text of the 'Circumstances' subsection has been formulated by the Court itself. As a result, it should not always be understood as a neutral mirroring of the factual background of the case. The choices made by the Court when it comes to formulations of the facts incorporate implicit or explicit judgments to the effect that some facts are more relevant than others. This leaves open the possibility that the formulations used by the Court may be tailor-made to fit a specific preferred outcome. We openly acknowledge this possibility, but we believe that there are several ways in which it is mitigated. First, the ECtHR has limited fact-finding powers and, in the vast majority of cases, it defers, when summarizing the factual background of a case, to the judgments of domestic courts that have already heard and dismissed the applicants' ECHR-related complaint <ns0:ref type='bibr' target='#b19'>(Leach et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leach, 2013)</ns0:ref>. While domestic courts do not necessarily hear complaints on the same legal issues as the ECtHR does, by virtue of the incorporation of the Convention by all States Parties <ns0:ref type='bibr' target='#b5'>(Helfer, 2008)</ns0:ref>, they typically have powers to issue judgments on ECHR-related issues. Domestic judgments may also reflect assumptions about the relevance of various events, but they also provide formulations of the facts that have been validated by more than one decision-maker. Second, the Court cannot openly acknowledge any kind of bias on its part. This means that, on their face, summaries of facts found in the 'Circumstances' section have to be at least framed in as neutral and impartial a way as possible. As a result, for example, clear displays of impartiality, such as failing to mention certain crucial events, seem rather improbable. 
Third, a cursory examination of many ECtHR cases indicates that, in the vast majority of cases, parties do not seem to dispute the facts themselves, as contained in the 'Circumstances' subsection, but only their legal significance (i.e. whether a violation took place or not, given those facts). As a result, the 'Circumstances' subsection contains formulations on which, in the vast majority of cases, disputing parties agree. Last, we hasten to add that the above three kinds of considerations do not logically entail that other forms of non-outright or indirect bias in the formulation of facts are impossible. However, they suggest that, in the absence of access to other kinds of textual data, such as lodged applications and briefs, the 'Circumstances' subsection can reasonably perform the function of a (sometimes crude) proxy for a textual representation of the factual background of a case.</ns0:p><ns0:p>-Relevant Law: This subsection of the judgment contains all legal provisions other than the articles of the Convention that can be relevant to deciding the case. These are mostly provisions of domestic law, but the Court also frequently invokes other pertinent international or European treaties and materials.</ns0:p><ns0:p>&#8226; The Law: The Law section is focused on considering the merits of the case, through the use of legal argument. Depending on the number of issues raised by each application, the section is further divided into subsections that examine individually each alleged violation of some Convention article (see below). However, the Court in most cases refrains from examining all such alleged violations in detail. Insofar as the same claims can be made by invoking more than one article of the Convention, the Court frequently decides only those that are central to the arguments made.</ns0:p><ns0:p>Moreover, the Court also frequently refrains from deciding on an alleged violation of an article, if it overlaps sufficiently with some other violation it has already decided on.</ns0:p><ns0:p>-Alleged Violation of Article x: Each subsection of the judgment examining alleged violations in depth is divided into two sub-sections. The first one contains the Parties' Submissions.</ns0:p><ns0:p>The second one comprises the arguments made by the Court itself on the Merits. This subsection provides the legal reasons that purport to justify the specific outcome reached by the Court. Typically, the Court places its reasoning within a wider set of rules, principles and doctrines that have already been established in its past caselaw and attempts to ground the decision by reference to these. It is to be expected, then, that this subsection refers almost exclusively to legal arguments, sometimes mingled with bits of factual information repeated from previous parts.</ns0:p><ns0:p>&#8226; Operative Provisions: This is the section where the Court announces the outcome of the case, which is a decision to the effect that a violation of some Convention article either did or did not take place. Sometimes it is coupled with a decision on the division of legal costs and, much more rarely, with an indication of interim measures, under article 39 of the ECHR.</ns0:p><ns0:p>Figures <ns0:ref type='figure' target='#fig_2'>1-4</ns0:ref>, show extracts of different sections from the Case of 'Velcheva v. Bulgaria' 6 following the structure described above.</ns0:p></ns0:div> <ns0:div><ns0:head>Data</ns0:head><ns0:p>We create a data set 7 consisting of cases related to Articles 3, 6, and 8 of the Convention. 
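As a rough, hedged illustration of how a judgment's text could be split into the sections described above and normalised in the way the data preparation below describes (lower-casing, stop-word removal with NLTK, and exclusion of the operative provisions), the following sketch may help. The heading strings, the regular expression and the helper names are assumptions made for illustration only; they are not the authors' actual extraction rules.

# Illustrative sketch (not the authors' code): split a judgment into sections and clean the text.
import re
from nltk.corpus import stopwords   # requires the NLTK 'stopwords' corpus to be downloaded

# Hypothetical heading markers; real judgments use headings such as "PROCEDURE",
# "THE FACTS", "THE LAW" and "FOR THESE REASONS" (the operative part).
SECTION_PATTERN = re.compile(r'^(PROCEDURE|THE FACTS|THE LAW|FOR THESE REASONS)\b', re.MULTILINE)

def split_sections(judgment_text):
    """Return a dict mapping a section heading to the raw text that follows it."""
    matches = list(SECTION_PATTERN.finditer(judgment_text))
    sections = {}
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(judgment_text)
        sections[m.group(1)] = judgment_text[start:end]
    # The operative provisions are excluded so that no outcome information leaks into the features.
    sections.pop('FOR THESE REASONS', None)
    return sections

STOP = set(stopwords.words('english'))

def preprocess(text):
    """Lower-case, keep alphabetic tokens only, and drop stop words."""
    tokens = re.findall(r'[a-z]+', text.lower())
    return [t for t in tokens if t not in STOP]

The selection of Articles and the exact extraction and balancing steps used for the data set are described next.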
We focus on these three articles for two main reasons. First, these articles provided the most data we could automatically scrape. Second, it is of crucial importance that there should be a sufficient number of cases available, in order to test the models. Cases from the selected articles fulfilled both criteria. Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> shows the Convention right that each article protects and the number of cases in our data set. For each article, we first retrieve all the cases available in HUDOC. Then, we keep only those that are in English and parse them following the case structure presented above. We then select an equal number of violation and non-violation cases for each particular article of the Convention. To achieve a balanced number of violation/non-violation cases, we first count the number of cases available in each class. Then, we choose all the cases in the smaller class and randomly select an equal number of cases from the larger class. This results to a total of 250, 80 and 254 cases for Articles 3, 6 and 8 respectively.</ns0:p><ns0:p>Finally, we extract the text under each part of the case by using regular expressions, making sure that any sections on operative provisions of the Court are excluded. In this way, we ensure that the models do not use information pertaining to the outcome of the case. We also preprocess the text by lower-casing and removing stop words (i.e. frequent words that do not carry significant semantic information) using the list provided by NLTK 8 .</ns0:p></ns0:div> <ns0:div><ns0:head>Description of Textual Features</ns0:head><ns0:p>We derive textual features from the text extracted from each section (or subsection) of each case. These are either N-gram features, i.e. contiguous word sequences, or word clusters, i.e. abstract semantic topics. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; N-gram Features: The Bag-of-Words (BOW) model <ns0:ref type='bibr' target='#b33'>(Salton et al., 1975;</ns0:ref><ns0:ref type='bibr' target='#b32'>Salton and McGill, 1983)</ns0:ref> is a popular semantic representation of text used in NLP and Information Retrieval. In a BOW model, a document (or any text) is represented as the bag (multiset) of its words (unigrams) or N-grams without taking into account grammar, syntax and word order. That results to a vector space representation where documents are represented as m-dimensional variables over a set of m N-grams. N-gram features have been shown to be effective in various supervised learning tasks <ns0:ref type='bibr' target='#b0'>(Bamman et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b13'>Lampos and Cristianini, 2012)</ns0:ref>. For each set of cases in our data set, we compute the top-2000 most frequent N-grams where N &#8712; {1, 2, 3, 4}. Each feature represents the normalized frequency of a particular N-gram in a case or a section of a case. This can be considered as a feature matrix, C &#8712; R c&#215;m , where c is the number of the cases and m = 2000. We extract N-gram features for the Procedure (Procedure), Circumstances (Circumstances), Facts (Facts), Relevant Law (Relevant Law), Law (Law) and the Full case (Full) respectively. Note that the representations of the Facts is obtained by taking the mean vector of Circumstances and Relevant Law. 
In a similar way, the representation of the Full case is computed by taking the mean vector of all of its sub-parts.</ns0:p><ns0:p>&#8226; Topics: We create topics for each article by clustering together N-grams that are semantically similar by leveraging the distributional hypothesis suggesting that similar words appear in similar contexts. We thus use the C feature matrix (see above), which is a distributional representation <ns0:ref type='bibr' target='#b40'>(Turney and Pantel, 2010)</ns0:ref> of the N-grams given the case as the context; each column vector of the matrix represents an N-gram. Using this vector representation of words, we compute N-gram similarity using the cosine metric and create an N-gram by N-gram similarity matrix. We finally apply spectral clustering (von Luxburg, 2007) -which performs graph partitioning on the similarity matrix -to obtain 30 clusters of N-grams. For Articles 6 and 8, we use the Article 3 data for selecting the number of clusters T , where T = {10, 20, ..., 100}, while for Article 3 we use Article 8. Given that the obtained topics are hard clusters, an N-gram can only be part of a single topic.</ns0:p><ns0:p>A representation of a cluster is derived by looking at the most frequent N-grams it contains. The main advantages of using topics (sets of N-grams) instead of single N-grams is that it reduces the dimensionality of the feature space, which is essential for feature selection, it limits overfitting to training data <ns0:ref type='bibr' target='#b11'>(Lampos et al., 2014;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015;</ns0:ref><ns0:ref type='bibr'>Preot &#184;iuc-Pietro et al., 2015)</ns0:ref> and also provides a more concise semantic representation.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification Model</ns0:head><ns0:p>The problem of predicting the decisions of the ECtHR is defined as a binary classification task. Our goal is to predict if, in the context of a particular case, there is a violation or non-violation in relation to a specific Article of the Convention. For that purpose, we use each set of textual features, i.e. N-grams and topics, to train Support Vector Machine (SVM) classifiers <ns0:ref type='bibr' target='#b41'>(Vapnik, 1998)</ns0:ref>. SVMs are a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets <ns0:ref type='bibr' target='#b6'>(Joachims, 2002;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wang and Manning, 2012)</ns0:ref>. We employ a linear kernel since that allows us to identify important features that are indicative of each class by looking at the weight learned for each feature <ns0:ref type='bibr' target='#b2'>(Chang and Lin, 2008)</ns0:ref>. We label all the violation cases as +1, while no violation is denoted by &#8722;1. Therefore, features assigned with positive weights are more indicative of violation, while features with negative weights are more indicative of no violation.</ns0:p><ns0:p>The models are trained and tested by applying a stratified 10-fold cross validation, which uses a heldout 10% of the data at each stage to measure predictive performance. The linear SVM has a regularisation parameter of the error term C, which is tuned using grid-search. 
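To make the pipeline just described more concrete, the sketch below shows one possible scikit-learn implementation of the top-2000 N-gram features, the hard topic clusters obtained by spectral clustering over N-gram similarity, and a linear SVM evaluated with stratified 10-fold cross-validation. The variable names `texts` and `labels`, the normalisation choice and the grid of C values are assumptions for illustration, not the authors' implementation.

# Minimal sketch of the feature construction and classification set-up described above.
# Assumes `texts` (one string per case or case section) and `labels` (+1 violation, -1 no violation).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# N-gram features: top-2000 most frequent N-grams, N in {1, 2, 3, 4}, frequencies per case.
vectorizer = CountVectorizer(ngram_range=(1, 4), max_features=2000, lowercase=True)
C = vectorizer.fit_transform(texts)               # feature matrix of shape (num_cases, 2000)
X = normalize(C, norm='l1').toarray()             # normalised frequencies (normalisation is an assumption)

# Topics: hard clusters of N-grams via spectral clustering on their cosine similarity.
# Each column of X is the distributional representation of one N-gram over the cases.
sim = cosine_similarity(X.T)                      # (2000, 2000) N-gram by N-gram similarity matrix
clusters = SpectralClustering(n_clusters=30, affinity='precomputed',
                              random_state=0).fit_predict(sim)
topics = np.zeros((X.shape[0], 30))               # topic features: aggregate counts per cluster
for k in range(30):
    topics[:, k] = X[:, clusters == k].sum(axis=1)
# The paper's best average result combines the 'Topics' and 'Circumstances' N-gram features,
# e.g. by concatenating the two matrices with np.hstack.

# Linear SVM with C tuned by grid search (the paper tunes C on a different Article's data,
# as described next); predictive performance measured by stratified 10-fold cross-validation.
grid = GridSearchCV(LinearSVC(), {'C': [1e-3, 1e-2, 1e-1, 1, 10]}, cv=5)
scores = cross_val_score(grid, topics, labels,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                         scoring='accuracy')
print('mean 10-fold accuracy: %.2f' % scores.mean())
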
For Articles 6 and 8, we use the Article 3 data for parameter tuning, while for Article 3 we use Article 8.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Predictive Accuracy</ns0:head><ns0:p>We compute the predictive performance of both sets of features on the classification of the ECtHR cases. Performance is computed as the mean accuracy obtained by 10-fold cross-validation. Accuracy is computed as follows:</ns0:p><ns0:formula xml:id='formula_0'>Accuracy = (TV + TNV) / (V + NV)</ns0:formula><ns0:p>where TV and TNV are the numbers of cases correctly classified as a violation and as a non-violation of an article of the Convention respectively. V and NV represent the total number of cases where there is a violation or not respectively. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the accuracy of each set of features across articles using a linear SVM. The rightmost column also shows the mean accuracy across the three articles. In general, both N-gram and topic features achieve good predictive performance. Our main observation is that both language use and topicality are important factors that appear to stand as reliable proxies of judicial decisions. Therefore, we take a further look into the models by attempting to interpret the differences in accuracy.</ns0:p><ns0:p>We observe that 'Circumstances' is the best subsection to predict the decisions for cases in Articles 6 and 8, with a performance of .82 and .77 respectively. In Article 3, we obtain better predictive accuracy (.70) using the text extracted from the full case ('Full') while the performance of 'Circumstances' is almost comparable (.68). We should again note here that the 'Circumstances' subsection contains information regarding the factual background of the case, as this has been formulated by the Court. The subsection therefore refers to the actions and events which triggered the case and gave rise to a claim made by an individual to the effect that the ECHR was violated by some state. On the other hand, 'Full', which is a mixture of information contained in all of the sections of a case, surprisingly fails to improve over using only the 'Circumstances' subsection. This entails that the factual background contained in the 'Circumstances' is the most important textual part of the case when it comes to predicting the Court's decision.</ns0:p><ns0:p>The other sections and subsections that refer to the facts of a case, namely 'Procedure', 'Relevant Law' and 'Facts', achieve somewhat lower performance (.73 cf. .76), although they remain consistently above chance. Recall, at this point, that the 'Procedure' subsection consists only of general details about the applicant, such as the applicant's name or country of origin and the procedure followed before domestic courts.</ns0:p><ns0:p>On the other hand, the 'Law' subsection, which refers either to the legal arguments used by the parties or to the legal reasons provided by the Court itself on the merits of a case, consistently obtains the lowest performance (.62). One important reason for this poor performance is that a large number of cases do not include a 'Law' subsection, i.e. 162, 52 and 146 for Articles 3, 6 and 8 respectively. That happens in cases that the Court deems inadmissible, concluding with a judgment of non-violation. 
In these cases, the judgment of the Court is more summary than in others.</ns0:p><ns0:p>We also observe that the predictive accuracy is high for all the Articles when using the 'Topics' as Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and 8 respectively. </ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The consistently more robust predictive accuracy of the 'Circumstances' subsection suggests a strong correlation between the facts of a case, as these are formulated by the Court in this subsection, and the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science decisions made by judges. The relatively lower predictive accuracy of the 'Law' subsection could also be an indicator of the fact that legal reasons and arguments of a case have a weaker correlation with decisions made by the Court. However, this last remark should be seriously mitigated since, as we have already observed, many inadmissibility cases do not contain a separate 'Law' subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>Legal Formalism and Realism</ns0:head><ns0:p>These results could be understood as providing some evidence for judicial decision-making approaches according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide appellate cases. Without going into details with respect to a particularly complicated debate that is out of the scope of this paper, we may here simplify by observing that since the beginning of the 20th century, there has been a major contention between two opposing ways of making sense of judicial decision-making: legal formalism and legal realism <ns0:ref type='bibr' target='#b27'>(Posner, 1986;</ns0:ref><ns0:ref type='bibr' target='#b38'>Tamanaha, 2009;</ns0:ref><ns0:ref type='bibr' target='#b21'>Leiter, 2010)</ns0:ref>. Very roughly, legal formalists have provided a legal model of judicial decision-making, claiming that the law is rationally determinate: judges either decide cases deductively, by subsuming facts under formal legal rules or use more complex legal reasoning than deduction whenever legal rules are insufficient to warrant a particular outcome <ns0:ref type='bibr' target='#b28'>(Pound, 1908;</ns0:ref><ns0:ref type='bibr' target='#b7'>Kennedy, 1973;</ns0:ref><ns0:ref type='bibr' target='#b4'>Grey, 1983;</ns0:ref><ns0:ref type='bibr' target='#b25'>Pildes, 1999)</ns0:ref>. On the other hand, legal realists have criticized formalist models, insisting that judges primarily decide appellate cases by responding to the stimulus of the facts of the case, rather than on the basis of legal rules or doctrine, which are in many occasions rationally indeterminate <ns0:ref type='bibr' target='#b22'>(Llewellyn, 1996;</ns0:ref><ns0:ref type='bibr' target='#b34'>Schauer, 1998;</ns0:ref><ns0:ref type='bibr' target='#b1'>Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b20'>Leiter, 2007;</ns0:ref><ns0:ref type='bibr' target='#b23'>Miles and Sunstein, 2008)</ns0:ref>.</ns0:p><ns0:p>Extensive empirical research on the decision-making processes of various supreme and international courts, and especially the US Supreme Court, has indicated rather consistently that pure legal models, especially deductive ones, are false as an empirical matter when it comes to cases decided by courts further up the hierarchy. 
As a result, it is suggested that the best way to explain past decisions of such courts and to predict future ones is by placing emphasis on other kinds of empirical variables that affect judges <ns0:ref type='bibr' target='#b1'>(Baum, 2009;</ns0:ref><ns0:ref type='bibr' target='#b34'>Schauer, 1998)</ns0:ref>. For example, early legal realists had attempted to classify cases in terms of regularities that can help predict outcomes, in a way that did not reflect standard legal doctrine <ns0:ref type='bibr' target='#b22'>(Llewellyn, 1996)</ns0:ref>. Likewise, the attitudinal model for the US Supreme Court claims that the best predictors of its Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>decisions are the policy preferences of the Justices and not legal doctrinal arguments <ns0:ref type='bibr' target='#b36'>(Segal and Spaeth, 2002)</ns0:ref>.</ns0:p><ns0:p>In general, and notwithstanding the simplified snapshot of a very complex debate that we just presented, our results could be understood as lending some support to the basic legal realist intuition according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide hard cases. In particular, if we accept that the 'Circumstances' subsection, with all the caveats we have already voiced, is a (crude) proxy for non-legal facts and the 'Law' subsection is a (crude) proxy for legal reasons and arguments, the predictive superiority of the 'Circumstances' subsection seems to cohere with extant legal realist treatments of judicial decision-making.</ns0:p><ns0:p>However, not more should be read into this than our results allow. First, as we have already stressed at several occasions, the 'Circumstances' subsection is not a neutral statement of the facts of the case and we have only assumed the similarity of that subsection with analogous sections found in lodged applications and briefs. Second, it is important to underline that the results should also take into account the so-called selection effect <ns0:ref type='bibr' target='#b31'>(Priest and Klein, 1984)</ns0:ref> that pertains to cases judged by the ECtHR as an international court. Given that the largest percentage of applications never reaches the Chamber or, still less, the Grand Chamber, and that cases have already been tried at the national level, it could very well be the case that the set of ECtHR decisions on the merits primarily refers to cases in which the class of legal reasons, defined in a formal sense, is already considered as indeterminate by competent interpreters. This could help explain why judges primarily react to the facts of the case, rather than to legal arguments. Thus, further text-based analysis is needed in order to determine whether the results could generalise to other courts, especially to domestic courts deciding ECHR claims that are placed lower within the domestic judicial hierarchy. Third, our discussion of the realism/formalism debate is overtly simplified and does not imply that the results could not be interpreted in a sophisticated formalist way. Still, our work coheres well with a bulk of other empirical approaches in the legal realist vein.</ns0:p></ns0:div> <ns0:div><ns0:head>Topic Analysis</ns0:head><ns0:p>The topics further exemplify this line of interpretation and provide proof of the usefulness of the NLP approach. 
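Because the classifier uses a linear kernel, this inspection amounts to reading off the learned weight vector, with positive weights pointing towards violation (+1) and negative weights towards non-violation (-1), as noted in the Classification Model section. A minimal sketch of such an inspection is given below; it assumes a scikit-learn LinearSVC fitted on the topic features of the earlier sketch, and the variable names are hypothetical.

# Sketch: ranking topic features by their signed weight in a fitted linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

svm = LinearSVC(C=1.0).fit(topics, labels)   # `topics`, `labels` as in the earlier sketch
w = svm.coef_.ravel()                        # one signed weight per topic feature

order = np.argsort(w)
most_violation = order[::-1][:6]             # six most positive topics (indicative of violation)
most_no_violation = order[:6]                # six most negative topics (indicative of no violation)

for k in list(most_violation) + list(most_no_violation):
    # A list of the most frequent N-grams per cluster (not shown here) would label each topic.
    print('topic %2d  weight %+0.3f' % (k, w[k]))
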
The linear kernel of the SVM model can be used to examine which topics are most important for inferring whether an article of the Convention has been violated or not by looking at their weights w.</ns0:p><ns0:p>Tables 3, 4 and 5 present the six topics for the most positive and negative SVM weights for the articles 3, 6 and 8 respectively. Topics identify in a sufficiently robust manner patterns of fact scenarios that correspond to well-established trends in the Court's case law.</ns0:p><ns0:p>First, topic 13 in 74912/01, ECHR 2009-IV) were identified as exemplifications of this trend. Likewise, topic 28 in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref> has to do with whether certain choices with regard to the social policy of states can amount to a violation of Article 8. That was correctly identified as typically not giving rise to a violation, in line with the Court's tendency to acknowledge a large margin of appreciation to states in this area <ns0:ref type='bibr' target='#b3'>(Greer, 2000)</ns0:ref>. <ns0:ref type='bibr' target='#b39'>(Tsarapatsanis, 2015)</ns0:ref>. A representative case here is Oao Plodovaya Kompaniya v. Russia of 7 June 2007. Consequently, the topics identify independently well-established trends in the case law without recourse to expert legal/doctrinal analysis.</ns0:p><ns0:p>The above observations require to be understood in a more mitigated way with respect to a (small) number of topics. For instance, most representative cases for topic 8 in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> were not particularly informative. This is because these were cases involving a person's death, in which claims of violations of Article 3 (inhuman and degrading treatment) were only subsidiary: this means that the claims were mainly about Article 2, which protects the right to life. In these cases, the absence of a violation, even if correctly On the other hand, cases have been misclassified mainly because their textual information is similar to cases in the opposite class. We observed a number of cases where there is a violation having a very similar feature vector to cases that there is no violation and vice versa.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We presented the first systematic study on predicting judicial decisions of the European Court of Human</ns0:p><ns0:p>Rights using only the textual information extracted from relevant sections of ECtHR judgments. We framed this task as a binary classification problem, where the training data consists of textual features extracted from given cases and the output is the actual decision made by the judges.</ns0:p><ns0:p>Apart from the strong predictive performance that our statistical NLP framework achieved, we have reported on a number of qualitative patterns that could potentially drive judicial decisions. More specifically, we observed that the information regarding the factual background of the case as this is formulated by the Court in the relevant subsection of its judgments is the most important part obtaining on average the strongest predictive performance of the Court's decision outcome. 
We suggested that, even if understood only as a crude proxy and with all the caveats that we have highlighted, the rather robust correlation between the outcomes of cases and the text corresponding to fact patterns contained in the relevant subsections coheres well with other empirical work on judicial decision-making in hard cases and backs basic legal realist intuitions.</ns0:p><ns0:p>Finally, we believe that our study opens up avenues for future work, using different kinds of data (e.g. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 1. Procedure</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The Law</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.56,155.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Decision</ns0:figDesc><ns0:graphic coords='5,141.73,253.88,413.57,159.43' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016) Manuscript to be reviewed Computer Science * Parties' Submissions: The Parties' Submissions typically summarise the main arguments made by the applicant and the respondent state. Since in the vast majority of cases the material facts are taken for granted, having been authoritatively established by domestic courts, this part has almost exclusively to do with the legal arguments used by the parties. * Merits:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>9</ns0:head><ns0:label /><ns0:figDesc>European Court of Human Rights (2015): Factsheet on Life Imprisonment, http://www.echr.coe.int/Documents/ FS_Life_sentences_ENG.pdf 10 Note that all the cases used as examples in this section are taken from the data set we used to perform the experiments. 11/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016) Manuscript to be reviewed Computer Science identified, is more of a technical issue on the part of the Court, which concentrates its attention on Article 2 and rarely, if ever, moves on to consider independently a violation of Article 3. This is exemplified by cases such as Buldan v. Turkey of 20 April 2004 and Nuray S &#184;en v. Turkey of 30 March 2004, which were, again, correctly identified.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>texts of individual applications, briefs submitted by parties or domestic judgments) coming from various sources (e.g. the European Court of Human Rights, national authorities, law firms). However, data access issues pose a significant barrier for scientists to work on such kinds of legal data. Large repositories like HUDOC, which are easily and freely accessible, are only case law databases. 
Access to other kinds of data, especially lodged applications and briefs, would enable further research in the intersection of legal science and artificial intelligence.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,141.73,63.78,413.55,147.54' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,141.73,246.68,413.57,161.46' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Articles of the Convention and number of cases in the data set.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Article Human Right</ns0:cell><ns0:cell>Cases</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Prohibits torture and inhuman and degrading treatment</ns0:cell><ns0:cell>250</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Protects the right to a fair trial</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Provides a right to respect for one's 'private and family life, his</ns0:cell><ns0:cell>254</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>home and his correspondence'</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Accuracy of the different feature types across articles. Accuracy of predicting violation/non-violation of cases across articles on 10-fold cross-validation using an SVM with linear kernel. Parentheses contain the standard deviation from the mean. Accuracy of random guess is .50. Bold font denotes best accuracy in a particular Article or on Average across Articles.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Feature Type</ns0:cell><ns0:cell cols='4'>Article 3 Article 6 Article 8 Average</ns0:cell></ns0:row><ns0:row><ns0:cell>N-grams</ns0:cell><ns0:cell>Full</ns0:cell><ns0:cell>.70 (.10)</ns0:cell><ns0:cell>.82 (.11)</ns0:cell><ns0:cell>.72 (.05)</ns0:cell><ns0:cell>.75</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Procedure</ns0:cell><ns0:cell>.67 (.09)</ns0:cell><ns0:cell>.81 (.13)</ns0:cell><ns0:cell>.71 (.06)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Circumstances .68 (.07)</ns0:cell><ns0:cell>.82 (.14)</ns0:cell><ns0:cell>.77 (.08)</ns0:cell><ns0:cell>.76</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Relevant Law</ns0:cell><ns0:cell>.68 (.13)</ns0:cell><ns0:cell>.78 (.08)</ns0:cell><ns0:cell>.72 (.11)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Facts</ns0:cell><ns0:cell>.70 (.09)</ns0:cell><ns0:cell>.80 (.14)</ns0:cell><ns0:cell>.68 (.10)</ns0:cell><ns0:cell>.73</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Law</ns0:cell><ns0:cell>.56 (.09)</ns0:cell><ns0:cell>.68 (.15)</ns0:cell><ns0:cell>.62 (.05)</ns0:cell><ns0:cell>.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Topics</ns0:cell><ns0:cell>.78 (.09)</ns0:cell><ns0:cell>.81 (.12)</ns0:cell><ns0:cell>.76 (.09)</ns0:cell><ns0:cell>.78</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Topics and Circumstances</ns0:cell><ns0:cell cols='3'>.75 (.10) .84 (0.11) .78 (0.06)</ns0:cell><ns0:cell>.79</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>features, i.e. .78, .81 and .76 for Articles 3, 6 and 8 respectively. 
'Topics' obtain best performance in</ns0:figDesc><ns0:table /><ns0:note>Article 3 and performance comparable to 'Circumstances' in Articles 6 and 8. 'Topics' form a more abstract way of representing the information contained in each case and capture a more general gist of the cases.Combining the two best performing sets of features ('Circumstances' and 'Topics') we achieve the best average classification performance (.79). The combination also yields slightly better performance forArticles 6 and 8 while performance slightly drops for Article 3. That is .75, .84 and .78 for Articles 3, 6 8/13 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The most predictive topics for Article 3 decisions. Most predictive topics for Article 3, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The most predictive topics for Article 6 decisions. Most predictive topics for Article 6, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The most predictive topics for Article 8 decisions. Most predictive topics for Article 8, represented by the 20 most frequent words, listed in order of their SVM weight. Topic labels are manually added. Positive weights (w) denote more predictive topics for violation and negative weights for no violation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Topic</ns0:cell><ns0:cell>Label</ns0:cell><ns0:cell>Words</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>Table 3 has to do with whether long prison sentences and other detention measures can amount to inhuman and degrading treatment under Article 3. That is correctly identified as typically not giving rise to a violation 9 . For example, cases 10 such as Kafkaris v. Cyprus ([GC] no. 21906/04,</ns0:figDesc><ns0:table /><ns0:note>ECHR 2008-I), Hutchinson v. UK (no. 57592/08 of 3 February 2015) and Enea v. Italy ([GC], no.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>In this vein, cases such as Aune v. Norway (no. 52502/07 of 28 October 2010) and Ball v. Andorra (Application no. 40628/10 of 11 December 2012) are examples of cases where topic 28 is dominant. Similar observations apply, among other things, to topics 27, 23 and 24. That includes issues with the enforcement of domestic judgments giving rise to a violation of Article 6 (Kiestra, 2014). Some representative cases are Velskaya v. Russia, of 5 October 2006 and Aleksandrova v. Russia of 6 December 2007. 
Topic 7 in Table 4 is related to lower standard of review when property rights are at play</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='2'>ECHtR provisional annual report for the year 2015, http://www.echr.coe.int/Documents/Annual_report_ 2015_ENG.pdf 3 HUDOC ECHR Database, http://hudoc.echr.coe.int/ 4 Nonetheless, not all cases that pass this first admissibility stage are decided in the same way. While the individual judge's decision on admissibility is final and does not comprise the obligation to provide reasons, a Committee deciding a case may, by unanimous vote, declare the application admissible and render a judgment on its merits, if the legal issue raised by the application is covered by well-established case-law by the Court.3/13PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10595:3:0:NEW 21 Sep 2016)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"We would like to thank the editor and the reviewers for their constructive comments. See below our point­by­point response. Reviewer 1 The caveats in the rebuttal and the changes made to the paper are interesting. The relevant limitations that are acknowledged ­­ those that make the paper reliant on crude proxies ­­ mean that the paper's arguments would struggle to be satisfactory in a legal paper or in legal argument. I understand that the methods adopted in this area may be different. Four specific revisions: (1) The the rule on exhaustion of domestic remedies seems to be poorly articulated, or perhaps misunderstood, at line 152. In particular, its application at the domestic level seems back­to­front, at least as it is explained here. It should be revised. The reviewer is right that maybe the formulation we used was not as clear as it could be. The formulation is this: ‘While domestic courts do not necessarily hear complaints on the same legal issues as the ECtHR does, by virtue of the rule of exhaustion of domestic remedies they typically have powers to issue judgments on ECHR­related issues.’ We meant to say that the rule of exhaustion of domestic remedies, which is an aspect of the principle of subsidiarity, guarantees that domestic courts will be the first to hear complaints on ECHR­based issues (among other things). Of course, whether these issues can be formulated before domestic courts in ways that invoke specifically the Convention will depend on whether the ECHR is indeed incorporated and directly applicable before domestic courts. This is currently the case for all States Parties to the Convention. As the Committee of Ministers puts it (Appendix to Recommendation Rec(2004)6 of the Committee of Ministers to Member States on the improvement of domestic remedies (12 May 2004), at 3–4): ‘[the ECHR] has become an integral part of the domestic legal orders of all states parties’. In any event, we have further clarified the point in the text (see lines 147­149). (2) If there is literature to support the assumption made in lines 108­111, that literature should be noted. Or is it the literature in lines 134­173? We know of no academic literature to support this claim with regard to pending applications and/or briefs, the situation being different with respect to domestic judgments (where we provided references in lines 134­173). The reason is simply that applications and briefs are not publicly available. However, one of the coauthors has drafted applications to the ECtHR and has handled cases before the Court: his experience was to the effect that the summaries of the facts by the Court bear important similarities to those prepared by individuals and States. The sample of cases, nonetheless, was not significant. Be that as it may, we reiterate that we formulated our premise in a hypothetical way to underscore the fact that more research is needed, based on the text of applications and briefs, and that the encouraging results we got for the present paper provide us with a reason to proceed to this further research, which we are very keen to undertake in the future. (3) The authors have explained eloquently the limitations on their methodology due to unavailable data. It is therefore jarring to see an expression of belief at line 116­117 ('we believe there is') that, surely, is unsupported due to those very limitations? Is belief the basis for scientific argument here? We agree with the reviewer and have revised the text accordingly. 
(4) Line 416 says 'Large repositories like HUDOC should be easily and freely accessible.' This could be more precise: is HUDOC not easily and freely accessible? Isn't the authors' real concern with the fact that HUDOC is a 'case law database' and not a 'database of case law and other things'? We agree with the reviewer and have revised our text accordingly. "
Here is a paper. Please give your review comments after reading it.
388
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Searchable symmetric encryption (SSE) provides an effective way to search encrypted data stored on untrusted servers. As we all known, the server is not trusted, so it is indispensable to verify the results returned by it. However, the existing SSE schemes either lack fairness in the verification of search results, or do not support the verification of multiple keywords. To address this, we design a multi-keyword verifiable searchable symmetric encryption scheme based on blockchain, which provides an efficient multikeyword search and fair verification of search results. We utilize bitmap to build search index in order to improve search efficiency, and use blockchain to ensure fair verification of search results. The bitmap and hash function are combined to realize lightweight multikeyword search result verification, compared with the existing verification schemes using public key cryptography primitives, our scheme reduces the verification time and improves the verification efficiency. In addition, our scheme supports the dynamic update of files and realizes the forward security in update. Finally, formal security analysis proves that our scheme is secure against Chosen-Keyword Attacks (CKA), experimental analysis demonstrations that our scheme is efficient and viable in practice.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With the development of artificial intelligence, Internet of things, Internet of vehicles and other emerging technologies, more and more enterprises and individuals outsource local data to the cloud, thereby reducing storage and management overhead. However, security and privacy concerns still hinder the deployment of cloud storage system. Although data encryption can eradicate such concerns to some extent, it becomes difficultfor users to search over the data.</ns0:p><ns0:p>Searchable symmetric encryption (SSE) provides an efficient mechanism to solve this, which enables users to search encrypted data efficiently without decryption. Since SSE was first proposed by Song <ns0:ref type='bibr' target='#b0'>(Song, Wagner &amp; Perrig, 2000)</ns0:ref>, how to perform efficient and versatile search on encrypted data has always been an important research direction. The existing SSE schemes mainly use linked lists and vectors to build indexes, the cloud server needs to traverse the whole list or vector to search for matching results during a query, which incurs high search overhead. In addition to efficient search, dynamic updates are also very important in SSE. <ns0:ref type='bibr' target='#b26'>(Zhang, Katz &amp; Papamanthou, 2016)</ns0:ref> has shown that adversaries can infer the critical information through the file injection attacks during the dynamic update of the SSE, while the forward-secure SSE can avoid this. Therefore, the forward security of the scheme must be fully considered when designing the SSE scheme.</ns0:p><ns0:p>Verifiability of the search results is another important research issue for SSE. Since The cloud server is untrusted, which may returns incorrect or incomplete results due to system failures or cost savings, so, it is necessary to verify the search results. In 2012, <ns0:ref type='bibr' target='#b7'>(Qi &amp; Gong, 2012)</ns0:ref> proposed the concept of verifiable SSE (VSSE) and constructed a verifiable SSE scheme based on word tree. 
Following this work, a great many VSSE schemes are proposed <ns0:ref type='bibr'>(Kurosawa</ns0:ref> In these schemes, the verification is mainly performed by users, but the user may forge verification results to save costs, so the reliability of the verification cannot be guaranteed. To address this, some researchers <ns0:ref type='bibr'>(Hu et.al 2018;</ns0:ref><ns0:ref type='bibr'>Li et.al 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Guo, Zhang &amp; Jia , 2020)</ns0:ref> introduce blockchain into SSE to verify search results, which guarantees the fairness and reliability of the verification. Although blockchain achieves fair verification of search results, but the existing schemes are only for a single keyword, and there is little research on fair verification for multi-keywords.</ns0:p><ns0:p>In this paper, we introduce a verifiable multi-keyword SSE scheme based on blockchain, which can perform efficient multi-keyword search, ensures the fairness of verification, and supports the dynamic update of files. To our knowledge, this is the first scheme to verify the search results of multi-keywords fairly. In general, the contributions of this paper are summarized as follows: &#61623; Our scheme realizes efficient multi-keyword search and verification of search results, at the same time, our scheme supports dynamic update of files and achieves forward security.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>Our scheme utilizes blockchain to verify the search results, ensuring the reliability and fairness of the verification results. Combining bitmap index and hash function, we realize lightweight multi-keyword verification to improve verification efficiency.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>We formally prove that our scheme is adaptively secure against CKA, and we conduct a series of experiments to evaluate the performance of our scheme.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works</ns0:head></ns0:div> <ns0:div><ns0:head>Searchable Symmetric Encryption</ns0:head><ns0:p>Since SSE was proposed, a number of works have been done to improve search efficiency, rich expression and advanced security. The first SSE scheme <ns0:ref type='bibr' target='#b0'>(Song, Wagner &amp; Perrig, 2000)</ns0:ref> enables users to search keywords through full-text scanning, search time increases linearly with the size of files, which is impractical and inefficient. To improve efficient, <ns0:ref type='bibr'>Curtmola et.al (2006)</ns0:ref> proposed an inverted index SSE, which achieves sub-linear search time, and gives a definition of SSE security, but this scheme does not support dynamic operations. Wang, Cao &amp; Ren (2010) expanded the scheme of <ns0:ref type='bibr'>Curtmola et.al (2006)</ns0:ref> to support dynamic operations, and proved that the scheme was adaptively secure against chosen-keyword attacks (CKA2-secure). For the schemes that support dynamic operation, forward security is critically crucial. The research of Cash et.al (2013) and Zhang, Katz &amp; Papamanthou (2016) indicated that in the SSE scheme without forward security, the adversary can recover most of the sensitive information in ciphertext at a small cost, their research shows the importance of forward security.</ns0:p><ns0:p>Multi-keyword search is a crucial means to improve search efficiency. In single-keyword search scheme <ns0:ref type='bibr'>(Song,</ns0:ref> <ns0:ref type='bibr'>Liu et.al (2021)</ns0:ref> proposed their verifiable schemes in the Internet of things. 
These schemes verify the results of multi-keyword search by public key cryptography primitives, which is computationally expensive and inefficient. What is more, these multi-keyword search verifiable schemes mainly focus on verifying the returned files are valid and whether the files really contains the query keywords, but they didn't ensure all files containing the query keywords are returned.</ns0:p></ns0:div> <ns0:div><ns0:head>Verifiable Searchable Symmetric Encryption Based on Blockchain</ns0:head><ns0:p>In the existing SSE schemes, the verification of search results is performed by users. However, users may forge verification results for economic benefits, which damages the fairness of verification. To solve this, a flexible and feasible method is to adopt blockchain to verify search results, which uses the non-repudiable property of the blockchain to ensure the reliability and fairness of verification. Hu et.al <ns0:ref type='bibr' target='#b5'>(2018)</ns0:ref> built a distributed, verifiable and fair ciphertext retrieval scheme based on blockchain. Li et.al (2019) proposed a verifiable scheme combined blockchain and SSE, which can verify the results automatically and reduce the calculation of users. Guo, Zhang &amp; Jia (2020) used the blockchain to realize the public authentication of search results, and ensures forward security of dynamic update. Although these schemes realize the fair verification of search results, but they are mainly for single keyword search, whereas there is little research on the fair verification of multi-keyword. Comparison results with existing schemes are shown in Table <ns0:ref type='table' target='#tab_12'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Preliminaries Bitmap</ns0:head><ns0:p>To improve search efficiency, we use the bitmap <ns0:ref type='bibr' target='#b22'>(Spiegler &amp; Maayan, 1985)</ns0:ref> to build inverted index. Bitmap uses a binary string to store a set of information, which can effectively save storage space, and it has been widely used in the field of ciphertext retrieval. In our scheme, each keyword i w corresponds to a bitmap, which contains bits, is the number of files in the system, if the th l l i &#61485; document contains the value of in position is 1, otherwise 0. For example, there are four i w l i files ( , , , ) and two keywords ( , ), in Fig. <ns0:ref type='figure'>1</ns0:ref>, is contained in and , is </ns0:p></ns0:div> <ns0:div><ns0:head>Blockchain</ns0:head><ns0:p>Blockchain is a distributed database, which is widely used in emerging cryptocurrencies to store transaction information such as bitcoin. The blockchain has the features of decentralization, transparency and unforgeability. There is no central server in the blockchain, all nodes participate in the operation and generate the calculation results, the information stored on the blockchain can be seen by all nodes in the network. All nodes of the blockchain share the same data record, under the action of the consensus mechanism, a single node cannot modify the data stored on the chain. The above characteristics of blockchain make it suitable to be a trusted third party for fair verification.</ns0:p></ns0:div> <ns0:div><ns0:head>Method System Model</ns0:head><ns0:p>The system model of our scheme is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, there are four entities in the system: data owner, cloud server, data user, blockchain. For the files in the system, data owner extracts all F keywords and generates a keyword set . 
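As a concrete illustration of the bitmap index described in the Preliminaries, which the data owner derives from the file set F and the keyword set W in the next step, a short plaintext sketch is given below. This is not the authors' implementation: the helper names are illustrative, Python integers are used as bit vectors, and the encryption that the Construction section layers on top is omitted.

# Plaintext sketch of the bitmap inverted index: each keyword maps to an
# l-bit vector whose i-th bit is 1 iff file i contains the keyword, and a
# multi-keyword query is answered by AND-ing the queried keywords' bitmaps.
def build_bitmap_index(files):
    """files: one set of keywords per file, file i -> files[i]."""
    index = {}
    for i, kws in enumerate(files):
        for kw in kws:
            index[kw] = index.get(kw, 0) | (1 << i)
    return index

def multi_keyword_match(index, query, num_files):
    """Return the identifiers of files containing every keyword in query."""
    result = (1 << num_files) - 1            # start with all bits set
    for kw in query:
        result &= index.get(kw, 0)           # AND the bitmaps keyword by keyword
    return [i for i in range(num_files) if result >> i & 1]

# Toy example with four files and two keywords (the assignment is arbitrary,
# not taken from Fig. 1): files 1 and 3 contain both w1 and w2.
files = [{"w1"}, {"w1", "w2"}, {"w2"}, {"w1", "w2"}]
index = build_bitmap_index(files)
print(multi_keyword_match(index, ["w1", "w2"], len(files)))   # prints [1, 3]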
Data owner encrypts files to a database , builds an W T encrypted index and a checklist , and are sent to cloud server, and are sent to</ns0:p><ns0:formula xml:id='formula_0'>T B B T B T T B B</ns0:formula><ns0:p>blockchain. When a data user joins the system, it sends an authentication request to the data owner, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science obtains keys and system parameters. During a query, the data user generates search token , i Q TK according to the keywords to be queried with the help of keys and system parameters, and then sends it to cloud server and blockchain, respectively. Cloud server provides storage services for index and . In addition, the cloud server performs ciphertext retrieval according to the search </ns0:p></ns0:div> <ns0:div><ns0:head>Threat Model</ns0:head><ns0:p>Like other verifiable SSE schemes (Soleimanian &amp; Khazaei, 2019), we assume that the cloud server is malicious, which may return an incorrect or incomplete search result for selfish reasons, such as saving bandwidth or storage space. In addition, we assume that the data user is also untrusted, since it may forge the verification results for economic benefits. The data owner and blockchain are trusted, they execute the protocols in the system honestly.</ns0:p></ns0:div> <ns0:div><ns0:head>Algorithm Definitions</ns0:head><ns0:p>Our scheme includes eight polynomial time algorithms, {Keygen,Setup,ClientAuth &#61653; &#61501;</ns0:p><ns0:p>, and the details are as follows: TokenGen,Search,Verify,UpdateToken,Update} &#61623; , takes system parameter as input, and outputs system keys .</ns0:p><ns0:p>(1 )</ns0:p><ns0:formula xml:id='formula_1'>K &#61548; &#61612; KeyGen &#61548; K &#61623;</ns0:formula><ns0:p>, takes system keys , the keyword set and the set of files</ns0:p><ns0:formula xml:id='formula_2'>(T, T , B) ( , ) K &#61612; Setup W, F B K W</ns0:formula><ns0:p>as input, outputs a database of encrypted files , an encrypted index and a checklist .</ns0:p><ns0:formula xml:id='formula_3'>F T T B B &#61623;</ns0:formula><ns0:p>, takes the attribute of user as input, outputs secret key and</ns0:p><ns0:formula xml:id='formula_4'>1 ( , ) ( ) i K &#61669; &#61612; ClientAuth A i A 1 K the keyword status . &#61669; &#61623; , takes secret key , a set of keywords to query , 1 ( ,W) i Q TK K &#61612; TokenGen 1 K 1 2</ns0:formula><ns0:p>W { , ,..., w w &#61501; , outputs the search token . 
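To make the TokenGen interface above a little more concrete, a purely illustrative sketch follows. The only grounded detail is that the authors' experiments instantiate the pseudo-random function F with HMAC-SHA-256; the idea of pairing each queried keyword's index position with a per-keyword PRF tag, the omission of the keyword status, and the helper names are all assumptions, not the token format fixed later in the Construction.

# Hypothetical TokenGen sketch: derive a per-keyword tag with the PRF
# (HMAC-SHA-256, as in the paper's experiments) and pair it with the
# keyword's position l in the index. The real token layout is defined in
# the Construction section; this only illustrates the PRF step.
import hmac
import hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def gen_search_token(k1: bytes, query_keywords, keyword_positions):
    """keyword_positions: assumed map from keyword to its position l."""
    token = []
    for kw in query_keywords:
        tag = prf(k1, kw.encode())                  # per-keyword PRF tag
        token.append((keyword_positions[kw], tag))  # (position l, tag)
    return token

# Usage sketch: gen_search_token(k1, ["w1", "w2"], {"w1": 0, "w2": 1}),
# with the same token sent to both the cloud server and the blockchain.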
</ns0:p></ns0:div> <ns0:div><ns0:head>Security Definitions</ns0:head><ns0:p>We prove the security of our scheme with the random oracle model, which can be executed by two probabilistic games and , and we have the following definitions: { , ,..., }</ns0:p><ns0:formula xml:id='formula_5'>Real ( ) &#61548; A Ideal ( ) &#61548; A S</ns0:formula><ns0:formula xml:id='formula_6'>t Q q q q &#61501; i q Q &#61646; S Search L</ns0:formula><ns0:p>and , receives those tokens and generates a bit as the output of this experiment.</ns0:p></ns0:div> <ns0:div><ns0:head>Update</ns0:head></ns0:div> <ns0:div><ns0:head>L A b</ns0:head><ns0:p>If for any probabilistic polynomial time (PPT) adversary , there exist an efficient simulator A S , which satisfies that:</ns0:p><ns0:formula xml:id='formula_7'>, |Pr[Real ( ) 1] Pr[Ideal ( ) 1] ( ) negl &#61548; &#61548; &#61548; &#61501; &#61485; &#61501; &#61603; A A S</ns0:formula><ns0:p>, we say is secure against CKA2, where is an negligible function and is the security</ns0:p><ns0:formula xml:id='formula_8'>&#61653; &#61485; L negl &#61548; parameter.</ns0:formula></ns0:div> <ns0:div><ns0:head>Construction</ns0:head><ns0:p>In this section, we present the construction of our scheme in detail. We take bitmap as index structure to achieve efficient search over encrypted data, and use blockchain to verify the search Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>results. The bitmap is utilized to build the inverted index to achieve the optimal search time ,where is the keywords in search and is the number of .</ns0:p></ns0:div> <ns0:div><ns0:head>&#61480; &#61481;</ns0:head><ns0:formula xml:id='formula_9'>| | q O q | | q q</ns0:formula><ns0:p>In our scheme, the blockchain is used to fairly verify the search results. In , the data owner Setup calculates the hash value of files, generates a checklist and saves it on the blockchain. During B the verification, the blockchain smart contract computes the hash values of search results returned by the server and compares them with the existing results to obtain the verification results. Specifically, in the single keyword setting, the blockchain stores the corresponding benchmark directly since the results corresponding to the keywords are determined. However, it's impossible in multi-keyword search because the search results are variable, which can only store the verification value of each file. To ensure the credibility of the search results, the blockchain also needs to perform multi-keyword search to obtain the search results. Therefore, we save the index on the blockchain. During a query, the blockchain executes multi-keyword search to get the , where ,</ns0:p><ns0:formula xml:id='formula_10'>&#61548; &#61646; N 1 2 3 { , , } K K K K &#61501; 1 2 3 , , {0,1} K K K &#61548; &#61612; $ 1 2</ns0:formula><ns0:p>, K K are used to encrypt the bitmap index for each keyword , is used to encrypt files</ns0:p><ns0:formula xml:id='formula_11'>w i &#61646; W 3 K f i &#61646; F</ns0:formula><ns0:p>and store the hash value of files.</ns0:p><ns0:p>: Given a set of files , a set of keywords and the secret keys</ns0:p><ns0:formula xml:id='formula_12'>(T, T , B) ( , ) K &#61612; Setup W, F B F W</ns0:formula><ns0:p>, this algorithm builds an encrypted index , a checklist and a ciphertext database , as is</ns0:p><ns0:formula xml:id='formula_13'>K T B B T</ns0:formula><ns0:p>shown in Algorithm 1. 
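Before the per-file walk-through of Algorithm 1 that follows, a condensed sketch of the bookkeeping may help: each file is encrypted (AES in the authors' experiments), the SHA-256 hash of the ciphertext goes into the checklist B, and the later Verify algorithm folds the hashes of the returned ciphertexts into a single value that is compared with the benchmark Acc. Reading the garbled operator as XOR, choosing AES-GCM from the cryptography package, and the function names are assumptions for illustration only, not the authors' code.

# Sketch of the Setup bookkeeping and the corresponding Verify check.
# Assumed stand-ins: AES-GCM for Enc, SHA-256 for H, and an XOR fold of
# ciphertext hashes for the accumulator.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def setup_files(k3: bytes, files):
    """Encrypt each file and record H(ciphertext) in the checklist B."""
    ciphertexts, checklist = [], []
    for f in files:
        nonce = os.urandom(12)
        c = nonce + AESGCM(k3).encrypt(nonce, f, None)   # c_i = Enc(K3, f_i)
        ciphertexts.append(c)
        checklist.append(hashlib.sha256(c).digest())     # hash_i = H(c_i)
    return ciphertexts, checklist

def xor_fold(digests):
    acc = bytes(32)
    for d in digests:
        acc = bytes(a ^ b for a, b in zip(acc, d))
    return acc

def verify(returned_ciphertexts, acc):
    """Blockchain-side check: fold the hashes of the returned ciphertexts
    and compare the result with the benchmark Acc from the checklist."""
    h_w = xor_fold(hashlib.sha256(c).digest() for c in returned_ciphertexts)
    return h_w == acc

# Usage sketch:
# k3 = AESGCM.generate_key(bit_length=128)
# cts, checklist = setup_files(k3, [b"file one", b"file two"])
# acc = xor_fold(checklist)          # benchmark for a query matching both files
# assert verify(cts, acc)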
For each file , is the identifier of , the data owner encrypts by</ns0:p><ns0:formula xml:id='formula_14'>f i &#61646; F i id f i f i calculating</ns0:formula><ns0:p>, and computes the hash value using . Then data owner 3 c Enc( ,f ) ( , ( ))</ns0:p><ns0:formula xml:id='formula_15'>i i K &#61612; (c ) i i hash H &#61612;</ns0:formula><ns0:formula xml:id='formula_16'>i w i u F K H w &#61612; [ ] i i st w &#61612; &#61669; , i Q TK respectively.</ns0:formula><ns0:p>: This algorithm takes search token , index and , ( , ) (T, T , B, )</ns0:p><ns0:formula xml:id='formula_17'>i Q R Acc TK &#61612; Search B , i Q TK T B</ns0:formula><ns0:p>ciphertext database as input, and outputs search results . On receiving the search token, the T R cloud server and blockchain perform the same operations for multi-keyword search. They all parse out the position of the keyword in the token , and get the bitmap through</ns0:p><ns0:formula xml:id='formula_18'>i w l , i Q TK i w Bw ,</ns0:formula><ns0:p>. To achieve multi-keyword search, they compute ( || )</ns0:p><ns0:formula xml:id='formula_19'>i i w w i v H K l &#61612; &#61637; B B T [ ] i w v l &#61612; B B</ns0:formula><ns0:p>, the cloud server gets files in according to with regard to , and</ns0:p><ns0:formula xml:id='formula_20'>1 2 ... t &#61501; &#61657; &#61657; &#61657; B B B B T B [ ]=1 i B</ns0:formula><ns0:p>sends them to the blockchain to verify. Similarly, the blockchain gets hash values of files in according to , computes </ns0:p></ns0:div> <ns0:div><ns0:head>R Acc</ns0:head><ns0:p>input, outputs search results and , and the verify process is shown in Algorithm 3. To R proof verify the integrity of files, the data owner calculates the hash value of each file through in the Setup, and adds to the checklist , then is sent to the blockchain. (c )</ns0:p><ns0:formula xml:id='formula_21'>i i hash H &#61612; i hash B B</ns0:formula><ns0:p>Through algorithm , the blockchain gets the search result of multiple keywords, obtains the Search hash value of each file in the result from , and computes the benchmark .To verify the search B Acc results, the blockchain calculates of and compares it with .</ns0:p></ns0:div> <ns0:div><ns0:head>W H R Acc</ns0:head><ns0:p>In Algorithm 3, for all ciphertexts , blockchain computes , where</ns0:p><ns0:formula xml:id='formula_22'>i c R &#61646; ( ) i W W H H H c &#61612; &#61637;</ns0:formula><ns0:p>denotes the hash value of . Blockchain compares and , if they are equal, the</ns0:p><ns0:formula xml:id='formula_23'>( ) i H c i c W H Acc</ns0:formula><ns0:p>proof is true, otherwise false. At last, the search results and proof are sent to data user. During R the verification, is calculated through the hash value stored on the blockchain, due to the Acc unforgeability of blockchain, thus is unforgeable. In addition, the verification is completed Acc by the blockchain, so the proof is also unforgeable, which ensures the fairness of verification.</ns0:p><ns0:p>: The data owner generates an update token through this ( , ) (F, W', ) Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_24'>s b K &#61556; &#61556; &#61612; UpdateToken algorithm,</ns0:formula><ns0:note type='other'>Computer Science token(</ns0:note><ns0:p>).For files , the data owner encrypts and calculates the hash value of by ,</ns0:p><ns0:formula xml:id='formula_25'>s b &#61556; &#61556; f F k &#61646; f k</ns0:formula><ns0:p>and ,respectively. 
For keywords that 3 c Enc( ,f ) B'</ns0:p><ns0:formula xml:id='formula_26'>k k K &#61612; (c ) k k hash H &#61612;</ns0:formula></ns0:div> <ns0:div><ns0:head>Forward security</ns0:head><ns0:p>As described above, dynamic update is the foundation function of an SSE scheme, and forward security is an indispensable component of dynamic update. In Algorithm </ns0:p></ns0:div> <ns0:div><ns0:head>Security Analysis</ns0:head><ns0:p>In this section, we analysis the security of our scheme. For the scheme Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We deploy our experiments on a local machine with an Intel Core i7-8550U CPU of 1.80GHz&#12289; 8GB RAM. We use HMAC-SHA-256 for the pseudo-random functions , SHA-256 for the hash F function . We use AES as the encryption algorithm to encrypt files. We implement the H algorithms in data owner, data user and server using Python and construct the smart contract using Solidity, and the smart contract is tested in with the Ethereum blockchain using a local simulated network TestRPC.</ns0:p><ns0:p>For the dataset, we adopt a real-world dataset, Enron email dataset <ns0:ref type='bibr' target='#b29'>(William, 2015)</ns0:ref>, which contains more than 517 thousand documents. We utilize the Porter Stemmer to extract more than 1.67 million keywords and filter that meaningless keywords, such as 'of', 'the'. At last, we build an inverted index with those keywords to improve the search efficiency of the experiment.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of Setup</ns0:head><ns0:p>In setup phase, data owner encrypts the files, calculates the initial verification values of ciphertexts, generates the bitmap indexes of keywords, stores them in , and , respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>T B T B</ns0:head><ns0:p>First, we compare the setup time of our scheme with Li et.al (2021) and Guo, Zhang &amp; Jia (2020), the setup time is related to the number of files in the index and the number of keywords included in each file. Figure <ns0:ref type='figure'>3</ns0:ref> shows the setup time with different number of keywords in each file while the number of files is fixed at 3137, Fig. <ns0:ref type='figure'>4</ns0:ref> shows the setup time with different number of files when the number of keywords in each file is fixed at 20. Both figures show that the setup time is affected by the number of keywords in each file and the number of files, and the setup time increases linearly concerning the number of keywords and files. Furthermore, Fig. <ns0:ref type='figure'>3</ns0:ref> Manuscript to be reviewed Computer Science in contrast, our scheme utilizes hash functions to verify search results, which reduces the computational overhead greatly.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of Search</ns0:head><ns0:p>For the performance of our scheme, we compare the search time of our scheme with Li et.al (2021). Moreover, to better evaluate the performance of the scheme in multi-keyword search, we perform two settings in a query: 5 keywords and 10 keywords, respectively. In figures, the suffix of the icon indicates the number of keywords in a query, i. Figure <ns0:ref type='figure'>5</ns0:ref> shows the search time with different number of keywords in each file when the number of files is fixed at 3137, and Fig. <ns0:ref type='figure'>6</ns0:ref> shows the search time with different number of files when the number of keywords in each file is fixed at 20. 
Both figures show that the search time is affected by the number of keywords in each file and the number of files, and the search time increases sublinearly with the number of keywords and files.</ns0:p><ns0:p>From Fig. <ns0:ref type='figure'>5</ns0:ref> and Fig. <ns0:ref type='figure'>6</ns0:ref>, we can see that the more keywords included in a query, the more time it takes, this is because the more keywords, the search algorithm spends more time to calculate matched files. Another conclusion can be drawn that our scheme is more efficient than Li et.al (2021) in search, the reason is that the same as the setup algorithm, Li et.al (2021) takes more time to calculate the verification values.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of Verify</ns0:head><ns0:p>Here, we evaluate the performance of our scheme in verification, we verify the results of searching for 5 keywords and 10 keywords respectively, and compares the verification time with Li et.al (2021),the comparison results are shown in Fig. <ns0:ref type='figure'>7</ns0:ref> and Fig. <ns0:ref type='figure'>8</ns0:ref>. Figure <ns0:ref type='figure'>7</ns0:ref> shows the verification time with different number of keywords in each file when the number of files is fixed at 3137, and Fig. <ns0:ref type='figure'>8</ns0:ref> shows the verification time with different number of files when the number of keywords in each file is fixed at 20. From those two figures, we can see that the verification time is affected by the number of keywords in each file and the number of files, the verification time increases with the number of keyword and files.</ns0:p><ns0:p>Both figures shows that our scheme gains a higher verification efficiency than Li et.al (2021), the reason is that <ns0:ref type='bibr'>Li et.al (2021)</ns0:ref> </ns0:p><ns0:formula xml:id='formula_27'>i i i u F K r &#61501; f 3 ( , f ) i i K G K &#61501;</ns0:formula><ns0:p>stored in untrusted server and the verification is performed by the data user, both the server and the user may forge the verification results, while in our scheme, the values are stored in blockchain and the verification is performed by blockchain, cannot be tampered with, hence, our scheme is more fair and secure in verification. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>to do AND operation on the two bitmaps, i.e.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>as input , and outputs the search results and the T B B R benchmark . Acc PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67741:1:2:CHECK 8 Jan 2022) Manuscript to be reviewed Computer Science &#61623; , takes the search results , and the benchmark as inputset of files to update , the set of keywords (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>1 K: 1 (</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>in . At the end of the Setup, and stored on blockchain and cloud server, respectively. (T , B) B (T, T ) B PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67741:1:2:CHECK 8 Jan 2022) Manuscript to be reviewed Computer Science : It needs to register to the data owner when a new data user who 1 ClientAuth A wants to query files on the cloud server joins the system. The data user submits attribute to the i A data owner through this algorithm to obtain the keyword status and the key . 
&#61669; It takes the key and the set of keywords to query ,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>updated checklist .The details are shown in Algorithm 4.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>and Fig.4 illustrate that our scheme is more efficient than Li et.al (2021) and Guo, Zhang &amp; Jia (2020) under the same condition in setup time. Since Guo, Zhang &amp; Jia (2020) utilizes the linked list instead of bitmap to build the index, it requires more time than the other schemes. Our scheme takes less time than Li et.al (2021), the reason is that Li et.al (2021) adopts RSA accumulator based on public key encryption to verify multi-keyword search results, PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67741:1:2:CHECK 8 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>e., Our scheme_5 indicates the search time spent in our scheme during a query which contains 5 keywords, Our scheme_10 indicates the search time spent in our scheme during a query which contains 10 keywords, similarly, Li et.al (2021)_5 and Li et.al (2021)_10 indicates the search time spent in Li et.al (2021) during a query which contains 5 keywords and 10 keywords, respectively.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,370.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,370.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>&amp; Ohtaki, 2012; Zhu, Liu &amp; Wang, 2016; Liu et.al 2017; Zhang et.al 2019;Chen et.al 2021).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Wagner &amp; Perrig,2000; Curtmola et.al, 2006; Wang, Cao &amp; Ren, 2010), the server returns some irrelevant results, while the multi-keyword search (Cash et.al, 2013; Lai et.al, 2018; Xu et.al, 2019;Liang et.al 2020;Liang et.al 2021) gains higher search accuracy and more accurate results. To further improve search efficiency, Abdelraheem et.al (2016) proposed an SSE scheme on encrypted bitmap indexes to support multi-keyword search, but requires two rounds of interactions with the cloud server. 
Zuo et.al (2019) proposed a secure SSE scheme based on bitmap index which supports dynamic operations with forward and backward security, but this scheme lacks the verification of the results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Verifiable Searchable Symmetric Encryption</ns0:cell></ns0:row><ns0:row><ns0:cell>In SSE, it is necessary to verify the results since the server is untrusted. Qi &amp; Gong (2012) proposed</ns0:cell></ns0:row><ns0:row><ns0:cell>the concept of verifiable searchable symmetric encryption (VSSE) and constructed a VSSE</ns0:cell></ns0:row><ns0:row><ns0:cell>scheme based on word tree. Along this direction, some other VSSE schemes (Kurosawa &amp; Ohtaki,</ns0:cell></ns0:row><ns0:row><ns0:cell>2012; Zhu, Liu &amp; Wang ,2016; Liu et.al ,2017,Miao et.al 2021) are proposed. These schemes are</ns0:cell></ns0:row><ns0:row><ns0:cell>the verification of single keyword search results, Azraoui et.al (2015) combined polynomial-based</ns0:cell></ns0:row><ns0:row><ns0:cell>accumulators and Merkle trees to achieve conjunctive keyword verification. Wan &amp; Deng (2018)</ns0:cell></ns0:row><ns0:row><ns0:cell>used homomorphic MAC to verify the results of multi-keyword search. Li et.al (2021) utilized</ns0:cell></ns0:row><ns0:row><ns0:cell>bitmap index to gain high efficiency of multi-keyword search, and verified the results by RSA</ns0:cell></ns0:row><ns0:row><ns0:cell>accumulator. Ge et.al (2021) and</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>,</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='13'>Definition 1&#65306;CKA2-security, for the verifiable multi-keyword search scheme</ns0:cell></ns0:row><ns0:row><ns0:cell>&#61653;</ns0:cell><ns0:cell cols='12'>={KeyGen,Setup,ClientAuth,TokenGen,Search,Verify,Update}</ns0:cell><ns0:cell>, let</ns0:cell><ns0:cell>={ L L</ns0:cell><ns0:cell>setup</ns0:cell><ns0:cell>, L</ns0:cell><ns0:cell>search</ns0:cell><ns0:cell>,</ns0:cell><ns0:cell>L</ns0:cell><ns0:cell>update</ns0:cell><ns0:cell>}</ns0:cell><ns0:cell>be</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>the leakage function,</ns0:cell><ns0:cell cols='2'>A</ns0:cell><ns0:cell cols='2'>is the adversary and</ns0:cell><ns0:cell>S</ns0:cell><ns0:cell>is the simulator, there are two probabilistic</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>experiments:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Real ( ) &#61548;</ns0:cell><ns0:cell cols='9'>: The challenger runs</ns0:cell><ns0:cell>&#61548;</ns0:cell><ns0:cell>to generate secret key</ns0:cell><ns0:cell>1 { , , } 2 3 K K K K &#61501;</ns0:cell><ns0:cell>, the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>adversary</ns0:cell><ns0:cell>A</ns0:cell><ns0:cell /><ns0:cell cols='8'>outputs and F</ns0:cell><ns0:cell>W</ns0:cell><ns0:cell>. The challenger triggers this experiment to run</ns0:cell><ns0:cell>Setup( , K W, F</ns0:cell><ns0:cell>)</ns0:cell><ns0:cell>,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>outputs the index</ns0:cell><ns0:cell cols='6'>, and , which are sent to T B T B</ns0:cell><ns0:cell>. 
A A</ns0:cell><ns0:cell>generates a series of adaptive queries</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>1 { , ,..., } 2 t Q q q q &#61501;</ns0:cell><ns0:cell cols='6'>, for each</ns0:cell><ns0:cell>i q Q &#61646;</ns0:cell><ns0:cell>, the challenger generates search or update tokens, receives A</ns0:cell></ns0:row><ns0:row><ns0:cell cols='13'>those tokens and generates a bit as the output of this experiment. b</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>Ideal ( ) &#61548; A S ,</ns0:cell><ns0:cell cols='8'>: The adversary</ns0:cell><ns0:cell>A</ns0:cell><ns0:cell>outputs and F</ns0:cell><ns0:cell>W</ns0:cell><ns0:cell>, the simulator</ns0:cell><ns0:cell>S</ns0:cell><ns0:cell>generates the index</ns0:cell><ns0:cell>, T B T</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>and</ns0:cell><ns0:cell>B</ns0:cell><ns0:cell cols='4'>through</ns0:cell><ns0:cell>L</ns0:cell><ns0:cell cols='2'>Setup</ns0:cell><ns0:cell>,</ns0:cell><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>receives them.</ns0:cell><ns0:cell>A</ns0:cell><ns0:cell>generates a series of adaptive queries</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='6'>, for each</ns0:cell><ns0:cell>, the simulator generates search or update tokens with</ns0:cell></ns0:row></ns0:table><ns0:note>AKeyGen<ns0:ref type='bibr' target='#b0'>(1 )</ns0:ref> </ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>TABLE 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison results with existing schemes</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Schemes</ns0:cell><ns0:cell>Single-</ns0:cell><ns0:cell>Multi-</ns0:cell><ns0:cell cols='2'>Verification Blockchain-</ns0:cell><ns0:cell>Forward</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>keyword</ns0:cell><ns0:cell>keyword</ns0:cell><ns0:cell /><ns0:cell>based</ns0:cell><ns0:cell>security</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [3]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [7]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [14]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [17]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61620;</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [18]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell></ns0:row><ns0:row><ns0:cell>Ref [21]</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61620;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell></ns0:row><ns0:row><ns0:cell>Our scheme</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell><ns0:cell>&#61654;</ns0:cell></ns0:row></ns0:table><ns0:note>&#61612; 9: end if 10: sends (proof, Result) to data user</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. 
</ns0:note> </ns0:body> "
"Dear editor and reviewer, I have carefully read all the comments from the reviewers. First of all, I would like to thanks the reviewers and editor for having a thorough look into my paper. All the comments suggested by reviewers are very much technical and related to the area of research proposed in the paper. I have carefully implemented all the comments by the reviewers in the paper which has made the foundation of this paper much stronger. The justifications of the comments of all the reviewers are given below. Editor comments (Huiyu Zhou) MAJOR REVISIONS The presentation and experimental section in the paper must be improved. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter. Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] To editor comments (Huiyu Zhou) Thank you very much for your comments. We have improved the presentation and experimental section of the paper, added more detailed content and additional comparative experiments. Besides, we have thoroughly revised the paper again with the help of some professional English editors. We hope that this revised manuscript will be east to read and understand. We are submitting three separate documents, (1) Rebuttal letter, (2) Revised manuscript with tracked changes on, (3) Revised manuscript for clarifications. we have used the instructions listed in [https://peerj.com/benefits/academic-rebuttal-letters/] to complete the response letter. Reviewer 1 (Anonymous) Basic reporting The authors please make any changes to improve the readability of the paper. The reference is insufficient. The authors need to add popular related articles to support the proposed arguments. The authors need to add more detailed contents to enrich the proofing process. Experimental design No comment Validity of the findings The paper proposed a conceptual approach. If possible, the authors please verify the applicability of the proposed algorithm. To reviewer 1: Basic reporting The authors please make any changes to improve the readability of the paper. Thank you very much for your careful reading and professional advice, we have noticed the readability of the paper, and we have thoroughly revised the paper again with the help of some professional English editors. We have revised the Abstract, Introduction, Related works, Construction, Performance Evaluation, Conclusions and other chapters, we hope that this revised manuscript will be east to read and understand. The reference is insufficient. The authors need to add popular related articles to support the proposed arguments. The reviewer's opinion is correct. We have read and added 8 related articles (Ref [30] ~ Ref [37]) in 2020 and 2021 to support our arguments, and added corresponding citations in this paper. [30] Maryam H, Parvaneh A, Hamid HSJ. 2021. Dynamic Secure Multi-keyword Ranked Search over Encrypted Cloud Data. Journal of Information Security and Applications. 61:1-12 DOI: 10.1016/j.jisa.2021.102902 [31] Miao Y, Deng RH, Choo KKR, Liu X, Ning J, Li H. 2021. 
Optimized Verifiable Fine-grained Keyword Search in Dynamic Multi-Owner Settings. IEEE Transactions on Dependable and Secure Computing. 18(4):1804-1820 DOI: 10.1109/TDSC.2019.2940573. [32] Chen CM, Tie Z, Wang EK, Khan MK, Kumar S, Kumari S. 2021. Verifiable Dynamic Ranked Search with Forward Privacy over Encrypted Cloud Data. Peer-to-Peer Networking and Applications, 14:2977–2991 DOI: /10.1007/s12083-021-01132-3. [33] Ge XR, Yu J, Chen F, Kong F, Wang H. 2021. Towards Verifiable Phrase Search over Encrypted Cloud-based IoT Data. IEEE Internet of Things Journal.8(16):12902 – 12918 DOI: 10.1109/JIOT.2021.3063855. [34] Liu X, Yang X, Luo Y, Zhang Q. 2021. Verifiable Multi-keyword Search Encryption Scheme with Anonymous Key Generation for Medical Internet of Things. IEEE Internet of Things Journal. DOI: 10.1109/JIOT.2021.3056116. [35] Liang YR, Li YP, Cao Q, Ren F. VPAMS: Verifiable and Practical Attribute-based Multi-keyword Search Over Encrypted Cloud Data. Journal of Systems Architecture. DOI: 10.1016/j.sysarc.2020.101741. [36] Liang Y, Li Y, Zhang K, Ma L. 2021. DMSE: Dynamic Multi-keyword Search Encryption Based on Inverted Index. Journal of Systems Architecture. DOI: 10.1016/j.sysarc.2021.102255. [37] Li H, Yang Y, Dai Y, Yu S, Xiang Y. 2020. Achieving Secure and Efficient Dynamic Searchable Symmetric Encryption over Medical Cloud Data. IEEE Transactions on Cloud Computing. 8(2):484-494 DOI: 10.1109/TCC.2017.2769645. The authors need to add more detailed contents to enrich the proofing process. The reviewer's opinion is correct. We have thoroughly thought about the proofing process and made detailed modifications, and we add detailed contents in line 275-287 to enrich the proofing process. Validity of the findings The paper proposed a conceptual approach. If possible, the authors please verify the applicability of the proposed algorithm Thank you very much for your professional advice. Searchable encryption is widely used in public cloud storage, cloud-based IoT, medical cloud data and other fields, Ref. [14], Ref. [16], Ref. [33], Ref. [37] in the references in this paper are all applications of searchable encryption technology. In our paper, we discussed the applicability of the proposed algorithm in Conclusions as follows: Our scheme can be widely used in cloud storage systems such as data outsourcing, cloud-based IoT (Ge et.al, 2021), medical cloud data (Li et.al, 2020), etc., helping to achieve efficient multi-keyword searches, and ensuring the integrity and credibility of search results. We would like to thank the reviewer to point out some potential mistakes in the paper at very early stage. We have revised the manuscript accordingly, and hope that this time the reviewer will find this manuscript suitable for possible acceptance in the journal. Reviewer 2 (Anonymous) Basic reporting no comment Experimental design no comment Validity of the findings no comment Comments for the author Additional Comments: 1. In page 2, 'To address this, some researchers introduce blockchain into SSE...', “and there are few studies...”, add some relevant references about these works. It should be corrected. 2. You should pay more attention to English description in the manuscript. There are lots of long sentences in the manuscript, such as “In this paper, we introduce a verifiable...” in page 2, “Multi-keyword search is a crucial means to improve search efficiency...” in page 3, etc. 3. In the references, the capitalization of the title is not uniform, such as [7] and [8], etc. 4. 
In page 6 under Security Definitions part, it would be “|·|” in place of “|·”, in place of adversary A and in place of in Construction part, etc. In page 7 under Proposed Construction part, check the symbol and correct the other problems about the symbols and formulas such as some symbols should be italicized 5. The experiment is not enough, add the comparison with Ref. [21]. Besides, please explain the meaning of “xxx_5” and “xxx_10” in Fig. 5. 6. The output of Algorithm 4 includes T', TB' and B', but they are not reflected in the output of Algorithm 4. Please correct it. To reviewer2: 1. In page 2, 'To address this, some researchers introduce blockchain into SSE...', “and there are few studies...”, add some relevant references about these works. It should be corrected The reviewer's opinion is correct. We have added some relevant references as follows: To address this, some researchers (Hu et.al 2018; Li et.al 2019; Guo, Zhang & Jia , 2020) introduce blockchain into SSE to verify search results, which guarantees the fairness and reliability of the verification. We reread the relevant documents, to our knowledge, our scheme is the first scheme to realize fair verification for multi-keywords, so we re-written the second sentence as follows: Although blockchain achieves fair verification of search results, but the existing schemes are only for a single keyword, and there is little research on fair verification for multi-keywords. 2. You should pay more attention to English description in the manuscript. There are lots of long sentences in the manuscript, such as “In this paper, we introduce a verifiable...” in page 2, “Multi-keyword search is a crucial means to improve search efficiency...” in page 3, etc. The reviewer's opinion is correct. We have thoroughly revised the paper again with the help of some professional English editors, and we re-written the “In this paper, we introduce a verifiable...” in page 2 as follows: In this paper, we introduce a verifiable multi-keyword SSE scheme based on blockchain, which can perform efficient multi-keyword search, ensures the fairness of verification, and supports the dynamic update of files. And we re-written the “Multi-keyword search is a crucial means to improve search efficiency…” in page 3 as follows: Multi-keyword search is a crucial means to improve search efficiency. In single-keyword search scheme (Song, Wagner & Perrig,2000; Curtmola et.al, 2006; Wang, Cao & Ren, 2010), the server returns some irrelevant results, while the multi-keyword search (Cash et.al, 2013; Lai et.al, 2018; Xu et.al, 2019; Liang et.al 2020; Liang et.al 2021) gains higher search accuracy and more accurate results. 3. In the references, the capitalization of the title is not uniform, such as [7] and [8], etc. The reviewer's opinion is correct. We unified the capitalization of the title in the references, we revised the references [1], [2], [3], [4], [6], [7], [13], [15], [20] ~ [28]. 4. In page 6 under Security Definitions part, it would be “|·|” in place of “|·”, in place of adversary A and in place of in Construction part, etc. In page 7 under Proposed Construction part, check the symbol and correct the other problems about the symbols and formulas such as some symbols should be italicized. Thank you for your careful reading and professional advice. We re-checked the symbols in the paper to make sure they are correct. 
The type of manuscript we submitted is docx, and the symbols in it are correct, but some symbols have been changed in the Review PDF generated by the system, we submitted a new manuscript and contacted the staff to solve this problem. 5. The experiment is not enough, add the comparison with Ref. [21]. Besides, please explain the meaning of “xxx_5” and “xxx_10” in Fig. 5. The reviewer's opinion is correct. In Performance evaluation, we added the comparison with Ref. [21], the comparison results are shown in Fig.3, Fig.4, Fig.9 and Fig.10. Since our scheme and Ref. [17] focus on multi-keyword search, while Ref. [21] focuses on single-keyword search, we added a comparison with Ref. [21] in the Setup and Update, and we compare with Ref. [17] in Search and Verify algorithms. We explained the meaning of “xxx_5” and “xxx_10” in Fig. 5 and Fig.6 as follows: Our scheme_5 indicates the search time spent in our scheme during a query which contains 5 keywords, Our scheme_10 indicates the search time spent in our scheme during a query which contains 10 keywords, similarly, Li et.al (2021)_5 and Li et.al (2021)_10 indicates the search time spent in Li et.al (2021) during a query which contains 5 keywords and 10 keywords, respectively. We also added the explanation of “xxx_5” and “xxx_10” in Fig. 7, Fig.8, Fig. 9, Fig.10. 6. The output of Algorithm 4 includes T', TB' and B', but they are not reflected in the output of Algorithm 4. Please correct it. The reviewer's opinion is correct. We have corrected the output of Algorithm 4 about . We are very grateful to the reviewer for your careful reading and constructive comments, which greatly helped us improve the quality and readability of the paper. Thank you very much for your opinions. All the authors thank you very much. Reviewer 3 (Anonymous) Basic reporting In this article, the author proposes a blockchain-based multi-keyword verifiable symmetric searchable encryption scheme. Compared with previous work, it improves the search efficiency, ensures the fairness of the search, and can perform multi-key Word search result verification. Finally, the author uses experiments to prove that this method is safe and effective in practical applications. Here are some comments for the authors to improve the quality of this manuscript. Experimental design The experimental design should be more detailed, and the pictures will be more convincing Validity of the findings This is the same problem, adding your pictures will make your findings more convincing. Additional comments 1. I don’t know if it’s due to the system or other reasons. I didn’t see your picture in the article. 2. In the introduction, I think you can write more compactly. For example, line 45, the shortcomings of SSE are introduced before, and dynamic updates are also very important afterwards. 3. It is recommended to use subtitles to make your article clearer. 4. Please use correct template of this journal. To reviewer3: Basic reporting In this article, the author proposes a blockchain-based multi-keyword verifiable symmetric searchable encryption scheme. Compared with previous work, it improves the search efficiency, ensures the fairness of the search, and can perform multi-key Word search result verification. Finally, the author uses experiments to prove that this method is safe and effective in practical applications. Here are some comments for the authors to improve the quality of this manuscript Special thanks to you for your good comments. 
Experimental design The experimental design should be more detailed, and the pictures will be more convincing The reviewer's opinion is correct, we have made correction according to the reviewer’s comments. We added a comparison with Ref. [21] in Fig.3, Fig.4, Fig.9 and Fig.10, and we explained the meaning of “xxx_5” and “xxx_10” in Fig. 5, Fig.6, Fig.7, Fig.8, Fig.9 and Fig.10. We have enriched other detailed contents in the experiment. The pictures are shown in the page 21, page 22, page 23, page 24, page 25, page 26, page 27, page 28. Validity of the findings This is the same problem, adding your pictures will make your findings more convincing. The reviewer's opinion is correct, the pictures are shown in the page 21, page 22, page 23, page 24, page 25, page 26, page 27, page 28. Additional comments 1. I don’t know if it’s due to the system or other reasons. I didn’t see your picture in the article We appreciate the reviewers' attention to the flaws of our text. Probably due to system reasons, text-only manuscripts and pictures are required to be submitted separately. In the reviewing PDF of our paper, pictures are shown in the page 21, page 22, page 23, page 24, page 25, page 26, page 27, page 28. 2. In the introduction, I think you can write more compactly. For example, line 45, the shortcomings of SSE are introduced before, and dynamic updates are also very important afterwards. We thank the reviewer for this insightful comment. We have re-written the introduction and reviewed the rest of the paper. We corrected line 45 as follows: In addition to efficient search, dynamic updates are also very important in SSE. (Zhang, Katz & Papamanthou, 2016) has shown that adversaries can infer the critical information through the file injection attacks during the dynamic update of the SSE, while the forward-secure SSE can avoid this. 3. It is recommended to use subtitles to make your article clearer. Thank you very much for the reviewer's suggestion. We have added three subtitles “Searchable Symmetric Encryption”, “Verifiable Searchable Symmetric Encryption” and “Verifiable Searchable Symmetric Encryption Based on Blockchain” to the Related Works, according to your suggestion. In this paper, the font of the title of each section is Arial, 14pt, bold, and the subtitle is Arial, 12pt, bold. After correction, every section in the paper, such as: Related Works, Preliminaries, Method, Construction, Security Analysis, Performance Evaluation, has a subtitle. 4. Please use correct template of this journal. The reviewer's opinion is correct. Our manuscript uses template “PeerJ-research-manuscipt-template.docx” provided by the journal, which can be found at https://peerj.com/about/author-instructions/cs. Thank you very much for your profound and meticulous work. Those comments are all valuable and very helpful for revising and improving our paper. We have studied comments carefully and have made correction which hope meet with approval. Once again, thank you very much for your comments and suggestions. "
Here is a paper. Please give your review comments after reading it.
390
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Feature selection is an independent technology for high-dimensional dataset that has been widely applied in a variety of fields. With the vast expansion of information, such as bioinformatics data, there has been an urgent need to investigate more effective and accurate methods involving feature selection in recent decades. Here, we proposed the hybrid MMPSO method, by combining the feature ranking method and the heuristic search method, to obtain an optimal subset that can be used for higher classification accuracy. In this study, ten datasets obtained from the UCI Machine Learning Repository were analyzed to demonstrate the superiority of our method. The MMPSO algorithm outperformed other algorithms in terms of classification accuracy while utilizing the same number of features.</ns0:p><ns0:p>Then we applied the method to a biological dataset containing gene expression information about liver hepatocellular carcinoma (LIHC) samples obtained from The Cancer Genome Atlas (TCGA) and Genotype-Tissue Expression (GTEx). On the basis of the MMPSO algorithm, we identified a 18-gene signature that performed well in distinguishing normal samples from tumours. Nine of the 18 differentially expressed genes were significantly upregulated in LIHC tumour samples, and the area under curves (AUC) of the combination seven genes (ADRA2B, ERAP2, NPC1L1, PLVAP, POMC, PYROXD2, TRIM29) in classifying tumours with normal samples was greater than 0.99. Six genes (ADRA2B, PYROXD2, CACHD1, FKBP1B, PRKD1 and RPL7AP6) were significantly correlated with survival time.</ns0:p><ns0:p>The MMPSO algorithm can be used to effectively extract features from a high-dimensional dataset, which will provide new clues for identifying biomarkers or therapeutic targets from biological data and more perspectives in tumor research.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The dimensionality of data has increased greatly due to the rapid growth in big data <ns0:ref type='bibr' target='#b24'>(Li et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b45'>Wainwright 2019)</ns0:ref>. This condition has also accelerated the development of high dimensional data processing technology <ns0:ref type='bibr' target='#b23'>(Li et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b36'>Saeys et al. 2007</ns0:ref>). One of the main issues in data mining, pattern recognition, and machine learning is feature selection for high dimensional data <ns0:ref type='bibr' target='#b5'>(Chen et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b19'>Larranaga et al. 2006</ns0:ref>). Feature selection is the process of selecting the feature subset that best captures the characteristics of the original dataset and alters the feature expression of the original dataset as little as possible. It can be utilized as an important dimensionality reduction technique to minimize computing complexity, lower the potential of overfitting as well as improve the prediction performance <ns0:ref type='bibr' target='#b42'>(Tao et al. 2015)</ns0:ref>. Feature selection seldom modifies the original feature space, and the resultant feature subset has clearer physical implications that can be exploited for subsequent classification or inference <ns0:ref type='bibr' target='#b44'>(Villa et al. 2021)</ns0:ref>. 
The search for the optimal subset of features is typically computationally expensive and has been demonstrated to be nondeterministic polynomial-hard (NP-hard) <ns0:ref type='bibr' target='#b11'>(Faris et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b48'>Wang et al. 2016)</ns0:ref>. Traditionally, feature selection algorithms are classified into three categories: filter, wrapper, and embedded methods and these methods can also be divided into two main categories: feature ranking and feature subset selection <ns0:ref type='bibr' target='#b43'>(Van Hulse et al. 2011</ns0:ref>). In the past few years, feature selection based on high-dimensional datasets has attracted more attentions. Because of their simplicity and efficiency, ranking-based approaches such as ReliefF <ns0:ref type='bibr' target='#b34'>(Robnik-&#352;ikonja &amp; Kononenko 2003)</ns0:ref>, minimum-redundancy maximum-relevancy (mRMR) <ns0:ref type='bibr' target='#b31'>(Peng et al. 2005)</ns0:ref>, Fisher <ns0:ref type='bibr' target='#b13'>(Gu et al. 2012)</ns0:ref>, CFS <ns0:ref type='bibr' target='#b16'>(Hong et al. 2011)</ns0:ref>, and others are widely utilized in a variety of applications. Different from the feature ranking selection, which screens out the top K highestscoring features, feature subset selection selects the subset of features that perform well together. Some heuristic search strategies(Rasheed &amp; Education 2021) have been proposed to obtain the global optimal feature subset, such as the genetic algorithm (GA) <ns0:ref type='bibr' target='#b15'>(Holland 1975;</ns0:ref><ns0:ref type='bibr' target='#b38'>Stefano et al. 2017)</ns0:ref>, particle swarm optimization (PSO) <ns0:ref type='bibr' target='#b7'>(Chuang et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b10'>Eberhart &amp; Kennedy 2002;</ns0:ref><ns0:ref type='bibr' target='#b51'>Xiangyang et al. 2007)</ns0:ref>, and ant colony optimization (ACC) <ns0:ref type='bibr' target='#b20'>(Li et al. 2008a)</ns0:ref>. It is worth mentioning that, some methods based on neural networks which supports higher-dimensional inputs can also be used for feature selection <ns0:ref type='bibr' target='#b25'>(Liu et al. 2022)</ns0:ref>. Feature selection has been widely utilized in bioinformatics to remove irrelevant features in high-throughput data as an effective method for preventing the 'curse of dimensionality' <ns0:ref type='bibr' target='#b22'>(Li et al. 2008b)</ns0:ref>. It is appropriate to filter out biomarkers in the medical field, which can not only help explore disease pathophysiology at the molecular level but also has advantages in accurate diagnosis. In general, the number of features in a bioinformatics dataset tends to be very large. It is critical to identify highly discriminating biomarkers to improve disease diagnosis and prediction accuracy <ns0:ref type='bibr' target='#b27'>(Ma et al. 2020)</ns0:ref>. Therefore, there is no doubt that obtaining relevant biomarkers from high-throughput data is of great significance <ns0:ref type='bibr' target='#b54'>(Yuanyuan et al. 2021)</ns0:ref>. Furthermore, we realized that there is considerable space for improvement in the feature selection process by combining feature ranking with feature subset searching. 
There are several methods for measuring the specific value of relevance, including the Pearson correlation coefficient <ns0:ref type='bibr' target='#b30'>(Obayashi &amp; Kinoshita 2009)</ns0:ref>, mutual information (MI), and maximum information coefficient (MIC) <ns0:ref type='bibr' target='#b33'>(Reshef et al. 2011)</ns0:ref>, and MIC can substitute MI to obtain better mutual information measurement results in some situations, particularly for continuous data. Furthermore, feature ranking does not provide a 'golden standard' for obtaining the best feature subset but only the ranking result. Therefore, combining the two methods is a more promising way <ns0:ref type='bibr' target='#b37'>(Shreem et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b38'>Stefano et al. 2017)</ns0:ref>.</ns0:p><ns0:p>In this study, we focus on developing a hybrid efficient approach for obtaining the optimum features by combining the feature ranking method and the heuristic search method. Specifically, ten datasets were employed to validate our hypothesis first. Furthermore, one dataset derived from high-throughput sequencing was used to assess the effect of the approach at the genetic level. The discussion and conclusion are presented in the last section.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>The mRMR algorithm</ns0:head><ns0:p>The mRMR <ns0:ref type='bibr' target='#b31'>(Peng et al. 2005)</ns0:ref> algorithm, which uses mutual information to assess the relevance between features, has been used in bioinformatics <ns0:ref type='bibr'>(Ding &amp; Peng 2005;</ns0:ref><ns0:ref type='bibr' target='#b21'>Li et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b28'>Mundra &amp; Rajapakse 2010)</ns0:ref>. Mutual information is widely used to analyze the correlation between two variables, and it can be expressed as Equation <ns0:ref type='formula'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>&#119868;(&#119883;,&#119884;) = &#8721; &#119909; &#8712; &#119883; &#8721; &#119910; &#8712; &#119884; &#119901;(&#119909;,&#119910;)&#119897;&#119900;&#119892; &#119901;(&#119909;,&#119910;) &#119901;(&#119909;)&#119901;(&#119910;) #(1)</ns0:formula><ns0:p>In the Equation 1, P represents the probability and X, Y represent the feature vector or class vector. The relevance V and redundancy W of the mRMR can be expressed using Equation 2 and Equation 3.</ns0:p><ns0:formula xml:id='formula_1'>&#119881; = 1 |&#119878;| &#8721; &#119909; &#119894; &#8712; &#119878; &#119868;(&#119910;;&#119909; &#119894; )#(2) &#119882; = 1 |&#119878;| 2 &#8721; &#119909; &#119894; ,&#119909; &#119895; &#8712; &#119878; &#119868;(&#119909; &#119894; ;&#119909; &#119895; )#(3)</ns0:formula><ns0:p>In the Equation 2 and 3, y is the target variable, S is candidate feature set and x i , x j are arbitrary variables of S. To calculate the final score of relevance, the MIQ can be used, as shown in Equation <ns0:ref type='formula'>4</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>&#119872;&#119868;&#119876;:&#119886;&#119903;&#119892;&#119898;&#119886;&#119909; ( &#119881; &#119882; ) #(4)</ns0:formula></ns0:div> <ns0:div><ns0:head>Maximal Information Coefficient</ns0:head><ns0:p>As a measure of dependence for two-variable relationships in a large dataset, MIC has been widely used in various fields, including global health, gene expression, human gut microbiota and identify novel relationships due to its ability to capture a wide range of functional and nonfunctional associations. 
The definition of MIC is:</ns0:p><ns0:p>&#119872;&#119868;&#119862;(&#119909;,&#119910;&#9474;&#119863;) = &#119898;&#119886;&#119909; &#119894; * &#119895; &lt; &#119861;(&#119899;) { &#119868; * (&#119909;,&#119910;,&#119863;,&#119894;,&#119895;) log min (&#119894;,&#119895;) } #(5)</ns0:p><ns0:p>In the Equation <ns0:ref type='formula'>5</ns0:ref>, x and y are the pairs(x, y) of the dataset and I*(x, y, D, i, j) denotes the maximum mutual information of D| G with the i-by-j grid, where the default B(n) = n 0.6 .</ns0:p></ns0:div> <ns0:div><ns0:head>Particle swarm optimization algorithm</ns0:head><ns0:p>The particle swarm optimization (PSO) algorithm is a heuristic search algorithm that originated from studies of bird predation behavior <ns0:ref type='bibr' target='#b10'>(Eberhart &amp; Kennedy 2002)</ns0:ref>. The first step of PSO is to initialize a group of particles. It then iterates until it finds the best solution. The particles update themselves in each iteration by tracking two extreme values. The first is the particle's determination of the individual extreme value Pbest. The other is called Gbest, which is determined by the entire particle swarm. After determining the Gbest and Pbest values, the particle updates its speed and position based on Equation 6 and 7. <ns0:ref type='formula'>6</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_3'>&#119881; &#119896; + 1 = &#120596;&#119881; &#119896; + &#119888; 1 &#119903; 1 (&#119875;&#119887;&#119890;&#119904;&#119905; -&#119883; &#119896; ) + &#119888; 2 &#119903; 2 (&#119866;&#119887;&#119890;&#119904;&#119905; -&#119883; &#119896; )#(</ns0:formula><ns0:formula xml:id='formula_4'>&#119883; &#119896; + 1 = &#119883; &#119896; + &#119881; &#119896; #(7)</ns0:formula><ns0:p>In the above equations, k represents the number of iterations; V k and X k represent the particle's current velocity and position, respectively; r 1 , r 2 are random values between [0, 1]; c 1 , c 2 are the learning factors; &#969; is the inertia weight, which is used to control the influence of the last iteration's speed on the current speed. A smaller and larger &#969; can strengthen the PSO algorithm's local or global search ability, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head>The hybrid algorithm for feature selection</ns0:head><ns0:p>In this study, we proposed the MMPSO hybrid method, as shown in Algorithm 1. First, the dataset needed to be preprocessed. On the one hand, the aim of preprocessing is to remove some features that contain a large quantity of noisy data, such as features that contain many zeros. On the other hand, if the proportion of samples is clearly unbalanced, it is necessary to balance the samples. Here, we employed random oversampling technology to address this issue. A random over-sampler randomly copies and repeats the minority class samples, eventually resulting in the minority and majority classes having the same number. The next step was to rank the features. MIC was used to measure the correlation of two features, resulting in a more accurate ranking result of features based on the mRMR framework <ns0:ref type='bibr' target='#b2'>(Cao et al. 2021)</ns0:ref>. Considering the numerous features and the complexity of MIC, we used the multithreading method in paper <ns0:ref type='bibr' target='#b40'>(Tang et al. 2014)</ns0:ref> to speed up the calculation. After performing the mRMR based on MIC method, we obtained the ranking features and use the top K features as the input of the next step to reduce the computational load for the PSO. 
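As an illustration of the particle updates that drive this search, the sketch below implements the velocity and position rules of Equations 6 and 7. The swarm size, dimensionality, and personal/global bests are placeholders, and the fitness evaluation that actually guides the feature search is described in the next step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_dims = 30, 10          # placeholder swarm size and dimensionality
w, c1, c2 = 0.9, 2.0, 2.0             # inertia weight and learning factors

X = rng.uniform(0.0, 1.0, size=(n_particles, n_dims))   # particle positions
V = np.zeros_like(X)                                     # particle velocities
pbest = X.copy()                                         # personal best positions
gbest = X[0].copy()                                      # global best position (placeholder)

def pso_step(X, V, pbest, gbest):
    r1 = rng.uniform(size=X.shape)                       # r1, r2 drawn from [0, 1]
    r2 = rng.uniform(size=X.shape)
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. 6
    X_new = X + V_new                                                # Eq. 7
    return X_new, V_new

X, V = pso_step(X, V, pbest, gbest)
```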
The K features were used in the third step to initialize the particle swam and calculate the fitness of each particle. For the wrapper feature selection algorithm, only the classification accuracy is used as the fitness function to guide the feature selection process, which will lead to a larger scale of the selected feature subset <ns0:ref type='bibr' target='#b26'>(Liu et al. 2011)</ns0:ref>. Therefore, some studies combine the classification accuracy and the number of selected feature subsets to form a fitness function <ns0:ref type='bibr'>(Xue et al. 2012)</ns0:ref>. Here, the fitness we defined is shown in Equation 8. V error was the error, which is measured by a classification method of k-nearest neighbor (KNN) <ns0:ref type='bibr' target='#b3'>(Chen et al. 2021a)</ns0:ref>. N selected and N all were the numbers of selected features and the entire features, respectively. &#945; and &#946; were parameters whose sum is 1. The larger &#945; is, the more features will be chosen; otherwise, fewer features will be chosen. When the specified number of iterations is reached, the PSO program terminates, and the final selected features will be available. </ns0:p></ns0:div> <ns0:div><ns0:head>Summary of datasets</ns0:head><ns0:p>A total of eleven datasets were used in this study; the basic information about the datasets was shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms <ns0:ref type='bibr' target='#b9'>(Dua &amp; Graff 2017</ns0:ref>). In the beginning, ten datasets that have different numbers of features, instances and classes were downloaded from the UCI website, and they were used to evaluate the performance of our proposed method. Furthermore, we conducted a more thorough analysis to demonstrate the biological application of our method.</ns0:p><ns0:p>Tremendous amount of RNA expression data has been produced by large consortium projects such as TCGA and GTEx, creating new opportunities for data mining and deeper understanding of gene functions <ns0:ref type='bibr' target='#b41'>(Tang et al. 2017</ns0:ref>). Thus, the final liver hepatocellular carcinoma (LIHC) dataset was obtained from UCSC Xean, which contains large-scale standardized public, multiomic and clinical/phenotype information <ns0:ref type='bibr' target='#b12'>(Goldman et al. 2020)</ns0:ref>. The LIHC dataset used in the current study contains RNA expression data of over 60,000 genes in 531 biosamples (371 tumor samples and 160 normal samples, and the latter further containing 50 normal samples from the TCGA-LIHC cohort and 110 normal tissues from GTEx), and it is available at https://toil-xena-hub.s3.us-east-1.amazonaws.com/download/TCGA-GTEx-TARGET-gene-expcounts.deseq2-normalized.log2.gz.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>In this section, we focused on testing the accuracy of our proposed MMPSO method and compared it with other methods of mRMR, ILFS, ReliefF, Mutinffs, FSV, Fisher, CFS and UFSOL. All experiments in this study were carried out on a Windows 10 system with an Intel(R) Xeon(R) CPU E5-2420, 1.9 Ghz processor with 16 GB RAM. 
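For reference, the fitness evaluation described around Equation 8 can be sketched as follows. Because the equation itself is not reproduced above, the weighted form below (KNN error plus the selected-feature ratio, with alpha + beta = 1) is an assumption consistent with the surrounding description rather than the authors' exact formula, and the cross-validation setup is illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.95, beta=0.05):
    """Assumed form of Eq. 8: alpha * V_error + beta * (N_selected / N_all).

    `mask` is a boolean vector marking the features encoded by one particle;
    the error term is measured with a KNN classifier, as in the paper.
    """
    if not mask.any():
        return 1.0                                     # penalize empty feature subsets
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()  # illustrative 5-fold CV
    v_error = 1.0 - acc
    return alpha * v_error + beta * (mask.sum() / mask.size)
```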
Our proposed algorithm was implemented in MATLAB 2020b, and the PSO parameters were as follows: population size: 100; number of iterations: 50; c 1 : 2; c 2 : 2; &#969; : 0.9; &#945; : 0.95; &#946; : 0.05.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of the experiment based on ten datasets</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> and Table <ns0:ref type='table'>2</ns0:ref> summarized the classification accuracy on the basis of the MMPSO and the compared methods. Here, we defined the threshold K was 100. When the number of original features was greater than 100, the top 100 features from the ranking result were selected as the PSO input for the MMPSO method; otherwise, all features were selected as the input. In Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, we obtained the conclusion that our method was superior to the other methods in terms of classification accuracy based on six datasets including gisette, hillvalley, isolet, madelon, scene and usps. And in the other four datasets, the MMPSO method achieved the accuracies of top Manuscript to be reviewed</ns0:p><ns0:p>Computer Science three rankings. Therefore, the MMPSO algorithm was outperformed other methods with respect to accuracy of classification by utilizing the same number of features.</ns0:p></ns0:div> <ns0:div><ns0:head>Results of the experiment based on the biological dataset</ns0:head><ns0:p>After analyzing the first ten datasets, we employed the MMPSO method on the LIHC dataset to identify features (gene biomarkers) that can be used to distinguish the tumor group from the normal group with high accuracy. Different from the previous datasets, LIHC is an unbalanced dataset containing 371 tumor samples and 160 normal samples. Therefore, the preprocessing was needed and the number of genes was reduced from over 60,000 to 15,185 after preprocessing. When the genes were ranked by the mRMR based on MIC method, the top 100 genes were selected and input into PSO to identify the best gene signatures. Finally, we obtained a signature of 18 genes through the MMPSO algorithm, that had better classification compared to other methods. The 18 genes were ACTN1, CACHD1, ERAP2, FAM171A1, FKBP1B, HIST1H2BC, PLVAP, PRKD1, RPL7AP6, ADRA2B, DMKN, FNDC4, NPC1L1, POMC, PYROXD2, RBP1, TRIM29, and ZBED9, with the relative expression of the first nine genes significantly increasing in tumors and the last nine genes decreasing (P &lt; 0.01 in the Wilcoxon rank sum test with continuity correction, Figure <ns0:ref type='figure'>2</ns0:ref>). Principal component analysis (PCA) was then performed using the 'FactoMineR' and 'Factoextra' packages in R version 4.0.2 based on the expression profiles of the candidate 18 genes. As shown in Figure <ns0:ref type='figure'>3A-B</ns0:ref>, Dim 1 and Dim 2 were 15% and 11.9%, respectively. Figure <ns0:ref type='figure'>3C</ns0:ref> illustrated the heatmap of all samples based on the 18 gene expression profiling. The results revealed that the 18-gene signature obtained from MMPSO algorithm could effectively separate the 531 samples into two groups. Since logistic regression show that seven biomarkers of ADRA2B, ERAP2, NPC1L1, PLVAP, POMC, PYROXD2 and TRIM29 were significantly associated with Wald and P value, as shown in Table <ns0:ref type='table'>3</ns0:ref>. 
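A minimal sketch of this kind of logistic-regression screening with Wald statistics is shown below; the expression matrix, labels, and significance threshold are placeholders, and the actual constants and coefficients are those reported in Table 3.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: a samples x genes expression table for the 18-gene signature
# and a binary label (1 = tumor, 0 = normal) for the 531 LIHC samples.
rng = np.random.default_rng(1)
expr = pd.DataFrame(rng.random((531, 18)),
                    columns=[f"gene_{i + 1}" for i in range(18)])
label = rng.integers(0, 2, size=531)

X = sm.add_constant(expr)                 # adds the intercept (constant) term
result = sm.Logit(label, X).fit(disp=0)   # maximum-likelihood logistic regression
wald_z = result.tvalues                   # Wald z statistic for each coefficient
p_values = result.pvalues                 # corresponding P values
significant = p_values[p_values < 0.05].index.tolist()
```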
We further investigated the combined diagnostic efficacy of the seven candidate genes according to the Equation <ns0:ref type='formula'>9</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_5'>&#119875;&#119875; = 1 1 + &#119890; -(&#119888;&#119900;&#119899;&#119904;&#119905;&#119886;&#119899;&#119905; + &#119899; &#8721; 1 &#119888;&#119900;&#119890;&#119891;&#119891;&#119894;&#119888;&#119894;&#119890;&#119899;&#119905; &#119894; * &#119890;&#119909;&#119901;&#119903;&#119890;&#119904;&#119904;&#119894;&#119900;&#119899; &#119894; ) #(9)</ns0:formula><ns0:p>In the above Equation, PP was the functional formula for predicting the incidence of LIHC, i.e. PP-value, and the constant and coefficient were the result of logistic regression in Table <ns0:ref type='table'>3</ns0:ref>. The results of receiver operating characteristic (ROC) curve analysis using MedCalc software based on the value of PP and classification labels of LIHC dataset was shown in the Figure <ns0:ref type='figure'>4</ns0:ref>, which had an area under the curve (AUC) greater than 0.999 and P &lt; 0.0001. It demonstrated that the seven genes significantly distinguished tumors from normal samples in LIHC dataset.</ns0:p><ns0:p>To explore whether the 18 genes are associated with survival time of phenotype information in LIHC dataset, the Kaplan-Meier (KM) survival curve was performed by using the 'survival' and 'survminer' packages in R. For each gene, the cut-off points obtaining from 'survminer' package then divided gene expression values into the high (high) and the low (low) groups. We identified that higher expression levels of CACHD1, FKBP1B, PRKD1, and RPL7AP6 were associated with worse overall survival (OS) time, whereas higher expression levels of ADRA2B, PYROXD2 were associated with better OS, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>High-dimensional data such as text data, multimedia data, aerospace collection data and biometric data have become more common in recent years <ns0:ref type='bibr' target='#b24'>(Li et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b36'>Saeys et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b45'>Wainwright 2019</ns0:ref>). The need for efficient processing technology for high-dimensional data has become more urgent and challenging. Feature selection, as one of the most popular methods for dimension reduction, plays an important role in high-dimensional data processing, particularly in biological information data <ns0:ref type='bibr' target='#b29'>(Nguyen et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Xue et al. 2015)</ns0:ref>.</ns0:p><ns0:p>Generally, features filtered out of the original high-dimensional dataset have more definite physical meanings, making it more convenient for researchers to carry out subsequent work. The direct benefits of feature selection are that it reduces the burden of follow-up work and improves model generalization. Choosing the best subset from the original features has been shown to be a NP-hard problem. When the number of features is N, 2 N combinations of features must be tried using the greedy strategy, which is unsustainable for ordinary computer systems, especially when the number of features is very large <ns0:ref type='bibr' target='#b11'>(Faris et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Xue et al. 2015)</ns0:ref>. 
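As a simple worked illustration of this combinatorial growth:

```latex
% Exhaustive search over all feature subsets scales as 2^N:
2^{20} \approx 1.05 \times 10^{6}, \qquad
2^{100} \approx 1.27 \times 10^{30}, \qquad
2^{15185} > 10^{4500} \ \text{(for the preprocessed LIHC gene count)},
% so even moderate feature counts are already far beyond exhaustive enumeration.
```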
As a result, over the last few decades, some heuristic algorithms have been proposed to find the best subset that can best represent the feature meanings of the original dataset. A best subset can be used to represent the original dataset with the least amount of redundancy between features and the highest correlation between the subset's features and labels. The mRMR algorithm, as an implementation of this mind, can obtain the top K ranking features, where the K value must be manually set and mutual information is used to measure the relevance of two features. The mRMR algorithm is undoubtedly an excellent feature selection framework, and it has been widely used in a variety of fields. Despite this, there are still some shortcomings that can be addressed. On the one hand, mutual information can only handle discrete data, which means that continuous data must be discretized in advance, resulting in some accuracy loss. The output of the mRMR, on the other hand, is the top K features, and there is no 'golden rule' to specify a suggested or best K value. To address the above two issues, we proposed a hybrid method called MMPSO. First, the noisy data were removed using a conventional method, and the imbalanced data were corrected using random oversampling technology for preprocessing. Second, we used the MIC <ns0:ref type='bibr' target='#b2'>(Cao et al. 2021;</ns0:ref><ns0:ref type='bibr' target='#b33'>Reshef et al. 2011</ns0:ref>) instead of the MI to obtain a more precise correlation value. Although study <ns0:ref type='bibr' target='#b18'>(Kinney &amp; Atwal 2014)</ns0:ref> has noted that estimates of mutual information are more equitable than estimates of MIC, there is no denying that MIC has been widely and conveniently used. Furthermore, rapidMic <ns0:ref type='bibr' target='#b40'>(Tang et al. 2014)</ns0:ref>, an algorithm that can use multiple threads simultaneously, was used to reduce the time expenditure of MIC algorithm. Finally, we selected the top K features from the second step as the input for the PSO algorithm to find the best subset. In accordance with the preceding thought, we conducted our experiment using ten datasets. On these datasets, the MMPSO method was applied to compared with the other methods, including mRMR, ILFS, ReliefF, Mutinffs, FSV, Fisher, CFS, UFSOL. We applied LIBSVM library to evaluate the performance of the three methods and test the classification accuracy of the selected Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>features. The experimental results of the ten datasets provided evidences that the MMPSO method performed better than other feature selection algorithms, when all the algorithms used the same number of features. It's worth mentioning that the mRMR performed similarly to the MMPSO method in the previous findings. Here, we still highlighted the advantages of MMPSO, since it can autonomously select feature subset on the basis of MIC, which will improve the accuracy of classification.</ns0:p><ns0:p>To investigate the efficacy of our method for biological data, we used the LIHC dataset (a dataset of RNA expression in liver hepatocellular carcinoma) for further analysis. After performing the MMPSO algorithm, a signature including 18 genes was identified as significant biomarkers for distinguishing between tumor and normal groups. 
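The kind of downstream check applied to such a signature can be sketched as follows; the Python calls are stand-ins for the FactoMineR/factoextra and MedCalc analyses used in the paper, and the expression matrix and labels are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
expr_sig = rng.random((531, 18))            # placeholder signature expression matrix
label = rng.integers(0, 2, size=531)        # placeholder tumor/normal labels

# Two-dimensional PCA projection of the samples on the selected genes.
coords = PCA(n_components=2).fit_transform(expr_sig)

# ROC/AUC of a logistic model built on the signature, analogous to Equation 9.
pp = LogisticRegression(max_iter=1000).fit(expr_sig, label).predict_proba(expr_sig)[:, 1]
auc = roc_auc_score(label, pp)
```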
The PCA and ROC analysis results all confirmed that the biomarkers we selected have great discrimination ability, while six biomarkers were significantly associated with the overall survival of patients. Furthermore, we need to discuss the significance of these biomarkers from a biological perspective. Studies <ns0:ref type='bibr' target='#b50'>(Wang et al. 2014</ns0:ref>) identified and evaluated tumor vascular PLVAP as a therapeutic target for treatment of HCC but not in nontumorous liver tissues, and this result may provide some clues for the development of drugs for patients with HCC. FNDC4 <ns0:ref type='bibr' target='#b47'>(Wang et al. 2021</ns0:ref>) was reported to be an extracellular factor and played important roles in the invasion and metastasis of HCC in that it promoted the invasion and metastasis of HCC partly via the PI3K/Akt signaling pathway. Wang et al. <ns0:ref type='bibr' target='#b49'>(Wang et al. 2019</ns0:ref>) discovered that PYROXD2 localizes to the mitochondrial inner membrane/matrix, and it plays important roles in regulating mitochondrial function of HCC. TRIM29 plays critical role in many neoplasms. The study <ns0:ref type='bibr' target='#b52'>(Xu et al. 2018</ns0:ref>) revealed that higher TRIM29 expression was associated with higher differentiation grade of HCC and its depletion promoted liver cancer cell proliferation, clone formation, migration and invasion. The regulatory role of ACTNs in tumorigenesis has been demonstrated and ACTN1 was significantly upregulated in HCC tissue and closely related to tumor size, TNM stage and patient prognoses <ns0:ref type='bibr' target='#b4'>(Chen et al. 2021b</ns0:ref>). Based on the above studies, the majority of the genes identified by our algorithm are promising candidate biomarkers for the diagnosis or treatment of liver cancer.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we proposed the MMPSO hybrid algorithm to identify a feature subset for highdimensional dataset. The experimental data provided evidences that our method outperformed others. More importantly, by applying our proposed algorithm to the biological LIHC dataset, we obtained the gene signatures in classifying tumors and normal samples with high efficacy. Our study also has several limitations. Despite the fact that we used rapidMic to accelerate the calculation, the computational complexity for too many features remains relatively high. In addition, we selected only the top K features as the PSO input without a theoretical foundation. Therefore, there is still some space for improvement in selecting a better subset of features for high-dimensional datasets, and we will advance this in future works. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>The relative expressions of the 18 genes in tumor and control groups.</ns0:p><ns0:p>The values were displayed as floating bars (min to max) with a line at the mean value. The first nine genes increased in tumors (color in red) compared to controls (color in green), while the last nine genes decreased in tumors (color in green) compared to controls (color in red). Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:p>PCA and heatmap analysis of the 18 gene signatures obtained from MMPSO method in LIHC dataset.</ns0:p><ns0:p>(A) PCA analysis of tumor and control samples. 
The PCA analysis was performed by using 'FactoMineR' and 'factoextra' packages in R; (B) PCA analysis of tumor and control samples, the latter including control_GTEx and control_TCGA samples; (C) Heatmap of all the samples based on the 18 gene expression profiling. The heatmap analysis was performed by using 'pheatmap' package in R. DEGs: differentially expressed genes. The expression levels of up DEGs were increased in tumors compared to controls, and the down DEGs were decreased. The ten datasets obtained from UCI were analyzed to demonstrate the superiority of MMPSO; the biological dataset was used as an application of the proposed MMPSO method.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Here, we respectively compared the classification accuracy of the MMPSO method with the results of other algorithms, including mRMR<ns0:ref type='bibr' target='#b31'>(Peng et al. 2005)</ns0:ref>, ILFS<ns0:ref type='bibr' target='#b35'>(Roffo et al. 2017</ns0:ref>), ReliefF(Sui-Yu et al. 2010), Mutinffs<ns0:ref type='bibr' target='#b17'>(Hutter 2002)</ns0:ref>, FSV<ns0:ref type='bibr' target='#b1'>(Bradley &amp; Mangasarian 1999)</ns0:ref>, Fisher<ns0:ref type='bibr' target='#b13'>(Gu et al. 2012)</ns0:ref>, CFS<ns0:ref type='bibr' target='#b16'>(Hong et al. 2011)</ns0:ref>, UFSOL<ns0:ref type='bibr' target='#b14'>(Guo et al. 2017)</ns0:ref>, to demonstrate that our method has better classification accuracy. LIBSVM(Chih-Chung &amp; Chih-Jen 2011) is an integrated library, which supports multi-class classification. Here, we performed classification using LIBSVM to test the accuracy with k-fold cross validation.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69874:1:0:NEW 25 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The statistic was performed by Wilcoxon rank sum test with continuity correction in R.PeerJ Comput. Sci. 
reviewing PDF | (CS-2022:01:69874:1:0:NEW 25 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Basic information of the datasets in this study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Datasets</ns0:cell><ns0:cell /><ns0:cell cols='3'>Instances Features Classes</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>gina</ns0:cell><ns0:cell>3468</ns0:cell><ns0:cell>970</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>gisette</ns0:cell><ns0:cell>6000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>hillvalley</ns0:cell><ns0:cell>1212</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Datasets obtained from UCI</ns0:cell><ns0:cell>isolet madelon musk</ns0:cell><ns0:cell>7797 2000 6598</ns0:cell><ns0:cell>617 500 166</ns0:cell><ns0:cell>26 2 2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>scene</ns0:cell><ns0:cell>2407</ns0:cell><ns0:cell>294</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>splice</ns0:cell><ns0:cell>3190</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>usps</ns0:cell><ns0:cell>9298</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>waveform</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Biological dataset LIHC</ns0:cell><ns0:cell>531</ns0:cell><ns0:cell>60498</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69874:1:0:NEW 25 Feb 2022)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69874:1:0:NEW 25 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear Editor Qichun Zhang, We thank the Editor and Reviewers for their positive remarks and valuable comments, which will greatly help improve the quality of the article. We have revised the manuscript point by point, to address their concerns. We believe that the revised paper may be suitable for publication in PeerJ Computer Science. Your consideration for this manuscript is highly appreciated. We are looking forward to hearing from you soon. Best wishes Sincerely yours, Yangyang Wang, Xiaoguang Gao, Xinxin Ru, Jihan Wang Reviewer 1 Basic reporting Wang et al. developed a hybrid MMPSO method combining feature ranking and heuristic searching which yield higher classification of accuracy. To demonstrate the accuracy of such method, the authors applied the method to eleven datasets and identified 18 tumor specific genes, nine of which were further confirmed in a HCC TCGA dataset. Together, the study presents a highly effective MMPSO algorithm with great potential to identify novel biomarkers and therapeutic targets for cancer research and treatment. Answer: We thank the reviewer for the positive remarks and valuable comments. Experimental design The study is original and novel in the sense that it develops the hybrid algorithm by combining mRMR and MIC method. The methodology is very concrete as the authors compared the MMPSO method to eight other methods and clearly showed their superiority. Furthermore, the MMPSO was applied to the HCC biological dataset and gave a 18 gene signature which successfully separate 531 samples into cancer and normal groups. Many of the targets in the signature were validated by published research and/or their own Kaplan Meier method. Therefore, the study is well targeted at the Aims of Scope of Peer J Computer Science. Answer: We thank the reviewer so much for the valuable comments, and have improved the manuscript. Validity of the findings The findings in this study are thoroughly validated. The conclusions are well stated. Answer: We thank the reviewer for the positive remarks, and added the results of Figure 2 and Figure 3 in the revised manuscript. Reviewer 2 Basic reporting The authors aimed to develop an improved feature selection algorithm. They applied the algorithm to the public gene expression dataset and found some candidate biomarker genes for classifying tumor versus normal samples. This is an incremental work based on previous methods and has potentially interesting applications on tumor diagnosis. However, the following concerns should be addressed to meet the basic requirement of a bioinformatics method paper. Answer: We thank the reviewer for the valuable comments. We revised the manuscript point by point and added the information according to the reviewer’s concern. Experimental design Major concern #1: the authors used the same dataset (LIHC) for selecting the tumor/normal gene signatures and testing their performance. This does not prove the superiority of the algorithm since it can simply overfit this dataset. I suggest 1) selecting features based on a subset of samples and validating on the left-out samples; 2) using another cohort of normal/tumor gene expression datasets for testing the selected features. Answer: We thank the reviewer for the comments. Generally, the modeling process of machine learning is to train the parameters of the model and test the performance of the model by dividing the data into training set and test set. 
In this study, we built the MMPSO model and tested its classification performance using the first ten datasets from UCI Machine Learning Repository as the foundation work, and the biological LIHC dataset was used as an application of the proposed MMPSO method. Thus, we didn’t further divide the LIHC dataset into two parts. We have modified the abstract and context referring to the ten test datasets and the biological dataset in the revised manuscript, to make the study design more clear. Major concern #2: the authors did not compare the gene signatures selected by their method versus other methods. For instance, using the top differential genes of a simple differential gene expression analysis with DESeq2 or EdgeR, do they separate the tumor samples better than these 18 genes? Answer: We thank the reviewer so much for the constructive suggestions. In the revision, we obtained 18 gene signatures from the MMPSO method in LIHC dataset. In the revised Figure 3, we added a heatmap of all the samples based on the 18 gene expression profiling (Figure 3C, as shown below), to show both the relative gene expression pattern and the gene symbol information. Figure 3C. The heatmap of all samples based on the 18 gene signature obtaining from MMPSO. The heatmap was performed by using “pheatmap” package in R. Just as the reviewer suggested, we also performed the Limma algorithm to analyze the differential expressed genes (DEGs) between tumor and controls, since Limma method has been widely used in the bioinformatic data. We then selected the top 18 DEGs obtaining from Limma, including 9 up-regulated genes and 9 down-regulated genes. The detail gene symbol and expression profiling of the 18 genes from Limma was shown in the heatmap below (this heatmap was not included in the manuscript). We can see from both the two heatmaps that the 18 gene signature could well distinguish the control from tumor samples, and to be honest that, the Limma performed better than the MMPSO method. Here, we highlight the importance of MMPSO method in the aspect that it can select feature subset just including very few features for classification. It’s true that the classical Limma method or other biological methods will identify all of the DEGs in tumors. For example, we obtained more that 3,000 DEGs (set the criterion as |Log2FC| > 1 and FDR < 0.05 between tumor and control groups) in tumor when performing Limma as for the LIHC dataset. Thus, we suppose that methods such as Limma will perfectly show the global/comprehensive gene expression alterations to the researchers, and the method in this study will help researchers identify as few features as possible when select representative biomarkers for tumors, which may improve the efficiency and reduce the workloads during the big data analysis. The heatmap of all samples based on the top 18 DEGs signature obtaining from Limma. Given that in many of the ten datasets in Figure 1 and Table 2, mRMR performs very similarly to MMPSO, what is the performance on the LIHC dataset if only using mRMR to select features? Answer: We thank the reviewer so much for the careful comments. Indeed, as the reviewer found, from the results based on ten test datasets, the classification accuracy of MMPSO is similar to mRMR. In fact, the MMPSO method has higher classification accuracy. The mRMR is an excellent feature selection framework, which strives to select features with the greatest correlation with classification but with the least redundancy among the selected features. 
Here, we highlighted the advantages of MMPSO over mRMR: (1) mRMR is based on mutual information measurement to measure the correlation between features, while MMPSO based on maximum mutual information. Maximum mutual information is a more general and balanced measurement method than mutual information. (2) For feature selection based on mRMR method, the number of selected features needs to be determined personally, this will lead to different results between different researchers. MMPSO can autonomously determine this parameter by using the PSO method. Taken together, we highlighted the advantages of MMPSO over mRMR, and we discussed the reviewer’s concerns in the revision. Validity of the findings Major concern #3: In line 210, the authors listed the 18 gene signatures and “nine genes were significantly upregulated in tumors compared to normal samples” but did not clarify which genes were differentially upregulated or downregulated and what were the criteria. The authors also did not provide method detail of how they determined the high and low expression levels for sample stratification in Figure 4. For instance, I checked the average values of some of the genes in the two raw data files (data_tumor.csv, data_normal.csv) provided by the authors: ACTN1, ENSG00000072110 Tumor = 12.78586 Normal = 12.33047 CACHD1, ENSG00000158966 Tumor = 7.624089 Normal = 7.308306 Answer: We thank the reviewer for the constructive suggestions and the careful review. In the revised manuscript, we added the information about the reviewer concerned. The 18 genes were ACTN1, CACHD1, ERAP2, FAM171A1, FKBP1B, HIST1H2BC, PLVAP, PRKD1, RPL7AP6, ADRA2B, DMKN, FNDC4, NPC1L1, POMC, PYROXD2, RBP1, TRIM29, and ZBED9, with the relative expression of the first nine genes significantly increasing in tumors and the last nine genes decreasing (P < 0.01 in the Wilcoxon rank sum test with continuity correction, as shown in Figure 2 below). As for Figure 4, the Kaplan-Meier (KM) method was performed by using the “survival” and “survminer” packages in R. For each gene, the cut-off points obtaining from “survminer” package then divided gene expression values into the high (high) and the low (low) groups. In lines 292-296, the authors discussed the literature that a higher level of ACTN1 is associated with HCC size, and in Figure 4, CACHD1 was described as an ungulated gene associated with worse prognosis, however, I do not consider a difference of < 0.5 as significantly different in gene expression. At the very least, the authors should provide a box-whisker plot with proper statistics of the expression of these 18 genes across the normal/tumor samples they used to demonstrate their points. Answer: We thank the reviewer so much for the valuable suggestion. In the revised manuscript, we added Figure 2 to show the relative expression of the 18 gene signatures. Specifically, we performed the Wilcoxon rank sum test with continuity correction to analyze the differences of the 18 genes between Tumor and Control groups, the P values were all less than 0.01, as shown in Figure 2 below. Figure 2. The relative expressions of the 18 genes in tumor and control groups. The values were displayed as floating bars (min to max) with a line at the mean value. The first nine genes increased in tumors (color in red) compared to controls (color in green), while the last nine genes decreased in tumors (color in green) compared to controls (color in red). The statistic was performed by Wilcoxon rank sum test with continuity correction in R. 
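For completeness, an approximate Python equivalent of this per-gene comparison is sketched below; R's wilcox.test with continuity correction corresponds to the Mann-Whitney U test, and the expression matrices here are placeholders.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
tumor_expr = rng.random((371, 18))     # 371 tumor samples x 18 signature genes (placeholder)
normal_expr = rng.random((160, 18))    # 160 control samples x 18 signature genes (placeholder)

for g in range(18):
    stat, p = mannwhitneyu(tumor_expr[:, g], normal_expr[:, g],
                           use_continuity=True, alternative="two-sided")
    direction = "up" if tumor_expr[:, g].mean() > normal_expr[:, g].mean() else "down"
    print(f"gene {g + 1}: U = {stat:.1f}, P = {p:.3g}, {direction} in tumor")
```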
Additional comments Minor comments. Please read carefully for typos. Line 18: “a optimal subset”. Line 70: “Yuanyuan et al. 2021”. In line 17, MMPSO should not be abbreviated because this is its first appearance in the whole manuscript. Answer: We thank the reviewer for the careful review, we have revised the manuscript accordingly. "
391
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper aims to propose a new algorithm to detect tsunami risk areas based on spatial modeling of vegetation indices and a prediction model to calculate the tsunami risk value. It employs atmospheric correction using DOS1 algorithm combined with k-NN algorithm to classify and predict tsunami-affected areas from vegetation indices data that have spatial and temporal resolutions. Meanwhile, the model uses the vegetation indices (i.e., NDWI, NDVI, SAVI), slope, and distance. The result of the experiment compared to other classification algorithms demonstrates good results for the proposed model. It has the smallest MSEs of 0.0002 for MNDWI, 0.0002 for SAVI, 0.0006 for NDVI, 0.0003 for NDWI, and 0.0003 for NDBI. The experiment also shows that the accuracy rate for the prediction model is about 93.62%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Tsunami is one of the disaster threats for many coastal areas in Indonesia. This disaster is generally triggered by an earthquake at sea causing a vertical shift in the seabed <ns0:ref type='bibr' target='#b1'>(Amri et al., 2018)</ns0:ref>. The territories of Indonesia are surrounded by the meeting of the world's main tectonic plates which causes the tsunami. These plates are: (1) the Indian-Australian Ocean plate in the south, moving relatively to the north and presses the Eurasian plate (where most of Indonesia territories are located), and (2) the Pacific plate in the east, which moves relatively westward against the Eurasian plate. This phenomenon causes many sources of earthquakes as well as the growth of active volcanoes in the territories of Indonesia, thus placing Indonesia as one of the areas in the world's most active tectonic zone <ns0:ref type='bibr' target='#b39'>(Verstappen, 2010)</ns0:ref>.</ns0:p><ns0:p>The National Development Planning Agency of Indonesia (Bappenas) stated that the total loss and damage from the tsunami and earthquake in Yogyakarta and Aceh Province was Rp 70.5 trillion. The impacts were 80 percent of the infrastructure sector damage (including housing) and 11 percent of the productive sector damage (Regulation of the Ministry of Health Republic of Indonesia No. 36 of 2014 about Assessment of Post-Disaster Damage, Loss and Health Resource Needs, 2014). The tsunami itself caused the death toll in the affected area reaching 108,100 people and 127,700 people missing <ns0:ref type='bibr' target='#b1'>(Amri et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The province of Yogyakarta that has the potential for a tsunami is located in the south coast area. The disaster risk index in Yogyakarta for the tsunami disaster currently reaches 1.74. This value is seen from the vulnerability of the Yogyakarta region from the tsunami disaster by considering geological aspects, tsunami threat maps and demographics in villages that may be exposed to the impact of the tsunami. In general, the Yogyakarta region has a fairly high tsunami disaster index, especially areas directly adjacent to the coastline <ns0:ref type='bibr'>(Regional Disaster Management Agency, 2019)</ns0:ref>. Kulon Progo Regency is the most susceptible to tsunami disaster among the four districts in Yogyakarta. There is a possibility of 60,607 people that will be affected if a tsunami occurs in this district <ns0:ref type='bibr'>(National Disaster Management Agency, 2012)</ns0:ref>. 
Areas with a high tsunami risk in Kulon Progo Regency are Temon Sub-District, Wates Sub-District, Panjatan Sub-District, Lendah Sub-District, and Galur Sub-District (Regulation of the Special Region of Yogyakarta No. 5 of 2019 about The Spatial Plan of the Special Region of Yogyakarta for <ns0:ref type='bibr'>2019)</ns0:ref>. Kulon Progo Regency comprises of 47 villages (see Figure <ns0:ref type='figure'>1</ns0:ref> for further information on the name of the villages) <ns0:ref type='bibr' target='#b28'>(Mustaqim, 2019)</ns0:ref>. Figure <ns0:ref type='figure'>1</ns0:ref> shows the visualization of the map of Kulon Progo Regency. Figure <ns0:ref type='figure'>1</ns0:ref> is obtained from the mapping of Shuttle Radar Topography Mission (SRTM) image from https://earthexplorer.usgs.gov/.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1. The Visualization of Kulon Progo Regency Map</ns0:head><ns0:p>Damage and impacts caused by disasters, especially tsunamis, can be analyzed using the field survey method. The field survey method used during a disaster is difficult and carries a high risk. Disasters can occur over a large area, so surveying the entire affected area takes a long time <ns0:ref type='bibr' target='#b35'>(Singh et al., 2014)</ns0:ref>. Obtaining fast and accurate information in the event of a disaster can save many human lives and reduce losses. The solution for problems in field survey is the use of satellite technology. With the availability of easily obtained satellite images, remote sensing techniques can be widely used to assess disaster risk areas <ns0:ref type='bibr' target='#b4'>(Brunner et al., 2010)</ns0:ref>. The use of satellite imagery can overcome the limitations of collecting disaster area image data using traditional methods. Satellite images not only speed up the process of analyzing disaster areas but also provide accurate and timely estimates of the risk of disaster affected areas <ns0:ref type='bibr' target='#b20'>(Koshimura et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Landsat 8 satellite captures land cover images in disaster-affected areas that are influenced by soil and vegetation characteristics, in which this process becomes the basis for remote sensing techniques in analyzing land cover <ns0:ref type='bibr' target='#b34'>(Rendana et al., 2016)</ns0:ref>. In the last five years, the vegetation indices analysis has been applied in the evaluation of land cover. One of them is the analysis of disaster-affected land as a variation of spatial and temporal analysis techniques <ns0:ref type='bibr' target='#b10'>(Holzman et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Mallick et al., 2009)</ns0:ref>. Digital image processing from satellite data enables image analysis through various algorithms and mathematical indices. Meanwhile, feature analysis is based on the reflectance characteristics and the index has been designed to detect prominent features in the image area <ns0:ref type='bibr' target='#b43'>(Xie et al., 2010)</ns0:ref>. There are several indices that can detect areas containing vegetation in the image obtained from remote sensing, such as Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-Up Index (NDBI), Soil Adjusted Vegetation Index (SAVI), Normalized Difference Water Index (NDWI), and Modified Normalized Difference Water Index (MNDWI). NDVI is a common and widely used index <ns0:ref type='bibr' target='#b18'>(Karnieli et al., 2010)</ns0:ref>. 
It is also applied considerably in global environment, climate change, and disasters research <ns0:ref type='bibr' target='#b6'>(Gao, 1996)</ns0:ref>.</ns0:p><ns0:p>The response of vegetation to the environment is considered very sensitive. It affects the ecological and climate balances. In addition it becomes an effective barrier against natural disasters. Classification of natural disaster images, especially images before and after the tsunami, can use the average value of various spectrum indices such as NDVI, NDBI, SAVI, MNDWI, and NDWI as training data <ns0:ref type='bibr' target='#b35'>(Singh et al., 2014)</ns0:ref>. The pre-and post-tsunami images are compared to obtain the inundated area. NDVI, NDBI, SAVI, MNDWI, and NDWI are calculated as a ratio difference between red and near infrared bands respectively in measured canopy reflectance <ns0:ref type='bibr' target='#b12'>(Hu et al., 2008)</ns0:ref>. The process of analyzing the vegetation index data is carried out using representation, categorization, and classification approaches to produce valid information. The most suitable method to represent, categorize, and classify the vegetation index is a machine learning <ns0:ref type='bibr' target='#b25'>(Maxwell et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The most recent development of remote sensing methods for impact analysis and detection of image transformations caused by disasters is the use of machine learning. The machine learningbased transformation detection method consists of two categories, they are supervised and unsupervised detection methods. The supervised method requires training to identify transformations, such as methods of instance support vector machine (SVM) <ns0:ref type='bibr' target='#b3'>(Bovolo &amp; Bruzzone, 2007;</ns0:ref><ns0:ref type='bibr' target='#b40'>Volpi et al., 2012)</ns0:ref>, post classification comparison <ns0:ref type='bibr' target='#b48'>(Yang &amp; Wen, 2011)</ns0:ref>, artificial neural network (ANN), and function neural network (FNN) <ns0:ref type='bibr' target='#b26'>(Mehrotra et al., 2015)</ns0:ref>. Meanwhile, the unsupervised method does not require training as this method analyzes the image to identify any transformations. The unsupervised method is usually used to differentiate images <ns0:ref type='bibr' target='#b42'>(Wang et al., 2020)</ns0:ref> and make a principal component analysis (PCA) <ns0:ref type='bibr' target='#b5'>(Corner et al., 2000)</ns0:ref>.</ns0:p><ns0:p>In this study, a new algorithm is proposed to predict tsunami risk areas based on spatial modelling prediction of vegetation index. The new algorithm is developed with the addition of atmospheric correction before the data is processed using the k-Nearest Neighbor (k-NN) algorithm. In this algorithm, machine learning is used to classify and predict tsunami-affected areas from vegetation indices data that has spatial and temporal resolutions. To evaluate the performance of this new algorithm, it is compared to Cart, SVM, and ANN algorithms. The experimental results show that this new framework is superior, as indicated by the Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Cohen's Kappa scores that outperform other algorithms. Furthermore, since there is only a category of risk level and no value that indicates the level of tsunami risk, then a new formula using vegetation indices and other parameters is proposed to determine the tsunami risk value. This paper is presented as follows. 
Section 1 describes the background on tsunami problems. Section 2 contains some related works and the reviews of machine learning and vegetation indices. Section 3 presents the research method and flowchart, as well as the proposed algorithm. A new algorithm for detecting tsunami risk areas is proposed, by adding atmospheric correction, before applying prediction and classification algorithms, since some data may contain some noises. This framework uses k-NN algorithm to classify the area, with high, medium, and low risk. Section 4 explains the experiments and comparisons of the proposed framework using k-NN algorithm with other classifying methods using real data. A model to predict the tsunami risk value is also discussed in this section. Finally, the conclusions are stated in Sections 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Related Works</ns0:head><ns0:p>In this section, some research about remote sensing based on prediction and classification are discussed. <ns0:ref type='bibr'>Prasetyo, et al. implemented</ns0:ref> spectral vegetation index to the data obtained from the Landsat 8 OLI satellite to provide disaster risk index information <ns0:ref type='bibr' target='#b29'>(Prasetyo et al., 2020)</ns0:ref>. k-NN with spatial autocorrelation was used to classify the drought risk areas. The spectral vegetation indices used in this study were NDVI, SAVI, Vegetation Condition Index (VCI), Temperature Condition Index (TCI), and Vegetation Health Index (VHI) <ns0:ref type='bibr' target='#b46'>(Xue &amp; Su, 2017)</ns0:ref>. While, the Kappa accuracy test showed that the SVM and k-NN methods had an accuracy of 88.30.</ns0:p><ns0:p>Other research were conducted a remote sensing study using machine learning methods for classifying the image data <ns0:ref type='bibr' target='#b22'>(Ma et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Maxwell et al., 2018)</ns0:ref>. One of them wanted to classify the distribution, species, and extent of mangroves using the Akaike Information Criterion (AIC) on Nusa Lembongan Island <ns0:ref type='bibr' target='#b16'>(Ilham &amp; Marzuki, 2017)</ns0:ref>. This research was important since mangrove forest had important ecological, economic, and social roles. One example was the study on mangrove forest as a green belt for shoreline protection from storms and tsunami waves using Worldview-2 satellite imagery with a data resolution of 0.46 meters. This method automatically identified land classes, sea/water classes, and mangrove classes. The results showed that the classification accuracy was 68.32%.</ns0:p><ns0:p>Another related study proposed a new method for classifying daily NDVI time series data based on a combination of multi-classifiers <ns0:ref type='bibr' target='#b54'>(Zhao et al., 2017)</ns0:ref>. In this study, the HJ-CCD satellite was used as data for compiling an NDVI time series model with S-G filtering and spatial interpolation. This study also proposed a dimension reduction method using the statistical features of the daily NDVI time series. The results showed that the accuracy of image classification of disaster-prone areas was 77.45%.</ns0:p><ns0:p>The next study used the unsupervised classification method to classify coastal forest damage due to tsunamis <ns0:ref type='bibr' target='#b50'>(Yonezawa, 2015)</ns0:ref>. The extent and distribution of tsunami damage was predicted using NDVI. To assess the classification accuracy, they used the error matrix. Aerial photos were used for reference. 
From 200 random points, the accuracy was 79.50% with a Kappa Stats of 0.6387. Similar research proposed NDVI predictions recorded by satellite at Ventspils City in Courland, Latvia and obtained using the Markov chain method <ns0:ref type='bibr' target='#b37'>(Stepchenko &amp; Chizhov, 2016)</ns0:ref>. In general, Markov chain prediction is a probability forecasting method because the prediction results show the probability of an NDVI value in the future <ns0:ref type='bibr' target='#b21'>(Liu, 2010)</ns0:ref>. This study demonstrated how Markov chains could predict future values with less memory and random walk capability. Each state was reached directly by other states with a transition matrix to provide a high prediction accuracy of 63.93%. Subsequent research proposed a method for mapping the value of environmental damage after a disaster <ns0:ref type='bibr' target='#b9'>(Havivi et al., 2018)</ns0:ref>. The data used are TerraSAR-X (TSX) images with high resolution obtained from before and after the incident and also Landsat 5 images before the incident. The affected areas were analyzed with Synthetic Aperture Radar (SAR) using one SAR interferometric coherence map (InSAR). The accuracy of mapping the environmental damage caused by the tsunami could be improved using the vegetation index (NDVI) <ns0:ref type='bibr' target='#b7'>(Ghebrezgabher et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b20'>Koshimura et al., 2020)</ns0:ref>. The affected areas were mapped with the overall accuracy of 89% and Kappa coefficient of 82%.</ns0:p><ns0:p>In summary, it has been shown from this review that spectral vegetation indices indicated the quantitative values for measuring the vegetation canopy in receiving and reflecting the light spectrum. They were interpreted as plant's spectral characteristics, including the infrared spectrum as visible light (IR) and the near infrared spectrum as invisible light (NIR) <ns0:ref type='bibr' target='#b29'>(Prasetyo et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Research Methods</ns0:head><ns0:p>In this section, a research method for detecting the tsunami risk areas is introduced. Atmospheric correction is used in this algorithm since data may contain some noises which will interfere with </ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Atmospheric Correction</ns0:head><ns0:p>The data obtained from data extraction still has a low level of radiometric accuracy because it has errors in the recording process from the sensor in the image. Therefore, if the data will be used for data processing such as biomass, vegetation indices, land-cover classification, and so on, they require an atmospheric correction process. This process is used to improve the accuracy of image classification so that the data obtained can be compared and arranged in a number of solutions and evaluation of tsunami-prone areas. One of the processes is the removal of atmospheric haze and cloud cover. The results of the atmospheric correction process are very important for optimizing foggy satellite imagery regarding the object's surface to detect changes in land cover and land use.</ns0:p><ns0:p>The method that can be used in the atmospheric correction process is the Dark Object Subtraction (DOS), especially DOS1. 
This method is chosen because the field data parameters for image correction are not known and also the atmospheric effect model is not known which shows the condition of an image when the image is recorded <ns0:ref type='bibr' target='#b52'>(Zhang et al., 2010)</ns0:ref>.</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>&#120588; = &#120587;(&#119871; &#119904;&#119886;&#119905; -&#119871; &#119901; )&#119889; 2 &#119864; 0 cos (&#120579; &#119911; )</ns0:formula><ns0:p>where is the at-satellite radiance, is the path radiance, is the land surface reflectance, &#119871; &#119904;&#119886;&#119905; &#119871; &#119901; &#120588; &#119889; is the Earth-Sun distance in astronomical units, is the exoatmospheric solar spectral &#119864; 0 irradiance, and is the solar zenith angle.</ns0:p><ns0:p>&#120579; &#119911;</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Data Pre-processing</ns0:head><ns0:p>To begin with, the geometric and radiometric corrections need to be known. Geometric correction is used to correct the position of the coordinates of each pixel in the image exactly as NDVI is a vegetation index that is often used to compare the level of vegetation greenness (chlorophyll level) in plants <ns0:ref type='bibr' target='#b27'>(Min et al., 2016)</ns0:ref>.</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_1'>) &#119873;&#119863;&#119881;&#119868; = &#119873;&#119868;&#119877; -&#119877;&#119890;&#119889; &#119873;&#119868;&#119877; + &#119877;&#119890;&#119889; = &#119861;&#119886;&#119899;&#119889; 5 -&#119861;&#119886;&#119899;&#119889; 4 &#119861;&#119886;&#119899;&#119889; 5 + &#119861;&#119886;&#119899;&#119889; 4 &#61623; Normalized Difference Water Index (NDWI)<ns0:label>2</ns0:label></ns0:formula><ns0:p>NDWI is an index that shows the wetness level of an area. The formula for the NDWI index follows Eq. 3 <ns0:ref type='bibr' target='#b45'>(Xu, 2006)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>(3) &#119873;&#119863;&#119882;&#119868; = &#119866;&#119903;&#119890;&#119890;&#119899; -&#119873;&#119868;&#119877; &#119866;&#119903;&#119890;&#119890;&#119899; + &#119873;&#119868;&#119877; = &#119861;&#119886;&#119899;&#119889; 3 -&#119861;&#119886;&#119899;&#119889; 5 &#119861;&#119886;&#119899;&#119889; 3 + &#119861;&#119886;&#119899;&#119889; 5 &#61623; Modified Normalized Difference Water Index (MNDWI)</ns0:formula><ns0:p>MNDWI is a modification of the NDWI index. The formula for MNDWI index follows Eq. 4 <ns0:ref type='bibr' target='#b0'>(Acharya et al., 2018)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>(4) &#119872;&#119873;&#119863;&#119882;&#119868; = &#119866;&#119903;&#119890;&#119890;&#119899; -&#119878;&#119882;&#119868;&#119877; 1 &#119866;&#119903;&#119890;&#119890;&#119899; + &#119878;&#119882;&#119868;&#119877; 1 = &#119861;&#119886;&#119899;&#119889; 3 -&#119861;&#119886;&#119899;&#119889; 6 &#119861;&#119886;&#119899;&#119889; 3 + &#119861;&#119886;&#119899;&#119889; 6</ns0:formula><ns0:p>&#61623; Soil Adjusted Vegetation Index (SAVI) SAVI is used to correct NDVI for minimizing the influence of soil brightness in low vegetative cover areas, using a soil-brightness correction factor ( ). SAVI is calculated as a &#119871; ratio between the Red and NIR values with a soil brightness correction factor, where , &#119871; = 0.5 to accommodate most land cover types. The formula for SAVI index follows Eq. 
5 <ns0:ref type='bibr' target='#b15'>(Huete, 1988)</ns0:ref>.</ns0:p><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_4'>&#119878;&#119860;&#119881;&#119868; = ( &#119873;&#119868;&#119877; -&#119877;&#119890;&#119889; &#119873;&#119868;&#119877; + &#119877;&#119890;&#119889; + &#119871; ) * (1 + &#119871;) = ( &#119861;&#119886;&#119899;&#119889; 5 -&#119861;&#119886;&#119899;&#119889; 4 &#119861;&#119886;&#119899;&#119889; 5 + &#119861;&#119886;&#119899;&#119889; 4 + 0.5 ) * 1.5 &#61623; Normalized Difference Built-up Index (NDBI)</ns0:formula><ns0:p>NDBI is an effective transformation/index used in mapping building lands in an area automatically using Landsat 8 OLI images. The formula for NDBI index follows Eq. 6 <ns0:ref type='bibr' target='#b51'>(Zha et al., 2003)</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science correlation is one of the correlation tests used to measure the strength of the linear relationship between two variables. Two variables are said to be correlated if a change in one variable is accompanied by a change in another variable, both of which change in the same direction or vice versa. The formula for calculating correlation can be seen in Eq. 7.</ns0:p><ns0:p>(7)</ns0:p><ns0:formula xml:id='formula_5'>&#119903; &#119909;&#119910; = &#119899;&#8721;&#119883;&#119884; -(&#8721;&#119883;)(&#8721;&#119884;) {&#119899;&#8721;&#119883; 2 -(&#8721;&#119883;) 2 }{&#119899;&#8721;&#119884; 2 -(&#8721;&#119884;) 2 }</ns0:formula><ns0:p>where is the correlation value, is the variable , and is the variable . <ns0:ref type='formula'>8</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_6'>&#119889;(&#119909;,&#119910;) = &#8721; &#119899; &#119894; = 1 (&#119909; &#119894; -&#119910; &#119894; ) 2</ns0:formula><ns0:p>where is the number of data. &#119899;</ns0:p></ns0:div> <ns0:div><ns0:head n='6.'>Spatial Interpolation</ns0:head><ns0:p>Interpolation is the method used to predict a value at locations where data are not available. Thus, interpolation is used to predict values of the surrounding points outside the sample point. Meanwhile, spatial interpolation shows the process of value estimation in the surrounding areas outside the sample point to determine the distribution of values in the area being mapped.</ns0:p><ns0:p>Inverse Weighted Distance (IDW) is used to interpolate prediction results. In IDW, the estimated value of a at location of x shows the average of the weighted closest observations. ( <ns0:ref type='formula'>9</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_7'>&#119886;(&#119909;) = &#8721; &#119899; &#119894; &#8462; &#119894; &#119895; &#119894; &#8721; &#119899; &#119894; &#8462; &#119894;</ns0:formula><ns0:p>where , &#946; &#8805; 0, corresponds to the Euclidean distance and &#946; determines the</ns0:p><ns0:formula xml:id='formula_8'>&#8462; &#119894; = |&#119909; -&#119909; &#119894; | -&#120573; |&#8230;|</ns0:formula><ns0:p>extent to which a point is preferred over other points. If the point x coincides with the observation location or sample point (x = x i ), the value of the sample point x is returned to avoid infinity weight.</ns0:p></ns0:div> <ns0:div><ns0:head n='7.'>Testing the accuracy of prediction results.</ns0:head><ns0:p>To test the accuracy of prediction results, the MSE, RMSE, and MAE are used. The analysis of the level of vulnerability will be continued when the result of the prediction testing is accurate.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67808:1:0:NEW 31 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>On the contrary, if the result of the analysis is not accurate, the analysis of the level of vulnerability will not be continued. It will be analyzed again using machine learning. The calculations of MSE, RMSE, and MAE, which is close to zero, provide good accuracies.</ns0:p><ns0:p>Therefore, the proposed algorithm for detecting tsunami risk areas can be described in Algorithm 1.</ns0:p><ns0:p>Algorithm 1: Algorithm for detecting tsunami risk areas Input: Landsat 8 image data ( ) and the number of nearest neighbors ( ).</ns0:p></ns0:div> <ns0:div><ns0:head>&#119883; &#119896;</ns0:head><ns0:p>Step 1: Extract the data into 11 bands.</ns0:p><ns0:p>Step 2: Atmospheric correction, using DOS1 (Eq. 1).</ns0:p><ns0:p>Step 3: Pre-processing computation of vegetation indices, using NDVI (Eq. 2), NDWI (Eq. 3), MNDWI (Eq. 4), SAVI (Eq. 5), and NDBI (Eq. 6).</ns0:p><ns0:p>Step 4: Compute the correlation test, using Pearson correlation (Eq. 7).</ns0:p><ns0:p>Step 5: Forecast and classify the data, using k-NN algorithm.</ns0:p><ns0:p>Step 7: Spatial interpolation, using IDW (Eq. 9).</ns0:p><ns0:p>Step 8: Map the tsunami risk areas.</ns0:p><ns0:p>Output: Tsunami risk local map.</ns0:p><ns0:p>Algorithm 1 is the algorithm to detect and determine area affected by tsunami, based on the vegetation indices, using k-NN algorithm. Each vegetation index produces one tsunami risk local map, so that Algorithm 1 generates five tsunami risk local maps, i.e., NDVI map, NDWI map, MNDWI map, SAVI map, and NDBI map. Therefore, in this paper, an algorithm to obtain a model that combines all vegetation indices is also proposed. The suitable vegetation indices are selected based on the local map results from Algorithm 1. This model calculates the tsunami risk value for each region and their level risks. Multiple linear regression is used to obtain the model. The algorithm for determining the tsunami risk value and level risk is described in Algorithm 2. Algorithm 2: Algorithm for determining tsunami risk values Input: independent variable, , (i.e., level risk value) and dependent variable, , (i.e., vegetation &#119910; &#119909; indices and other parameters)</ns0:p><ns0:p>Step 1: Select the dependent variable of vegetation indices based on the results of Algorithm 1.</ns0:p><ns0:p>Step 2: Apply a multiple linear regression, for the training</ns0:p><ns0:formula xml:id='formula_9'>&#119910; = &#119886; 0 + &#119886; 1 &#119909; 1 + &#119886; 2 &#119909; 2 + &#8230; + &#119886; &#119899; &#119909; &#119899; data.</ns0:formula><ns0:p>Step 3: Calculate the risk value for the testing data.</ns0:p><ns0:p>Step 4: Categorize the risk value into level risk. Output: risk value and level risk</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Discussions</ns0:head></ns0:div> <ns0:div><ns0:head>Data Generation</ns0:head><ns0:p>The data used were Landsat 8 OLI image data, which contained 11 bands as listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. All 11 bands were processed using QGIS according to the required indices; NDVI, NDBI, MNDWI, NDWI and SAVI. Each index was calculated using Eq. ( <ns0:ref type='formula' target='#formula_1'>2</ns0:ref>)-( <ns0:ref type='formula'>6</ns0:ref>) to see monthly index results. The data taken for processing were the average result of each index. 
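Since the study only reports computing the indices in QGIS, the following is a minimal sketch, assuming NumPy arrays of band reflectance, of how the index formulas of Eqs. (2)-(6) could be evaluated directly; the NDBI body is not reproduced above, so the standard (SWIR1 - NIR)/(SWIR1 + NIR) definition of Zha et al. (2003) is assumed:

import numpy as np

def safe_ratio(num, den):
    # Avoid division-by-zero warnings on masked or empty pixels.
    out = np.zeros_like(num, dtype=float)
    np.divide(num, den, out=out, where=den != 0)
    return out

def vegetation_indices(band3, band4, band5, band6, L=0.5):
    """Eqs. (2)-(6) for Landsat 8 OLI reflectance arrays.
    band3 = Green, band4 = Red, band5 = NIR, band6 = SWIR1."""
    ndvi  = safe_ratio(band5 - band4, band5 + band4)                 # Eq. (2)
    ndwi  = safe_ratio(band3 - band5, band3 + band5)                 # Eq. (3)
    mndwi = safe_ratio(band3 - band6, band3 + band6)                 # Eq. (4)
    savi  = safe_ratio(band5 - band4, band5 + band4 + L) * (1 + L)   # Eq. (5)
    ndbi  = safe_ratio(band6 - band5, band6 + band5)                 # Eq. (6), standard Zha et al. form (assumed)
    return {"NDVI": ndvi, "NDWI": ndwi, "MNDWI": mndwi, "SAVI": savi, "NDBI": ndbi}

# Toy usage with random reflectance values in [0, 1].
rng = np.random.default_rng(0)
b3, b4, b5, b6 = (rng.random((4, 4)) for _ in range(4))
idx = vegetation_indices(b3, b4, b5, b6)
print({k: float(v.mean()) for k, v in idx.items()})

The monthly averages described above would then be obtained by averaging each returned array over the scenes of a month.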
The data used for this study were yearly data for two different months, August and November, from 47 villages in Kulon Progo Regency. For processing, the data for August and November from 2014 to 2020 were combined. The data were arranged sequentially every year starting from August to November, followed by the index values and classification of each index. In the binary field, each index was adjusted to the potential risk posed and divided into two groups, TRUE (affected by tsunami) and FALSE (unaffected by tsunami). The total binary result of each index was calculated as a sum that indicates TRUE. Since there were five vegetation indices, then each region had a minimum of zero TRUE values and a maximum of five TRUE values. A region was classified into three risk levels, i.e., low risk, medium risk, and high risk. Thus, in this research, it is determined and simulated that if the number of TRUE &#8804; 1, a region has a low potential/impact (low risk), if the number of TRUE = 2, a region has a moderate potential/impact (medium risk), and if the number of TRUE &#8805; 3, a region has a high potential/impact of tsunami (high risk). Figure <ns0:ref type='figure'>3</ns0:ref> shows the calculation results of each index and their resulting potentials.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 3. Calculation Results of Each Index</ns0:head></ns0:div> <ns0:div><ns0:head>Correlation between Variables</ns0:head><ns0:p>Based on the results of the Pearson correlation analysis, it showed that the correlation between the vegetation indices of the three tsunami impact risk classes had a positive or negative correlation value. As presented in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, it can be seen that the lowest correlation with a negative value was the correlation between the MNDWI and NDVI indices, with a correlation value of -0.761. The scatter diagram showed the increase in the MNDWI index value which was not in line with the increase in the NDVI value for the potential tsunami risk. The highest correlation that had a positive value was the correlation between the SAVI and NDVI indices of 0.767. The distribution pattern of the data contained in the scatter diagram was closer to a straight line and the risk of tsunami impact increased in line with the increase in the value of the vegetation indices. The positive correlation between these two indices could be seen as the strongest relationship in the medium risk of tsunami impact. A positive correlation was indicated by the distribution pattern of the data pair points that moved closer to a straight line, which showed a close relationship between the risk of the tsunami impact and the index. This relationship might also be referred to as a unidirectional relationship. The visualization of the data displayed was not only in the form of closeness value between variables but also in the form of a bar plot presenting the distribution of data from 2014 to 2020. Data illustrated that moderate tsunami risks are more dominant than low tsunami risks. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Prediction and classification were done using the k-NN algorithm, where the number of k was determined by classification using k-NN with training data 70% and testing data 30% of the total data. To compare the performance of k, the calculation of the accuracy and value of the Kappa coefficient was carried out starting from up to . 
Based on the obtained results in &#119896; = 1 &#119896; = 15</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>, the highest accuracy and value of Kappa was in k-NN with k = 1 and 2. Therefore, in this study, predictions were made using . &#119896; = 1 </ns0:p></ns0:div> <ns0:div><ns0:head>Pr(&#119890;)</ns0:head><ns0:p>After the predictions are done, the MSE, RMSE, and MAE are calculated to see the prediction performance. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the test of error and the resulting average from data using the prediction results of each index with k-NN, ANN, SVM, and CART algorithms.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Error Value of Each Index with Prediction Results using k-NN, ANN, SVM, and CART algorithms.</ns0:p><ns0:p>The values of MSE, RMSE, and MAE for k-NN, ANN, SVM, and Cart of each index can be shown in Figures <ns0:ref type='figure' target='#fig_6'>6-8</ns0:ref>, respectively. As can be seen from these figures, the k-NN algorithm obtains the best performance for all indices, compared with ANN, SVM, and Cart algorithms. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Furthermore, the averages of each index using all algorithms are compared to the actual data. The results can be seen in Table <ns0:ref type='table'>3</ns0:ref>. From this table, k-NN obtains the closest average value for each index.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. The Average of Each Index using k-NN, ANN, SVM, and Cart</ns0:p><ns0:p>The Map of Tsunami Risk Areas Figure <ns0:ref type='figure' target='#fig_7'>9</ns0:ref> is the map showing the distribution of areas that have the potential for a tsunami using the IDW interpolation results. The higher the level of vulnerability of the area affected by the tsunami, the redder the area will be and the lower the level of vulnerability, the bluer the color will be. An area will be classified as a high vulnerability area when it has a high wettability value and a few vegetation. Figure <ns0:ref type='figure' target='#fig_7'>9</ns0:ref> shows the distribution map of tsunami risk areas based on NDWI index 2021. When an area is dominated by water or has a high level of wetness, it will have a high level of tsunami risk so that the area has a reddish color. On the contrary, an area with a low level of tsunami risk has a bluish color. The map of the distribution of potential areas in Figure <ns0:ref type='figure' target='#fig_8'>10</ns0:ref> provides tsunami risk areas based on the NDVI index. The higher the NDVI value, the higher possibility of an area having high vegetation. An area that has a low risk of tsunami, its color will be bluish because it has a lot of vegetation. For an area with a high risk of tsunami, its color will be close to red because the area has a low NDVI value and thus it shows that the area has only a few vegetation. The SAVI index presents areas that are adjusted according to their type of soil or the shape of the areas. Higher SAVI value indicates that the area is in the form of forested vegetation, while lower value means the area has many bodies of water such as rivers. 
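The paper does not name the software used for the k-NN experiments, so the sketch below assumes scikit-learn, and the feature matrix, risk labels, and index values are placeholders. It illustrates selecting k by accuracy and Cohen's kappa on a 70/30 split (cf. Figure 5) and computing the MSE, RMSE, and MAE of Table 2 for one forecast index:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             mean_squared_error, mean_absolute_error)

rng = np.random.default_rng(1)
X = rng.random((329, 5))              # hypothetical: 47 villages x 7 years, five index values
y_cls = rng.integers(0, 3, size=329)  # hypothetical risk level: 0 low, 1 medium, 2 high
y_idx = rng.random(329)               # hypothetical next-period value of one index

X_tr, X_te, c_tr, c_te, r_tr, r_te = train_test_split(
    X, y_cls, y_idx, test_size=0.30, random_state=1)

# Choose k by accuracy and Cohen's kappa over k = 1..15.
best_k, best_score = 1, (-1.0, -1.0)
for k in range(1, 16):
    pred = KNeighborsClassifier(n_neighbors=k).fit(X_tr, c_tr).predict(X_te)
    score = (accuracy_score(c_te, pred), cohen_kappa_score(c_te, pred))
    if score > best_score:
        best_k, best_score = k, score

# Forecast one index with k-NN regression and report the Table 2 error measures.
pred = KNeighborsRegressor(n_neighbors=best_k).fit(X_tr, r_tr).predict(X_te)
mse = mean_squared_error(r_te, pred)
print(best_k, mse, np.sqrt(mse), mean_absolute_error(r_te, pred))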
Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> shows that the area that is colored redder, which means that the area has a low SAVI value and has a high level of tsunami vulnerability because it has many areas of water, asphalt, paving, etc., meanwhile the blue area in the figure is an area that has a low level of tsunami vulnerability. Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> Manuscript to be reviewed Computer Science study areas have undulating slopes, some are moderately sloping and some are hilly. The quantitative value of the slope map is used as an indicator to determine areas having a high risk of being prone to tsunami.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>12</ns0:ref>. The Slope Map Furthermore, since there are only three categories of risk level, they are high, medium, and low risk and no value that indicates the level of tsunami risk, then a new formula is also proposed and discussed to determine the tsunami risk value. In this formula, vegetation indices, slope, and distance are used to obtain the risk value, where distance is the shortest distance of a region with the shoreline. The value is on interval [0,1]. The higher the risk value, , the higher the area is &#119877;&#119881; at risk of a tsunami.</ns0:p><ns0:p>In this experiment, the data of vegetation indices, slope, and distance from 47 villages of 2014 until 2019 were used as the training data. NDBI and MNDWI were not used in the formula since it had no effect in determining the risk area, based on the experiment using k-NN. Since a region with low slope value, low NDVI value, or low distance had a high tsunami risk level, then they were applied as denominators in the model. Therefore, according to slope, NDWI, NDVI, and SAVI values of the training data as the independent variables and level risk values of the training data as the dependent variables, Algorithm 2 can be used to obtain the risk values of the testing data. With multiple linear regression model using Excel, the tsunami risk level, , was modeled &#119877;&#119881; as in Eq. ( <ns0:ref type='formula'>10</ns0:ref> could be explained by the independent variables (i.e., slope, NDWI, NDVI, SAVI, and distance).</ns0:p><ns0:p>To test the model, data from 2020 were used as the testing data. The MSE of the testing data was 0.047. Besides that, the risk level was categorized into low, medium, and high risk. Low risk was in interval [0, 0.4) of risk value, medium risk was in interval [0.4, 0.8) of risk value, and high risk was in interval [0.8, 1] of risk value. The model obtained an accuracy rate of 93.62%. Using the model from Eq. ( <ns0:ref type='formula'>10</ns0:ref>), the prediction of 2021 tsunami risk values for 47 villages is listed in Table <ns0:ref type='table'>4</ns0:ref>. Manuscript to be reviewed Computer Science resolutions. Experimental results and comparisons demonstrate the effectiveness of the proposed algorithm to detect the tsunami risk area. The result with k-NN algorithm gives MSEs of 0.0002 for MNDWI, 0.0002 for SAVI, 0.0006 for NDVI, 0.0003 for NDWI, and 0.0003 for NDBI. Moreover, based on the tsunami risk area of each vegetation index, a prediction model also has been proposed using NDVI, NDWI, SAVI, slope, and distance to obtain the tsunami risk value of an area, by using multiple linear regression with of 0.921. The accuracy rate from &#119877; 2 categorizing the risk value into level risk is about 93.62%. 
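Because the fitted coefficients of Eq. (10) are not reproduced in this excerpt, the following is a minimal sketch that fits a multiple linear regression on placeholder data (the authors used Excel for the fit) and applies the stated risk-level cut-offs; it also ignores the detail that slope, NDVI, and distance enter Eq. (10) as denominators:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Hypothetical 2014-2019 training rows: slope, NDWI, NDVI, SAVI, distance to shoreline.
X_train = rng.random((282, 5))
y_train = rng.random(282)            # level-risk value used as the dependent variable

model = LinearRegression().fit(X_train, y_train)     # stands in for Eq. (10)
print(model.intercept_, model.coef_, model.score(X_train, y_train))  # last value is R^2

def risk_level(rv):
    # Categorisation stated in the paper: [0, 0.4) low, [0.4, 0.8) medium, [0.8, 1] high.
    if rv < 0.4:
        return "low"
    return "medium" if rv < 0.8 else "high"

X_2020 = rng.random((47, 5))                          # testing rows for the 47 villages
levels = [risk_level(rv) for rv in np.clip(model.predict(X_2020), 0, 1)]
print(levels[:5])

With the real data, the same steps would reproduce the reported R^2 of 0.921 and the 93.62% accuracy of the level categorisation.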
For future work, besides vegetation indices, slope, and distance, other parameters can also be used to find the tsunami risk area with a more accurate risk value, such as elevation, sea level, etc. Parameters that are not related to the determination of a potentially tsunami-prone area may also be found by certain methods. Another future work is to compare the Euclidean distance used in k-NN algorithm with other numerical similarity metric, such as cosine distance, Jaccard distance, etc. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67808:1:0:NEW 31 Jan 2022)Manuscript to be reviewed Computer Science the prediction and classification performance. The flowchart for this research method is depicted in Figure2and can be explained as follows.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Flowchart of The Research Method</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:11:67808:1:0:NEW 31 Jan 2022) Manuscript to be reviewed Computer Science displayed on the surface of the earth. It involves satellite movements, earth rotation, and terrain effects. Meanwhile, radiometric correction has a different goal. The stage of data pre-processing includes collecting Sentinel 2 satellite image data obtained from www.earthexplorer.usgs.gov. They are corrected not only geometrically, but also radiometrically and atmospherically. After the image is corrected, the clean band is calculated, according to the formula for each index. Sentinel 2 image data extraction uses the NDVI, NDBI, NDWI, MSAVI, and MNDWI formulas. The extraction results are numerical values, which can be used for the classification and prediction using k-NN. NDVI, MNDWI, MSAVI, NDWI, and NDBI formulas are explained as follows. &#61623; Normalized Difference Vegetation Index (NDVI)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Test Correlation test calculates the correlation between variables and analyzes the level of closeness of the relationship between the independent variable ( ) and the dependent variable ( ). Pearson &#119883; &#119884; PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67808:1:0:NEW 31 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Correlation between Variables</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Accuracy and Value of Cohen's Kappa</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. MSE values for k-NN, ANN, SVM, and Cart</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The Distribution Map of Tsunami Risk Areas Based for NDWI Index 2021</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. 
The Distribution Map of Tsunami Risk Areas Based for NDVI 2021 Index</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>also shows the distribution map of tsunami risk areas based on SAVI index 2021.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The Distribution Map of Tsunami Risk Areas for SAVI Index 2021</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Prediction of 2021 Tsunami Risk ValuesConclusionsTo sum up, the paper proposed a new algorithm to predict tsunami risk areas based on spatial prediction of vegetation indices. Atmospheric correction using the DOS1 algorithm is used for image correction since the field data parameters are not known. k-NN is used to classify and predict tsunami-affected areas from vegetation indices data that has spatial and temporal PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67808:1:0:NEW 31 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,70.87,294.46,672.95' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,322.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,315.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,313.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,313.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,313.50' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,348.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,359.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table1shows the Landsat 8 operational land image (OLI) and thermal infrared sensor (TIRS). Landsat 8 Operational Land Image (OLI) and Thermal Infrared Sensor (TIRS) (U.S.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Geological Survey, 2021)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>used to classify image data and form a thematic map of Land Used Land Cover of the study area. Contour analysis of the study area is the goal behind the classification of the Shuttle Radar Topographic Mission (SRTM) Digital Elevation Model (DEM) image data. k-NN is the classification method for a set of data based on pre-existing learning data. 
The classification process is done by finding the closest point from the old point of 'a' to a new point of 'a' (nearest neighbor). The closest point search technique is performed by using the Euclidean distance formula, as shown in Eq. 8. To use the k-NN algorithm, it is necessary to determine the value of , where is the number of nearest neighbors used to</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#119903; &#119909;&#119910;</ns0:cell><ns0:cell>&#119883;</ns0:cell><ns0:cell>&#119883;</ns0:cell><ns0:cell>&#119884;</ns0:cell><ns0:cell>&#119884;</ns0:cell></ns0:row><ns0:row><ns0:cell>5. Forecast and Classification</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Supervised classification is &#119896;</ns0:cell><ns0:cell>&#119896;</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>classify the new data.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>MSE, RMSE, and MAE Values of Each Index with Prediction Results using k-NN, ANN, SVM, and Cart Algorithms</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>MSE</ns0:cell><ns0:cell>RMSE</ns0:cell><ns0:cell>MAE</ns0:cell></ns0:row><ns0:row><ns0:cell>Index</ns0:cell><ns0:cell>Algorithm</ns0:cell><ns0:cell>k-</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>NN ANN SVM Cart k-NN ANN SVM Cart k-NN ANN SVM Cart MNDWI</ns0:head><ns0:label /><ns0:figDesc>0.0015 0.0015 0.0013 0.0015 0.0147 0.0387 0.0363 0.0388 0.0070 0.0308 0.0256 0.0309 SAVI 0.0002 0.0006 0.0007 0.0007 0.0130 0.0244 0.0266 0.0266 0.0063 0.0199 0.0191 0.0212 NDVI 0.0006 0.0015 0.0015 0.0018 0.0244 0.0385 0.0384 0.0423 0.0106 0.0302 0.0260 0.0335 NDWI 0.0003 0.0007 0.0006 0.0008 0.0180 0.0262 0.0253 0.0286 0.0086 0.0191 0.0166 0.0218 NDBI 0.0003 0.0008 0.0007 0.0008 0.0184 0.0281 0.0258 0.0286 0.0086 0.0214 0.0174 0.0218</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
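As a concrete reading of the nearest-neighbour search described here, the following is a minimal sketch of k-NN with the Euclidean distance of Eq. (8) and a majority vote; it is an illustration, not the authors' implementation, and including the square root does not change the neighbour ranking:

import numpy as np
from collections import Counter

def euclidean(x, y):
    # Eq. (8): distance between two index vectors.
    return float(np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def knn_predict(train_X, train_y, query, k=1):
    # Rank training points by distance to the query and vote among the k nearest.
    order = sorted(range(len(train_X)), key=lambda i: euclidean(train_X[i], query))
    votes = [train_y[i] for i in order[:k]]
    return Counter(votes).most_common(1)[0][0]

# Toy usage: five-dimensional index vectors labelled with risk levels.
train_X = [[0.1, 0.2, 0.7, 0.6, 0.1], [0.8, 0.7, 0.1, 0.2, 0.9], [0.2, 0.3, 0.6, 0.5, 0.2]]
train_y = ["low", "high", "low"]
print(knn_predict(train_X, train_y, [0.15, 0.25, 0.65, 0.55, 0.15], k=1))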
" January 31, 2022 Dear Editors, We thank the editor and reviewers for their generous comments on the manuscript and we have edited the manuscript to address their concerns. We have improved the English language, corrected the figures and tables, and also added more explanations about the algorithm. We believe that the manuscript is now suitable for publication in PeerJ. Kristoko Dwi Hartomo On behalf of all authors. Comments from Editor The paper is interesting and addresses a critical problem. It is well written and describes most of the aspects. However, considering the reviewers' comments, I recommend a minor review of the paper and invite you to submit a revised manuscript after addressing all reviewers' comments. Also, thoroughly revise the manuscript to correct all spelling/grammatical/typo/style errors. Furthermore, consider revising the title to 'Vegetation indeces' spatial prediction based novel algorithm for determining tsunami risk areas and risk values'. Response: Thank you for the comment. We have checked all sentences of the revised manuscript carefully and changed the title to 'Vegetation indices' spatial prediction based novel algorithm for determining tsunami risk areas and risk values'. Reviewer 1 (Josephine Benjamin) Comments for the Author Basic reporting 1. There is a need to check the grammar, spelling, sentence structures, etc. of the paper. In the title “A New Algorithm for ……. on Spatial Prediction of 3 Vegetation Index”. The word “Index” should be in plural form since more than one index is used in the paper. In other parts of the paper, I found several errors on the spelling of words, verb agreement, correct usage of articles, etc. There is a need to go over the entire paper to check on these errors and reconstruct or rephrase some sentences which are redundant (e.g., Lines 239-242 needs to be reconstructed to avoid redundancy). I would suggest that the authors use an APP to correct these errors. Response: Thank you for the comment. We have checked all sentences of the revised manuscript carefully. 2. A reorganization of the Abstract is recommended. The results of the algorithm should be written in the last lines and the presentation of the algorithm and the indexes used should be more profound to emphasize the objective of the paper. Response: Thank you for the comment. We have reorganized the abstract in the revised manuscript, as follows. “This paper aims to propose a new algorithm to detect tsunami risk areas based on spatial modeling of vegetation indices and a prediction model to calculate the tsunami risk value. It employs atmospheric correction using DOS1 algorithm combined with k-NN algorithm to classify and predict tsunami-affected areas from vegetation indices data that have spatial and temporal resolutions. Meanwhile, the model uses the vegetation indices (i.e., NDWI, NDVI, SAVI), slope, and distance. The result of the experiment compared to other classification algorithms demonstrates good results for the proposed model. It has the smallest MSEs of 0.0002 for MNDWI, 0.0002 for SAVI, 0.0006 for NDVI, 0.0003 for NDWI, and 0.0003 for NDBI. The experiment also shows that the accuracy rate for the prediction model is about 93.62%.” 3. The tsunami risk levels (Levels 1, 2, and 3) are based on indicators resulting from the algorithm (Lines 358-360). First, the basis of these indicators was not explicitly explained. Second, how would you classify the risk if TRUE is within the range (1,2) or (2,3)? 
Is it always the case that the algorithm will result to TRUE=2 in your simulations? Response: Thank you for the comment. We have added the explanation of the risk levels at line 361-366 in the revised manuscript, as follows. “Since there were five vegetation indices, then each region had a minimum of zero TRUE values and a maximum of five TRUE values. A region was classified into three risk levels, i.e., low risk, medium risk, and high risk. Thus, in this research, it is determined and simulated that if the number of TRUE ≤ 1, a region has a low potential/impact (low risk), if the number of TRUE = 2, a region has a moderate potential/impact (medium risk), and if the number of TRUE ≥ 3, a region has a high potential/impact of tsunami (high risk).” 4. In your regression model (Equation 10), the authors need to explain how they obtained the constant coefficients (e.g., 0.95, 0.007, etc.) Response: Thank you for the comment. The constant coefficients of Eq. 10 were obtained from multiple linear regression model using Excel with slope, NDWI, NDVI, and SAVI values of the training data as the independent variables and level risk values of the training data as the dependent variables. We have explained how to obtain the constant coefficients in line 488-492 in the revised manuscript, as follows. “Therefore, according to slope, NDWI, NDVI, and SAVI values of the training data as the independent variables and level risk values of the training data as the dependent variables, Algorithm 2 can be used to obtain the risk values of the testing data. With multiple linear regression model using Excel, the tsunami risk level, , was modeled as in Eq. (10) as below, ...” 5. Comments on Figures Are the location maps taken by the authors themselves? If not, then there is a need to indicate the source below the figure. Figure 3, although it’s a captured image, its’ source should be written below. Also, the figure is more likely to be identified as a table rather than a figure. The x-axis and y-axis must be given labels on Figures 5,6,7,8 and the titles would be more clear and concise. In Figures 9,10,11, the legend uses a color coding to show the level of risk is not properly defined. Since you are identifying the risk levels with color codes, some colors are not properly identified. Also, it would be appropriate to give the range of values for the risk levels so that readers of the paper can properly distinguish the different risk levels in the graph. Response: Thank you for the comment. • For Figure 1, we have added the explanation in line 63-64 in the revised manuscript, as follows. “Figure 1 is obtained from the mapping of Shuttle Radar Topography Mission (SRTM) image from https://earthexplorer.usgs.gov/.” • Figure 3 is obtained from the calculations of each vegetation index. Satellite image (Landsat 8 OLI image) is downloaded from http://earthexplorer.usgs.gov/, which contains 11 bands. Those 11 bands will be processed and calculated using Eq. (2)-(6) to obtain the values of vegetation indices. We have added the explanation in line 351-353 in the revised manuscript, as follows. “The data used were Landsat 8 OLI image data, which contained 11 bands as listed in Table 1. All 11 bands were processed using QGIS according to the required indices; NDVI, NDBI, MNDWI, NDWI and SAVI. Each index was calculated using Eq. (2)-(6) to see monthly index results.” • Figure 5, 6, 7, 8 had been corrected. • Figure 9, 10, 11 had been corrected. 6. 
Comments on Tables Since the purpose of the authors is to make a comparative analysis on the amount of error generated using different algorithms and that of their proposed algorithm, I would suggest either of the following improvements: a. Tables 2,3,4,5 can be reconstructed as one table showing the amount of error (MSE. RMSE, MAE) of each algorithm based on each index, or b. Three separate tables for each classified error (i.e., MSE. RMSE, MAE) comparing the algorithms and the amount of error based on the 3 indexes. Since each table enumerates the amount of error of each index with prediction results using the varied machine learning algorithms in your study, it would be better to fuse all 4 tables as one where all the algorithms are listed together with the indexes and the classification of errors. In this way, you can make a clear comparison of the amount of error. Alternatively, you may create three tables, each for the identified error MSE, RMSE, MAE separately, and compare the results of the algorithms. Response: Thank you for the comment. We have reorganized Tables 2,3,4,5 into one table (Table 2) in the revised manuscript, as follows. Table 2. MSE, RMSE, and MAE Values of Each Index with Prediction Results using k-NN, ANN, SVM, and Cart Algorithms MSE RMSE MAE Algorithm Index k-NN ANN SVM Cart k-NN ANN SVM Cart k-NN ANN SVM Cart MNDWI 0.0015 0.0015 0.0013 0.0015 0.0147 0.0387 0.0363 0.0388 0.0070 0.0308 0.0256 0.0309 SAVI 0.0002 0.0006 0.0007 0.0007 0.0130 0.0244 0.0266 0.0266 0.0063 0.0199 0.0191 0.0212 NDVI 0.0006 0.0015 0.0015 0.0018 0.0244 0.0385 0.0384 0.0423 0.0106 0.0302 0.0260 0.0335 NDWI 0.0003 0.0007 0.0006 0.0008 0.0180 0.0262 0.0253 0.0286 0.0086 0.0191 0.0166 0.0218 NDBI 0.0003 0.0008 0.0007 0.0008 0.0184 0.0281 0.0258 0.0286 0.0086 0.0214 0.0174 0.0218 Experimental design The experimental design is well explained and detailed Response: Thank you for the comment. Validity of the findings The findings are valid based on the provided data. The objectives of the study were achieved. Response: Thank you for the comment. Reviewer 2 (Anonymous) Comments for the Author Basic reporting This paper studies a new algorithm to detect tsunami risk areas based on spatial modelling of vegetation index. I think three questions must be answered in this paper. 1. I suggest to add some more explanation for the flowchart/algorithm to make it more clear and understandable. Response: Thank you for the comment. We have added the explanation in line 331-338 in the revised manuscript, as follows. “Algorithm 1 is the algorithm to detect and determine area affected by tsunami, based on the vegetation indices, using k-NN algorithm. Each vegetation index produces one tsunami risk local map, so that Algorithm 1 generates five tsunami risk local maps, i.e., NDVI map, NDWI map, MNDWI map, SAVI map, and NDBI map. Therefore, in this paper, an algorithm to obtain a model that combines all vegetation indices is also proposed. The suitable vegetation indices are selected based on the local map results from Algorithm 1. This model calculates the tsunami risk value for each region and their level risks. Multiple linear regression is used to obtain the model. The algorithm for determining the tsunami risk value and level risk is described in Algorithm 2.” 2. In Eq. (8), the closest point search technique is performed by using the Euclidean distance formula. Why did you prefer Euclidean distance instead of Hamming distance or Hausdorff distance? Response: Thank you for the comment. 
Hamming distance is suitable for categorical data, while Hausdorff distance is suitable for image/shape data. In this paper, although we used image (map) data, the data is converted into numerical data. Therefore, the Euclidean distance is the most suitable distance. 3. Is it possible that the closest point search technique is performed by using the similarity between data points instead of Euclidean distance? Response: Thank you for the comment. Yes, it is possible to use other similarity metrics, as long as the metric is used for the numerical data. It will be good research for future work to compare the Euclidean distance with other similarity metrics. Experimental design No comments Validity of the findings No comments "
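To make the metric discussion in this exchange concrete, a small sketch using scipy.spatial.distance (an assumption; the paper does not use SciPy) contrasts the Euclidean and cosine distances on hypothetical numerical index vectors with the Hamming distance on the binary TRUE/FALSE flags:

from scipy.spatial import distance

a = [0.62, 0.18, -0.05, 0.41, 0.07]   # hypothetical index vector of one region
b = [0.55, 0.22, -0.01, 0.38, 0.10]   # hypothetical index vector of another region

# Metrics for numerical data, usable directly on the index values.
print(distance.euclidean(a, b))        # metric of Eq. (8)
print(distance.cosine(a, b))           # candidate metric for the future comparison

# Hamming distance suits categorical/binary data, e.g. the TRUE/FALSE flags per index.
flags_a = [1, 0, 1, 1, 0]
flags_b = [1, 1, 1, 0, 0]
print(distance.hamming(flags_a, flags_b))   # fraction of disagreeing positions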
Here is a paper. Please give your review comments after reading it.
392
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The increasing demand for information and rapid growth of big data has dramatically increased textual data. For obtaining useful text information, the classification of texts is considered an imperative task. Accordingly, this paper develops a hybrid optimization algorithm for classifying the text. Here, the pre-pressing is done by the stemming process and stop word removal. In addition, the extraction of imperative features is performed, and the selection of optimal features is performed using Tanimoto similarity, which estimates the similarity between the features and selects the relevant features with higher feature selection accuracy. After that, a deep residual network trained by the Adam algorithm is utilized for dynamic text classification. In addition, the dynamic learning is performed by the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines Invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). These processes are done under the MapReduce framework. The analysis reveals that the proposed RIWO-based deep residual network outperformed other techniques with the highest True Positive Rate (TPR) of 85%, True Negative Rate (TNR) of 94%, and accuracy of 88.7%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>The massive demand for big data led to evaluating the source and implication of data. The fundamental opinion of analysis relies on designing PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science a novel framework for studying the data. The similarity measure is one of the mathematical models utilized for classifying and clustering the data. Here, the fundamental assessment of common similarity measures is provided. Here, the similarity metrics, like Jaccard <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>, Cosine <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>, Euclidean distance <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, Extended Jaccard <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>, are utilized for evaluating distance or angle amongst the vectors. Here, the similarity measures are categorized as feature content and topology.</ns0:p><ns0:p>In topology, the features are organized in a hierarchical model, and the appropriate length of path amongst the features must be evaluated. The measures of features are devised based on evidence wherein the features having elevated frequency are employed to be explicit with elevated information , wherein features with less frequency are adapted with less information. Pair-Wise and ITSimmetrics fit into the class of feature content metrics. Here, the information content measure provides elevated priority to the highest features with a small difference between the two data, leading to improved outcomes. Here, the cosine and Euclidean belong to a class of topological metrics. It is susceptible to loss of information as two similar data offset by the existence of solitary feature having huge weight <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. 
The methods, such as clustering and classification, are utilized in text miningbased applications that help transform massive data into small subsets to increase computational effectiveness <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>.Here introduce the paper, and put a nomenclature, if necessary, in a box with the same font size as the rest of the paper. The paragraphs continue from here and are only separated by headings, subheadings, images, and formulae. Numbers, bold, and 9.5 pt arrange the section headings. Here are further instructions for the authors.</ns0:p><ns0:p>The text data consist of noisy and irrelevant features that made the learning techniques fail to improve accuracy. For removing redundant data, various data mining methods are adapted. Here, feature extraction and selection are two methods for classifying the data. The selection of components is utilized for eliminating the superfluous text features for effectively performing classification and clustering. The previous techniques concentrated on transforming huge data to small data considering classical distance measures. The reduction of dimensionality minimizes evaluation time and maximizes the efficiency of classification. The recovery of data and text are utilized in detecting synonyms and meanings of data. Several techniques have been devised for the classification and clustering process. The clustering is carried out using unsupervised techniques with different class label data <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. The goal of classifying text is to categorize data into different parts. Here, the goal is to allocate pertinent labels based on the content <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p><ns0:p>Categorizing texts is considered a crucial part of processing the natural language. It is extensively employed in applications like automatic medical text classification <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref> traffic monitoring equipment <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>. For instance, most new services require repeatedly arranging huge articles in a single day <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. The advanced services of mails offer the function to discover either junk mail or mail in an automated manner <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. Other applications involve analysis of sentiment <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, modeling topic <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>, text clustering <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>, translation of language <ns0:ref type='bibr' target='#b18'>[17]</ns0:ref> and intent detection <ns0:ref type='bibr' target='#b19'>[18]</ns0:ref> <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. The classification of technology assists people filters useful data and poses more implications in real life. The design of text categorization and machine learning <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref> <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref> has shifted manually to machine <ns0:ref type='bibr' target='#b20'>[19]</ns0:ref>, <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>, <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref>, <ns0:ref type='bibr' target='#b23'>[22]</ns0:ref>. There exist several textualization classification techniques <ns0:ref type='bibr' target='#b24'>[23]</ns0:ref>. The goal of these techniques is to categorize textual data. 
The categorization outcomes can fulfill an individual's requirements for classifying text and are suitable for attaining significant data rapidly. MapReduce is utilized for handling huge data with unstructured data <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. The aim is to devise an optimization-driven deep learning technique for classifying the texts using the MapReduce framework. Initially, the text data undergoes pre-processing for removing unnecessary words. Here, the preprocessing is performed using the stop word removal and stemming process. After that, the features, such as SentiWordNet features, thematic features, and contextual features, are extracted. </ns0:p></ns0:div> <ns0:div><ns0:head>&#61623;</ns0:head><ns0:p>The fuzzy theory is employed to handle dynamic data by performing weight bounding.</ns0:p><ns0:p>The rest of the sections are given as follows: Section 2 presents the classical text classification techniques survey. Section 3 described the developed text classification model. Section 4 discusses the results of the developed model for classical techniques, and section 5 presents the conclusion.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Literature review</ns0:head><ns0:p>The eight classical techniques based on text classification using big data and its issues are described below. Nihar M. Ranjan, Rajesh S. Prasad <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> developed an LFNN-based incremental learning technique for classifying text data based on context-semantic features. The methods employed a dynamic dataset for classification to learn the model dynamically. Here, the incremental learning procedure employed Back Propagation Lion (BPLion) Neural Network, wherein fuzzy bounding and Lion Algorithm (LA) were employed for selecting the weights. However, the technique failed to classify the sentence precisely. Vinay Kumar Kotte et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> devised a similarity function for clustering the feature pattern. The technique attained dimensionality reduction with improved accuracy. However, the technique failed to utilize membership functions for obtaining clusters. Jiaying Wang et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> devised a deep learning technique for classifying the text documents. In addition, a large-scale scope-based convolutional neural network (LSS-CNN) was utilized for categorizing the text. The method effectively computed scope-based data and parallel training for massive datasets. The technique attained improved scalability on big data but failed to attain the utmost accuracy. VenkatanareshbabuKuppiliet al. <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> developed Maxwell-Boltzmann Similarity Measure (MBSM) for classifying the text. Here, the MBSM was derived with feature values from the documents. The MBSM was devised by combining single label K-nearest neighbor's classification (SLKNN), multi-label KNN (MLKNN), and K-means clustering. However, the technique failed to include clustering techniques and query mining. Cheng Liu and Xiaofang Wang <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> devised a technique for classifying the text using English quality-related text data. Here, the goal was to extract, classify and examine the data from English texts considering cyclic neural networks. At last, the features with sophisticated English texts were generated. In addition, the technique combining attention was devised to improve label disorder and make the structure more reliable. 
However, the computation cost tends to be very high. Qi Li et al. <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> designed a method for classifying text by solving the misfitting issue by performing angle-pruning tasks from a database. The technique computed the efficiency of each convolutional filter using discriminative power produced at the pooling layer and shortened words obtained from the filter. However, the technique produced high computational complexity.</ns0:p><ns0:p>FatmaBensaid and Adel M. Alimi <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> devised Multi-Objective Automated Negotiation-based Online Feature Selection (MOANOFS) for classifying the texts. Here, the MOANOFS utilized Automated Negotiation and machine learning techniques to improve classification performance using ultra-high dimensional datasets. It helped the method to decide which features were the most pertinent. However, the method failed to select features from multi-classification domains. Jiang M et al. <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> devised a hybrid text classification model based on softmax regression for classifying text. Here, the deep belief network was utilized to classify text using learned feature space. However, the technique failed to filter extraneous characters for enhancing system performance.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Proposed RIWO-Based deep residual Network for text classification in big data</ns0:head><ns0:p>The mission of text classification is to categorize text data into different classes based on certain content. Text classification is considered an imperative role in processing the natural language. However, text classification is considered a challenging issue due to high dimensional and noisy texts, which is considered a complex process to devise an improved classifier for huge textual data. This research devises a novel hybrid optimization-driven deep learning technique for text classification using big data. Here, the goal is to devise a classifier that employs text data as input and allocates pertinent labels based on the content. At first, the input text data undergoes pre-processing to eliminate noise and artifacts in the text data. Here, the pre-processing is performed with stop word removal and stemming. Once the pre-processed data is obtained, the contextual features, thematic features, and SentiWordNet features are extracted. Once the features are extracted, the imperative features are chosen with Tanimoto similarity. The Tanimoto similarity method evaluates similarity amongst features and chooses the relevant features having high feature selection accuracy. Once the features are selected, a deep residual network <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> is used for dynamic text classification. The deep residual network is trained using the Adam algorithm <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>, <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>. In addition, dynamic learning is performed using the proposed RIWO algorithm along with the fuzzy theory. The proposed RIWO algorithm integrates IWO <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> and ROA <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the schematic view of text classification from the input text data in big data using the proposed RIWO method considering the Map reducer phase. 
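The excerpt states that Tanimoto similarity drives the feature selection but does not spell out the selection rule, so the sketch below assumes a simple rule: keep the feature columns whose Tanimoto similarity with the label vector exceeds a threshold. Both the threshold and the use of the label vector as the reference are assumptions:

import numpy as np

def tanimoto(u, v):
    dot = float(np.dot(u, v))
    return dot / (float(np.dot(u, u)) + float(np.dot(v, v)) - dot)

def select_features(F, y, threshold=0.3):
    """Keep columns of F whose Tanimoto similarity with y exceeds the threshold.
    The threshold value is illustrative, not taken from the paper."""
    keep = [j for j in range(F.shape[1]) if tanimoto(F[:, j], y) > threshold]
    return F[:, keep], keep

rng = np.random.default_rng(3)
F = rng.random((100, 12))                       # hypothetical SentiWordNet/contextual/thematic features
y = rng.integers(0, 2, size=100).astype(float)  # hypothetical class labels
R, kept = select_features(F, y)
print(kept)                                     # indices of the selected feature set R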
Assume input text data with various attributes and is expressed as</ns0:p><ns0:formula xml:id='formula_0'>&#61480; &#61481;&#61480; &#61481; E e D d B B e d &#61603; &#61603; &#61603; &#61603; &#61501; 1 1 }; { ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Where, refers to text data contained in the database with an e d B , attribute in data. Here, data points are employed using attributes for each data. The other step is to eliminate artifacts and noise present in the data.</ns0:p><ns0:p>The data in a database is split into a specific number, which is equivalent to mappers present in the MapReduce model. The partitioned data is given by, &#61563; &#61565; , ;1</ns0:p><ns0:formula xml:id='formula_1'>d e q B D q N &#61501; &#61603; &#61603;<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Where, symbolizes total mappers. Assume mappers in N MapReduce be expressed as</ns0:p><ns0:formula xml:id='formula_2'>&#61563; &#61565; N q M M M M M N q &#61603; &#61603; &#61501; 1 ; , , , , ,<ns0:label>2 1</ns0:label></ns0:formula><ns0:p>(3) Thus, input to mapper is formulated as,</ns0:p><ns0:formula xml:id='formula_3'>&#61563; &#61565; n l m r d D q l r q &#61603; &#61603; &#61603; &#61603; &#61501; 1 ; 1 ; ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Where, symbolizes split data given to mapper, and l r d , q D indicates data in mapper.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Pre-Processing</ns0:head><ns0:p>The partitioned data from the text dataset is fed to pre-processing phase to remove the superfluous words by removing stop words and stemming. Here, pre-processing is an important process to arrange various data smoothly for offering effective outcomes-the pre-processing assists in explaining processing for obtaining improved representations. The dataset consists of unnecessary phrases and words that influence the process. Thus, pre-processing is important for removing inconsistent words from the dataset. Initially, the text data are accumulated in the relational dataset, and all reviews are divided into sentences and bags of the sentence. Thus, the elimination of stop words is carried out to maximize the performance of the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science text classification model. Here, the stemming and stop word removal refine the data.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1.'>Stop word removal</ns0:head><ns0:p>It is a process to remove words with less representative value for data. Some of the instances of non-representative words are pronouns and articles. While evaluating data, few words exist that are not valuable to text content. Thus, removing such redundant words is imperative, and this procedure is termed stop word removal <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>. The continual happening of words, such as articles, conjunctions, and prepositions, are adapted as stop words. In addition, the removal of the stop word is the most imperative technique, which is utilized to remove such redundant words using vocabulary as the size of vector space does not offer any meaning. The stop words indicate word, which does not hold any data. It is a process to eliminate stop words from a huge set of reviews. The elimination of the stop word is used to save huge space and perform processing faster to attain an effective process.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2.'>Stemming</ns0:head><ns0:p>The stemming procedure is utilized to convert words to stem. In massive data, several words are utilized which convey a similar meaning. 
Thus, the critical method used to minimize words to root is called stemming. Stemming is a method of linguistic normalization wherein little words are reduced to general means. Moreover, stemming is the procedure to retrieve information for describing the mechanism for removing redundant words to their root form and word stem. For instance, if a word starts from connections, connection, connecting, connected, the word is reduced as connect <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_4'>&#61563; &#61565; k i l P Q M &#61603; &#61603; &#61501; &#61544; 1 ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where, symbolizes total words present in text data from the i Q database.</ns0:p><ns0:p>The pre-processed outcome generated from pre-processing is expressed as , which is subjected as an input to feature extraction phase l M</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Acquisition of features for producing highly pertinent features</ns0:head><ns0:p>It describes an imperative feature produced with input review, and the implication of feature extraction is to produce pertinent features that facilitate improved text classification. Moreover, data obstruction is reduced as text data is expressed as a minimized feature set. Thus, the pre-processed partitioned data is fed to feature extraction, wherein SentiWordNet features, contextual features, and thematic features are extracted.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1.'>Extraction of SentiWordNet features</ns0:head><ns0:p>The SentiWordNet features are utilized from pre-processed partitioned data by removing keywords from reviews. Here, the SentiWordNet <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref> is employed as a lexical resource to extract the SentiWordNet features. The SentiWordNet assigns each text of WordNet considering three numerical sentiment scores, like positive, negative, and neutral scores. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>SentiWordNet consists of different linguistic features: verbs, adverbs, adjectives, and n-gram features. SentiWordNet is a process to evaluate the score of a specific word using text data. Here, the SentiWordNet is employed to determine the polarity of offered review. Thus, the SentiWordNet is employed for discovering positivity and negativity. Hence, SentiWordNet is modeled as .</ns0:p><ns0:formula xml:id='formula_5'>1 F 3.2.</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.'>Extraction of contextual features</ns0:head><ns0:p>The context-based features <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> are generated from pre-processed partitioned data that describes relevant words by dividing them through non-relevant reviews for effective classification. It requires discovering key terms that attain context terms and semantic meaning, establishing a proper context. The key term is considered a preliminary indicator for relevant review wherein context terms act as a validator that evaluates if the key term is an indicator. Here, the training dataset contains keywords with pertinent words. The context-based features assist in selecting the relevant and non-relevant reviews.</ns0:p><ns0:p>Consider representing a training dataset that has relevant and non-relevant reviews. In this method, assume represents key term and ii) By employing sliding window size, the conditions are mined as context terms. 
Hence, the size of the window is employed as a context span.</ns0:p><ns0:formula xml:id='formula_6'>s x indicates context term.</ns0:formula><ns0:p>iii) The pertinent terms generated are employed as a text, modeled as , and non-relevant is denoted as . The set of pertinent text is</ns0:p><ns0:formula xml:id='formula_7'>r d nr d modeled as</ns0:formula><ns0:p>, and the non-relevant set is referred to as .</ns0:p><ns0:formula xml:id='formula_8'>r d R _ nr d R _ iv)</ns0:formula><ns0:p>Thereafter, the score is evaluated for each distinctive term expressed as,</ns0:p><ns0:formula xml:id='formula_9'>S C R C R C x L x L x C nr d r d | ) ( ) ( | ) ( _ _ &#61485; &#61501; (7)</ns0:formula><ns0:p>Where, symbolizes language model for an excerpt with </ns0:p><ns0:formula xml:id='formula_10'>) ( _ C R x L</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.2.3.'>Extraction of thematic features</ns0:head><ns0:p>Here, the pre-processed partitioned data is used for finding l r d , thematic features. Here, the count of the thematic word <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref> in a sentence is imperative as the frequently occurring words are most probably connected to the topic in data. Thematic words are words that grab key topics defined in a provided document. In thematic features, the top most 10 frequent words are employed as a thematic. Thus, the thematic feature is modeled</ns0:p><ns0:formula xml:id='formula_11'>3 F as, &#61669; &#61501; &#61501; 0 1 t t T T F (8)</ns0:formula><ns0:p>Here, expresses the count of thematic words in a sentence, and it is</ns0:p><ns0:formula xml:id='formula_12'>T expressed as . &#61563; &#61565; T T T , ,. , 2 1</ns0:formula><ns0:p>Thus, the feature vector considering the contextual, thematic, and SentiWordNet features are expressed as, } , , { The selected features are expressed as . R The produced feature selection output obtained from the mapper is given as input to the reducer . Then, the text classification is performed U on reducer using the selected features and briefly illustrated below. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.'>Classification of texts with Adam-based deep residual network</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Here, text classification is performed with an Adam-based deep residual network using selected features . The classification of text data R assists in standardizing the infrastructure and makes the search simpler and more pertinent. In addition, the classification enhances the user's experience and simplifies navigation. In addition, text classification helps solve huge business issues in real-time, like social media and e-mails, which can speed work and take less time for processing. The deep residual network is more effective in the case of the count of attributes and computation. This network is capable of building deep representations at each layer. In addition, it can manage advanced deep learning tasks. The architecture of the deep residual network and training with the Adams algorithm is described below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1.'>Architecture of Deep residual network</ns0:head><ns0:p>Here, a deep residual network <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> is employed to make a productive decision in which text classification is performed. The DRN comprises different layers: residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers. 
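The paper does not specify a framework, layer counts, or kernel sizes, so the following is a minimal PyTorch sketch of a 1-D residual block and a small network with the listed components (convolutional stem, residual blocks, average pooling, linear classifier); all sizes are illustrative:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 1-D convolutions with a skip (identity) connection.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)                 # skip connection

class DeepResidualNet(nn.Module):
    # Conv stem -> residual blocks -> average pooling -> linear classifier.
    def __init__(self, num_classes, channels=16, blocks=2):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):                          # x: (batch, 1, feature_length)
        h = self.pool(self.blocks(self.stem(x))).squeeze(-1)
        return self.fc(h)

net = DeepResidualNet(num_classes=2)
features = torch.randn(4, 1, 32)                   # four samples of 32 selected features each
print(net(features).shape)                          # torch.Size([4, 2])

Such a network could be trained with torch.optim.Adam, in line with the Adam-based training described next, although the paper itself does not name a framework.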
Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> presents the structural design of a deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification.</ns0:p><ns0:p>-Convolutional (Conv) layer:</ns0:p><ns0:p>The two-dimensional Conv layer reduces free attributes in training, and it offers reimbursement for allocating weight. The cover layer process the input image with the sequence of filter known as kernel using a local connection. The cover layer utilizes a mathematical process for sliding the filter with the input matrix and computes the dot product of the kernel. The evaluation process of Conv layer is represented as, Pooling layer: This layer is associated with the Conv layer and is especially utilized to reduce the feature map's spatial size. Hence, the average pooling is selected to function on each slice and depth of the feature map. Manuscript to be reviewed Computer Science computational efficiency and fewer memory needs. Moreover, the problems associated with the non-stationary objectives and the subsistence of noisy gradients are handled effectively. In addition, Adam contains the following benefits. Here, the magnitudes of updated parameters are invariant in contrast to rescaling of gradient, and step size is handled with a hyperparameter that works with sparse gradients. In addition, the Adams is effective in performing step size annealing. The classification of text employs a deep residual network for texts. The steps of Adam are given as:</ns0:p><ns0:formula xml:id='formula_13'>&#61480; &#61481; &#61480; &#61481; &#61480; &#61481; &#61669;&#61669; &#61485; &#61501; &#61485; &#61501; &#61483; &#61483; &#61623; &#61501; 1 0 1 0 , , 2 E a E s s v a u s a O X O d B (11) &#61480; &#61481; O G O d B in C Z Z &#61482; &#61501; &#61669; &#61485; &#61501; 1 0 1<ns0:label>(</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>1 &#61483; &#61485; &#61501; &#61548; a in out Z a a (13) 1 &#61483; &#61485; &#61501; &#61548; s in out Z s s</ns0:formula><ns0:p>Step 1: Initialization The foremost step represents bias corrections initialization wherein l q signifies corrected bias of first moment estimate and represents corrected l m bias of second moment estimate.</ns0:p><ns0:p>Step 2: Discovery of error The bias error is computed to choose the optimum weight for training the deep residual network. Here, the error is termed as an error function that leads to an optimal global solution. The function is termed as a minimization function and is expressed as, . 
The corrected bias of the first-order</ns0:p><ns0:formula xml:id='formula_15'>&#61480; &#61481; &#61669; &#61501; &#61485; &#61501; f l l O f Err</ns0:formula><ns0:formula xml:id='formula_16'>) 1 ( &#61485; l moment is expressed as, ) 1 ( &#710;1l l l q q &#61544; &#61485; &#61501; (21) 1 1 1 1 ) 1 ( &#710;l l l G q q &#61544; &#61544; &#61485; &#61483; &#61501; &#61485; (22)</ns0:formula><ns0:p>The corrected bias of second-order moment is represented as, Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>) 1 ( &#710;2 l l l m m &#61544; &#61485; &#61501; (23) 2 2 1 2 ) 1 ( &#710;l l l H m m &#61544; &#61544; &#61485; &#61483; &#61501; &#61485; (24) ) ( 1 &#61485; &#61649; &#61501; l l</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Step 4: Determination of best solution: The best solution is determined with error, and a solution having a better solution is employed for classifying text.</ns0:p><ns0:p>Step 5: Termination: The optimum weights are produced repeatedly until utmost iterations are attained. Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> describes the pseudocode of the Adams technique. </ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.'>Dynamic learning with proposed RIWO-based deep residual network</ns0:head><ns0:p>For incremental data , dynamic learning is done using the proposed B RIWO-based deep residual network. Here, the assessment of incremental learning with the developed RIWO-based deep residual network is done to achieve effective text classification with the dynamic data. The deep residual network is trained with developed RIWO for generating optimum weights. The developed RIWO is generated by integrating ROA and IWO for acquiring effective dynamic text classification.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.1.'>Architecture of deep residual network</ns0:head><ns0:p>The model of the deep residual network is already explained in section 3.4.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.2.'>Training of deep residual network with proposed RIWO</ns0:head><ns0:p>The training of deep residual networks is performed with developed RIWO, which is devised by integrating IWO and ROA. Here, the ROA <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> is motivated by the behavior of rider groups, which travel to attain a common target position to turn out to be a winner. In this model, the riders are chosen from the total riders of each group. Hence, it is concluded that this method produces enhanced accuracy of classification. Furthermore, the ROA is effective and follows the steps of fictional computing for addressing optimization problems but contains less convergence. IWO <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> is motivated by colonizing characteristics of weed plants. The technique provided a fast rate of convergence and elevated the accuracy. Hence, integrating IWO and ROA is carried out to enhance complete algorithmic performance. The steps present in the method are expressed as:</ns0:p><ns0:p>Step 1) Initialization of population The preliminary step is algorithm initialization, which is performed using four-rider groups provided by and represented as, The computation of error is already described in equation <ns0:ref type='bibr' target='#b20'>(19)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_18'>A } , , , , { 2 1 &#61514; &#61549; A A A A A &#61501;<ns0:label>(</ns0:label></ns0:formula><ns0:p>Step 3) Update riders position: The rider position in each set is updated for determining the leader. 
Thus, the rider updates position using a feature of each rider is defined below. The update position of each rider is expressed as, As per ROA <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>, the update overtaker position is used to increase the rate of success by determining the position of overtaker and is represented as,</ns0:p><ns0:formula xml:id='formula_19'>&#61531; &#61533; ) , ( * ) ( ) , ( ) , ( * 1 h L A g h g A h g A L n o n o n &#61622; &#61483; &#61501; &#61483; (27)</ns0:formula><ns0:p>Where, signifies direction indicator.</ns0:p><ns0:formula xml:id='formula_20'>) ( * g n &#61622;</ns0:formula><ns0:p>The attacker contains a propensity to grab the position of leaders and given by,</ns0:p><ns0:formula xml:id='formula_21'>&#61531; &#61533; n g L n u g L a n r u L A K Cos u L A u g A &#61483; &#61483; &#61501; &#61483; ) , ( * ) , ( ) , (</ns0:formula><ns0:p>, 1 <ns0:ref type='bibr' target='#b28'>(28)</ns0:ref> The bypass riders contain a familiar path, and its update is expressed as,</ns0:p><ns0:formula xml:id='formula_22'>&#61480; &#61481; &#61531; &#61533; &#61531; &#61533; u u A u u A u g A n n b n &#61540; &#61560; &#61540; &#61539; &#61548; &#61485; &#61483; &#61501; &#61483; 1 * ) , ( ) ( * ) , ( ) , ( 1 (29)</ns0:formula><ns0:p>Where, symbolizes random number, signifies arbitrary number &#61548; &#61539; amongst 1to , denotes an arbitrary number in 1 to and express P &#61560; P &#61540; arbitrary number between 0 and 1.</ns0:p><ns0:p>The follower poses a propensity to update position using leading rider position to attain target and given by, </ns0:p><ns0:formula xml:id='formula_23'>&#61531; &#61533; ) * ) , ( * ( ) , ( ) , ( , 1 n g L n h g L F n r h L A K Cos h L A h g A &#61483; &#61501; &#61483; (</ns0:formula><ns0:formula xml:id='formula_24'>th h n g r &#61531; &#61533; n g n h g L F n r K Cos h L A h g A * ( 1 ) , ( ) , ( , 1 &#61483; &#61501; &#61483; (31)</ns0:formula><ns0:p>The IWO assists in generating the best solutions. 
Hence, as per IWO <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>, the equation is represented as,</ns0:p><ns0:formula xml:id='formula_25'>F n best F n F n A A A n h g A &#61485; &#61483; &#61501; &#61483; ) ( ) , ( 1 &#61555; (32)</ns0:formula><ns0:p>Where, symbolizes new weed position in iteration ,</ns0:p><ns0:formula xml:id='formula_26'>F n A 1 &#61483; 1 &#61483; n F n</ns0:formula><ns0:p>A signifies current weed position refers to the best weed found in the best A whole population and represents current standard deviation.</ns0:p><ns0:formula xml:id='formula_27'>) (n &#61555; F n F n F n best A A n h g A A &#61483; &#61485; &#61501; &#61483; ) ( ) , ( 1 &#61555; (33)</ns0:formula><ns0:p>Substitute equation <ns0:ref type='bibr' target='#b33'>(33)</ns0:ref> in equation <ns0:ref type='bibr' target='#b31'>(31)</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_28'>&#61480; &#61481;&#61531; &#61533; n g n h g, F n F n F 1 n F 1 n r * ) Cos(K 1 A &#963;(n)A h) (g, A h) (g, A &#61483; &#61483; &#61485; &#61501; &#61483; &#61483; (34) &#61531; &#61533; &#61480; &#61481;&#61531; &#61533; n g n h g F n F n n g n h g F n F n r K Cos A A n r K Cos h g A h g A * ) ( 1 ) ( * ) ( 1 ) , ( ) , ( , , 1 1 &#61483; &#61483; &#61485; &#61483; &#61501; &#61483; &#61483; &#61555; (35) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n F n r K Cos A n A r K Cos h g A h g A * ) ( 1 ) ( * ) ( 1 ) , ( ) , ( , , 1 1 &#61483; &#61485; &#61501; &#61483; &#61485; &#61483; &#61483; &#61555; (36) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n r K Cos A n A r K Cos h g A * ) ( 1 ) ( * ) ( 1 1 ) , ( , , 1 &#61483; &#61485; &#61501; &#61485; &#61485; &#61483; &#61555; (37) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n r K Cos A n A r K Cos h g A * ) ( 1 ) ( * ) ( ) , ( , , 1 &#61483; &#61485; &#61501; &#61485; &#61483; &#61555; (38)</ns0:formula><ns0:p>The final update equation of the proposed RIWO is expressed as,</ns0:p><ns0:formula xml:id='formula_29'>&#61480; &#61481; n g n h g n g n h g F n F n F n r K Cos r K Cos n A A h g A * ) ( * ) ( 1 ) ( ( ) , ( , , 1 &#61483; &#61485; &#61485; &#61501; &#61483; &#61555; (39)</ns0:formula><ns0:p>Step 4) Re-evaluation of the error: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>After completing the update process, the error of each rider is computed. Here, the position of the rider who is in the leading position is replaced using the position of the new rider generated such that the error of the new rider is less.</ns0:p><ns0:p>Step 5) Update of Rider parameter:</ns0:p><ns0:p>The rider attribute update is imperative to determine an effectual optimal solution using error.</ns0:p><ns0:p>Step 6) Riding Off time:</ns0:p><ns0:p>The steps are iterated repeatedly until time attains off time , in OFF N which the leader is determined. The Pseudocode of developed RIWO is illustrated in table 2.</ns0:p></ns0:div> <ns0:div><ns0:head>Table2. Pseudocode of developed RIWO</ns0:head><ns0:p>Hence, the output produced from the developed RIWO-based deep residual network is , which helps to classify the text data considering &#61547; dynamic learning that helps to classify the dynamic data. 
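Because several of the update equations above did not survive extraction cleanly, the following is only a schematic sketch of the hybrid search loop that Table 2 describes: a rider population is initialised, each candidate weight vector is scored by the classification error of the deep residual network, and an ROA-style move toward the leading rider is blended with an IWO-style Gaussian dispersal whose standard deviation shrinks over the iterations. The stand-in error function, the exact blending of the two updates, and all constants are illustrative assumptions, not the authors' precise formulation.

import numpy as np

def riwo_optimize(error_fn, dim, pop_size=20, max_iter=100,
                  sigma_init=1.0, sigma_final=0.01, seed=0):
    """Schematic RIWO-style search: ROA-like moves toward the leading rider,
    perturbed by an IWO-like Gaussian dispersal whose spread decays over time."""
    rng = np.random.default_rng(seed)
    riders = rng.uniform(-1.0, 1.0, size=(pop_size, dim))   # initial rider positions (weights)
    errors = np.array([error_fn(r) for r in riders])

    for n in range(max_iter):
        leader = riders[np.argmin(errors)]                   # rider with minimal error
        # standard deviation of the weed dispersal decays as iterations proceed
        sigma = sigma_final + (sigma_init - sigma_final) * (1.0 - n / max_iter)
        for i in range(pop_size):
            steering = np.cos(rng.uniform(0.0, 2.0 * np.pi, size=dim))     # steering-angle style factor
            follower_move = leader + steering * rng.random(dim) * (leader - riders[i])
            candidate = follower_move + sigma * rng.standard_normal(dim)   # IWO-style dispersal
            cand_err = error_fn(candidate)
            if cand_err < errors[i]:                                       # keep the better position
                riders[i], errors[i] = candidate, cand_err
    return riders[np.argmin(errors)]

# usage: minimise a stand-in error surface; a real run would score DRN weights on validation data
best_weights = riwo_optimize(lambda w: float(np.sum(w ** 2)), dim=10)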
Here, fuzzy bounding is employed for remodeling the classifier if an error of previous data to present data is high.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.3.'>Fuzzy theory</ns0:head><ns0:p>The error is evaluated whenever incremental data is added to the model and weights are updated without using the previous weights. If the error evaluated by the present instance is less than the error of the previous instance then, the weights are updated based on the proposed RIWO algorithm; else, the classifiers are remodeled by setting a boundary for weight using fuzzy theory <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> and then, choose optimal weight using proposed RIWO algorithm. On arrival of data , the error will be </ns0:p><ns0:formula xml:id='formula_30'>1 &#61483; i d 1 &#61483; i e computed,</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.'>Results and Discussion</ns0:head><ns0:p>The competence of the technique is evaluated by analyzing the techniques with various measures like TPR, TNR, and accuracy. The Manuscript to be reviewed Computer Science assessment is done by considering mappers=3, mappers=4, and by varying the chunk size.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Experimental setup</ns0:head><ns0:p>The execution of the developed model is performed in PYTHON with windows 10 OS, Intel processor, 4GB RAM. Here, the analysis is performed by considering the NSL-KDD dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.'>Dataset description</ns0:head><ns0:p>The dataset adapted for text classification involves Reuter and 20 Newsgroups database and is explained below.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1.'>20 Newsgroups database</ns0:head><ns0:p>Ken Lang contributes the 20 Newsgroups dataset <ns0:ref type='bibr'>[24]</ns0:ref> for the newsreader to extract the Netnews. The dataset is established by collecting 20,000 newsgroup data, which is split amongst 20 different newsgroups. The database is popular for analyzing text applications for handling machine-learning methods, like clustering and classification of text. The dataset is organized into 20 different newsgroups; wherein each indicates different topics.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.2.'>Reuter database</ns0:head><ns0:p>David D. Lewis contributes the Reuters-21578 Text Categorization Collection Dataset <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. The dataset comprises documents that occurred on Reuters newswires in 1987. The documents are arranged and indexed based on categories. The count of instances of the dataset is 21578 has five attributes. The count of websites attained by dataset is 163417.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Evaluation metrics</ns0:head><ns0:p>The efficiency of the developed model is examined by adopting measures like accuracy, TPR, and TNR.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.1.'>Accuracy</ns0:head><ns0:p>It is described as the measure of data that is precisely preserved and is expressed as,</ns0:p><ns0:formula xml:id='formula_31'>F H Q P Q P Acc &#61483; &#61483; &#61483; &#61483; &#61501; (42)</ns0:formula><ns0:p>Where, signifies true positive, symbolizes true negative, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_32'>F Q Q TNR &#61483; &#61501; (44)</ns0:formula><ns0:p>Where, is true negative and signifies false positive.</ns0:p></ns0:div> <ns0:div><ns0:head>Q F 4.4. 
Comparative methods</ns0:head><ns0:p>The analysis of the proposed technique is evaluated by evaluating methods with classical techniques, like LSS-CNN <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>, RNN <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, SLKNN+MLKNN <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, BPLion+LFNN <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>, SVM, NN, LSTM, and proposed RIWO-based deep residual network.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.'>Comparative analysis</ns0:head><ns0:p>The assessment of the proposed technique is performed by adopting certain measures, like accuracy, TPR, and TNR. Here, the analysis is performed by considering two datasets, namely Reuter dataset and the 20 Newsgroup dataset. In addition, the assessment of techniques is performed by considering the mapper size=3 and 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.1.'>Analysis with Reuter dataset</ns0:head><ns0:p>The assessment of techniques with the Reuter dataset considering TPR, TNR, and accuracy parameters is described. The assessment is done with mapper=3 and mapper=4 by varying the chunk size. a) Assessment with mapper=3 Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref> presents an assessment of techniques with accuracy, TPR and TNR measure considering with Reuter dataset with mapper=3. The assessment of techniques with TPR measure is depicted in figure <ns0:ref type='figure' target='#fig_18'>3a</ns0:ref>). For chunk size=3, the TPR evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and proposed RIWO-based deep residual network are 0.747, 0.757, 0.771, 0.776, 0.780, 0.785, 0.790, and 0.803. Likewise, for chunk size=6, the TPR was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network are 0.800, 0.812, 0.818, 0.818, 0.818, 0.819, 0.819, and 0.830. The assessment of techniques with TNR measure is depicted in figure <ns0:ref type='figure' target='#fig_18'>3b</ns0:ref>). For chunk size=3, the TNR was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and proposed RIWO-based deep residual network are 0.831, 0.842, 0.853, 0.861, 0.870, 0.880, 0.886, and 0.913. Likewise, for chunk size=6, the TNR was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and proposed RIWObased deep residual network are 0.846, 0.850, 0.865, 0.869, 0.878, 0.885, 0.896, and 0.925. The assessment of the method with accuracy measure is depicted in figure <ns0:ref type='figure' target='#fig_18'>3c</ns0:ref>). For chunk size=3, the accuracy was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and proposed RIWO-based deep residual network are 0.8306, 0.842, 0.852, 0.861, 0.870, 0.880, 0.886, and 0.913. Likewise, for chunk size=6, the accuracy The assessment of techniques with accuracy, TPR, and TNR measure considering with Reuter dataset using mapper=4 is described in figure <ns0:ref type='figure' target='#fig_17'>4</ns0:ref>. The assessment of techniques with TPR is displayed in figure <ns0:ref type='figure' target='#fig_19'>4a</ns0:ref>). For chunk size=3, the TPR evaluated by LSS-CNN is 0.754, RNN is 0.768, SLKNN+MLKNN is 0.792, SVM is 0.796, NN is 0.800, LSTM is 0.806, BPLion+LFNN is 0.810, and the proposed RIWO-based deep residual network is 0.828. 
Likewise, for chunk size=6, the TPR evaluated by LSS-CNN is0.810, RNN is 0.820, SLKNN+MLKNN is 0.824, SVM 0.824, NN is 0.825, LSTM is 0.826, BPLion+LFNN is 0.826, and the proposed RIWO-based deep residual network is 0.850. The assessment of techniques with TNR measure is depicted in figure <ns0:ref type='figure' target='#fig_19'>4b</ns0:ref>). For chunk size=3, the TNR evaluated by LSS-CNN is 0.839, RNN is 0.860, SLKNN+MLKNN is 0.863, SVM is 0.870, NN is 0.875, LSTM is 0.881, BPLion+LFNN is 0.896, and the proposed RIWO-based deep residual network is 0.925. Likewise, for chunk size=6, the TNR evaluated by LSS-CNN is 0.855, RNN is 0.856, SLKNN+MLKNN is 0.876, SVM is 0.878, NN is 0.885, LSTM is 0.893, BPLion+LFNN is 0.900, and the proposed RIWO-based deep residual network is 0.940. The assessment of the method with accuracy measure is displayed in figure <ns0:ref type='figure' target='#fig_19'>4c</ns0:ref>). For chunk size=3, the accuracy evaluated by LSS-CNN is 0.837, RNN is 0.843, SLKNN+MLKNN is 0.846, SVM is 0.850, NN is 0.855, LSTM is 0.858, BPLion+LFNN is 0.862, and the proposed RIWO-based deep residual network is 0.880. Likewise, for chunk size=6, the accuracy evaluated by LSS-CNN is 0.833, RNN is 0.849, SLKNN+MLKNN is 0.852, SVM is 0.857, NN is 0.859, LSTM is 0.863, BPLion+LFNN is 0.868, and the proposed RIWO-based deep residual network is 0.887. The performance improvement of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN with respect to the proposed RIWO-based deep residual network considering accuracy are 6.087%, 4.284%, 3.945%, 3.38%, 3.156%, 2.705% and 2.142%.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.2.'>Analysis with 20 Newsgroup dataset</ns0:head><ns0:p>The assessment of techniques using 20 Newsgroup datasets with TPR, TNR, and accuracy parameters is elaborated. The assessment is done with mapper=3 and mapper=4 by altering the chunk size. a) Assessment with mapper=3 Manuscript to be reviewed Computer Science figure <ns0:ref type='figure'>6b</ns0:ref>). For chunk size=3, the TNR was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and proposed RIWObased deep residual networks are 0.831, 0.831, 0.842, 0.867, 0.848, 0.854, 0.859, and 0.870. Likewise, for chunk size=6, the TNR was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and proposed RIWO-based deep residual network are 0.836, 0.839, 0.851, 0.857, 0.863, 0.866, 0.871, and 0.910. The assessment of the method with accuracy measure is depicted in figure <ns0:ref type='figure'>6c</ns0:ref>). For chunk size=3, the accuracy was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and proposed RIWO-based deep residual network are 0.821, 0.831, 0.842, 0.846, 0.849, 0.854, 0.860, and 0.861. Likewise, for chunk size=6, the accuracy was evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and proposed RIWO-based deep residual network is 0.824, 0.838, 0.849, 0.852, 0.858, 0.863, 0.868, and 0.870. The performance improvement of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN with respect to the proposed RIWO-based deep residual network considering accuracy are 5.287%, 3.678%, 2.413%, 2.068%, 1.379%, 0.804%, and 0.229%.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.6.'>Comparative discussion</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> presents an assessment of techniques with accuracy, TPR, TNR considering Reuter and 20 Newsgroup datasets. 
Considering the Reuter dataset with mapper=3, the highest accuracy of 0.830 is evaluated by developed RIWO-based deep residual network while the accuracy of existing LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN are 0.800, 0.812, 0.818, 0.818, 0.818, 0.819, and 0.819. The proposed RIWO-based deep residual network measures the maximal TPR of 0.925, while LSS-CNN, RNN, SLKNN+MLKNN compute the TPR, SVM, NN, LSTM, BPLion+LFNN are 0.846, 0.850, 0.865, 0.869, 0.878, 0.885, and 0.896. The proposed RIWO-based deep residual network computes the highest TNR of 0.880, while the TPR, LSS-CNN, RNN, SLKNN+MLKNN, BPLion+LFNN are 0.824, 0.839, 0.849, 0.852, 0.856, 0.859 and 0.863. With mapper=4, the highest TPR of 0.850, TNR of 0.940, and accuracy of 0.887 are evaluated by a developed RIWO-based deep residual network. With 20 Newsgroup datasets and mapper=3, the highest TPR of 0.840, the highest TNR of 0.879, and the proposed RIWO-based deep residual network computes the highest accuracy of 0.859. With mapper=4, the highest TPR of 0.859, highest TNR of 0.910, and highest accuracy of 0.870 are evaluated by developed RIWO-based deep residual network. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>A technique is presented for text classification in big data considering the MapReduce model. The purpose is to provide a hybrid optimizationdriven deep learning model for text classification. Here, the pre-processing is carried out using stemming and stop word removal. In addition, the mining of significant features is performed wherein SentiWordNet features, contextual features, and thematic features are mined from input preprocessed data. Furthermore, the selection of the best features is carried out with Tanimoto similarity. The Tanimoto similarity examines the similarity between the features and selects the pertinent features with higher feature selection accuracy. Then, a deep residual network is employed for dynamic text classification. Here, the Adam algorithm trains the deep residual network. In addition, dynamic learning is carried out with the proposed RIWO-based deep residual network and fuzzy theory for incremental text classification. Here, the deep residual network training is performed using the proposed RIWO. The proposed RIWO algorithm is the integration of IWO and ROA. The proposed RIWO-based deep residual network outperformed other techniques with the highest TPR of 85%, TNR of 94%, and accuracy of 88.7%. In the future, the proposed method's performance will be evaluated using different data sets. Also, the bias mitigation strategies that are not depended directly on a set of identity terms and the methods that are less dependent on individual words will be considered for effectively deal with biases tied to words used in many different contexts like white vs. black. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic view of text classification from the input big data using proposed RIWO-based Deep Residual Network</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Here, different words indicate different polarity that indicates various word senses. The PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Key terms: Consider symbolize language model, that employs each term, and a metric is expressed as, of Context term:After discovering key terms, the technique begins the process of context term discovery, which is similar to first separately detecting each term. The steps employed in determining the context term are given as, i) Compute all instances of the key term employed among relevant and non-relevant reviews .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>r d relevant review set. The term indicates a language model for an PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021) Manuscript to be reviewed Computer Science excerpt with a non-relevant review set and represents the size of the S window. If the measure is a definite threshold then, that score is adapted as a context term . Thus, generated context-based features are modeled as S x .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>F</ns0:head><ns0:label /><ns0:figDesc /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>1 F 2 F 3 F 3 . 3 .</ns0:head><ns0:label>12333</ns0:label><ns0:figDesc>features, andrefers to thematic features. Feature selection using Tanimoto similarity The selection of imperative features from the extracted features is F made using Tanimoto similarity. The Tanimoto similarity computes similarity amidst features and selects features with high feature selection accuracy. Here, the Tanimoto similarity is expressed as,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>12) Where, expresses CNN feature of the input image, and refer O u v recording coordinates, signifies kernel matrix termed as a learnable G E E &#61620; parameter, and are the position index of the kernel matrix. Hence,</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>( 14 )Figure. 2 . 3 . 4 . 2 .</ns0:head><ns0:label>142342</ns0:label><ns0:figDesc>Figure. 2. Structural design of deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification 3.4.2. Training of Deep residual network with Adams algorithmThe deep residual network training is performed with the Adams technique that assists in discovering the best weights for tuning the deep residual network for classifying text. The optimal weights are produced with the Adams method, which assists in tuning the deep residual network for generating the best results. Adam<ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> represents a first-order stochastic gradient-based optimization extensively adapted to a fitness function that changes for attributes. The major implication of the method is</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Discovery of updated biasAdam is used to improving convergence behavior and optimization. This technique generates smooth variation with effectual computational efficiency and lower memory requirements. 
As per Adam<ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>, the bias is expressed as, step size, expresses corrected bias, indicates bias-</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>F 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>.3.2. TPR TPR refers ratio of the count of true positives with respect to the total number of positives. true positives, is the false negatives. P H 4.3.3. TNR The TNR refers to the ratio of negatives that are correctly detected. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:1:2:NEW 30 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure. 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure. 3. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=3 a) TPR b) TNR c) Accuracy</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure. 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure. 4. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=4 a) TPR b) TNR c) Accuracy</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure. 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure. 5. Assessment of different techniques comparing with the proposed method by considering 20 Newsgroup dataset with mapper=3 a) TPR b) TNR c) AccuracyFigure5presents an assessment of techniques with accuracy, TPR and TNR measure considering with 20 Newsgroup dataset with mapper=3. The assessment of techniques with TPR measure is depicted in figure5a). For chunk size=3, the maximal TPR of 0.834 is evaluated by the proposed RIWO-based deep residual network, while LSS-CNN evaluates TPR, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN are 0.708, 0.759, 0.780, 0.785, 0.792, 0.803, and 0.812. Likewise, for chunk size=6, the highest TPR of0.840 is evaluated by the proposed RIWO-based deep residual network, while LSS-CNN evaluates the TPR, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN are 0.796, 0.815, 0.818, 0.822, 0.825, 0.828, and 0.829. The assessment of techniques with TNR measure is depicted in figure5b). For chunk size=3, the TNR computed by the proposed RIWObased deep residual network is 0.862, while LSS-CNN evaluates TNR, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN are 0.832, 0.839, 0.843, 0.846, 0.848, 0.850, 0.851. Likewise, for chunk size=6, the TNR evaluated by the proposed RIWO-based deep residual network is 0.879 while TNR computed by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN are 0.833, 0.840, 0.849, 0.808, 0.851, 0.852, 0.854. The assessment of the method with accuracy measure is depicted in figure5c). For chunk size=3, the accuracy evaluated by the proposed RIWO-based deep residual network is 0.850, while LSS-CNN computes accuracy, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion + LFNN are 0.821, 0.822, 0.833, 0.837, 0.839, 0.840, 0.843. 
The performance improvement of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion + LFNN with respect to the proposed RIWO-based deep residual network considering accuracy are 3.411%, 3.294%, 2%, 1.529%, 1.294%, 1.17% and 0.823%. Likewise, for chunk size=6, the accuracy evaluated by the proposed RIWO-based deep residual network is 0.859, while LSS-CNN computes accuracy, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN are 0.824, 0.830, 0.839, 0.843, 0.849, 0.852, and 0.856. b) Assessment with mapper=4</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 1 Schematic</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 2 Structural</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>These features are employed in a deep residual network for classifying the texts. Here, the deep residual network training is performed by the Adams algorithm. Finally, dynamic learning is carried out wherein the proposed RIWO-based deep residual network is employed for incremental text classification. Here, the fuzzy theory is employed for weight bounding to deal with the incremental data. In this process, the training of deep residual network is performed by the proposed RIWO, which is devised by combining ROA and IWO algorithm The key contribution of the paper: &#61623; Proposed RIWO-based deep residual network for text classification: A new method is developed for text classification using multidimensional features and MapReduce. Dynamic learning uses the proposed RIWO-based deep residual network for classifying texts. Here, the developed RIWO is adapted for deep residual network training.</ns0:figDesc><ns0:table /><ns0:note>&#61623;RIWO: It is devised by combining ROA and IWO algorithms.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudocode of Adams algorithm</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>and that will be compared with that of previous data . If</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>d</ns0:cell><ns0:cell>i</ns0:cell><ns0:cell>e</ns0:cell><ns0:cell>&#61500; i i e</ns0:cell><ns0:cell>1 &#61483;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>then prediction with training based on RIWO is made else the fuzzy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>bounding based learning will be done by bounding the weights, which is</ns0:cell></ns0:row><ns0:row><ns0:cell>given as,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Where,</ns0:cell><ns0:cell>&#61559;</ns0:cell><ns0:cell>t</ns0:cell><ns0:cell cols='6'>B is weight at the current iteration and t s F &#61559; &#61559; &#61501; &#61617;</ns0:cell><ns0:cell>F</ns0:cell><ns0:cell cols='2'>s</ns0:cell><ns0:cell>signifies a fuzzy</ns0:cell><ns0:cell>(40)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>score. For the dynamic data, extract the features</ns0:cell><ns0:cell>{F</ns0:cell><ns0:cell cols='2'>}</ns0:cell><ns0:cell cols='5'>. 
Here, the membership</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>degree is given as,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>|| Membership Degree &#61559; &#61501;</ns0:cell><ns0:cell>t</ns0:cell><ns0:cell>&#61485;</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>&#61485;</ns0:cell><ns0:cell>&#61559;</ns0:cell><ns0:cell>t</ns0:cell><ns0:cell>&#61485;</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>||</ns0:cell><ns0:cell>(41)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='12'>Where, weights at iteration represents weights at iteration 2 &#61485; t &#61559; . At last, when the highest iteration is attained, the and 2 &#61485; t 1 &#61485; t &#61559; signifies 1 &#61485; t</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>process is stopped.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparative discussion</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Update bypass position with equation (29) Update follower position with equation (39) Update overtaker position with equation (27) Update attacker position with equation (28) Rank riders using error with equation<ns0:ref type='bibr' target='#b20'>(19)</ns0:ref> Choose the rider with minimal error Update steering angle, gear, accelerator, and brake</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Begin</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>Initialize solutions set</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Initialize algorithmic parameter</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Discover error using equation (19)</ns0:cell></ns0:row><ns0:row><ns0:cell>While For &#61550;</ns0:cell><ns0:cell cols='3'>n &#61500; 1 &#61501;</ns0:cell><ns0:cell>OFF P to N</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Return &#61483; &#61501; n n</ns0:cell><ns0:cell>A 1</ns0:cell><ns0:cell>L</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>End for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>End while</ns0:cell></ns0:row><ns0:row><ns0:cell>End</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
" Dear Editors, We would like to express our thanks to the editor and the anonymous reviewers for their time, accurate review of our manuscript, and invaluable comments on this paper. We have carefully revised the manuscript according to the reviewers’ suggestion. All the suggestions are addressed in this response, and the corresponding changes have been incorporated in the revised paper. We hope that our revised paper can meet the requirement of publication and we believe that the manuscript is now suitable for publication in PeerJ. In the revised paper, we use the red color to indicate the changes. Dr. Hemn Barzan Abdalla Assistant Professor of Computer Science Editor's Decision From the comments of the reviewers, I think this is a valuable contribution. Please revise the paper accordingly, and then it will be evaluated again by the reviewers. Additionally, Reviewer 1 has requested that you cite specific references. You may add them if you believe they are especially relevant. However, I do not expect you to include these citations, and if you do not include them, this will not influence my decision. Response: Thank you for your comments. The papers suggested by the reviewer have been included in the reference part and cited in the introduction part of the revised manuscript. Reviewer 1 In this paper, author presented a MapReduce model for text classification in big data. However, there are some limitations that must be addressed as follows. 1. The abstract is very lengthy and not attractive. Some sentences in abstract should be summarized to make it more attractive for readers. Response: The length of the abstract has been reduced to make it more attractive for readers. Action: The abstract in page 1 has been revised as follows. The increasing demand for information and rapid growth of big data has dramatically increased textual data. For obtaining useful text information, the classification of texts is considered an imperative task. Accordingly, this paper develops a hybrid optimization algorithm for classifying the text. Here, the pre-pressing is done by the stemming process and stop word removal. In addition, the extraction of imperative features is performed and the selection of optimal features is performed using Tanimoto similarity, which estimates the similarity between the features and selects the relevant features with higher feature selection accuracy. After that, a deep residual network trained by the Adam algorithm is utilized for dynamic text classification. In addition, the dynamic learning is performed by the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines Invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). These processes are done under the MapReduce framework. The analysis reveals that the proposed RIWO-based deep residual network outperformed other techniques with the highest True Positive Rate (TPR) of 85%, True Negative Rate (TNR) of 94%, and accuracy of 88.7%. 2. In Introduction section, it is difficult to understand the novelty of the presented research work. This section should be modified carefully. In addition, the main contribution should be presented in the form of bullets. Response: Thank you for your comment. The novelties of the proposed research work and the main contributions have been mentioned clearly in the end of the introduction part of the revised manuscript. 
Action: The following points have been highlighted at the end of the introduction part (section 1) in page 3. The aim is to devise an optimization-driven deep learning technique for classifying the texts using ‎MapReduce framework. Initially, the text data undergoes pre-processing for removing ‎unnecessary words. Here, the pre-processing is performed using the stop word removal ‎and stemming process. After that, the features, such as ‎SentiWordNet features, thematic features, and contextual features are extracted. These features ‎are employed in a deep residual network for classifying the texts. Here, the deep ‎residual network training is performed by the Adams algorithm. Finally, dynamic learning is carried ‎out wherein the proposed RIWO-based deep residual network is employed for incremental text ‎classification. Here, the fuzzy theory is employed for weight bounding to deal with the ‎incremental data. In this process, the training of deep residual network is performed by the ‎proposed RIWO, which is devised by combining ROA and IWO algorithm The key contribution of the paper:‎ • Proposed RIWO-based deep residual network for text classification: A new method is developed for text classification using multidimensional features and MapReduce. Dynamic learning uses the proposed ‎RIWO-based deep residual network for classifying texts. Here, the developed RIWO is adapted for deep residual network training. • RIWO: It is devised by combining ROA and IWO algorithms. • The ‎fuzzy theory is employed to handle dynamic data by performing weight bounding. ‎ 3. The most recent work about text classification and big data should be discussed as follows (‘An intelligent healthcare monitoring framework using wearable sensors and social networking data’, ‘Traffic accident detection and condition analysis based on social networking data’, ‘Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel’, and ‘Merged Ontology and SVM-Based Information Extraction and Recommendation System for Social Robots’). Response: The works mentioned above have been added in the reference part and cited in the introduction part of the revised manuscript. Action: The following papers have been added in the reference part and cited in the introduction (section 1). 1. Farman Ali, Shaker El-Sappagh, S.M.Riazul Islam, Amjad Ali, Muhammad Attique, Muhammad Imran, Kyung-SupKwak, 'An intelligent healthcare monitoring framework using wearable sensors and social networking data,' Future Generation Computer Systems, vol.114, pp.23-43, 2021. 2. Farman Ali, Amjad Ali, Muhammad Imran, Rizwan Ali Naqvi, Muhammad Hameed Siddiqi, Kyung-Sup Kwak, 'Traffic accident detection and condition analysis based on social networking data,' National library of medicine, 2021. 3. Farman Ali, Shaker El-Sappagh, and Daehan Kwak, 'Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel,' Sensors, vol.19, no.2, pp.234, 2019. 4. Farman Ali, Daehan Kwak, Pervez Khan, Shaker Hassan A. Ei-Sappagh, S. M. Riazul Islam, Daeyoung Park, and Kyung-Sup Kwak,'Department of Information and Communication Engineering, Inha University, Incheon, South Korea, 'Merged Ontology and SVM-Based Information Extraction and Recommendation System for Social Robots,' IEEE Xplore, vol.5, pp.12364-12379, 2017. 4. It is better to merge subsection 2.1 and 2.2. Response: Section 2.1 and 2.2 have been merged and provided as a single section (Literature review). 
Action: As per the reviewer’s comment, Section 2 (Literature review) has been reframed in page 3. 5. The authors should avoid the use of too many colors in figure (see figure1). Response: The colors in figure 1 and 2 have been removed and made as a clear figure in the revised manuscript. Action: The figure 1 in section 3 and figure 2 in section 3.4.1 have been redrawn as follows. Figure 1. Schematic view of text classification from the input big data using proposed RIWO-based Deep Residual Network Figure. 2. Structural design of deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification 6. Equations should be discussed deeply. Response: All the equations in the manuscript have been explained in text in the revised version of the manuscript. Action: The following terms are included in section 3 to define the terms of the equations. refers to text data contained in the database with an attribute in ‎ data. symbolizes total mappers. symbolizes split data given to ‎ mapper to process, and ‎‎ indicates data in ‎ mapper.‎ symbolizes total words present in text data from the database.‎ The pre-processed outcome generated from pre-processing is expressed as The pertinent terms generated are employed as a text, modeled as, and non-relevant is ‎denoted as ‎‎. The set of pertinent text is modeled as , and the non-relevant set is referred ‎to as .‎ 7. Captions of the Figures not self-explanatory. The caption of figures should be self-explanatory, and clearly explaining the figure. Extend the description of the mentioned figures to make them self-explanatory. Response: Captions of the figures have been improved and expanded in the revised manuscript to make it self-explanatory. Action: The captions used for figure 1 in section 3, figure 2 in section 3.4.1, figures 3, 4, 5, and 6 in section 4.5 are provided as follows. Figure 1. Schematic view of text classification from the input big data using proposed RIWO-based Deep Residual Network Figure. 2. Structural design of deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification Figure. 3. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=3 a) TPR b) TNR c) Accuracy Figure. 4. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=4 a) TPR b) TNR c) Accuracy Figure. 5. Assessment of different techniques comparing with the proposed method by considering 20 Newsgroup dataset with mapper=3 a) TPR b) TNR c) Accuracy. Figure 6. Assessment of different techniques comparing with the proposed method by considering 20 Newsgroup dataset with mapper=4 a) TPR b) TNR c) Accuracy 8. The whole manuscript should be thoroughly revised in order to improve its English. Response: The whole manuscript has been checked thoroughly to correct and improve its English. Action: The entire manuscript has been revised carefully and the grammar and syntax issues have been corrected. 9. More details should be included in future work. Response: Future work in the conclusion part has been improved in the revised version of the manuscript. Action: The following points have been included as a future work in section 5 (Conclusion) on page 23. In future, the performance of the proposed method will be evaluated using different data sets. 
Also, the bias mitigation strategies that are not depended directly on a set of identity terms and the methods that are less dependent on individual words will be considered for effectively deal with biases tied to words used in many different contexts like white vs. black. Reviewer 2: Basic reporting The increasing demand for information and rapid growth of big data have dramatically increased textual data. The amount of different kinds of data has led to the overloading of information. For obtaining useful text information, the classification of texts is considered an imperative task. This paper develops a technique for text classification in big data using the MapReduce model. The goal is to design a hybrid optimization algorithm for classifying the text. This work is meaningful and potential in this field. Response: Thank you for the positive comment. Experimental design This paper develops a technique for text classification in big data using the MapReduce model. Validity of the findings The pre-pressing is done with the steaming process and stop word removal. In addition, the Extraction of imperative features is performed wherein SentiWordNet features, contextual features, and thematic features are generated. Furthermore, the selection of optimal features is performed using Tanimoto similarity. Additional comments 1 This work should be polished by native English speaker. Some spelling and grammar mistakes should be avoided in this manuscript. Response: All the spelling and grammatical errors in the manuscript have been corrected in the revised version of the manuscript. Action: The entire manuscript has been read carefully and the grammar and spelling issues have been corrected. 2 There are several typical machine learning classification model, such as SVM, neural network and so on. So, authors should compare the proposed method with other typical machine learning methods. Response: Some typical machine learning classification models, such as SVM and neural network have been included for comparison in the revised version of the manuscript. Action: The performance of the proposed method has been compared with three more machine learning classification models, such as SVM, NN, LSTM and the corresponding results have been provided in sub-section 4.5 (Comparative Analysis). 3. Some deep learning methods, including LSTM, should be compared with this method. Response: Thank you for the valuable comment. The LSTM has been compared with the proposed method in the revised version of the manuscript. Action: The performance of the proposed method has been compared with three more machine learning classification models, such as SVM, NN, LSTM and the corresponding results have been provided in sub-section 4.5 (Comparative Analysis). 4 There are some typographical errors. Authors should polished them. Response: All the typological errors in the manuscript have been corrected in the revised version. Action: The entire manuscript has been read carefully and the typological errors have been corrected. "
Here is a paper. Please give your review comments after reading it.
393
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Increasing demands for information and the rapid growth of big data have dramatically increased the amount of textual data. In order to obtain useful text information, the classification of texts is considered an imperative task. Accordingly, this article will describe the development of a hybrid optimization algorithm for classifying text. Here, preprocessing was done using the stemming process and stop word removal. Additionally, we performed the extraction of imperative features and the selection of optimal features using the Tanimoto similarity, which estimates the similarity between features and selects the relevant features with higher feature selection accuracy. Following that, a deep residual network trained by the Adam algorithm was utilized for dynamic text classification. Dynamic learning was performed using the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). These processes are carried out under the MapReduce framework. Our analysis revealed that the proposed RIWO-based deep residual network outperformed other techniques with the highest true positive rate (TPR) of 85%, true negative rate (TNR) of 94%, and accuracy of 88.7%.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>measures such as Jaccard <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>, cosine <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>, Euclidean distance <ns0:ref type='bibr' target='#b10'>[9]</ns0:ref>, and extended Jaccard <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, which are utilized for evaluating distance or angle across vectors. In this article, the similarity measures are divided into feature content or topological metrics.</ns0:p><ns0:p>In topology, the features are organized in a hierarchical model and the appropriate path length across the features must be evaluated. The features are measured based on evidence: features with elevated frequency have explicitly elevated information, while features with lower frequency are adapted with less information. Pair-wise and ITSim metrics fit into the class of feature content metrics. The information content measure provides elevated priority to the highest features with a small difference between the two data, leading to improved outcomes. The cosine and Euclidean belong to the class of topological metrics. It is susceptible to information loss as two similar datasets can be significantly offset by the existence of solitary features <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>. Methods such as clustering and classification that are utilized in text mining-based applications can also help transform massive data into small subsets to increase computational effectiveness <ns0:ref type='bibr' target='#b3'>[2]</ns0:ref>.</ns0:p><ns0:p>Text data consist of noisy and irrelevant features that prevent learning techniques from improving their accuracy. To remove redundant data, various data mining methods have been adapted. Feature extraction and selection are two such methods used to classify data. The selection of components is utilized for eliminating extra text features for effective classification and clustering. The previous techniques transformed huge data into small data while taking classical distance measures into consideration. 
Reducing dimensionality minimizes evaluation time and maximizes the efficiency of classification. Data and text retrieval are used to detect synonyms and meaning in data. Several techniques have been devised for classification and clustering. Clustering is carried out using unsupervised techniques on data with different class labels [2]. The goal of text classification is to categorize data into different classes; in this study, the goal was to allocate pertinent labels based on content [3].

Categorizing texts is considered a crucial part of natural language processing. It is extensively employed in applications such as automatic medical text classification [35] and traffic monitoring [36]. Most news services must sort numerous articles every day [13], and advanced email services automatically separate junk mail from legitimate mail [14]. Other applications include sentiment analysis [15], topic modeling [16], text clustering [32], language translation [17], and intent detection [18] [3]. Text classification technology assists people, filters useful data, and has broad implications in real life. The design of text categorization and machine learning [38] [37] has shifted from manual to automated approaches [19] [20] [21] [22]. Several text classification techniques exist [23] with the goal of categorizing textual data, and their outcomes can fulfill an individual's requirements for classifying text and rapidly obtaining significant data. MapReduce is utilized for handling huge amounts of unstructured data [5]. Our aim is to devise an optimization-driven deep learning technique for classifying texts under the MapReduce framework. The input text data first undergo pre-processing with stop word removal and stemming, after which features such as SentiWordNet, thematic, and contextual features are extracted. These features were employed in a deep residual network for classifying the texts, and the deep residual network training was performed using the Adam algorithm. Finally, dynamic learning was carried out wherein the proposed Rider invasive weed optimization (RIWO)-based deep residual network was employed for incremental text classification. The fuzzy theory was employed for weight bounding to deal with the incremental data.
In this process, the deep residual network training was performed using the proposed RIWO, which was devised by combining the Rider optimization algorithm (ROA) and invasive weed optimization (IWO) algorithm.</ns0:p><ns0:p>The key contributions of this paper are: </ns0:p><ns0:formula xml:id='formula_0'>&#61623;</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.'>Literature review</ns0:head><ns0:p>The eight classical techniques based on text classification using big data and their issues are described below. Ranjan &amp; Prasad [1] developed an LFNN-based incremental learning technique for classifying text data based on context-semantic features. The methods employed a dynamic dataset for classification to dynamically learn the model. Here, we employed the incremental learning procedure Back Propagation Lion (BPLion) Neural Network, and fuzzy bounding and Lion Algorithm (LA) were used to select the weights. However, the technique failed to precisely classify the sentence. Kotte et al. <ns0:ref type='bibr' target='#b3'>[2]</ns0:ref> devised a similarity function for clustering the feature pattern. The technique attained dimensionality reduction with improved accuracy. However, the technique failed to utilize membership functions for obtaining clusters. Wang et al. <ns0:ref type='bibr'>[3]</ns0:ref> devised a deep learning technique for classifying text documents. Additionally, a large-scale scope-based convolutional neural network (LSS-CNN) was utilized for categorizing the text. The method effectively computed scope-based data and parallel training for massive datasets. The technique attained high scalability on big data but failed to attain the utmost accuracy. Kuppili et al. <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref> developed the Maxwell-Boltzmann Similarity Measure (MBSM) for classifying text. The MBSM was derived with feature values from the documents. The MBSM was devised by combining single label K-nearest neighbor's classification (SLKNN), multilabel KNN (MLKNN), and K-means clustering. However, the technique failed to include clustering techniques and query mining. Liu &amp; Wang <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref> devised a technique for classifying text using English quality-related text data. Here, the goal was to extract, classify, and examine the data from English texts while considering cyclic neural networks. Ultimately, the features with sophisticated English texts were generated. This technique also combined attention to improve label disorder and make the structure more reliable. However, the computation cost tended to be very high. Li et al. <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref> designed a method for classifying text and solving the misfitting issue by performing angle-pruning tasks from a database. The technique computed the efficiency of each convolutional filter using discriminative power produced at the pooling layer and shortened words obtained from the filter. However, the technique produced high computational complexity.</ns0:p><ns0:p>Bensaid &amp; Alimi <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref> devised the Multi-Objective Automated Negotiation-based Online Feature Selection (MOANOFS) for classifying texts. The MOANOFS utilized automated negotiation and machine learning techniques to improve classification performance using ultra-high dimensional datasets. This helped the method to decide which features were the most pertinent. However, the method failed to select features from multiclassification domains. Jiang et al. 
<ns0:ref type='bibr' target='#b9'>[8]</ns0:ref> devised a hybrid text classification model based on softmax regression for classifying text. The deep belief network was utilized to classify text using learned feature space. However, the technique failed to filter extraneous characters for enhancing system performance.</ns0:p><ns0:p>Akhter et al. <ns0:ref type='bibr' target='#b41'>[39]</ns0:ref> developed a large multi-format and multi-purpose dataset with more than ten thousand documents organized into six classes. For text classification, they utilized a Single-layer Multisize Filters Convolutional Neural Network (SMFCNN). The SMFCNN obtained high accuracy, demonstrating its capability to classify long text documents in Urdu. Flores et al. <ns0:ref type='bibr' target='#b42'>[40]</ns0:ref> developed a query strategy and stopping criterion that transformed Classifier Regular Expression (CREGEX) in an active learning (AL) biomedical text classifier. As a result, the AL was permitted to decrease the number of training examples required for a similar performance in every dataset compared to passive learning (PL).</ns0:p><ns0:p>Huan et al. <ns0:ref type='bibr' target='#b43'>[41]</ns0:ref> introduced a method for Chinese text classification that depended on a feature-enhanced nonequilibrium bidirectional long shortterm memory (Bi-LSTM) network. This method enhanced the precision of Chinese text classification and had a reliable capability to recognize Chinese text features. However, the accuracy of Chinese text recognition needs improvement and the training processing time should be reduced. Dong et al. <ns0:ref type='bibr' target='#b44'>[42]</ns0:ref> introduced a text classification approach using a self-interaction attention mechanism and label embedding. This method showed high classification accuracy, but for practical application, more work should be done.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.'>Proposed RIWO-based deep residual network for text classification in big data</ns0:head><ns0:p>The objective of text classification is to categorize text data into different classes based on certain content. Text classification is considered an imperative part of processing natural language. However, it is considered a challenging and complex process due to high dimensional and noisy texts, and the need to devise an improved classifier for huge textual data. This study devised a novel hybrid optimization-driven deep learning technique for text classification using big data. Here, the goal was to devise a classifier that employs text data as input and allocates pertinent labels based on content. At first, the input text data underwent pre-processing to eliminate noise and artifacts. Pre-processing was performed with stop word removal and stemming. Once the pre-processed data were obtained, the contextual, thematic, and SentiWordNet features were extracted. Once the features were extracted, the imperative features were chosen using the Tanimoto similarity. The Tanimoto similarity method evaluates similarity across features and chooses the relevant features with high feature selection accuracy. Once the features were selected, a deep residual network <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> was used for dynamic text classification. The deep residual network was trained using the Adam algorithm <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref> [33] <ns0:ref type='bibr' target='#b36'>[34]</ns0:ref>. 
Additionally, dynamic learning was performed using the proposed RIWO algorithm along with the fuzzy theory. The proposed RIWO algorithm integrates IWO <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref> and ROA <ns0:ref type='bibr' target='#b30'>[28]</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows the schematic view of text classification from the input text data in big data using the proposed RIWO method, considering the MapReduce phase. Assume input text data with various attributes is expressed as:</ns0:p><ns0:formula xml:id='formula_1'>&#61480; &#61481;&#61480; &#61481; E e D d B B e d &#61603; &#61603; &#61603; &#61603; &#61501; 1 1 }; { ,<ns0:label>(1) Where,</ns0:label></ns0:formula><ns0:p>refers to text data contained in the database with an attribute e d B , in data. Data points are employed using attributes for each data point. The other step was to eliminate artifacts and noise present in the data.</ns0:p><ns0:p>The data in a database are split into a specific number that is equivalent to mappers present in the MapReduce model. The partitioned data is given by: &#61563; &#61565; , ;1</ns0:p><ns0:formula xml:id='formula_2'>d e q B D q N &#61501; &#61603; &#61603;<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Where, symbolizes total mappers. Assume mappers in MapReduce N are expressed as:</ns0:p><ns0:formula xml:id='formula_3'>&#61563; &#61565; N q M M M M M N q &#61603; &#61603; &#61501; 1 ; , , , , ,<ns0:label>2 1</ns0:label></ns0:formula><ns0:p>(3) Thus, input to mapper is formulated as:</ns0:p><ns0:formula xml:id='formula_4'>&#61563; &#61565; n l m r d D q l r q &#61603; &#61603; &#61603; &#61603; &#61501; 1 ; 1 ; ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Where symbolizes split data given to mapper, and indicates data l r d , q D in mapper.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.'>Pre-processing</ns0:head><ns0:p>The partitioned data from the text dataset was pre-processed by removing stop words and using stemming. Pre-processing is an important process used to smoothly arrange various data and offer effective outcomes by improving representation. The dataset contains unnecessary phrases and words that influence the process. Therefore, pre-processing is important for removing inconsistent words from the dataset. Initially, the text data are accumulated in the relational dataset and all reviews are divided into sentences and bags of the sentence. The elimination of stop words is carried out to maximize the performance of the text classification model. Here, the stemming and stop word removal refined the data.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1.'>Stop word removal</ns0:head><ns0:p>This is a process that removes words with less representative value for the data. Some of the non-representative words include pronouns and articles. When evaluating data, some words are not valuable to text content, and removing such redundant words is imperative. This procedure is termed stop word removal <ns0:ref type='bibr' target='#b31'>[29]</ns0:ref>. Certain words such as articles, conjunctions, and prepositions, continuously appear and are called stop words. The removal of the stop word, the most imperative technique, is utilized to remove redundant words using vocabulary because the vector space size does not offer any meaning. The stop words indicate the word, which does not hold any data. It is a process used to eliminate stop words from a large set of reviews. 
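A minimal sketch of this step, combined with the stemming step described in the next subsection, is given below. The NLTK stop word list and Porter stemmer are assumed tool choices; the paper does not name a specific implementation.

```python
# A minimal pre-processing sketch (stop word removal followed by stemming).
# NLTK is an assumed tool choice. Requires: nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOP_WORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    tokens = [t for t in text.lower().split() if t.isalpha()]
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop word removal
    return [stemmer.stem(t) for t in tokens]              # stemming

print(preprocess("The connections and the connecting devices are connected"))
# e.g. ['connect', 'connect', 'devic', 'connect']
```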
The elimination of the stop word is used to save space and accelerate and improve processing.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.2.'>Stemming</ns0:head><ns0:p>The stemming procedure is utilized to convert words to stem form. In massive amounts of data, several words are utilized that convey a similar meaning. Therefore, the critical method used to minimize words to root is stemming. Stemming is a method of linguistic normalization wherein little words are reduced. Moreover, it is the procedure used to retrieve information for describing the mechanism of reducing redundant words to their root form and word stem. For instance, the words connections, connection, connecting, and connected are all reduced to connect <ns0:ref type='bibr' target='#b31'>[29]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_5'>&#61563; &#61565; k i l P Q M &#61603; &#61603; &#61501; &#61544; 1 ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where symbolizes total words present in text data from the database.</ns0:p></ns0:div> <ns0:div><ns0:head>i Q</ns0:head><ns0:p>The pre-processed outcome generated from pre-processing is expressed as , which is subjected as an input to feature extraction phase.</ns0:p></ns0:div> <ns0:div><ns0:head>l M 3.2 Acquisition of features for producing highly pertinent features</ns0:head><ns0:p>This describes an imperative feature produced with input review, and the implication of feature extraction is used to produce pertinent features that facilitate improved text classification. Moreover, data obstruction is reduced because text data is expressed as a minimized feature set. Therefore, the preprocessed partitioned data is fed to feature extraction, wherein SentiWordNet, contextual, and thematic features are extracted.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2.1.'>Extraction of SentiWordNet features</ns0:head><ns0:p>The SentiWordNet features are utilized from pre-processed partitioned data by removing keywords from reviews. Here, the SentiWordNet <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref> is employed as a lexical resource to extract the SentiWordNet features. The SentiWordNet assigns each WordNet text one of three numerical sentiment scores: positive, negative, or neutral. Here, different words indicated</ns0:p></ns0:div> <ns0:div><ns0:head>Comment [MT11]:</ns0:head><ns0:p>Please confirm if the changes made conform to the intended meaning of the sentence. Manuscript to be reviewed Computer Science different polarities that indicated various word senses. The SentiWordNet consisted of different linguistic features: verbs, adverbs, adjectives, and ngram features. SentiWordNet is a process used to evaluate the score of a specific word using text data. Here, the SentiWordNet was employed to determine the polarity of the offered review and for discovering positivity and negativity. Hence, SentiWordNet is modeled as .</ns0:p></ns0:div> <ns0:div><ns0:head>Comment [WU12]: I agree with the change</ns0:head><ns0:formula xml:id='formula_6'>1 F 3.2.</ns0:formula></ns0:div> <ns0:div><ns0:head n='2.'>Extraction of contextual features</ns0:head><ns0:p>The context-based features [1] were generated from pre-processed partitioned data that described relevant words by dividing them using nonrelevant reviews for effective classification. This requires finding key terms that have context and semantic meaning in order to establish a proper context. 
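Before the key-term and context-term discussion continues, the SentiWordNet scoring described in Section 3.2.1 can be sketched as follows. NLTK's SentiWordNet interface is an assumed tool choice (the paper only names SentiWordNet as the lexical resource), and averaging the scores over a word's synsets is an illustrative simplification.

```python
# A sketch of SentiWordNet-based scoring via NLTK (assumed tool choice).
# Requires: nltk.download('wordnet'); nltk.download('sentiwordnet')
from nltk.corpus import sentiwordnet as swn

def sentiwordnet_scores(word, pos=None):
    """Average positive/negative/objective scores over a word's synsets."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return {"pos": 0.0, "neg": 0.0, "obj": 1.0}   # unknown words treated as neutral
    n = len(synsets)
    return {
        "pos": sum(s.pos_score() for s in synsets) / n,
        "neg": sum(s.neg_score() for s in synsets) / n,
        "obj": sum(s.obj_score() for s in synsets) / n,
    }

print(sentiwordnet_scores("good", pos="a"))
print(sentiwordnet_scores("terrible", pos="a"))
```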
The key term is considered a preliminary indicator for relevant review while context terms act as a validator that can be used to evaluate if the key term is an indicator. Here, the training dataset contained keywords with pertinent words. The context-based features assisted in selecting the relevant and non-relevant reviews.</ns0:p><ns0:p>We considered representing a training dataset that had relevant and non-relevant reviews. Using this method, assume represents the key term rel non N _ -Discovery of the context term: After discovering key terms, the process of context term discovery, which is similar to separately detecting each term, begins. The steps employed in determining the context term are given as:</ns0:p><ns0:p>i) Computing all instances of the key term employed among relevant and non-relevant reviews.</ns0:p><ns0:p>ii) By employing sliding window size, the conditions are mined as context terms. Hence, the size of the window is employed as a context span.</ns0:p><ns0:p>iii) The pertinent terms generated are employed as a text, modeled as , and non-relevant terms are denoted as . The set of pertinent text is</ns0:p><ns0:formula xml:id='formula_7'>r d nr d modeled as</ns0:formula><ns0:p>, and the non-relevant set is referred to as .</ns0:p><ns0:formula xml:id='formula_8'>r d R _ nr d R _ iv)</ns0:formula><ns0:p>After that, the score evaluated for each distinctive term is expressed as:</ns0:p><ns0:formula xml:id='formula_9'>S C R C R C x L x L x C nr d r d | ) ( ) ( | ) ( _ _ &#61485; &#61501; (7)</ns0:formula><ns0:p>Where symbolizes the language model for an excerpt with </ns0:p><ns0:formula xml:id='formula_10'>) ( _ C R x L</ns0:formula></ns0:div> <ns0:div><ns0:head n='3.2.3.'>Extraction of thematic features</ns0:head><ns0:p>The pre-processed partitioned data is used to find thematic features. l r d , Here, the count of the thematic word <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref> in a sentence is imperative as the frequently occurring words are most likely connected to the topic in the data. Thematic words are words that grab key topics defined in a provided document. In thematic features, the top 10 most frequent words are employed as thematic. Thus, the thematic feature is modeled as:</ns0:p><ns0:formula xml:id='formula_11'>3 F &#61669; &#61501; &#61501; 0 1 t t T T F (8)</ns0:formula><ns0:p>Where expresses the count of thematic words in a sentence, and it is</ns0:p><ns0:formula xml:id='formula_12'>T expressed as . &#61563; &#61565; T T T , ,. , 2 1</ns0:formula><ns0:p>The feature vectors considering the contextual, thematic, and SentiWordNet features are expressed as:</ns0:p><ns0:formula xml:id='formula_13'>} , , { 3 2 1 F F F F &#61501; (9)</ns0:formula><ns0:p>Where symbolizes SentiWordNet features, signifies contextual 1 F 2 F features, and refers to thematic features.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>F</ns0:head></ns0:div> <ns0:div><ns0:head n='3.3.'>Feature selection using the Tanimoto similarity</ns0:head><ns0:p>The selection of imperative features from the extracted features is F made using the Tanimoto similarity. The Tanimoto similarity computes similarity across features and selects features with high feature selection accuracy. 
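Assuming the standard Tanimoto (extended Jaccard) coefficient, whose symbolic form is given in the next equation, the selection step can be sketched as below. The pruning rule used here (discard one feature of any highly similar pair) is an illustrative stand-in rather than the paper's exact criterion.

```python
# A sketch of Tanimoto-based feature selection under the standard
# Tanimoto (extended Jaccard) coefficient; the pruning rule is illustrative.
import numpy as np

def tanimoto(a, b):
    dot = float(np.dot(a, b))
    denom = float(np.dot(a, a) + np.dot(b, b) - dot)
    return dot / denom if denom else 0.0

def select_features(X, threshold=0.9):
    """X: (n_samples, n_features). Keep features that are not near-duplicates."""
    keep = []
    for j in range(X.shape[1]):
        if all(tanimoto(X[:, j], X[:, k]) < threshold for k in keep):
            keep.append(j)
    return keep

X = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.1, 1.0],
              [3.0, 2.9, 0.0]])
print(select_features(X))   # the second column is nearly identical to the first
```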
Here, the Tanimoto similarity is expressed as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_14'>&#61669; &#61669; &#61669; &#61669; &#61501; &#61501; &#61501; &#61501; &#61485; &#61483; &#61501;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Text classification is performed using an Adam-based deep residual network and selected features . The classification of text data assists in R standardizing the infrastructure and makes the search simpler and more pertinent. Additionally, classification enhances the user's experience, simplifies navigation, and helps solve business issues (such as social media and e-mails) in real-time. The deep residual network is more effective at counting attributes and computation. This network is capable of building deep representations at each layer and can manage advanced deep learning tasks. The architecture of the deep residual network and training with the Adam algorithm is described below.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.1.'>Architecture of the deep residual network</ns0:head><ns0:p>We employed a deep residual network <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> in order to make a productive decision regarding which text classification to perform. The DRN is comprised of different layers: residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers. Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> shows the structural design of a deep residual network with residual blocks, Conv layers, linear classifier, and average pooling layers for text classification.</ns0:p><ns0:p>-Conv layer:</ns0:p><ns0:p>The two-dimensional Conv layer reduces free attributes in training and offers reimbursement for allocating weight. The cover layer processes the input image with the filter sequence known as the kernel using a local connection. The cover layer utilizes a mathematical process for sliding the filter with the input matrix and computes the dot product of the kernel. The evaluation process of the Conv layer is represented as: &#61482; Pooling layer: This layer is associated with the Conv layer and is especially utilized to reduce the feature map's spatial size. The average pooling is selected as a function of each slice and the depth of the feature map. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_15'>&#61480; &#61481; &#61480; &#61481; &#61480; &#61481; &#61669;&#61669; &#61485; &#61501; &#61485; &#61501; &#61483; &#61483; &#61623; &#61501; 1 0 1 0 , , 2 E a E s s v a u s a O X O d B (11) &#61480; &#61481; O G O d B in C Z Z &#61482; &#61501; &#61669; &#61485; &#61501; 1 0 1<ns0:label>(</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>1 &#61483; &#61485; &#61501; &#61548; a in out Z a a (13) 1 &#61483; &#61485; &#61501; &#61548; s in out Z s s</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>-Activation function: The nonlinear activation function is adapted for learning nonlinear and complicated features so it is utilized to improve the non-linearity of extracted features. Rectified linear unit (ReLU) is utilized for processing data. The ReLU function is formulated as:</ns0:p><ns0:formula xml:id='formula_17'>&#61480; &#61481; &#61678; &#61677; &#61676; &#61619; &#61515; &#61515; &#61500; &#61515; &#61501; 0 ; 0 ; 0 Re</ns0:formula><ns0:p>O LU <ns0:ref type='bibr' target='#b16'>(15)</ns0:ref> Where symbolizes a feature. 
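A minimal sketch of the layers listed in this subsection (Conv layers, ReLU, a residual block with a shortcut connection, average pooling, and a linear classifier) is given below. PyTorch is an assumed framework choice, the layer sizes are arbitrary, and 1-D convolutions over the selected feature vector are used as a simplification of the two-dimensional Conv layer described above; this is not the exact configuration used in the experiments.

```python
# A minimal PyTorch sketch (assumed framework) of the components listed above.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                  # shortcut connection

class TinyResidualTextNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.stem = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.block = ResidualBlock(16)
        self.pool = nn.AdaptiveAvgPool1d(1)        # average pooling layer
        self.classifier = nn.Linear(16, num_classes)  # linear classifier

    def forward(self, x):                          # x: (batch, feature_dim)
        x = x.unsqueeze(1)                         # treat features as a 1-D signal
        x = self.pool(self.block(self.stem(x))).squeeze(-1)
        return self.classifier(x)                  # softmax applied by the loss

net = TinyResidualTextNet(num_classes=4)
logits = net(torch.randn(8, 128))
print(logits.shape)   # torch.Size([8, 4])
```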
</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4.2.'>Training of the deep residual network with the Adam algorithm</ns0:head><ns0:p>The deep residual network training is performed using the Adam technique which assists in discovering the best weights for tuning the deep residual network for classifying text. Adam <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref> represents a first-order stochastic gradient-based optimization extensively adapted to a fitness function that changes for attributes. The major implication of the method is computational efficiency and fewer memory needs. Moreover, the problems associated with the non-stationary objectives and the subsistence of noisy gradients are handled effectively. In this study, the magnitudes of the updated parameters were invariant in contrast to the rescaling of gradient, and step size was handled with a hyperparameter that worked with sparse gradients. In addition, Adam is effective in performing step size annealing. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Step 1: Initialization The first step represents bias correction initialization wherein signifies l q the corrected bias of the first moment estimate and represents the l m corrected bias of the second moment estimate.</ns0:p><ns0:p>Step 2: Discovery of error The bias error is computed to choose the optimum weight for training the deep residual network. Here, the error was termed as an error function that led to an optimal global solution. The function is termed as a minimization function and is expressed as: This technique generates smooth variation with effectual computational efficiency and lower memory requirements. As per Adam <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, the bias is expressed as:</ns0:p><ns0:formula xml:id='formula_18'>&#61480; &#61481; &#61669; &#61501; &#61485; &#61501; f l l O f Err</ns0:formula><ns0:formula xml:id='formula_19'>&#61541; &#61537; &#61553; &#61553; &#61483; &#61485; &#61501; &#61485; l l l l m q &#710;1 (20)</ns0:formula><ns0:p>Where refers to step size, expresses corrected bias, indicates bias-&#61537; l q &#710;l m corrected second-moment estimate, represents the constant, and</ns0:p><ns0:formula xml:id='formula_20'>&#61541; 1 &#61485; l &#61553;</ns0:formula><ns0:p>signifies the parameter at a prior time instant . The corrected bias of the ) 1 ( &#61485; l first-order moment is expressed as:</ns0:p><ns0:formula xml:id='formula_21'>) 1 ( &#710;1l l l q q &#61544; &#61485; &#61501; (21) 1 1 1 1 ) 1 ( &#710;l l l G q q &#61544; &#61544; &#61485; &#61483; &#61501; &#61485; (22)</ns0:formula><ns0:p>The corrected bias of the second-order moment is represented as:</ns0:p><ns0:formula xml:id='formula_22'>) 1 ( &#710;2 l l l m m &#61544; &#61485; &#61501; (23) 2 2 1 2 ) 1 ( &#710;l l l H m m &#61544; &#61544; &#61485; &#61483; &#61501; &#61485; (24) ) ( 1 &#61485; &#61649; &#61501; l l loss H &#61553; &#61553; (25)</ns0:formula><ns0:p>Step 4: Determination of the best solution: The best solution is determined with error, and a better solution is employed for classifying text.</ns0:p><ns0:p>Step 5: Termination: The optimum weights are produced repeatedly until the utmost iterations are attained. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> describes the pseudocode of the Adam technique. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For incremental data , dynamic learning is done using the proposed B RIWO-based deep residual network. 
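Before the dynamic-learning stage is detailed, the Adam update of Section 3.4.2 (bias-corrected first- and second-moment estimates followed by a step-size-scaled parameter update) can be sketched as follows. The toy error function stands in for the network error of equation (19), and the parameter names are illustrative.

```python
# A NumPy sketch of the standard Adam update (step size alpha, decay rates
# beta1/beta2, constant epsilon). The toy error stands in for equation (19).
import numpy as np

def adam_minimize(grad_fn, theta, alpha=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    m = np.zeros_like(theta)   # first-moment estimate
    v = np.zeros_like(theta)   # second-moment estimate
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)           # bias-corrected second moment
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy error: squared distance to a target weight vector.
target = np.array([0.5, -1.0, 2.0])
theta = adam_minimize(lambda w: 2 * (w - target), np.zeros(3))
print(np.round(theta, 3))   # approaches [ 0.5 -1.   2. ]
```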
Here, the assessment of incremental learning with the developed RIWO-based deep residual network was done to achieve effective text classification with the dynamic data. The deep residual network was trained with developed RIWO for generating optimum weights. The developed RIWO was generated by integrating ROA and IWO for acquiring effective dynamic text classification.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.1.'>Architecture of deep residual network</ns0:head><ns0:p>The model of the deep residual network is described in section 3.4.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.2.'>Training of deep residual network with proposed RIWO</ns0:head><ns0:p>The training of the deep residual networks was performed with the developed RIWO, which was devised by integrating IWO and ROA. Here, the ROA <ns0:ref type='bibr' target='#b30'>[28]</ns0:ref> was motivated by the behavior of the rider groups, which travel and compete to attain a common target position. In this model, the riders were chosen from the total number of riders for each group. We concluded that this method produces enhanced classification accuracy. Furthermore, the ROA is effective and follows the steps of fictional computing for addressing optimization problems but with less convergence. IWO <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref> is motivated by colonizing characteristics of weed plants. The technique showed a fast convergence rate and elevated the accuracy. Hence, we integrated IWO and ROA to enhance complete algorithmic performance. The steps in the method are expressed as:</ns0:p><ns0:p>Step 1) Initialization of population The preliminary step is algorithm initialization, which is performed using four-rider groups provided by and represented as: The computation of errors is already described in equation <ns0:ref type='bibr' target='#b21'>(19)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_23'>A } , , , , { 2 1 &#61514; &#61549; A A A A A &#61501;<ns0:label>(</ns0:label></ns0:formula><ns0:p>Step 3) Update the riders' position: The rider position in each set is updated for determining the leader. Thus, the updated rider position using a feature of each rider is defined below. 
The updated position of each rider is expressed as:</ns0:p><ns0:p>As per ROA <ns0:ref type='bibr' target='#b30'>[28]</ns0:ref>, the updated overtaker position is used to increase the rate of success by determining the position of the overtaker and is represented as:</ns0:p><ns0:formula xml:id='formula_24'>&#61531; &#61533; ) , ( * ) ( ) , ( ) , ( * 1 h L A g h g A h g A L n o n o n &#61622; &#61483; &#61501; &#61483; (27)</ns0:formula><ns0:p>Where signifies the direction indicator.</ns0:p><ns0:formula xml:id='formula_25'>) ( * g n &#61622;</ns0:formula><ns0:p>The attacker has a propensity to grab the position of the leaders and is given by: <ns0:ref type='figure'>, (</ns0:ref> , 1 <ns0:ref type='bibr' target='#b30'>(28)</ns0:ref> The bypass riders contain a familiar path, and its update is expressed as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_26'>&#61531; &#61533; n g L n u g L a n r u L A K Cos u L A u g A &#61483; &#61483; &#61501; &#61483; ) , ( * ) , ( )</ns0:formula><ns0:formula xml:id='formula_27'>&#61480; &#61481; &#61531; &#61533; &#61531; &#61533; u u A u u A u g A n n b n &#61540; &#61560; &#61540; &#61539; &#61548; &#61485; &#61483; &#61501; &#61483; 1 * ) , ( ) ( * ) , ( ) , (</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where symbolizes the random number, signifies the arbitrary &#61548; &#61539; number between 1 and , denotes an arbitrary number between 1 and , P &#61560; P and expresses an arbitrary number between 0 and 1.</ns0:p></ns0:div> <ns0:div><ns0:head>&#61540;</ns0:head><ns0:p>The follower poses a propensity to update the position using the leading rider position to attain the target and is given by: </ns0:p><ns0:formula xml:id='formula_28'>&#61531; &#61533; ) * ) , ( * ( ) , ( ) , ( , 1 n g L n h g L F n r h L A K Cos h L A h g A &#61483; &#61501; &#61483; (</ns0:formula><ns0:formula xml:id='formula_29'>&#61531; &#61533; n g n h g L F n r K Cos h L A h g A * ( 1 ) , ( ) , ( , 1 &#61483; &#61501; &#61483; (31)</ns0:formula><ns0:p>The IWO assists in generating the best solutions. 
Per IWO <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref>, the equation is represented as:</ns0:p><ns0:formula xml:id='formula_30'>F n best F n F n A A A n h g A &#61485; &#61483; &#61501; &#61483; ) ( ) , ( 1 &#61555; (32)</ns0:formula><ns0:p>Where symbolizes the new weed position in iteration ,</ns0:p><ns0:formula xml:id='formula_31'>F n A 1 &#61483; 1 &#61483; n F n</ns0:formula><ns0:p>A signifies the current weed position, refers to the best weed found in the best A whole population, and represents the current standard deviation.</ns0:p><ns0:formula xml:id='formula_32'>) (n &#61555; F n F n F n best A A n h g A A &#61483; &#61485; &#61501; &#61483; ) ( ) , ( 1 &#61555; (33)</ns0:formula><ns0:p>Substitute equation <ns0:ref type='bibr' target='#b35'>(33)</ns0:ref> in equation <ns0:ref type='bibr' target='#b33'>(31)</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_33'>&#61480; &#61481;&#61531; &#61533; n g n h g, F n F n F 1 n F 1 n r * ) Cos(K 1 A &#963;(n)A h) (g, A h) (g, A &#61483; &#61483; &#61485; &#61501; &#61483; &#61483; (34) &#61531; &#61533; &#61480; &#61481;&#61531; &#61533; n g n h g F n F n n g n h g F n F n r K Cos A A n r K Cos h g A h g A * ) ( 1 ) ( * ) ( 1 ) , ( ) , ( , , 1 1 &#61483; &#61483; &#61485; &#61483; &#61501; &#61483; &#61483; &#61555; (35) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n F n r K Cos A n A r K Cos h g A h g A * ) ( 1 ) ( * ) ( 1 ) , ( ) , ( , , 1 1 &#61483; &#61485; &#61501; &#61483; &#61485; &#61483; &#61483; &#61555; (36) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n r K Cos A n A r K Cos h g A * ) ( 1 ) ( * ) ( 1 1 ) , ( , , 1 &#61483; &#61485; &#61501; &#61485; &#61485; &#61483; &#61555; (37) &#61531; &#61533; &#61531; &#61533;&#61531; &#61533; n g n h g F n F n n g n h g F n r K Cos A n A r K Cos h g A * ) ( 1 ) ( * ) ( ) , ( , , 1 &#61483; &#61485; &#61501; &#61485; &#61483; &#61555; (38)</ns0:formula><ns0:p>The final updated equation of the proposed RIWO is expressed as:</ns0:p><ns0:formula xml:id='formula_34'>&#61480; &#61481; n g n h g n g n h g F n F n F n r K Cos r K Cos n A A h g A * ) ( * ) ( 1 ) ( ( ) , ( , , 1 &#61483; &#61485; &#61485; &#61501; &#61483; &#61555; (39)</ns0:formula><ns0:p>Step 4) Re-evaluation of the error: After completing the update process, the error of each rider is computed. The position of the rider in the leading position is replaced using the position of the new generated rider so that the error of the new rider is smaller.</ns0:p><ns0:p>Step 5) Update of the rider parameter:</ns0:p><ns0:p>The rider attribute update is imperative to determine an effectual optimal solution using the error.</ns0:p><ns0:p>Step 6) Riding off time:</ns0:p><ns0:p>The steps were iterated repeatedly until we attained off time , in OFF N which the leader was determined. The pseudocode of the developed RIWO is shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The output produced from the developed RIWO-based deep residual network is , which helps classify the text data since dynamic learning helps &#61547; classify the dynamic data. Here, we employed fuzzy bounding to remodel the classifier if there was a high chance of a previous data error.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.5.3.'>Fuzzy theory</ns0:head><ns0:p>An error is evaluated whenever incremental data is added to the model and weights are updated without using the previous weights. 
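Before the error-comparison rule that follows, the RIWO search summarized in Table 2 can be conveyed by a deliberately simplified sketch: candidate weight vectors ("riders") move toward the current leader, while an IWO-style Gaussian dispersal term with a decaying standard deviation maintains exploration. This is not the exact update of equations (26)-(39); all names and the toy error surface are illustrative.

```python
# A deliberately simplified NumPy illustration of the leader-following plus
# weed-dispersal idea behind RIWO; not the exact update rules of the paper.
import numpy as np

def riwo_like_search(error_fn, dim, pop_size=20, iters=200,
                     sigma_init=1.0, sigma_final=0.01, seed=0):
    rng = np.random.default_rng(seed)
    riders = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    errors = np.array([error_fn(r) for r in riders])
    for n in range(iters):
        leader = riders[np.argmin(errors)]                     # best rider so far
        sigma = sigma_init + (sigma_final - sigma_init) * n / (iters - 1)
        step = 0.5 * (leader - riders)                         # follow the leader
        dispersal = rng.normal(0.0, sigma, size=riders.shape)  # IWO-style seeds
        candidates = riders + step + dispersal
        cand_errors = np.array([error_fn(c) for c in candidates])
        improved = cand_errors < errors                        # greedy replacement
        riders[improved] = candidates[improved]
        errors[improved] = cand_errors[improved]
    best = np.argmin(errors)
    return riders[best], errors[best]

# Toy error surface standing in for the network error of equation (19).
target = np.array([0.3, -0.7, 1.2, 0.0])
best_w, best_err = riwo_like_search(lambda w: np.sum((w - target) ** 2), dim=4)
print(np.round(best_w, 3), round(float(best_err), 6))
```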
If the error evaluated by the present instance is less than the error of the previous instance then the weights are updated based on the proposed RIWO algorithm. Otherwise, the classifiers are remodeled by setting a boundary for weight using fuzzy theory [1] and optimal weight is chosen using the proposed RIWO algorithm. On arrival of data , the error will be &#61500; i i e e prediction with training based on RIWO is made. Otherwise, the fuzzy bounding-based learning will be done by bounding the weights, which is given as:</ns0:p><ns0:formula xml:id='formula_35'>1 &#61483; i d 1 &#61483; i e computed</ns0:formula><ns0:formula xml:id='formula_36'>B t s F &#61559; &#61559; &#61501; &#61617;<ns0:label>(40)</ns0:label></ns0:formula><ns0:p>Where is weight at the current iteration and signifies a fuzzy t &#61559; s F score. For the dynamic data, the features were extracted. Here, the } {F membership degree is given as:</ns0:p><ns0:formula xml:id='formula_37'>2 1 || || t t Membership Degree &#61559; &#61559; &#61485; &#61485; &#61501; &#61485;<ns0:label>(41)</ns0:label></ns0:formula><ns0:p>Where represents weights at iteration and signifies</ns0:p><ns0:formula xml:id='formula_38'>2 &#61485; t &#61559; 2 &#61485; t 1 &#61485; t &#61559; weights at iteration</ns0:formula><ns0:p>. When the highest iteration is attained, the process is</ns0:p><ns0:formula xml:id='formula_39'>1 &#61485; t stopped.</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.'>Results and Discussion</ns0:head><ns0:p>The competence of the technique is evaluated by analyzing the techniques using various measures like true positive rate (TPR), true negative rate (TNR), and accuracy. The assessment is done by considering mappers=3, mappers=4, and by varying the chunk size.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Experimental setup</ns0:head><ns0:p>The execution of the developed model was performed in PYTHON with Windows 10 OS, an Intel processor, and 4GB RAM. Here, the analysis was performed by considering the NSL-KDD dataset.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.'>Dataset description</ns0:head><ns0:p>The dataset adapted for text classification involved the Reuters and 20 Newsgroups databases and is explained below.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.1.'>20 Newsgroups database</ns0:head><ns0:p>The 20 Newsgroups dataset [24] was curated by Ken Lang for newsreaders to extract the Netnews. The dataset was established by collecting 20,000 newsgroup data points split across 20 different newsgroups.</ns0:p></ns0:div> <ns0:div><ns0:head>Comment [MT23]:</ns0:head><ns0:p>Please confim if the changes made conform to the intended meaning of the sentence. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Comment</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The database is popular for analyzing text applications used to handle machine-learning methods such as clustering and text classification. The dataset is organized into 20 different newsgroups covering different topics.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.2.'>Reuters database</ns0:head><ns0:p>The Reuters-21578 Text Categorization Collection Dataset was curated by David D. Lewis <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>. The dataset is comprised of documents collected from Reuters newswires starting in 1987. The documents are arranged and indexed based on categories. There were 21,578 instances in the dataset with five attributes. 
The number of websites attained by the dataset was 163,417.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Evaluation metrics</ns0:head><ns0:p>The efficiency of the developed model was examined by adopting measures such as accuracy, TPR, and TNR.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.1.'>Accuracy</ns0:head><ns0:p>Accuracy is described as the measure of data that is precisely preserved and is expressed as:</ns0:p><ns0:formula xml:id='formula_40'>F H Q P Q P Acc &#61483; &#61483; &#61483; &#61483; &#61501; (42)</ns0:formula><ns0:p>Where signifies true positive, symbolizes true negative, denotes The TNR refers to the ratio of negatives that are correctly detected.</ns0:p><ns0:formula xml:id='formula_41'>F Q Q TNR &#61483; &#61501; (44)</ns0:formula><ns0:p>Where is true negative and signifies false positive.</ns0:p></ns0:div> <ns0:div><ns0:head>Q F 4.4. Comparative methods</ns0:head><ns0:p>We evaluated the proposed RIWO-based deep residual network by comparing it with other classical techniques such as LSS-CNN <ns0:ref type='bibr'>[3]</ns0:ref>, RNN <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>, SLKNN+MLKNN <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>, BPLion+LFNN [1], SVM, NN, and LSTM.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.'>Comparative analysis</ns0:head><ns0:p>The proposed technique was assessed using certain measures such as accuracy, TPR, and TNR. Here, the analysis was performed by considering the Reuters and 20 Newsgroups datasets, as well as the mapper size=3 and 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.1.'>Analysis with the Reuters dataset</ns0:head><ns0:p>The assessment of techniques with the Reuters dataset considering TPR, TNR, and accuracy parameters is described. The assessment is done with mapper=3 and mapper=4 and varying the chunk size. Figure <ns0:ref type='figure' target='#fig_20'>3</ns0:ref> shows an assessment of techniques measuring accuracy, TPR, and TNR in the Reuter datasets with mapper=3. The assessment of techniques with the TPR measure is depicted in Figure <ns0:ref type='figure' target='#fig_20'>3a</ns0:ref>. For chunk size=3, the TPR evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.747, 0.757, 0.771, 0.776, 0.780, 0.785, 0.790, and 0.803, respectively. Likewise, for chunk size=6, the TPR evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.800, 0.812, 0.818, 0.818, 0.818, 0.819, 0.819, and 0.830, respectively. The assessment of techniques measuring TNR is depicted in Figure <ns0:ref type='figure' target='#fig_20'>3b</ns0:ref>. For chunk size=3, the TNR evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.831, 0.842, 0.853, 0.861, 0.870, 0.880, 0.886, and 0.913, respectively. For chunk size=6, the TNR evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.846, 0.850, 0.865, 0.869, 0.878, 0.885, 0.896, and 0.925, respectively. The assessment of the methods measuring accuracy is depicted in Figure <ns0:ref type='figure' target='#fig_20'>3c</ns0:ref>. For chunk size=3, the accuracy evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.8306, 0.842, 0.852, 0.861, 0.870, 0.880, 0.886, and 0.913, respectively. 
Likewise, for chunk size=6, the accuracy evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.846, 0.850, 0.865, 0.869, 0.878, 0.885, 0.896, and 0.925, respectively. The performance improvement of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, BPLion+LFNN with respect to the proposed RIWO-based deep residual network considering accuracy were 8.540%, 8.108%, 6.486%, 6.054%, 5.081%, 4.324%, and 3.135%, respectively. b) Assessment with mapper=4 We assessed the techniques by measuring accuracy, TPR, and TNR, considering the Reuters dataset and using mapper=4 (Figure <ns0:ref type='figure' target='#fig_18'>4</ns0:ref>). The assessment of techniques using TPR is displayed in Figure <ns0:ref type='figure' target='#fig_21'>4a</ns0:ref>. For chunk size=3, the TPR evaluated using LSS-CNN was 0.754, RNN was 0.768, SLKNN+MLKNN was 0.792, SVM was 0.796, NN was 0.800, LSTM was 0.806, BPLion+LFNN was 0.810, and the proposed RIWO-based deep residual network was 0.828. Likewise, for chunk size=6, the TPR evaluated using LSS-CNN was 0.810, RNN was 0.820, SLKNN+MLKNN was 0.824, SVM was 0.824, NN was 0.825, LSTM was 0.826, BPLion+LFNN was 0.826, and the proposed RIWO-based deep residual network was 0.850. The assessment of techniques measuring TNR is depicted in Figure <ns0:ref type='figure' target='#fig_21'>4b</ns0:ref>. For chunk size=3, the TNR evaluated using LSS-CNN was 0.839, RNN was 0.860, SLKNN+MLKNN was 0.863, SVM was 0.870, NN was 0.875, LSTM was 0.881, BPLion+LFNN was 0.896, and the proposed RIWO-based deep residual network was 0.925. Likewise, for chunk size=6, the TNR evaluated by LSS-CNN was 0.855, RNN was 0.856, SLKNN+MLKNN was 0.876, SVM was 0.878, NN was 0.885, LSTM was 0.893, BPLion+LFNN was 0.900, and the proposed RIWO-based deep residual network was 0.940. The assessment of the methods measuring accuracy is displayed in Figure <ns0:ref type='figure' target='#fig_21'>4c</ns0:ref>. For chunk size=3, the accuracy evaluated by LSS-CNN was 0.837, RNN was 0.843, SLKNN+MLKNN was 0.846, SVM was 0.850, NN was 0.855, LSTM was 0.858, BPLion+LFNN was 0.862, and the proposed RIWO-based deep residual network was 0.880. For chunk size=6, the accuracy evaluated by LSS-CNN was 0.833, RNN was 0.849, SLKNN+MLKNN was 0.852, SVM was 0.857, NN was 0.859, LSTM was 0.863, BPLion+LFNN was 0.868, and the proposed RIWO-based deep residual network was 0.887. The performance improvement of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN with respect to the proposed RIWO-based deep residual network and considering accuracy was 6.087%, 4.284%, 3.945%, 3.38%, 3.156%, 2.705%, and 2.142%, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5.2.'>Analysis with the 20 Newsgroups dataset</ns0:head><ns0:p>The assessment of techniques using the 20 Newsgroups datasets with TPR, TNR, and accuracy parameters was elaborated. The assessment was done with mapper=3 and mapper=4 and by altering the chunk size. a) Assessment with mapper=3 Figure <ns0:ref type='figure' target='#fig_22'>5</ns0:ref> presents the assessment of techniques measuring accuracy, TPR and TNR and considering the 20 Newsgroups dataset with mapper=3. The assessment of techniques measuring TPR is depicted in Figure <ns0:ref type='figure' target='#fig_22'>5a</ns0:ref>. 
For chunk size=3, the maximum TPR of 0.834 was determined using the proposed RIWO-based deep residual network, while the TPR found using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN were 0.708, 0.759, 0.780, 0.785, 0.792, 0.803, and 0.812, respectively. Likewise, for chunk size=6, the highest TPR of 0.840 was found using the proposed RIWO-based deep residual network, while the TPR found using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN were 0.796, 0.815, 0.818, 0.822, 0.825, 0.828, and 0.829, respectively. The assessment of techniques measuring TNR is depicted in Figure <ns0:ref type='figure' target='#fig_22'>5b</ns0:ref>. For chunk size=3, the TNR computed using the proposed RIWO-based deep residual network was 0.862, while the TNR found using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN were 0.832, 0.839, 0.843, 0.846, 0.848, 0.850, and 0.851, respecitvely. Likewise, for chunk size=6, the TNR evaluated using the proposed RIWO-based deep residual network was 0.879, while those Figure <ns0:ref type='figure' target='#fig_23'>6</ns0:ref> presents the assessment of techniques measuring accuracy, TPR, and TNR and considering the 20 Newsgroups dataset with mapper=4. The assessment of techniques with TPR measure is depicted in Figure <ns0:ref type='figure' target='#fig_23'>6a</ns0:ref>. For chunk size=3, the TPR evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.721, 0.769, 0.798, 0.801, 0.803, 0.806, 0.810, and 0.845, respectively. Likewise, for chunk size=6, the TPR evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.810, 0.827, 0.836, 0.849, 0.840, 0.843, 0.846, and 0.859, respectively. The assessment of techniques measuring TNR is depicted in Figure <ns0:ref type='figure' target='#fig_23'>6b</ns0:ref>. For chunk size=3, the TNR evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual networks were 0.831, 0.831, 0.842, 0.867, 0.848, 0.854, 0.859, and 0.870, respectively. Likewise, for chunk size=6, the TNR evaluated by LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.836, 0.839, 0.851, 0.857, 0.863, 0.866, 0.871, and 0.910. The assessment of the method measuring accuracy is depicted in Figure <ns0:ref type='figure' target='#fig_23'>6c</ns0:ref>. For chunk size=3, the accuracy evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.821, 0.831, 0.842, 0.846, 0.849, 0.854, 0.860, and 0.861, respectively. Likewise, for chunk size=6, the accuracy evaluated using LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN, and the proposed RIWO-based deep residual network were 0.824, 0.838, 0.849, 0.852, 0.858, 0.863, 0.868, and 0.870, respectively. The performance improvements of LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, BPLion+LFNN with respect to the proposed RIWO-based deep residual network and considering Manuscript to be reviewed Computer Science accuracy were 5.287%, 3.678%, 2.413%, 2.068%, 1.379%, 0.804%, and 0.229%, respectively.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.6.'>Comparative discussion</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the assessment of techniques in terms of accuracy, TPR, and TNR in the Reuters and 20 Newsgroups datasets. 
The Reuters dataset with mapper=3 had a highest accuracy of 0.830 using the developed RIWObased deep residual network, while the accuracies of the existing LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN were 0.800, 0.812, 0.818, 0.818, 0.818, 0.819, and 0.819, respectively. The proposed RIWObased deep residual network measured the maximum TPR of 0.925, while LSS-CNN, RNN, SLKNN+MLKNN, SVM, NN, LSTM, and BPLion+LFNN computed TPR of 0.846, 0.850, 0.865, 0.869, 0.878, 0.885, and 0.896, respectively. The proposed RIWO-based deep residual network computed the highest TNR of 0.880, while LSS-CNN, RNN, SLKNN+MLKNN, and BPLion+LFNN computed TNR of 0.824, 0.839, 0.849, 0.852, 0.856, 0.859, and 0.863, respectively. With mapper=4, the highest TPR of 0.850, TNR of 0.940, and accuracy of 0.887 were found using the developed RIWO-based deep residual network. With the 20 Newsgroups datasets and mapper=3, the highest TPR of 0.840, the highest TNR of 0.879, and the highest accuracy of 0.859 were computed by the proposed RIWO-based deep residual network. With mapper=4, the highest TPR of 0.859, highest TNR of 0.910, and highest accuracy of 0.870 were determined using the developed RIWO-based deep residual network. </ns0:p></ns0:div> <ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>This article presents a technique for text classification of big data considering the MapReduce model. Its purpose is to provide a hybrid, optimization-driven, deep learning model for text classification. Here, preprocessing was carried out using stemming and stop word removal. The mining of significant features was also performed wherein SentiWordNet, contextual, and thematic features were mined from input pre-processed data. Furthermore, the selection of the best features was carried out using the Tanimoto similarity. The Tanimoto similarity examined the similarities between the features and selected the pertinent features with higher feature selection accuracy. Then, a deep residual network was employed for dynamic text classification. The Adam algorithm trained the deep residual network and dynamic learning was carried out with the proposed RIWO-based deep residual network and fuzzy theory for incremental text classification. Deep residual network training was performed using the proposed RIWO. The proposed RIWO algorithm is the integration of IWO and ROA, and it outperformed other techniques with the highest TPR of 85%, TNR of 94%, and accuracy of 88.7%. The proposed method's performance will be evaluated using different datasets in the future. Additionally, bias mitigation strategies that do not directly depend on a set of identity terms and methods that are less dependent on individual words will be considered to effectively Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Proposed RIWO-based deep residual network for text classification: A new method developed using multidimensional features and MapReduce. Dynamic learning uses the proposed RIWO-based deep residual network for classifying texts. Here, the developed RIWO was adapted for deep residual network training. &#61623; RIWO: Devised by combining ROA and IWO algorithms. &#61623; The fuzzy theory: Employed to handle dynamic data by performing weight bounding. The rest of the sections are given as follows: Section 2 presents the classical text classification techniques survey. Section 3 describes the developed text classification model. 
Section 4 discusses the results of the developed model for classical techniques, and section 5 presents the conclusion.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic view of text classification from the input big data using the proposed RIWO-based deep residual network.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU14]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>key terms: The language model that employs each term and the metric are expressed as:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>2 F</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU16]: I agree with the change Comment [MT17]: Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU18]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022) Manuscript to be reviewed Computer Science excerpt with a non-relevant review set and represents the size of the S window. If the measure is a definite threshold then that score is adapted as a context term . Generated context-based features are modeled as S x .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>selected features are expressed as . R The produced feature selection output obtained from the mapper is input to the reducer . Then, the text classification is performed on the U reducer using the selected features, which is briefly illustrated below.3.4. Classification of texts with Adam-based deep residual networkPeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>12) Where expresses the CNN feature of the input image, and refer O u v to the recording coordinates, signifies the kernel matrix termed as G E E &#61620; a learnable parameter, and and are the position indices of the kernel a s matrix. Hence, expresses the size of the kernel for the input neuron Z G th Z and expresses the cross-correlation operator.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU20]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>&#61515;</ns0:head><ns0:label /><ns0:figDesc>-Batch normalization: Here, the training set was divided into various small sets known as mini-batches to train the model. It attains a balance between evaluation and convergence complexity. The input layers are normalized by scaling activations to maximize reliability and training speed.-Residual blocks: This indicates the shortcut connection amongst the Conv layers. The input is unswervingly allocated to output only if input and output are of equal size. classifier: After completion of the Conv layer, linear classifier performs a procedure to discover noisy pixels using input features. It is a combination of the softmax function and a fully connected layer. 
expresses weight matrix and represents bias.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>&#61547;Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Structural design of the deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>The classification of text employs a deep residual network for texts. The steps of Adam are given as: PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>data, symbolizes output generated with the f &#61547; deep residual network classifier, and indicates the expected value. l O Step 3: Discovery of updated bias Adam is used to improving convergence behavior and optimization.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>2 )</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Determination of error:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>1 ( 29 )</ns0:head><ns0:label>29</ns0:label><ns0:figDesc>Comment [MT21]: Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU22]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>and compared with that of the previous data . If , then</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>[WU24]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>F 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>.3.2. TPR The TPR refers to the ratio of the count of true positives with respect to the total number of positives.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>Comment [MT25]: Please confirm if the changes made conform Comment [WU26]: I agree with the change Comment [MT27]: Please confirm if the changes made conform to the intended meaning of the sentence. Comment [WU28]: I agree with the change PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022) Manuscript to be reviewed Computer Science a) Assessment with mapper=3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Assessment of different techniques, comparing the proposed method with the Reuters dataset with mapper=3 a) TPR b) TNR c) Accuracy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Assessment of different techniques comparing the proposed method and the Reuters dataset with mapper=4 a) TPR b) TNR c) Accuracy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Assessment of different techniques comparing the proposed method and the 20 Newsgroups dataset with mapper=3 a) TPR b) TNR c) Accuracy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 
Assessment of different techniques compared with the proposed method considering the 20 Newsgroups dataset with mapper=4 a) TPR b) TNR c) Accuracy.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head /><ns0:label /><ns0:figDesc>Comment [MT29]:Please confirm if the changes made conform to the intended meaning of the sentence.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_26'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_27'><ns0:head>Figure 2 Structural</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudocode of the Adam algorithm.</ns0:figDesc><ns0:table /><ns0:note>3.5. Dynamic learning with the proposed RIWO-based deep residual networkPeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Pseudocode of the developed RIWO.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66689:2:0:NEW 26 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparative discussion.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Update bypass position with equation (29) Update follower position with equation (39) Update overtaker position with equation (27) Update attacker position with equation (28) Rank riders using error with equation<ns0:ref type='bibr' target='#b21'>(19)</ns0:ref> Choose the rider with minimal error Update steering angle, gear, accelerator, and brake</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Begin</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>Initialize solutions set</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Initialize algorithmic parameter</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Discover error using equation (19)</ns0:cell></ns0:row><ns0:row><ns0:cell>While For &#61550;</ns0:cell><ns0:cell cols='3'>n &#61500; 1 &#61501;</ns0:cell><ns0:cell>OFF P to N</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Return &#61483; &#61501; n n</ns0:cell><ns0:cell>A 1</ns0:cell><ns0:cell>L</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>End for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>End while</ns0:cell></ns0:row><ns0:row><ns0:cell>End</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
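As a closing note on the evaluation measures of Section 4.3, accuracy, TPR, and TNR can be computed from a binary confusion matrix as in the sketch below, following the standard definitions; the function names and toy labels are illustrative.

```python
# A small sketch of the measures in equations (42)-(44): accuracy,
# TPR (true positive rate), and TNR (true negative rate).
def confusion_counts(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def accuracy_tpr_tnr(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, tpr, tnr

print(accuracy_tpr_tnr([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
# (0.666..., 0.666..., 0.666...)
```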
" Dear Editors, We would like to express our thanks to the editor and the anonymous reviewers for their time, accurate review of our manuscript, and invaluable comments on this paper. We have carefully revised the manuscript according to the reviewers’ suggestion. All the suggestions are addressed in this response, and the corresponding changes have been incorporated in the revised paper. We hope that our revised paper can meet the requirement of publication and we believe that the manuscript is now suitable for publication in PeerJ. In the revised paper, we use the red color to indicate the changes. Dr. Hemn Barzan Abdalla Assistant Professor of Computer Science Editor's Decision From the comments of the reviewers, I think this is a valuable contribution. Please revise the paper accordingly, and then it will be evaluated again by the reviewers. Additionally, Reviewer 1 has requested that you cite specific references. You may add them if you believe they are especially relevant. However, I do not expect you to include these citations, and if you do not include them, this will not influence my decision. Response:Thankyou for your comments. The papers suggested by the reviewer have been included in the reference part and cited in the introduction part of the revised manuscript. Reviewer 1 In this paper, author presented a MapReduce model for text classification in big data. However, there are some limitations that must be addressed as follows. 1. The abstract is very lengthy and not attractive. Some sentences in abstract should be summarized to make it more attractive for readers. Response:The length of the abstract has been reduced to make it more attractive for readers. Action:The abstract in page 1 has been revised as follows. The increasing demand for information and rapid growth of big data has dramatically increased textual data. For obtaining useful text information, the classification of texts is considered an imperative task. Accordingly, this paper develops a hybrid optimization algorithm for classifying the text. Here, the pre-pressing is done by the stemming process and stop word removal. In addition, the extraction of imperative features is performed and the selection of optimal features is performed using Tanimoto similarity, which estimates the similarity between the features and selects the relevant features with higher feature selection accuracy. After that, a deep residual network trained by the Adam algorithm is utilized for dynamic text classification. In addition, the dynamic learning is performed by the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines Invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). These processes are done under the MapReduce framework. The analysis reveals that the proposed RIWO-based deep residual network outperformed other techniques with the highest True Positive Rate (TPR) of 85%, True Negative Rate (TNR) of 94%, and accuracy of 88.7%. 2. In Introduction section, it is difficult to understand the novelty of the presented research work. This section should be modified carefully. In addition, the main contribution should be presented in the form of bullets. Response:Thank you for your comment. The novelties of the proposed research work and the main contributions have been mentioned clearly in the end of the introduction part of the revised manuscript. 
Action: The following points have been highlighted at the end of the introduction part (section 1) in page 3. The aim is to devise an optimization-driven deep learning technique for classifying the texts using ‎MapReduce framework. Initially, the text data undergoes pre-processing for removing ‎unnecessary words. Here, the pre-processing is performed using the stop word removal ‎and stemming process. After that, the features, such as‎SentiWordNet features, thematic features, and contextual features are extracted. These features ‎are employed in a deep residual network for classifying the texts. Here, the deep ‎residual network training is performed by the Adams algorithm. Finally, dynamic learning is carried ‎out wherein the proposed RIWO-based deep residual network is employed for incremental text ‎classification. Here, the fuzzy theory is employed for weight bounding to deal with the ‎incremental data. In this process, the training of deep residual network is performed by the‎proposed RIWO, which is devised by combining ROA and IWO algorithm The key contribution of the paper:‎ • Proposed RIWO-based deep residual network for text classification: A new method is developed for text classification using multidimensional features and MapReduce. Dynamic learning uses the proposed ‎RIWO-based deep residual network for classifying texts. Here, the developed RIWO is adapted for deep residual network training. • RIWO: It is devised by combining ROA and IWO algorithms. • The ‎fuzzy theory is employed to handle dynamic data by performing weight bounding. ‎ 3. The most recent work about text classification and big data should be discussed as follows (‘An intelligent healthcare monitoring framework using wearable sensors and social networking data’, ‘Traffic accident detection and condition analysis based on social networking data’, ‘Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel’, and ‘Merged Ontology and SVM-Based Information Extraction and Recommendation System for Social Robots’). Response: The works mentioned above have been added in the reference part and cited in the introduction part of the revised manuscript. Action: The following papers have been added in the reference part and cited in the introduction (section 1). 1. Farman Ali, Shaker El-Sappagh, S.M.Riazul Islam, Amjad Ali, Muhammad Attique, Muhammad Imran, Kyung-SupKwak, 'An intelligent healthcare monitoring framework using wearable sensors and social networking data,' Future Generation Computer Systems, vol.114, pp.23-43, 2021. 2. Farman Ali, Amjad Ali, Muhammad Imran, Rizwan Ali Naqvi, Muhammad Hameed Siddiqi, Kyung-Sup Kwak, 'Traffic accident detection and condition analysis based on social networking data,' National library of medicine, 2021. 3. Farman Ali, Shaker El-Sappagh, and DaehanKwak, 'Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel,' Sensors, vol.19, no.2, pp.234, 2019. 4. Farman Ali, DaehanKwak, Pervez Khan, Shaker Hassan A. Ei-Sappagh, S. M. Riazul Islam, Daeyoung Park, and Kyung-Sup Kwak,'Department of Information and Communication Engineering, Inha University, Incheon, South Korea, 'Merged Ontology and SVM-Based Information Extraction and Recommendation System for Social Robots,' IEEE Xplore, vol.5, pp.12364-12379, 2017. 4. It is better to merge subsection 2.1 and 2.2. Response: Section 2.1 and 2.2 have been merged and provided as a single section (Literature review). 
Action:As per the reviewer’s comment, Section 2 (Literature review) has been reframed in page 3. 5. The authors should avoid the use of too many colors in figure (see figure1). Response: The colors in figure 1 and 2 have been removed and made as a clear figure in the revised manuscript. Action: The figure 1 in section 3 and figure 2 in section 3.4.1 have been redrawn as follows. Figure 1. Schematic view of text classification from the input big data using proposed RIWO-based Deep Residual Network Figure. 2. Structural design of deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification 6. Equations should be discussed deeply. Response:All the equations in the manuscript have been explained in text in the revised version of the manuscript. Action: The following terms are included in section 3 to define the terms of the equations. refers to text data contained in the database with an attribute in ‎ data. symbolizes total mappers. symbolizes split data given to ‎ mapper to process, and ‎‎indicates data in ‎ mapper.‎ symbolizes total words present in text data from the database.‎ The pre-processed outcome generated from pre-processing is expressed as The pertinent terms generated are employed as a text, modeled as, and non-relevant is ‎denoted as ‎‎. The set of pertinent text is modeledas , and the non-relevant set is referred ‎to as .‎ 7. Captions of the Figures not self-explanatory. The caption of figures should be self-explanatory, and clearly explaining the figure. Extend the description of the mentioned figures to make them self-explanatory. Response: Captions of the figures have been improved and expanded in the revised manuscript to make it self-explanatory. Action:The captions used for figure 1 in section 3, figure 2 in section 3.4.1, figures 3, 4, 5, and 6 in section 4.5are provided as follows. Figure 1. Schematic view of text classification from the input big data using proposed RIWO-based Deep Residual Network Figure. 2. Structural design of deep residual network with residual blocks, convolutional (Conv) layers, linear classifier, and average pooling layers for text classification Figure. 3. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=3 a) TPR b) TNR c) Accuracy Figure. 4. Assessment of different techniques comparing with the proposed method by considering Reuter dataset with mapper=4 a) TPR b) TNR c) Accuracy Figure. 5. Assessment of different techniques comparing with the proposed method by considering 20 Newsgroup dataset with mapper=3 a) TPR b) TNR c) Accuracy. Figure 6. Assessment of different techniques comparing with the proposed method by considering 20 Newsgroup dataset with mapper=4 a) TPR b) TNR c) Accuracy 8. The whole manuscript should be thoroughly revised in order to improve its English. Response: The whole manuscript has been checked thoroughly to correct and improve its English. Action: The entire manuscript has been revised carefully and the grammar and syntax issues have been corrected. 9. More details should be included in future work. Response: Future work in the conclusion part has been improved in the revised version of the manuscript. Action: The following points have been included as a future work in section 5 (Conclusion) on page 23. In future, the performance of the proposed method will be evaluated using different data sets. 
Also, the bias mitigation strategies that are not depended directly on a set of identity terms and the methods that are less dependent on individual words will be considered for effectively deal with biases tied to words used in many different contexts like white vs. black. Reviewer 2: Basic reporting The increasing demand for information and rapid growth of big data have dramatically increased textual data. The amount of different kinds of data has led to the overloading of information. For obtaining useful text information, the classification of texts is considered an imperative task. This paper develops a technique for text classification in big data using the MapReduce model. The goal is to design a hybrid optimization algorithm for classifying the text. This work is meaningful and potential in this field. Response: Thank you for the positive comment. Experimental design This paper develops a technique for text classification in big data using the MapReduce model. Validity of the findings The pre-pressing is done with the steaming process and stop word removal. In addition, the Extraction of imperative features is performed wherein SentiWordNet features, contextual features, and thematic features are generated. Furthermore, the selection of optimal features is performed using Tanimoto similarity. Additional comments 1 This work should be polished by native English speaker. Some spelling and grammar mistakes should be avoided in this manuscript. Response: All the spelling and grammatical errors in the manuscript have been corrected in the revised version of the manuscript. Action:The entire manuscript has been read carefully and the grammar and spelling issues have been corrected. 2. There are several typical machine learning classification model, such as SVM, neural network and so on. So, authors should compare the proposed method with other typical machine learning methods. Response: Some typical machine learning classification models,such as SVM and neural network have been included for comparison in the revised version of the manuscript. Action: The performance of the proposed method has been compared with three more machine learning classification models, such as SVM, NN, LSTM and the corresponding results have been provided in sub-section 4.5 (Comparative Analysis). 3. Some deep learning methods, including LSTM, should be compared with this method. Response:Thank you for the valuable comment. The LSTM has been compared with the proposed method in the revised version of the manuscript. Action: The performance of the proposed method has been compared with three more machine learning classification models, such as SVM, NN, LSTM and the corresponding results have been provided in sub-section 4.5 (Comparative Analysis). 4. There are some typographical errors. Authors should polished them. Response: All the typological errors in the manuscript have been corrected in the revised version. Action:The entire manuscript has been read carefully and the typological errors have been corrected. Reviewer: Basic reporting The increasing demand for information and rapid growth of big data has dramatically increased textual data. For obtaining useful text information, the classification of texts is considered an imperative task. Experimental design This paper develops a hybrid optimization algorithm for classifying the text. Here, the pre-pressing is done by the stemming process and stop word removal. 
In addition, the extraction of imperative features is performed, and the selection of optimal features is performed using Tanimoto similarity, which estimatesthe similarity between the features and selects the relevant features with higher feature selection accuracy. After that, a deep residual network trained by the Adam algorithm is utilized for dynamic text classification. In addition, the dynamic learning is performed by the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines Invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). Validity of the findings The aim is to devise an optimization-driven deep learning technique for classifying the texts using the MapReduce framework. Initially, the text data undergoes pre-processing for removing unnecessary words. Here, the pre-processing is performed using the stop word removal and stemming process. After that, the features, such as SentiWordNet features, thematic features, and contextual features, are extracted. These features are employed in a deep residual network for classifying the texts. Here, the deep residualnetwork training is performed by the Adams algorithm. Finally, dynamic learning is carried out wherein the proposed RIWO-based deep residual network is employed for incremental text classification. Here, the fuzzy theory is employed for weight bounding to deal with the incremental data. Additional comments This paper develops a hybrid optimization algorithm for classifying the text. Here, the pre-pressing is done by the stemming process and stop word removal. In addition, the extraction of imperative features is performed, and the selection of optimal features is performed using Tanimoto similarity, which estimatesthe similarity between the features and selects the relevant features with higher feature selection accuracy. This work is meaningful and potential in this field. 1 Some errors, including spelling errors and grammar ones should be polished. Response: Spelling errors and grammar errors have been corrected. Action: The entire manuscript has been revised for correcting the spelling errors and grammar errors. 2 Some typical references should be discussed in this work. Response: More recent references have been discussed in Section 2 (Literature Review) of the revised manuscript. Action: The following references have been included in the references section and discussed in Section 2 (Literature Review) of the revised manuscript. 1. Muhammad Pervez Akhter, Zheng Jiangbin, Irfan Raza Naqvi, Mohammed Abdelmajeed, Atif Mehmood, and Muhammad Tariq Sadiq, 'Document-Level Text Classification Using Single-Layer Multisize Filters Convolutional Neural Network,' IEEE Access, vol. 8, pp.42689 - 42707, 2020. 2. Christopher A. Flores, Rosa L. Figueroa, and Jorge E. Pezoa, 'Active Learning for Biomedical Text Classification Based on Automatically Generated Regular Expressions,' IEEE Access, vol. 9, pp.38767 - 38777, 2021. 3. Hai Huan, Jiayu Yan, Yaqin Xie, Yifei Chen, Pengcheng Li, and Rongrong Zhu, 'Feature-Enhanced Nonequilibrium Bidirectional Long Short-Term Memory Model for Chinese Text Classification,' IEEE Access, vol. 8, pp. 199629 - 199637, 2020. 4. Yanru Dong, Peiyu Liu, Zhenfang Zhu, Qicai Wang, and Qiuyue Zhang, 'A Fusion Model-Based Label Embedding and Self-Interaction Attention for Text Classification,' vol.8, pp. 30548 - 30559, 2019. "
Here is a paper. Please give your review comments after reading it.
394
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Duct air quality monitoring (DAQM) is a typical process for building controls, with multiple infections outbreaks reported over time linked with duct system defilement. Various research works have been published with analyses on the air quality inside ducting systems using microcontrollers and low-cost smart sensors instead of conventional meters.</ns0:p><ns0:p>However, researchers faced problems sending data within limited range and cross-sections inside the duct to the gateway using available wireless technologies, as the transmission is entirely a non-line-of-sight. Therefore, this study developed a new instrument for DAQM to integrate microcontrollers and sensors with a mobile robot using LoRa as the wireless communication medium. The main contribution of this paper is the evaluation of mesh LoRa strategies using our instrument to overcome network disruption problems at the cross-sections and extend the coverage area within the duct environment. A mobile LoRabased data collection technique is implemented for various data sensors such as DHT22, MQ7, MQ2, MQ135, and DSM50A to identify carbon monoxide and smoke carbon dioxide PM2.5 levels. This study analyzed the efficiency of data transmission and signal strength to cover the air duct environment using several network topologies. The experimental design covered four different scenarios with different configurations in a multi-story building. The network performance evaluations focused on the packet delivery ratio (PDR) and the received signal strength indicator (RSSI). Experimental results in all scenarios showed an improvement in Packet Delivery Ratio (PDR) and significant improvement in the coverage area in the mesh network setup. The results conclude that the transmission efficiency and coverage area are significantly enhanced using the proposed LoRa mesh network and potentially expanded in larger duct environments.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Today, especially in urban areas, people spend up to 90% of their time indoors. In a fully airconditioned building with a centralized air-conditioning system, air flows are supplied to the room through the metal duct channels. So, the indoor air quality of the building is determined by these duct channels as air circulates inside the occupied range and provides fresh air <ns0:ref type='bibr' target='#b18'>(Ibrahim, 2016)</ns0:ref>. Many infection outbreaks have been reported, which are linked with the contamination of duct systems, cooling towers, ductwork, and filters <ns0:ref type='bibr' target='#b25'>(Moscato et al., 2017)</ns0:ref>. So, it is crucial to ensure the supply of fresh and clean air through the duct channels where the ductwork of a building can be contaminated internally in multiple ways (Z. <ns0:ref type='bibr' target='#b23'>Liu et al., 2018)</ns0:ref>. Therefore, periodic air quality for duct channels should be done to maintain the standard air quality and early contamination detection. The conventional method of collecting air samples and analyzing the quality in the laboratory is costly (S. <ns0:ref type='bibr' target='#b21'>Liu et al., 2016)</ns0:ref>. Several studies have evaluated and monitored indoor air quality with IoT tools <ns0:ref type='bibr' target='#b17'>(Husein et al., 2019)</ns0:ref>. 
Research has analyzed DAQM with smart nodes combined with microcontrollers and low-cost IoT sensors instead of commercial meters. The collected data are sent to a remote server using wireless data transmission technologies for the final analysis.</ns0:p><ns0:p>Several technologies are available for wireless communication, such as Bluetooth, Wi-Fi, Zigbee, GiFi, and Wimax <ns0:ref type='bibr' target='#b9'>(Garcia et al., 2018)</ns0:ref>. Previous researches on DAQM used Bluetooth technology between nodes, covering a concise area of wireless data transmission. Some studies used Wi-Fi that shows network disruption at the cross-sections of the duct channel. Data transmission in a duct environment is entirely a non-line-of-sight situation. <ns0:ref type='bibr' target='#b3'>(Chomba et al., 2011)</ns0:ref> showed that Wi-Fi signal strength in a non-line-of-sight indoor environment is reduced to less than -100 dBm for a 30-m distance between nodes. In a study by (N. <ns0:ref type='bibr' target='#b27'>Hashim, N. F. A. M. Azmi1, 2014)</ns0:ref>, it was observed that, for outdoor communication, the Wi-Fi signal lost after 150 meters. For indoor communication, the signal lost appeared after only 40 meters. In indoor situations, the Wi-Fi area coverage decreases due to obstacles in the indoor environment, which reduce the effectiveness of data transmission and result in path loss. Both Wi-Fi and Bluetooth technologies work based on radio wave transmission, and a radio wave cannot pass through metal <ns0:ref type='bibr' target='#b30'>(Smith &amp; Smith, 2005)</ns0:ref>, and <ns0:ref type='bibr' target='#b13'>(Hassan et al., 2016)</ns0:ref>. In a study by <ns0:ref type='bibr' target='#b31'>Swain et al., (2018)</ns0:ref>, ZigBee wireless technology sent sensed data from the underground mine to a monitoring station. In this experiment, the researchers experienced packet loss after 135 m and a sudden drop in the signal after 150 meters. Path loss for transmitted data is effortless and standard in a non-line-of-sight surrounded environment.</ns0:p><ns0:p>LoRa has emerged as one of the advancements in wireless technology with acceptable receiver sensitiveness and a low amount of BER (Bit Error Rate). It is considered to have reasonably priced chips for low data rate communication. LoRa can give the longest-range coverage compared with any other current radio technology like Wi-Fi, ZigBee, or Bluetooth <ns0:ref type='bibr' target='#b7'>(Daud et al., 2018)</ns0:ref>. It could cover up to 400m in a non-line of sight environment <ns0:ref type='bibr' target='#b28'>(Rahman &amp; Suryanegara, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b6'>(Dahiya, 2017)</ns0:ref>. However, the coverage area is reduced due to interference of data transmission affected by materials such as that graphite, aluminum foil, steel, and electrically conductive metals that can reflect or even absorb radio waves <ns0:ref type='bibr' target='#b11'>(Guan &amp; Chung, 2021)</ns0:ref>. Based on node amount and connection between nodes, various network topologies have emerged for different usage of LoRa. The most common topology of the LoRa network is Star Topology, Tree Topology, and Mesh Topology. <ns0:ref type='bibr' target='#b33'>Tehrani et al., (2021)</ns0:ref> discussed the star and tree topology of the LoRa network, which is limited to one hop and is defined by the scope of each node. In tree network topology, nodes can act as relays data from a node in a hierarchy farther from the base station in a tree network topology. 
<ns0:ref type='bibr' target='#b15'>Huh &amp; Kim (2019)</ns0:ref> state that the LoRa mesh topology model has no hierarchy, unlike in a tree topology. Experimental results showed that the presented method of the LoRa tree network improves the energy consumption of the entire IoT network compared with the star network. Each node can relay a data packet and co-operate with other network nodes to route a packet efficiently into the gateways. Compared to <ns0:ref type='bibr' target='#b20'>Lee &amp; Ke (2018)</ns0:ref> study, the star and mesh network topologies showed that an increase in communication range by 88.49% PDR, where mesh architecture is an appropriate solution to the issue without installing an additional gateway. Several parameters can be adjusted for different performance targets, like power level, Spreading Factor, bandwidth, and coding rate. Meanwhile, point-to-point communication-based star topology achieved only 58.7% on average under the same experimental environment. <ns0:ref type='bibr' target='#b14'>Hossinuzzaman &amp; Dahnil (2019)</ns0:ref> reported an improved network performance significantly a LoRa based mesh network architecture to enhance the packet delivery ratio during rain attenuation. One of the essential features of LoRa technology is Spreading Factor (SF), whereby multiple SF can be used to trade data rate, coverage range of the network, time on the air, receiver sensitivity, longer battery life <ns0:ref type='bibr' target='#b2'>(Centenaro et al., 2016)</ns0:ref>. The drawback of this approach, it could reduce the throughput rate of the network and can be responsible for severe data collision because this setup requires a longer air time for data transmission. This situation appeared due to many LoRa nodes transmitting data and receiving acknowledgments simultaneously <ns0:ref type='bibr' target='#b20'>(Lee &amp; Ke, 2018)</ns0:ref>. These situations can be liable for a massive drop in PDR <ns0:ref type='bibr' target='#b35'>(Varsier &amp; Schwoerer, 2017)</ns0:ref>. For these reasons mentioned above, in our research, we exclude the SF increment to solve the coverage range problem and intend to ensure the best PDR for the network system. <ns0:ref type='bibr' target='#b0'>Abdullah et al., (2013)</ns0:ref> developed a mechanical robot that can move through duct channels and collect temperature, humidity, and gas pollutants with sensors and the internal photos of duct channels with a camera. The researchers used a Bluetooth module to transmit the collected data to the data server for final analysis, but the Bluetooth class is unspecified in their research. <ns0:ref type='bibr' target='#b5'>Coleman &amp; Meggers (2018)</ns0:ref> found that sending the sensed data with Wi-Fi, from inside the duct channels to a remote server outside the duct, is not satisfactory. They acknowledged that in the cross-section of the ventilation air duct, they experienced several problems with Wi-Fi connectivity, which resulted in hardware failure. In order to improve network connectivity, the researchers used an additional antenna, which was an unstable and temporary solution to this problem. In a duct environment, metallic interference of the duct shield reduces signal strength, and a packet cannot reach as far as it should be, limiting the network coverage area. Consequently, when the distance between nodes increases during data transmission, several data packets fail to reach the destination, especially in large buildings or the multilevel of a building. 
The network coverage area is a significant barrier in gaining the maximum potential of network performance in a non-line-of-sight environment. Therefore, it is vital to select a wireless technology with a vast coverage capacity to overcome communication disruption for transmission of collected data to reduce the packet loss ratio.</ns0:p><ns0:p>This paper proposed a technique for obtaining data from the duct environment to enable air-quality monitoring and secure stable wireless network communication using LoRa with extended area coverage in multi-story buildings with high PDR and strong RSSI. LoRa technology is introduced for DAQM and compared to a two-node-based LoRa point-to-point network architecture. A LoRa based mesh network topology is proposed to cover a large area of DAQM and enhance the wireless communication performance. The main contribution of this paper is the evaluation of mesh LoRa strategies using our instrument to overcome network disruption problems at the cross-sections and extend the coverage area within the duct environment. The remainder of the paper is structured into three sections. The methodology is explained in Section 2, followed by The experimental analysis of the proposed objective in Section 3. Lastly, the concluding remarks and future works are described in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This section describes the three phases of this research methodology, which consist of node &amp; network architecture, experimental setup, and data collection &amp; evaluation procedures. An Arduino Uno is programmed for data collection with sensors and transmit collected through LoRa in two different network architectures. The master node is set as a data collector in the duct environment and sender the collected data to the destination. The end node is configured as the network receiver and is responsible for visualizing collected data. Two additional repeater nodes are used in mesh network architecture, where each repeater is programmed as a transceiver. Four experimental scenarios are designed to evaluate the network performance in several key parameters.</ns0:p></ns0:div> <ns0:div><ns0:head>Phase 1: Node and Network Architecture</ns0:head><ns0:p>Experiments are conducted based on two network architectures: two node-based networks and a mesh network. In the two node-based networks, one master node is used to sense air elements with sensors in the duct environment and programmed to send the data to the receiver end node through the LoRa wireless technology. Two additional repeater nodes are configured to relay data between the master node and the mesh network's end node.</ns0:p></ns0:div> <ns0:div><ns0:head>Two node-based network architecture</ns0:head><ns0:p>An Arduino Uno microcontroller and one Cytron LoRa RFM shield with a 915MHz antenna acted as primary to achieve master node formation. Five sensors are attached to the board to sense the level of particular elements in the air. The DHT22 sensor is used to measure temperature and humidity, MQ7, MQ2, and MQ135 are used to sense carbon monoxide, smoke, and carbon dioxide levels, respectively, and the DSM501A sensor is used to detect PM2.5 levels. A 5V power bank is used to supply the electricity in the node. The architecture of the network for the point-to-point two node-based communication is illustrated in Figure <ns0:ref type='figure'>1</ns0:ref>. DJI RoboMaster robot is used to carry all the instruments and travels through the duct channel. 
The streaming image from the FPV camera helps get visual feedback from the internal part of the ventilation duct. A rechargeable LED torchlight is attached to the robot's top to light up the pathway inside the duct environment. A Xiaomi Redmi Note 6 Pro android phone is used to operate RoboMaster remotely. An android app named robomaster.apk is installed in the phone to connect with the robot to control the movement of RoboMaster, real-time video monitoring during navigation in duct environment. One end node is programmed to receive the transmitted data from the master node for the sensing sensors. The end node consists of an Arduino Uno board, a Cytron LoRa RFM shield, and a 915-MHz antenna which configured to display the received data on the serial monitor of the Arduino IDE connected to an Asus X454L laptop, with Arduino IDE application installed for data visualization.</ns0:p><ns0:p>The workflow of nodes and network architecture is defined inside the code of Arduino Uno. The packet length of the data packet in master node equal to 80 bytes(NODE A with ID=1). The data is routed from the source node to the destination node based on unique Node ID in our network architecture. A Cytron LoRa shield is used master node for the experiment with a maximum of +14dBm transmission power, TxPower. The master node collects and combines all data, then append all sensor values as data packets, and finally send data packets to the destination.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:11:68391:1:0:NEW 3 Mar 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>All the data packets are formatted as a string, with 60,000 packets are sent serially from the master node. Interval transmission time is set as 2000 milliseconds between data collection. The end node is configured as the receiver node, defined as NODE D with ID=4, and 100 bytes of packet length. The data packet receiving policy is configured at end node with calculation of RSSI value. This experiment's network frequency is 915 MHz for both master and end nodes.</ns0:p><ns0:p>The data packet routing policy of the LoRa-based two node-based network at our experiment is that Node A collects data with sensors then sends the data as a packet to Node D based on Node ID. If some other nodes with different node ID or without node ID are present on the nodes, this packet will be discarded due to a mismatch of Node ID. The data packet will be transmitted if Node D is located within the transmission range. After receiving the packet, the end node decodes the data string and visualizes received data at the serial monitor, including RSSI value calculated by node D.</ns0:p></ns0:div> <ns0:div><ns0:head>Mesh network architecture</ns0:head><ns0:p>A mesh network topology is proposed and implemented to improve the data transmission efficiency. In the mesh network between master and end node, two additional nodes are programmed as repeater nodes. Each repeater node consists of one Arduino Uno, one Cytron LoRa RFM Shield and 5V power bank. The architecture of the proposed mesh network is presented in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>A master and an end node are used in our LoRa mesh topology. The configuration and workflow of these two nodes in coding is the same as the configuration is used for two-node-based network but the defined destination Node ID at master node and defined source Node ID at receiver end node are different. 
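To make the packet format and ID-based routing policy described above more concrete, the following Arduino-style sketch outlines how a master node of this kind could be implemented. It is a minimal illustration under stated assumptions rather than the authors' actual firmware: the paper does not name the LoRa driver, so the RadioHead RH_RF95 library is assumed here, and the pin assignments, analog channels, and payload layout are placeholders.

```cpp
// Hypothetical master-node sketch (Node A, ID = 1): reads the five sensors,
// packs the readings into a comma-separated string led by source/destination
// node IDs, and transmits it every 2000 ms.
// Assumes the RadioHead RH_RF95 driver and the Adafruit DHT library; the pin
// numbers and payload layout are illustrative, not taken from the paper.
#include <SPI.h>
#include <RH_RF95.h>
#include <DHT.h>

#define NODE_ID  1         // master node (Node A)
#define DEST_ID  4         // end node (Node D) in the two node-based setup
#define DHT_PIN  2
RH_RF95 rf95(10, 3);       // chip-select and interrupt pins of the LoRa shield (assumed)
DHT dht(DHT_PIN, DHT22);

void setup() {
  Serial.begin(9600);
  dht.begin();
  rf95.init();                  // in a real sketch, check the return value
  rf95.setFrequency(915.0);     // 915 MHz, as used in the experiments
  rf95.setTxPower(14, false);   // +14 dBm maximum TxPower of the shield
}

void loop() {
  float t   = dht.readTemperature();
  float h   = dht.readHumidity();
  int co    = analogRead(A0);   // MQ7
  int smoke = analogRead(A1);   // MQ2
  int co2   = analogRead(A2);   // MQ135
  int dust  = analogRead(A3);   // DSM501A (the paper's PM2.5 conversion is not shown)

  // The payload begins with the node IDs so receivers can filter by ID.
  String packet = String(NODE_ID) + "," + String(DEST_ID) + "," +
                  String(t, 1) + "," + String(h, 1) + "," +
                  String(co) + "," + String(smoke) + "," +
                  String(co2) + "," + String(dust);
  rf95.send((uint8_t *)packet.c_str(), packet.length());
  rf95.waitPacketSent();
  delay(2000);                  // 2000 ms interval between transmissions
}
```

Embedding the source and destination node IDs at the head of the string mirrors the ID-matching policy used by Node D to accept or discard packets.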
Repeater nodes are defined as NODE B &amp; NODE C and the Node ID is defined as respectively 2 &amp; 3. Inside the coding of master node, ID of Node B &amp; C. Data transmitted from Node A will be accepted by Node B &amp; C while discarded other nodes. Repeaters work as a transceiver to receive data from Node A and forward it to Node D. Then, the repeater node ID is added for the data packet will be transmitted to the end node. Interval time for Node B is set to 3000 milliseconds, while 1000 milliseconds for Node C. The network frequency is defined as 915 MHz for all nodes.</ns0:p><ns0:p>Our data packet routing policy used in this study is mesh topology. Node A collects data with sensors and sends it to the repeaters (Node B and Node C) instead of sending it directly to Node D. If both repeaters are within range, the nearest repeater will receive the packet first. The repeater Node ID will be included the packet and sent to Node D. If both repeater node receives and transmits the packets, Node D will receive the packet from Node C due to the specific interval settings. Then, the same data packet will be received from Node B. As the string begins with the Node ID, so from the received data string it can be identified that that packet is received from which repeater node, either Node B or Node C. Finally, the RSSI value are calculated and presented via the serial monitor. The workflow of nodes in the mesh network is shown in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Phase 2: Experimental Setup</ns0:head><ns0:p>In the experiment, network architecture data transmission is evaluated between different levels on the lab building of the computing department in UKM. In our testbed, the ventilation system consists of two duct channels. One channel is an I-shaped one, supplying fresh air to the air conditioning system. Another duct channel has a zigzag shape, which provides a cool air supply to the rooms. The master node collects data from different points inside the level-one ventilation duct channel, and the end node is placed near the HVAC control room of level one. After that, the end node is placed at level two and later at level three. When the master node sends data from different points of the duct channel to the end node, the nature of communication becomes different for each point due to varying distances between the nodes, the different shapes of the duct, and various levels of interference. End-node placement at each level creates a different non-line-ofsight network communication situation. In order to evaluate the maximum coverage area of our network, we analyze data transmission by placing the end node to varying distances inside the faculty compound. The analysis is conducted in four experimental setups to evaluate the data transmission performance. Table <ns0:ref type='table'>1</ns0:ref> presents the descriptions of all four evaluation scenarios used in this study.</ns0:p><ns0:p>For the mesh network, two intermediate repeater nodes are added in parallel height with the master node and two different corners of the outer side of the building to cover the entire area. Figure <ns0:ref type='figure'>4</ns0:ref> represents the overview of data transmission analysis, including the figure of all nodes, nine locations of master node placement inside duct map, repeaters, and end node's locations.</ns0:p><ns0:p>Our testbed is a three-story building involving the communication between the master node in level one and the end node in levels two and three. 
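The repeater behaviour described above (receive from Node A, prepend the repeater's own ID, wait for the node-specific interval, and forward towards Node D) can be sketched in the same hedged way. The code below is an illustration only, not the authors' implementation: it reuses the RadioHead RH_RF95 driver and the comma-separated payload format assumed in the previous sketch.

```cpp
// Hypothetical repeater sketch (Node B, ID = 2; Node C would use ID = 3 and a
// 1000 ms interval): accepts packets originating from Node A, prepends its own
// ID, and forwards the packet onward, as described for the mesh topology above.
#include <SPI.h>
#include <RH_RF95.h>

#define MY_ID       2      // repeater Node B
#define SOURCE_ID   1      // accept packets originating from Node A only
#define FORWARD_MS  3000   // Node B forwarding interval (Node C: 1000 ms)
RH_RF95 rf95(10, 3);       // assumed shield pins

void setup() {
  Serial.begin(9600);
  rf95.init();
  rf95.setFrequency(915.0);
  rf95.setTxPower(14, false);
}

void loop() {
  if (rf95.waitAvailableTimeout(500)) {
    uint8_t buf[RH_RF95_MAX_MESSAGE_LEN];
    uint8_t len = sizeof(buf);
    if (rf95.recv(buf, &len)) {
      String packet = "";
      for (uint8_t i = 0; i < len; i++) packet += (char)buf[i];

      // Accept only packets whose string starts with the source node ID;
      // anything else is discarded, mirroring the ID-matching policy.
      if (packet.startsWith(String(SOURCE_ID) + ",")) {
        String forwarded = String(MY_ID) + "," + packet;  // tag with repeater ID
        delay(FORWARD_MS);                                // per-repeater interval
        rf95.send((uint8_t *)forwarded.c_str(), forwarded.length());
        rf95.waitPacketSent();
      }
    }
  }
}
```

Staggering the forwarding intervals of the two repeaters (3000 ms for Node B, 1000 ms for Node C) is what lets the end node receive Node C's copy first, as described above, and helps keep the two copies from arriving at the same moment.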
The vertical network coverage range is therefore evaluated based on the results of Scenario 1 and Scenario 2, since the testbed spans three stories. In Scenario 4, the maximum horizontal range of area coverage with a mesh network is evaluated. Our node placements for the horizontal transmission range test are illustrated in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Phase 3: Data Collection and Evaluation Procedures</ns0:head><ns0:p>The airflow inside the duct channel is controlled automatically during the experiment. The master node collects data from each point for five minutes, and after the collection, each data packet is immediately sent to the destination end node. The number of packets sent from the master node and the number of packets received by the end node within this timeframe are counted. Based on these parameters, the PDR value is calculated using Equation <ns0:ref type='formula'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>Packet Delivery Ratio (PDR) = (Number of successfully delivered packets / Number of total transmitted packets) × 100%   (1)</ns0:formula><ns0:p>The RSSI value calculation is programmed in the master node and derived from the received data visualized by the end node in the Arduino IDE. The entire experiment is repeated three times on three different days to obtain a stable average value. As the experiment is conducted using two different network architectures, the results of both are compared to quantify the improvement in data transmission achieved by the different topologies.</ns0:p><ns0:p>Based on previous research, the distance between the master and end nodes is an important parameter for network performance evaluation. So, before starting the experiment, the distance is calculated. In Scenario 3 and Scenario 4, the master node is placed inside the duct channel, and the end node is placed at seven points on the FTSM campus. Therefore, the distance between nodes for these scenarios extends from the inside of the building to the outside area. For these two scenarios, the distance between the master node location (N1) and the seven end node locations is measured in Google Maps, as presented in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>In Scenario 1 and Scenario 2, the master and end nodes are placed in an indoor area, and all the experimental points are situated within the lab building. So, for these two scenarios, the location of both the master and end node is indoors, and it would not be accurate to determine the vertical distance using Google Maps. Since the duct channels are set above the room ceiling, it is challenging to directly measure the distance from the master-node location to the end-node location. However, it is possible to measure the entire floor dimension, or the distance between any two points on the floor of a building, by counting the number of tiles on the floor. 
From the floor dimension, the distance between the master and end nodes is calculated using Pythagoras' theorem on right-angled triangles (Equation <ns0:ref type='formula'>2</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_1'>AC² = AB² + BC²   (2)</ns0:formula><ns0:p>Here, ABC is considered a right-angled triangle with ∠B = 90°. If the length of AB is measured as 'a' and the length of BC as 'b', then the hypotenuse AC = c can be determined as in Equation <ns0:ref type='formula'>3</ns0:ref>, known as the hypotenuse equation, derived from Pythagoras' theorem: c = √(a² + b²)   (3). Two situations are considered, in which the master and end nodes are placed either at the same level or at two different levels of the building. In this lab building, the level-one floor is 116 tiles long and 50 tiles wide, and each tile is 30 cm × 30 cm. So, the floor dimension of each level in the lab building is 3480 cm x 1500 cm. Besides, the total height of level one from floor to roof is measured at 405 cm, the roof is 20 cm, and the second floor is again 405 cm. Figure <ns0:ref type='figure'>7</ns0:ref> (a) presents the distance calculation between the master node location B inside the duct channel and the end node located at point R on top of a chair near the HVAC control room. R' is the vertical projection of R onto the floor, and M2 is the vertical projection of B onto the floor. M1 is the point where a and b meet at a 90° angle. Our measurements show that a = 18 tiles = 540 cm and b = 38 tiles = 1140 cm, so M1M2R' is a right-angled triangle. According to Equation 3, c = 1261.43 cm. M1'M2'R is a mirrored triangle of M1M2R', where a = a', b = b', and c = c'. At this stage, a new triangle M2'RB is considered, whose legs are c' (= c) and d, where d is measured with a measuring tape as 300 cm. So, according to Equation 3, the hypotenuse e = 1300 cm (approximately). Thus, from point B to end node location R, the distance is approximately 1300 cm. In the same way, the distance is calculated for the other points. Following the same formula, the distances are calculated for Level 1-Level 2 and Level 1-Level 3. The calculation methods are illustrated in Figures <ns0:ref type='figure'>7(b) and 7(c</ns0:ref>). The various distances between the master and end nodes for Scenario 1 and Scenario 2 of our experiment are listed in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>This research study involves the experimental evaluation of LoRa data transmission for DAQM and an analysis of the network coverage area. Data transmission is evaluated in two scenarios, one for the two node-based network and another for the mesh network, and is performed at different levels of the building. The network coverage area is analyzed within the computing campus by placing the end node at different locations with different distances from the master node.</ns0:p></ns0:div> <ns0:div><ns0:head>Data Transmission</ns0:head><ns0:p>The experiment is first conducted for data transmission evaluation based on the two node-based network architecture, which is considered Scenario 1. The master node is placed at point A and the end node at point R, where both nodes are situated at level one. 
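To make the PDR definition in Equation (1) and the two-step hypotenuse calculation above easy to verify, here is a short standalone C++ helper that reproduces the worked example for the B-to-R distance (a = 540 cm, b = 1140 cm, vertical offset d = 300 cm). The function names are ours and purely illustrative.

```cpp
// Reproduces Equations (1)-(3) and the worked example above.
// Plain C++ for clarity; an illustration, not the authors' code.
#include <cmath>
#include <cstdio>

// Equation (1): PDR as a percentage.
double pdr(int delivered, int transmitted) {
  return 100.0 * delivered / transmitted;
}

// Equations (2)-(3): hypotenuse of a right-angled triangle.
double hypotenuse(double a, double b) {
  return std::sqrt(a * a + b * b);
}

int main() {
  // Point A example: 23 packets per run, 3 runs, all delivered.
  std::printf("PDR at point A: %.2f %%\n", pdr(23 * 3, 23 * 3));   // 100.00 %

  // Distance from duct point B to end-node location R (values from Figure 7a).
  double c = hypotenuse(540.0, 1140.0);  // floor-plane distance, ~1261.43 cm
  double e = hypotenuse(c, 300.0);       // add the vertical offset, ~1296.6 cm
  std::printf("c = %.2f cm, B-to-R distance = %.1f cm (about 1300 cm)\n", c, e);
  return 0;
}
```

Running it prints a PDR of 100% for point A and a B-to-R distance of roughly 1300 cm, matching the values reported above.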
At point A, a total of 23 data packets are generated in the master node between 03:55 PM and 04:00 PM, and each packet is transmitted to the end node immediately after generation. All the received data can be visualized from the Arduino IDE at the end node. By counting the received packets within that time frame, it is confirmed that all the data packets were successfully received. The maximum RSSI value recorded for point A is -87 and the minimum is -93. After point A, the master node is shifted to point B by driving the RoboMaster, and the process continues until the master node reaches point G. Then, the master node is taken out of the cold air duct and moved into the straight-shaped fresh air duct line, where points H and I are located. Following the same procedures, three repetitions of the experiment are performed for the data transmission analysis. After completing the experiment with the two node-based network, it is performed again using the mesh network architecture, which is considered Scenario 2. The locations of the master node inside the duct channel and the end node at level one are presented in Figure <ns0:ref type='figure'>8</ns0:ref>.</ns0:p><ns0:p>In Figure <ns0:ref type='figure'>8</ns0:ref>, points B, D and G are considered cross-sections inside the duct channel for this experiment. The average value of PDR is calculated as follows: for point A, the total number of transmitted data packets in three repetitions is counted as 23*3=69, and the delivered packets counted at the end node are also 23*3=69. Then, PDR is calculated using Equation <ns0:ref type='formula'>1</ns0:ref>. Following this formula, the average PDR is calculated for all points from A to I. For all these points, the lowest recorded RSSI values are extracted. The lowest values are considered here because a low RSSI value is responsible for network disruption and failure of data packet transmission. Points H and I are located inside the straight-shaped fresh air duct channel. For all levels of end node placement, the PDR for data transmission from H and I is 100% and the RSSI is also sufficiently strong with the two node-based network architecture, although point I is situated at the farthest distance from R. This indicates that, in non-line-of-sight indoor situations, data transmission efficiency depends not only on the distance between nodes but also on the transmission conditions. As the cold air duct channel is zigzag-shaped, there are more obstacles and interference inside it, whereas the straight-shaped fresh air duct channel contains comparatively fewer obstacles due to its shape. For that reason, the transmission evaluation and comparison values of points H and I are omitted. Figure <ns0:ref type='figure'>9</ns0:ref> compares the results between Scenario 1 and Scenario 2 for points inside the cold air duct channel when the master and end nodes are both placed at level 1. From Figure <ns0:ref type='figure'>9</ns0:ref>, it is found that for both Scenario 1 and Scenario 2, PDR is 100% for all master node placements. The lowest RSSI recorded for Scenario 1 is -97 and for Scenario 2 is -88. So, from this comparison of results, it is determined that when the master node and end node are both placed at level 1, the two node-based network is capable of transmitting data from every master node location with 100% PDR and strong RSSI.</ns0:p><ns0:p>After the experiment at level 1, the end node is placed at level 2 and the same procedures are repeated. 
When the master node transmits data from level one to level two using the two node-based network architecture, PDR is reduced and RSSI becomes weaker. During the data transmission from points A and B, PDR is found to be 100%, but RSSI is reduced compared with level 1. When the master node arrives at point C, PDR is reduced to 91.30% and RSSI drops to -108. Then, at cross points D and G, the PDR and RSSI values also decrease compared with level 1. After point C, PDR and RSSI are reduced consistently up to point G. This indicates that PDR falls when the distance between the master and end nodes increases. To improve the data transmission efficiency at level 2, the same experiment is conducted with the mesh network architecture. The comparison of PDR and RSSI between Scenario 1 and Scenario 2 at level 2 is presented in Figure <ns0:ref type='figure'>10</ns0:ref>. It is observed that packets are successfully delivered with 100% PDR and strong RSSI from all points, including the cross points, when the mesh network is used. In the proposed mesh network, the lowest RSSI recorded is only -97. At point G, where the two node-based network records its lowest PDR and RSSI of 59.42% and -108 respectively, the mesh network achieves 100% PDR with an RSSI of only -91. The cross points do not face any network disruption during data transmission. So, from this comparison of results, it is determined that PDR and RSSI are enhanced by using the mesh network architecture.</ns0:p><ns0:p>The same experiment is conducted by moving the end node to level 3. During data transmission from level one to level three using the two node-based network architecture, PDR and RSSI decrease more significantly compared to level 2. The network faces packet loss at all data transmission points, including the cross points. The lowest PDR, 42.03%, is found at cross point D and point F, where the RSSI is -109 and -110, respectively. In this phase, the maximum PDR value is recorded at point A, which is 81.16%. A comparison between Scenario 1 and Scenario 2 when the end node is placed at level 3 is presented in Figure 11. The proposed mesh network topology can successfully transmit data up to level three with the highest PDR and a strong signal. At the three cross points B, D and G, PDR is calculated as 100% and RSSI is recorded as -94, -96 and -99, respectively, which indicates that our mesh network setup can transmit data to all the cross points without any network disruption. PDR is found to be 100% for all points, with RSSI between -92 and -100 only. So, this setup might cover more area, up to level four; however, we limit the scope to three levels, which is sufficient to show that the developed mesh network system can explore the testbed vertically.</ns0:p><ns0:p>The enhancement of PDR with the mesh network is evident from the above comparison of the results for Scenario 1 and Scenario 2 at different levels of the testbed building. As for level 1, in both scenarios PDR is 100% and RSSI is -97 or stronger, so we compare only the level-2 and level-3 results to evaluate the improvement in network performance. From Figure <ns0:ref type='figure'>10</ns0:ref> &amp; Figure <ns0:ref type='figure'>11</ns0:ref>, it can be seen that the LoRa network performs considerably better in Scenario 2, which uses the mesh network topology. 
In Scenario 1, when the end node is placed at level two, the packet delivery ratio is 100% only at points A &amp; B, but in Scenario 2 the PDR is 100% at all data collection points. Based on the results, the worst situation for Scenario 1 is when the end node is located at level three and the master node is placed at cross point G at level one. The average PDR recorded there is only 42.03%. Nevertheless, for the same experimental point in Scenario 2, the PDR is 100%. So, it is evident that the mesh network improves the data transmission of our system. Based on these worst situations, the improvement of PDR achieved by the mesh network is calculated using Equation <ns0:ref type='formula'>4</ns0:ref>.</ns0:p><ns0:p>Improvement of PDR = (Recorded PDR in the worst situation with Mesh Network / Recorded PDR in the worst situation with Two Node-Based Network) * 100%   (4)</ns0:p><ns0:p>The improvement of PDR with our proposed mesh network is 237.92%. The improvement has also been calculated for the RSSI value. It is observed that the lowest RSSI recorded in the experiments for the two node-based network is -110, when the end node is located at level 3 and the master node at point G. For the same point in the mesh network, the recorded RSSI is only -100. The results also show that when the distance between the master and end nodes increases, RSSI decreases. For the two node-based network, when the end node is placed at level 2, the RSSI value falls to around -105 or lower at the data collection points, and for level 3, the lowest RSSI recorded at almost all points is -110. In the mesh network, however, the RSSI value for both building levels is recorded between -90 and -100. Based on these worst situations, the improvement of RSSI for the mesh network in Scenario 2 is calculated using Equation <ns0:ref type='formula'>5</ns0:ref>. The improvement of RSSI with our proposed mesh network is 10 units. So, it is justified that the network efficiency is enhanced in this experiment by using the mesh topology.</ns0:p><ns0:p>Improvement of RSSI = (RSSI in the worst situation with Mesh Network - RSSI in the worst situation with Two Node-Based Network)   (5)</ns0:p></ns0:div> <ns0:div><ns0:head>Network Coverage Area</ns0:head><ns0:p>Furthermore, we expand our evaluation of the coverage area using the two node-based network (Scenario 3) and the mesh network (Scenario 4). Data is transmitted from the master node location N1 to each end node location for 5 minutes, and the same experiment is repeated three times. The summary of all three repetitions of data collection for the horizontal network coverage test is presented in Figure <ns0:ref type='figure'>12</ns0:ref>. 
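Before moving on to the coverage results, the improvement figures derived above can be reproduced with a few lines of arithmetic. The snippet below simply plugs the reported worst-case values into Equations (4) and (5); the constants come from the text, and everything else is illustrative.

```cpp
// Reproduces the worst-case improvement figures of Equations (4) and (5):
// 42.03 % PDR (two node-based) vs 100 % PDR (mesh), and -110 dBm vs -100 dBm RSSI.
#include <cstdio>

int main() {
  double pdr_mesh = 100.0, pdr_two_node = 42.03;      // worst-case PDR values
  double rssi_mesh = -100.0, rssi_two_node = -110.0;  // worst-case RSSI values

  double pdr_improvement  = (pdr_mesh / pdr_two_node) * 100.0;  // Equation (4)
  double rssi_improvement = rssi_mesh - rssi_two_node;          // Equation (5)

  std::printf("PDR improvement: %.2f %%\n", pdr_improvement);      // ~237.92 %
  std::printf("RSSI improvement: %.0f units\n", rssi_improvement); // 10 units
  return 0;
}
```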
The results show that our developed system can communicate with 100% PDR up to 30 meters horizontally in the two node-based network setup of our testbed. When the end node is shifted to point P, where the distance between nodes is 40 m, PDR decreases by around 40% and RSSI drops to -109. The end node does not receive any signal at the remaining points Q, R, S, T &amp; U. This severe reduction happens because there is a large amount of interference between the master and end node placement points, such as buildings, pillars, walls, trees, shades, and staircases, which prevent the transmitted signal from reaching the end node. So, horizontally, our developed system cannot transmit data over a long distance.</ns0:p><ns0:p>To cover the entire testbed vertically and monitor the received data from a different campus building, we need to improve the network performance. As an attempt at improvement, a mesh network topology is proposed at this point of the experiment. The mesh network allows all data packets to be successfully transmitted to point U, which is 200 m away from the master node. On the other hand, at point T, PDR drops, although the distance is only 105 m. Figure <ns0:ref type='figure'>6</ns0:ref> shows the obstacles that exist between the master node location N1 and end node placement point U: there are many trees along the transmission path that interrupt communication, but no solid objects such as walls or buildings. That is the main reason why data sent to point S achieves 100 percent PDR, although this is the point at the greatest distance from the master node. On the other hand, from node N1 to point T, there are many walls, buildings, and trees in between. As a result, although the distance is comparatively shorter, data transmission there is less efficient, with a PDR of 84.06%. From Figure <ns0:ref type='figure'>12</ns0:ref>, it can be determined that, for DAQM of the lab building, the end node can be placed around any block among A, B, D, or E to transmit data with 100% PDR using the proposed mesh network architecture. It is also observed that the mesh topology provides a considerably larger horizontal range. In Scenario 3, data is transmitted only up to a 40 m distance, at point P, with a PDR of 63.77%, but in Scenario 4 with the mesh network, data is transmitted up to 80 m at point T, where interference between nodes is higher, and up to a 200 m distance at point S, where interference between nodes is lower. The improvement of the network coverage area with the mesh network is calculated by Equation <ns0:ref type='formula'>6</ns0:ref>: Improvement of network coverage area = (Maximum distance covered with Mesh Network / Maximum distance covered with Two Node-Based Network)   (6). Overall, the proposed mesh network shows a five-fold improvement of the network coverage area. In Scenario 4, data is successfully transmitted from the master node to almost all the points of the testbed, whereas in Scenario 3 data can be transmitted to only two experimental points. This indicates that the mesh topology has enhanced the horizontal coverage area of our developed system.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSIONS</ns0:head><ns0:p>The LoRa-based mesh network architecture can transmit data within our testbed area without any network disruption. From the experimentation with different scenarios in this study, it is observed that LoRa technology can successfully transmit the collected data from the master node to the end node using different network setups. The results show that the two node-based network setup can only cover the area with full efficiency when the master and end nodes are located at level one. 
PDR and RSSI decrease with the distance between the master and end nodes. This indicates that the developed platform with the two node-based network can monitor a small testbed, and it works best within the same level of a building. When the lowest RSSI value reaches -109, the system experiences data loss.</ns0:p><ns0:p>The interference in the transmission path significantly affects data transmission performance over distance. In Scenario 1, it has been seen that the transmission performance from point I to end node location R is always better than that of other points at shorter distances. This happens due to the duct shape: interferences and obstacles are fewer in the transmission path of a straight duct channel than of a zigzag-shaped duct channel. The same result is visible in Scenario 4 if we investigate the data transmission performance from the master node to the end node location points T and U. Data can be transmitted successfully over comparatively far distances if interferences are fewer.</ns0:p><ns0:p>The proposed LoRa-based mesh network can perform with total efficiency across different levels of the testbed building. PDR, RSSI, and network coverage area are all increased in the mesh network. The improvements from using the mesh network in the tested scenarios are: i) more than a two-fold improvement of PDR and a 10-unit improvement of RSSI in Scenarios 1 and 2, and ii) a five-fold improvement of the horizontal network coverage area in Scenarios 3 and 4. The signals weaken quickly in the two node-based network, obstructed by the duct surface and other interferences. As a result, data packets from the master node sometimes cannot reach the end node, which leads to packet loss. On the contrary, repeater nodes boost weak signals in the mesh architecture. The packets can thus reach the end node successfully, reducing packet loss. Data packets can travel to the farthest area, resulting in an extension of the coverage area.</ns0:p><ns0:p>This research work therefore provides an improved solution compared with previous research. We introduced LoRa technology for DAQM that can successfully transmit and receive sensor data in non-line-of-sight duct environments. Experimental results showed no significant network disruption at all points in the duct, especially at the cross-sections, with the two node-based point-to-point network when the master and end nodes are placed at the same level of the testbed building. Using the LoRa based mesh network, we significantly improved network performance in both the horizontal and vertical range, which allows our implemented prototype to operate within a large multi-story building for DAQM. Wireless data transmission is an essential phase of DAQM. There are many obstacles and interferences in a duct environment that affect the network coverage area and reduce transmission efficiency.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Equation 6: Improvement of network coverage area = Maximum distance covered with Mesh Network / Maximum distance covered with Two Node-Based Network</ns0:figDesc></ns0:figure> </ns0:body> "
"Pusat Teknologi Kecerdasan Buatan Editors, PeerJ Computer Science PeerJ Center for Artificial Intelligence Technology 1st of March 2022 Dear Editor, We thank the reviewers for their generous comments on the manuscript “Enhancing data transmission in duct air quality monitoring using mesh network strategy for LoRa”. 2. We have edited the manuscript to address their concerns. Here, we provide you with table of corrections for your further evaluation 3. We believe that the manuscript is now suitable for publication in PeerJ Computer Science. Thank you, Regards, Dr Abdul Hadi Abd Rahman Corresponding author abdulhadi@ukm.edu.my PUSAT TEKNOLOGI KECERDASAN BUATAN (CAIT) Fakulti Teknologi & Sains Maklumat, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor Darul Ehsan Malaysia Telefon: +603-8921 6712 Faksimili: +603-8921 6094 E-mel: abdulhadi@ukm.edu.my Laman Web: http://www.ftsm.ukm.my/cait Response to reviewers Title: Enhancing Data Transmission in Duct Air Monitoring using Mesh Network Strategy for LoRa Manuscript number: 68391 Overall Revision Responses We appreciate the time and efforts spent by the editors and reviewers in reviewing our manuscript. The given details are of great benefit to our research. We have considered all the comments and revised the paper accordingly. The revised version is more rigorous and complete. We really appreciate the comments given by the editor and reviewer in ensuring the quality of our paper. Here are the responses to the comments and highlighted in yellow. …………………………………………………………………………………………………………………………………………… REVIEWER #1 1. Reviewer #1 (Basic Reporting - The paper purposes implementation of LoRa for air quality monitoring of large buildings, in this section they should clearly mention what is the novelty and contribution of this paper? Response: The novelty and contribution are highlighted in the abstract. Page 1. Line 2022 and in the introduction page 3 Line 123-125. “The main contribution of this paper is the evaluation of mesh LoRa strategies using our instrument to overcome network disruption problems at the cross-sections and extend the coverage area within the duct environment.” 2. Reviewer #1 (Experimental design - Authors haven't mentioned what is the uniqueness of their experiment as compared to previous research in this area.? Response: The uniqueness of the experiments is mentioned in a new added paragraph page 3 line 118-125 “This paper proposed a technique for obtaining data from the duct environment to enable air-quality monitoring and secure stable wireless network communication using LoRa with extended area coverage in multi-story buildings with high PDR and strong RSSI. LoRa technology is introduced for DAQM and compared to a two-node-based LoRa point-topoint network architecture. A LoRa based mesh network topology is proposed to cover a large area of DAQM and enhance the wireless communication performance. The main contribution of this paper is the evaluation of mesh LoRa strategies using our instrument to overcome network disruption problems at the cross-sections and extend the coverage area within the duct environment.” 3. Reviewer #1 (Comments - Data from tables 2-4 can be represented more simply in terms of a chart.) Response: Yes. The new Figure (9,10,11) are added by replacing tables, and the explanation continues based on that. Page 13, 14 and 15 “Figure 9, 10 and 11 are updated with the explanations” 4. Reviewer #1 (Comments - Figures should be made clearer?) 
Response: All figures are rebuilt with 600 dpi resolution, Figure 1 until 8 (page 4,5,6,8,9,10,11) 5. Reviewer #1 (Comments - Use the same kind of chart to represent PDR in different scenario in figure 11. Either both histogram or both lines? Response: Histogram is used for all scenario to represent PDR and RSSI as shown in new figure 9,10 and 11 on page 13,14,15 “Figure 9, 10 and 11 are updated with the explanations” “Figure 9 compares the results between Scenario 1 and Scenario 2 for points inside the cold air duct channel when master and end node both are placed at level 1. From the Figure 9, it has been found that for Scenario 1 and Scenario 2, for all location of master node placement PDR is calculated as 100%. The lowest RSSI for Scenario 1 is recorded as -97 and for Scenario-2 is -88. So from these results comparison it is determined that when master node and end node both are placed in level 1, two node based network is capable to transmit data from all location of master node placement with 100% PDR and strong RSSI.” “The comparison of the PDR and RSSI between Scenario 1 and Scenario 2 at level 2 is presented in Figure 10. It is observed that packets are successfully delivered with 100% PDR and strong RSSI from all points by using mesh network including the cross points. In proposed mesh network lowest RSSI is recorded only -97. At point G where PDR and RSSI is recorded as the lowest accordingly 59.42% and -108 for two node-based network, at same point by using mesh network PDR is founded 100% and RSSI is recorded -91only. Cross points are not facing any network disruption for data transmission. So, from this results comparison, it is determined that PDR and RSSI is enhanced by using mesh network architecture.” “In this phase, maximum PDR value is calculated at point A, which is 81.16 %. A comparison between Scenario 1 and Scenario 2 when the end node is placed at level 3 is presented at Figure11. The proposed mesh network topology can successfully transmit data up to level three with the highest PDR and strong signal. At three cross points B, D and G, PDR is calculated 100% and RSSI is recorded accordingly -94, -96 and -99, which indicates our mesh network setup can transmit data to all the cross points too without any network disruptions. PDR is found 100% for all points and RSSI between -92 to -100 only” 6. Reviewer #1 (Comments - Novelty not assessed?): Response: New paragraph added on page 17, line 497-504 “So, this research work provides an improved solution compared with previous research. We introduced LoRa technology for DAQM that can successfully transmit and receive sensor data in non-line-of-sight duct environments. Experimental results showed no significant network disruption at all points in the duct, especially at the cross-sections, with two node-based point-to-point network when the master and end node is placed at the same level of the testbed building. Using LoRa based mesh network, we significantly improved network performance both in the horizontal and vertical range, which allows our implemented prototype to explore within a large multi-story building for DAQM. Wireless data transmission is an essential phase of DAQM. There are a lot of obstacles and interferences that exist in a duct environment that affects the network coverage area and reduce transmission efficiency.” 7. Reviewer #1 (Additional comments- My question is why to use LoRa for this specific application? 
Why not use preexisting communication infrastructure available in the building like WIFI or LAN?) Response: The reason of not using preexisting WIFI or LAN is explained on page 2, line 5255 “Previous researches on DAQM used Bluetooth technology between nodes, covering a concise area of wireless data transmission. Some studies used Wi-Fi that shows network disruption at the cross-sections of the duct channel. Data transmission in a duct environment is entirely a non-line-of-sight situation.” 8. Reviewer #1 (Additional comments- Authors should clearly mention what is the novelty and contribution of this paper?) Response: The novelty and contribution are highlighted in the abstract. Page 1. Line 2022 and in the Introduction, Page 3; Line 123-125 “The main contribution of this paper is the evaluation of mesh LoRa strategies using our instrument to overcome network disruption problems at the cross-sections and extend the coverage area within the duct environment” 9. Reviewer #1 (Additional comments- Authors are suggested to include analysis regarding data rate of the network, Frequency of the data extraction from each node and how does the size of network effect the data extraction frequency?) Response: In this research, we focused on successful data transmission in a non-line-ofsight duct environment. We have selected PDR and RSSI as our performance metrics. The highest transmission power of the LoRa shield is used at the master node to ensure successful transmission. But data rate and bandwidth are related to transmission time, which is another new scope of the study. That’s why this point is discussed at the conclusion and mentioned as the future work and advancement of the experiment of this work. Page 17, Line 519-521 “Another potential area is to utilize using LoRa transmission power, spreading factor (SF), and bandwidth in a larger environment and number of nodes.” 10. Reviewer #1 (Additional comments- One of the main abilities of LoRa is the ability to transmit data on multiple spreading factor which allows it to trade data rate and range of communication. So instead of using mesh configuration variable spreading factors can be used for reliable communication?) Response: Instead of using multiple SF, mesh networking is chosen in this experiment for the enhancement of network performance. The reason for not using multiple SF is explained on page 3, line 84-96, “One of the essential features of LoRa technology is Spreading Factor (SF), whereby multiple SF can be used to trade data rate, coverage range of the network, time on the air, receiver sensitivity, longer battery life (Centenaro et al., 2016). The drawback of this approach, it could reduce the throughput rate of the network and can be responsible for severe data collision because this setup requires a longer air time for data transmission.” and the reason for choosing mesh network architecture is explained on page 6, line 195204 “Our data packet routing policy used in this study is mesh topology. Node A collects data with sensors and sends it to the repeaters (Node B and Node C) instead of sending it directly to Node D. If both repeaters are within range, the nearest repeater will receive the packet first. The repeater Node ID will be included the packet and sent to Node D. If both repeater node receives and transmits the packets, Node D will receive the packet from Node C due to the specific interval settings.” REVIEWER #2 11. 
Reviewer #2 (Basic Reporting- Some sentences in the text are not clear, for example, 'The repeater nodes were configured to receive the data and forward it to the end node while the end node was programmed to receive data from the repeater node instead of receiving it directly from the master node?) Response: The line of the given example is simplified and rewritten on page 6, line 205214 “A master and an end node are used in our LoRa mesh topology. The configuration and workflow of these two nodes in coding is the same as the configuration is used for twonode-based network but the defined destination Node ID at master node and defined source Node ID at receiver end node are different. Repeater nodes are defined as NODE B & NODE C and the Node ID is defined as respectively 2 & 3. Inside the coding of master node, ID of Node B & C. Data transmitted from Node A will be accepted by Node B & C while discarded other nodes. Repeaters work as a transceiver to receive data from Node A and forward it to Node D. Then, the repeater node ID is added for the data packet will be transmitted to the end node. Interval time for Node B is set to 3000 milliseconds, while 1000 milliseconds for Node C. The network frequency is defined as 915 MHz for all nodes.” 12. Reviewer #2 (Basic Reporting - Figure 1 is not necessary for illustration. Figure 4 is too simple to illustrate the algorithm flow?) Response: Agreed, Figure-1 is removed from the manuscripts. The algorithm flow is updated. New figure number of the algorithm, Figure-3 on page 6. Updated Figure 3 13. Reviewer #2 (Basic Reporting - Please state the conclusion in more professional terms?) Response: Conclusion updated on page 17. “This work presents an evaluation of wireless data transmission and enhancement of network performance for DAQM. The study considered several critical scenarios and compared the performance using LoRa based two node-based network architecture and mesh network. The proposed mesh network consists of four nodes and two repeater nodes that can cover the full testbeds of DAQM to transmit sensed data of DAQM to the end node wirelessly for final analysis. The result shows that using a mesh network in this experiment has improved PDR at 237% and significantly improved coverage area by five times compared with a two node-based network. A small mesh network is implemented, including only one master node and two repeater nodes. This study's outcome provides a baseline for implementation in a larger environment considering the influence of the number of nodes.” 14. Reviewer #2 (Experimental design - However, the description of the method used is too simple and the description of the network structure is not clear?) Response: The description of the method is updated on Page 4, line 131-139. “This section describes the three phases of this research methodology, which consist of node & network architecture, experimental setup, and data collection & evaluation procedures. An Arduino Uno is programmed for data collection with sensors and transmit collected through LoRa in two different network architectures. The master node is set as a data collector in the duct environment and sender the collected data to the destination. The end node is configured as the network receiver and is responsible for visualizing collected data. Two additional repeater nodes are used in mesh network architecture, where each repeater is programmed as a transceiver. 
Four experimental scenarios are designed to evaluate the network performance in several key parameters.” The network structure is described in detail in new paragraph on page 7, line 220-224. “Our data packet routing policy used in this study is mesh topology. Node A collects data with sensors and sends it to the repeaters (Node B and Node C) instead of sending it directly to Node D. If both repeaters are within range, the nearest repeater will receive the packet first. The repeater Node ID will be included the packet and sent to Node D. If both repeater node receives and transmits the packets, Node D will receive the packet from Node C due to the specific interval settings. Then, the same data packet will be received from Node B. As the string begins with the Node ID, so from the received data string it can be identified that that packet is received from which repeater node, either Node B or Node C. Finally, the RSSI value are calculated and presented via the serial monitor. The workflow of nodes in the mesh network is shown in Figure 3.” 15. Reviewer #2 (Experimental design - For the technical description, please specify the function of the mobile phone?) Response: Function of mobile phone is described on page 4, line 160-163 “A Xiaomi Redmi Note 6 Pro android phone is used to operate RoboMaster remotely. An android app named robomaster.apk is installed in the phone to connect with the robot to control the movement of RoboMaster, real-time video monitoring during navigation in duct environment.” 16. Reviewer #2 (Experimental design - The steps in the method description are reasonable and the detailed information is insufficient. For example, the flow chart is too simple?) Response: Methodology is described in detail from page 4-11, and the flow chart also updated. Updated methodology section. 17. Reviewer #2 (Additional comments - It is suggested to merge the contents of '2. RELATED WORKS' into '1. INTRODUCTION' because they are all literature analysis.?) Response: Both sections are combined and rephrase. Page 1-3 REVIEWER #3 18. Reviewer #3 (Basic Reporting - Your related works need more detail. I suggest that you consider the other works related to LoRa mesh networks and routing protocols for the wireless mesh networks?) Response: New paragraph is added on page 2, line 55-66 “(Chomba et al., 2011) showed that Wi-Fi signal strength in a non-line-of-sight indoor environment is reduced to less than –100 dBm for a 30-m distance between nodes. In a study by (N. Hashim, N. F. A. M. Azmi1, 2014), it was observed that, for outdoor communication, the Wi-Fi signal lost after 150 meters. For indoor communication, the signal lost appeared after only 40 meters. In indoor situations, the Wi-Fi area coverage decreases due to obstacles in the indoor environment, which reduce the effectiveness of data transmission and result in path loss. Both Wi-Fi and Bluetooth technologies work based on radio wave transmission, and a radio wave cannot pass through metal (Smith & Smith, 2005), and (Hassan et al., 2016). In a study by Swain et al., (2018), ZigBee wireless technology sent sensed data from the underground mine to a monitoring station. In this experiment, the researchers experienced packet loss after 135 m and a sudden drop in the signal after 150 meters. Path loss for transmitted data is effortless and standard in a non-line-of-sight surrounded environment.” 19. Reviewer #3 (Basic Reporting - Figures 4, 5, 6 should be provided in high quality for example with 600dpi?) 
Response: Figure 4,5 and 6 are rebuilt with 600 dpi resolution 20. Reviewer #3 (Basic Reporting - In Figures 11, 12 the comparison of PDRs should be described in 2 columns as the comparison in the Fig. 13.?) Response: The figures are updated and updated explanation pattern for each figure. Page 13-16, line 405-515 “Figure 9, 10 and 11 are updated with the explanations” 21. Reviewer #3 (Experimental design - However, the author should consider the influence of the number of master-nodes and the number of repeater-nodes to PDR.?) Response: This issue is considered as the limitation of our work and explained inside the conclusion part on page 17, line 519-521. “Another potential area is to utilize using LoRa transmission power, spreading factor (SF), and bandwidth in a larger environment and number of nodes.” 22. Reviewer #3 (Additional comments- and the authors haven’t provided which routing protocols used in the LoRa mesh network. - The authors should provide more detail about the proposed mesh architecture?) Response: The data routing policy for both network architecture is included, and detail description of proposed mesh network architecture is added on page 5-6, line 187-198 “A mesh network topology is proposed and implemented to improve the data transmission efficiency. In the mesh network between master and end node, two additional nodes are programmed as repeater nodes. Each repeater node consists of one Arduino Uno, one Cytron LoRa RFM Shield and 5V power bank. The architecture of the proposed mesh network is presented in Figure 2” -END OF DOCUMENT- "
Here is a paper. Please give your review comments after reading it.
395
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background. Large scale metagenomic projects aim to extract biodiversity knowledge between different environmental conditions. Current methods for comparing microbial communities face important limitations. Those based on taxonomical or functional assignation rely on a small subset of the sequences that can be associated to known organisms. On the other hand, de novo methods, that compare the whole sets of sequences, either do not scale up on ambitious metagenomic projects or do not provide precise and exhaustive results.</ns0:p><ns0:p>Methods. These limitations motivated the development of a new de novo metagenomic comparative method, called Simka. This method computes a large collection of standard ecological distances by replacing species counts by k-mer counts. Simka scales-up today's metagenomic projects thanks to a new parallel k-mer counting strategy on multiple datasets.</ns0:p><ns0:p>Results. Experiments on public Human Microbiome Project datasets demonstrate that Simka captures the essential underlying biological structure. Simka was able to compute in a few hours both qualitative and quantitative ecological distances on hundreds of metagenomic samples (690 samples, 32 billions of reads). We also demonstrate that analyzing metagenomes at the k-mer level is highly correlated with extremely precise de novo comparison techniques which rely on all-versus-all sequences alignment strategy or which are based on taxonomic profiling.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>It is estimated that only a fraction of 10 &#8722;24 to 10 &#8722;22 of the total DNA on earth has been sequenced <ns0:ref type='bibr'>(Nature Rev. Microbiol. editorial, 2011)</ns0:ref>. In large scale metagenomics studies such as Tara Oceans <ns0:ref type='bibr' target='#b20'>(Karsenti et al., 2011)</ns0:ref> most of the sequenced data comes from unknown organisms and their short reads assembly remains an inaccessible task (see for instance results from the CAMI challenge http://cami-challenge.org/). When precise taxonomic assignation is not feasible, microbial ecosystems can nevertheless be compared on the basis of their diversity, inferred from metagenomic read sets. In this framework, the beta-diversity, introduced in <ns0:ref type='bibr' target='#b38'>(Whittaker, 1960)</ns0:ref>, measures the dissimilarities between communities in terms of species composition. Such compositions may be approximated by sequencing marker genes, such as the rRNA 16S in bacterial communities <ns0:ref type='bibr' target='#b24'>(Liles et al., 2003)</ns0:ref>, and clustering the sequences into Operational Taxonomic Units (OTU) or working species. However, marker genes surveys suffer from amplification and primer bias <ns0:ref type='bibr' target='#b6'>(Cai et al., 2013)</ns0:ref> and therefore may not capture the whole microbial diversity of a sample. Furthermore, even within the captured diversity, the marker may not be informative enough to discriminate between sub-species or even species strains <ns0:ref type='bibr' target='#b31'>(Piganeau et al., 2011)</ns0:ref>. 
Finally, this approach is impractical for whole metagenomic sets for at least two reasons: clustering reads into putative species is computationally costly and leaves out a large fraction of the reads <ns0:ref type='bibr' target='#b28'>(Nielsen et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this context, it is more practical to ditch species composition altogether and compare microbial communities using directly the sequence content of metagenomic read sets. This has first been performed by using Blast <ns0:ref type='bibr' target='#b0'>(Altschul et al., 1990)</ns0:ref> for comparing read content <ns0:ref type='bibr' target='#b41'>(Yooseph et al., 2007)</ns0:ref>. This approach was successful but can not scale up to large studies made up of dozens or hundreds of large read sets, such as those generated from Illumina sequencers.</ns0:p><ns0:p>In 2012, the Compareads method <ns0:ref type='bibr' target='#b25'>(Maillet et al., 2012)</ns0:ref> was proposed. The method compares the whole sequence content of two read sets. It introduced a rough approximation of read similarity based on the number of shared words of length k (k-mer, with k typically around 30) and used it for providing so defined similar reads between read sets. The number of similar reads was then used for computing a Jaccard distance between pairs of read sets. Commet <ns0:ref type='bibr' target='#b26'>(Maillet et al., 2014)</ns0:ref> is an extended version of Compareads. It better handles the comparison of large read sets and provides a read sub-set representation that facilitates result analyses and reduces the disk footprint. <ns0:ref type='bibr' target='#b34'>Seth et al. (2014)</ns0:ref> used the notion of shared k-mers between samples for estimating dataset similarities. This is a slightly different problem as this was used for retrieving from an indexed database, samples similar to a query sample. More recently, two additional methods were developed to represent a metagenome by a feature vector that is then used to compute pairwise similarity matrices between multiple samples. For both methods, features are based on the k-mer composition of samples, but with a feature representing more than one k-mer and using only a subset of k-mers to reduce the dimension <ns0:ref type='bibr'>(Ulyantsev et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Ondov et al., 2016)</ns0:ref>. However, the approaches for k-mer grouping and sub-sampling are radically different. In MetaFast <ns0:ref type='bibr'>(Ulyantsev et al., 2016)</ns0:ref>, the subset of k-mers is obtained by post-processing de novo assemblies performed for each metagenome. A feature represents then a set of k-mers belonging to a same assembly graph 'component'. The relative abundance of such component in each sample is then used to compute the Bray-Curtis dissimilarity measure.</ns0:p><ns0:p>In Mash <ns0:ref type='bibr' target='#b29'>(Ondov et al., 2016)</ns0:ref> a sub-sampling of the k-mers is performed using the MinHash <ns0:ref type='bibr' target='#b5'>(Broder, 1997)</ns0:ref> approach (keeping by default 1,000 k-mers per sample). The method outputs then a Jaccard index of the presence-absence of such k-mers in two samples.</ns0:p><ns0:p>All these reference-free methods share the use of k-mers as the fundamental unit used for comparing samples. 
Actually, k-mers are a natural unit for comparing communities: (1) sufficiently long k-mers are usually specific of a genome <ns0:ref type='bibr' target='#b15'>(Fofanov et al., 2004)</ns0:ref>, (2) k-mer frequency is linearly related to genome's abundance <ns0:ref type='bibr' target='#b40'>(Wu and Ye, 2011)</ns0:ref>, (3) k-mer aggregates organisms with very similar k-mer composition (e.g. related strains from the same bacterial species) without need for a classification of those organisms <ns0:ref type='bibr' target='#b36'>(Teeling et al., 2004)</ns0:ref>. <ns0:ref type='bibr'>Dubinkina et al. (2016)</ns0:ref> conducted an extensive comparison between k-mer-based distances and taxonomic ones (ie. based on taxonomic assignation against a reference database) for several large scale metagenomic projects. They demonstrate that k-mer-based distances are well correlated to taxonomic ones, and are therefore accurate enough to recover known biological structure, but also to uncover previously unknown biological features that were missed by reference-based approaches due to incompleteness of reference databases.</ns0:p><ns0:p>Importantly, the greater k, the more correlated these taxonomic and k-mer-based distances seem to be. However, the study is limited to values of k lower than 11 for computational reasons and the correlation for large values of k still needs to be evaluated.</ns0:p><ns0:p>Even if Commet and MetaFast approaches were designed to scale-up to large metagenomic read sets, their use on data generated by large scale projects is turning into a bottleneck in terms of time and/or memory requirements. By contrast, Mash outperforms by far all other methods in terms of computational resource usage. However, this frugality comes at the expense of result quality and precision: the output distances and Jaccard indexes do not take into account relative abundance information and are not computed exactly due to k-mer sub-sampling.</ns0:p><ns0:p>In this paper, we present Simka. Simka compares N metagenomic datasets based on their k-mers counts. It computes a large collection of distances classically used in ecology to compare communities. Computation is performed by replacing species counts by k-mer counts, for a large range of kmer sizes, including large ones (up to 30). Simka is, to our knowledge, the first method able to rapidly compute a full range of distances enabling the comparison of any number of datasets. This is performed by processing data on-the-fly (i.e. without storage of large temporary results). With the exception of Mash that is, thanks to sub-sampling, approximately two to five time faster, Simka outperforms state-of-the-art read comparison methods in terms of computational needs. For instance, Simka ran on 690 samples from the Human Microbiome Project (HMP) (Human Microbiome Project Consortium, 2012a) (totalling 32 billion reads) in less than 10 hours and using no more than 70 GB RAM.</ns0:p><ns0:p>The contributions of this manuscript are three-fold. First we propose a new method for efficiently counting k-mers from a large number of metagenomic samples. The usefulness of such counting is not limited to comparative metagenomics and may have applications in many other fields. Second, we show how to derive a large number of ecological distances from k-mer counts. 
And third, we show on real datasets that k-mer-based distances are highly correlated to taxonomic distances: they therefore capture the same underlying structure and lead to the same conclusions.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>The proposed algorithm enables to compute dissimilarity measures between read sets. In the following, in order to simplify the reading, we use the term 'distance' to refer to this measure.</ns0:p></ns0:div> <ns0:div><ns0:head>Overview</ns0:head><ns0:p>Given N metagenomic datasets, denoted as S 1 , S 2 , S i , ...S N , the objective is to provide a N &#215; N distance matrix D where D i,j represents an ecological distance between datasets S i and S j . Such possible distances are listed in Table <ns0:ref type='table'>1</ns0:ref>. The computation of the distance matrix can be theoretically decomposed into two distinct steps:</ns0:p><ns0:p>1. k-mer count. Each dataset is represented as a set of discriminant features, in our case, k-mer counts. More precisely, a k-mer count matrix KC of size W &#215; N is computed. W is the number of distinct k-mer among all the datasets. KC i,j represents the number of times a k-mer i is present in the dataset S j .</ns0:p><ns0:p>2. distance computation. Based on the k-mer count information, the distance matrix D is computed. Actually, many ecological distances (cf Table <ns0:ref type='table'>1</ns0:ref>) can be derived from matrix KC when replacing species counts by k-mer counts.</ns0:p><ns0:p>Actually, Simka does not require to have the full KC matrix to start the distance computation. But for sake of simplicity, we will first consider this matrix to be available.</ns0:p><ns0:p>The k-mer count step splits all the reads of the datasets into k-mers and performs a global count. This can be done by counting individually k-mers in each dataset, then merging the overall k-mer counts. The output is the matrix KC (of size W &#215; N ). Efficient algorithms, such as KMC2 <ns0:ref type='bibr' target='#b11'>(Deorowicz et al., 2015)</ns0:ref>, have recently been developed to count all the occurrences of distinct k-mers in a read dataset, allowing the computation to be executed in a reasonable amount of time and memory even on very large datasets. However, the main drawback of this approach is the huge main memory space it requires which is computed as follow: M em KC = W s * (8 + 4N ) bytes, with W s the number of distinct k-mers, N the number of samples, and 8 and 4 the number of bytes required to store respectively 31-mers and a k-mer count. For example, experiments on the HMP <ns0:ref type='bibr'>(Human Microbiome Project Consortium, 2012a)</ns0:ref> datasets (690 datasets containing on average 45 millions of reads each) would require a storage space of 260T B for the matrix KC.</ns0:p><ns0:p>However, a careful look at the definition of ecological distances (Table <ns0:ref type='table'>1</ns0:ref>)</ns0:p><ns0:p>shows that, up to some final transformation, they are all additive over the k-mers. Independent contributions to the distance can thus be computed in parallel from disjoint sets of k-mers and aggregated later on to construct the final distance matrix. Furthermore, each independent contribution can itself be constructed in an iterative way by receiving lines of the KC matrix, called abundance vectors, one at a time. 
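Before detailing the abundance vectors, the memory formula given earlier makes the infeasibility of holding the full KC matrix concrete. The sketch below is illustrative only (the value of Ws is a round order of magnitude we chose for the example; the paper itself reports ~260 TB for the full HMP project).

#include <cstdio>

// Memory needed to hold the full k-mer count matrix KC:
// Mem_KC = Ws * (8 + 4*N) bytes (8 bytes per distinct 31-mer, 4 bytes per count).
double memKCBytes(double Ws, double N) { return Ws * (8.0 + 4.0 * N); }

int main() {
    // Illustrative order of magnitude: ~1e11 distinct k-mers over N = 690 HMP samples.
    double Ws = 1e11, N = 690;
    std::printf("Mem_KC ~ %.0f TB\n", memKCBytes(Ws, N) / 1e12);  // hundreds of terabytes
    return 0;
}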
The abundance vector of a specific k-mer simply consists of its N counts in the N datasets.</ns0:p><ns0:p>To sum up, instead of computing the complete k-mer count matrix KC, the alternative computation scheme we propose is to generate successive abundance vectors from which independent contributions to the distances can be iteratively updated in parallel. The great advantage is that the huge k-mer count matrix KC does not need to be stored anymore. However, this approach requires a new strategy to generate abundance vectors. We propose and describe below a new efficient multiset k-mer counting algorithm (called MKC) that can be highly parallelized on large computing resources infrastructures. As illustrated Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, Simka uses abundance vectors generated by MKC for computing ecological distances. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Multiset k-mer Counting</ns0:p><ns0:p>Starting from N datasets of reads, the aim is to generate abundance vectors that will feed the ecological distance computation step. This task is divided into two phases:</ns0:p><ns0:p>1. Sorting Count, 2. Merging Count.</ns0:p><ns0:p>Sorting Count Each k-mer of a dataset is extracted and its canonical representation is stored (the canonical representation of a k-mer is the smallest lexicographic value between the k-mer and its reverse complement). Canonical k-mers are then sorted in lexicographical order. Distinct k-mers can thus be identified and their number of occurrences computed.</ns0:p><ns0:p>As the number of distinct k-mers is generally huge, the sorting step is divided into two sub-tasks and proceeds as follows: the k-mers are first separated into P partitions, each stored on disk. After this preliminary task, each partition is sorted and counted independently, and stored again on disk.</ns0:p><ns0:p>Conceptually, at the end of the sorting count process, we dispose of N &#215; P sorted partitions. As the same distribution function is applied to all datasets, a partition P i contains a specific subset of k-mers common to all datasets.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>-A illustrates the Sorting Count phase.</ns0:p><ns0:p>The Sorting Count phase has a high parallelism potential. A first parallelism level is given by the independent counts of each dataset. N processes can thus be run in parallel, each one dealing with a specific dataset. A second level is given by the fine grained parallelism implemented in software such as DSK <ns0:ref type='bibr' target='#b32'>(Rizk et al., 2013)</ns0:ref> or KMC2 <ns0:ref type='bibr' target='#b11'>(Deorowicz et al., 2015)</ns0:ref> that intensively exploit today multicore processor capabilities. Thus, the overall Sorting Count process is especially suited for grid infrastructures made of hundred of nodes, and where each node implements 8 or 16-core systems. Furthermore, to limit disk bandwidth and avoid I/O bottleneck, partitions are compressed. A dictionary-based approach such as the one provided in zlib <ns0:ref type='bibr' target='#b12'>(Deutsch and Gailly, 1996)</ns0:ref> is used. This type of compression is very well suited here since it efficiently packs the high redundancy of sorted k-mers.</ns0:p><ns0:p>Merging Count Here, the data partitioning introduced in the previous step is advantageously used to generate abundance vectors. The N files associated to a partition P i , are taken as input of a merging process. 
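As an illustration of the partitioning principle used in the Sorting Count phase (this is not Simka's, DSK's or KMC2's actual code; the hash function and string encoding are placeholders), the key point is that every dataset applies the same distribution function, so a given k-mer always lands in the same partition index in all datasets and the N files of a partition can later be merged together.

#include <cstdint>
#include <string>
#include <algorithm>
#include <functional>

// Reverse complement of a DNA string (A<->T, C<->G).
std::string reverseComplement(const std::string& s) {
    std::string rc(s.rbegin(), s.rend());
    for (char& c : rc) {
        switch (c) {
            case 'A': c = 'T'; break; case 'T': c = 'A'; break;
            case 'C': c = 'G'; break; case 'G': c = 'C'; break;
        }
    }
    return rc;
}

// Canonical form: lexicographically smallest of the k-mer and its reverse complement.
std::string canonical(const std::string& kmer) {
    return std::min(kmer, reverseComplement(kmer));
}

// Partition index: the SAME deterministic function is used for every dataset,
// so partition Pi holds the same subset of k-mers across all N datasets.
std::size_t partitionOf(const std::string& kmer, std::size_t P) {
    return std::hash<std::string>{}(canonical(kmer)) % P;
}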
These files contain k-mer counts sorted in lexicographical order. A Merge-Sort algorithm can thus be efficiently applied to directly generate abundance vectors.</ns0:p><ns0:p>In that scheme, P processes can be run independently, resulting in the generation of P abundance vectors in parallel and allowing P contributions of the ecological distance to be computed simultaneously. Note that the abundance vectors do not need to be stored. They are only used as input streams for the next step. k-mer abundance filter. Distinct k-mers with very low abundance usually come from sequencing errors. As a matter of fact, a single sequencing error creates up to k erroneous distinct k-mers. Filtering out these k-mers speeds up the Simka process, as it greatly reduces the overall number of distinct k-mers, but it may also impact the content of the distance matrix. This point is evaluated and discussed in the result section.</ns0:p><ns0:p>This filter is activated during the count process. Only k-mers whose abundance is equal to or greater than a given abundance threshold are kept. By default the threshold is set to 2. The k-mers that pass the filter are called 'solid k-mers'.</ns0:p></ns0:div> <ns0:div><ns0:head>Ecological distance computation</ns0:head><ns0:p>Simka computes a collection of distances for all pairs of datasets. As detailed in the previous section, abundance vectors are used as input data. For the sake of simplicity, we first explain the computation of the Bray-Curtis distance. All other distances, presented later on, can be computed in the same way, with only small adaptations.</ns0:p><ns0:p>Computing the Bray-Curtis distance. The Bray-Curtis distance is given by the following equation:</ns0:p><ns0:formula xml:id='formula_0'>BrayCurtis_Ab(S_i, S_j) = 1 − 2 · Σ_{w∈Si∩Sj} min(N_Si(w), N_Sj(w)) / (Σ_{w∈Si} N_Si(w) + Σ_{w∈Sj} N_Sj(w))   (1)</ns0:formula><ns0:p>where w is a k-mer and N_Si(w) is the abundance of w in the dataset S_i. We consider here that w ∈ S_i ∩ S_j if N_Si(w) > 0 and N_Sj(w) > 0.</ns0:p><ns0:p>The equation involves marginal (or dataset-specific) terms (i.e. Σ_{w∈Si} N_Si(w) is the total amount of k-mers in dataset S_i) acting as normalizing constants, and crossed terms that capture the (dis)similarity between datasets (i.e. Σ_{w∈Si∩Sj} min(N_Si(w), N_Sj(w)) is the total amount of k-mers in the intersection of the datasets S_i and S_j). Marginal and crossed terms are then combined to compute the final distance.</ns0:p><ns0:p>Algorithm 1 shows that it is straightforward to compute the distance matrix between N datasets from the abundance vectors. Inputs of this algorithm are provided by the Multiple k-mer Counting algorithm (MKC). These are the P streams of abundance vectors and the marginal terms of the distance, i.e. the number of k-mers in each dataset, determined during the first step of the MKC which counts the k-mers.</ns0:p><ns0:p>A matrix, denoted M∩, of dimension N × N is initialized (step 1) to record the final value of the crossed terms of each pair of datasets. P independent processes are run (step 2) to compute P partial crossed term matrices, denoted M∩part (step 3), in parallel. Each process iterates over its abundance vector stream (step 4). For each abundance vector, we loop over each possible pair of datasets (steps 5-6). The matrix M∩part is updated (step 8) if the k-mer is shared, meaning that it has positive abundance in both datasets S_i and S_j (step 7).
Since a distance matrix is symmetric with null diagonal, we limit the computation to the upper triangular part of the matrix M∩part. The current abundance vector is then released. Each process writes its matrix M∩part on the disk when its stream is done (step 9). When all streams are done, the algorithm reads each written M∩part and accumulates it into M∩ (steps 10-11). The last loop (steps 13 to 16) computes the Bray-Curtis distance for each pair of datasets and fills the distance matrix reported by Simka.</ns0:p><ns0:formula xml:id='formula_1'>
5   for i ← 0 to N − 1 do
6       for j ← i + 1 to N − 1 do
7           if v[i] > 0 and v[j] > 0 then
8               M∩part[i, j] ← M∩part[i, j] + min(v[i], v[j])
9   Write M∩part to disk
10  foreach written matrix M∩part do
11      M∩ ← M∩ + M∩part
12  Dist ← empty squared matrix of size N   // final distance matrix
13  for i ← 0 to N − 1 do
14      for j ← i + 1 to N − 1 do
15          Dist[i, j] = 1 − 2 * M∩[i, j] / (V∪[i] + V∪[j])
16          Dist[j, i] = 1 − 2 * M∩[i, j] / (V∪[i] + V∪[j])
17  return Dist
</ns0:formula><ns0:p>The amount of abundance vectors streamed by the MKC is equal to W_s, which is also the total amount of distinct solid k-mers in the N datasets. This algorithm thus has a time complexity of O(W_s × N²).</ns0:p><ns0:p>Other ecological distances. The distance introduced in Eq. 1 is a single example of ecological distance. There exist numerous other ecological distances that can be broadly classified into two categories (see <ns0:ref type='bibr' target='#b23'>Legendre and De Cáceres (2013)</ns0:ref> for a finer classification): distances based on presence-absence data (hereafter called qualitative) and distances based on proper abundance data (hereafter called quantitative). Qualitative distances are more sensitive to factors that affect presence-absence of organisms (such as pH, salinity, depth, humidity, absence of light, etc) and are therefore useful to study bioregions. Quantitative distances focus on factors that affect relative changes (seasonal changes, nutrient availability, concentration of oxygen, depth, diet, disease, etc) and are therefore useful to monitor communities over time or along an environmental gradient. Note that some factors, such as pH, are likely to affect both presence-absence (for large changes in pH) and relative abundances (for small changes in pH). Algorithmically, most ecological distances, including most of those mentioned in <ns0:ref type='bibr' target='#b23'>Legendre and De Cáceres (2013)</ns0:ref>, can be expressed for two datasets S_i and S_j as:</ns0:p><ns0:formula xml:id='formula_2'>Distance(S_i, S_j) = g( Σ_{w∈Si∪Sj} f(N_Si(w), N_Sj(w), C_Si, C_Sj) )   (2)</ns0:formula><ns0:p>where g and f are simple functions, and C_Si is a marginal (i.e. dataset-specific) term of dataset S_i, usually of size 1 (i.e. a scalar). In most distances, C_Si is simply the total number of k-mers in S_i. By contrast, the value of f corresponds
For instance, for the abundance-based Bray-Curtis distance of Eq. 1, we have C Si = w&#8712;Si N Si (w), g(x) = 1 &#8722; 2x and f (x, y, X, Y ) = min(x, y)/(X + Y ). Those distances can be computed in a single pass over the data using a slightly modified variant of Algorithm 1. The marginal terms C Si are computed during the first step of the MKC which counts the k-mers of each dataset. The crossed terms involving f are computed and summed in steps 7-8 (but exact instructions depend on the nature of f ). Finally, the actual distances are computed in steps 15-16 and depend on both f and g.</ns0:p><ns0:p>Qualitative distances form a special case of ecological distances: they can all be expressed in terms of quantities a, b and c where a is the number of distinct k-mers shared between datasets S i and S j , b is the number of distinct k-mers specific to dataset S i and c is the number of distinct k-mers specific to dataset S j . Those distances easily fit in the previous framework as</ns0:p><ns0:formula xml:id='formula_3'>a = w&#8712;Si&#8745;Sj 1 {N S i (w)N S j (w)&gt;0} , C Si = w&#8712;Si 1 {N S i (w)&gt;0} = a + b and</ns0:formula><ns0:p>similarly C Sj = a + c. Therefore, a is a crossed term and b and c can be deduced from a and the marginal terms.</ns0:p><ns0:p>In the same vein, <ns0:ref type='bibr' target='#b8'>Chao et al. (2006)</ns0:ref> introduced variations of presence-absence distances incorporating abundance information to account for unobserved species. The main idea is to replace 'hard' quantities such as a/(a + b), the fraction of distinct k-mers from S i shared with S j , by probabilistic 'soft' ones: here the probability U &#8712; [0, 1] that a k-mer from S i is also found in S j . Similarly, the 'hard' fraction a/(a + c) of distinct k-mers from S j shared with S i is replaced by the 'soft' probability V that a k-mer from S j is also found in S i . U and V play the same role as a, b and c do in qualitative distances and are sufficient to compute the variants named AB-Jaccard, AB-Ochiai and AB-Sorensen. However and unlike the quantities a, b c, which can be observed from the data, U and V are not known in practice and must be estimated from the data. <ns0:ref type='bibr' target='#b8'>Chao et al. (2006)</ns0:ref> proposed several estimates for U and V . The most elaborate ones attempt to correct for differences in sampling depths and unobserved species by considering the complete k-mer counts vector of a sample. Those estimates are unfortunately untractable in our case as we stream only a few k-mer counts at a time. Instead we resort to the simplest estimates presented in <ns0:ref type='bibr' target='#b8'>Chao et al. (2006)</ns0:ref>, which lend themselves well to the additive and distributed nature of Simka:</ns0:p><ns0:formula xml:id='formula_4'>U = Y SiSj /C Si and V = Y Sj Si /C Sj where Y SiSj = w&#8712;Si&#8745;Sj N Si (w)1 {N S j (w)&gt;0} and C Si = w&#8712;Si N Si (w). Note that Y SiSj corresponds to crossed terms and is asymmetric, i.e. Y SiSj = Y Sj Si .</ns0:formula><ns0:p>Intuitively, U is the fraction of k-mers (not distinct anymore) from S i also found in S j and therefore gives more weights to abundant k-mers that its qualitative counterpart a/(a + b).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> gives the definitions of the collection of distances computed by Simka while replacing species counts by k-mer counts. 
These are qualitative, quantitative and abundance-based variants of qualitative ecological distances.</ns0:p><ns0:p>The table also provides their expression in terms of C i , f and g, adopting the notations of Eq. 2.</ns0:p><ns0:p>Finally, note that the additive nature of the computed distances over k-mers is instrumental in achieving a linear time complexity (in W s , the amount of distinct solid k-mers) and in the parallel nature of the algorithm. The algorithm is therefore not amenable to other, more complex classes of distances that account for ecological similarities between species <ns0:ref type='bibr' target='#b30'>(Pavoine et al., 2011)</ns0:ref>, or edit distances between k-mers as those complex distances require all versus all k-mer comparisons.</ns0:p></ns0:div> <ns0:div><ns0:head>Name</ns0:head><ns0:p>Definition</ns0:p><ns0:formula xml:id='formula_5'>CS i f (x, y, X, Y ) g(x) Quantitative distances Chord 2 &#8722; 2 w NS i (w)NS j (w) CS i CS j w NS i (w) 2 xy XY &#8730; 2 &#8722; 2x Hellinger 2 &#8722; 2 w NS i (w)NS j (w) CS i CS j w NS i (w) &#8730; xy &#8730; XY &#8730; 2 &#8722; 2x Whittaker 1 2 w NS i (w)CS j &#8722; NS j (w)CS i CS i CS j w NS i (w) |xY &#8722; yX| XY x 2 Bray-Curtis 1 &#8722; 2 w min(NS i (w), NS j (w)) CS i + CS j w NS i (w) min(x, y) X + Y 1 &#8722; 2x Kulczynski 1 &#8722; 1 2 w (CS i + CS j ) min(NS i (w), NS j (w)) CS i CS j w NS i (w) (X + Y ) min(x, y) XY 1 &#8722; x 2 Jensen-Shannon 1 2 w NS i (w) CS i log 2CS j NS i (w) CS j NS i (w) + CS i NS j (w) + NS j (w) CS j log 2CS i NS j (w) CS j NS i (w) + CS i NS j (w) w NS i (w) x X log 2xY xY + yX + y Y log 2yX xY + yX x 2 Canberra 1 a + b + c w NS i (w) &#8722; NS j (w) NS i (w) + NS j (w) &#8722; x &#8722; y x + y 1 a + b + c x Qualitative distances Chord/Hellinger 2 1 &#8722; a (a + b)(a + c) &#8722; &#8722; &#8722; Whittaker 1 2 b a + b + c a + c + a a + b &#8722; a a + c &#8722; &#8722; &#8722; Bray-Curtis/Sorensen b + c 2a + b + c &#8722; &#8722; &#8722; Kulczynski 1 &#8722; 1 2 a a + b + a a + c &#8722; &#8722; &#8722; Ochiai 1 &#8722; a (a + b)(a + c) &#8722; &#8722; &#8722; Jaccard b + c a + b + c &#8722; &#8722; &#8722; Abundance-based (AB) variants of qualitative distances AB-Jaccard 1 &#8722; U V U + V &#8722; U V &#8722; &#8722; &#8722; AB-Ochiai 1 &#8722; &#8730; U V &#8722; &#8722; &#8722; AB-Sorensen 1 &#8722; 2U V U + V &#8722; &#8722; &#8722;</ns0:formula><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Definition of some classical ecological distances computed by Simka. All quantitative distances can be expressed in terms of CS, f = f (x, y, X, Y ) and g = g(x), using the notations of Eq. 2, and computed in one pass. Qualitative ecological distances (resp. AB-variants of qualitative distances) can also be computed in a single pass over the data by computing first a, b and c (resp. U and V ). See main text for the definition of a, b, c, U and V .</ns0:p></ns0:div> <ns0:div><ns0:head>Implementation</ns0:head><ns0:p>Simka is based on the GATB library <ns0:ref type='bibr' target='#b13'>(Drezen et al., 2014)</ns0:ref>, a C++ library optimized to handle very large sets of k-mers. It includes a powerful implementation of the sorting count algorithm with the latest improvements in terms of k-mer counting introduced by <ns0:ref type='bibr' target='#b11'>Deorowicz et al. (2015)</ns0:ref>.</ns0:p><ns0:p>Simka is usable on standard computers and has also been entirely parallelized for grid or cloud systems. 
It automatically splits the process into jobs according to the available number of nodes and cores. These jobs are sent to the job scheduling system, while the overall synchronization is performed at the data level.</ns0:p><ns0:p>Simka is an open source software, distributed under GNU affero GPL License, available for download at https://gatb.inria.fr/software/simka/.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>First, Simka performances are evaluated in terms of computation time, memory footprint and disk usage and compared to those of other state of the art methods. Then, the Simka distances are evaluated with respect to de novo and reference-based distances and with respect to known biological results.</ns0:p><ns0:p>We </ns0:p></ns0:div> <ns0:div><ns0:head>Performance Evaluation</ns0:head><ns0:p>Performances on small datasets The scalability of Simka was first evaluated on small subsets of the HMP project, where the number of compared samples varied from 2 to 40. When computing a simple distance, such as Bray-Curtis for instance, Simka running time shows a linear behavior with the number of compared samples (Figure <ns0:ref type='figure' target='#fig_3'>3-A</ns0:ref>). As expected, counting the kmers for each sample (MKC-count) consumes most of the time. This task has a theoretical time complexity linear with the number of kmers, and thus the number of samples, and this explains the observed linear behavior of the overall program. In fact, most steps of Simka, namely MKC-count, MKC-merge and simple distance computation, show a linear behavior between running time and the number of compared samples. The only exception is the computation of complex distances, where the time devoted to this task increases quadratically.</ns0:p><ns0:p>Both simple and complex distance computation algorithms have theoretical worst case quadratic time complexity relatively to N (the number of samples).</ns0:p><ns0:p>The difference of execution time comes then from the amount of operations required, in practice, to calculate the crossed terms of the distances. For a given abundance vector, the simple distances only need to be updated for each pair (S i , S j ) such that N Si &gt; 0 and N Sj &gt; 0 whereas complex distances need to be updated for each pair such that N Si &gt; 0 or N Sj &gt; 0, entailing a lot more In summary, Simka and Mash seems to be the only tools able to deal with very large metagenomics datasets, such as the full HMP project.</ns0:p><ns0:p>Performances on the full HMP samples Remarkably, on the full dataset of the HMP project (690 samples), the overall computation time of Simka is about 14 hours with very low memory requirements (see Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). By comparison, Metafast ran out of memory (it also ran out of memory while considering only a sub-sample composed of the 138 HMP gut samples) and</ns0:p><ns0:p>Commet took several days to compute one 1-vs-all distance matrix and therefore would require years of computation to achieve the N &#215; N distance matrix. Conversely, Mash ran in less than 5 hours (255 min) and is faster than Simka. This was expected since Mash outputs an approximation of a simple qualitative distance, based on a sub-sample of 10,000 k-mers. By comparison, Simka computes numerous distances, including quantitative ones, over 15 billion distinct k-mers (see Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). 
<ns0:p>Note that Simka is also designed for coarse-grain parallelism: on a 200-CPU platform, this full-HMP computation took less than 10 hours.</ns0:p><ns0:p>These results were obtained with default parameters, namely filtering out k-mers seen only once. On this dataset, this filter removes only 5% of the data: solid k-mers (k-mers seen at least twice) account for 95% of all base pairs of the whole dataset (see Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). Interestingly, however, when speaking in terms of distinct k-mers, solid distinct k-mers represent less than half of all distinct k-mers before merging across all samples, and only 15% after merging. The influence of the k-mer size was also assessed (parameters and command lines are given in Table <ns0:ref type='table'>S1</ns0:ref>): when k increases, the disk usage grows, whereas the computation time and the memory usage stay constant (see Fig. <ns0:ref type='figure' target='#fig_2'>S1</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation of the distances</ns0:head><ns0:p>We evaluate the quality of the distances computed by Simka by answering two questions. First, are they similar to distances between read sets computed using other approaches? Second, do they recover the known biological structure of the HMP samples? For the first evaluation, two types of other approaches are considered: either de novo ones (similar to Simka but based on read comparisons), or taxonomic distances, i.e. approaches based on a reference database.</ns0:p><ns0:p>Correlation with read-based approaches. In this section, we focus on comparing the Simka k-mer-based distance to two read-based approaches: Commet <ns0:ref type='bibr' target='#b26'>(Maillet et al., 2014)</ns0:ref> and an alignment-based method using BLAT <ns0:ref type='bibr' target='#b21'>(Kent, 2002)</ns0:ref>. Both of these read-based approaches define and use a notion of read similarity. They derive the percentage of reads from one sample similar to at least one read from the other sample as a quantitative similarity measure between samples. Commet considers that two reads are similar if they share at least t non-overlapping k-mers (here t = 2, k = 33). For BLAT alignments, similarity was defined based on several identity thresholds: two reads were considered similar if their alignment spanned at least 70 nucleotides and had a percentage of identity higher than 92%, 95% or 98%. For ease of comparison, the Simka distance was transformed into a similarity measure, namely the percentage of shared k-mers (see Article S1 for details of the transformation).</ns0:p><ns0:p>Looking at the correlation with Commet is interesting because this tool uses a heuristic based on shared k-mers, but its final distance is expressed in terms of read counts.</ns0:p>
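<ns0:p>The exact distance-to-similarity transformation is given in Article S1; as a hedged illustration only, one natural choice for an abundance-weighted Bray-Curtis distance is to report its complement as a percentage of shared k-mers, as sketched below (the function name is ours and this is an assumption, not the authors' formula).</ns0:p>

```python
def bray_curtis_to_percent_shared(distance):
    # For Bray-Curtis, 1 - distance equals the (abundance-weighted) fraction
    # of k-mers shared by the two samples, reported here as a percentage.
    return 100.0 * (1.0 - distance)

print(bray_curtis_to_percent_shared(0.35))  # 65.0 (% of shared k-mers)
```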
<ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, on a dataset of 50 samples from the HMP project, Simka and Commet similarity measures are extremely well correlated (Spearman correlation coefficient r = 0.989).
Similarly, clear correlations (r &gt; 0.89) are also observed between the percentage of matched k-mers and the percentage of similar reads as defined by BLAT alignments (Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>). Interestingly, the correlation depends on the k-mer size and the identity threshold used for BLAT: larger k-mer sizes correlate better with higher identity thresholds and vice versa. The highest correlation is 0.987, obtained for Simka with k = 21 compared to BLAT results with 95% identity.</ns0:p><ns0:p>These results demonstrate that we can safely replace read-based metrics with a k-mer-based one, which enables us to save huge amounts of time when working on large metagenomics projects. Moreover, the k-mer size parameter seems to be the counterpart of the identity threshold of alignment-based methods if one wants to tune the taxonomic precision level of the comparisons.</ns0:p><ns0:p>A traditional way of comparing metagenomics samples relies on so-called taxonomic distances that are based on sequence assignation to taxons by mapping to reference databases. To compare Simka to such a traditional reference-based method, we used the HMP gut samples, a well-studied dataset comprising 138 samples. The HMP consortium provides a quantitative taxonomic profile for each sample on its website. These profiles were obtained by mapping the reads on a reference genome catalog at 80% of identity. From these profiles, we computed the Bray-Curtis distance, later used as a reference.</ns0:p></ns0:div> <ns0:div><ns0:head>Correlation with taxonomic distances on the gut sample</ns0:head><ns0:p>The complete protocol to obtain taxonomic distances is given in Article S1.</ns0:p><ns0:p>Only Mash and Simka results have been considered for this experiment. As previously mentioned, Commet and MetaFast could not scale to this dataset.</ns0:p><ns0:p>Simka k-mer-based distance appears very well correlated to the traditional taxonomic distance (r = 0.89, see Fig. <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>). On this figure, one may also notice that Simka measures are robust over the whole range of distances.</ns0:p>
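<ns0:p>For reference, the agreement between two distance matrices reported above can be quantified with a rank correlation over all sample pairs; the following sketch (ours) uses SciPy's Spearman correlation on the upper-triangular entries. A Mantel-type permutation test would be needed for proper p-values on distance matrices; the sketch only reports the raw rank correlation.</ns0:p>

```python
import numpy as np
from scipy.stats import spearmanr

def distance_matrix_correlation(d1, d2):
    """d1, d2: symmetric N x N distance matrices over the same samples."""
    iu = np.triu_indices_from(d1, k=1)     # each sample pair counted once
    rho, pvalue = spearmanr(d1[iu], d2[iu])
    return rho, pvalue

# toy example with three samples
simka_bc = np.array([[0.0, 0.2, 0.6], [0.2, 0.0, 0.5], [0.6, 0.5, 0.0]])
taxo_bc  = np.array([[0.0, 0.1, 0.7], [0.1, 0.0, 0.4], [0.7, 0.4, 0.0]])
print(distance_matrix_correlation(simka_bc, taxo_bc))
```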
<ns0:p>On the other hand, Mash distances correlate badly with taxonomic ones (r = 0.51, see Fig. <ns0:ref type='figure' target='#fig_3'>S3</ns0:ref> and the comparison protocol in Article S1). This is probably due to the fact that gut samples differ more in terms of relative abundances of microbes than in terms of composition (see next section). As Mash can only output a qualitative distance, it is ill-equipped to deal with such a case. Additionally, as shown in Fig. <ns0:ref type='figure' target='#fig_3'>S3</ns0:ref>, this conclusion holds for the HMP samples from the other body sites for which high-quality taxonomic distances are available.</ns0:p><ns0:p>Interestingly, these Simka results are robust to the k-mer filtering option and to the k-mer size, as long as k is larger than 15, with an optimal correlation obtained for k = 21 (see Fig. <ns0:ref type='figure' target='#fig_6'>S4</ns0:ref>). Notably, with very low values of k (k &lt; 15), the correlation drops (r = 0.5 for k = 12). This completes previous results suggesting that the larger the k, the better the correlation, which were limited to k values smaller than 11 <ns0:ref type='bibr'>(Dubinkina et al., 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Visualizing the structure of the HMP samples</ns0:head><ns0:p>We propose to visualize the structure of the HMP samples and see if Simka is able to reproduce known biological results. To easily visualize those structures, we used Principal Coordinate Analysis (PCoA) <ns0:ref type='bibr' target='#b2'>(Borg and Groenen, 2013)</ns0:ref> to get a 2-D representation of the distance matrix and of the samples: distances in the 2-D plane optimally preserve the values of the distance matrix.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows the PCoA of the quantitative Ochiai distance computed by Simka on the full HMP samples. We can see that the samples are clearly segregated by body site. This is in line with results from studies of the HMP consortium (Human Microbiome Project Consortium, 2012a; <ns0:ref type='bibr' target='#b9'>Costello et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b22'>Koren et al., 2013)</ns0:ref>. Moreover, one may notice that different distances can lead to different distributions of the samples, with some clusters being more or less discriminated (see Fig. <ns0:ref type='figure' target='#fig_8'>S5</ns0:ref>). This confirms that it is important to conduct analyses using several distances, as suggested in <ns0:ref type='bibr' target='#b22'>(Koren et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b23'>Legendre and De C&#225;ceres, 2013)</ns0:ref>, as different distances may capture different features of the samples.</ns0:p><ns0:p>We conducted the same experiment on the 138 gut samples from the HMP project. <ns0:ref type='bibr' target='#b1'>Arumugam et al. (2011)</ns0:ref> showed that the gut samples are organized in three groups, known as enterotypes, and characterized by the abundance of a few genera: Bacteroides, Prevotella and genera from the Ruminococcaceae family. The original enterotypes were built from Jensen-Shannon distances on taxonomic profiles. Fig. <ns0:ref type='figure' target='#fig_13'>8</ns0:ref> shows the PCoA of the Jensen-Shannon distances obtained with Simka. Mapping the relative abundance of those genera in each sample, as provided by the HMP consortium, on the 2-D representation reveals a clear gradient in the PCoA space.</ns0:p>
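<ns0:p>For completeness, the PCoA used above (classical multidimensional scaling) can be sketched as follows; this is our own minimal NumPy version, not the software used by the authors, and dedicated packages such as scikit-bio provide more complete implementations (for instance with corrections for negative eigenvalues).</ns0:p>

```python
import numpy as np

def pcoa(distance_matrix, n_axes=2):
    D = np.asarray(distance_matrix, dtype=float)
    n = D.shape[0]
    # Gower double-centering of the squared distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]               # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    coords = eigvec[:, :n_axes] * np.sqrt(np.maximum(eigval[:n_axes], 0.0))
    explained = 100.0 * eigval[:n_axes] / eigval[eigval > 0].sum()
    return coords, explained  # 2-D coordinates and % of variance per axis

D = np.array([[0.0, 0.3, 0.7],
              [0.3, 0.0, 0.6],
              [0.7, 0.6, 0.0]])
coords, explained = pcoa(D)
print(coords)      # one row of coordinates per sample
print(explained)   # percentages such as those shown on the PCoA axes
```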
<ns0:p>Simka therefore recovers biological features it had no direct access to: here, the fact that gut samples are structured along gradients of Bacteroides, Prevotella and Ruminococcaceae. The fact that Simka is able to capture such a subtle signal raises hope of drawing new interesting biological insights from the data, in particular for those metagenomics projects lacking good references (soil or seawater, for instance).</ns0:p></ns0:div> <ns0:div><ns0:head>Discussions</ns0:head><ns0:p>In this article, we introduced Simka, a new method for computing a collection of ecological distances, based on k-mer composition, between many large metagenomic datasets. This was made possible thanks to the Multiple k-mer Count algorithm (MKC), a new strategy that counts k-mers with state-of-the-art time, memory and disk performances. The novelty of this strategy is that it counts simultaneously k-mers from any number of datasets, and that it represents results as a stream of data, providing counts in each dataset, k-mer per k-mer.</ns0:p><ns0:p>The distance computation has a time complexity in O(W &#215; N^2), where W is the number of considered distinct k-mers and N is the number of input samples. N is usually limited to a few dozens or hundreds and cannot be reduced. However, W may range in the hundreds of billions. The solid k-mer filter already provides a large speed improvement without affecting the results, at least on the tests performed on the HMP datasets. However, the HMP dataset is not representative of all metagenomics projects and, in some cases, this filter may not be desired, for instance for samples with low coverage or for qualitative studies where rare species have more impact. As a matter of fact, it is notable that Simka is able to scale to large datasets even with the solid filter disabled, as shown in the performance section.</ns0:p>
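<ns0:p>For readers unfamiliar with the solid-k-mer filter mentioned above, the sketch below (ours; real k-mer counters such as the one used by Simka typically also handle reverse complements and rely on disk-based sorting) simply discards k-mers seen fewer than twice in a sample.</ns0:p>

```python
from collections import Counter

def solid_kmers(reads, k=21, min_abundance=2):
    """Count k-mers of a sample and keep only the 'solid' ones."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    # k-mers seen only once mostly come from sequencing errors and are dropped
    return {kmer: c for kmer, c in counts.items() if c >= min_abundance}

reads = ["ACGTACGTACGTACGTACGTACGT",
         "ACGTACGTACGTACGTACGTACGA"]
print(len(solid_kmers(reads, k=21)))  # number of solid 21-mers in this toy sample
```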
<ns0:p>Interestingly, when applied to a low-coverage dataset, namely the Global Ocean Sampling <ns0:ref type='bibr' target='#b41'>(Yooseph et al., 2007)</ns0:ref>, Simka was able to capture the essential underlying biological structure with or without the k-mer solid filter (see Fig. <ns0:ref type='figure' target='#fig_9'>S6</ns0:ref>). However, an important upcoming challenge is to precisely measure the impact of the applied thresholds together with the choice of k, depending on the input dataset features such as community complexity and sequencing effort.</ns0:p><ns0:p>Since metagenomic projects are constantly growing, it is important to offer the possibility to add new sample(s) to a set for which distances are already computed, without restarting the whole computation from scratch. It is straightforward to adapt the MKC algorithm to such an operation, but the merging step and distance computation step have to be done again. However, adding a new sample does not modify previously computed distances and only requires computing a single line of the distance matrix; it can thus be achieved in linear time.</ns0:p><ns0:p>The motivation for computing a collection of distances rather than just one is twofold: different distances capture different features of the data <ns0:ref type='bibr' target='#b22'>(Koren et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b23'>Legendre and De C&#225;ceres, 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Pavoine et al., 2011)</ns0:ref>, and all the distances computed by Simka have in common that they are additive over k-mers and can thus be computed simultaneously using the same algorithm. To support the first point, we have seen that Mash performed badly when considering HMP samples per body site, since this tool can only take into account presence/absence information and not relative abundances, in contrast to Simka. As a matter of fact, differences in relative abundances are subtler signals that are often at the heart of interesting biological insights in comparative genomics studies. For instance, <ns0:ref type='bibr'>Boutin et al.
(2015)</ns0:ref> showed that the structure between different samples from lung disease patients was visible with the Bray Curtis (quantitative) distance and absent with the qualitative Jaccard distance, highlighting the role of the abundances of certain pathogenic microbes in the disease. In other studies, the response of bacterial communities to stress or environmental changes was shown to be driven by the increase in abundance of some rare taxa <ns0:ref type='bibr' target='#b35'>(Shade et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b16'>Genitsaris et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b10'>Coveley et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Gomez-Alvarez et al., 2016)</ns0:ref>.</ns0:p><ns0:p>A notable key point of our proposal is to estimate beta-diversity using k-mers diversity only. We are conscious this may lead to biased estimates of the beta-diversities defined from species composition data. The bias can run both ways: on the one hand, shared genomic regions or horizontal transfers between species will bias the k-mer-based distance downwards. On the other hand, genome size heterogeneity and k-mer composition variation along a microbe genome will bias the k-mer-based distance upwards. However, species composition based approaches are not feasible for large read sets from complex ecosystems (soil, seawater) due to the lack of good references and/or mapping scaling limitations. Moreover, our proposal has the advantage of being a de novo approach, unbiased by reference banks inconsistency and incompleteness. Finally, numerical experiments on the HMP datasets show that k-mer based and taxonomic distances are well correlated (r &gt; 0.8 for k &#8805; 21) and consequently that Simka recovers the same biological structure as taxonomic studies do.</ns0:p><ns0:p>There is nevertheless room for improving Simka distances. For instance, recently, <ns0:ref type='bibr' target='#b4'>B&#345;inda et al. (2015)</ns0:ref> showed that spaced seeds can improve the k-mer-based metagenomic classification obtained with the popular tool Kraken <ns0:ref type='bibr' target='#b39'>(Wood and Salzberg, 2014)</ns0:ref>. Spaced seeds can be seen as non-contiguous k-mers allowing therefore a certain number of mismatches when comparing them. Being less stringent when comparing k-mers could lead to more accurate </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Simka strategy. The first step takes as input N datasets and generates multiple streams of abundance vector from disjoint sets of k-mers. The abundance vector of a k-mer consists of its N counts in the N datasets. These abundance vectors are taken as input by the second step to iteratively update independent contributions to the ecological distance in parallel. Once an abundance vector has been processed, there is no need to keep it on record. The final step aggregates each contribution and computes the final distance matrix.</ns0:figDesc><ns0:graphic coords='6,396.66,434.15,79.12,117.55' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Multiset k-mer Counting strategy with k=3. (A) The sorting counting process, represented by a blue arrow, counts datasets independently. Each process outputs a column of P partitions (red squares) containing sorted k-mer counts. (B) The merging count process, represented by a green arrow, merges a row of N partitions. 
It outputs abundance vectors, represented in green, to feed the ecological distance computation process.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Algorithm 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Compute the Bray-Curtis distance (equation 1) between N datasets Input: -V s : vector of size P representing the abundance vector streams -V &#8746; : vector of size N containing the number of k-mers in each dataset Output: a distance matrix Dist 1 M &#8745; &#8592; empty square matrix of size N // number of k-mers in each dataset intersection 2 In parallel: foreach abundance vector stream S in V s do 3 M &#8745;part &#8592; empty squared matrix of size N // part of M &#8745; 4 foreach abundance vector v in S do</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Simka performances with respect to the number N of input samples. Each dataset is composed of two million reads. All tools were run on a machine equipped with a 2.50 GHz Intel E5-2640 CPU with 20 cores, 264 GB of memory. (A) and (B) CPU time with respect to N . For (A), colors correspond to different main Simka steps. (C) Memory footprint with respect to N . (D) Disk usage with respect to N . Parameters and command lines used for each tool are detailed in TableS1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>, on a dataset of 50 samples from the HMP project, Simka and Commet similarity measures are extremely well correlated (Spearman correlation coefficient r = 0.989). Similarly, clear correlations (r &gt; 0.89) are also observed between the 15 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Comparison of Simka and Commet similarity measures. Commet and Simka were both used with Commet default k value (k = 33). In this scatterplot, each point represents a pair of samples, whose X coordinate is the % of matched k-mers computed by Simka, and the Y coordinate is the % of matched reads computed by Commet.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>A traditional way of comparing metagenomics samples rely on so called taxonomic 16 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of Simka and BLAT distances for several values of k and several BLAT identity thresholds. Spearman correlation values are represented with respect to k. The scatterplots obtained for each point of this figure are shown in Fig. S2).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Correlation between taxonomic distance and k-mer based distance computed by Simka on HMP gut samples. On this density plot, each point represents one or several pairs of the gut samples. The X coordinate indicates the Bray-Curtis taxonomic distance, and the Y coordinate the Bray-Curtis distance computed by Simka with k = 21. The color of a point is function of the amount of sample pairs with the given pair of distances (log-scaled).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>However, W may range in the hundreds of billions. 
The solid filter already 19 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Distribution of the diversity of the HMP samples by body sites. PCoA of the samples is based on the quantitative Ochiai distance computed by Simka with k = 21. Each dot corresponds to a sample and is coloured according to the human body site it was extracted from. The green color shades correspond to 3 different subparts of the Oral samples: Tongue dorsum, Supragingival plaque, Buccal mucosa (from top to down).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>20</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Relative abundances of main genera in HMP gut samples.Distribution of the gut samples from the HMP project is shown in a PCoA of the Jensen-Shannon distance matrix. This distance matrix was computed by Simka with k = 21. Relative abundances (0-100%) of (A) Bacteroides, (B) Prevotella and (C) Ruminococcaceae, as computed with Metaphlan<ns0:ref type='bibr' target='#b33'>(Segata et al., 2012)</ns0:ref>, are mapped onto the sample points as color shades.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>22</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016) Manuscript to be reviewed Computer Science distances, especially for viral metagenomic fractions which contain a lot of mutated sequences.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>operations. It is noteworthy that among all distances listed in Table1, all distances are simple, except the Whittaker, Jensen-Shannon and Canberra distances.When compared to other state of the art tools, namely Commet, Metafast and Mash, we parameterized Simka to compute only the Bray-Curtis distance, since all other tools compute only one such simple distance. The Fig.3-B-C-Dshows respectively the CPU time, the memory footprint and the disk usage of each tool with respect to an increasing number of samples N . Mash has definitely the best scalability but limitations of its computed distance are shown in the next section. Commet is the only one to show a quadratic time behaviour with N . For N = 40, Simka is 6 times faster than Metafast and 22 times faster than Commet. All tools, except Metafast, have a constant maximal memory footprint with respect to N . For metafast, we could not use its max memory usage option since it often created 'out of memory' errors. The disk usage of the four tools increases linearly with N . The linear coefficient is greater for Simka and MetaFast, but it remains reasonable in the case of Simka, as it is close to half of the input data size, which was 11 GB for N = 40.</ns0:figDesc><ns0:table /><ns0:note>12 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016) Manuscript to be reviewed Computer Science update</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Simka performances and k-mer statistics of the whole HMP project (690 samples) Simka was run on a machine equipped with a 2.50 GHz Intel E5-2640 CPU with 20 cores, 264 GB of memory, with k = 31. 
Numbers of distinct</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>14 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)Manuscript to be reviewed k-mers are computed before and after the MKC-Merge algorithm: the before merging number is obtained by summing over all samples the distinct k-mers computed for each sample independently, whereas in the after merging number, k-mers shared by several samples are counted only once. Line 'Total time' does not include complex distances whose computation is optional.</ns0:note></ns0:figure> <ns0:note place='foot' n='7'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12307:1:1:NEW 23 Sep 2016) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear editors, We are pleased to re-submit the revised version of our manuscript entitled ”Multiple comparative metagenomics using multiset k-mer counting” by Benoit et al, for consideration for publication in Peerj journal. We would like to thank the editor and the reviewers for taking time and efforts to review our manuscript. You will find below a point by point response to the reviewers. Our responses are indicated in boxed text, and the corresponding parts that we have edited in the main document are indicated in red. We thank you in advance for the attention you will give to our reply and we hope that the revised version of our manuscript will meet the criteria for publication in Peerj. G. Benoit on the behalf of all authors. Reviewer 1: Li Song Basic reporting No comments. Experimental design In the supplementary figure 1, why the running time becomes constant after k=17? The running time should grow exponentially with the kmer size. My interpretation is that the authors only picked 1M reads from each 20 samples. As a result the total number of kmers is about 20M which is much smaller than 417 . I think the plot is unfair. The 1M sub-sampling of the samples is not responsible for the performance behaviors presented in supplementary figure 1. To ensure this, we performed the same experiment with the full samples (no more sub-sampling) and we obtained similar behaviors. We changed the supplemental figure 1 accordingly. The fact that the running time does not grow exponentially with k can be explained as follows. As explained lines 369-380 and Table 2, the running time of simka depends mainly on the number of solid distinct k-mers Ws in the dataset. As the data are not random sequences and if k is large enough (typically k >= 17), not all possible k-mers (4k ) can be seen in the data even if the samples contain an infinite number of reads. If k is large enough (so that there are few repeated k-mers inside and between the genomes) and k-mers with sequencing errors are filtered out, Ws is usually considered as the cumulated size of the genomes, therefore Ws does not depend on k. This explains why the running time remains constant after k = 17. 1 About a related issue in the same figure, when the data set is fixed, why the disk usage grows with the kmer size? Is it due to the difficulty of compression when there are more distinct kmers? Even so, I am surprised to see that the curve did not flat out until k >= 51. Conversely, the disk usage depends on k. Even if the number of k-mers to store is constant with k, the space of each k-mer grows linearly with k (k characters to store). Validity of the findings Im wondering whether the sample size has an impact on the correlation, i.e. some samples have much more reads than other samples. Though some of the distance has normalization factor, can it normalize the factor of the data set size? For example, in the BrayCurtis distance calculation, if one data set is just repeats of the other data set 100 times, then the distance will be close 1, where the distance should be 0. The normalization factors contained in the distance formula always concern a number of k-mers. Depending on the distance and on the parameters, this can be all the k-mers, only distinct ones or only solid distinct ones, and so on. In some datasets and cases, this may be linearly related to the data set size (for instance, in the given example, if one uses the presence-absence Jaccard index, the distance would be 0, as expected.), but in general this is surely not the case. 
We also remark that the shortcoming pointed out by the reviewer is inherent to the definition of the Bray-Curtis distance (and many others) rather than an artifact introduced by Simka. Therefore, if the sample sizes are significantly different between samples, we recommend, as is standard in community ecology, to use the option -max-reads of Simka to consider the same amount of reads for all datasets. Comments for the author In this paper, the author implemented a highly-parallel program Simka that can count the kmers from many metagenomic samples while computing the ecological distance with additive property between the samples. The authors did a thorough analysis showing that analyzing kmers gives similar results when using other more complex analyzing methods. Thus Simka can be applied to analyze large-scale metagenomic experiments. The framework of Simka is solid. It is quite scalable with respect to time and memory footprint. However, Simka heavily uses disk and is not scalable with respect to disk usage. 2 Reviewer 2: Qingpeng Zhang Basic reporting In this manuscript, the authors reported the development of a method to compare metagenomic datasets based on k-mer counting. Not like some other tools, this tool - Simka can not only calculate the Bray-Curtis similarity, but also many other similarity metrics, which is nice. In this method, the k-mers abundance profiles across the metagenomic datasets are calculated. However taking advantage of the additive nature of computing some similarity metrics, the k-mers abundance profiles do not need to be stored and so is the huge k-mer count matrix, which reduces the disk usage. The authors demonstrated the benchmarking of Simka compared to other tools and compared the similarity measurement computed with Simka to that computed using other methods like sequences alignment and taxonomic profiling. Generally the manuscript was well written, with comprehensive description of the methods, data and pipeline for reproducibility. The software package repository is well organized on Github and it has good and clear documentation, which is very nice. There are some comments below about this manuscript though. Experimental design Main Comments 1. One of the most challenging problems in using k-mer counting to compare metagenomics datasets is how to deal with the k-mers from sequencing errors. As the authors mentioned in line 196, many of the k-mers with very low abundance come from sequencing errors. The solution of this method is filtering out those k-mers with abundance as 1, with those solid k-mers left. This works fine with metagenomics data set with higher coverage, as shown in the manuscript, with HMP samples as the testing dataset. It will be interesting to see how this method performs for other metagenomic datasets with lower coverage or higher diversity, like some environmental data sets. The IMG/M datasets used in COMMET paper and the Global Ocean Sampling datasets used in Comareads and Mash are two good candidates since in this manuscript, the authors compared the performance of Simka with COMMET and Mash. Also in line 475, the authors mentioned Simka is able to capture such subtle signal raises hope of drawing new interesting biological insights from the data, in particular for those metagenomics project lacking good references (soil, seawater for instance). 
and in line 528, However, species composition based approaches are not feasible for large read sets from complex ecosystems (soil, seawater) due to the lack of good references and/or mapping scaling limitations. Moreover, our proposal has the advantage of being a de novo approach, unbiased by reference banks inconsistency and incompleteness. It will be great if there are experiment results on those soil, seawater samples that can support these points. 3 This is indeed a very interesting question. We agree that the HMP dataset is not representative of all metagenomic datasets, especially concerning the level of coverage of the genomes and that this feature may have strong impact on the solid kmer filter. Consequently, we added some experiments on the GOS dataset, as suggested by the reviewer. We first quantified the impact of the solid kmer filter on this dataset: we computed the correlation between distances obtained with or without this filter. Although it is lower than for the HMP gut dataset (0.999), the correlation obtained for the low-coverage GOS dataset is quite high: 0.97 (Spearman correlation on Bray-Curtis k = 21 distances). We also confirmed that Simka was able to recover the biological structure of the GOS samples: GOS samples are clustered according to their ocean origin (see heatmaps and sample classifications in Supplementary Figure 6), and that these qualitative results are robust with the use of the solid k-mer filter. These results are shown in details in the Supplementary file and are now discussed in a new paragraph in the Discussion section of the main manuscript. Even if, thanks to the reviewer remark, we added results on a sea water project, there is still large room for new applications over various heterogeneous projects. However, we reach here the limits of this technical paper, which aims to provide algorithmic description, biological validations as fair as possible, as well as discussions about possible future applications. Validity of the findings 2. In various parts in this manuscript, the authors mentioned that the solid k-mers filtering out does not affect results (line 376, 442, 489). This may be due to the high coverage of the HMP data sets. In the discussion in line 369-380, the authors mentioned that a small proportion (15%) of k-mers account for 95% of all base pairs of the whole dataset, which demonstrates that the HMP datasets have relatively higher sequencing coverage and most of the low abundance k-mers filtered out are probably sequencing errors. This may explain why the Simka results are robust with filtering (line 441) Just claiming that filtering out low abundance k-mers does not affect similarity measurement may not be accurate, at least before we see how this works for other environmental data sets with lower coverage recommended above. It will be nice if the authors can explain this more clearly. Again, we agree with the reviewer’s point. The experiments we conducted on the GOS dataset show that results are also robust with the solid filter on this lowcoverage dataset. As requested by the reviewer, we now clarify and discuss this point in the Discussion section. 3. Similarly, in line 490-493, the authors mentioned the filter can be disabled for samples with low coverage or where the rare species have more impact. But in this situation, how to deal with those large amount of erroneous k-mers? How will this affect the performance? 4 In line 493-495, the authors claimed Simka is able to scale without solid k-mers filter. 
This may be true for the HMP data shown in the manuscript, but we still need to wait to see how it works for the low coverage data sets. We have shown in the manuscript that the running time of Simka depends mainly on the number of distinct k-mers in the whole dataset. Indeed, when disabling the solid k-mer filter, the number of distinct k-mers can increase greatly. Whatever the coverage of the dataset, with the same number of reads, the number of sequencing errors and therefore the number of additional distinct k-mers due to sequencing errors will be roughly the same. Therefore, the impact of sequencing errors on Simka running time will be the same for high and low coverage datasets. What really matters is the total number of distinct k-mers, and that is the reason why we chose one of the largest publicly available dataset (HMP) to analyze Simka computational performances. We have shown that Simka has reasonable running time when dealing with hundreds of billions of distinct k-mers. Comments for the author 1. In Abstract- Methods, Simka scales-up today metagenomic projects thanks to a new parallel k-mer counting strategy on multiple datasets. Should today be todays? Typo corrected. 2. Line 3-6, In large scale metagenomics studies such as Tara Oceans (Karsenti et al., 2011) or Human Microbiome Project (HMP) (Human Microbiome Project Consortium, 2012a) most of the sequenced data comes from unknown organisms and their short reads assembly remains an inaccessible task. But in line 330, One advantage of this dataset(HMP) is that it has been extensively studied, in particular the microbial communities are relatively well represented in reference databases The descriptions about HMP seem like a contradiction here. We removed the reference to the HMP project in the first sentence (line 3-6). 3. Table2, 2X16G paired reads. it may be better to be just 2 X 16 billion paired reads. I am not quite sure if G can be used like this. We replaced ”G” by ”billion”. 4. Line 433, On the other hand, Mash distances correlate badly with taxonomic ones (r = 0.51, see the comparison protocol in Article S1). It will be nice to cite Figure S3 here. 5 The citation has been added. 5. Line133 For example, experiments on the HMP (Human Microbiome Project Consortium, 134 2012a) datasets (690 datasets containing on average 45 millions of reads each) require a storage space of 630TB for the matrix KC. How does the 630TB calculated? For this method, the k-mer counting matrix and k-mer frequency vectors do not need to be stored. But the frequencies of all the k-mers in each partitions across difference data sets (red squares in Figure 2) are still stored on the disk after the sorting count stage, right? If so, how different is it compared to storing the k-mer counting matrix? The reviewer raises an interesting point that needed to be clarified in the manuscript. This number was computed as follows: Ws ∗ (8 + 4N ) bytes, with Ws the number of distinct solid k-mers and N the number of samples. The first 8 bytes are for storing each 31-mer, the 4N bytes are for storing the N counts of each distinct k-mer. We used Ws = 251 × 109 (line 4 of table 2), but this is a mistake, we should have used the number ”after merging” Ws = 95 × 109 (line 5 of table2) (in fact, Ws = 251 × 109 may be considered as a worst case situation where samples share no k-mers). Thanks to the reviewer careful look, we therefore corrected this number in the manuscript from 630 TB to 260 TB, and explained how it is calculated. 
This does not impact the message of this paragraph. The example of the KC matrix storage space was intended to demonstrate that storing this matrix in main memory is not feasible, this is still the case with the correct number. We also clarified in the manuscript this detail (main memory), since main memory resources are much more limited than disk space. This is nevertheless interesting to compare it with the disk space used by the Simka partition files, in this case (1.6 TB, see table2). Apart from the compression share, we assume that the difference between the two numbers is mainly due to the fact that the matrix is very sparse, with lots of 0, and the absence of k-mers is not stored in the partition files. 6. What is the role of the GATB library in Simka? If the GATB does the actual sorting count, then the paragraph in this manuscript about sorting count may be condensed. Also the description about the work of Chao et al. (2006) can be more precise too. Actually, the sorting count is implemented in the GATB library. However, some features of this sorting count, such as the multiple datasets feature and the merging count, were developed for Simka purpose and then integrated in the GATB library, and are not described in the older GATB publication. We therefore argue that this paragraph is important for the understanding of the whole method. Concerning the Chao et al. (2006) description, we added some details as suggested by the reviewer. 6 7. The software package repository is well organized and with good documentation. Just while I was trying to run the example test with ./bin/simka -in example/simka input.txt -out results -out-tmp temp output, it failed with Illegal instruction: 4. I was using simka-v1.3.0-bin-Darwin.tar.gz and on mac OS 10.11. It may be the problem on my side. But this may be good for the authors to know. We are thankful to the reviewer for this feedback. However we were unable to reproduce the error. 7 "
Here is a paper. Please give your review comments after reading it.
396
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Road conditions monitoring is essential for improving traffic safety and reducing accidents.</ns0:p><ns0:p>Machine learning methods have recently gained prominence in the practically important task of controlling road surface quality. Several systems have been proposed using sensors especially accelerometers present in Smartphones due to their availability and low cost. However, these methods require practitioners to specify an exact set of features from all the sensors to provide more accurate results, including the time, frequency, and wavelet-domain signal features. It is important to know the effect of these features change on machine learning model performance in handling road anomalies classification tasks. Thus, we address such a problem by conducting a sensitivity analysis of three machine learning models which are Support Vector Machine, Decision Tree, and Multi-Layer Perceptron to test the effectiveness of the model by selecting features. We built a feature vector from all three axes of the sensors that boosts classification performance. Our proposed approach achieved an overall accuracy of 94\% on four types of road anomalies.</ns0:p><ns0:p>To allow an objective analysis of different features, we used available accelerometer datasets. Our objective is to achieve a good classification performance of road anomalies by distinguishing between significant and relatively insignificant features. Our chosen baseline machine learning models are based on their comparative simplicity and powerful empirical performance. The extensive analysis results of our study provide practical advice for practitioners wishing to select features effectively in real-world settings for road anomalies detection.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Monitoring the physical conditions of roads is an important area within the transportation domain. Road anomalies detection has been given a significant consideration and a variety of field experiments are done to establish the protocol, methods and algorithms <ns0:ref type='bibr' target='#b23'>Outay et al. (2020)</ns0:ref>. Road anomaly is referred to as any deviation or variation from the standard road surface conditions <ns0:ref type='bibr' target='#b29'>(Seraj et al., 2015)</ns0:ref>. There are various anomalies on the road because of which the vehicle may fall unexpectedly. Some of them are potholes, rutting, cracks, and speed bumps. This necessitates the development of automated techniques for detecting different road anomalies. Many systems have been implemented for fast and reliable anomalies detection which would indeed help prevent road accidents. Some of these systems are detecting anomalies either by using expensive and specialized road-monitoring equipment <ns0:ref type='bibr'>(inductive loops, video-detectors, magnetometers, etc.)</ns0:ref> or by surveying roads manually. A major drawback of these solutions is low flexibility and significant maintenance costs.</ns0:p><ns0:p>To overcome these drawbacks, A promising method for gathering real-world data is mobile sensing technology without the need to deploy special sensors and instruments <ns0:ref type='bibr' target='#b28'>(Schuurman et al., 2012)</ns0:ref>. It is a smartphone-based method of sensing, in which data is collected using embedded sensors <ns0:ref type='bibr' target='#b15'>(Lane et al., 2010)</ns0:ref>. 
Accelerometer sensors have been widely used to collect data for analysis. Based on many studies <ns0:ref type='bibr' target='#b2'>(Astarita et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b16'>Li et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Li and Goldberg, 2018)</ns0:ref>, road roughness is a source of vibration in vehicles and a well-known cause of wear and damage to the vehicle itself, as well as to bridges and pavements. These vibrations can be effectively captured by smartphone accelerometers.</ns0:p><ns0:p>There are three axes on an accelerometer (X, Y, and Z), which correspond to the longitudinal, vertical, and transverse directions of a smartphone, respectively. When acceleration is experienced in any of these axes, the accelerometer captures it (in m/s 2 ). Through analyzing these axes' signals, road anomalies can be potentially identified.</ns0:p><ns0:p>Different approaches in the literature have been proposed to classify road anomalies based on features obtained from the accelerometer sensor. Especially the machine learning algorithms which are quite diverse <ns0:ref type='bibr' target='#b10'>(Eriksson et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b24'>Perttunen et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b7'>Carlos et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bridgelall and Tolliver, 2020;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alam et al., 2020)</ns0:ref>. In <ns0:ref type='bibr' target='#b3'>(Basavaraju et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Silva et al., 2017)</ns0:ref>, the authors used Multilayer Perceptron (MLP) and they made comparisons using other models such as Random Forest (RF), Support Vector Machine (SVM), and Decision Tree (DT). Other researchers used a decision tree-based classifier <ns0:ref type='bibr' target='#b1'>(Alam et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>Kalim et al., 2016)</ns0:ref> to detect and classify different types of road anomalies. Also, Support Vector Machines have been widely used in many works <ns0:ref type='bibr' target='#b10'>(Eriksson et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b24'>Perttunen et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b7'>Carlos et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bridgelall and Tolliver, 2020;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alam et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The efficiency of any machine learning model is highly related to determining the set of features that can 'best' describe the input data that has been collected from the accelerometer sensor.</ns0:p><ns0:p>Several well-known features have been used in literature, such as time-and frequency-domain features.</ns0:p><ns0:p>In most cases, they are mixed with other features <ns0:ref type='bibr' target='#b14'>(Kalim et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b31'>Silva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nunes and Mota, 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alam et al., 2020)</ns0:ref>. There have also been reports of feature extraction based on wavelets <ns0:ref type='bibr' target='#b3'>(Basavaraju et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b6'>Brisimi et al., 2016)</ns0:ref>. It is not always clear which features will give the best classification in previous literature. Additionally, it is important to minimize the number of features to make the classifier faster and also more accurate since as more features are added, the size of the design set must also increase. 
Knowing which features to extract from an abundance of features in the raw data is the most challenging part. In this paper, we will go into detail about those challenges by analyzing the sensibility of some machine learning models to different categories of features. By demonstrating that specific features from accelerometer data have the greatest impact on machine learning models, we can avoid employing redundant features in the classification step as much as possible. For this purpose, a comprehensive and up-to-date set of thirty features from the time, frequency and wavelet domains have been evaluated in this study.</ns0:p><ns0:p>Most previous studies have investigated a comparison between different machine learning models rather than investigating the sensibility of such models to selected features. Therefore, rather than studying features that practitioners should spend effort tuning, we aimed to learn which features perform better regardless of the dataset and which features are inconsequential. This paper reports the results of experiments investigating three machine learning models (Support Vector Machine, Multi-Layer Perceptron, and Decision Tree) in a large number of different configurations using two available datasets with different types of anomalies. This paper is structured as follows: first, we present the overall machine learning-based workflow of detecting road surface anomalies from smartphone sensors, followed by a review of the most used features in the literature as well as the machine learning techniques. Then, we present our experimental results using two types of accelerometer datasets. In the following sections, the models reviewed and their limitations are discussed. Finally, we summarize the results of the study and identify challenges with detecting road surface anomalies of smartphones, along with potential research directions that should be pursued.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS AND MATERIALS</ns0:head></ns0:div> <ns0:div><ns0:head>General Overview</ns0:head><ns0:p>The methodology we used to compare different feature techniques for automated detection of road anomalies is illustrated in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. In general, Online running and offline training are the two main phases of the system flow that uses a machine learning approach to identify road anomalies based on smartphone vibrations. In the first phase, a database of annotated data is used to discover and extract useful information during the feature extraction step. A machine learning model should then be trained using the extracted features. In order to determine how the trained model functions and what its reliability is, it should be tested against new data to determine its performance. It is important to mention that several types of embedded sensors can be used in providing input data. In our study, we will use the accelerometer sensor.</ns0:p><ns0:p>The resulting trained model is used during the second phase to detect and classify anomalies from real </ns0:p></ns0:div> <ns0:div><ns0:head>Feature Extraction</ns0:head><ns0:p>The specific technique used to detect anomalies is determined by the features extracted from the accelerometer readings. 
Furthermore, it is crucial to extract the relevant features from the accelerometer measurements, since the features become inputs to the specific technique.</ns0:p></ns0:div> <ns0:div><ns0:head>Time domain</ns0:head><ns0:p>Time domain features are often used as pre-processing <ns0:ref type='bibr' target='#b25'>(Radu et al., 2018)</ns0:ref> so they are easy to implement.</ns0:p><ns0:p>They can be used to extract basic signal patterns from raw sensor data. We used a total of 8 time domain features.</ns0:p><ns0:p>&#8226; Mean is the most common and easy implemented feature of the time domain. It only finds the mean of amplitude values over sample length of the signal x i that represents a sequence of N discrete values {x 1 , x 2 , . . . , x N }. It is obtained using Eq. 1</ns0:p><ns0:formula xml:id='formula_0'>mean(&#181;) = 1 N N &#8721; i=1 |x i |<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>&#8226; Integral square (IS) uses energy of the signal as a feature. It is a summation of square values of the signal amplitude. Generally, this parameter is defined as an energy index, which can be expressed as:</ns0:p><ns0:formula xml:id='formula_1'>IS = 1 N N &#8721; i=1 x 2 i (2)</ns0:formula><ns0:p>&#8226; The Variance is also most common statistical method for time domain feature extraction. It is defined as follows (3):</ns0:p><ns0:formula xml:id='formula_2'>variance = 1 N &#8722; 1 N &#8721; i=1 (x i &#8722; &#181;) 2</ns0:formula><ns0:p>(3)</ns0:p><ns0:p>&#8226; The Standard deviation is calculated using the following equation:</ns0:p><ns0:formula xml:id='formula_3'>StandardDeviation(&#963; ) = 1 N &#8722; 1 N &#8721; i=1 (x i &#8722; &#181;) 2 (4)</ns0:formula><ns0:p>&#8226; The Median is the number at which the data samples are divided into two regions with equal amplitude.</ns0:p></ns0:div> <ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; The Range is the difference between maximum and minimum sample values.</ns0:p><ns0:p>&#8226; The Root Mean Square (RMS) of a signal x i that represents a sequence of N discrete values {x 1 , x 2 , . . . , x N } is obtained using Eq. 5 and can be associated with meaningful context information.</ns0:p><ns0:formula xml:id='formula_4'>x RMS = x 2 1 + x 2 2 + ... + x 2 N N (5)</ns0:formula><ns0:p>&#8226; Entropy describes how much information about the data randomness is provided by a signal or event <ns0:ref type='bibr' target='#b30'>(Shannon, 1948)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Frequency domain</ns0:head><ns0:p>Initially, the time-domain vibration signals must be transformed into frequency-domain signals using a fast Fourier transform (FFT) in order to extract the frequency-domain features <ns0:ref type='bibr' target='#b20'>(Martens, 1992)</ns0:ref>. 
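As a rough illustration of how the window-level time-domain features above and this FFT step can be computed, consider the following Python/NumPy sketch (the function names and the histogram-based entropy estimate are our own illustrative choices rather than the authors' code; the 50 Hz sampling rate matches the datasets described later):

import numpy as np

def time_domain_features(window):
    # window: 1-D array of accelerometer samples from one axis
    mu = np.mean(np.abs(window))               # Mean, Eq. (1)
    integral_square = np.mean(window ** 2)     # Integral square, Eq. (2)
    variance = np.var(window, ddof=1)          # Variance, Eq. (3)
    std = np.sqrt(variance)                    # Standard deviation, Eq. (4)
    median = np.median(window)                 # Median
    rng = np.max(window) - np.min(window)      # Range
    rms = np.sqrt(np.mean(window ** 2))        # Root mean square, Eq. (5)
    # Entropy: one simple estimate based on a normalized amplitude histogram
    hist, _ = np.histogram(window, bins=16)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [mu, integral_square, variance, std, median, rng, rms, entropy]

def magnitude_spectrum(window, fs=50):
    # FFT of the window; the frequency-domain features listed next
    # (spectrum energy, median frequency, mean power, ...) are derived from it.
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs, spectrum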
The extracted frequency-domain features are the following:</ns0:p><ns0:p>&#8226; The Spectrum energy of the signal is equal to the squared sum of its spectral coefficients.</ns0:p><ns0:p>&#8226; Median Frequency (MF) - The frequency that divides the spectrum into two regions of equal amplitude.</ns0:p><ns0:p>&#8226; Mean power (MP) - The average of the spectrum power.</ns0:p><ns0:p>&#8226; Peak magnitude (PM) - The maximum amplitude of the spectrum.</ns0:p><ns0:p>&#8226; Minimum magnitude (MM) - The lowest amplitude in the spectrum.</ns0:p><ns0:p>&#8226; Total power is defined as the aggregate of the signal power spectrum.</ns0:p><ns0:p>&#8226; The Discrete Cosine component is the first coefficient in the spectral representation of a signal; its value is often much larger than the remaining spectral coefficients.</ns0:p></ns0:div> <ns0:div><ns0:head>Time-Frequency (Wavelets) Domain</ns0:head><ns0:p>A wavelet is a fast-decaying function with zero average. The appealing property of wavelets is that they are localized in both the spatial and the frequency domain and can compactly represent a signal with a small number of wavelet coefficients. With the wavelet approach, the original time-domain signal is usually decomposed into distinct bands using designed filters paired with downsampling, so that the signal is split while the effective sample rate remains unchanged <ns0:ref type='bibr' target='#b19'>(Mallat, 2009)</ns0:ref>. Based on the characteristics of the source and/or application, each produced band can be processed independently. After filtering and up-sampling, the signal is reconstructed as an approximate representation of the original. By iterating the low-pass output at each scale, the wavelet transform and its filter bank realization are repeated, producing a series of band-pass outputs, which are the wavelet coefficients. In other words, the decomposition consists of a high-pass filter followed by a series of low-pass filters. For further details, see <ns0:ref type='bibr' target='#b8'>(Chau, 2001)</ns0:ref>. In our study, the accelerometer signal is decomposed using the wavelet transform and the features are defined as signal power measurements: each of the five detail coefficient bands is summed up, which returns a total of 15 features. In Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, we summarize all the feature sets used in our research. A critical issue behind our choice of features is how to combine these different sets of features in a way that may increase the classification performance of the model. We used an overlapping sliding window scheme with a window of length w to group the data, and the features are extracted from each window. Since the anomaly location is originally unknown and needs to be estimated, the overlapping windows ensure that there exists some window that overlaps with the anomaly. The output of this step is then used as input to three different classifiers in order to compare their efficiency based on the selected features.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Baseline Models</ns0:head><ns0:p>As popular and reliable technologies that can be applied to classifying road vibration data, SVM, Decision Trees, and Neural Networks were utilized in this study.</ns0:p></ns0:div> <ns0:div><ns0:head>4/13</ns0:head><ns0:p>PeerJ Comput. Sci.
reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Support Vector Machine</ns0:head><ns0:p>SVM aims to construct a hyperplane or set of hyperplanes in an N-dimensional space (N is the number of features), which can be used for classifying the data points. Many hyperplanes could be chosen; the best hyperplane is the one that has the maximum margin, i.e the maximum distance between data points of both classes. In 1992, a research conducted in <ns0:ref type='bibr' target='#b4'>(Boser et al., 1992)</ns0:ref> suggested a way to create nonlinear classifiers by using the kernel functions (originally proposed by Aizerman, Braverman and Rozonoer <ns0:ref type='bibr' target='#b0'>(Aiserman et al., 1964)</ns0:ref>) to maximum-margin hyperplanes. In this algorithm, nonlinear kernel functions are often used to transform input data to a high-dimensional feature space in which the input data become more separable compared to the original input space. Maximum-margin hyperplanes are then created. In the following text, for the purpose of this research, one type of SVM is used, C-SVC that can incorporate different basic kernels. Given training vectors.</ns0:p></ns0:div> <ns0:div><ns0:head>Decision Tree</ns0:head><ns0:p>A decision tree also known as a classification tree is a predictive model derived by recursively partitioning the feature space into subspaces that constitute a basis for prediction <ns0:ref type='bibr' target='#b27'>(Rokach, 2016)</ns0:ref>. In the tree structures, leaves represent classifications (also referred to as labels), non-leaf nodes are features, and branches represent conjunctions of features that lead to the classifications. Decision trees are derived by doing successive binary splits (some algorithms can make multiple branches for every split). With the first split, the data will be most distinct between two groups. The subgroups are then split until some stopping criteria are reached. There are differences in algorithmic approaches to establishing the distance between two groups <ns0:ref type='bibr' target='#b18'>(Loh, 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Multi Layer Perceptron</ns0:head><ns0:p>A multilayer perceptron can be used to perform classification. It is an example of a feed-forward neural network where each node performs a nonlinear activation function. The weights connecting the nodes are determined using back propagation. This network receives as input a feature vector extracted from the object to be classified, and outputs a block code, in which one high output identifies the class of the object and the other outputs are low. In order to approximate functions, only one hidden layer is required <ns0:ref type='bibr' target='#b9'>(Cybenko, 1989)</ns0:ref>. It depends on the problem as to how many layers in the network and nodes in each hidden layer are used.</ns0:p></ns0:div> <ns0:div><ns0:head>Datasets</ns0:head><ns0:p>To Manuscript to be reviewed</ns0:p><ns0:p>Computer Science these categories are split into segments according to a predetermined window size, w with an overlap of w/2. Every obtained segment is represented by a descriptor comprising the features mentioned previously.</ns0:p><ns0:p>Two types of accelerometer datasets are used: simulated data and real data. A part of the data was used to train, while the other part was used to test. 
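A minimal sketch of this overlapping segmentation step (Python; the toy signal, the window size, and the split call are illustrative assumptions):

import numpy as np
from sklearn.model_selection import train_test_split

def segment(signal, w):
    # Split a 1-D accelerometer signal into windows of length w
    # with an overlap of w/2, as described above.
    step = w // 2
    return np.array([signal[i:i + w] for i in range(0, len(signal) - w + 1, step)])

z_axis = np.random.randn(500)        # e.g., 10 s of Z-axis samples at 50 Hz
windows = segment(z_axis, w=30)
# Each window is then mapped to its feature descriptor (see the extractors
# sketched earlier) before splitting into training and testing sets, e.g.:
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)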
With the training data, we calibrated the parameters of each model, trying to find the values that produced the best results. To make any approach usable, it is important to reduce the amount of data needed for calibration. Accordingly, in all cases, the training components are shorter than the testing components.</ns0:p></ns0:div> <ns0:div><ns0:head>Simulated Data</ns0:head><ns0:p>Pothole Lab <ns0:ref type='bibr' target='#b7'>(Carlos et al., 2018)</ns0:ref>, which can be used to generate virtual roads with a configurable number and nature of road anomalies, generated the first dataset we used for our experiments (DB1). Different roads were built with acceleration samples taken from the X, Y, and Z axes using a sampling rate of 50 Hz.</ns0:p><ns0:p>The generated roads are divided into three types: roads without anomalies, homogeneous roads (only one kind of anomaly), and heterogeneous roads (different types of anomalies). We generated 60 virtual roads with a total of 1591 anomalies of three types (potholes, metal speed bumps, and asphalt speed bumps). The second dataset we employed (DB2) was published in <ns0:ref type='bibr' target='#b13'>(Gonz&#225;lez et al., 2017)</ns0:ref>, where 12 vehicles were used to perform the data collection. This dataset contains accelerometer samples from the Z-axis only, using a sampling rate of 50 Hz. More than 500 events were recorded and classified into five categories: metal bumps, worn out road, potholes, asphalt bumps, and regular road. Table <ns0:ref type='table'>3</ns0:ref> shows the details of the dataset used in this study as well as the distribution of anomalies used.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. The real dataset used <ns0:ref type='bibr' target='#b13'>(Gonz&#225;lez et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Anomaly</ns0:head><ns0:p>&#8226; The recall, or true positive rate, determines how well the classifier predicts positive samples. It is calculated in the following manner:</ns0:p><ns0:formula xml:id='formula_5'>Recall = TP / (TP + FN) (7)</ns0:formula><ns0:p>&#8226; F1 Score is the harmonic mean of precision and recall. The greater the F1 Score, the better the performance of our model. The F1-score is calculated as follows:</ns0:p><ns0:formula xml:id='formula_6'>F1 = 2 &#215; (Precision &#215; Recall) / (Precision + Recall) (8)</ns0:formula><ns0:p>where TP (True Positive) is the number of cases in which the observed anomaly (ground truth) is correctly classified. A TN (True Negative) indicates how many times an anomaly is properly classified as not being observed. An algorithm that falsely identifies an anomaly that was not observed produces a false positive (FP). FN (False Negative) is the number of cases in which an anomaly was observed (ground truth) but classified as something else by the algorithm.</ns0:p><ns0:p>Moreover, we applied cross-validation in the performance evaluation of each classification model to estimate the skill of a machine learning model on unseen data. A split into 10 groups is a reasonable compromise between providing robust estimates of performance and remaining computationally feasible. Every unique group is treated as a hold-out or test set, and all the other groups are treated as training sets. We then fit the model to the training data and evaluate it on the test data. That evaluation score is retained.
As the last step, we compute the skill of the model based on the sample of model evaluation scores. however, the cross-validation accuracy is the same as the overall accuracy in all our experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>The experiments covered a diverse combination of factors that could potentially affect the accuracy of any machine learning model on classifying road anomalies. To determine the usefulness of these factors in the used data sets, we carried out three types of experiments:</ns0:p><ns0:p>&#8226; The effect of the accelerometer axis data. We focused on varying the input data fed to our models by applying all the possible combinations of X, Y, and Z-axis. We used a window size equal to 30 and a time step of 15, from which we extracted 30 features as described in Table <ns0:ref type='table'>.</ns0:ref> 1</ns0:p><ns0:p>&#8226; The effect of the sliding window size. We considered multiple window sizes and we recorded the corresponding performance. We used only Z-axis data and all features were extracted for each window size.</ns0:p><ns0:p>&#8226; The effect of features extraction. We evaluated in our first experience the three categories of features (time, frequency, and wavelet) separately. Then, we explored all the possible combinations of features while recording the corresponding performance for each combination.</ns0:p><ns0:p>In all our experiments, we repeated the training and the testing procedures over many trials, and the average detector performance is reported. This is to discard randomness effects. In our machine learning models, we set the parameters following the suggestions in the literature work. For SVM, we used a regularization parameter equal to 1, the 'rbf' function kernel with a coefficient of 2 &#215; 10 &#8722;3 and a tolerance of 10 &#8722;3 . For MLP, the 'relu' activation function is used over 300 iterations. The random state value for DT was set to zero. We used in our tests a machine having a single Nvidia Tesla K80 GPU and 12 GB of RAM.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of accelerometer axis data</ns0:head><ns0:p>In this section, we examine how the tri-axial accelerometer sensor improves classification rate by analyzing the effectiveness and contribution of each axis. Three well-known anomalies were selected: potholes, asphalt speed bumps, and metal bumps. Experimental results were assessed using accuracy metrics. An Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>such architectures affords swapping in different axis data during model initialization. Therefore, we first extract all features from each window for each axis X, Y, and Z, involving time, frequency, and wavelet features. In the next step, we assess the sensitivity of the selected anomaly models according to the input axis data by depicting a combination of the different axis of the accelerometer to recognize the anomalies.</ns0:p><ns0:p>Specifically, we consider all possible combinations: only X-axis, only Y-axis, only Z-axis, X and Y axis, X and Z axis, Y and Z axis and, all axis. We report accuracy achieved using SVM, DT, and MLP. 
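A compact sketch of how this axis-combination experiment can be wired together with the model settings reported above (assuming scikit-learn; the feature matrices and variable names are illustrative):

from itertools import combinations
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

models = {
    'SVM': SVC(C=1.0, kernel='rbf', gamma=2e-3, tol=1e-3),
    'DT': DecisionTreeClassifier(random_state=0),
    'MLP': MLPClassifier(activation='relu', max_iter=300),
}

def evaluate_axis_combinations(features_per_axis, y):
    # features_per_axis maps 'X'/'Y'/'Z' to an (n_windows, 30) feature matrix
    # built with the window-level extractors sketched earlier; y holds labels.
    results = {}
    axes = ['X', 'Y', 'Z']
    for k in range(1, len(axes) + 1):
        for combo in combinations(axes, k):
            X = np.hstack([features_per_axis[a] for a in combo])
            for name, model in models.items():
                scores = cross_val_score(model, X, y, cv=10, scoring='accuracy')
                results[(combo, name)] = scores.mean()
    return results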
In Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> the best classification rates of different anomalies against X-axis is 86%, Y-axis is 88%, Z-axis is 90%, X-Y axis is 89%, X-Z axis is 91%, Y-Z axis is 94%, and X-Y-Z axis is 94%.</ns0:p><ns0:p>An overall analysis of the relative performance achieved using different axes shows that using information from all axes is giving the highest performance with all used models. Besides, according to the results obtained in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>, the use of the Z-axis either alone or with another axis has an impact on the performance of the models. By looking at achieved performances using all models, it shows that the top two accuracy values are 94% and 91%. The highest accuracy (94%) was achieved with the use of the Z-axis with the Y-axis and both X-Y-axis. It is important to notice that accuracy values in the last two columns of Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> are similar. This means that the X-axis has less effect on the model performance when compared to the other axis. Moreover, the Z-axis of the accelerometer sensor is a potential feature. The combination of the Z-axis with the X-axis increases the performance from 86% to 91%. Likewise, its combination with the Y-axis increases the performance from 88% to 94%. The lowest performance values reached are 80%, 81%, and 82%. In all of them, we notice the use of the X-axis either alone or combined with another axis. Unfortunately, simply using the X-axis or concatenating it with another axis does not seem helpful. Also, combining Y and Z-axis data with MLP may indeed give good results (94%). It is important to notice that models settings are kept the same as in the basic configuration to highlight the effect of axes used. We emphasize that MLP and SVM models gave better average results when compared to DT in most of the cases. </ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of sliding window size</ns0:head><ns0:p>We reported in the previous section the performance obtained by combining various accelerometer axes.</ns0:p><ns0:p>In this section, we discuss how the sliding window affects the model's performance. We first convert the accelerometer Z-axis signal into data windows of 30 features that overlap 50% of each other. To prevent removing relevant information from the signal, no preprocessing is applied. As long as the anomalies are diverse, this is normal practice, and even more so if the quality of the data permits it. A machine learning model is designed to identify windows when an anomaly occurs. We use a variety of window sizes for evaluation, ranging from 10 up to 100 in steps of 10. Previous anomalies detection systems have mostly used this interval. Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the performance results for diverse window sizes and models. Since we only need to see the trend in model accuracy as the window size is altered (rather than the absolute performance of each model), the accuracy is only shown as a percentage change as the sliding window size is increased.</ns0:p><ns0:p>Experimental results have shown that window sizes differ between models. One reason for this may be that different anomalies have varying durations. In DB1, SVM and DT models show an increasing performance with larger windows. For size 5, a minimum performance of 81% is achieved, which nevertheless increases up to 94% when the window is enlarged to 80. 
For some window sizes, the performance improves by less than 5% as compared to the performance at size 80. Actually, from a window size greater than 80, no significant benefits are obtained in all models' performances. Regarding DB2, different performance values have been recorded varying between 25% and 59%. Conversely to DB1, all models, especially DT and MLP, suffer from a worsening of performance when the window size is increased beyond 50. Window sizes between 45 and 50 provide the best performance for DT. This It is shown that each model has its optimal window size based on the results. However, considering the figure in Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>, a reasonable range of size might be around 30 to 60 for our used datasets comprising three types of anomalies for DB1 and four types in DB2. In addition, the classification accuracy trend seems to be different for each model in DB2. Nevertheless, it is also found that the window size is highly related to the type of anomaly. Studying each anomaly, in particular, would be of interest.</ns0:p><ns0:p>The choice of small window size is a challenging task when using machine learning techniques because the cost of labeling every short interval of data is extremely high. Several approaches may be used for solving such a problem, including incremental learning or reinforcement learning, which can maintain and modify the expert model over time without the need to re-train it. The main limitation that should be noted, which should be studied further is that only a few benchmarking datasets are available for the classification of road anomalies.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of features</ns0:head><ns0:p>In this experiment, the basic configuration is held constant, while we only alter the number of features extracted for each model. We consider the three feature domains: Time, Frequency, and Wavelet used separately. We report results in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Another important result we obtained is that the model accuracy is higher when using a simulated dataset. Models using data from road simulators (where the anomalies detected are recorded in controlled conditions) lack information about many factors that can affect the signal accuracy such as driver behavior.</ns0:p><ns0:p>On the other hand, the accuracy obtained using real data seems to give a good evaluation of any machine learning model.</ns0:p><ns0:p>A combination of several sets of features has additionally been considered. Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> presents the obtained results. With the application of time and wavelet features, the accuracy increases significantly with an accuracy of 90% for DB1 and 52% for DB2. However, by using all the features, the accuracy is not improved, and the overall performance is lower.</ns0:p><ns0:p>The confusion matrices of best accuracy obtained using wavelet and time features are shown in Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. In DB1, the pothole is sometimes confused with no anomaly. However, there is almost no confusion between the metal bump and other anomalies. As a metal bump will have a different effect from a pothole or regular road, it makes sense because a metal bump is expected to have a distinctive effect.</ns0:p><ns0:p>The results are different in DB2 as we notice that the distribution of accuracy across categories is nearly the same. 
One possible reason is that DB2 is a collection of more realistic data.</ns0:p><ns0:p>The source code for these experiments has been released to researchers on request so they can replicate this work and examine more features and models. This study was conducted to encourage additional research exploring more features and models. This research will also be useful in many other applications, such as activity recognition, to select the appropriate feature extraction techniques. This study describes the methodological challenges of extracting features from various machine learning models input and explores what features are suitable for analyzing road anomalies. In addition, deep learning models may be compared to standard machine learning models.</ns0:p><ns0:p>To ensure excellent accuracy, other techniques could also be incorporated in addition to the wellknown ones used in this study. Several feature techniques combined, for instance, could produce good results. Many of these techniques, however, tend to be computationally expensive, and may not be suitable for applications requiring near-real-time processing. The 'best' domain of features depends on the model. However, it would seem that using the feature domains separately yields at worst precision. Another salient practical point is that the time required to extract features varies according to the domain. In practice, according to these observations, it may be beneficial to use different domain-specific features to improve classification performance.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison with literature works</ns0:head><ns0:p>To evaluate the ability of our approach to competitively produce good predictions of different types of anomalies, we made a comparison with different literature works detailed previously. It is important to notice that the datasets used in these methods are different. Thereby, an exact comparison is not possible due to different data sets and test setups. We mention in table 7 the types of anomalies detected in each method as well as the accuracy of the predictive model used. The accuracy values reported in the table are directly cited from the original publications. Using our proposed set of features, we achieved an accuracy average of 94%, which is a very competitive result compared to state-of-the-art works. Previous works in literature considered that only the Z-axis can represent the anomaly information. Also, their efforts have been focused on threshold heuristics. However, these strategies have shown their limitations when used in real-world applications. Remarkably, achieving a good classification performance is not related to the number of features used. A good set of selected features may conduct to obtaining a much more efficient classifier. This last point implies that the choice of discriminators plays an important role in detecting road irregularities.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>This work tackles the problem of classifying road anomalies using Machine Learning techniques. An extensive experiment was conducted to investigate three machine learning models for classifying road anomalies. In conclusion, we summarize our main findings and conclude with concrete advice for researchers and practitioners wishing to apply these machine learning models to real-world road assessment systems. 
The results show that the accuracy rates of machine learning models trained with features extracted from all three coordinate axes are significantly higher than those trained with the axis perpendicular to the ground only (Z-axis). All three machine learning techniques explored in this paper show this trend.</ns0:p><ns0:p>The results support our hypothesis that all three axes of the coordinate system provide useful information about the road's condition. To have a better performance, a sensitivity analysis resulted in the use of an overlapping window strategy having a size between 30 and 60. Further analysis is needed to completely understand the relation between the window size and the type of anomaly specified. Another important finding of our study is the sensibility of machine learning models to the selected features. We built a better feature vector that increases classification performance. This vector is based on wavelet features that outperform other domain features. Also, the MLP model has a reasonably high level of accuracy Manuscript to be reviewed</ns0:p><ns0:p>Computer Science especially wavelet features, seems to be more effective for preserving most road anomaly characteristics while separating each domain seems to be inefficient. This article discusses only datasets that contain roads with characteristics similar to those considered for this work. Our findings here may not apply to all cases. Nevertheless, we believe these suggestions are likely to be useful for researchers interested in integrating machine learning approaches into real-world anomaly detection tasks.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021) Manuscript to be reviewed Computer Science time accelerometer readings. The feature extraction process is performed in both phases. The reason why it is a crucial step in the classification workflow.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A general flow of machine learning based approach for road anomalies detection.</ns0:figDesc><ns0:graphic coords='4,141.73,99.44,413.58,149.73' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>to assess the effectiveness of our models in classifying road anomalies. Following are the performance scores we calculated in this study. Accuracy, recall, and F1-score. The recall measure is dependent on an understanding and measurement of relevance. Measures of completeness and quantity can be viewed as recall. Accuracy measures the overall performance of the classification.&#8226; Accuracy is measured as a ratio of correctly predicted samples to the number of input samples. It gives a good estimation of the model performance only if there are an equal number of samples belonging to each class. It is calculated using the following equation:Accuracy = T N + T P T N + T P + FN + FP (6) 6/13PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>important property of road anomalies classification models based on accelerometer data is the possibility of taking as input 3 different signals from the accelerometer axis: X, Y, and Z-axis. The flexibility of 7/13 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Effect of the overlapping window size (using only Z-axis for: (a) DB1, (b) DB2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Averaged Confusion matrices of MLP with wavelet and time features (using only Z-axis) for: (a) DB1, (b) DB2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Features Summary.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Domain</ns0:cell><ns0:cell>Feature Name</ns0:cell><ns0:cell>Symbol</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>&#181;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Integral Square</ns0:cell><ns0:cell>IS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Variance</ns0:cell><ns0:cell>Var</ns0:cell></ns0:row><ns0:row><ns0:cell>Time</ns0:cell><ns0:cell>Standard Deviation Median</ns0:cell><ns0:cell>&#963; 30 Med</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Range</ns0:cell><ns0:cell>Rg</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Root Mean Square</ns0:cell><ns0:cell>RMS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Entropy</ns0:cell><ns0:cell>Ent</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Spectrum Energy</ns0:cell><ns0:cell>SE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Median Frequency</ns0:cell><ns0:cell>MF</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean Power</ns0:cell><ns0:cell>MP</ns0:cell></ns0:row><ns0:row><ns0:cell>Frequency</ns0:cell><ns0:cell>Peak Magnitude</ns0:cell><ns0:cell>PM</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Minimum Magnitude</ns0:cell><ns0:cell>MM</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Total Power</ns0:cell><ns0:cell>TP</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Discrete Cosine</ns0:cell><ns0:cell>DC</ns0:cell></ns0:row><ns0:row><ns0:cell>Wavelet</ns0:cell><ns0:cell cols='2'>Five levels -Daubechies 2 cD i j</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>describes the details of the dataset used and the number and type of each anomaly. In this table, the term bumps represents Asphalt bumps, other terms are self-explanatory. 
The dataset were divided into training (70%) and testing sets (30%).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The dataset generated with Pothole Lab<ns0:ref type='bibr' target='#b7'>(Carlos et al., 2018)</ns0:ref>.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Anomaly</ns0:cell><ns0:cell cols='2'>Training set Testing set</ns0:cell></ns0:row><ns0:row><ns0:cell>Potholes</ns0:cell><ns0:cell>362</ns0:cell><ns0:cell>155</ns0:cell></ns0:row><ns0:row><ns0:cell>Metal speed bumps</ns0:cell><ns0:cell>324</ns0:cell><ns0:cell>139</ns0:cell></ns0:row><ns0:row><ns0:cell>Asphalt speed bumps</ns0:cell><ns0:cell>427</ns0:cell><ns0:cell>184</ns0:cell></ns0:row><ns0:row><ns0:cell>Real Data</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>SVM 86% 88% 87%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>93%</ns0:cell></ns0:row><ns0:row><ns0:cell>DT</ns0:cell><ns0:cell>81% 83% 87%</ns0:cell><ns0:cell>83%</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>94%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>MLP 84% 85% 90%</ns0:cell><ns0:cell>82%</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>94%</ns0:cell><ns0:cell>93%</ns0:cell></ns0:row></ns0:table><ns0:note>(%) of our machine learning models according to accelerometer axes used Model X-axis Y-axis Z-axis (X-Y) axis (X-Z) axis (Y-Z) axis (X-Y-Z) axis</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance evaluation of our machine learning models according to domain feature used (without domain combination).When comparing the wavelet with the other feature domains, it provides a very competitive level of accuracy. An MLP model achieved an accuracy of 90% in DB1 and 49% in DB2 (about one percent higher than an SVM model). Similar results were obtained with MLP, which achieved the highest accuracy when</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Simulated Data</ns0:cell><ns0:cell /><ns0:cell>Real Data</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='7'>Model Accuracy Precision Recall F1 Accuracy Precision Recall F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Time</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.88 0.87 0.88</ns0:cell><ns0:cell>0.77 0.56 0.43</ns0:cell><ns0:cell>0.30 0.33 0.55 0.55 0.35 0.38</ns0:cell><ns0:cell>0.43 0.36 0.47</ns0:cell><ns0:cell>0.38 0.33 0.42</ns0:cell><ns0:cell>0.32 0.28 0.33 0.33 0.37 0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>Frequency</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.86 0.83 0.86</ns0:cell><ns0:cell>0.74 0.42 0.27</ns0:cell><ns0:cell>0.26 0.26 0.42 0.42 0.25 0.24</ns0:cell><ns0:cell>0.29 0.34 0.24</ns0:cell><ns0:cell>0.21 0.30 0.04</ns0:cell><ns0:cell>0.20 0.16 0.30 0.30 0.2 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>Wavelet</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.89 0.85 0.90</ns0:cell><ns0:cell>0.59 0.49 0.69</ns0:cell><ns0:cell>0.39 0.41 0.51 0.49 0.42 0.45</ns0:cell><ns0:cell>0.48 0.38 0.49</ns0:cell><ns0:cell>0.46 0.34 044</ns0:cell><ns0:cell>0.38 0.36 0.34 0.34 0.39 0.37</ns0:cell></ns0:row></ns0:table><ns0:note>9/13PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Performance evaluation of our machine learning models according to domain feature used (With domain combination). Theoretically, MLP classifiers implement empirical risk minimization, whereas SVMs minimize structural risk. So, both MLPs and SVMs are efficient and generate the best classification accuracy for our used datasets. However, from our results, we have noticed that the DT achieved the lowest accuracy for both DB1 and DB2. A possible explanation is that decision trees work better with training data which does not exist in our datasets. (DB1 contains four categories and DB2 contains five categories).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Simulated Data</ns0:cell><ns0:cell /><ns0:cell>Real Data</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='7'>Model Accuracy Precision Recall F1 Accuracy Precision Recall F1</ns0:cell></ns0:row><ns0:row><ns0:cell>T + F</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.86 0.86 0.86</ns0:cell><ns0:cell>0.75 0.51 0.33</ns0:cell><ns0:cell>0.26 0.26 0.51 0.51 0.25 0.24</ns0:cell><ns0:cell>0.29 0.36 0.24</ns0:cell><ns0:cell>0.21 0.33 0.04</ns0:cell><ns0:cell>0.20 0.16 0.33 0.33 0.2 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>T + W</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.89 0.88 0.90</ns0:cell><ns0:cell>0.59 0.55 0.67</ns0:cell><ns0:cell>0.39 0.41 0.57 0.56 0.42 0.47</ns0:cell><ns0:cell>0.49 0.42 0.52</ns0:cell><ns0:cell>0.48 0.39 0.54</ns0:cell><ns0:cell>0.40 0.38 0.39 0.39 0.44 0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>F + W</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.87 0.86 0.87</ns0:cell><ns0:cell>0.94 0.49 0.54</ns0:cell><ns0:cell>0.28 0.30 0.50 0.49 0.27 0.27</ns0:cell><ns0:cell>0.30 0.40 0.34</ns0:cell><ns0:cell>0.22 0.36 0.34</ns0:cell><ns0:cell>0.20 0.14 0.35 0.35 0.28 0.21</ns0:cell></ns0:row><ns0:row><ns0:cell>T + F + W</ns0:cell><ns0:cell>SVM DT MLP</ns0:cell><ns0:cell>0.87 0.87 0.87</ns0:cell><ns0:cell>0.94 0.53 0.57</ns0:cell><ns0:cell>0.28 0.30 0.56 0.54 0.47 0.47</ns0:cell><ns0:cell>0.30 0.42 0.42</ns0:cell><ns0:cell>0.22 0.39 0.30</ns0:cell><ns0:cell>0.20 0.14 0.39 0.39 0.30 0.25</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>using wavelet decomposition.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Accuracy comparison between the best classifier reported in this work and works from the literature. With the extraction of all domain features, an overall success rate of 94% is observed when compared to the ground truth. From this point of view, merging different feature domains,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Anomalies detected</ns0:cell><ns0:cell>Technique</ns0:cell><ns0:cell>Accuracy average</ns0:cell></ns0:row><ns0:row><ns0:cell>Eriksson et al. (2008)</ns0:cell><ns0:cell>Potholes</ns0:cell><ns0:cell>Threshold</ns0:cell><ns0:cell>92.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Pothole, Bumps,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fazeen et al. 
(2012)</ns0:cell><ns0:cell>Rough, Smooth</ns0:cell><ns0:cell>Threshold</ns0:cell><ns0:cell>85.6%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>uneven</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Martinez et al. (2014)</ns0:cell><ns0:cell>potholes, speed bumps, metal humps, rough roads</ns0:cell><ns0:cell>ANN, Logistic regression</ns0:cell><ns0:cell>86%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Potholes, Metal bumps,</ns0:cell><ns0:cell>ANN, SVM,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gonz&#225;lez et al. (2017)</ns0:cell><ns0:cell>Asphalt bumps, Regular road,</ns0:cell><ns0:cell>DT, RF,</ns0:cell><ns0:cell>93.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Worn out road</ns0:cell><ns0:cell>NB, KR, KNN</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Alam et al. (2020)</ns0:cell><ns0:cell>speed-breakers, potholes, broken road patches</ns0:cell><ns0:cell>Decision tree</ns0:cell><ns0:cell>93%</ns0:cell></ns0:row><ns0:row><ns0:cell>This work</ns0:cell><ns0:cell>Potholes, Metal speed bumps, Asphalt speed bumps</ns0:cell><ns0:cell>MLP, DT, SVM</ns0:cell><ns0:cell>94%</ns0:cell></ns0:row><ns0:row><ns0:cell>in classifying anomalies.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='13'>/13 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65455:1:0:NEW 5 Dec 2021) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Imam AbdulRahman Bin Faisal University Eastern Province, Dammam, Saudi Arabia December 2nd, 2021 Dear Editors We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. In particular, the experimental section is detailed, and technical discussions were added. A new section containing a comparison with literature work is added and all language errors are corrected. We believe that the manuscript is now suitable for publication in PeerJ. Dr. Eman Ferjani Assistant Professor of Computer Science. On behalf of all authors. Reviewer 1 Basic reporting 1-The importance of the application domain should be shortly mentioned and emphasized in the abstract. 2-The study being retrospective and no independent assessor for the outcome would carry great bias in different issues including selection and exclusion of patients and outcome assessment. Experimental design 3-In each part, in the experimental results and analysis section, before giving the driven conclusions, the results should be summarized, explaining what is given in Table, column, row, etc. This is necessary, so that the reader can validate the outcomes obtained. 4-Include more technical discussions on the observations would strengthen the paper's contribution. Validity of the findings 5-The comparison is not fair to verify the proposed method. Include more technical discussions on the observations would strengthen the paper's contribution. 6-More recent references should also be included: Only 3 out of 25 cited papers are published in the last 5 years. 7-Language, grammar errors need to be addressed and add latest references. Additional comments 8-The study being retrospective and no independent assessor for the outcome would carry great bias in different issues including selection and exclusion of patients and outcome assessment. 1.Of course, you're right, so I've added the sentence in page 1, line 11 and lines 29-31. 2.Agreed. I have added a new section starting from page 11, line 353 about the comparison of our proposed set of features with literature works. A detailed discussion was added so that the reader finds clear the importance of our conducted study. 3.Agree. Therefore, I summarized our findings in the abstract (lines 20-21). Also, I added in each subsection of the experimental results, a description of the values in each table as well as a summary of the results. (Lines 226-240, lines 246-248, line 254, lines 256-258, lines 262269, lines 275-282, lines 287-297and lines 300-302 4.Agree. So, I added technical details in lines 287-297and lines 300-302 2 5.Same as 4 6.you're right. I added the following references: Bridgelall, R. and Tolliver, D. (2020). Accuracy enhancement of anomaly localization with participatory sensing vehicles. Sensors, 20(2):409 Outay, F., Mengash, H. A., and Adnan, M. (2020). Applications of unmanned aerial vehicle (uav) in road safety, traffic and highway infrastructure management: Recent advances and challenges. Transportation Research Part A: Policy and Practice, 141:116–129. 7.All Language, grammar errors are checked and corrected. 8.Same as 2. 3 Reviewer 2 Basic reporting 1. extracting 30 features what are those and not explained properly Experimental design 2. architecture is not available. if possible, add it Validity of the findings good Additional comments NO 1.You’re right. Therefore, I mentioned clearly in line 231 that the features extracted are described in table 1. 2.Agree. 
I added the lines 243-244 to describe the architecture used in the experiments. 4 Reviewer 3 Basic reporting 1.The article flows well. However, there are several grammatical and spelling errors in the submission. Please ensure you correct them. Experimental design 2.The datasets used in the paper should be further defined so that the reader understands different nuances involved in it. In the current state, the paper only describes the datasets in couple of sentences. Validity of the findings 3.I am fine with the methods used. Please include the majority class baseline for your models in table 4. 1.All Language, grammar errors are checked and corrected. 2.You’re right. So, I added lines 187-190, lines 193-196 and lines 203-204 to further define the used datasets. 3.Agreed. So, I added lines 238-240 to clarify what we did to avoid the randomness effect in our used classifiers, we run the experiment 10 times and we computed the average over runs. We have mentioned this clearly. Moreover, we applied cross-validation in the performance evaluation of each classification model to estimate the skill of a machine learning model on unseen data (Lines 217-218). 5 "
Here is a paper. Please give your review comments after reading it.
398
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Random forwarding networks play a significant role in solving security and load balancing problems. As a random quantity easily obtained by both sender and receiver, the end-toend delay of Random forwarding networks can be utilized as an effective random source for cryptography-related applications. In this paper, we propose a mathematical model of Random forwarding networks and give the calculation method of end-to-end delay distribution. In exploring the upper limit of the randomness of end-to-end delay, we find that the end-to-end delay collision of different forwarding routes is the main reason for the decrease of end-to-end delay randomness. Some of these collisions can be optimized by better network deployment, while others are caused by some interesting network topology, which is unavoidable. For further analysis, we propose an algorithm to calculate the inevitable collision in Random forwarding networks skillfully by using Symbol Matrix, and we give the optimal node forwarding strategy with the maximum randomness of the end-to-end delay for a given number of middle forwarding nodes and forwarding times.</ns0:p><ns0:p>Finally, we introduce a specific application of generating symmetric keys by using the randomness of the end-to-end delay.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>A drunk started from a bar to go home. When he arrived at a crossroads, he couldn't recognize the way back because of drunkenness. There were two choices in front of him. One was to stay in place for a while, the other was to choose a road in front of him at random. The streets of this city extend in all directions, and the drunk could go anywhere he could go. After walking a few blocks, the drunk woke up and went back to his home directly. Because the drunk goes to the bar every day, his wife at home is curious about what regularity of the time he comes back?</ns0:p><ns0:p>The problem of drunk returning home can be modeled by random forwarding networks and the time drunk spends on the road is the end-to-end delay of a random forwarding route. Suppose we have a random forwarding network G consisting of m middle forwarding nodes Z 1 , Z 2 , . . . , Z m . G plays the role of forwarding the delay measurement data packet sent by Alice to Bob. The forwarding rules are as follows:</ns0:p><ns0:p>&#8226; Firstly, Alice randomly selects a middle forwarding node to send the initial delay measurement data packet.</ns0:p><ns0:p>&#8226; Secondly, this middle forwarding node randomly selects other middle forwarding nodes as the next hop of forwarding or forwards the packet to itself, and stipulates that the total forwarding times of the delay measurement data packet is N, which is recorded in the data packet. Every time the data packet is forwarded, the remaining forwarding times is reduced by 1 by the currently receiving middle forwarding node.</ns0:p><ns0:p>&#8226; Finally, when the remaining forwarding time becomes 0, the current middle forwarding node directly forwards this delay measurement data packet to Bob and finishes this forwarding.</ns0:p><ns0:p>Obviously, the end-to-end delay is related to the number of middle forwarding nodes, random PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science forwarding strategy ,and forwarding times. 
The main content of this paper is to reveal the relationship between them.</ns0:p><ns0:p>First, let us introduce the definition of Random forwarding networks. Random forwarding networks (RFNs) are a kind of network consisting of several network nodes called middle forwarding nodes with random forwarding as the forwarding strategy. Different from the Open Shortest Path First (OSPF) forwarding strategy, RFNs do not focus on efficient data transmission, but on the security application and load balancing in the process of data forwarding.</ns0:p><ns0:p>Security application is embodied in the attacker's inability to track the data in the Random forwarding networks because the forwarding node is randomly selected rather than determined by some forwarding rules <ns0:ref type='bibr' target='#b8'>(Duan et al., 2013)</ns0:ref>. The famous Tor network takes advantage of the anonymity of random forwarding, and Tor agents replace users to visit service sites to keep users secure. Using onion routing technology, access requests are randomly forwarded among several Tor network agents, hiding users' real addresses <ns0:ref type='bibr' target='#b20'>(Syverson et al., 2001)</ns0:ref>. In Optical Transport Networks (OTN), random forwarding is potentially more secure than explicit forwarding, and the probability that a wiretapper recovers a whole secure data as the first try is in the range of 10 &#8722;7 <ns0:ref type='bibr' target='#b9'>(Engelmann et al., 2014)</ns0:ref>.</ns0:p><ns0:p>while Load balancing can evenly distribute tasks to multiple working nodes, which is an essential technology in high-performance web services <ns0:ref type='bibr' target='#b15'>(Liu et al., 2013)</ns0:ref>. In wireless sensor networks (WSNs), random forwarding can provide a more stable and longer lifetime of networks <ns0:ref type='bibr' target='#b14'>(Li and Kim, 2015)</ns0:ref>. In addition, RFNs are strongly extensible because the random forwarding strategy makes every node have equal status, and it can flexibly add new nodes without changing the basic forwarding logic. Because of the flexibility, RFNs also has strong robustness. When an abnormal node in an RFN is detected, the whole RFN can still work effectively by deleting the abnormal node from the forwarding list.</ns0:p><ns0:p>The whole network delay from Alice to Bob is the end-to-end delay. In RFNs, the end-to-end delay has strong randomness, and it can be easily measured by both sender and receiver, which is of great significance in cryptography <ns0:ref type='bibr' target='#b0'>(Abdelkefi and Jiang, 2011)</ns0:ref>. The delay between middle forwarding nodes has stability and reciprocity, in which stability means that there is no obvious fluctuation in the delay between middle forwarding nodes within a short period time (within a few minutes), while reciprocity means that the communication round-trip delay is approximately equal <ns0:ref type='bibr' target='#b5'>(Choi et al., 2004)</ns0:ref>.</ns0:p><ns0:p>In physical layer security, it is a valuable technology to generate security keys by using the reciprocity and randomness of wireless channels, which can enable both parties to quickly establish a secure communication channel <ns0:ref type='bibr' target='#b21'>(S&#225;nchez et al., 2020)</ns0:ref>. The lightweight security solutions relying on key generation from wireless channels are eminently suitable for the Internet of Things (IoTs) <ns0:ref type='bibr' target='#b26'>(Zhang et al., 2020)</ns0:ref>. 
Similarly, the end-to-end delay with reciprocity and randomness in RFNs can also be used to achieve the same purpose. However, the difference is that using wireless channel characteristics to generate keys imposes strong restrictions on the communication distance, while using network characteristics has no such restriction and can therefore achieve cross-regional key negotiation.</ns0:p><ns0:p>Therefore, in order to further explore the potential of RFNs in multi-node cross-domain secret sharing and key distribution, this paper mainly discusses the randomness of end-to-end delay in RFNs. The main contributions of this paper are summarized as follows:</ns0:p><ns0:p>1. We proposed the mathematical model of RFNs and derived the mathematical formula of the end-to-end delay distribution.</ns0:p><ns0:p>2. We presented a quantitative calculation method of the end-to-end delay randomness based on information entropy and gave a theoretical explanation.</ns0:p><ns0:p>3. We explored the forwarding strategy that maximizes the randomness of end-to-end delay when the number of middle forwarding nodes and the number of forwarding times are constant. We revealed that the main reason for the decrease of the randomness of end-to-end delay is delay collision, and provided the optimal forwarding strategy and the theoretical upper limit of end-to-end delay randomness under different numbers of middle forwarding nodes and random forwarding times.</ns0:p><ns0:p>4. We introduced the application of cross-domain key distribution using the randomness and reciprocity of end-to-end delay.</ns0:p></ns0:div> <ns0:div><ns0:head>RFNS MODEL</ns0:head></ns0:div> <ns0:div><ns0:head>The end-to-end delay probability distribution</ns0:head><ns0:p>In this section, we first give the algebraic relationship between the end-to-end delay distribution and the forwarding strategy of the middle forwarding nodes.</ns0:p><ns0:p>Because the time delay between nodes in the forwarding network is stable in the short term, once the deployment of the forwarding network G is completed, the delay between nodes is fixed for that period. Here are some notational conventions used in this paper: the delay and the forwarding probability between Alice and middle forwarding node Z i are denoted as d ai and p ai respectively, the delay and the forwarding probability between middle forwarding nodes Z i and Z j are denoted as d i j and p i j respectively, and the delay between Bob and middle forwarding node Z i is denoted as d ib .</ns0:p><ns0:p>In the process of forwarding, we use the delay monomial px^d to keep the cumulative information of probability and the cumulative information of delay, because of the following property:</ns0:p><ns0:formula xml:id='formula_0'>p_1 x^{d_1} &#183; p_2 x^{d_2} = p_1 p_2 x^{d_1 + d_2}.</ns0:formula><ns0:p>Take Figure 1 as an example: the delay and the probability of the forwarding route r: Alice &#8594; Z 1 &#8594; Z 2 &#8594; Z 3 &#8594; Bob can be calculated as</ns0:p><ns0:formula xml:id='formula_1'>p_r x^{d_r} = &#8719; p_i x^{d_i} = (p_{a1} p_{12} p_{23} p_{3b}) x^{(d_{a1} + d_{12} + d_{23} + d_{3b})}</ns0:formula><ns0:p>Figure 1. Random forwarding networks for m = 3</ns0:p><ns0:p>Figure 2 describes all possible forwarding routes from Alice to Bob under the general conditions of m nodes and N forwarding times and defines the set of these routes as S.
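As a small numerical illustration of this bookkeeping, the following SymPy sketch enumerates the routes of a toy RFN and sums their delay monomials (the number of nodes, the delay values, and the assumption that self-forwarding adds no delay are made-up choices for illustration, not values from this paper):

from itertools import product
import sympy as sp

x = sp.symbols('x')

# Toy RFN with m = 2 middle forwarding nodes and made-up integer delays.
d_a = {1: 3, 2: 5}                                   # Alice -> Z_i
d_b = {1: 4, 2: 2}                                   # Z_i -> Bob
d_z = {(1, 1): 0, (2, 2): 0,                         # assume self-forwarding adds no delay
       (1, 2): 6, (2, 1): 6}                         # Z_i -> Z_j
p_a = {1: sp.Rational(1, 2), 2: sp.Rational(1, 2)}   # Alice's random first hop
p_z = {k: sp.Rational(1, 2) for k in d_z}            # uniform random forwarding

def delay_polynomial(N):
    # Sum the delay monomial p_r * x**d_r over every route r in S:
    # the first hop chosen by Alice followed by N random forwardings.
    poly = 0
    for route in product([1, 2], repeat=N + 1):
        p, d = p_a[route[0]], d_a[route[0]]
        for u, v in zip(route, route[1:]):
            p *= p_z[(u, v)]
            d += d_z[(u, v)]
        d += d_b[route[-1]]
        poly += p * x**d
    return sp.expand(poly)

print(delay_polynomial(N=2))

Expanding the sum merges like terms automatically, which is precisely the end-to-end delay collision analyzed in the following sections.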
Since each route corresponds to a delay monomial p r x r , then the distribution of end-to-end delay is the sum of the delay multinomial p r x r corresponding to all routes, and we express this sum as</ns0:p><ns0:formula xml:id='formula_2'>p(x) = &#8721; r&#8712;S p r x d r</ns0:formula><ns0:p>After simplification, we get p(x) = &#8721; n i=1 p i x d i , which means that the probability of taking d i as the end-to-end delay is p i .</ns0:p><ns0:p>p(x) is the polynomial form of the end-to-end delay probability distribution. Considering the multilayer network structure of forwarding routes, the vector form of the end-to-end delay distribution polynomial p(x) can be calculated as follows p(x) = s s s </ns0:p></ns0:div> <ns0:div><ns0:head>Measurement of end-to-end delay randomness</ns0:head><ns0:p>As shown in Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>, the network model of end-to-end time delay generated by the random forwarding network is regarded as a black box. Given forwarding strategy (P A P A P A ,P Z P Z P Z ), this black box will randomly generate end-to-end time delay data, which will obey the probability distribution defined by p(x). This is similar to a discrete source sending uncertain symbols in communication. The randomness of a source sending symbols can be measured by information entropy, which reflects the uncertainty of a source by calculating the average self-information amount of symbols <ns0:ref type='bibr' target='#b18'>(Shannon, 1948)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RFN Generate Input</ns0:head></ns0:div> <ns0:div><ns0:head>P A, P Z</ns0:head><ns0:p>End-to-end delay Therefore, by calculating the information entropy of end-to-end delay, we can quantitatively analyze its randomness. If the randomness of end-to-end delay is exploited to generate the secret key, the effective length of the secret key is proportional to the randomness of end-to-end delay. For example, if the end-to-end delay is given by the front and back of a coin thrown, d 1 will be generated on the front side and d 2 will be generated on the backside, that is to say, the end-to-end delay will only generate two possible values with the same probability, so there are at most two corresponding secret keys. Although the secret key length can be expanded by some algorithms like Hash <ns0:ref type='bibr' target='#b3'>(Bellare et al., 1996)</ns0:ref>, the effective key code length is actually only 1bit, which is the information entropy of the end-to-end delay.</ns0:p><ns0:p>The measurement formula of end-to-end delay randomness is as follows</ns0:p></ns0:div> <ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_9'>2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_3'>H d = &#8722; &#8721; i p i log p i (2)</ns0:formula><ns0:p>where p i are the coefficients of p(x) calculated by Eq. ( <ns0:ref type='formula'>1</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>OPTIMIZATION OF THE RANDOMNESS OF END-TO-END DELAY</ns0:head><ns0:p>This section mainly discusses how to improve the randomness of end-to-end delay, which is of great significance in cryptography.</ns0:p></ns0:div> <ns0:div><ns0:head>evitable collision and inevitable collision of end-to-end delay</ns0:head><ns0:p>End-to-end delay collision (hereinafter referred to as collision) means that two different forwarding routes have the same end-to-end delay. 
Collision is one of the main reasons leading to the decrease of the randomness of end-to-end delay because of the reduction of end-to-end delay sample space.</ns0:p><ns0:p>Collisions that can be solved by adjusting RFNs deployment are referred to as evitable collisions.</ns0:p><ns0:p>Otherwise, they are referred to as inevitable collision. These two collisions are described in detail below.</ns0:p><ns0:p>evitable collision In order to show this collision intuitively, an example as shown in Figure <ns0:ref type='figure' target='#fig_1'>4</ns0:ref> is provided, which is an equal delay forwarding network with two middle forwarding nodes, in which the delay between any two nodes is approximately the same (replaced by 1).</ns0:p><ns0:p>Taking single forwarding as an example, it is easy to find from Figure <ns0:ref type='figure' target='#fig_1'>4</ns0:ref> that the end-to-end delay of</ns0:p><ns0:formula xml:id='formula_4'>route Alice &#8594; Z 1 &#8594; Z 1 &#8594; Bob is the same as Alice &#8594; Z 2 &#8594; Z 2 &#8594; Bob,</ns0:formula><ns0:p>and the end-to-end delay of route The end-to-end delay distribution polynomial corresponding to Figure <ns0:ref type='figure' target='#fig_1'>4</ns0:ref> is</ns0:p><ns0:formula xml:id='formula_5'>Alice &#8594; Z 1 &#8594; Z 2 &#8594; Bob is the same as Alice &#8594; Z 2 &#8594; Z 1 &#8594; Bob, that</ns0:formula><ns0:formula xml:id='formula_6'>p(x) = p a1 x p a2 x T p 11 p 12 x p 21 x p 22 x x = p a1 p 11 x 2 + p a2 p 22 x 2 + p a1 p 12 x 3 + p a2 p 21 x 3</ns0:formula><ns0:p>The collision of end-to-end delay is reflected by the existence of like terms in the end-to-end delay distribution polynomial, and the existence of like terms reduces the randomness of end-to-end delay. For</ns0:p><ns0:formula xml:id='formula_7'>example, Alice &#8594; Z 1 &#8594; Z 1 &#8594; Bob corresponds to p a1 p 11 x 2 , Alice &#8594; Z 2 &#8594; Z 2 &#8594; Bob corresponds to p a2 p 22</ns0:formula><ns0:p>x 2 , which are like terms.</ns0:p><ns0:p>Equal delay forwarding networks are prone to delay collisions. To avoid such collisions, the deployment of forwarding networks can be adjusted, such as the forwarding network shown in Figure <ns0:ref type='figure' target='#fig_2'>5</ns0:ref>.</ns0:p><ns0:p>Similarly, taking a single forwarding as an example, the corresponding end-to-end delay distribution polynomial is Manuscript to be reviewed There is no like term in the adjusted end-to-end delay distribution polynomial, that is to say, the endto-end delay corresponding to each possible forwarding route is different, which improves the randomness of the measurement delay. This kind of collision is called evitable collision.</ns0:p><ns0:formula xml:id='formula_8'>p(x) = p a1 x p a2 x</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Inevitable collision Taking m = 2 and N = 2 as an example, the end-to-end delay distribution polynomial is as follows p(x) = s s s will lead to the existence of like terms in the expansion. The collision caused by such like terms can not be avoided by adjusting the deployment. So, we call this kind of collision inevitable collision.</ns0:p><ns0:p>Taking the forwarding network in Figure <ns0:ref type='figure' target='#fig_2'>5</ns0:ref> as an example, make a forwarding route map under two forwarding, which is shown as Figure <ns0:ref type='figure' target='#fig_3'>6</ns0:ref>. 
The blue route (- </ns0:p><ns0:formula xml:id='formula_9'>&#8226;&#8226;) is Alice &#8594; Z 1 &#8594; Z 1 &#8594; Z 2 &#8594; Bob and the yellow route (-&#8226;) is Alice &#8594; Z 1 &#8594; Z 2 &#8594; Z 2 &#8594; Bob,</ns0:formula></ns0:div> <ns0:div><ns0:head>Fast Calculation of Inevitable Collision Using Symbol Matrix</ns0:head><ns0:p>The collision of end-to-end delay is the main reason for the decrease of the randomness of end-to-end delay. The evitable collision can be solved by adjusting the deployment, while the inevitable collision is an unavoidable problem in the optimization of the randomness of end-to-end delay. Therefore, this subsection introduces a method for quickly calculating the inevitable collision in RFNs.</ns0:p><ns0:p>We have known that the inevitable collision depends on whether there are like terms in the internal elements of matrix P P P N , which is an inherent property of matrix power operation and is independent of the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science value of the specific elements of the matrix itself. Symbol matrix is a matrix composed of simple symbols, which is very suitable for revealing the structure of like terms in the internal elements of matrix P P P N .</ns0:p><ns0:p>The diagonals of the symbol matrix are all replaced by 1, which represents that the nodes forward to themselves will not change the end-to-end delay. The non-diagonals represent the delay between different nodes and are replaced by symbols. In fact, the symbol matrix is only a simplification of the forwarding matrix P P P. In this paper, S S S m is used to denote the symbol matrix of the forwarding matrix P P P with m nodes.</ns0:p><ns0:p>Note that S S S m is symmetric.</ns0:p><ns0:p>For example, the symbol matrix S S S 2 for m = 2 is where 2a is the result of merging like terms, the coefficient represents that the number of inevitable collision routes is 2.</ns0:p><ns0:formula xml:id='formula_10'>S S S 2 = 1 a</ns0:formula><ns0:p>With the help of symbol matrix, it is easier to calculate the inevitable collision in complex cases. Taking m = 3 as an example, the symbol matrix S S S 3 is</ns0:p><ns0:formula xml:id='formula_11'>S S S 3 = &#63723; &#63725; 1 a b a 1 c b c 1 &#63734; &#63736; When N = 2, the symbol matrix S S S 2 3 is S S S 2 3 = &#63723; &#63725; 1 + a 2 + b 2 2a + bc 2b + ac 2a + bc 1 + a 2 + c 2 2c + ab 2b + ac 2c + ab 1 + b 2 + c 2 &#63734; &#63736;</ns0:formula><ns0:p>We find that the form of the elements on the main diagonal of S S S 2 3 is consistent, and the form of the elements on the upper triangle and the lower triangle (except the main diagonal) of S S S 2 3 is consistent. The difference only exists in the rotation of symbols, which is called rotation consistency. That is to say, as long as the first two elements of the first line of S S S 2 3 are calculated, the remaining elements can be recovered by rotation consistency.</ns0:p><ns0:p>So S S S 2 3 can be compressed as</ns0:p><ns0:formula xml:id='formula_12'>S S S 2 3 = (1 + a 2 + b 2 , 2a + bc) a,b,c</ns0:formula><ns0:p>Where the elements in () is the first two elements in S S S 2 3 and the subscript a, b, c denote the symbols of rotation.</ns0:p><ns0:p>Two operators is used to recover the original S S S 2 3 from the compressed S S S 2 3 .</ns0:p><ns0:p>&#8226; The first operator is the cyclic permutation transformation R: &#947; &#947; &#947; = [ f , g, e 23 (g), e 24 (g), . . . 
, e 2m (g)] T 8:</ns0:p><ns0:formula xml:id='formula_13'>&#63723; &#63725; f 1 (a, b, c) f 2 (a, b, c) f 3 (a, b, c) &#63734; &#63736; R &#8594; &#63723; &#63725; f 3 (&#963; (a, b, c)) f 1 (&#963; (a, b, c)) f 2 (&#963; (</ns0:formula><ns0:formula xml:id='formula_14'>f &#8592; &#947; &#947; &#947; T &#181; 0 &#181; 0 &#181; 0 9: g &#8592; &#947; &#947; &#947; T &#181; 1 &#181; 1 &#181; 1 ) 10: &#947; &#947; &#947; = [ f , g, e 23 (g), e 24 (g), . . . , e 2m (g)] T 11: S S S N m = [&#947; &#947; &#947;, R(&#947; &#947; &#947;), . . . , R m&#8722;1 (&#947; &#947; &#947;)] 12: return S S S N m</ns0:formula><ns0:p>&#8226; The second operator is replacement transformation e i j : </ns0:p><ns0:formula xml:id='formula_15'>f i (a, b, c) e i j &#8594; f j (a, b, c) = f i (e i j</ns0:formula><ns0:formula xml:id='formula_16'>&#181; &#181; &#181; = &#63723; &#63725; f 1 (a, b, c) f 2 (a, b, c) f 2 (e 23 (a, b, c)) &#63734; &#63736;</ns0:formula><ns0:p>By using operators R and e i j , the complete matrix S S S 2 3 can be recovered from the first two elements of the S S S 2 3 . This property is universal, and there is such rotation consistency for any number of nodes and any number of forwarding times (See APPENDIX for proof).</ns0:p><ns0:p>Now, we will show how to use these two operators to calculate S S S 3 3 easily:</ns0:p><ns0:formula xml:id='formula_17'>S S S 3 = (1, a) a,b,c , &#181; 0 &#181; 0 &#181; 0 = &#63723; &#63725; 1 a b &#63734; &#63736; , &#181; 1 &#181; 1 &#181; 1 = R(&#181; 0 &#181; 0 &#181; 0 ) = &#63723; &#63725; a 1 c &#63734; &#63736; , &#947; &#947; &#947; = &#63723; &#63725; 1 a e 23 (a) &#63734; &#63736; = &#181; 0 &#181; 0 &#181; 0 S S S 2 3 = (&#947; &#947; &#947; T &#181; 0 &#181; 0 &#181; 0 ,&#947; &#947; &#947; T &#181; 1 &#181; 1 &#181; 1 ) a,b,c = (1 + a 2 + b 2 , 2a + bc) a,b,c , &#947; &#947; &#947; = &#63723; &#63725; 1 + a 2 + b 2 2a + bc e 23 (2a + bc) &#63734; &#63736; = &#63723; &#63725; 1 + a 2 + b 2 2a + bc 2b + ac &#63734; &#63736; S S S 3 3 = (&#947; &#947; &#947; T &#181; 0 &#181; 0 &#181; 0 ,&#947; &#947; &#947; T &#181; 1 &#181; 1 &#181; 1 ) a,b,c = (1 + 3a 2 + 3b 2 + 2abc, a 3 + ab 2 + ac 2 + 3a + 3bc) a,b,c</ns0:formula><ns0:p>where &#947; &#947; &#947; is the first column of S S S N 3 .</ns0:p><ns0:p>Generally, the fast power of symmetric symbol matrix (FPSSM) is given by Algorithm 1 to calculate matrix S S S N m easily. Because every loop in FPSSM only needs to calculate two times vector multiplication, the algorithm reduces the time complexity of polynomial matrix multiplication from O(Nm 3 ) to O(Nm) and the space complexity from O(m 2 ) to O(1). The complexity here refers to the complexity of polynomial multiplication, not the complexity of conventional numerical multiplication.</ns0:p><ns0:p>Now we have powerful tools to study the inevitable collision of RFNs in complex conditions. As long as we calculate S S S N m , all possible inevitable collisions can be obtained. Take m = 3, N = 3 as an example, every term in S S S 3 3 whose coefficient is not 1 represents an inevitable collision. Figure <ns0:ref type='figure'>7</ns0:ref> shows the inevitable collision of 3a 2 and 2abc in S S S 3 3 . Among them, the first figure labeled 3a 2 shows a kind of inevitable collision caused by self forwarding, while the second figure labeled 2abc shows another kind of inevitable collision caused by symmetry in the forwarding route map. Of course, these two types are not mutually exclusive. 
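The like-term counts quoted above for m = 3 and N = 3 can be checked directly with a naive symbolic matrix power (our sketch, assuming SymPy is available); this is not the compressed FPSSM recursion of Algorithm 1, which keeps only the first two entries and restores the rest through rotation consistency.

import sympy as sp

a, b, c = sp.symbols('a b c')
S3 = sp.Matrix([[1, a, b],
                [a, 1, c],
                [b, c, 1]])                      # symbol matrix for m = 3

S3_cubed = (S3 ** 3).applyfunc(sp.expand)        # naive power: O(N m^3) polynomial products
entry = S3_cubed[0, 0]
print(entry)                                     # 2*a*b*c + 3*a**2 + 3*b**2 + 1

# every monomial with coefficient > 1 is an inevitable collision;
# the coefficient counts the colliding routes
collisions = {mono: coef
              for mono, coef in entry.as_coefficients_dict().items() if coef > 1}
print(collisions)                                # e.g. {a**2: 3, b**2: 3, a*b*c: 2}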
There are also inevitable collisions caused by both self-forwarding and symmetry in forwarding route maps with more middle forwarding nodes.</ns0:p></ns0:div> <ns0:div><ns0:head>8/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Alice</ns0:p><ns0:formula xml:id='formula_18'>Z 1 Z 2 Z 1 Z 2 Z 1 Z 2 Bob N=1 N=2 Z 3 Z 3 Z 3 Z 1 Z 2 N=3 Z 3 (2&#119886;&#119887;&#119888;) Alice Z1 Z2 Z1 Z2 Z1 Z2 Bob N=1 N=2 Z3 Z3 Z3 Z1 Z2 N=3 Z3 (3&#119886; 2 )</ns0:formula><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>. Two typical inevitable collision of end-to-end delay for m = 3, N = 3</ns0:p><ns0:p>The upper limit of end-to-end delay randomness and the optimal forwarding strategy</ns0:p><ns0:p>In this subsection, we will explore how to formulate random forwarding strategies to achieve the upper limit of end-to-end delay randomness. We have known that the collision of end-to-end delay will lead to the decrease of randomness, so the first step is to adjust the deployment to remove all evitable collisions.</ns0:p><ns0:p>In this way, our goal becomes the optimal forwarding strategy under the inevitable collision deployment.</ns0:p><ns0:p>Our optimization problem is that, for a given non-evitable collision random forwarding network G (including Alice and Bob), what is the optimal forwarding strategy to maximize the entropy of the end-to-end delay information? The mathematical form is described as follows</ns0:p><ns0:formula xml:id='formula_19'>Given : G = (V, E), V = {Alice, Z 1 , Z 2 , . . . , Z m , Bob} Goal : max P A P A P A ,P Z P Z P Z H d = &#8722; &#8721; i p i log p i</ns0:formula><ns0:p>where p i are the coefficients of p(x) calculated by Eq. ( <ns0:ref type='formula'>1</ns0:ref>).</ns0:p><ns0:p>The maximum entropy problem is a convex optimization, and its optimal solution exists and is unique <ns0:ref type='bibr' target='#b4'>(Boyd and Vandenberghe, 2004)</ns0:ref>, which is the key to solving this optimization problem. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; Property 1</ns0:p><ns0:formula xml:id='formula_20'>C m (A A A) = A A A, A A A &#8712; R m&#215;m &#8226; Property 2 C(A A A)C(B B B) = C(AB AB AB), A A A,B B B &#8712; R m&#215;m</ns0:formula><ns0:p>&#8226; Property 3 C(x x x T Ay Ay Ay) = x x x T Ay Ay Ay, A A A &#8712; R m&#215;m , x x x,y y y &#8712; R m</ns0:p><ns0:p>Then, rewrite the end-to-end delay distribution polynomial p(x) with Hadamard Product as p(x) = s s s T P P P N t t t = (P A P</ns0:p><ns0:formula xml:id='formula_21'>A P A &#8226; x D A D A D A ) T (P Z P Z P Z &#8226; x D Z D Z D Z ) N x D B D B D B<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_22'>x D A D A D A = x d a1 x d a2 . . . x d am T , x D B D B D B = x d 1b x d 2b . . . 
x d mb T and x D Z D Z D Z = x d i j m&#215;m .</ns0:formula><ns0:p>The operator &#8226; is the Hadamard product operator defined by </ns0:p><ns0:formula xml:id='formula_23'>(A A A &#8226;B B B) i j = (A A A) i j (B B B) i j Because H d is calculated by p i ,</ns0:formula><ns0:formula xml:id='formula_24'>P A ) &#8226; x D A D A D A ) T (C(P Z P Z P Z ) &#8226; x D Z D Z D Z )) N x D B D B D B</ns0:formula><ns0:p>According to Property 2 and Property 3, we have </ns0:p><ns0:formula xml:id='formula_25'>C(P A P A P A ) T C(P Z P Z P Z ) N 1 1 1 = C(P A P A P A T )C(P Z P Z P Z N )1 1 1 = C(</ns0:formula><ns0:formula xml:id='formula_26'>P A ) = C 2 (P A P A P A ) = &#8226; &#8226; &#8226; = C m&#8722;1 (P A P A P A ) P Z P Z P Z = C(P Z P Z P Z ) = C 2 (P Z P Z P Z ) = &#8226; &#8226; &#8226; = C m&#8722;1 (P Z P Z P Z ) That is &#63729; &#63732; &#63732; &#63732; &#63732; &#63730; &#63732; &#63732; &#63732; &#63732; &#63731; p a1 = p a2 = &#8226; &#8226; &#8226; = p am = 1 m p 11 = p 22 = &#8226; &#8226; &#8226; = p mm p 12 = p 23 = &#8226; &#8226; &#8226; = p m1 . . . p 1m = p 21 = &#8226; &#8226; &#8226; = p m m&#8722;1</ns0:formula><ns0:p>In addition, according to the rotation consistency of the S S S N m , we know that the forwarding object Z 2 , Z 3 , . . . , Z m can rotate for node Z 1 , that is</ns0:p><ns0:formula xml:id='formula_27'>p 12 = p 13 = &#8226; &#8226; &#8226; = p 1m</ns0:formula><ns0:p>Let p 11 = p, p 12 = q, P A P A P A and P Z P Z P Z are updated as</ns0:p><ns0:formula xml:id='formula_28'>P A P A P A = 1 m 1 1 1 P Z P Z P Z = (p &#8722; q)I I I + q1 1 11 1 1 T</ns0:formula><ns0:p>where I I I is the identity matrix with ones down the diagonal. In fact, p represents the self-forwarding 227 probability of middle forwarding nodes, and q represents the forwarding probability between middle 228 forwarding nodes. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Substituting back into Eq. ( <ns0:ref type='formula' target='#formula_21'>3</ns0:ref>), we have p(x) = s s s T P P P</ns0:p><ns0:formula xml:id='formula_29'>N t t t = 1 m x D A D A D A T (((p &#8722; q)I I I + q1 1 11 1 1 T ) &#8226; x D Z D Z D Z ) N x D B D B D B (4)</ns0:formula><ns0:p>Then, our optimization goal is simplified as</ns0:p><ns0:formula xml:id='formula_30'>max p, q H d = &#8722; &#8721; i p i log p i s.t. p + (m &#8722; 1)q = 1, 0 &#8804; p, q &#8804; 1</ns0:formula><ns0:p>where p i are the coefficients of p(x) calculated by Eq. ( <ns0:ref type='formula'>4</ns0:ref>).</ns0:p><ns0:p>This optimization can be solved by the Karush-Kuhn-Tucker (KKT) conditional of Lagrange multiplier method as</ns0:p><ns0:formula xml:id='formula_31'>(m &#8722; 1) &#8706; H d &#8706; p = &#8706; H d &#8706; q</ns0:formula><ns0:p>p + (m &#8722; 1)q = 1 (5)</ns0:p><ns0:p>Considering Take m = 3, N = 2 as an example, because S 2 3 = (1 + a 2 + b 2 , 2a + bc) a,b,c , we get P P P 2 = (p 2 + q 2 x 2d 12 + q 2 x 2d 13 , 2pqx d 12 + q 2 x d 13 +d 23 ) x d 12 ,x d 13 ,x d 23</ns0:p><ns0:formula xml:id='formula_32'>P P P = ((p &#8722; q)I I I + q1 1 11 1 1 T ) &#8226; x D Z D Z D Z = &#63723; &#63724; &#63724; &#63725; p qx</ns0:formula><ns0:p>Then, H d for m = 3, N = 2 is calculated by Eq. 
( <ns0:ref type='formula'>2</ns0:ref>) as Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>( p q + 2) log p q = ( p q &#8722; 2) log 2 p + 2q = 1 Through Newton's Method, the optimal forwarding strategy is p &#8776; 0.265 q &#8776; 0.3675 Then we know the best p in Figure <ns0:ref type='figure' target='#fig_8'>8</ns0:ref> is 0.265, and the maximum entropy H d max is 4.333 bits.</ns0:p><ns0:p>Similarly, we can calculate the optimal forwarding strategy under other m and N. Some results are given in the Table <ns0:ref type='table' target='#tab_9'>1 and Table 2</ns0:ref>. Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref> provides the p value of the optimal forwarding strategy, which is the probability of self-forwarding. While the probability q representing the forwarding probability between middle forwarding nodes can be calculated by p = 1&#8722;p m&#8722;1 . Table <ns0:ref type='table' target='#tab_9'>2</ns0:ref> provides the maximum entropy H d max , which is the upper limit of end-to-end delay randomness. From these two tables, we can find that with the increase of forwarding times N, the p value of the best forwarding strategy tends to be stable gradually and the growth rate of the maximum entropy H d max is gradually decreasing, that is to say, it is impossible to increase the end-to-end delay randomness by the unlimited number of forwarding times. When the number of forwarding times cannot increase the end-to-end delay randomness, the only effective way is to add middle forwarding nodes.</ns0:p><ns0:p>Noted that when the number of middle forwarding nodes m = 2, since p is always equal to 0.5, we can get the expression of H d max about the number of forwarding times N as</ns0:p><ns0:formula xml:id='formula_33'>H d max (N) = N + 1 &#8722; 1 2 N N &#8721; i=0 C i N log 2 C i N &#8776; 1 2 log 2 N + 2</ns0:formula><ns0:p>which shows that the impact of forwarding times on end-to-end delay is logarithmic. 0.2 0.13 0.104 0.092 0.088 0.086 0.085 0.085 0.084 6 0.167 0.103 0.08 0.069 0.064 0.062 0.06 0.06 0.059 7 0.143 0.086 0.065 0.055 0.05 0.047 0.046 0.045 0.044 8 0.125 0.073 0.055 0.045 0.041 0.038 0.036 0.035 0.034 9 0.111 0.064 0.047 0.039 0.034 0.031 0.03 0.029 0.028 Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Randomness Analysis of End-to-End Delay in Equal Delay Forwarding Network</ns0:head><ns0:p>We have known that collision leads to the decrease of the randomness of end-to-end delay in RFNs and the Equal Delay Forwarding Network (EDFN) is the most collision-prone network theoretically, which is worth some analysis.</ns0:p><ns0:p>EDFN is defined as a forwarding network, in which the delay between nodes is approximately the same.</ns0:p><ns0:p>In EDFN, for any node Z i , there is no difference between forwarding to Z j1 or to Z j2 . From the symbolic point of view, Z j1 and Z j2 can rotate. As shown in Figure <ns0:ref type='figure'>9</ns0:ref>, let p denotes the self-forwarding probability of middle forwarding nodes and q denotes the forwarding probability between middle forwarding nodes.</ns0:p><ns0:formula xml:id='formula_34'>Z i Z j p q Figure 9.</ns0:formula><ns0:p>Optimal forwarding strategy for EDFN For convenience, the delay between middle forwarding nodes is normalized to 1, then the forwarding matrix P P P of EDFN is</ns0:p><ns0:formula xml:id='formula_35'>P P P = &#63723; &#63724; &#63724; &#63725; p qx . . . qx qx p . . . qx . . . . . . . . . . . . 
qx qx . . . p &#63734; &#63735; &#63735; &#63736; = (qx)1 1 11 1 1 T + (p &#8722; qx)I I I</ns0:formula><ns0:p>where I I I is the identity matrix and 1 1 1 is the m dimensional vector of ones.</ns0:p><ns0:p>Therefore, for the EDFN with m nodes and N forwarding times, the end-to-end delay distribution polynomial P(x) is</ns0:p><ns0:formula xml:id='formula_36'>p(x) = 1 m 1 1 1 T P P P N 1 1 1 = ( 1 m 1 1 1 T P P P1 1 1) N = (p + (m &#8722; 1)qx) N</ns0:formula><ns0:p>That is to say, the end-to-end delay of EDFN obeys binomial distribution, and the maximum entropy of binomial distribution is obtained at p = 0.5, so the optimal forwarding strategy for EDFN is</ns0:p><ns0:formula xml:id='formula_37'>p = 0.5 q = 1 2(m&#8722;1)</ns0:formula><ns0:p>Then the end-to-end delay distribution polynomial p(x) under the optimal forwarding strategy is</ns0:p><ns0:formula xml:id='formula_38'>p(x) = 1 2 N (1 + x) N = 1 2 N N &#8721; i=0 C i N x i</ns0:formula><ns0:p>So the end-to-end delay distribution of EDFN under the optimal forwarding strategy is p(</ns0:p><ns0:formula xml:id='formula_39'>d = i) = 1 2 N C i N ,</ns0:formula><ns0:p>and the maximum entropy of the end-to-end delay in EDFN is</ns0:p><ns0:formula xml:id='formula_40'>H d max = N &#8722; 1 2 N N &#8721; i=0 C i N log 2 C i N</ns0:formula><ns0:p>It can be found that the maximum entropy of EDFN is only related to the number of forwarding times N, and is irrelevant to the number of middle forwarding nodes m. What's worse, the maximum entropy of EDFN is 1 bit lower than the maximum entropy of RFNs with 2 nodes under the inevitable collision deployment. So, it just proves the conclusion that collision is the main reason for the decrease of randomness.</ns0:p></ns0:div> <ns0:div><ns0:head>13/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>APPLICATION: USING THE RANDOMNESS OF END-TO-END DELAY TO</ns0:p></ns0:div> <ns0:div><ns0:head>GENERATE SYMMETRIC KEYS</ns0:head><ns0:p>Key generation needs random sources. The original key distribution channel tends to be unsafe, so the original key exchange is a difficult problem. One idea is using the key distribution center (KDC) to generate random numbers and then delay the key distribution through the secure key exchange protocol <ns0:ref type='bibr' target='#b6'>(D'Arco, 2001)</ns0:ref>. In 1976, Diffie-Hellman proposed a key exchange scheme using discrete logarithm, but there is also a man-in-the-middle attack problem, and the security is rooted in the NP problem of discrete logarithm in the finite field on classical computers <ns0:ref type='bibr' target='#b7'>(Diffie and Hellman, 1976)</ns0:ref>. The development of quantum computing has impacted the cryptography algorithm based on discrete logarithm problems.</ns0:p><ns0:p>P.Shor has proved that there exist polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer <ns0:ref type='bibr' target='#b19'>(Shor, 1999)</ns0:ref>.</ns0:p><ns0:p>Another way of thinking is to abandon the idea that the key is distributed by the center, and choose the scheme that both sides of the communication measure the channel to obtain reciprocity characteristics.</ns0:p><ns0:p>This process does not need secret information exchange, so it avoids the risk that secret information eavesdrops. 
For example, the key is generated by using the frequency selective fading characteristic of the wireless channel, including measuring the received signal strength (RSS) <ns0:ref type='bibr' target='#b1'>(Awan et al., 2019)</ns0:ref>, the channel impulse response (CIR) in time-frequency domain <ns0:ref type='bibr' target='#b22'>(Walther et al., 2019)</ns0:ref>, and the phase <ns0:ref type='bibr' target='#b24'>(Zeinali and Hossein, 2016)</ns0:ref>, delay and envelope of the received channel <ns0:ref type='bibr' target='#b23'>(Ye et al., 2010)</ns0:ref>. The only problem is that the spatial distance between sender and receiver is limited in wireless channel key exchange, and the information exchange is mainly carried out by wire for the equipment with a far geographical distance.</ns0:p><ns0:p>There is also a lot of randomness in RFNs, and the end-to-end delay, which is mainly studied in this paper, is an ideal feature that satisfies both long-term randomness and short-term reciprocity and can be used for key generation. So, this section mainly introduces how to use the randomness of end-to-end delay to generate symmetric keys. RFNs Deployment RFNs can be applied in many scenarios, such as the large scenario of host group distributed between cities, or the small scenario of communication node cluster within the scope of LAN, especially in the scenario of encrypted communication needs between IoT device clusters. It is very convenient to generate the symmetric key with end-to-end delay. The deployment of RFNs mainly concerns two indicators, one is the number of middle forwarding nodes, the other is whether there is an evitable collision. The former affects the deployment cost, while the latter affects the efficiency of key generation.</ns0:p></ns0:div> <ns0:div><ns0:head>Secure</ns0:head><ns0:p>The number of middle forwarding nodes is determined by the demand of the real scene key generation rate. From the perspective of the economy, we hope to achieve the highest key generation rate with the least number of nodes. For example, if the key generation rate of r = 128 bit/s is required, then suppose the average time t required for a single measurement is 100ms, a single measurement can generate at least 12.8bit key. From the data in Table <ns0:ref type='table' target='#tab_9'>2</ns0:ref>, when the number of middle forwarding nodes m = 5 and the number of forwarding times N = 6, the key length is 13.145bits, which can meet the requirement. That is Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to say, the key length is determined by H d t &gt; r, and the number of middle forwarding nodes is determined by looking up Table <ns0:ref type='table' target='#tab_9'>2</ns0:ref>.</ns0:p><ns0:p>The evitable collision can be checked by calculating p(x). The number of inevitable collisions can be obtained by calculating the symbol matrix S S S N m and counting the coefficients, and other like terms are all evitable collisions. 
These evitable collisions can be avoided as far as possible by adjusting the deployment.</ns0:p><ns0:p>Forwarding Strategy Setting When the RFNs network is deployed, the optimal forwarding strategy p can be found through Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref>, and then the internode forwarding probability q can be calculated by 1&#8722;p m&#8722;1 .</ns0:p><ns0:p>For the above example, the optimal forwarding strategy is p = 0.086 and q = 0.2285 for (m = 5, N = 6).</ns0:p><ns0:p>Because of the rotation among nodes, the forwarding strategies set by each node are the same, which is also very helpful in security, because attackers cannot identify forwarding nodes by counting forwarding rules. Although the forwarding strategy seems to be static, the dynamically adjusted forwarding strategy often divulges the information of the network itself, so that attackers can take advantage of it. When the number of forwarding nodes or forwarding times changes, the forwarding strategy of deployed nodes can be easily switched by looking up Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref>.</ns0:p><ns0:p>Secure Measurement of End-to-End Delay The consistency of generated keys depends on the accurate measurement of end-to-end delay <ns0:ref type='bibr' target='#b10'>(Fabini and Abmayer, 2013)</ns0:ref>. In order to ensure that both sides of the communication can measure approximately the same end-to-end delay and meet the security requirements, we design a secure end-to-end delay measurement scheme as shown in Figure <ns0:ref type='figure' target='#fig_11'>11</ns0:ref>. The scheme steps are as follows:</ns0:p><ns0:p>Alice Bob Let d AB denotes the end-to-end delay from Alice to Bob and d BA denotes the end-to-end delay from Bob to Alice. Then according to this scheme, Alice and Bob can calculate &#8710;T ab and &#8710;T ba as measurement end-to-end delay as Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_41'>&#949; &#119860; T &#119886;&#119887; 1 T &#119887;&#119886; 1 T &#119887;&#119886; 2 T &#119886;&#119887; 2 &#949; &#119861; &#119889; &#119860;&#119861; &#119889; &#119861;&#119860;</ns0:formula><ns0:formula xml:id='formula_42'>&#8710;T ab = T 2 ab &#8722; T 1 ab = d AB + d BA + &#949; A + &#949; B &#8710;T ba = T 2 ba &#8722; T 1 ba = d AB + d BA + &#949; A + &#949; B<ns0:label>15</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Since &#8710;T ab = &#8710;T ba , the end-to-end delays measured by Alice and Bob are equal.</ns0:p><ns0:p>In terms of security, because each node only records the last hop node, Alice is anonymous in the forwarding packet, and only Bob's information is in the forwarding packet, so it is impossible to measure the end-to-end delay directly from the sending and receiving nodes. It is also difficult to obtain the end-toend delay by obtaining the forwarding route. Because the forwarding strategy is random, the probability of each node in the next hop is the same, so it cannot be traced. To obtain a complete forwarding route, the attacker needs to attack almost all forwarding nodes, which means that the cost of the attack is far greater than the benefit. So in general, the security of the scheme is guaranteed.</ns0:p><ns0:p>Quantization Encoding When we get the end-to-end delay data, we need to use quantization coding technology to convert it into a key. We use nonlinear quantization, and the distribution of quantization interval is consistent with that of end-to-end delay. 
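As a rough sketch of this quantization step (ours; the delay model, bin count, and sample values are assumptions), the quantization intervals can be taken as quantiles of previously observed delays, so the intervals follow the delay distribution and each bin is roughly equiprobable; the mapping of bin indices to bit strings is left to the coding step discussed next.

import numpy as np

k = 4                                                      # bits extracted per measurement
rng = np.random.default_rng(0)
calibration = rng.gamma(4.0, 2.0, size=10_000)             # assumed history of measured delays
edges = np.quantile(calibration,
                    np.linspace(0, 1, 2**k + 1)[1:-1])     # 2**k - 1 interior bin edges

def quantize(delays):
    # map each measured end-to-end delay to a bin index in 0 .. 2**k - 1
    return np.searchsorted(edges, delays)

print(quantize([3.1, 7.9, 15.2]))                          # bin indices, not key bits yet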
Gray code is used in coding because Gray code belongs to reliability coding, which is an error minimization coding method <ns0:ref type='bibr' target='#b17'>(Mecklenburg et al., 1973)</ns0:ref>.</ns0:p><ns0:p>Another scheme is to encode the distribution of end-to-end delay by Huffman coding <ns0:ref type='bibr' target='#b12'>(Huffman, 1952)</ns0:ref>, and then make the nearest neighbor decision on the measured end-to-end delay and the theoretically calculated possible value.</ns0:p><ns0:p>Information Reconciliation An information reconciliation protocol is used to discard or correct the difference of key bits generated by the sender and the receiver, which is a common method for key agreement in physical layer security. Existing information reconciliation methods are mainly divided into reconciliation protocols and error correction coding. The reconciliation protocols mainly include BBBSS, Cascade ,and Winnow protocol. Error correction coding includes Hamming code, BCH code, Turbo code, LDPC code, etc <ns0:ref type='bibr' target='#b13'>(Huth et al., 2016)</ns0:ref>. Of course, if the process of information reconciliation causes information leakage, then privacy amplification is needed to discard some leaked bits <ns0:ref type='bibr' target='#b16'>(Maurer and Wolf, 2003)</ns0:ref>.</ns0:p><ns0:p>In Purple Mountain Laboratory of Nanjing, we design a symmetric key generation system according to the application introduced in this section <ns0:ref type='bibr' target='#b11'>(Huang et al., 2021)</ns0:ref>. The practical results show that this scheme is effective. According to our statistics, the key agreement rate of sending and receiving can be over 91%, which can meet our communication needs.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>This paper studies the randomness of end-to-end delay in Random forwarding networks (RFNs) through the problem of drunks returning home. In this paper, we solved six problems in the study of end-to-end randomness in RFNs. By establishing a mathematical model, we solved the first problem of what kind of distribution does end-to-end delay obey by deriving the formula Eq. ( <ns0:ref type='formula'>1</ns0:ref>) for calculating the random distribution of end-to-end delay; Then the second question of how to measure the randomness of endto-end delay was answered by analyzing the end-to-end delay generation model, and the conclusion is that the randomness of end-to-end delay can be quantitatively measured by information entropy; In the process of answering the third question of what is the reason for decline of the randomness of end-toend delay, we found that the end-to-end delay collision is the main reason, among which the evitable collision can be solved by adjusting RFNs deployment, while the inevitable collision can not be avoided;</ns0:p><ns0:p>Then, we proposed a fast algorithm FPSSM (Algorithm 1) for calculating inevitable collisions by using symbolic matrix and solved the optimization problem of maximizing the randomness of end-to-end delay to answer the fourth and fifth questions of what is the upper limit of end-to-end delay and how to reach the upper limit. 
We gave the flow of solving the optimization problem in detail, and then gave the optimization results in Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref>: the upper limit of the randomness of end-to-end delay and Table <ns0:ref type='table' target='#tab_9'>2</ns0:ref>: the optimal forwarding strategy; Finally, we introduced the application of symmetric key generation based on end-to-end delay randomness to answer the final question of how to use the RFNs to share keys.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. End-to-end delay generation model</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Equal delay forwarding network for m = 2</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Adjust the deployed forwarding network for m = 2</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Inevitable collision of end-to-end delay (take forwarding network in Figure 5 as an example)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>a, b, c)) &#63734; &#63736; where permutation operator &#963; = a b c c a b and it makes S S S 2 3 = &#181; &#181; &#181; R(&#181; &#181; &#181;) R 2 (&#181; &#181; &#181;) , &#181; &#181; &#181; = &#63723; &#63725; 1 + a 2 + b 2 2a + bc 2b + ac (m,N) Input: m: Dimensions of Symbol Matrix S S S m ; N: Power of Symbol Matrix Multiplication 1: S S S m = Symbol Matrix Generate(m) 2: R = Cyclic Permutation Generate(S S S m ) 3: e i j = Replacement Generate(S S S m ) 4: f , g = S S S m [0, 0],S S S m [0, 1] 5: &#181; 0 &#181; 0 &#181; 0 ,&#181; 1 &#181; 1 &#181; 1 = S S S m [:, 0],S S S m [:, 1] 6: for i in [1, 2, . . . , N &#8722; 1] do 7:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a, b, c)), i, j &#8805; 2 where e i j can be generated by S S S m [:, j] = e i j (S S S m [:, i]). S S S m [:, i] denotes the ith column of S S S m . In recovering the compressed S S S 2 3 , we need e 23 = a b b a = a &#8596; b to recover &#181; &#181; &#181; as</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Firsta</ns0:head><ns0:label /><ns0:figDesc>, let's define a cyclic shift permutation operator C on the matrix A A A &#8712; R m&#215;m as 22 a 23 . . . a 21 a 32 a 33 . . . a 31 . . . . . . . . . . . . a 12 a 13 . . . In fact, C is a compound operation of cyclic left shift and cyclic upward shift on the matrix, so any element in the matrix is permuted as follows under the transformation of C a i j C &#8594; a [i+1] m [ j+1] n where [i + 1] m = (i mod m) + 1 ensures the cyclic property of the shift. Operator C has the following three important properties: 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 shows the change of H d (m = 3, N = 2) (bits) with the change of p. 
It can be clearly seen from the figure that the best p corresponding to the maximum entropy is the position marked by the red dot.</ns0:figDesc><ns0:graphic coords='12,256.11,528.97,191.82,71.88' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Key generation process based on RFNs</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Secure end-to-end delay measurement scheme</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>/ 18 PeerJ</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>= p a1 x d a1 p a2 x d a2 . . . p am x d am T is the initial forwarding vector forwarded by Alice to the middle forwarding nodes, t t t = x d 1b x d 2b . . . x d mb T is the end forwarding vector forwarded by middle forwarding nodes to Bob, and P P P is the forwarding matrix of middle forwarding nodes forwarding</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>N=1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>N=2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>N=N</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Z 1</ns0:cell><ns0:cell>Z 1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Z 1</ns0:cell><ns0:cell /><ns0:cell>Z 1</ns0:cell><ns0:cell>Z 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Alice</ns0:cell><ns0:cell>Z 2</ns0:cell><ns0:cell>Z 2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Z 2</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>Z 2</ns0:cell><ns0:cell>Z 2</ns0:cell><ns0:cell>Bob</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>...</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>...</ns0:cell><ns0:cell /><ns0:cell>...</ns0:cell><ns0:cell>...</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Z m</ns0:cell><ns0:cell>Z m</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Z m</ns0:cell><ns0:cell /><ns0:cell>Z m</ns0:cell><ns0:cell>Z m</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>Figure 2. All possible forwarding routes of m nodes for N times</ns0:cell></ns0:row><ns0:row><ns0:cell>to each other, that is</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>&#63723;</ns0:cell><ns0:cell cols='6'>p 11 x d 11 . . . p 1m x d 1m</ns0:cell><ns0:cell>&#63734;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>P P P =</ns0:cell><ns0:cell>&#63724; &#63725;</ns0:cell><ns0:cell /><ns0:cell>. . .</ns0:cell><ns0:cell /><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>&#63735; &#63736;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='6'>p m1 x d m1 . . . p mm x d mm</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>When the forwarding network is deployed, the delay d i j between any two nodes is determined.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>According to Eq. (1), p(x) is uniquely determined by the random forwarding strategy. Let P A P A P A =</ns0:cell></ns0:row><ns0:row><ns0:cell>p a1 p a2 . . . 
p am</ns0:cell><ns0:cell cols='8'>T denotes the initial random forwarding strategy forwarded by Alice to the middle</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>forwarding nodes. Let P Z P Z P Z denotes the random forwarding strategy of middle forwarding nodes forwarding</ns0:cell></ns0:row><ns0:row><ns0:cell>to each other, that is</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#63723;</ns0:cell><ns0:cell cols='3'>p 11 . . . p 1m</ns0:cell><ns0:cell>&#63734;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>P Z P Z P Z =</ns0:cell><ns0:cell>&#63724; &#63725;</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>&#63735; &#63736;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>p m1 . . . p mm</ns0:cell></ns0:row><ns0:row><ns0:cell>T P P P N t t t</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(1)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>where s s s 3/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>12 p 21 x 2d p 11 p 12 x d + p 12 p 22 x d p 21 p 11 x d + p 22 p 21 x d p 2 22 + p 21 p 12 x 2d</ns0:figDesc><ns0:table><ns0:row><ns0:cell>p 21 x d</ns0:cell><ns0:cell>p 12 x d p 22</ns0:cell><ns0:cell>2 t t t = s s s T</ns0:cell><ns0:cell>p 2 11 + p</ns0:cell></ns0:row></ns0:table><ns0:note>T P P P 2 t t t = s s s T p 11 t t t It can be found that the internal elements of matrix P P P 2 have like terms, such as p 11 p 12 x d + p 12 p 22 x d in the second column of the first row and p 21 p 11 x d + p 22 p 21 x d in the first column of the second row, which</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>12 p 21 x 2d p 11 p 12 x d + p 12 p 22 x d p 21 p 11 x d + p 22 p 21 x d p 2 22 + p 21 p 12 x 2d</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>a 1</ns0:cell><ns0:cell>&#8592;</ns0:cell><ns0:cell>p 11 p 21 x d</ns0:cell><ns0:cell>p 12 x d p 22</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>If N = 2, the symbol matrix S S S 2 2 is</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>S S S 2 2 =</ns0:cell><ns0:cell>1 + a 2 2a</ns0:cell><ns0:cell>2a 1 + a 2 &#8592;</ns0:cell><ns0:cell cols='2'>p 2 11 + p</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>which are the coefficients of p(x), and p i is distributed by P A</ns0:figDesc><ns0:table><ns0:row><ns0:cell>224</ns0:cell><ns0:cell /><ns0:cell>P A P A</ns0:cell><ns0:cell>T P Z P Z P Z</ns0:cell><ns0:cell>N 1 1 1</ns0:cell></ns0:row><ns0:row><ns0:cell>according to the end-to-end delay like term, that is to say, H d is decided by P A P A P A</ns0:cell><ns0:cell>T P Z P Z P Z</ns0:cell><ns0:cell cols='3'>N 1 1 1 (The notation 1 1 1</ns0:cell></ns0:row><ns0:row><ns0:cell>226</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='5'>Since the optimization objective is P A P A P A and P Z P Z P Z , by cyclic shifting P A P A P A and P Z P Z P Z in p(x) using C, we get</ns0:cell></ns0:row><ns0:row><ns0:cell>C(p(x)) = (C(P A P A</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>225represents a 
vector of ones of appropriate length).</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>d 12 . . . qx d 1m</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#63734;</ns0:cell></ns0:row><ns0:row><ns0:cell>qx d 21</ns0:cell><ns0:cell>p</ns0:cell><ns0:cell cols='2'>. . . qx d 2m</ns0:cell><ns0:cell>&#63735;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>&#63735;</ns0:cell></ns0:row><ns0:row><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>. . .</ns0:cell><ns0:cell>&#63736;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>qx d m1 qx d m2 . . .</ns0:cell><ns0:cell>p</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Because x d</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>i j = x d ji , P P P is a symmetric symbolic matrix. Algorithm 1 can be used to calculate P P P N quickly and get the expression of H d .</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Optimal forwarding strategy (p value)</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>H d max</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>N 5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2.5</ns0:cell><ns0:cell>2.811</ns0:cell><ns0:cell>3.03</ns0:cell><ns0:cell>3.2</ns0:cell><ns0:cell>3.333</ns0:cell><ns0:cell>3.447</ns0:cell><ns0:cell>3.544</ns0:cell><ns0:cell>3.63</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>3</ns0:cell><ns0:cell cols='3'>3.17 4.334 5.273</ns0:cell><ns0:cell>6.018</ns0:cell><ns0:cell>6.613</ns0:cell><ns0:cell>7.101</ns0:cell><ns0:cell>7.51</ns0:cell><ns0:cell>7.86</ns0:cell><ns0:cell>8.163</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>5.664 7.129</ns0:cell><ns0:cell>8.4</ns0:cell><ns0:cell cols='5'>9.483 10.415 11.226 11.94 12.573</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell cols='9'>4.644 6.691 8.565 10.267 11.788 13.145 14.361 15.458 16.455</ns0:cell></ns0:row><ns0:row><ns0:cell>m</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='9'>5.17 7.523 9.725 11.778 13.666 15.395 16.979 18.436 19.782</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell cols='9'>5.615 8.221 10.697 13.04 15.235 17.283 19.19 20.969 22.634</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='2'>8.824 11.53</ns0:cell><ns0:cell cols='6'>14.12 16.577 18.898 21.097 23.152 25.102</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>9</ns0:cell><ns0:cell cols='9'>6.34 9.352 12.26 15.061 17.745 20.305 22.74 25.057 27.263</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The upper limit of end-to-end delay randomness (bits)</ns0:figDesc><ns0:table /><ns0:note>12/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66141:1:2:NEW 1 Mar 2022)</ns0:note></ns0:figure> </ns0:body> "
" Dear Editors and Reviewers, Thank you for your time and for the valuable comments. We have revised the manuscript in response to the suggestions made. We believe the manuscript is now suitable for resubmission in PeerJ. Our responses to the reviewers appear below. Best regards, Xiaowen Wang, Jie Huang, Zhenyu Duan, Yao Xu, Yifei Yao Februrary 17, 2022 Reviewer 1 (Anonymous) Basic reporting In Line 12 in the abstract: both receiver and receiver I believe this is an unforgivable mistake, please modify it The introduction isn't written well This manuscript needs substantial copyediting and English writing revisions Literature references are insufficiently provided. Thank you for your comment. (1) We have revised the abstract and replaced ‘both receiver and receiver’ by ‘both sender and receiver’. (2) We have major revised the Introduction part in our paper to respond to your suggestions. (3) We have major revised this manuscript to respond to your suggestions. (4) We have add a latest research in literature references about the application of RFNs, which is ‘Jie Huang, Xiaowen Wang, Wei Wang, Zhenyu Duan, 'A Novel Key Distribution Scheme Based on Transmission Delays', Security and Communication Networks, vol. 2021, 13 pages, 2021. https://doi.org/10.1155/2021/3125820’. Experimental design no comment Validity of the findings no comment Comments for the Author no comment Reviewer 2 (Anonymous) Basic reporting The authors propose a mathematical model of random forwarding networks (FNs) and derive the expression of end-to-end delay distribution in different FNs. The topic is interesting, but the quality of the manuscript can be improved in terms of its problem novelty and main contribution. The following comments are provided for the authors’ consideration: 1. Please try to demonstrate more results in comparing different parameter settings and benchmarks. It would be better that some comparisons between existing works and the proposed algorithm are provided. 2. Regarding the system model part, is the model specified for only one sender and one receiver? What is the adjustment if we increase the number of senders and receivers? 3. The abstract should be revised. For instance, “As a random quantity easily obtained by both receiver and receiver …” should be replaced by “As a random quantity easily obtained by both sender and receiver”. 4. Some references are wrongly referred. For example, Zhang, J., Li, G., Marshall, A., Hu, A., and Hanzo, L. (2020). A new frontier for iot security emerging from three decades of key generation relying on wireless channels. IEEE Access, PP(99) should be corrected. In fact, the page numbers is 138406-138446 and the volume is 8. Please check that. 5. Some equations are not numbered. 6. The caption of Figure 8 should be revised (H_d as a function of p for m = 3 and N = 2; and put H_d (bits) as ylabel). Thank you for your comment. (1) We have add a latest research in literature references about the application of RFNs, which is ‘Jie Huang, Xiaowen Wang, Wei Wang, Zhenyu Duan, 'A Novel Key Distribution Scheme Based on Transmission Delays', Security and Communication Networks, vol. 2021, 13 pages, 2021. https://doi.org/10.1155/2021/3125820’. (2) RFNs is not only designed for one sender and one receiver. More senders and one receivers can also use RFNs to share secrets by pairing in pairs. (3) We have revised the abstract and replaced ‘both receiver and receiver’ by ‘both sender and receiver’. 
(4) We have revised the reference 'Zhang, J., Li, G., Marshall, A., Hu, A., and Hanzo, L. (2020). A new frontier for iot security emerging from three decades of key generation relying on wireless channels. IEEE Access, PP(99)' to 'Zhang, J., Li, G., Marshall, A., Hu, A., and Hanzo, L. (2020). A new frontier for iot security emerging from three decades of key generation relying on wireless channels. IEEE Access, 8:138406-138446'. (5) Since some equations are only intermediate derivations, we numbered only the important ones. (6) We have revised Figure 8 by replacing H_d with H_d (bits). Experimental design Regarding the system model part, is the model specified for only one sender and one receiver? What is the adjustment if we increase the number of senders and receivers? Thank you for your comment. RFNs are not designed for only one sender and one receiver. Multiple senders and receivers can also use RFNs to share secrets by pairing in pairs. Group key negotiation is the subject of our ongoing research; if you are interested, please follow our progress. Validity of the findings Please try to demonstrate more results in comparing different parameter settings and benchmarks. It would be better that some comparisons between existing works and the proposed algorithm are provided. Thank you for your comment. Because this is a theoretical analysis of the randomness of end-to-end delay in random forwarding networks, and no directly comparable literature has been found so far, a quantitative comparison is not possible. However, all the conclusions in this paper have been derived and proved, and can be verified by the Monte Carlo method. Comments for the Author no comment "
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Blink detection is an important technique in a variety of settings, including facial movement analysis and signal processing. However, automatic blink detection is very challenging because of the blink rate. This research work proposed a real-time method for detecting eye blinks in a video series captured by a conventional camera. Automatic facial landmarks detectors are trained on a real-world dataset and demonstrate exceptional resilience to a wide range of environmental factors, including lighting conditions, face emotions, and head position. For each video frame, the proposed method calculates the facial landmark locations and extracts the vertical distance between the eyelids using the facial landmark positions. Our results show that the recognizable landmarks are sufficiently accurate to determine the degree of eye-opening and closing consistently. The proposed algorithm estimates the facial landmark positions, extracts a single scalar quantity by using Modified Eye Aspect Ratio (Modified EAR) and characterizing the eye closeness in each frame. Finally, blinks are detected by the Modified EAR threshold value and detecting eye blinks as a pattern of EAR values in a short temporal window. According to the results of the typical datasets, it is shown that the suggested approach is more efficient than the state-of-the-art techniques.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Eye blinks detection technology is essential and has been applied in different fields such as the intercommunication between disabled people and computers <ns0:ref type='bibr' target='#b26'>(Kr&#243;lak &amp; Strumi&#322;&#322;o, 2012)</ns0:ref>, drowsiness detection <ns0:ref type='bibr' target='#b43'>(Rahman et al., 2016)</ns0:ref>, the computer vision syndromes <ns0:ref type='bibr' target='#b1'>(Al Tawil et al., 2020)</ns0:ref> <ns0:ref type='bibr' target='#b17'>(Drutarovsky &amp; Fogelton, 2015)</ns0:ref>, anti-spoofing protection in face recognition systems <ns0:ref type='bibr' target='#b39'>(Pan et al., 2007)</ns0:ref>, and cognitive load <ns0:ref type='bibr' target='#b54'>(Wilson, 2002)</ns0:ref>.</ns0:p><ns0:p>According to a literature review, we found that computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In <ns0:ref type='bibr' target='#b3'>(Anitha et al., 2020)</ns0:ref>, Viola and Jones face detection algorithms are used to train and classify images sequentially. An alarm will sound if the eyes remain closed for a certain time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost <ns0:ref type='bibr' target='#b18'>(Fatima et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b21'>(Ghourabi et al., 2020)</ns0:ref> recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. 
For the classification of the driver's state, <ns0:ref type='bibr' target='#b15'>(Dreisig et al., 2020)</ns0:ref> developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The bestperforming feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. Using head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts, the Driver State Alert Control system is expected to detect drowsiness and collision liability associated with strong emotional factors <ns0:ref type='bibr' target='#b41'>(Persson et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Moreover, the investigation of an individual's eye state in terms of blink time, blink count, and frequency provides valuable information about the subject's mental health. The result uses to explore the effects of external variables on changes in emotional states. Individuals' normal eyesight is characterized by the presence of spontaneous eye blinking at a certain frequency. The following elements impact eye blinking, including the condition of the eyelids, the condition of the eyes, the presence of illness, the existence of contact lenses, the psychological state, the surrounding environment, medicines, and other stimuli. The blinking frequency ranges between 6 and 30 times per minute <ns0:ref type='bibr' target='#b47'>(Rosenfield, 2011)</ns0:ref>. Furthermore, the term 'eye blink' refers to the quick shutting and reopening of the eyelids, which normally lasts between 100 and 400. Reflex blinking occurs significantly faster than spontaneous blinking, which occurs significantly less frequently. The frequency and length of blinking may be influenced by factors such as relative humidity, temperature, light, tiredness, illness, and physical activity. Real-time facial landmark detectors <ns0:ref type='bibr'>(&#268;ech et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b14'>(Dong et al., 2018)</ns0:ref> are available that captures most of the distinguishing features of human facial images, including the corner of the eye angles and eyelids. A person's eye size does not match another's eye; for example, one person has big eyes, and the other has small eyes. They don't have the same eyes or height value, as expected. When a person with small eyes closes his or her eyes, he or she may appear to have the same eye height as a person with large eyes. This issue will affect the experimental results. Therefore, we propose a simple but effective technique for detecting eye blink using a newly developed facial landmark detector with a modified Eye Aspect Ratio (EAR). Because our objective is to identify endogenous eye blinks, a typical camera with a frame rate of 25-30 frames per second (fps) is adequate. Eye blinks disclosure can be based on motion tracking within the eye region <ns0:ref type='bibr' target='#b13'>(Divjak &amp; Bischof, 2009)</ns0:ref>. <ns0:ref type='bibr' target='#b28'>Lee et al. (Lee et al., 2010)</ns0:ref> try to estimate the state of an eye, including an eye open or closed. Gracia et al. <ns0:ref type='bibr' target='#b20'>(Garc&#237;a et al., 2012)</ns0:ref> experiment with eye closure for individual frames, which is consequently used in a sequence for blink detection. 
Other methods compute a difference between frames, including pixels values <ns0:ref type='bibr' target='#b27'>(Kurylyak et al., 2012)</ns0:ref> and descriptors <ns0:ref type='bibr' target='#b33'>(Malik &amp; Smolka, 2014)</ns0:ref>. Using the effective Eye Aspect Ratio <ns0:ref type='bibr' target='#b32'>(Maior et al., 2020)</ns0:ref> and face landmarks <ns0:ref type='bibr' target='#b35'>(Mehta et al., 2019)</ns0:ref> methods, we developed our own algorithm to perfect it. Another method for blink detection is based on template matching <ns0:ref type='bibr' target='#b4'>(Awais et al., 2013)</ns0:ref>. The templates with open and/or closed eyes are learned and normalized cross-correlation.</ns0:p><ns0:p>Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils <ns0:ref type='bibr' target='#b5'>(Bergasa et al., 2006)</ns0:ref> using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to <ns0:ref type='bibr' target='#b5'>(Bergasa et al., 2006)</ns0:ref>. Adopting EAR as a metric to detect blink in <ns0:ref type='bibr' target='#b45'>(Rakshita, 2018)</ns0:ref> yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. This research work is organized as follows. Related work and the approach we intend to use in this study describes in Materials and Methods section. Section 4 describes the experiment and results. A detailed description of the findings of our study is provided in Section 4. Finally, conclusions are drawn, and future work is proposed in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>A. Eye Blink Detection with Facial Landmarks</ns0:head><ns0:p>Eye blinking is a suppressed process that involves the rapid closure and reopening of the eyelid. Multiple muscles are involved in the blinking of the eyes. The orbicularis oculi and levator palpebrae superioris are the two primary muscles that regulate eye closure and opening. Blinking serves some important purposes, one of which is to moisten the corner of an individual's eye. Additionally, it cleans the cornea of the eye when the eyelashes are unable to catch all of the dust and debris that enter the eye. Everyone must blink to spread tears over the entire surface of the eyeball, and especially over the surface of the cornea. Blinking also performs as a reflex to prevent foreign objects from entering the eye. The goal of facial landmark identification is to identify and track significant landmarks on the face. 
Face tracking becomes strong for rigid facial deformation and not stiff due to head movements and facial expressions. Furthermore, facial landmarks were successfully applied to face alignment, head pose estimation, face swapping (D. <ns0:ref type='bibr' target='#b8'>Chen et al., 2019)</ns0:ref>, and blink detection <ns0:ref type='bibr' target='#b6'>(Cao et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b24'>(Kim et al., 2020)</ns0:ref> implement semantic segmentation to accurately extract facial landmarks. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels should be more evenly distributed according to the face landmark in order to improve classification performance. In <ns0:ref type='bibr' target='#b53'>(Utaminingrum et al., 2021)</ns0:ref>, suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. Calculating the difference between the horizontal and vertical lines in the eye area can be used to detect blinking eyes. Other research study <ns0:ref type='bibr' target='#b36'>(Navastara et al., 2020)</ns0:ref> report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR).</ns0:p><ns0:p>In our research we implement the Dlib's 68 Facial landmark <ns0:ref type='bibr' target='#b23'>(Kazemi &amp; Sullivan, 2014)</ns0:ref>. The Dlib library's pre-trained facial landmark detector is used to estimate 68 (x, y)-coordinates corresponding to facial structures on the face. The 68 coordinates' indices Jaw Points = 0-16, Right Brow Points = 17-21, Left Brow Points = 22-26, Nose Points = 27-35, Right Eye Points = 36-41, Left Eye Points = 42-47, Mouth Points = 48-60, Lips Points = 61-67 and shown in Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>. Facial landmark points identification using Dlib's 68 Model consists of the following two steps: (1) Face detection: Face detection is the first method that locates a human face and returns a value in x, y, w, h which is a rectangle. (2) Face landmark: After getting the location of a face in an image, we have to through points inside the rectangle. This annotation is part of the 68-point iBUG 300-W dataset on which the Dlib face landmark predictor is trained. Whichever data set is chosen, the Dlib framework can be used to train form predictors on the input training data.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Eye Aspect Ratio (EAR)</ns0:head><ns0:p>Eye Aspect Ratio (EAR) is a scalar value that responds, especially for opening and closing eyes <ns0:ref type='bibr' target='#b50'>(Sugawara &amp; Nikaido, 2014)</ns0:ref>. A drowsy detection and accident avoidance system based on the blink duration was developed by <ns0:ref type='bibr' target='#b40'>(Pandey &amp; Muppalaneni, 2021</ns0:ref>) and their work system has shown the good accuracy on yawning dataset (YawDD). To distinguish between the open and closed states of the eye, they used an EAR threshold of 0.35. Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref> depicts the progression of time it takes to calculate a typical EAR value for one blink. During the flashing process, we can observe that the EAR value increases or decreases rapidly. According to the results of previous studies, we used threshold values to identify the rapid increase or decrease in EAR values caused by blinking. 
Previous research indicates that a fixed threshold of 0.2 works reasonably well for this task. Many additional approaches to blink detection using image processing techniques have been suggested in the literature, but they have certain drawbacks, such as strict restrictions on image quality, which are difficult to overcome. The EAR formula is insensitive to the direction and distance of the face, which gives it the benefit of working on faces captured from a distance. The EAR value is calculated by substituting the six eye-landmark coordinates shown in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> into Equations (<ns0:ref type='formula'>1</ns0:ref>)-(<ns0:ref type='formula'>2</ns0:ref>) <ns0:ref type='bibr' target='#b58'>(You et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Noor et al., 2020)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>EAR = \frac{\|P_2 - P_6\| + \|P_3 - P_5\|}{2\,\|P_1 - P_4\|} \quad (1)
AVG\,EAR = \frac{1}{2}\left(EAR_{Left} + EAR_{Right}\right) \quad (2)</ns0:formula><ns0:p>Equation (1) defines the EAR, where P1 to P6 are the 2D landmark positions around the eye. As illustrated in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>, P2, P3, P5, and P6 are used to measure eye height, while P1 and P4 are used to measure eye width. While the eyes are open, the EAR stays roughly constant, but when the eyes close, the EAR value rapidly drops to almost zero, as shown in Figure <ns0:ref type='figure' target='#fig_7'>3(b)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>C. Modified Eye Aspect Ratio (Modified EAR)</ns0:head><ns0:p>Because people have different eye sizes, in this study we recalculate the EAR <ns0:ref type='bibr' target='#b22'>(Huda et al., 2020)</ns0:ref> value that is used as a threshold. We propose the modified eye aspect ratio (Modified EAR), with the closed-eye case described by Equation (<ns0:ref type='formula'>3</ns0:ref>) and the open-eye case by Equation (<ns0:ref type='formula'>4</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_1'>EAR_{Closed} = \frac{\|P_2 - P_6\|_{min} + \|P_3 - P_5\|_{min}}{2\,\|P_1 - P_4\|_{max}} \quad (3)
EAR_{Open} = \frac{\|P_2 - P_6\|_{max} + \|P_3 - P_5\|_{max}}{2\,\|P_1 - P_4\|_{min}} \quad (4)</ns0:formula><ns0:p>From Equation (3) and Equation (<ns0:ref type='formula'>4</ns0:ref>) we calculate the Modified EAR threshold in Equation (<ns0:ref type='formula'>5</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_2'>Modified\ EAR\ Threshold = \frac{EAR_{Open} + EAR_{Closed}}{2} \quad (5)</ns0:formula><ns0:p>A frame is then labelled Eye Closed when its EAR is at or below this threshold and Eye Open otherwise (Equation (6)).</ns0:p></ns0:div> <ns0:div><ns0:head>D. Eye Blink Detection Flowchart</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> describes the eye blink detection process. The first step is to split the video into frames. Next, the facial landmarks feature <ns0:ref type='bibr' target='#b55'>(Wu &amp; Ji, 2019)</ns0:ref> is implemented with the help of Dlib to detect the face.
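As a concrete illustration of this per-frame pipeline and of Equations (1)-(6), the minimal Python sketch below uses Dlib, OpenCV, and NumPy. It is only an illustrative reading of the method, not the exact experimental code: the helper names, the video file name, and the choice to average both eyes and take the minima and maxima over the whole calibration video when evaluating Equations (3)-(4) are our own assumptions.

```python
# Minimal sketch of the per-frame EAR pipeline (Equations (1)-(6)).
# Assumes dlib, OpenCV, NumPy, and a local copy of the 68-point model file.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()   # HOG + linear classifier face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

RIGHT_EYE = list(range(36, 42))   # 68-point indices, right eye (P1..P6)
LEFT_EYE = list(range(42, 48))    # 68-point indices, left eye (P1..P6)

def eye_aspect_ratio(pts):
    """Equation (1): EAR = (||P2-P6|| + ||P3-P5||) / (2 ||P1-P4||)."""
    p1, p2, p3, p4, p5, p6 = pts
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (2.0 * np.linalg.norm(p1 - p4))

def eye_distances(pts):
    """Height term ||P2-P6|| + ||P3-P5|| and width term ||P1-P4|| for one eye."""
    p1, p2, p3, p4, p5, p6 = pts
    return np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5), np.linalg.norm(p1 - p4)

def landmarks(frame):
    """Detect the first face and return its 68 (x, y) landmark coordinates, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], dtype=float)

# Pass over a video: per-frame EAR (Eq. (1)-(2)) plus height/width terms for calibration.
cap = cv2.VideoCapture("talking_face.avi")   # hypothetical input file
ears, heights, widths = [], [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pts = landmarks(frame)
    if pts is None:
        ears.append(None)
        continue
    left, right = pts[LEFT_EYE], pts[RIGHT_EYE]
    ears.append(0.5 * (eye_aspect_ratio(left) + eye_aspect_ratio(right)))
    (hl, wl), (hr, wr) = eye_distances(left), eye_distances(right)
    heights.append(0.5 * (hl + hr))
    widths.append(0.5 * (wl + wr))
cap.release()

# Equations (3)-(5): Modified EAR threshold from the observed extremes over the video.
ear_closed = min(heights) / (2.0 * max(widths))
ear_open = max(heights) / (2.0 * min(widths))
threshold = 0.5 * (ear_open + ear_closed)

# Equation (6): per-frame eye state.
closed = [e is not None and e <= threshold for e in ears]
print(f"Modified EAR threshold: {threshold:.4f}, closed frames: {sum(closed)}")
```

The face detector and the 68-point shape predictor in this sketch correspond to the components described next.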
The detector used here is made up of classic Histogram of Oriented Gradients (HOG) <ns0:ref type='bibr' target='#b12'>(Dhiraj &amp; Jain, 2019)</ns0:ref> feature along with a linear classifier. Facial landmarks detector is implemented inside Dlib <ns0:ref type='bibr' target='#b25'>(King, 2009)</ns0:ref> to detect facial features like eyes, ears, and nose. Following the detection of the face, the eye area is identified using the facial landmarks dataset. We can identify 68 landmarks <ns0:ref type='bibr' target='#b57'>(Yin et al., 2020)</ns0:ref> on the face using this dataset. A corresponding index accompanies each landmark. The targeted area of the face is identified via the application of the index criteria. Point index for two eyes as follows: (1). Left eye : <ns0:ref type='bibr'>( 37,</ns0:ref><ns0:ref type='bibr'>38,</ns0:ref><ns0:ref type='bibr'>39,</ns0:ref><ns0:ref type='bibr'>40,</ns0:ref><ns0:ref type='bibr'>41,</ns0:ref><ns0:ref type='bibr'>42)</ns0:ref>, (2). Right eye: <ns0:ref type='bibr'>(43,</ns0:ref><ns0:ref type='bibr'>44,</ns0:ref><ns0:ref type='bibr'>45,</ns0:ref><ns0:ref type='bibr'>46,</ns0:ref><ns0:ref type='bibr'>47,</ns0:ref><ns0:ref type='bibr'>48)</ns0:ref> <ns0:ref type='bibr' target='#b29'>(Ling et al., 2021)</ns0:ref> <ns0:ref type='bibr' target='#b51'>(Tang et al., 2018)</ns0:ref>. After extracting the eye PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science region, it is processed for detecting eye blinks. The eye region discovery is made at the beginning stage of the system. Our research detects the blinks with the help of two lines. Lines are drawn horizontally, and vertically splitting the eye. The act of temporarily closing the eyes and moving the eyelids is referred to as blinking. Blinking eyes is a rapid natural process. We can assume that the eye is closed/blinked when: (1) Eyeball is not visible, (2) Eyelid is closed, (3) Upper and lower eyelids are connected. For an opened eye, both vertical and horizontal lines are almost identical, while for a closed eye, the vertical line becomes smaller or almost vanished. This research study sets a threshold value based on Modified EAR equations. If the EAR is smaller than the Modified EAR Threshold for 3 seconds, we can consider eyes blink. In our experiment, we implement three different threshold value 0.2, 0.3, and modified the EAR threshold for each video dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Eye Blink Dataset</ns0:head><ns0:p>Eyeblink8 dataset is more challenging as it includes facial emotions, head gestures, and looking down on a keyboard. This dataset consists of 408 blinks on 70,992 video frames, as annotated by <ns0:ref type='bibr' target='#b19'>(Fogelton &amp; Benesova, 2016)</ns0:ref>, with a video resolution of 640 &#215; 480 pixels. The video was captured at 30 fps with an average length from 5000 to 11,000 frames. The talking face dataset consists of one video recording of one subject talking in front of the camera. The person in the video is making various facial expressions, including smiles, laughing, and funny face. 
Moreover, this video clip is captured with 30 fps with a resolution of 720 &#215; 576 and contains 61 annotated blinks <ns0:ref type='bibr' target='#b19'>(Fogelton &amp; Benesova, 2016)</ns0:ref>.</ns0:p><ns0:p>The annotations start with line '#start' and rows consist of the following information frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: F_X: F_Y: F_W: F_H: LE_LX: LE_LY: LE_RX: LE_RY: RE_LX: RE_LY: RE_RX: RE_RY. The example of a frame consist a blink as follows: 2851: 9: X: X: X: X: X: <ns0:ref type='bibr'>240: 204: 138: 122: 258: 224: 283 :225 :320 :226 :347 :224</ns0:ref>. A blink may consist of fully closed eyes or not. According to blinkmatters.com, the scale of a blink consists of fully closed eyes was 90% to 100%. The row will be like: 2852: 9: X: C: X: C: X: <ns0:ref type='bibr'>239: 204: 140: 122: 259: 225: 284: 226: 320: 227: 346: 226</ns0:ref>. We are only interested in the blink ID and eye completely closed (FC) columns in our study experiment, therefore, we will ignore any other information.</ns0:p><ns0:p>TalkingFace dataset consists of one video recording of one subject talking in front of the camera and making different facial expressions. This video clip is captured with 25 fps with a resolution of 720 &#215; 576 and contains 61 annotated blinks <ns0:ref type='bibr' target='#b17'>(Drutarovsky &amp; Fogelton, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Fogelton &amp; Benesova, 2016)</ns0:ref>.</ns0:p><ns0:p>Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, Eyeblink Video 8 0.2105, and 0.2103 for Eye Video 1. The dataset information explains in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton <ns0:ref type='bibr' target='#b19'>(Fogelton &amp; Benesova, 2016)</ns0:ref>. The annotation tool uses OpenCV 2.4.6. Eye Video 1 were captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. Our dataset has the unique characteristics of people with small eyes and glasses. The environment is the people who drive the car. This dataset can be used for further research. It is difficult to find a dataset of people with small eyes, wearing glasses, and driving cars based on our knowledge. We collect the video from the car dashboard camera in Wufeng District, Taichung, Taiwan.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>2 and Table 3</ns0:ref> explains the statistics on the prediction and test set for each video dataset. Talking Face dataset capture with 30 fps, 5000 frames, and the duration is 1667.67 seconds. Statistics on the prediction set show that the number of closed frames processed is 292, and the number of blinks is 42 for EAR threshold 0.2. However, statistics on the test set describe the number of closed frames as 153, and the number of blinks is 61. This experiment exhibits an accuracy of 96.85% and an AUC of 94.68%. The highest AUC score for the Talking Face dataset was achieved while using Modified EAR threshold 0.2468; it obtains 96.85%. Furthermore, Eyeblink8 dataset video 4 processed 5454 frames with 30 fps and duration of 181.8 seconds. The maximum AUC obtains while implementing our Modified EAR threshold was 0.2923; it achieved 91.17%. Moreover, Eyeblink8 dataset video 8 contains 10712 frames with 30 fps and durations 357.07 seconds. 
This dataset also got the best AUC when employed Modified EAR threshold 0.2105, it achieves 96.60%. Eyeblink8 dataset video 8 exhibits the minimum result of 21.1% accuracy and 60.2% AUC while employed EAR threshold 0.2. In this study, we prioritize AUC because of some reasons. First, the AUC is scale-invariant and assesses how well the predictions are ordered rather than how well they are ordered in real numbers. Second, AUC is not affected by categorization limits. It assesses the prediction accuracy of the models, regardless of the categorization criteria used to assess them.</ns0:p><ns0:p>Talking Face video dataset exhibits 94% accuracy and 96.85% AUC. Followed by Eyeblink8, video 8 achieves 95% accuracy and 96% AUC. Further Eyeblink8 video 4 obtains 83% accuracy and 91.17% AUC. Although the Talking Face dataset gets a high accuracy of 97% when using an EAR threshold of 0.2, it reaches the lowest AUC of 94.86%.</ns0:p><ns0:p>In Table <ns0:ref type='table'>3</ns0:ref>, Eye video 1 dataset processed 1829 frames with 29.97 fps and duration of 61.03 seconds. The maximum AUC 0.4931% and accuracy 93.88% were obtains while implementing our Modified EAR threshold 0.2103.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>5(a)-(c)</ns0:ref> shows the confusion matrix for the Talking Face dataset. Figure <ns0:ref type='figure' target='#fig_9'>5(d)-(f</ns0:ref>) explains the confusion matrix for the Eyeblink8 dataset video 4. The confusion matrix for Eyeblink8 dataset video 8 is drawn in Figure <ns0:ref type='figure' target='#fig_9'>5(g)-(i)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>(i) describes the False Positive (FP) 2 out of 107 positive labels (0.0187%) and False Negative (FN) 523 out of 10556 negative labels (0.0495%). Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> and Table <ns0:ref type='table'>5</ns0:ref> represents the evaluation performance of Precision, Recall, and F1 Score for each dataset in detail. The experimental results show that our proposed method, Modified EAR has the best performance compared to others. Furthermore, researchers have only used 0.2 or 0.3 as the EAR threshold, even though not all people's eye sizes are the same. Therefore, it is better to recalculate the EAR threshold to determine whether the eye is closed or open to identify the blink more precisely.</ns0:p><ns0:p>Based on Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> and Table <ns0:ref type='table'>5</ns0:ref> we can conclude that, when we recalculate the EAR threshold value the experiment achieves the best performance for all datasets. The TalkingFace dataset applies an EAR threshold of 0.2468, Eyeblink8 video 4 uses an EAR threshold of 0.2923, Eyeblink8 video 8 uses 0.2105 as an EAR threshold, and Eye Video 1 uses 0.2103 as an EAR threshold value. Our research experiment processes video frame by frame and detects eye blink for every three frames shown in Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>. The experiment results just show the beginning, middle, and finish frames of blinks. Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> illustrates the Eye Video 1 dataset result. The first blink started at: 1th frame, middle of action at: 5th frame, ended at: 8th frame. 
Moreover, the second blink started at the 224th frame, reached the middle of the action at the 227th frame, and ended at the 229th frame.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In our research work, the AUC score is more important than accuracy for several reasons <ns0:ref type='bibr' target='#b31'>(Lobo et al., 2008)</ns0:ref>: (1) Our experiment is concerned with ranking predictions, not with producing well-calibrated probabilities. (2) The video dataset is heavily imbalanced. This was discussed extensively in the research paper by Takaya Saito and Marc Rehmsmeier <ns0:ref type='bibr' target='#b48'>(Saito &amp; Rehmsmeier, 2015)</ns0:ref>; the intuition is that the false-positive rate for highly imbalanced datasets is pulled down by the large number of true negatives. (3) We care about both the positive and the negative class. If we are as concerned with true negatives as we are with true positives, it makes sense to use AUC.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows the EAR and error analysis of the Talking Face video dataset. Assuming the optimal slope of the linear regression satisfies m &gt;= 0, our experiment plots the whole sequence and obtains m = 0: blinking is infrequent and has a negligible effect on the overall EAR trend. Moreover, the cumulative error is not informative for blinks because of their delayed impact. Additionally, the errors behave more like normally distributed data than the raw EAR values do, as shown in Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>.</ns0:p><ns0:p>In this paper, the performance of the proposed eye blink detection technique is evaluated by comparing detected blinks with ground-truth blinks on the two standard datasets described above. Each frame-level prediction falls into one of four categories: a true positive (TP) is a closed-eye frame that is correctly detected; a false positive (FP) is an open-eye frame that is incorrectly reported as closed; a false negative (FN) is a closed-eye frame that is missed; and a true negative (TN) is an open-eye frame that is correctly left undetected. Precision, recall, and the F1 score are defined as in <ns0:ref type='bibr' target='#b10'>(Dewi, Chen, Liu, et al., 2021)</ns0:ref> <ns0:ref type='bibr' target='#b56'>(Yang et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Dewi, Chen, Yu, et al., 2021)</ns0:ref> in Equations (<ns0:ref type='formula'>7</ns0:ref>)-(<ns0:ref type='formula'>9</ns0:ref>). Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> shows the analysis of the first and second blinks in the Talking Face video dataset. The lower bound is the better estimator for blinks. The lower bound of the calibration is an EAR value of 0.1723 and detects six frames: four for the first blink and two for the second blink. The lower bound of the errors is an error value of -0.0817 and detects 12 frames: eight for the first blink and four for the second blink. Also, analysing with z_limit = 2 on the calibration performs better than running on the errors: the calibration lower bound (EAR 0.1723) then detects 12 frames (eight for the first blink and four for the second), and the errors lower bound (-0.0817) likewise detects 12 frames (eight for the first blink and four for the second). Using the datasets mentioned above, the proposed method outperforms the methods in previous research. The statistics are listed in Table <ns0:ref type='table' target='#tab_7'>6</ns0:ref>.
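For readers who want to reproduce this frame-level evaluation, the short sketch below shows one way the counts and scores above can be computed from per-frame ground-truth and predicted closed-eye labels. It assumes scikit-learn and NumPy; the function names, the use of the negated EAR as the AUC score, and the simple run-based blink counter are illustrative choices of ours rather than the exact evaluation script.

```python
# Sketch of the frame-level evaluation: TP/FP/FN/TN, precision, recall, F1
# (Equations (7)-(9)) and AUC, given per-frame labels. Assumes scikit-learn.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_recall_fscore_support,
                             roc_auc_score)

def evaluate_frames(y_true, y_pred, ear_values):
    """y_true / y_pred: 1 = closed-eye frame, 0 = open-eye frame.
    ear_values: per-frame EAR, used (negated) as the positive-class score for AUC."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1)
    # Lower EAR means "more closed", so negate it before ranking.
    auc = roc_auc_score(y_true, -np.asarray(ear_values, dtype=float))
    return dict(TP=int(tp), FP=int(fp), FN=int(fn), TN=int(tn),
                precision=precision, recall=recall, f1=f1, auc=auc)

def count_blinks(y_pred):
    """Count blinks as maximal runs of consecutive closed-eye frames."""
    y_pred = np.asarray(y_pred, dtype=int)
    return int(np.sum(np.diff(np.r_[0, y_pred]) == 1))

# Hypothetical usage with arrays produced by the detection step:
# scores = evaluate_frames(gt_closed, pred_closed, ears)
# print(scores, count_blinks(pred_closed))
```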
We obtain the highest Precision, 99%, for all datasets. Moreover, our proposed method achieves 97% Precision on Eye Video 1 dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper proposes a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. We analyzed and discussed in detail the experiment result with the public dataset and our dataset Eye video 1. Our work proves that using Modified EAR as a new threshold can improve blink detection results.</ns0:p><ns0:p>In the future, we will focus on the dataset that has facial actions, including smiling and yawning. Both basic and adaptive models lack facial emotions such as smiling and yawning. Machine learning methods may be a viable option, and we will implement SVM in our future research. If the subject's eyes are closed between 90% and 100%, the provided flag will change from X to C. eye not visible (NV)</ns0:p><ns0:p>While the subject's eye is obscured by the hand, poor lighting, or even excessive head movement, this variable shifts from X to N. face bounding box (F_X, F_Y, F_W, F_H)</ns0:p><ns0:p>x and y coordinates, width, height. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>&#119872;&#119900;&#119889;&#119894;&#119891;&#119894;&#119890;&#119889; &#119864;&#119860;&#119877; &#119879;&#8462;&#119903;&#119890;&#119904;&#8462;&#119900;&#119897;&#119889; = (&#119864;&#119860;&#119877; &#119874;&#119901;&#119890;&#119899; + &#119864;&#119860;&#119877; &#119862;&#119897;&#119900;&#119904;&#119890;&#119889; )/2 Eye Status (6) { &#119864;&#119860;&#119877; &#8804; &#119864;&#119860;&#119877; &#119879;&#8462;&#119903;&#119890;&#119904;&#8462;&#119900;&#119897;&#119889; = &#119864;&#119910;&#119890; &#119862;&#119897;&#119900;&#119904;&#119890;&#119889; &#119864;&#119860;&#119877; &#8805; &#119864;&#119860;&#119877; &#119879;&#8462;&#119903;&#119890;&#119904;&#8462;&#119900;&#119897;&#119889; = &#119864;&#119910;&#119890; &#119874;&#119901;&#119890;&#119899; Equation (6) depicts the EAR output range while the eyes are open and closed. When the eyes are closed, the EAR value will be close to 0, but the EAR value may be any integer larger than 0 when the eyes are open.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Another evaluation index, F1<ns0:ref type='bibr' target='#b52'>(Tian et al., 2019)</ns0:ref>(R. 
C.<ns0:ref type='bibr' target='#b9'>Chen et al., 2020)</ns0:ref> is shown in Equation (9).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>left and right eye corners positions RX (right corner x coordinate), LY (left corner y coordinate) 2 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Eye detection using Facial landmarks (Right Eye Points = 36-41, Left Eye Points = 42-47).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Eye detection using Facial landmarks (Right Eye Points = 36-41, Left Eye Points = 42-47).</ns0:figDesc><ns0:graphic coords='26,42.52,250.12,525.00,156.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Single Blink Detection. First Blink detected between 60th and 65th frames.</ns0:figDesc><ns0:graphic coords='27,42.52,178.87,525.00,156.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Open and closed eyes with facial landmarks (P1-P6).</ns0:figDesc><ns0:graphic coords='28,42.52,178.87,525.00,99.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Eye Blink Detection Flowchart. If the EAR is smaller than the Modified EAR Threshold for 3 seconds, we can consider eyes blink.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Confusion Matrix (True positive (TP), False positive (FP), True negative (TN), False Negative (FN)).</ns0:figDesc><ns0:graphic coords='31,42.52,199.12,525.00,401.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Eye Video 1 dataset result. 1th blink started at: 1th frame, middle of action at: 5th frame, ended at: 8th frame.</ns0:figDesc><ns0:graphic coords='32,42.52,199.12,525.00,268.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Talking Face Dataset EAR and Error Analysis range between 0-5000 frame.</ns0:figDesc><ns0:graphic coords='33,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. First Blink and Second Blink Talking Face Dataset Analysis in range 0-215 frame.</ns0:figDesc><ns0:graphic coords='34,42.52,199.12,525.00,180.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,70.87,242.26,672.95' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset Information.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset Information. file, a frame counter may be used to get a timestamp. blink ID A unique blink ID is defined as a series of identical blink ID frames. 
An eye blink interval is defined as a sequence of identical blink ID frames. non frontal face (NF)While the individual is gazing to the side and blinking, the supplied variable changes from X to N.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>frame ID In a separate left eye (LE), Left Eye.</ns0:cell></ns0:row><ns0:row><ns0:cell>right eye (RE),</ns0:cell><ns0:cell>Right Eye.</ns0:cell></ns0:row><ns0:row><ns0:cell>face (F)</ns0:cell><ns0:cell>Face.</ns0:cell></ns0:row><ns0:row><ns0:cell>eye fully closed (FC)</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistics on prediction and test set Talking Face and Eyeblink8 Dataset.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistics on prediction and test set Talking Face and Eyeblink8 Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell cols='2'>Talking Face</ns0:cell><ns0:cell cols='3'>Eyeblink8 Video 4</ns0:cell><ns0:cell cols='3'>Eyeblink8 Video 8</ns0:cell></ns0:row><ns0:row><ns0:cell>Video Info</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FPS</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell></ns0:row><ns0:row><ns0:cell>Frame Count</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell cols='2'>5454 5454</ns0:cell><ns0:cell>5454</ns0:cell><ns0:cell cols='2'>10712 10712</ns0:cell><ns0:cell>10712</ns0:cell></ns0:row><ns0:row><ns0:cell>Durations (s)</ns0:cell><ns0:cell cols='9'>166.67 166.67 166.67 181.8 181.8 181.8 357.07 357.07 357.07</ns0:cell></ns0:row><ns0:row><ns0:cell>EAR Threshold (t)</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.2468</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.2923</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.2105</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Statistics on the prediction set are</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Number of Frames Processed</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell cols='2'>5315 5315</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell cols='2'>10663 10663</ns0:cell><ns0:cell>10663</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Closed Frames</ns0:cell><ns0:cell>292</ns0:cell><ns0:cell>1059</ns0:cell><ns0:cell>458</ns0:cell><ns0:cell>123</ns0:cell><ns0:cell>1081</ns0:cell><ns0:cell>1035</ns0:cell><ns0:cell>8520</ns0:cell><ns0:cell>1991</ns0:cell><ns0:cell>628</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of 
Blinks</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>347</ns0:cell><ns0:cell>124</ns0:cell><ns0:cell>51</ns0:cell></ns0:row><ns0:row><ns0:cell>Statistics on the test set are</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Number of Frames Processed</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell cols='2'>5315 5315</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell cols='2'>10663 10663</ns0:cell><ns0:cell>10663</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Closed Frames</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Blinks</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>30</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Eye Closeness Frame by Frame Test Scores</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell cols='2'>0.9678 0.819</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell cols='5'>0.982 0.819 0.8273 0.211 0.8233</ns0:cell><ns0:cell>0.951</ns0:cell></ns0:row><ns0:row><ns0:cell>AUC</ns0:cell><ns0:cell cols='2'>0.9486 0.907</ns0:cell><ns0:cell cols='6'>0.9685 0.803 0.907 0.9117 0.602 0.9108</ns0:cell><ns0:cell>0.966</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Evaluation performance of Precision, Recall, and F1-Score (Talking Face and Eyeblink8 Dataset).</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Evaluation performance of Precision, Recall, and F1-Score (Talking Face and Eyeblink8 Dataset).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Talking Face</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Eyeblink8 Video 4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Eyeblink8 Video 8</ns0:cell></ns0:row><ns0:row><ns0:cell>Evaluation</ns0:cell><ns0:cell cols='2'>precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell cols='3'>support precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell cols='3'>support precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell>support</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.2</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.2</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.97 0.98</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99.</ns0:cell><ns0:cell cols='2'>0.99 0.99</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.20 0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell cols='2'>0.93 0.64</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell cols='2'>0.62 0.60</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell cols='2'>1.00 0.02</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell cols='2'>0.95 0.81</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell cols='2'>0.80 0.80</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell cols='2'>0.60 0.18</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.97 0.97</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.98 0.98</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.21 0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.97</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.98</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.21</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.3</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.3</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.14</ns0:cell><ns0:cell cols='2'>1.00 0.25</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell cols='2'>1.00 0.10</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.57</ns0:cell><ns0:cell cols='2'>0.91 0.57</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell cols='2'>0.91 
0.55</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell cols='2'>0.91 0.50</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell cols='2'>0.82 0.88</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.82 0.88</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.89</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.2468</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.2923</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.2105</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.94 0.97</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.90</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.95 0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell cols='2'>1.00 0.50</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell cols='2'>1.00 0.20</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell cols='2'>0.97 0.27</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell cols='2'>0.97 0.73</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell cols='2'>0.91 0.55</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell cols='2'>0.98 0.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.94 0.95</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.83 0.89</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.95 0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.94</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.83</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.95</ns0:cell></ns0:row></ns0:table><ns0:note>2 PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Previous research comparison with proposed method.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Previous research comparison with proposed method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Precision (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Lee et al. (Lee et al., 2010)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>83.30</ns0:cell></ns0:row><ns0:row><ns0:cell>Drutarovskys et al. (Drutarovsky &amp; Fogelton, 2015)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>92.20</ns0:cell></ns0:row><ns0:row><ns0:cell>Fogelton et al. 
(Fogelton &amp; Benesova, 2016)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>95.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>98.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Drutarovskys et al. (Drutarovsky &amp; Fogelton, 2015)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>79.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Fogelton et al. (Fogelton &amp; Benesova, 2016)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>94.69</ns0:cell></ns0:row><ns0:row><ns0:cell>Al-Gawwam et al. (Al-Gawwam &amp; Benaissa, 2018)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>96.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Eye Video 1</ns0:cell><ns0:cell>97.00</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:1:1:NEW 3 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear Editor,  Many thanks for allowing us to revise our manuscript for possible publication in the Journal PeerJ Computer Science. The paper is titled ' Adjusting eye aspect ratio for strong eye blink detection based on facial landmarks'. We have modified the manuscript accordingly, and detailed corrections are listed below point by point: Comments: Reviewer #1: Editor comments (Yan Chai Hum) 1) Captions of figures and title of tables are not well written. Please include more description with the purpose of bringing out the gist of the figures and tables to deliver the main message. Responses: Thanks to reviewer for the comments. We revised our captions of figures and title of tables in our manuscript. 2) The gap of knowledge that author wish to fill must be rewritten to include more background and details so that the novelty of the research can be emphasized. Responses: Thanks to reviewer for the comments. We revised our background and details in introduction line 88-101. Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. We add more explanation in line 130-134. In our research we implement the Dlib's 68 Facial landmark (Kazemi & Sullivan, 2014). The Dlib library's pre-trained facial landmark detector is used to estimate 68 (x, y)-coordinates corresponding to facial structures on the face. The 68 coordinates' indices Jaw Points = 0–16, Right Brow Points = 17–21, Left Brow Points = 22–26, Nose Points = 27–35, Right Eye Points = 36–41, Left Eye Points = 42–47, Mouth Points = 48–60, Lips Points = 61–67 and shown in Figure 1. 3) Please include at least 10 more recent references (recent 3 years preferably). Please enrich your literature review and revise the literature review to better explain the state of the art instead of just listing out relevant works. Try your best to bridge previous relevant works to your research of this paper clearly. Responses: Thanks to reviewer for the comments. We add 10 more recent reference in our manuscript. We add explanation of relevant work in line 41-57. According to a literature review, we found that computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. 
In (Anitha et al., 2020), the sequence of images is trained and classified using the Viola and Jones face detection algorithm in such a way that an alarm buzzes if the eyes are continuously closed for a predetermined period of time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. Using head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts, the Driver State Alert Control system is expected to detect drowsiness and collision liability associated with strong emotional factors (Persson et al., 2021). We add explanation of relevant work in line 88-101. Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. We add explanation of relevant work in line 121-134. In (Kim et al., 2020) implement semantic segmentation to accurately extract facial landmarks. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels should be more evenly distributed according to the face landmark in order to improve classification performance. In (Utaminingrum et al., 2021), suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. 
Calculating the difference between the horizontal and vertical lines in the eye area can be used to detect blinking eyes. Other research study (Navastara et al., 2020) report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR). We add 10 new references based on reviewer suggestions. Anitha, J., Mani, G., & Venkata Rao, K. (2020). Driver Drowsiness Detection Using Viola Jones Algorithm. Smart Innovation, Systems and Technologies, 159, 583–592. https://doi.org/10.1007/978-981-13-9282-5_55 Fatima, B., Shahid, A. R., Ziauddin, S., Safi, A. A., & Ramzan, H. (2020). Driver Fatigue Detection Using Viola Jones and Principal Component Analysis. Applied Artificial Intelligence, 34(6), 456–483. https://doi.org/10.1080/08839514.2020.1723875 Ghourabi, A., Ghazouani, H., & Barhoumi, W. (2020). Driver Drowsiness Detection Based on Joint Monitoring of Yawning, Blinking and Nodding. Proceedings - 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing, ICCP 2020, 407–414. https://doi.org/10.1109/ICCP51029.2020.9266160 Dreisig, M., Baccour, M. H., Schack, T., & Kasneci, E. (2020). Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, 889–896. https://doi.org/10.1109/SSCI47803.2020.9308133 Persson, A., Jonasson, H., Fredriksson, I., Wiklund, U., & Ahlstrom, C. (2021). Heart Rate Variability for Classification of Alert Versus Sleep Deprived Drivers in Real Road Driving Conditions. IEEE Transactions on Intelligent Transportation Systems, 22(6). https://doi.org/10.1109/TITS.2020.2981941 García, I., Bronte, S., Bergasa, L. M., Almazán, J., & Yebes, J. (2012). Vision-based drowsiness detector for real driving conditions. IEEE Intelligent Vehicles Symposium, Proceedings, 618–623. https://doi.org/10.1109/IVS.2012.6232222 Bergasa, L. M., Nuevo, J., Sotelo, M. A., Barea, R., & Lopez, M. E. (2006). Real-time system for monitoring driver vigilance. IEEE Transactions on Intelligent Transportation Systems, 7(1), 63–77. https://doi.org/10.1109/TITS.2006.869598 Kim, H., Kim, H., Rew, J., & Hwang, E. (2020). FLSNet: Robust Facial Landmark Semantic Segmentation. IEEE Access, 8. https://doi.org/10.1109/ACCESS.2020.3004359 Utaminingrum, F., Purwanto, A. D., Masruri, M. R. R., Ogata, K., & Somawirata, I. K. (2021). Eye movement and blink detection for selecting menu on-screen display using probability analysis based on facial landmark. International Journal of Innovative Computing, Information and Control, 17(4). https://doi.org/10.24507/ijicic.17.04.1287 Huda, C., Tolle, H., & Utaminingrum, F. (2020). Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR Values. International Journal of Interactive Mobile Technologies, 14(14), 16–30. https://doi.org/10.3991/IJIM.V14I14.14105 Navastara, D. A., Putra, W. Y. M., & Fatichah, C. (2020). Drowsiness Detection Based on Facial Landmark and Uniform Local Binary Pattern. Journal of Physics: Conference Series, 1529(5). https://doi.org/10.1088/1742-6596/1529/5/052015 Thanks to reviewer for the comments. We explain our main contribution in introduction line 92-101. Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. 
Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. 4) Suggest to add experiments regarding the environmental conditions when taking the images. Responses: Thanks to reviewer for the comments. We explain our dataset in line 199-229. E. Eye Blink Dataset Eyeblink8 dataset is more challenging as it includes facial emotions, head gestures, and looking down on a keyboard. This dataset consists of 408 blinks on 70,992 video frames, as annotated by (Fogelton & Benesova, 2016), with a video resolution of 640 × 480 pixels. The video was captured at 30 fps with an average length from 5000 to 11,000 frames. The talking face dataset consists of one video recording of one subject talking in front of the camera. The person in the video is making various facial expressions, including smiles, laughing, and funny face. Moreover, this video clip is captured with 30 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Fogelton & Benesova, 2016). The annotations start with line '#start' and rows consist of the following information frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: F_X: F_Y: F_W: F_H: LE_LX: LE_LY: LE_RX: LE_RY: RE_LX: RE_LY: RE_RX: RE_RY. The example of a frame consist a blink as follows: 2851: 9: X: X: X: X: X: 240: 204: 138: 122: 258: 224: 283 :225 :320 :226 :347 :224. A blink may consist of fully closed eyes or not. According to blinkmatters.com, the scale of a blink consists of fully closed eyes was 90% to 100%. The row will be like: 2852: 9: X: C: X: C: X: 239: 204: 140: 122: 259: 225: 284: 226: 320: 227: 346: 226. We are only interested in the blink ID and eye completely closed (FC) columns in our study experiment, therefore, we will ignore any other information. TalkingFace dataset consists of one video recording of one subject talking in front of the camera and making different facial expressions. This video clip is captured with 25 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Drutarovsky & Fogelton, 2015)(Fogelton & Benesova, 2016). Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, and Eyeblink Video 8 0.2105. The dataset information explains in Table 1. Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton (Fogelton & Benesova, 2016). The annotation tool uses OpenCV 2.4.6. Eye Video 1 were captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. 5) More explanation of the selection of threshold is required. Responses: Thanks to reviewer for the comments. We explain of the selection of threshold in line 142-164. A. Eye Aspect Ratio (EAR) Eye Aspect Ratio (EAR) is a scalar value that responds, especially for opening and closing eyes (Sugawara & Nikaido, 2014). 
A drowsy detection and accident avoidance system based on the blink duration was developed by (Pandey & Muppalaneni, 2021) and their work system has shown the good accuracy on yawning dataset (YawDD). To distinguish between the open and closed states of the eye, they used an EAR threshold of 0.35. Figure 2 depicts the progression of time it takes to calculate a typical EAR value for one blink. During the flashing process, we can observe that the EAR value increases or decreases rapidly. According to the results of previous studies, we used threshold values ​​to identify the rapid increase or decrease in EAR values ​​caused by blinking. As per previous research, we know that setting the threshold at 0.2 is beneficial for the work at hand. In addition to this approach, many additional approaches to blink detection using image processing techniques have been suggested in the literature. However, they have certain drawbacks, such as strict restrictions on image and text quality, which are difficult to overcome. EAR formula is insensitive to the direction and distance of the face, thus providing the benefit of identifying faces from a distance. EAR value calculates by substituting six coordinates around the eyes shown in Figure 3 into Equation (1) – (2) (You et al., 2019)(Noor et al., 2020). Furthermore, we explain our modified eye aspect ratio (modified EAR) as new threshold in line 165-176. B. Modified Eye Aspect Ratio (Modified EAR) Based on the fact that people have different eye sizes, in this study, we recalculate the EAR (Huda et al., 2020) value used as a threshold. In this research, we proposed the modified eye aspect ratio (Modified EAR) for closed eyes with Equation (3) and open eyes with Equation (4). (3) (4) From Equation (3) and Equation 4) we calculate our Modified EAR in Equation (5) (5) Eye Status (6) Equation (6) depicts the EAR output range while the eyes are open and closed. When the eyes are closed, the EAR value will be close to 0, but the EAR value may be any integer larger than 0 when the eyes are open. Another explanation about threshold value in line 195-198 and line 220-222. This research study sets a threshold value based on Modified EAR equations. If the EAR is smaller than the Modified EAR Threshold for 3 seconds, we can consider eyes blink. In our experiment, we implement three different threshold value 0.2, 0.3, and modified the EAR threshold for each video dataset. Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, Eyeblink Video 8 0.2105, and 0.2103 for Eye Video 1. The dataset information explains in Table 1. Reviewer 1 (Khin Wee Lai) Basic reporting 1. interesting topic and the presented writing are adequate. However, majority of the literature references are too old (more than 5 years). The gap of knowledge of the proposed topic is unclear. Too short prior arts have been discussed, thus, not much of gap of knowledge can be identified from the present works. Responses: Thanks to reviewer for the comments. We add 10 more recent reference in our manuscript. We add explanation of relevant work in line 41-57. According to a literature review, we found that computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. 
In (Anitha et al., 2020), the sequence of images is trained and classified using the Viola and Jones face detection algorithm in such a way that an alarm buzzes if the eyes are continuously closed for a predetermined period of time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. Using head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts, the Driver State Alert Control system is expected to detect drowsiness and collision liability associated with strong emotional factors (Persson et al., 2021). We add explanation of relevant work in line 88-101. Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. We add explanation of relevant work in line 121-134. In (Kim et al., 2020) implement semantic segmentation to accurately extract facial landmarks. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels should be more evenly distributed according to the face landmark in order to improve classification performance. In (Utaminingrum et al., 2021), suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. 
Calculating the difference between the horizontal and vertical lines in the eye area can be used to detect blinking eyes. Other research study (Navastara et al., 2020) report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR). We add 10 new references based on reviewer suggestions. Anitha, J., Mani, G., & Venkata Rao, K. (2020). Driver Drowsiness Detection Using Viola Jones Algorithm. Smart Innovation, Systems and Technologies, 159, 583–592. https://doi.org/10.1007/978-981-13-9282-5_55 Fatima, B., Shahid, A. R., Ziauddin, S., Safi, A. A., & Ramzan, H. (2020). Driver Fatigue Detection Using Viola Jones and Principal Component Analysis. Applied Artificial Intelligence, 34(6), 456–483. https://doi.org/10.1080/08839514.2020.1723875 Ghourabi, A., Ghazouani, H., & Barhoumi, W. (2020). Driver Drowsiness Detection Based on Joint Monitoring of Yawning, Blinking and Nodding. Proceedings - 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing, ICCP 2020, 407–414. https://doi.org/10.1109/ICCP51029.2020.9266160 Dreisig, M., Baccour, M. H., Schack, T., & Kasneci, E. (2020). Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, 889–896. https://doi.org/10.1109/SSCI47803.2020.9308133 Persson, A., Jonasson, H., Fredriksson, I., Wiklund, U., & Ahlstrom, C. (2021). Heart Rate Variability for Classification of Alert Versus Sleep Deprived Drivers in Real Road Driving Conditions. IEEE Transactions on Intelligent Transportation Systems, 22(6). https://doi.org/10.1109/TITS.2020.2981941 García, I., Bronte, S., Bergasa, L. M., Almazán, J., & Yebes, J. (2012). Vision-based drowsiness detector for real driving conditions. IEEE Intelligent Vehicles Symposium, Proceedings, 618–623. https://doi.org/10.1109/IVS.2012.6232222 Bergasa, L. M., Nuevo, J., Sotelo, M. A., Barea, R., & Lopez, M. E. (2006). Real-time system for monitoring driver vigilance. IEEE Transactions on Intelligent Transportation Systems, 7(1), 63–77. https://doi.org/10.1109/TITS.2006.869598 Kim, H., Kim, H., Rew, J., & Hwang, E. (2020). FLSNet: Robust Facial Landmark Semantic Segmentation. IEEE Access, 8. https://doi.org/10.1109/ACCESS.2020.3004359 Utaminingrum, F., Purwanto, A. D., Masruri, M. R. R., Ogata, K., & Somawirata, I. K. (2021). Eye movement and blink detection for selecting menu on-screen display using probability analysis based on facial landmark. International Journal of Innovative Computing, Information and Control, 17(4). https://doi.org/10.24507/ijicic.17.04.1287 Huda, C., Tolle, H., & Utaminingrum, F. (2020). Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR Values. International Journal of Interactive Mobile Technologies, 14(14), 16–30. https://doi.org/10.3991/IJIM.V14I14.14105 Navastara, D. A., Putra, W. Y. M., & Fatichah, C. (2020). Drowsiness Detection Based on Facial Landmark and Uniform Local Binary Pattern. Journal of Physics: Conference Series, 1529(5). https://doi.org/10.1088/1742-6596/1529/5/052015 Thanks to reviewer for the comments. We explain our main contribution in introduction line 92-101. Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. 
Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. 2. The landmark used in the study are rely on prior reported works from Dlib, and dataset utilized the publicly available resources from eyeblink8. Suggest the author to test on their own dataset. Responses: Thanks to reviewer for the comments. We test on our own dataset and explain in line 223-229. Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton (Fogelton & Benesova, 2016). The annotation tool uses OpenCV 2.4.6. Eye Video 1 were captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. Our dataset has the unique characteristics of people with small eyes and glasses. The environment is the people who drive the car. This dataset can be used for further research. It is difficult to find a dataset of people with small eyes, wearing glasses, and driving cars based on our knowledge. We collect the video from the car dashboard camera in Wufeng District, Taichung, Taiwan. Moreover, we explain the statistic and test set Eye Video 1 dataset on Table 3 in line 255-257. In Table 3, Eye video 1 dataset processed 1829 frames with 29.97 fps and duration of 61.03 seconds. The maximum AUC 0.4931% and accuracy 93.88% were obtains while implementing our Modified EAR threshold 0.2103. We explain the Evaluation performance of Precision, Recall, and F1-Score (Eye Video 1 Dataset) in Table 5 line 271-279, Figure 6 line 274-279. Based on Table 4 and Table 5 we can conclude that, when we recalculate the EAR threshold value the experiment achieves the best performance for all datasets. The TalkingFace dataset applies an EAR threshold of 0.2468, Eyeblink8 video 4 uses an EAR threshold of 0.2923, Eyeblink8 video 8 uses 0.2105 as an EAR threshold, and Eye Video 1 uses 0.2103 as an EAR threshold value. Our research experiment processes video frame by frame and detects eye blink for every three frames shown in Figure 6. The experiment results just show the beginning, middle, and finish frames of blinks. Figure 6 illustrates the Eye Video 1 dataset result. The first blink started at: 1th frame, middle of action at: 5th frame, ended at: 8th frame. Moreover, the second blink started at: 224th frame, middle of action at: 227th frame, ended at: 229th frame. Experimental design the proposed methods are in line with their problem statement. Again, I don't see much of novelty of their proposed works here although the methodology is explained sufficiently. Validity of the findings Both qualitative and quantitative analyses are adequate. Additional comments NA Responses: Thanks to reviewer for the comments. We revised our manuscript based on reviewer suggestions. Reviewer 2 (Anonymous) Basic reporting 1. A good motivational sentence is needed in the Introduction section of the study. Responses: Thanks to reviewer for the comments. 
We revised our manuscript based on reviewer suggestions. We revised our introduction in line 36-105. Eye blinks detection technology is essential and has been applied in different fields such as the intercommunication between disabled people and computers (Królak & Strumiłło, 2012), drowsiness detection (Rahman et al., 2016), the computer vision syndromes (Al Tawil et al., 2020)(Drutarovsky & Fogelton, 2015), anti-spoofing protection in face recognition systems (Pan et al., 2007)(Dewi et al., 2020), and cognitive load (Wilson, 2002). According to a literature review, we found that computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In (Anitha et al., 2020), the sequence of images is trained and classified using the Viola and Jones face detection algorithm in such a way that an alarm buzzes if the eyes are continuously closed for a predetermined period of time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. Using head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts, the Driver State Alert Control system is expected to detect drowsiness and collision liability associated with strong emotional factors (Persson et al., 2021). Moreover, the investigation of an individual's eye state in terms of blink time, blink count, and frequency provides valuable information about the subject's mental health. The result uses to explore the effects of external variables on changes in emotional states. Individuals' normal eyesight is characterized by the presence of spontaneous eye blinking at a certain frequency. The following elements impact eye blinking, including the condition of the eyelids, the condition of the eyes, the presence of illness, the existence of contact lenses, the psychological state, the surrounding environment, medicines, and other stimuli. The blinking frequency ranges between 6 and 30 times per minute (Rosenfield, 2011)(Tai et al., 2020). Furthermore, the term 'eye blink' refers to the quick shutting and reopening of the eyelids, which normally lasts between 100 and 400 milliseconds (Stern et al., 1984). Reflex blinking occurs significantly faster than spontaneous blinking, which occurs significantly less frequently. The frequency and length of blinking may be influenced by factors such as relative humidity, temperature, light, tiredness, illness, and physical activity. 
Real-time facial landmark detectors (Čech et al., 2016)(Dong et al., 2018) are available that captures most of the distinguishing features of human facial images, including the corner of the eye angles and eyelids. A person's eye size does not match another's eye; for example, one person has big eyes, and the other has small eyes. They don't have the same eyes or height value, as expected. When a person with small eyes closes his or her eyes, he or she may appear to have the same eye height as a person with large eyes. This issue will affect the experimental results. Therefore, we propose a simple but effective technique for detecting eye blink using a newly developed facial landmark detector with a modified Eye Aspect Ratio (EAR). Because our objective is to identify endogenous eye blinks, a typical camera with a frame rate of 25–30 frames per second (fps) is adequate. Eye blinks disclosure can be based on motion tracking within the eye region (Divjak & Bischof, 2009). Lee et al. (Lee et al., 2010) try to estimate the state of an eye, including an eye open or closed. Gracia et al. (García et al., 2012) experiment with eye closure for individual frames, which is consequently used in a sequence for blink detection. Other methods compute a difference between frames, including pixels values (Kurylyak et al., 2012) and descriptors (Malik & Smolka, 2014). We built our algorithm upon the successful method of Eye Aspect Ratio (Maior et al., 2020)(Mehta et al., 2019) and facial landmark (Wu & Ji, 2019a). Another method for blink detection is based on template matching (Awais et al., 2013). The templates with open and/or closed eyes are learned and normalized cross-correlation. Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. 2. The problem space of the study should be expressed more clearly. With this explanation, it will be clearer how eye blink detection handles problem solving in the study. Responses: Thanks to reviewer for the comments. We explain our main contribution in introduction line 96-105. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. This research work is organized as follows. Related work and the approach we intend to use in this study describes in Materials and Methods section. Section 4 describes the experiment and results. 
A detailed description of the findings of our study is provided in Section 4. Finally, conclusions are drawn, and future work is proposed in Section 5. 3. Perhaps, the literature section can be improved by mentioning that there are different fields of study related to eyeblink. Responses: Thanks to reviewer for the comments. We add 10 more recent reference in our manuscript. We add explanation of relevant work in line 41-57. According to a literature review, we found that computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In (Anitha et al., 2020), the sequence of images is trained and classified using the Viola and Jones face detection algorithm in such a way that an alarm buzzes if the eyes are continuously closed for a predetermined period of time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. Using head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts, the Driver State Alert Control system is expected to detect drowsiness and collision liability associated with strong emotional factors (Persson et al., 2021). We add explanation of relevant work in line 88-101. Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. This paper's main contributions are (1) We propose a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. (2) Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. (3) We analyzed and discussed in detail the experiment result with the public dataset such as Talking Face and Eyeblink8 dataset. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. We add explanation of relevant work in line 121-129. 
In (Kim et al., 2020) implement semantic segmentation to accurately extract facial landmarks. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels should be more evenly distributed according to the face landmark in order to improve classification performance. In (Utaminingrum et al., 2021), suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. Calculating the difference between the horizontal and vertical lines in the eye area can be used to detect blinking eyes. Other research study (Navastara et al., 2020) report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR). We add 10 new references based on reviewer suggestions. Anitha, J., Mani, G., & Venkata Rao, K. (2020). Driver Drowsiness Detection Using Viola Jones Algorithm. Smart Innovation, Systems and Technologies, 159, 583–592. https://doi.org/10.1007/978-981-13-9282-5_55 Fatima, B., Shahid, A. R., Ziauddin, S., Safi, A. A., & Ramzan, H. (2020). Driver Fatigue Detection Using Viola Jones and Principal Component Analysis. Applied Artificial Intelligence, 34(6), 456–483. https://doi.org/10.1080/08839514.2020.1723875 Ghourabi, A., Ghazouani, H., & Barhoumi, W. (2020). Driver Drowsiness Detection Based on Joint Monitoring of Yawning, Blinking and Nodding. Proceedings - 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing, ICCP 2020, 407–414. https://doi.org/10.1109/ICCP51029.2020.9266160 Dreisig, M., Baccour, M. H., Schack, T., & Kasneci, E. (2020). Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, 889–896. https://doi.org/10.1109/SSCI47803.2020.9308133 Persson, A., Jonasson, H., Fredriksson, I., Wiklund, U., & Ahlstrom, C. (2021). Heart Rate Variability for Classification of Alert Versus Sleep Deprived Drivers in Real Road Driving Conditions. IEEE Transactions on Intelligent Transportation Systems, 22(6). https://doi.org/10.1109/TITS.2020.2981941 García, I., Bronte, S., Bergasa, L. M., Almazán, J., & Yebes, J. (2012). Vision-based drowsiness detector for real driving conditions. IEEE Intelligent Vehicles Symposium, Proceedings, 618–623. https://doi.org/10.1109/IVS.2012.6232222 Bergasa, L. M., Nuevo, J., Sotelo, M. A., Barea, R., & Lopez, M. E. (2006). Real-time system for monitoring driver vigilance. IEEE Transactions on Intelligent Transportation Systems, 7(1), 63–77. https://doi.org/10.1109/TITS.2006.869598 Kim, H., Kim, H., Rew, J., & Hwang, E. (2020). FLSNet: Robust Facial Landmark Semantic Segmentation. IEEE Access, 8. https://doi.org/10.1109/ACCESS.2020.3004359 Utaminingrum, F., Purwanto, A. D., Masruri, M. R. R., Ogata, K., & Somawirata, I. K. (2021). Eye movement and blink detection for selecting menu on-screen display using probability analysis based on facial landmark. International Journal of Innovative Computing, Information and Control, 17(4). https://doi.org/10.24507/ijicic.17.04.1287 Huda, C., Tolle, H., & Utaminingrum, F. (2020). Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR Values. International Journal of Interactive Mobile Technologies, 14(14), 16–30. https://doi.org/10.3991/IJIM.V14I14.14105 Navastara, D. A., Putra, W. Y. 
M., & Fatichah, C. (2020). Drowsiness Detection Based on Facial Landmark and Uniform Local Binary Pattern. Journal of Physics: Conference Series, 1529(5). https://doi.org/10.1088/1742-6596/1529/5/052015 Experimental design 1. It should be stated more clearly how the process is followed for the selection of the threshold. If this value is determined manually, how this value is selected should be explained. Responses: Thanks to reviewer for the comments. We explain of the selection of threshold in line 142-164. C. Eye Aspect Ratio (EAR) Eye Aspect Ratio (EAR) is a scalar value that responds, especially for opening and closing eyes (Sugawara & Nikaido, 2014). A drowsy detection and accident avoidance system based on the blink duration was developed by (Pandey & Muppalaneni, 2021) and their work system has shown the good accuracy on yawning dataset (YawDD). To distinguish between the open and closed states of the eye, they used an EAR threshold of 0.35. Figure 2 depicts the progression of time it takes to calculate a typical EAR value for one blink. During the flashing process, we can observe that the EAR value increases or decreases rapidly. According to the results of previous studies, we used threshold values ​​to identify the rapid increase or decrease in EAR values ​​caused by blinking. As per previous research, we know that setting the threshold at 0.2 is beneficial for the work at hand. In addition to this approach, many additional approaches to blink detection using image processing techniques have been suggested in the literature. However, they have certain drawbacks, such as strict restrictions on image and text quality, which are difficult to overcome. EAR formula is insensitive to the direction and distance of the face, thus providing the benefit of identifying faces from a distance. EAR value calculates by substituting six coordinates around the eyes shown in Figure 3 into Equation (1) – (2) (You et al., 2019)(Noor et al., 2020). Furthermore, we explain our modified eye aspect ratio (modified EAR) as new threshold in line 165-176. D. Modified Eye Aspect Ratio (Modified EAR) Based on the fact that people have different eye sizes, in this study, we recalculate the EAR (Huda et al., 2020) value used as a threshold. In this research, we proposed the modified eye aspect ratio (Modified EAR) for closed eyes with Equation (3) and open eyes with Equation (4). (3) (4) From Equation (3) and Equation 4) we calculate our Modified EAR in Equation (5) (5) Eye Status (6) Equation (6) depicts the EAR output range while the eyes are open and closed. When the eyes are closed, the EAR value will be close to 0, but the EAR value may be any integer larger than 0 when the eyes are open. Another explanation about threshold value in line 195-198 and line 220-222. This research study sets a threshold value based on Modified EAR equations. If the EAR is smaller than the Modified EAR Threshold for 3 seconds, we can consider eyes blink. In our experiment, we implement three different threshold value 0.2, 0.3, and modified the EAR threshold for each video dataset. Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, Eyeblink Video 8 0.2105, and 0.2103 for Eye Video 1. The dataset information explains in Table 1. 2. It has been stated that environmental conditions are important in taking camera images. However, there is no experimental design related to this. Validity of the findings well prepared. 
Additional comments I think the study will get better after these edits and corrections. Responses: Thanks to reviewer for the comments. We explain our dataset in line 207-237. F. Eye Blink Dataset Eyeblink8 dataset is more challenging as it includes facial emotions, head gestures, and looking down on a keyboard. This dataset consists of 408 blinks on 70,992 video frames, as annotated by (Fogelton & Benesova, 2016), with a video resolution of 640 × 480 pixels. The video was captured at 30 fps with an average length from 5000 to 11,000 frames. The talking face dataset consists of one video recording of one subject talking in front of the camera. The person in the video is making various facial expressions, including smiles, laughing, and funny face. Moreover, this video clip is captured with 30 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Fogelton & Benesova, 2016). The annotations start with line '#start' and rows consist of the following information frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: F_X: F_Y: F_W: F_H: LE_LX: LE_LY: LE_RX: LE_RY: RE_LX: RE_LY: RE_RX: RE_RY. The example of a frame consist a blink as follows: 2851: 9: X: X: X: X: X: 240: 204: 138: 122: 258: 224: 283 :225 :320 :226 :347 :224. A blink may consist of fully closed eyes or not. According to blinkmatters.com, the scale of a blink consists of fully closed eyes was 90% to 100%. The row will be like: 2852: 9: X: C: X: C: X: 239: 204: 140: 122: 259: 225: 284: 226: 320: 227: 346: 226. We are only interested in the blink ID and eye completely closed (FC) columns in our study experiment, therefore, we will ignore any other information. TalkingFace dataset consists of one video recording of one subject talking in front of the camera and making different facial expressions. This video clip is captured with 25 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Drutarovsky & Fogelton, 2015)(Fogelton & Benesova, 2016). Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, and Eyeblink Video 8 0.2105. The dataset information explains in Table 1. Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton (Fogelton & Benesova, 2016). The annotation tool uses OpenCV 2.4.6. Eye Video 1 were captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. "
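Since the annotation format described above is colon-separated, a small parsing sketch may help to make it concrete. It is an illustration only: it assumes a plain-text annotation file in the layout quoted above (frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: ...), and the handling of rows that do not belong to a blink is an assumption of this sketch rather than part of the annotator's documentation.

```python
# Minimal sketch: parse an Eyeblink8/TalkingFace-style annotation file and keep only the
# fields used in the study (frame ID, blink ID, and the left/right "fully closed" flags).
# Column layout assumed from the description above:
# frameID : blinkID : NF : LE_FC : LE_NV : RE_FC : RE_NV : F_X : F_Y : F_W : F_H : ...
def parse_blink_annotations(path):
    """Return {blink_id: [(frame_id, fully_closed), ...]} from an annotation file."""
    blinks = {}
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):      # skip the '#start' header and blank lines
                continue
            fields = [p.strip() for p in line.split(":")]
            frame_id, blink_id = fields[0], fields[1]
            # Rows without a valid (non-negative) blink ID are assumed not to belong to a blink.
            if not blink_id.lstrip("-").isdigit() or int(blink_id) < 0:
                continue
            le_fc, re_fc = fields[3], fields[5]       # 'C' marks a fully closed eye
            fully_closed = (le_fc == "C") and (re_fc == "C")
            blinks.setdefault(int(blink_id), []).append((int(frame_id), fully_closed))
    return blinks
```

In the worked example above, frame 2851 would be recorded under blink ID 9 with fully_closed=False, and frame 2852 under the same blink with fully_closed=True.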
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Blink detection is an important technique in a variety of settings, including facial movement analysis and signal processing. However, automatic blink detection is very challenging because of the blink rate. This research work proposed a real-time method for detecting eye blinks in a video series. Automatic facial landmarks detectors are trained on a real-world dataset and demonstrate exceptional resilience to a wide range of environmental factors, including lighting conditions, face emotions, and head position. For each video frame, the proposed method calculates the facial landmark locations and extracts the vertical distance between the eyelids using the facial landmark positions. Our results show that the recognizable landmarks are sufficiently accurate to determine the degree of eye-opening and closing consistently. The proposed algorithm estimates the facial landmark positions, extracts a single scalar quantity by using Modified Eye Aspect Ratio (Modified EAR) and characterizing the eye closeness in each frame. Finally, blinks are detected by the Modified EAR threshold value and detecting eye blinks as a pattern of EAR values in a short temporal window. According to the results from a typical data set, it is seen that the suggested approach is more efficient than the state-of-the-art technique.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Eye blinks detection technology is essential and has been applied in different fields such as the intercommunication between disabled people and computers <ns0:ref type='bibr' target='#b24'>(Kr&#243;lak &amp; Strumi&#322;&#322;o, 2012)</ns0:ref>, drowsiness detection <ns0:ref type='bibr' target='#b38'>(Rahman et al., 2016)</ns0:ref>, the computer vision syndromes <ns0:ref type='bibr' target='#b0'>(Al Tawil et al., 2020)</ns0:ref> <ns0:ref type='bibr' target='#b15'>(Drutarovsky &amp; Fogelton, 2015)</ns0:ref>, anti-spoofing protection in face recognition systems <ns0:ref type='bibr' target='#b34'>(Pan et al., 2007)</ns0:ref>, and cognitive load <ns0:ref type='bibr' target='#b48'>(Wilson, 2002)</ns0:ref>.</ns0:p><ns0:p>According to the literature review, computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In <ns0:ref type='bibr' target='#b1'>(Anitha et al., 2020)</ns0:ref>, Viola and Jones face detection algorithms are used to train and classify images sequentially. Further, an alarm will sound if the eyes remain closed for a certain time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether eye is closed or open is done on the right eye only using SVM and Adaboost <ns0:ref type='bibr' target='#b16'>(Fatima et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>(Ghourabi et al., 2020)</ns0:ref> recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced in particular by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. 
For the classification of the driver's state, <ns0:ref type='bibr' target='#b14'>(Dreisig et al., 2020)</ns0:ref> developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The bestperforming feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. The Driver State Alert Control system expects to detect drowsiness and collision liability associated with strong emotional factors such as head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts <ns0:ref type='bibr' target='#b36'>(Persson et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Moreover, investigating an individual's eye state in terms of blink time, blink count, and frequency provides valuable information about the subject's mental health. The result uses to explore the effects of external variables on changes in emotional states. Individuals' normal eyesight is characterized by the presence of spontaneous eye blinking at a certain frequency. The following elements impact eye blinking, including the condition of the eyelids, the condition of the eyes, the presence of illness, contact lenses, the psychological state, the surrounding environment, medicines, and other stimuli. The blinking frequency ranges between 6 and 30 times per minute <ns0:ref type='bibr' target='#b40'>(Rosenfield, 2011)</ns0:ref>. Furthermore, the term 'eye blink' refers to the quick shutting and reopening of the eyelids, which normally lasts between 100 and 400. Reflex blinking occurs significantly faster than spontaneous blinking, which occurs significantly less frequently. The frequency and length of blinking may be influenced by relative humidity, temperature, light, tiredness, illness, and physical activity. Real-time facial landmark detectors <ns0:ref type='bibr'>(&#268;ech et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b12'>(Dong et al., 2018)</ns0:ref> are available that captures most of the distinguishing features of human facial images, including the corner of the eye angles and eyelids. A person's eye size does not match another's eye; for example, one has big eyes but the other has small eyes. They don't have the same eyes or height value, as expected. When a person with small eyes closes his or her eyes, he or she may appear to have the same eye height as a person with large eyes. This issue will affect the experimental results. Therefore, we propose a simple but effective technique for detecting eye blink using a newly developed facial landmark detector with a modified Eye Aspect Ratio (EAR). Because our objective is to identify endogenous eye blinks, a typical camera with a frame rate of 25-30 frames per second (fps) is adequate. Eye blinks disclosure can be based on motion tracking within the eye region <ns0:ref type='bibr' target='#b11'>(Divjak &amp; Bischof, 2009)</ns0:ref>. <ns0:ref type='bibr' target='#b26'>Lee et al. (Lee et al., 2010)</ns0:ref> try to estimate the state of an eye, including an eye open or closed. Gracia et al. <ns0:ref type='bibr' target='#b18'>(Garc&#237;a et al., 2012)</ns0:ref> experiment with eye closure for individual frames, which is consequently used in a sequence for blink detection. Other methods compute a difference between frames, including pixels values <ns0:ref type='bibr' target='#b25'>(Kurylyak et al., 2012)</ns0:ref> and descriptors <ns0:ref type='bibr' target='#b30'>(Malik &amp; Smolka, 2014)</ns0:ref>. 
Using the effective Eye Aspect Ratio <ns0:ref type='bibr' target='#b29'>(Maior et al., 2020)</ns0:ref> and face landmarks <ns0:ref type='bibr' target='#b31'>(Mehta et al., 2019)</ns0:ref> methods, we developed our own algorithm to perfect it. Another method for blink detection is based on template matching <ns0:ref type='bibr' target='#b2'>(Awais et al., 2013)</ns0:ref>. The templates with open and/or closed eyes are learned and normalized cross-correlation.</ns0:p><ns0:p>Eye blinks can also be detected by measuring ocular parameters, for example by fitting ellipses to eye pupils <ns0:ref type='bibr' target='#b3'>(Bergasa et al., 2006)</ns0:ref> using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to <ns0:ref type='bibr' target='#b3'>(Bergasa et al., 2006)</ns0:ref>. Adopting EAR as a metric to detect blink in <ns0:ref type='bibr' target='#b39'>(Rakshita, 2018)</ns0:ref> yields interesting results in terms of robustness. Based on the previous research result, the blinks rate is previously determined using the EAR threshold value 0.2. Due to the large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study.</ns0:p><ns0:p>The most significant contributions made by this paper are as follows: (1) Blink types are automatically classified using a method that defines a new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. A detailed description of this method is provided in the paper. (2) Adjusted Eye Aspect Ratio for Strong Blink Detection based on facial landmarks was performed in this experiment. Then, we analyze and discuss in detail the experimental results with public datasets including the Talking Face and Eyeblink8 datasets. (3) We proposed a new Eye Video 1 dataset and our dataset has the unique characteristics of people with small eyes and glasses. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment.</ns0:p><ns0:p>Furthermore, this research work is organized as follows. Related work and the approach we intend to use in this study describes in Materials and Methods section. Section 4 describes the experiment and results. A detailed description of the findings of our study is provided in Section 4. Finally, conclusions are drawn, and future work is proposed in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials &amp; Methods</ns0:head></ns0:div> <ns0:div><ns0:head>A. Eye Blink Detection with Facial Landmarks</ns0:head><ns0:p>Eye blinking is a suppressed process that involves the rapid closure and reopening of the eyelid. Multiple muscles are involved in the blinking of the eyes. The orbicularis oculi and levator palpebrae superioris are the two primary muscles that regulate eye closure and opening. Blinking serves some important purposes, one of which is to moisten the corner of an individual's eye. Additionally, it cleans the cornea of the eye when the eyelashes are unable to catch all of the dust and debris that enter the eye. Everyone must blink to spread tears over the entire surface of the eyeball, and especially over the surface of the cornea. Blinking also performs as a reflex to prevent foreign objects from entering the eye. 
The goal of facial landmark identification is to identify and track significant landmarks on the face. Face tracking becomes strong for rigid facial deformation and not stiff due to head movements and facial expressions. Furthermore, facial landmarks were successfully applied to face alignment, head pose estimation, face swapping (D. <ns0:ref type='bibr' target='#b6'>Chen et al., 2019)</ns0:ref>, and blink detection <ns0:ref type='bibr' target='#b4'>(Cao et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b22'>(Kim et al., 2020)</ns0:ref> implement semantic segmentation to accurately extract facial landmarks. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels should be more evenly distributed according to the face landmark in order to improve classification performance. In <ns0:ref type='bibr' target='#b47'>(Utaminingrum et al., 2021)</ns0:ref>, suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. Calculating the difference between the horizontal and vertical lines in the eye area can be used to detect blinking eyes. Other research study <ns0:ref type='bibr' target='#b32'>(Navastara et al., 2020)</ns0:ref> report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR).</ns0:p><ns0:p>In our research we implement the Dlib's 68 Facial landmark <ns0:ref type='bibr' target='#b21'>(Kazemi &amp; Sullivan, 2014)</ns0:ref> (1) Face detection: Face detection is the first method that locates a human face and returns a value in x, y, w, h which is a rectangle. (2) Face landmark: After getting the location of a face in an image, we have to through points inside the rectangle. This annotation is part of the 68-point iBUG 300-W dataset on which the Dlib face landmark predictor is trained. Whichever data set is chosen, the Dlib framework can be used to train form predictors on the input training data.</ns0:p></ns0:div> <ns0:div><ns0:head>B. Eye Aspect Ratio (EAR)</ns0:head><ns0:p>Eye Aspect Ratio (EAR) is a scalar value that responds, especially for opening and closing eyes <ns0:ref type='bibr' target='#b44'>(Sugawara &amp; Nikaido, 2014)</ns0:ref>. A drowsy detection and accident avoidance system based on the blink duration was developed by <ns0:ref type='bibr' target='#b35'>(Pandey &amp; Muppalaneni, 2021</ns0:ref>) and their work system has shown the good accuracy on yawning dataset (YawDD). To distinguish between the open and closed states of the eye, they used an EAR threshold of 0.3. Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> depicts the progression of time it takes to calculate a typical EAR value for one blink. During the flashing process, we can observe that the EAR value increases or decreases rapidly. According to the results of previous studies, we used threshold values to identify the rapid increase or decrease in EAR values caused by blinking. As per previous research, we know that setting the threshold at 0.2 is beneficial for the work at hand. In addition to this approach, many additional approaches to blink detection using image processing techniques have been suggested in the literature. However, they have certain drawbacks, such as strict restrictions on image and text quality, which are difficult to overcome. 
Based on previous research results, we selected EAR thresholds of 0.2 and 0.3 in our experiment. The EAR formula is insensitive to the direction and distance of the face, thus providing the benefit of identifying faces from a distance. The EAR value is calculated by substituting the six coordinates around the eyes shown in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> into Equations (<ns0:ref type='formula'>1</ns0:ref>)-(<ns0:ref type='formula'>2</ns0:ref>) <ns0:ref type='bibr'>(You et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Noor et al., 2020)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>EAR = \frac{\lVert P_2 - P_6 \rVert + \lVert P_3 - P_5 \rVert}{2\,\lVert P_1 - P_4 \rVert} \quad (1) \qquad AVG\ EAR = \frac{1}{2}\left( EAR_{Left} + EAR_{Right} \right) \quad (2)</ns0:formula><ns0:p>Equation (1) describes the EAR, where P1 to P6 represent the 2D landmark positions around the eye. As illustrated in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>, P2, P3, P5, and P6 are used to measure the eye height, while P1 and P4 are used to measure the eye width. When the eyes are open, the EAR remains roughly constant, but when the eyes are closed, the EAR value rapidly drops to almost zero, as shown in Figure <ns0:ref type='figure' target='#fig_8'>3(b)</ns0:ref>.</ns0:p></ns0:div>
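To make Equations (1)-(2) concrete, the short Python sketch below extracts the six eye landmarks with Dlib and evaluates the per-frame average EAR. It is only an illustrative sketch: the pretrained model file name (shape_predictor_68_face_landmarks.dat) is the standard Dlib 68-point model, and the helper names (eye_aspect_ratio, frame_ear) are introduced here for illustration rather than taken from the paper.

```python
# Minimal sketch: compute the EAR of Equations (1)-(2) from Dlib's 68 facial landmarks.
# Assumes dlib, opencv-python, and numpy are installed and that the standard pretrained
# "shape_predictor_68_face_landmarks.dat" model file is available locally.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()          # HOG + linear classifier face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# The paper's 1-based indices 37-42 (left eye) and 43-48 (right eye)
# correspond to 0-based Dlib indices 36-41 and 42-47.
LEFT_EYE = list(range(36, 42))
RIGHT_EYE = list(range(42, 48))

def eye_aspect_ratio(pts):
    """Equation (1): EAR = (||P2-P6|| + ||P3-P5||) / (2 * ||P1-P4||)."""
    p1, p2, p3, p4, p5, p6 = pts
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (2.0 * np.linalg.norm(p1 - p4))

def frame_ear(frame_bgr):
    """Equation (2): average EAR of both eyes for the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], dtype=float)
    return 0.5 * (eye_aspect_ratio(pts[LEFT_EYE]) + eye_aspect_ratio(pts[RIGHT_EYE]))
```

Calling frame_ear on each video frame yields the per-frame EAR series used in the remainder of the pipeline.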
<ns0:div><ns0:head>C. Modified Eye Aspect Ratio (Modified EAR)</ns0:head><ns0:p>Based on the fact that people have different eye sizes, in this study we recalculate the EAR <ns0:ref type='bibr' target='#b20'>(Huda et al., 2020)</ns0:ref> value used as a threshold. In this research, we propose the modified eye aspect ratio (Modified EAR), defined for closed eyes by Equation (<ns0:ref type='formula'>3</ns0:ref>) and for open eyes by Equation (<ns0:ref type='formula'>4</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_1'>EAR_{Closed} = \frac{\lVert P_2 - P_6 \rVert_{min} + \lVert P_3 - P_5 \rVert_{min}}{2\,\lVert P_1 - P_4 \rVert_{max}} \quad (3) \qquad EAR_{Open} = \frac{\lVert P_2 - P_6 \rVert_{max} + \lVert P_3 - P_5 \rVert_{max}}{2\,\lVert P_1 - P_4 \rVert_{min}} \quad (4)</ns0:formula><ns0:p>From Equation (3) and Equation (4) we calculate our Modified EAR threshold in Equation (<ns0:ref type='formula'>5</ns0:ref>), and the eye status is determined by Equation (<ns0:ref type='formula'>6</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_2'>Modified\ EAR_{Threshold} = \frac{EAR_{Open} + EAR_{Closed}}{2} \quad (5) \qquad Eye\ Status = \begin{cases} Eye\ Closed, & EAR \le EAR_{Threshold} \\ Eye\ Open, & EAR \ge EAR_{Threshold} \end{cases} \quad (6)</ns0:formula><ns0:p>Equation (6) depicts the EAR output range while the eyes are open and closed. When the eyes are closed, the EAR value will be close to 0, whereas it may take any value larger than 0 when the eyes are open.</ns0:p></ns0:div> <ns0:div><ns0:head>D. Eye Blink Detection Flowchart</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref> describes the eye blink detection process. The first step is to divide the video into frames. Next, the facial landmark feature <ns0:ref type='bibr' target='#b49'>(Wu &amp; Ji, 2019)</ns0:ref> is implemented with the help of Dlib to detect the face. The detector used here is made up of the classic Histogram of Oriented Gradients (HOG) <ns0:ref type='bibr' target='#b10'>(Dhiraj &amp; Jain, 2019)</ns0:ref> feature along with a linear classifier. A facial landmark detector implemented inside Dlib <ns0:ref type='bibr' target='#b23'>(King, 2009)</ns0:ref> is used to detect facial features such as the eyes, ears, and nose.</ns0:p><ns0:p>Following the detection of the face, the eye area is identified using the facial landmark dataset. We can identify 68 landmarks (Yin et al., 2020) on the face using this dataset. A corresponding index accompanies each landmark, and the targeted area of the face is identified via these index criteria. The point indices for the two eyes are as follows: (1) left eye: (37, 38, 39, 40, 41, 42); (2) right eye: (43, 44, 45, 46, 47, 48) <ns0:ref type='bibr' target='#b27'>(Ling et al., 2021)</ns0:ref> <ns0:ref type='bibr' target='#b45'>(Tang et al., 2018)</ns0:ref>. After extracting the eye region, it is processed for detecting eye blinks; the eye region discovery is made at the beginning stage of the system.</ns0:p><ns0:p>Our research detects blinks with the help of two lines, drawn horizontally and vertically across the eye. The act of temporarily closing the eyes and moving the eyelids is referred to as blinking, and it is a rapid natural process. We can assume that the eye is closed/blinking when: (1) the eyeball is not visible, (2) the eyelid is closed, and (3) the upper and lower eyelids are connected. For an open eye, the vertical and horizontal lines are almost identical, while for a closed eye, the vertical line becomes smaller or almost vanishes. This research study sets a threshold value based on the Modified EAR equations, as illustrated in the sketch below.
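The following sketch complements the flowchart description by deriving the Modified EAR threshold of Equations (3)-(5) from per-frame eyelid distances collected over a calibration sequence, and by counting blinks with the eye-status rule of Equation (6). It is a minimal illustration: the distance arrays, the helper names, and the consecutive-frame parameter consec_frames are assumptions of this sketch, not values specified in the paper.

```python
# Minimal sketch: Modified EAR threshold (Equations (3)-(5)) and threshold-based blink counting.
# v1[i] = ||P2-P6||, v2[i] = ||P3-P5||, h[i] = ||P1-P4|| measured in frame i of a calibration
# sequence; "ears" is the per-frame average EAR series (e.g., from frame_ear in the sketch above).
import numpy as np

def modified_ear_threshold(v1, v2, h):
    ear_closed = (np.min(v1) + np.min(v2)) / (2.0 * np.max(h))   # Equation (3)
    ear_open = (np.max(v1) + np.max(v2)) / (2.0 * np.min(h))     # Equation (4)
    return (ear_open + ear_closed) / 2.0                          # Equation (5)

def count_blinks(ears, threshold, consec_frames=3):
    """Equation (6): a frame is 'closed' when EAR <= threshold; a run of at least
    consec_frames consecutive closed frames is counted as one blink (consec_frames
    is a parameter of this sketch)."""
    blinks, closed_run = 0, 0
    for ear in ears:
        if ear <= threshold:
            closed_run += 1
        else:
            if closed_run >= consec_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= consec_frames:   # handle a blink that ends on the last frame
        blinks += 1
    return blinks
```

In this way a subject-specific threshold (such as the per-dataset values reported later, e.g., 0.2468 for Talking Face) replaces the fixed 0.2 or 0.3 value.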
If the EAR is smaller than the Modified EAR Threshold for 3 seconds, we can consider eyes blink. In our experiment, we implement three different threshold value 0.2, 0.3, and modified the EAR threshold for each video dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>E. Eye Blink Dataset</ns0:head><ns0:p>Eyeblink8 dataset is more challenging as it includes facial emotions, head gestures, and looking down on a keyboard. This dataset consists of 408 blinks on 70,992 video frames, as annotated by <ns0:ref type='bibr' target='#b17'>(Fogelton &amp; Benesova, 2016)</ns0:ref>, with a video resolution of 640 &#215; 480 pixels. The video was captured at 30 fps with an average length from 5000 to 11,000 frames. The talking face dataset consists of one video recording of one subject talking in front of the camera. The person in the video is making various facial expressions, including smiles, laughing, and funny face. Moreover, this video clip is captured with 30 fps with a resolution of 720 &#215; 576 and contains 61 annotated blinks <ns0:ref type='bibr' target='#b17'>(Fogelton &amp; Benesova, 2016)</ns0:ref>.</ns0:p><ns0:p>The annotations start with line '#start' and rows consist of the following information frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: F_X: F_Y: F_W: F_H: LE_LX: LE_LY: LE_RX: LE_RY: RE_LX: RE_LY: RE_RX: RE_RY. The example of a frame consist a blink as follows: 2851: 9: X: X: X: X: X: <ns0:ref type='bibr'>240: 204: 138: 122: 258: 224: 283 :225 :320 :226 :347 :224</ns0:ref>. A blink may consist of fully closed eyes or not. According to blinkmatters.com, the scale of a blink consists of fully closed eyes was 90% to 100%. The row will be like: 2852: 9: X: C: X: C: X: <ns0:ref type='bibr'>239: 204: 140: 122: 259: 225: 284: 226: 320: 227: 346: 226</ns0:ref>. We are only interested in the blink ID and eye completely closed (FC) columns in our study experiment, therefore, we will ignore any other information.</ns0:p><ns0:p>TalkingFace dataset consists of one video recording of one subject talking in front of the camera and making different facial expressions. This video clip is captured with 25 fps with a resolution of 720 &#215; 576 and contains 61 annotated blinks <ns0:ref type='bibr' target='#b15'>(Drutarovsky &amp; Fogelton, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b17'>(Fogelton &amp; Benesova, 2016)</ns0:ref>.</ns0:p><ns0:p>Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, Eyeblink Video 8 0.2105, and 0.2103 for Eye Video 1. The dataset information explains in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton <ns0:ref type='bibr' target='#b17'>(Fogelton &amp; Benesova, 2016)</ns0:ref>. The annotation tool uses OpenCV 2.4.6. Eye Video 1 were captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. Our dataset has the unique characteristics of people with small eyes and glasses. The environment is the people who drive the car. This dataset can be used for further research. It is difficult to find a dataset of people with small eyes, wearing glasses, and driving cars based on our knowledge. 
We collect the video from a car dashboard camera in Wufeng District, Taichung, Taiwan.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> and Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> explain the statistics on the prediction and test set for each video dataset. The Talking Face dataset was captured at 30 fps with 5000 frames and a duration of 166.67 seconds. Statistics on the prediction set show that the number of closed frames processed is 292, and the number of blinks is 42 for the EAR threshold of 0.2. However, statistics on the test set describe the number of closed frames as 153, and the number of blinks as 61. This experiment exhibits an accuracy of 96.85% and an AUC of 94.68%. The highest AUC score for the Talking Face dataset was achieved while using the Modified EAR threshold of 0.2468; it obtains 96.85%. Furthermore, Eyeblink8 video 4 processed 5454 frames at 30 fps with a duration of 181.8 seconds. The maximum AUC was obtained while implementing our Modified EAR threshold of 0.2923; it achieved 91.17%. Moreover, Eyeblink8 video 8 contains 10712 frames at 30 fps with a duration of 357.07 seconds. This dataset also obtained its best AUC when employing the Modified EAR threshold of 0.2105, achieving 96.60%. Eyeblink8 video 8 exhibits the minimum result of 21.1% accuracy and 60.2% AUC when the EAR threshold of 0.2 is employed. In this study, we prioritize AUC for several reasons. First, AUC is scale-invariant; it assesses how well the predictions are ranked rather than their absolute values. Second, AUC is not affected by the classification threshold; it assesses the quality of the model's predictions regardless of the classification criteria used to assess them.</ns0:p><ns0:p>The Talking Face video dataset exhibits 94% accuracy and 96.85% AUC, followed by Eyeblink8 video 8, which achieves 95% accuracy and 96% AUC. Further, Eyeblink8 video 4 obtains 83% accuracy and 91.17% AUC. Although the Talking Face dataset reaches a high accuracy of 97% when using an EAR threshold of 0.2, it reaches the lowest AUC of 94.86%.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, the Eye Video 1 dataset processed 1829 frames at 29.97 fps with a duration of 61.03 seconds. The maximum AUC of 0.4931 and accuracy of 93.88% were obtained while implementing our Modified EAR threshold of 0.2103. Table <ns0:ref type='table' target='#tab_8'>4 and Table 5</ns0:ref> represent the evaluation performance of Precision, Recall, and F1 Score for each dataset in detail. The experimental results show that our proposed method, Modified EAR, has the best performance compared to the others. Furthermore, researchers have previously only used 0.2 or 0.3 as the EAR threshold, even though not all people's eye sizes are the same. Therefore, it is better to recalculate the EAR threshold to determine whether the eye is closed or open and to identify the blink more precisely.</ns0:p><ns0:p>Based on Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> and Table <ns0:ref type='table'>5</ns0:ref>, we can conclude that when we recalculate the EAR threshold value, the experiment achieves the best performance for all datasets. The TalkingFace dataset applies an EAR threshold of 0.2468, Eyeblink8 video 4 uses an EAR threshold of 0.2923, Eyeblink8 video 8 uses 0.2105 as an EAR threshold, and Eye Video 1 uses 0.2103 as an EAR threshold value.
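As an illustration of how such frame-by-frame scores can be computed, the sketch below evaluates per-frame eye-closeness predictions against annotated labels using scikit-learn. This is our own sketch under the assumption of per-frame binary labels (0 = open, 1 = closed); it is not the authors' evaluation script, the helper name score_frames is ours, and the toy data are invented.

```python
# Sketch (our illustration, not the authors' evaluation script): frame-level
# scoring of eye-closeness predictions against annotated test-set labels.
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             precision_recall_fscore_support)

def score_frames(y_true, y_pred):
    """y_true / y_pred: lists of 0 (open) / 1 (closed), one entry per frame."""
    acc = accuracy_score(y_true, y_pred)
    auc = roc_auc_score(y_true, y_pred)  # computed on hard labels here
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {"accuracy": acc, "auc": auc,
            "precision": prec, "recall": rec, "f1": f1}

# Toy example with 10 frames (values invented for illustration):
truth      = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0]
prediction = [0, 0, 1, 1, 0, 0, 0, 1, 1, 0]
print(score_frames(truth, prediction))
```

Because the datasets are heavily dominated by open-eye frames, accuracy alone can look high even for poor detectors, which is why the AUC and the class-wise precision, recall, and F1 in Tables 4 and 5 are reported alongside it.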
Our research experiment processes the video frame by frame and detects eye blinks every three frames, as shown in Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref>. The experimental results show only the beginning, middle, and ending frames of blinks. Figure <ns0:ref type='figure' target='#fig_13'>6</ns0:ref> illustrates the Eye Video 1 dataset result. The first blink started at the 1st frame, reached the middle of the action at the 5th frame, and ended at the 8th frame. Moreover, the second blink started at the 224th frame, reached the middle of the action at the 227th frame, and ended at the 229th frame.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In our research work, the AUC score is more important than accuracy for several reasons, as follows <ns0:ref type='bibr' target='#b28'>(Lobo et al., 2008)</ns0:ref>: (1) Our experiment is concerned with ranking predictions, not with producing well-calibrated probabilities. (2) The video dataset is heavily imbalanced. This was discussed extensively in the research paper by Takaya Saito and Marc Rehmsmeier <ns0:ref type='bibr' target='#b42'>(Saito &amp; Rehmsmeier, 2015)</ns0:ref>. The intuition is the following: the false-positive rate for highly imbalanced datasets is pulled down due to the many true negatives. (3) We concentrated on both the positive and negative classes. If we are as concerned with true negatives as we are with true positives, it makes sense to utilize AUC.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_15'>7</ns0:ref> describes the EAR and error analysis of the Talking Face video dataset. Consider that the optimal slope of the linear regression is m &gt;= 0. Our experiment plots the whole data and obtains m = 0. Blinking is infrequent and has a negligible effect on the overall EAR measurement trend. Moreover, the cumulative error is meaningless for blinks due to its delayed impact. Additionally, the errors behave more like properly distributed data than the EAR values, as shown in Figure <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>.</ns0:p><ns0:p>In this paper, the performance of the proposed eye blink detection technique is evaluated by comparing detected blinks with ground-truth blinks using the two standard datasets described above. The output examples can be classified into four classes. True positive (TP) is the number of correctly recognized samples; false positive (FP) is the number of samples incorrectly recognized as blinks; false negative (FN) is the number of blink samples that were not recognized; true negative (TN) is the number of correctly rejected (non-blink) samples. Precision and recall are represented by <ns0:ref type='bibr' target='#b8'>(Dewi, Chen, Liu, et al., 2021)</ns0:ref> <ns0:ref type='bibr' target='#b50'>(Yang et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b9'>(Dewi, Chen, Yu, et al., 2021)</ns0:ref> in Equations ( <ns0:ref type='formula'>7</ns0:ref>)-( <ns0:ref type='formula'>9</ns0:ref>). Figure <ns0:ref type='figure' target='#fig_17'>8</ns0:ref> represents the first blink and second blink analysis for the Talking Face video dataset. The lower bound is better for estimating blinks. The lower bound of calibration shows an EAR value of 0.1723. It detects 6 frames in detail as follows: 4 frames for the first blink and 2 frames for the second blink. Furthermore, the lower bound of errors obtains an error value of -0.0817. It detects 12 frames, with 8 frames for the first blink and 4 frames for the second blink. Also, analyzing with z_limit = 2 on calibration performs better than running on errors.
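Returning to the blink-level evaluation behind Equations (7)-(9), the sketch below shows one way the per-frame open/closed output could be grouped into blink events and matched against ground-truth blink intervals to obtain TP, FP, and FN counts. It is our own simplified illustration, not the authors' exact procedure: the minimum run length, the overlap-based matching rule, and the toy data are assumptions introduced for the example.

```python
# Sketch (ours, not the paper's code): group consecutive "closed" frames into
# blink events and count TP / FP / FN against ground-truth blink intervals.
def frames_to_blinks(closed_flags, min_len=3):
    """Return (start, end) frame indices of runs of closed frames >= min_len."""
    blinks, start = [], None
    for i, closed in enumerate(closed_flags + [False]):  # sentinel to flush the last run
        if closed and start is None:
            start = i
        elif not closed and start is not None:
            if i - start >= min_len:
                blinks.append((start, i - 1))
            start = None
    return blinks

def match_blinks(detected, ground_truth):
    """A detected blink counts as a TP if it overlaps any ground-truth interval."""
    def overlaps(a, b):
        return a[0] <= b[1] and b[0] <= a[1]
    tp = sum(any(overlaps(d, g) for g in ground_truth) for d in detected)
    fp = len(detected) - tp
    fn = sum(not any(overlaps(g, d) for d in detected) for g in ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return tp, fp, fn, precision, recall

# Toy usage (flags and intervals invented for illustration):
flags = [False] * 4 + [True] * 4 + [False] * 6 + [True] * 2 + [False] * 4
print(frames_to_blinks(flags))                            # -> [(4, 7)]
print(match_blinks(frames_to_blinks(flags), [(5, 8)]))    # -> (1, 0, 0, 1.0, 1.0)
```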
The lower bound of calibration obtains a 0.1723 EAR value. It detects 12 frames with 8 frames for the first blink, 4 frames for the second blink. The lower bound of errors exhibits -0.0817 error value. It detects 12 frames with 8 frames for the first blink, 4 frames for the second blink. Using the mentioned dataset, the proposed method outperforms methods in previous research. The statistics are listed in Table <ns0:ref type='table' target='#tab_9'>6</ns0:ref>. We obtain the highest Precision, 99%, for all datasets. Moreover, our proposed method achieves 97% Precision on Eye Video 1 dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper proposes a method to automatically classify blink types by determining the new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. Adjusted Eye Aspect Ratio for strong Eye Blink Detection based on facial landmarks. We analyzed and discussed in detail the experiment result with the public dataset and our dataset Eye video 1. Our work proves that using Modified EAR as a new threshold can improve blink detection results.</ns0:p><ns0:p>In the future, we will focus on the dataset that has facial actions, including smiling and yawning. Both basic and adaptive models lack facial emotions such as smiling and yawning. Machine learning methods may be a viable option, and we will implement SVM in our future research. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>. The Dlib library's pre-trained facial landmark detector is used to estimate 68 (x, y)-coordinates corresponding to facial structures on the face. The 68 coordinates' indices Jaw Points = 0-16, Right Brow Points = 17-21, Left Brow Points = 22-26, Nose Points = 27-35, Right Eye Points = 36-41, Left Eye Points = 42-47, Mouth Points = 48-60, Lips Points = 61-67 and shown in Figure 1. Facial landmark points identification using Dlib's 68 Model consists of the following two steps:</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(a)-(c) shows the confusion matrix for the Talking Face dataset. Figure 5(d)-(f) explains the confusion matrix for the Eyeblink8 dataset video 4. The confusion matrix for Eyeblink8 dataset</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Another evaluation index, F1 (Tian et al., 2019)(R. C. 
Chen et al., 2020) is shown in Equation (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 1 Figure 1</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Eye detection process using facial landmarks (Right Eye Points = 36-41, Left Eye Points = 42-47).</ns0:figDesc><ns0:graphic coords='23,42.52,250.12,525.00,156.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Single blink detection process and the first blink is detected between 60th and 65th frames.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Single blink detection process and the first blink is detected between 60th and 65th frames.</ns0:figDesc><ns0:graphic coords='26,42.52,250.12,525.00,156.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 The examples of open eyes and closed eyes with facial landmarks (P1-P6).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 The examples of open eyes and closed eyes with facial landmarks (P1-P6).</ns0:figDesc><ns0:graphic coords='29,42.52,204.37,525.00,99.75' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Eye Blink Detection Flowchart.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Eye Blink Detection Flowchart.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Confusion Matrix on the Talking Face dataset (True positive (TP), False positive (FP), True negative (TN), False Negative (FN)).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Confusion Matrix on the Talking Face dataset (True positive (TP), False positive (FP), True negative (TN), False Negative (FN)).</ns0:figDesc><ns0:graphic coords='32,42.52,250.12,525.00,401.25' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Eye Video 1 dataset result. 1th blink started at: 1th frame, middle of action at: 5th frame, ended at: 8th frame.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Eye Video 1 dataset result. 1th blink started at: 1th frame, middle of action at: 5th frame, ended at: 8th frame.</ns0:figDesc><ns0:graphic coords='33,42.52,250.12,525.00,268.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Talking Face Dataset EAR and Error Analysis range between 0-5000 frame.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Talking Face Dataset EAR and Error Analysis range between 0-5000 frame.</ns0:figDesc><ns0:graphic coords='34,42.52,204.37,525.00,166.50' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. 
First Blink and Second Blink Talking Face Dataset Analysis in range 0-215 frame.</ns0:figDesc><ns0:graphic coords='35,42.52,224.62,525.00,180.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. First Blink and Second Blink Talking Face Dataset Analysis in range 0-215 frame.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,70.87,242.26,672.95' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Statistics on prediction and test set on the Eye Video 1 Dataset</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Statistics on prediction and test set on the Eye Video 1 Dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Statistics on prediction and test set on the Eye Video 1 Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell>Eye Video 1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Video Info</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FPS</ns0:cell><ns0:cell>29.97</ns0:cell><ns0:cell>29.97</ns0:cell><ns0:cell>29.97</ns0:cell></ns0:row><ns0:row><ns0:cell>Frame Count</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell></ns0:row><ns0:row><ns0:cell>Durations (s)</ns0:cell><ns0:cell>61.03</ns0:cell><ns0:cell>61.03</ns0:cell><ns0:cell>61.03</ns0:cell></ns0:row><ns0:row><ns0:cell>EAR Threshold (t)</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.2103</ns0:cell></ns0:row><ns0:row><ns0:cell>Statistics on the prediction set are</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Number of Frames Processed</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Closed Frames</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>888</ns0:cell><ns0:cell>56</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Blinks</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>93</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>Statistics on the test set are</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total Number of Frames Processed</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell><ns0:cell>1829</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Closed Frames</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>58</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Blinks</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>14</ns0:cell></ns0:row><ns0:row><ns0:cell>Eye Closeness Frame by Frame Test Scores</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.9273</ns0:cell><ns0:cell>0.5014</ns0:cell><ns0:cell>0.9388</ns0:cell></ns0:row><ns0:row><ns0:cell>AUC</ns0:cell><ns0:cell>0.4872</ns0:cell><ns0:cell>0.4006</ns0:cell><ns0:cell>0.4931</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Evaluation performance of Precision, Recall, and F1-Score on the Talking Face</ns0:figDesc><ns0:table><ns0:row><ns0:cell>and Eyeblink8 Dataset</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Evaluation performance of Precision, Recall, and F1-Score on the Talking Face and</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Eyeblink8 Dataset</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Evaluation performance of Precision, Recall, and F1-Score on the Talking Face and Eyeblink8 Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Talking Face</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Eyeblink8 Video 4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Eyeblink8 Video 8</ns0:cell></ns0:row><ns0:row><ns0:cell>Evaluation</ns0:cell><ns0:cell cols='2'>precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell cols='3'>support precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell cols='3'>support precision recall</ns0:cell><ns0:cell>f1-score</ns0:cell><ns0:cell>support</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.2</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.2</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.97 0.98</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99.</ns0:cell><ns0:cell cols='2'>0.99 0.99</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.20 0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell cols='2'>0.93 0.64</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell cols='2'>0.62 0.60</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell cols='2'>1.00 0.02</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell cols='2'>0.95 0.81</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell cols='2'>0.80 0.80</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell cols='2'>0.60 0.18</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.97 0.97</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.98 0.98</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.21 0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell 
/><ns0:cell>0.97</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.98</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.21</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.3</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.3</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.14</ns0:cell><ns0:cell cols='2'>1.00 0.25</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell cols='2'>1.00 0.10</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.57</ns0:cell><ns0:cell cols='2'>0.91 0.57</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell cols='2'>0.91 0.55</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell cols='2'>0.91 0.50</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell cols='2'>0.82 0.88</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.82 0.88</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.89</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.82</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>EAR Threshold (t) = 0.2468</ns0:cell><ns0:cell cols='4'>EAR Threshold (t) = 0.2923</ns0:cell><ns0:cell cols='3'>EAR Threshold (t) = 0.2105</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.94 0.97</ns0:cell><ns0:cell>4847</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.82 0.90</ns0:cell><ns0:cell>5198</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.95 0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell cols='2'>1.00 0.50</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell cols='2'>1.00 0.20</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell cols='2'>0.97 0.27</ns0:cell><ns0:cell>107</ns0:cell></ns0:row><ns0:row><ns0:cell>Macro avg</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell cols='2'>0.97 0.73</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell cols='2'>0.91 0.55</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell cols='2'>0.98 0.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight avg</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.94 0.95</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell cols='2'>0.83 0.89</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell cols='2'>0.95 0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.94</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.83</ns0:cell><ns0:cell>5315</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.95</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 (on next 
page)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of previous research with the proposed method</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of previous research with the proposed method</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of previous research with the proposed method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Precision (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Lee et al. (Lee et al., 2010)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>83.30</ns0:cell></ns0:row><ns0:row><ns0:cell>Drutarovskys et al. (Drutarovsky &amp; Fogelton, 2015)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>92.20</ns0:cell></ns0:row><ns0:row><ns0:cell>Fogelton et al. (Fogelton &amp; Benesova, 2016)</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>95.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Talking Face</ns0:cell><ns0:cell>98.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Drutarovskys et al. (Drutarovsky &amp; Fogelton, 2015)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>79.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Fogelton et al. (Fogelton &amp; Benesova, 2016)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>94.69</ns0:cell></ns0:row><ns0:row><ns0:cell>Al-Gawwam et al. (Al-Gawwam &amp; Benaissa, 2018)</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>96.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Eyeblink8</ns0:cell><ns0:cell>99.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>Eye Video 1</ns0:cell><ns0:cell>97.00</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:07:63933:2:0:NEW 4 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear Editor,  Many thanks for allowing us to revise our manuscript for possible publication in the Journal PeerJ Computer Science. The paper is titled 'Adjusting eye aspect ratio for strong eye blink detection based on facial landmarks.' We have modified the manuscript accordingly, and detailed corrections are listed below point by point: Comments: Reviewer #1: Editor comments (Yan Chai Hum) The previous comments (as follows) were not addressed sufficiently. Please add more detail to these areas. 1) Captions of figures and title of tables are not well written. Please include more description with the purpose of bringing out the gist of the figures and tables to deliver the main message. Responses: Thanks to the reviewer for the comments. We revised our captions of figures and the title of tables in our manuscript and included more description to bring out the gist of the figures and tables to deliver the main message. 2) The gap of knowledge that author wish to fill must be rewritten to include more background and details so that the novelty of the research can be emphasized. Responses: Thanks to the reviewer for the comments. We revised our background and details in the Introduction line 36-110. We add more information about eye blinks detection technology. Eye blinks detection technology is essential and has been applied in different fields such as the intercommunication between disabled people and computers (Królak & Strumiłło, 2012), drowsiness detection (Rahman et al., 2016), the computer vision syndromes (Al Tawil et al., 2020)(Drutarovsky & Fogelton, 2015), anti-spoofing protection in face recognition systems (Pan et al., 2007), and cognitive load (Wilson, 2002). According to the literature review, computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In (Anitha et al., 2020), Viola and Jones face detection algorithms are used to train and classify images sequentially. Further, an alarm will sound if the eyes remain closed for a certain time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether the eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). We add more literature review in the Introduction, as follows: (1) In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced by the shadows cast, glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. (2) For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. The Driver State Alert Control system expects to detect drowsiness and collision liability associated with strong emotional factors such as head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts (Persson et al., 2021). (3) Moreover, investigating an individual's eye state in terms of blink time, blink count, and frequency provides valuable information about the subject's mental health. 
The result uses to explore the effects of external variables on changes in emotional states. Individuals' normal eyesight is characterized by the presence of spontaneous eye blinking at a certain frequency. The following elements impact eye blinking, including the condition of the eyelids, the condition of the eyes, the presence of illness, contact lenses, the psychological state, the surrounding environment, medicines, and other stimuli. The blinking frequency ranges between 6 and 30 times per minute (Rosenfield, 2011). Furthermore, the term 'eye blink' refers to the quick shutting and reopening of the eyelids, which normally lasts between 100 and 400. Reflex blinking occurs significantly faster than spontaneous blinking, which occurs significantly less frequently. The frequency and length of blinking may be influenced by relative humidity, temperature, light, tiredness, illness, and physical activity. Real-time facial landmark detectors (Čech et al., 2016)(Dong et al., 2018) are available that captures most of the distinguishing features of human facial images, including the corner of the eye angles and eyelids. (4) A person's eye size does not match another's eye; for example, one has big eyes but the other has small eyes. They don't have the same eyes or height value, as expected. When a person with small eyes closes his or her eyes, he or she may appear to have the same eye height as a person with large eyes. This issue will affect the experimental results. Therefore, we propose a simple but effective technique for detecting eye blink using a newly developed facial landmark detector with a modified Eye Aspect Ratio (EAR). Because our objective is to identify endogenous eye blinks, a typical camera with a frame rate of 25–30 frames per second (fps) is adequate. Eye blinks disclosure can be based on motion tracking within the eye region (Divjak & Bischof, 2009). (5) Lee et al. (Lee et al., 2010) try to estimate the state of an eye, including an eye open or closed. Gracia et al. (García et al., 2012) experiment with eye closure for individual frames, which is consequently used in a sequence for blink detection. Other methods compute a difference between frames, including pixels values (Kurylyak et al., 2012) and descriptors (Malik & Smolka, 2014). We developed our algorithm to perfect it using the effective Eye Aspect Ratio (Maior et al., 2020) and face landmarks (Mehta et al., 2019) methods. Another method for blink detection is based on template matching (Awais et al., 2013). The templates with open and/or closed eyes are learned and normalized cross-correlation. (6) Eye blinks can also be detected by measuring ocular parameters, for example, by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. Based on the previous research, the blinks rate is previously determined using the EAR threshold value 0.2. Due to a large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. Our main contribution describes on Introduction page 1. 
The most significant contributions made by this paper are as follows: (1) Blink types are automatically classified using a method that defines a new threshold based on the Eye Aspect Ratio value as a new parameter called Modified EAR. A detailed description of this method is provided in the paper. (2) An Adjusted Eye Aspect Ratio for Strong Blink Detection uses facial landmarks to performed in our experiment. Then, we analyze and discuss in detail for experimental results using the public datasets, including the Talking Face and Eyeblink8 datasets. (3) We proposed a new Eye Video 1 dataset, and our dataset has the unique characteristics of people with small eyes and glasses. (4) Our experimental results show that using the proposed Modified EAR as a new threshold can improve blink detection results in the experiment. 3) Please include at least 10 more recent references (recent 3 years preferably). Please enrich your literature review and revise the literature review to better explain the state of the art instead of just listing out relevant works. Try your best to bridge previous relevant works to your research of this paper clearly. Responses: Thanks to the reviewer for the comments. We added 10 new references and explanations in our literature review based on reviewer suggestions. According to a literature review, computer vision techniques rely heavily on the driver's facial expression to determine their state of drowsiness. In (Anitha et al., 2020), Viola and Jones face detection algorithms are used to train and classify images sequentially. An alarm will sound if the eyes remain closed for a certain time. Other research proposed a low-cost solution for driver fatigue detection based on micro-sleep patterns. The classification to find whether the eye is closed or open is done on the right eye only using SVM and Adaboost (Fatima et al., 2020). Reference: Anitha, J., Mani, G., & Venkata Rao, K. (2020). Driver Drowsiness Detection Using Viola Jones Algorithm. Smart Innovation, Systems and Technologies, 159, 583–592. https://doi.org/10.1007/978-981-13-9282-5_55 Fatima, B., Shahid, A. R., Ziauddin, S., Safi, A. A., & Ramzan, H. (2020). Driver Fatigue Detection Using Viola Jones and Principal Component Analysis. Applied Artificial Intelligence, 34(6), 456–483. https://doi.org/10.1080/08839514.2020.1723875 In (Ghourabi et al., 2020) recommend a reliable method for detecting driver drowsiness by analyzing facial images. In reality, the blinking detection's accuracy may be reduced by the shadows cast by glasses and/or poor lighting. As a result, drowsiness symptoms include yawning and nodding in addition to the frequency of blinking, which is what most existing works focus solely on. For the classification of the driver's state, (Dreisig et al., 2020) developed and evaluated a feature selection method based on the k-Nearest Neighbor (KNN) algorithm. The best-performing feature sets yield valuable information about the impact of drowsiness on the driver's blinking behavior and head movements. The Driver State Alert Control system expects to detect drowsiness and collision liability associated with strong emotional factors such as head-shoulder inclination, face detection, eye detection, emotion recognition, estimation of eye openness, and blink counts (Persson et al., 2021). Reference: Ghourabi, A., Ghazouani, H., & Barhoumi, W. (2020). Driver Drowsiness Detection Based on Joint Monitoring of Yawning, Blinking and Nodding. 
Proceedings - 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing, ICCP 2020, 407–414. https://doi.org/10.1109/ICCP51029.2020.9266160 Dreisig, M., Baccour, M. H., Schack, T., & Kasneci, E. (2020). Driver Drowsiness Classification Based on Eye Blink and Head Movement Features Using the k-NN Algorithm. 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, 889–896. https://doi.org/10.1109/SSCI47803.2020.9308133 Persson, A., Jonasson, H., Fredriksson, I., Wiklund, U., & Ahlstrom, C. (2021). Heart Rate Variability for Classification of Alert Versus Sleep Deprived Drivers in Real Road Driving Conditions. IEEE Transactions on Intelligent Transportation Systems, 22(6). https://doi.org/10.1109/TITS.2020.2981941 Previous research explains the effective Eye Aspect Ratio as follows. Lee et al. (Lee et al., 2010) try to estimate the state of an eye, including an eye open or closed. Gracia et al. (García et al., 2012) experiment with eye closure for individual frames, which is consequently used in a sequence for blink detection. Other methods compute a difference between frames, including pixels values (Kurylyak et al., 2012) and descriptors (Malik & Smolka, 2014). We developed an algorithm to perfect it using the effective Eye Aspect Ratio (Maior et al., 2020) and face landmarks (Mehta et al., 2019) methods. Another method for blink detection is based on template matching (Awais et al., 2013). The templates with open and/or closed eyes are learned and normalized cross-correlation. Reference: García, I., Bronte, S., Bergasa, L. M., Almazán, J., & Yebes, J. (2012). Vision-based drowsiness detector for real driving conditions. IEEE Intelligent Vehicles Symposium, Proceedings, 618–623. https://doi.org/10.1109/IVS.2012.6232222 Eye blinks can also be detected by measuring ocular parameters, for example, by fitting ellipses to eye pupils (Bergasa et al., 2006) using the modification of the algebraic distance algorithm for conic approximation. The frequency, amplitude, and duration of mouth and eye opening and closing play an important role in identifying a driver's drowsiness, according to (Bergasa et al., 2006). Adopting EAR as a metric to detect blink in (Rakshita, 2018) yields interesting results in terms of robustness. The blink rate is previously determined using the EAR threshold value 0.2. Due to a large number of individuals involved and the variation and features between subjects, such as natural eye openness, this approach was considered impractical for this study. Reference: Bergasa, L. M., Nuevo, J., Sotelo, M. A., Barea, R., & Lopez, M. E. (2006). Real-time system for monitoring driver vigilance. IEEE Transactions on Intelligent Transportation Systems, 7(1), 63–77. https://doi.org/10.1109/TITS.2006.869598 We add previous research studies in section materials and methods line 112. In (Kim et al., 2020) implement semantic segmentation to extract facial landmarks accurately. Semantic segmentation architecture and datasets containing facial images and ground truth pairs are introduced first. Further, they propose that the number of pixels is more evenly distributed according to the face landmark to improve classification performance. In (Utaminingrum et al., 2021) suggest a segmentation and probability calculation for a white pixel analysis based on facial landmarks as one way to detect the initial position of an eye movement. Calculating the difference between the horizontal and vertical lines in the eye area can detect blinking eyes. 
The other research study (Navastara et al., 2020) report the features of eyes are extracted by using a Uniform Local Binary Pattern (ULBP) and the Eyes Aspect Ratio (EAR). Reference: Kim, H., Kim, H., Rew, J., & Hwang, E. (2020). FLSNet: Robust Facial Landmark Semantic Segmentation. IEEE Access, 8. https://doi.org/10.1109/ACCESS.2020.3004359 Utaminingrum, F., Purwanto, A. D., Masruri, M. R. R., Ogata, K., & Somawirata, I. K. (2021). Eye movement and blink detection for selecting menu on-screen display using probability analysis based on facial landmark. International Journal of Innovative Computing, Information and Control, 17(4). https://doi.org/10.24507/ijicic.17.04.1287 Navastara, D. A., Putra, W. Y. M., & Fatichah, C. (2020). Drowsiness Detection Based on Facial Landmark and Uniform Local Binary Pattern. Journal of Physics: Conference Series, 1529(5). https://doi.org/10.1088/1742-6596/1529/5/052015 The EAR recalculation explanation is based on the previous research result. We explain in section C Modified Eye Aspect Ratio (Modified EAR). Based on the fact that people have different eye sizes, in this study, we recalculate the EAR (Huda et al., 2020) value used as a threshold. In this research, we proposed the modified eye aspect ratio (Modified EAR) for closed eyes with Equation (3) and open eyes with Equation (4). Reference: Huda, C., Tolle, H., & Utaminingrum, F. (2020). Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR Values. International Journal of Interactive Mobile Technologies, 14(14), 16–30. https://doi.org/10.3991/IJIM.V14I14.14105 4) Suggest to add experiments regarding the environmental conditions when taking the images. Responses: Thanks to the reviewer for the comments. We employ the open dataset TalkingFace and Eyeblink 8 Dataset from https://www.blinkingmatters.com/research. We also labeled our dataset in our experiment, namely the Eye Video 1 dataset. We describe all the datasets on section E eye blink dataset line 206-236. Eyeblink8 dataset is more challenging as it includes facial emotions, head gestures, and looking down on a keyboard. This dataset consists of 408 blinks on 70,992 video frames, as annotated by (Fogelton & Benesova, 2016), with a video resolution of 640 × 480 pixels. The video was captured at 30 fps with an average length from 5000 to 11,000 frames. The talking face dataset consists of one video recording of one subject talking in front of the camera. The person in the video is making various facial expressions, including smiles, laughing, and funny face. Moreover, this video clip is captured with 30 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Fogelton & Benesova, 2016). The annotations start with line '#start' and rows consist of the following information frame ID: blink ID: NF: LE_FC: LE_NV: RE_FC: RE_NV: F_X: F_Y: F_W: F_H: LE_LX: LE_LY: LE_RX: LE_RY: RE_LX: RE_LY: RE_RX: RE_RY. The example of a frame consist a blink as follows: 2851: 9: X: X: X: X: X: 240: 204: 138: 122: 258: 224: 283 :225 :320 :226 :347 :224. A blink may consist of fully closed eyes or not. According to blinkmatters.com, the scale of a blink consisting of fully closed eyes was 90% to 100%. The row will be like: 2852: 9: X: C: X: C: X: 239: 204: 140: 122: 259: 225: 284: 226: 320: 227: 346: 226. We are only interested in the blink ID and eye completely closed (FC) columns in our study experiment; therefore, we will ignore any other information. The explanation about TalkingFace Dataset is as follows. 
The TalkingFace dataset consists of one video recording of one subject talking in front of the camera and making different facial expressions. This video clip is captured with 25 fps with a resolution of 720 × 576 and contains 61 annotated blinks (Drutarovsky & Fogelton, 2015)(Fogelton & Benesova, 2016). Modified EAR threshold equation implemented for each dataset. After calculation, the EAR threshold for the Talking face dataset is 0.2468, Eyeblink Video 4 0.2923, Eyeblink Video 8 0.2105, and 0.2103 for Eye Video 1. The dataset information explains in Table 1. Eye Video 1 dataset labeling procedure using Eyeblink annotator 3.0 by Andrej Fogelton (Fogelton & Benesova, 2016). The annotation tool uses OpenCV 2.4.6. Eye Video 1 was captured at 29.97 fps and has a length of 1829 frames with 29.6 MB. Our dataset has the unique characteristics of people with small eyes and glasses. The environment is the people who drive the car. This dataset can be used for further research. It is difficult to find a dataset of people with small eyes, wearing glasses, and driving cars based on our knowledge. We collect the video from the car dashboard camera in Wufeng District, Taichung, Taiwan. 5) More explanation of the selection of threshold is required. Thanks to the reviewer for the comments. We explain the selection of the threshold value as follows. Based on the previous research, the blinks rate is previously determined using the EAR threshold value 0.2. Due to a large number of individuals involved and the variation and features between subjects, such as natural eye openness; this approach was considered impractically. Eye Aspect Ratio (EAR) is a scalar value that responds, especially for opening and closing eyes (Sugawara & Nikaido, 2014). A drowsy detection and accident avoidance system based on the blink duration was developed by (Pandey & Muppalaneni, 2021). Their work system has shown accuracy on the Yawning dataset (YawDD). To distinguish between the open and closed states of the eye, they used an EAR threshold of 0.3. We selected the EAR thresholds of 0.2 and 0.3 in our experiment based on the previous research. Moreover, we proposed the modified eye aspect ratio (Modified EAR) in this research because people have different eye sizes. We explain in section C Modified Eye Aspect Ratio (Modified EAR) line 170. "
Here is a paper. Please give your review comments after reading it.
401
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In many nations affected by the COVID-19 pandemic, the situation in higher education institutions has changed. During the pandemic, these institutions have introduced numerous esolutions to continue the process of education. Besides, research has shown many benefits in the last years of MOOCs. Yet, to date there are little studies to explore some individual characteristics, such as learners' metacognitive skills, that might have an impact on learning outcomes in MOOCs. Furthermore, promotion of deep learning is a serious challenge for online courses including MOOCs. Therefore, the purpose of this research was to explore the role of metacognition in promoting deep learning in MOOCs during COVID-19 pandemic. Participants were students at the department of home economics who were all at the seventh academic level.</ns0:p><ns0:p>Based on their scores on the metacognition awareness inventory (MAI), they were divided into two experimental groups, i.e high metacognition students and low metacognition students. A three-aspect assessment card of deep learning namely connecting concepts, creating new concepts, and critical thinking was used to collect data. Results showed that MOOC was more effective in fostering the deep learning aspects of high metacognition skills, and deep learning as a whole. With regard to backward seeking and slow watching events, results showed significant differences in favor of high metacognition students (HMs). Nevertheless, there were no statistically significant differences between students in both groups regarding the pausing event.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>In many nations affected by the COVID-19 pandemic, the situation in higher education institutions has changed. During the pandemic, these institutions have introduced numerous e-solutions to continue the process of education. Besides, research has shown many benefits in the last years of MOOCs. Yet, to date there are little studies to explore some individual characteristics, such as learners' metacognitive skills, that might have an impact on learning outcomes in MOOCs. Furthermore, promotion of deep learning is a serious challenge for online courses including MOOCs. Therefore, the purpose of this research was to explore the role of metacognition in promoting deep learning in MOOCs during COVID-19 pandemic. Participants were students at the department of home economics who were all at the seventh academic level. Based on their scores on the metacognition awareness inventory (MAI), they were divided into two experimental groups, i.e high metacognition students and low metacognition students. A three-aspect assessment card of deep learning namely connecting concepts, creating new concepts, and critical thinking was used to collect data. Results showed that MOOC was more effective in fostering the deep learning aspects of high metacognition skills, and deep learning as a whole. With regard to backward seeking and slow watching events, results showed significant differences in favor of high metacognition students (HMs).</ns0:p><ns0:p>Nevertheless, there were no statistically significant differences between students in both groups regarding the pausing event.</ns0:p></ns0:div> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The disease caused by the SARS-CoV-2 virus has been known as COVID-19. 
It first appeared in November, 2019 in the Chinese province of Wuhan and spread quickly around the world <ns0:ref type='bibr' target='#b15'>(Dias &amp; Lopes, 2020)</ns0:ref>. Education was one of the domains that has been tremendously affected by this pandemic that has been a real and immediate challenge to insecurity, health, unemployment and so on <ns0:ref type='bibr' target='#b76'>(Surkhali &amp; Garbuja, 2020)</ns0:ref>. To control this current outbreak, extensive steps have been introduced to minimize person-to-person transmission of COVID-19 <ns0:ref type='bibr' target='#b70'>(Rothan &amp; Byrareddy, 2020)</ns0:ref>. In education and more specifically, during the second semester of the academic year, 2020, governments, all over the world, announced the closure of all schools and higher education institutions in an attempt to contain COVID-19, Saudi Arabia was not an exception. Closure of educational institutions is a non-pharmaceutical measure used in many countries experiencing pandemics <ns0:ref type='bibr' target='#b18'>(Doyle, 2020)</ns0:ref>. Thus, the alternative was to move from conventional to online learning in a scenario where learners are not allowed to go to educational institutions <ns0:ref type='bibr' target='#b8'>(Basilaia &amp; Kvavadze, 2020)</ns0:ref>.</ns0:p><ns0:p>Online courses were delivered for free and for anyone and the Internet connection became a common arena for large-scale instruction due to understanding of online learning and open access teaching movement <ns0:ref type='bibr' target='#b85'>(Williams &amp; Stafford, 2018)</ns0:ref>. Utilizing different educational technologies, most tertiary institutions nowadays, offer many opportunities for online learning <ns0:ref type='bibr' target='#b23'>(Elfeky &amp; Elbyaly, 2017)</ns0:ref>, such as Massive Open Online Courses (MOOCs). MOOCs are now generating considerable media attention and important interest from higher education institutions. MOOCs are a relatively modern online active learning phenomenon <ns0:ref type='bibr' target='#b90'>(Yuan &amp; Powell, 2013b)</ns0:ref>, where Active learning is the key to guarantee deep learning, <ns0:ref type='bibr' target='#b10'>Biggs (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Budd, Robinson, and Kainz (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b87'>Wu, Chen, Zhong, Wang, and Shi (2021)</ns0:ref>. In addition, worldwide enthusiasm for this pedagogical model that was believed to have the potential to Manuscript to be reviewed Computer Science revolutionize the educational delivery was stimulated by MOOCs <ns0:ref type='bibr' target='#b64'>(Paton &amp; Fluck, 2018)</ns0:ref>. Several researchers have pointed out that MOOCs have considerable potential for enhancing teaching and learning <ns0:ref type='bibr' target='#b0'>(Adam, 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b17'>Doo &amp; Tang, 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Ferguson &amp; Clow, 2015;</ns0:ref><ns0:ref type='bibr' target='#b45'>Kizilcec &amp; Schneider, 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Mac Lochlainn &amp; Nic Giolla Mhich&#237;l, 2020)</ns0:ref>. 
However, the role of the metacognition skills in MOOCs are still not receiving the attention they deserve, despite the numerous studies that have been conducted to explore the impact of some individual characteristics on success in MOOCs <ns0:ref type='bibr' target='#b4'>(Ashton &amp; Davies, 2015;</ns0:ref><ns0:ref type='bibr' target='#b58'>Milligan &amp; Littlejohn, 2017;</ns0:ref><ns0:ref type='bibr' target='#b68'>Prinsloo &amp; Slade, 2019)</ns0:ref>.</ns0:p><ns0:p>Metacognition is often simplified as thinking about thinking or cognition about cognition <ns0:ref type='bibr' target='#b46'>(Ku &amp; Ho, 2010)</ns0:ref>. It addresses the conscious experience, self-regulation, and self-knowledge of one's cognitions or emotions <ns0:ref type='bibr' target='#b84'>(Wagener, 2013)</ns0:ref>. It is related to the awareness and comprehension of a person regarding the cognitive phenomena <ns0:ref type='bibr' target='#b56'>(Medina &amp; Castleberry, 2017)</ns0:ref>. On the other part, deep learning requires activating the individual's awareness regarding the cognitive phenomena, <ns0:ref type='bibr' target='#b10'>(Biggs, 2011;</ns0:ref><ns0:ref type='bibr' target='#b26'>Engel, Pallas, &amp; Lambert, 2017)</ns0:ref>. Metacognition variable is classified into two categories, high metacognition and low metacognition <ns0:ref type='bibr' target='#b69'>(Redondo &amp; L&#243;pez, 2018)</ns0:ref>. Educational research from the 21 st century clearly demonstrates the necessity of the educational practices that help learners acquire the metacognitive abilities that they will need to succeed in the today's and future complicated and globalized society <ns0:ref type='bibr' target='#b37'>(Howe &amp; Wig, 2017;</ns0:ref><ns0:ref type='bibr'>Howlett et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b83'>Wafubwa &amp; Cs&#237;kos, 2021)</ns0:ref>. On the other hand, deep learning includes creating new connections and concepts, integrating what students are learning with what they already know, in addition to critical thinking <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018)</ns0:ref>. Critical thinking is conceptualized as an operative higher order thinking example that can be accounted for because of validated and reliable tests, <ns0:ref type='bibr' target='#b59'>(Miri &amp; David, 2007)</ns0:ref>. Meanwhile, there is no doubt that metacognition is a core component of higher Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>order thinking in various forms <ns0:ref type='bibr' target='#b46'>(Ku &amp; Ho, 2010)</ns0:ref>. Deep thinking learners can relate ideas and topic to prior experiences and knowledge <ns0:ref type='bibr' target='#b2'>(Alt &amp; Boniel-Nissim, 2018)</ns0:ref>. Besides, learners can develop a deeper approach to learning through the application of metacognition, resulting in greater academic achievement in courses where expertise needs to be incorporated and applied <ns0:ref type='bibr' target='#b63'>(Papinczak &amp; Young, 2008)</ns0:ref>. In the context of MOOCs, deep learning may be a real challenge due to asynchronous written interaction and the lack of body language and visual cues <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Henderikx &amp; Kreijns, 2019)</ns0:ref>. In other words, the sense of community and interaction can be seen as deep learning prerequisites <ns0:ref type='bibr'>(Ertmer et al., 2007)</ns0:ref>. 
MOOCs can be a form of online learning in higher education with strong potential to enhance deep learning; however, as long as learners do not see each other, such interaction will mostly be written and asynchronous, which could have consequences for choosing a deep learning approach <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Video lectures are also a main part of MOOC course design, and the learning platforms store web log data about student experiences with the course material, e.g., video interaction events <ns0:ref type='bibr' target='#b60'>(Mubarak, Cao, &amp; Ahmed, 2021)</ns0:ref>. Video interaction events reflect cognitive engagement, including pausing, backward seeking, and slow watching of the video <ns0:ref type='bibr' target='#b51'>(Li &amp; Baker, 2018)</ns0:ref>.</ns0:p><ns0:p>Experimental evidence from early research on interactive educational videos in online learning shows that allowing learners the chance to interact with videos through pausing, backward seeking, and slow watching significantly improves learning <ns0:ref type='bibr' target='#b77'>(Tang &amp; Xing, 2018;</ns0:ref><ns0:ref type='bibr' target='#b88'>Xing, 2019;</ns0:ref><ns0:ref type='bibr' target='#b92'>Zhang &amp; Zhou, 2006)</ns0:ref>. Moreover, a MOOC participant leaves behind an easily accessible log of behaviors, such as information about every time he or she begins, rewinds, or pauses a video. Despite these attributes of a MOOC, much of the previous research does not directly discuss the actual cognitive processes underlying video interaction events <ns0:ref type='bibr' target='#b51'>(Li &amp; Baker, 2018)</ns0:ref>, and does not investigate the relationship between metacognition and video interaction events in MOOCs.</ns0:p><ns0:p>Henceforth, by adapting the presented information to the learner's cognitive processing needs, such events may let the learner control the density, speed, and order of the presented information <ns0:ref type='bibr' target='#b11'>(Brinton &amp; Buccapatnam, 2015;</ns0:ref><ns0:ref type='bibr' target='#b51'>Li &amp; Baker, 2018;</ns0:ref><ns0:ref type='bibr' target='#b91'>Zhang &amp; Skryabin, 2016)</ns0:ref>.</ns0:p><ns0:p>Accordingly, this research aims to investigate the extent to which the metacognition variable can promote the intended outcomes of deep learning, such as connecting concepts, creating new concepts, and critical thinking <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018)</ns0:ref>. In addition, it aims to measure the extent to which students' pausing, backward seeking, and slow watching of videos can be used to infer the relationship between metacognition and video interaction events in MOOCs during the COVID-19 pandemic. In short, it aims to answer these questions:</ns0:p><ns0:p>RQ1: To what extent does the MOOC promote deep learning, namely connecting concepts, creating new concepts, and critical thinking, for students of high metacognition and low metacognition? RQ2: Does the learners' metacognition, whether high or low, impact video interaction events such as slow watching, backward seeking, and pausing in MOOCs?</ns0:p></ns0:div> <ns0:div><ns0:head>Literature Review</ns0:head></ns0:div> <ns0:div><ns0:head>MOOCs</ns0:head><ns0:p>MOOCs have emerged as a popular mechanism for individuals to acquire new skills and knowledge <ns0:ref type='bibr' target='#b58'>(Milligan &amp; Littlejohn, 2017)</ns0:ref>. 
Hence, a primary goal of MOOCs is to provide people with an opportunity to learn <ns0:ref type='bibr' target='#b44'>(Kizilcec &amp; P&#233;rez-Sanagust&#237;n, 2016)</ns0:ref>. MOOCs are unlike most other types of online learning in higher education. They are free and are funded by top-tier institutions, which gives them an air of prestige that has never been achieved before by online courses <ns0:ref type='bibr' target='#b31'>(Evans &amp; Baker, 2016)</ns0:ref>. Meanwhile, they encourage learners to study when and where they choose, reflecting an improvement in the autonomy of learners attending a MOOC compared to those attending a conventional course <ns0:ref type='bibr' target='#b40'>(Jansen &amp; Van Leeuwen, 2017)</ns0:ref>.</ns0:p><ns0:p>MOOCs are a rapidly growing method of educational provision <ns0:ref type='bibr' target='#b36'>(Hone &amp; El Said, 2016)</ns0:ref>. They can be seen as an expansion of current online education approaches in terms of scalability and open access to courses <ns0:ref type='bibr' target='#b89'>(Yuan &amp; Powell, 2013a)</ns0:ref>. Their course structures consist of auto-graded quizzes, online discussion forums, and lecture videos <ns0:ref type='bibr' target='#b49'>(Lee &amp; Watson, 2020)</ns0:ref>. In other words, they are built as an alternative to most practices of traditional online learning that deliver content through a single or centralized platform <ns0:ref type='bibr' target='#b41'>(Joksimovi&#263; et al., 2018)</ns0:ref>. Therefore, in the coming years, MOOCs are expected to play a key role in the learning of undergraduate students.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Learning</ns0:head><ns0:p>Surface learning and deep learning are the two main approaches by which learners can learn <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018)</ns0:ref>. Deep learning is a process of learning, advocated by constructivist learning theory, that occurs through students' social negotiation, collaboration, and reflection on their own learning practices <ns0:ref type='bibr' target='#b48'>(Lee &amp; Baek, 2012)</ns0:ref>. Besides, it is an approach involving complex personal development and changes in learning habits, epistemological beliefs, and perceptions. It focuses on the underlying meanings, main ideas, themes and principles. It also stresses the importance of refining ideas, applying knowledge and utilizing evidence across contexts <ns0:ref type='bibr' target='#b10'>(Biggs, 2011;</ns0:ref><ns0:ref type='bibr' target='#b16'>Donnison &amp; Penn-Edwards, 2012;</ns0:ref><ns0:ref type='bibr' target='#b86'>Wingate, 2007)</ns0:ref>. In contrast, surface learning is a passive treatment of information, employs low-level metacognition, and lacks thinking <ns0:ref type='bibr' target='#b48'>(Lee &amp; Baek, 2012)</ns0:ref>. In addition, it treats the course as routinely memorizing facts and carrying out procedures. 
It focuses on unrelated bits of knowledge and the lower requirements of the syllabus <ns0:ref type='bibr' target='#b16'>(Donnison &amp; Penn-Edwards, 2012;</ns0:ref><ns0:ref type='bibr' target='#b27'>Entwistle &amp; Peterson, 2004)</ns0:ref>.</ns0:p><ns0:p>Deep learning requires learners to relate ideas and topics to prior experiences and knowledge as an activity of constructivist education, which refers to the idea that skills and content should be understood within the student's prior knowledge framework <ns0:ref type='bibr' target='#b1'>(Alt, 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alt &amp; Boniel-Nissim, 2018)</ns0:ref>. On the other hand, surface learning, which is confined to memorizing facts and rote learning, requires students to memorize or replicate the learning material for a test <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b67'>Price, 2013)</ns0:ref>. With the latter, only the basics of the learning material are learned <ns0:ref type='bibr' target='#b71'>(Rozgonjuk &amp; Saal, 2018)</ns0:ref>. Nevertheless, deep learning is made up of three main aspects, namely creating new concepts, connecting concepts, and critical thinking <ns0:ref type='bibr' target='#b33'>(Filius et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Further, deep learning leads to higher academic success and performance <ns0:ref type='bibr' target='#b42'>(Karaman &amp; Demirci, 2019;</ns0:ref><ns0:ref type='bibr' target='#b80'>Uludag &amp; Uludag, 2017)</ns0:ref>. In professional environments where online tools are to be utilized, deep learners can be efficiently supported <ns0:ref type='bibr' target='#b48'>(Lee &amp; Baek, 2012;</ns0:ref><ns0:ref type='bibr' target='#b52'>Li &amp; Xing, 2021)</ns0:ref>, although it may be a challenge in the context of MOOCs, as mentioned earlier.</ns0:p></ns0:div> <ns0:div><ns0:head>Metacognition</ns0:head><ns0:p>Metacognition is involved in most learning situations <ns0:ref type='bibr' target='#b84'>(Wagener, 2013)</ns0:ref> because it refers to a person's cognition and knowledge regarding cognitive phenomena <ns0:ref type='bibr' target='#b56'>(Medina &amp; Castleberry, 2017)</ns0:ref>. It enables the student to be more aware of the progress achieved <ns0:ref type='bibr' target='#b78'>(Tops &amp; Callens, 2014)</ns0:ref>. In other words, metacognition refers to one's own thoughts and cognitions <ns0:ref type='bibr' target='#b19'>(Driessen, 2014)</ns0:ref>. It is the highest form of one's intellectual capacity <ns0:ref type='bibr' target='#b62'>(Paliokas, 2009)</ns0:ref>.</ns0:p><ns0:p>Knowledge about cognitive tasks, strategic knowledge, and self-knowledge are the main substructures of metacognition <ns0:ref type='bibr' target='#b66'>(Polegato, 2014)</ns0:ref>. During the process of learning, metacognition directs the students' learning strategies <ns0:ref type='bibr' target='#b56'>(Medina &amp; Castleberry, 2017)</ns0:ref>, where metacognitive strategies represent an important variable <ns0:ref type='bibr' target='#b34'>(Halpern, 1998)</ns0:ref>. Aspects of metacognition interact with a variety of external and internal factors such as socio-economic status, motivation, and type of instruction <ns0:ref type='bibr' target='#b56'>(Medina &amp; Castleberry, 2017)</ns0:ref>. 
The Metacognitive Awareness Inventory (MAI) developed by <ns0:ref type='bibr' target='#b72'>Schraw and Dennison (1994)</ns0:ref> is commonly used to measure learners' metacognitive skills. It follows a common model of two components, i.e. Regulation of Cognition and Knowledge of Cognition <ns0:ref type='bibr' target='#b55'>(M&#228;kip&#228;&#228;, Kallio, &amp; Hotulainen, 2021)</ns0:ref>. Regulation of Cognition refers to students' modifying the progress of their cognitive activity and controlling their own cognitive processing <ns0:ref type='bibr' target='#b14'>(Cleary &amp; Kitsantas, 2017)</ns0:ref>, while Knowledge of Cognition refers to what learners know about their own cognition or about cognition in general. It involves procedural, conditional, and declarative knowledge <ns0:ref type='bibr' target='#b55'>(M&#228;kip&#228;&#228; et al., 2021)</ns0:ref>. In brief, interest in the role of metacognition has been steadily rising in most forms of education <ns0:ref type='bibr' target='#b57'>(Meijer et al., 2013)</ns0:ref>. Several studies have put forth that metacognition is a milestone variable for estimating learning performance <ns0:ref type='bibr' target='#b7'>(Ba&#351; &amp; Sa&#287;&#305;rl&#305;, 2017;</ns0:ref><ns0:ref type='bibr' target='#b82'>Veenman, 2006)</ns0:ref>.</ns0:p><ns0:p>Therefore, the present study aims at investigating the role of metacognition in promoting deep learning in MOOCs.</ns0:p></ns0:div> <ns0:div><ns0:head>Methodology</ns0:head></ns0:div> <ns0:div><ns0:head>Participants</ns0:head><ns0:p>Participants in the present study were (59) students at the department of home economics at Najran University. They were all at their seventh level and were all enrolled in the 'Research Paper Writing' course that was provided via the Coursera platform. The quantitative MAI, whose reliability coefficient was confirmed by <ns0:ref type='bibr' target='#b72'>Schraw and Dennison (1994)</ns0:ref> and validated by <ns0:ref type='bibr' target='#b75'>Sperling and Howard (2004)</ns0:ref>, was used to assess the level of metacognitive awareness of each student.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>(1)</ns0:ref> presents the differences between participants' metacognition levels: based on their MAI scores (out of a total of 260 points, high metacognition &#8805; 65 per cent and low metacognition &lt; 65 per cent), participants were divided into a high metacognition (HMs) group (n = 27) and a low metacognition (LMs) group (n = 32). The researchers confirmed that, for research involving human subjects, they have met ethical guidelines.</ns0:p><ns0:p>In addition, the homogeneity of learners' previous deep learning as a whole was checked using ANOVA after the pre-application of the assessment card. Results in table (2) show that the F. ratio (1.08) was not significant (sig. = 0.541 &gt; 0.05). In other words, there were no statistically significant differences in learners' deep learning as a whole on the pre-application of the assessment card for both groups. One interesting explanation for this lies in their previous enrollment and success in the 'Computer in Teaching' course, which developed their technology skills.</ns0:p><ns0:p>As shown in table 3, the F. ratio (1.77) was also insignificant (sig. = 0.583 &gt; 0.05), and so we can claim that all participants' technology skills were also homogeneous in the 'Research Paper Writing' course.</ns0:p></ns0:div> 
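<ns0:p>As a brief computational aside to the Participants subsection, the following minimal sketch shows how the MAI-based grouping and the ANOVA homogeneity check could be reproduced. It is written in Python on entirely hypothetical scores; only the 65%-of-260-points cut-off and the group sizes come from the text above, and the sketch is an illustration of the procedure rather than the study's actual analysis.</ns0:p>

```python
# Minimal illustration (hypothetical data): MAI-based grouping and the
# ANOVA homogeneity check described in the Participants subsection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical MAI totals (maximum 260 points); 65% of 260 = 169 points.
mai_total = rng.integers(110, 230, size=59)
threshold = 0.65 * 260
high_meta = mai_total >= threshold          # HMs group
low_meta = ~high_meta                       # LMs group

# Hypothetical pre-test deep-learning scores for all participants.
pre_scores = rng.normal(50, 4, size=59)

# One-way ANOVA comparing the two groups on the pre-test scores.
f_ratio, p_value = stats.f_oneway(pre_scores[high_meta], pre_scores[low_meta])

# A non-significant result (p > 0.05) would indicate homogeneous groups,
# mirroring the interpretation of the F ratios reported in Tables 2 and 3.
print(f"HMs: {high_meta.sum()}, LMs: {low_meta.sum()}, "
      f"F = {f_ratio:.2f}, p = {p_value:.3f}")
```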
<ns0:div><ns0:head>Study Procedure</ns0:head><ns0:p>It is important to bear in mind that the use of technology in educational settings needs to be based on the dominant theories and methods of education <ns0:ref type='bibr' target='#b25'>(Elfeky, Masadeh, &amp; Elbyaly, 2020;</ns0:ref><ns0:ref type='bibr' target='#b65'>Patten &amp; S&#225;nchez, 2006)</ns0:ref>. Consequently, this is applied to MOOCs as one type of educational use of technology <ns0:ref type='bibr' target='#b22'>(Elfeky &amp; Elbyaly, 2016</ns0:ref><ns0:ref type='bibr' target='#b39'>, 2021)</ns0:ref>. MOOCs are based on the theory of connectivism <ns0:ref type='bibr' target='#b73'>(Siemens, 2014)</ns0:ref>, a new theory that draws on social learning experiences to explain learning in the digital age, with a focus on students making connections to skills and knowledge <ns0:ref type='bibr' target='#b64'>(Paton &amp; Fluck, 2018)</ns0:ref>. The current research follows a three-phase method. In the first phase, students' metacognition was evaluated utilizing the Metacognitive Awareness Inventory (MAI) instrument developed by <ns0:ref type='bibr' target='#b72'>Schraw and Dennison (1994)</ns0:ref>. It is a questionnaire of (52) five-point Likert-scale questions, with responses ranging from 'strongly agree' to 'strongly disagree'. In the second phase, the 'Research Paper Writing' course was offered to participants via the Coursera platform (www.coursera.org). In the third phase, students' intended deep learning outcomes in the MOOC were assessed using an assessment card, and video interaction log data were collected via the Coursera platform.</ns0:p><ns0:p>Participants were invited to take part in a MOOC by one of the research team members through the Zoom platform. A 15-minute MOOC orientation lecture was given with a clarification of how the MOOC could be utilized as a resource for research paper writing. Help was offered when needed through the Zoom platform to complete the MOOC sign-up operation, and once research papers were submitted as an assignment, participants' deep learning was assessed using the assigned assessment card. Meanwhile, video interaction log data were collected via the Coursera platform in order to infer the relationship between metacognition and video interaction events, including data about learners' every video player click, mainly slow watching, pausing, and backward seeking. The pausing event was defined as the learner stopping the video lecture by clicking the pause button while watching. The backward seeking event was defined as the learner moving the video playhead to a position earlier than the previous one, e.g., changing the playhead from marker 15:20 to marker 12:41. In addition, the slow watching event was defined as the learner changing the video playing speed to a slower one than it was before the change.</ns0:p><ns0:p>Besides, via an assessment card of deep learning, the intended outcomes of deep learning were assessed at the end of the course. A team of three professors rated all the research papers.</ns0:p><ns0:p>Main differences in evaluation were overcome via discussion, and agreed-upon scoring marks were utilized. Assessment card data were used to identify the utility of metacognition in promoting deep learning in MOOCs.</ns0:p></ns0:div> 
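<ns0:p>Relating back to the video interaction events defined in the study procedure above, the sketch below illustrates one way such events could be derived from a raw video-player click log. The log layout, column names, and values are hypothetical assumptions and do not reflect the actual Coursera export format.</ns0:p>

```python
# Hypothetical click-log rows: learner, action, old/new playhead position (s), old/new speed.
import pandas as pd

log = pd.DataFrame(
    [
        ("s01", "pause",      310, 310, 1.0, 1.0),
        ("s01", "seek",       920, 761, 1.0, 1.0),   # backward seek: new position < old position
        ("s02", "ratechange", 450, 450, 1.0, 0.75),  # slow watching: speed lowered
        ("s02", "seek",       200, 260, 1.0, 1.0),   # forward seek: not counted
    ],
    columns=["learner", "action", "old_pos", "new_pos", "old_speed", "new_speed"],
)

# Flag each row as one of the three events of interest.
events = pd.DataFrame({
    "pausing": log["action"] == "pause",
    "backward_seeking": (log["action"] == "seek") & (log["new_pos"] < log["old_pos"]),
    "slow_watching": (log["action"] == "ratechange") & (log["new_speed"] < log["old_speed"]),
})

# Per-learner event counts, which could then be compared across the HMs and LMs groups.
counts = pd.concat([log["learner"], events], axis=1).groupby("learner").sum()
print(counts)
```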
<ns0:div><ns0:head>Instruments of Data Collection</ns0:head><ns0:p>In order to describe students' behaviors associated with deep learning, an assessment card of three main aspects, namely connecting concepts, creating new concepts, and critical thinking, was developed. Items of these three main aspects constituted the deep learning operationalization, based on the Approaches to Study Inventory (ASI) of <ns0:ref type='bibr' target='#b28'>Entwistle and Ramsden (1983)</ns0:ref> and the Study Process Questionnaire (SPQ) of <ns0:ref type='bibr' target='#b9'>Biggs (1987)</ns0:ref>. To validate the content of the prepared assessment card, it was presented to a set of arbitrators, who were all experts in the fields of home economics, curricula and instruction methods, and educational technology. The total number of the assessment card's items was (19). The critical thinking aspect consisted of (7) items, the connecting concepts aspect also involved (7) items, while only (5) items constituted the creating new concepts aspect. Participants' responses to these items ranged from 1 = Strongly disagree to 5 = Strongly agree on a five-point Likert scale (See appendix A). Furthermore, using Cronbach's alpha to ascertain the card's reliability, the card's internal reliability was 0.89 (critical thinking: 0.877, connecting concepts: 0.849, and creating new concepts: 0.854). To ensure the inter-rater reliability of the evaluation results, an independent professor was requested to analyze and check approximately (10%) of all the research papers. The agreement percentage of all raters was approximately (92%).</ns0:p></ns0:div> <ns0:div><ns0:head>Data Analysis</ns0:head><ns0:p>Quantitative and qualitative data were taken into consideration. More specifically, results of the assessment card were taken as a starting point for the analysis to explore the role of metacognition in promoting deep learning. Video interaction log data were also accounted for in order to infer the relationship between video interaction events and metacognition in MOOCs.</ns0:p><ns0:p>Besides, the independent sample t-test was utilized, and a significance level of p &lt; 0.05 was adopted for the research.</ns0:p></ns0:div> <ns0:div><ns0:head>Ethical Statement</ns0:head><ns0:p>Approval was received from the Deanship of Scientific Research review board at Najran University (10/918/1442/137). The procedures used in this study adhere to the tenets of the Helsinki Declaration.</ns0:p></ns0:div> <ns0:div><ns0:head>Results</ns0:head></ns0:div> <ns0:div><ns0:head>Usefulness of metacognition in promoting learners' ability in critical thinking in MOOCs</ns0:head><ns0:p>Results related to the critical thinking aspect presented in table 2 show that there are significant differences between learners of high metacognition skills (HMs) and their peers of low metacognition skills (LMs) with regard to their critical thinking skills (P=.000 &lt;.05). Mean scores of both groups obviously indicate that the critical thinking of learners in the HM group was better than the critical thinking of their peers in the LM group in MOOCs. To put it differently, MOOC was more effective in promoting the critical thinking of high metacognition students. To ascertain this finding, Eta Square (&#951;2) was utilized to determine how much the ability of learners in the HMs group to think critically was boosted. The estimated value (&#951;2=0.305) confirms that MOOC utilization was more effective in enhancing the ability of high metacognition learners to think critically than in enhancing the ability of low metacognition students. More specifically, results show that MOOC was more effective in enhancing HMs' ability in testing the effect of the independent variable on the dependent one, identifying the study questions to be answered, and formulating probable answers that can be tested for every question. 
In addition, it was more effective in writing the study null and alternative hypotheses, enhancing high metacognition participants' ability to distinguish between hypotheses that can be tested descriptively or quantitatively, concluding results in a shorter time, and including the answer of a previous study in the topic of the chosen study.</ns0:p></ns0:div> <ns0:div><ns0:head>Usefulness of metacognition in promoting learners' ability to connect concepts in MOOCs</ns0:head><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> presents the findings related to the usefulness of metacognition in promoting learners' ability to connect concepts, i.e. to connect new knowledge with what students already knew. Results indicate statistically significant differences in connecting concepts between learners in the HMs and LMs groups (P=.000 &lt;.05). That is, the ability of HM students to connect concepts as reflected on the assessment card was much better than that of the LM students in MOOCs. In other words, MOOC enhancement of the ability of high metacognition learners to connect concepts was much better than the enhancement of the ability of peers with low metacognition. In order to ascertain this finding, Eta Square was utilized to determine how much the learners' ability to connect concepts in the HMs group was boosted. The estimated value (&#951;2=0.297) confirms that MOOC utilization was more effective in enhancing the ability of high metacognition learners to connect concepts than in enhancing the ability of low metacognition learners. More specifically, results show that MOOC was more effective in enhancing HMs' ability in considering the criteria for formulating a good research title, writing a key question that the study will answer, and identifying the study population. Besides, it was more effective in documenting references and resources; identifying the independent, dependent and persistent variables; describing the sampling technique and type; and identifying topic-related laws, principles or theories.</ns0:p></ns0:div> <ns0:div><ns0:head>Usefulness of metacognition in promoting the learners' ability to create new concepts in MOOCs</ns0:head><ns0:p>Results related to the usefulness of metacognition in creating new concepts are presented in table 4. They indicate statistically significant differences in creating new concepts between students in the HMs and LMs groups (P=.000 &lt;.05). That is, HMs' ability to create new concepts as reflected on the assessment card was much better than that of LMs in MOOCs. In other words, MOOC enhanced the ability of high metacognition learners to create new concepts much better than the ability of their peers with low metacognition. Eta Square was utilized to determine how much the learners' ability to create new concepts in the HMs group was boosted in order to ascertain this finding. The estimated value (&#951;2=0.341) confirms that MOOC utilization was more effective in enhancing the ability of high metacognition learners to create new concepts than the ability of peers with low metacognition. In brief, results show that MOOC was more effective in enhancing HMs' ability to formulate the terms associated with the result and its causes or the phenomena and their conditions, describe the proposed experimental design, identify the data collection techniques and tools, identify topic-related data or results, and process data or results.</ns0:p></ns0:div> 
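<ns0:p>The group comparisons reported in these results subsections rest on the independent-samples t-test (Data Analysis), eta squared, and Cronbach's alpha (Instruments of Data Collection). The following minimal Python sketch illustrates these computations on hypothetical scores; the conversion eta squared = t^2 / (t^2 + df) is an assumption about how effect sizes of this kind can be obtained, not a statement of the authors' exact procedure.</ns0:p>

```python
# Hypothetical assessment-card data illustrating the statistics used in this section.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hms = rng.normal(4.2, 0.4, size=27)   # hypothetical scores, high metacognition group (n = 27)
lms = rng.normal(3.6, 0.5, size=32)   # hypothetical scores, low metacognition group (n = 32)

# Independent-samples t-test at the adopted significance level (p < 0.05).
t, p = stats.ttest_ind(hms, lms)
df = len(hms) + len(lms) - 2
eta_squared = t**2 / (t**2 + df)      # effect size analogous to the reported eta squared values

def cronbach_alpha(items):
    """Internal consistency of a respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 59 x 19 response matrix mirroring the assessment card's 19 items.
# (With real, correlated item responses, alpha would be higher, as reported: 0.89.)
alpha = cronbach_alpha(rng.integers(1, 6, size=(59, 19)).astype(float))

print(f"t = {t:.2f}, p = {p:.4f}, eta^2 = {eta_squared:.3f}, alpha = {alpha:.2f}")
```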
<ns0:div><ns0:head>Usefulness of metacognition in promoting deep learning as a whole in MOOCs</ns0:head><ns0:p>Findings shown in table 4 present statistically significant differences in deep learning as a whole between students in the HMs and LMs groups (P=.000 &lt;.05). In other words, HMs' deep learning was much better than that of LMs. This result is not surprising, since it is based on the previous results. To determine the amount of improvement in participants' deep learning as a whole and to ascertain this finding, Eta Square was utilized. The estimated value (&#951;2=0.372) confirms that MOOC utilization was more effective in enhancing the deep learning of high metacognition learners as a whole than the deep learning of peers with low metacognition. Besides, this result confirms that MOOC was more effective in fostering the deep learning as a whole of high metacognition learners.</ns0:p></ns0:div> <ns0:div><ns0:head>Usefulness of learners' metacognition in video interaction events in MOOCs</ns0:head><ns0:p>Results related to video interaction events presented in table <ns0:ref type='table'>5</ns0:ref> show that there were no statistically significant differences between participants in both groups with regard to pausing (P=.883&gt;.05). On the contrary, there were statistically significant differences with regard to backward seeking between both groups (P=.038&lt;.05). Mean scores of both groups obviously indicate that participants' backward seeking in the HM group was greater than the backward seeking of peers in the LM group in MOOCs. Besides, there were statistically significant differences between both groups with regard to slow watching (P=.032&lt;.05). Mean scores of both groups obviously indicate that HM participants' slow watching was greater than the slow watching of peers in the LM group in MOOCs. That is, it can be inferred that participants' high metacognition was associated with more backward seeking and slow watching of video lectures in MOOCs.</ns0:p></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>A good understanding of how learning outcomes are related to metacognition is important for identifying participants' behavior in internet-enabled learning. The overarching aim of the current research was to disclose the role of high and low metacognition in promoting deep learning, represented in creating new concepts, connecting concepts, and critical thinking. It also aimed to measure slow watching, backward seeking, and pausing of videos in order to infer the relationship between video interaction events and metacognition in MOOCs during the COVID-19 pandemic. Major results of this study can be explained in light of deep learning aspects. First, results suggest that MOOC was more effective on high metacognition participants than on low metacognition ones in promoting critical thinking. 
Such a result emphasizes what <ns0:ref type='bibr' target='#b56'>Medina and Castleberry (2017)</ns0:ref> have claimed regarding the ability of metacognition to improve thinking and learning. In addition, metacognition constitutes an essential part of the cognitive development that makes critical thinking possible <ns0:ref type='bibr' target='#b47'>(Kuhn, 1999)</ns0:ref>. It provides an important path to critical thinking <ns0:ref type='bibr' target='#b54'>(Magno, 2010)</ns0:ref>. Besides, this result corroborates the findings of <ns0:ref type='bibr' target='#b3'>Arslan (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b61'>Naimnule and Corebima (2018)</ns0:ref>, which also found that there was a relationship between critical thinking and metacognitive skills, where critical thinking positively predicted metacognition.</ns0:p><ns0:p>Furthermore, the findings of this study indicated that MOOC was more influential on high metacognition learners in enhancing connecting concepts, i.e. connecting new knowledge with what they already know. Such a result confirms that MOOC could provide participants with support to appropriately build on their previous ideas and on how to coherently construct their new ones.</ns0:p><ns0:p>One more interesting point is that the findings of this research confirm the role of using MOOC in promoting students' abilities in creating new concepts, particularly for high metacognition students. Hence, participants in the present study, as <ns0:ref type='bibr' target='#b69'>Redondo and L&#243;pez (2018)</ns0:ref> mention, were able to innovate, exercise their intellectual capacities, and approach novel processes. Another important fact these results foster is that the deep learning of high metacognition learners was found to be much better than that of peers with low metacognition.</ns0:p><ns0:p>Consequently, it can be said that MOOC is usually more effective with high metacognition learners in fostering their deep learning as a whole. This conclusion is, to a large extent, in line with <ns0:ref type='bibr' target='#b79'>Tsai and Lin (2018)</ns0:ref>, who found that enhancing learners' metacognition can lead to continued learning with MOOCs and increased interest in online learning. Furthermore, the findings of this study are in harmony with <ns0:ref type='bibr' target='#b6'>Barak and Watted (2016)</ns0:ref>, who argued that research on MOOC success should include a better understanding of learner characteristics from different disciplines. Therefore, high metacognition is highly needed once lecturers seek to promote students' deep learning in MOOCs. In short, it can be said that the findings of this study can contribute to research and practice around learner characteristics and learning outcomes in the context of MOOCs during the coronavirus pandemic and, in general, in the future. In particular, these findings can strengthen our notion that intended deep learning outcomes might be connected to learners' metacognitive skills in MOOCs.</ns0:p><ns0:p>As for the cognitive processes that underlie video interaction events, findings revealed that there was a relatively significant relationship between the backward seeking event and high metacognition in the MOOC. This result, to a large extent, corroborates the findings of <ns0:ref type='bibr' target='#b51'>Li and Baker (2018)</ns0:ref> regarding the significant relationship between students' course grades and backward seeking. That is, the backward seeking event is positively linked to utilizing cognitive strategies and investing mental effort. High metacognitive skills, for their part, help participants to understand what they know and do not know and consequently help them obtain the missing information, which is called self-directed or self-regulated learning <ns0:ref type='bibr' target='#b56'>(Medina &amp; Castleberry, 2017)</ns0:ref>. 
Much MOOC research shows that better quiz results are predicted by backward seeking events <ns0:ref type='bibr' target='#b11'>(Brinton &amp; Buccapatnam, 2015;</ns0:ref><ns0:ref type='bibr' target='#b50'>Li &amp; Baker, 2016)</ns0:ref>. Similar to the previous theme, the results of this study also indicated a relatively significant relationship between the slow watching event and high metacognition in the MOOC. Such a result highlights what has been claimed by <ns0:ref type='bibr' target='#b74'>Sinha and Jermann (2014)</ns0:ref> about the positive association between in-video persistence, the slow watching event, and persistence in the course. Once again, this result supports the findings of <ns0:ref type='bibr' target='#b51'>Li and Baker (2018)</ns0:ref> with regard to the fact that, for all-rounders, slow watching was indicative of greater course grades. In other words, high metacognitive skills allow learners to be more conscious of the progress made <ns0:ref type='bibr' target='#b78'>(Tops &amp; Callens, 2014)</ns0:ref>. However, the findings showed that there were no statistically significant differences between the two groups with regard to pauses. Findings such as these can be attributed to certain reasons, for example, the fact that the pausing event can be seen as an indication of increased cognitive load <ns0:ref type='bibr' target='#b81'>(Van Merrienboer &amp; Sweller, 2005)</ns0:ref>. The findings may also support what <ns0:ref type='bibr' target='#b51'>Li and Baker (2018)</ns0:ref> claim, that learners may pause for reasons unrelated to learning, like taking a break to do something else.</ns0:p></ns0:div> <ns0:div><ns0:head>Limitations and future directions</ns0:head><ns0:p>This research had several limitations. First, exploration of the role of metacognition in promoting deep learning in MOOCs was the focus of the present study. Second, implementation of the current study was limited to a sample of female students of the home economics major; therefore, researchers are invited to carry out similar research in other environments to explore the role of metacognition in MOOCs where male and female learners study together. Third, female learners in this study were tertiary students, so our results cannot be compared with other age groups, as <ns0:ref type='bibr' target='#b21'>Elfeky (2017)</ns0:ref> states. Fourthly, the data analyzed were collected from one public tertiary institution, so the results cannot be generalized. Therefore, researchers of future studies are called to utilize data from different institutions in other countries <ns0:ref type='bibr'>(Elbyaly &amp; Elfeky, 2021)</ns0:ref>; they are also called to reveal the impact of utilizing Big Data analytics in MOOCs to promote deep learning for learners with low metacognition.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>The present study is a three-phase method study where participants' metacognition was evaluated in the first phase. The second phase involved the delivery of the 'Research Paper Writing' course to participants via the Coursera platform. In the last phase, the intended deep learning outcomes in MOOCs were identified, and video interaction log data were collected via the Coursera platform. The overarching aim of the current research was to disclose the role of metacognition, high and low, in promoting various aspects of deep learning, i.e. creating new concepts, connecting concepts, and critical thinking. It also targeted measuring slow watching, backward seeking, and pausing of videos in order to infer the relationship between video interaction events and metacognition in MOOCs during the COVID-19 pandemic. Results proved that high metacognition could promote learners' critical thinking, connecting concepts, creating new concepts, and deep learning as a whole in MOOCs. 
In other words, metacognitive skills matter and supporting these skills can help to also promote students' deep learning in MOOCs.</ns0:p><ns0:p>They also showed statistically significant differences with regard to backward seeking and slow watching events in favor of HMs, while no statistically significant differences with regard to pausing event were noticed. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>) reveals that the mean score and standard deviation of students in the first group was (M=198.63 &amp; SD=9.74), while it was (M=141.26 &amp; SD=11.42) for the second group. That is, out of a total of 260 quantitative MAI points, metacognition was graded into two groups, high metacognition (HM &#8805; 65 per cent) and low metacognition (LM &lt; 65 per cent) in line with Ayd&#305;n and Co&#351;kun (2011); Redondo and L&#243;pez (2018). In other words, based on their scores on the MAI instrument, participants were divided into two experimental groups, the first group consisted of (27) high metacognition female students, while the second one involved (32) low PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)Manuscript to be reviewed Computer Science metacognition female students. Their average age was 21 years and the standard deviation was 1.76. Before proceeding learning, all participants were informed of the research aim and signed consent forms. They were given the opportunity to not participate and withdraw without penalty.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>needed through Zoom platform to complete the MOOC sign-up operation. Participation was voluntary, free, promoted and encouraged through the research team, i.e. course teaching team. The MOOC lasted for six weeks from March to May 2020. Each week, two blocks on related subjects were introduced. Each block comprised of 45 minutes of devoted to studying background materials and consisted of two phases. The first 30 minutes were allotted to assignments or tasks, and the other 25 minutes were assigned for video watching. More specifically, the 'Research Paper Writing' course was delivered via the Coursera platform that aimed to allow participants to practice what they need by having them to finally write a research paper. Each week covered one or more topics about the research paper writing, such as, selecting an academic topic, formulating an appropriate research question, creating an outline, and looking for source material and researching. In addition, participants were to create a bibliography of annotated, write several paragraphs including the introductory paragraph, work cited page, and finally carefully review and edit the research paper. Once research papers, as an assignment, were submitted, participant students' deep learning was to be assessed using the assigned PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>) items, PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Furthermore</ns0:head><ns0:label /><ns0:figDesc>, the findings of this study indicated that MOOC was more influential on high metacognition learners in enhancing connecting concepts, i.e. connecting new knowledge with what they already know. Such a result confirms that MOOC could provide participants with PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>regarding the significant relationship among students' course grades and backward PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>This research had several limitations. First, exploration of the role of metacognition in promoting deep learning in MOOCs was the focus of the present study. Second, implementation of the current study was limited to a sample of female students of home economics major. Therefore, researchers are invited to carry out similar researches in other environments to PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Differences between participants' metacognition levelsNote. HMs * are high metacognition students; LMs ** are low metacognition students.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Group</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>M</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Metacognition</ns0:cell><ns0:cell>HMs *</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>198.63</ns0:cell><ns0:cell>9.74</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LMs **</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>141.26</ns0:cell><ns0:cell>11.42</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67632:1:0:NEW 21 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Dear Editor and Reviewers, Thank you very much for your kind email of 2-February-2022 about the comments to our manuscript (#CS-2021:11:67632:0:1:REVIEW). We truly appreciate the opportunity to revise the paper for publication consideration in PeerJ Computer Science submission. We sincerely thank the reviewers’, editorial critical comments, and thoughtful suggestions. Based on these comments and suggestions, we have made extensive modifications on the original manuscript. We hope the revised manuscript will meet the journal’s standard and could be acceptable for publication. Below you will find our point-by-point responses to the comments and suggestions. Best regards, Sincerely yours To the comments from Editor: We greatly appreciate the Editor’s positive comments and insightful suggestions of the manuscript. Accordingly, we have carefully made significant revisions and incorporated the suggested information in the revised manuscript. We hope the revised paper can meet the standards and be acceptable for publication. Specific comments: The comment The paper needs further major revisions - the authors should clarify more about the dataset used and its features. Also the training ad testing model should be explained in a systematic way. My response Participants ……………. Table (1) reveals that the mean score and standard deviation of students in the first group was (M=198.63 & SD=9.74), while it was (M=141.26 & SD=11.42) for the second group. To the comments from Reviewer #1: General Comments: The paper aims at addressing role of metacognition in promoting deep learning via MOOCs. The work targets to answer two research questions. The work reported though not novel is acceptable contribution as the attempt is considerable. Response: We sincerely thank the Reviewer’s positive comments on the manuscript. According to the Reviewer’s insightful comments and suggestions, we have thoroughly revised the manuscript and hope the revised version can meet the standards and be acceptable for publication. Specific comments: The comment In my view, A detailed analysis w.r.t Anova would have been better instead of just stating the outcome. Some more w.r.t analysis and key findings would add value. Authors can address these in revision. My response Participants ………… In addition, homogeneity of learners' previous deep learning as a whole were checked using ANOVA after the pre-application of the assessment card. Results in table (2) show that F. ratio (1.08) was not significant at (α=0.541 > 0.05). In other words, there were no statistically significant differences in learners' deep learning as a whole on the pre- application of the assessment card for both groups. One interesting explanation also for this, lies in their previous enrollment and success in 'Computer in Teaching' course that developed their technology skills. As shown in table 3, F. ratios (1.77) were also insignificant (α=0.583 > 0.05) and so we can claim that all participants' technology skills were also homogeneous in the course of 'Research Paper Writing' course. Table 2 Differences between the two groups regarding participants' previous deep learning as a whole on the Pre- application of the assessment card Sum of Squares DF Mean of Squares F. ratio Sig. Between Groups 2.23 1 2.23 1.08 0.541 Within Groups 887.6 57 15.74 Total 889.83 58 The comment A more comparative evaluation is preferred as present version is limited. My response Participants ……………. 
Table (1) reveals that the mean score and standard deviation of students in the first group was (M=198.63 & SD=9.74), while it was (M=141.26 & SD=11.42) for the second group. To the comments from Reviewer #2: We sincerely thank the Reviewer’s positive comments on the manuscript. According to the Reviewer’s insightful comments and suggestions, we have thoroughly revised the manuscript and hope the revised version can meet the standards and be acceptable for publication. Specific comments: The comment its an attempt to explore the Role of Metacognition in Promoting Deep Learning in MOOCs during COVID-19 Pandemic. deep learning is not elevated in detail. data set taken for experimentation is not mentioned in detail. needs more explanation about dataset taken and its features. how they have trained the model and how they tested. utilization of deep learning model to be discussed. Results generated from the proposed idea are limited more investigation to be done on idea(methodology), dataset taken and projecting the role of deep learning in MOOCs My response Study Procedure ………… Besides, via an assessment card of deep learning, the intended outcomes of deep learning were assessed at the end of the course. A team of three professors rated all the research papers. Via discussion, main differences in evaluation were overcome. For scoring, signs agreed upon were utilized. Data of assessment card was used to identify utility of metacognition in promoting deep learning in MOOCs. To the comments from Reviewer #3: We also sincerely thank the Reviewer’s positive comments on the manuscript. According to the Reviewer’s insightful comments and suggestions, we have thoroughly revised the manuscript and hope the revised version can meet the standards and be acceptable for publication. Specific comments: The comment if it is taken for a large sample will the same results will be appeared? My response Limitations and future directions ......... Second, implementation of the current study was limited to a sample of female students of home economics major. Therefore, researchers are invited to carry out similar researches in other environments to explore the role of metacognition in MOOCs where male and female learners study together. Third, female learners in this study were tertiary students, so our results can not be compared with other age groups as Elfeky (2017) states. Fourthly, data analyzed were collected from one public tertiary institutions so generalizability of results can not be done. Therefore, researchers of future studies are called to utilize data from different institutions in other countries (Elbyaly & Elfeky, 2021), they are called to reveal the impact of utilizing the Big Data analytics in MOOCS to promote deep learning for learners with low metacognition. The comment As per the Author's view 203 out of 260 quantitative MAI points, metacognition was graded into two groups, high 204 metacognition (HM ≥ 65 percent) and low metacognition (LM < 65 percent) in line with Aydın205 and Coşkun (2011); Redondo and López (2018). In other words, based on their scores on the 206 MAI instrument, participants were divided into two experimental groups. There is a scope for comparative study. is it possible to incorporate? My response Participants ……………. Table (1) reveals that the mean score and standard deviation of students in the first group was (M=198.63 & SD=9.74), while it was (M=141.26 & SD=11.42) for the second group. 
The comment is it justifiable that the authors noted that all participants have studied and passed the course's prerequisite 14 courses, mainly Research Methodology, Teaching & Statistics Principles, and Principles of 215 statistics. Performance can be upgraded My response Was deleted "
Here is a paper. Please give your review comments after reading it.
402
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Labor market transformations significantly affect the sphere of information technologies (IT) introducing new instruments, architectures, and frameworks. Employers operate with new knowledge domains which demand specific competencies from workers including combinations of both technical ('hard') and non-technical ('soft') skills. The educational system is now required to provide the alumni with up-to-date skill sets covering the latest labor market trends. However, there is a big concern about the self-adaptation of educational programs for meeting the companies' needs. Accordingly, frequent changes in job position requirements call for the tool for in-time categorization of vacancies and skills extraction. This study aims to show the demand for skills in the IT sphere in the Commonwealth of Independent States (CIS) region and discover the mapping between required skill sets and job occupations. The proposed methodology for skills identification uses natural language processing, hierarchical clustering, and association mining techniques. The results reveal explicit information about the combinations of 'soft' and 'hard' skills required for different professional groups. These findings provide valuable insights for supporting educational organizations, human resource (HR) specialists, and state labor authorities in the renewal of existing knowledge about skill sets for IT professionals. In addition, the provided methodology for labor market monitoring has a high potential to ensure effective matching of employees.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The labor market in the digital era provides many challenges for the educational system and potential employees. For example, companies adapt to modern technological changes and competing environments demanding a broader range of skills from their workers. Accordingly, new knowledge domains stimulate the creation of new job tasks which require the combination of competencies from different occupations.</ns0:p><ns0:p>Hence, the educational curricula should be flexible for these transformations to provide alumni with up-to-date skills.</ns0:p><ns0:p>The sphere of information technologies (IT) is especially involved in the processes mentioned above.</ns0:p><ns0:p>IT professionals are required to possess particular combination of skills. Therefore, recruiters demand not only technical ('hard') but also non-technical ('soft') skills which are significant for an IT career <ns0:ref type='bibr' target='#b37'>(Litecky et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b30'>Johnson, 2016;</ns0:ref><ns0:ref type='bibr' target='#b31'>Kappelman et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b40'>Matturro et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Dubey and Tiwari, 2020)</ns0:ref>. The last includes, for example, communication abilities, leadership, teamwork, time management. Thus, IT industry representatives and human resource (HR) specialists introduce sets of competencies providing information in job advertisements in order to hire the most suitable specialists.</ns0:p><ns0:p>A wide range of research examines the IT sphere due to its feasibility to technologies provided and the simplicity of skills generalization. 
Academic literature accumulates knowledge about the demand side of the IT labor market and investigates issues connected with skills identification from online job advertisements <ns0:ref type='bibr' target='#b41'>(Papoutsoglou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Khaouja et al., 2021)</ns0:ref>. Firstly, related research highlights the importance of online job advertisement platforms in the process of labor market monitoring. Secondly, job advertisements data are used for clustering vacancies among job occupations <ns0:ref type='bibr' target='#b18'>(De Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020)</ns0:ref>. Unstructured textual fields from job titles and their extended descriptions help to extract specific topics with keywords and run algorithms for vacancies' clusterization. Accordingly, text mining techniques are widely applied for the identification of core competencies. Thirdly, the data obtained from hiring platforms are matched with official classifiers for occupations and skills and then unified with labor market representation <ns0:ref type='bibr' target='#b1'>(Amato et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b6'>Botov et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Boselli et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Colombo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lovaglio et al., 2018)</ns0:ref>. Existing studies use several open-source databases with processed 'soft' and 'hard' skills for the mentioned tasks. Authors point out that non-technical skills are highly required along with the technical ones but their interrelation is under-investigated <ns0:ref type='bibr' target='#b51'>(Verma et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b45'>Radovilsky et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b44'>Poonnawat et al., 2017)</ns0:ref>.</ns0:p><ns0:p>IT industry expects to hire candidates who operate with broad knowledge domains. Nevertheless, questions about the need for 'hard' and 'soft' skills are raised separately in related works. Existing studies investigate such skills without finding the interrelation between technical and non-technical competencies.</ns0:p><ns0:p>Moreover, the variety of techniques and methodological pipelines allows generalizing existing approaches in terms of knowledge extraction, standardization, and skill set identification. The main problem is still to investigate which combinations of 'soft' and 'hard' skills are demanded by the companies hiring IT professionals. In this regard, the paper systematizes existing skill identification pipelines. In addition, it presents an approach which allows to find and visualize the interrelation between non-technical skills and combinations of technical competencies characterizing particular IT occupations using the data from the Commonwealth of Independent States (CIS) labor market. This research aims to identify 'soft' and 'hard' skills domains for IT occupations. Therefore, this study focuses on the following research questions:</ns0:p><ns0:p>1. Which 'soft' and 'hard' skills are in-demand for IT specialties in the CIS region? 2. Which aggregated job families can be identified based on required competencies? 3. 
Which in-demand combinations of technical and non-technical skills can be highlighted?</ns0:p><ns0:p>The paper fills a gap in the semi-automatic analysis of competencies in the labor market based on online job advertisements and provides an extended skill-based approach to define clusters of vacancies and combinations of skills in them. I unify 'soft' and 'hard' skills in IT specialties, aggregate job families in relation to required competencies, and investigate which combinations of them are needed in IT job occupations.</ns0:p><ns0:p>The significant result of the paper relates to the proposition of non-technical skills that are associated with several combinations of technical skills by job occupations. The obtained information has a high potential to be implemented in the educational process and in the maintenance of already existing learning standards. Also, a particular interest of educational institutions concerns the support of their alumni in the process of hiring. According to the results of this study, the possession of only technical skills among IT professions cannot guarantee successful job matching. Thus, the competitiveness of potential job candidates is instead supported by the presence of well-matched combinations of technical and non-technical skills. This paper is a continuation of our prior research <ns0:ref type='bibr' target='#b50'>(Ternikov and Aleksandrova, 2020)</ns0:ref>. The main improvements over that study are the following: a broader data set is used; semi-automatic procedures for skills standardization are implemented for the detection of synonyms and generalized terms; job occupation identification is based on skills clustering and association mining; and the analysis covers non-technical and technical skills, including their interrelation.</ns0:p><ns0:p>The contribution of this paper is five-fold. First, I present an in-depth review of methodological pipelines for skills extraction and identification from IT job postings. Second, I provide a skill-driven pipeline for job advertisement analysis that allows finding the interrelation between skills for particular job occupations. Third, I use an experimental setting to analyse data from the CIS region, which is poorly investigated. Fourth, I identify the combinations of demanded 'soft' and 'hard' skills for IT professionals using association mining techniques. I provide a common understanding of labor market requirements in a measurable form, including not only the frequency of the presented competencies but also the probability of the co-occurrence of their combinations. Fifth, I present insights in terms of skills identification and the importance of 'soft' skills for future research directions.</ns0:p><ns0:p>The rest of the paper is structured as follows. The Related Work section provides an overview of studies on skill identification. The methodology, research pipeline, data collection, and data processing are described in the following section. The last three sections contain experiment results, discussion, and conclusions.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Existing studies extract knowledge from IT job advertisements using different methodological pipelines.</ns0:p><ns0:p>Researchers identify and aggregate skills into sets related to job occupations and introduce several experiments. In particular, competence frequency and skill sets interrelation are analyzed. 
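<ns0:p>As a brief illustration of the association-mining idea invoked in the contribution above (measuring not only how frequent competencies are but also how likely their combinations are to co-occur), the following minimal Python sketch computes support and confidence for one skill pair over a one-hot skill matrix. The skills, the matrix, and the chosen pair are hypothetical; this is a sketch of the general technique, not the paper's actual pipeline.</ns0:p>

```python
# Hypothetical one-hot skill matrix: rows = job advertisements, columns = extracted skills.
import pandas as pd

ads = pd.DataFrame(
    [[1, 1, 1, 0],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [1, 1, 1, 1]],
    columns=["python", "sql", "teamwork", "communication"],
)

def rule_metrics(df, antecedent, consequent):
    """Support of {antecedent, consequent} and confidence of antecedent -> consequent."""
    both = ((df[antecedent] == 1) & (df[consequent] == 1)).mean()
    confidence = both / (df[antecedent] == 1).mean()
    return both, confidence

# How often a 'hard' and a 'soft' skill are requested together, and how often
# the 'soft' skill accompanies the 'hard' one across the hypothetical postings.
support, confidence = rule_metrics(ads, "python", "teamwork")
print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```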
Moreover, some papers provide matching between job occupations and competencies. Generally, the process of skills identification from job advertisements consists of three phases: text processing, skill base mapping, and gathering skill sets related to job occupations. Accordingly, the choice of skills identification and aggregation approaches depends on the sample size, type of skills, and research focus. In this section, a review of recent studies in terms of methodological pipelines is provided. Related works are summarized in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The table is structured as follows: the first four columns indicate the groups of related works in attribution to the context of skills extraction and standardization (the sample size, approach for skills extraction, and skill types used in the analysis). The following two columns aggregate the related works connected to methods of skill sets and job occupations detection. The last three columns point out the main focus of the papers from the previous two columns.</ns0:p></ns0:div> <ns0:div><ns0:head>Skills extraction and standardization</ns0:head><ns0:p>Related research highlights four main approaches for extraction and standardization of skills from job advertisements depending on the degree of human involvement, namely, content analysis, frequency count, topic modeling, and classification. A detailed description of each approach applied to the IT sector is provided below.</ns0:p></ns0:div> <ns0:div><ns0:head>Content Analysis</ns0:head><ns0:p>Content analysis is based on a semi-manual aggregation of qualitative data. It allows to build concepts and assign related topics to each data entry (job advertisement description). Commonly, it consists of two stages. The first stage of the procedure is frequent terms extraction, the second stage -manual mark-up and pre-defined topics (concepts) allocation. In general, the sample size for this method does not exceed 1,000 data entries due to a limited number of job postings obtained for the research.</ns0:p><ns0:p>In the context of skills standardization, the content analysis provides more precise results comparing with other automatic algorithms <ns0:ref type='bibr' target='#b12'>(Cegielski and Jones-Farmer, 2016)</ns0:ref>. The advantage is the correct classification of skills with different notations and the same meaning, and vice versa. However, this method is time-consuming and not applicable to large datasets. Accordingly, related research, which applies content analysis, investigates only the frequency count of extracted skills <ns0:ref type='bibr' target='#b28'>(Hussain et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Matturro, 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chaibate et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b49'>Steinmann et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Sodhi and Son, 2010)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67981:1:0:NEW 31 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Thus, the issues which cover interrelation between skills and their mapping with job occupations are under-examined.</ns0:p></ns0:div> <ns0:div><ns0:head>Frequency Count</ns0:head><ns0:p>Frequency count and manual processing of textual data are generally used as a part of a methodological pipeline. However, several studies provide this approach separately on the stage of skills standardization.</ns0:p><ns0:p>I can identify two main directions for its usage.</ns0:p><ns0:p>Firstly, skills extraction and ambiguous terms reduction <ns0:ref type='bibr' target='#b0'>(Ahmed et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b23'>Gardiner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Daneva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bensberg et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020)</ns0:ref>. Researchers use standard text pre-processing techniques such as punctuation removal, letters lowercase, tokenization. Then, authors manually extract or remove frequent items obtained after TF-iDF (term frequency -inverse document frequency) procedure.</ns0:p><ns0:p>Secondly, data entries categorization, annotation, and labeling <ns0:ref type='bibr' target='#b22'>(Florea and Stray, 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>De Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b52'>Wowczko, 2015;</ns0:ref><ns0:ref type='bibr' target='#b21'>Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b51'>Verma et al., 2019)</ns0:ref>. Authors tokenize textual data and use contiguous sequences of n words (n-grams). Then, researchers manually validate obtained items by comparing their frequencies and aggregating them in groups of skills with the same meaning (synonyms).</ns0:p><ns0:p>Summing up, manual processing of frequent terms allows to aggregate a larger amount of information comparing with content analysis. The processing is based on the parts of job descriptions which consist of several tokens of text sequences. Moreover, the procedure is independent of pre-defined concepts. The last-mentioned allows using computerized approaches for matching skills and job occupations <ns0:ref type='bibr' target='#b21'>(Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ahmed et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b51'>Verma et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Topic Modeling</ns0:head><ns0:p>Topic modeling is an automatic approach for allocating unstructured information into groups with common semantic patterns. In the context of skills extraction, this method operates with words (or sequences of words) and their occurrence in job advertisements. Existing studies on topic modeling provide no manual correction of obtained terms comparing with already mentioned approaches. 
However, the description of the resulting topics is carried with manual validation by domain experts.</ns0:p><ns0:p>Topic modeling is flexible and covers different research objectives. For example, some authors analyze the popularity of job skills <ns0:ref type='bibr' target='#b36'>(Litecky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b54'>Xu et al., 2018)</ns0:ref>. Other authors discover competencies interrelation <ns0:ref type='bibr' target='#b53'>(Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019)</ns0:ref>. Moreover, the use of unsupervised models for topic modeling helps to match skill sets and job occupations <ns0:ref type='bibr' target='#b43'>(Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b45'>Radovilsky et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Debortoli et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>De Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b44'>Poonnawat et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Classification</ns0:head><ns0:p>Classification tasks for skills extraction are raised in more recent research. Authors use supervised machine learning algorithms in order to match particular job descriptions with already labeled databases of competencies. This approach is mainly used with large datasets. Accordingly, the portion of data is marked-up manually, and then, the results are validated.</ns0:p><ns0:p>The initial task of these works is to match job advertisement descriptions with a standardized database of skills or occupations. Hence, the algorithmic pipeline depends on a domain basis for skills validation.</ns0:p><ns0:p>Some authors use knowledge graphs indicating the co-occurrence of raw terms <ns0:ref type='bibr' target='#b29'>(Jia et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b3'>B&#246;rner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Colace et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Boselli et al., 2018)</ns0:ref>. The others apply descriptions of skills from national occupational classifiers <ns0:ref type='bibr' target='#b11'>(Cao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b16'>Colombo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Karakatsanis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lovaglio et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b6'>Botov et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b1'>Amato et al., 2015)</ns0:ref>. The rest -use manual mark-up for extracted knowledge domains <ns0:ref type='bibr' target='#b9'>(Calanca et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sayfullina et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hoang et al., 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Soft and hard skills detection</ns0:head><ns0:p>The analysis of the required skills obtained from online job advertisements is widely investigated in the research literature. The authors highlight two types of skills: 'soft' and 'hard'. These skills are analyzed together or separately depending on research objectives.</ns0:p><ns0:p>Some authors use only one type of mentioned competencies and do not provide interaction between skills of different types. Specifically, related works which use only 'soft' skills in the IT sector are generally mapping studies. 
The main objectives follow the skill base dictionary creation and skill frequency count <ns0:ref type='bibr' target='#b46'>(Sayfullina et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Calanca et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b17'>Daneva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Florea and Stray, 2018</ns0:ref>; Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b0'>Ahmed et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b39'>Matturro, 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chaibate et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b49'>Steinmann et al., 2013)</ns0:ref>. In contrast, studies operating only with 'hard' skills focus on the characteristics of skill set related job families <ns0:ref type='bibr' target='#b19'>(Debortoli et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>De Mauro et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The other existing studies use both 'soft' and 'hard' skills. Some authors analyze mixed skill sets but without distinction between their types <ns0:ref type='bibr' target='#b15'>(Colace et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Karakatsanis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lovaglio et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b6'>Botov et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Boselli et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Amato et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>B&#246;rner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b44'>Poonnawat et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b54'>Xu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b52'>Wowczko, 2015;</ns0:ref><ns0:ref type='bibr' target='#b21'>Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bensberg et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Verma et al., 2019)</ns0:ref>. Another group of papers provides separation in two ways. 
Firstly, authors use manual mark-up and ready skill bases before the stage of skill sets detection <ns0:ref type='bibr' target='#b11'>(Cao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hoang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cegielski and Jones-Farmer, 2016)</ns0:ref>.</ns0:p><ns0:p>Secondly, researchers obtain 'soft' skills mapping using domain experts correction after classifying or clustering job advertisements <ns0:ref type='bibr' target='#b16'>(Colombo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Litecky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b23'>Gardiner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Hussain et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b48'>Sodhi and Son, 2010;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b45'>Radovilsky et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Identification of job occupations</ns0:head><ns0:p>Analysis of skills identification pipelines includes the stage of aggregation of job occupations (job related skill sets). Recent research could be separated into groups based on the approach of data processing. The first group uses skill base mapping which helps to match information from job postings with descriptions obtained from official classifiers of occupations and skills. The second group includes clustering methods which allow merging similar vacancies containing the same required skill sets.</ns0:p></ns0:div> <ns0:div><ns0:head>Skill Base Mapping</ns0:head><ns0:p>The presence of official classifiers allows to distribute the raw data from job postings among groups created by domain experts. Some authors use job titles and their descriptions from vacancies in order to match them with ESCO (European Skills/Competences, qualifications and Occupations), ISCO (International Standard Classification of Occupations), or O*NET (Occupational Information Network) classifiers <ns0:ref type='bibr' target='#b5'>(Boselli et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b11'>Cao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b16'>Colombo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lovaglio et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Colace et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020)</ns0:ref>. 
Others use the structure of job advertisements provided on hiring platforms and validate it with domain experts mark-up and professional standards <ns0:ref type='bibr' target='#b14'>(Chang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Amato et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sayfullina et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Litecky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hoang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b6'>Botov et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018)</ns0:ref>. This research employs text mining techniques such as TF-iDF and n-grams that are used for specific words and phrases extraction. By implementing algorithms of classification, the authors highlight several groups of IT occupations where the most frequent technical and non-technical skills are outlined. Such division of skills is obtained from a given classifier and manual processing.</ns0:p><ns0:p>The other group of related works uses manually obtained expert-based keywords and content analysis or classifying vacancies and identifying the most frequent skills <ns0:ref type='bibr' target='#b2'>(Bensberg et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Gardiner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ahmed et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b22'>Florea and Stray, 2018;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cegielski and Jones-Farmer, 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Hussain et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b39'>Matturro, 2013;</ns0:ref><ns0:ref type='bibr' target='#b17'>Daneva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chaibate et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b49'>Steinmann et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Sodhi and Son, 2010;</ns0:ref><ns0:ref type='bibr' target='#b51'>Verma et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b47'>Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b52'>Wowczko, 2015)</ns0:ref>. The authors examine digital-oriented vacancies and structure competencies on a qualitative basis. For example, online vacancies are chosen and filtered by job position names. After implementing text mining analysis, the key skills are grouped into several categories with the use of official classifiers.</ns0:p></ns0:div> <ns0:div><ns0:head>Clustering</ns0:head><ns0:p>The structure of online job postings does not resemble official classifiers in every case. Several authors obtain their categorizations of vacancies based on clustering and data-driven approaches. However, the choice of an appropriate clustering algorithm depends on the data structure and information available. For example, researchers use Latent Dirichlet Allocation (De <ns0:ref type='bibr' target='#b18'>Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019)</ns0:ref> and Hierarchical Clustering <ns0:ref type='bibr' target='#b43'>(Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009)</ns0:ref>.</ns0:p><ns0:p>The process of key skills identification in most cases is determined by keywords extraction (particular skill set). 
In addition, the number of clusters is set experimentally and specific groups of competencies are corrected manually. Commonly, the number of groups varies from 8 to 10. Accordingly, two directions of the research can be outlined. The first one includes the creation of groups of vacancies based on Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>words obtained from job descriptions (De <ns0:ref type='bibr' target='#b18'>Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b54'>Xu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>B&#246;rner et al., 2018)</ns0:ref>. For instance, some keywords are combined into skill sets and redistributed between clusters. The second direction uses skill sets representation as vectors of words <ns0:ref type='bibr' target='#b25'>(Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Debortoli et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b45'>Radovilsky et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b44'>Poonnawat et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b9'>Calanca et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al., 2018)</ns0:ref>. However, authors create groups of skills based on a clustering algorithm for particular job occupations.</ns0:p></ns0:div> <ns0:div><ns0:head>Application of skills identification and research gap</ns0:head><ns0:p>Most related works gather knowledge about the relative frequency of in-demand skills. Precisely, combining quantitative and qualitative approaches, authors identify existing competencies on the labor market in the IT sphere. Moreover, existing studies provide listings of technical and non-technical competencies analyzing their combinations. However, the co-occurrence is provided on the basis of mixed-type skill sets or among technical competencies <ns0:ref type='bibr' target='#b42'>(Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>B&#246;rner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019)</ns0:ref>. The other studies implement matching experiments between obtained skill sets and the professional structure of the job market. Authors point out that non-technical skills are highly required along with the technical ones but do not investigate their interrelation <ns0:ref type='bibr' target='#b51'>(Verma et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Fareri et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b45'>Radovilsky et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b44'>Poonnawat et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Despite the abundance of studies in this research field, interactions and combinations between technical and non-technical skills inside different job profiles are under-examined <ns0:ref type='bibr' target='#b0'>(Ahmed et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Cao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b22'>Florea and Stray, 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017)</ns0:ref>. 
Moreover, this paper goes beyond the vast majority of studies in that it derives combinations of technical and non-technical skills from the classification and matching of job postings using association mining techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>DATA AND METHODOLOGY</ns0:head><ns0:p>The research uses a semi-automatic methodology for identifying in-demand skills among IT job occupations. The pipeline of this study includes the stages of natural language processing, clustering analysis, and association mining (Figure <ns0:ref type='figure'>1</ns0:ref>). The analysis was implemented in three main steps. Firstly, the data were collected and processed for standardized skills extraction. Secondly, skill-based clustering analysis for the detection of job occupations was conducted using a hierarchical clustering algorithm. Thirdly, association mining for key combinations of 'soft' and 'hard' skills was carried out inside the obtained groups of job occupations. Each stage of the methodology is described in detail in the subsequent sections.</ns0:p></ns0:div> <ns0:div><ns0:head>Data collection and processing</ns0:head><ns0:p>The data were collected from HeadHunter (www.hh.ru), one of the largest hiring platforms in the CIS (Commonwealth of Independent States) region. The typical structure of an online job advertisement (vacancy) includes the following main fields: vacancy ID, job name, specialization codes (from 1 to 6 professional area codes), publishing date, area (region), description (unstructured text), and skills (a set of 0 to 30 elements, each an unstructured text of up to 100 symbols). This study used the sample of IT vacancies with proposed skills (HeadHunter specialization 'IT, Internet, Telecom') covering the 2015-2019 period, which contains 351,623 observations. In addition, a set of 3,034 unique frequent skills was extracted and prepared for further standardization.</ns0:p><ns0:p>The general logic of skill standardization follows the steps of finding similar terms (synonyms) and aggregating generalized terms (a common term for a particular subset of skills) <ns0:ref type='bibr' target='#b27'>(Hoang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020;</ns0:ref><ns0:ref type='bibr' target='#b38'>Lovaglio et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Gardiner et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Karakatsanis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b51'>Verma et al., 2019)</ns0:ref>. Moreover, the steps of matching abbreviations and processing multi-lingual terms are highlighted separately. To minimize the manual processing, the following procedures were implemented:</ns0:p><ns0:p>1. splitting the terms by punctuation into smaller ones;</ns0:p><ns0:p>2. removal of exuberant punctuation and digits;</ns0:p><ns0:p>3. tracking the terms with white-spaces (word reordering and stemming);</ns0:p><ns0:p>4. finding potential abbreviations by extracting the first letters from terms with white-spaces;</ns0:p><ns0:p>5. translation of terms with Cyrillic letters using Yandex (into English);</ns0:p><ns0:p>6. tokenizing and extracting n-grams (n &#8712; {2, 3, 4});</ns0:p><ns0:p>7. matching terms, tokens, and n-grams based on the TF-iDF (term frequency -inverse document frequency) approach;</ns0:p><ns0:p>8. manual correction and aggregation of terms into synonyms (different notations of the same skill) and generalized terms (a common notation for different sub-skills).</ns0:p><ns0:p>At the endpoint of this stage, 1,730 terms were obtained (including both synonyms and generalized terms), which were matched with the initial sample data. After the matching, 97.7% of vacancies remained with at least one matched term from the obtained dataset.</ns0:p>
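<ns0:p>To illustrate steps 6-7, the following is a minimal sketch assuming scikit-learn; the raw skill strings and the similarity threshold are illustrative assumptions rather than values taken from the HeadHunter data, and any high-similarity pairs would still go through the manual correction of step 8.</ns0:p>

```python
# Minimal sketch of steps 6-7 (word n-grams + TF-iDF matching), assuming scikit-learn.
# The raw skill strings and the similarity threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

raw_skills = [
    "ms sql server",
    "sql server administration",
    "microsoft sql server",
    "business communication",
    "business correspondence",
]

# Word-level n-grams up to length 4 mirror the tokenizing step; TF-iDF down-weights ubiquitous words.
vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(1, 4))
tfidf = vectorizer.fit_transform(raw_skills)

# High cosine similarity between two skill strings marks them as candidates for the
# manual aggregation into synonyms or generalized terms (step 8).
sim = cosine_similarity(tfidf)
threshold = 0.3  # illustrative cut-off
for i in range(len(raw_skills)):
    for j in range(i + 1, len(raw_skills)):
        if sim[i, j] >= threshold:
            print(f"candidate pair: '{raw_skills[i]}' ~ '{raw_skills[j]}' (cosine {sim[i, j]:.2f})")
```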
</ns0:div> <ns0:div><ns0:head>Implementation of Hierarchical Clustering</ns0:head><ns0:p>In order to run a clustering algorithm based on the information about the competencies, the skills were first divided into 'soft' and 'hard' ones. The following analysis was based on a two-stage model which implies clusterization only over 'hard' skills <ns0:ref type='bibr' target='#b37'>(Litecky et al., 2004)</ns0:ref>.</ns0:p><ns0:p>From the initial sample of 1,730 terms, 'soft' skills were semi-manually processed with the use of already introduced databases of non-technical skills <ns0:ref type='bibr' target='#b9'>(Calanca et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sayfullina et al., 2018)</ns0:ref>. Accordingly, I enriched the existing dictionaries of non-technical terms proposed in other works with the HeadHunter-specific notations of different skills. In total, 94 'soft' skills were extracted and aggregated into 41 generalized groups. Then, the remaining 'hard' skills were extracted and processed to keep only the competencies that appear in combination with at least one of the processed 'soft' skills. Thus, 544 technical skills were obtained.</ns0:p><ns0:p>Each skill is represented by a set of vacancy IDs. Jaccard indexes were calculated over all paired combinations of 'hard' skills <ns0:ref type='bibr' target='#b15'>(Colace et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020)</ns0:ref>. The Jaccard index J is a measure of similarity between two sets of objects A and B denoted as</ns0:p><ns0:formula xml:id='formula_0'>J(A, B) = |A &#8745; B|/|A &#8746; B|.</ns0:formula><ns0:p>To calculate these similarity measures within a reasonable computational time, the MinHash procedure with 100 hash-functions was used <ns0:ref type='bibr' target='#b7'>(Broder, 1997)</ns0:ref>. Based on these calculations, a square dissimilarity matrix was created for clustering purposes. Each element of the matrix corresponds to the Jaccard distance</ns0:p><ns0:formula xml:id='formula_1'>d_J(A, B) = 1 &#8722; J(A, B)</ns0:formula><ns0:p>between the sets of vacancy IDs of the two skills.</ns0:p><ns0:p>Then, I used the Hierarchical Clustering procedure based on the Weighted Pair Group Method with Arithmetic Mean (WPGMA), which is computationally convenient for detecting clusters with different numbers of elements <ns0:ref type='bibr' target='#b43'>(Pejic-Bach et al., 2020)</ns0:ref>. Formally, the distance d between a merged cluster i &#8746; j and another cluster k is defined as d_(i&#8746;j),k = (d_i,k + d_j,k)/2. Thus, in order to obtain disjoint clusters over 'hard' skills only, WPGMA was implemented. The empirical choice of the number of clusters is justified in related research <ns0:ref type='bibr' target='#b18'>(De Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Debortoli et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Litecky et al., 2009)</ns0:ref>.</ns0:p>
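<ns0:p>As a concrete illustration of this clustering step, the following is a minimal sketch assuming SciPy, in which a toy mapping from skills to vacancy-ID sets replaces the real data; exact Jaccard distances stand in for the MinHash approximation used at full scale, and the skill names and cluster count are illustrative.</ns0:p>

```python
# Minimal sketch of the Jaccard-distance + WPGMA step, assuming SciPy.
# Toy data: each 'hard' skill maps to the set of vacancy IDs mentioning it
# (exact Jaccard here; the study approximates it with MinHash, 100 hash functions).
from itertools import combinations
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

skill_vacancies = {                       # illustrative skills and vacancy IDs
    "SQL":        {1, 2, 3, 4},
    "MSSQL":      {2, 3, 4},
    "HTML":       {5, 6, 7},
    "CSS":        {5, 6, 7, 8},
    "JavaScript": {6, 7, 8},
}
skills = list(skill_vacancies)

def jaccard_distance(a, b):
    """d_J(A, B) = 1 - |A ∩ B| / |A ∪ B|."""
    return 1.0 - len(a & b) / len(a | b)

# Square dissimilarity matrix over all skill pairs, then condensed form for linkage().
n = len(skills)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    dist[i, j] = dist[j, i] = jaccard_distance(skill_vacancies[skills[i]],
                                               skill_vacancies[skills[j]])

# method='weighted' is WPGMA: d((i ∪ j), k) = (d(i, k) + d(j, k)) / 2.
Z = linkage(squareform(dist), method="weighted")
labels = fcluster(Z, t=2, criterion="maxclust")  # 2 clusters for the toy data; the study uses 10
print(dict(zip(skills, labels)))
```

<ns0:p>In SciPy's linkage function, method='weighted' corresponds to WPGMA, which matches the update rule given above.</ns0:p>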
</ns0:div> <ns0:div><ns0:head>Implementation of Association Mining</ns0:head><ns0:p>Association rule mining was applied over the obtained clusters and terms. Following this approach, the most likely connected combinations of 'hard' skills were matched with one appropriate 'soft' skill. The assessment of an association rule for the sets A (left-hand side) and B (right-hand side) is based on the following indicators: Support = P(A &#8745; B), Confidence = P(A &#8745; B)/P(A), and Lift = P(A &#8745; B)/(P(A) &#8226; P(B)).</ns0:p><ns0:p>In this paper the following formal setting is used: the left-hand side (lhs) relates to 'hard' skills, the right-hand side (rhs) implies one pre-processed 'soft' skill (each rule is interpreted as the chance of a match between the set of 'hard' skills and the particular 'soft' skill), the 'Confidence' threshold equals 0.001, and the 'Support' threshold equals 0.0005. Finally, the completed rules were merged with the obtained clusters, and only rules where all 'hard' skills are simultaneously present in one cluster were analyzed.</ns0:p>
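<ns0:p>A minimal numeric sketch of these rule metrics is given below; the vacancies, the candidate rule, and the resulting values are illustrative assumptions, and at the scale of the full dataset an off-the-shelf frequent-itemset implementation would normally replace this brute-force count.</ns0:p>

```python
# Minimal sketch of the rule metrics Support, Confidence, and Lift over toy vacancies.
# Each "transaction" is the set of standardized skills attached to one vacancy.
vacancies = [
    {"B2B", "sales", "Phone Calls", "Negotiation Skills"},
    {"B2B", "sales", "Negotiation Skills"},
    {"HTML", "CSS", "JavaScript"},
    {"B2B", "sales"},
    {"Project Manager", "Teamwork", "Phone Calls"},
]

def rule_metrics(lhs, rhs, transactions):
    """lhs: a set of 'hard' skills, rhs: one 'soft' skill (as in the formal setting above)."""
    n = len(transactions)
    n_lhs = sum(lhs <= t for t in transactions)        # vacancies containing all lhs skills
    n_rhs = sum(rhs in t for t in transactions)        # vacancies containing the rhs skill
    n_both = sum(lhs <= t and rhs in t for t in transactions)
    support = n_both / n                               # P(A ∩ B)
    confidence = n_both / n_lhs if n_lhs else 0.0      # P(A ∩ B) / P(A)
    lift = confidence / (n_rhs / n) if n_rhs else 0.0  # P(A ∩ B) / (P(A) · P(B))
    return support, confidence, lift

support, confidence, lift = rule_metrics({"B2B", "sales"}, "Negotiation Skills", vacancies)
print(f"support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
# Rules below the thresholds above (Support 0.0005, Confidence 0.001), or with Lift not
# exceeding unity, would be discarded before the cluster-level aggregation.
```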
</ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>This section provides detailed results for each step of the methodological pipeline. Firstly, the most frequent 'hard' and 'soft' skills are identified. Secondly, clusters of professional skill sets are described. Thirdly, in-demand combinations of skills are listed and visualized by job occupations.</ns0:p></ns0:div> <ns0:div><ns0:head>Identification of soft and hard skills</ns0:head><ns0:p>The skills obtained after the data processing and standardization stage were separated into two groups: 'hard' and 'soft' skills. The formulations of technical competencies that were translated into English were saved in lowercase. The relative frequencies over the whole range of IT vacancies are presented in the form of word clouds (Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Clusters of job occupations</ns0:head><ns0:p>The Hierarchical Clustering algorithm based on the WPGMA method was run over 'hard' skills. As a result, 544 technical competencies were unequivocally distributed among 10 clusters. The number of clusters was chosen on the basis of internal validity scores for Hierarchical Clustering (Figure <ns0:ref type='figure'>3</ns0:ref>) and with a view to preserving a relatively similar number of representatives in each group of 'hard' skills. Formally, the 'Connectivity' metric should be minimized, while 'Dunn' and 'Silhouette' should be maximized.</ns0:p><ns0:p>The clusters themselves and their ten most frequent elements are presented in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. The table includes the cluster name, the number of represented skills, and the 10 most frequent technical skills (according to their occurrence in the analyzed vacancies). The names of the clusters were introduced manually based on frequent terms and common ways of representing IT groups in related works. Following the way clusters are aggregated in related research, the second and third clusters ('Big Data &amp; ML (#1)' and 'Big Data &amp; ML (#2)') were merged because they include closely related skills for 'Big Data' and 'Machine Learning' ('ML'), respectively (De <ns0:ref type='bibr' target='#b18'>Mauro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020)</ns0:ref>. The resulting list of 9 clusters relates to professional skill sets containing several specific skills required by employers in the IT sector. The names of the clusters correspond to particular job occupations.</ns0:p></ns0:div> <ns0:div><ns0:head>Identification of in-demand skills for IT</ns0:head><ns0:p>The final stage of the analysis aggregates the results obtained after the clustering and association mining steps. Identifying the most interdependent skill sets demands both an understanding of the common requirements for 'hard' skills and finding an appropriate 'soft' skill that has a high chance to be combined with the technical competencies.</ns0:p><ns0:p>Firstly, 'soft' skills are left out. Then, the most required combinations (pairs) of 'hard' skills are extracted using the Jaccard similarity matrix obtained before the clustering stage. Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> presents the Top-5 most coherent combinations for each job occupation. The table indicates the cluster name in the first column, the pair of skills (the second and the third columns), and the Jaccard similarity for the proposed pair based on the sets of vacancies where such skills are introduced.</ns0:p><ns0:p>Secondly, preserving the relations between 'soft' and 'hard' skills poses the question of which non-technical competencies are most demanded for different job occupations. In order to answer this question, the results of the association mining analysis were used.</ns0:p><ns0:p>At first, only the rules with a 'Lift' exceeding unity were preserved. This kept only relations between two skill sets with a high chance of being required together. Then, all the rules were averaged by 'Confidence' within each cluster to obtain the relative occurrence frequency of a 'soft' skill among the combinations of 'hard' skills connected to it. As a result, only 22 'soft' skills remained for the 9 groups of job occupations. The grid with the average 'Confidences' (in %) of the obtained association rules is presented in Figure <ns0:ref type='figure'>4</ns0:ref>. Each circle indicates the relative frequency of the particular 'soft' skill among the sets of 'hard' skills related to it in a specified job group. The size, the value, and the color saturation of each circle are all attributed to the average 'Confidences'. In addition, Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref> provides an extended description of the most probable and suitable combinations (by maximum 'Confidence') of technical competencies for the already extracted 'soft' skills. The table lists the obtained rules, indicating the 'soft' skill (rhs), the name of the cluster, the 'hard' skills (lhs), and the measures of 'Confidence', 'Lift', and the number of vacancies where the rule is mentioned (N).</ns0:p>
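<ns0:p>The aggregation just described can be sketched as follows, assuming pandas; the rules table, its column names, and the toy values are illustrative assumptions rather than the study's actual output.</ns0:p>

```python
# Minimal sketch of aggregating mined rules into the Figure 4 grid, assuming pandas.
# The rules table below is illustrative; in the study it comes from the association mining step.
import pandas as pd

rules = pd.DataFrame({
    "cluster":    ["Support", "Support", "WebDev", "WebDev", "Testing"],
    "soft_skill": ["Negotiation Skills", "Negotiation Skills", "English", "Creativity", "English"],
    "confidence": [1.00, 0.72, 0.80, 0.42, 0.15],
    "lift":       [12.0, 5.3, 11.2, 36.2, 0.9],
})

# Keep only rules whose Lift exceeds unity, then average Confidence per cluster and 'soft' skill.
grid = (rules[rules["lift"] > 1]
        .groupby(["cluster", "soft_skill"])["confidence"]
        .mean()
        .mul(100)                 # express as percentages, as in the grid
        .unstack(fill_value=0))
print(grid.round(1))
```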
<ns0:p>The grid provides some valuable insights into the non-technical competencies that IT companies require most in the labor market. For example, the most represented non-technical skills are 'Agile', 'Scrum', and 'Organizational Skills'. However, not all of them are required to the same degree by different professional groups. The most frequent technical skills relate to scripting and mark-up languages that are mostly used in web development. The other frequent skills cover database administration and competencies related to project management.</ns0:p><ns0:p>Among the non-technical skills, knowledge of the English language and teamwork skills are highly required. Namely, 'Communication Skills' are important for SEO specialists, whereas 'English' and 'Creativity' matter for Web Developers ('WebDev'). At the same time, the requirements for 'soft' skills in general are widely diverse. On the one hand, 'Support' and 'Analytics' specialists have to possess many non-technical skills. On the other hand, 'Testing', 'Database', and 'Hardware' occupations have no strong requirements for such competencies. Moreover, 'hard' skills remain dominant for the majority of the specialties: only 20-30% of the required combinations of competencies include non-technical skills.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The analysis of job advertisement data includes the stages of skills processing and identification, skill set mapping, and knowledge extraction. In this study, in order to identify the interrelation between 'soft' and 'hard' skills for IT professionals, the competencies were extracted and standardized using natural language processing and skill base mapping. Consequently, skill types were separated into two dictionaries of technical and non-technical competencies. Then, 'hard' skills were gathered into skill sets related to particular job occupations using hierarchical cluster analysis. As a result, nine groups of technical skills were identified. Next, each skill cluster was mapped with the already processed 'soft' skills. Finally, association mining was introduced to create the grid map indicating the linkage between job occupations and non-technical skills.</ns0:p><ns0:p>According to the results, the obtained skill sets (related to job occupations) form a disjoint redistribution of the processed technical skills. On the one hand, this makes it possible to separate job positions with mixed tasks and to concentrate on the pure relation to particular non-technical skills. On the other hand, this approach does not capture particular job positions but focuses on the professional area of competence. The latter allows identifying in-demand skills at a broader level of investigation <ns0:ref type='bibr' target='#b43'>(Pejic-Bach et al., 2020)</ns0:ref>. For instance, in this study, the most demanded 'hard' skills relate to technologies used in web service development and testing (HTML, JavaScript, PHP, CSS); I call the job occupation that includes these competencies 'Testing'. Related work reports approximately the same distribution of frequent 'hard' skills at the cross-country level.
However, the corresponding cluster is named 'Web Technicians', 'Web Programming', or 'Software Development' in those studies <ns0:ref type='bibr' target='#b24'>(Giabelli et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020)</ns0:ref>. The main distinctions stem from the aggregation level of job advertisements and the usage of mixed skill sets. The approach provided here relates to the sub-division of skills needed for software development. Hence, it focuses on skill clusters rather than on specific job position names.</ns0:p><ns0:p>Nevertheless, the most required technical skills and their combinations in the IT labor market vary among regions and job posting platforms. Existing studies mention that competencies are regionally specific <ns0:ref type='bibr' target='#b26'>(Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020)</ns0:ref>. For example, in the CIS region such skills are '1C' (an ERP system), 'compass 3d' (3D modeling software), and 'Yandex Direct' (a platform for contextual advertising). Some of them are highlighted in the studies that analyze the HeadHunter database <ns0:ref type='bibr' target='#b6'>(Botov et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Ternikov and Aleksandrova, 2020)</ns0:ref>. In addition, all frequent skills mentioned in prior studies are present in this research, although the frequency of occurrence differs slightly depending on the level of job posting aggregation.</ns0:p><ns0:p>According to the findings, the most in-demand non-technical skills are the following: 'Teamwork', 'Negotiation Skills', 'Business Communication', and 'English'. Taking into account the most accurate combinations of competencies related to occupational skill sets, I can also outline 'Agile', 'Scrum', 'Organizational Skills', 'Presentation Skills', and 'Analytical Skills'. Generally, these results do not contradict the existing research <ns0:ref type='bibr' target='#b13'>(Chaibate et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Matturro, 2013;</ns0:ref><ns0:ref type='bibr' target='#b49'>Steinmann et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b17'>Daneva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Giabelli et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hiranrat and Harncharnchai, 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b25'>Gurcan and Cagiltay, 2019;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ahmed et al., 2012)</ns0:ref>. Again, this points to the regional specificity of in-demand non-technical skills. Interestingly, in contrast to these findings, 'Communication Skills' top the lists in the majority of studies, while 'Teamwork' is less required.
However, due to the slightly different formulations provided in several job ads databases, I can conclude that some names of skills could be aggregated, e.g., 'Communication Skills' and 'Business Communication'; 'Leadership' and 'Organizational Skills'; 'Interpersonal Skills' and 'Teamwork'.</ns0:p></ns0:div> <ns0:div><ns0:head>Threats to validity</ns0:head><ns0:p>The study faces some limitations that should be acknowledged. Accordingly, I overview the possible validity threats in more detail, namely internal, external, and construct validity.</ns0:p></ns0:div> <ns0:div><ns0:head>Internal validity</ns0:head><ns0:p>In this study, I overcome several limitations stated in related research: a narrow data period, a small sample size, the use of mono-language ads, and selection bias <ns0:ref type='bibr' target='#b43'>(Pejic-Bach et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brooks et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Cao et al., 2021)</ns0:ref>. However, data structure limitations should be considered. The data are comprised of platform-specific job postings. On the one hand, the vacancies are obtained from a homogeneous data source. On the other hand, the job platform allows recruiters not to fill in the specific skills field that is used for the analysis. Accordingly, some job postings contain only basic skills or overly broad skill sets. In addition, the demand for skills varies across the regions represented in the IT vacancies. Moreover, there is no common principle for assigning specific professional areas to a particular vacancy. Thus, HR specialists are not always able to precisely specify the occupation using the HeadHunter professional area classifier.</ns0:p></ns0:div> <ns0:div><ns0:head>External validity</ns0:head><ns0:p>The obtained results are generalizable beyond the experimental setting. However, the demand for IT specialists in the labor market is not restricted to only one hiring platform, so additional data sources should be used in further investigations. Moreover, the internal recruiting processes inside many companies are not publicly disclosed. Accordingly, companies may promote themselves and post false recruitment information. In future research, hidden employment might be taken into account using company survey data enrichment.</ns0:p></ns0:div> <ns0:div><ns0:head>Construct validity</ns0:head><ns0:p>The research design is based on data-driven textual processing of skills and vacancies. The noise in the data, including different notations of the same skills, was partly reduced by the skill standardization procedure. Limitations of the clustering approach include an empirically chosen number of clusters, the need for additional manual processing of extracted words and phrases, and a strong dependence of the expected results on the quality of the available data. Overall, the relatively low relation of 'soft' skills to the more computationally intensive 'hard' skills could be a consequence of the vacancy postings structure and the dominance of technical competencies there.
Related research also proposes the effect of superiority of technical skills over non-technical <ns0:ref type='bibr' target='#b47'>(Skhvediani et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b42'>Papoutsoglou et al., 2017)</ns0:ref>. Thorough data processing is needed to perform reasonable competencies allocation in certain job positions in case of skills detection <ns0:ref type='bibr' target='#b17'>(Daneva et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b43'>Pejic-Bach et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The results indicate that skills from one cluster may relate to different purposes, e.g., knowledge of specific protocols, programming languages with their libraries, special software, integrated environments, frameworks, managerial or analytical tools. In this study, the difference between skills' purposes is not analyzed but it could be added for the further implications of the applied methodology.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This research gives a broader understanding of the demand-side requirements on the CIS labor market in the IT sphere. The most accurate combinations of 'soft' and 'hard' skills are obtained and ready for implementation in educational curriculum and professional standards. Furthermore, the applied methodology allows transferring the data analysis techniques into other regional markets. Thus, the highlighted methodology has a high potential to be used both in a theoretical and practical manner.</ns0:p><ns0:p>Summing up, the main practical implications relate to the modification of HR and educational policies.</ns0:p><ns0:p>I provide a novel approach for skills grid visualization that may help to identify the most in-demand skill sets. Moreover, the provided methodology helps to gather information for in-time labor market monitoring. The last undoubtedly can foster the curricula development and modernization to ensure the proper hiring.</ns0:p><ns0:p>This paper provides an approach for key skills detection in different job occupations that is scalable for similar data structures of the labor market. The novelty of the research design relates to the pipeline, namely, steps of unstructured text standardization, key competencies extraction, application of hierarchical clustering based on dependencies between different skills, and the use of association mining techniques for obtaining knowledge on job requirements.</ns0:p><ns0:p>Based on the presented analysis, further studies could explore the following directions. Firstly, research on the dynamics of skill sets changes. Secondly, an overview of the other sectors of the labor market.</ns0:p><ns0:p>Thirdly, classification of job occupations and skills with the use of already existing state classifiers.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67981:1:0:NEW 31 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
reviewing PDF | (CS-2021:11:67981:1:0:NEW 31 Jan 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,174.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Literature overview of pipelines for IT job advertisements analysis.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>References</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Job occupation clusters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell># Cluster name</ns0:cell><ns0:cell cols='2'>N of skills Top-10 'hard' skills</ns0:cell></ns0:row><ns0:row><ns0:cell>1 Analytics</ns0:cell><ns0:cell>62</ns0:cell><ns0:cell>1C, accounting, 1c programming, trade management, optimiza-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tion of business processes, enterprise management, Business</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Analysis, ERP, Reporting, create configuration 1c</ns0:cell></ns0:row><ns0:row><ns0:cell>2 Big Data &amp; ML (#1)</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>Data Analysis, Machine Learning, Mathematical Statistics, Data</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Mining, statistical analysis, Mathematical Modeling, MATLAB,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>forecasting, Mathematical Programming, Mathematics</ns0:cell></ns0:row><ns0:row><ns0:cell>3 Big Data &amp; ML (#2)</ns0:cell><ns0:cell>73</ns0:cell><ns0:cell>Linux, Java, Python, PostgreSQL, C++, Jira, Spring, Nginx,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Apache, Ruby</ns0:cell></ns0:row><ns0:row><ns0:cell>4 Databases</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>SQL, C#, MSSQL, .NET, ORACLE, ASP.NET, MVC, MS SQL</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Server, SVN, MS Visual Studio</ns0:cell></ns0:row><ns0:row><ns0:cell>5 Engineering</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>Project Documentation, AutoCAD, System Analysis, assign-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>ing tasks to developers, gost, automation of processes, pro-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>cess control system, System Integration, Product Development,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>normative-technical documentation</ns0:cell></ns0:row><ns0:row><ns0:cell>6 Hardware</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>Windows, server administration, configuring dns, network equip-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>ment, setting up the pc, technical support, software setting, IP,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>TCP, pc repair</ns0:cell></ns0:row><ns0:row><ns0:cell>7 SEO</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>SEO, technical website audit, contextual advertising, Internet Marketing, copywriting, web analytics, Yandex Direct, SMM, administration, promotion 8 Support 65 sales, Project Manager, business correspondence, B2B, Phone Calls, personnel management, customer engagement, Team management, Client management, CRM 9 Testing 81 HTML, JavaScript, CSS, PHP, Git, MySQL, OOP, jQuery, 1C-Bitrix, Ajax 10 WebDev 41 Web 
Design, writing skills, UI, Graphic Design, UX, layout, CorelDRAW, editing, translation, rewriting</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Key pairs of 'hard' skills by job occupations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Cluster name</ns0:cell><ns0:cell>Skill #1</ns0:cell><ns0:cell>Skill #2</ns0:cell><ns0:cell>Jaccard index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>financial statements</ns0:cell><ns0:cell>Reporting</ns0:cell><ns0:cell>0.45</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1C</ns0:cell><ns0:cell>accounting</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>Analytics</ns0:cell><ns0:cell>1C</ns0:cell><ns0:cell>1c programming</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>accounting</ns0:cell><ns0:cell>trade management</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BPMN</ns0:cell><ns0:cell>UML</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>imap</ns0:cell><ns0:cell>pop3</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>imap</ns0:cell><ns0:cell>SMTP</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>Big Data &amp; ML</ns0:cell><ns0:cell>pop3</ns0:cell><ns0:cell>SMTP</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CD</ns0:cell><ns0:cell>CI</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NumPy</ns0:cell><ns0:cell>Pandas</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>IBM</ns0:cell><ns0:cell>Lotus Notes</ns0:cell><ns0:cell>0.49</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ASP.NET</ns0:cell><ns0:cell>C#</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>Databases</ns0:cell><ns0:cell>.NET</ns0:cell><ns0:cell>C#</ns0:cell><ns0:cell>0.40</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>.NET</ns0:cell><ns0:cell>ASP.NET</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ASP.NET</ns0:cell><ns0:cell>MVC</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>instrumentation</ns0:cell><ns0:cell>process control system</ns0:cell><ns0:cell>0.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>compass 3d</ns0:cell><ns0:cell>SolidWorks</ns0:cell><ns0:cell>0.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Engineering</ns0:cell><ns0:cell>automation of processes</ns0:cell><ns0:cell>process control system</ns0:cell><ns0:cell>0.14</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Product Development</ns0:cell><ns0:cell>the launch of new products</ns0:cell><ns0:cell>0.14</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>AutoCAD</ns0:cell><ns0:cell>normative-technical documentation</ns0:cell><ns0:cell>0.13</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>IP</ns0:cell><ns0:cell>TCP</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BGP</ns0:cell><ns0:cell>OSPF</ns0:cell><ns0:cell>0.57</ns0:cell></ns0:row><ns0:row><ns0:cell>Hardware</ns0:cell><ns0:cell>setting up the pc</ns0:cell><ns0:cell>software setting</ns0:cell><ns0:cell>0.51</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>configuring dns</ns0:cell><ns0:cell>setting up the pc</ns0:cell><ns0:cell>0.46</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>server administration</ns0:cell><ns0:cell>Windows</ns0:cell><ns0:cell>0.42</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>promotion</ns0:cell><ns0:cell>website 
promotion</ns0:cell><ns0:cell>0.54</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>contextual advertising</ns0:cell><ns0:cell>Yandex Direct</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>SEO</ns0:cell><ns0:cell>branding</ns0:cell><ns0:cell>promotion</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SEO</ns0:cell><ns0:cell>technical website audit</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>contextual advertising</ns0:cell><ns0:cell>web analytics</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>223-fz [procurement law] 44-fz [public procurement law]</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>staff training</ns0:cell><ns0:cell>Training</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>Support</ns0:cell><ns0:cell>customer engagement</ns0:cell><ns0:cell>sales</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>B2B</ns0:cell><ns0:cell>customer engagement</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>B2B</ns0:cell><ns0:cell>sales</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CSS</ns0:cell><ns0:cell>HTML</ns0:cell><ns0:cell>0.74</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>HTML</ns0:cell><ns0:cell>JavaScript</ns0:cell><ns0:cell>0.51</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing</ns0:cell><ns0:cell>Functional testing</ns0:cell><ns0:cell>regression testing</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MySQL</ns0:cell><ns0:cell>PHP</ns0:cell><ns0:cell>0.46</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CSS</ns0:cell><ns0:cell>JavaScript</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>UI</ns0:cell><ns0:cell>UX</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Illustrator</ns0:cell><ns0:cell>Photoshop</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>WebDev</ns0:cell><ns0:cell>Graphic Design</ns0:cell><ns0:cell>Web Design</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CorelDRAW</ns0:cell><ns0:cell>Graphic Design</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>editorial activities</ns0:cell><ns0:cell>proofreading</ns0:cell><ns0:cell>0.24</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>11/16</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67981:1:0:NEW 31 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Rules with maximum Confidence by 'soft' skills.</ns0:figDesc><ns0:table><ns0:row><ns0:cell># rhs</ns0:cell><ns0:cell>Cluster</ns0:cell><ns0:cell>lhs</ns0:cell><ns0:cell>Confidence</ns0:cell><ns0:cell>Lift</ns0:cell><ns0:cell>N</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>name</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>1 Agile</ns0:cell><ns0:cell>Analytics</ns0:cell><ns0:cell>PMBOK</ns0:cell><ns0:cell cols='2'>0.43 23.64</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2 Analytical Skills</ns0:cell><ns0:cell cols='2'>Engineering System Analysis</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>8.37</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3 Business Communica-</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>business correspondence, contracts,</ns0:cell><ns0:cell cols='2'>0.97 17.15</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>tion</ns0:cell><ns0:cell /><ns0:cell>Phone Calls, work with a large</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>amount of information</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4 Care</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>work with a large amount of infor-</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>7.33</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5 Communication Skills</ns0:cell><ns0:cell>SEO</ns0:cell><ns0:cell>advertising campaigns, marketing</ns0:cell><ns0:cell cols='2'>0.75 21.19</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>research, promotion, Strategic Mar-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>keting</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6 Creativity</ns0:cell><ns0:cell>WebDev</ns0:cell><ns0:cell>editing, proofreading, writing skills</ns0:cell><ns0:cell cols='2'>0.42 36.21</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>7 Dedication</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>sales</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>1.63</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>8 English</ns0:cell><ns0:cell>WebDev</ns0:cell><ns0:cell>translation</ns0:cell><ns0:cell cols='2'>0.80 11.25</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9 Interpersonal Skills</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>Phone Calls</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>2.14</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10 Kanban</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>Project Manager</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>5.62</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>11 Leadership Skills</ns0:cell><ns0:cell cols='2'>Engineering assigning tasks to developers</ns0:cell><ns0:cell cols='2'>0.15 21.24</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>12 Liability</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>customer engagement, sales</ns0:cell><ns0:cell cols='2'>0.02 14.98</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>13 Negotiation Skills</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>B2B, business correspondence,</ns0:cell><ns0:cell cols='2'>1.00 12.01</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>contracts, customer 
engagement,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Phone Calls, Project Manager,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Sales Management</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>14 Organizational Skills</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>business correspondence, Event</ns0:cell><ns0:cell cols='2'>0.92 21.71</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Management, personnel manage-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>ment</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>15 Presentation Skills</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>B2B, contracts, powerpoint, sales</ns0:cell><ns0:cell cols='2'>0.96 21.58</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>16 Responsibility</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>work with a large amount of infor-</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>2.64</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>17 Result Orientation</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>Customer Support, sales, Sales</ns0:cell><ns0:cell cols='2'>0.81 43.53</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Planning</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>18 Sales Skills</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>Customer Support, Sales Manage-</ns0:cell><ns0:cell cols='2'>0.92 30.00</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>ment, Sales Planning</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>19 Scrum</ns0:cell><ns0:cell>Analytics</ns0:cell><ns0:cell>PMBOK</ns0:cell><ns0:cell cols='2'>0.30 19.18</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>20 Stress Resistance</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>work with a large amount of infor-</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>4.52</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>21 Teamwork</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>business correspondence, Phone</ns0:cell><ns0:cell cols='2'>0.95 11.36</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Calls, Project Manager, work with</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>a large amount of information</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>22 Time Management</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>business correspondence</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>1.77</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>12/16</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67981:1:0:NEW 31 Jan 2022)</ns0:note></ns0:figure> </ns0:body> "
"Rebuttal Letter Dear Editors I fully appreciate and thank the Editor, both Reviewers, and PeerJ Staff for their time, explicit comments, and generous help on prior stages of the manuscript submission and review. I consider all suggestions and recommendations valuable for my work, and I have tried to cover and correct all issues that have been proposed. The following sections address the review comments (italicized) and my responses (non-italicized) in the textual blocks format. The pages and lines with changes correspond to the file with tracked changes. Editor comments (Stefan Wagner) Both reviewers agreed that the paper is interesting and well-written in general. Both make important comments on how to improve the paper. Therefore, I would like you to go through these and incorporate them as well as possible. For me, the most critical issues are: - Make context clear early on - Discuss threats to validity explicitly (probably in separate section) - Make distinction to previous paper explicitly clear - Make also the data available, or argue why this is not possible. Thank you for your comments. I have tried to consider all suggestions and made the following additions: • the contextual information for each table: Table 1 (P.3 L.115-120); Table 2 (P.9 L.366-369); Table 3 (P.9 L.383-386); Table 4 (P.10 L.400-402); • “Threats to validity” subsection (P.13-14 L.465-512); • distinction to prior research (P.3 L.87-91); • raw data (in Associated Data supplement). Finally, I suggest you rethink the names 'hard' and 'soft'. I think they do not denote the concepts very well. Why don't you go for 'technical' and 'non-technical' as you use that already to describe your terms? Thank you for your suggestion. That is not an easy question, which, I think, has no correct answer. I have faced several challenges while working on this manuscript. Firstly, the paper focuses on practical (applied) aspects valuable for economics and human resource research. Secondly, the paper is not fully about economics and management, because it better fits (I consider) computer science field covering thoroughly the algorithmic and data analysis aspects. Accordingly, by analyzing the related works with a focus on economics and computer science, I have noted that the terminology about “soft” and “hard” skills is mostly implemented. However, the way of their categorization in the computer science field slightly varies from paper to paper. Technically, most of these works operate with groups of so-called “technical” and “non-technical” skills. To be precise, I have used such categories as synonyms, because of the way of data processing. Paying tribute to both broad fields of computer science and economics & management, I wanted to save both possible notations, considering that the major field is computer science (miscellaneous). I suppose that somewhere in the text it might be confusing to the reader, so I have tried to rethink the specification of “hard/soft” notation towards “technical/non-technical” in the main sections as you have suggested: in Introduction (L.64, 80-81, 84-85), Related work (L.270-271), Results (L.388, 404, 406, 409, 416), Discussion (L.428-429, 450). Reviewer 1 (Anonymous) Basic reporting - In general, I found the manuscript to be mostly well written. The text could however benefit from some academic proofreading with regards to more minor stuff related to punctuation and minor readability edits. Thank you for your suggestion. 
I have made additional proofreading, that, I believe, will improve the readability of the paper. Changes were introduced on the following lines: 10-12, 28-29, 35, 38-39, 56, 98-99, 131, 138, 178, 190, 192-193, 214, 216, 230, 288, 290, 310, 316, 348-349, 356-357, 390, 395, 410-412, 415, 426, 445, 451, 453 (punctuation issues); 14, 33-34, 40, 44, 50-52, 83, 103, 111-112, 123, 130-131, 152-153, 155, 164-166, 196-200, 212-213, 219, 224-225, 228-229, 234, 253-255, 257, 282-283, 293, 298, 311, 315, 322, 324-325, 334-336, 362, 371, 374, 382-384, 391-394, 405, 411, 414-415, 420-422, 424-425, 427, 446, 490-491, 494-501, 518, 520, 525-527, 529-530 (readability issues); 14, 17-18, 23, 53, 57, 60, 62-63, 65-66, 77, 92-100, 102, 109, 115, 133, 143, 156-157, 168-169, 171, 175, 182, 189-190, 193-194, 204-205, 207-208, 217, 249, 251, 260, 264, 266-267, 273, 332, 352, 364-365, 373, 378-379, 390-391, 418-419, 423, 432-435, 439-441, 443, 450, 452, 457-458, 460, 514-515, 517, 531-532 (context and style issues). -While Table 1 is briefly introduced at the start of “Related Work”, it could be introduced better to the reader. Currently, you mention many of these terms in the starting paragraph, but how they are shown in Table 1 is a bit obscure at first. In my opinion, by having a paragraph strictly introducing the structure of Table 1, the readability of the paper would be improved. The way the table classifies prior work based on sample size and used approach is commendable, however. -All other Tables could also be introduced to the reader in a more comprehensive manner. Agree. Therefore, I have added supporting context before each table: Table 1 (P.3 L.115-120); Table 2 (P.9 L.366-369); Table 3 (P.9 L.383-386); Table 4 (P.10 L.400-402). - How did you get the cluster names for Tables 2, 3, and 4? Did you name the clusters by yourself? This could be mentioned briefly in the manuscript. Thank you for the comment. I named clusters myself based on the most frequent skills and ways of naming in related works. Hence, I have added this clarification on P.9 L.368-369. Experimental design - Three research goals are mentioned in the introduction section, and the author has results for these goals in the results section. - How does this paper differ from the paper “Demand for skills on the labor market in the IT sector”, by the same author? By glancing over the previous paper, you also cite in your manuscript, it seems the novelty lies in the hard and soft skill classification and the association rule mining. I think the paper would be improved by clearly stating the differences between these two papers, perhaps towards the end of the introduction, near your contributions, with a sentence of two. Thank you for your time in glancing at my prior research. In short, in current research, I have used a larger database (not one city-level), more thorough data processing, clustering, association mining techniques, and separation between “hard” and “soft” skills. I agree with your comment and propose additional notes on P.3 L.87-91. - I find the way you report soft and hard skills identification confusing. How does semi-manual identification work exactly? Do you use the same list as Calanca et al.? If so, will you miss soft skills not contained in the list? Is this a validity threat that should be discussed? Thank you for the important note, that I agree to be precisely clarified in the manuscript. I use databases containing non-technical skills proposed by the other authors to ease the first iteration of manual processing. 
I enrich the existing dictionaries with HeadHunter specific terms and notations after, so the list of skills is different from proposed in the other works (P.8 L.319-320). - While the code used to analyze the data is shared, the data itself is not. Thus, I was not able to rerun the analysis done by the author. Sharing the data with the completed manuscript would greatly improve the reproducibility of the work. Agree. I have enclosed the file with raw data (first introduced on P.8 L.289-292) in the Associated Data supplement. Validity of the findings - Personally, I didn't find anything surprising in your results. You compare your results to previous findings, noting the region specificity in certain skills. However, I don't see if your results miss something from previous results. That is, are some skills which are present and popular in previous findings not present in your results? Investigating this could perhaps expand both the results and discussion sections. I have analyzed the findings and compared them with prior research. Actually, all skills from previous results are represented in this paper’s findings. The crucial point, while comparing with the other papers, relates to different ways of skills aggregation. The last makes it harder to provide a skill-by-skill comparison. However, using aggregated groups of skills and disaggregated information (where available) from the other research, I can note the different ranking of skills across several countries and hiring platforms. Interestingly, in the CIS labor market, technical skills are dominated over non-technical. Therefore, in order to better clarify the transition between these issues, I have added supporting sentences in the “Discussion” section (P.13 L.447-449) and focused on some aspects in the “Threats to validity” subsection (P.13-14 L.468-512). - You mention that the study was done on the data related to the Commonwealth of Independent States. In my opinion, this contextual information is important enough, so it could be added to the title, e.g., “Insights from IT Job Advertisements in the CIS region”. At the very least, I think this should be mentioned in the abstract. Agree. I have made corresponding changes in Title (P.1 L.1-3), Abstract (P.1 L.17-18), and Introduction (P.2 L.65, P.3 L.95-96). - While the code used to analyze the data is shared, the data itself is not. Thus, I was not able to rerun the analysis done by the author. Sharing the data with the completed manuscript would greatly improve the reproducibility of the work. Agree. I have enclosed the file with raw data (first introduced on P.8 L.289-292) in the Associated Data supplement. - Some limitations on the discussion section are already mentioned, but some of them are very brief. This could be expanded to its own section, e.g. internal, external, and construct validity. Agree. I have enlarged the “Discussion” section with specified “Threats to validity” subsection where I have tried to clarify possible validity threats in more detail (P.13-14 L.465-512). Reviewer 2 (Anonymous) Basic reporting Summary: The author of this paper studied the demand for skills in the IT-sphere to discover the mapping between required skill-sets and job occupations. The proposed methodology for skills identification uses natural language processing, hierarchical clustering, and association mining techniques. The dataset used covers the 2015–2019 period, with 351,623 observations. A set of 3,034 unique frequent skills was extracted and prepared for further standardization. 
The results show explicit information about the combinations of 'soft' and 'hard' skills required for different professional groups. The author explains the goal of the study highlighting that the findings provide valuable insights for supporting educational organizations, human resource (HR) specialists, and state labor authorities in the renewal of existing knowledge about skill-sets for IT professionals. The paper is well structured and easy to follow. The related works are well presented, putting the study in context. The raw data have been shared, along with the code used to perform the study. The results include the definitions of all terms. Experimental design The research is within the aims and scope of the journal. The research questions are defined on page 2. To help the reader the author might restructure the goals of the paper in a question format, assigning the number to each question, and providing a specific answer to each question in the results section. Thank you for your time and comments. I agree with the last suggestion, and I have reformulated the research questions in question format (P.2 L.67-74). Therefore, I believe subsection titles (in the “Results” section) and their content is now becoming easier to follow by the reader. The goals of the study are relevant and meaningful, and it is stated how this study fills the gap in the literature. The methodology is well explained and performed to a high technical standard, and all the methods are described with sufficient details which allow replication studies. Validity of the findings All the underlying data have been provided, and the statistics explained. However, I strongly encourage the author to dedicate a specific section of the paper to better explain the limitations of the study (threats to validity). Agree. I have enlarged the “Discussion” section with specified “Threats to validity” subsection where I have tried to clarify possible validity threats in more detail (P.13-14 L.465-512). Conclusions are well stated and linked to the research questions, but the author could help the reader in re-organizing the RQ as suggested in the previous section. Additional comments Overall, the paper is well written and interesting. It fits the journal, but the presentation can be improved. Thank you for your comment. I have made additional proofreading, that, I believe, will improve the readability of the paper. Changes were introduced on the following lines: 10-12, 28-29, 35, 38-39, 56, 98-99, 131, 138, 178, 190, 192-193, 214, 216, 230, 288, 290, 310, 316, 348-349, 356-357, 390, 395, 410-412, 415, 426, 445, 451, 453 (punctuation issues); 14, 33-34, 40, 44, 50-52, 83, 103, 111-112, 123, 130-131, 152-153, 155, 164-166, 196-200, 212-213, 219, 224-225, 228-229, 234, 253-255, 257, 282-283, 293, 298, 311, 315, 322, 324-325, 334-336, 362, 371, 374, 382-384, 391-394, 405, 411, 414-415, 420-422, 424-425, 427, 446, 490-491, 494-501, 518, 520, 525-527, 529-530 (readability issues); 14, 17-18, 23, 53, 57, 60, 62-63, 65-66, 77, 92-100, 102, 109, 115, 133, 143, 156-157, 168-169, 171, 175, 182, 189-190, 193-194, 204-205, 207-208, 217, 249, 251, 260, 264, 266-267, 273, 332, 352, 364-365, 373, 378-379, 390-391, 418-419, 423, 432-435, 439-441, 443, 450, 452, 457-458, 460, 514-515, 517, 531-532 (context and style issues). Thank you for your valuable comments and suggestions. I have tried to follow them all and believe that the manuscript is now suitable for publication in PeerJ Computer Science. Sincerely, Andrei A. 
Ternikov HSE University, Moscow, Russia "
Here is a paper. Please give your review comments after reading it.
403
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Influencing and framing debates on Twitter provides power to shape public opinion. Bots have become essential tools of 'computational propaganda' on social media such as Twitter, often contributing to a large fraction of the tweets regarding political events such as elections. Although analyses have been conducted regarding the first impeachment of former president Donald Trump, they have been focused on either a manual examination of relatively few tweets to emphasize rhetoric, or the use of Natural Language Processing (NLP) of a much larger corpus with respect to common metrics such as sentiment. In this paper, we complement existing analyses by examining the role of bots in the first impeachment with respect to three questions as follows. (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments? Our methods start with collecting over 10M tweets on six key dates, from October 6th 2019 to January 21st 2020. We used machine learning to evaluate the sentiment of the tweets (via BERT) and whether it originates from a bot. We then examined these sentiments with respect to a balanced sample of Democrats and Republicans directly relevant to the impeachment, such as House Speaker Nancy Pelosi, senator Mitch McConnell, and (then former Vice President) Joe Biden. The content of posts from bots was further analyzed with respect to the sources used (with bias ratings from AllSides and Ad Fontes) and themes. Our first finding is that bots have played a significant role in contributing to the overall negative tone of the debate (Q1). Bots were targeting Democrats more than Republicans (Q2), as evidenced both by a difference in ratio (bots had more negative-to-positive tweets on Democrats than Republicans) and in composition (use of derogatory nicknames). Finally, the sources provided by bots were almost twice as likely to be from the right than the left, with a noticeable use of hyper-partisan right and most extreme right sources (Q3). Bots were thus purposely used to promote a misleading version of events. Overall, this suggests an</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div> <ns0:div><ns0:head>42</ns0:head><ns0:p>The efforts of party leaders and presidents to shape public understanding of issues through mainstream 43 and partisan media are well documented <ns0:ref type='bibr' target='#b77'>(Sellers, 2009;</ns0:ref><ns0:ref type='bibr' target='#b89'>Vinson, 2017)</ns0:ref>. Partisan media intentionally adopt 44 ideological frames and cover stories in ways that favor the politicians in their own party and create negative 45 impressions of those in the other party, contributing to polarization in American politics <ns0:ref type='bibr'>(Levendusky,</ns0:ref><ns0:ref type='bibr'>46</ns0:ref> 2013; <ns0:ref type='bibr' target='#b22'>Forgette, 2018)</ns0:ref>. At the interface of political communication and computational sciences, the emerging field of computational politics has produced many analyses of polarization over the recent years <ns0:ref type='bibr' target='#b29'>(Haq et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b63'>Pozen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b96'>Yarchi et al., 2021)</ns0:ref>. 
Comparatively less computational research has been devoted to how politicians influence debate on social media, but interest in this question has grown during Donald Trump's presidency as he extensively communicated directly to the public via Twitter <ns0:ref type='bibr' target='#b57'>(Ouyang and Waterman, 2020)</ns0:ref>.</ns0:p><ns0:p>While mainstream media traditionally favor those with political power, giving them coverage that expands their public reach, social media rewards those who can attract an audience. On Twitter, a large attentive audience can amplify a politician's message and change or mobilize public opinion <ns0:ref type='bibr' target='#b100'>(Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b68'>Russell, 2021)</ns0:ref>. Trump's rhetoric features many characteristics of populism, including emotional appeals based on opinion rather than fact and allegations of corruption, that attracts followers; indeed, his messages are often picked up by right-wing populists leaders around the world <ns0:ref type='bibr' target='#b61'>(P&#233;rez-Curiel et al., 2021)</ns0:ref>. As research on the 2016 presidential campaign illustrates, Trump had an attentive audience, particularly on the far right, that readily amplified his message through retweets <ns0:ref type='bibr' target='#b100'>(Zhang et al., 2018)</ns0:ref>; a similar phenomenon was found in the 2020 election, for which datasets are gradually assembled and examined <ns0:ref type='bibr' target='#b0'>(Abilov et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Grimminger and Klinger, 2021)</ns0:ref>. These events repeatedly show that news media responded to Trump's large volume of tweets by giving him substantially more coverage than other candidates, further expanding his reach to influence and frame public debate <ns0:ref type='bibr' target='#b91'>(Wells et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Computational analyses further suggest that the use of tweets was a strategic activity, as Trump's Twitter activity increased in response to waning news coverage from Center Right, Mainstream, Left Wing, and Far Left media <ns0:ref type='bibr' target='#b91'>(Wells et al., 2020)</ns0:ref>. Influencing and framing the debate on Twitter thus serves to shape media coverage, public opinion, and ultimately impact the political agenda <ns0:ref type='bibr'>(S &#184;ahin et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In this sense, shaping public discourse is connected to political power. Studies of Twitter rhetoric indicate that patterns in framing and use of language occur within digital networks that create group identities <ns0:ref type='bibr' target='#b2'>(Alfonzo, 2021)</ns0:ref>. Critical discourse analysis suggests that examining messaging can identify power asymmetries, revealing the institutions and leaders who control public discourse in particular environments or around specific issues and events <ns0:ref type='bibr' target='#b88'>(Van Dijk, 2009)</ns0:ref>. Such research can reveal abuse of power by dominant groups and resistance by marginalized groups. Given Trump's populist rhetoric and its potential for spreading disinformation <ns0:ref type='bibr' target='#b61'>(P&#233;rez-Curiel et al., 2021)</ns0:ref>, amplification of his messages would have important implications for public discourse. 
Though critical analysis would require a more detailed and nuanced coding of the tweets than we are able to provide with our methodology, looking at the amplification of political discourse by bots allows us to see the potential manipulation of public discourse, especially if the bots favor one faction, which could affect the power dynamics between parties and have significant negative implications for democracy. This kind of power merits additional study. Our study of the first impeachment of Donald Trump provides such an opportunity. This paper is part of the growing empirical literature on the social influence carried by Trump's tweets prior to the permanent suspension of his account on January 8, 2021. Such studies are now covered by dedicated volumes <ns0:ref type='bibr' target='#b37'>(Kamps, 2021;</ns0:ref><ns0:ref type='bibr' target='#b76'>Schier, 2022)</ns0:ref> as well as a wealth of scholarly works, including large-scale social network analysis and mining <ns0:ref type='bibr' target='#b101'>(Zheng et al., 2021)</ns0:ref>, analyses of media responses <ns0:ref type='bibr' target='#b14'>(Christenson et al., 2021)</ns0:ref>, the interplay of tweets and media <ns0:ref type='bibr' target='#b53'>(Morales et al., 2021)</ns0:ref>, or the (absence of) impact of his tweets on stock returns <ns0:ref type='bibr' target='#b47'>(Machus et al., 2022)</ns0:ref>. Related studies also include analyses about comments from political figures about Trump <ns0:ref type='bibr' target='#b51'>(Milford, 2021;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alexandre et al., 2021)</ns0:ref> and vice versa <ns0:ref type='bibr' target='#b10'>(Brown Crosby, 2022)</ns0:ref>, analyses of his letter during the first impeachment <ns0:ref type='bibr' target='#b66'>(Reyes and Ross, 2021)</ns0:ref>, and detailed examinations about how Trump's message was perceived by specific groups such as White extremists <ns0:ref type='bibr' target='#b45'>(Long, 2022)</ns0:ref>. The impeachment and handling of COVID-19 have received particular attention on Twitter <ns0:ref type='bibr' target='#b17'>(Dejard et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cervi et al., 2021)</ns0:ref>, as key moments of his presidency. While the use of bots to create social media content during presidential elections and the COVID-19 pandemic has received attention <ns0:ref type='bibr' target='#b95'>(Xu and Sasahara, 2021)</ns0:ref>, there is still a need to examine how bots were used during the first impeachment.</ns0:p><ns0:p>The specific contribution of our study is to complement existing (predominantly manual) analyses of tweets during the first impeachment, which often focused on tweets from Donald Trump <ns0:ref type='bibr' target='#b26'>(Gould, 2021;</ns0:ref><ns0:ref type='bibr' target='#b19'>Driver, 2021)</ns0:ref> or Senators <ns0:ref type='bibr' target='#b48'>(McKee et al., 2022)</ns0:ref>, by using machine learning to examine the role of bots in a large collection of tweets. This complementary analysis has explicitly been called for in recent publications <ns0:ref type='bibr' target='#b83'>(Tachaiya et al., 2021)</ns0:ref>, which performed large-scale sentiment analyses during the first impeachment (via Reddit and 4chan), but did not investigate the efforts at social engineering deployed via bots. Our specific contribution focuses on addressing three questions: (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments? The remainder of this paper is organized as follows. 
In the next section, we explain how we gathered over 13M tweets based on six key dates of the first impeachment, and analyzed them using BERT (for sentiment) and Botometer (for bot detection) as well as bias ratings (for source evaluation) and Latent Dirichlet Allocation (for theme mining). We then present our results for a balanced sample of key political figures in the impeachment (including Adam Schiff, Joe Biden, Mitch McConnell, Nancy Pelosi, Donald Trump) as well as others relevant to our analysis. For transparency and replicability of research, tweets and detailed results are available on a third-party repository without registration needed, at https://osf.io/3nsf8/. Finally, our discussion contextualizes the answer for each of the three questions within the broader literature on computational politics and summarizes some of our limitations.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>For a detailed description of the Ukraine Whistleblower Scandal, we refer readers to a recent analysis by McKee and colleagues, who also contextualize this situation within the broader matter of hyperpolarization <ns0:ref type='bibr' target='#b48'>(McKee et al., 2022)</ns0:ref>. A short overview is as follows. On September 9, 2019, the House Intelligence Committee, headed by Rep. Adam Schiff, was alerted to the existence of a whistleblower complaint that involved the Trump Administration. Over the next two weeks, information from media reports indicated that President Trump had withheld aid to Ukraine possibly to pressure the new Ukrainian President to launch a corruption investigation into Democratic presidential hopeful Joe Biden's son Hunter, who had been on the board of a Ukrainian company when the elder Biden was Vice President during the Obama Administration <ns0:ref type='bibr' target='#b84'>(Trautman, 2019)</ns0:ref>. By September 25, House Speaker Nancy Pelosi, a Democrat, had launched an impeachment inquiry, and the Trump Administration had released notes from Trump's call with the Ukrainian President that some saw as evidence that Trump had conditioned aid and a visit to the White House on Ukraine investigating Hunter Biden. Our data collection started soon after, covering six key dates leading to the impeachment process.</ns0:p><ns0:p>On October 6, 2019, a second whistleblower was identified and we collected 1,968,943 tweets. On October 17, 2019, then Senate Majority Leader Mitch McConnell briefed the Senate on the impeachment process (1,960,808 tweets). On November 14, 2019, the second day of impeachment hearings began in the House Intelligence Committee (1,977,712 tweets). On December 5, 2019, one day after the House Judiciary Committee opened impeachment hearings, four constitutional lawyers testified in front of Congress (1,960,813 tweets). On December 18, 2019, the House debated and passed two articles of impeachment. We collected 2,041,924 tweets to gather responses on the following day. Finally, on January 21, 2020, the Senate impeachment trial began and we ended the data collection with 3,360,434 tweets. In total, 13,272,577 tweets were collected (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p><ns0:p>Throughout the process, both parties and the President tried to control the public narrative over impeachment. Early in the process, Democrats began to talk about abuse of power or corruption by Trump and eventually shifted to the more specific language of bribery and extortion. 
Meanwhile, Republican elected officials seemed to be trying to frame this as a coup by the Democrats. Republicans directed their ire at House Intelligence Committee Chairman Adam Schiff, a Democrat, who led the impeachment inquiry.</ns0:p><ns0:p>The President and some of his Republican allies, including Senator Lindsey Graham and aide Stephen Miller, tried to divert attention from the President by alleging misconduct by Joe and Hunter Biden. Our keywords for data collection were thus chosen to reflect the messaging of the two parties in what was shaping up as a largely partisan process: Trump, Impeachment, Coup, Abuse of Power, Schiff (as in Adam), and Biden. We did not explicitly include words like disinformation, though the messaging may have included unfounded allegations, because such words were not part of either party's framing.</ns0:p><ns0:p>Tweets were gathered using our Twitter mining system <ns0:ref type='bibr' target='#b49'>(Mendhe et al., 2020)</ns0:ref>, which allows for the collection of social media data and the application of many filters for cleaning and further use for machine learning techniques. </ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>This study and the methods employed only target texts in English, hence we removed tweets in other languages as well as tweets only containing hyperlinks. Language detection of Twitter data is more difficult than that of standard texts due to the unusual language used within tweets. We used the langdetect library (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>), which utilizes a naive Bayes classifier and achieves strong performances on tweets <ns0:ref type='bibr' target='#b46'>(Lui and Baldwin, 2014)</ns0:ref>. (Table 2 illustrates its output: the Turkish tweet 'tüm dış politikayı tek bir adama yani trump'a baglamak…' is labelled 'tr', while 'I'm sure Trump will tweet about rep Elijah Cummings passing' is labelled 'en'.) Tweets in English may have to undergo extensive cleaning through a variety of filters, depending on the model used for analysis. As detailed in the next section, we use the BERT model, or Bidirectional Encoder Representations from Transformers. BERT is pre-trained on unprocessed English text <ns0:ref type='bibr' target='#b18'>(Devlin et al., 2018)</ns0:ref>, hence the only processing performed was to remove tweets that consisted of only hyperlinks. A more detailed discussion of our data collection platform and its combination with BERT can be found in <ns0:ref type='bibr' target='#b64'>(Qudar and Mago, 2020)</ns0:ref>, while an abundance of studies show how BERT is used for classification of tweets <ns0:ref type='bibr' target='#b79'>(Singh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b97'>Yenduri et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b70'>Sadia and Basak, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis</ns0:head><ns0:p>In textual data analysis, we can apply information extraction and text mining to parse information from a large data set. In order to understand how each side put their messages out, data was broken down by mention of specific politicians: those who were the subject of impeachment (Trump) and counter-messaging by Republicans (Biden), those who were important to the impeachment process and messaging for their party (Pelosi, McConnell, and Schiff), and two Republicans (Miller and Graham) 'who represent accounts closely associated with President Trump' per previous Twitter analyses <ns0:ref type='bibr' target='#b8'>(Boucher and Thies, 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Hawkins and Kaltwasser, 2018)</ns0:ref>. This sample of actors is evenly divided among Democrats and Republicans. 
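To make these two steps concrete, the following minimal Python sketch filters for English tweets with langdetect and tags each remaining tweet with the actors it mentions. It is an illustration only: the nickname entries are placeholders (the full mapping used in the study is the one referenced as Table 3), and the helper functions are ours rather than part of the platform of Mendhe et al. (2020).

# Minimal sketch of English filtering and actor assignment (illustrative only).
from langdetect import DetectorFactory, detect
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect deterministic across runs

ACTOR_TERMS = {
    "Adam Schiff": ["schiff", "shifty schiff"],      # nickname entry is illustrative
    "Joe Biden": ["biden", "sleepy joe"],            # nickname entry is illustrative
    "Nancy Pelosi": ["pelosi"],
    "Donald Trump": ["trump"],
    "Mitch McConnell": ["mcconnell"],
}

def keep_tweet(text):
    """Keep English tweets that contain more than a hyperlink."""
    stripped = " ".join(t for t in text.split() if not t.startswith("http"))
    if not stripped:
        return False
    try:
        return detect(stripped) == "en"
    except LangDetectException:
        return False

def actors_mentioned(text):
    """Return the actors referenced in a tweet, by official name or nickname."""
    lowered = text.lower()
    return [actor for actor, terms in ACTOR_TERMS.items()
            if any(term in lowered for term in terms)]

tweets = ["I'm sure Trump will tweet about rep Elijah Cummings passing",
          "https://example.com"]
kept = [(t, actors_mentioned(t)) for t in tweets if keep_tweet(t)]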
After assigning tweets to each of the actors, we then used sentiment analysis and bot detection. The analysis was further nuanced by accounting for the use of a derogatory nickname (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>) in lieu of the official actor's name. In addition, we performed complementary examinations of the content of tweets by bots, by examining their themes (in contrast with human tweets) and the bias exhibited in the websites used as sources.</ns0:p><ns0:p>In summary, the data associated with each tweet (content, time, author) is automatically expanded with the list of actors involved, the type of author (bot, human, unknown), the sentiment (negative, neutral, positive), and the list of websites used.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentiment Analysis</ns0:head><ns0:p>Sentiment analysis is a widely used technique for analyzing textual data and is frequently employed in natural language processing, text analysis, and computational linguistics. Our model classifies sentiment into three categories: negative, neutral, and positive. As mentioned above, we use the natural language processing model BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers <ns0:ref type='bibr' target='#b18'>(Devlin et al., 2018)</ns0:ref>. As its name indicates, BERT is a pre-trained deep bidirectional transformer, whose architecture consists of multiple encoders, each composed of two types of layers (multi-head self-attention layers, feed-forward layers). To appreciate the number of parameters, consider that the text first goes through an embedding process (two to three dozen million parameters depending on the model), followed by transformers (each of which adds 7 or 12.5 million parameters depending on the model), ending with a pooling layer (0.5 or 1 million more parameters depending on the model). All of these parameters are trainable. Although the model has already been pre-trained, it is common practice to give it additional training based on data specific to the context of interest.</ns0:p><ns0:p>In order to train the model, a set of 3,000 unprocessed tweets was randomly selected, distributed evenly across each of the six collection dates. The tweets were then manually labelled for overall sentiment, and fed into the model for training. This approach is similar to other works, such as <ns0:ref type='bibr' target='#b27'>Grimminger and Klinger (2021)</ns0:ref>, in which 3,000 tweets were manually annotated for a BERT classification. While there is no universal number for how many tweets should be labeled when using BERT, we observe a range across studies from under 2,000 labeled tweets (Şaşmaz and Tek, 2021; <ns0:ref type='bibr' target='#b3'>Alomari et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b60'>Peisenieks and Skadiņš, 2014)</ns0:ref> up to several thousands <ns0:ref type='bibr' target='#b24'>(Golubev and Loukachevitch, 2020;</ns0:ref><ns0:ref type='bibr' target='#b69'>Rustam et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b54'>Nabil et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b99'>Zhang et al., 2020)</ns0:ref>. 
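As an illustration of this fine-tuning step, the sketch below trains a three-class sentiment classifier on a handful of labelled tweets with the Hugging Face transformers library. The paper does not state which BERT implementation or hyper-parameters were used, so the model name, epochs, and batch size shown here are assumptions.

# Minimal sketch of fine-tuning a three-class BERT sentiment classifier
# (0 = negative, 1 = neutral, 2 = positive) on manually labelled tweets.
import torch
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=3)

texts = ["I'm sure Trump will tweet about rep Elijah Cummings passing"]  # labelled sample
labels = [1]                                                             # neutral
encodings = tokenizer(texts, truncation=True, padding=True, max_length=64)

class TweetDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=TweetDataset(encodings, labels),
)
trainer.train()

# Inference: the argmax over the three logits gives the predicted sentiment class.
logits = model(**tokenizer("Impeach him now", return_tensors="pt")).logits
predicted_class = logits.argmax(dim=-1).item()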
In line with previous studies on sentiment analysis in tweets <ns0:ref type='bibr' target='#b69'>(Rustam et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b39'>Khan et al., 2021)</ns0:ref>, we compared BERT and several other supervised machine learning models from the textblob library (i.e., decision tree, Naive Bayes) as well as scikit-learn (logistic regression, support vector machine). Our comparison in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> confirms that BERT leads to satisfactory and more stable performances than alternatives, thus validating the choice of BERT in our corpus.</ns0:p></ns0:div> <ns0:div><ns0:head>Bot Detection</ns0:head><ns0:p>We use the botometer library (formerly known as BotOrNot) to detect the presence of bots on Twitter.</ns0:p><ns0:p>botometer is a paid service that utilizes the Twitter API to compile over 1,000 features of a given Twitter account, including how frequently the account tweets, how similar each tweet is to previous tweets from the account, and the account's ratio of followers to followees <ns0:ref type='bibr' target='#b16'>(Davis et al., 2016)</ns0:ref>. Features are examined both within their respective categories and collectively. The resulting series of probabilities is returned (Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>). To provide a conservative estimate, we scanned almost half of the different accounts associated with tweets (1,101,023 out of 2,438,343 unique accounts). Since some accounts were deleted or suspended (n = 113,037), our analysis of bots is based on a sample of 987,987 active accounts. We note that the percentage of accounts unavailable is similar to <ns0:ref type='bibr' target='#b42'>Le et al. (2019)</ns0:ref>, in which 9.5% of Twitter accounts involved in the 2016 election were suspended.</ns0:p><ns0:p>In order to classify Twitter accounts as human or bot, a classifier needs to be properly calibrated on annotated botometer data <ns0:ref type='bibr' target='#b81'>(Spagnuolo, 2019)</ns0:ref>. We considered three supervised machine learning techniques, which are frequently used alongside BERT to process tweets: decision trees, extreme gradient boosted trees (XGBoost), and random forests <ns0:ref type='bibr' target='#b41'>(Kumar et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b69'>Rustam et al., 2021)</ns0:ref>. All three methods were implemented in scikit-learn, and our scripts are provided publicly on our repository. Training data, consisting of manually labelled Twitter accounts, was retrieved from https://botometer.osome.iu.edu/bot-repository/datasets.html and compiled, thus providing 3,000 accounts. For each of the three supervised machine learning techniques, we optimized a classifier using 10-fold cross validation and a grid search over commonly used hyper-parameters. The hyper-parameters considered and their values are listed in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, while each model was exported from scikit-learn and saved on our open repository for inspection and reuse by the research community. As performances are comparable across the three techniques, we used the decision tree (which can more readily be interpreted by experts) to classify accounts from our collected data.</ns0:p></ns0:div> 
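A minimal sketch of this calibration is shown below, assuming scikit-learn. The random feature matrix stands in for the Botometer scores of the annotated accounts, and the hyper-parameter grid is illustrative rather than the exact grid of Table 6.

# Minimal sketch of calibrating a bot/human classifier on annotated Botometer scores.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((3000, 6))           # placeholder for Botometer category scores
y = rng.integers(0, 2, size=3000)   # placeholder annotations: 0 = human, 1 = bot

param_grid = {                       # illustrative grid, not the grid of Table 6
    "max_depth": [3, 5, 10, None],
    "min_samples_leaf": [1, 5, 10],
    "criterion": ["gini", "entropy"],
}

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    cv=10,                # 10-fold cross validation, as described above
    scoring="accuracy",
)
search.fit(X, y)

best_tree = search.best_estimator_
# In the study, the selected tree was then applied to the ~1M scanned accounts.
is_bot = best_tree.predict(X[:5])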
<ns0:div><ns0:head>Content of tweets by Bots</ns0:head><ns0:p>Once we detect the posts emanating from bots, we can further examine their content. In particular, we use a frequency-count approach to measure the number of times that each website is cited in the bots' tweets. Specifically, we retrieve all URLs from these tweets and map them to a common form such that 'www.foxnews.com', 'media2.foxnews.com' or 'radio.foxnews.com' are all counted as foxnews. Then, we examine the political leanings of the websites by cross-referencing them with sources for bias ratings.</ns0:p><ns0:p>We accomplish this in the same manner as the recent work of Huszár and colleagues: we used two sources for bias ratings, AllSides (https://www.allsides.com/media-bias/ratings) and Ad Fontes (https://adfontesmedia.com/interactive-media-bias-chart). In line with their work, we do not claim that either source provides objectively better ratings <ns0:ref type='bibr' target='#b32'>(Huszár et al., 2022)</ns0:ref>, hence we report both. If a website is rated by neither source, then we access it to read its content and evaluate it manually; such evaluations are disclosed explicitly.</ns0:p><ns0:p>In addition to sources, we examined themes. In line with the typical computational approach used in similar studies on Twitter <ns0:ref type='bibr' target='#b35'>Jelodar et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Ostrowski (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b55'>Negara et al. (2019)</ns0:ref>, we use Latent Dirichlet Allocation (LDA) to extract topics. To allow for fine-grained comparison, we perform the extraction on every day of the data collection and separate themes of human posts from bots.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The size of the dataset after preprocessing is provided in the first subsection. For transparency and replicability of research, tweets are available at https://osf.io/3nsf8/; they are organized by key actors and labeled with the sentiment expressed or whether they originate from a bot.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>In Twitter research, pre-processing often leads to removing most of the data. For example, our previous research on Twitter regarding the Supreme Court <ns0:ref type='bibr' target='#b72'>(Sandhu et al., 2019a)</ns0:ref> discarded 87-89% of the data, while our examination of Twitter and obesity discarded 73% of the data <ns0:ref type='bibr' target='#b73'>(Sandhu et al., 2019b)</ns0:ref>. The reason is that pre-processing has historically involved a series of filters (e.g., removing words that are not deemed informative in English, removing hashtags and emojis), which were necessary as the analysis model could not satisfactorily cope with raw data. In contrast, BERT can directly take tweets as input. Our pre-processing thus only eliminated non-English tweets and tweets with no substantive information that consisted of only hyperlinks. As a result, most of the data is preserved for analysis: we kept 11,007,403 of the original 13,568,750 tweets.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentiment analysis</ns0:head><ns0:p>Our analyses show significant negative sentiment for all key actors (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). As expected, when referred to by their nicknames, tweets about the actors were almost unanimously negative. Of the four key actors, Adam Schiff has the most polarizing results, with the highest proportion of positive tweets (9.20%) and the second highest proportion of negative tweets (74.10%). 
This matches observations during the trial, suggesting that Republicans strongly disliked Schiff for his leading role in the Impeachment Trial, while Democrats supported his efforts. Although the proportions of sentiments expressed vary over time (Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>), we note that negative sentiments dominate on every one of the six dates for data sampling.</ns0:p></ns0:div> <ns0:div><ns0:head>Bot Detection</ns0:head><ns0:p>Results from bot detection show that at least 22-24% of tweets were sent by bot-controlled accounts, with a few examples provided in Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>. Deleted accounts as well as accounts for which no information was obtained are grouped together in the 'Unknown' category. As Twitter will remove bots that manipulate conversations, it is possible that a significant share of the 'Unknown' category is also composed of bots.</ns0:p><ns0:p>Although the overall tone of the tweets is negative (Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>), the ratio of negatives-to-positives shows a clear differentiation in the use of bots compared to humans. For example, consider Adam Schiff (Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, top-left). Based on tweets by humans, he has a ratio of 583,616/70,666 ≈ 8.3 negatives-to-positives. In comparison, this ratio is 354,241/36,542 ≈ 9.7 for tweets authored by bots. Overall, we observe that the only cases in which bots had a greater negative-to-positive ratio of posts than humans were for Democrats: Adam Schiff (9.7 negative tweets by bot for each positive vs. 8.3 for humans), Chuck Schumer (40.2 vs 33.4), Joe Biden (39.1 vs 33.8), and Nancy Pelosi (10.9 vs 10.2). In contrast, bots were always less negative than humans for Republicans: Donald Trump (6.1 for bots vs 7.9 for humans), Lindsey Graham (8.1 vs 9.0), Mitch McConnell (19.7 vs 23.6), and Stephen Miller (59.6 vs 103.8).</ns0:p><ns0:p>We did not find evidence of a clear temporal trend (e.g., monotonic increase or decrease) of bots over time (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>), which may suggest that they were employed based on specific events.</ns0:p></ns0:div> <ns0:div><ns0:head>Top Sources Used by Bots</ns0:head><ns0:p>A total of 1,171 distinct websites were used as sources across 54,899 tweets classified as written by bot accounts. The complete list is provided as part of our supplementary online materials (https://osf.io/3nsf8/), with bias ratings from AllSides and Ad Fontes.</ns0:p><ns0:p>Given the large number of distinct websites, we focused on those used at least 50 times, excluding the categories aforementioned (social media, web hosting, search engines). When using AllSides, the sample appears relatively balanced (Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>), because a large number of websites are not rated. Ad Fontes provides more evaluations, which starts to show that websites lean to the right. 
We complemented the ratings of Ad Fontes with manual evaluations (Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>), showing that (i) there are almost twice as many right-leaning sources as left-leaning sources and (ii) the right-leaning sources are much more extreme.</ns0:p><ns0:p>For example, 41 websites are more right-leaning than FoxNews while only 16 were more left-leaning than CNN. The imbalance is even more marked when we consider the volume of tweets. Indeed, out of the 10 most used websites, eight lean to the right and two to the left (italicized): FoxNews (3.73% of tweets with a source) […] (1.41%). Tweets associated with some of these websites are provided in Table <ns0:ref type='table' target='#tab_10'>10</ns0:ref> as examples. In sum, there is ample evidence that the bots tend to promote a Republican viewpoint, often based on misleading sources.</ns0:p><ns0:p>Our findings are similar to <ns0:ref type='bibr' target='#b86'>Tripodi and Ma (2022)</ns0:ref>, who noted that almost two thirds of the sources in official communication from the White House under the Trump Administration relied on 'RWME, a network fueled by conspiracy theories and fringe personalities who reject normative journalistic practices.' Although other news outlets were cited, such as the Washington Post and CNN, this was often in the context of characterizing them as 'fake news'. Other recent analyses have also noted that the use of popular far-right websites was more prevalent in counties that voted for Trump in 2020 <ns0:ref type='bibr' target='#b13'>(Chen et al., 2022)</ns0:ref>. The possibility for websites that fuel partisanship and anger to be highly profitable may have contributed to their proliferation <ns0:ref type='bibr' target='#b90'>(Vorhaben, 2022)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Themes for Humans Vs. Bots</ns0:head><ns0:p>Interactive visualizations for themes are provided as part of our supplementary online materials (https://osf.io/3nsf8/), with one example shown in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>. Themes should be interpreted with caution. For example, consider posts on December 6th 2019. For bots, a major theme links House Speaker Nancy Pelosi with 'hate', whereas for humans she is first associated with impeachment, house, president, and then only with hate. However, bots are not actually conveying images of hatred towards Pelosi. Rather, the connection is related to a news story that occurred the day before, in which a reporter asked Pelosi if she hated Trump, and she responded that she did not hate anyone; the reaction went viral. Although bots have picked up on it more than the humans, it is only tangentially related to impeachment and does not indicate that bots hate Pelosi.</ns0:p><ns0:p>A more robust and relevant pattern, across the entire time period, is that Biden is a much more salient term for the humans than the bots. Based on associated terms, it appears to be connected to Trump and other Republicans (particularly Lindsey Graham and outlets such as Fox News) trying to distract from impeachment by making allegations that Biden was corrupt in his dealings with Ukraine. It thus appears that bots and humans occasionally focus on different stories.</ns0:p></ns0:div> 
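For reference, the per-day topic extraction behind these themes can be sketched as follows with scikit-learn's LDA implementation; the paper does not specify which LDA library or settings were used, so this is an illustrative reconstruction on toy data.

# Minimal sketch of per-day LDA topic extraction on bot-authored tweets
# (human-authored tweets are processed separately in the same way).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

bot_tweets = [
    "Pelosi says she does not hate the president",
    "House debates articles of impeachment against Trump",
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(bot_tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top words per topic, used to label themes such as the Pelosi/'hate' example above.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")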
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The Trump campaign also sought to shape public perceptions via social media, by spending 'as much as $21,000' on a Facebook ad entitled 'Biden Corruption', which included debunked claims that then Vice President Joe Biden had tied Ukrainian aid to dropping an investigation of his son <ns0:ref type='bibr' target='#b28'>(Grynbaum and Hsu, 2019)</ns0:ref>. Throughout the process, the President denigrated Democrats as a group and individually in his public comments and on Twitter. Strategies have included the use of nicknames, which contributes to painting a gallery of 'known villains' <ns0:ref type='bibr' target='#b52'>(Montgomery, 2020)</ns0:ref> and fits within the broader pattern of the far right in online harassment of political enemies <ns0:ref type='bibr' target='#b5'>(Bambenek et al., 2022)</ns0:ref>. This use of nicknames also echoes Trump's strategy of employing derogatory language when depicting other groups, such as the frequent use of terms such as 'animal' or 'killer' when referring to immigrants. Such strategies are part of a broader trend of incivility by American politicians on Twitter, with uncivil tweets serving as powerful means of political mobilization and fundraising <ns0:ref type='bibr' target='#b4'>(Ballard et al., 2022)</ns0:ref> by drawing on emotions such as anger <ns0:ref type='bibr' target='#b36'>(Joosse and Zelinsky, 2022;</ns0:ref><ns0:ref type='bibr' target='#b92'>Wenzler et al., 2022)</ns0:ref>; the ensuing attention may even lead politicians to engage in greater incivility <ns0:ref type='bibr' target='#b23'>(Frimer et al., 2022)</ns0:ref>. In the case of Trump, the violence-inducing rhetoric resulted in the permanent suspension of his Twitter account, days before he was charged by the House of Representatives with 'incitement of insurrection' <ns0:ref type='bibr' target='#b93'>(Wheeler and Muwanguzi, 2022)</ns0:ref>.</ns0:p><ns0:p>Previous research on Trump and social media mining concluded that 'political troll groups recently gained spotlight because they were considered central in helping Donald Trump win the 2016 US presidential election' <ns0:ref type='bibr' target='#b21'>(Flores-Saviaga et al., 2018)</ns0:ref>. Far from isolated individuals with their own motives, research has shown that 'trolls' could be involved in coordinated campaigns to manipulate public opinion on the Web <ns0:ref type='bibr' target='#b98'>(Zannettou et al., 2019)</ns0:ref>. Given the polarization of public opinions and the 'hyper-fragmentation of the mediascape', such campaigns frequently tailor their messages to appeal to very specific segments of the electorate <ns0:ref type='bibr' target='#b65'>(Raynauld and Turcotte, 2022)</ns0:ref>. A growing literature also highlights that the groups involved in these campaigns are occasionally linked to state-funded media <ns0:ref type='bibr' target='#b80'>(Sloss, 2022)</ns0:ref>, as seen by the strategic use of Western platforms by Chinese state media to advance 'alternative news' <ns0:ref type='bibr' target='#b44'>(Liang, 2021)</ns0:ref> with a marked preference for positives about China <ns0:ref type='bibr' target='#b94'>(Wu, 2022)</ns0:ref> or the disproportionate presence of bots among followers of Russia Today (now RT) <ns0:ref type='bibr' target='#b15'>(Crilley et al., 2022)</ns0:ref>.</ns0:p><ns0:p>In this paper, we pursued three questions about bots: their overall involvement (Q1), their targets (Q2), and the sources used (Q3). 
We found that bots were actively used to influence public perceptions on social media (Q1). Posts created by bots were almost exclusively negative and they targeted Democratic figures more than Republicans (Q2), as evidenced both by a difference in ratio (bots were more negative on Democrats than on Republicans) and in composition (use of nicknames). While the effect of a presidential tweet has been debated <ns0:ref type='bibr' target='#b50'>(Miles and Haider-Markel, 2020)</ns0:ref>, there is evidence that such an aggressive use of Twitter bots enhances political polarization <ns0:ref type='bibr' target='#b25'>(Gorodnichenko et al., 2021)</ns0:ref>. Consequently, the online political debate was not so much framed as an exchange of 'arguments' or support of various actors, but rather as a torrent of negative posts, heightened by the presence of bots. The sources used by bots confirm that we are not in the presence of arguments supported by factual references; rather, we witnessed an extensive use of heavily biased websites, disproportionately promoting the views of partisan or extreme right groups (Q3). Since social media platforms are key drivers of traffic towards far-right websites (together with search engines) <ns0:ref type='bibr' target='#b13'>(Chen et al., 2022)</ns0:ref>, it is important to note that many of the links towards such websites are made available to the public through bots. Although our findings may point to the automatic removal of bots from political debates as a possible intervention <ns0:ref type='bibr' target='#b11'>(Cantini et al., 2022)</ns0:ref>, there is evidence that attempts at moderating content can lead to even greater polarization <ns0:ref type='bibr' target='#b87'>(Trujillo and Cresci, 2022)</ns0:ref>, hence the need for caution when intervening.</ns0:p><ns0:p>While previous studies found a mostly anti-Trump sentiment in human responses to tweets (Roca-Cuberes and Young, 2020), the picture can become different when we account for the sheer volume of bots and the marked preference in their targets. This use of bots was not an isolated incident, as the higher use of bots by Trump compared to Clinton was already noted during the second U.S. Presidential Debate <ns0:ref type='bibr' target='#b31'>(Howard et al., 2016)</ns0:ref>, with estimates as high as three pro-Trump bots for every pro-Clinton bot <ns0:ref type='bibr' target='#b20'>(Ferrara, 2020)</ns0:ref>. Our analysis thus provides further evidence for the presence of 'computational propaganda' in the US.</ns0:p><ns0:p>There are three limitations to this work. First, it is possible that tweets support an individual despite being negative. However, this effect would be more likely for Republicans than Democrats, which does not alter our conclusions about the targeted use of bots. Indeed, Trump's rhetoric has been found to be 'unprecedentedly divisive and uncivil' <ns0:ref type='bibr' target='#b9'>(Brewer and Egan, 2021)</ns0:ref>, which can be characterized by a definite and negative tone <ns0:ref type='bibr' target='#b75'>(Savoy, 2021)</ns0:ref>. Consequently, some of the posts categorized as negative may not be against Trump, but rather amplify his message and rhetoric to support him.
The prevalence of such cases may be limited, as a recent analysis (over the last two US presidential elections) showed that negative posts were associated with lower popular support <ns0:ref type='bibr' target='#b78'>(Shah et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Second, although we observe an overwhelmingly negative tone in posts from bots, there is still a mixture of positive and negative tweets. This mixture may reflect that the use of bots in the political sphere is partly a state-sponsored activity. That is, various states intervene and their different preferences produce a heterogeneous set of tweets <ns0:ref type='bibr' target='#b40'>(Kießling et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b82'>Stukal et al., 2019)</ns0:ref>. Although studies on centrally coordinated disinformation campaigns often focus on how one specific entity (attempts to) shape the public opinion <ns0:ref type='bibr' target='#b38'>(Keller et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Im et al., 2020)</ns0:ref>, complex events involve multiple entities promoting different messages. For example, analysis of the 2016 election showed that IRA trolls generated messages in favor of Trump, whereas Iranian trolls were against Trump <ns0:ref type='bibr' target='#b98'>(Zannettou et al., 2019)</ns0:ref>. It is not currently possible to know with certainty which organizations were orchestrating which bot accounts, hence our study cannot disentangle bot posts to ascribe their mixed messages to different organizations.</ns0:p><ns0:p>Finally, in line with previous large-scale studies in natural language processing (NLP) regarding Trump, we automatically categorized each post with respect to sentiment <ns0:ref type='bibr' target='#b83'>(Tachaiya et al., 2021)</ns0:ref>. Although this approach may involve manual reading for a small fraction of tweets (to create an annotated dataset that then trains a machine learning model), most of the corpus is then processed automatically. Such an automatic categorization allows analysts to cope with a vast amount of data, thus favoring breadth in the pursuit of processing millions of tweets. In contrast, studies favoring depth have relied on a qualitative approach underpinned by a manual analysis of the material, which allows examination of the 'storytelling' aspect through changes in rhetoric over time <ns0:ref type='bibr' target='#b62'>(Phelan, 2021)</ns0:ref>. As our study is rooted in NLP and performed over millions of tweets, we did not manually read them to construct and follow narratives. This may be of interest for future studies, which could use a complementary qualitative approach to contrast the rhetoric of posts from bots and humans, or to examine how bots may shape the nature of the arguments rather than only their polarity. Such mixed-methods studies can combine statistical context analysis and qualitative textual analysis (O'Boyle and <ns0:ref type='bibr' target='#b58'>Haq, 2022)</ns0:ref> to offer valuable insights.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We found that bots have played a significant role in contributing to the overall negative tone of the debate during the first impeachment. Most interestingly, we presented evidence that bots were targeting Democrats more than Republicans. The sources promoted by bots were twice as likely to espouse Republican views as Democratic ones, with a noticeable presence of highly partisan or even extreme right websites.
Together, these findings suggest an intentional use of bots as part of a larger strategy rooted in computational propaganda.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022) Manuscript to be reviewed Computer Science (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Classification results (sentiments and bots) for each actor. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='10,141.73,440.21,413.58,244.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Percentage of tweets per category of sentiments over time for each actor. Note that using a percentage instead of the absolute number of tweets allows to compare results across actors, since each one is associated with a different volume of tweets.</ns0:figDesc><ns0:graphic coords='11,141.73,63.79,413.57,259.56' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Joint examination of sentiments and bots for each major actor, with additional political figures included. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='13,141.73,206.10,413.57,206.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Temporal trend in the number of posts by bots for each major actor, with additional political figures included. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='13,141.73,470.45,413.60,213.94' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Interactive visualization for topics from bots on January 21st 2020.</ns0:figDesc><ns0:graphic coords='15,141.73,384.52,413.58,311.82' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Distribution of tweets collected</ns0:figDesc><ns0:table><ns0:row><ns0:cell>3/19</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Actor</ns0:cell><ns0:cell>Nicknames</ns0:cell></ns0:row><ns0:row><ns0:cell>Nancy Pelosi</ns0:cell><ns0:cell>Nervous Nancy</ns0:cell></ns0:row><ns0:row><ns0:cell>Adam Schiff</ns0:cell><ns0:cell>Shifty Schiff</ns0:cell></ns0:row><ns0:row><ns0:cell>Joe Biden</ns0:cell><ns0:cell>Sleepy Joe, Creepy Joe</ns0:cell></ns0:row><ns0:row><ns0:cell>Mitch McConnell</ns0:cell><ns0:cell>Midnight Mitch</ns0:cell></ns0:row></ns0:table><ns0:note>. language detection using langdetect 4/19 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Nicknames of key actors</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model Name</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Metric</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.8816</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.744</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.537 0.2562</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.3465</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.478</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.2104</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.2922</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.658</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.664</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.918</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.7707</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.638 0.3607</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.4608</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.617</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.0979</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.1686</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell cols='2'>Accuracy 0.7398</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.821</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.806</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.813</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.6246 0.666</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.6348</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.612</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.584</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.598</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.635</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.652</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.7535</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regressor</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.543 0.303</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.389</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.13125</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.217</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.639</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.762</ns0:cell></ns0:row><ns0:row><ns0:cell>Support Vector Machine</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.637 0.238</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.3465</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.665</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.128</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.199</ns0:cell></ns0:row></ns0:table><ns0:note>5/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022) Manuscript to be reviewed Computer Science of classification accuracy for BERT (which we use in this study), TextBlob library, and two scikit-learn algorithms. 6/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022)Manuscript to be reviewedComputer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Botometer results</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Hyper-parameters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Performances</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Approach</ns0:cell><ns0:cell>Values explored via grid search</ns0:cell><ns0:cell>Best values</ns0:cell><ns0:cell>Acc. F1</ns0:cell><ns0:cell>ROC-AUC</ns0:cell><ns0:cell>Pre-cision</ns0:cell><ns0:cell>Re-call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Criterion (gini / entropy),</ns0:cell><ns0:cell>entropy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>max depth (1, 2, . . ., 10), min samples leaf (2, 3, . . ., 20),</ns0:cell><ns0:cell>5 14</ns0:cell><ns0:cell cols='2'>.885 .903 .890</ns0:cell><ns0:cell>.938</ns0:cell><ns0:cell>.871</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max leaf nodes (1, 2, . . ., 20)</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>eXtreme Gradient</ns0:cell><ns0:cell>Max depth (5, 6, . . 
., 10),</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Boosting (XGBoost)</ns0:cell><ns0:cell>alpha (.1, .3, .5), learning rate (.01, .02, . . ., .05),</ns0:cell><ns0:cell>.5 .02</ns0:cell><ns0:cell cols='2'>.892 .909 .893</ns0:cell><ns0:cell>.933</ns0:cell><ns0:cell>.887</ns0:cell></ns0:row><ns0:row><ns0:cell>for trees</ns0:cell><ns0:cell>estimators (100, 200, 300)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Max depth (5, 6, . . ., 10),</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Random Forests</ns0:cell><ns0:cell>criterion (gini / entropy),</ns0:cell><ns0:cell>entropy</ns0:cell><ns0:cell cols='2'>.898 .915 .899</ns0:cell><ns0:cell>.934</ns0:cell><ns0:cell>.897</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>estimators (100, 200, 300)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Classification performances on bot detection.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Date</ns0:cell><ns0:cell cols='2'>Tweets Collected Proportion remaining</ns0:cell></ns0:row><ns0:row><ns0:cell>6 Oct 2019</ns0:cell><ns0:cell>1,877,693</ns0:cell><ns0:cell>95.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>17 Oct 2019</ns0:cell><ns0:cell>1,830,552</ns0:cell><ns0:cell>93.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>14 Nov 2019</ns0:cell><ns0:cell>1,828,402</ns0:cell><ns0:cell>92.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>5 Dec 2019</ns0:cell><ns0:cell>1,855,771</ns0:cell><ns0:cell>94.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>19 Dec 2019</ns0:cell><ns0:cell>1,998,802</ns0:cell><ns0:cell>97.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>21 Jan 2020</ns0:cell><ns0:cell>3,230,843</ns0:cell><ns0:cell>96.1%</ns0:cell></ns0:row></ns0:table><ns0:note>). 7/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Distribution of Tweets</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The distribution of sources is heavily imbalanced. The vast majority (n=909, 77.62%) of these websites were used less than 10 times each, while the top 10 websites were used 32,050 Merry Christmas Eve Patriots! I have a grateful heart for President Donald Trump, a man who loves our God and our country, and defends our people. Keep the faith Patriots, and know that we are on the right path. God Bless you all. #TRUMP2020Landside https://t.co/y5duaB6wrR @RyanHillMI @realDonaldTrump You're right. It wasn't a trial. It was a perfect example of tyranny. But, tweets can only be so long and impeachment didn't fit. So, trial. Two sample tweets for each day of data collection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>times (i.e., in 58.38% of tweets with URLs). The most cited website is Twitter (19258 tweets, 35.08%), as expected by the practice of retweets. 
Several other websites are social media (youtube in 1884 tweets or 3.43%, facebook in 266 tweets or 0.48%, Periscope in 215 tweets or 0.39%), web hosting solutions (wordpress in 111 tweets or 0.20%, change.org in 106 tweets or 0.19%, blogspot in 101 tweets or 0.18%), or search engines (civiqs in 65 tweets or 0.12%, google in 65 tweets or 0.12%). The remainder are primarily websites that propose news, hence they were evaluated with respect to the bias ratings from9/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022) Manuscript to be reviewed 10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:1:1:NEW 25 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Website categories per source for bias rating (AllSides, Ad Fontes), using their nomenclature (e.g., 'very left' in AllSides, 'hyper-partisan left' in Ad Fontes). * Manual screening was applied to websites not rated by Ad Fontes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Sample tweets pointing to hyper-partisan right or extreme right websites.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>sources), Trending Politics (2.80%), Breitbart (2.60%), The Right Scoop (2.49%), The Gateway Pundit</ns0:cell></ns0:row><ns0:row><ns0:cell>(2.48%), Sara A Carter (2.19%), Wayne Dupree (1.86%), CNN (1.69%), Raw Story (1.57%), Kerry Picket</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"DEPARTMENT OF COMPUTER SCIENCE AND SOFTWARE ENGINEERING College of Engineering and Computing Benton Hall Room 205 Academic Editor PeerJ Computer Science OXFORD, OHIO 45056 (513) 529-0340 (513) 529-0333 FAX February 21, 2022 Dear Dr Kathiravan Srinivasan, We would like to thank you for arranging reviews from three professionals in our field. These timely and detailed reviews have certainly helped to improve this manuscript. Changes are shown in red in the PDF and this letter details the changes that we have done, in response to each of the three reviews. Reviewer #1 (Furqan Rustam) 1- Abstract is too long and even the main points of studies are missing in the abstract. it's too general. → We have reduced the size of the abstract and rewrote its key points. In particular, our main research questions are now more clearly conveyed (Q1, Q2, Q3). 2- Add contributions in the introduction section with bullets. → In line with changes in the abstract, we have added a numbered list to make our contributions more visible in the introduction. 3- Literature is missing at least add some NLP-related studies. I am suggesting some below: a- A performance comparison of supervised machine learning models for Covid-19 tweets sentiment analysis b- Tweets classification on the base of sentiments for US airline companies c- Determining the Efficiency of Drugs under Special Conditions from Users’ Reviews on Healthcare Web Forums d- US Based COVID-19 Tweets Sentiment Analysis Using TextBlob and Supervised Machine Learning Algorithms e- Sentiment Analysis and Topic Modeling on Tweets about Online Education during COVID-19 f- Deepfake tweets classification using stacked Bi-LSTM and words embedding → We have added 25 references on NLP, Twitter, and Politics. That also includes several of the suggestions above, chosen to maximize relevance with the paper. We appreciate the suggestions. 4- Add some description of the used model's architectures. → Lines 185-193 now detail the architecture of the model. 5- Add comparison with Textblob and perform the comparison of machine learning models with textblob sentiments. → This is an excellent idea. We explain this comparison in lines 201-205 and present results through the new dedicated Table 5. 6- We didn't used neural networks models? → We used BERT for sentiment analysis, which is a (deep) neural network model. Reviewer #2 Title: I woud add the date of the political trial to the title to diferentiate from other impeachments. → This is a good idea for clarification. We now indicate in the title that we studied the first impeachment. Abstract: The objective of the research should be clearer in the abstract, in line with the established hypothesis. → We clarified the research objectives by having numbered questions (Q1, Q2, Q3). The study should be expanded in future research and the current limitations should be overcome. → We expanded the research in several ways. We showed that the method used to automatically detect sentiment achieves high accuracy when compared to common alternatives. As suggested by the reviewer, we analyzed the content of the bots’ tweets. In particular, we used two sources of bias ratings to analyze the websites used in the bots’ tweets. This new analysis revealed a very marked bias towards conservative websites, and particularly extreme right websites. We also provided additional interactive visualizations which allow readers to interact with our content further. 
This type of study, more than the numerical value, requires the qualitative and discursiverhetorical analysis of the content of the bots. It does not matter if it is necessary to reduce the sample and resort to manual measurement. → We manually cross-referenced the main sources used in posts with the two sources of bias ratings. It was indeed necessary to reduce the size of the analysis, so we focused on sources used in at least 50 posts. We also noted the difficulty of interpreting themes, while providing a case that was clearer for interpretation. Note that PeerJ Computer Science has a focus on computational techniques; we note and emphasize the importance of complementary qualitative analyses, but we would go beyond the scope of the journal if they started to take a significant portion of the work presented here. The final selection of actors is not clear; in the Discussion section in Figure 3 eight actors are discussed ... In previous graphs, only four are discussed. They should also be reflected in the abstract and in the Method (selected sample). Is the sample considered balanced based on the ideology and membership of the Democratic and Republican parties of these actors? → We apologize for the lack of clarity in our process and the composition of the sample. We now indicate early on that it is indeed a balanced sample (abstract, lines 108-110). The description of our selection process was expanded in lines 166-172. Literature: The literatura missing references to the use of disinformation, political corrupción, twitter-rhetoric and critical analysis of speech. In general, we value the updated bibliography that is provided related to the study. Some recommended papers are related in the bibliography. → In addition to the 25 technical papers added for reviewer 1, we also added 3 references on topic modeling and 3 references on critical analyses of speech. As we were not able to read the reference provided in Spanish, we have identified the work of Perez-Curiel et al. as a useful complement to provide a global perspective. Our introduction was expanded to better account for these perspectives (lines 55-59; lines 69-81). Research questions are not defined, essential in the methodological section of a study with these characteristics. → The three research questions (one of which was suggested by the reviewer) are now defined in the abstract and introduction, and results with respect to each question are summarized in the discussion. The criteria for selecting the sample of political actors analyzed are not clearly described. → Please see our response at the top of this page. A variable sheet is recommended to help identify which measurement elements have been taken into account → We now provide a summary of measurements in lines 177-179. Although a data dictionary would be common practice for qualitative studies, it is not the case for computational studies. We would be happy to expand on the summary provided if anything else can be helpful for the readership of PeerJ Computer Science. I would recommend to apply the triangulated content analysis methodology with a triple approach (qualitative, quantitative, discursive) on a smaller sample of bots. → We have expanded our analysis by examining the sources used by bots and commented on themes, while providing a much greater level of interaction with the collection through complementary interactive visualizations on our open online repository. 
The fact of explaining the variables in the methodology would help to better understand the subsequent tables and graphs and the results of the analysis. → In addition to the new summary of measurements, we have expanded our technical discussion of the methods (e.g., the functioning of BERT). We also provided several examples of tweets to illustrate the sort of data that we deal with, at each stage. We expanded the discussion to help with interpretation as well. We value the justified selection of sampling dates. As keywords, It would include the word disinformation, fallacy, lie, or something that reflects the purpose of the bot. → Unfortunately, bots do not state their purpose explicitly. Their messages do not include words such as ‘disinformation’, ‘fallacy’, or ‘lie’ unless they refer to an alleged lie from someone (e.g., ‘Joe Biden lies’), which humans can do as well. That is why it became necessary to use advanced computational methods that can detect whether the account is a bot. This is better explained in lines 147-148. Define the selection criteria for these actors; see if there is a balance in the selection, between Democratic and Republican leaders. → It is indeed balanced. See response at the top of the previous page. We recommend incluiding: -Krippendorff, Klaus (2004). Content analysis. Sage. ASIN: B01B8SR47Y -Van-Dijk, Teun A. (2015). “Critical discourse studies. A sociocognitive Approach”, Methods of Critical Discourse Studies, v. 3, n. 1. -Pérez‐Curiel, Concha, Rivas‐de‐Roca, Rubén; García‐Gordillo, Mar (2021). “Impact of Trump’s Digital Rhetoric on the US Elections: A View from Worldwide Far‐Right Populism”, Social Sciences, v. 10, n. 152. https://doi.org/10.3390/ socsci100501 → We are thankful for these references. We enjoyed reading them. We wish we could have read the fourth paper as well, if it wasn’t for the language barrier. We have cited all of the above. It would be appropriate to include some captures of Twitter messages cataloged as Bots. → We have now done it, by showing Twitter messages from each day of data collection (so readers can note a shift in content) as well as messages that rely on different websites. The question is: what variables determines the feeling, considering that it is a subjective factor? → We have expanded our explanation of how BERT detects sentiment. Sentiments are indeed subjective, but BERT’s high accuracy compared to alternatives (Table 5) has made it one of the most widely used methods to process a very large number of tweets. It has thus become a de facto standard in computational studies such as the one presented here. The conclusions should be completed much more in spite of the fact that the discussion is quite developed. → We appreciate the suggestion. In computer science, the conclusions are normally kept very short, never more than a paragraph. The culture of the field is that conclusions are only there to ‘conclude’, they give a short take-away message rather than new information (compared to the discussion). To strike a balance at the reviewer’s request, we have expanded the conclusion while keeping it within a paragraph. The article should generally deal more in depth with the literature related to the proposed topic. → We agree with the recommendation. There are now over 30 new references to this subject, leading to four pages dedicated to references. Reviewer #3 In this manuscript, the authors examine the role of bots in the spreading of Donald Trump’s tweets during the first impeachment trial. 
Specifically, the authors collected and analyzed more than 13 million tweets, collected during six key dates during the impeachment trial, to perform (1) sentiment analysis and (2) bot detection. Overall, the setup of the study and the analysis of the data were done well. I recommend Accept. → We are delighted by this fair summary of our work. Thank you for these positive words. • Misspelling of the word “analyses.” In the manuscript, the authors misspelled the noun “analyses” as “analyzes.” See, for instance, line 15, 66, and 67. • Unnecessary capitalizations. For example, the word “tweets” is generally lowercase, but is uppercase in multiple spots in this manuscript (line 123, etc.) • Misspelling of the word “Twitter.” In the manuscript, Twitter is sometimes misspelled as “Tweeter.” (line 139) → We apologize for our oversights in editing the manuscript. Everything has been fixed: analyzes were changed into analyses (consistently), Tweets were made into tweets (unless the word started a sentence or served as column header), and Twitter was consistently spelled. On behalf of the authors, Philippe J. Giabbanelli, PhD Associate Professor, Dept. of Computer Science & Software Engineering, Miami University Associate Editor, Social Network Analysis & Mining "
Here is a paper. Please give your review comments after reading it.
404
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Influencing and framing debates on Twitter provides power to shape public opinion. Bots have become essential tools of 'computational propaganda' on social media such as Twitter, often contributing to a large fraction of the tweets regarding political events such as elections. Although analyses have been conducted regarding the first impeachment of former president Donald Trump, they have been focused on either a manual examination of relatively few tweets to emphasize rhetoric, or the use of Natural Language Processing (NLP) of a much larger corpus with respect to common metrics such as sentiment. In this paper, we complement existing analyses by examining the role of bots in the first impeachment with respect to three questions as follows. (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments? Our methods start with collecting over 13M tweets on six key dates, from October 6th 2019 to January 21st 2020. We used machine learning to evaluate the sentiment of the tweets (via BERT) and whether it originates from a bot. We then examined these sentiments with respect to a balanced sample of Democrats and Republicans directly relevant to the impeachment, such as House Speaker Nancy Pelosi, senator Mitch McConnell, and (then former Vice President) Joe Biden. The content of posts from bots was further analyzed with respect to the sources used (with bias ratings from AllSides and Ad Fontes) and themes. Our first finding is that bots have played a significant role in contributing to the overall negative tone of the debate (Q1). Bots were targeting Democrats more than Republicans (Q2), as evidenced both by a difference in ratio (bots had more negative-to-positive tweets on Democrats than Republicans) and in composition (use of derogatory nicknames). Finally, the sources provided by bots were almost twice as likely to be from the right than the left, with a noticeable use of hyper-partisan right and most extreme right sources (Q3). Bots were thus purposely used to promote a misleading version of events. Overall, this suggests an</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div> <ns0:div><ns0:head>42</ns0:head><ns0:p>The efforts of party leaders and presidents to shape public understanding of issues through mainstream 43 and partisan media are well documented <ns0:ref type='bibr' target='#b77'>(Sellers, 2009;</ns0:ref><ns0:ref type='bibr' target='#b89'>Vinson, 2017)</ns0:ref>. Partisan media intentionally adopt 44 ideological frames and cover stories in ways that favor the politicians in their own party and create negative 45 impressions of those in the other party, contributing to polarization in American politics <ns0:ref type='bibr'>(Levendusky,</ns0:ref><ns0:ref type='bibr'>46</ns0:ref> 2013; <ns0:ref type='bibr' target='#b22'>Forgette, 2018)</ns0:ref>. At the interface of political communication and computational sciences, the emerging field of computational politics has produced many analyses of polarization over the recent years <ns0:ref type='bibr' target='#b29'>(Haq et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b63'>Pozen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b96'>Yarchi et al., 2021)</ns0:ref>. 
Comparatively less computational research has been devoted to how politicians influence debate on social media, but interest in this question has grown during Donald Trump's presidency as he extensively communicated directly to the public via Twitter <ns0:ref type='bibr' target='#b57'>(Ouyang and Waterman, 2020)</ns0:ref>.</ns0:p><ns0:p>While mainstream media traditionally favor those with political power, giving them coverage that expands their public reach, social media rewards those who can attract an audience. On Twitter, a large attentive audience can amplify a politician's message and change or mobilize public opinion <ns0:ref type='bibr' target='#b100'>(Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b68'>Russell, 2021)</ns0:ref>. Trump's rhetoric features many characteristics of populism that attract followers, including emotional appeals based on opinion rather than fact and allegations of corruption; indeed, his messages are often picked up by right-wing populist leaders around the world <ns0:ref type='bibr' target='#b61'>(Pérez-Curiel et al., 2021)</ns0:ref>. As research on the 2016 presidential campaign illustrates, Trump had an attentive audience, particularly on the far right, that readily amplified his message through retweets <ns0:ref type='bibr' target='#b100'>(Zhang et al., 2018)</ns0:ref>; a similar phenomenon was found in the 2020 election, for which datasets are gradually being assembled and examined <ns0:ref type='bibr' target='#b0'>(Abilov et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Grimminger and Klinger, 2021)</ns0:ref>. These events repeatedly show that news media responded to Trump's large volume of tweets by giving him substantially more coverage than other candidates, further expanding his reach to influence and frame public debate <ns0:ref type='bibr' target='#b91'>(Wells et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Computational analyses further suggest that the use of tweets was a strategic activity, as Trump's Twitter activity increased in response to waning news coverage from Center Right, Mainstream, Left Wing, and Far Left media <ns0:ref type='bibr' target='#b91'>(Wells et al., 2020)</ns0:ref>. Influencing and framing the debate on Twitter thus serves to shape media coverage and public opinion, and ultimately to impact the political agenda <ns0:ref type='bibr'>(Şahin et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In this sense, shaping public discourse is connected to political power. Studies of Twitter rhetoric indicate that patterns in framing and use of language occur within digital networks that create group identities <ns0:ref type='bibr' target='#b2'>(Alfonzo, 2021)</ns0:ref>. Critical discourse analysis suggests that examining messaging can identify power asymmetries, revealing the institutions and leaders who control public discourse in particular environments or around specific issues and events <ns0:ref type='bibr' target='#b88'>(Van Dijk, 2009)</ns0:ref>. Such research can reveal abuse of power by dominant groups and resistance by marginalized groups. Given Trump's populist rhetoric and its potential for spreading disinformation <ns0:ref type='bibr' target='#b61'>(Pérez-Curiel et al., 2021)</ns0:ref>, amplification of his messages would have important implications for public discourse.
Though critical analysis would require a more detailed and nuanced coding of the tweets than we are able to provide with our methodology, looking at the amplification of political discourse by bots allows us to see the potential manipulation of public discourse, especially if the bots favor one faction, which could affect the power dynamics between parties and have significant negative implications for democracy. This kind of power merits additional study. Our study of the first impeachment of Donald Trump provides such an opportunity. This paper is part of the growing empirical literature on the social influence carried by Trump's tweets prior to the permanent suspension of his account on January 8, 2021. Such studies are now covered by dedicated volumes <ns0:ref type='bibr' target='#b37'>(Kamps, 2021;</ns0:ref><ns0:ref type='bibr' target='#b76'>Schier, 2022)</ns0:ref> as well as a wealth of scholarly works, including large-scale social network analysis and mining <ns0:ref type='bibr' target='#b101'>(Zheng et al., 2021)</ns0:ref>, analyses of media responses <ns0:ref type='bibr' target='#b14'>(Christenson et al., 2021)</ns0:ref>, the interplay of tweets and media <ns0:ref type='bibr' target='#b53'>(Morales et al., 2021)</ns0:ref>, or the (absence of an) impact of his tweets on stock returns <ns0:ref type='bibr' target='#b47'>(Machus et al., 2022)</ns0:ref>. Related studies also include analyses of comments from political figures about Trump <ns0:ref type='bibr' target='#b51'>(Milford, 2021;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alexandre et al., 2021)</ns0:ref> and vice versa <ns0:ref type='bibr' target='#b10'>(Brown Crosby, 2022)</ns0:ref>, analyses of his letter during the first impeachment <ns0:ref type='bibr' target='#b66'>(Reyes and Ross, 2021)</ns0:ref>, and detailed examinations of how Trump's message was perceived by specific groups such as White extremists <ns0:ref type='bibr' target='#b45'>(Long, 2022)</ns0:ref>. The impeachment and the handling of COVID-19 have received particular attention on Twitter <ns0:ref type='bibr' target='#b17'>(Dejard et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cervi et al., 2021)</ns0:ref>, as key moments of his presidency. While the use of bots to create social media content during presidential elections or the COVID-19 pandemic has received attention <ns0:ref type='bibr' target='#b95'>(Xu and Sasahara, 2021)</ns0:ref>, there is still a need to examine how bots were used during the first impeachment.</ns0:p><ns0:p>The specific contribution of our study is to complement existing (predominantly manual) analyses of tweets during the first impeachment, which often focused on tweets from Donald Trump <ns0:ref type='bibr' target='#b26'>(Gould, 2021;</ns0:ref><ns0:ref type='bibr' target='#b19'>Driver, 2021)</ns0:ref> or Senators <ns0:ref type='bibr' target='#b48'>(McKee et al., 2022)</ns0:ref>, by using machine learning to examine the role of bots in a large collection of tweets. This complementary analysis has explicitly been called for in recent publications <ns0:ref type='bibr' target='#b83'>(Tachaiya et al., 2021)</ns0:ref>, which performed large-scale sentiment analyses during the first impeachment (via Reddit and 4chan), but did not investigate the efforts at social engineering deployed via bots. Our specific contribution focuses on addressing three questions: (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments? The remainder of this paper is organized as follows.
In the next section, we explain how we gathered over 13M tweets based on six key dates of the first impeachment, and analyzed them using BERT (for sentiment) and Botometer (for bot detection) as well as bias ratings (for source evaluation) and Latent Dirichlet Allocation (for theme mining). We then present our results for a balanced sample of key political figures in the impeachment (including Adam Schiff, Joe Biden, Mitch McConnell, Nancy Pelosi, Donald Trump) as well as others relevant to our analysis. For transparency and replicability of research, tweets and detailed results are available on a third-party repository without registration needed, at https://osf.io/3nsf8/. Finally, our discussion contextualizes the answer to each of the three questions within the broader literature on computational politics and summarizes some of our limitations.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>For a detailed description of the Ukraine Whistleblower Scandal, we refer the readers to a recent analysis by McKee and colleagues, who also contextualize this situation within the broader matter of hyperpolarization <ns0:ref type='bibr' target='#b48'>(McKee et al., 2022)</ns0:ref>. A short overview is as follows. On September 9, 2019, the House Intelligence Committee, headed by Rep. Adam Schiff, was alerted to the existence of a whistleblower complaint that involved the Trump Administration. Over the next two weeks, information from media reports indicated that President Trump had withheld aid to Ukraine, possibly to pressure the new Ukrainian President to launch a corruption investigation into Democratic presidential hopeful Joe Biden's son Hunter, who had been on the board of a Ukrainian company when the elder Biden was Vice President during the Obama Administration <ns0:ref type='bibr' target='#b84'>(Trautman, 2019)</ns0:ref>. By September 25, House Speaker Nancy Pelosi, a Democrat, had launched an impeachment inquiry, and the Trump Administration had released notes from Trump's call with the Ukrainian President that some saw as evidence that Trump had conditioned aid and a visit to the White House on Ukraine investigating Hunter Biden. Our data collection started soon after, covering six key dates leading to the impeachment process.</ns0:p><ns0:p>On October 6, 2019, a second whistleblower was identified and we collected 1,968,943 tweets. On October 17, 2019, then Senate Majority Leader Mitch McConnell briefed the Senate on the impeachment process (1,960,808 tweets). On November 14, 2019, the second day of impeachment hearings began in the House Intelligence Committee (1,977,712 tweets). On December 5, 2019, one day after the House Judiciary Committee opened impeachment hearings, four constitutional lawyers testified in front of Congress (1,960,813 tweets). On December 18, 2019, the House debated and passed two articles of impeachment; we collected 2,041,924 tweets to gather responses on the following day. Finally, on January 21, 2020, the Senate impeachment trial began and we ended the data collection with 3,360,434 tweets. In total, 13,272,577 tweets were collected (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p><ns0:p>Throughout the process, both parties and the President tried to control the public narrative over impeachment. Early in the process, Democrats began to talk about abuse of power or corruption by Trump and eventually shifted to the more specific language of bribery and extortion.
Meanwhile, Republican elected officials seemed to be trying to frame this as a coup by the Democrats. Republicans directed their ire at House Intelligence Committee Chairman Adam Schiff, the Democrat who led the impeachment inquiry.</ns0:p><ns0:p>The President and some of his Republican allies, including Senator Lindsey Graham and aide Stephen Miller, tried to divert attention from the President by alleging misconduct by Joe and Hunter Biden. Our key words for data collection were thus chosen to reflect the messaging of the two parties in what was shaping up as a largely partisan process: Trump, Impeachment, Coup, Abuse of Power, Schiff (as in Adam), and Biden. We did not explicitly include words like disinformation, though the messaging may have included unfounded allegations, because such words were not part of either party's framing.</ns0:p><ns0:p>Tweets were gathered using our Twitter mining system <ns0:ref type='bibr' target='#b49'>(Mendhe et al., 2020)</ns0:ref>, which allows for the collection of social media data and the application of many filters for cleaning and further use with machine learning techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>This study and the methods employed only target texts in English, hence we removed tweets in other languages as well as tweets only containing hyperlinks. Language detection of Twitter data is more difficult than that of standard texts due to the unusual language used within tweets. We used the langdetect library (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>), which utilizes a naive Bayes classifier and achieves strong performance on tweets <ns0:ref type='bibr' target='#b46'>(Lui and Baldwin, 2014)</ns0:ref>. Tweets in English may have to undergo extensive cleaning through a variety of filters, depending on the model used for analysis. As detailed in the next section, we use the BERT model, or Bidirectional Encoder Representations from Transformers. BERT is pre-trained on unprocessed English text <ns0:ref type='bibr' target='#b18'>(Devlin et al., 2018)</ns0:ref>, hence the only processing performed was to remove tweets that consisted only of hyperlinks. A more detailed discussion of our data collection platform and its combination with BERT can be found in <ns0:ref type='bibr' target='#b64'>(Qudar and Mago, 2020)</ns0:ref>, while an abundance of studies show how BERT is used for classification of tweets <ns0:ref type='bibr' target='#b79'>(Singh et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b97'>Yenduri et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b70'>Sadia and Basak, 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis</ns0:head><ns0:p>In textual data analysis, we can apply information extraction and text mining to parse information from a large data set. In order to understand how each side put their messages out, the data was broken down by mention of specific politicians: those who were the subject of impeachment (Trump) or of counter-messaging by Republicans (Biden), those who were important to the impeachment process and messaging for their party (Pelosi, McConnell, and Schiff), and two Republicans (Miller and Graham) 'who represent accounts closely associated with President Trump' per previous Twitter analyses <ns0:ref type='bibr' target='#b8'>(Boucher and Thies, 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Hawkins and Kaltwasser, 2018)</ns0:ref>. This sample of actors is evenly divided among Democrats and Republicans.</ns0:p>
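<ns0:p>To make the preprocessing and actor-assignment steps concrete, the following is a minimal sketch in Python. It only assumes the langdetect library mentioned above; the alias lists, the case-insensitive substring matching, and the example tweets are illustrative assumptions rather than the exact implementation used in this study.</ns0:p>
from langdetect import detect

# Names and nicknames used to assign a tweet to one or more actors (see Table 3).
# Assumption: simple case-insensitive substring matching on names/nicknames.
ACTORS = {
    'Nancy Pelosi': ['nancy pelosi', 'pelosi', 'nervous nancy'],
    'Adam Schiff': ['adam schiff', 'schiff', 'shifty schiff'],
    'Joe Biden': ['joe biden', 'biden', 'sleepy joe', 'creepy joe'],
    'Mitch McConnell': ['mitch mcconnell', 'mcconnell', 'midnight mitch'],
}

def keep_tweet(text):
    """Keep English tweets that contain more than just hyperlinks."""
    words = [w for w in text.split() if not w.startswith('http')]
    if not words:
        return False  # the tweet consisted only of hyperlinks
    try:
        return detect(' '.join(words)) == 'en'
    except Exception:  # langdetect raises an error on very short or ambiguous text
        return False

def assign_actors(text):
    """Return the list of actors mentioned in a tweet."""
    lowered = text.lower()
    return [actor for actor, aliases in ACTORS.items()
            if any(alias in lowered for alias in aliases)]

# Illustrative usage with placeholder tweets (not real data).
for tweet in ['Nervous Nancy strikes again https://example.com', 'https://example.com']:
    if keep_tweet(tweet):
        print(assign_actors(tweet), '->', tweet)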
<ns0:p>After assigning tweets to each of the actors, we then used sentiment analysis and bot detection. The analysis was further nuanced by accounting for the use of a derogatory nickname (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>) in lieu of the official actor's name. In addition, we performed complementary examinations of the content of tweets by bots, by examining their themes (in contrast with human tweets) and the bias exhibited in the websites used as sources.</ns0:p><ns0:p>In summary, the data associated with each tweet (content, time, author) is automatically expanded with the list of actors involved, the type of author (bot, human, unknown), the sentiment (negative, neutral, positive), and the list of websites used.</ns0:p></ns0:div> <ns0:div><ns0:head>Sentiment Analysis</ns0:head><ns0:p>Sentiment analysis is a widely used technique for analyzing textual data in natural language processing, text analysis, and computational linguistics. Our model classifies sentiment into three categories: negative, neutral, and positive. As mentioned above, we use the natural language processing model BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers <ns0:ref type='bibr' target='#b18'>(Devlin et al., 2018)</ns0:ref>. As its name indicates, BERT is a pre-trained deep bidirectional transformer, whose architecture consists of multiple encoders, each composed of two types of layers (multi-head self-attention layers and feed-forward layers). To appreciate the number of parameters, consider that the text first goes through an embedding process (two to three dozen million parameters depending on the model), followed by transformers (each of which adds 7 or 12.5 million parameters depending on the model), ending with a pooling layer (0.5 or 1 million more parameters depending on the model). All of these parameters are trainable. Although the model has already been pre-trained, it is common practice to give it additional training based on data specific to the context of interest.</ns0:p><ns0:p>In order to train the model, a set of 3,000 unprocessed tweets was randomly selected, distributed evenly across each of the six collection dates. The tweets were then manually labelled for overall sentiment, and fed into the model for training. This approach is similar to other works, such as <ns0:ref type='bibr' target='#b27'>Grimminger and Klinger (2021)</ns0:ref>, in which 3,000 tweets were manually annotated for a BERT classification. While there is no universal number for how many tweets should be labeled when using BERT, we observe a range across studies from under 2,000 labeled tweets (Şaşmaz and Tek, 2021; <ns0:ref type='bibr' target='#b3'>Alomari et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b60'>Peisenieks and Skadiņš, 2014)</ns0:ref> up to several thousands <ns0:ref type='bibr' target='#b24'>(Golubev and Loukachevitch, 2020;</ns0:ref><ns0:ref type='bibr' target='#b69'>Rustam et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b54'>Nabil et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b99'>Zhang et al., 2020)</ns0:ref>.</ns0:p>
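<ns0:p>The fine-tuning step described above can be sketched as follows. The text does not state which pre-trained BERT checkpoint or training hyper-parameters were used, so the snippet below relies on bert-base-uncased with the Hugging Face transformers library and default Trainer settings purely as an illustration; the three example tweets are placeholders standing in for the 3,000 manually labelled ones.</ns0:p>
# Sketch of fine-tuning a three-class (negative/neutral/positive) sentiment classifier.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {'negative': 0, 'neutral': 1, 'positive': 2}

class TweetDataset(Dataset):
    """Wraps tokenized tweets and their manual sentiment labels."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = [LABELS[l] for l in labels]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item['labels'] = torch.tensor(self.labels[i])
        return item

# Placeholder data: in the study, 3,000 tweets were manually labelled.
texts = ['Impeach him now', 'The hearing starts today', 'Great job Senator']
labels = ['negative', 'neutral', 'positive']

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased',
                                                           num_labels=3)
args = TrainingArguments(output_dir='sentiment_model', num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=TweetDataset(texts, labels, tokenizer))
trainer.train()

# After training, classify a new tweet (model moved back to CPU for this example).
model = model.cpu()
inputs = tokenizer('Shifty Schiff is at it again', return_tensors='pt')
pred = model(**inputs).logits.argmax(dim=-1).item()
print({v: k for k, v in LABELS.items()}[pred])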
<ns0:p>In line with previous studies on sentiment analysis in tweets <ns0:ref type='bibr' target='#b69'>(Rustam et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b39'>Khan et al., 2021)</ns0:ref>, we compared BERT and several other supervised machine learning models from the textblob library (i.e., decision tree, Naive Bayes) as well as scikit-learn (logistic regression, support vector machine). Our comparison in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> confirms that BERT leads to satisfactory and more stable performances than the alternatives, thus validating the choice of BERT for our corpus.</ns0:p></ns0:div> <ns0:div><ns0:head>Bot Detection</ns0:head><ns0:p>We use the botometer library (formerly known as BotOrNot) to detect the presence of bots on Twitter. botometer is a paid service that utilizes the Twitter API to compile over 1,000 features of a given Twitter account, including how frequently the account tweets, how similar each tweet is to previous tweets from the account, and the account's ratio of followers to followees <ns0:ref type='bibr' target='#b16'>(Davis et al., 2016)</ns0:ref>. Features are examined both within their respective categories and collectively. The resulting series of probabilities is returned (Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>). To provide a conservative estimate, we scanned almost half of the different accounts associated with tweets (1,101,023 out of 2,438,343 unique accounts). Since some accounts were deleted or suspended (n = 113,037), our analysis of bots is based on a sample of 987,987 active accounts. We note that the percentage of accounts unavailable is similar to <ns0:ref type='bibr' target='#b42'>Le et al. (2019)</ns0:ref>, in which 9.5% of Twitter accounts involved in the 2016 election were suspended.</ns0:p><ns0:p>In order to classify Twitter accounts as human or bot, a classifier needs to be properly calibrated on annotated botometer data <ns0:ref type='bibr' target='#b81'>(Spagnuolo, 2019)</ns0:ref>. We considered three supervised machine learning techniques, which are frequently used alongside BERT to process tweets: decision trees, extreme gradient boosted trees (XGBoost), and random forests <ns0:ref type='bibr' target='#b41'>(Kumar et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b69'>Rustam et al., 2021)</ns0:ref>. All three methods were implemented in scikit-learn, and our scripts are provided publicly on our repository. Training data, consisting of manually labelled Twitter accounts, was retrieved from https://botometer.osome.iu.edu/bot-repository/datasets.html and compiled, thus providing 3,000 accounts. For each of the three supervised machine learning techniques, we optimized a classifier using 10-fold cross-validation and a grid search over commonly used hyper-parameters. The hyper-parameters considered and their values are listed in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, while each model was exported from scikit-learn and saved on our open repository for inspection and reuse by the research community. As performances are comparable across the three techniques, we used the decision tree (which can more readily be interpreted by experts) to classify accounts from our collected data.</ns0:p></ns0:div>
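<ns0:p>As an illustration of the calibration step above, the following is a minimal scikit-learn sketch of the 10-fold cross-validated grid search for the decision tree, with hyper-parameter ranges taken from Table 6. The feature matrix X and labels y are random placeholders standing in for the botometer scores and the 3,000 manually labelled accounts, and the scoring metric is an assumption, since several metrics are reported in the paper.</ns0:p>
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: in the study, X would hold botometer features per account
# and y the manual bot (1) / human (0) annotations for 3,000 accounts.
rng = np.random.default_rng(0)
X = rng.random((3000, 6))
y = rng.integers(0, 2, size=3000)

# Hyper-parameter ranges from Table 6 (note: scikit-learn requires max_leaf_nodes >= 2).
param_grid = {
    'criterion': ['gini', 'entropy'],
    'max_depth': list(range(1, 11)),
    'min_samples_leaf': list(range(2, 21)),
    'max_leaf_nodes': list(range(2, 21)),
}

search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid,
                      cv=10, scoring='f1', n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))

# The fitted estimator can then label the accounts collected in the study, e.g.:
# predictions = search.best_estimator_.predict(account_features)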
<ns0:div><ns0:head>Content of tweets by Bots</ns0:head><ns0:p>Once we detect the posts emanating from bots, we can further examine their content. In particular, we use a frequency counts approach to measure the number of times that each website is cited in the bots' tweets. Specifically, we retrieve all URLs from these tweets and map them to a common form such that 'www.foxnews.com', 'media2.foxnews.com' or 'radio.foxnews.com' are all counted as foxnews. Then, we examine the political leanings of the websites by cross-referencing them with sources for bias ratings.</ns0:p><ns0:p>We accomplish this in the same manner as the recent work of Huszár and colleagues, using two sources for bias ratings: AllSides (https://www.allsides.com/media-bias/ratings) and Ad Fontes (https://adfontesmedia.com/interactive-media-bias-chart). In line with their work, we do not claim that either source provides objectively better ratings <ns0:ref type='bibr' target='#b32'>(Husz&#225;r et al., 2022)</ns0:ref>, hence we report both. If a website is rated by neither source, then we access it to read its content and evaluate it manually; such evaluations are disclosed explicitly.</ns0:p><ns0:p>In addition to sources, we examined themes. In line with the typical computational approach used in similar studies on Twitter <ns0:ref type='bibr' target='#b35'>Jelodar et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Ostrowski (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b55'>Negara et al. (2019)</ns0:ref>, we use Latent Dirichlet allocation (LDA) to extract topics. To allow for a fine-grained comparison, we perform the extraction on every day of the data collection and separate the themes of human posts from those of bots.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The size of the dataset after preprocessing is provided in the first subsection. For transparency and replicability of research, tweets are available at https://osf.io/3nsf8/; they are organized by key actors and labeled with the sentiment expressed or whether they originate from a bot.</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>In Twitter research, pre-processing often leads to removing most of the data. For example, our previous research on Twitter regarding the Supreme Court <ns0:ref type='bibr' target='#b72'>(Sandhu et al., 2019a)</ns0:ref> discarded 87&#8722;89% of the data, while our examination of Twitter and obesity discarded 73% of the data <ns0:ref type='bibr' target='#b73'>(Sandhu et al., 2019b)</ns0:ref>. The reason is that pre-processing has historically involved a series of filters (e.g., removing words that are not deemed informative in English, removing hashtags and emojis), which were necessary because the analysis model could not satisfactorily cope with raw data. In contrast, BERT can directly take tweets as input. Our pre-processing thus only eliminated non-English tweets and tweets with no substantive information that consisted of only hyperlinks. As a result, most of the data is preserved for analysis: we kept 11,007,403 of the original 13,568,750 tweets (Table 7).</ns0:p></ns0:div> <ns0:div><ns0:head>Sentiment analysis</ns0:head><ns0:p>Our sentiment analyses show significant negative sentiment for all key actors (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). As expected, when referred to by their nicknames, tweets about the actors were almost unanimously negative. Of the four key actors, Adam Schiff has the most polarizing results, with the highest proportion of positive tweets (9.20%) and the second highest proportion of negative tweets (74.10%).
This matches observations during the trial, suggesting that Republicans strongly disliked Schiff for his leading role in the Impeachment Trial, while Democrats supported his efforts. Although the proportions of sentiments expressed vary over time (Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>), we note that negative sentiments dominate on every one of the six dates for data sampling.</ns0:p></ns0:div> <ns0:div><ns0:head>Bot Detection</ns0:head><ns0:p>Results from bot detection show that at least 22-24% of tweets were sent by bot-controlled accounts, with a few examples provided in Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>. Deleted accounts as well as accounts for which no information was obtained are grouped together in the 'Unknown' category. As Twitter will remove bots that manipulate conversations, it is possible that a significant share of the 'Unknown' category is also composed of bots.</ns0:p><ns0:p>Although the overall tone of the tweets is negative (Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>), the ratio of negatives-to-positives shows a clear differentiation in the use of bots compared to humans. For example, consider Adam Schiff (Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, top-left). Based on tweets by humans, he has a ratio of 583,616/70,666 &#8776; 8.3 negatives-to-positives. In comparison, this ratio is 354,241/36,542 &#8776; 9.7 for tweets authored by bots. Overall, we observe that the only cases in which bots had a greater negative-to-positive ratio of posts than humans were for Democrats: Adam Schiff (9.7 negative tweets by bots for each positive vs. 8.3 for humans), Chuck Schumer (40.2 vs 33.4), Joe Biden (39.1 vs 33.8), and Nancy Pelosi (10.9 vs 10.2). In contrast, bots were always less negative than humans for Republicans: Donald Trump (6.1 for bots vs 7.9 for humans), Lindsey Graham (8.1 vs 9.0), Mitch McConnell (19.7 vs 23.6), and Stephen Miller (59.6 vs 103.8).</ns0:p><ns0:p>We did not find evidence of a clear temporal trend (e.g., monotonic increase or decrease) of bots over time (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>), which may suggest that they were employed based on specific events.</ns0:p></ns0:div> <ns0:div><ns0:head>Top Sources Used by Bots</ns0:head><ns0:p>A total of 1,171 distinct websites were used as sources across 54,899 tweets classified as written by bot accounts. The complete list is provided as part of our supplementary online materials (https://osf.io/3nsf8/). Most of these websites propose news, hence they were evaluated with respect to the bias ratings from AllSides and Ad Fontes.</ns0:p><ns0:p>Given the large number of distinct websites, we focused on those used at least 50 times, excluding the categories aforementioned (social media, web hosting, search engines). When using AllSides, the sample appears relatively balanced (Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>), because a large number of websites are not rated. Ad Fontes provides more evaluations, which starts to show that websites lean to the right.</ns0:p>
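A minimal sketch of the frequency-count step described in the Methods (mapping URL variants such as media2.foxnews.com to a single site name and counting citations in bot tweets) is given below; the function and variable names are assumptions introduced only for illustration.

```python
# Minimal sketch: normalize URLs cited in bot tweets to a common site name
# (e.g., media2.foxnews.com -> foxnews) and count how often each is cited.
import re
from collections import Counter
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://\S+")

def site_name(url):
    host = urlparse(url).netloc.lower()            # e.g. media2.foxnews.com
    parts = [p for p in host.split(".") if p]
    # Keep the registered name (second-to-last label), e.g. 'foxnews'.
    return parts[-2] if len(parts) >= 2 else host

def count_sources(bot_tweets):
    counts = Counter()
    for tweet in bot_tweets:                        # bot_tweets: list of tweet texts
        for url in URL_PATTERN.findall(tweet):
            counts[site_name(url)] += 1
    return counts

# Example usage (assuming bot_tweets holds the texts classified as bot posts):
# print(count_sources(bot_tweets).most_common(10))
```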
We complemented the ratings of Ad Fontes with manual evaluations (Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>), showing that (i) there are almost twice as many right-leaning sources as left-leaning sources and (ii) the right-leaning sources are much more extreme.</ns0:p><ns0:p>For example, 41 websites are more right-leaning than FoxNews while only 16 were more left-leaning than CNN. The imbalance is even more marked when we consider the volume of tweets. Indeed, out of the 10 most used websites, eight lean to the right and two to the left (italicized): FoxNews (3.73% of tweets with sources), Trending Politics (2.80%), Breitbart (2.60%), The Right Scoop (2.49%), The Gateway Pundit (2.48%), Sara A Carter (2.19%), Wayne Dupree (1.86%), CNN (1.69%), Raw Story (1.57%), and Kerry Picket (1.41%). Tweets associated with some of these websites are provided in Table <ns0:ref type='table' target='#tab_10'>10</ns0:ref> as examples. In sum, there is ample evidence that the bots tend to promote a Republican viewpoint, often based on misleading sources.</ns0:p><ns0:p>Our findings are similar to <ns0:ref type='bibr' target='#b86'>Tripodi and Ma (2022)</ns0:ref>, who noted that almost two thirds of the sources in official communication from the White House under the Trump Administration relied on 'RWME, a network fueled by conspiracy theories and fringe personalities who reject normative journalistic practices.'</ns0:p><ns0:p>Although other news outlets were cited, such as the Washington Post and CNN, this was often in the context of characterizing them as 'fake news'. Other recent analyses have also noted that the use of popular far-right websites was more prevalent in counties that voted for Trump in 2020 <ns0:ref type='bibr' target='#b13'>(Chen et al., 2022)</ns0:ref>. The possibility for websites that fuel partisanship and anger to be highly profitable may have contributed to their proliferation <ns0:ref type='bibr' target='#b90'>(Vorhaben, 2022)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Themes for Humans Vs. Bots</ns0:head><ns0:p>Interactive visualizations for themes are provided as part of our supplementary online materials (https://osf.io/3nsf8/), with one example shown in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>. Themes should be interpreted with caution.</ns0:p><ns0:p>For example, consider posts on December 6th, 2019. For bots, a major theme links Senator Nancy Pelosi with 'hate', whereas for humans she is first associated with impeachment, house, president, and then only with hate. However, bots are not actually conveying images of hatred towards Pelosi. Rather, the connection is related to a news story that occurred the day before, in which a reporter asked Pelosi if she hated Trump, and she responded that she did not hate anyone; the reaction went viral. Although bots have picked up on it more than the humans, it is only tangentially related to impeachment and does not indicate that the bots hate Pelosi.</ns0:p><ns0:p>A more robust and relevant pattern, across the entire time period, is that Biden is a much more salient term for the humans than the bots. Based on associated terms, it appears to be connected to Trump and other Republicans (particularly Lindsey Graham and outlets such as Fox News) trying to distract from impeachment by making allegations that Biden was corrupt in his dealings with Ukraine. It thus appears that bots and humans occasionally focus on different stories.
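The per-day topic extraction underlying these observations can be sketched with scikit-learn's LDA implementation. The snippet below is a hedged illustration rather than the exact configuration used for the interactive visualizations; the number of topics, vectorizer settings, and the tweets_by_day container are assumptions.

```python
# Minimal sketch: fit LDA separately on bot and human tweets for one
# collection date and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topics_for(tweets, n_topics=10, n_top_words=8):
    vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
    dtm = vectorizer.fit_transform(tweets)          # document-term matrix
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(dtm)
    vocab = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [vocab[i] for i in weights.argsort()[:-n_top_words - 1:-1]]
        print(f"topic {k}: {', '.join(top)}")

# Assumed structure: tweets_by_day[date]['bot'] and ['human'] hold tweet texts.
# topics_for(tweets_by_day["2019-12-06"]["bot"])
# topics_for(tweets_by_day["2019-12-06"]["human"])
```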
Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>tions via social media, by spending 'as much as $21,000' on a Facebook ad entitled 'Biden Corruption', which included debunked claims that then Vice President Joe Biden had tied Ukrainian aid to dropping an investigation of his son <ns0:ref type='bibr' target='#b28'>(Grynbaum and Hsu, 2019)</ns0:ref>. Throughout the process, the President denigrated Democrats as a group and individually in his public comments and on Twitter. Strategies have included the use of nicknames, which contributes to painting a gallery of 'known villains' <ns0:ref type='bibr' target='#b52'>(Montgomery, 2020)</ns0:ref> and fits within the broader pattern of the far right in online harassment of political enemies <ns0:ref type='bibr' target='#b5'>(Bambenek et al., 2022)</ns0:ref>. This use of nicknames also echoes Trump's strategy in employing derogatory language when depicting other groups, such as the frequent use of terms such as 'animal' or 'killer' when referring to immigrants. Such strategies are part of a broader trend of incivility by American politicians on Twitter, with uncivil tweets serving as powerful means of political mobilization and fundraising <ns0:ref type='bibr' target='#b4'>(Ballard et al., 2022)</ns0:ref> by drawing on emotions such as anger <ns0:ref type='bibr' target='#b36'>(Joosse and Zelinsky, 2022;</ns0:ref><ns0:ref type='bibr' target='#b92'>Wenzler et al., 2022)</ns0:ref>; the ensuing attention may even lead politicians to engage into greater incivility <ns0:ref type='bibr' target='#b23'>(Frimer et al., 2022)</ns0:ref>. In the case of Trump, the violence-inducing rhetoric resulted in permanent suspension of his Twitter account, days before being charged by the House of Representatives for 'incitement of insurrection' <ns0:ref type='bibr' target='#b93'>(Wheeler and Muwanguzi, 2022)</ns0:ref>.</ns0:p><ns0:p>Previous research on Trump and social media mining concluded that 'political troll groups recently gained spotlight because they were considered central in helping Donald Trump win the 2016 US presidential election' <ns0:ref type='bibr' target='#b21'>(Flores-Saviaga et al., 2018)</ns0:ref>. Far from isolated individuals with their own motives, research has shown that 'trolls' could be involved in coordinated campaigns to manipulate public opinion on the Web <ns0:ref type='bibr' target='#b98'>(Zannettou et al., 2019)</ns0:ref>. Given the polarization of public opinions and the 'hyper-fragmentation of the mediascape', such campaigns frequently tailor their messages to appeal to very specific segments of the electorate <ns0:ref type='bibr' target='#b65'>(Raynauld and Turcotte, 2022)</ns0:ref>. A growing literature also highlights that the groups involved in these campaigns are occasionally linked to state-funded media <ns0:ref type='bibr' target='#b80'>(Sloss, 2022)</ns0:ref>, as seen by the strategic use of Western platforms by Chinese state media to advance 'alternative news' <ns0:ref type='bibr' target='#b44'>(Liang, 2021)</ns0:ref> with a marked preference for positives about China <ns0:ref type='bibr' target='#b94'>(Wu, 2022)</ns0:ref> or the disproportionate presence of bots among followers of Russia Today (now RT) <ns0:ref type='bibr' target='#b15'>(Crilley et al., 2022)</ns0:ref>.</ns0:p><ns0:p>In this paper, we pursued three questions about bots: their overall involvement (Q1), their targets (Q2), and the sources used (Q3). 
We found that bots were actively used to influence public perceptions on online social media (Q1). Posts created by bots were almost exclusively negative and they targeted Democrat figures more than Republicans (Q2), as evidenced both by a difference in ratio (bots were more negative on Democrats than Republicans) and in composition (use of nicknames). While the effect of a presidential tweet has been debated <ns0:ref type='bibr' target='#b50'>(Miles and Haider-Markel, 2020)</ns0:ref>, there is evidence that such an aggressive use of Twitter bots enhances political polarization <ns0:ref type='bibr' target='#b25'>(Gorodnichenko et al., 2021)</ns0:ref>. Consequently, the online political debate was not so much framed as an exchange of 'arguments' or support of various actors, but rather as a torrent of negative posts, heightened by the presence of bots. The sources used by bots confirm that we are not in the presence of arguments supported by factual references; rather, we witnessed a large use of heavily biased websites, disproportionately promoting the views of partisan or extreme right groups (Q3). Since social media platforms are key drivers of traffic towards far-right websites (together with search engines) <ns0:ref type='bibr' target='#b13'>(Chen et al., 2022)</ns0:ref>, it is important to note that many of the links towards such websites are made available to the public through bots. Although our findings may point to the automatic removal of bots from political debates as a possible intervention <ns0:ref type='bibr' target='#b11'>(Cantini et al., 2022)</ns0:ref>, there is evidence that attempts at moderating content can lead to even greater polarization <ns0:ref type='bibr' target='#b87'>(Trujillo and Cresci, 2022)</ns0:ref>, hence the need for caution when intervening.</ns0:p><ns0:p>While previous studies found a mostly anti-Trump sentiment in human responses to tweets (Roca-Cuberes and Young, 2020), the picture can become different when we account for the sheer volume of bots and the marked preference in their targets. This use of bots was not an isolated incident, as the higher use of bots by Trump compared to Clinton was already noted during the second U.S. Presidential Debate <ns0:ref type='bibr' target='#b31'>(Howard et al., 2016)</ns0:ref>, with estimates as high as three pro-Trump bots for every pro-Clinton bot <ns0:ref type='bibr' target='#b20'>(Ferrara, 2020)</ns0:ref>. Our analysis thus provides further evidence for the presence of 'computational propaganda' in the US.</ns0:p><ns0:p>There are three limitations to this work. First, it is possible that tweets support an individual despite being negative. However, this effect would be more likely for Republicans than Democrats, which does not alter our conclusions about the targeted use of bots. Indeed, Trump's rhetoric has been found to be 'unprecedentedly divisive and uncivil' <ns0:ref type='bibr' target='#b9'>(Brewer and Egan, 2021)</ns0:ref>, which can be characterized by a definite and negative tone <ns0:ref type='bibr' target='#b75'>(Savoy, 2021)</ns0:ref>. Consequently, some of the posts categorized as negatives may Manuscript to be reviewed</ns0:p><ns0:p>Computer Science not be against Trump, but rather amplify his message and rhetoric to support him. 
The prevalence of such cases may be limited, as a recent analysis (over the last two US presidential elections) showed that the prevalence of negative posts was associated with lower popular support <ns0:ref type='bibr' target='#b78'>(Shah et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Second, although we observe an overwhelmingly negative tone in posts from bots, there is still a mixture of positive and negative tweets. This mixture may reflect that the use of bots in the political sphere is partly a state-sponsored activity. That is, various states intervene and their different preferences produce a heterogeneous set of tweets <ns0:ref type='bibr' target='#b40'>(Kie&#223;ling et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b82'>Stukal et al., 2019)</ns0:ref>. Although studies on centrally coordinated disinformation campaigns often focus on how one specific entity (attempts to) shape the public opinion <ns0:ref type='bibr' target='#b38'>(Keller et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Im et al., 2020)</ns0:ref>, complex events involve multiple entities promoting different messages. For example, analysis of the 2016 election showed that IRA trolls generated messages in favor of Trump, whereas Iranian trolls were against Trump <ns0:ref type='bibr' target='#b98'>(Zannettou et al., 2019)</ns0:ref>. It is not currently possible to know with certainty which organizations were orchestrating which bot accounts, hence our study cannot disentangle bot posts to ascribe their mixed messages to different organizations.</ns0:p><ns0:p>Finally, in line with previous large-scale studies in natural language processing (NLP) regarding</ns0:p><ns0:p>Trump, we automatically categorized each post with respect to sentiment <ns0:ref type='bibr' target='#b83'>(Tachaiya et al., 2021)</ns0:ref>. Although this approach may involve manual reading for a small fraction of tweets (to create an annotated dataset that then trains a machine learning model), most of the corpus is then processed automatically. Such an automatic categorization allows analysts to cope with a vast amount of data, thus favoring breadth in the pursuit of processing millions of tweets. In contrast, studies favoring depth have relied on a qualitative approach underpinned by a manual analysis of the material, which allows examination of the 'storytelling' aspect through changes in rhetoric over time <ns0:ref type='bibr' target='#b62'>(Phelan, 2021)</ns0:ref>. As our study is rooted in NLP and performed over millions of tweets, we did not manually read them to construct and follow narratives. This may be of interest for future studies, who could use a complementary qualitative approach to contrast the rhetoric of posts from bots and humans, or how bots may shape the nature of the arguments rather than only their polarity. Such mixed methods studies can combine statistical context analysis and qualitative textual analysis (O'Boyle and <ns0:ref type='bibr' target='#b58'>Haq, 2022)</ns0:ref> to offer valuable insights.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We found that bots have played a significant role in contributing to the overall negative tone of the debate during the first impeachment. Most interestingly, we presented evidence that bots were targeting Democrats more than Republicans. The sources promoted by bots were twice as likely to espouse</ns0:p><ns0:p>Republican views than Democrats, with a noticeable presence of highly partisan or even extreme right websites. 
Together, these findings suggest an intentional use of bots as part of a larger strategy rooted in computational propaganda.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022) Manuscript to be reviewed Computer Science (Q1) Are bots actively involved in the debate? (Q2) Do bots target one political affiliation more than another? (Q3) Which sources are used by bots to support their arguments?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Classification results (sentiments and bots) for each actor. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='10,141.73,440.21,413.58,244.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Percentage of tweets per category of sentiments over time for each actor. Note that using a percentage instead of the absolute number of tweets allows to compare results across actors, since each one is associated with a different volume of tweets.</ns0:figDesc><ns0:graphic coords='11,141.73,63.79,413.57,259.56' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Joint examination of sentiments and bots for each major actor, with additional political figures included. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='13,141.73,206.10,413.57,206.78' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Temporal trend in the number of posts by bots for each major actor, with additional political figures included. The high-resolution figure can be zoomed in, using the digital version of this article.</ns0:figDesc><ns0:graphic coords='13,141.73,470.45,413.60,213.94' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Interactive visualization for topics from bots on January 21st 2020.</ns0:figDesc><ns0:graphic coords='15,141.73,384.52,413.58,311.82' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Distribution of tweets collected</ns0:figDesc><ns0:table><ns0:row><ns0:cell>3/19</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Actor</ns0:cell><ns0:cell>Nicknames</ns0:cell></ns0:row><ns0:row><ns0:cell>Nancy Pelosi</ns0:cell><ns0:cell>Nervous Nancy</ns0:cell></ns0:row><ns0:row><ns0:cell>Adam Schiff</ns0:cell><ns0:cell>Shifty Schiff</ns0:cell></ns0:row><ns0:row><ns0:cell>Joe Biden</ns0:cell><ns0:cell>Sleepy Joe, Creepy Joe</ns0:cell></ns0:row><ns0:row><ns0:cell>Mitch McConnell</ns0:cell><ns0:cell>Midnight Mitch</ns0:cell></ns0:row></ns0:table><ns0:note>. language detection using langdetect 4/19 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Nicknames of key actors</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model Name</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Metric</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.62</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.8816</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.744</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.537 0.2562</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.3465</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.478</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.2104</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.2922</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.658</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.664</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.918</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.7707</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.638 0.3607</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.4608</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.617</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.0979</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.1686</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell cols='2'>Accuracy 0.7398</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.821</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.806</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.813</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.6246 0.666</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.6348</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.612</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.584</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.598</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.635</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.652</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.7535</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regressor</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.543 0.303</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.389</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.72</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.13125</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.217</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Overall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.639</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Negative</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.762</ns0:cell></ns0:row><ns0:row><ns0:cell>Support Vector Machine</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Precision Recall</ns0:cell><ns0:cell>0.637 0.238</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.3465</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Precision</ns0:cell><ns0:cell>0.665</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Positive</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>0.128</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell>0.199</ns0:cell></ns0:row></ns0:table><ns0:note>5/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022) Manuscript to be reviewed Computer Science of classification accuracy for BERT (which we use in this study), TextBlob library, and two scikit-learn algorithms. 6/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022)Manuscript to be reviewedComputer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Botometer results</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Hyper-parameters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Performances</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Approach</ns0:cell><ns0:cell>Values explored via grid search</ns0:cell><ns0:cell>Best values</ns0:cell><ns0:cell>Acc. F1</ns0:cell><ns0:cell>ROC-AUC</ns0:cell><ns0:cell>Pre-cision</ns0:cell><ns0:cell>Re-call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Criterion (gini / entropy),</ns0:cell><ns0:cell>entropy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>max depth (1, 2, . . ., 10), min samples leaf (2, 3, . . ., 20),</ns0:cell><ns0:cell>5 14</ns0:cell><ns0:cell cols='2'>.885 .903 .890</ns0:cell><ns0:cell>.938</ns0:cell><ns0:cell>.871</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max leaf nodes (1, 2, . . ., 20)</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>eXtreme Gradient</ns0:cell><ns0:cell>Max depth (5, 6, . . 
., 10),</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Boosting (XGBoost)</ns0:cell><ns0:cell>alpha (.1, .3, .5), learning rate (.01, .02, . . ., .05),</ns0:cell><ns0:cell>.5 .02</ns0:cell><ns0:cell cols='2'>.892 .909 .893</ns0:cell><ns0:cell>.933</ns0:cell><ns0:cell>.887</ns0:cell></ns0:row><ns0:row><ns0:cell>for trees</ns0:cell><ns0:cell>estimators (100, 200, 300)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Max depth (5, 6, . . ., 10),</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Random Forests</ns0:cell><ns0:cell>criterion (gini / entropy),</ns0:cell><ns0:cell>entropy</ns0:cell><ns0:cell cols='2'>.898 .915 .899</ns0:cell><ns0:cell>.934</ns0:cell><ns0:cell>.897</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>estimators (100, 200, 300)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Classification performances on bot detection.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Date</ns0:cell><ns0:cell cols='2'>Tweets Collected Proportion remaining</ns0:cell></ns0:row><ns0:row><ns0:cell>6 Oct 2019</ns0:cell><ns0:cell>1,877,693</ns0:cell><ns0:cell>95.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>17 Oct 2019</ns0:cell><ns0:cell>1,830,552</ns0:cell><ns0:cell>93.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>14 Nov 2019</ns0:cell><ns0:cell>1,828,402</ns0:cell><ns0:cell>92.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>5 Dec 2019</ns0:cell><ns0:cell>1,855,771</ns0:cell><ns0:cell>94.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>19 Dec 2019</ns0:cell><ns0:cell>1,998,802</ns0:cell><ns0:cell>97.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>21 Jan 2020</ns0:cell><ns0:cell>3,230,843</ns0:cell><ns0:cell>96.1%</ns0:cell></ns0:row></ns0:table><ns0:note>). 7/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Distribution of Tweets</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The distribution of sources is heavily imbalanced. The vast majority (n=909, 77.62%) of these websites were used less than 10 times each, while the top 10 websites were used 32,050 Merry Christmas Eve Patriots! I have a grateful heart for President Donald Trump, a man who loves our God and our country, and defends our people. Keep the faith Patriots, and know that we are on the right path. God Bless you all. #TRUMP2020Landside https://t.co/y5duaB6wrR @RyanHillMI @realDonaldTrump You're right. It wasn't a trial. It was a perfect example of tyranny. But, tweets can only be so long and impeachment didn't fit. So, trial. Two sample tweets for each day of data collection.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>times (i.e., in 58.38% of tweets with URLs). The most cited website is Twitter (19258 tweets, 35.08%), as expected by the practice of retweets. 
Several other websites are social media (YouTube in 1884 tweets or 3.43%, Facebook in 266 tweets or 0.48%, Periscope in 215 tweets or 0.39%), web hosting solutions (WordPress in 111 tweets or 0.20%, Change.org in 106 tweets or 0.19%, Blogspot in 101 tweets or 0.18%), or search engines (Civiqs in 65 tweets or 0.12%, Google in 65 tweets or 0.12%). The remainder are primarily websites that propose news, hence they were evaluated with respect to the bias ratings from9/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022) Manuscript to be reviewed 10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67829:2:0:NEW 22 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Website categories per source for bias rating (AllSides, Ad Fontes), using their nomenclature (e.g., 'very left' in AllSides, 'hyper-partisan left' in Ad Fontes). * Manual screening was applied to websites not rated by Ad Fontes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Sample tweets pointing to hyper-partisan right or extreme right websites.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>sources), Trending Politics (2.80%), Breitbart (2.60%), The Right Scoop (2.49%), The Gateway Pundit</ns0:cell></ns0:row><ns0:row><ns0:cell>(2.48%), Sara A Carter (2.19%), Wayne Dupree (1.86%), CNN (1.69%), Raw Story (1.57%), Kerry Picket</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"DEPARTMENT OF COMPUTER SCIENCE AND SOFTWARE ENGINEERING College of Engineering and Computing Benton Hall Room 205 Academic Editor PeerJ Computer Science OXFORD, OHIO 45056 (513) 529-0340 (513) 529-0333 FAX March 22, 2022 Dear Dr Kathiravan Srinivasan, We are delighted by the feedback from the three reviewers. We agree with their assessment that the manuscript has been significantly improved, thanks to their guidance. We noted the final two minor points from reviewer #3: • Discrepancy in number of tweets collected. In the abstract, the authors noted that they collected more than 10 million tweets. However, in the Methods section (Line 141), the authors noted they collected more than 13 million tweets. This is likely a typo, but it needs to be corrected → The reviewer is correct. The abstract should have said more than 13M, rather than rounding it to 10. We thus corrected the abstract as 13. • To be consistent with official spelling, I recommend change these company names: “youtube” to “YouTube,” “facebook” to “Facebook,” “civiqs” to “Civiqs,” “google” to “Google,” “wordpress” to “WordPress,” “change.org” to “Change.org,” “blogspot” to “Blogspot,” etc. This specific comment is particular to Lines 308-315. Interestingly, company names are correctly capitalized in Lines 316-330, so I recommend revising to be consistent. I also recommend checking the rest of the manuscript (text, figures, tables, etc.) for these issues. → We have made all of these replacements, as can be seen in track changes. We look forward to working with the publishing office at PeerJ and will continue to be very reactive if any other formatting matters arise. On behalf of the authors, Philippe J. Giabbanelli, PhD Associate Professor, Dept. of Computer Science & Software Engineering, Miami University Associate Editor, Social Network Analysis & Mining "
Here is a paper. Please give your review comments after reading it.
406
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Author verification of handwritten text is required in several application domains and has drawn a lot of attention by research community due to its importance. Though, several approaches have been proposed for the text-independent writer verification of handwritten text, none of these have addressed the problem domain where author verification is sought based on partially damaged hand written documents (e.g. during forensic analysis).</ns0:p><ns0:p>In this paper, we propose an approach for offline text-independent writer verification of handwritten Arabic text based on individual character shapes (within Arabic alphabet). The proposed approach enables writer verification for partially damaged documents where certain handwritten characters can still be extracted from the damaged document. We also provide a mechanism to identify which Arabic characters are more effective during writer verification process. We have collected a new dataset (AHAWP) for this purpose in a classroom setting with 82 different users. The dataset consists of 53199 user written isolated Arabic characters, 8144 Arabic words, 10780 characters extracted from these words. CNN based models are developed for verification of writers based on individual characters with an accuracy of 94% for isolated character shapes and 90% for extracted character shapes. Our proposed approach provided up to 95% writer verification accuracy for partially damaged documents.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Handwriting is a skill that most people develop over the years and is considered a behavioral distinguishing factor among individuals <ns0:ref type='bibr' target='#b26'>(Rehman et al., 2019a)</ns0:ref>. It is unlikely that two different individuals will produce very similar handwriting <ns0:ref type='bibr' target='#b31'>(Srihari et al., 2002)</ns0:ref> and therefore handwriting can be used for forensic analysis by domain experts and could be of major importance in the process of identifying authorship of documents, signatures forgery, alteration detection, legal documents verification, etc. The differences in people's handwriting are most likely to manifest and be very noticeable when the considered writing language is with many variations in terms of the language dimension such that the number of existing characters, their shapes and deviations when appearing in words compared to appearing in sentences or even when being isolated characters.</ns0:p><ns0:p>The Arabic language has lately been the focus of many researchers due to its widespread use as well as the inherent challenges in terms of being a complex character-based language. All Arabic language related research can be categorized into any of these areas, namely: character recognition, writer identification and verification, text to speech conversion, speech recognition, language analysis, understanding and translation <ns0:ref type='bibr' target='#b26'>(Rehman et al., 2019a)</ns0:ref>. Most of the research work has attempted to deal with the challenges encountered with the nature of the Arabic language. 
These challenges can be summarized in the following four items:</ns0:p><ns0:p>&#8226; Alphabet characters large variations -the number of characters along with their variations in terms of their positions in words (isolated, initial, end, and middle) include 101 different shapes <ns0:ref type='bibr' target='#b2'>(Ahmed and Sulong, 2014)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows all possible variations of all 28 characters in Arabic alphabet. It is worth mentioning that the same table could be augmented with three composed special Arabic character (Arabic long vowels 'alif ( )', 'waw ( )' and 'yaa ( )' with a 'hamza ( ') being placed on top or bottom of the character. This makes the total number of character variations reach 111.</ns0:p><ns0:p>&#8226; Character similarities -many of the characters are very similar in shape, however, the only difference may be the position of a single 'dot' or the number of dots.</ns0:p><ns0:p>&#8226; Human writing style -differs from one individual to another in terms of character shapes, size, overlap, and how neighboring characters are being interconnected. For instance, one individual may write multiple dots as a connected line segment, while others may write them separately.</ns0:p><ns0:p>&#8226; Arabic language cursive nature -in the sense that there exists a 'virtual' baseline line that connects words when writing sentences. This cursive nature distinguishes the Arabic language from others (such as Latin, Chinese, etc.). (1) verification and (2) identification. The former is considered as a binary classification problem which involves the decision of rejecting or accepting the authentication of a handwriting sample with other samples. On the other hand, the latter is a multinomial classification which attempts to identify a genuine writer among a list of many writers based on handwriting similarities.</ns0:p></ns0:div> <ns0:div><ns0:head>2/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67980:2:0:NEW 20 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Extensive research has been done related to the topic of identifying authors based on existing handwriting and numerous approaches have been suggested to handle such a problem. Most of the state of the art approaches were based on the analysis of words or sub-words from handwritten Arabic scripts <ns0:ref type='bibr' target='#b20'>(Maliki et al., 2017)</ns0:ref>. They have attempted to create feature vector following a manual feature extraction process; a step deemed difficult since it requires Arabic knowledge and expertise to ensure that the influential features are being targeted and eventually extracted <ns0:ref type='bibr' target='#b27'>(Rehman et al., 2019b)</ns0:ref>. It is shown that the performance of the writer identification model is highly dependent on the selection of features along with the applied classifier <ns0:ref type='bibr' target='#b27'>(Rehman et al., 2019b)</ns0:ref>. Writer identification using handwriting approaches can be categorized into two broad categories: (1) text-dependent and (2) text-independent <ns0:ref type='bibr' target='#b32'>(Xing and Qiao, 2016)</ns0:ref>. Earlier text-dependent approaches using words for writer identification (or verification) have focused on learning from a small set of user written words. 
Although this approach works quite well on these selected words, it is difficult to scale to include all possible words and their variants in the Arabic dictionary.</ns0:p><ns0:p>Existing research work in writer identification and verification domain has primarily focused on identifying techniques to determine authorship assuming that an undamaged user writing is available for the task at hand <ns0:ref type='bibr' target='#b0'>(Abdi and Khemakhem, 2015;</ns0:ref><ns0:ref type='bibr' target='#b2'>Ahmed and Sulong, 2014;</ns0:ref><ns0:ref type='bibr' target='#b26'>Rehman et al., 2019a)</ns0:ref>.</ns0:p><ns0:p>These techniques have not considered the likely scenario in forensic analysis where user identification or verification is sought based on a partially damaged documents as shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. This has increased the complexity of such task as it is difficult for machine learning based models to generalize for all possible distortions in handwritten documents.</ns0:p><ns0:p>In this paper, we propose a writer verification approach based on individual Arabic character shapes.</ns0:p><ns0:p>The proposed approach enables writer verification for partially damaged documents where certain handwritten characters can still be extracted from the damaged document. This approach also has the additional advantage that the set of Arabic character shapes is limited and a deep learning model can be easily trained on a complete set of characters as opposed to considering an unreasonably large Arabic word-based dataset. The writer verification can thus be performed by extracting character shapes from the undamaged part of the document and then using the learned model to identify/verify the user. It is important to mention that the proposed approach is not dependent on specific Arabic words, works very well for any word in the Arabic dictionary, and is not limited by the number of unique words captured in the dataset.</ns0:p><ns0:p>Our proposed approach requires a dataset of user handwritten Arabic characters per user. Unfortunately, existing Arabic writer identification datasets did not either provide user handwritten characters (rather Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>contain</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; How effective are the CNN-based architectures to learn a model that can verify users based on their handwritten Arabic character shapes?</ns0:p><ns0:p>&#8226; How accurately can we use the models trained on these individual character shapes to verify users based on Arabic character shapes extracted from the partially damaged handwritten document?</ns0:p></ns0:div> <ns0:div><ns0:head n='1.1'>Paper contributions</ns0:head><ns0:p>The key contributions of this paper can be summarized as follows:</ns0:p><ns0:p>&#8226; Proposes and evaluates an individual character based text-independent writer verification approach for partially damaged handwritten Arabic documents.</ns0:p><ns0:p>&#8226; Provides a comprehensive Arabic language dataset, which consists of 53 199 handwritten isolated Arabic characters, 8 144 Arabic words (which encompass all characters), and 10 780 character shapes extracted from these words. 
The extracted character shapes dataset is constructed through the manual extraction of every character from the set of Arabic words handwritten by multiple users.</ns0:p><ns0:p>&#8226; Provides a CNN based model to verify writers based on individual handwritten Arabic character shapes.</ns0:p><ns0:p>&#8226; Proposes a mechanism to identify the Arabic character shapes that are more effective for writer verification.</ns0:p><ns0:p>&#8226; Provides a comparative analysis of performance of writer verification based on isolated and extracted Arabic character shapes.</ns0:p><ns0:p>The rest of the paper is structured as follows. We start with an overview of the existing work being done related to writer identification in Section 2. Then, we describe our proposed writer verification approach in Section 3. It also describes the process used to develop our dataset. In Section 4, we describe the experimental results and provide a discussion of these results. Finally, in Section 5 conclusions are drawn.</ns0:p><ns0:p>The list of abbreviations and symbols used in this paper are listed in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref> respectively. Also, a detailed summary of the various datasets related to this research field is presented. Finally, the papers are classified based on the feature set used in the classification process, the classifiers used and their relative performance measures in terms of accuracy <ns0:ref type='bibr' target='#b2'>(Ahmed and Sulong, 2014)</ns0:ref>.</ns0:p><ns0:p>The second review paper provides a comprehensive review (of about 200 papers) in the domain of writer identification and verification <ns0:ref type='bibr' target='#b26'>(Rehman et al., 2019a)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>based on novel features extracted from Grey Level Run Length (GLRL) matrices. They also used the IFN/ENIT dataset to get the features from 2200 documents written in Arabic language. A comparison of the proposed approach is made with a popular Grey Level Co-occurrence Matrices (GLCM) technique and their approach gives better performance based on the fact that the GLRL matrices have more discriminatory features as compared to GLCM matrices. A chi-square distance measure for the proposed approach gave a classification accuracy of 96% taking into consideration the top-10 features of the GLRL matrices.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b3'>(Al-Dmour and Zitar, 2007)</ns0:ref>, the authors have used manual extraction of features based on hybrid spectral-statistical measures (SSMs) of the Arabic handwriting texture. They have compared their approach with multi-channel Gabor filters and GLCM matrices for feature extraction. Texture features included writing with a wide range of frequencies and orientations to make the features as generic as possible. The reduced feature set was obtained with a hybrid SVM-GA technique for making the model less complex. The writer identification results were obtained using four different classifiers; namely, Linear Discriminant Classifier (LDC), SVM, Weighted Euclidean Distance (WED), and k-NN with a maximum accuracy of 90%. In another paper, the authors propose and implement a writer identification technique based on extracting handwritten words that are characterized by two textural descriptors; namely, HOG and GLRL matrices. By fusing both similarity scores, they claim to have achieved a better writer identification. 
Their systems is tested on three datasets; namely, IFN/ENIT, KHATT and QUWI datasets which have handwritten documents from 411, 1000 and 1017 writers, respectively. The classification results that were achieved were 96.86%, 85.40% and 76.27% on the IFN/ENIT, KHATT and QUWI datasets, respectively <ns0:ref type='bibr' target='#b14'>(Hannad et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b20'>(Maliki et al., 2017)</ns0:ref>, the authors have proposed to generate features from sub-words rather than the whole words in the Arabic sentences or documents. This is a technique that falls in the category of text-dependent writer identification, where a dataset consists of 20 text samples from 95 writers. They identified 22 sub-words out of 49 which were contributing to a better performance in writer classification.</ns0:p><ns0:p>The features were compared for similarity with two distance measure techniques; namely, Euclidean distance and Dynamic Time Warping. With this approach, a classification accuracy of 98% was achieved.</ns0:p><ns0:p>One of the major drawback of this approach is that the experiments were conducted on an indigenous dataset which makes the comparison with other approaches not feasible. In a different technique of manual feature extraction, the authors proposed to avoid segmentation of words into sub-letters and used feature extraction techniques like Speed Up Robust Feature Transform (SURF) and k-NN to improve Arabic writer identification accuracy (Abdul <ns0:ref type='bibr' target='#b1'>Hassan et al., 2019)</ns0:ref>. K-means algorithm was also utilized to identify and cluster similar features to improve the prediction process in terms of speed and accuracy.</ns0:p><ns0:p>They have tested their approach on the IFN/ENIT dataset and have achieved a recognition rate of 96.6%.</ns0:p><ns0:p>In a different work <ns0:ref type='bibr' target='#b29'>(Sheikh and Khotanlou, 2017)</ns0:ref>, the authors devised a Hidden Markov Model (HMM) based writer identification for the Persian (Farsi) writings. The HMM classifier was used to capture the angular characteristics of the written text. This resulted in a network chain of angular models leading to a comprehensive database for classification purposes. The same database was used during writer identification and have achieved an accuracy of about 60%, which is a bit on the lower side. This could be attributed to the complex structure of the Persian written text. An interesting work has also been presented in <ns0:ref type='bibr' target='#b11'>(Durou et al., 2019)</ns0:ref> where a manual feature extraction based approach is compared with the automated feature extraction method. In the first category, SURF and SIFT methods were utilized during the feature extraction step and then the SVM and k-NN classifiers were used during writers classification. In the latter approach, the feature set was automatically generated with the help of CNN through the AlexNet model. <ns0:ref type='bibr' target='#b7'>(Balaha et al., 2021b)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>(Balaha et al., 2021c)</ns0:ref>. Deep learning has also been combined with other approaches such as Mathematical Morphology Operations (MMO) to provide for better character recognition <ns0:ref type='bibr' target='#b13'>(Elkhayati et al., 2022)</ns0:ref>. 
Some of the authors have tested their approaches on already existing datasets, apart from providing newer datasets to aid in further research in this area <ns0:ref type='bibr' target='#b17'>(Khosroshahi et al., 2021)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Learning Based</ns0:head></ns0:div> <ns0:div><ns0:head>Specific Application in Forensics:</ns0:head><ns0:p>In order to put our proposed work in context, we performed a survey of literature in the domain of handwriting identification with a specific focus on its application in forensic analysis. The majority of the proposed approaches fell under the category of conventional ML-based. For instance, researchers in <ns0:ref type='bibr' target='#b23'>(Okawa and Yoshida, 2017)</ns0:ref> have used feature extraction based on pen pressure and shape on a dataset of handwritten text in Japanese Kanji characters <ns0:ref type='bibr' target='#b22'>(Okawa and Yoshida, 2015)</ns0:ref> collected with the participation of 54 users. An accuracy of 96% was achieved using the SVM classifier.</ns0:p><ns0:p>In a different work <ns0:ref type='bibr' target='#b24'>(Parziale et al., 2016)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science verification approach using handwritten Arabic character shapes and then use these trained models to provide writer verification of partially damaged handwritten Arabic documents during forensic analysis.</ns0:p><ns0:p>The details of our proposed approach are described in the next section and the experimental results are presented in Section 4.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>PROPOSED APPROACH</ns0:head><ns0:p>As discussed in Section 1, our proposed approach addresses the following key concerns:</ns0:p><ns0:p>&#8226; Develop a CNN based model to verify users based on their handwritten Arabic character shapes</ns0:p><ns0:p>&#8226; Use the CNN model trained on individual character shapes for writer verification of partially damaged Arabic documents (by extracting characters from user handwritten text)</ns0:p><ns0:p>In order to train the model on Arabic character shapes, we had to collect a dataset of user handwritten Arabic characters. However, the Arabic character writing style varies depending on whether the character is written as an isolated character (not part of a word) or as part of a word. For example, Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the variations among the same characters written by two different users in isolation and as part of the word.</ns0:p><ns0:p>It can be seen that there are substantial variations in the same character written by the same user depending on whether it is written in isolation or as part of the word. We, therefore, had two possibilities for user handwritten character dataset collection:</ns0:p><ns0:p>&#8226; Each user writes all possible variants of Arabic characters (isolated characters)</ns0:p><ns0:p>&#8226; Each user writes certain Arabic text and then manually extract the Arabic character shapes from these words (extracted characters)</ns0:p><ns0:p>For a comparative analysis both datasets are collected and these datasets are referred to as: Isolated Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> shows samples collected from three different users. In real-world applications, users can use different writing instruments. We had advised the students to use a ballpoint pen but did not restrict them to any specific color. 
Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Extracted Alphabet Characters Dataset (EAD)</ns0:head><ns0:p>The extracted characters dataset consisted of characters cropped from user handwritten Arabic text. The users were asked to write Arabic text (consisting of ten Arabic words). The set of words were selected such that they covered the entire set of Arabic alphabet characters (but not all character shape variations).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows a sample of user written text. The characters were extracted from these words manually, and a sample of extracted characters from the first word is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. The complete dataset consists of 10, 780 extracted characters from different users. was that, in our problem domain we focused on writer verification rather than writer identification. The OVR approach is a suitable approach in our problem domain (a typical forensic analysis case), where user verification is sought from a small group of suspects and generally does not need hundreds of users.</ns0:p><ns0:p>Each user would then be verified by using its own model. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>&#8226; target class (representing the target user of this classifier)</ns0:p><ns0:p>&#8226; rest class (represented the rest of the users) Thus, the target class had fewer instances than the rest class. In order to balance the dataset, the target class data was augmented with a 5 percent random shift (left, right, up and down) along with a 10-degree random rotation. The CNN classifier was optimized using hyperparameter tuning to improve the validation accuracy. The trained models were then tested using each user's test set to determine test accuracy.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.3'>Writer Verification of Partially Damaged Arabic Documents</ns0:head><ns0:p>The individual writer verification models trained on isolated and extracted characters can then be used as components to verify authorship of damaged handwritten documents. It is a text-independent approach that can be used for any user written text. The approach works by extracting individual character (a i ) from each user written partially recovered text (w) where w = a 1 , a 2 , . . . , a m . Each a i &#8712; w can then be used to verify the target user (user j ) using their corresponding character based model f user j such that:</ns0:p><ns0:formula xml:id='formula_1'>f user j (a i ) =</ns0:formula><ns0:p>1; a i is verified to be written by user j 0; a i is not verified to be written by user j (1)</ns0:p><ns0:p>The verification accuracy (&#946; user j (w)) of the recovered text (w) for user j can thus be computed as:</ns0:p><ns0:formula xml:id='formula_2'>&#946; user j (w) = (&#8721; i f user j (a i )) |w|<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>We can define a threshold &#966; such that a document with text (w) is verified to be written by user j , if (&#946; user j (w) &gt;= &#966; ). 
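To make Eqs. (1) and (2) concrete, a minimal Python sketch of the document-level verification step is given below. It assumes (these are our assumptions, not statements from the text) that each trained one-versus-rest user model exposes a Keras-style predict() method returning the target-class probability for a preprocessed 64x64 character image, and that this probability is thresholded at 0.5 to obtain the per-character decision:

```python
import numpy as np

def verify_document(char_images, user_model, phi=0.75):
    """Document-level writer verification from per-character decisions.

    char_images : recovered character images a_1..a_m, each preprocessed to
                  shape (64, 64, 1) with pixel values scaled to [0, 1]
    user_model  : trained one-vs-rest model of the target user; predict() is
                  assumed to return the target-class probability per image
    phi         : decision threshold applied to the verification accuracy
    """
    x = np.asarray(char_images, dtype="float32")
    probs = np.ravel(user_model.predict(x))   # one score per character a_i
    f = (probs >= 0.5).astype(int)            # Eq. (1): f_user_j(a_i)
    beta = f.sum() / len(f)                   # Eq. (2): beta_user_j(w)
    return beta, bool(beta >= phi)            # document verified iff beta >= phi
```

The threshold φ is exposed as a parameter because, as discussed next, its proper value is application-dependent; the default of 0.75 simply mirrors the value used later in the experiments.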
We will use the notation &#945; user j (w) to refer to the writer verification of text w for user j : &#945; user j (w) = 1; &#946; user j &gt;= &#966; 0; otherwise (3)</ns0:p><ns0:p>We will use the notation &#946; (w) to refer to average writer verification accuracy (across n users) of user handwritten text w:</ns0:p><ns0:formula xml:id='formula_3'>&#946; (w) = (&#8721; j &#945; user j (w)) |n|<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>In this paper we use &#966; = 0.75 (i.e. any handwritten text is verified to be written by user j if 75% of recovered characters from the document are verified to be written by user j ). A proper selection of &#966; value is application-dependent. Figure <ns0:ref type='figure' target='#fig_8'>8</ns0:ref> shows how our proposed writer verification approach can be deployed to verify the authorship of any document.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>EXPERIMENTAL RESULTS AND DISCUSSION</ns0:head><ns0:p>The experiments were conducted on a GPU machine having 32 Gigabytes of memory, Nvidia GeForce GTX-1080 GPU with 2560 CUDA cores, and 3.70 GHz CPU with 6 cores. All the experiments were performed using the Python programming language with TensorFlow libraries. The experimental environment details are summarized in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Writer verification using isolated characters</ns0:head><ns0:p>Our initial analysis was conducted using the IAD dataset. The purpose of the analysis was to determine the efficacy of CNN based approach to verify a user based on their handwritten isolated Arabic characters.</ns0:p><ns0:p>We started with a CNN model with a single convolution and neural network layer. Figure <ns0:ref type='figure' target='#fig_9'>9</ns0:ref> <ns0:ref type='table' target='#tab_8'>5</ns0:ref>. The model takes as input 64X64</ns0:p><ns0:p>images and applies a convolutional layer with 128 filters (filter size 3X3). This is followed by an ELU activation layer to provide non-linearity and max pooling layer to extract prominent features and also reduce the features space. This was followed by three similar convolutional and max pooling layers. A dropout layer (probability=0.5) was added after each max pooling layer to reduce overfitting. The output of convolutional layers was 256 features that were then processed by a neural network hidden layer of 128 neurons followed by the output layer. The trained models were tested on the IAD test set. We represent recall and precision of i th model as &#947; i and &#961; i respectively. Equations 5 and 6 show their calculations, where &#964; is the total number of correct target class predictions, &#958; is the total errors made to verify the target class, and &#951; is the total errors made by the model to incorrectly identify the other users as the target user. So, in essence, &#947; shows the verification accuracy of the target user (i.e. ratio of correct target class verification out of the target user written character shapes). Henceforth, we will use the term &#947; as target user verification accuracy. 
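For reference, the optimized per-user architecture summarized in Table 5 can be expressed as the following Keras sketch. The layer sizes follow the table; the 'valid' padding, the use of ELU on every convolutional and hidden dense layer, the sigmoid output unit, and the Adam optimizer with binary cross-entropy loss are our assumptions where the text does not state them explicitly:

```python
from tensorflow.keras import layers, models

def build_user_model():
    """Sketch of the per-user one-vs-rest CNN summarized in Table 5."""
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),                # grayscale character image
        layers.Conv2D(128, (3, 3), activation="elu"),   # -> (62, 62, 128)
        layers.MaxPooling2D((2, 2)),                    # -> (31, 31, 128)
        layers.Dropout(0.5),
        layers.Conv2D(64, (3, 3), activation="elu"),    # -> (29, 29, 64)
        layers.MaxPooling2D((2, 2)),                    # -> (14, 14, 64)
        layers.Dropout(0.5),
        layers.Conv2D(64, (3, 3), activation="elu"),    # -> (12, 12, 64)
        layers.MaxPooling2D((2, 2)),                    # -> (6, 6, 64)
        layers.Dropout(0.5),
        layers.Conv2D(64, (3, 3), activation="elu"),    # -> (4, 4, 64)
        layers.MaxPooling2D((2, 2)),                    # -> (2, 2, 64)
        layers.Flatten(),                               # 256 features
        layers.Dense(128, activation="elu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),          # target class vs. rest
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Under these assumptions the sketch has 181,953 trainable parameters, matching the total reported in Table 5.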
The metric &#961; shows the ratio of correct target class verification out of all the target class predictions made by the model.</ns0:p><ns0:formula xml:id='formula_4'>&#947; i = &#964;/(&#964; + &#958; )</ns0:formula><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_5'>&#961; i = &#964;/(&#964; + &#951;)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The column 'test iad all' in Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref> shows the test accuracy (&#8486; test ), precision (&#961;) and target user verification accuracy (&#947; iad test ) for testing the model against all characters in the IAD test set. In this paper, we are mainly concerned with target user verification accuracy (&#947;). The average &#947; iad test is 91%, which indicates that the trained model works reasonably well on previously unseen isolated characters to verify the target user. Some users had a low &#947; iad test values (e.g. user06 has 82%) while a few others had a very high value of &#947; iad test (e.g. 100% for user02). The very high validation and test accuracy attained by some users can be attributed to their unique writing styles.</ns0:p><ns0:p>In order to understand the reason behind the lower recall for some of the users, we had to look into the performance of the model on each user written character. We collected the ratio of verification errors made per character by each target user model. We represent the ratio of verification error made by i th target user model against k th character as &#948; ik such that &#947; i = 1 &#8722; &#8721; k &#948; ik . Figure <ns0:ref type='figure' target='#fig_12'>11</ns0:ref> shows the average error (&#955; k ) across all users for each character for the IAD dataset where &#955; k = &#8721; i &#948; ik n where n is the total number of users. It can be seen that most of the characters got less than 10% error, but some characters (e.g. alif regular, lam regular, etc.) had high errors. For example, alif regular had a 40% average error. It can be attributed to the writing style of these characters, as alif regular is written like a straight line and there would be quite less distinction in its writing style across users.</ns0:p><ns0:p>The average errors (&#955; k ) shown in Figure <ns0:ref type='figure' target='#fig_12'>11</ns0:ref> do not provide us with enough details on whether the errors were made by a single user as an outlier or spread across a large set of users. In order to understand the distributions of errors, we show the individual error values (&#948; ik ) of two best, average and worst performing characters using heat map in Figure <ns0:ref type='figure' target='#fig_1'>12</ns0:ref>. It can be seen that the best performing characters (kaf regular and feh begin) perform well across all users. The worst performing characters (alif regular and alif hamza) perform worst across the majority of the users. Based on the above analysis, we can deduce that some character shapes have more distinguishing features while others have lesser distinguishing features for writer identification. Hence, it is better to ignore the worst performing character shapes for writer identification. We evaluated the model by eliminating the 25% worst performing character shapes (highlighted with bold font in Figure <ns0:ref type='figure' target='#fig_12'>11</ns0:ref>). The results of &#8486; iad reduced test and &#947; iad reduced test are shown in the 'test iad reduced' column in Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref>. 
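The character-level analysis and the construction of the reduced set of character shapes described above can be sketched as follows; the (n_users x n_characters) layout of the error matrix is our assumption about how the δ values of Figure 12 are stored:

```python
import numpy as np

def reduced_character_set(delta, char_names, drop_fraction=0.25):
    """Average the per-character error over all user models and drop the
    worst-performing character shapes.

    delta         : array of shape (n_users, n_chars); delta[i, k] is the
                    verification-error ratio of the i-th user model on the
                    k-th character shape (the values shown in Figure 12)
    char_names    : list of the n_chars character-shape names
    drop_fraction : fraction of worst-performing shapes to discard
    """
    lam = delta.mean(axis=0)                         # lambda_k, as in Figure 11
    n_drop = int(round(drop_fraction * len(char_names)))
    worst = set(np.argsort(lam)[::-1][:n_drop])      # indices of highest-error shapes
    kept = [name for k, name in enumerate(char_names) if k not in worst]
    return kept, lam
```

Re-testing each user model only on the retained shapes gives the 'test iad reduced' results; the same routine with drop_fraction=0.33 corresponds to the reduction later applied to the EAD dataset.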
It can be seen that the performance has improved for each user model with the reduced set of character shapes. The average model performance improved to 93.75% from 91.25%. The model trained on the IAD dataset performed quite well on the test set of isolated characters. However, in practice, we need to verify the writer based on written text rather than just the isolated characters. Therefore, we evaluated model performance on characters extracted from user written text by testing it against the test set of the EAD dataset. The column 'test ead' in Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref> shows the &#8486; ead test and &#947; ead test values for the EAD test set. The average &#947; ead test was a meager 74% and six out of twenty users had &#947; ead test values close to 50%. This means that model trained on the IAD dataset does not perform well on characters extracted from the text. As anticipated, the isolated characters are quite different from extracted character and therefore cannot be used as a reliable model to predict user written text. </ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Writer verification using extracted characters</ns0:head><ns0:p>As seen in the previous experiments, the models trained on isolated characters cannot be used reliably to identify user written text (i.e. characters extracted from the text). Therefore, we evaluated a CNN based OVR model that was trained using the EAD dataset. The results obtained for twenty randomly selected users' models are shown in Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref>. The average training and validation accuracies (&#8486; ead training and &#8486; ead validation ) of these models was 97.5% and 92% respectively. This shows that the models learned well on training data. Test accuracy (&#8486; ead test ) was also quite close to validation accuracy (89.2%). However, target user verification accuracy (&#947; ead test ) was close to 85% which is lower than the target user verification accuracy of isolated characters (&#947; iad all = 91.3%). Upon further investigation, it was found that this can be attributed to the presence of large variations within the extracted character shapes for the same user. In contrast, the isolated character shapes of the same user did not have such a large variation. When users are writing words in a flow, the shape of the same character changes across words. The shape of the character also varies depending upon how the writer joins it with the neighboring characters. To illustrate these variations in the characters written by the same user, samples of two different character shapes (ain middle and yaa middle) written by same user (user05) are shown in Figure <ns0:ref type='figure' target='#fig_5'>14</ns0:ref>.</ns0:p><ns0:p>It can also be noticed that some user verification models did not perform well, for example, user06</ns0:p><ns0:p>had target user verification accuracy of only 52.9%. On closer inspection, it was found that the model performed badly with more than 80% error on few characters (jeem middle, feh middle, ain middle, noon end, alif hamza, lam alif). For example, the average error on character 'jeem middle' from other user models was 13.1%, but the user06 model had an error of 94.7%. Similarly, character 'ain middle' had 97.4% error for user06 model while the average error for other users is only 12%. 
This large error is due to the resemblance of these characters with other users' handwritten characters.</ns0:p><ns0:p>We used the same methodology as described in Section 4.1 to identify the performance of individual characters. The average errors (&#955; k ) are shown in Figure <ns0:ref type='figure' target='#fig_0'>15</ns0:ref> and the 33% worst performing characters (having an average error larger than 17%) are highlighted in bold font. We re-evaluated the model using the reduced set of characters (i.e. using characters which are not highlighted with bold font in Figure <ns0:ref type='figure' target='#fig_0'>15</ns0:ref>).</ns0:p><ns0:p>The results of &#8486; ead reduced test and &#947; ead reduced test are shown in the 'test ead reduced' column in Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref>.</ns0:p><ns0:p>It can be seen that the performance has improved for each user model with the reduced set of characters. The average model performance improved to 87.3% from 85.3%.</ns0:p><ns0:p>It can also be observed in Figure <ns0:ref type='figure' target='#fig_0'>15</ns0:ref> that the similar shaped characters in EAD dataset have large variation in their errors. For example, we have noticed the disparities in average error of following similar shaped characters:</ns0:p><ns0:p>&#8226; yaa begin (18.1%) and beh begin (9.7%)</ns0:p><ns0:p>&#8226; Haa middle (17.4%), Khah middle (4.7%)</ns0:p><ns0:p>&#8226; Feh begin (11.1%), Qaf begin (5.3%)</ns0:p><ns0:p>&#8226; Teh middle (17.6%), Theh regular (12.4%)</ns0:p><ns0:p>&#8226; Noon end (27.2%), Noon regular(13.4%)</ns0:p><ns0:p>Upon further investigations, we discovered that sometimes a character shape written by a user matches with a different character shape of another user (when character shapes are similar). For example, we noticed that user09 did not put dots in the right place for yaa begin. This resulted in a large error because the model took it as beh begin of user07 These issues were less prominent in IAD because of character shapes written in isolation being more consistent, accurate (e.g. no dots related issues) and our choice of using a single character shape from within the group of similar shaped characters (as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). This resulted in having lesser variations in character shapes written by the same user and also lesser chances of errors caused by similar shaped characters. For instance, for IAD dataset the errors for Yaa-begin, Beh-begin, Noon-begin are 6.3%, 5.4% and 7.2% respectively, which has relatively smaller differences in errors.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Using character based models for writer verification of partially damaged documents</ns0:head><ns0:p>In this section, we describe the evaluation of the character based models (as mentioned in Section 4.2)</ns0:p><ns0:p>for writer verification of partially damaged documents. Each character extracted from the recovered text w written by user j was checked using the EAD model ( f user j (a i )). The writer verification accuracy of recovered document (&#946; user j (w)) for twenty randomly selected users is shown in Figure <ns0:ref type='figure' target='#fig_15'>16</ns0:ref>. 
The overall writer verification accuracy of the damaged documents (&#946; (w)) is also shown at the end of the It can be seen that overall user verification accuracy is highly dependent on the quality of characters recovered and it varies between 85% (with good performing character shapes=10%) to 95% (with good performing character shapes=90%). We get similar results when the percentage of a recovered document is smaller (say 10% document recovered). During forensic analysis, writer verification based on documents with higher recovered characters and higher overall accuracy would be preferred over writer verification based on lower recovery and lower overall accuracy. In some problem domains, forensic experts might be willing to trade off document recovery and only seek high-performance characters to increase the writer verification accuracy but in some other problem domains, experts might want to figure out verification accuracy based on the completely recovered document, irrespective of character shape quality.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>This paper provides a mechanism for writer verification of partially damaged handwritten documents (e.g.</ns0:p><ns0:p>during forensic analysis) where complete text is unavailable, but certain characters can still be extracted.</ns0:p><ns0:p>The paper proposes an individual character based approach for text-independent writer verification. The writer verification models based on isolated and extracted character shapes were developed using CNN.</ns0:p><ns0:p>The paper shows that writer verification based on individual isolated characters can be improved from 91% to 94% by eliminating the characters which do not provide any useful information to verify the writer. The paper shows a similar writer verification approach based on the characters extracted from user-written text. The writer verification accuracy based on extracted characters improves from 85% to 87% on a reduced set of good performing characters. The model performance on extracted characters is lower than on isolated characters because of the inconsistencies in user writing of characters as part of a word (depending on where the characters occur in the word). It was also considered desirable to extract only a single character from the group of similar shaped alphabet characters during extraction process, to reduce the chances of verification errors due to similarity of character shapes. Overall, it is shown that a writer verification accuracy between 80% to 95% can be attained for partially damaged documents of several degrees depending on the percentage of good performing character shapes extracted from the recovered document.</ns0:p></ns0:div> <ns0:div><ns0:head n='5.1'>Limitations and Future work</ns0:head><ns0:p>The work has few limitations and therefore can be extended in multiple ways. Although our EAD dataset was based on a carefully selected group of Arabic words that covered all Arabic alphabet characters, it did not include all character shapes (i.e. begin, middle, end and regular variants). A more comprehensive extracted alphabet characters dataset should be considered to include all character shapes. Additionally, writer identification (rather than just verification) of partially damaged handwritten documents would be a more challenging task. Also, the extraction of character shapes from handwritten text is a manual and tedious process which limits the scalability of our approach. 
This can be overcome with some other techniques such as automatically identifying intact parts of the document using object detection techniques and then using existing CNN based approaches on the recovered documents. It would also be interesting to evaluate the impact of 'transfer learning' on the accuracy of the developed models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Character shapes in Arabic alphabet grouped per similarity in writing style</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Writer verification for partially damaged document</ns0:figDesc><ns0:graphic coords='4,224.45,507.81,248.15,176.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Survey papers :</ns0:head><ns0:label>papers</ns0:label><ns0:figDesc>The first review paper presents a comprehensive summary (of about 50 research papers) of the research work related to Arabic writer identification. The authors classify the work based on text-dependent and text-independent studies and also based on writer identification versus verification.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Approaches: Most of the existing work using deep learning based techniques is used to automate the feature extraction phase but does not use deep learning based model as a classifier. For instance, the paper (Rehman et al., 2019b) uses transfer learning to use the feature-set generated by the pre-trained AlexNet CNN model (5 Conv layers and 3 FC layers) on the ImageNet dataset. The authors have utilized these features on the QUWI dataset which consists of pages written in both English and Arabic languages and have performed both text-dependent (100,000 words) and text-independent (60,000 words) approaches for identifying the handwriting of 1017 writers. The authors have performed segmentation of the document images where the text lines are extracted from the written paragraphs. The classification task is performed with the help of a multi-class SVM classifier. Data augmentation involves finding contours, sharpening and negative image generation of each word from the handwritten sentences. Performance results were obtained based on 80% of the data being used for training and 20% being used for testing purposes and have achieved an accuracy of 92.2% during the Arabic writer identification. The work in (Kumar and Sharma, 2020) identifies the writer without the need for segmenting the lines or words from the Arabic handwritten document. It proposes and evaluates an approach that is based on CNN and weakly supervised region selection mechanism in the input image, which is the complete document. The basic idea behind avoiding segmentation of the document into lines and words is, to 6/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67980:2:0:NEW 20 Mar 2022)Manuscript to be reviewed Computer Science extract the features from a document for different depth levels using the CNN model, where a window of different sizes (4 x 4, 8 x 8 and 16 x 16 grid cells) is applied. Then, features are selected from different cell regions of the document and a voting weight of each selected cell region is obtained. The class of the writer is then decided based on the combination of probability vectors of the selected cell with their weights. 
The authors have considered various languages and have used the IFN/ENIT dataset for Arabic writer identification. The proposed approach has achieved an accuracy of 98.24%.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Isolated vs. Extracted character shape variations</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Samples of handwritten isolated Arabic characters</ns0:figDesc><ns0:graphic coords='10,355.86,642.12,157.18,61.35' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Sample of user written text</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Writer verification using Arabic characters</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. A sample usage of the proposed writer verification approach</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Model accuracy with under-fit model (A), over-fit model (B) and optimized model (C)</ns0:figDesc><ns0:graphic coords='13,286.64,470.55,132.59,97.64' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>scheme is used to highlight the minimum, maximum and variation in the results. The model accuracy is represented as &#8486;. Therefore, &#8486; iad training column shows the training accuracy and &#8486; iad validation shows the validation accuracy during model training. It can be seen that average validation accuracy is 94% and the difference between training and validation accuracies is small. This indicates that the model has learnt quite well from the dataset.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67980:2:0:NEW 20 Mar 2022) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Average error (&#955; k ) of isolated characters across all users for IAD dataset</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 13 .Figure 14 .</ns0:head><ns0:label>1314</ns0:label><ns0:figDesc>Figure 13. Model accuracies using EAD dataset</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. 
Writer verification accuracy (&#946; user j (w)) of partially damaged documents with varying percentage of good performing character shapes in the recovered text</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>List of Abbreviations</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Abbreviation</ns0:cell><ns0:cell>Full Form</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell>Convolutional Neural Network</ns0:cell></ns0:row><ns0:row><ns0:cell>AHAWP</ns0:cell><ns0:cell>Arabic Handwritten Alphabet, Words and Paragraphs Per User</ns0:cell></ns0:row><ns0:row><ns0:cell>ML</ns0:cell><ns0:cell>Machine Learning</ns0:cell></ns0:row><ns0:row><ns0:cell>DL</ns0:cell><ns0:cell>Deep Learning</ns0:cell></ns0:row><ns0:row><ns0:cell>GLRL</ns0:cell><ns0:cell>Grey Level Run Length</ns0:cell></ns0:row><ns0:row><ns0:cell>GLCM</ns0:cell><ns0:cell>Grey Level Co-occurrence Matrix</ns0:cell></ns0:row><ns0:row><ns0:cell>SSM</ns0:cell><ns0:cell>Spectral Statistical Measures</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>Support Vector Machines</ns0:cell></ns0:row><ns0:row><ns0:cell>GA</ns0:cell><ns0:cell>Genetic Algorithm</ns0:cell></ns0:row><ns0:row><ns0:cell>LDC</ns0:cell><ns0:cell>Linear Discriminant Classifier</ns0:cell></ns0:row><ns0:row><ns0:cell>WED</ns0:cell><ns0:cell>Weighted Euclidean Distance</ns0:cell></ns0:row><ns0:row><ns0:cell>SURF</ns0:cell><ns0:cell>Speed Up Robust Features</ns0:cell></ns0:row><ns0:row><ns0:cell>HMM</ns0:cell><ns0:cell>Hidden Markov Model</ns0:cell></ns0:row><ns0:row><ns0:cell>AHCD</ns0:cell><ns0:cell>Arabic Handwritten Character Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SOM</ns0:cell><ns0:cell>Self Organising Maps</ns0:cell></ns0:row><ns0:row><ns0:cell>IAD</ns0:cell><ns0:cell>Isolated Alphabet Characters Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>EAD</ns0:cell><ns0:cell>Extracted Alphabet Characters Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>OVR</ns0:cell><ns0:cell>One Versus Rest</ns0:cell></ns0:row><ns0:row><ns0:cell>QUWI</ns0:cell><ns0:cell>Qatar University Writer Identification</ns0:cell></ns0:row><ns0:row><ns0:cell>KHATT</ns0:cell><ns0:cell>KFUPM Handwritten Arabic TexT</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>List of Symbols</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Symbol</ns0:cell><ns0:cell>Meaning</ns0:cell></ns0:row><ns0:row><ns0:cell>a i</ns0:cell><ns0:cell>Individual character</ns0:cell></ns0:row><ns0:row><ns0:cell>w</ns0:cell><ns0:cell>Recovered Arabic text</ns0:cell></ns0:row><ns0:row><ns0:cell>user j</ns0:cell><ns0:cell>j th writer</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>&#946; user j (w) Verification accuracy of recovered text (w) for j th writer</ns0:cell></ns0:row><ns0:row><ns0:cell>&#966;</ns0:cell><ns0:cell>Decision threshold</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>&#945; user j (w) Writer verification of recovered text (w)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946; (w)</ns0:cell><ns0:cell>Overall writer verification accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8486;</ns0:cell><ns0:cell>Model accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947; i</ns0:cell><ns0:cell>Recall</ns0:cell></ns0:row><ns0:row><ns0:cell>&#961; i</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell>&#948; ik</ns0:cell><ns0:cell>Ratio of errors made by i th model against k th 
character</ns0:cell></ns0:row><ns0:row><ns0:cell>&#955; k</ns0:cell><ns0:cell>Average error of each character across all writers</ns0:cell></ns0:row><ns0:row><ns0:cell>2 RELATED WORK</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>This section present an extensive survey and review of the research literature pertaining to the writer</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>identification and verification. Even though this domain encompasses various languages, we choose</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>to focus on the Arabic language related research to limit our scope of work. We propose categorizing</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>existing related literature into two broad categories which are based on the machine learning models being</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>considered in the writer identification context. The first category (Conventional ML-based approach) is</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>the one where the features from the handwritten text are extracted manually and then a Machine Learning</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(ML) classification algorithm is used for writer identification. In the second category (termed as Deep</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Learning based approach) the features are extracted automatically using a Deep Learning ML model and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>simultaneously user identification is performed. Before we proceed with reviewing both categories, we</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>present a summary of two related important survey papers we came across during the literature search.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>The dataset used for this purpose was the ICFHR-2012 which has Arabic documents written by 200 writers. Results showed that the CNN based model outperformed the SVM and k-NN based approaches by at least 4-5% improvement in terms of classification accuracy. Researchers have also used deep learning approaches for Arabic character recognition. Although this work is different from our domain, we present some of the recent work to highlight how CNN based approaches have been used on Arabic dataset. In a recent work (Altwaijry and Al-Turaiki, 2021), the authors have proposed a CNN-based Arabic Character Recognition. The CNN model was used to extract the features from the handwritten character and the softmax component of it was used to perform the</ns0:figDesc><ns0:table /><ns0:note>classification task. The work used two databases for this purpose; namely, AHCD (Arabic Handwriting Dataset) and Hijja datasets. Based on the experimental results, they claimed a prediction accuracy of 97% for the AHCD and 88% for the Hijja dataset. This work focuses on the identification of Arabic characters rather than writer identification.Similarly, the work presented in (El-Sawy et al., 2017) uses a CNN based approach for recognition of handwritten Arabic letters. It does not address the problem of writer identification from Arabic character shapes but rather focuses on Arabic character recognition problem. The said approach in this paper is useful in providing insights on how to design the CNN model and how to extract letters from words or sentences in this context. 
As a matter of fact, it uses 16,800 handwritten characters from 60 writers, where each writer wrote all the Arabic characters ten times on two separate forms. This data was fed to a CNN model which gave an accuracy of 94.9%. Recent works on applying CNN-based deep learning approaches have resulted in better accuracy</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>, the authors have proposed a process to generate statistical features (mean and variance) of English characters by measuring height, width and angles between the strokes of different characters. Other conventional ML modeling techniques which were considered were</ns0:figDesc><ns0:table><ns0:row><ns0:cell>based on unsupervised approach of Self Organizing Maps (SOM) ((Schomaker et al., 2007)) and Neural</ns0:cell></ns0:row><ns0:row><ns0:cell>Networks combined with Genetic Algorithm ((Pervouchine and Leedham, 2007)) for forensic analysis of</ns0:cell></ns0:row><ns0:row><ns0:cell>English characters. Also, some of the other approaches were from non-ML domain such as Dynamic Time</ns0:cell></ns0:row><ns0:row><ns0:cell>Warping applied to Allographs ((Niels et al., 2007)) and adopting feature-based codebooks generated by</ns0:cell></ns0:row><ns0:row><ns0:cell>using Fourier and Wavelet transforms of the handwritten characters ((Kumar et al., 2014)). The research</ns0:cell></ns0:row></ns0:table><ns0:note>work in (<ns0:ref type='bibr' target='#b15'>(He and Schomaker, 2020)</ns0:ref>) is the closest relevant work to what we intend to achieve in this work and is based on combining the words-based CNN model with their fragments based models (called FragNet) and improving the classification accuracy to reach to 100% for the CERUG-EN dataset and 96.3%, 99.1% and 97.6%, respectively for IAM, CVL and Firemaker datasets. Their approach, however, did not consider the problem domain of writer verification of partially damaged documents.Based on the summary of the literature survey presented in Table3, it can be seen that there is quite7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67980:2:0:NEW 20 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>A comparative summary of most recent related work writer verification or identification in forensic analysis domain. To the best of our knowledge, there is no existing work on writer verification or identification of partially damaged handwritten Arabic documents. Additionally, existing writer identification and verification work has not used individual character shapes. This makes it difficult for us to compare our work with any of the existing work in a meaningful way. 
The main contribution of this paper is to propose and evaluate a CNN based writer</ns0:figDesc><ns0:table><ns0:row><ns0:cell>No.</ns0:cell><ns0:cell cols='2'>Paper</ns0:cell><ns0:cell /><ns0:cell cols='2'>Approach</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Results</ns0:cell><ns0:cell>Pros</ns0:cell><ns0:cell>Cons</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='3'>Sheikh and Khotan-</ns0:cell><ns0:cell>HMM</ns0:cell><ns0:cell /><ns0:cell>Persian Dataset</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>Catering to Persian lan-</ns0:cell><ns0:cell>Very low accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>lou (2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>guage</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell cols='3'>Hannad et al. (2019)</ns0:cell><ns0:cell cols='2'>HOG + GLRL</ns0:cell><ns0:cell>IFN, KHATT,</ns0:cell><ns0:cell>96.86%</ns0:cell><ns0:cell>Dataset variation</ns0:cell><ns0:cell>Manual feature extrac-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>QUWI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='3'>Abdul Hassan et al.</ns0:cell><ns0:cell cols='2'>SURF + k-NN</ns0:cell><ns0:cell>IFN/ENIT</ns0:cell><ns0:cell>96.6%</ns0:cell><ns0:cell>Higher accuracy</ns0:cell><ns0:cell>Manual feature extrac-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Rehman</ns0:cell><ns0:cell>et</ns0:cell><ns0:cell>al.</ns0:cell><ns0:cell cols='2'>Review paper</ns0:cell><ns0:cell>Review paper</ns0:cell><ns0:cell>Review paper</ns0:cell><ns0:cell>200 papers reviewed</ns0:cell><ns0:cell>Two years old</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2019a)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='3'>He and Schomaker</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>(Frag-</ns0:cell><ns0:cell>CERUG-EN</ns0:cell><ns0:cell>100%</ns0:cell><ns0:cell>High accuracy</ns0:cell><ns0:cell>For English only</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Net)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell cols='3'>Altwaijry and Al-</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell>AHCD, Hijja</ns0:cell><ns0:cell>97%</ns0:cell><ns0:cell>High accuracy</ns0:cell><ns0:cell>Character recognition</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Turaiki (2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>only</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='3'>Balaha et al. (2021a)</ns0:cell><ns0:cell cols='2'>Review paper</ns0:cell><ns0:cell>Review paper</ns0:cell><ns0:cell>Review paper</ns0:cell><ns0:cell>Most recent review</ns0:cell><ns0:cell>Less focus on Writer</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>identification</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell cols='3'>Balaha et al. 
(2021b)</ns0:cell><ns0:cell cols='2'>CNN (AHCR-</ns0:cell><ns0:cell>HMBD,</ns0:cell><ns0:cell>100%</ns0:cell><ns0:cell>Good accuracy</ns0:cell><ns0:cell>No Writer identifica-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>DLS)</ns0:cell><ns0:cell /><ns0:cell>CMATER,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>AIA9k</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='3'>Elkhayati et al.</ns0:cell><ns0:cell cols='2'>CNN + MMO</ns0:cell><ns0:cell>IFN/ENIT</ns0:cell><ns0:cell>97.35%</ns0:cell><ns0:cell>High accuracy</ns0:cell><ns0:cell>Manual segmentation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2022)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='3'>Proposed approach</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell>Novel dataset</ns0:cell><ns0:cell>96%</ns0:cell><ns0:cell>Character-based writer</ns0:cell><ns0:cell>Limited to Arabic char-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>verification</ns0:cell><ns0:cell>acters</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>little work in</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The optimized CNN model used for training</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Layer Network Layer Output Shape Parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Convolution 1</ns0:cell><ns0:cell>(62, 62, 128)</ns0:cell><ns0:cell>1280</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Max Pooling 1</ns0:cell><ns0:cell>(31, 31, 128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Dropout 1</ns0:cell><ns0:cell>(31, 31, 128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Convolution 2</ns0:cell><ns0:cell>(29, 29, 64)</ns0:cell><ns0:cell>73792</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Max Pooling 2</ns0:cell><ns0:cell>(14, 14, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Dropout 2</ns0:cell><ns0:cell>(14, 14, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Convolution 3</ns0:cell><ns0:cell>(12, 12, 64)</ns0:cell><ns0:cell>36928</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Max Pooling 3</ns0:cell><ns0:cell>(6, 6, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Dropout 3</ns0:cell><ns0:cell>(6, 6, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Convolution 4</ns0:cell><ns0:cell>(4, 4, 64)</ns0:cell><ns0:cell>36928</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Max Pooling 4</ns0:cell><ns0:cell>(2, 2, 64)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Flatten Layer</ns0:cell><ns0:cell>(256)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Dense Layer 1</ns0:cell><ns0:cell>(128)</ns0:cell><ns0:cell>32896</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>Dropout 
4</ns0:cell><ns0:cell>(128)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>Dense Layer 2</ns0:cell><ns0:cell>(1)</ns0:cell><ns0:cell>29</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Total parameters: 181,953</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>Error ratio of best, average and worst performing characters across all users for IAD dataset (darker color indicates higher error).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>.026</ns0:cell><ns0:cell>0.132</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.447</ns0:cell><ns0:cell>0.342</ns0:cell></ns0:row><ns0:row><ns0:cell>user06</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.132</ns0:cell><ns0:cell>0.711</ns0:cell><ns0:cell>0.184</ns0:cell></ns0:row><ns0:row><ns0:cell>user07</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.605</ns0:cell></ns0:row><ns0:row><ns0:cell>user08</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.158</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.553</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>user09</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.184</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.526</ns0:cell></ns0:row><ns0:row><ns0:cell>user10</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.132</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.447</ns0:cell><ns0:cell>0.158</ns0:cell></ns0:row><ns0:row><ns0:cell>user11</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>0.553</ns0:cell></ns0:row><ns0:row><ns0:cell>user12</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.289</ns0:cell><ns0:cell>0.132</ns0:cell></ns0:row><ns0:row><ns0:cell>user13</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.105</ns0:cell><ns0:cell>0.421</ns0:cell><ns0:cell>0.605</ns0:cell></ns0:row><ns0:row><ns0:cell>user14</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.447</ns0:cell></ns0:row><ns0:row><ns0:cell>user15</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.947</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>user16</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.526</ns0:cell><ns0:cell>0.026</ns0:cell></ns0:row><ns0:row><ns0:cell>user17</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.474</ns0:cell></ns0:row><ns0:row><ns0:cell>user18</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.316</ns0:cell></ns0:row><ns0:row><ns0:cell>user19</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>0.053</ns0:cell><ns0:cell>0.763</ns0:cell><ns0:cell>0.526</ns0:cell></ns0:row><ns0:row><ns0:cell>user20</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.053<
/ns0:cell><ns0:cell>0.132</ns0:cell><ns0:cell>0.368</ns0:cell><ns0:cell>0.447</ns0:cell></ns0:row><ns0:row><ns0:cell>Figure 12.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head /><ns0:label /><ns0:figDesc>Average error (&#955; k ) of extracted characters across all users for EAD dataset only a single character from the group of similar character shapes during extraction process to reduce the overall chances of verification errors.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>lam_begin khah_middle qaf_begin lam_alif dad_middle tah_middle sheen_middle &#8235;&#1600;&#1588;&#1600;&#8236; 0.079 teh_closed &#8235;&#1604;&#1600;&#8236; 0.045 sad_begin &#8235;&#1600;&#1582;&#1600;&#8236; 0.047 feh_begin &#8235;&#1602;&#1600;&#8236; 0.053 theh_regular &#8235;&#1571;&#1604;&#8236; 0.068 noon_regular &#8235;&#1600;&#1590;&#1600;&#8236; 0.071 sheen_begin &#8235;&#1588;&#1600;&#8236; &#8235;&#1589;&#1600;&#8236; 0.097 feh_middle &#8235;&#1601;&#1600;&#8236; 0.111 zay_end &#8235;&#1579;&#8236; 0.124 alif_regular &#8235;&#1606;&#8236; 0.134 ghain_begin 0.14 ain_middle &#8235;&#1600;&#1591;&#1600;&#8236; 0.073 heh_middle &#8235;&#1600;&#1726;&#1600;&#8236; 0.141 thal_regular &#8235;&#1600;&#1577;&#8236; 0.141 dal_end kaf_middle &#8235;&#1600;&#1603;&#1600;&#8236; 0.082 meem_begin &#8235;&#1605;&#1600;&#8236; 0.15 meem_end thah_end &#8235;&#1600;&#1592;&#8236; 0.096 lam_regular &#8235;&#1604;&#8236; 0.158 seen_middle beh_begin &#8235;&#1576;&#1600;&#8236; 0.097 yaa_middle &#8235;&#1600;&#1610;&#1600;&#8236; 0.159 jeem_middle &#8235;&#1600;&#1580;&#1600;&#8236; 0.173 &#8235;&#1600;&#1601;&#1600;&#8236; 0.16 haa_middle &#8235;&#1600;&#1586;&#8236; 0.161 teh_middle &#8235;&#1575;&#8236; 0.162 lam_middle &#8235;&#1594;&#1600;&#8236; 0.162 yaa_begin &#8235;&#1600;&#1593;&#1600;&#8236; 0.163 raa_end &#8235;&#1584;&#8236; 0.168 meem_middle &#8235;&#1600;&#1605;&#1600;&#8236; 0.256 &#8235;&#1600;&#1581;&#1600;&#8236; 0.174 &#8235;&#1600;&#1578;&#1600;&#8236; 0.176 &#8235;&#1600;&#1604;&#1600;&#8236; 0.18 &#8235;&#1610;&#1600;&#8236; 0.181 &#8235;&#1600;&#1585;&#8236; 0.196 &#8235;&#1600;&#1583;&#8236; 0.171 heh_regular &#8235;&#1729;&#8236; 0.256 &#8235;&#1600;&#1605;&#8236; 0.171 alif_hamza &#8235;&#1571;&#8236; 0.265 &#8235;&#1600;&#1587;&#1600;&#8236; 0.172 noon_end &#8235;&#1600;&#1606;&#8236; 0.272</ns0:cell></ns0:row><ns0:row><ns0:cell>Figure 15.</ns0:cell></ns0:row></ns0:table><ns0:note>. Due to this reason, yaa begin has larger error (18.1%) than beh begin (9.7%). Similar observations were made for other similar shaped character in EAD. Noon end has the highest error because of its high similarity in writing style across several users and also similarity with Noon regular written by other users. Based on the above observations, it seems desirable to use 16/20 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67980:2:0:NEW 20 Mar 2022) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head /><ns0:label /><ns0:figDesc>table.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>Percentage of good performance characters within recovered document</ns0:cell></ns0:row><ns0:row><ns0:cell>userid</ns0:cell><ns0:cell cols='4'>10% 20% 50% 75% 90% 10% 20% 50% 75% 90% 10% 20% 50% 75% 90% 10% 20% 50% 75% 90%</ns0:cell></ns0:row><ns0:row><ns0:cell>user01</ns0:cell><ns0:cell cols='4'>0.77 0.8 0.78 0.86 0.87 0.8 0.79 0.83 0.88 0.88 0.74 0.78 0.84 0.85 0.86 0.75 0.82 0.88 0.87 0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>user02</ns0:cell><ns0:cell cols='4'>0.93 0.93 0.97 0.98 0.99 0.92 0.93 0.96 0.99 1 0.94 0.96 0.95 1 0.98 0.95 0.95 0.98 0.98 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user03</ns0:cell><ns0:cell cols='4'>0.96 0.95 0.96 0.98 0.99 0.93 0.91 0.98 0.99 0.99 0.93 0.95 0.96 0.99 1 0.93 0.95 0.92 0.98 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user04</ns0:cell><ns0:cell cols='4'>0.94 0.93 0.95 0.96 0.97 0.93 0.9 0.93 0.93 0.95 0.95 0.9 0.97 0.96 0.98 0.95 0.97 0.97 0.98 0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>user05</ns0:cell><ns0:cell cols='4'>0.76 0.78 0.82 0.85 0.9 0.8 0.77 0.79 0.85 0.9 0.89 0.75 0.86 0.86 0.92 0.85 0.87 0.9 0.8 0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>user06</ns0:cell><ns0:cell cols='4'>0.49 0.49 0.57 0.57 0.68 0.47 0.53 0.57 0.64 0.65 0.54 0.57 0.64 0.72 0.69 0.52 0.43 0.6 0.55 0.53</ns0:cell></ns0:row><ns0:row><ns0:cell>user07</ns0:cell><ns0:cell cols='4'>0.99 0.99 0.98 0.98 0.97 0.99 1 0.98 0.95 0.98 0.99 0.98 0.98 0.97 0.96 1</ns0:cell><ns0:cell>1 0.95 0.98 0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>user08</ns0:cell><ns0:cell cols='4'>0.88 0.86 0.89 0.91 0.95 0.87 0.87 0.9 0.94 0.95 0.85 0.91 0.92 0.95 0.95 0.85 0.93 0.9 0.88 0.95</ns0:cell></ns0:row><ns0:row><ns0:cell>user09</ns0:cell><ns0:cell cols='4'>0.87 0.87 0.86 0.88 0.86 0.89 0.82 0.85 0.87 0.88 0.85 0.85 0.85 0.92 0.84 0.9 0.92 0.85 0.92 0.88</ns0:cell></ns0:row><ns0:row><ns0:cell>user10</ns0:cell><ns0:cell cols='4'>0.77 0.75 0.86 0.92 0.96 0.72 0.76 0.84 0.94 0.97 0.7 0.76 0.92 0.94 0.96 0.77 0.77 0.85 0.92 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user11</ns0:cell><ns0:cell cols='4'>0.99 0.99 0.95 0.95 0.93 0.99 0.99 0.96 0.94 0.91 0.97 0.99 0.97 0.95 0.94 0.97 1</ns0:cell><ns0:cell>0.9 0.93 0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>user12</ns0:cell><ns0:cell cols='4'>0.63 0.65 0.73 0.7 0.81 0.66 0.63 0.72 0.72 0.81 0.69 0.59 0.71 0.75 0.78 0.67 0.67 0.58 0.75 0.8</ns0:cell></ns0:row><ns0:row><ns0:cell>user13</ns0:cell><ns0:cell cols='4'>0.88 0.86 0.89 0.9 0.92 0.82 0.88 0.89 0.89 0.92 0.91 0.91 0.89 0.95 0.92 0.9 0.87 0.93 0.92 0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>user14</ns0:cell><ns0:cell cols='4'>0.85 0.88 0.92 0.94 0.99 0.85 0.87 0.94 0.97 0.99 0.84 0.86 0.94 0.98 1 0.88 0.83 0.93 0.98 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user15</ns0:cell><ns0:cell cols='4'>0.84 0.86 0.93 0.97 0.98 0.9 0.9 0.88 0.97 0.99 0.9 0.91 0.93 0.97 0.98 0.87 0.83 0.93 0.97 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user16</ns0:cell><ns0:cell cols='4'>0.8 0.8 0.88 0.94 0.98 0.81 0.78 0.89 0.93 0.99 0.77 0.83 0.92 0.96 0.98 0.82 0.82 0.92 0.98 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user17</ns0:cell><ns0:cell cols='4'>0.72 0.68 0.73 0.79 0.78 0.7 0.66 0.69 0.78 0.76 0.7 0.68 0.74 0.74 0.79 0.7 0.62 0.9 0.73 0.65</ns0:cell></ns0:row><ns0:row><ns0:cell>user18</ns0:cell><ns0:cell cols='4'>0.79 0.79 0.88 0.95 0.98 0.81 0.79 0.87 
0.94 0.99 0.74 0.83 0.93 0.97 0.97 0.85 0.9 0.87 0.95 1</ns0:cell></ns0:row><ns0:row><ns0:cell>user19</ns0:cell><ns0:cell cols='4'>0.78 0.74 0.83 0.84 0.88 0.79 0.8 0.8 0.84 0.9 0.82 0.83 0.86 0.86 0.89 0.7 0.8 0.78 0.85 0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>user20</ns0:cell><ns0:cell cols='4'>0.8 0.75 0.82 0.82 0.86 0.76 0.75 0.81 0.83 0.86 0.8 0.82 0.8 0.81 0.82 0.9 0.83 0.82 0.87 0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>&#120573;(&#119908;)(&#8709; = 0.75)</ns0:cell><ns0:cell cols='4'>0.85 0.8 0.85 0.9 0.95 0.8 0.85 0.85 0.9 0.95 0.7 0.85 0.85 0.9 0.95 0.8 0.85 0.9 0.9 0.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>80% document recovered</ns0:cell><ns0:cell>50% document recovered</ns0:cell><ns0:cell>20% document recovered</ns0:cell><ns0:cell>10% document recovered</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Original Manuscript ID: #CS-2021:11:67980:1:2:REVIEW Original Article Title: 'Writer Verification of Partially Damaged Handwritten Arabic Documents based on Individual Character Shapes' To: Academic Editor, PeerJ Computer Science Re: Response to reviewers Dear Editor, Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments. We have gone through the reviewers’ concerns and suggestions carefully and made several changes to the manuscript. The suggested changes have improved the quality of the manuscript. We are thankful to the reviewers for their constructive comments. We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) a revised manuscript PDF file highlighting the changes (auto generated using an online PDF diff tool), and (c) a clean updated manuscript without highlights (PDF main document). Please let us know if you see any issues. Best regards, Majid Khan Reviewer #1 Comments Thanks to the authors for updating the manuscript. After checking their responses to my comments, I can declare that they have made a suitable major revision. However, I have two minor comments: (1) Increase the size (i.e., width) of Figure 7 and Figure 9. (2) The authors should get another editing help from someone with full professional proficiency in English. For example, Table 2 caption “List of Symbols used” should be “List of Symbols”. Author Response Thanks for your encouraging comments and kind feedback to improve the quality of the paper. It is highly appreciated. Thanks for highlighting this issue. The width of Figure 7 and Figure 9 has now been increased to the maximum possible width (as per max text width). We have also removed the extra space in-between Figure 9(A), 9(B) and 9(C). The paper has now been thoroughly reviewed for English language proficiency by an expert and several language related issues have been fixed across the paper (please see the automatically generated draftable_comparison.pdf diff file for the changes). The captions of Table 1 and Table 2 have also been updated. "
Here is a paper. Please give your review comments after reading it.
407
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The discovery of a new form of corona-viruses in December 2019, SARS-CoV-2, commonly named COVID-19, has reshaped the world. With health and economy issues, scientists have been focusing on understanding the dynamics of the disease, in order to provide the governments with the best policies and strategies allowing them to reduce the span of the virus. The world has been waiting for the vaccine for more than one year. WHO (World Health Organization) is advertising vaccine as a safe and effective measure to fight off the virus. Saudi Arabia was the fourth country in the world to start to vaccinate its population.</ns0:p><ns0:p>Even with the new simplified COVID-19 rules, the third dose is still mandatory. COVID-19 vaccines have raised many questions regarding in its efficiency and its role to reduce the number of infections. In this work,we try to answer these question and propose a new mathematical model with five compartments, including susceptible, vaccinated, infectious, asymptotic and recovered individuals. We provide theoretical results regarding the effective reproduction number, the stability of endemic equilibrium and disease free equilibrium. We provide numerical analysis of the model based on the Saudi case. Our developed model shows that the vaccine reduces the transmission rate and provides an explanation to the rise in the number of new infections immediately after the start of the vaccination campaign in Saudi Arabia.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The outbreak of several pandemics such as COVID-19, requires the development of mathematical models in order to exhibit key epidemiological features, investigate transmission dynamics, and develop adequate control policies. Mathematical modelling, when dealing with infectious diseases, allows revealing inherent patterns and underlying structures that govern outbreaks. Simple models that contain the essential components and interactions are powerful tools to test different hypotheses and understand disease control for both short and long time. The stability analysis near the free disease equilibrium will show if the apparition of new infection cases will yield to disease outbreak. Some countries such Tunisia and Jordan registered zero cases for days in Summer 2020 but the introduction of new cases resulted in critical endemic situation by Autumn.</ns0:p><ns0:p>The complex spreading patterns of COVID-19 and the various spread speed of its variants make its containing and mitigating real challenges. The existing models vary in form and complexity, but the common objective is to provide important information for global health decision makers about the disease dynamics. The first control measure was lockdown and then health authorities imposed mask wearing and social distancing. The authors of <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> showed, by extending the classical SEIR model, that social distance is an important factor to reduce the reproduction number and this to reduce the virus spread. Despite the herd immunity acquired via vaccine or infection, the social Health Organization that one in three people who get COVID-19 do not show any symptoms. This is a challenging problem for health authorities as the asymptotic individuals carry the virus and may infect other people without knowing it. 
Moreover, consequent efforts were made worldwide since the authorisation of new vaccines by the end of 2020. By the end of November 2021, more than 50% of the world population has first dose administered and only 40% has second dose administered. In order to study the efficacy of vaccination to contain the virus spread and its negative consequences, our model include vaccinated state. The objective is to provide efficient public health policies in determining optimal vaccination strategies. Some questions have raised since the beginning of vaccination campaigns: how many individuals should be vaccinated? Is the vaccine a solution to get rid of the disease permanently? These questions are related to financial and moral costs associated with the chosen governmental policy. This paper gives theoretical and numerical analysis associated with COVID-19 epidemic dynamics in order to answer these critical questions. Although we focus mainly on the Saudi case, the model structure is general and numerically adapted to any specific context without loss of validity of the qualitative results here shown. The main contributions of this research are given as follows:</ns0:p><ns0:p>&#8226; Developing a novel mathematical model to predict the spread of COVID-19, with the presence of vaccinated and asymptotic compartments &#8226; Analyzing the existence of endemic equilibrium point and the stability of disease-free equilibrium</ns0:p><ns0:p>&#8226; Investigating a real case study in Saudi Arabia, discussing the impact of vaccination on disease dynamics In Section 2, we present the related works regarding epidemic modeling with a focus on COVID-19 control strategies and particularly population vaccination. Sections 3 and 4 include model description and analysis, respectively. The numerical results are given in Section 5 and Section 6 concludes this paper.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>Related Works</ns0:head><ns0:p>The mathematical modeling in epidemiology started in England, in the 18th century, when Bernoulli analyzed the mortality of smallpox. Since then, a large variety of epidemiological models have been developed <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> [25][6] <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. In this section, we present recent works proposed in this century, impacted by several outbreaks such as Ebola, Zika, and the swine flu <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>.</ns0:p><ns0:p>Alexander and Moghadas <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> developed a Susceptible-Infected-Recovered-Susceptible (SIRS) epidemic model. The authors considered that the immunity acquired by the population after infection decreases over time. The dynamical behavior of the model is investigated using different types of bifurcation, including saddle-node, Hopf, and Bogdanov-Takens. The stability analysis based on the basic reproductive number and the rate of loss of natural immunity demonstrated the coexistence of two concentric limit cycles. These theoretical results have epidemiological implications such the determination of epidemic outbreak and the control the disease spread.</ns0:p><ns0:p>The authors of <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref> investigated the Susceptible, Exposed, Infectious, Quarantine, Susceptible (SEIQS) epidemic model, with a nonlinear incidence rate. This model takes into consideration the communal sanitation measure of quarantine, aiming at avoiding broad infection. 
The authors provided a stability analysis using codimension-1 (transcritical,saddle-node, and Hopf) and codimension-2 bifurcations (Bogdanov-Takens).</ns0:p><ns0:p>Recently, Lu et. al <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> studied the the SIRS epidemic model, the same considered in <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> but with a generalized non-monotone incidence rate. The incidence rate is a function of the infection force of a disease and the number of susceptible individuals. The given formula for the incidence rate models the psychological pressure of some epidemic disease. The government is, in general, lead to take some protective measures like lockdown when the infection number becomes very high. The authors showed that the model has both repelling and attracting Bogdanov-Takens bifurcations. Moreover, from the super-critical Hopf bifurcation, the authors concluded that a disease following this model presents periodic outbreak, which is very important to understand its dynamics, in the real world.</ns0:p><ns0:p>The impact of treatment function was investigated in <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> using SIS model, where recovered individuals become again susceptible and the incidence rate is supposed bi-linear. In the considered model, the treatment function is saturated, which results in the existence of backward bifurcation.</ns0:p><ns0:p>Thus, the eradication of the disease is not only related to the reproduction number but also to other biological or epidemiological mechanisms, such as imperfect vaccine. The bifurcation analysis outlines the necessary conditions to eliminate the disease. Zhang et. al <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> discussed the impact of the number of hospital beds on SIS epidemic model, by considering a nonlinear recovery rate. The authors calculated the basic reproduction number corresponding to their model. This number determines the condition for the disease-free equilibrium to be globally asymptotically stable.</ns0:p><ns0:p>The limitations of medical resources, mainly the availability of vaccinations, is modeled using a piecewise-defined function for patient treatment in <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>. This function admits a backward bifurcation with limited available medical resources. The variation of vaccination threshold affects the existence of multiple steady states,crossing cycle, and generalized endemic equilibria. Similarly, Perez et al.</ns0:p><ns0:p>[29] considered nonlinear incidence rate for a generalized SIR model. Besides, the authors assumed that the model has saturated Holling type II treatment rate and logistic growth. Non linear and saturated functions allows to represent more accurately the dynamics epidemic diseases. Similar to previous stated works, the authors revealed the importance of the basic reproduction number R 0 , whose value determines the existence of endemic equilibrium and the stability of the disease-free equilibrium. Under some conditions related to the disease transmission rate and the treatment rate, the model may undertake a backward bifurcation and a Hopf bifurcation. The above-mentioned articles considered general disease models. In the literature, we can also find specific models targeting particular disease such as avian influenza <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> and bacterial meningitis <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. 
Since the declaration of World Health Organization (WHO) of the Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV-2) as a pandemic on March 2020, the scientific community has been trying to understand the dynamics of this virus.</ns0:p><ns0:p>One of the measures to control the virus spread in to declare total or partial lockdown, forcing social distancing. The scientific community believes that the main cause of infection is the inhalation of virus droplets <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. Gevertz et al. <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> modeled social distancing as a flow rate between susceptible and asymptomatic individuals. The model reveals the existence of of a critical implementation delay, when implementing social distancing mandates. A delay of two weeks is the critical threshold between infection containment and infection expansion.</ns0:p><ns0:p>Nadim and Chattopadhyay <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> investigated the effect of imperfect lockdown. In the adopted model, when the basic reproduction number, R 0 is less than unity, the stable disease free equilibrium coexists with a stable endemic equilibrium. This means that COVID-19 undergoes backward bifurcation. This phenomenon was observed in the Kingdom of Saudi Arabia where the new cases were decreasing to reach 97 in 06, January 2021. Unfortunately, this rate reached 386 new cases, after one month, which obliged the Ministry of Health to declare partial lockdown for 10 days. The infection force is so high that the disease cannot be totally eradicated. The authors showed that under perfect lockdown, this backward bifurcation does not exist, but such condition is not possible in the real world. In <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, the authors included in their mathematical model, based on the classical SEIR, several prevention actions such as test campaign on the population and quarantining infected persons. The model took in consideration infection treatment efforts, such as vaccination and the therapy of induced cardio-respiratory complications. Besides the usual classes of the population, the authors considered two new classes, driven by specific characteristics of the virus: infected but asymptomatic patients and suspected infected individuals. The theoretical results, tuned using the Chinese case, were compared to United Kingdom case and the Italian case, showing the similarity between the model dynamics and the real epidemic behaviour. L&#252; et. al <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> proposed an epidemic model that distinguishes between the first and the second waves od COVID-19 in China. The two-stage model includes Contacts compartment, besides the usual Susceptible, Infectious and Recovered compartments. The authors of <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> raised the issue of undetected symptomatic and asymptomatic individuals. They also investigated the effect of two control strategies that include the improvement of treatment for infected and isolated individuals. Another scientific aspect of COVID-19 is the possible transmission of the virus through contaminated surfaces. It is believed that the virus can survive several days on the surfaces depending on the material (wood, glass or plastic). Another issue faced by the governments is the awareness level of the population. 
Some individuals, deliberately, decide not to apply the precautionary measures, mainly wearing mask and respecting social distancing <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>.</ns0:p><ns0:p>The issue of the efficiency of social distancing and rapid testing strategies against the pandemic was examined in <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>, where the authors extended the standard SEIR model. The authors considered also the problem of undetected asymptomatic individuals, who have no symptoms but participate actively to virus spread. Furthermore, the limitation of medical resources was incorporated to the model. The theoretical findings emphasized the role of the basic reproduction number R 0 in the existence of stable COVID-19 free and COVID-endemic equilibrium points. This conclusion is contested by Mohd and Sulaiman <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>, who studied the SIRS model with limited medical resources and false detection issues.</ns0:p><ns0:p>The authors showed that the condition of reducing the basic reproduction number under the unity value is necessary to eliminate the disease but not sufficient.</ns0:p><ns0:p>Since the authorization COVID-19 vaccines, several research works focused on giving insight to mathematical characteristic of virus spread after population vaccination. Algehyne et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> used nonlinear functional analysis and fractal fractional derivative to model the evolution over time of four compartments: susceptible, infected, infected positive tested, and recovered. The Spanish case was investigated in <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> using also fractional derivatives. It is important to highlight that these works do not consider vaccinated state as a separate compartment. They rather consider that the vaccinated individuals are moved from susceptible to recovered compartment. The vaccinated individuals are considered to move also from exposed state in <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>.</ns0:p><ns0:p>Different mathematical tools are used by Rajaei et al. <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref> to compare the effect of vaccination with social distancing and hospitalization. Extended Kalman filter (EKF) is used for state estimation under uncertainty.</ns0:p><ns0:p>Most of the existing research works developing a relationship between infectious and asymptotic individuals focus on estimating the model parameters using actual data <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> [33] <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. To the best of our knowledge, our work is the first to provide to study mathematical stability of endemic and disease free equilibrium. Most relevant works in COVID-19 mathematical modeling are compared in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, in terms of disease control strategy, considered model and country of the case study.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>Proposed model and effective reproduction number</ns0:head><ns0:p>Our objective is to derive the mathematical equations that better present the dynamics of COVID-19 virus. The population is divided into five compartments: susceptible, vaccinated, infectious, asymptotic, and recovered; the numbers in these states are denoted by S(t), V(t), I(t), A(t), and R(t), respectively. Fig. 
<ns0:ref type='figure' target='#fig_2'>1</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Ref.</ns0:p></ns0:div> <ns0:div><ns0:head>Control Measure</ns0:head><ns0:p>Model Country of case study <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> lockdown SEIR+Hospitalized+Lockdown India, Mexico, South Africa and Argentina <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> test campaign and quarantine SEIR+ Quarantined UK, China, Italy <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> social distancing SAIR - <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> social distancing and mask SEIR+ Asymptotic WHO wearing <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> social distancing and rapid SEIR+ Asymptotic Indonesia testing <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> social distancing SEIR+Quarantined+Hospitalized Egypt and Ghana <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> social distancing SAIR WHO <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> vaccination SIR Brasil <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> vaccination SEIRS+Death Spain <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref> vaccination Based on the above assumptions and Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, we formulate the following model of differential equations.</ns0:p><ns0:formula xml:id='formula_0'>dS dt = &#923; &#8722; &#181;S &#8722; &#968;S &#8722; &#946;IS &#8722; &#945;AS (3.1a) dV dt = &#8722;&#181;V + &#968;S &#8722; &#951;V &#8722; &#1013; 1 &#946;IV &#8722; &#1013; 2 &#945;AV (3.1b) dI dt = &#8722;&#181;I + &#946;IS + &#1013; 1 &#946;IV + &#963;A &#8722; &#947; 1 I (3.1c) dA dt = &#8722;&#181;A + &#945;AS &#8722; &#963;A + &#1013; 2 &#945;AV &#8722; &#947; 2 A (3.1d) dR dt = &#8722;&#181;R + &#951;V + &#947; 1 I + &#947; 2 A (3.1e)</ns0:formula><ns0:p>The basic reproduction number is defined as the number of secondary infections produced by a single infectious individual during his or her entire infectious period. Since we introduce a vaccination program in our model, it is called the effective reproduction number. The system (3.1) has always a disease-free equilibrium, which is obtained by setting all the derivatives to zero with I = A = 0, that yields to: P 0 = (S 0 , I 0 , A 0 , R 0 , V 0 ) = ( &#923; &#181;+&#968; , 0, 0, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_1'>F(x) = &#63723; &#63724; &#63724; &#63724; &#63724; &#63725; &#946;IS + &#1013; 1 &#946;IV &#945;AS + &#1013; 2 &#945;AV 0 0 0 &#63734; &#63735; &#63735; &#63735; &#63735; &#63736; , N (x) = &#63723; &#63724; &#63724; &#63724; &#63724; &#63725; (&#181; + &#947; 1 )I &#8722; &#963;A (&#181; + &#963; + &#947; 2 )A (&#181; + &#951;)V &#8722; &#968;S + &#1013; 1 &#946;IV + &#1013; 2 &#945;AV &#181;R &#8722; &#951;V &#8722; &#947; 1 I &#8722; &#947; 2 A &#8722;&#923; + (&#181; + &#968;)S + &#946;IS + &#945;AS &#63734; &#63735; &#63735; &#63735; &#63735; &#63736; .</ns0:formula><ns0:p>The infected compartments are A and I, giving m=2. 
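As a complement to the analytical treatment that follows, a minimal numerical sketch of system (3.1) is given below using SciPy. All values are illustrative placeholders (in particular the vaccine leakiness factors ε1 and ε2, which are not fixed at this point in the text, and the per-capita scaling of Λ), so the trajectories only demonstrate how the five compartments can be integrated, not the paper's calibrated results.

```python
# Minimal sketch of integrating the S-V-I-A-R system (3.1) with SciPy.
# All parameter values are illustrative placeholders, not the paper's calibrated values;
# eps1 and eps2 (vaccine leakiness) are assumptions, and the state is per capita.
from scipy.integrate import solve_ivp

Lam, mu = 14.56 / 1000 / 365, 3.39 / 1000 / 365   # recruitment and mortality, per day
beta = 0.233                                      # transmission rate (infectious)
alpha = 0.42 * beta                               # transmission rate, asymptomatic ("asymptotic") class
psi, eta = 0.00127, 1.0 / 14                      # vaccination rate, 1/immunity period
sigma, g1 = 1.0 / 10, 0.1                         # A-to-I progression and I recovery rates
g2 = 0.2 * sigma                                  # A recovery rate (approximately 0.2*sigma)
eps1 = eps2 = 0.1                                 # assumed vaccine leakiness factors

def sviar(t, y):
    S, V, I, A, R = y
    dS = Lam - mu*S - psi*S - beta*I*S - alpha*A*S
    dV = -mu*V + psi*S - eta*V - eps1*beta*I*V - eps2*alpha*A*V
    dI = -mu*I + beta*I*S + eps1*beta*I*V + sigma*A - g1*I
    dA = -mu*A + alpha*A*S - sigma*A + eps2*alpha*A*V - g2*A
    dR = -mu*R + eta*V + g1*I + g2*A
    return [dS, dV, dI, dA, dR]

y0 = [0.99, 0.0, 0.007, 0.003, 0.0]               # assumed initial fractions
sol = solve_ivp(sviar, (0.0, 365.0), y0, max_step=1.0)
print(sol.y[:, -1])                               # compartment fractions after one year
```

Once calibrated parameter values are substituted, the same right-hand side can be reused to generate compartment trajectories of the kind shown in the later figures.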
With A=I=0, the Jacobian matrices of F(x) and N (x) at the disease-free equilibrium P 0 are, respectively,</ns0:p><ns0:formula xml:id='formula_2'>DF(P 0 ) = &#63723; &#63724; &#63724; &#63725; F 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 &#63734; &#63735; &#63735; &#63736; , DN (P 0 ) = &#63723; &#63724; &#63724; &#63725; N 0 0 0 &#1013; 1 &#946;V 0 &#1013; 2 &#945;V 0 &#181; + &#951; 0 &#8722; &#968; &#8722;&#947; 1 &#8722; &#947; 2 &#8722; &#951; &#181; 0 &#946;S 0 &#945;S 0 0 0 &#181; + &#968; &#63734; &#63735; &#63735; &#63736; , F = &#946;S 0 + &#1013; 1 &#946;V 0 0 0 &#945;S 0 + &#1013; 2 &#945;V 0 , N = &#181; + &#947; 1 &#8722; &#963; 0 &#181; + &#963; + &#947; 2 .</ns0:formula><ns0:p>Our developed model is similar to the two-strain model in <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref> with two infectious compartments.</ns0:p><ns0:p>F N &#8722;1 , the next generation matrix of system (3.1) has the two eigenvalues.</ns0:p><ns0:formula xml:id='formula_3'>R 1 = &#946;(S 0 +&#1013; 1 V 0 ) &#181;+&#947; 1 = &#946;&#923;( 1 &#181;+&#968; +&#1013; 1 &#968; (&#181;+&#968;)(&#181;+&#951;) ) &#181;+&#947; 1 = &#946;&#923;(1+&#1013; 1 &#968; &#181;+&#951; ) (&#181;+&#947; 1 )(&#181;+&#968;) R 2 = &#945;(S 0 +&#1013; 2 V 0 ) &#181;+&#947; 2 +&#963; = &#945;&#923;(1+&#1013; 2 &#968; &#181;+&#951; ) (&#181;+&#947; 2 +&#963;)(&#181;+&#968;) = &#945;&#923;(&#181;+&#951;+&#1013; 2 &#968;) (&#181;+&#947; 2 +&#963;)(&#181;+&#968;)(&#181;+&#951;)</ns0:formula><ns0:p>The effective reproduction number for the system is the maximum of the two.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>Model analysis</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Existence of endemic equilibrium point</ns0:head><ns0:p>In this section, we investigate the conditions for the existence of endemic equilibria of system (3.1). Any equilibrium satisfies the following equations:</ns0:p><ns0:formula xml:id='formula_4'>&#923; &#8722; &#181;S &#8722; &#968;S &#8722; &#946;IS &#8722; &#945;AS = 0 (4.1a) &#8722;&#181;V + &#968;S &#8722; &#951;V &#8722; &#1013; 1 &#946;IV &#8722; &#1013; 2 &#945;AV = 0 (4.1b) &#8722;&#181;I + &#946;IS + &#1013; 1 &#946;IV + &#963;A &#8722; &#947; 1 I = 0 (4.1c) &#8722;&#181;A + &#945;AS &#8722; &#963;A + &#1013; 2 &#945;AV &#8722; &#947; 2 A = 0 (4.1d) &#8722;&#181;R + &#951;V + &#947; 1 I + &#947; 2 A = 0 (4.1e)</ns0:formula><ns0:p>The equation (4.1d) gives the following expression:</ns0:p><ns0:formula xml:id='formula_5'>S = &#181;+&#963;+&#947; 2 &#945; &#8722; &#1013; 2 V .</ns0:formula><ns0:p>The equation (4.1b) gives the following expression:</ns0:p><ns0:formula xml:id='formula_6'>V = &#968;(&#181;+&#963;+&#947; 2 ) &#945;(&#181;+&#951;+&#1013; 1 &#946;I+&#1013; 2 &#945;A+&#968;&#1013; 2 ) .</ns0:formula><ns0:p>From Eq. 
(4.1c) and assuming &#1013; 1 &#8722; &#1013; 2 = 0, we deduce the following expressions:</ns0:p><ns0:formula xml:id='formula_7'>A = ( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; )I = D I S = &#181;+&#963;+&#947; 2 &#945; (&#181;+&#951;+&#1013; 1 &#946;I+&#1013; 2 &#945;A) (&#181;+&#951;+&#1013; 1 &#946;I+&#1013; 2 &#945;A+&#968;&#1013; 2 )</ns0:formula><ns0:p>The equation (4.1a) gives the following expression:</ns0:p><ns0:formula xml:id='formula_8'>&#181;+&#963;+&#947; 2 &#945;</ns0:formula><ns0:p>(&#181;+&#951;+&#1013; 1 &#946;I+&#1013; 2 &#945;D I)</ns0:p><ns0:formula xml:id='formula_9'>(&#181;+&#951;+&#1013; 1 &#946;I+&#1013; 2 &#945;D I+&#968;&#1013; 2 ) [&#181; + &#968; + (&#946; + &#945;D)I] = &#923;</ns0:formula><ns0:p>We arrange the previous expression to get the following: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_10'>(&#1013; 1 &#946; + &#1013; 2 &#945;D)(&#946; + &#945;D)I 2 + [(&#181; + &#951;)(&#946; + &#945;D) + (&#181; + &#968;)(&#1013; 1 &#946; + &#1013; 2 &#945;D) &#8722; &#923;&#945; &#1013; 1 &#946;+&#1013; 2 &#945;D &#181;+&#963;+&#947; 2 ]I + (&#181; + &#951;)(&#181; + &#968;) &#8722; &#923;&#945;</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We denote by: a=(&#1013;</ns0:p><ns0:formula xml:id='formula_11'>1 &#946; + &#1013; 2 &#945;( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; ))(&#946; + &#945;( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; )) b=[(&#181;+&#951;)(&#946;+&#945;( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; ))+(&#181;+&#968;)(&#1013; 1 &#946;+&#1013; 2 &#945;( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; ))&#8722;&#923;&#945; &#1013; 1 &#946;+&#1013; 2 &#945;( &#181;+&#947; 1 &#963; &#8722; &#946;(&#181;+&#963;+&#947; 2 ) &#945;&#963; ) &#181;+&#963;+&#947; 2 ] c=(&#181; + &#951;)(&#181; + &#968;) &#8722; &#923;&#945; &#181;+&#951;+&#968;&#1013; 2 &#181;+&#963;+&#947; 2 = (&#181; + &#951;)(&#181; + &#968;)(1 &#8722; R 2 )</ns0:formula><ns0:p>The existence of endemic equilibrium is determined by the existence of positive solutions of the quadratic equation</ns0:p><ns0:formula xml:id='formula_12'>P (I) = aI 2 + bI + c = 0 (4.2)</ns0:formula><ns0:p>.</ns0:p><ns0:p>The number of endemic equilibria of the considered system depends on parameter values a, b, and c. This equation may have zero, one or two solutions. We denote R 20 = &#945;&#923;</ns0:p><ns0:formula xml:id='formula_13'>(&#181;+&#947; 2 +&#963;)(&#181;+&#951;) then R 2 = R 20 &#181;+&#951;+&#1013; 2 &#968; &#181;+&#968;</ns0:formula><ns0:p>We denote by</ns0:p><ns0:formula xml:id='formula_14'>&#968; crit def = (R 20 &#8722; 1)&#181; + R 20 &#951; 1 &#8722; &#1013; 2 R 20 ., where R 2 (&#968; crit ) = 1,</ns0:formula><ns0:p>Since the model parameters A and I are positive, it follows that D &gt;0 and a &gt; 0. Furthermore, if</ns0:p><ns0:formula xml:id='formula_15'>R 2 &gt; 1, then c &lt; 0. Since dR 2 d&#968; = &#8722;R 20 &#951; + (1 &#8722; &#1013; 2 )&#181; (&#181; + &#968;) 2 &lt; 0 Thus, R 2 is decreasing function of &#968; and if &#968; &lt; &#968; crit , then R 2 &gt;1</ns0:formula><ns0:p>. We deduce that for R 2 &gt; 1, P(I) has a unique positive root.</ns0:p><ns0:p>If R 2 &lt; 1, we have c&gt;0 and &#968; &#8805; &#968; crit . Since b(&#968;) is an increasing function of &#968;, if b(&#968; crit ) &#8805; 0, then b(&#968;) &gt; 0 for &#968; &gt; &#968; crit . 
In this case, P(I) has no positive real root and the system have no endemic equilibrium.</ns0:p><ns0:p>We consider now the case where b(&#968; crit ) &lt; 0. We denote by &#8710;(&#968; If R 2 = 1, we have c=0. In this case, system has a unique endemic equilibrium for b(&#968;) &lt; 0 and no endemic equilibrium for b(&#968;) &gt; 0.</ns0:p><ns0:formula xml:id='formula_16'>) def = b 2 (&#968;)&#8722;4ac(&#968;). If c(&#968; crit ) = 0, &#8710;(&#968; crit ) &gt; 0. Since b(&#968;) is</ns0:formula></ns0:div> <ns0:div><ns0:head n='4.2'>Stability of disease-free equilibrium</ns0:head><ns0:p>The Jacobian matrix with respect to the system (3.1) is given by:</ns0:p><ns0:formula xml:id='formula_17'>J 0 (P 0 ) = &#63726; &#63727; &#63727; &#63727; &#63727; &#63728; &#8722;(&#181; + &#947; 1 ) + &#946;(S 0 + &#1013; 1 V 0 ) &#963; 0 0 0 0 &#8722; (&#181; + &#963; + &#947; 2 ) + &#945;(S 0 + &#1013; 2 V 0 ) 0 0 0 &#8722;&#1013; 1 &#946;V 0 &#8722; &#1013; 2 &#945;V 0 &#8722; (&#181; + &#951;) 0 &#968; &#947; 1 &#947; 2 &#951; &#8722; &#181; 0 &#8722;&#946;S 0 &#8722; &#945;S 0 0 0 &#8722; (&#181; + &#968;) &#63737; &#63738; &#63738; &#63738; &#63738; &#63739; . &#955; &#8722; J 0 (P 0 ) = 0.</ns0:formula><ns0:p>The characteristic polynomial of the Jacobian matrix at DFE is given by det(J 0 &#8722; &#955;I) = 0, where &#955; is the eigenvalue and I is 5 &#215; 5 identity matrix. Thus, J 0 has eigenvalues given by:</ns0:p><ns0:formula xml:id='formula_18'>&#955; 1 = &#8722;&#181; &#955; 2 = &#8722;(&#181; + &#951;) &#955; 3 = &#8722;(&#181; + &#968;) &#955; 4 = &#8722;(&#181; + &#947; 1 ) + &#946;(S 0 + &#1013; 1 V 0 ) = (&#181; + &#947; 1 )(R 1 &#8722; 1) &#955; 5 = &#8722;(&#181; + &#963; + &#947; 2 ) + &#945;(S 0 + &#1013; 2 V 0 ) = (&#181; + &#963; + &#947; 2 )(R 2 &#8722; 1)</ns0:formula><ns0:p>All the eigenvalues are strictly negative except for &#955; 4 and &#955; 5 . These eigenvalues depend the sign of (R 2 &#8722; 1) and (R 1 &#8722; 1). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>when a small number of infected individuals introduced. Did the system stay disease free or an endemic state may appear?</ns0:p><ns0:p>Theorem 1 Based on the Theorem of <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>, we have the following results. If R 1 &gt; 1 or/and R 2 &gt; 1, then &#955; 4 or/and &#955; 5 is/are strictly positive. In this case the DFE is unstable. If R 1 &lt; 1 and R 2 &lt; 1, then &#955; 4 and &#955; 5 are strictly negative. The system is locally asymptotically stable.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>Numerical simulations</ns0:head><ns0:p>In this paper, we focus on vaccination analysis in Saudi Arabia. The presented numerical simulations provide also general results that can be applied to any region. The estimation of model parameters is very important to have accurate numerical results. One approach consists in calibrating the model by fitting it with reported data using, for example, least square method <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>. In our paper, we used scientific reported facts about COVID-19 transmission mechanisms. The research report <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> provides information about asymptotic individuals for COVID-19.</ns0:p><ns0:p>Most people, with no symptoms at the beginning, develop symptoms in 7-13 days, which corresponds to the &#963; &#8722;1 . Recall that &#947; 1 is the recovery rate of infectious individuals. 
Interpreted as the expected value of a Poisson process, &#947; 1 can be related to the needed time from the beginning of the infection till recovery <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. With average recovery duration equal to 10 days <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, the recovery rate of infectious individuals is &#947; 1 = 0.1.</ns0:p><ns0:p>Let &#969; denote the fraction of asymptotic individuals among positive cases. According to <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>, and based on 13 studies involving 21,708 people in 2020, &#969; = 0.17. Using the same methodology as in <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> &#947; 2 = &#969; 1&#8722;&#969; &#963; &#8776; 0.2&#963;. The asymptotic people are estimated to be 42% less contagious than symptomatic individuals <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. Thus, &#945; = 0.42 &#946;.</ns0:p><ns0:p>Parameter Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>By 18/12/2020, considered as time 0 in this model, the number of recoveries is equal to 351722 , the number of active cases is equal to 3014. Assuming that &#969; = 0.17, the number of initial infectious with and without symptoms is equal to 2501 and 513, respectively.</ns0:p><ns0:p>First, we investigate the effect of the vaccination rate on the effective reproduction number defined as the maximum of the two entities R 1 and R 2 . According to <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, COVID-19 transmission rate &#946; ranges between 0.233 and 0.462. We would like to highlight the fact in our model that a vaccinated individual may be infectious with or without symptoms. This result is very important as, till the end of 2021, an important portion of worldwide population is still opposed to vaccine. In the case of high transmission rate and low vaccination rate, R 1 is higher than 1. The disease free equilibrium is consequently unstable according to theorem 1. For the Saudi case, when beta is equal to 0.233, R 1 and R 2 are equal to 0.0797 and 0.0195, respectively. When beta is equal to 0.462, R 1 and R 2 are equal to 0.1580 and 0.0387, respectively. For the Saudi Arabia, the effective reproduction number is less than 1, even for high transmission rate.</ns0:p><ns0:p>This result is explained by the high vaccination rate. The effect of the transmission rate can also be observed in the amplitude of each category of individuals. When the models converge, the number of infectious and asymptotic individuals are zero.</ns0:p><ns0:p>We emphasize here our theoretical result, mentioned in Theorem 1, that states that if both R 1 and R 2 are less than one, the disease free equilibrium is stable. This result is consistent with the simulation results. The difference between the two considered scenarios lies in the percentage of susceptible and vaccinated individuals in the equilibrium. This percentage is very low when the transmission rate is high. Although the percentages of vaccinated individuals are close, we observe a remarkable difference in the number of infectious individuals. When the transmission rate is high, almost 40% of the population is infected, which rises public health issues. 
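The parameter derivations above can be summarized in a few lines of code. The sketch below assumes the cited values (ω = 0.17, a 10-day recovery duration, and a 7 to 13 day symptom-onset window) and follows the paper's choice α = 0.42β as listed in Table 4.

```python
# Sketch of the derived parameters described above (values taken from the cited reports).
omega = 0.17                            # reported fraction of asymptomatic cases [28]
gamma1 = 1.0 / 10                       # recovery rate of infectious individuals (10 days [8])

for onset_days in (7, 10, 13):          # 1/sigma spans 7-13 days [28]
    sigma = 1.0 / onset_days
    gamma2 = omega / (1.0 - omega) * sigma   # approximately 0.2 * sigma
    print(f"1/sigma = {onset_days:2d} d   sigma = {sigma:.4f}   gamma2 = {gamma2:.4f}")

for beta in (0.233, 0.462):             # transmission-rate range [8]
    alpha = 0.42 * beta                 # value used in Table 4 for the asymptotic class
    print(f"beta = {beta:.3f}   alpha = {alpha:.4f}")
```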
Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Computer Science</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head n='5.1'>Managerial insights and practical implications</ns0:head><ns0:p>By March 2022, several countries, including Saudi Arabia, have simplified COVID-19 rules by relaxing some safe management measures such as social distancing and mask wearing. However, the Saudi ministry of health kept the condition of three vaccine doses. This decision is part of a public health strategy that continues to monitor virus variants, mainly Omicron and its sublineages BA.1 and BA.2.</ns0:p><ns0:p>WHO's Technical Advisory Group requests countries to continue to be vigilant because of potential significant rise in the number of infections. Although these variants might resist neutralizing antibodies in the blood, the vaccine allows preventing severe illness and death. Unlike several countries, Saudi Arabia was spared from the lack of oxygen generators, the most important medical equipment for hospitalized patients. The main concern is, however, regulations regarding vaccination. The vaccine was made mandatory for individuals over 12 years old. The open question is the need for vaccine for children aged from 5 to 11 years. Therefore, managers need to simulate the effect of vaccination and make predictions of the virus spread. This work tries to provide them with an efficient tool that captures the specificity of the Saudi case.</ns0:p></ns0:div> <ns0:div><ns0:head n='6'>Conclusion</ns0:head><ns0:p>In this paper, we present a mathematical model for COVID-19, based on the virus behaviour. Our main target is to evaluate the effect of vaccination on the population. The presence of individuals presenting no symptoms and the immunity loss are the main characteristics that make this virus different from other known and already modeled diseases. We provide analytical expression of the effective reproduction number with is a key factor to determine necessary conditions for endemic and disease free equilibrium. We supported our theoretical findings with the numerical analysis applied to the Saudi case. The main findings of this work are as follows.</ns0:p><ns0:p>&#8226; People can observe that the vaccine helps to reduce the severity of symptoms. We gave a mathematical proof that the vaccine reduces the transmission rate.</ns0:p><ns0:p>&#8226; The vaccination campaign in Saudi Arabia was immediately followed by the rise in the number of infections. We have showed that this observation is mathematically justified and this rise is a necessary transition before the increase of new infections.</ns0:p><ns0:p>&#8226; By adjusting the model parameters based on collected data, we provide the decision-makers with the vaccination rate necessary for virus spread control.</ns0:p><ns0:p>Recently, the scientific community is observing the new variants that show each time different patterns. We aim in the future, to develop a new model with the new observed characteristics of variants such as beta and omicron. As discussed in the previous section, several countries including Saudi Arabia did not face medical resource shortages. However, as highlighted by Lotfi <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>, the management of medical waste is critical during COVID-19 pandemic. Moreover, the number of asymptomatic and untested symptomatic infections is uncertain. 
As future work, we propose to capture disease dynamic uncertainty and incorporate risk assessment, to alleviate the impacts of pandemic peak. Hybrid fuzzy, data-driven, robust optimization, and stochastics <ns0:ref type='bibr'>[21] [22]</ns0:ref> are examples of uncertainty analysis methods. Conditional value at risk (CVaR), which is the average shortfall beyond the VaR point, is a consistent and coherent risk assessment measure.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022) Manuscript to be reviewed Computer Science distancing is still recommended as a public health measure. Driven by the observed characteristics of COVID-19, we propose a mathematical model with two infectious states. It was reported by World</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>depicted the flow diagram of the disease spread. 4 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Proposed model.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>6 PeerJ</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>&#951;&#968;&#923; &#181;(&#181;+&#968;)(&#181;+&#951;) , &#968;&#923; (&#181;+&#968;)(&#181;+&#951;) ) Let x = (I, A, V, R, S) T . System (3.1) can be rewritten as x &#8242; = F(x) &#8722; N (x),, where F be the rate of appearance of new infections in each compartment. The progression from A to I is not considered to be new infection, but rather the progression of an infected individual through various infectious compartments. Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>&#181;+&#951;+&#968;&#1013; 2 &#181;+&#963;+&#947; 2 7</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>an increasing linear function of &#968;, there is a unique &#968; &gt; &#968; crit such that b( &#968;) = 0. and &#8710;(&#968;) has a unique root &#968; in [&#968; crit , &#968;]. P(I) has two roots and the system 3.1 has two endemic equilibria for &#968; crit &lt; &#968; &lt; &#968;. and P(I) ha no real positive root and the system (3.1) has no endemic equilibria for &#968; &gt; &#968;.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Fig 2a and Fig 2b show the evolution of both R 1 and R 2 as a function of the vaccination rate &#968; with virus transmission rate equal to 0.233 and 0.462, respectively. In both cases, R 1 corresponding to the strain of infectious individuals with symptoms is greater than R 2 , corresponding to the strain of asymptotic individuals. Thus the number of individuals infected by one person currying the virus is mainly affected by individuals showing usual symptoms. Mathematical theoretical result confirms that the vaccine reduces the spread of the virus among the population.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Varying R 1 and R 2 as a function of &#968;, &#946; = 0.233. 
Varying R 1 and R 2 as a function of &#968;, &#946; = 0.462.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Varying R 1 and R 2 as a function of vaccination rate for two virus transmission rates, &#946; = 0.233 and &#946; = 0.462.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Fig. 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3 shows the weekly number of new active cases and recoveries after infection in KSA, starting from 18/12/2020, the date when the vaccination started. We can see that after 12 weeks, the number of cases rises. This behaviour was surprising for a population waiting to see the effect of vaccination. It is only after 31 weeks that the number of new cases starts to decrease. The same phenomenon was observed in both Fig. 4c and Fig. 5c. The theoretical results conform to the actual statistics. The effect of vaccination is not immediate; it takes several weeks to observe a decrease in the number of new infectious cases. We compare the evolution in time of the five compartments (S, V, I, A, and R) presented in</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Number of active cases and recoveries after infection in KSA, starting from 18/12/2020.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Percentage of susceptible, vaccinated, infectious, asymptotic and recovered individuals, &#946; = 0.233, &#968; = 0.0012.</ns0:figDesc><ns0:graphic coords='13,167.48,488.78,208.13,147.00' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Percentage of susceptible, vaccinated, infectious,
asymptotic and recovered individuals, &#946; = 0.462, &#968; = 0.0012.</ns0:figDesc><ns0:graphic coords='14,167.48,488.78,208.13,147.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Most relevant studies in the field of COVID-19 trend prediction</ns0:figDesc><ns0:table><ns0:row><ns0:cell>SEIR+Quarantined+Hospitalized Canada</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 and</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Table 3 summarize the different model parameters and variables.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Parameter Description</ns0:cell></ns0:row><ns0:row><ns0:cell>&#923;</ns0:cell><ns0:cell>Recruitment rate of susceptible humans</ns0:cell></ns0:row><ns0:row><ns0:cell>&#181;</ns0:cell><ns0:cell>Natural mortality rate</ns0:cell></ns0:row><ns0:row><ns0:cell>1/&#951;</ns0:cell><ns0:cell>Immunity development period</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947; 1</ns0:cell><ns0:cell>Recovery rate of infectious individuals</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947; 2</ns0:cell><ns0:cell>Recovery rate of asymptotic individuals</ns0:cell></ns0:row><ns0:row><ns0:cell>1/ &#963;</ns0:cell><ns0:cell>Period for asymptotic individuals to develop symptoms</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Model parameters and description.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variables</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Variable Description</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>Transmission rate for infectious individuals</ns0:cell></ns0:row><ns0:row><ns0:cell>&#945;</ns0:cell><ns0:cell>Transmission rate for asymptotic individuals</ns0:cell></ns0:row><ns0:row><ns0:cell>&#968;</ns0:cell><ns0:cell>Vaccination coverage rate</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Model variables and description.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>The data set is provided by the King Abdullah Petroleum Studies and Research Center (KAPSARC). It includes six classes: Tested, Cases, Recoveries, Critical, Mortalities, and Active, and it spans the period from 04/03/2020 to 08/11/2021. It also includes important events and measures such as the suspension of international flights and lockdown. We use the Simulink tool in order to simulate different scenarios. The death and birth rates for Saudi Arabia are estimated at 3.39 and 14.56 per 1000 per year, respectively. The vaccination campaign started on 18/12/2020 with a vaccination coverage of 0.02% of the total population, reaching about 65% of the adult population fully vaccinated in November 2021. The vaccination rate is considered as the percentage of the total population that gets vaccinated per day. With approximately 45000 administered doses per day and a total population of 35339000 in 2021, this rate is about 0.00127. Is this rate enough to eradicate the disease?
This is what we are trying to answer in this work.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Model parameters and values.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Range</ns0:cell></ns0:row><ns0:row><ns0:cell>&#923;</ns0:cell><ns0:cell>14.56 per 1000 per year</ns0:cell></ns0:row><ns0:row><ns0:cell>&#946;</ns0:cell><ns0:cell>[0.233, 0.462] [8]</ns0:cell></ns0:row><ns0:row><ns0:cell>&#945;</ns0:cell><ns0:cell>0.42 &#946;</ns0:cell></ns0:row><ns0:row><ns0:cell>&#968;</ns0:cell><ns0:cell>model parameter</ns0:cell></ns0:row><ns0:row><ns0:cell>&#181;</ns0:cell><ns0:cell>3.39 per 1000 per year</ns0:cell></ns0:row><ns0:row><ns0:cell>1/&#951;</ns0:cell><ns0:cell>14 days</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947; 1</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell>&#947; 2</ns0:cell><ns0:cell>0.2 &#963;</ns0:cell></ns0:row><ns0:row><ns0:cell>1/&#963;</ns0:cell><ns0:cell>[7,13] days</ns0:cell></ns0:row></ns0:table><ns0:note>9 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022)Manuscript to be reviewed</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69957:1:0:NEW 12 Mar 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
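To connect Table 4 with the closed forms of Section 3 and the stability condition of Theorem 1, the sketch below evaluates R1 and R2 and the corresponding eigenvalue factors λ4 = (µ + γ1)(R1 − 1) and λ5 = (µ + σ + γ2)(R2 − 1). The leakiness factors ε1 and ε2 are not listed in Table 4 and are assumed here, and Λ is treated as a per-capita daily rate, so the printed values illustrate the computation rather than reproduce the R1 and R2 values reported in Section 5.

```python
# Sketch: effective reproduction numbers and DFE stability check from Table 4 values.
# eps1, eps2 and the per-capita scaling of Lambda are assumptions for illustration only.
Lam = 14.56 / 1000 / 365      # recruitment, per capita per day
mu  = 3.39 / 1000 / 365       # natural mortality, per capita per day
eta = 1.0 / 14                # 1/eta = immunity development period of 14 days
psi = 0.00127                 # vaccination rate estimated for Saudi Arabia
sigma, gamma1 = 1.0 / 10, 0.1
gamma2 = 0.2 * sigma
eps1 = eps2 = 0.1             # assumed vaccine leakiness

def effective_reproduction_numbers(beta):
    alpha = 0.42 * beta
    R1 = beta  * Lam * (1 + eps1 * psi / (mu + eta)) / ((mu + gamma1) * (mu + psi))
    R2 = alpha * Lam * (1 + eps2 * psi / (mu + eta)) / ((mu + gamma2 + sigma) * (mu + psi))
    return R1, R2

for beta in (0.233, 0.462):
    R1, R2 = effective_reproduction_numbers(beta)
    lam4 = (mu + gamma1) * (R1 - 1)            # eigenvalue tied to the symptomatic strain
    lam5 = (mu + sigma + gamma2) * (R2 - 1)    # eigenvalue tied to the asymptotic strain
    print(f"beta={beta:.3f}  R1={R1:.4f}  R2={R2:.4f}  DFE stable: {R1 < 1 and R2 < 1}")
```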
"College of Computer and Information Science Princess Nourah bint AbdulRahman University Email:mshamdi@pnu.edu.sa March 12th, 2022 Dear editors, We would like to thank you for considering our manuscript: Mathematical COVID-19 model with vaccination: A case study in Saudi Arabia (CS-2022:01:69957:0:1:REVIEW). Thanks to the helpful reviewers' comments, the manuscript quality is improved. Our responses are outlined below and the updated information is highlighted using the red color, in the revised manuscript. We hope that these revisions are sufficient to make our manuscript suitable for publication. Reviewer 1 Basic reporting Experimental design Validity of the findings Additional comments Comment 1 Please add contribution with bullet mark in introduction (Please see and cite [1-9]) Response Contributions have been listed, with bullet mark in introduction; lines 54-60 Comment 2 Please add literature review table and add your research in end of table and show gap research (Please see and cite [1-9]) Response Literature review table has been added (Table 1, p.5); with our research in end of table. 1 Comment 3 Please add notation list and classify to sets (indices), parameters, decision variables (Please see and cite [1-9]) Response The used symbols have been classified to parameters and variables (Table 2 and 3, respectively) Comment 4 Please add Managerial insights and practical implications (Please see and cite [1-9]). Response Managerial insights and practical implications section has been added (lines 341-354). Comment 5 Please add results with bullet mark in conclusion (Please see and cite [1-9]). Response Results have been added with bullet mark in conclusion (lines 362-369). Comment 6 Please suggest uncertainty form like fuzzy, robust optimization, data-driven and stochastic. Response Based on the work in [1][2] and [9] (numbering is based on the list below of suggested references), a future work has been added in the conclusion. We propose to include uncertainty and risk assessment in our developed model (lines 372-379). Comment 7 Please suggest risk form like CVaR, VaR. Response Based on the work in [1][2] and [9] (numbering is based on the list below of suggested references), a future work has been added in the conclusion. We propose to include uncertainty and risk assessment in our developed model (lines 372-379). 2 Suggested references by the reviewer [1] Lotfi, R., Kheiri, K., Sadeghi, A., & Babaee Tirkolaee, E. (2022). An extended robust mathematical model to project the course of COVID-19 epidemic in Iran. Annals of Operations Research, 1-25. [2] Lotfi, R., Kargar, B., Gharehbaghi, A., & Weber, G. W. (2021). Viable medical waste chain network design by considering risk and robustness. Environmental Science and Pollution Research, 1-16. [3] Lotfi, R., Mardani, N., & Weber, G. W. (2021). Robust bi-level programming for renewable energy location. International Journal of Energy Research, 45(5), 7521-7534. [4] Lotfi, R., Kargar, B., Hoseini, S. H., Nazari, S., Safavi, S., & Weber, G. W. Resilience and sustainable supply chain network design by considering renewable energy. International Journal of Energy Research. [5] Lotfi, R., Mehrjerdi, Y. Z., Pishvaee, M. S., Sadeghieh, A., & Weber, G. W. (2021). A robust optimization model for sustainable and resilient closed-loop supply chain network design considering conditional value at risk. Numerical Algebra, Control & Optimization, 11(2), 221. [6] Lotfi, R., Yadegari, Z., Hosseini, S. H., Khameneh, A. H., Tirkolaee, E. B., & Weber, G. 
W. (2020). A robust time-cost-quality-energy-environment trade-off with resourceconstrained in project management: A case study for a bridge construction project. Journal of Industrial & Management Optimization. [7] Lotfi, R., Safavi, S., Gharehbaghi, A., Ghaboulian Zare, S., Hazrati, R., & Weber, G. W. (2021). Viable Supply Chain Network Design by considering Blockchain Technology and Cryptocurrency. Mathematical Problems in Engineering, 2021. [8] Lotfi, R., Sheikhi, Z., Amra, M., AliBakhshi, M., & Weber, G. W. (2021). Robust optimization of risk-aware, resilient and sustainable closed-loop supply chain network design with Lagrange relaxation and fix-and-optimize. International Journal of Logistics Research and Applications, 1-41. [9] Lotfi, R., Kargar, B., Rajabzadeh, M., Hesabi, F., & ?zceylan, E. (2022). Hybrid Fuzzy and Data-Driven Robust Optimization for Resilience and Sustainable Health Care Supply Chain with Vendor-Managed Inventory Approach. International Journal of Fuzzy Systems, 116. 3 Reviewer 2 Basic reporting The English is poor. Authors have to revise the paper carefully. For example, 'The discovery of a new form of corona-viruses in December 2019, SARS-CoV-2, commonly named covid-19, has transformed the world.' the word transformed here is wrongly used. 'The world has been waiting for the vaccine for almost one year.' may be more than one year? Also line 261 and line 265 should be corrected. Response The paper was proofread and the mentioned sentences have been corrected. Experimental design The numerical experiment are given. However, some detail illustration should be given on these results with figures. Response The numerical results showing the evolution of the different types of individuals (susceptible, vaccinated, infectious, asymptotic, and recovered) are illustrated using figures. Validity of the findings The results in this paper deserve publication. Response Thank you very much. Additional comments The references are not completed, such as Nonlinear Dynamics 106, 1491-1507, 2021, Nonlinear Dynamics 106, 1347-1358, 2021. Response The related works Section has been updated to include the reference Nonlinear Dynamics 106, 1491-1507, 2021 (lines 136-138) 4 Reviewer 3 Basic reporting The COVID-19 pandemic has already spread throughout the world and the people are aware about the diseases and they are using precautions about the pandemic. But, still the covid-19 is spreading very quickly. Formulate a mathematical model for the transmission dynamics of COVID-19. Few minor issues: ---- The abstract is a little thin and does not quite convey the vibrancy of the findings and the depth of the main conclusions. The authors should please extend this somewhat for a better first impression. ---- The manuscript lacks motivation. Author needs to include the motivation of the study. -----To stop the spread of the diseases vaccine is needed. But, in absence of the vaccine people must have maintain the social distancing. In order to maintain the social distancing must obey the modeling rule. The introduction need to be improved by incorporating some recent references of COVID-19 pandemic. To do so, I suggest some modeling work must be included in the references: 1- 'Modeling and forecasting the COVID-19 pandemic in India, Chaos, Soliton & Fract. 139 (2020) 110049', 2- “Mathematical modeling of the COVID-19 outbreak with intervention strategies, Results in Physics, 2021, 104285”. Response ---- The abstract has been updated by adding the main findings (lines 18-20). 
---- The abstract has been updated by adding our motivations (lines 9-14). ---- The Introduction section has been updated to emphasis the importance of social distancing. The reference “Mathematical modeling of the COVID-19 outbreak with intervention strategies, Results in Physics, 2021, 104285” has been added ', (lines 35-39). 5 Experimental design Need to explain properly the parameter estimation. For parameter estimation and model validation authors should include the manuscripts: 3- 'Forecasting the daily and cumulative number of cases for the COVID-19 pandemic in India, Chaos, 30(7) (2020) 071101', 4- “Impact of social media advertisements on the transmission dynamics of COVID-19 pandemic in India, Journal of Applied Mathematics and Computing (2021)” Response The Section Numerical simulations has been updated to include the manuscript 'Impact of social media advertisements on the transmission dynamics of COVID-19 pandemic in India, Journal of Applied Mathematics and Computing (2021) ', (lines 282-285). Validity of the findings For model validation and proper writing of the manuscript authors should include the manuscripts: 5- ' Modeling the dynamics of COVID-19 pandemic with implementation of intervention strategies, The European Physical Journal Plus (2022)'; 6- Mathematical modeling and optimal intervention strategies of the COVID-19 outbreak, Nonlinear Dynamics, (2022)'. Response The related works Section has been updated to include the reference 'Modeling the dynamics of COVID-19 pandemic with implementation of intervention strategies, The European Physical Journal Plus (2022)' (lines 138-141) Additional comments No comments Dr. Monia Hamdi Assistant professor On behalf of all authors. 6 "
Here is a paper. Please give your review comments after reading it.
408
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Statistical analyses of biomechanical finite element (FE) simulations are frequently conducted on scalar metrics extracted from anatomically homologous regions, like maximum von Mises stresses from demarcated bone areas. Advantages of this approach are numerical tabulability and statistical simplicity, but disadvantages include region demarcation subjectivity, spatial resolution reduction, and results interpretation complexity when attempting to mentally map tabulated results to original anatomy. This study proposes a method which abandons the two aforementioned advantages to overcome these three limitations. The method is inspired by parametric random field theory (RFT), but instead uses a non-parametric analogue to RFT which permits flexible model-wide statistical analyses through non-parametrically constructed probability densities regarding volumetric upcrossing geometry. We illustrate method fundamentals using basic 1D and 2D models, then use a public model of hip cartilage compression to highlight how the concepts can extend to practical biomechanical modeling. The ultimate whole-volume results are easy to interpret, and for constant model geometry the method is simple to implement. Moreover, our analyses demonstrate that the method can yield biomechanical insights which are difficult to infer from single simulations or tabulated multi-simulation results. Generalizability to non-constant geometry including subjectspecific anatomy is discussed.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>In numerical finite element (FE) simulations of biomechanical continua model inputs like material properties and load magnitude are often imprecisely known. This uncertainty arises from a variety of sources including: measurement inaccuracy, in vivo measurement inaccessibility, and natural betweensubject material, anatomical and loading variability <ns0:ref type='bibr' target='#b5'>(Cheung et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b40'>Ross et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b6'>Cox et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b9'>Cox et al., , 2015;;</ns0:ref><ns0:ref type='bibr' target='#b18'>Fitton et al., 2012b)</ns0:ref>. Despite this uncertainty, an investigator must choose specific parameter values because numerical simulation requires it. Parameters are typically derived from published data, empirical estimation, or mechanical intuition <ns0:ref type='bibr' target='#b21'>(Kupczik et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b8'>Cox et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b7'>Cox et al., , 2013;;</ns0:ref><ns0:ref type='bibr' target='#b38'>Rayfield, 2011;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cuff et al., 2015)</ns0:ref>.</ns0:p><ns0:p>It is also possible to perform multiple FE simulations using a spectrum of feasible model input values to generate a distribution of model outputs <ns0:ref type='bibr' target='#b11'>(Dar et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b1'>Babuska and Silva, 2014)</ns0:ref>. More simply, probabilistic model inputs yield probabilistic outputs, and continuum mechanics' inherent nonlinearities ensure that these input and output probabilities are nonlinearly related. Probing output distributions statistically therefore generally requires numerical simulation. 
Such analyses can require substantial computational resources: probabilistic FE outputs have been shown to converge to stable numerical values only for on-the-order of 1000 to 100,000 simulation iterations depending on model complexity (Dopico-PeerJ Comput. Sci. reviewing PDF | (CS-2015:08:6202:1:0:NEW 23 Sep 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b12'>Gonz&#225;lez et al., 2009)</ns0:ref>. The advent of personal computing power has mitigated problems associated with this computational demand and has led to a sharp increase in probabilistic FE simulation in a variety of engineering fields <ns0:ref type='bibr' target='#b44'>(Stefanou, 2009)</ns0:ref> including biomechanics <ns0:ref type='bibr' target='#b15'>(Easley et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b22'>Laz et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b26'>Lin et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b37'>Radcliffe and Taylor, 2007;</ns0:ref><ns0:ref type='bibr' target='#b19'>Fitzpatrick et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Producing a probabilistic input-output mapping is conceptually simple: iteratively change input parameters according to a particular distribution and assemble output parameters for each iteration to yield an output distribution. The simplest method is Monte Carlo simulation which randomly generates input parameters based on given mean and standard deviation values <ns0:ref type='bibr' target='#b11'>(Dar et al., 2002)</ns0:ref>. More complex methods like Markov Chain Monte Carlo can accelerate probabilistic output distribution convergence <ns0:ref type='bibr' target='#b2'>(Boyaval, 2012)</ns0:ref>.</ns0:p><ns0:p>Once probabilistic inputs / outputs are generated they may be probed using a variety of statistical methods. A common technique is to extract scalars like maximum von Mises stress from anatomically demarcated regions of interest <ns0:ref type='bibr' target='#b37'>(Radcliffe and Taylor, 2007)</ns0:ref>. Other techniques include Taguchi global model comparisons <ns0:ref type='bibr' target='#b45'>(Taguchi, 1987;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dar et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b26'>Lin et al., 2007)</ns0:ref> to fuzzy set modeling <ns0:ref type='bibr' target='#b1'>(Babuska and Silva, 2014)</ns0:ref> and probability density construction for specific model parameters <ns0:ref type='bibr' target='#b15'>(Easley et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b22'>Laz et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b29'>McFarland and Mahadevan, 2008;</ns0:ref><ns0:ref type='bibr' target='#b12'>Dopico-Gonz&#225;lez et al., 2009)</ns0:ref>.</ns0:p><ns0:p>The purpose of this paper is to propose an alternative method which conducts classical hypothesis testing at the whole-model level using continuum upcrossing geometry. An 'upcrossing' is a portion of the continuum that survives a threshold (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) like an island above the water's surface or a mountain top above clouds. Each upcrossing possess a number of geometrical features including maximum height, extent and integral, where integrals, for examples, are areas, volumes and hyper-volumes for 1D, 2D and 3D continua, respectively. 
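Since the remainder of the paper relies heavily on upcrossing geometry, a small self-contained sketch is given below: it thresholds a smooth, synthetic 1D "test statistic" continuum and reports each upcrossing's maximum height, extent and integral. The continuum, smoothing and threshold are arbitrary illustrations and are unrelated to the models of Section 2.

```python
# Sketch: geometric features of upcrossings (supra-threshold clusters) of a 1D continuum.
# The continuum is synthetic smoothed Gaussian noise; the threshold u is arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter1d, label

rng = np.random.default_rng(1)
t = gaussian_filter1d(rng.standard_normal(101), sigma=5)   # smooth 1D "test statistic"
u = 0.2                                                    # arbitrary threshold

labels, n = label(t > u)                  # connected sets of supra-threshold nodes
for k in range(1, n + 1):
    idx = np.flatnonzero(labels == k)
    height   = t[idx].max()               # maximum height of this upcrossing
    extent   = idx.size                   # number of continuum nodes it spans
    integral = np.sum(t[idx] - u)         # summed height above the threshold (unit node spacing)
    print(f"upcrossing {k}: max={height:.3f}  extent={extent}  integral={integral:.3f}")
```

In the permutation framework described next, distributions of such features (in particular the maximum height across all upcrossings) are assembled from many thresholded test statistic continua, one per simulation iteration.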
Parametric solutions to upcrossing geometry probabilities exist for n-dimensional Gaussian continua in the random field theory (RFT) literature <ns0:ref type='bibr' target='#b0'>(Adler and Taylor, 2007)</ns0:ref>, and non-parametric approximations have been shown to be equally effective <ns0:ref type='bibr' target='#b32'>(Nichols and Holmes, 2002)</ns0:ref>. The method we propose follows the latter, non-parametric permutation approach because it is ideally suited to the iterative simulation which characterizes probabilistic FE analysis.</ns0:p><ns0:p>The method is inspired by hypothesis testing approaches in nonlinear modeling <ns0:ref type='bibr' target='#b23'>(Legay and Viswanatha, 2009)</ns0:ref> and in particular a label-based continuum permutation approach <ns0:ref type='bibr' target='#b32'>(Nichols and Holmes, 2002)</ns0:ref>. It first assembles a large number of element- or node-based test statistic volumes through iterative simulation, then conducts inference using non-parametrically estimated upcrossing probabilities. These upcrossing distributions form a general framework for conducting classical, continuum-level hypothesis testing on FE models in arbitrarily complex experiments.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>All models were constructed and solved in FEBio; the governing equations are described in the FEBio Theory Manual <ns0:ref type='bibr' target='#b28'>(Maas et al., 2015)</ns0:ref>. Model files and analysis scripts are available in this project's GitHub repository (github.com/0todd0000/probFEApy).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1'>Models</ns0:head></ns0:div> <ns0:div><ns0:head n='2.1.1'>Model A: Simple anisotropic bone compression</ns0:head><ns0:p>A single column of hexahedral elements (Fig. <ns0:ref type='figure' target='#fig_1'>2a</ns0:ref>) with anisotropic stiffness (Fig. <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref>) was used to represent bone with local material inconsistencies. This simplistic model was used primarily to efficiently demonstrate the key concepts underlying the proposed methodology. Nodal displacements were fully constrained at one end, and a total compressive force of 8000 N was applied to the other end along the longitudinal axis. The bone material was linearly elastic with a Poisson's ratio of 0.3.</ns0:p><ns0:p>Local anisotropy in Young's modulus (Fig. <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref>) was created using Gaussian pulses centered at 70% along the bone length with amplitudes and breadths of approximately 10% and 20%, respectively. The actual amplitudes and breadths of the stiffness increase were varied randomly to simulate an experiment involving N=8 randomly sampled subjects in which the bone's anisotropic stiffness profile was measured separately for each subject. Additionally, a small random signal was separately applied to each of the eight cases to ensure that variance was greater than zero, and thus that test statistic values were computable at all points in the continuum.</ns0:p></ns0:div>
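<ns0:p>As a concrete illustration of these Model A inputs, the following sketch generates N=8 stiffness continua of this form; the element count, baseline modulus, pulse parameters and noise amplitude are assumptions chosen for illustration and may differ from the values used in the published models.</ns0:p>

```python
# Sketch of Model A-style stiffness inputs: a Gaussian pulse centered at 70% of
# the bone length, with randomly varied amplitude and breadth plus a small random
# signal so that between-subject variance is nonzero everywhere.
import numpy as np

def stiffness_profile(rng, n_elem=100, e0=14.0):
    q       = np.linspace(0.0, 1.0, n_elem)                 # normalized position
    amp     = 0.10 * e0 * rng.uniform(0.8, 1.2)             # ~10% amplitude, varied
    breadth = 0.20 * rng.uniform(0.8, 1.2)                  # ~20% breadth, varied
    pulse   = amp * np.exp(-0.5 * ((q - 0.7) / (0.25 * breadth)) ** 2)
    noise   = 0.01 * e0 * rng.standard_normal(n_elem)       # small random signal
    return e0 + pulse + noise                               # Young's modulus (GPa)

rng = np.random.default_rng(0)
E   = np.array([stiffness_profile(rng) for _ in range(8)])  # N=8 simulated 'subjects'
print(E.shape, E.max())
```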
<ns0:div><ns0:head n='2.1.2'>Model B: Soft tissue indentation</ns0:head><ns0:p>A rigid hexahedral block was compressed against soft tissue to a depth of 1 cm as depicted in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. Nodal displacements on the soft tissue's bottom surface were fully constrained. The soft tissue was modeled as hyperelastic with the following Mooney-Rivlin strain energy function <ns0:ref type='bibr' target='#b28'>(Maas et al., 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>W = a(I - 3) + \frac{k}{2} (\ln J)^2 \quad (1)</ns0:formula><ns0:p>Here a is the hyperelastic parameter, k is the elasticity volume modulus, I is the deformation tensor's first deviatoric invariant, and J is the deformation Jacobian. The parameter a was set to 100, and eight k values (800, 817, 834, 851, 869, 886, 903, 920) were compared to a datum case of k=820.</ns0:p><ns0:p>Additionally, three different indenter face types were compared. The first indenter face was perfectly flat, and the other two were uneven but smooth as depicted in Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>. The uneven surfaces were generated by adding spatially smoothed Gaussian noise to the indenter face's z coordinates (i.e. the compression direction), then scaling to a maximum value of approximately 2.5 mm, or 1.7% of the indenter's height.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.1.3'>Model C: Hip cartilage compression</ns0:head><ns0:p>A separately-published model of hip cartilage compression <ns0:ref type='bibr' target='#b28'>(Maas et al., 2015)</ns0:ref> (Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>) was used. The bones were rigid and the cartilage was modeled using the hyperelastic Mooney-Rivlin model above (Eqn. 1) with a constant a value of 6.817. Ten different values of k were simulated for each of two hypothetical groups (Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>) to mimic a two-sample experiment involving in vivo or in vitro material property measurements. The pelvis and acetabular cartilage were fixed and the femur was kinematically driven 1 mm in the upward direction.</ns0:p></ns0:div>
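<ns0:p>A minimal numerical sketch of Eqn. 1 is given below; the deformation state (I, J) is hypothetical and chosen only to show how the datum and perturbed volume moduli of Model B enter the strain energy. The function name is illustrative and is not part of the probFEApy repository.</ns0:p>

```python
# Sketch of the Mooney-Rivlin strain energy of Eqn. 1 (illustrative values only).
import numpy as np

def mooney_rivlin_energy(a, k, I1, J):
    """W = a*(I1 - 3) + (k/2)*(ln J)^2."""
    return a * (I1 - 3.0) + 0.5 * k * np.log(J) ** 2

I1, J = 3.05, 0.98                                 # hypothetical invariant and Jacobian
print(mooney_rivlin_energy(100.0, 820.0, I1, J))   # Model B datum (a=100, k=820)
print(mooney_rivlin_energy(100.0, 920.0, I1, J))   # largest perturbed k value
```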
<ns0:div><ns0:head n='2.2'>Analysis</ns0:head><ns0:p>We used a non-parametric permutation method from the Neuroimaging literature <ns0:ref type='bibr' target='#b32'>(Nichols and Holmes, 2002)</ns0:ref> to conduct classical hypothesis testing at the whole-model level. This permutation approach is a non-parametric analogue to parametric inference based on Random Field Theory <ns0:ref type='bibr' target='#b0'>(Adler and Taylor, 2007)</ns0:ref>. The method is described below and is depicted in Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>. All permutations described below were applied to pre-simulated FEA results.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.1'>Model A</ns0:head><ns0:p>The datum Young's modulus (E=14 GPa) was subtracted from the eight 1D Young's modulus continua (Fig. <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref>), and the resulting difference continua were sign-permuted (Fig. <ns0:ref type='figure' target='#fig_5'>6a</ns0:ref>) to generate a number of artificial data samples. For each sample the t continuum was computed according to the typical one-sample t statistic definition:</ns0:p><ns0:formula xml:id='formula_1'>t(q) = \frac{\bar{y}(q) - \mu(q)}{s(q)/\sqrt{N}} \quad (2)</ns0:formula><ns0:p>where \bar{y} is the sample mean, µ is the datum, s is the sample standard deviation, N is the sample size and q is continuum position. Repeating for all permutation samples produced a distribution of 1D t continua (Fig. <ns0:ref type='figure' target='#fig_5'>6b</ns0:ref>), whose maxima formed a 'primary' probability density function (PDF) (Fig. <ns0:ref type='figure' target='#fig_5'>6c</ns0:ref>). This primary PDF represents the expected maximum difference (from the datum case of E = 14 GPa) that smooth, purely random continua would be expected to produce if there were truly no effect.</ns0:p><ns0:p>We conducted classical hypothesis testing at α=0.05 using the primary PDF's 95th percentile (t*) as the criterion for null hypothesis rejection; if the t continuum associated with the original, non-permuted data (Fig. <ns0:ref type='figure' target='#fig_5'>6a</ns0:ref>) exceeded t*, the null hypothesis was rejected. In this example the original t continuum failed to traverse t* (Fig. <ns0:ref type='figure' target='#fig_5'>6e</ns0:ref>), so the null hypothesis was not rejected; based on the primary PDF the exact probability value was p = 0.101.</ns0:p><ns0:p>We repeated this procedure for the effective strain and von Mises stress distributions associated with the eight Young's modulus continua. In cases where the original t continuum exceeded the t* threshold, probabilities associated with the upcrossing(s) (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) were computed with a 'secondary' PDF (Fig. <ns0:ref type='figure' target='#fig_5'>6d</ns0:ref>) which embodied the probability of observing upcrossings with a particular volume (i.e. supra-threshold integral). Note that (i) (1-α)% of the values in the secondary PDF are zero by definition, (ii) an upcrossing which infinitesimally exceeds t* has an integral of zero and a p value of α, and (iii) the minimum upcrossing p value is 1/n, where n is the total number of permutations. All integrals were computed using trapezoidal approximation.</ns0:p></ns0:div>
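<ns0:p>The following is a minimal NumPy sketch of this sign-permutation procedure. It operates on synthetic difference continua rather than the pre-simulated FE outputs, uses the maximum t value as the primary metric, and its function names are illustrative rather than those of the probFEApy repository.</ns0:p>

```python
# Sketch of whole-continuum inference via sign permutation (Section 2.2.1).
# Assumption: y is an (observations x nodes) array of difference continua
# (datum already subtracted); synthetic data are used here for illustration.
import numpy as np

def t_continuum(y):
    # One-sample t statistic at each continuum node (Eqn. 2 with mu = 0).
    n = y.shape[0]
    return y.mean(axis=0) / (y.std(axis=0, ddof=1) / np.sqrt(n))

def sign_permutation_test(y, alpha=0.05, n_perm=10000, seed=0):
    rng   = np.random.default_rng(seed)
    n     = y.shape[0]
    t_obs = t_continuum(y)
    t_max = np.empty(n_perm)
    for i in range(n_perm):
        signs    = rng.choice([-1.0, 1.0], size=(n, 1))    # random sign flip per observation
        t_max[i] = t_continuum(signs * y).max()            # primary PDF entry
    t_star = np.percentile(t_max, 100 * (1 - alpha))       # critical threshold
    p      = (t_max >= t_obs.max()).mean()                 # whole-continuum p value
    return t_obs, t_star, p

# Example with eight smooth, purely random continua (101 nodes each):
rng = np.random.default_rng(1)
y   = 0.05 * np.cumsum(rng.standard_normal((8, 101)), axis=1)
t_obs, t_star, p = sign_permutation_test(y)
print(round(t_star, 3), round(p, 3), bool((t_obs > t_star).any()))
```

<ns0:p>Upcrossing-specific p values could then be obtained by thresholding each permuted t continuum at t_star and building a secondary PDF of trapezoidal upcrossing integrals, exactly as described above.</ns0:p>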
<ns0:div><ns0:head n='2.2.2'>Model A, Part 2</ns0:head><ns0:p>We conducted a secondary analysis of Model A to examine how additional probabilistic variables increase computational demand. For this analysis we considered load direction (θ) to be uncertain, with a mean of zero and a standard deviation of 3 deg (forces with θ=0 deg are depicted in Fig. 2a, and these forces were rotated about the depicted Y axis). For typical simulation of random variables, hundreds or thousands of simulations are usually needed to achieve probability distribution convergence <ns0:ref type='bibr' target='#b12'>(Dopico-González et al., 2009)</ns0:ref>, but we aimed to show that computational increases may be minimal for the proposed hypothesis testing framework.</ns0:p><ns0:p>We randomly varied θ for an additional 400 FE simulations, 50 for each of the observations depicted in Fig. 2b. We then qualitatively compared the permutation-generated distribution of t continua after just 16 simulations (one extra FE simulation for each observation) to the distribution obtained after 400 FE simulations. To quantitatively assess the effects of the number of simulations N on the distributions, we examined the null hypothesis rejection rate for the N=16 and N=400 cases as a function of the number of post-simulation permutations.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.3'>Model B</ns0:head><ns0:p>The goal of Model B analysis was to qualitatively assess the effects of imperfect contact geometry (Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>) on the resulting stress and test statistic continua. Nine FE simulations were conducted for each of the three indenter faces (Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>): one datum (k=820) and then the eight other values of k as described above. For each indenter we computed the mean von Mises stress distribution in the compressed soft tissue, then compared this mean to the datum (k=820) stress distribution through the one-sample test statistic (Eqn. 2).</ns0:p></ns0:div> <ns0:div><ns0:head n='2.2.4'>Model C</ns0:head><ns0:p>The goal of Model C analysis was to demonstrate how the analysis techniques and results for Model A and Model B extend to realistic, complex models. The null hypothesis of equivalent von Mises stress distributions in each group (Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>) was tested using a slight modification of the permutation approach described above (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>). The only differences were that (i) the two-sample t statistic was computed instead of the one-sample t statistic, and (ii) group permutations were conducted instead of sign permutations.</ns0:p><ns0:p>Group permutations were performed by randomly assigning each of the 20 continuum observations to one of the two groups, with ten observations in each group, then repeating for a total of 10,000 random permutations. Although the total number of possible permutations was 20! / (10! 10!) = 184,756, we found no qualitative effect of adding more than 10,000 permutations.</ns0:p></ns0:div>
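<ns0:p>A sketch of this group-permutation variant is shown below. It again uses synthetic continua in place of the pre-simulated stress fields, adopts a two-sided (maximum |t|) criterion, and its function names are illustrative only.</ns0:p>

```python
# Sketch of two-sample, group-permutation inference (Section 2.2.4).
# Assumption: yA and yB are (observations x nodes) arrays of pre-simulated
# output continua; synthetic data are used here for illustration.
import numpy as np

def t2_continuum(yA, yB):
    # Two-sample t statistic at each node (pooled variance).
    nA, nB = yA.shape[0], yB.shape[0]
    d   = yA.mean(axis=0) - yB.mean(axis=0)
    sp2 = ((nA - 1) * yA.var(axis=0, ddof=1) + (nB - 1) * yB.var(axis=0, ddof=1)) / (nA + nB - 2)
    return d / np.sqrt(sp2 * (1.0 / nA + 1.0 / nB))

def group_permutation_test(yA, yB, alpha=0.05, n_perm=10000, seed=0):
    rng   = np.random.default_rng(seed)
    y     = np.vstack([yA, yB])
    nA    = yA.shape[0]
    t_obs = t2_continuum(yA, yB)
    t_max = np.empty(n_perm)
    for i in range(n_perm):
        idx      = rng.permutation(y.shape[0])             # random group relabelling
        t_max[i] = np.abs(t2_continuum(y[idx[:nA]], y[idx[nA:]])).max()
    t_star = np.percentile(t_max, 100 * (1 - alpha))
    p      = (t_max >= np.abs(t_obs).max()).mean()
    return t_obs, t_star, p

# Ten observations per group, 200 nodes of synthetic 'stress' data:
rng = np.random.default_rng(2)
yA  = rng.standard_normal((10, 200))
yB  = rng.standard_normal((10, 200)) + 0.8                 # group B shifted upward
t_obs, t_star, p = group_permutation_test(yA, yB)
print(round(t_star, 3), round(p, 4))
```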
<ns0:div><ns0:head n='3'>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head n='3.1'>Model A</ns0:head><ns0:p>FE simulations of each of the eight cases depicted in Fig. <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref> yielded the stress/strain distributions and t statistic distributions depicted in Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. In this example Young's moduli only increased (Fig. <ns0:ref type='figure' target='#fig_8'>7a</ns0:ref>) and strain only decreased (Fig. <ns0:ref type='figure' target='#fig_8'>7b</ns0:ref>), but stress exhibited central increases (near element #70) and peripheral decreases (near elements #60 and #80) (Fig. <ns0:ref type='figure' target='#fig_8'>7c</ns0:ref>), emphasizing the nonlinear relation between model inputs and outputs.</ns0:p><ns0:p>Maximum absolute t values differed amongst the field variables (Fig. <ns0:ref type='figure' target='#fig_8'>7d-f</ns0:ref>), with stress exhibiting the largest maximum absolute t values. The null hypothesis was rejected for von Mises stresses but not for either Young's modulus or effective strain. Additionally, both stress increases and stress decreases were statistically significant (Fig. <ns0:ref type='figure' target='#fig_8'>7f</ns0:ref>).</ns0:p><ns0:p>These results indicate that the statistical signal associated with the Young's modulus inputs was amplified in the von Mises stress field, but we note that strain would have been the amplified variable had the model been displacement-loaded instead of force-loaded. More generally these results show that statistical conclusions pertaining to different model variables can be quite different, and that different continuum regions can respond in opposite ways to probabilistic inputs.</ns0:p><ns0:p>Although stiffness increased non-uniformly as a Gaussian pulse (Fig. <ns0:ref type='figure' target='#fig_8'>7a</ns0:ref>), the test statistic magnitude was effectively uniform across that region (elements 60-80; Fig. <ns0:ref type='figure' target='#fig_8'>7d</ns0:ref>). This suggests that mechanical and statistical magnitudes are not directly related, and thus that statistical conclusions must not be limited to areas of large mechanical signal unless one's hypothesis pertains specifically to those areas.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.2'>Model A, Part 2</ns0:head><ns0:p>Adding uncertainty to the load direction increased variability and thus caused absolute t value decreases near element #70 (Fig. <ns0:ref type='figure' target='#fig_9'>8a</ns0:ref>), but general loading environment changes caused increases to absolute t values in other model areas, especially toward elements #50 and #90. The stress response was somewhat different, with absolute t values increasing near element #70 but decreasing elsewhere (Fig. <ns0:ref type='figure' target='#fig_9'>8c</ns0:ref>), re-emphasizing the complex relation amongst different field variables' responses to probabilistic model features.</ns0:p><ns0:p>The t distributions for stress and strain were not qualitatively affected by the number of additional FE simulations; 16 simulations, or one extra simulation per observation (Fig. <ns0:ref type='figure' target='#fig_9'>8a,c</ns0:ref>), yielded essentially the same results as 400 simulations (Fig. <ns0:ref type='figure' target='#fig_9'>8b,d</ns0:ref>). The reason is that permutation leverages variability in small samples to produce a large number of artificial samples, and thereby approximates the results of a large number of FE simulations.</ns0:p><ns0:p>To quantify t continuum distribution stability as a function of the number of permutations, we considered the null hypothesis rejection rate in both cases of 16 and 400 FE simulations (Fig. <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>). After approximately 200 permutation iterations the null hypothesis rejection rate was effectively identical for both 16 and 400 FE simulations. These results suggest that permutation, which is extremely fast compared to FE simulation, may be able to effectively approximate a large number of FE simulations using the results of only a few FE simulations.</ns0:p></ns0:div>
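<ns0:p>The convergence check behind Fig. 9 can be sketched as follows; the data are synthetic, the repeat count is arbitrary, and the code reuses a compact version of the sign-permutation test above rather than calling the probFEApy scripts.</ns0:p>

```python
# Sketch of the null hypothesis rejection rate as a function of the number of
# permutations (cf. Fig. 9). Synthetic difference continua; illustrative only.
import numpy as np

def t_cont(y):
    n = y.shape[0]
    return y.mean(axis=0) / (y.std(axis=0, ddof=1) / np.sqrt(n))

def rejects_h0(y, n_perm, rng, alpha=0.05):
    t_max = [t_cont(rng.choice([-1.0, 1.0], size=(y.shape[0], 1)) * y).max()
             for _ in range(n_perm)]
    return t_cont(y).max() > np.percentile(t_max, 100 * (1 - alpha))

rng = np.random.default_rng(3)
y   = 0.4 + 0.05 * np.cumsum(rng.standard_normal((8, 101)), axis=1)   # 8 observations
for n_perm in (50, 100, 200, 400, 800):
    rate = np.mean([rejects_h0(y, n_perm, np.random.default_rng(s)) for s in range(100)])
    print(n_perm, rate)                      # rejection rate over 100 repeats
```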
<ns0:div><ns0:head n='3.3'>Model B</ns0:head><ns0:p>The mean stress distributions associated with the three indenter faces (Fig. <ns0:ref type='figure' target='#fig_0'>10</ns0:ref>) closely followed indenter face geometry (Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). Variation in material parameters was associated with stress distribution variability (Fig. <ns0:ref type='figure' target='#fig_0'>11a</ns0:ref>). Nevertheless, t values were effectively constant across all elements and all three models (Fig. <ns0:ref type='figure' target='#fig_0'>11b</ns0:ref>). This suggests that test statistic continua are more robust to model geometry imperfections than are stress/strain continua.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.4'>Model C</ns0:head><ns0:p>A two-sample t test regarding the material parameters (Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>) yielded t=5.17, p&lt;0.001 and thus a rejection of the null hypothesis of equal group means. These probabilistic material parameters produced mean stresses which were generally higher in Group B vs. Group A (Fig. <ns0:ref type='figure' target='#fig_1'>12</ns0:ref>), where a stress distribution difference plot clarified that inter-group differences were generally confined to areas of large stress (Fig. <ns0:ref type='figure' target='#fig_12'>13</ns0:ref>). The inter-group statistical differences were much broader, covering essentially the entire femoral cartilage (Fig. <ns0:ref type='figure' target='#fig_13'>14</ns0:ref>). Moreover, relatively broad regions of the cartilage exhibited significant stress decreases, similar to the result observed in the simple bone model (Fig. <ns0:ref type='figure' target='#fig_8'>7f</ns0:ref>).</ns0:p><ns0:p>These results reiterate many of the aforementioned methodological points. In particular, changes in probabilistic model inputs (in this case: material parameter values) can have statistical effects on output fields (in this case: von Mises stresses) which are not easily predicted. Additionally, the visual advantages of full-field analyses are somewhat clearer in this more anatomically correct model; tabulated stresses from different regions of the femoral cartilage would be more difficult to interpret in terms of the original anatomy. Last, mechanical (Fig. <ns0:ref type='figure' target='#fig_12'>13</ns0:ref>) and statistical (Fig. <ns0:ref type='figure' target='#fig_13'>14</ns0:ref>) results can be quite different. </ns0:p></ns0:div> <ns0:div><ns0:head n='4'>DISCUSSION</ns0:head><ns0:p>This paper demonstrated how a non-parametric permutation technique from Neuroimaging <ns0:ref type='bibr' target='#b32'>(Nichols and Holmes, 2002)</ns0:ref> can be used to conduct classical continuum-level hypothesis testing for finite element (FE) models. Its main advantages are:</ns0:p><ns0:p>1. Easy implementation. As demonstrated in this project's software repository (github.com/0todd0000/probFEApy), non-parametric hypothesis testing for FE models can be implemented using relatively compact scripts. 2. Computational efficiency. After simulating subject-specific results -which is usually necessary in arbitrary multi-subject studies -no additional FE simulations are needed; permutation can operate on pre-simulated small-sample results to approximate large-sample probabilities (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>). Producing the main Model A results (Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>) required a total of only 1.3 s to execute on a desktop PC, including both FE simulations and permutation-based probability computation.</ns0:p><ns0:p>3. Non-measured uncertainty capabilities. Adding uncertainty in the form of random model parameters does not necessarily require large increases in computational demands; results suggest that with respect to an original dataset with N simulations, it may be possible to robustly accommodate additional uncertainty with just N additional simulations (Figs. <ns0:ref type='figure' target='#fig_10'>8-9</ns0:ref>).</ns0:p><ns0:p>4. Visual richness and tabulation elimination. 
Continuum-level hypothesis testing results can be presented in the same geometric context as commonly visualized field variables like stress and strain (Fig. <ns0:ref type='figure' target='#fig_8'>7b</ns0:ref>,e and Fig. <ns0:ref type='figure' target='#fig_8'>7c,f</ns0:ref>), which eliminates the need to separately tabulate statistical results.</ns0:p><ns0:p>5. Arbitrarily complex experiments. While only one-and two-sample designs were considered here, t statistic continua generalize to F and all other test statistic continua, so arbitrarily complex designs ranging from regression to MANCOVA can be easily implemented using permutation.</ns0:p><ns0:p>6. Robustness to geometric imperfections. Small geometric changes can have qualitatively large effects on stress/strain continua, but have comparably little-to-no effect on test statistic continua (Fig. <ns0:ref type='figure'>??</ns0:ref>), implying that continuum-level hypothesis testing may be more robust than commonly employed procedures which analyze local maxima. This potential danger is highlighted in the more realistic Model C, whose mean differences (Fig. <ns0:ref type='figure' target='#fig_12'>13</ns0:ref>) exhibited high focal stresses whereas the statistical continuum was much more constant across the contact surface (Fig. <ns0:ref type='figure' target='#fig_13'>14</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head n='4.1'>Mechanical vs. statistical interpretations</ns0:head><ns0:p>Mechanical and statistical continua are generally different. For example, for Model A it is clear that each stiffness increase (Fig. <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref>) has mechanical effects on the strain/stress continuum, but the statistical effects are less clear because there is relatively large uncertainty regarding the true nature of the stiffness increase in the population that this sample represents. For classical hypothesis testing, mechanical meaning is irrelevant because all mechanical effects must be considered with respect to their uncertainty.</ns0:p><ns0:p>Further emphasizing the tenuous relation between mechanical and statistical meaning are regions of small mechanical signals (for Model A: near the periphery of the stiffness increase region) which can be accompanied by relatively large statistical signals.</ns0:p><ns0:p>To objectively conduct classical hypothesis tests on FEA results it is therefore essential to explicitly identify the hypothesis prior to conducting simulations. If limiting analyses to only areas of large mechanical signal can be justified in an a priori sense, then those, and only those areas should be analyzed without any theoretical problem. If, however, one's a priori hypothesis pertains to general stress / strain distribution changes, and not specifically to areas of high mechanical signal, it may be necessary to consider the entire model because maximal mechanical and maximal statistical signals do not necessarily coincide.</ns0:p></ns0:div> <ns0:div><ns0:head>11/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:08:6202:1:0:NEW 23 Sep 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>Comparison with common techniques</ns0:head><ns0:p>In the literature, FE-based classical hypothesis testing is typically conducted via scalar analysis of local extrema <ns0:ref type='bibr' target='#b37'>(Radcliffe and Taylor, 2007)</ns0:ref>. Applying that approach to the local mechanical change extrema in Model A (Fig. 
<ns0:ref type='figure' target='#fig_8'>7a-c</ns0:ref>) yielded the results in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>. The null hypothesis (of no mean change with respect to the 14 GPa case) was rejected at &#945; = 0.05 for all three mechanical variables: Young's modulus, effective strain and von Mises stress.</ns0:p><ns0:p>While the test statistic magnitudes are the same for both the proposed whole-model approach (Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>) and these local extremum analyses, the critical threshold at &#945;=0.05 is different because the spatial scope is different. The broader the spatial scope of the hypothesis, the higher the threshold must be to avoid false positives <ns0:ref type='bibr' target='#b20'>(Friston et al., 2007)</ns0:ref>; in other words, random processes operating in a larger volume have a greater chance of reaching an arbitrary threshold.</ns0:p><ns0:p>The proposed model-wide approach (Fig. <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>) and the local extremum (scalar) approach have yielded contradictory hypothesis testing conclusions for both Young's modulus and strain distributions, so which approach is correct? The answer is that both are correct, but both cannot be simultaneously correct. The correct solution depends on the a priori hypothesis, and in particular the spatial scope of that hypothesis.</ns0:p><ns0:p>If the hypothesis pertains to only the local extremum, then the local extremum approach is correct, and whole-model results should be ignored because they are irrelevant to the hypothesis. Similarly, if the hypothesis pertains to the whole model, then the whole model results are correct and local extrema results should be ignored because they are irrelevant to the hypothesis. We would argue that all FE analyses implicitly pertain to the whole model unless otherwise specified, and that focus on specific scalar metrics is appropriate only if justified in an a priori manner. Historically in biomechanical FEA, low sample sizes (frequently n = 1 for each model) permitted nothing more than qualitative comparisons of stress or strain maps, and/or numerical comparison of output parameters at single nodes. Nevertheless conventional FEA can concurrently and ironically suffer from an excess of data when results are tabulated over many regions, often in a non-standardized manner across studies.</ns0:p><ns0:p>With the continued increase of computer power and processing speed, FE models comprising over one million elements are becoming more and more common <ns0:ref type='bibr' target='#b30'>(Moreno et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bright and Rayfield, 2011a;</ns0:ref><ns0:ref type='bibr' target='#b7'>Cox et al., 2013</ns0:ref><ns0:ref type='bibr' target='#b9'>Cox et al., , 2015;;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cuff et al., 2015</ns0:ref>) (e.g. <ns0:ref type='bibr' target='#b30'>Moreno et al, 2008;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bright &amp; Rayfield, 2011a;</ns0:ref><ns0:ref type='bibr' target='#b7'>Cox et al, 2013</ns0:ref><ns0:ref type='bibr' target='#b9'>Cox et al, , 2015;;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cuff et al, 2015)</ns0:ref>. Yet, typically stress and strain values are only reported and analysed from just a few elements <ns0:ref type='bibr' target='#b36'>(Porro et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b17'>Fitton et al., 2012a)</ns0:ref>. 
Alternatively average or peak stress or strain values can be computed for whole models <ns0:ref type='bibr' target='#b13'>(Dumont et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b8'>Cox et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b35'>Parr et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b43'>Sharp and Rich, 2016)</ns0:ref> or selected regions <ns0:ref type='bibr'>(Wroe et al., 2007a,b;</ns0:ref><ns0:ref type='bibr' target='#b31'>Nakashige et al., 2011)</ns0:ref>.</ns0:p><ns0:p>The recent application of geometric morphometrics to FEA results <ns0:ref type='bibr' target='#b6'>(Cox et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b18'>Fitton et al., 2012b;</ns0:ref><ns0:ref type='bibr' target='#b33'>O'Higgins and Milne, 2013)</ns0:ref> has gone some way to providing a method of analysing whole models rather than individual elements, but is limited to the analysis of deformations. The approach outlined here enables, for the first time, the analysis of all stresses or strains in a single hypothesis test.</ns0:p><ns0:p>Another major benefit of the technique outlined here is its ability to take in consideration input parameters that are only imprecisely known. When modelling biological structures, the material properties of the model, and the magnitude and orientations of the muscle loads cannot always be directly measured.</ns0:p><ns0:p>This is an especially acute problem in studies dealing with palaeontological taxa. Previous research has addressed this issue principally by the use of sensitivity analyses which test the sensitivity of a model to changes in one or more unknown parameters <ns0:ref type='bibr' target='#b21'>(Kupczik et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bright and Rayfield, 2011a</ns0:ref> Manuscript to be reviewed <ns0:ref type='bibr'>et al., 2011</ns0:ref><ns0:ref type='bibr' target='#b42'>, 2015;</ns0:ref><ns0:ref type='bibr' target='#b39'>Reed et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b48'>Wood et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b46'>Toro-Ibacache et al., 2016)</ns0:ref>. The models are identical save for the unknown parameters, which are then varied between extremes representing likely biological limits or the degree of uncertainty. In such studies, the number of different models is usually quite low, with each parameter only being tested at a maximum of five different values. Our method takes this approach to its perhaps logical extreme -the unknown parameter is allowed to vary randomly within defined limits over a large number of iterations (usually on the order of 10,000). These iterations produce a distribution of results that can be statistically compared with other such distributions.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>A final advantage is that statistical continua may be less sensitive to geometric mesh peculiarities than stress / strain continua. In Fig. <ns0:ref type='figure' target='#fig_0'>10</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_12'>13</ns0:ref>, for example, it is clear from the oddly shaped regions of stress difference that these effects were likely caused by mesh irregularities and that remeshing would likely smooth out these areas of highly localized stress changes. The test statistic continuum, on the other hand, appeared to be considerably less sensitive to localization effects (Fig. <ns0:ref type='figure' target='#fig_0'>11</ns0:ref>) and (Fig. <ns0:ref type='figure' target='#fig_13'>14</ns0:ref>). 
This may imply that one needn't necessarily develop an ideal mesh, because statistical analysis may be able to mitigate mesh peculiarity-induced stress distribution irregularities.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>Limitations</ns0:head><ns0:p>The major limitation of the proposed method as it currently stands is that only models of identical geometry can be compared. Thus, while the technique can be readily used to address sensitivity-like questions regarding material properties, boundary conditions and orientations, the method cannot readily address geometry-relevant questions, such as are created by varying mesh density <ns0:ref type='bibr' target='#b4'>(Bright and Rayfield, 2011b;</ns0:ref><ns0:ref type='bibr' target='#b46'>Toro-Ibacache et al., 2016)</ns0:ref>, or are found in between-taxa analyses <ns0:ref type='bibr' target='#b14'>(Dumont et al., 2005</ns0:ref><ns0:ref type='bibr' target='#b13'>(Dumont et al., , 2011;;</ns0:ref><ns0:ref type='bibr' target='#b34'>Oldfield et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b8'>Cox et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b50'>Wroe et al., 2007a;</ns0:ref><ns0:ref type='bibr' target='#b42'>Sharp, 2015)</ns0:ref>. Nevertheless, through three-dimensional anatomical registration <ns0:ref type='bibr' target='#b20'>(Friston et al., 2007)</ns0:ref> and also potentially intra-model spatial interpolation to common continuum positions q (Eqn.2), it may be possible to apply the technique to arbitrary geometries even in cases of large deformation and/or geometrical disparity <ns0:ref type='bibr' target='#b41'>(Schnabel et al., 2003)</ns0:ref>.</ns0:p><ns0:p>A second limitation is computational feasibility. Although our results suggest that incorporating a single additional uncertain parameter into the model may not greatly increase computational demand, this may not be true for higher dimensional parameter spaces. In particular, given N experimental measurements, our results show that 2N simulations are sufficient to achieve probabilistic convergence (Fig. <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>). However, this result may be limited to cases where the uncertainty is sufficiently small so that it fails to produce large qualitative changes in the underlying stress/strain continua. Moreover, the feasibility for higher-dimensional parameter spaces is unclear. In particular, a sample of N observations is likely unsuitable for an N-dimensional parameter space, or even an N/2-dimensional parameter space.</ns0:p><ns0:p>The relation between uncertainty magnitude, number of uncertain parameters, the sample size and the minimum number of FE simulations required to achieve probabilistic convergence is an important topic that we leave for future work.</ns0:p><ns0:p>A third potential limitation is that both upcrossing features and the test statistic continuum can be arbitrary. In this paper we restricted analyses to the upcrossing maximum and integral due to the robustness of these metrics with respect to other geometric features <ns0:ref type='bibr' target='#b52'>(Zhang et al., 2009)</ns0:ref>. Other upcrossing metrics and even arbitrary test statistic continua could be submitted to a non-parametric permutation routine. 
This is partly advantageous because arbitrary smoothing can be applied to the continuum data, and in particular to continuum variance <ns0:ref type='bibr' target='#b32'>(Nichols and Holmes, 2002)</ns0:ref>, but it is also partly a disadvantage because it increases the scope of analytical possibilities and thus may require clear justification and/or sensitivity analyses for particular test statistic and upcrossing metric choices.</ns0:p><ns0:p>A final limitation is that the both the test statistic and probability continua are directly dependent on the uncertainty one selects via model parameter variance. This affords scientific abuse because it allows one to tweak variance parameters until the probabilistic results support one's preferred interpretation. We therefore recommend that investigators both clearly justify variance choices and treat variance itself as a target of sensitivity analysis.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>Summary</ns0:head><ns0:p>This paper has proposed a probabilistic finite element simulation method for conducting classical hypothesis testing at the continuum level. The technique leverages probability densities regarding geometric features of continuum upcrossings, which can be rapidly and non-parametrically estimated using iterative permutation of pre-simulated stress/strain continua. The method yields test statistic continua which are visually rich, which may eliminate the need for tabulated statistical results, which may reveal unique Manuscript to be reviewed</ns0:p><ns0:p>Computer Science biomechanical information, and which also may be more robust to mesh and other geometrical model peculiarities than stress/strain continua.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.Example upcrossing in a 1D continuum. A thresholded continuum contains zero or more upcrossings, each with particular geometric characteristics including: maximum height, extent, integral, etc., each of which is associated with a different probability. The maximum height characteristicacross all upcrossings -can be used to conduct classical hypothesis testing as described in &#167;2.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Model A. (a) Stack of cuboids representing a simplified bone. (b) Elemental Young's moduli representing local stiffness increase in N=8 cases.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Model B: rigid block indentation on a hyperelastic material.</ns0:figDesc><ns0:graphic coords='5,203.77,63.78,289.51,202.12' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Model B indenter faces. The grey area depicts the compressed soft tissue.</ns0:figDesc><ns0:graphic coords='5,150.64,461.07,114.61,115.13' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Model C; 'hip_n10rb' from the FEBio test suite containing femoral and acetabular cartilage compressed via rigid bone displacement. (a) Full model. (b) Pelvis removed to expose the cartillage surface geometries.</ns0:figDesc><ns0:graphic coords='6,97.04,68.59,321.04,180.58' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 
Depiction of non-parametric, permutation-based continuum-level hypothesis testing. This example uses five of the Young's modulus continua from Fig.2b and compares the mean continuum to the datum: &#181;=14 GPa. (a) Original continua were sign-permuted by iteratively multiplying subsets by &#8722;1.(b) For each permutation a t continuum was computed using Eqn.2 . (c) The maximum t values from all permutations were assembled to form a primary probability density function (PDF) from which a critical test statistic (t * ) was calculated. (d) Thresholding all permuted test statistic continua at t * produced upcrossings (Fig.1) whose integral formed a secondary PDF from which upcrossing-specific p values are computable. (e) Since the original test statistic continuum failed to traverse t * the null hypothesis was not rejected at &#945;=0.05 for this example.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Model A results. (a-c) Young's modulus input observations and strain/stress continua associated with each observation. (d-f) Hypothesis testing results (&#945;=0.05); red dotted lines depict critical thresholds.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Model A t distributions for strain (upper panels) and stress (lower panels) under a load direction uncertainty with a standard deviation of 3 deg.</ns0:figDesc><ns0:graphic coords='9,183.09,324.82,330.85,268.82' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Model A null hypothesis (H0) rejection rate as a function of the number of permutations for both 16 and 400 FE simulations.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. Model B mean stress distributions for the three indenter faces. 
Note that these patterns closely follow the indenter face geometry depicted in Fig.4.</ns0:figDesc><ns0:graphic coords='10,151.04,293.68,114.25,114.77' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 12. Model C: mean stress distributions.</ns0:figDesc><ns0:graphic coords='11,272.44,219.55,256.15,144.08' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Model C: statistical results.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Model C material parameters; see Eqn. 1. SD = standard deviation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Group</ns0:cell><ns0:cell>Mooney-Rivlin k values</ns0:cell><ns0:cell>Mean (SD)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>[1200, 1230, 1260, 1290, 1320, 1350, 1380, 1410, 1440, 1470]</ns0:cell><ns0:cell>1335 (90.8)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>[1380, 1410, 1440, 1470, 1500, 1530, 1560, 1590, 1620, 1650]</ns0:cell><ns0:cell>1515 (90.8)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Model A results. Analyses of local extrema (at element 70) using a non-parametric permutation-based two-sample t test. SD = standard deviation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>t</ns0:cell><ns0:cell>p</ns0:cell></ns0:row><ns0:row><ns0:cell>Young's modulus (GPa)</ns0:cell><ns0:cell>14.665</ns0:cell><ns0:cell>0.670</ns0:cell><ns0:cell>2.804</ns0:cell><ns0:cell>0.026</ns0:cell></ns0:row><ns0:row><ns0:cell>Effective strain (1e-6)</ns0:cell><ns0:cell>789.6</ns0:cell><ns0:cell>33.9</ns0:cell><ns0:cell>-2.946</ns0:cell><ns0:cell>0.022</ns0:cell></ns0:row><ns0:row><ns0:cell>von Mises stress (kPa)</ns0:cell><ns0:cell>8894.0</ns0:cell><ns0:cell>8.0</ns0:cell><ns0:cell>3.014</ns0:cell><ns0:cell>0.020</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"CS-2015:08:6202 Probabilistic biomechanical finite element simulations: whole-model statistical inferences based on upcrossing geometry PeerJ Computer Science We thank the Editor and Reviewers very much for your time. Please find that we have made major revisions, including: (1) we have added one new model and have removed another, (2) we have added two new sets of analyses and, (3) we have completely rewritten Methods and Results sections. Please also find that we have highlighted our main manuscript changes in yellow. Thank you very much for re-considering our work, Todd Pataky, Michihiko Koseki, and Phil Cox _______________________________________________________________________ Reviewer #1 Comment 1.1: The Mooney-Rivlin formulation noted in the manuscript, Eqn. 7, does not seem to match the one used in FEBio, particularly when volumetric component is concerned. The authors may want to check FEBio theory manual to confirm. Response: We agree and apologize for our error. We have corrected the equation and have also added a reference to the FEBio theory manual. Comment 1.2: It is not clear how non-parametric permutation-based probabilities are calculated for model outputs, i.e., using 'sign permutation' directly on model outputs OR conducting 'sign permutation' on model inputs and then conducting finite element analysis. Clarification of this is important as it dictates the required number of potentially costly simulations. Response: We apologize for the lack of clarity. Sign permutation must be conducted on outputs because it requires negative numbers, so applying it to inputs can produce physically impossible parameter values (e.g. negative Young’s moduli). Please find that we have attempted to clarify in a number of places, including the two key revisions below. We also hope that the our scripts in the project’s new GitHub repository will help to clarify the application of permutation to model outputs. Revision 1 (Section 2.2): “We used a non-parametric permutation method from the Neuroimaging literature (Nichols and Holmes, 2002)... All permutations described below were applied to pre-simulated FEA results.” Revision 2 (Section 4.4): “The technique leverages probability densities regarding geometric features of continuum upcrossings, which can be rapidly and nonparametrically estimated using iterative permutation of pre-simulated stress/strain continua.” Comment 1.3: In section 5.4, the authors note an important limitation - the methodology being suitable to designs involving only a small number of unknown parameters. While this is understandable, it may prevent conducting useful studies. It will be useful to know more about at what level such analysis become not feasible. For example, is it possible to use Model A, this time have a failure threshold parameter (variable from model to model and from element to element), where von Mises stress can be compared against to see where in the geometry failure will likely happen. It may be exciting to provide such a study as an example with two model inputs with uncertainties. Response: We agree that computational feasibility is an important issue to consider. In response we have decided to add a new set of analyses which explore the computational increases associated with one additional unknown parameter. In these new analyses (Section 2.2.2: Model A, Part 2) we have added one uncertain parameter (load angle) and have examined the effects of the number of FE simulations on the results. 
The new results (Figs.8-9) suggest that only a few extra FE simulations may be suitable. The reason is that permutation leverages small-sample variability to closely approximate the distributions associated with hundreds or thousands of extra FE simulations. In addition to these new analyses, please find that we have adjusted our description of the multiparameter limitation (Section 4.3, Paragraph 2). Comment 1.4: The first author has done similar work on temporal analysis of 1D signals. How does this study relates to the author's previous work? Is it an extension to field problems, yet still 1D in terms on model input? Response: This study relates directly to the first author’s previous 1D work in that upcrossing geometry probabilities have driven much of that 1D work. The key difference is that the previous work focussed on experimentally measured continua, but this paper applies that methodology to simulation studies. We don’t believe there is a difference between 1D problems and field problems because, from the perspective of upcrossings (Fig.1), all continua are identical irrespective of their physical nature and irrespective of their dimensionality. That is, the technique can be applied to temporal continua, spatial continua, spatiotemporal continua, or any other abstract continuum. We have, for example, applied the technique to 1D temporal continua which have been smoothed with a Butterworth filter with a continuous cutoff value, where the cutoff value becomes the second dimension of an abstract 2D continuum. Upcrossings do not care what the underlying continuum represents, and in this case we used them to identify optimal filter parameters which maximized statistical signal. Since virtually none of the first Author’s previous work involves simulation studies, we have not cited that work very extensively, but we would be pleased to do so if you feel it would be appropriate. We have not made any specific revisions based on this comment, but please advise if you would like us to add an explanation to the manuscript. Comment 1.5: It will be beneficial to disseminate the Python scripts, at least for Model A, for the readers to be able to reproduce the results and play with the capabilities of the described method. Response: We agree. Please find that we have created a GitHub repository for this project (github.com/0todd0000/probFEApy) which contains model files for Models A and B, and a variety of scripts for all three models. We are unable to redistribute Model C due to FEBio license restrictions; we contacted the FEBio developers to request an exception so that we could redistribute this single model, but they unfortunately rejected our request. Comment 1.6: The authors provide a useful comparison in the last paragraph of section 5.2. Nonetheless, this paragraph may need to strengthened with examples to guide the reader about the situations where local extremum approach may be more appropriate than the model-wide approach and vice versa. It is easy to be a proponent of local extremum approach simply because bad things likely happen at locations of highest stresses, e.g. failure. The example on local extremum on mesh sensitivity is appreciated. Yet, this reviewer wonders at what type of physiological situations knowledge about nonlocal extremum model regions become useful. 
For the sake of argument (and realizing that Model C was not necessarily provided for physiological investigations), does it really matter to know that strains in ligament attachment sites are highly sensitive to internal/external rotation when they are not necessarily deforming that much to begin with? Response: We agree that local extrema analysis is a natural choice, but we believe more strongly that the appropriate analysis approach emerges directly from one’s a priori hypothesis, and that a priori hypotheses must drive analyses (and not other way around). If, before conducting a set of FE simulations, an investigator derives a hypothesis which pertains to local extrema then local extrema should be analyzed and the rest of the model should be ignored. If, however, one wishes to investigate strain/stress distributions en masse, or has a general hypothesis which pertains to neither specific model sections nor specific model variables, then we feel that objectivity demands whole-model analysis. In the Biomechanics FE literature, as well as in the broader Biomechanics literature, it is our understanding that hypotheses are often not explicitly stated, and are instead implicit. For example: “the purpose of this study was to assess the mechanical effects associated with tissue stiffness changes in diabetes / osteoarthritis”. In such cases the implicit null hypothesis is: no effect on any mechanical variable in any part of the model, because any variable in any part of the model could be used to refute that hypothesis. Despite such non-specific hypotheses, most studies in the literature typically extract model variables in an ad hoc sense, without a priori justification. This, in our view, is scientifically problematic. Without a priori justification analyses are exploratory, which is fine, but in which case hypothesis testing should not be used. As Reviewer #1 suggests there may be good mechanical rationale for considering only local maxima, or for ignoring low-strain regions, so a priori justification in the Introduction may be easy or even simple. Regardless, we feel that the burden is not on us to recommend one technique or the other. We feel that the burden is on the investigator to provide an a priori justification of their intended analysis technique, and more specifically for the particular variables and model regions they intend to analyze. In the absence of such justification we feel that one is obliged to consider the whole model, because that is most consistent with the (implicit) a priori hypothesis regarding general effects. Please find that we have attempted to clarify in a new paragraph to the end of Section 4.1. Please also see Section 4.2, Paragraph 3, which also pertains to this issue. Comment 1.7: Please correct the phrase '.. there is a relatively large uncertainty amount of uncertainty ..' Response: We apologize, the phrase has been corrected. _______________________________________________________________________ Reviewer #2 Comment 2.1: Uncertainty quantification in computational methods is nowadays a mature field of research and there exist many competing methods (e.g., polynomial chaos, ANOVA, non-parametric surrogates based on Gaussian processes, etc.) for analyzing continuum statistics and propagating probability densities through nonlinear systems using FE simulations, experiments, etc. Such techniques are never mentioned in the manuscript, and it is not clear how the proposed method connects and compares against those well-established approaches. 
In particular, the methodology based on tstatistics outlined in Sec. 2.2 seems quite simplistic, and the conclusions drawn in lines 134-144 and section 5.1 are quite obvious. Please, provide a detailed discussion on the motivation and merits of the proposed approach as compared with the current-state-ofthe-art in uncertainty quantification. Response: We agree. In response we have decided to re-focus our paper on its novelties, which pertain solely to classical hypothesis testing via upcrossing geometry. In particular: (a) We have removed all background information regarding typical (scalar) parametric and non-parametric hypothesis testing to focus only on the proposed method. (b) We have removed the previous Model C which did not involve classical hypothesis testing, and whose analyses were extremely simplistic with respect to the cited techniques. (c) We have added a new model (Model B) and a new set of analyses to Model A in attempts to emphasize the various utilities upcrossing geometry affords for classical hypothesis testing. As far as we know upcrossing-based classical hypothesis testing does not appear in the FEA literature, nor in other continuum simulation literature, but we are somewhat concerned that we may have overlooked part of the literature because Reviewer #2 mentions ANOVA. If there have been previously published whole-model ANOVA procedures please provide a reference so that we may reconsider the merits, if any, of our proposed upcrossing approach. Regarding the t statistic: we agree that the t statistic is simple, but it represents the single most important conceptual step for moving from the Gaussian to applied analysis, and it clearly generalizes to to the F statistic and to all other univariate and multivariate applied techniques. We therefore believe that demonstrating the approach for the t statistic is sufficient to demonstrate that the approach applies also to arbitrary experiments, from one-sample designs through to ANCOVA, and to arbitrary univariate and multivariate continuum variables including scalars, vectors and tensors. Last, please note that, since we have re-focussed our paper on classical hypothesis testing, we have not added references to polynomial chaos and the other techniques Reviewer #2 has cited because, as far as we aware, they apply to generalized probability mappings and not directly to hypothesis testing. In the biomechanical FEA literature classical hypothesis testing is still commonly employed, mainly because model parameters (including geometry and boundary conditions) are typically measured or estimated using small samples, but also because classical hypothesis testing is still highly prevalent in the broader Biomechanics literature. Additionally, since most biomechanical FEA studies publish results pertaining to ad hoc scalar values --- an approach which clearly possesses a variety of statistical weaknesses --- we hope that our paper can help to bridge the gap between conventional, simplistic hypothesis testing and established probabilistic simulation techniques. Comment 2.2: p.3, l.93-94: Is it this the noise that causes the asymmetry observed in the t-statistic shown in Fig.3b? What is the purpose of adding this small random signal? Response: Yes, it is the noise that causes the asymmetry. The Gaussian pulses are all symmetrical about element #70, so the t continuum would also be symmetrical about element #70 if there were no noise. The purpose of adding the noise was to avoid zero variance and thus undefined t values. 
If there were no noise, the t statistic would blow up to infinity near elements #50 and #90. We feel that adding noise is appropriate for this paper’s purposes, because we intend for this technique to be applied to experimentally measured datasets in which variance is never zero. Please find that we have attempted to clarify as follows: Revision (Section 2.1.1) “Additionally, a small random signal was separately applied to each of the eight cases to ensure that variance was greater than zero at all points in the continuum, and thus to ensure that test statistic values were computable at all points in the continuum.” Comment 2.3: p.4, l. 113: The abbreviation “SD” is introduced without explanation. Please spell out every abbreviation the first time it is used in the text. Response: We apologize for the oversight. Please find that “SD” now appears only in tables and table captions, where the abbreviation is now also defined. Comment 2.4: Please provide sufficient information so that the reader can reproduce every numerical example presented. For example, provide a brief description of which equations are solved using FE. What are the boundary/initial conditions, etc.? Response: We agree that we should include sufficient information to reproduce the results. Please find that we now provide our source code and models in a public GitHub repository so that users will be able to reproduce all results. (Note that Model C is not redistributable, so is not included in our repository, but it can be downloaded freely from febio.org). Regarding equations: since we use a variety of material models including both small and large deformation we fear that an explanation of the underlying FE equations would require too much space and draw focus away from the novel aspects of the paper. Instead we have decided to direct readers to the FEBio Theory Manual (see Section 2, Paragraph 1), where all equations are described in detail. Regarding boundary conditions (BCs): we think that all BCs are sufficiently clear from our text descriptions and figures, but any ambiguity that may exist is now resolved in our source code which we have provided in a GitHub repository (github.com/0todd0000/probFEApy); all BCs are explicit under the <Load> and <Boundary> XML tags. Please advise if you feel we should clarify any specific BCs in the text. Comment 2.5: p.7, l.196-197: how do you select the nodes? does this matter a lot? do you use quadrature to compute the upcrossing integral? if yes, is the selection of the nodes and the quadrature related? Response: The nodes are not selected directly; they instead emerge from thresholding. The threshold is selected objectively based on an a priori Type I error rate of α=0.05, and is computed as the 100(1-α)th percentile of the test statistic distribution (Fig.6c). Note that integration algorithms aren’t important because the non-parametric inference procedure can be applied to any geometric feature including: number of suprathreshold nodes, number of suprathreshold elements, total suprathreshold hyper-volume, trapezoidal integral, quadrature-based integral, etc. In our experience the precise geometric feature is moot because probabilities associated with all features become smaller as the upcrossing becomes larger. Please find that, in response to other comments, we have removed the section to which this comment pertains. We hope that our other revisions have clarified this issue. Comment 2.6: Figure 6: Figure captions should be self-contained. 
Please briefly explain what we see in a,b,c,d. Response: We agree, please find that we have augmented the caption. The new caption is not completely self-contained (because it cites an equation and also assumes background knowledge regarding permutation and PDFs), but we believe that it will be sufficiently self-contained for most readers. Comment 2.7: p.14, l.386-388: Please provide a more detailed comment on the difficulties introduced by high- dimensional parameter spaces. How does the computational cost and feasibility of the proposed approach scale in such cases? Response: Thank you for this comment. Reviewer #1 had a similar comment (Comment #1.3 above), and in response we have conducted new multi-parameter analyses. Please refer to our response to Comment #1.3 above, and also to the new results in Figs.8 and 9. To summarize, the new results suggest that the proposed technique does not require appreciably large computational demands in higherdimensional parameter spaces because permutation can powerfully and efficiently approximate arbitrary probability densities. Note that we have assumed that FE simulation is the computational bottleneck, and that permutation is computationally inexpensive. We believe this assumption is justified because single biomechanical FE simulations often require on-the-order of one hour or one day to run. By contrast, hundreds or even thousands of permutation iterations can be run in well under one second on even relatively large continua (e.g. containing 1e6 elements). We have also assumed that the small experimental samples on which permutation is conducted can occupy a representative portion of high-dimensional parameter space. While our new analyses do not directly consider arbitrarily high-dimensional parameter spaces, we hope that they will at least point to permutation’s potential to offer relatively efficient construction of PDFs in higher-dimensional spaces. Comment 2.8: Experimental design. The submission describes original research and is in accordance with the scope of the journal. Regarding result reproducibility please see item 4.) in the Basic Reporting section. Response: To ensure reproducibility please find that we have provided model files and analysis scripts in a public GitHub repository. Comment 2.9: Validity of the findings. The submission meets the scientific standards of the journal. One minor concern is that there seems to be no connection with the current literature on uncertainty quantification in computational science. Moreover, the technical approach is rather simplistic and the conclusions drawn may seem obvious to a reader acclimated with probabilistic finite element simulations. Response: We have removed the previous simplistic theoretical background, and have attempted to highlight our paper’s novelties more clearly. While the statistical metrics (including the t continuum) remain relatively simple, we believe that our results suggest that arbitrarily complex analyses are also possible because the underlying permutation approach can easily handle all experimental designs including regression, ANOVA, MANCOVA, etc. Our paper’s main message is simply that classical hypothesis testing can be conducted at the continuum level based on upcrossing probabilities, which we believe is a novel message in the FE simulation literature. We suspect that most researchers who use FEA (and in particular Biomechanics researchers) are unaware of this possibility. 
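For readers who are unfamiliar with the approach, a minimal sketch of the idea is given below (illustrative only; it is not taken from the revised manuscript or from the probFEApy repository, and the group sizes, continuum length, and alpha are assumptions): a test statistic continuum is computed, the group labels are permuted to build the distribution of its maximum, and the 100(1−α)th percentile of that distribution supplies the critical threshold above which upcrossings are declared.

```python
# Minimal illustrative sketch (not from the manuscript or repository):
# permutation-based, continuum-level two-sample t test with an upcrossing summary.
import numpy as np

rng = np.random.default_rng(0)

def t_continuum(yA, yB):
    """Two-sample t statistic computed independently at each continuum point."""
    nA, nB = yA.shape[0], yB.shape[0]
    se = np.sqrt(yA.var(axis=0, ddof=1) / nA + yB.var(axis=0, ddof=1) / nB)
    return (yA.mean(axis=0) - yB.mean(axis=0)) / se

# Hypothetical data: two groups of 8 simulated 101-point continua; group A carries
# a Gaussian pulse centred on element 70, plus small noise so variance is never zero.
Q = 101
pulse = np.exp(-0.5 * ((np.arange(Q) - 70) / 5.0) ** 2)
yA = pulse + 0.2 * rng.standard_normal((8, Q))
yB = 0.2 * rng.standard_normal((8, Q))
t_obs = t_continuum(yA, yB)

# Permutation: reassign group labels and record the maximum t value each time.
y, n = np.vstack([yA, yB]), yA.shape[0]
t_max = []
for _ in range(1000):
    idx = rng.permutation(y.shape[0])
    t_max.append(t_continuum(y[idx[:n]], y[idx[n:]]).max())

alpha = 0.05
threshold = np.percentile(t_max, 100 * (1 - alpha))   # critical height at this alpha
print(f"threshold = {threshold:.2f}, suprathreshold points = {(t_obs > threshold).sum()}")
```

Inference on other upcrossing features (extent, integral) follows the same pattern: record that feature, rather than the maximum, at each permutation iteration.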
It is possible that we have missed a precedent, so please advise if you are aware of any prior work on continuum-level classical hypothesis testing in simulation studies. "
Here is a paper. Please give your review comments after reading it.
409
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper provides new tools to examine the efficiency and robustness of derivative-free optimization algorithms based on the normalized data profiles that involve a variety of performance metrics. In addition, five sequences (solvers) of trigonometric simplex designs are proposed to extract different features of non-isometric reflections. Each sequence is designed to rotate the starting simplex through an angle that designates the direction of the simplex. This type of feature extraction is applied to each sequence of the triangular simplexes in high-dimensions to determine a global minimum solution for mathematical problems. The underlying idea of the approach is to vary the extracted nonisometric reflections of the solvers with various choices of rotations. To predict the optimal sequence of trigonometric simplex design, a linear model is used with the normalized data profiles to examine the convergence rate of the multiple simplexes in the neighborhood of a minimum, required in statistical estimation tests. We utilize the proposed algorithm as an example of derivative-free optimization algorithms and compare it to an optimized version of the Nelder-Mead algorithm known as the Genetic Nelder-Mead algorithm (Fajfaret al., 2017). The experimental results demonstrate that the proposed data profiles lead to an examination of the reliability and robustness of the considered solvers from a more comprehensive perspective than the existing data profiles. Finally, the highdimensional data profiles reveal that the proposed solvers outperform the genetic solvers for all accuracy tests.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The growing success in developing derivative-free optimization algorithms and its applications has motivated researchers over the past decades to provide new tools for performance analysis. The purpose of doing so is when an algorithm is presented into the optimization literature, it is expected to show that it performs better than other algorithms. This is required to secure a fair comparison as a basis to evaluate relative performance of different solvers. The measurement scheme for comparison between similar algorithms needs to compare the level of complexity in algorithm design and also displays how efficient a solver performs relative to the other solvers <ns0:ref type='bibr' target='#b22'>(Vince and Earnshaw, 2012)</ns0:ref>.</ns0:p><ns0:p>We are motivated to notice that most algorithm developers are interested in testing one performance measure (one dimension). For example, data profiles are designed to provide users with information about the percentage of solved problems as a function of simplex gradient estimates <ns0:ref type='bibr' target='#b15'>(Mor&#233; and Wild, 2009)</ns0:ref>. However, if the evaluation is expensive, one dimension may not provide useful information to capture how reliable a solver performs relative to the other solvers, as we will demonstrate later. In order to provide a comprehensive evaluation for the relative performance of multiple solvers, we introduce a collection of performance metrics to evaluate new algorithms and improve the existing data profiles. 
Thus, the proposed multidimensional data profiles are more compact and effective in allocating a computational budget for different levels of accuracy.</ns0:p><ns0:p>The focus of our work is exclusively on minimization problems. Such problems arise naturally in almost every branch of modern science and engineering. For example, pediatric cardiologists seek to delay the next operation as much as possible to identify the best shape of a surgical graft <ns0:ref type='bibr' target='#b0'>(Audet and Hare, 2017)</ns0:ref>. A number of variables can affect the objective function to treat and manage heart problems in children. Some are structural differences they are born with, such as holes between chambers of the heart, valve problems, and abnormal blood vessels. Others involve abnormal heart rhythms caused by the electrical system that controls the heart beat. Technically, we can write the minimum function value of f over the constraint set &#8486; in the form:</ns0:p><ns0:formula xml:id='formula_0'>min x { f (x) : x &#8712; &#8486;} (1)</ns0:formula><ns0:p>Note that the minimum function value could be:</ns0:p><ns0:p>i. &#8722;&#8734;: such as</ns0:p><ns0:formula xml:id='formula_1'>min x {x 1 : x &#8712; R 3 }</ns0:formula><ns0:p>ii. A well-defined real number: such as</ns0:p><ns0:formula xml:id='formula_2'>min x { ||x|| 2 : x &#8712; R 2 , x 1 &#8712; [&#8722;1, 2], x 2 &#8712; [0, 3]}</ns0:formula><ns0:p>However, there are other equivalent forms. Suppose a researcher is interested in obtaining an estimate of the point or set of points that determine the minimum function value z <ns0:ref type='bibr' target='#b0'>(Audet and Hare, 2017)</ns0:ref>. We might instead seek the argument of the minimum:</ns0:p><ns0:formula xml:id='formula_3'>Argmin x { f (x) : x &#8712; &#8486;} := {x &#8712; &#8486; : f (x) = z}<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>In particular, the argmin set can be:</ns0:p><ns0:p>i. A singleton: such as</ns0:p><ns0:formula xml:id='formula_4'>argmin x { ||x|| 2 : x &#8712; R 2 , x 1 &#8712; [&#8722;1, 2], x 2 &#8712; [0, 3]} = {[0, 0] T }</ns0:formula><ns0:p>ii. A set of points: such as</ns0:p><ns0:formula xml:id='formula_5'>argmin x {|sin(x)| : x &#8712; [0, 7]} = {0, &#960;, 2&#960;}</ns0:formula><ns0:p>One of the most common examples of derivative-free optimization algorithms is the Nelder Mead simplex gradient algorithm (1965) (NMa), which is among the most widely cited methods for minimization problems <ns0:ref type='bibr' target='#b1'>(Barton and Ivey Jr, 1996;</ns0:ref><ns0:ref type='bibr' target='#b11'>Lewis et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b24'>Wright et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b10'>Lagarias et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b23'>Wouk et al., 1987)</ns0:ref>. The notion of the NMa is based on creating a geometrical object, called a simplex, in the hyperplanes of n-parameters. 
Then, this simplex performs reflections over the changing landscape of the mathematical problems until the coordinates of the minimum point can be obtained by one of its vertices <ns0:ref type='bibr' target='#b20'>(Spendley et al., 1962;</ns0:ref><ns0:ref type='bibr' target='#b9'>Kolda et al., 2003)</ns0:ref>.</ns0:p><ns0:p>The contribution of the NMa was to incorporate the simplex search with non-isometric reflections, designed to accelerate the search <ns0:ref type='bibr' target='#b11'>(Lewis et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b4'>Conn et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Han and Neumann, 2006)</ns0:ref>. It was well-understood that the non-isometric reflections of NMa were designed to deform the simplex in a better way to explore the solution space of mathematical functions <ns0:ref type='bibr' target='#b11'>(Lewis et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b2'>Baudin, 2009)</ns0:ref>. Nevertheless, when the number of parameters under investigation increases, the simplex becomes increasingly distorted with each iteration, generating different geometrical formations that are less effective than the initial simplex design <ns0:ref type='bibr' target='#b2'>(Baudin, 2009;</ns0:ref><ns0:ref type='bibr' target='#b21'>Torczon, 1989)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the original NAa for strictly convex functions with up to three continuous derivatives. In all the objective functions, the NMa causes the sequence of the generated simplexes to converge to a non-stationary point.</ns0:p><ns0:p>The NMa repeats inside construction steps with best vertex remaining fixed; until the diameter of the simplex approximately shrinks to 0.</ns0:p><ns0:p>A recent contribution to the NM simplex optimization algorithm is the Genetic Nelder Mead algorithm (GNMa) that is hybridizing NMa with genetic programming <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref>. Instead of using a restricted NM simplex to only one traditional design, the authors evolve NMa genetically to produce deterministic simplexes that can develop adaptive vertices through generations. Thus, the new algorithm generates many population-based simplexes with different shapes and keeps the best designs that have better features to locate an optimal solution. The authors have only one issue with the original NMa, which is the reduction step. They claim that this operation is inconsistent because it does not return a single vertex. They suggested that the reduction step should include exclusively the worst vertex and basically inner contraction can perform the job. The new implementation of the GNMa performs four operations: reflection, expansion, inner contraction, and outer contraction. In addition to the three basic vertices of the original NM, the authors added one more vertex, defined as the second best. The new vertex joins the other basic vertices to constitute a centroid different than the one that was defined by <ns0:ref type='bibr' target='#b18'>Nelder and Mead (1965)</ns0:ref>. The GNMa forms the next simplex by reflecting the vertex that is associated with the highest value of the cost function (CF), in the hyperplane spread over the remaining vertices.</ns0:p><ns0:p>The main aim of this research is to upgrade the data profiles with a variety of metric measures. 
In addition, we propose five sequences of trigonometric simplex designs that work separately to optimize the individual components of mathematical functions. To predict the optimal sequence of triangular simplex design, a linear model with a window of 10 samples is proposed for evaluating the multiple simplexes (solvers) in the neighborhood of the minimum. The rest of this paper is organized as follows:</ns0:p><ns0:p>The next section presents the theory of the sequential design of trigonometric Nelder-Mead algorithm, and demonstrates a compact mathematical way of implementing the algorithm based on vector theory. Section 3 describes the importance of the initial simplex design, and presents the multidirectional trigonometric Nelder Mead algorithm (MTNMa). Section 4 presents data profiles and statistical experiments to compare the reliability and robustness of the MTNMa with GNMa <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref> on a set of standard functions <ns0:ref type='bibr' target='#b14'>(Mor&#233; et al., 1981)</ns0:ref>. Finally, the conclusions are provided in section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>HASSAN NELDER MEAD ALGORITHM</ns0:head><ns0:p>We present in this section the theory of the Hassan Nelder Mead algorithm (HNMa) <ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Musafer et al., 2020)</ns0:ref>, and describe the significance of the dynamic properties of the algorithm that make it appropriate for unconstrained optimization problems. The sequential trigonometric simplex design of the HNMa allows components of the reflected vertex to adapt to different operations; by breaking down the complex structure of the simplex into multiple triangular simplexes. This is different from the original NMa that forces all components of the simplex to execute a single operation such as expansion. When the next simplexes are characterized by different reflections, the HNMa performs not only similar reflections to that of the original simplex of the NMa, but also others with different orientations determined by the collection of non-isometric features. As a consequence, the generated sequence of triangular simplexes is guaranteed to search the solution space of mathematical problems and performs better than the original simplex of the NMa <ns0:ref type='bibr' target='#b18'>(Nelder and Mead, 1965)</ns0:ref>.</ns0:p><ns0:p>We now present a mathematical way of analyzing the HNMa using vector theory, and explain why the original NMa fails in some instances to find a minimal point or converges to a non-stationary point. For example, suppose we want to determine the minimum of a function f . The function f (x, y) is calculated at the vertices that are subsequently arranged in ascending with respect to the cost function (CF) values, such that: A(x 1 , y 1 ) &lt; B(x 2 , y 2 ) &lt; C(x 3 , y 3 ) &lt; T h(x 4 , y 4 ), where A, B, and C are the vertices of the triangular simplex with respect to the lowest, 2 nd lowest, and 2 nd highest CF values, and T h is a threshold that has the highest CF value. The need for T h arises when the HNMa performs a reflection in an axial component, it replaces the value of the axial component of the T h. If the new value of the T h leads to lower CF value than the previous CF value of the T h, then the HNMa moves to upgrade the next axial component of the T h. 
After upgrading the T h point with a variety of non-isometric reflections, the HNMa examines the T h to validate if the resulted T h has a lower CF value than C to be replaced with C or the HNMa upgrades the T h only. This technique of exploring the neighborhood of the minimum is to search for the optimal patterns that can be followed and result in a better approach to find the minimum. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science we append the following equations with the additional axial components), to satisfy,</ns0:p><ns0:formula xml:id='formula_6'>H(x 5 , y 5 ) = A + B 2 = x 5 = x 1 + x 2 2 , y 5 = y 1 + y 2 2 (3) I(x 6 , y 6 ) = A +C 2 = x 6 = x 1 + x 3 2 , y 6 = y 1 + y 3 2 (4) G(x 7 , y 7 ) = H +C 2 = x 7 = x 5 + x 3 2 , y 7 = y 5 + y 3 2 (5)</ns0:formula><ns0:p>Note that to find the reflected point D, we add the vectors H and d, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> part-I, where d is the vector that can be represented by subtracting the vectors (H and C), (D and H), (E and D) or (F and G).</ns0:p><ns0:p>The coordinates of D are obtained by adding the vector (H &#8722;C) to H. The vector formula is given below.</ns0:p><ns0:formula xml:id='formula_7'>D = H + d = H + (H &#8722;C) = 2H &#8722;C = (2x 5 &#8722; x 3 , 2y 5 &#8722; y 3 )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>A similar process could be used to find the coordinates of E and F. The formulas are stated below. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_8'>E = H + 2d = H + 2(H &#8722;C) = 3H &#8722; 2C = (3x 5 &#8722; 2x 3 , 3y 5 &#8722; 2y 3 ) (7) F = H + d 1 = H + (H &#8722; G) = 2H &#8722; G = (2x 5 &#8722; x 7 , 2y 5 &#8722; y 7 )<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where d 1 can be found by subtracting the vectors (G and C), (H and G), (F and H) or (D and F).</ns0:p><ns0:p>Hence, the HNMa does not have a shrinkage step; instead, two operations are added to the algorithm:</ns0:p><ns0:p>shrink from worse to best I(x 6 , y 6 ) and shrink from good to best H(x 5 , y 5 ). The basic six reflections of the HNMa are shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> part-II.</ns0:p><ns0:p>It is noteworthy to mention that a combination of x&#8722;components of the HNMa results in extraction one of the six non-isometric reflections. Now, if we consider two combinations (such as x and y) or more, then the simplex as in the case of the HNMa performs two reflections or more. Thus, the multiple components of the triangular simplex adapt to extract various non-isometric features of the HNMa. Therefore, the optimization solution of the HNMa not only reflects the opposite side of the simplex through the worse vertex, but also leads to implement various reflections that are determined by the collection of extracted features. For example, suppose we need to find the minimum of a function f (x, y). A solution of the NMa may come out to be reflection in x and y directions, whereas a solution of the HNMa may come out to be reflection in x but expansion in y. It can be a combination of any two operations of the HNMa. In fact, the HNMa is designed to deform its simplex in a way that is more adaptive to tackle the optimization problems than the original simplex of the NMa. 
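Since the candidate points above are all simple vector combinations of the sorted vertices, they are easy to express in code. The short sketch below (purely illustrative, in Python rather than the C# used for the reported implementation) evaluates the points H, I, G, D, E and F of Equations (3)-(8) for an arbitrary 2-D example; the closing comment indicates how individual axial components of these points are mixed into the threshold vertex.

```python
# Illustrative sketch of the HNMa candidate points of Eqs. (3)-(8); not the authors' code.
# A, B and C are the vertices with the lowest, 2nd lowest and 2nd highest cost values.
import numpy as np

def hnm_candidates(A, B, C):
    A, B, C = (np.asarray(v, dtype=float) for v in (A, B, C))
    H = (A + B) / 2.0         # shrink from good to best, Eq. (3)
    I = (A + C) / 2.0         # shrink from worse to best, Eq. (4)
    G = (H + C) / 2.0         # auxiliary midpoint, Eq. (5)
    D = 2.0 * H - C           # reflection, Eq. (6)
    E = 3.0 * H - 2.0 * C     # expansion, Eq. (7)
    F = 2.0 * H - G           # contraction-type point, Eq. (8)
    return {"H": H, "I": I, "G": G, "D": D, "E": E, "F": F}

pts = hnm_candidates(A=[1.0, 2.0], B=[2.0, 1.0], C=[3.0, 3.0])
print({k: v.tolist() for k, v in pts.items()})
# The threshold vertex Th is then updated one axial component at a time, so its
# x-component may be taken from D while its y-component is taken from E, which is
# how different non-isometric reflections are combined in a single reflected vertex.
```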
The solution of the HNMa helps to extract the optimal characteristics of the non-isometric reflections in the reflected vertex and produce the next simplex that leads to perform faster convergence rate than the original triangular simplex of the NMa.</ns0:p></ns0:div> <ns0:div><ns0:head>MULTIDIRECTIONAL TRIGONOMETRIC NELDER MEAD</ns0:head><ns0:p>The Nelder and Mead algorithm is particularly sensitive to the position of the initial simplex design, where the variable-shape simplex is modified at each iteration using one of four linear operations: reflection, expansion, contraction, and shrinkage. The geometrical shape of the simplex subsequently becomes distorted as the algorithm moves towards a minimal point by generating different geometrical configurations that are less effective than the initial simplex design. To address this need, one of the preferred designs is to build the initial simplex with edges of equal length <ns0:ref type='bibr' target='#b12'>(Martins and Lambe, 2013)</ns0:ref>. In this way, the unit simplex of dimension n is shifted from the origin to the initial guess. Suppose that the length of all sides of the simplex is required to be l. The given starting point x 0 of dimension n, is the initial vertex v 1 = x 0 . We define the parameters a, b &gt; 0 as follows:</ns0:p><ns0:formula xml:id='formula_9'>b = l n &#8730; 2 ( &#8730; n + 1 &#8722; 1) (9) a = b + l &#8730; 2 (10)</ns0:formula><ns0:p>The remaining vertices are computed by adding a vector to x 0 ; whose components are all b values except for the j th component that is assigned to a, where j = 1, 2, ..., n, and i = 2, 3, ..., n + 1, as follows.</ns0:p><ns0:formula xml:id='formula_10'>v i, j = x 0, j + a if j = i &#8722; 1 x 0, j + b if j = i &#8722; 1 (11)</ns0:formula><ns0:p>The risk is that if the coordinate's direction of the constructed initial simplex is perpendicular to the direction towards the minimal point, then the algorithm performs a large number of reflections or converges to a non-stationary point <ns0:ref type='bibr' target='#b13'>(McKinnon, 1998)</ns0:ref>. The practical problem of designing such an initial simplex lies in two parameters: the initial length and the orientation of the simplex. As a result, this simplex is not very effective, especially for problems that involve more than 10 variables <ns0:ref type='bibr' target='#b12'>(Martins and Lambe, 2013)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>5/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_5'>2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Alternatively, the most popular way to initialize a simplex is Pfeffer&#347; method, which is due to L.</ns0:p><ns0:p>Pfeffer at Stanford <ns0:ref type='bibr' target='#b2'>(Baudin, 2009)</ns0:ref>. It is a heuristic method that helps to determine the characteristics of the initial simplex based on the starting point x 0 . The method is applied using the content of usual delta (&#948; u ) and zero term delta (&#948; z ) elements that are used to adjust the simplex orientation. Pfeffer&#347; method is presented in 'Global Optimization Of Lennard-Jones Atomic Clusters' <ns0:ref type='bibr' target='#b6'>(Fan, 2002)</ns0:ref>; and it is used in the 'fminsearch' function from the 'neldermead package' <ns0:ref type='bibr' target='#b3'>(Bihorel et al., 2018)</ns0:ref>. 
To build a simplex as suggested by L.Pfeffer, the initial vertex is set to v 1 = x 0 , and the remaining vertices are obtained as follows,</ns0:p><ns0:formula xml:id='formula_11'>v i, j = &#63729; &#63730; &#63731; x 0, j + &#948; u * x 0, j * i if j = i &#8722; 1 and x 0, j = 0 &#948; z if j = i &#8722; 1 and x 0, j = 0 x 0, j if j = i &#8722; 1 (12)</ns0:formula><ns0:p>The positive constant coefficients of &#948; z and &#948; u are selected to scale the initial simplex with the characteristic length and orientation of the x 0 . The vertices are i = 2, 3, ..., n + 1, and the parameters of the vertices are j = 1, 2, ..., n. If the constructed simplex is flat or is not in the same direction as an optimal solution, then this initial simplex may fail to drive the process towards an optimum or require to perform a large number of simplex evaluations. Therefore, the selection of a good starting vertex can greatly improve the performance of the simplex operations.</ns0:p><ns0:p>On the contrary, our solution allows the components of the reflected vertex to perform different reflections of the HNMa. This means that, unlike the simplex reflection of the NMa, the next simplex is formed by reflecting the vertex, that is, the one with the worst CF value, through the line segment connecting this vertex and the centroid of the other two vertices, and rotating the reflected vertex through an angle determined by the features of the reflected components. In addition, we reinforce the traditional simplex of the HNMa with four additional triangular simplex designs. The five simplexes explore the solution space and help to determine how best to approximate to a solution of the minimum value.</ns0:p><ns0:p>To initialize a simplex of the HNMa <ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018)</ns0:ref>, Equation ( <ns0:ref type='formula'>12</ns0:ref>) is modified to be consistent with the new features of the algorithm, as follows.</ns0:p><ns0:formula xml:id='formula_12'>v i, j (Solver1) = x 0, j + &#948; u * x 0, j * i if x 0, j = 0 x 0, j + &#948; z * i if x 0, j = 0 (13)</ns0:formula><ns0:p>According to <ns0:ref type='bibr' target='#b7'>Gao and Han (2012)</ns0:ref>, the default parameter values for &#948; u and &#948; z are 0.05 and 0.00025 respectively. The indices of the HNMa simplex used are i = 2, ..., 5, and j = 1, 2, ..., n <ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018)</ns0:ref>.</ns0:p><ns0:p>In this test, we are more interested in launching multiple sequences of trigonometric simplex designs with different directions. Each sequence is designed to rotate the starting simplex through an angle that designates the direction of the simplex. The proposed MTNMa enhances the standard Pfeffer&#347; method of simplex designs and forms five solvers of multi-directional simplex designs with distinct reflections. We will demonstrate how solvers of the MTNMa extract different features of non-isometric reflections and converge to a minimum with a smaller computational budget as compared to the previously discussed methods of simplex designs. 
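Both initialization schemes are short enough to state in code. The following sketch (illustrative only, in Python rather than the C# used for the experiments) builds the equal-edge-length simplex of Equations (9)-(11) and the Pfeffer-style Solver1 simplex of Equation (13) with the default values δu = 0.05 and δz = 0.00025; the starting point is arbitrary.

```python
# Illustrative sketch of two initial-simplex constructions; not the authors' code.
import numpy as np

def equal_length_simplex(x0, l=1.0):
    """Equal edge-length simplex, Eqs. (9)-(11): n+1 vertices placed around x0."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    b = l / (n * np.sqrt(2.0)) * (np.sqrt(n + 1.0) - 1.0)
    a = b + l / np.sqrt(2.0)
    V = np.tile(x0, (n + 1, 1))
    for i in range(1, n + 1):          # vertices v2..v(n+1)
        V[i] += b                      # every component shifted by b ...
        V[i, i - 1] += a - b           # ... except the (i-1)th, which gets a instead
    return V

def solver1_simplex(x0, n_vertices=5, du=0.05, dz=0.00025):
    """Pfeffer-style HNMa/Solver1 simplex, Eq. (13); vertex indices i = 2..5."""
    x0 = np.asarray(x0, dtype=float)
    V = [x0.copy()]
    for i in range(2, n_vertices + 1):
        V.append(np.where(x0 != 0.0, x0 + du * x0 * i, x0 + dz * i))
    return np.array(V)

print(equal_length_simplex([1.0, 2.0, 0.5], l=1.0))
print(solver1_simplex([1.0, 2.0, 0.5]))
```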
Key to this outcome is the mathematical model of the MTNMa designed to determine the optimal features of non-isometric reflections that result in better approximate solutions as compared to optimized versions of simplex designs.</ns0:p><ns0:p>One of the potential designs multiplies the odd-indexed variables of odd-indexed vertices by (-1); the sequence of applying Pfeffer&#347; method is chosen to perform a reflection in the y-components of the triangular simplexes of Solver1. The formula is as follows:</ns0:p><ns0:formula xml:id='formula_13'>v i, j (Solver2) = &#63729; &#63732; &#63730; &#63732; &#63731; x 0, j + (&#8722;1) j * &#948; u * x 0, j * i * mod ( i + j 2 ) if x 0, j = 0 and mod ( i + j 2 ) = 1 x 0, j + (&#8722;1) j * &#948; z * i * mod ( i + j 2 ) if x 0, j = 0 and mod ( i + j 2 ) = 1 (14)</ns0:formula><ns0:p>Similarly, we can obtain a mirror image of the above formula if we apply the transformation on the even components of x 0 to generate new vertices. Solver4 performs a reflection in the x-components of the triangular simplexes of Solver1. The corresponding equation is as follows. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_14'>v i, j (Solver4) = &#63729; &#63732; &#63730; &#63732; &#63731; x 0, j + (&#8722;1) j+1 * &#948; u * x 0, j * i * mod ( i + j 2 ) if x 0, j = 0 and mod ( i + j 2 ) = 0 x 0, j + (&#8722;1) j+1 * &#948; z * i * mod ( i + j 2 ) if x 0, j = 0 and mod ( i + j 2 ) = 0 (15)</ns0:formula><ns0:p>A different way to create a simplex with different characteristics of edge lengths and orientations is to push some or all the points of the Solver1 towards the negative (x and y) axes to constitute Solver3 or towards the positive axes to constitute Solver5. Hence, Solver3 rotates the triangular simplexes of Solver1 by 180 degrees about the origin, which is obtained by multiplying the odd and even components of (x and y) by (-1). Similarly, Solver5 is designed to adjust the simplexes of Solver1 to perform a reflection in x-axis, y-axis, or origin, which is obtained by taking the absolute value of the triangular simplexes of Solver1. The corresponding formulas are as follows:</ns0:p><ns0:formula xml:id='formula_15'>v i, j (Solver3) = x 0, j &#8722; &#948; u * x 0, j * i if x 0, j = 0 x 0, j &#8722; &#948; z * i if x 0, j = 0 (16) v i, j (Solver5) = x 0, j + &#948; u * x 0, j * i if x 0, j = 0 x 0, j + &#948; z * i if x 0, j = 0 (17)</ns0:formula><ns0:p>To monitor and evaluate a sequence of trigonometric simplex design, we need to know two points that the simplex passes through as well as the slope with respect to their CF values. Therefore, a window of size 10 points is used to examine the simplex performance. The window size is derived from our practical experience. One of the proposed solvers manages to locate the exact minimum for (Jennrich-Sampson) function within 22 simplex evaluations. Based on the evaluation of the direction vector, the simplex is either continued or aborted. Consider a simplex that passes through a window of 10-points, we need to know the first point P 1 (x 1 , y 1 ) and the last point P 10 (x 10 , y 10 ) of the window as well as the direction of the simplex. 
We can write this as a line in the parametric form by using vector notation.</ns0:p><ns0:p>x, y = x 1 , y 1 + t m x , m y (18)</ns0:p><ns0:p>For the particular case, we can pick x 1 , y 1 = P 1 x 1 , y 1 , so the direction vector can be found as:</ns0:p><ns0:formula xml:id='formula_16'>m x , m y = P 10 x 10 , y 10 &#8722; P 1 x 1 , y 1 (19)</ns0:formula><ns0:p>If the coordinates of the direction vector equal zero, this indicates that all best points that the simplex (solver) passes through would have equal coordinates, then the simplex is aborted unless it satisfies a convergence test based on the resolution of the simulator. The observing process continues for all the sequences of triangular simplexes on the coordinate plane until the coordinates of the minimal point are found by one of the simplex designs that needs less computational budget than the others. Another advantage of using Equation ( <ns0:ref type='formula'>19</ns0:ref>), when combined with data profiles later to evaluate several solvers, this formula can be used as a criteria to stop a solver that cannot satisfy the convergence test within the given computational budget.</ns0:p></ns0:div> <ns0:div><ns0:head>COMPUTATIONAL EXPERIMENTS</ns0:head><ns0:p>In this section, we present the testing procedures that provide a comprehensive performance evaluation of the proposed algorithm. We follow two stages to carry out the experiments. In the first stage, we define the metrics that differentiate between the considered algorithms, which are summarized as follows:</ns0:p><ns0:p>the accuracy of the algorithm compared to the actual minima, the wall-time to convergence (in second), the number of function evaluations, the number of simplex evaluations, and identification of the best sequence of trigonometric simplex design. In addition, we adopt the guidelines designed by <ns0:ref type='bibr' target='#b14'>Mor&#233; et al. (1981)</ns0:ref>, to evaluate the reliability and robustness of unconstrained optimization software. These guidelines utilize a set of functions exposed to an optimization algorithm to observe that the algorithm is not tuned The second stage involves normalized data profiles suggested by <ns0:ref type='bibr' target='#b15'>Mor&#233; and Wild (2009)</ns0:ref> with a convergence test given by the formula (20). The function of data profiles is to provide an accurate view of the relative performance of multiple solvers belonging to different algorithms when there are constraints on the computational budget.</ns0:p><ns0:formula xml:id='formula_17'>f (x 0 ) &#8722; f (x) &#8805; (1 &#8722; &#964;)( f (x 0 ) &#8722; f L )<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>where x 0 is the starting point for the solution of a particular problem p, p &#8712; P (P is a set of benchmark problems), f L is the smallest CF value obtained for the problem by any solver within a given number of simplex gradient evaluations, and &#964; = 10 &#8722;k is the tolerance with k &#8712; {3, 5, 7} for short-term outcomes.</ns0:p><ns0:p>These include changes in adaptation, behavior, and skills of derivative-free algorithms that are closely related to examining the efficiency and robustness of optimization solvers at different levels of accuracy.</ns0:p><ns0:p>In this research, however, the MTNMa launches multiple solvers that compute a set of approximate solutions. 
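Because several solvers run side by side on the same problem, it may help to see the convergence test and the window-based stopping rule together. The sketch below (illustrative only, in Python rather than the C# used for the experiments) applies the sufficient-reduction test of Equation (20) to the best value reached by each solver and uses the direction vector of Equation (19) over the 10-point window as the abort criterion; the starting value, best-known value, and solver bookkeeping are hypothetical.

```python
# Illustrative sketch; not the authors' implementation of the MTNMa bookkeeping.
import numpy as np

def converged(f_x0, f_best, f_L, tau):
    """Sufficient-reduction convergence test of Eqs. (20)-(21)."""
    return f_x0 - f_best >= (1.0 - tau) * (f_x0 - f_L)

def stalled(window):
    """Direction vector of Eq. (19) over the last 10 best points; all-zero => stalled."""
    P = np.asarray(window, dtype=float)
    return bool(np.all(P[-1] - P[0] == 0.0))

f_x0, f_L, tau = 24.2, 0.0, 1e-3              # hypothetical starting and best-known values
best = {"solver1": 0.012, "solver2": 0.0009, "solver3": 3.1}
for name, f_best in best.items():
    status = "converged" if converged(f_x0, f_best, f_L, tau) else "still running"
    print(name, status)

window = [[1.0, 2.0]] * 10                    # a solver whose best point has not moved
print("abort stalled solver?", stalled(window))
```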
The definition of the convergence test ( <ns0:ref type='formula' target='#formula_17'>20</ns0:ref>) is independent of determining the different optimization solvers that satisfy a certain accuracy, as in the case of algorithms that generate multiple solvers. This is not realistic, solvers mostly cannot approximate to an optimal solution in a similar number of evaluations, thereby some solvers may push the process faster towards the optima than others. Therefore, we use a linear model that has already been defined as the criteria for stopping the algorithm if one of the solvers satisfies a convergence test within a limited computational budget. Assume that we have a set of optimization solvers S converging to best possible solution f L obtained by any solver within a given number of simplex evaluations. The convergence test used for measuring several relative distances to optimality can be defined with respect to s, (s &#8712; S), we might instead write the convergence test in the following form:</ns0:p><ns0:formula xml:id='formula_18'>f (x 0 ) &#8722; f s (x) &#8805; (1 &#8722; &#964;)( f (x 0 ) &#8722; f L )<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>The previous work with data profiles has assumed that the number of simplex evaluations (one dimension) is the dominant performance measure for testing how well a solver performs relative to the other solvers <ns0:ref type='bibr' target='#b15'>(Mor&#233; and Wild, 2009;</ns0:ref><ns0:ref type='bibr' target='#b0'>Audet and Hare, 2017)</ns0:ref>. However, they did not investigate the performance of derivative free optimization solvers if a variety of metrics were used to evaluate the performance. If the cost unit is evaluated only using simplex evaluations, then this assumption is unlikely to hold, when the evaluation is expensive, as we will demonstrate later. In this case, we might instead define the performance measures to be the amount of computational time and number of simplex evaluations. Specifically, we define data profiles in terms of a variety of performance metrics, summarized:</ns0:p><ns0:p>the amount of computational time T , the number of simplex evaluations W , the number of function evaluations Y , and the number of CPU cores Z required to satisfy the convergence test (21). We thus define the data profile of a solver s by the formula.</ns0:p><ns0:formula xml:id='formula_19'>d s (T,W, Z) = 1 P size p &#8712; P : t s (p) n p + 1 &#8804; T, w s (p) n p + 1 &#8804; W, y s (p) n p + 1 &#8804; Y, z s (p) n p + 1 &#8804; Z (<ns0:label>22</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:p>where P denotes the cardinality of P, n p is the number of variables p &#8712; P, and t s (p), w s (p), y s (p) and z s (p) are the performance metrics for timing the algorithm, counting number of simplex evaluations, counting number of function evaluations, and counting number of CPU cores respectively.</ns0:p><ns0:p>Altogether, the computational experiments are conducted to evaluate the MTNMa on a computer that has 1.8 GHz core i5 CPU and 4 GB RAM. Finally, C# language is used to implement the MTNMa and the experiments.</ns0:p></ns0:div> <ns0:div><ns0:head>8/17</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The HNMa generates a sequence of triangular simplexes that extract distinct reflections to calculate the next vertex. Each simplex crawls independently to adapt its shape to determine a solution for a wide range of mathematical problems. Therefore, the convergence speed per simplex varies from one iteration to another. A simplex in some cases explores the neighborhood to update its threshold, but do not move only if the threshold is good enough to replace the worst point. However, in other cases a simplex continues to generate different triangular shapes and orientations. Therefore, the generated simplexes of the HNMa extract different features of non-isometric reflections to update the simplexes with optimal triangular shapes and rotations. In this way, the HNMa mimics the amoeba style of maneuvering to move from one point to another when approaching a minimal point. On the contrary, the NMa <ns0:ref type='bibr' target='#b18'>(Nelder and Mead, 1965)</ns0:ref> forces components of the reflected vertex to follow one of four linear operations (reflection, expansion, contraction, and shrinkage). When the next vertex is characterized by one operation (one type of reflections), some dimensions of the reflected vertex depart for less optimal values. This problem obviously appears in high-dimensional applications. Consequently, the simplex shapes of the NMa becomes less effective in high dimensions and tends to deteriorate rapidly with each iteration. The HNMa <ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018)</ns0:ref> has proven to deliver a better performance than the traditional NMa, represented by a famous Matlab function, known as 'fminsearch'.</ns0:p><ns0:p>To increase the probability of finding a global minimum, MTNMa generates five sequences of trigonometric simplex designs. Some points in the initial sequence of triangular simplexes of HNMa (Equation <ns0:ref type='formula'>13</ns0:ref>) are perturbed and used as starting points to launch other simplex designs with different reflections. For example, (Equation <ns0:ref type='formula'>14</ns0:ref>) the triangular simplexes of Solver2 are obtained by reflecting the y-components of the triangular simplexes of Solver1, which is performed by multiplying the x-components of Solver1 by (-1). Similarly, (Equation <ns0:ref type='formula'>16</ns0:ref>) the triangular simplexes of Solver1 are rotated 180 degrees to constitute the triangular simplexes of Solver3 (same as reflection in origin), which is obtained by multiplying the (x and y) components of Solver1 by (-1). (Equation <ns0:ref type='formula'>15</ns0:ref>) the triangular simplexes of Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Numerical experiments in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> are performed to test the MTNMa, according to the experimental conditions defined earlier in this section. The purpose of the computational study is to show that the definition of normalized data profiles for one dimension (such as simplex evaluations) in some cases is not an accurate measure for comparison between similar algorithms. 
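To make this point concrete, the sketch below (illustrative only, with hypothetical per-problem bookkeeping rather than the reported results) evaluates the multidimensional profile of Equation (22): a solver can appear to solve every problem when only the simplex-evaluation budget W is checked, yet solve fewer problems once the function-evaluation and time budgets are enforced as well.

```python
# Illustrative sketch of the multidimensional data profile of Eq. (22); not the authors' code.
# Each record holds the hypothetical cost at which one solver satisfied the convergence
# test on one problem: simplex evaluations w, function evaluations y, wall time t (s),
# CPU cores z, and the problem dimension n_p; np.inf marks an unsolved problem.
import numpy as np

def data_profile(records, W=np.inf, Y=np.inf, T=np.inf, Z=np.inf):
    """Fraction of problems solved within every normalized budget simultaneously."""
    solved = 0
    for r in records:
        k = r["n_p"] + 1                       # normalization by n_p + 1, as in Eq. (22)
        if (r["w"] / k <= W and r["y"] / k <= Y and
                r["t"] / k <= T and r["z"] / k <= Z):
            solved += 1
    return solved / len(records)

records = [
    {"n_p": 2,  "w": 60,  "y": 900,   "t": 0.4, "z": 1},
    {"n_p": 6,  "w": 300, "y": 9000,  "t": 9.5, "z": 1},   # cheap in W, expensive in T
    {"n_p": 10, "w": 450, "y": 16000, "t": 3.1, "z": 1},
]
print(data_profile(records, W=100))                    # 1.0: the W-only profile looks perfect
print(data_profile(records, W=100, Y=2000, T=1.0))     # ~0.67 once Y and T are enforced too
```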
Thus, one dimension may not reflect enough information to examine the efficiency and robustness of derivative-free optimization solvers when similar algorithms generate multiple solvers and use the normalized data profiles to allocate the computational budget. For this reason, we propose high-dimensional normalized data profiles that serve as an accurate measure when comparing similar algorithms and help to allocate an accurate estimate of the computational budget for the compared algorithms. We choose to compare our proposed solution to</ns0:p><ns0:p>GNMa <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref> because GNMa is one of the best algorithms that utilizes the test functions of <ns0:ref type='bibr' target='#b14'>Mor&#233; et al. (1981)</ns0:ref> and utilizes normalized data profile that involves one dimension (simplex evaluations).</ns0:p><ns0:p>The GNMa generates solvers in a tree-based genetic programming structure. The population size is initialized to 200 and evaluated recursively to produce the evolving simplexes. The GNMa is implemented using 20 of 2.66 GHz Core i5 (4 cores per CPU) machines <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref>. The authors assumed that a solution is acceptable if the fitness of the obtained solver is lower than 10 &#8722;5 . After running the computer simulation 20 times for 400 generations , five genetically evolved solvers successfully satisfied the condition of the fitness. The optimal solver is determined to be (genetic solver1). Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Penalty II ( <ns0:ref type='formula'>4</ns0:ref> Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> illustrates test results to cover the complete experiments for testing the reliability and robustness of the MTNMa. The results in this research are compared to the best-known relevant results from the literature presented by <ns0:ref type='bibr' target='#b5'>Fajfar et al. (2017)</ns0:ref>. According to the definition of the normalized data profile (Equation <ns0:ref type='formula' target='#formula_18'>21</ns0:ref>), f L is required to be determined, which is the best obtained results by any of the individual solvers of the algorithms (GNMa and MTNMa). Therefore, Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> includes the best results of the GNMa obtained by any of the five genetic evolved solvers (the optimal genetic solver1 and the other four genetic solvers reported by <ns0:ref type='bibr' target='#b5'>Fajfar et al. (2017)</ns0:ref>) to secure a fair comparison between GNMa and MTNMa. The GNMa is not an ensemble of the five evolved solvers and for this reason we utilize the high dimensional normalized data profiles to compare the MTNMa to the individual evolved solvers of the GNMa. Moreover, we compare the MTNMa to the one based on our previous publication in <ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018)</ns0:ref>. The traditional triangular simplex of HNMa generates a simplex with specified edge length and direction that depends on the standard parameter values of &#948; z and &#948; u , which is similar to solver 1. Table 1 also shows the dimensions of the test functions n, the number of simplex and function evaluations, and the actual minima known for the functions. In addition, the starting points for the test functions of <ns0:ref type='bibr' target='#b14'>Mor&#233; et al. 
(1981)</ns0:ref> are specified as part of the testing procedure so that the relevant algorithms can easily be examined and observed to validate whether the considered algorithms are tuned to a particular category of optimization problems or not. The other vertices can be either randomly generated <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref> or produced using a specific formula like Pfeffer&#347; method <ns0:ref type='bibr' target='#b2'>(Baudin, 2009)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>11/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>From the results given in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, it can be seen that the proposed sequences of trigonometric simplex designs, in some cases, achieve a higher degree of accuracy for high dimensions than for less. For example, MTNMa performs better to determine a solution to Quadratic (16) then Quadratic (8) in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>. While in other cases, the MTNMa generates fewer number of simplexes to approximate a particular solution for high dimensions than for less dimensions. For example, observe the number of simplex evaluations generated for Rosenbrock (6) compared to Rosenbrock (2) in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>. The behavior of the MTNMa in these problems is that when the dimensionality increases, the MTNMa manages to observe more patterns and find more combinations of the non-isometric features to form the reflected vertex. On the contrary, this is not the behavior of the GNMa, where the accuracy drops down and the algorithm performs a large number of simplex evaluations as it moves to higher dimensions. It can be observed also from Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> that the MTNMa was successful in following curved valleys functions such as Rosenbrock function. In addition, the test shows that the MTNMa is able to generate the same number of simplexes to reach the exact minimum for Rosenbrock (6, 8, and 10).</ns0:p><ns0:p>Thus testing MTNMa on much more complicated function such as Trigonometric ( <ns0:ref type='formula'>10</ns0:ref>) is useful because this function has approximately 120 sine and cosine functions added to each other. Even with the power of genetic programming, it is hard for the simplexes of the GNMa to progress in such environment.</ns0:p><ns0:p>However, since the proposed simplexes of the MTNMa have the angular rotation capability, they are capable of converging to minimums where amplitudes and angles are involved. Finally, we can see from While the GNMa (genetic Solver1) needs to produce 2700 simplexes to solve approximately 100% of the problems at this level of accuracy based on the reported results in <ns0:ref type='bibr' target='#b5'>(Fajfar et al., 2017)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As it can be seen in Figure <ns0:ref type='figure'>4</ns0:ref> ). This is a significant difference in performance. In addition, the computational complexity of the MTNMa solvers is not expensive as compared as the computational time and complexity required to evolve the GNMa solvers. The GNMa involves high computational overhead that comes from exchange vertices and features among the genetic simplexes and modernizing the current population with better offspring. 
The last major difference is that the optimization solutions of GNMa solvers in some functions</ns0:p><ns0:p>are not be able to satisfy Equation ( <ns0:ref type='formula' target='#formula_18'>21</ns0:ref>) for this level of accuracy. For example, Trigonometric function (10) requires the best possible reduction has to equal (10 &#8722;8 ), which is beyond the skills of any solvers of the GNMa.</ns0:p><ns0:p>From the sub-figures I, II and III given in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, it can be seen that the MTNMa solves roughly 91%</ns0:p><ns0:p>of the problems with a computational budget of 605 simplex gradients , 16271 function estimates, and 3.4 sec. for the accuracy level of (10 &#8722;7 ). Another interesting observation on the data profiles shown in Figures <ns0:ref type='figure' target='#fig_8'>4 and 5</ns0:ref>, is that the proposed algorithm tends to provide similar performance, as well as generate a moderate number of simplex and function evaluations to approximate solutions for the levels of accuracy (10 &#8722;5 ) and (10 &#8722;7 ). As a result, the use of data profiles that incorporate several performance metrics is essential to differentiate between similar algorithms, and provide an accurate estimate for allocating a computational budget that does not rely on a single dimension such as simplex gradients.</ns0:p></ns0:div> <ns0:div><ns0:head>Detailed Analysis of the Five Solvers</ns0:head><ns0:p>By analyzing those five solvers of multi-directional trigonometric simplex designs, we have conducted further tests, which reveal that further evaluations are important to make decision on which solver should be used when there is a limited computational budget. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>of Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>&#8722;I shows also that solvers (1 and 5) require significantly fewer number of simplex gradients than solver2 to solve 100% of the problems. Nevertheless, this significant difference in performance is not true when two performance metrics or more are used to examine the reliability of the solvers. This means that if the dimension T is removed from the profile, then remaining dimensions will show enough evidence to evaluate the five solvers.</ns0:p><ns0:p>To examine the parameters (T and W), we consider the observation of the relative performance of shows that the cost unit per iteration (W and T) for Solver3 is slightly less than the cost unit for Solver4.</ns0:p><ns0:p>Therefore, T is independent of Y because there is an additional (non-constant) overhead associated with the relative complexity of the 5 MTNMa solvers that is independent of the number of function evaluations.</ns0:p><ns0:p>The additional overhead comes from the exploration process around the neighborhood of the best result, which depends on how efficient a solver to move in a direction towards the optimum.</ns0:p><ns0:p>Solver3 requires higher function evaluations than Solver4, but takes less computational time to successfully solve the test problems. In the situation where Solver2 stands out as being the best of the five solvers, because certainly it requires not only less function evaluations but also less computational time than the other solvers. This proves that the parameters (W, Y, and T) present independent dimensions for data profiling.</ns0:p><ns0:p>The number of CPUs (Z) was not examined in our evaluation of the MTNMa. The time (T) is related to Z. 
In general, it depends on the type of mathematical problems that arise in applications. For example, we consider to design an application in pediatric cardiology to maximize the time to delay the next operation, and we want to use the high-dimensional normalized data profile to allocate the computational budget for different solvers of optimization algorithms. Also, the application is required to work in a virtual distributed environment such as Amazon Web Services (AWS). As part of the computational budget, it is important to specify the number of nodes in the virtual cluster. In this particular case, we can add another dimension that is the number of CPUs (Z) to allocate the different optimal numbers of Z for the different solvers and for a specific level of accuracy. The whole idea of the high dimensional normalized data profiles is to examine the efficiency and robustness of the different solvers when they are produced by different optimization algorithms, but most importantly is to allocate the computational budget for the relative solvers.</ns0:p><ns0:p>On a final note, the additional tests for examining data profiles on the 5 solvers of the MTNMa have confirmed that we need to define the normalized data profiles on the basis of a collection of performance measures. If the data profiles are defined for one dimension, then the accuracy of the profiles can be strongly biased when the numbers of function evaluations are independent of the other dimensions (simplex evaluations and computational time). Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. I &#8722; the geometrical analysis of an HNMa based on vector theory<ns0:ref type='bibr' target='#b17'>(Musafer and Mahmood, 2018)</ns0:ref>, II &#8722; the basic six operations of an HNMa<ns0:ref type='bibr' target='#b16'>(Musafer et al., 2020)</ns0:ref>.</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.58,319.93' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021) Manuscript to be reviewed Computer Science to particular functions or one type of optimization classes. For this purpose, Mor&#233; et al. (1981) introduced a large collection of different optimization functions for evaluating the reliability and robustness of unconstrained optimization software. The features of the test functions cover three areas: nonlinear least squares, unconstrained minimization, and systems of nonlinear equations.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Solver4 are initialized by reflecting the x-components of the triangular simplexes of Solver1, which is achieved by multiplying the y-components of Solver1 by (-1). Finally, (Equation17) the triangular simplexes of Solver5 are obtained by taking the absolute value of the triangular simplexes of Solver1.Solver5 can generate triangular simplexes by reflection in the x-coordinate, y-coordinate, or origin, or initialize triangular simplexes that are similar to that of the simplexes of Solver1. Figure2shows all the transformations on the (x and y) components of the traditional vertices of Solver1 to generate new vertices for Solver2, Solver3, Solver4, and Solver5. 
We assume that 3 arbitrary vertices of the triangular simplex (Solver1) shown in Figure2have component values (1, 2), (2, 1), and (3, 3).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. An example of different formations of Solver1.</ns0:figDesc><ns0:graphic coords='10,199.48,470.50,298.08,235.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(164) (0.0468) Extended Powell singular (8) 9.7234. . . 10 &#8722;61 (1) 4.9406. . . 10 -324 (4. . . 10 &#8722;173 (1) 5.0049. . . 10 -280 (2</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Data profiles for the MTNMa shown for (&#964; = 10 &#8722;3 ). I &#8722; Percentage of solved problems with respect to the number of simplex gradients (W), II &#8722; Percentage of solved problems with respect to the number of function evaluations (Y), III &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the computer time (T), IV &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the number of function evaluations (Y).</ns0:figDesc><ns0:graphic coords='13,103.72,329.52,489.60,233.58' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 contains four data profiles with different dimensions of performance metrics. One of the aims of utilizing various performance measures is to provide a complementary information for the relevant solvers as the function of the computational budget. This is required to secure a fair comparison between the MTNMa and GNMa. As shown in Figure 3&#8722;I, 3&#8722;II, and 3&#8722;IV the MTNMa needs to create 199 simplexes and 4200 function evaluations to solve 100% of the problems at the level of accuracy 10 &#8722;3 .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 385 Figure 4 .</ns0:head><ns0:label>3854</ns0:label><ns0:figDesc>Figure 4. Data profiles for the MTNMa shown for (&#964; = 10 &#8722;5 ). I &#8722; Percentage of solved problems with respect to the number of simplex gradients (W), II &#8722; Percentage of solved problems with respect to the number of function evaluations (Y), III &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the computer time (T), IV &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the number of function evaluations (Y).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Data profiles for the MTNMa shown for (&#964; = 10 &#8722;7 ). 
I &#8722; Percentage of solved problems with respect to the number of simplex gradients (W), II &#8722; Percentage of solved problems with respect to the number of function evaluations (Y), III &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the amount of computer time (T), IV &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the number of function evaluations (Y).</ns0:figDesc><ns0:graphic coords='14,110.92,431.59,475.20,226.71' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>&#8722;I, 4&#8722;II and 4&#8722;IV, solvers of the MTNMa require fewer number of simplex and function evaluations than solvers of the GNMa to solve roughly 100% of the problems. For example, with a budget of 200 simplex gradients, 10225 function evaluations, and 2 sec. Solvers of the MTNMa solve 100% of the problems at accuracy (10 &#8722;3 ), and solve almost 90% of the problems at accuracy (10 &#8722;5</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Data profiles for the five solvers shown for (&#964; = 10 &#8722;3 ). I &#8722; Percentage of solved problems with respect to the number of simplex gradients (W), II &#8722; Percentage of solved problems with respect to the number of function evaluations (Y), III &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the amount of computer time (T), IV &#8722; Percentage of solved problems with respect to the number of simplex gradients (W) and the number of function evaluations (Y).</ns0:figDesc><ns0:graphic coords='15,103.72,378.05,489.60,233.58' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6 &#8722;</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6&#8722;III illustrates, the cost unit per iteration (simplex evaluations (W) and time (T)) for solver 2 is less expensive than the other solvers. This forms a strong argument that how a solver in some cases may always require a large number of simplex gradients, but may have the potential to take less time to solve 100% of the test problems and improve the performance. Additional tests and analyzes shown in Figure 6&#8722;II and 6&#8722;III, indicate the strength of combining metric measures in data profiles, forming a clear view that the cost unit (function evaluations (Y) and T) for solver 2 is much less expensive than the other solvers. Even if solver 2 requires more simplex gradient evaluations, it is still more reliable than the others. The results shown in Figure 6&#8722;IV are fully consistent with the data profiles of Figures 6&#8722;II and 6&#8722;III. Solver 2 stands out as being the best of the five solvers.In this particular case, comparison of dimensions (W and Y) is useful for exploring how the number of active simplexes of solvers (1, 2 and 5) changes with respect to the number of objective function evaluations. It is not obvious whether the overall performance of the solvers (1, 2, and 3) is almost entirely dependent on the number of objective function evaluations alone or not. 
If the number of function evaluations is the dominant dimension to achieve the presented results for Solver2, then the parameters (T and W) do not present independent dimensions and therefore T is dependent of Y in this particular case.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Figure 6&#8722;II reveals that Solver4 needs to perform significantly less function evaluations than Solver3 to successfully solve the test problems. This can be seen in Figure 6&#8722;III, where the data profile for the two dimensions (W for Y) is less computationally expensive for Solver4 than for Solver3. If we assume that the parameter T is dependent of Y, then the data profile shown in Figure 6&#8722;IV should confirm that the cost unit (W and T) is less computationally expensive for Solver4 than for Solver3. Whereas, the data profile</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>3/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>4/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of Experimental Results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Test Function (n)</ns0:cell><ns0:cell>GNMa</ns0:cell><ns0:cell>MTNMa</ns0:cell><ns0:cell>Actual Minima</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Acc.) (Best Solver)</ns0:cell><ns0:cell>(Accuracy) (Best Solver)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Function Ev.)</ns0:cell><ns0:cell>(Function Ev.) (Simplex Ev.) 
(Time)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Rosenbrock (2)</ns0:cell><ns0:cell>0.0 (2)</ns0:cell><ns0:cell>0.0 (1)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1516)</ns0:cell><ns0:cell>(6963) (799) (0.0312)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Freudenstein&#8722;Roth (2) 48.9842 (1)</ns0:cell><ns0:cell>48.9842 (5)</ns0:cell><ns0:cell>48.9842</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(425)</ns0:cell><ns0:cell>(419) (47) (0.0200)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Powell badly scaled (2) 0.0 (1)</ns0:cell><ns0:cell>0.0 (1)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1957)</ns0:cell><ns0:cell>(9738) (694) (0.0156)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Brown badly scaled (2) 0.0 (1)</ns0:cell><ns0:cell>0.0 (2, 4, 5)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1349)</ns0:cell><ns0:cell>(1449, 1450, 1431) (196) (0.0155)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Beale (2)</ns0:cell><ns0:cell>0.0 (1)</ns0:cell><ns0:cell>0.0 (2, 4)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(683)</ns0:cell><ns0:cell>(1935, 2029) (181) (0.0312)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Jennrich&#8722;Sampson (2) 124.362 (1)</ns0:cell><ns0:cell>124.362 (5)</ns0:cell><ns0:cell>124.362</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(397)</ns0:cell><ns0:cell>(212) (22) (0.0156)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Helical valley (3)</ns0:cell><ns0:cell>0.0 (2)</ns0:cell><ns0:cell>0.0 (3)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(7287)</ns0:cell><ns0:cell>(22278) (1443) (0.1010)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bard (3)</ns0:cell><ns0:cell>8.2148. . . 10 -3 (2)</ns0:cell><ns0:cell>8.2148. . . 10 -3 (4)</ns0:cell><ns0:cell>8.2148. . . 10 &#8722;3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1020)</ns0:cell><ns0:cell>(1065) (72) (0.0156)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gaussian (3)</ns0:cell><ns0:cell>1.1279. . . 10 -8 (2)</ns0:cell><ns0:cell>1.1279. . . 10 -8 (2, 4)</ns0:cell><ns0:cell>1.1279. . . 10 &#8722;8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(567)</ns0:cell><ns0:cell>(442, 467) (36) (0.0156)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Meyer (3)</ns0:cell><ns0:cell>87.9458 (1)</ns0:cell><ns0:cell>87.9483 (1)</ns0:cell><ns0:cell>87.9458</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(4511)</ns0:cell><ns0:cell>(3776182) (357780) (33.0791)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Box 3D (3)</ns0:cell><ns0:cell>0.0 (1)</ns0:cell><ns0:cell>2.7523. . . 10 &#8722;29 (1)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2430)</ns0:cell><ns0:cell>(517602) (51060) (60.7032)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Powell singular (4)</ns0:cell><ns0:cell>1.9509. . . 10 &#8722;61 (1)</ns0:cell><ns0:cell>0.0 (2)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(4871)</ns0:cell><ns0:cell>(56958) (3878) (0.2031)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Wood (4)</ns0:cell><ns0:cell>0.0 (3)</ns0:cell><ns0:cell>3.9936. . . 10 &#8722;30 (3)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(4648)</ns0:cell><ns0:cell>(9871) (500) (0.0468)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kowalik&#8722;Osborne (4)</ns0:cell><ns0:cell>3.0750. . . 
10 -4 (1)</ns0:cell><ns0:cell>3.0750. . . 10 -4 (4)</ns0:cell><ns0:cell>3.0750. . . 10 &#8722;4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1206)</ns0:cell><ns0:cell>(6224) (423) (0.0900)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Brown&#8722;Dennis (4)</ns0:cell><ns0:cell>85822.2 (1)</ns0:cell><ns0:cell>85822.2 (5)</ns0:cell><ns0:cell>85822.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(1288)</ns0:cell><ns0:cell>(1322) (76) (0.0781)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Quadratic (4)</ns0:cell><ns0:cell>0.0 (2)</ns0:cell><ns0:cell>0.0 (5)</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(13253)</ns0:cell><ns0:cell>(19403) (1384) (0.0468)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Penalty I (4)</ns0:cell><ns0:cell>2.2499. . . 10 -5 (5)</ns0:cell><ns0:cell>2.2499. . . 10 -5 (4)</ns0:cell><ns0:cell>2.2499. . . 10 &#8722;5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(7854)</ns0:cell><ns0:cell>(293609) (19379) (0.7656)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>10/17</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57359:1:1:NEW 19 Jul 2021)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>that the MTNMa can detect functions with multiple minimal values such as the Brown Almost Linear (7) function. In addition, the results indicate that the MTNMa outperforms the GNMa in terms of the accuracy tests for almost all high dimensional problems (more than or equal to 8).</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
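To make the multi-metric normalized data profile discussed above concrete, the following Python sketch computes the fraction of benchmark problems for which a solver satisfies the convergence test within a joint budget of simplex gradients (W), function evaluations (Y), and wall time (T). This is an illustration only, not the authors' C# implementation; the record fields, function names, and the "joint budget" reading are assumptions introduced for the example.

```python
# Minimal sketch of a multi-dimensional normalized data profile.
# Assumption: for every (solver, problem) pair we already logged the smallest
# W (simplex gradients), Y (function evaluations), and T (seconds) at which the
# convergence test f(x) <= f_L + tau * (f(x0) - f_L) was first satisfied
# (math.inf if it never was). Names are illustrative, not from the paper's code.
import math
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SolveRecord:
    w: float  # simplex gradient evaluations at first success
    y: float  # function evaluations at first success
    t: float  # wall-clock seconds at first success

def convergence_budget(f_history: List[Tuple[float, float, float, float]],
                       f0: float, f_best: float, tau: float) -> SolveRecord:
    """f_history holds (f_value, W, Y, T) snapshots in the order they occurred."""
    target = f_best + tau * (f0 - f_best)
    for f_val, w, y, t in f_history:
        if f_val <= target:
            return SolveRecord(w, y, t)
    return SolveRecord(math.inf, math.inf, math.inf)

def data_profile(records: Dict[str, SolveRecord],
                 w_budget: float, y_budget: float, t_budget: float) -> float:
    """Fraction of problems solved within the joint (W, Y, T) budget."""
    solved = sum(1 for r in records.values()
                 if r.w <= w_budget and r.y <= y_budget and r.t <= t_budget)
    return solved / max(len(records), 1)

# Example: the value reported in the text for tau = 1e-7 would correspond to
# data_profile(records, w_budget=605, y_budget=16271, t_budget=3.4).
```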
"High-dimensional normalized data profiles for testing derivative-free optimization algorithms Hassan Musafer1, Emre Tokgoz2, and Ausif Mahmood3 1CDPHP, Data Scientist, Healthcare Economics, and Operations, 500 Patroon Creek Blvd, Albany, NY 12206, USA 2 Quinnipiac University, School of Engineering, 275 Mount Carmel Ave, Hamden, CT 06518, USA 3Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT, 06604, USA Corresponding Author: Hassan Musafer (e-mail: hmusafer@my.bridgeport.edu) Summary of Modifications We received valuable feedback from the respectful reviewers. They significantly helped us to improve the quality of our manuscript. For that, we would like to thank the editor, associate/assistant editor and the reviewers. In summary, the changes applied to the manuscript based on the reviewers’ comments, are as follows: • All the reviewers’ comments and suggestions have been carefully considered and addressed. • A few references were added for better clarity. • Explanations have been added/modified according to the reviewers’ comments and/or the authors’ observations. • All the changes applied have been highlighted in the revised manuscript. Now we explain the changes in detail per item. The items are categorized per reviewer. Our response to each item starts with ‘R:’ and is shown in boldface. We are truly grateful to have been given the chance to revise the paper according to the reviewers’ comments. In what follows, we have provided the reviewers’ comments and our responses regarding how we have addressed each of the comments. We have made our best efforts to address all comments. Thanks so much. Reviewer #1: Comments to the Author: Q. Mathematical notation in page 2 is not standard in the DFO literature. Please use widely-used mathematical notation. Capitalize the first letter in 'equation' R: Thank you very much. We use the mathematical notation from a recent book in DFO (Audet and Hare, 2017). We capitalize the first letter. Audet, C., & Hare, W. (2017). Derivative-free and blackbox optimization. Q. 'The HNM algorithm proved to deliver better performance than the traditional NM algorithm, represented by a famous Matlab function, known as ”fminsearch” Musafer and Mahmood (2018)'. fminsearch has not been proposed by Musafer and Mahmood (2018), it is the classic NM algorithm. R: Thank you very much. The location of the reference is modified. “The HNM algorithm (Musafer and Mahmood, 2018) has proven to deliver a better performance than the traditional NM algorithm, represented by a famous Matlab function, known as” fminsearch”.” Q. 'From the table, ...'. Please provide references to all figures/tables. R: Thank you very much. We provide references to all figures/tables. Q. The English language can be improved. Some examples where the language could be improved include lines 222 and 292 – the current phrasing makes comprehension difficult. R: Thank you very much. We improve the English of lines 222 and 292. Line 222 has been changed to “However, they did not investigate the performance of derivative free optimization solvers if a variety of metrics were used to evaluate the performance.” Line 292 has been changed to “Finally, we can see from Table 1 that the MTNMa can detect functions with multiple minimal values such as the Brown Almost Linear (7) function.” Q. A readme file is needed in the Supplemental_File.zip in order to be able to find out where to find the codes and problem files. R: Thank you very much. 
We will provide a readme file in the Supplemental_File.zip. Q. 'The standard parameter values for usual delta d_u and for zero term delta d_z are chosen 0.05 and 0.00025 respectively'. Please justify. R: Thank you very much. We have a reference for those values. We change the sentence and add the reference. “According to Gao and Han (2012), the default parameter values for usual delta δu and zero term delta δz are 0.05 and 0.00025 respectively.” Gao, F., & Han, L. (2012). Implementing the Nelder-Mead simplex algorithm with adaptive parameters. Computational Optimization and Applications, 51(1), 259-277. Q. The test functions have not been presented in the text. More importantly, all these problems have only a few variables. The authors need to consider much larger problems. R: Thank you very much. We cite the test functions that have been proposed by (Moré et al., 1981) to test the reliability and robustness of unconstrained optimization software. More et al. (1981) explain the features of the test functions, the starting vertices for the test functions, and the reasons for selecting such a collection of mathematical functions. We provide the details of the importance of the test functions in the computational experiment section, lines 193-200 of the manuscript. We have to be consistent with the literature in terms of the number of variables and use the same total number of variables of (Fajfar et al., 2017) in order to be able to compare computational results. Moré, J. J., Garbow, B. S., & Hillstrom, K. E. (1981). Testing unconstrained optimization software. ACM Transactions on Mathematical Software (TOMS), 7(1), 17-41. Fajfar, I., Puhan, J., & Bűrmen, Á. (2017). Evolving a Nelder–Mead algorithm for optimization with genetic programming. Evolutionary computation, 25(3), 351-373. Q. What's the purpose of the computational study? Why not compare against efficient DFO algorithms (e.g., NOMAD)? R: Thank you very much. The purpose of the computational study is to show that the definition of normalized data profiles for one dimension (such as simplex evaluations) in some cases is not an accurate measure for comparison between similar algorithms. Thus, one dimension may not reflect enough information to examine the efficiency and robustness of derivative-free optimization solvers when similar algorithms generate multiple solvers and use the normalized data profiles to allocate the computational budget. For this reason, we propose high-dimensional normalized data profiles that serve as an accurate measure when comparing similar algorithms and help to allocate an accurate estimate of the computational budget for the compared algorithms. We choose to compare our proposed solution to GNMa (Fajfar et al., 2017) because GNMa is one of the best algorithms that utilizes the test functions of (Moré et al., 1981) and utilizes normalized data profile that involves one dimension (simplex evaluations). Q. The paper is interesting. However, I am confused about its scope. Looking at the title of the paper (High-dimensional normalized data profiles for testing derivative-free optimization algorithms), I would expect that the authors propose a new tool (similar to data profiles) for comparing DFO algorithms. However, this is not the case since the authors are proposing five sequences of trigonometric simplex designs for high dimensional unconstrained optimization problems. Therefore, the authors need to focus on this and only present normalized data profiles as a mean to compare their algorithm. 
Thus, I believe that the authors need to restructure their paper in order to make this clear. Finally, the computational results are weak. The authors need to consider larger problems and other DFO algorithms to compare with. R: Thank you very much for the comment. The main contribution of the paper is to propose data profiles that involve a variety of performance measures. The second contribution is to propose five sequences of trigonometric simplex designs, called MTNMa. We use MTNMa as an example of derivative-free optimization algorithms and compare it to GNMa (Fajfar et al., 2017). In our paper, we have demonstrated how accurate the proposed data profiles to analyze the efficiency and robustness of GNMa and MTNMa solvers. This idea is emphasized in the abstract and more details are added in the discussion section to further clarify our point. In addition, high-dimensional normalized data profiles are applicable to other derivative-free optimization algorithms. Regarding the computational results, we have to be consistent with the literature for comparison purposes and utilize the same type of mathematical problems and preserve their dimensions for both GNMa and MTNMa. ========================================================== Reviewer #2: Basic reporting: Q. The prose of the article requires significant revision in order for it to meet the requirement for use of clear and unambiguous English. There are frequent grammatical errors and use of terms which are, to my experience, non-standard with respect to the common language of the mathematical sciences. A non-exhaustive list of some of the errors with proposed corrections is included in the “General comments for the author”. R: Thank you very much. We have revised the manuscript and improved the English to include the reviewer’s points. We have carefully addressed all the comments and updated the manuscript accordingly. Q. The structure of the article also requires revision. For example, the first two paragraphs of the Discussion (starting Line 251) introduce the Genetic Nelder Mead (GNM) algorithm and provide the only summary of the author’s Multidirectional Trigonometric Nelder Mead (MTNM) algorithm. The Discussion section should be limited to the interpretation of the author’s work, in the context of previously presented background information, as much as possible. R: We thank the reviewer for this useful suggestion. We have carefully revised the discussion section to include the reviewer’s points. The discussion section focuses on the Multidirectional Trigonometric Nelder Mead (MTNM) algorithm. We have presented the Genetic Nelder Mead (GNM) algorithm in the (Introduction Section). We explained why the GNM algorithm (Fajfar et al., 2017) is a good candidate for comparison with the MTNM algorithm. The Discussion Section is modified to include the reviewer’s points. Q. As the GNM algorithm is a core component of the author’s evaluation of the MTNM algorithm and the proposed data profile (Equation (25)), I believe that it should be more prominently introduced in the article’s introduction. References should also be provided to justify the choice of the GNM algorithm over other Nelder Mead variants for use as a point of comparison. References should also be provided for the sentence starting on Line 220. R: We thank the reviewer for this suggestion. The (Discussion and Introduction Sections) have been modified to include all reviewers’ points. We provided references for the sentence starting on line 220. 
The purpose of the computational study is to show that the definition of normalized data profiles for one dimension (such as simplex evaluations) in some cases is not an accurate measure for comparison between similar algorithms. Thus, one dimension may not reflect enough information to examine the efficiency and robustness of derivative-free optimization solvers when similar algorithms generate multiple solvers and use the normalized data profiles to allocate the computational budget. For this reason, we propose high-dimensional normalized data profiles that serve as an accurate measure when comparing similar algorithms and help to allocate an accurate estimate of the computational budget for the compared algorithms. We choose to compare our proposed solution to GNMa (Fajfar et al., 2017) because GNMa is one of the best algorithms that utilizes the test functions of (Moré et al., 1981) and utilizes normalized data profile that involves one dimension (simplex evaluations). The GNMa generates solvers in a tree-based genetic programming structure. The population size is initialized to 200 and evaluated recursively to produce the evolving simplexes. The GNMa is implemented using 20 of 2.66 GHz Core i5 (4 cores per CPU) machines (Fajfar et al., 2017). The authors assumed that a solution is acceptable if the fitness of the obtained solver is lower than 10-5. After running the computer simulation 20 times for 400 generations, five genetically evolved solvers successfully satisfied the condition of the fitness. The optimal solver is determined to be (genetic solver1). Q. The minimisation example starting on Line 49 requires further explanation. While I do find the chosen topic (surgical grafts in cardiology) to be interesting, it is not clear with respect to what variables the objective function is defined. R: Thank you very much. We have explained a number of variables that can affect the objective function. “For example, pediatric cardiologists seek to delay the next operation as much as possible to identify the best shape of a surgical graft (Audet and Hare, 2017). A number of variables can affect the objective function to treat and manage heart problems in children. Some are structural differences that they are born with, such as holes between chambers of the heart, valve problems, and abnormal blood vessels. Others involve abnormal heart rhythms caused by the electrical system that controls the heartbeat.” Q. There are a number of small issues with the Figures and Table presented in this article: Figure 1: Sublabel “A” has an identical to the variable “A” present in Figure 1 A and B. The parameters $d$, $d_l$ and $Th$ are not defined. The axes of the plot Figure 1 A are not labelled. R: Thank you very much. We corrected the sublabels in Figure 1. The parameters $d$, $d_l$ and $Th$ are defined. The axes of the plot Figure 1 are labelled as well. Q. Table 1 performs a comparison which I do not believe to be an accurate communication of the work of Fajfar et al (see Section 2). R: Thank you very much. We carefully addressed and answered all points in Section 2. Q. Figure 2, 3, 4 and 5: As data profiler variables are defined in the accompanying description, the axes of the plots may use ds(*), W, Y and T (sec) as the axes labels (suggestion). It is not clear how parameter Y is an “estimate” of the objective function, as the function should be computed up to the limits of numerical precision at each point. I suspect “estimates” refers to the number of “evaluations”. 
R: We thank the reviewer for this useful suggestion. The limit of numerical precision for (Y) defines the maximum number of function evaluations that a function cannot exceed. Most of the test functions need less than this limit to satisfy the requirement of normalized data profile associated with a level of accuracy. “Evaluation” is a better expression for this inequality than “Estimate”. Q. Figure 5 A and B: It is difficult follow the lines for each solver without zooming in considerably. R: Thank you very much. We have enhanced Figure 5. We increased the resolution of all the Figures in the manuscript as well. Q. The authors have provided code for 6 of the 36 functions comprising test set proposed by More´ et al. I was able to execute this code to verify this limited subset of the author’s results. However, code should be provided for the complete suite of test functions. R: Thank you very much. We will provide c# scripts for the 36 test functions. Q. Finally there are a number of, mostly small, mathematical omissions or errors: R: Thank you very much. We have carefully addressed all the reviewer’s points and corrected mathematical omissions or errors. Q. Line 51: The domain and range of $f$ is not defined. In the context of the Nelder Mead algorithm, the generalised objective function is typically $f: \mathbb{R}^n \rightarrow \mathbb{R}$ such that for, $f(x)$, $x \in \mathbb{R}^n $. R: Thank you very much. The domain and range of (f) are defined in Equation 1. We use the mathematical notation from a recent book in DFO (Audet and Hare, 2017). Q. Line 101 presents an ordering of vertex pairs, this is incorrect in light of the ordering described on the previous line. The ordering should be notated in a manner equivalent to: $A=f(v_1) < B=f(v_2) < C = f(v_3)…$ R: Thank you very much. We revised the paragraph to reflect the correct description. The following change has been added. “For example, suppose we want to determine the minimum of a function $f$. The function $f(x,y)$ is calculated at the vertices that are subsequently arranged in ascending with respect to the cost function (CF) values, such that: A (x_1,y_1 ) < B (x_2,y_2 ) < C (x_3,y_3) < Th(x_4,y_4), where A, B, and C are the vertices of the triangular simplex with respect to the lowest, 2nd lowest, and 2nd highest CF values, and Th is a threshold that has the highest CF value. The need for the Th arises when the HNMa performs a reflection in an axial component, it replaces the value of the axial component of the Th. If the new value of the Th leads to lower CF value than the previous CF value of the Th, then the HNMa moves to upgrade the next axial component of the Th. After upgrading the Th point with a variety of non-isometric reflections, the HNMa examines the Th to validate if the resulted Th has a lower CF value than C to be replaced with C or the HNMa upgrades the Th only. This technique of exploring the neighborhood of the minimum is to search for the optimal patterns that can be followed and result in a better approach to find the minimum.” Q. Line 152: If the values for $delta_u$ and $\delta_z$ have been arrived at through the authors’ practical experience, they should state so, if there is a formal or historical justification for these parameter values, it should be explained. R: Thank you very much. We have a reference for those values. We changed the sentence and added the reference. 
“According to Gao and Han (2012), the default parameter values for usual delta δu and zero term delta δz are 0.05 and 0.00025 respectively.” Gao, F., & Han, L. (2012). Implementing the Nelder-Mead simplex algorithm with adaptive parameters. Computational Optimization and Applications, 51(1), 259-277. Q. Equation (25): Parameter “Z” refers to the “number of machines”. The term “number of machines” is ambiguous. I assume, however, that the authors are referring to the number of CPU cores. R: Thank you very much. The authors are referring to the number of CPU cores. Q. The author’s present two original contributions: 1. Firstly, a modified version of the Nelder Mead algorithm, the MTNM algorithm, utilizes a set of 5 distinct solvers which work independently to identify the objective function minimum. These simplexes introduce different rotational ‘shifts’ through modification of the functions governing the placement of new simplex vertices at each iteration. An early stopping criterion is used to abort solvers which are failing to converge. The MTNM algorithm is shown to outperform solvers developed via the GNM algorithm in some cases. Of particular note is its performance in high dimensions, for example, when minimising the Quadratic (24) problem. 2. Secondly, a higher dimensional data profiler function by which to evaluate the performance of two dissimilar solvers operating on the same problem set. It introduces dimensions associated with wall time, number of simplex evaluations and number of CPUs. R: Thank you very much. Q. However, there are a number of significant issues with the experimental design: It is not clear why the GNM algorithm has been chosen as a point of comparison. In the referenced paper, Fajfar et al outline a generic algorithm (GNM) for the evolution (or “hyper-optimisation”) of alternative Nelder Mead simplexes. They arrived at 5 candidates and present a single “optimal solver” (solver 1). Conversely, the authors propose an algorithm that uses an ensemble of 5 solvers. R: Thank you very much. We explained in the (Discussion section) why the GNM algorithm is a good candidate to compare with. “The results in this research are compared to the best-known relevant results from the literature introduced by Fajfar et al., (2017). According to the definition of the normalized data profile (Equation 21), fL is required to be determined, which is the best obtained results by any of the individual solvers of the algorithms (GNMa and MTNMa). Therefore, Table 1 includes the best results of the GNMa obtained by any of the five genetic evolved solvers (the optimal genetic solver1 and the other four genetic solvers reported by Fajfar et al., (2017) to secure a fair comparison between GNMa and MTNMa. The GNMa is not an ensemble of the five evolved solvers and for this reason we utilize the high dimensional normalized data profiles to compare the MTNMa to the individual evolved solvers of the GNMa.” We explicitly stated in the manuscript that we compare the MTNMa to the genetic solver 1 of GNMa. Q. Table 1 implies that the GNM algorithm is an ‘ensemble’ Nelder Mead solver when that is not the case. This also occurs on Line 306 where it is stated that the GNM algorithm requires about 12 hours to complete using 20 CPU cores. These values are taken from the time reported by Fajfar et al for completion of their genetic evolution algorithm, not optimisation of the standard test problems of More´ et al. R: Thank you very much. We clarify this in the Discussion section. Q. 
The data presented by Fajfar et al does not allow for a comparison of their solvers with the MTNM algorithm using the author’s defined data profile. R. Thank you, we carefully addressed and answered the reviewer’s points in the discussion section. Q. The addition of the T, W and Z parameters require further justification, as it stands, I have the following concerns: The Nelder Mead, and modified Nelder Mead, algorithms presented in these articles require $n + 1$ function evaluations per iteration, where $n$ is the number of objective function parameters. The early stopping test employed by the MTNM algorithms results in a meaningful difference between the number of simplex evaluations and function evaluations, but this is certainly not the case in general. R. Thank you very much. The basic idea of the trigonometric simplex designs is to explore the solution space of mathematical problems for best patterns to be followed and approximate the minimal points. This technique of exploring the solution space varies from one function to another. Q. While comparison of $W$ and $Y$ is perhaps useful for exploring how the number of ‘active’ MTNM simplexes changes with respect to the number of objective function evaluations, the overall performance of the algorithm is still almost entirely dependent on the number of objective function evaluations alone. Dependence on $W$ could only arise through the use of simplexes with distinct algorithmic complexity. If there is a measurable difference in the algorithmic complexity of the 5 MTNM solvers, the authors should demonstrate this by way of proofs or numerical experiments. R. Thank you very much. “In this particular case, comparison of dimensions (W and Y) is useful for exploring how the number of active simplexes of solvers (1, 2 and 5) changes with respect to the number of objective function evaluations. It is not obvious whether the overall performance of the solvers (1, 2, and 3) is almost entirely dependent on the number of objective function evaluations alone or not. If the number of function evaluations is the dominant dimension to achieve the presented results for Solver2, then the parameters (T and W) do not present independent dimensions and therefore T is dependent of Y in this particular case. This means that if the dimension T is removed from the profile, then remaining dimensions will show enough evidence to evaluate the five solvers. To examine the parameters (T and W), we consider the observation of the relative performance of Solver3 and Solver4. The data profile as shown in Figure 6-I indicates that Solver3 tends to produce less simplex evaluations than Solver4 to successfully solve the test problems for (τ = 10-3). On the other hand, the data profile in Figure 6-II reveals that Solver4 needs to perform significantly less function evaluations than Solver3 to successfully solve the test problems. This can be seen in Figure 6-III, where the data profile for the two dimensions (W and Y) is less computationally expensive for Solver4 than for Solver3. If we assume that the parameter T is dependent of Y, then the data profile shown in Figure 6-IV should confirm that the cost unit (W and T) is less computationally expensive for Solver4 than for Solver3. Whereas, the data profile shows that the cost unit per iteration (W and T) for Solver3 is slightly less than the cost unit for Solver4. 
Therefore, T is independent of Y because there is an additional (non-constant) overhead associated with the relative complexity of the 5 MTNMa solvers that is independent of the number of function evaluations. The additional overhead comes from the exploration process around the neighborhood of the best result, which depends on how efficient a solver to move in a direction towards the optimum. Solver3 requires higher function evaluations and less simplex gradients than Solver4, but takes less computational time to successfully solve the test problems. In the situation where Solver2 stands out as being the best of the five solvers, because certainly it requires not only less function evaluations but also less computational time than the other solvers. This proves that the parameters (W, Y, and T) present independent dimensions for data profiling. We rely on the proposed normalized data profile to measure the relative complexity of the 5 MTNM solvers. The high dimensional normalized data profile is used not only to examine the efficiency and robustness of the 5 MTNM solvers but also to measure the relative computational time and complexity among the 5 solvers of the MTNMa. Q. The algorithm wall-time, $T$, is not generalizable due to its dependence on the particular computer hardware used and the state of the host computer. Furthermore, for $T$ to be independent of $Y$, there must exist a significant (non-constant) overhead associated with the solvers that are independent of the number of function evaluations. However, as any such time overhead will necessarily result from the algorithmic complexity of the simplex construction step (see above), the $T$ and $W$ parameters do not seem to present independent dimensions for data profiling. R. Thank you very much. It is true that T depends on particular computer hardware used and the state of the host computer. We addressed this point in the previous question and proved that T is independent of Y. We also proved in the section of “Detailed Analysis of the Five Solvers” that the parameters (W, Y, and T) present independent dimensions for data profiling. Q. The number of CPUs, $Z$, was not used by the authors in their evaluation of the MTNM algorithm and I am unsure of how it relates to the NM algorithm in general. The most common target for parallelization techniques in the context of numerical optimisation problems is the objective function as it is (almost always) the most computationally expensive step. For $Z$ to be relevant the authors should present an optimisation algorithm that follows an alternative parallelization scheme. R. Thank you very much. The number of CPUs (Z) was not examined in our evaluation of the MTNMa. The time (T) is related to Z. In general, it depends on the type of mathematical problems that arise in applications. For example, we consider to design an application in pediatric cardiology to maximize the time to delay the next operation, and we want to use the high-dimensional normalized data profile to allocate the computational budget for different solvers of optimization algorithms. Also, the application is required to work in a virtual distributed environment such as Amazon Web Services (AWS). As part of the computational budget, it is important to specify the number of nodes in the virtual cluster. In this particular case, we can add another dimension that is the number of CPUs (Z) to allocate the different optimal numbers of Z for the different solvers and for a specific level of accuracy. 
The whole idea of the high dimensional normalized data profiles is to examine the efficiency and robustness of the different solvers when they are produced by different optimization algorithms, but most importantly is to allocate the computational budget for the relative solvers. Q. Validity of the findings The authors present interesting results with respect to the performance of the MTNM solver, however, exploration of these results is hampered by issues in the experimental design (see Section 2). Unfortunately, it is not possible to easily validate all of the author’s findings and reliance on externally sourced data (from the referenced paper by Fajfar et al) is an additional barrier to replication. R. Thank you very much. We carefully addressed the reviewer’s points and updated the manuscript accordingly. We compared the solvers of the MTNMa to the best relevant results from the literature reported by Fajfar et al. (2017). Q. Comments for the Author Below are some suggested grammatical corrections and comments relating to the prose of the submitted article: Line 18: “…through an angle <that> designates…” R. Thank you very much. The sentence is revised. Q. Line 37: “<We are> motived by…” R. Thank you very much. The sentence is revised. Q. Line 47: “<It seems a fact> that if a problem can be described as a mathematical expression, then at some point we will be <required> to minimize it or tune <its> independent variables at a <minimum>”. R. Thank you very much. The sentence is revised. Q. Line 59: “…reflections over the <changing> landscape of <the> mathematical problems until the coordinates of the minimum point <can> be obtained…” R. Thank you very much. The sentence is revised. Q. Line 66: “…the simplex becomes <increasingly> distorted <with each iteration>, generating different geometrical formations that are less effective <than> the <original> simplex design.” R. Thank you very much. The sentence is revised. Q. Line 69: “…the sequence of simplexes <to> converge to a non-stationary point.” R. Thank you very much. The sentence is revised. Q. Line 75: “The rest of this paper is organised as follow<s>.” “presents the theory of <the> sequential design of trigonometric Nelder-Mead algorithms<s>…” R. Thank you very much. The sentence is revised. Q. Line 76, 97 Figure 1 A: “…the vector theory” -> “vector theory” R. Thank you very much. The sentence is revised. Q. Line 78: “…and presents <the> multidimensional…” R. Thank you very much. The sentence is revised. Q. Line 83: “…the theory of <the> Hassan…” R. Thank you very much. The sentence is revised. Q. Line 86: “<This is different from the> traditional NM algorithm…” (suggestion) R. Thank you very much. The sentence is revised. Q. Line 93: “…the generat<ed> sequence of triangular simplexes are guaranteed not only have different shapes but <to> also they have different directions<.> <These characteristics lead to> better performance than the traditional hyperplanes simplex.” “ Note that <to> find the reflected point D we add the vectors $H$ and $d$<, as shown in Figure 1 A.> ” R. Thank you very much. The sentence is revised. Q. Line 106: “Now is we consider two combinations or more”: It is not clear what sort of combinations we are considering. R. Thank you very much. The sentence is modified. “Now, if we consider two combinations (such as x and y) or more, then the simplex as in the case of the HNMa performs two reflections or more.” Q. Line 107: “..multiple components of <the> triangular HNM simplex…” R. Thank you very much. 
The sentence is revised. Q. Line 114: “In fact, the HNM algorithm is designed to deform its simplex in a way that introduces nonlinear movement, by incorporating a rotation<al> <shift at> each iteration.” (suggestion) R. Thank you very much. The sentence is revised. Q. Line 115: “After all axial components of the Th point are updated to see whether the new Th is better than C to be replaced or not.” R. Thank you very much. The sentence is revised. Q. Line 122: “…approximation to a solution, generating different geometrical configurations…” R. Thank you very much. The sentence is revised. Q. Line 123: “…build the initial simplex with <edges of > equal length …” R. Thank you very much. The sentence is revised. Q. Line 129: “The risk is that<,> if the initial simplex is perpendicular to an optimal solution, then the algorithm…” R. Thank you very much. The sentence is revised. Q. Line 132: “…initial simplex with the NM method…” (suggested) R. Thank you very much. The sentence is revised. Q. Line 134: “…the generat<ed> sequence of…” R. Thank you very much. The sentence is revised. Q. Line 134: “One of the problems is that the generating sequence of simplexes should be consistently scaled with respect to the best vertex obtained and restart the simplex around the best observed results.” R. Thank you very much. The sentence is revised. Q. This sentence is difficult to parse as “One of the problems” is closely followed by “should be”. The first phase is a negative imperative, while the second phrase is a positive imperative. A clearer sentence could start “One of the problems is that the generated sequence of simplexes is not scaled with respect to the best vertex…” or, perhaps, “Ideally the generated sequence of simplexes should be scaled with respect to the best vertex…”. R. Thank you very much. The sentence is revised. Q. Line 138: “”Alternatively, the most popular way to initialize a simplex is Pfeffers´ method <(Baudin, 2009).> <This approach scales> the initial simplex <based on the starting point, $X_0$>. The initial vertex is set to v1 = x0, and the remaining vertices <are> obtained as follows,” R. Thank you very much. The sentence is revised. Q. The paragraph starting Line 141: Please present a brief argument as to why Equation (15) leads to improved starting points. The last sentence does not deductively follow from the Equation definition. R. Thank you very much. We presented a brief argument that explains why Equation (15) leads to a better simplex initialization. “Alternatively, the most popular way to initialize a simplex is Pfeffer ́s method, which is due to L.Pfeffer at Stanford (Baudin, 2009). It is a heuristic method that helps to determine the characteristics of the initial simplex based on the starting point x0. The method is applied using the content of usual delta (δu) and zero term delta (δz) elements that are used to adjust the simplex orientation. Pfeffer ́s method is presented in” Global Optimization of Lennard-Jones Atomic Clusters” (Fan, 2002); and it is used in the” fminsearch” function from the” neldermead package” (Bihorel et al., 2018). The positive constant coefficients of δz and δu are selected to scale the initial simplex with the characteristic length and orientation of the x0. If the constructed simplex is flat or is not in the same direction as an optimal solution, then this initial simplex may fail to drive the process towards an optimum or require to perform a large number of simplex evaluations. 
Therefore, the selection of a good starting vertex can greatly improve the performance of the simplex operations.” Fan, E. Global optimization of Lennard-Jones atomic clusters. MASTER OF SCIENCE. McMaster University, COMPUTING & SOFTWARE, Hamilton, Ontario, 2002. Bihorel, S., Baudin, M., & Bihorel, M. S. (2018). Package ‘neldermead’. Q. Line 145: “On contrary, our solution depends on allowing the components of the reflected vertex to <transform non-uniformly with each iteration of the simplex>. R. Thank you very much. The sentence is revised. Q. Line 154: New paragraph at “In this test, we are more interested in launching multiple sequences of trigonometric simplex designs with different…” R. Thank you very much. The sentence is revised. Q. Line 158: I suggested that the sentence starting on this line, and the one following, be reworked. The first sentence makes a definite claim as to the effectiveness of the multidirectional simplexes without providing justification, it is also not clear what “effective” means in the context. Finally, instead of “mathematical landscape”, I suggest the term “solution space” or “the domain of the objective function”. R. Thank you very much. The sentence is modified. “In this test, we are more interested in launching multiple sequences of trigonometric simplex designs with different directions. Each sequence is designed to rotate the starting simplex through an angle that designates the direction of the simplex. The proposed MTNMa enhances the standard Pfeffer's method of simplex designs and forms five solvers of multi-directional simplex designs with distinct reflections. We will demonstrate how solvers of the MTNMa extract different features of non-isometric reflections and converge to a minimum with a smaller computational budget as compared to the previously discussed methods of simplex designs. Key to this outcome is the mathematical model of the MTNMa designed to determine the optimal features of non-isometric reflections that result in better approximate solutions as compared to optimized versions of simplex designs.” Q. In the following sentence, it is not clear (to me) if the proposed methods seek to rotate the simplexes such that they have a unique starting direction or a unique direction with each iteration. R. Thank you very much. The sentence is revised. Q. Line 163: One of the potential designs <multiplies> the odd<-indexed> variables of odd-<indexed> vertices by (-1) R. Thank you very much. The sentence is revised. Q. Line 167: “…even components of x0,” R. Thank you very much. The sentence is revised. Q. Line 168: “…specified edge lengths and orientation…” All simplex edge lengths and orientations are specified in some manner. Are the specifications those goals outlined in the paragraph starting on Line 145? R. Thank you very much. The sentence is corrected. “A different way to create a simplex with different characteristics of edge lengths and orientations is to push some or all the points of the Solver1 towards the negative (x and y) axes to constitute Solver3 or towards the positive axes to constitute Solver5. Hence, Solver3 rotates the triangular simplexes of Solver1 by 180 degrees about the origin, which is obtained by multiplying the odd and even components of (x and y) by (-1). Similarly, Solver5 is designed to adjust the simplexes of Solver1 to perform a reflection in x-axis, y-axis, or origin, which is obtained by taking the absolute value of the triangular simplexes of Solver1.” Q. 
Line 169: “…by multiplying (-1) the parameters…” Is the (-1) a typo? R. Thank you very much. The sentence is revised. Q. “usual delta” and “zero term delta” -> “$\delta_u$ and $\delta_0” R. Thank you very much. The sentence is revised. Q. Line 170: “…the absolute value <of the components of $x_0$ and> and subtracting or adding to adjust the simplex orientation…” R. Thank you very much. The sentence is revised. Q. Line 171: “…and can be modified as needed…” By what criteria are the position vectors added or subtracted? R. Thank you very much. The sentence is revised. Q. Line 173: “…as well as the slope…” The slope with respect to what function and what point points? R. Thank you very much. We clarify the sentence “To monitor and evaluate a sequence of trigonometric simplex design, we need to know two points that the simplex passes through as well as the slope with respect to their CF values.” Q. “Therefore, a window of size 10 points…” The choice of 10 points does not deductively follow from the previous sentence, why 2, 4, or 100? If the window size is a hypermeter derived from your practical experience please make that choice clear. R. Thank you very much. The following sentence adds more details to the previous sentence. “Therefore, a window of size 10 points is used to examine the simplex ́s performance. The window size is derived from our practical experience. One of the proposed solvers manages to locate the exact minimum for (Jennrich-Sampson) function within 22 simplex evaluations.” Q. Line 176: “…the direction vector, which designates the direction of the simplex…” -> “the direction of the simplex” R. Thank you very much. The sentence is revised. Q. Line 178: “For the particular <case>,” R. Thank you very much. The sentence is revised. Q. Line 189: Non-deductive therefore. The phrase 'Therefore' should only when drawing a logical link between ideas. R. Thank you very much. The sentence is revised. Q. Line 190: “tools” -> “metrics” or “figures of merit”, “similar algorithms,” -> “the considered algorithms.” R. Thank you very much. The sentence is revised. Q. “Which are summarized as follows: <the> accuracy of the algorithm compared to the actual minima, <the wall-time to convergence>, the number of function evaluations, the number of simplex evaluations, and identification of the best sequence of trigonometric simplex design. R. Thank you very much. The sentence is revised. Q. Line 194 to 200: “The main reason for designing the set of functions”: It is not clear how the set of functions relates to the previously mentioned testing guidelines. I suggest that the authors make the link explicit (e.g. “These guidelines utilize a set of functions…) or place this discussion after the subsequent sentence. R. Thank you very much. The sentence is revised. Q. Line 207: What is “short-term” behaviour in this context? R. Thank you very much. The context is clarified “Where x0 is the starting point for the solution of a particular problem p, p ∈ P (P is a set of benchmark problems), fL is the lowest CF value obtained for the problem by any solver within a given number of simplex gradient evaluations, and τ=10−k is the tolerance with k ∈ {3,5,7} for short-term outcomes. These include changes in adaptation, behavior, and skills of derivative-free algorithms. Which are closely related to examine the efficiency and robustness of optimization solvers at different levels of accuracy.” Q. Line: “launch” -> “launches” R. Thank you very much. The sentence is corrected. Q. 
Line 209: It is not clear to me how the individual solvers “cooperate” to find an optimal solution. As it stands I do not think they share information beyond their common starting point, $x_0$, is this correct? R. Thank you very much. The sentence is corrected. “In this research, however, the MTNMa launches multiple solvers that compute a set of approximate solutions.” Q. Line 233: “All computational experiments were carried out on an i5 CPU (4 Cores at1.8 GHz) and 4 GB of RAM. We used C# to implement the MTNM algorithm and carry out the experiments.” R. Thank you very much. The sentence is revised. “Altogether, the computational experiments are conducted to evaluate the MTNM on a computer that has 1.8 GHz core i5 CPU and 4 GB RAM. Finally, C# language is used to implement the MTNMa and the experiments.” "
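As a complement to the responses above on Pfeffer's method and the five solver designs, the sketch below shows one way the starting simplex and the Solver2-Solver5 vertex transformations could be set up in Python. It is a minimal sketch under stated assumptions (0-based indexing, odd-positioned coordinates treated as the "y-components" in dimensions above two), not the authors' C# implementation.

```python
# Sketch of Pfeffer's simplex initialization (delta_u = 0.05, delta_z = 0.00025, per
# Gao and Han, 2012) and the transformations that derive Solver2..Solver5 from Solver1.
# Illustrative only; indexing conventions and function names are our assumptions.
import numpy as np

DELTA_U, DELTA_Z = 0.05, 0.00025  # usual delta and zero-term delta

def pfeffer_simplex(x0: np.ndarray) -> np.ndarray:
    """Return an (n+1) x n vertex array: v1 = x0, each other vertex perturbs one coordinate."""
    n = x0.size
    vertices = np.tile(x0.astype(float), (n + 1, 1))
    for i in range(n):
        if x0[i] != 0.0:
            vertices[i + 1, i] += DELTA_U * x0[i]
        else:
            vertices[i + 1, i] = DELTA_Z
    return vertices

def solver2(v: np.ndarray) -> np.ndarray:
    """Multiply odd-indexed variables of odd-indexed vertices by -1
    (the paper's 1-based 'odd' positions map to 0, 2, 4, ... here)."""
    out = v.copy()
    out[::2, ::2] *= -1.0
    return out

def solver3(v: np.ndarray) -> np.ndarray:
    """Rotate Solver1's simplex 180 degrees about the origin (negate all components)."""
    return -v

def solver4(v: np.ndarray) -> np.ndarray:
    """Reflect in the x-axis by multiplying the y-like (odd-positioned) coordinates by -1."""
    out = v.copy()
    out[:, 1::2] *= -1.0
    return out

def solver5(v: np.ndarray) -> np.ndarray:
    """Take the absolute value of Solver1's vertices."""
    return np.abs(v)

if __name__ == "__main__":
    base = pfeffer_simplex(np.array([1.0, 2.0, 0.0, -1.5]))
    starts = {1: base, 2: solver2(base), 3: solver3(base), 4: solver4(base), 5: solver5(base)}
```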
Here is a paper. Please give your review comments after reading it.
410
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Text classification is the process of categorizing documents based on their content into a predefined set of categories. Text classification algorithms typically represent documents as collections of words and it deals with a large number of features. The selection of appropriate features becomes important when the initial feature set is quite large. In this paper, we present a hybrid of document frequency (DF) and genetic algorithm (GA)-based feature selection method for Amharic text classification. We evaluate this feature selection method on Amharic news documents obtained from the Ethiopian News Agency (ENA). The number of categories used in this study is 13. Our experimental results showed that the proposed feature selection method outperformed other feature selection methods utilized for Amharic news document classification. Combining the proposed feature selection method with Extra Tree Classifier (ETC) improves classification accuracy. It improves classification accuracy up to 1% higher than the hybrid of DF, Information Gain (IG), Chi-square (CHI), and Principal Component Analysis (PCA), 2.47% greater than GA and 3.86% greater than a hybrid of DF, IG, and CHI.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Text classification is the process of categorizing documents based on their content into a predefined set of categories. Text classification algorithms typically represent documents as collections of words and it deals with a large number of features. The selection of appropriate features becomes important when the initial feature set is quite large. In this paper, we present a hybrid of document frequency (DF) and genetic algorithm (GA)-based feature selection method for Amharic text classification. We evaluate this feature selection method on Amharic news documents obtained from the Ethiopian News Agency (ENA). The number of categories used in this study is 13. Our experimental results showed that the proposed feature selection method outperformed other feature selection methods utilized for Amharic news document classification. Combining the proposed feature selection method with Extra Tree Classifier (ETC) improves classification accuracy. It improves classification accuracy up to 1% higher than the hybrid of DF, Information Gain (IG), Chisquare (CHI), and Principal Component Analysis (PCA), 2.47% greater than GA and 3.86% greater than a hybrid of DF, IG, and CHI.</ns0:p></ns0:div> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Amharic is an Ethiopian language that belongs to the Semitic branch of the Afro-Asian language family. Amharic is the official working language of the Ethiopian Federal Democratic Republic, and it is the world's second most spoken Semitic language after Arabic, with approximately 22 million speakers according to <ns0:ref type='bibr' target='#b0'>[1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. Amharic is classified as a low-resource language when compared to other languages such as English, Arabic, and Chinese <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Due to this, a significant amount of work is required to develop many Natural Language Processing (NLP) tasks to process this language.</ns0:p><ns0:p>Text processing has become difficult in recent years due to the massive volume of digital data. 
The curse of dimensionality is one of the most difficult challenges in text processing <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Feature selection is one of the techniques for dealing with the challenges that come with a large number of features Text classification is a natural language processing task that requires text processing. Text classification performance is measured in terms of classification accuracy and the number of features used. As a result, feature selection is a crucial task in text classification using machine learning algorithms.</ns0:p><ns0:p>Feature selection aims at identifying a subset of features for building a robust learning model. A small number of terms among millions shows a strong correlation with the targeted news category. Works in <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> address the problem of defining the appropriate number of features to be selected.</ns0:p><ns0:p>The choice of the best set of features is a key factor for successful and effective text classification <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. In general, redundant and irrelevant features cannot improve the performance of the learning model rather they lead to additional mistakes in the learning process of the model.</ns0:p><ns0:p>Several feature selection methods were discussed to improve Amharic text classification performance <ns0:ref type='bibr' target='#b6'>[7,</ns0:ref><ns0:ref type='bibr' target='#b7'>8]</ns0:ref>. Existing feature selection methods for Amharic text classifications employ filter approaches. The filter approach select features based on a specific relevance score. It does not check the impact of the selected feature on the performance of the classifier. Additionally, the filter feature selection technique necessitates the setting up of threshold values. It is extremely difficult to determine the threshold point for the scoring metrics used to select relevant features for the classifier <ns0:ref type='bibr' target='#b8'>[9,</ns0:ref><ns0:ref type='bibr' target='#b9'>10]</ns0:ref>. A better feature method based on classifier performance improves classification accuracy while decreasing the number of features.</ns0:p><ns0:p>As a result, this study presents a hybrid feature selection method that combines document frequency with a genetic algorithm to improve Amharic news text classification. The method can also help us to minimize the number of features required to represent each news document in the dataset. The proposed feature selection method selects the best possible feature subset by considering individual feature scoring and classifier accuracy. The contributions of this study are summarized as follows.</ns0:p><ns0:p>1. Propose a feature selection method that incorporates document frequency and a genetic algorithm.</ns0:p><ns0:p>2. Prove that the proposed feature selection method reduces the number of representative features and improves the classification accuracy over Amharic new document classification.</ns0:p><ns0:p>The rest of the paper is organized as follows: Section II is the description of the literature review. Section III describes the feature selection technique and methodology used in this work, which is based on document frequency and genetic algorithms. Section IV presents and discusses the experimental results. 
Section V focuses on the conclusion and future work.</ns0:p></ns0:div> <ns0:div><ns0:head>Related works</ns0:head><ns0:p>The accuracy of classifier algorithms used in Amharic news document classification is affected by the feature selection method. Different research has attempted to overcome the curse of dimensionality by employing various feature selection techniques. The following are some of the related feature selection works on Amharic and other languages document classification.</ns0:p><ns0:p>In the paper <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> the authors proposed a new dimension reduction method for improving the performance of Amharic news document classification. Their model consists of three filter feature selection methods i.e., IG, CHI, and DF, and one feature extractor i.e., PCA. Since a different subset of features is selected with the individual filter feature selection method, the authors used both union and intersection to merge the feature subsets. Their experimental result shows that the proposed feature selection method improves the performance of Amharic news classification. Even though the weakness of one feature selection method is filled by the strength of the other, the feature selection method used in their model does not consider the interaction among features on the classifier performance.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> proposed a hybrid feature selection method for Amharic news text classification by integrating three different filter feature selection methods. Their feature selection method consists of information gain, chi-square, and document frequency. The proposed feature selection improves the performance of Amharic news text classification by 3.96%, 11.16%, and 7.3% more than that of information gain, chi-square, and document frequency, respectively. However, the dependency among terms (features) is not considered in their feature selection method.</ns0:p><ns0:p>Feature selection algorithms have been proposed in <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> to analyze highly dimensional datasets and determine their subsets. Ensemble feature selection algorithms have become an alternative with functionalities to support the assembly of feature selection algorithms. The performance of the framework was demonstrated in several experiments. It discovers relevant features either by single FS algorithms or by ensemble feature selection methods. Their experimental result shows that the ensemble feature selection performed well over the three datasets used in their experiment.</ns0:p><ns0:p>The authors of <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> proposed a more accurate ensemble classification model for detecting fake news. Their proposed model extracts important features from fake news datasets and then classifies them using an ensemble model composed of three popular machine learning classifiers: Decision Tree, Random Forest, ETC. Ensemble classifiers, on the other hand, require an inordinate amount of time for training.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> proposed a new bio-inspired firefly algorithm-based feature selection method for dealing with Arabic speaker recognition systems. Firefly algorithm is one of the wrapper approaches to solving nonlinear optimization problems. 
They proved that this method is effective in improving recognition performance while reducing system complexity.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> the authors explore the use of wrapper feature selection methods in mammographic images for breast density classification. They used two mammographic image datasets, five wrapper feature selection methods were tested in conjunction with three different classifiers. Best-first search with forwarding selection and best-first search with backward selection was the most effective methods. These feature selection methods improve the overall performance by 3% to 12% across different classifiers and datasets.</ns0:p><ns0:p>The work of <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> proposed a novel unsupervised feature selection technique, Feature Selectionbased Feature Clustering that uses similarity-based feature clustering (FSFC). The results of feature clustering based on feature similarity are used by FSFC to reduce redundant features. First, it groups the characteristics based on their similarity. Second, it chooses a representative feature from each cluster that includes the most interesting information among the cluster's characteristics. Their experimental results reveal that FSFC not only reduces the feature space in less time but also improves the clustering performance of K-means considerably.</ns0:p></ns0:div> <ns0:div><ns0:head>Materials and Methodology</ns0:head><ns0:p>The collection of Amharic news documents served as the starting point for the suggested feature selection technique for Amharic news document classification. The proposed document classifier consists of preprocessing, document representation, feature selection, and a classifier module. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> depicts the overall architecture of the proposed classifier in this study. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Data preprocessing</ns0:head><ns0:p>Real-world data generally contains noises, missing values, and maybe in an unusable format that cannot be directly used for machine learning models. Data preprocessing is required for cleaning the data and making it suitable for a machine learning model and also helps us to increase the accuracy and efficiency of a machine learning model <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. Amharic is one of the languages which is characterized by complex morphology <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>. Separate data pre-processing is required because Amharic has its own set of syntactical, structural, and grammatical rules. To prepare the raw Amharic news document for the classifier, we performed the following preprocessing tasks.</ns0:p><ns0:p>Normalization: There are characters in Amharic that have the same sound but no clear-cut rule in their meaning difference. The number of features increases when we write the name of an object or concept with different alphabets/characters. We create a list of characters and their corresponding canonical form used by this study. For instance, characters such as &#4608;&#4963;&#4611;&#4963;&#4739;&#4963;&#4736;&#4964;&#4624;&#4963;&#4627;, &#4656;&#4963;&#4640;, &#4768;&#4963;&#4771;&#4963;&#4816;&#4963;&#4819;, &#4920;&#4963;&#4928; have the same sound and meaning. 
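To make the normalization step concrete, the following minimal Python sketch maps an illustrative subset of these homophone characters onto a single canonical form. The specific character pairs and the choice of canonical character shown here are only examples; the full mapping used in the study is the one given in its Table 1.

# Illustrative subset of an Amharic homophone normalization table.
HOMOPHONE_MAP = {
    "ሃ": "ሀ", "ኃ": "ሀ", "ኀ": "ሀ", "ሐ": "ሀ", "ሓ": "ሀ",   # ha-type characters
    "ሠ": "ሰ",                                            # se-type characters
    "ኣ": "አ", "ዐ": "አ", "ዓ": "አ",                        # a-type characters
    "ፀ": "ጸ",                                            # tse-type characters
}
_TABLE = str.maketrans(HOMOPHONE_MAP)

def normalize(text):
    """Replace homophone Amharic characters with a canonical form."""
    return text.translate(_TABLE)

Applying normalize() before tokenization ensures that spelling variants of the same word collapse to one surface form, which is exactly the feature-space reduction described above.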
Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the normalization of Amharic characters having the same sound with different symbolic representations <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. Stop-word Removal: Words in the document do not have equal weight in the classification process. Some are used to fill the grammatical structure of a sentence or do not refer to any object or concept. Common words in English text like, a, an, the, who, be, and other common words that bring less weight are known as stop-words <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>. We used the stop-word lists prepared by <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. We remove those terms from the dataset before proceeding to the next text classification stage.</ns0:p></ns0:div> <ns0:div><ns0:head>Stemming:</ns0:head><ns0:p>Stemming is the process of reducing inflected words to their stem, base, or root form. Amharic is one of the morphological-rich Semitic languages <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. Due to this, different terms can exist with the same stem. Stemming helps us to reduce morphological variant words to their root and reduce the dimension of the feature space for processing. For example, &#4708;&#4725; 'House' (&#4708;&#4721;, &#4708;&#4726;&#4733;, &#4708;&#4723;&#4733;&#4757;, &#4708;&#4726;&#4731;&#4733;&#4757;, &#4708;&#4723;&#4728;&#4813;, &#4708;&#4726;&#4731;&#4728;&#4813;) into their stem word &#4708;&#4725;. In this paper, we used Gasser's HornMorpho stemmer <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>. HornMorpho is a Python program that analyzes Amharic, Oromo, and Tigrinya words into their constituent morphemes (meaningful parts) and generates words, given a root or stem and a representation of the word's grammatical structure. It is rule-based that could be implemented as finite-state transducers (FST). We adopt this stemmer because it has 95% accuracy and is better as compared with other stemmers <ns0:ref type='bibr' target='#b21'>[22]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Document representation</ns0:head><ns0:p>To transform documents into feature vectors we used Bag-of-Word (BOW) method. The BOW is denoted with Vector Space Model (VSM). In this type of document representation, documents are represented as a vector in n-dimensional space, where n is the number of unique terms selected as informative from the corpus <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref>. The weight of each term can be calculated by one of the term weighting schemes like Boolean value, term frequency, inverse document frequency, or term frequency by inverse document frequency. In the VSM document, D i can be represented as [W i1 , W i2 ...W ij ...W in ] where W ij is the weight computed by one of the above weighting scheme's values of the j th term in the n-dimensional vector space. Despite the disadvantage of BOW stated above it is still the dominant document representation technique used for document categorization in literature <ns0:ref type='bibr' target='#b23'>[24,</ns0:ref><ns0:ref type='bibr' target='#b24'>25]</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Feature selection</ns0:head><ns0:p>Feature selection is the process of choosing a small subset of relevant features from the original features by removing irrelevant, redundant, or noisy features. Feature selection is very important in pattern recognition and classification. 
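Before describing the individual selection methods in detail, the preprocessing and document-representation steps above can be summarized in a short, illustrative Python pipeline. This is a sketch only: the stop-word file name, the stem() placeholder (standing in for the HornMorpho-based stemmer), the TF-IDF weighting choice, and all variable names are assumptions rather than the authors' exact implementation.

from sklearn.feature_extraction.text import TfidfVectorizer

# Assumed inputs: `documents` is a list of raw Amharic news texts and
# "stopwords.txt" holds the stop-word list, one word per line.
with open("stopwords.txt", encoding="utf-8") as f:
    STOPWORDS = set(f.read().split())

def stem(word):
    # Placeholder for the HornMorpho-based stemming step used in the study.
    return word

def preprocess(text):
    text = normalize(text)                      # homophone normalization (see sketch above)
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(stem(t) for t in tokens)

documents = ["...", "..."]                      # placeholder corpus
corpus = [preprocess(d) for d in documents]

# Bag-of-words / vector space model with TF-IDF term weighting.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)            # (n_documents, n_unique_terms) matrix

The resulting document-term matrix X is the input on which the feature selection methods discussed next operate.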
Feature selection usually leads to better learning accuracy, lower computational cost, and model interpretability. In this section, we define document frequency, the genetic algorithm, and the proposed hybrid feature selection for Amharic text classification.</ns0:p></ns0:div> <ns0:div><ns0:head>Document Frequency</ns0:head><ns0:p>Document frequency (DF) counts the number of documents that contain a given term, and the DF value itself is used as the term's score. Terms whose DF value is greater than a threshold are retained for text classification. The fundamental idea behind DF is that terms that are irrelevant to the classification are found in fewer documents. DF is computed as <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>DF(t_i) = \sum_{i=1}^{m} A_i<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where m is the number of documents and A_i indicates the occurrence of term t_i in document i (1 if the term appears in document i, and 0 otherwise).</ns0:p></ns0:div> <ns0:div><ns0:head>Genetic Algorithm</ns0:head><ns0:p>The genetic algorithm is one of the most advanced feature selection algorithms <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>. It is a stochastic function optimization method based on natural genetics and the mechanics of biological evolution. Genes in organisms tend to evolve over generations to better adapt to their surroundings. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> illustrates a statechart diagram of feature selection using a genetic algorithm. A genetic algorithm consists of operators such as initialization, fitness assignment, selection, crossover, and mutation. In the following, we go over each of the genetic algorithm's operators and parameters.</ns0:p></ns0:div> <ns0:div><ns0:head>Initialization operator:</ns0:head><ns0:p>The first step is to create and initialize the individuals in the population. Individuals' genes are randomly initialized because a genetic algorithm is a stochastic optimization method.</ns0:p><ns0:p>Fitness assignment operator: Following initialization, we must assign a fitness value to each individual in the population. We train each neural network with the training data and then evaluate its performance with the testing data. A large selection error indicates poor fitness. Individuals with higher fitness are more likely to be chosen for recombination. In this study, we used a rank-based fitness assignment technique to assign fitness values to each individual <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>.</ns0:p><ns0:p>Selection operator: After the fitness assignment is complete, a selection operator is used to select the individuals that take part in recombination for the next generation. Individuals with high fitness levels can survive in the environment. We used the stochastic sampling with replacement technique to select individuals based on their fitness, where fitness is determined by the factors' weights. The number of chosen individuals is N/2, where N is the population size <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>.</ns0:p><ns0:p>Crossover operator: The crossover operator is used to generate a new population after the selection operator has chosen half of the population. This operator selects two individuals at random and combines their characteristics to produce offspring for the new population. 
The uniform crossover method determines whether each of the offspring's characteristics is inherited from one or both parents <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Mutation operators:</ns0:head><ns0:p>The crossover operator can produce offspring that are strikingly similar to their parents. This problem is solved by the mutation operator, which changes the value of some features in the offspring at random. To determine whether a feature has been mutated, we generate a random number between 0 and 1 <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>The proposed hybrid feature selection (DFGA)</ns0:head><ns0:p>To obtain the best subset of features, we propose a hybrid feature selection technique that utilizes document frequency and genetic algorithms. The proposed DFGA algorithm utilizes the benefits of filter and wrapper feature selection methods. The most relevant attributes are chosen first, based on document frequency. The best subset of features is then selected using a genetic algorithm to obtain the best possible feature subset for text classification. A high-level description of the proposed feature selection method is presented as shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> below. </ns0:p></ns0:div> <ns0:div><ns0:head>Experiment</ns0:head><ns0:p>In this section, we investigate the effect of the proposed feature selection on Amharic news document classification. The performance of the proposed feature selection method is compared with state-of-the-art feature selection methods in terms of classification. All experiments are run in a Windows 10 environment on a machine with a Core i7 processor and 32GB of RAM. In addition, the description of parameters used in the genetic algorithm is depicted as shown in Table <ns0:ref type='table'>2</ns0:ref> below.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 2. Genetic algorithm parameters used for this study</ns0:head></ns0:div> <ns0:div><ns0:head>Dataset</ns0:head><ns0:p>There is no publicly available dataset for Amharic text classification. Business, education, sport, technology, diplomatic relations, military force, politics, health, agriculture, justice, accidents, tourism, and environmental protection are among the 13 major categories of news used in this study. Each document file is saved as a separate file name within the corresponding category's directory, implying that all documents in the dataset are single-labeled. The news is labeled by linguistic experts of Jimma University. Every document is given a single label based on its content. The dataset consists of documents with varying lengths. The upper bound length of a document is 300 tokens and the lower bound is 30 tokens. So the length of documents in each category is in the range of 30-300 tokens. For each category, we used a number from 1 to 13 to represent the category label. The news categories and amount of news items used in this study are listed in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 3. 
News categories and the number of news documents in each category</ns0:head></ns0:div> <ns0:div><ns0:head>Performance measure</ns0:head><ns0:p>We assess the performance of classifiers with our proposed document frequency plus genetic algorithm-based feature selection in terms of accuracy, precision, recall, and F-measure.</ns0:p><ns0:p>Accuracy: this is the most widely used metric for measuring classifier efficiency, and it is calculated as follows <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>:</ns0:p><ns0:formula>Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Precision: it measures the correctness of the classifier's predictions, i.e., the proportion of documents assigned to a category that actually belong to it. It is determined as follows:</ns0:p><ns0:formula xml:id='formula_1'>Precision = \frac{TP}{TP + FP} \times 100\%<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Recall: it measures the completeness of the classifier's output, i.e., the proportion of documents that belong to a category and are correctly identified. It is calculated using the following equation:</ns0:p><ns0:formula xml:id='formula_2'>Recall = \frac{TP}{TP + FN} \times 100\%<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The F-measure is the harmonic mean of precision and recall and can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>F\text{-}measure = \frac{2 \times Precision \times Recall}{Precision + Recall} \times 100\%<ns0:label>(5)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Results</ns0:head><ns0:p>The results of the experiments are discussed in this section. To determine the best train-test split for our dataset, we conducted experiments with split ratios of 70/30, 75/25, 80/20, and 90/10 while keeping all other parameters constant, and obtained classification accuracies of 86.57%, 87.53%, 89.68%, and 87.05%, respectively. As a result, we used an 80/20 splitting ratio in all of the experiments, which means that 80% of the dataset was used to train the classifier and 20% was used to test the trained model. The proposed feature selection method is evaluated on Amharic news classification over the 13 major news categories in terms of accuracy, precision, recall, and F-measure, and the results are depicted in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> below.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 5. Comparison of ETC, RFC, and GBC</ns0:head><ns0:p>According to the results in Table <ns0:ref type='table'>5</ns0:ref>, ETC outperforms RFC and GBC by 1.88% and 2.1%, respectively. We also used ETC to compare the DFGA-based feature selection method with existing filter feature selection and feature extraction methods, namely DF, CHI, IG, PCA, the hybrid of IG, CHI, and DF <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, the hybrid of IG, CHI, DF, and PCA <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, and the genetic algorithm. The comparisons of the proposed method with the existing methods were performed using our dataset, and the results are shown in Table <ns0:ref type='table' target='#tab_4'>6</ns0:ref>. The document frequency plus genetic algorithm-based feature selection method produced the highest accuracy, according to the results in Table <ns0:ref type='table' target='#tab_4'>6</ns0:ref>. This is because the proposed feature selection algorithm considers classification accuracy when selecting a subset of features. An illustrative sketch of this two-stage selection procedure is given below. 
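The following minimal Python sketch ties together the two stages of the proposed DFGA method with the metrics defined above: a document-frequency filter (Eq. 1) followed by a small, hand-rolled genetic algorithm whose fitness is the accuracy of an Extra Trees classifier on an 80/20 split. It is not the authors' implementation: the DF threshold, the GA parameters (cf. the study's Table 2), the truncation-style selection used here for brevity (the study uses rank-based fitness with stochastic sampling), and all variable names are assumptions.

import random
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def df_filter(X, threshold=3):
    """Stage 1 (Eq. 1): keep terms whose document frequency exceeds a threshold."""
    df = np.asarray((X > 0).sum(axis=0)).ravel()      # number of documents containing each term
    return np.where(df > threshold)[0]

def fitness(mask, X_tr, X_te, y_tr, y_te):
    """Fitness of a feature subset = test accuracy of an Extra Trees classifier."""
    cols = np.where(mask == 1)[0]
    if cols.size == 0:
        return 0.0
    clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr[:, cols], y_tr)
    return accuracy_score(y_te, clf.predict(X_te[:, cols]))

def dfga_select(X, y, pop_size=20, generations=10, mutation_rate=0.01):
    keep = df_filter(X)                                # stage 1: DF filter
    X = X[:, keep]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    n_features = X.shape[1]
    population = [np.random.randint(0, 2, n_features) for _ in range(pop_size)]
    for _ in range(generations):                       # stage 2: GA wrapper
        scores = [fitness(ind, X_tr, X_te, y_tr, y_te) for ind in population]
        ranked = [population[i] for i in np.argsort(scores)[::-1]]
        parents = ranked[: pop_size // 2]              # simple truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            coin = np.random.randint(0, 2, n_features) # uniform crossover
            child = np.where(coin == 1, p1, p2)
            flips = np.random.rand(n_features) < mutation_rate
            child = np.where(flips, 1 - child, child)  # bit-flip mutation
            children.append(child)
        population = parents + children
    best = max(population, key=lambda ind: fitness(ind, X_tr, X_te, y_tr, y_te))
    return keep[np.where(best == 1)[0]]                # indices of the selected terms

Precision, recall, and the F-measure of Eqs. (3)-(5) can then be computed for the final classifier with sklearn.metrics.precision_recall_fscore_support; in practice the DF threshold and GA parameters would be tuned rather than fixed to the placeholder values above.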
In our experiment, the accuracy of the proposed feature selection algorithm is 5.44% higher than that of the DF, 15.01% higher than that of the CHI, and 7.13% higher than that of the IG. Furthermore, we discovered that DF outperforms IG and CHI on larger datasets. This is because the probability of a given class and term becomes less significant as the dataset size increases <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>.</ns0:p><ns0:p>In addition to classification accuracy, we also compared the proposed feature selection method with the existing feature selection methods in terms of the number of features they produced. As the result, the proposed method produced a minimum number of features as compared with the other method considered in this study. A minimum number of features means, saving the computational time and space taken by the classifier algorithm. The number of features produced by the corresponding feature selection methods is depicted as shown in Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref> below. The results indicate the joint use of filter and wrapper methods improves classification accuracy. It also helps to reduce the size of the feature matrix without affecting the classification accuracy. This is mainly because (1) relevant terms are first taken by the filter methods, (2) wrapper methods produced the best subset of features by considering the classifier's performance. Generally, the proposed feature selection method provides the best classification accuracy with the smallest number of features as compared with the existing feature selection methods. This helps us to save the computation complexity.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In this study, we present a hybrid feature selection method that consists of document frequency and a genetic algorithm for Amharic text classification. To validate the performance of the new feature selection strategy, several experiments and comparisons were conducted using various classifiers and state-of-the-art feature selection techniques such as a hybrid of DF, CHI, and IG, hybrid of IG, CHI, DF and PCA, and GA. The result showed that the proposed feature selection technique gives promising results when we combined it with ETC. As a result, a hybrid of document frequency and genetic algorithm-based feature selection method is suitable for use in a</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The architecture of the proposed Amharic text classifier.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Statechart for feature selection using genetic algorithm.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. A pictorial description of the proposed feature selection</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>List of consonants normalized in the study.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>due to its simplicity, best feature representation, and efficiency.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:12:68748:1:2:NEW 17 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance evaluation of Amharic news document classification using the proposed feature selection method.</ns0:figDesc><ns0:table /><ns0:note>We also evaluate the performance of the proposed (document frequency + genetic algorithm) based feature selection over different classifiers such as ETC, RFC (Random Forest Classifier), and Gradient Boosting Classifier (GBC). According to our experimental results, ETC outperforms the RFC and GBC classifiers as shown inTable 5 below. PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68748:1:2:NEW 17 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison between IG, CHI, DF, and genetic algorithm based feature selection</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison of feature selection methods in terms of the number of features.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68748:1:2:NEW 17 Mar 2022)Manuscript to be reviewed</ns0:note> <ns0:note place='foot' n='1'>Table 3. News categories and the number of news documents in each category</ns0:note> <ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68748:1:2:NEW 17 Mar 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Rebuttal Letter Dear , Editor(Yilun Shang) We were pleased to have an opportunity to revise our manuscript now entitled “Feature Selection by Integrating Document Frequency with Genetic Algorithm for Amharic News Document Classification”. In the revised manuscript, we have carefully considered reviewers’ comments and suggestions. As instructed, we have attempted to succinctly explain changes made in reaction to all comments. We reply to each comment in a point-by-point fashion. We have tracked all the changes in the manuscript. The responses to the concerns raised by reviewers are below and are color-coded as follows: a) Comments from editors or reviewers are shown as text in italic font b) Our responses are shown as text. The reviewers’ comments were very helpful overall, and we are appreciative of such constructive feedback on our original submission. After addressing the issues raised, we feel the quality of the paper is much improved. Sincerely Demeke Endalie Reviewer 1 (Tadesse Belay). 1. The writing should be improved a lot. A lot of grammar issues, unsound statements, and incomplete sentences phrases are very common. there are also very very long sentences, please make it readable, write short and concise sentences A: Thank you very much. We were able to get away with it thanks to linguistics. 2. “Examine the performance of the proposed feature selection method in terms of accuracy, precision, recall, and F-measure.' it should be not your main contribution. A: Thank you very much. We have changed the contribution of the study to “Prove that the proposed feature selection method reduces the number of representative features and improves the classification accuracy over Amharic new document classification” this creates a link with the result we got. 3. What was your starting Research Question or the gap you want to fill? Clearly define your research questions. A: We appreciate your insightful suggestions. To improve classification accuracy with a smaller number of features (terms). 4. Introduction: '2007 census [1]' This is a very old reference, do you have a near time reference for this? A: We appreciate your insightful suggestions. We have changed it with a recent paper published in 2020 ref [1, 2] in the revised manuscript. 5. On related work: revise it again even you have not cited the latest paper that was done in Amharic news document classification like this  https://doi.org/10.1371/journal.pone.0251902 A: Thank you very much. Reference [7] is for this work in the revised manuscript. 6. Data processing -Normalization: Which is 'canonical' normalization or standardization? What was your base to normalize homophone characters into a single representation? A: We appreciate your insightful suggestions. We presented the canonical for standardization in Table 1 of the revised manuscript used by previous studies. 7. It would be also nice if your approach is compared with homophone normalization and without homophone normalization approaches. A: We appreciate your insightful suggestions. We did not check it because the only advantage of normalization is to minimize the number of features that are fed to the neural network. 8. Is it the different hybrid results are from the previously conducted paper or from your data? A: We appreciate your insightful suggestions. The comparisons of the proposed method with the existing methods were performed using our dataset. Reviewer 2 1. There is no comparative study of results with other authors [11] on the same dataset. 
The dataset used in this study does not match the dataset referenced [11]. A: We appreciate your insightful suggestions. The comparisons of the proposed method with the existing methods were performed using our dataset. 2. It looks like a regular pre-processing step for any other language. How is the pre-processing different in context to Amharic? A: Thank you very much. Separate pre-processing is required because Amharic has its own set of syntactical, structural, and grammatical rules. 3. Gasser's HornMorpho stemmer: what is the accuracy of the stemmer used? How is it evaluated? Is it Rule-Based or Statistical? A: We appreciate your insightful suggestions. HornMorpho is a Python program that analyzes Amharic, Oromo, and Tigrinya words into their constituent morphemes (meaningful parts) and generates words, given a root or stem and a representation of the word’s grammatical structure. It is rule-based that could be implemented as finite-state transducers (FST). We adopt this stemmer because it has 95% accuracy and is better as compared with other stemmers[ CITATION Tew18 \l 1033 ].. 4. Table 2, what basis are the documents classified into major categories? Who did this classification? Can a document not belong to multiple types, e.g. education and health?. A: Thank you very much. The news is labeled by linguistic experts of Jimma University. Every document is given a single label based on its content. 5. Table 2, what is the length of the documents in each category? A: We appreciate your insightful suggestions. The dataset consists of documents with varying lengths. The upper bound length of a document is 300 tokens and the lower bound is 30 tokens. So the length of documents in each category is in the range of 30-300 tokens. 6. Any observations on varying the splitting ratio versus the proposed model's performance? A: Thank you very much. We have conducted an experiment with the most commonly used train-test-splitting ratio and we have included the report in the revised manuscript. 7. The dataset used here is small in size, and I recommend that the authors test your proposed algorithm on a larger dataset in future studies. A: We appreciate your insightful suggestions. We have a plan to do it. 8. Can the proposed methodology, etc. used be adapted/used for other similar languages? Conclusion of future work may discuss this. A: Thank you very much. We plan to adopt the proposed feature selection method for other languages in our future work. . "
Here is a paper. Please give your review comments after reading it.
411
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Research software is a critical component of contemporary scholarship. Yet, most research software is developed and managed in ways that are at odds with its long-term sustainability. This paper presents findings from a survey of 1149 researchers, primarily from the United States, about sustainability challenges they face in developing and using research software. Some of our key findings include a repeated need for more opportunities and time for developers of research software to receive training. These training needs cross the software lifecycle and various types of tools. We also identified the recurring need for better models of funding research software and for providing credit to those who develop the software so they can advance in their careers. The results of this survey will help inform future infrastructure and service support for software developers and users, as well as national research policy aimed at increasing the sustainability of research software.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In almost all areas of research, from hard sciences to the humanities, the processes of collecting, storing, and analyzing data and of building and testing models have become increasingly complex. Our ability to navigate such complexity is only possible because of the existence of specialized software, often referred to as research software. Research software plays such a critical role in day to day research that a comprehensive survey reports 90-95% of researchers in the US and the UK rely upon it and more than 60% were unable to continue working if such software stopped functioning <ns0:ref type='bibr' target='#b19'>(Hettrick, 2014)</ns0:ref>. While the research community widely acknowledges the importance of research software, the creation, development, and maintenance of research software is still ad hoc and improvised, making such infrastructure fragile and vulnerable to failure.</ns0:p><ns0:p>In many fields, research software is developed by academics who have varying levels of training, ability, and access to expertise, resulting in a highly variable software landscape. As researchers are under immense pressure to maintain expertise in their research domains, they have little time to stay current with the latest software engineering practices. In addition, the lack of clear career incentives for building and maintaining high quality software has made research software development unsustainable. The lack of career incentives has occurred partially because the academic environment and culture have developed over hundreds of years, while software has only recently become important, in some fields over the last 60+ years, but in many others, just in the last 20 or fewer years <ns0:ref type='bibr' target='#b15'>(Foster, 2006)</ns0:ref>.</ns0:p><ns0:p>Further, only recently have groups undertaken efforts to promote the role of research software (e.g., the Society of Research Software Engineers 1 , the US Research Software Engineer Association 2 ) and to 1 https://society-rse.org The open source movement has created a tremendous variety of software, including software used for research and software produced in academia. It is difficult for researchers to find and use these solutions without additional work <ns0:ref type='bibr' target='#b28'>(Joppa et al., 2013)</ns0:ref>. 
The lack of standards and platforms for categorizing software for communities often leads to re-developing instead of reusing solutions <ns0:ref type='bibr' target='#b23'>(Howison et al., 2015)</ns0:ref>. There are three primary classes of concerns, pervasive across the research software landscape, that have stymied this software from achieving maximum impact <ns0:ref type='bibr' target='#b10'>(Carver et al., 2018)</ns0:ref>.</ns0:p><ns0:p>&#8226; Functioning of the individual and team: issues such as training and education, ensuring appropriate credit for software development, enabling publication pathways for research software, fostering satisfactory and rewarding career paths for people who develop and maintain software, and increasing the participation of underrepresented groups in software engineering.</ns0:p><ns0:p>&#8226; Functioning of the research software: supporting sustainability of the software; growing community, evolving governance, and developing relationships between organizations, both academic and industrial; fostering both testing and reproducibility, supporting new models and developments (e.g., agile web frameworks, Software-as-a-Service), supporting contributions of transient contributors (e.g., students), creating and sustaining pipelines of diverse developers.</ns0:p><ns0:p>&#8226; Functioning of the research field itself : growing communities around research software and disparate user requirements, cataloging extant and necessary software, disseminating new developments and training researchers in the usage of software.</ns0:p><ns0:p>In response to some of the challenges highlighted above, the US Research Software Sustainability Institute (URSSI) 6 conceptualization project, funded by NSF, is designing an institute that will help with the problem of sustaining research software. The overall goal of the conceptualization process is to bring the research software community together to determine how to address known challenges to the development and sustainability of research software and to identify new challenges that need to be addressed. One important starting point for this work is to understand and describe the current state of the practice in the United States relative to those important concerns. Therefore, in this paper we describe the results of a community survey focused on this goal.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>Previous studies of research software have often focused on the development of cyberinfrastructure <ns0:ref type='bibr' target='#b6'>(Borgman et al., 2012)</ns0:ref> and the various ways software production shapes research collaboration <ns0:ref type='bibr'>(Howison and</ns0:ref><ns0:ref type='bibr'>Herbsleb, 2011, 2013;</ns0:ref><ns0:ref type='bibr' target='#b41'>Paine and Lee, 2017)</ns0:ref>. While these studies provide rich contextual observations about research software development processes and practices, their results are difficult to generalize because they often focus either on small groups or on laboratory settings. Therefore, there is a need to gain a broader understanding of the research software landscape in terms of challenges that face individuals seeking to sustain research software.</ns0:p><ns0:p>A number of previous surveys have provided valuable insight into research software development and use, as briefly described in the next subsection. 
Based on the results of these surveys and from other related literature, the remainder of this section motivates a series of research questions focused on important themes related to the development of research software. The specific questions are based on the authors' experience in common topics mentioned in the first URSSI workshop <ns0:ref type='bibr' target='#b45'>(Ram et al., 2018)</ns0:ref> as well as previously published studies of topics of interest to the community <ns0:ref type='bibr' target='#b33'>(Katz et al., 2019;</ns0:ref><ns0:ref type='bibr'>Fritzsch, 2019)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Previous Surveys</ns0:head><ns0:p>The following list provides an overview of the previous surveys on research software, including the context of each survey. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the surveys.</ns0:p><ns0:p>&#8226; How do Scientists Develop and Use Scientific Software? <ns0:ref type='bibr' target='#b18'>(Hannay et al., 2009)</ns0:ref> describes the results of a survey of 1972 scientists who develop and use software. The survey focused on questions about (1) how and when scientists learned about software development/use, (2) the importance of developing/using software, (3) time spent developing/using software, (4) hardware platforms, (5) user communities, and (6) software engineering practices.</ns0:p><ns0:p>&#8226; How Do Scientists Develop Scientific Software? An External Replication <ns0:ref type='bibr' target='#b43'>(Pinto et al., 2018)</ns0:ref> is a replication of the previous study <ns0:ref type='bibr' target='#b18'>(Hannay et al., 2009)</ns0:ref> conducted ten years later. The replication focused on scientists who develop R packages. The survey attracted 1553 responses. The survey asked very similar questions to the original survey, with one exception. In addition to replicating the original study, the authors also asked respondents to identify the 'most pressing problems, challenges, issues, irritations, or other 'pain points' you encounter when developing scientific software.' A second paper, Naming the Pain in Developing Scientific Software <ns0:ref type='bibr' target='#b56'>(Wiese et al., 2020)</ns0:ref>, describes the results of this question in the form of a taxonomy of 2,110 problems that are either (1) technical-related, (2) social-related, or (3) scientific-related.</ns0:p><ns0:p>&#8226; A Survey of Scientific Software Development <ns0:ref type='bibr' target='#b39'>(Nguyen-Hoan et al., 2010)</ns0:ref> surveyed researchers in Australia working in multiple scientific domains. 
The survey focused on programming language use, software development tools, development teams and user bases, documentation, testing and verification, and non-functional requirements.</ns0:p><ns0:p>&#8226; A Survey of the Practice of Computational Science <ns0:ref type='bibr' target='#b44'>(Prabhu et al., 2011)</ns0:ref> reports the results of interviews of 114 respondents from a diverse set of domains all working at Princeton University.</ns0:p><ns0:p>The interviews focused on three themes: (1) programming practices, (2) computational time and resource usage, and (3) performance enhancing methods.</ns0:p><ns0:p>&#8226; Troubling Trends in Scientific Software <ns0:ref type='bibr' target='#b28'>(Joppa et al., 2013)</ns0:ref> reports on the results from about 450 responses working in a specific domain, species distribution modeling, that range from people who find software difficult to use to people who are very experienced and technical. The survey focused on understanding why respondents chose the particular software they used and what other software they would like to learn how to use.</ns0:p><ns0:p>&#8226; Self-Perceptions About Software Engineering: A Survey of Scientists and Engineers <ns0:ref type='bibr' target='#b8'>(Carver et al., 2013)</ns0:ref> reports the results from 141 members of the Computational Science &amp; Engineering community. The primary focus of the survey was to gain insight into whether the respondents thought they knew enough software engineering to produce high-credibility software. The survey also gathered information about software engineering training and about knowledge of specific software engineering practices.</ns0:p><ns0:p>&#8226; 'Not everyone can use Git:' Research Software Engineers' recommendations for scientist-centered software support (and what researchers really think of them) <ns0:ref type='bibr' target='#b26'>(Jay et al., 2016)</ns0:ref> describes a study that includes both Research Software Engineers and domain researchers to understand how scientists publish code. The researchers began by interviewing domain scientists who were trying to publish their code to identify the barriers they faced in publishing their code. Then they interviewed Research Software Engineers to understand how they would address those barriers. Finally, they synthesized the results from the Research Software Engineer interviews into a series of survey questions sent to a larger group of domain researchers.</ns0:p><ns0:p>&#8226; It's impossible to conduct research without software, say 7 out of 10 UK researchers <ns0:ref type='bibr' target='#b20'>(Hettrick, 2018</ns0:ref><ns0:ref type='bibr' target='#b19'>(Hettrick, , 2014) )</ns0:ref> describes the results of 417 responses to a survey of 15 Russel Group Universities in the UK.</ns0:p><ns0:p>The survey focused on describing the characteristics of software use and software development within research domains. 
The goal was to provide evidence regarding the prevalence of software and its fundamental importance for research.</ns0:p></ns0:div> <ns0:div><ns0:head>&#8226; Surveying the US National Postdoctoral Association Regarding Software Use and Training in</ns0:head><ns0:p>Research <ns0:ref type='bibr' target='#b37'>(Nangia and Katz, 2017)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Software Engineering Practices</ns0:head><ns0:p>Based on the results of the surveys described in the previous subsection, we can make some observations about the use of various software engineering practices employed while developing software. The set of practices research developers find useful appear to have some overlap and some difference from those practices employed by developers of business or IT software. Interestingly, the results of the previous surveys do not paint a consistent picture regarding the importance and/or usefulness of various practices.</ns0:p><ns0:p>Our current survey is motivated by the inconsistencies in previous results and the fact that some key areas are not adequately covered by previous surveys. Here we highlight some of the key results from these previous surveys, organized roughly in the order of the software engineering lifecycle.</ns0:p></ns0:div> <ns0:div><ns0:head>Requirements</ns0:head><ns0:p>The findings of two surveys <ns0:ref type='bibr' target='#b43'>(Pinto et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hannay et al., 2009)</ns0:ref> reported both that requirements were important to the development of research software but also that they were one of the least understood phases. Other surveys reported that (1) requirements management is the most difficult technical problem <ns0:ref type='bibr' target='#b56'>(Wiese et al., 2020)</ns0:ref> and (2) the amount of requirements documentation is low <ns0:ref type='bibr' target='#b39'>(Nguyen-Hoan et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Design Similar to requirements, surveys reported that design was one of the most important phases <ns0:ref type='bibr' target='#b18'>(Hannay et al., 2009)</ns0:ref> and one of the least understood phases <ns0:ref type='bibr' target='#b43'>(Pinto et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hannay et al., 2009)</ns0:ref>. In addition, other surveys reported that (1) testing and debugging are the second most difficult technical problem <ns0:ref type='bibr' target='#b39'>(Nguyen-Hoan et al., 2010)</ns0:ref> and (2) the amount of design documentation is low <ns0:ref type='bibr' target='#b56'>(Wiese et al., 2020)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>4/31</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Testing There were strikingly different results related to testing. A prior survey of research software engineers found almost 2/3 of developers do their own testing, but less than 10% reported the use of formal testing approaches <ns0:ref type='bibr' target='#b42'>(Philippe et al., 2019)</ns0:ref>. Some surveys <ns0:ref type='bibr' target='#b43'>(Pinto et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hannay et al., 2009)</ns0:ref> reported that testing was important. However, another survey reported that scientists do not regularly test their code <ns0:ref type='bibr' target='#b44'>(Prabhu et al., 2011)</ns0:ref>. 
Somewhere in the middle, another survey reports that testing is commonly used, but the use of integration testing is low <ns0:ref type='bibr' target='#b39'>(Nguyen-Hoan et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Software Engineering Practices Summary This discussion all leads to the first research question:</ns0:p><ns0:p>RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software?</ns0:p></ns0:div> <ns0:div><ns0:head>Software tools and support</ns0:head><ns0:p>Development and maintenance of research software includes both the use of standard software engineering tools such as version control <ns0:ref type='bibr' target='#b34'>(Milliken et al., 2021)</ns0:ref> and continuous integration <ns0:ref type='bibr' target='#b47'>(Shahin et al., 2017)</ns0:ref>. In addition, these tasks require custom libraries developed for specific analytic tasks or even language-specific interpreters that ease program execution. <ns0:ref type='bibr'>et al., 2013)</ns0:ref>.</ns0:p><ns0:p>A UK survey <ns0:ref type='bibr' target='#b20'>(Hettrick, 2018</ns0:ref><ns0:ref type='bibr' target='#b19'>(Hettrick, , 2014) )</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>institutions that support research (e.g., universities and national laboratories) and grant-making bodies that fund research (e.g., federal agencies and philanthropic organizations) often fail to recognize the central importance of software development and maintenance in conducting novel research <ns0:ref type='bibr' target='#b17'>(Goble, 2014)</ns0:ref>. In turn, there is a little direct financial support for the development of new software or the sustainability of existing software upon which research depends <ns0:ref type='bibr' target='#b29'>(Katerbow et al., 2018)</ns0:ref>. In particular, funding agencies typically have not supported the continuing work needed to maintain software after its initial development.</ns0:p><ns0:p>This lack of support is despite increasing recognition of reproducibility and replication crises that depend, in part, upon reliable access to the software used to produce a new finding <ns0:ref type='bibr' target='#b21'>(Hocquet and Wieber, 2021)</ns0:ref>.</ns0:p><ns0:p>In reaction to a recognized gap in research funding for sustainable software, many projects have attempted to demonstrate the value of their work through traditional citation and impact analysis <ns0:ref type='bibr' target='#b3'>(Anzt et al., 2021)</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Career Paths</ns0:head><ns0:p>While most of previous surveys did not address the topic of career paths, the survey of research software engineers <ns0:ref type='bibr' target='#b42'>(Philippe et al., 2019)</ns0:ref> did briefly address this question. Because the results differ across the world and our paper focuses on the US, we only report results for respondents in the US. First, 57% of respondents were funded by grants and 47% by institutional support. Second, respondents had been in their current position for an average of 8.5 years. Last, 97% were employed full-time.</ns0:p><ns0:p>Because of the lack of information from prior surveys, we focus the rest of this discussion on other work to provide background. In 2012, the Software Sustainability Institute (SSI) organized the Collaborations Workshop 7 that addressed the question: why is there no career for software developers in academia? 
The work of the participants and of the SSI's policy team led to the foundation of the <ns0:ref type='table' target='#tab_6'>2021:11:67516:1:1:NEW 9 Mar 2022)</ns0:ref> Manuscript to be reviewed Computer Science has defined job families and templates for job positions that can be helpful both for hiring managers and HR departments that want to recognize the role of RSEs and HPC Facilitators in their organizations 11 . However, there is still not a clearly defined and widely accepted career path for research software engineers in the US. We pose the following research question that guides our specific survey questions related to career paths -RQ5: What factors impact career advancement and hiring in research software?</ns0:p></ns0:div> <ns0:div><ns0:head>Credit</ns0:head><ns0:p>While most of the previous surveys did not address the topic of credit, the survey of research software engineers <ns0:ref type='bibr' target='#b42'>(Philippe et al., 2019)</ns0:ref> does contain a question about how researchers are acknowledged when their software contributes to a paper. The results showed that 47% were included as a co-author, 18% received only an acknowledgement, and 21% received no mention at all. Because of the lack survey results related to credit, we focus on other work to provide the necessary background.</ns0:p><ns0:p>The study of credit leads to a set of interlinked research questions. We can answer these questions by directly asking software developers and software project collaborators to provide their insights. Here we take a white box approach and examine the inside of the box.</ns0:p><ns0:p>&#8226; How do individuals want their contributions to software projects to be recognized, both as individuals and as members of teams?</ns0:p><ns0:p>&#8226; How do software projects want to record and make available credit for the contributions to the projects?</ns0:p><ns0:p>In addition, these respondents can help answer additional questions from their perspective as someone external to other organizations. Here we can only take a black box approach and examine the box from the outside. (A white box approach would require a survey of different participants.)</ns0:p><ns0:p>&#8226; How does the existing ecosystem, based largely on the historical practices related to contributions to journal and conference papers and monographs, measure, store, and disseminate information about contributions to software?</ns0:p><ns0:p>&#8226; How does the existing ecosystem miss information about software contributions?</ns0:p><ns0:p>&#8226; How do institutions (e.g., hiring organizations, funding organizations, professional societies) use the existing information about contributions to software, and what information is being missed?</ns0:p><ns0:p>We also recognize that there are not going to be simple answers to these questions (CASBS Group on Best Practices in Science, 2018; <ns0:ref type='bibr' target='#b0'>Albert and Wager, 2013)</ns0:ref>, and that any answers will likely differ to some extent between disciplines <ns0:ref type='bibr' target='#b13'>(Dance, 2012)</ns0:ref>. 
Many professional societies and publishers have specific criteria for authorship of papers (e.g., they have made substantial intellectual contributions, they have participated in drafting and/or revision of the manuscript, they agree to be held accountable for any issues relating to correctness or integrity of the work (Association for Computing Machinery, 2018)), typically suggesting that those who have contributed but do not meet these criteria be recognized via an acknowledgment. While this approach is possible in a paper, there is no equivalent for software, other than papers about software. In some disciplines, such as those where monographs are typical products, there may be no formal guidelines. Author ordering is another challenge. The ordering of author names typically has some meaning, though the meaning varies between disciplines. Two common practices are alphabetic ordering, such as is common in economics <ns0:ref type='bibr' target='#b55'>(Weber, 2018)</ns0:ref>, and ordering by contribution, with the first author being the main contributor and the last author being the senior project leader, as occurs in many fields <ns0:ref type='bibr' target='#b46'>(Riesenberg and Lundberg, 1990)</ns0:ref>. The fact that the contributions of each author are unclear has led to activities and ideas to record those contributions in more detail <ns0:ref type='bibr' target='#b1'>(Allen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b52'>The OBO Foundry, 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Katz, 2014)</ns0:ref>.</ns0:p><ns0:p>Software in general has not been well-cited <ns0:ref type='bibr' target='#b22'>(Howison and Bullard, 2016)</ns0:ref>, in part because the scholarly culture has not treated software as something that should be cited, or in some cases, even mentioned.</ns0:p><ns0:p>The recently perceived reproducibility crisis <ns0:ref type='bibr' target='#b5'>(Baker, 2016)</ns0:ref> has led to changes, first for data (which also was not being cited (Task Group on Data Citation Standards and Practices, 2013)) and more recently for software. For software, these changes include the publication of software papers, both in general journals and in journals that specialize in software papers (e.g., the Journal of Open Source Software <ns0:ref type='bibr' target='#b50'>(Smith et al., 2018)</ns0:ref>), as well as calls for direct software citation <ns0:ref type='bibr' target='#b49'>(Smith et al., 2016)</ns0:ref> along with guidance for those citations <ns0:ref type='bibr'>(Katz et al., 2021)</ns0:ref>. Software, as a digital object, also has the advantage that it is usually stored as a collection of files, often in a software repository. This fact means that it is relatively simple to add an additional file that contains metadata about the software, including creators and contributions, in one of a number of potential styles <ns0:ref type='bibr' target='#b57'>(Wilson, 2013;</ns0:ref><ns0:ref type='bibr' target='#b14'>Druskat, 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Jones et al., 2017)</ns0:ref>. This effort has recently been reinforced by GitHub, which has made it easy to add such metadata to repositories and to generate citations for those repositories <ns0:ref type='bibr' target='#b48'>(Smith, 2021)</ns0:ref>.</ns0:p><ns0:p>Therefore, we pose the following research questions to guide our specific survey questions related to credit -RQ6a: What do research software projects require for crediting or attributing software use?
and RQ6b: How are individuals and groups given institutional credit for developing research software?</ns0:p></ns0:div> <ns0:div><ns0:head>Diversity</ns0:head><ns0:p>Previous research has found that both gender diversity and tenure (length of commitment to a project) are positive and significant predictors of productivity in open source software development <ns0:ref type='bibr' target='#b54'>(Vasilescu et al., 2015)</ns0:ref>. Using similar data, <ns0:ref type='bibr' target='#b40'>Ortu et al. (2017)</ns0:ref> demonstrate that diversity of nationality among team members is a predictor of productivity. However, they also show this demographic characteristic of a team leads to less polite and civil communication (via filed issues and discussion boards).</ns0:p><ns0:p>Nafus ( <ns0:ref type='formula'>2012</ns0:ref>)'s early qualitative study of gender in open source, through interview and discourse analysis of patch notes, describes sexist behavior that is linked to low participation and tenure for women in distributed software projects.</ns0:p><ns0:p>The impact of codes of conduct (CoC) -which provide formal expectations and rules for the behavior of participants in a software project -have been studied in a variety of settings. In open-source software projects codes of conduct have been shown to be widely reused (e.g. Ubuntu, Contributor Covenant, Django, Python, Citizen, Open Code of Conduct, and Geek Feminism have been reused more than 500 times by projects on GitHub) <ns0:ref type='bibr' target='#b53'>(Tourani et al., 2017)</ns0:ref>.</ns0:p><ns0:p>There are few studies of the role and use of codes of conduct in research software development. <ns0:ref type='bibr' target='#b28'>Joppa et al. (2013)</ns0:ref> point to the need for developing rules which govern multiple aspects of scientific software development, but specific research that addresses the prevalence, impact, and use of a code of conduct in research software development have not been previously reported.</ns0:p><ns0:p>The 2018 survey of the research software engineer community across seven countries <ns0:ref type='bibr' target='#b42'>(Philippe et al., 2019)</ns0:ref> showed the percentage of respondents who identified as male as between 73% (US) and 91% (New Zealand). Other diversity measures are country-specific and were only collected in the UK and US, but in both, the dominant group is overrepresented compared with its share of the national population.</ns0:p><ns0:p>Therefore, we pose the following research question to guide our specific survey questions related to diversity -RQ7: How do current Research Software Projects document diversity statements or codes of conduct, and what support is needed to further diversity initiatives?</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head><ns0:p>To understand sustainability issues related to the development and use of research software, we developed a Qualtrics 12 survey focused on the seven research questions defined in the Background section. This section describes the design of the survey, the solicited participants, and the qualitative analysis process we followed.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Design</ns0:head><ns0:p>We designed the survey to capture information about how individuals develop, use, and sustain research software. The survey first requested demographic information to help us characterize the set of respondents.</ns0:p><ns0:p>Then, we enumerated 38 survey questions (35 multiple choice and 3 free response). 
We divided these questions among the seven research questions defined in the Background Section. This first set of 38 questions went to all survey participants, who were free to skip any questions.</ns0:p><ns0:p>Then, to gather more detailed information, we gave each respondent the option to answer follow-up questions on one or more of the seven topic areas related to the research questions. For example, if a respondent was particularly interested in Development Practices, she or he could indicate their interest in answering more questions about that topic. Across all seven topics, there were 28 additional questions (25 multiple choice and 3 free response). Because the follow-up questions for a particular topic were only presented to respondents who expressed interest in that topic, the number of respondents to these questions is significantly lower than the number of respondents to the first set of 38 questions. This discrepancy in the number of respondents is reflected in the data presented below.</ns0:p><ns0:p>In writing the questions, where possible, we replicated the wording of questions from the previous surveys about research software (described in the Background Section). In addition, because we assumed that respondents would be familiar with the terms used in the survey and to simplify the text, we did not provide definitions of terms in the survey itself.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Participants</ns0:head><ns0:p>We distributed the survey to potential respondents through two primary venues:</ns0:p><ns0:p>1. Email Lists: To gather a broad range of perspectives, we distributed the survey to 33,293 United States NSF PIs and 39,917 United States NIH PIs whose projects were funded for more than $200K in the five years prior to the survey distribution and involved research software, as well as to mailing lists of research software developers and research software projects.</ns0:p><ns0:p>2. Snowball Sampling: We also used snowballing by asking people on the email lists to forward the survey to others who might be interested. We also advertised the survey via Twitter.</ns0:p><ns0:p>The approach we used to recruit participants makes it impossible to calculate a response rate. We do not know how many times people forwarded the survey invitation or the number of potential participants reached by the survey.</ns0:p></ns0:div> <ns0:div><ns0:head>Research Ethics</ns0:head><ns0:p>We received approval for the survey instruments and protocols used in this study from the University of Notre Dame Institutional Review Board. Prior to taking the survey, respondents had to read and consent to participate. If a potential respondent did not consent, the survey terminated. To support open science, we provide the following information: (1) the full text of the survey and (2) a sanitized version of the data <ns0:ref type='bibr' target='#b7'>(Carver et al., 2021)</ns0:ref>. We also provide a link to the scripts used to generate the figures that follow <ns0:ref type='bibr' target='#b9'>(Carver et al., 2022)</ns0:ref>. Qualtrics collected the IP address and geolocation from survey respondents.</ns0:p><ns0:p>We removed these columns from the published dataset.
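</ns0:p><ns0:p>For illustration only, the following sketch shows how such a sanitization step can be performed before publication; the file name and column names are hypothetical placeholders, not the actual fields of the survey export or the script used for the published dataset.</ns0:p>

```python
# Sketch: remove respondent-identifying columns before publishing survey data.
# File and column names are hypothetical, not the actual export fields.
import pandas as pd

raw = pd.read_csv("survey_export_raw.csv")
identifying_columns = ["IPAddress", "LocationLatitude", "LocationLongitude"]

# errors="ignore" keeps the script robust if a column is absent from the export
sanitized = raw.drop(columns=identifying_columns, errors="ignore")
sanitized.to_csv("survey_export_sanitized.csv", index=False)
```

<ns0:p>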
However, we did not remove all comments that might lead people to make educated guesses about the respondents.</ns0:p></ns0:div> <ns0:div><ns0:head>ANALYSIS</ns0:head><ns0:p>After providing an overview of the participant demographics, we describe the survey results relative to each of the research questions defined in the Background section. Because many of the survey questions were optional and because the follow-up questions only went to a subset of respondents, we report the number of respondents for each question along with the results below. To clarify which respondents received each question, we provide some text around each result. In addition, when reporting results from a follow-up question, the text specifically indicates that it is a follow-up question and the number of respondents will be much smaller.</ns0:p></ns0:div> <ns0:div><ns0:head>Participant Demographics</ns0:head><ns0:p>We use each of the key demographics gathered on the survey to characterize the respondents (e.g. the demographics of the sample). Note that because some questions were optional, the number of respondents differs across the demographics.</ns0:p><ns0:p>Respondent Type We asked each respondent to characterize their relationship with research software as one of the following: Job Title People involved in developing and using research software have various job titles. For our respondents, Faculty was the most common, given by 63% (668/1046) of the respondents and 79% (354/447) of the Researcher type respondents. No other title was given by more then 6% of the respondents.</ns0:p><ns0:p>Respondent Age Overall, 77% (801/1035) of the respondents are between 35 and 64 years of age. The percentage is slightly higher for Researcher type respondents (370/441 -84%) and slightly lower for Combination type respondents (378/514 -74%).</ns0:p><ns0:p>Respondent Experience The respondent pool is highly experienced overall, with 77% (797/1040) working in research for more than 10 years and 39% (409/1040) for more than 20 years. For the Researcher type respondents, those numbers increase to 84% (373/444) with more than 10 years and 44% <ns0:ref type='bibr'>(197/444)</ns0:ref> with more than 20 years.</ns0:p><ns0:p>Gender In terms of self-reported gender, 70% (732/1039) were Male, 26% (268/1039) were Female, with the remainder reporting Other or Prefer not to say. For Researcher respondents, the percentage of Females is higher (151/443 -34%) Discipline The survey provided a set of choices for the respondents to choose their discipline(s).</ns0:p><ns0:p>Respondents could choose more than one discipline. Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref> shows the distribution of respondents by discipline. 
Though the respondents represent a number of research disciplines, our use of NSF and NIH mailing lists likely skewed the results towards participants from science and engineering fields.</ns0:p></ns0:div> <ns0:div><ns0:head>Discipline</ns0:head><ns0:p>Total Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Software Engineering Practices</ns0:head><ns0:p>This section focuses on answering RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software?</ns0:p><ns0:p>Where Respondents Spend Software Time We asked respondents what percentage of their time they currently spend on a number of software activities and what percentage of time they would ideally like to spend on those activities. As Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> shows, there is a mismatch between these two distributions, SpentTime and IdealSpentTime, respectively. Overall, respondents would like to spend more time in design and coding and less time in testing and debugging. However, the differences are relatively small in most cases. could choose as many answers as were appropriate. Interestingly, the aspects most commonly reported are those that are more related to people issues rather than to technical issues (e.g. finding personnel/turnover, communication, use of best practices, project management, and keeping up with modern tools). The only ones that were technical were testing and porting.</ns0:p><ns0:p>Use of Testing Focusing on one of the technical aspect that respondents perceived to be more difficult than it should be, we asked the respondents how frequently they employ various types of testing, including:</ns0:p><ns0:p>Unit, Integration, System, User, and Regression. The respondents could choose from frequently, somewhat, rarely, and never. Figure <ns0:ref type='figure'>3</ns0:ref> shows the results from this question. The only type of testing more than 50% of respondents used frequently was Unit testing (231/453 -51%). On the other extreme only about 25% reported using System (118/441 -27%) or Regression testing (106/440 -24%) frequently.</ns0:p><ns0:p>Use of Open-Source Licensing Overall 74% (349/470) of the respondents indicated they used an open-source license. This percentage was consistent across both combination and developer respondents.</ns0:p><ns0:p>However, this result still leaves 26% of respondents who do not release their code under an open-source license.</ns0:p><ns0:p>Frequency of using *best* practices As a follow-up question, we asked the respondents how frequently they used a number of standard software engineering practices. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; Continuous Integration -54% (54/100)</ns0:p><ns0:p>&#8226; Use of coding standards -54% (54/100)</ns0:p><ns0:p>&#8226; Architecture or Design -51% (52/101)</ns0:p><ns0:p>&#8226; Requirements -43% (43/101) Are there sufficient opportunities for training? When we turn to the availability of relevant training opportunities, an interesting picture emerges. As Figure <ns0:ref type='figure'>5</ns0:ref> shows, slightly more than half of the respondents indicate there is sufficient training available for obtaining new software skills. 
However, when looking at the response based upon gender, there is a difference with 56% of male respondents answering positively but only 43% of the female respondents answering positively. But, as Figure <ns0:ref type='figure'>6</ns0:ref> indicates, approximately 75% of the respondents indicated they do not have sufficient time to take advantage of these opportunities.</ns0:p><ns0:p>These results are slightly higher for female respondents (79%) compared with male respondents (73%).</ns0:p><ns0:p>So, while training may be available, respondents do not have adequate time to take advantage of it.</ns0:p></ns0:div> <ns0:div><ns0:head>Preferred modes for delivery of training</ns0:head><ns0:p>The results showed that there is not a dominant approach preferred for training. Carpentries, Workshops, MOOCs, and On-site custom training all had approximately the same preference across all three topic ares (Development Techniques, Languages, and Project Management). This result suggests that there is benefit to developing different modes of training about important topics, because different people prefer to learn in different ways.</ns0:p></ns0:div> <ns0:div><ns0:head>Funding</ns0:head><ns0:p>This section uses the relevant survey questions to answer the two research questions related to funding:</ns0:p><ns0:p>&#8226; RQ4a: What is the available institutional support for research software development?</ns0:p><ns0:p>&#8226; RQ4b: What sources of institutional funding are available to research software developers?</ns0:p><ns0:p>First, 54% (450/834) of the respondents reported they have included funding for software in their proposals. However, that percentage drops to 30% (124/408) for respondents who identify primarily as Researchers.</ns0:p><ns0:p>When looking at the specific types of costs respondents include in their proposals, 48% (342/710) Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In examining the source of funding for the projects represented by the survey respondents, the largest funder is NSF, at 36%. But, as Figure <ns0:ref type='figure'>7</ns0:ref> shows, a significant portion of funding comes from the researchers' own institutions. While other funding agencies provide funding for the represented projects and may be very important for individual respondents, overall, they have little impact. This result could have been impacted by the fact that we used a mailing list of NSF projects as one means of distributing the survey.</ns0:p><ns0:p>However, we also used a list of NIH PIs who led projects funded at least at $200K, so it is interesting that NSF is still the largest source.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 7. Sources of Funding</ns0:head><ns0:p>In terms of the necessary support, Figure <ns0:ref type='figure'>8</ns0:ref> indicates that, while institutions do provide some RSE, financial, and infrastructure support, it is inadequate to meet the respondents' needs, overall. In addition, when asked in a follow-up question whether the respondents have sufficient funding to support software development activities for their research the overwhelming answer is no (Figure <ns0:ref type='figure'>9</ns0:ref>).</ns0:p><ns0:p>When asked about whether current funding adequately supports some key phases of the software lifecycle, the results were mixed. 
Respondents answered on a scale of 1-5, from insufficient to sufficient.</ns0:p><ns0:p>For Developing new software and Modifying or reusing existing software, there is a relatively uniform distribution of responses across the five answer choices. However, for Maintaining software, the responses skew towards the insufficient end of the scale.</ns0:p><ns0:p>For respondents who develop new software, we asked (on a 5-point scale) whether their funding supports various important activities, including refactoring, responding to bugs, testing, developing new features, and documentation. In all cases, less than 35% of the respondents answered 4 or 5 (sufficient) on the scale.</ns0:p></ns0:div> <ns0:div><ns0:head>Career Paths</ns0:head><ns0:p>This section focuses on answering RQ5: What factors impact career advancement and hiring in research software?</ns0:p><ns0:p>Institutions have a number of different job titles for people who develop software. Respondents could select more than one answer, so the total exceeds the number of respondents.</ns0:p><ns0:p>While there are a number of job titles that research software developers can fill, unfortunately, as Figure <ns0:ref type='figure' target='#fig_10'>10</ns0:ref> shows, the respondents saw little chance for career advancement for those whose primary role is software development. Only 21% (153/724) of the Combination and Researcher respondents saw opportunity for advancement. The numbers were slightly better at 42% (24/57) for those who viewed themselves as Developers. When we look at the result by gender, only 16% (32/202) of the female respondents see an opportunity for advancement compared with 24% (139/548) for the male respondents. When examining the perceptions of those who have been hired into a software development role, we asked a similar question. We asked respondents the importance of the following concerns when they were hired into their current role:</ns0:p><ns0:p>&#8226; Diversity in the organization</ns0:p><ns0:p>&#8226; Your experience as a programmer or software engineer</ns0:p><ns0:p>Besides these factors, we asked a follow-up question about the importance of the following factors when deciding on whether to accept a new job or a promotion:</ns0:p><ns0:p>&#8226; Title of the position</ns0:p><ns0:p>&#8226; Salary raise</ns0:p><ns0:p>&#8226; Responsibilities for a project or part of a project</ns0:p><ns0:p>&#8226; Leading a team</ns0:p><ns0:p>&#8226; Available resources such as travel money</ns0:p><ns0:p>As Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref> shows, Salary, Responsibility, Leadership, and Resources are the most important factors respondents consider when taking a job or a promotion.</ns0:p><ns0:p>Lastly, in terms of recognition within their organization, in a follow-up question, we asked respondents to indicate whether other people in their organization use their software and whether other people in their organization have contacted them about developing software. Almost everyone (54/61) who responded to these follow-up questions indicated that other people in the organization use their software.
In addition, just over half of the Combination respondents (27/51) and 72% (8/11) of the Developer respondents indicated that people in their organization had contacted them about writing software for them.</ns0:p></ns0:div> <ns0:div><ns0:head>Credit</ns0:head><ns0:p>This section uses the relevant survey questions to answer the two research questions related to credit:</ns0:p><ns0:p>&#8226; RQ6a: What do research software projects require for crediting or attributing software use?</ns0:p><ns0:p>&#8226; RQ6b: How are individuals and groups given institutional credit for developing research software?</ns0:p><ns0:p>When asked how respondents credit software they use in their research, as Figure <ns0:ref type='figure' target='#fig_7'>14</ns0:ref> shows, the most common approaches are either to cite a paper about the software or to mention the software by name.</ns0:p><ns0:p>Interestingly, authors tended to cite the software archive itself, mention the software URL, or cite the software URL much less frequently. Unfortunately, this practice leads to fewer trackable citations of the software, making it more difficult to judge its impact.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 14. How Authors Credit Software Used in Their Research</ns0:head><ns0:p>Following on this trend of software work not being properly credited, when asked how they currently receive credit for their own software contributions, as Figure <ns0:ref type='figure' target='#fig_13'>15</ns0:ref> shows, none of the standard practices appear to be used very often.</ns0:p><ns0:p>An additional topic related to credit is whether respondents' contributions are valued for performance reviews or promotion within their organization. As Figure <ns0:ref type='figure' target='#fig_4'>16</ns0:ref> shows, approximately half of the respondents indicate that software contributions are considered. Another large percentage says that it depends.</ns0:p><ns0:p>While it is encouraging that a relatively large percentage of respondents' institutions consider software during performance reviews and promotion some or all of the time, the importance of those contributions is still rather low, especially for respondents who identify as Researchers, as shown in Figure <ns0:ref type='figure' target='#fig_4'>17</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Diversity</ns0:head><ns0:p>This section focuses on answering RQ7: How do current Research Software Projects document diversity statements or codes of conduct, and what support is needed to further diversity initiatives?</ns0:p><ns0:p>When asked how well their projects recruit, retain, and include in governance participants from underrepresented groups, only about 1/3 of the respondents thought they did an 'Excellent' or 'Good' job. Interestingly, when asked how well they promote a culture of inclusion, 68% of the respondents (390/572) indicated they did an 'Excellent' or 'Good' job. These two responses seem to be at odds with each other, suggesting that perhaps projects are not doing as well as they think they are. Conversely, it could be that projects do not do a good job of recruiting diverse participants, but do a good job of supporting the ones they do recruit.
Figure <ns0:ref type='figure' target='#fig_4'>18</ns0:ref> shows the details of these responses.</ns0:p><ns0:p>We asked follow-up questions about whether the respondents' projects have a diversity/inclusion statement or a code of conduct. As Figures <ns0:ref type='figure' target='#fig_4'>19 and 20</ns0:ref> show, most projects do not have either of these, nor do they plan to develop one. This answer again seems at odds with the previous answer that most people thought their projects fostered a culture of inclusion. However, it is possible that projects fall under institutional codes of conduct or have simply decided that a code of conduct is not the best way to address these issues.</ns0:p></ns0:div> <ns0:div><ns0:head>THREATS TO VALIDITY</ns0:head><ns0:p>To provide some context for these results and help readers properly interpret them, this section describes the threats to validity and limits of the study. While there are multiple ways to organize validity threats, we organize ours in the following three groups.</ns0:p></ns0:div> <ns0:div><ns0:head>Internal Validity Threats</ns0:head><ns0:p>Internal validity threats are those conditions that reduce the confidence in the results that researchers can draw from the analysis of the included data.</ns0:p><ns0:p>A common internal validity threat for surveys is that the data is self-reported. Many of the questions in our survey rely upon the respondent accurately reporting their perception of reality. While we have no information that suggests respondents were intentionally deceptive, it is possible that their perception about some questions was not consistent with their reality.</ns0:p><ns0:p>A second internal validity threat relates to how we structured the questions and which questions each respondent saw. Due to the length of the survey and the potential that some questions may not be relevant for all respondents, we did not require a response to all questions. In addition, we filtered out questions based on the respondent type (Researcher, Combination, or Developer) when those questions were not relevant. Last, for each topic area, we included a set of optional questions for those who wanted to provide more information. These optional questions were answered by a much smaller set of respondents.</ns0:p><ns0:p>Taken together, these choices mean that the number of respondents to each question varies. In the results reported above, we included the number of people who answered each question.</ns0:p></ns0:div> <ns0:div><ns0:head>Construct Validity Threats</ns0:head><ns0:p>Construct validity threats describe situations where there is doubt in the accuracy of the measurements in the study. In these cases, the researchers may not be fully confident that the data collected truly measures the construct of interest.</ns0:p><ns0:p>In our study, the primary threat to construct validity relates to the respondents' understanding of key terminology. The survey used a number of terms related to the research software process. It is possible that some of those terms may have been unfamiliar to the respondents.
Because of the length of the survey and the large number of terms on the survey, we chose not to provide explicit definitions for each concept.</ns0:p><ns0:p>While we have no evidence that raises concerns about this issue, it is possible that some respondents interpreted questions in ways other than how we originally intended.</ns0:p></ns0:div> <ns0:div><ns0:head>External Validity Threats</ns0:head><ns0:p>External validity threats are those conditions that decrease the generalizability of the results beyond the specific sample included in the study. We have identified two key external validity threats.</ns0:p><ns0:p>The first is general sampling bias. We used a convenience sample for recruiting our survey participants. While we attempted to cast a very wide net using the sources described in the Survey Participants section, we cannot be certain that the sample who responded to the survey is representative of the overall population of research software developers in the United States.</ns0:p><ns0:p>The second is the discrepancy among the number of respondents of each type (Researcher, Combination, Developer). Because we did not know how many people in the population would identify with each respondent type, we were not able to use this factor in our recruitment strategy. As a result, the number of respondents from each type differs. This discrepancy could be representative of the overall population or it could also suggest a sampling bias.</ns0:p><ns0:p>Therefore, while we have confidence in the results described above, these results may not be generalizable to the larger population.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>We turn now to a discussion of the results of our survey, and the implied answers to our research questions. In each subsection, we restate the original research question, highlight important findings, and contextualize these findings in relation to software sustainability.</ns0:p></ns0:div> <ns0:div><ns0:head>Software Engineering Practices</ns0:head><ns0:p>RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software?</ns0:p><ns0:p>Across a number of questions about software engineering practices, our respondents report that the aspects of the software development process that were more difficult than expected were related to people, rather than to mastering the use of a tool or technique. Respondents reported they thought testing was important, but our results show only a small percentage of respondents frequently use system testing (27% of respondents) and regression testing (24% of respondents). This result suggests that targeted outreach on best practices in testing, broadly, could be a valuable future direction for research software trainers.</ns0:p><ns0:p>We also asked respondents about how they allocated time to software development tasks. The respondents reported, overall, they spend their time efficiently -allocating as much time as a task requires, but rarely more than they perceive necessary (see Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p><ns0:p>However, one notable exception concerns debugging, where both developers and researchers report an imbalance between the time they would like to spend and the time they actually spend.
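</ns0:p><ns0:p>One low-cost practice that connects this debugging burden with the under-used regression testing reported above is to capture each fixed bug as a small automated test. The sketch below illustrates the idea with pytest; the function and the bug it guards against are hypothetical examples, not code from any surveyed project.</ns0:p>

```python
# Sketch: capturing a fixed bug as a regression test with pytest.
# `normalize_counts` and its former empty-input bug are hypothetical examples.
import pytest


def normalize_counts(counts):
    total = sum(counts)
    if total == 0:  # guard added after a ZeroDivisionError was debugged
        return []
    return [c / total for c in counts]


def test_normalize_counts_regular_input():
    assert normalize_counts([2, 2]) == pytest.approx([0.5, 0.5])


def test_normalize_counts_empty_input_regression():
    # Keeps the previously fixed empty-input bug from silently returning.
    assert normalize_counts([]) == []
```

<ns0:p>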
While we did not ask follow up questions about any specific task, we can interpret this finding as the result of asking about an unpleasant task -debugging is not an ideal use of time, even if it is necessary. However, there is an abundance of high quality and openly accessible tools that help software engineers in debugging tasks.</ns0:p><ns0:p>A future research direction is to investigate types of code quality controls, testing, and the use of tools to simplify debugging in research software. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Our results show a diversity of people and their titles who assume the role of research software developer. However, few respondents were optimistic about their research software contributions positively impacting their career (only 21% of faculty and 42% of developers believed software contributions would be valuable for career advancement). This finding was particularly pronounced for female identifying respondents, with only 16% (n=32/202) believing software contributions could impact their career advancement.</ns0:p><ns0:p>When asked to evaluate prospective applicants to a research software position many respondents valued potential and scientific domain knowledge (background) as important factors. We optimistically believe this result indicates that while programming knowledge, and experience are important criteria for job applicants, search committees are also keen to find growth minded scientists to fill research software positions. We believe this result could suggest an important line of future work -asking, for example, research software engineering communities to consider more direct and transparent methods for eliciting potential and scientific domain knowledge on job application materials.</ns0:p><ns0:p>Finally, we highlight the factors important to research software developers when evaluating a prospective job for their own career. Respondents reported that they value salary equally with leadership (of software at an institution) and access to software resources (e.g. infrastructure). This suggests that while pay is important, the ability to work in a valued environment with access to both mentorship and high quality computing resources can play an important role in attracting and retaining talented research software professionals.</ns0:p></ns0:div> <ns0:div><ns0:head>Credit</ns0:head></ns0:div> <ns0:div><ns0:head>RQ6:</ns0:head><ns0:p>&#8226; RQ6a: What do research software projects require for crediting or attributing software use?</ns0:p><ns0:p>&#8226; RQ6b: How are individuals and groups given institutional credit for developing research software?</ns0:p><ns0:p>As described in the Background section, obtaining credit for research software work is still emerging and is not consistently covered by tenure and promotion evaluations. The survey results are consistent with this trend and show none of the traditional methods for extending credit to a research contribution are followed for research software. Figure <ns0:ref type='figure' target='#fig_13'>15</ns0:ref> makes this point clear -respondents most frequently mention software by name but less frequently cite software papers or provide links to the software. 
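</ns0:p><ns0:p>One concrete, low-effort step toward more trackable credit is machine-readable citation metadata stored alongside the code, which platforms such as GitHub can turn into a ready-made citation. The sketch below writes a minimal CITATION.cff file; the project name, author, version, and DOI are hypothetical placeholders rather than details of any surveyed project.</ns0:p>

```python
# Sketch: generate a minimal CITATION.cff file so that code hosting platforms
# (e.g., GitHub) can offer a "cite this repository" entry.
# All project details below are hypothetical placeholders.
from pathlib import Path

citation_cff = """\
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example Research Software"
version: "1.0.0"
date-released: 2022-03-09
authors:
  - family-names: "Doe"
    given-names: "Jane"
doi: "10.5281/zenodo.0000000"
"""

Path("CITATION.cff").write_text(citation_cff)
print("Wrote CITATION.cff")
```

<ns0:p>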
According to these results, research software projects require better guidance and infrastructure support for accurately crediting software used in research, both at the individual and institutional level.</ns0:p><ns0:p>We also highlight the relationship between credit and career advancement. In the previous section (Career Paths) we asked respondents how often they were consulted about or asked to contribute to existing software projects at their institution. Among the Developer respondents, 72% were consulted about developing and maintaining software. However, unless this consultation, expertise, and labor is rewarded within a formal academic system of credit, this work remains invisible to tenure and promotion committees. Such invisible labor is typical within information technology professions, but we argue that improving this formal credit system is critical to improving research software sustainability.</ns0:p><ns0:p>Scholarly communications and research software engineers have been active in promoting new ways to facilitate publishing, citing, using persistent identifiers, and establishing authorship guidelines for research software. This effort includes work in software citation aimed at changing publication practices <ns0:ref type='bibr'>(Katz et al., 2021)</ns0:ref>, in software repositories <ns0:ref type='bibr' target='#b48'>(Smith, 2021)</ns0:ref>, and a proposed definition for FAIR software to add software to funder requirements for FAIR research outputs (Chue <ns0:ref type='bibr' target='#b12'>Hong et al., 2021)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Diversity</ns0:head></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Previous studies of research software communities have not focused specifically on diversity statements, DEI initiatives, or related documentation (e.g. code of conduct documents). Our results showed about 2/3 of the respondents thought their organizations promoted a 'culture of inclusion' with respect to research software activities. Conversely, only about 1/3 of respondents thought their organization did an above average job of recruiting, retaining, or meaningfully including diverse groups (see Figure <ns0:ref type='figure' target='#fig_4'>18</ns0:ref>).</ns0:p><ns0:p>We also asked participants whether their main software project (where they spent most time) had, or planned to develop, a diversity and inclusion statement. Over half of respondents indicated that their projects do not have a diversity/inclusion statement or a code of conduct and have no plans to create one.</ns0:p><ns0:p>While these numbers paint a grim picture, we also believe that there is additional research necessary </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2 https://us-rse.org PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022) Manuscript to be reviewed Computer Science train researchers in modern development practices (e.g., the Carpentries 3 , IRIS-HEP 4 , and MolSSI 5 ). While much of the development of research software occurs in academia, important development also occurs in national laboratories and industry. 
Wherever the development and maintenance of research software occurs, that software might be released as open source (most likely in academia and national laboratories) or it might be commercial/closed source (most likely in industry, although industry also produces and contributes to open source).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>https://carpentries.org 4 https://iris-hep.org 5 https://molssi.org 6 http://urssi.us 2/31 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>reports on the results of 209 responses to provide insight into the role of software in conducting research at US universities. The survey focused on the respondents' use of research software and the training they have received in software development. &#8226; Towards Computational Reproducibility: Researcher Perspectives on the Use and Sharing of Software (AlNoamany and Borghi, 2018) reports on the results from 215 respondents across a range 3/31 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022) Manuscript to be reviewed Computer Science of disciplines. The goal of the survey was to understand how researchers create, use, and share software. The survey also sought to understand how the software development practices aligned with the goal of reproducibility. &#8226; SSI International RSE Survey (Philippe et al., 2019) reports on the results from approximately 1000 responses to a survey of research software engineers from around the world. The goal of the survey is to describe the current state of research software engineers related to various factors including employment, job satisfaction, development practices, use of tools, and citation practices.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>&#8226;</ns0:head><ns0:label /><ns0:figDesc>Researcher -someone who only uses software &#8226; Developer -someone who only develops software &#8226; Combination -both of the above roles The respondents were fairly evenly split between Researchers at 43% (473/1109) and Combination at 49% (544/1109), with the remaining 8% (92/1109) falling into the Developer category. Note that depending on how the respondent answered this question, they received different survey questions. If a respondent 9/31 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022) Manuscript to be reviewed Computer Science indicated they were a Researcher, they did not receive the more development-oriented questions. For the remainder of this analysis, we use these subsets to analyze the data. If the result does not indicate that it is describing results from a subset of the data, then it should be interpreted as being a result from everyone who answered the question. Organization Type Next, respondents indicated the type of organization for which they worked. The vast majority 86% (898/1048) worked for Educational Institutions. That percentage increased to 93% (417/447) for Researcher type respondents Geographic Location Because the focus of the URSSI project is the United States, we targeted our survey to US-based lists. As a result, the vast majority of responses (990/1038) came from the United States. We received responses from 49 states (missing only Alaska), plus Washington, DC, and Puerto Rico.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
Where respondents spend software time</ns0:figDesc><ns0:graphic coords='12,141.73,190.60,413.52,232.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Aspects of Software Development That Are More Difficult Than They Should Be</ns0:figDesc><ns0:graphic coords='13,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Tools support for development activities The results in Figure 4 indicate that a large majority of respondents (340/441 -77%) believe Coding is Extremely supported or Very supported by existing tools. Slightly less than half of the respondents find Testing (196/441 -44%) and Debugging (188/441 -43%) to be Extremely supported or Well supported. Less than 30% of the respondents reported Requirements, Architecture/design, Maintenance, and Documentation as being well-supported. Because coding is the only practice where more than half of the respondents indicate Extremely supported or Very supported, these responses indicate a clear opportunity for additional (or better) tool support in a number of areas.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Availability of Tool Support</ns0:figDesc><ns0:graphic coords='14,172.75,414.14,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Sufficient Opportunities for Training</ns0:figDesc><ns0:graphic coords='16,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .Figure 9 .</ns0:head><ns0:label>89</ns0:label><ns0:figDesc>Figure 8. Sufficiency of Institutional Support</ns0:figDesc><ns0:graphic coords='18,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Opportunities for Career Advancement for Software Developers</ns0:figDesc><ns0:graphic coords='19,172.75,160.52,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Concerns When Trying to Hire Software Development Staff</ns0:figDesc><ns0:graphic coords='20,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Concerns When Being Hired as Software Development Staff</ns0:figDesc><ns0:graphic coords='21,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. How Respondents Currently Receive Credit for Their Software Contributions</ns0:figDesc><ns0:graphic coords='23,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 16 .Figure 17 .</ns0:head><ns0:label>1617</ns0:label><ns0:figDesc>Figure 16. Does Institution Consider Software Contributions in Performance Reviews or Promotion Cases?</ns0:figDesc><ns0:graphic coords='24,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 18 .Figure 20 .</ns0:head><ns0:label>1820</ns0:label><ns0:figDesc>Figure 18. 
How Respondents Think Their Project Does With Inclusion</ns0:figDesc><ns0:graphic coords='25,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>to clarify the types of diversity, equity, and inclusion work, including formal and informal initiatives, needed in research software development. This research would provide needed clarity on the training, mentoring, and overall state of diversity and inclusion initiatives in research software. Further, research needs to compare these practices with broader software and research communities seeking to understand how, for example, research software compares to open-source software.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,172.75,160.49,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,172.75,112.60,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,172.75,395.01,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Previous Surveys</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Focus</ns0:cell><ns0:cell>Respondents</ns0:cell></ns0:row><ns0:row><ns0:cell>Hannay et al. (2009)</ns0:cell><ns0:cell>How scientists develop and use software</ns0:cell><ns0:cell>1972</ns0:cell></ns0:row><ns0:row><ns0:cell>Pinto et al. (2018)</ns0:cell><ns0:cell>Replication of Hannay et al. (2009)</ns0:cell><ns0:cell>1553</ns0:cell></ns0:row><ns0:row><ns0:cell>Wiese et al. (2020)</ns0:cell><ns0:cell>Additional results from Pinto et al. (2018) focused</ns0:cell><ns0:cell>1577</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>on problems encountered when developing scientific</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>software</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Nguyen-Hoan et al. (2010) Software development practices of scientists in Aus-</ns0:cell><ns0:cell>60</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>tralia</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Prabhu et al. (2011)</ns0:cell><ns0:cell>Practice of computational science in one large uni-</ns0:cell><ns0:cell>114</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>versity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Joppa et al. (2013)</ns0:cell><ns0:cell>Researchers in species domain modeling with vary-</ns0:cell><ns0:cell>&#8764;450</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing levels of expertise</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Carver et al. (2013)</ns0:cell><ns0:cell>Software engineering knowledge and training among</ns0:cell><ns0:cell>141</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>computational scientists and engineers</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hettrick (2018, 2014)</ns0:cell><ns0:cell>Use of software in Russell Group Universities in the</ns0:cell><ns0:cell>417</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>UK</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jay et al. 
(2016)</ns0:cell><ns0:cell>How scientists publish code</ns0:cell><ns0:cell>65</ns0:cell></ns0:row><ns0:row><ns0:cell>Nangia and Katz (2017)</ns0:cell><ns0:cell>Use of software and software development training</ns0:cell><ns0:cell>209</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>in US Postdoctoral Association</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>AlNoamany and Borghi</ns0:cell><ns0:cell>How the way researchers use, develop, and share</ns0:cell><ns0:cell>215</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell>software impacts reproduciblity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Philippe et al. (2019)</ns0:cell><ns0:cell>Research Software Engineers</ns0:cell><ns0:cell>&#8764;1000</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>as well as through economic studies. An example of the latter was performed by a development team of the widely used AstroPy packages in Astronomy. Using David A. Wheeler's SLOCCount method for economic impact of open-source software they estimate the cost of reproducing AstroPy to be approximately $8.5 million and the annual economic impact on astronomy alone to be approximately $1.5 million<ns0:ref type='bibr' target='#b35'>(Muna et al., 2016)</ns0:ref>.There is, recently, increased attention from funders on the importance of software maintenance and archiving, including the Software Infrastructure for Sustained Innovation (SI2) program at NSF,</ns0:figDesc><ns0:table /><ns0:note>the NIH Data Commons (which includes software used in biomedical research), the Alfred P. Sloan Foundation's Better Software for Science program, and the Chan Zuckerberg Initiative's Essential Open Source Software for Science program which provide monetary support for the production, maintenance, and adoption of research software. Despite encouraging progress there is still relatively little research that focuses specifically on how the lack of direct financial support for software sustainability impacts research software engineers and research software users. We seek to better understand this relationship through two specific research questions that focus on the impact of funding on software sustainability: RQ4a: What is the available institutional support for research software development? and RQ4b: What sources of institutional funding are available to research software developers?</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>along with guidance for those</ns0:figDesc><ns0:table /><ns0:note>11 https://carcc.org/wp-content/uploads/2019/01/CI-Professionalization-Job-Families-and-Career-Gui pdf 7/31 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Disciplines of Respondents</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Researchers Developers Combination</ns0:cell></ns0:row></ns0:table><ns0:note>10/31PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:1:1:NEW 9 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_14'><ns0:head /><ns0:label /><ns0:figDesc>. 
[Fragment captured in a table environment by the extraction: "along with guidance for those". Footnote 11: https://carcc.org/wp-content/uploads/2019/01/CI-Professionalization-Job-Families-and-Career-Gui pdf]

[Table 2. Disciplines of Respondents (columns: Researchers, Developers, Combination).]

[Discussion text captured in a table environment by the extraction:]
Overall, we observe research software developers do not commonly follow the best software engineering practices. Of the practices we included in our survey (Continuous Integration, Coding Standards, Architecture/Design, Requirements, and Peer Code Review), none were used by more than 54% of the respondents. This result indicates a need for additional work to gather information about how research software developers are using these practices and to disseminate that information to the appropriate communities to increase their usage.

Software Tools. RQ2: What tools do research software developers use and what additional tools are needed to support sustainable development practices? Our respondents reported sufficient tool support only for the coding activity. This result suggests the need for additional tools to support important activities like Requirements, Design, Testing, Debugging, and Maintenance. The availability of useful tools that can fit into developers' current workflows can increase the use of these key practices for software quality and sustainability.

Training. RQ3: What training is available to research software developers and does this training meet their needs? Across multiple questions, our findings suggest a need for greater opportunities to access and participate in software training. Previous surveys found less than half (and sometimes much less) of developers reported they had formal training in software development. Our results support these rough estimates, with Developers reporting a slightly higher percentage of both formal and informal software training than researchers. In our sample, including both developers and researchers, approximately half of the people developing research software have received no formal training. Consistent with this number, only about half of the respondents reported sufficient opportunities for training. However, approximately 3/4 indicated they did not have sufficient time for the training that was available. Together these results suggest two conclusions: (1) there is a need for more training opportunities (as described above) and (2) developers of research software need more time for training, either by prioritizing it in their own schedules or by being given it by their employers.

Funding. RQ4a: What is the available institutional support for research software development? RQ4b: What sources of institutional funding are available to research software developers? We provided a motivation for these research questions by demonstrating, across a variety of previous surveys and published reports, that there has not been sufficient funding dedicated to the development and maintenance of research software. The survey results support this assertion, with just under half of the respondents who develop software as part of their research reporting that they include costs for developing software in research funding proposals, and even fewer including costs for reusing or maintaining research software. A limitation of our study is that we do not ask respondents why they choose not to include these costs. We could interpret this result as a belief that such items would not be appropriate for a budget or would not result in a competitive funding application. Future work should investigate (1) how and why software research funding is allocated, (2) how research software is budgeted in preparing research grant proposals, and (3) what deters researchers from requesting funding for software development, maintenance, or reuse. From the perspective of software sustainability, these results are troubling. Without support for maintaining and sustaining research software, at least some of the initial investments made in software are lost over time.

Career Paths. RQ5: What factors impact career advancement and hiring in research software?

[Footnote 12: http://www.qualtrics.com]
</ns0:body> "
"We thank the reviewers for the valuable comments on our manuscript. We have made a significant revision to address as many of the comments as possible. Below you will find a copy of each reviewer’s comments along with our response below (in blue text). Reviewer 1 Basic reporting R1.1 The text is well written, but contains a few syntax and grammar errors (lines 169, 178, 510, 661). In line 316 the sentence structure seems to be off for me. While I (as a non native speaker) cannot determine if it is wrong, it could be improved for better understanding. Paper updated to reflect these changes. R1.2 Figure 1 is missing an explanation for the two sets of columns. Also, it is rather unreadable due to the subfigures being too small. Removing at least the vertical dotted lines might further increase readability. We have added labels below the columns to clarify the figure. R1.3 Nearly all figures show absolutes. Especially with the difference regarding the absolute number of answers, I would prefer a presentation as relatives (percentages) to allow a better comparison of the answers in figures itself (i.e. figure 5 and 6) and between figures. My suggestion is to state the absolutes in the figure caption or the x-axis captions since it is an important information. In the text the authors already use mainly the percentages, referring to the absolutes in brackets. Because the number of respondents to each question differed, we chose to show absolute numbers rather than percentages. We are afraid that percentages would be misleading. Furthermore, it is not always easy to calculate the percentages and explain the denominator. Because we think it would be more confusing, we chose to keep the figures in absolute numbers. However, we have made a number of improvements to the figures based on other reviewer comments. Experimental design R1.4 In line 388 it is stated that the data was sanitized, but not explained in what way. A short description of this process would be helpful. We added the following sentences at the end of the “Research Ethics” section: Qualtrics collected the IP address and geo location from survey respondents. We removed these columns from the published dataset. However, we did not remove all comments that might lead people to make educated guesses about the respondents. Validity of the findings R1.5 The results are US and UK centric, which is due to the nature of the survey distribution and focus and acknowledged by the authors. While I see no issue with this and most findings might be applicable for other regions, I would recommend to mention it earlier in the paper, perhaps even the abstract. We edited the title, abstract, the second to last sentence of the introduction, and Survey Participants to clarify the focus on the US. Reviewer 2 Basic reporting R2.1 The basic reporting is done well. I indicated on the attached pdf a few spots where the sentences are phrased awkwardly, or where there are some typos, but for the most part the English is quite good. We fixed the marked typos throughout the document. R2.2 The figures were nicely done, except for a few points that I highlighted on the marked up pdf. We fixed figures as indicated in the PDF document. R2.3 The section with the title Discussion should probably have the section title changed to Conclusions, since the Discussion is really what came in the previous section. Also, there isn't currently a Conclusions section. We changed the name of the section to “Conclusion”. 
Experimental design R2.4 The research questions are well defined, except for one where the wording is awkward. The one with the awkward wording is highlighted in the marked up version. We have reviewed the wording of all questions to ensure they are clear. Validity of the findings R2.5 For the most part the findings seem reasonable. However, there are a few questionable points, as highlighted in the marked up version. A few specific comments include: R2.5a A bigger deal seems to be made of the differences in Figure 1 than the data justifies. The ideal time and the spent time look similar in all cases. For the cases where the paper claims the difference is noteworthy, it would be good to quantify the relative difference between the two measures. We have added a comment to this discussion to indicate that the differences are relatively small. We do not have the resources at this point to do further analysis of this type. R2.5b A section on threats to validity is missing. One threat to validity is the respondents not knowing the meaning of the terminology, such as regression testing, requirements, software carpentry, maintaining and sustaining, design, .... Another threat is respondents self-reporting on their activities, such as what documentation they write. We have added a Threats to Validity Section R2.5c There is no section for future work. For instance, the respondents self-report on what documentation they write, but this documentation could be measured directly by mining existing software repositories. Also, extended interviews could be held with some researchers/developers to get more qualitative data. We greatly expanded the Discussion section. In this expansion, we added a description of the potential future work that goes with each result. R2.5d At line 643 it is stated that the most difficult aspects of software development are not technical. However, this is misinterpreting the study question. The respondents weren't asked what they found most difficult, but what aspects of software development are more difficult than they should be. The respondents might feel like the technical aspects are difficult, but not more difficult than they should be. We updated this sentence to clarify that the people aspects were more difficult than they expected rather than that they were more difficult than the technical aspects. Additional comments R2.6 The paper shows that domain data was collected, but this data does not appear to be used. Did the responses differ between the different domains? If this was not investigated, possibly because of insufficient data, this should be explicitly stated. We used the domain data primarily to describe the sample, not as a hypothesized variable that might affect the results. Therefore, we did not conduct this analysis. Our funding for the work has ended and we are not able to go back and do this analysis. We think our paper is valuable without this additional result. Annotated manuscript R2.7 The reviewer has also provided an annotated manuscript as part of their review: We updated the manuscript to reflect these annotations. Reviewer 3 Basic reporting R3.1 Table I is very helpful, however it would be useful to have more specific details about how these prior studies differ from the current work. 
We added the following sentence into the introduction to the “Software Engineering Practices” section: “Our current survey is motivated by the inconsistencies in previous results and the fact that some key areas are not adequately covered by previous surveys.” R3.2 Additionally, the research questions (more on this later) do not seem to fit in this section. They have some background from the prior work, but are not well motivated for the overall work. There should be more background included to motivate each RQ and explain why it's included or they should be placed in a separate section (i.e. with the Methods). Overall this research is exciting and provides a valuable contribution to the field. After careful consideration and discussion of this section, we still believe the best organization of the research questions is directly with the text that motivates their inclusion. Therefore we have not moved the questions. However, as indicated in the responses below, we have provided more information to motivate the questions. Experimental design R3.3 The main limitation are the research questions. There is a wide breadth between the RQs and, while the researchers show how these topics fit into the previous surveys, in my opinion the authors fail to motivate why these topics are important for this study and show how each RQ relates to each other and contributes to the overall goal of the work. Each RQ on its own seems very meaningful and relevant, however it is unclear how these connect to each other. To improve upon this, I would suggest adding to each RQ to explain why it is necessary, how it was selected, and how it fits into the current work. We thank the reviewer for their careful attention to the motivation of each research question. We have provided extended commentary to motivate each research question. In this motivation we have used background literature to discuss the importance of a topic and how previous studies have presented inconsistent findings about a topic. We believe this combination provides for a thorough and strong motivation for why a research question should be asked, and specifically how we designed our questionnaire to answer these questions. R3.4 I think also combining the RQs with the specific parts of the survey mentioned in the Methods section would also be helpful. We have updated the PDF of the survey to clearly indicate which survey questions matched each research question. R3.5 Additionally, the RQs lack background for some of specifics studied, especially for the SE-related questions. For example, requirements, design, and testing as software engineering processes, however other potential processes considered part of SE such as implementation, deployment, maintenance, etc. are missing in RQ1. Also the *best* practices (continuous integration, coding standards, arch/design, requirements, peer code review) have some missing options (i.e. pair programming, static analysis tools,) and would be interesting to know the details on how these were selected. Please see the response above to comment R3.3. Validity of the findings R3.6 It would be beneficial to see statistical tests used to further analyze the results and provide their significance. We did not perform these tests in the initial analysis as we were not testing specific hypotheses. At this point, our funding has ended and we do not have the resources to go back and conduct additional analyses. We believe that the results of the paper are interesting and valuable without this additional information. 
R3.7 The organization of the results could also be improved by providing a clear answer (i.e. highlighted) to each of the research questions based on the findings. We have significantly revised the Discussion section. We repeated each research question and then provided a discussion to summarize the findings related to the question. R3.8 The paper is also missing a threats to validity section to explain limitations of the work (i.e. the huge discrepancy between the respondent types in the results) and how the researchers mitigated these threats. We have added a Threats to Validity section. R3.9 In the conclusion, I would prefer more discussion about implications of these problems and solutions for researchers and developers to improve the state of research software development for each RQ topic, especially for improving the software engineering practices. We have greatly expanded and completely rewritten the discussion section to address this comment. Due to the large number of edits, we shaded the whole section blue to indicate the substantial changes. R3.10 Additionally, it would be interesting to note future opportunities for this work, such as mining GitHub repos to programmatically determine the tools and practices used for research software. In our revision to the Discussion section, we included descriptions of future work that can build upon the results. Reviewer 4 Basic reporting R4.1 The text follows established, scientific form and quality. I would even go as far and say that especially the text up until the 'Analysis' section is written well above average quality. However, starting with the 'Analysis' section, it feels like the publication was written by entirely different authors. This (long) section contains mostly tables, figures, and just a little text to describe the former. The tables and figures could be improved in several places (see detailed notes). Especially for the figures I would suggest to rethink again what they are supposed to show and whether their respective current format is really the best way to display that. Detailed comments can be found later in the review. We reviewed the text in the Analysis section along with the figures and tables based on the other comments. We have made adjustments where possible in response to the other comments. R4.2 I am not sure if this is an artifact of the review-pdf compilation, but almost all citations miss their dois and some citations are not meaningful without either a doi or an URL. Details can be found later in the review. We edited the paper to include DOIs or URLs where needed. R4.3 Up until line 247 (~1/3 of the paper), the text is written in a country-neutral way (including title and abstract), leaving the reader under the impression that the whole RSE landscape is under review. However, as the authors mention themselves: 'results differ across the world'. Thus, I strongly suggest to mention either in the title or at at least in the abstract the limitation that this survey primarily covers only the USA. We edited the title, abstract, the second to last sentence of the introduction, and Survey Participants to clarify the focus on the US. Experimental design R4.4 The raw answer data is provided, along with a (pdf) description of the questions. Something I miss is documentation of the mentioned partial dependencies of questions on answers of previous questions. This unnecessarily complicates a repetition or comparison with other surveys. 
In addition to providing these inter-dependencies in the supplementary material, it would be beneficial to the reader of the pdf-part of the publication if those could be highlighted or marked in some way. We have updated the PDF of the survey referenced in the text. R4.5 What I miss most is the source code of the scripts used to generate the results from the raw data, including the same for generating the figures. While it is not difficult to do this with the provided csv data from scratch, the vast number of reported numbers and plots would make scripts to generate all of those valuable, and in case of discrepancies between own and reported results the reader would be left without clear way to see where the differences are, in short: they would be very valuable to replicate the results from the data. This is especially disappointing given the connection of this issue with the topic of the paper and the background of the authors. It is entirely possible that I missed those scripts, but I could not find them either in the files submitted to the journal, nor in the data published on Zenodo. We have added a link to the scripts used to generate the figures. Additional comments While reading the article, I collected notes of everything I spotted, even small issues like typos and the like. In the following, those notes are given in roughly the order of appearance in the text. This also means that rather more important issues are mixed with rather trivial ones . I hope this order helps to speed up the improvement of this article, because I believe it to be valuable to the scientific community. For the trivial issues related to typos and wording, we have just fixed them in the text (using change tracking). For the more substantial issues, we keep them listed separately below along with a response for each one. Major edits R4.6 figure 1: It is not clear which group is which in each sub-plot (left/right vs. combination/developers?). It is also not mentioned in the lines that describe the plot: lines 435-438. In general, the figure also suffers from the small numbers displayed in each already small sub-plot. I wonder whether it would be better to switch axes and put all 10 sub-plots as rows into one plot which can then run the entire text width. Also (but hard to see and judge as it is right now), it might be worth noting that at least the left group (combination?) wishes for more training time than they actually use. We have added labels to the columns on Figure 1 to clarify. R4.7 line 442: 'The only one that is technical is testing.' It might be worth noting that this is only a particular high result for the 'combination' group, but not the 'developers' group. On the other hand, for those, 'requirements engineering' is comparatively high, while for the 'combination' group it is not. R4.7a line 444: 'Focusing on the one technical aspect that respondents perceived to be more difficult than it should be': this would imply that 'testing' would be the only technical aspect where respondents answered that they would be more difficult than they would be. However, figure 2 shows a lot of higher-than-zero answers for also other technical topics. 'Testing' is the one technical aspect that was selected most often when it comes to being more difficult than it should be, but not the only one. We have updated the text around this analysis to clarify that testing was one of the technical aspects. We added a bit more detail for the analysis of this question. 
R4.8 figure 2: It would be better to plot the two differently-colored bars side-by-side (like in figure 3). This way, the two groups can be more easily compared between bars of the same color. If, on the other hand, the differently-colored bars are meant to be 'behind each other', with the red always being 'in front of' the yellow bar, this should be indicated. Even better would be a percentage-based graph, which would also make it possible to compare the two respondent groups with each other, given their different size. Also, with ~50 being the highest number seen here for groups of a size a factor of 10 larger, I wonder whether this was a multiple-choice-question or a single choice question. The difference matters, as with a multiple-choice question a vast majority effectively answered for every aspect that it is not more difficult than it needs to be, while for a single-choice question only the 'most annoying' aspect could be selected, effectively producing a potentially drastically different plot than when asking the multiple-choice version. We have updated the figure and added some text into the description to clarify the question and the number of responses. This question was a multi-select question. R4.9: All figures seem to have been provided in png format. However, being a raster-format, this might produce low-quality results for various zoom-levels. Please consider using scalable formats instead for plots like the ones in this publication. We have made updates to the figures as requested by other comments. However, we still are using .png files for this version. We plan to update the figures to .svg for the final version of the paper. R4.10: figure 3: It is not clear which group forms the basis for the plot (combination or developer or the sum of both). The numbers in the text give indication that it is likely either combination or combination + developer, but it is not explicitly mentioned. We have updated this figure to clarify. R4.11: line 450: Within the first sentence, it is again not clear which group was taken as basis for those numbers. Given that this was already an issue for more than one question, it might be a good idea to add a general indicator in the paper for each question that marks which of the groups received that particular question and that numbers without group statement always mean 'everyone who received that particular question'. We added some text in the first paragraph of the Analysis section to clarify where each response came from. We also added a sentence at the end of the “Respondent Type” in the “Participant Demographics” section to clarify the point raised in this comment. R4.12 line 467: 'we see a different story': not necessarily. The given list of topics to document might not contain, or appear not to contain, the type of documentation applied when documenting code in-line. For instance, a comment on a particular loop structure might not appear to be within 'software design' it respondents correspond that answer with the general design of the overall software package. To clarify, we added the word “other” into the sentence. R4.13: figure 4: The text about the figure concentrates around percentages (e.g.: 'Less than 30% of the respondents reported'). Therefore, changing the figure to use percentages would help the reader. Although the figure itself will show large differences, the y-axis will. 
Also, since the text combines 'extremely supported' and 'very supported', it would be helpful to visualize those separately from the others, either by using similar colors (and dissimilar to the others), or to add a small black bar between these two and the rest. In addition, if the current order of the topics does not have special meaning, it might be good to sort them by, e.g., the percentage of combined 'extremely supported' and 'very supported'. We have updated all of the figures for consistency. We were not able to implement all of the requested changes here. But, we believe that the updated figure is still clear. Overall we chose to use raw numbers rather than percentages because we believed the raw numbers tell a more accurate story. R4.14: figures 19, 20 and 21 are for some reason displayed differently than most of the others before: here, the groups use different colors and the answers are displayed as different bars, while it is the other way around for the figures beforehand. It would be easier for the reader if this would be done more consistently. Besides consistency, using percentages instead of absolute numbers (and swapping variables as mentioned above) would the reader judge differences between different groups a lot easier. We have revised all figures for consistency. R4.15: line 484: From the text it is not clear which group received that question, i.e., which concrete dependency was connected to this question. We added some text in the first paragraph of the Analysis section to clarify where each response came from. Unless otherwise specified, the results came from all respondents. In this particular case (“version control and continuous integration”), the text specifies that it is a follow-up question. It is in the larger category of Tools. So, these responses are from those that indicated they wanted to answer follow-up questions on tools. R4.16: line 491: 'The lack of use of standard version control methods': The authors state earlier, that '83/87 of respondents answered to use version control. I cannot see a large lack of use of standard version control methods given those numbers. We revised that sentence to say: “While many respondents do use a standard version control system, the large number of Combination respondents who rely on zip file backups suggests that use of standard version control methods is an area where additional training could help.” R4.17: lines 511/514: like at line 500 it would be interesting (and should be within the data collected) to see whether this could be a correlation with something else, like differences in availability of training and gender ratio differences in different disciplines. We did not conduct this detailed analysis as part of our initial work. At this point, our funding has ended and we do not have the resources to conduct additional analyses. We believe the results of the paper are interesting and valuable without this additional analysis. R4.18: line 563: The text states 'Figure 10 shows, the respondents saw little chance for career advancement for those whose primary job is software development'. However, when looking at figure 10, the group of 'developers' show a far better opportunity for advancement (50/50 no/yes) when compared to the other two groups (far lower numbers of 'yes' compared to 'no'). Also, it is interesting to see for the group of researchers only, the group of 'don't know' is the largest overall. We updated the text around this figure to expand the discussion to include the Developers. 
R4.19: line 565: with so big differences between groups in figure 10, I wonder how relevant the statement concerning gender is really for the gender topic and how much could result from different gender ratios in the groups plotted here. This is a good observation, we have added this point to the Conclusion section. Minor edits (need discussing) R4.20a line 207: This sentence contradicts the sentence before directly, in which it is stated that only a minority of researchers (~20%) found 'formal training to be important or very important'. If the authors intended the meaning that this result may be caused by a lack of said training ('They don't put importance into something because they don't know it'), this should be made clearer. We revised the sentence to clarify. It now reads: “The results of these prior surveys suggest that research software developers \ins{are either unaware of their need for or} may not have access to sufficient \ins{formal} training in software development.” R4.20b lines 444-446: if possible during type-setting the paper for paper-format, it would be nice to use the space of these two lines for the figures and put the content of the lines on the next page. We have made a final check of the paper to fix typesetting issues where possible. R4.20c line 653: I would argue that any time spent on debugging is too much, as ideally there would not be any need for debugging. In this sense, it is not surprising that a lot of people responded that they spend more time for debugging than they ideally would (because that would be no time at all). We agree with this comment. However, we do not see any necessary change to the paper here. So we have not made any changes to the paper. R4.20d line 634: 'This answer again seems at odds'...: Not necessarily. Assuming most of these projects are within scientific institutions, those more likely do have one of both of these, so while the project might not have any, it would be covered by the institutional one (but the question only asked about the project specifically). Also, there are different views of the effectiveness of these measures, which might lead projects to adopt different practices to further inclusion. Given that we cannot make a conclusive determination from our data, we added the following sentence: “However it is possible that projects fall under institutional codes of conduct or have simply decided that a code of conduct is not the best way to encourage inclusion.” R4.20e line 488: The 'However' might not be warranted. It does imply that an overlap of respondents who answered 'use git most of the time' and 'use copy/zip most of the time' would be noteworthy and possibly troublesome. An alternative explanation might be that those with overlap indeed use both, to achieve an even higher level of backup than one single practice alone. We have edited the text in this section a bit. R4.20f line 500: It would be interesting to look at the reason for the (small) difference. I could imaging that different gender ratios in specific disciplines and different availability of training in those disciplines could have an effect of comparable size. While the text does not explicitly state a causation, it might be good to explicitly state that this is indeed by itself no indication for causation. We do not think the current text implies causation, so we did not make any changes. Minor edits We made all of the minor edits suggested "
Here is a paper. Please give your review comments after reading it.
412
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Research software is a critical component of contemporary scholarship. Yet, most research software is developed and managed in ways that are at odds with its long-term sustainability. This paper presents findings from a survey of 1149 researchers, primarily from the United States, about sustainability challenges they face in developing and using research software. Some of our key findings include a repeated need for more opportunities and time for developers of research software to receive training. These training needs cross the software lifecycle and various types of tools. We also identified the recurring need for better models of funding research software and for providing credit to those who develop the software so they can advance in their careers. The results of this survey will help inform future infrastructure and service support for software developers and users, as well as national research policy aimed at increasing the sustainability of research software.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In almost all areas of research, from hard sciences to the humanities, the processes of collecting, storing, and analyzing data and of building and testing models have become increasingly complex. Our ability to navigate such complexity is only possible because of the existence of specialized software, often referred to as research software. Research software plays such a critical role in day to day research that a comprehensive survey reports 90-95% of researchers in the US and the UK rely upon it and more than 60% were unable to continue working if such software stopped functioning <ns0:ref type='bibr' target='#b20'>(Hettrick, 2014)</ns0:ref>. While the research community widely acknowledges the importance of research software, the creation, development, and maintenance of research software is still ad hoc and improvised, making such infrastructure fragile and vulnerable to failure.</ns0:p><ns0:p>In many fields, research software is developed by academics who have varying levels of training, ability, and access to expertise, resulting in a highly variable software landscape. As researchers are under immense pressure to maintain expertise in their research domains, they have little time to stay current with the latest software engineering practices. In addition, the lack of clear career incentives for building and maintaining high quality software has made research software development unsustainable. The lack of career incentives has occurred partially because the academic environment and culture have developed over hundreds of years, while software has only recently become important, in some fields over the last 60+ years, but in many others, just in the last 20 or fewer years <ns0:ref type='bibr' target='#b16'>(Foster, 2006)</ns0:ref>.</ns0:p><ns0:p>Further, only recently have groups undertaken efforts to promote the role of research software (e.g., the Society of Research Software Engineers 1 , the US Research Software Engineer Association 2 ) and to 1 https://society-rse.org The open source movement has created a tremendous variety of software, including software used for research and software produced in academia. It is difficult for researchers to find and use these solutions without additional work <ns0:ref type='bibr' target='#b29'>(Joppa et al., 2013)</ns0:ref>. 
The lack of standards and platforms for categorizing software for communities often leads to re-developing instead of reusing solutions <ns0:ref type='bibr' target='#b24'>(Howison et al., 2015)</ns0:ref>. There are three primary classes of concerns, pervasive across the research software landscape, that have stymied this software from achieving maximum impact <ns0:ref type='bibr' target='#b11'>(Carver et al., 2018)</ns0:ref>.</ns0:p><ns0:p>&#8226; Functioning of the individual and team: issues such as training and education, ensuring appropriate credit for software development, enabling publication pathways for research software, fostering satisfactory and rewarding career paths for people who develop and maintain software, and increasing the participation of underrepresented groups in software engineering.</ns0:p><ns0:p>&#8226; Functioning of the research software: supporting sustainability of the software; growing community, evolving governance, and developing relationships between organizations, both academic and industrial; fostering both testing and reproducibility, supporting new models and developments (e.g., agile web frameworks, Software-as-a-Service), supporting contributions of transient contributors (e.g., students), creating and sustaining pipelines of diverse developers.</ns0:p><ns0:p>&#8226; Functioning of the research field itself : growing communities around research software and disparate user requirements, cataloging extant and necessary software, disseminating new developments and training researchers in the usage of software.</ns0:p><ns0:p>In response to some of the challenges highlighted above, the US Research Software Sustainability Institute (URSSI) 6 conceptualization project, funded by NSF, is designing an institute that will help with the problem of sustaining research software. The overall goal of the conceptualization process is to bring the research software community together to determine how to address known challenges to the development and sustainability of research software and to identify new challenges that need to be addressed. One important starting point for this work is to understand and describe the current state of the practice in the United States relative to those important concerns. Therefore, in this paper we describe the results of a community survey focused on this goal.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>Previous studies of research software have often focused on the development of cyberinfrastructure <ns0:ref type='bibr' target='#b6'>(Borgman et al., 2012)</ns0:ref> and the various ways software production shapes research collaboration <ns0:ref type='bibr'>(Howison and</ns0:ref><ns0:ref type='bibr'>Herbsleb, 2011, 2013;</ns0:ref><ns0:ref type='bibr' target='#b42'>Paine and Lee, 2017)</ns0:ref>. While these studies provide rich contextual observations about research software development processes and practices, their results are difficult to generalize because they often focus either on small groups or on laboratory settings. Therefore, there is a need to gain a broader understanding of the research software landscape in terms of challenges that face individuals seeking to sustain research software.</ns0:p><ns0:p>A number of previous surveys have provided valuable insight into research software development and use, as briefly described in the next subsection. 
Based on the results of these surveys and from other related literature, the remainder of this section motivates a series of research questions focused on important themes related to the development of research software. The specific questions are based on the authors' experience in common topics mentioned in the first URSSI workshop <ns0:ref type='bibr' target='#b46'>(Ram et al., 2018)</ns0:ref> as well as previously published studies of topics of interest to the community <ns0:ref type='bibr' target='#b35'>(Katz et al., 2019;</ns0:ref><ns0:ref type='bibr'>Fritzsch, 2019)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Previous Surveys</ns0:head><ns0:p>The following list provides an overview of the previous surveys on research software, including the context of each survey. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the surveys.</ns0:p><ns0:p>&#8226; How do Scientists Develop and Use Scientific Software? <ns0:ref type='bibr' target='#b19'>(Hannay et al., 2009)</ns0:ref> describes the results of a survey of 1972 scientists who develop and use software. The survey focused on questions about (1) how and when scientists learned about software development/use, (2) the importance of developing/using software, (3) time spent developing/using software, (4) hardware platforms, (5) user communities, and (6) software engineering practices.</ns0:p><ns0:p>&#8226; How Do Scientists Develop Scientific Software? An External Replication <ns0:ref type='bibr' target='#b44'>(Pinto et al., 2018)</ns0:ref> is a replication of the previous study <ns0:ref type='bibr' target='#b19'>(Hannay et al., 2009)</ns0:ref> conducted ten years later. The replication focused on scientists who develop R packages. The survey attracted 1553 responses. The survey asked very similar questions to the original survey, with one exception. In addition to replicating the original study, the authors also asked respondents to identify the 'most pressing problems, challenges, issues, irritations, or other 'pain points' you encounter when developing scientific software.' A second paper, Naming the Pain in Developing Scientific Software <ns0:ref type='bibr'>(Wiese et al., 2020)</ns0:ref>, describes the results of this question in the form of a taxonomy of 2,110 problems that are either (1) technical-related, (2) social-related, or (3) scientific-related.</ns0:p><ns0:p>&#8226; A Survey of Scientific Software Development <ns0:ref type='bibr' target='#b40'>(Nguyen-Hoan et al., 2010)</ns0:ref> surveyed researchers in Australia working in multiple scientific domains. 
The survey focused on programming language use, software development tools, development teams and user bases, documentation, testing and verification, and non-functional requirements.</ns0:p><ns0:p>&#8226; A Survey of the Practice of Computational Science <ns0:ref type='bibr' target='#b45'>(Prabhu et al., 2011)</ns0:ref> reports the results of interviews of 114 respondents from a diverse set of domains all working at Princeton University.</ns0:p><ns0:p>The interviews focused on three themes: (1) programming practices, (2) computational time and resource usage, and (3) performance enhancing methods.</ns0:p><ns0:p>&#8226; Troubling Trends in Scientific Software <ns0:ref type='bibr' target='#b29'>(Joppa et al., 2013)</ns0:ref> reports on the results from about 450 responses working in a specific domain, species distribution modeling, that range from people who find software difficult to use to people who are very experienced and technical. The survey focused on understanding why respondents chose the particular software they used and what other software they would like to learn how to use.</ns0:p><ns0:p>&#8226; Self-Perceptions About Software Engineering: A Survey of Scientists and Engineers <ns0:ref type='bibr' target='#b8'>(Carver et al., 2013)</ns0:ref> reports the results from 141 members of the Computational Science &amp; Engineering community. The primary focus of the survey was to gain insight into whether the respondents thought they knew enough software engineering to produce high-credibility software. The survey also gathered information about software engineering training and about knowledge of specific software engineering practices.</ns0:p><ns0:p>&#8226; 'Not everyone can use Git:' Research Software Engineers' recommendations for scientist-centered software support (and what researchers really think of them) <ns0:ref type='bibr' target='#b27'>(Jay et al., 2016)</ns0:ref> describes a study that includes both Research Software Engineers and domain researchers to understand how scientists publish code. The researchers began by interviewing domain scientists who were trying to publish their code to identify the barriers they faced in publishing their code. Then they interviewed Research Software Engineers to understand how they would address those barriers. Finally, they synthesized the results from the Research Software Engineer interviews into a series of survey questions sent to a larger group of domain researchers.</ns0:p><ns0:p>&#8226; It's impossible to conduct research without software, say 7 out of 10 UK researchers <ns0:ref type='bibr' target='#b21'>(Hettrick, 2018</ns0:ref><ns0:ref type='bibr' target='#b20'>(Hettrick, , 2014) )</ns0:ref> describes the results of 417 responses to a survey of 15 Russel Group Universities in the UK.</ns0:p><ns0:p>The survey focused on describing the characteristics of software use and software development within research domains. 
The goal was to provide evidence regarding the prevalence of software and its fundamental importance for research.

• Surveying the US National Postdoctoral Association Regarding Software Use and Training in Research (Nangia and Katz, 2017).

Software Engineering Practices
Based on the results of the surveys described in the previous subsection, we can make some observations about the use of various software engineering practices employed while developing software. The set of practices research developers find useful appears to have some overlap with, and some differences from, the practices employed by developers of business or IT software. Interestingly, the results of the previous surveys do not paint a consistent picture regarding the importance and/or usefulness of various practices.

Our current survey is motivated by the inconsistencies in previous results and the fact that some key areas are not adequately covered by previous surveys. Here we highlight some of the key results from these previous surveys, organized roughly in the order of the software engineering lifecycle.

Requirements: The findings of two surveys (Pinto et al., 2018; Hannay et al., 2009) reported both that requirements were important to the development of research software but also that they were one of the least understood phases. Other surveys reported that (1) requirements management is the most difficult technical problem (Wiese et al., 2020) and (2) the amount of requirements documentation is low (Nguyen-Hoan et al., 2010).

Design: Similar to requirements, surveys reported that design was one of the most important phases (Hannay et al., 2009) and one of the least understood phases (Pinto et al., 2018; Hannay et al., 2009).

Testing: There were strikingly different results related to testing. A prior survey of research software engineers found almost 2/3 of developers do their own testing, but less than 10% reported the use of formal testing approaches (Philippe et al., 2019). Some surveys (Pinto et al., 2018; Hannay et al., 2009) reported that testing was important. However, another survey reported that scientists do not regularly test their code (Prabhu et al., 2011).
Somewhere in the middle, another survey reports that testing is commonly used, but the use of integration testing is low (Nguyen-Hoan et al., 2010).

Software Engineering Practices Summary: This discussion leads to the first research question:
RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software?

Software tools and support
Development and maintenance of research software includes both the use of standard software engineering tools such as version control (Milliken et al., 2021) and continuous integration (Shahin et al., 2017). In addition, these tasks require custom libraries developed for specific analytic tasks or even language-specific interpreters that ease program execution (... et al., 2013).

A UK survey (Hettrick, 2014, 2018) ... Institutions that support research (e.g., universities and national laboratories) and grant-making bodies that fund research (e.g., federal agencies and philanthropic organizations) often fail to recognize the central importance of software development and maintenance in conducting novel research (Goble, 2014). In turn, there is little direct financial support for the development of new software or the sustainability of existing software upon which research depends (Katerbow et al., 2018). In particular, funding agencies typically have not supported the continuing work needed to maintain software after its initial development.

This lack of support is despite increasing recognition of reproducibility and replication crises that depend, in part, upon reliable access to the software used to produce a new finding (Hocquet and Wieber, 2021).

In reaction to a recognized gap in research funding for sustainable software, many projects have attempted to demonstrate the value of their work through traditional citation and impact analysis (Anzt et al., 2021)

Career Paths
While most of the previous surveys did not address the topic of career paths, the survey of research software engineers (Philippe et al., 2019) did briefly address this question. Because the results differ across the world and our paper focuses on the US, we only report results for respondents in the US. First, 57% of respondents were funded by grants and 47% by institutional support. Second, respondents had been in their current position for an average of 8.5 years. Last, 97% were employed full-time.

Because of the lack of information from prior surveys, we focus the rest of this discussion on other work to provide background. In 2012, the Software Sustainability Institute (SSI) organized the Collaborations Workshop 7 that addressed the question: why is there no career for software developers in academia?
The work of the participants and of the SSI's policy team led to the foundation of the UK RSE association and later to the Society of Research Software Engineering. RSEs around the world are increasingly forming national RSE associations, including the US Research Software Engineer Association (US-RSE) 8.

Current evaluation and promotion processes in academia and national labs typically follow the traditional pattern of rewarding activities that include publications, funding, and advising students. However, there are other factors that some have considered. Managers of RSE teams state that when hiring research developers, it is important that those developers are enthusiastic about research topics and have problem-solving capabilities 9. Another factor, experience in research software engineering, can be evaluated by contributions to software on platforms like GitHub. However, while lines of code produced, number of solved bugs, and work hours may not be ideal measures of developer productivity, they can provide insight into the sustainability and impact of research software; that is, the presence of an active community behind a software package that resolves bugs and interacts with users is part of the sustainability of the software and its impact on research 10. In addition, CaRCC (the Campus Research Computing Consortium) has defined job families and templates for job positions that can be helpful both for hiring managers and for HR departments that want to recognize the role of RSEs and HPC Facilitators in their organizations 11. However, there is still not a clearly defined and widely accepted career path for research software engineers in the US. We pose the following research question to guide our specific survey questions related to career paths: RQ5: What factors impact career advancement and hiring in research software?

[Footnotes: 7: http://software.ac.uk/cw12; 8: http://us-rse.org/; 9: https://cosden.github.io/improving-your-RSE-application; 10: https://github.com/Collegeville/CW20/blob/master/WorkshopResources/WhitePapers/gesing-team-organization.pdf]

Credit
While most of the previous surveys did not address the topic of credit, the survey of research software engineers (Philippe et al., 2019) does contain a question about how researchers are acknowledged when their software contributes to a paper. The results showed that 47% were included as a co-author, 18% received only an acknowledgement, and 21% received no mention at all. Because of the lack of survey results related to credit, we focus on other work to provide the necessary background.

The study of credit leads to a set of interlinked research questions. We can answer these questions by directly asking software developers and software project collaborators to provide their insights.
Here we take a white box approach and examine the inside of the box.</ns0:p><ns0:p>&#8226; How do individuals want their contributions to software projects to be recognized, both as individuals and as members of teams?</ns0:p><ns0:p>&#8226; How do software projects want to record and make available credit for the contributions to the projects?</ns0:p><ns0:p>In addition, these respondents can help answer additional questions from their perspective as someone external to other organizations. Here we can only take a black box approach and examine the box from the outside. (A white box approach would require a survey of different participants.)</ns0:p><ns0:p>&#8226; How does the existing ecosystem, based largely on the historical practices related to contributions to journal and conference papers and monographs, measure, store, and disseminate information about contributions to software?</ns0:p><ns0:p>&#8226; How does the existing ecosystem miss information about software contributions?</ns0:p><ns0:p>&#8226; How do institutions (e.g., hiring organizations, funding organizations, professional societies) use the existing information about contributions to software, and what information is being missed?</ns0:p><ns0:p>We also recognize that there are not going to be simple answers to these questions (CASBS Group on Best Practices in Science, 2018; <ns0:ref type='bibr' target='#b0'>Albert and Wager, 2013)</ns0:ref>, and that any answers will likely differ to some extent between disciplines <ns0:ref type='bibr' target='#b14'>(Dance, 2012)</ns0:ref>. Many professional societies and publishers have specific criteria for authorship of papers (e.g., they have made substantial intellectual contributions, they have participated in drafting and/or revision of the manuscript, they agree to be held accountable for any issues relating to correctness or integrity of the work (Association for Computing Machinery, 2018)), typically suggesting that those who have contributed but do not meet these criteria be recognized via an acknowledgment. While this approach is possible in a paper, there is no equivalent for software, other than papers about software. In some disciplines, such as those where monographs are typical products, there may be no formal guidelines. Author ordering is another challenge. The ordering of author names typically has some meaning, though the meaning varies between disciplines. Two common practices are alphabetic ordering, such as is common in economics (Weber, 2018) and ordering by contribution with the first author being the main contributor and the last author being the senior project leader, as occurs in many fields <ns0:ref type='bibr' target='#b47'>(Riesenberg and Lundberg, 1990)</ns0:ref>. 
The fact that the contributions of each author are unclear has led to activities and ideas to record those contributions in more detail (Allen et al., 2014; The OBO Foundry, 2020; Katz, 2014).

Software in general has not been well-cited (Howison and Bullard, 2016), in part because the scholarly culture has not treated software as something that should be cited, or in some cases, even mentioned.

The recently perceived reproducibility crisis (Baker, 2016) has led to changes, first for data (which also was not being cited (Task Group on Data Citation Standards and Practices, 2013)) and more recently for software. For software, these changes include the publication of software papers, both in general journals and in journals that specialize in software papers (e.g., the Journal of Open Source Software (Smith et al., 2018)), as well as calls for direct software citation (Smith et al., 2016) and further guidance on software
In open-source software projects codes of conduct have been shown to be widely reused (e.g. Ubuntu, Contributor Covenant, Django, Python, Citizen, Open Code of Conduct, and Geek Feminism have been reused more than 500 times by projects on GitHub) <ns0:ref type='bibr' target='#b54'>(Tourani et al., 2017)</ns0:ref>.</ns0:p><ns0:p>There are few studies of the role and use of codes of conduct in research software development. <ns0:ref type='bibr' target='#b29'>Joppa et al. (2013)</ns0:ref> point to the need for developing rules which govern multiple aspects of scientific software development, but specific research that addresses the prevalence, impact, and use of a code of conduct in research software development have not been previously reported.</ns0:p><ns0:p>The 2018 survey of the research software engineer community across seven countries <ns0:ref type='bibr' target='#b43'>(Philippe et al., 2019)</ns0:ref> showed the percentage of respondents who identified as male as between 73% (US) and 91% (New Zealand). Other diversity measures are country-specific and were only collected in the UK and US, but in both, the dominant group is overrepresented compared with its share of the national population.</ns0:p><ns0:p>Therefore, we pose the following research question to guide our specific survey questions related to diversity -RQ7: How do current Research Software Projects document diversity statements or codes of conduct, and what support is needed to further diversity initiatives?</ns0:p></ns0:div> <ns0:div><ns0:head>METHODS</ns0:head><ns0:p>To understand sustainability issues related to the development and use of research software, we developed a Qualtrics 12 survey focused on the seven research questions defined in the Background section. This section describes the design of the survey, the solicited participants, and the qualitative analysis process we followed.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Design</ns0:head><ns0:p>We designed the survey to capture information about how individuals develop, use, and sustain research software. The survey first requested demographic information to help us characterize the set of respondents.</ns0:p><ns0:p>Then, we enumerated 38 survey questions (35 multiple choice and 3 free response). We divided these questions among the seven research questions defined in the Background Section. This first set of 38 questions went to all survey participants, who were free to skip any questions.</ns0:p><ns0:p>Then, to gather more detailed information, we gave each respondent the option to answer follow-up questions on one or more of the seven topic areas related to the research questions. For example, if a respondent was particularly interested in Development Practices she or he could indicate their interest in answering more questions about that topic. Across all seven topics, there were 28 additional questions (25 multiple choice and 3 free response). Because the follow-up questions for a particular topic were only presented to respondents who expressed interest in that topic, the number of respondent to these questions is significantly lower than the number of respondents to first set of 38 questions. This discrepancy in the number of respondents is reflected in the data presented below.</ns0:p><ns0:p>In writing the questions, where possible, we replicated the wording of questions from the previous surveys about research software (described in the Background Section). 
In addition, because we assumed that respondents would be familiar with the terms used in the survey and to simplify the text, we did not provide definitions of terms in the survey itself.</ns0:p></ns0:div> <ns0:div><ns0:head>Survey Participants</ns0:head><ns0:p>We distributed the survey to potential respondents through two primary venues:</ns0:p><ns0:p>1. Email Lists: To gather a broad range of perspectives, we distributed the survey to 33,293 United States NSF and 39,917 United States NIH PIs whose projects were funded for more than $200K in the five years prior to the survey distribution and involve research software and to mailing lists of research software developers and research software projects.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Snowball Sampling:</ns0:head><ns0:p>We also used snowballing by asking people on the email lists to forward the survey to others who might be interested. We also advertised the survey via Twitter.</ns0:p><ns0:p>The approach we used to recruit participants makes it impossible to calculate a response rate. We do not know how many times people forwarded the survey invitation or the number of potential participants reached by the survey.</ns0:p></ns0:div> <ns0:div><ns0:head>Research Ethics</ns0:head><ns0:p>We received approval for the survey instruments and protocols used in this study from the University of Notre Dame Committee Institutional Review Board for Social and Behavioral Responsible Conduct of . Prior to taking the survey, respondents had to read and consent to participate. If a potential respondent did not consent, the survey terminated. To support open science, we provide the following information: (1) the full text of the survey and (2) a sanitized version of the data <ns0:ref type='bibr' target='#b7'>(Carver et al., 2021)</ns0:ref>. We also provide a link to the scripts used to generate the figures that follow <ns0:ref type='bibr' target='#b9'>(Carver et al., 2022)</ns0:ref>. Qualtrics collected the IP address and geo location from survey respondents.</ns0:p><ns0:p>We removed these columns from the published dataset. However, we did not remove all comments that might lead people to make educated guesses about the respondents.</ns0:p></ns0:div> <ns0:div><ns0:head>ANALYSIS</ns0:head><ns0:p>After providing an overview of the participant demographics, we describe the survey results relative to each of the research questions defined in the Background section. Because many of the survey questions were optional and because the follow-up questions only went to a subset of respondents, we report the number of respondents for each question along with the results below. To clarify which respondents received each question, we provide some text around each result. In addition, when reporting results from a follow-up question, the text specifically indicates that it is a follow-up question and the number of respondents will be much smaller.</ns0:p></ns0:div> <ns0:div><ns0:head>Participant Demographics</ns0:head><ns0:p>We use each of the key demographics gathered on the survey to characterize the respondents (e.g. the demographics of the sample). Note that because some questions were optional, the number of respondents differs across the demographics.</ns0:p><ns0:p>Respondent Type We asked each respondent to characterize their relationship with research software as one of the following: Job Title People involved in developing and using research software have various job titles. 
For our respondents, Faculty was the most common, given by 63% (668/1046) of the respondents and 79% (354/447) of the Researcher type respondents. No other title was given by more then 6% of the respondents.</ns0:p><ns0:p>Respondent Age Overall, 77% (801/1035) of the respondents are between 35 and 64 years of age. The percentage is slightly higher for Researcher type respondents (370/441 -84%) and slightly lower for Combination type respondents (378/514 -74%).</ns0:p><ns0:p>Respondent Experience The respondent pool is highly experienced overall, with 77% (797/1040) working in research for more than 10 years and 39% (409/1040) for more than 20 years. For the Researcher type respondents, those numbers increase to 84% (373/444) with more than 10 years and 44% <ns0:ref type='bibr'>(197/444)</ns0:ref> with more than 20 years.</ns0:p><ns0:p>Gender In terms of self-reported gender, 70% (732/1039) were Male, 26% (268/1039) were Female, with the remainder reporting Other or Prefer not to say. For Researcher respondents, the percentage of Females is higher (151/443 -34%) Discipline The survey provided a set of choices for the respondents to choose their discipline(s).</ns0:p><ns0:p>Respondents could choose more than one discipline. Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref> shows the distribution of respondents by discipline. Though the respondents represent a number of research disciplines, our use of NSF and NIH mailing lists likely skewed the results towards participants from science and engineering fields.</ns0:p></ns0:div> <ns0:div><ns0:head>Discipline</ns0:head><ns0:p>Total Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Software Engineering Practices</ns0:head><ns0:p>This section focuses on answering RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software?</ns0:p><ns0:p>Where Respondents Spend Software Time We asked respondents what percentage of their time they currently spend on a number of software activities and what percentage of time they would ideally like to spend on those activities. As Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> shows, there is a mismatch between these two distributions, SpentTime and IdealSpentTime, respectively. Overall, respondents would like to spend more time in design and coding and less time in testing and debugging. However, the differences are relatively small in most cases. The box and whisker plots 13 in Figure <ns0:ref type='figure'>2</ns0:ref> illustrate the results for this follow-up question. This question was multi-select where respondents could choose as many answers as were appropriate. Interestingly, the aspects most commonly reported are those that are more related to people issues rather than to technical issues (e.g. finding personnel/turnover, communication, use of best practices, project management, and keeping up with modern tools). The only ones that were technical were testing and porting.</ns0:p><ns0:p>Use of Testing Focusing on one of the technical aspect that respondents perceived to be more difficult than it should be, we asked the respondents how frequently they employ various types of testing, including:</ns0:p><ns0:p>Unit, Integration, System, User, and Regression. The respondents could choose from frequently, somewhat, rarely, and never. Figure <ns0:ref type='figure'>3</ns0:ref> shows the results from this question. 
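As a purely hypothetical illustration of two of these levels (not drawn from any surveyed project), a unit test exercises one routine in isolation, while a regression test pins down previously verified behavior so later changes cannot silently break it. A minimal sketch using Python's built-in unittest module might look like this; the function and test names are assumptions made only for the example:

```python
# Hypothetical example distinguishing a unit test from a regression test.
import unittest


def moving_average(values, window):
    """Simple moving average, used here only as a stand-in research routine."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]


class TestMovingAverage(unittest.TestCase):
    def test_unit_basic_window(self):
        # Unit test: checks one small behavior of one function in isolation.
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_regression_previously_fixed_edge_case(self):
        # Regression test: encodes output that was verified once (e.g. after a
        # bug fix) so future changes cannot silently reintroduce the problem.
        self.assertEqual(moving_average([5], 1), [5.0])


if __name__ == "__main__":
    unittest.main()
```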
The only type of testing more than 50% of respondents used frequently was Unit testing (231/453 -51%). On the other extreme only about 25% reported using System (118/441 -27%) or Regression testing (106/440 -24%) frequently.</ns0:p><ns0:p>Use of Open-Source Licensing Overall 74% (349/470) of the respondents indicated they used an open-source license. This percentage was consistent across both combination and developer respondents.</ns0:p><ns0:p>However, this result still leaves 26% of respondents who do not release their code under an open-source license.</ns0:p><ns0:p>Frequency of using *best* practices As a follow-up question, we asked the respondents how frequently they used a number of standard software engineering practices. The response options were Never, 13 The dots represent outliers.</ns0:p></ns0:div> <ns0:div><ns0:head>11/32</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed &#8226; Continuous Integration -54% (54/100)</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>&#8226; Use of coding standards -54% (54/100)</ns0:p><ns0:p>&#8226; Architecture or Design -51% (52/101) Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Are there sufficient opportunities for training? When we turn to the availability of relevant training opportunities, an interesting picture emerges. As Figure <ns0:ref type='figure'>5</ns0:ref> shows, slightly more than half of the respondents indicate there is sufficient training available for obtaining new software skills. However, when looking at the response based upon gender, there is a difference with 56% of male respondents answering positively but only 43% of the female respondents answering positively. But, as Figure <ns0:ref type='figure'>6</ns0:ref> indicates, approximately 75% of the respondents indicated they do not have sufficient time to take advantage of these opportunities.</ns0:p><ns0:p>These results are slightly higher for female respondents (79%) compared with male respondents (73%).</ns0:p><ns0:p>So, while training may be available, respondents do not have adequate time to take advantage of it.</ns0:p></ns0:div> <ns0:div><ns0:head>Preferred modes for delivery of training</ns0:head><ns0:p>The results showed that there is not a dominant approach preferred for training. Carpentries, Workshops, MOOCs, and On-site custom training all had approximately the same preference across all three topic ares (Development Techniques, Languages, and Project Management). This result suggests that there is benefit to developing different modes of training about important topics, because different people prefer to learn in different ways.</ns0:p></ns0:div> <ns0:div><ns0:head>Funding</ns0:head><ns0:p>This section uses the relevant survey questions to answer the two research questions related to funding:</ns0:p><ns0:p>&#8226; RQ4a: What is the available institutional support for research software development?</ns0:p><ns0:p>&#8226; RQ4b: What sources of institutional funding are available to research software developers?</ns0:p><ns0:p>First, 54% (450/834) of the respondents reported they have included funding for software in their Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>proposals. 
However, that percentage drops to 30% (124/408) for respondents who identify primarily as Researchers.</ns0:p><ns0:p>When looking at the specific types of costs respondents include in their proposals, 48% (342/710) include costs for developing new software, 22% (159/710) for reusing existing software and 29% (209/710) for maintaining/sustaining software. [Note that respondents could provide more than one response.] It is somewhat surprising to see such a large number of respondents who include funding for maintaining and sustaining software.</ns0:p><ns0:p>In examining the source of funding for the projects represented by the survey respondents, the largest funder is NSF, at 36%. But, as Figure <ns0:ref type='figure'>7</ns0:ref> 14 shows, a significant portion of funding comes from the researchers' own institutions. While other funding agencies provide funding for the represented projects and may be very important for individual respondents, overall, they have little impact. This result could have been impacted by the fact that we used a mailing list of NSF projects as one means of distributing the survey. However, we also used a list of NIH PIs who led projects funded at least at $200K, so it is interesting that NSF is still the largest source.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 7. Sources of Funding</ns0:head><ns0:p>In terms of the necessary support, Figure <ns0:ref type='figure' target='#fig_8'>8</ns0:ref> indicates that, while institutions do provide some RSE, financial, and infrastructure support, it is inadequate to meet the respondents' needs, overall. In addition, when asked in a follow-up question whether the respondents have sufficient funding to support software development activities for their research the overwhelming answer is no (Figure <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>).</ns0:p><ns0:p>When asked about whether current funding adequately supports some key phases of the software lifecycle, the results were mixed. Respondents answered on a scale of 1-5 from insufficient to sufficient.</ns0:p><ns0:p>For Developing new software and Modifying or reusing existing software there is an relatively uniform distribution of responses across the five answer choices. However, for Maintaining software, the responses skew towards the insufficient end of the scale.</ns0:p><ns0:p>For respondents who develop new software, we asked (on a 5-point scale) whether their funding supports various important activities, including refactoring, responding to bugs, testing, developing new 14 Figure <ns0:ref type='figure'>7</ns0:ref> is a box and whisker plot. Except for NSF and 'Your own institution', the minimum, Q1, median, and Q3 are all zero. The dashed line is the mean for these categories. The dots are all outliers.</ns0:p></ns0:div> <ns0:div><ns0:head>16/32</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science <ns0:ref type='formula'>253</ns0:ref>), and Programmer (253). There were also a good number of respondents who were Faculty (215) or Research Faculty (242). Note that people could provide more than one answer, so the total exceeds the number of respondents.</ns0:p><ns0:p>While there are a number of job titles that research software developers can fill, unfortunately, as Figure <ns0:ref type='figure' target='#fig_4'>10</ns0:ref> shows, the respondents saw little chance for career advancement for those whose primary role is software development. 
Only 21% (153/724) of the Combination and Researcher respondents saw opportunity for advancement. The numbers were slightly better at 42% (24/57) for those who viewed themselves as Developers. When we look at the result by gender, only 16% (32/202) of the female respondents see an opportunity for advancement compared with 24% (139/548) for the male respondents. </ns0:p></ns0:div> <ns0:div><ns0:head>Credit</ns0:head><ns0:p>This section uses the relevant survey questions to answer the two research questions related to credit:</ns0:p><ns0:p>&#8226; RQ6a: What do research software projects require for crediting or attributing software use?</ns0:p><ns0:p>&#8226; RQ6b: How are individuals and groups given institutional credit for developing research software?</ns0:p><ns0:p>When asked how respondents credit software they use in their research, as Figure <ns0:ref type='figure' target='#fig_6'>14</ns0:ref> shows, the most common approaches are either to cite a paper about the software or to mention the software by name.</ns0:p><ns0:p>Interestingly, authors tended to cite the software archive itself, mention the software URL, or cite the software URL much less frequently. Unfortunately, this practice leads to fewer trackable citations of the software, making it more difficult to judge its impact.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 14. How Authors Credit Software Used in Their Research</ns0:head><ns0:p>Following on this trend of software work not being properly credited, when asked how they currently receive credit for their own software contributions, as Figure <ns0:ref type='figure' target='#fig_11'>15</ns0:ref> shows, none of the standard practices appear to be used very often.</ns0:p><ns0:p>An additional topic related to credit is whether respondent's contributions are valued for performance reviews or promotion within their organization. As Figure <ns0:ref type='figure' target='#fig_4'>16</ns0:ref> shows, approximately half of the respondents indicate that software contributions are considered. Another large percentage say that it depends.</ns0:p><ns0:p>While it is encouraging that a relatively large percentage of respondents' institutions consider software during performance reviews and promotion some or all of the time, the importance of those contributions is still rather low, especially for respondents who identify as Researchers, as shown in Figure <ns0:ref type='figure' target='#fig_4'>17</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Diversity</ns0:head></ns0:div> <ns0:div><ns0:head>This section focuses on answering RQ7: How do current Research Software Projects document diversity statements and what support is needed to further diversity initiatives?</ns0:head><ns0:p>When asked how well their projects recruit, retain, and include in governance participants from underrepresented groups, only about 1/3 of the respondents thought they did an 'Excellent' or 'Good' job. Interestingly, when asked how well they promote a culture of inclusion, 68% of the respondents (390/572) indicated they did an 'Excellent' or 'Good' job. These two responses seem to be at odds with each other, suggesting that perhaps projects are not doing as well as they think they are. Conversely, it could be that projects do not do a good job of recruiting diverse participants, but do a good job of supporting the ones they do recruit. 
Figure <ns0:ref type='figure' target='#fig_8'>18</ns0:ref> shows the details of these responses.</ns0:p><ns0:p>We asked follow-up questions about whether the respondents' projects have a diversity/inclusion statement or a code of conduct. As Figures <ns0:ref type='figure' target='#fig_9'>19 and 20</ns0:ref> show, most projects do not have either of these, nor do they plan to develop one. This answer again seems at odds with the previous answer that most people thought their projects fostered a culture of inclusions. However it is possible that projects fall under institutional codes of conduct or have simply decided that a code of conduct is not the best way to encourage inclusion.</ns0:p><ns0:p>Lastly, in a follow-up question we asked respondents to indicate the aspects of diversity or inclusion for which they could use help. As Figure <ns0:ref type='figure' target='#fig_4'>21</ns0:ref> shows, respondents indicated they needed the most help with recruiting, retaining, and promoting diverse participants. They also need help with developing diversity/inclusion statements and codes of conduct.</ns0:p></ns0:div> <ns0:div><ns0:head>THREATS TO VALIDITY</ns0:head><ns0:p>To provide some context for these results and help readers properly interpret them, this section describes the threats to validity and limits of the study. While there are multiple ways to organize validity threats, we organize ours in the following three groups.</ns0:p></ns0:div> <ns0:div><ns0:head>Internal Validity Threats</ns0:head><ns0:p>Internal validity threats are those conditions that reduce the confidence in the results that researchers can draw from the analysis of the included data.</ns0:p><ns0:p>A common internal validity threat for surveys is that the data is self-reported. Many of the questions in our survey rely upon the respondent accurately reporting their perception of reality. While we have no Manuscript to be reviewed Computer Science information that suggests respondents were intentionally deceptive, it is possible that their perception about some questions was not consistent with their reality.</ns0:p><ns0:p>A second internal validity threat relates to how we structured the questions and which questions each respondent saw. Due to the length of the survey and the potential that some questions may not be relevant for all respondents, we did not require a response to all questions. In addition, we filtered out questions based on the respondent type (Researcher, Combination, or Developer) when those questions were not relevant. Last, for each topic area, we included a set of optional questions for those who wanted to provide more information. These optional questions were answered by a much smaller set of respondents.</ns0:p><ns0:p>Taken together, these choices mean that the number of respondents to each question varies. In the results reported above, we included the number of people who answered each question.</ns0:p></ns0:div> <ns0:div><ns0:head>Construct Validity Threats</ns0:head><ns0:p>Construct validity threats describe situations where there is doubt in the accuracy of the measurements in the study. In these cases, the researchers may not be fully confident that the data collected truly measures the construct of interest.</ns0:p><ns0:p>In our study, the primary threat to construct validity relates to the respondents' understanding of key terminology. The survey used a number of terms related to the research software process. 
It is possible that some of those terms may have been unfamiliar to the respondents. Because of the length of the survey and the large number of terms on the survey, we chose not to provide explicit definitions for each concepts.</ns0:p><ns0:p>While we have no evidence that raises concerns about this issue, it is possible that some respondents interpreted questions in ways other than how we originally intended.</ns0:p></ns0:div> <ns0:div><ns0:head>External Validity Threats</ns0:head><ns0:p>External validity threats are those conditions that decrease the generalizablity of the results beyond the specific sample included in the study. We have identified two key external validity threats.</ns0:p><ns0:p>The first is general sampling bias. We used a convenience sample for recruiting our survey participants.</ns0:p><ns0:p>While we attempted to cast a very wide net using the sources described in the Survey Participants section, we cannot be certain that the sample who responded to the survey is representative of the overall population of research software developers in the United States.</ns0:p><ns0:p>The second is the discrepancy among the number of respondents of each type (Researcher, Combination, Developer). Because we did not know how many people in the population would identify with each respondent type, we were not able to use this factor in our recruitment strategy. As a result, the number of respondents from each type differs. This discrepancy could be representative of the overall population or it could also suggest a sampling bias.</ns0:p><ns0:p>Therefore, while we have confidence in the results described above, these results may not be generalizable to the larger population.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>We turn now to a discussion of the results of our survey, and the implied answers to our research questions. In each subsection, we restate the original research question, highlight important findings, and contextualize these findings in relation to software sustainability.</ns0:p></ns0:div> <ns0:div><ns0:head>Software Engineering Practices</ns0:head><ns0:p>RQ1: What activities do research software developers spend their time on, and how does this impact the perceived quality and long-term accessibility of research software? Across a number of questions about software engineering practices, our respondents report the aspects of the software development process that were more difficult than expected were related to people, rather than mastering the use of a tool or technique. Respondents reported they thought testing was important, but our results show only a small percentage of respondents frequently use system testing (27% of respondents) and regression testing (24% of respondents). This result suggests a targeted outreach on best practices in testing, broadly, could be a valuable future direction for research software trainers.</ns0:p></ns0:div> <ns0:div><ns0:head>26/32</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We also asked respondents about how they allocated time to software development tasks. The respondents reported, overall, they spend their time efficiently -allocating as much time as a task requires, but rarely more than they perceive necessary (see Figure <ns0:ref type='figure'>2</ns0:ref>). 
However, one notable exception regards debugging, where both developers and researchers report an imbalance between time they would like to spend compared with the time they actually spend. While we did not ask follow up questions about any specific task, we can interpret this finding as the result of asking about an unpleasant task -debugging is not an ideal use of time, even if it is necessary. However, there is an abundance of high quality and openly accessible tools that help software engineers in debugging tasks.</ns0:p><ns0:p>A future research direction is to investigate types of code quality controls, testing, and the use of tools to simplify debugging in research software.</ns0:p><ns0:p>Overall, we observe research software developers do not commonly follow the best software engineering practices. Of the practices we included in our survey (Continuous Integration, Coding Standards, Architecture/Design, Requirements, and Peer Code Review), none were used by more than 54% of the respondents. This result indicates a need for additional work to gather information about how research software developers are using these practices and to disseminate that information to the appropriate communities to increase their usage.</ns0:p></ns0:div> <ns0:div><ns0:head>Software Tools</ns0:head></ns0:div> <ns0:div><ns0:head>RQ2: What tools do research software developers use and what additional tools are needed to support sustainable development practices?</ns0:head><ns0:p>Our respondents reported sufficient tool support only for the coding activity. This result suggests the need for additional tools to support important activities like Requirements, Design, Testing, Debugging, and Maintenance. The availability of useful tools that can fit into developers current workflows can increase the use of these key practices for software quality and sustainability.</ns0:p></ns0:div> <ns0:div><ns0:head>Training</ns0:head></ns0:div> <ns0:div><ns0:head>RQ3: What training is available to research software developers and does this training meet their needs?</ns0:head><ns0:p>Across multiple questions, our findings suggest a need for greater opportunities to access and participate in software training. Previous surveys found less than half (and sometimes much less) of developers reported they had formal training in software development. Our results support these rough estimates, with Developers reporting a slightly higher percentage of both formal and informal software training than researchers. In our sample, including both developers and researchers, approximately half of the people developing research software have received no formal training. Consistent with this number, only about half of the respondents reported sufficient opportunities for training. However, approximately 3/4 indicated they did not have sufficient time for training that was available. Together these results suggest two conclusions: (1) there is a need for more training opportunities (as described above) and ( <ns0:ref type='formula'>2</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science into research funding proposals, with even less including costs for reusing or maintaining research software. A limitation of our study is that we do not ask respondents why they choose not to include these costs. We could interpret this result as a belief that such items would not be appropriate for a budget or would not result in a competitive funding application. 
Future should investigate (1) how and why software research funding is allocated, (2) how research software is budgeted in preparing research grant proposals, and (3) what deters researchers from requesting funding for software development, maintenance, or reuse.</ns0:p><ns0:p>From the perspective of software sustainability, these results are troubling. Without support for maintaining and sustaining research software, at least some of the initial investments made in software are lost over time.</ns0:p></ns0:div> <ns0:div><ns0:head>Career Paths</ns0:head></ns0:div> <ns0:div><ns0:head>RQ5: What factors impact career advancement and hiring in research software?</ns0:head><ns0:p>Our results show a diversity of people and their titles who assume the role of research software developer. However, few respondents were optimistic about their research software contributions positively impacting their career (only 21% of faculty and 42% of developers believed software contributions would be valuable for career advancement). This finding was particularly pronounced for female identifying respondents, with only 16% (n=32/202) believing software contributions could impact their career advancement.</ns0:p><ns0:p>When asked to evaluate prospective applicants to a research software position many respondents valued potential and scientific domain knowledge (background) as important factors. We optimistically believe this result indicates that while programming knowledge, and experience are important criteria for job applicants, search committees are also keen to find growth minded scientists to fill research software positions. We believe this result could suggest an important line of future work -asking, for example, research software engineering communities to consider more direct and transparent methods for eliciting potential and scientific domain knowledge on job application materials.</ns0:p><ns0:p>Finally, we highlight the factors important to research software developers when evaluating a prospective job for their own career. Respondents reported that they value salary equally with leadership (of software at an institution) and access to software resources (e.g. infrastructure). This suggests that while pay is important, the ability to work in a valued environment with access to both mentorship and high quality computing resources can play an important role in attracting and retaining talented research software professionals.</ns0:p></ns0:div> <ns0:div><ns0:head>Credit</ns0:head></ns0:div> <ns0:div><ns0:head>RQ6:</ns0:head><ns0:p>&#8226; RQ6a: What do research software projects require for crediting or attributing software use?</ns0:p><ns0:p>&#8226; RQ6b: How are individuals and groups given institutional credit for developing research software?</ns0:p><ns0:p>As described in the Background section, obtaining credit for research software work is still emerging and is not consistently covered by tenure and promotion evaluations. The survey results are consistent with this trend and show none of the traditional methods for extending credit to a research contribution are followed for research software. Figure <ns0:ref type='figure' target='#fig_11'>15</ns0:ref> makes this point clear -respondents most frequently mention software by name but less frequently cite software papers or provide links to the software. 
According to these results, research software projects require better guidance and infrastructure support for accurately crediting software used in research, both at the individual and institutional level.</ns0:p><ns0:p>We also highlight the relationship between credit and career advancement. In the previous section (Career Paths) we asked respondents how often they were consulted about or asked to contribute to existing software projects at their institution. Among the Developer respondents, 72% were consulted about developing and maintaining software. However, unless this consultation, expertise, and labor is rewarded within a formal academic system of credit, this work remains invisible to tenure and promotion Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>committees. Such invisible labor is typical within information technology professions, but we argue that improving this formal credit system is critical to improving research software sustainability.</ns0:p><ns0:p>Scholarly communications and research software engineers have been active in promoting new ways to facilitate publishing, citing, using persistent identifiers, and establishing authorship guidelines for research software. This effort includes work in software citation aimed at changing publication practices <ns0:ref type='bibr'>(Katz et al., 2021)</ns0:ref>, in software repositories <ns0:ref type='bibr' target='#b49'>(Smith, 2021)</ns0:ref>, and a proposed definition for FAIR software to add software to funder requirements for FAIR research outputs (Chue <ns0:ref type='bibr' target='#b13'>Hong et al., 2021)</ns0:ref>. Previous studies of research software communities have not focused specifically on diversity statements, DEI initiatives, or related documentation (e.g. code of conduct documents). Our results showed about 2/3 of the respondents thought their organizations promoted a 'culture of inclusion' with respect to research software activities. Conversely, only about 1/3 of respondents thought their organization did an above average job of recruiting, retaining, or meaningfully including diverse groups (see Figure <ns0:ref type='figure' target='#fig_8'>18</ns0:ref>).</ns0:p><ns0:p>We also asked participants whether their main software project (where they spent most time) had, or planned to develop, a diversity and inclusion statement. Over half of respondents indicated that their projects do not have a diversity/inclusion statement or a code of conduct and have no plans to create one.</ns0:p><ns0:p>While these numbers paint a grim picture, we also believe that there is additional research necessary </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2 https://us-rse.org PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022) Manuscript to be reviewed Computer Science train researchers in modern development practices (e.g., the Carpentries 3 , IRIS-HEP 4 , and MolSSI 5 ). While much of the development of research software occurs in academia, important development also occurs in national laboratories and industry. 
Wherever the development and maintenance of research software occurs, that software might be released as open source (most likely in academia and national laboratories) or it might be commercial/closed source (most likely in industry, although industry also produces and contributes to open source).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>https://carpentries.org 4 https://iris-hep.org 5 https://molssi.org 6 http://urssi.us 2/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>reports on the results of 209 responses to provide insight into the role of software in conducting research at US universities. The survey focused on the respondents' use of research software and the training they have received in software development. &#8226; Towards Computational Reproducibility: Researcher Perspectives on the Use and Sharing of Software (AlNoamany and Borghi, 2018) reports on the results from 215 respondents across a range 3/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022) Manuscript to be reviewed Computer Science of disciplines. The goal of the survey was to understand how researchers create, use, and share software. The survey also sought to understand how the software development practices aligned with the goal of reproducibility. &#8226; SSI International RSE Survey (Philippe et al., 2019) reports on the results from approximately 1000 responses to a survey of research software engineers from around the world. The goal of the survey is to describe the current state of research software engineers related to various factors including employment, job satisfaction, development practices, use of tools, and citation practices.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>&#8226;</ns0:head><ns0:label /><ns0:figDesc>Researcher -someone who only uses software &#8226; Developer -someone who only develops software &#8226; Combination -both of the above roles The respondents were fairly evenly split between Researchers at 43% (473/1109) and Combination at 49% (544/1109), with the remaining 8% (92/1109) falling into the Developer category. Note that depending on how the respondent answered this question, they received different survey questions. If a respondent 9/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022) Manuscript to be reviewed Computer Science indicated they were a Researcher, they did not receive the more development-oriented questions. For the remainder of this analysis, we use these subsets to analyze the data. If the result does not indicate that it is describing results from a subset of the data, then it should be interpreted as being a result from everyone who answered the question. Organization Type Next, respondents indicated the type of organization for which they worked. The vast majority 86% (898/1048) worked for Educational Institutions. That percentage increased to 93% (417/447) for Researcher type respondents Geographic Location Because the focus of the URSSI project is the United States, we targeted our survey to US-based lists. As a result, the vast majority of responses (990/1038) came from the United States. 
We received responses from 49 states (missing only Alaska), plus Washington, DC, and Puerto Rico.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Where respondents spend software time</ns0:figDesc><ns0:graphic coords='12,141.73,191.70,413.52,232.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Aspects of Software Development That Are More Difficult Than They Should Be</ns0:figDesc><ns0:graphic coords='13,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Availability of Tool Support</ns0:figDesc><ns0:graphic coords='15,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Sufficient Opportunities for Training</ns0:figDesc><ns0:graphic coords='16,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Sufficiency of Institutional Support</ns0:figDesc><ns0:graphic coords='18,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Necessary Funding to Support Software Development Activities</ns0:figDesc><ns0:graphic coords='18,172.75,395.01,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .Figure 11 .Figure 12 .</ns0:head><ns0:label>101112</ns0:label><ns0:figDesc>Figure 10. Opportunities for Career Advancement for Software Developers</ns0:figDesc><ns0:graphic coords='19,172.75,279.14,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. How Respondents Currently Receive Credit for Their Software Contributions</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 16 .Figure 17 .Figure 18 .Figure 20 .</ns0:head><ns0:label>16171820</ns0:label><ns0:figDesc>Figure 16. Does Institution Consider Software Contributions in Performance Reviews or Promotion Cases?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>) developers of research software need more time for training, either by prioritizing in their own schedule or by being given it from their employers.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>&#8226;</ns0:head><ns0:label /><ns0:figDesc>RQ4a: What is the available institutional support for research software development? &#8226; RQ4b: What sources of institutional funding are available to research software developers? We provided a motivation for this research question by demonstrating, across a variety of previous surveys and published reports, there has not been sufficient funding dedicated to the development and maintenance of research software. The survey results support this assertion with just under half of the respondents who develop software as part of their research report including costs for developing software 27/32 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>DiversityRQ7:</ns0:head><ns0:label /><ns0:figDesc>How do current Research Software Projects document diversity statements and what support is needed to further diversity initiatives?</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>to clarify the types of diversity, equity, and inclusion work, including formal and informal initiatives, needed in research software development. This research would provide needed clarity on the training, mentoring, and overall state of diversity and inclusion initiatives in research software. Further, research needs to compare these practices with broader software and research communities seeking to understand how, for example, research software compares to open-source software.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,172.75,247.60,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,172.75,395.01,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,172.75,393.02,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,172.75,396.27,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,172.75,63.78,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,172.75,395.01,351.51,281.21' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Previous Surveys</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Focus</ns0:cell><ns0:cell>Respondents</ns0:cell></ns0:row><ns0:row><ns0:cell>Hannay et al. (2009)</ns0:cell><ns0:cell>How scientists develop and use software</ns0:cell><ns0:cell>1972</ns0:cell></ns0:row><ns0:row><ns0:cell>Pinto et al. (2018)</ns0:cell><ns0:cell>Replication of Hannay et al. (2009)</ns0:cell><ns0:cell>1553</ns0:cell></ns0:row><ns0:row><ns0:cell>Wiese et al. (2020)</ns0:cell><ns0:cell>Additional results from Pinto et al. (2018) focused</ns0:cell><ns0:cell>1577</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>on problems encountered when developing scientific</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>software</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Nguyen-Hoan et al. (2010) Software development practices of scientists in Aus-</ns0:cell><ns0:cell>60</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>tralia</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Prabhu et al. 
(2011)</ns0:cell><ns0:cell>Practice of computational science in one large uni-</ns0:cell><ns0:cell>114</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>versity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Joppa et al. (2013)</ns0:cell><ns0:cell>Researchers in species domain modeling with vary-</ns0:cell><ns0:cell>&#8764;450</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ing levels of expertise</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Carver et al. (2013)</ns0:cell><ns0:cell>Software engineering knowledge and training among</ns0:cell><ns0:cell>141</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>computational scientists and engineers</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hettrick (2018, 2014)</ns0:cell><ns0:cell>Use of software in Russell Group Universities in the</ns0:cell><ns0:cell>417</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>UK</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jay et al. (2016)</ns0:cell><ns0:cell>How scientists publish code</ns0:cell><ns0:cell>65</ns0:cell></ns0:row><ns0:row><ns0:cell>Nangia and Katz (2017)</ns0:cell><ns0:cell>Use of software and software development training</ns0:cell><ns0:cell>209</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>in US Postdoctoral Association</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>AlNoamany and Borghi</ns0:cell><ns0:cell>How the way researchers use, develop, and share</ns0:cell><ns0:cell>215</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell>software impacts reproduciblity</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Philippe et al. (2019)</ns0:cell><ns0:cell>Research Software Engineers</ns0:cell><ns0:cell>&#8764;1000</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>as well as through economic studies. An example of the latter was performed by a development team of the widely used AstroPy packages in Astronomy. Using David A. Wheeler's SLOCCount method for economic impact of open-source software they estimate the cost of reproducing AstroPy to be approximately $8.5 million and the annual economic impact on astronomy alone to be approximately $1.5 million<ns0:ref type='bibr' target='#b37'>(Muna et al., 2016)</ns0:ref>.There is, recently, increased attention from funders on the importance of software maintenance and archiving, including the Software Infrastructure for Sustained Innovation (SI2) program at NSF,</ns0:figDesc><ns0:table /><ns0:note>the NIH Data Commons (which includes software used in biomedical research), the Alfred P. Sloan Foundation's Better Software for Science program, and the Chan Zuckerberg Initiative's Essential Open Source Software for Science program which provide monetary support for the production, maintenance, and adoption of research software. Despite encouraging progress there is still relatively little research that focuses specifically on how the lack of direct financial support for software sustainability impacts research software engineers and research software users. We seek to better understand this relationship through two specific research questions that focus on the impact of funding on software sustainability: RQ4a: What is the available institutional support for research software development? 
and RQ4b: What sources of institutional funding are available to research software developers?</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>along with guidance for those</ns0:figDesc><ns0:table /><ns0:note>11 https://carcc.org/wp-content/uploads/2019/01/CI-Professionalization-Job-Families-and-Career-Gui pdf 7/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Disciplines of Respondents</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Researchers Developers Combination</ns0:cell></ns0:row></ns0:table><ns0:note>10/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>Comment code and 95% (96/101) Use descriptive variable/method names either Most of the time or Always. Interestingly, even though a very large percentage of respondents indicated that they comment their code, when we look in more detail at the other types of information documented, we see a different story. The following list reports those who responded Most of the time or Always for each type However, when we investigate further about the version control practices, we find 29% of the Combination type respondents (26/56) indicated they use copying files to another location and 10% (6/56) used zip file backups as their method of version control either Always or Most of the time. While many respondents do use a standard version control system, the large number of Combination respondents who rely on zip file backups suggests the use of standard version control methods is an area where additional training could help. Regarding continuous integration, less than half of the respondents (39/84) indicated they used it either Always or Most of the time. This result suggests another area where additional training could help.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>&#8226; Requirements -43% (43/101)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8226; Peer code review -34% (34/99)</ns0:cell></ns0:row><ns0:row><ns0:cell>Documentation As a follow-up question, in terms of what information respondents document, only</ns0:cell></ns0:row><ns0:row><ns0:cell>55% (56/101) develop User manuals or online help either Most of the time or Always. However, 86%</ns0:cell></ns0:row><ns0:row><ns0:cell>(87/101) of documentation:</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8226; Requirements -49% (49/100)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8226; Software architecture or design -34% (34/100)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8226; Test plans or goals -25% (25/100)</ns0:cell></ns0:row><ns0:row><ns0:cell>&#8226; User stories/use cases -24% (24/100)</ns0:cell></ns0:row><ns0:row><ns0:cell>Training</ns0:cell></ns0:row><ns0:row><ns0:cell>This section focuses on answering RQ3: What training is available to research software developers and</ns0:cell></ns0:row><ns0:row><ns0:cell>does this training meet their needs?</ns0:cell></ns0:row></ns0:table><ns0:note>ToolsThis section focuses on answering RQ2: What tools do research software developers use and what additional tools are needed to support sustainable development practices? Tools support for development activities The results in Figure 4 indicate that a large majority of respondents (340/441 -77%) believe Coding is Extremely supported or Very supported by existing tools. 
Slightly less than half of the respondents find Testing (196/441 -44%) and Debugging (188/441 -43%) to be Extremely supported or Well supported. Less than 30% of the respondents reported Requirements, Architecture/design, Maintenance, and Documentation as being well-supported. Because coding is the only practice where more than half of the respondents indicate Extremely supported or Very supported, these responses indicate a clear opportunity for additional (or better) tool support in a number of areas. Version control and continuous integration In a follow-up question, almost all of those who responded (83/87) indicated they do use version control. In addition, a slightly lower but still very large percentage of respondents (74/83) indicated they used Git either Always or Most of the time. Git was by far the most commonly used version control system. Finally, almost all respondents (76/83) check their code into the version control system either after every change or after a small group of changes. Have you received training? The percentage of respondents answering yes depends upon the type of respondent: Developers -64% (39/61), Combination -44% (170/384), and Researchers -22% (93/421). When we examine the responses to this question by gender we also see a difference: 30% (69/229) of the Female respondents received training compared with 37% (220/601) of the Male respondents. 13/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022)</ns0:note></ns0:figure> <ns0:note place='foot' n='12'>http://www.qualtrics.com 8/32 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67516:2:0:NEW 23 Mar 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"We thank the reviewers for the valuable comments on our manuscript. We have made a significant revision to address as many of the comments as possible. Below you will find a copy of each reviewer’s comments along with our response below (in blue text). Reviewer 2 R2.1 Figure 1 is difficult to read. The font size is too small for the titles of the subplots. We have updated Figure 1 to make the labels larger R2.2 Figures 1 and 7 show dots that are confusing. The text does not say what the dots represent. They could be data points for individual responses, but then they don't show the multiplicity when more than one respondent has the same answer. Also, there are cases (like NIH for Developers and DoE for Developers in Figure 7) where there are no 'data points' below the mean. This makes me think that I am misinterpreting the dots. They should either be explained, or, if they don't convey any useful information, removed. We have clarified in the text that these are box and whisker plots. Individual dots represent outliers. For Figure 7, except for NSF and “Your own institution”, the minimum, Q1, median and Q3 are all zero. The dashed line is the mean for these categories. The points are all outliers. R2.3 Although the response to reviewers stated that a Conclusions section was added, there still isn't a Conclusions section. The authors should add a Conclusions section. We forgot to rename the last section. We revised the text to include Conclusions, just did not change the name. We have fixed it now. "
Here is a paper. Please give your review comments after reading it.
413
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background: Multivariate time series data generally contains missing values, which can obstacle subsequent analysis and compromise downstream applications. One challenge in this endeavor is the missing values brought about by sensor failure and transmission packet loss. Imputation is the usual remedy in such circumstances. However, in some multivariate time series data, the complex correlation and temporal dependencies, coupled with the non-stationarity of the data, make imputation difficult.</ns0:p></ns0:div> <ns0:div><ns0:head>Mehods:</ns0:head><ns0:p>To address this problem, we propose a novel model for multivariate time series imputation called CGCNImp that considers both correlation and temporal dependency modeling. The correlation dependency module leverages neural Granger causality and a GCN to capture the correlation dependencies among different attributes of the time series data, while the temporal dependency module relies on an attention-driven LSTM and a time lag matrix to learn its dependencies. Missing values and noise are addressed with total variation reconstruction.</ns0:p></ns0:div> <ns0:div><ns0:head>Results:</ns0:head><ns0:p>We conduct thorough empirical analyses on two real-world datasets. Imputation results show that CGCNImp achieves state-of-the-art performance than previous methods.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Multivariate time series data is common to many systems and domains-any data that changes value over time, any data captured by a sensor or measured at intervals ,for example, traffic monitoring <ns0:ref type='bibr' target='#b47'>(Wang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b50'>Zhang et al., 2017)</ns0:ref>, healthcare and patient monitoring <ns0:ref type='bibr' target='#b15'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Suo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Liu and Hauskrecht, 2016)</ns0:ref>, IIoT systems, financial marketing <ns0:ref type='bibr' target='#b5'>(Bauer et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Batres-Estrada, 2015)</ns0:ref> and so on, from which the data collected is typically extracted in the form of multivariate time series data. What is also common is missing values and noise brought about by sensor failure, transmission packet loss, human error, and other issues. These missing values will not only destroy the integrity and balance of original data distributions, but also affect the subsequent analysis and application of related scenarios <ns0:ref type='bibr' target='#b16'>(Cheema, 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Berglund et al., 2015)</ns0:ref>. The processing of missing values in time series has become a very important problem. Some researches try to directly model the dataset with missing values <ns0:ref type='bibr' target='#b51'>(Zheng et al., 2017)</ns0:ref>. However, for every dataset, we need to model them separately. In most cases, imputation is the standard remedy, but imputing with multivariate time series data is not so easy. 
The complex correlation and temporal dependencies found in some multivariate time series data complicates matters, and the non-stationarity of the data only exacerbates the issue, explained as follows:</ns0:p><ns0:p>Attribute correlation dependencies : In many multivariate time series, it is important to interpret the attribute correlation within time series that naturally arises. Typically, this correlation provides information about the contemporaneous and lagged relationships within and between individual series and how these series interact <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018))</ns0:ref> Manuscript to be reviewed Computer Science variables in total being 11 different locations, each with 11 different variables. Different attributes for the same places are arranged in adjacent positions. A dark blue element (i, j) means that there is a strong Granger causal effect from variable i to variable j. It can be seen that the causal effect is strong along the diagonal of the matrix, which means that there are strong causal effects among different variables at the same location. Several research teams have also demonstrated that many aspects of weather, including temperature, precipitation, air pressure, wind speed, and wind direction, have substantial impacts on the migration of birds and that those impacts are inherently nonlinear <ns0:ref type='bibr' target='#b17'>(Clairbaux et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Boz&#243; et al., 2018)</ns0:ref>. Hence, when attempting to impute missing values, all of these factors must be taken into account and the correlations between all these factors needs to be properly modeled to arrive at an accurate result. Temporal auto-correlation dependencies : The evolution of multivariate time series changes dynamically over time and is mainly reflected in auto-correlations and trends <ns0:ref type='bibr' target='#b1'>(Anghinoni et al., 2021)</ns0:ref>.</ns0:p><ns0:note type='other'>.</ns0:note><ns0:p>For example, in bird migration case, factors affecting these correlations can include inadequate food and subsequent starvation, too little energy to travel, bad weather conditions, and others <ns0:ref type='bibr' target='#b45'>(Visser et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Researchers have proposed various methods of imputing missing values for time series data. The most recent techniques include using the complete data of existing observations to build a model or learning the data distribution and then using that distribution to estimate the missing values. The current models and algorithms with good prediction performance include imputation methods based on machine learning, recurrent neural networks (RNNs) <ns0:ref type='bibr' target='#b15'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Suo et al., 2019)</ns0:ref>, and generative adversarial networks (GANs) <ns0:ref type='bibr' target='#b24'>(Goodfellow et al., 2014)</ns0:ref>. Recently, autoencoders have also been used to impute missing values in multivariate time series data. These represent the current state-of-the-art. For instance, Fortuin et al. <ns0:ref type='bibr' target='#b22'>(Fortuin et al., 2020)</ns0:ref> proposed a model based on a deep autoencoder that maps the missing values of multivariate time series data into a continuous low-dimensional hidden space. 
This framework treats the low-dimensional representations as a Gaussian process but does not specify the goal of learning to generate real samples. Rather, the model simply tries to generate data that is close to a real sample. The result is a set of fuzzy samples. GlowImp <ns0:ref type='bibr' target='#b32'>(Liu et al., 2022)</ns0:ref> combines Glow-VAEs and GANs into a generative model that simultaneously learns to encode, generate and compare dataset samples. Although all these systems perform well at their intended task, none consider complex attribute correlations or temporal auto-correlation dependencies. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>temporal auto-correlation dependencies. The combination of neural Granger causality, an attention mechanism and time lag decay yields satisfactory performance compared to the current methods.</ns0:p><ns0:p>&#8226; An imputation technique based on Granger causality and a GCN that captures attribute correlations for higher accuracy. In addition, an attention mechanism and total variation reconstruction automatically recovers latent temporal information.</ns0:p><ns0:p>&#8226; We conduct thorough empirical analyses on two real-world datasets. Imputation results show that CGCNImp achieves state-of-the-art performance than previous methods.</ns0:p><ns0:p>Reproducibility: Our open-sourced code and the data used with the supplement document are available at https://github.com/zhewen166/CGCNImp.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In recent years, researchers have proposed large body of literature on the imputation of missing value.</ns0:p><ns0:p>Due to the limited space, we only describe a few closely related ones.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistical Based Methods</ns0:head><ns0:p>Statistical <ns0:ref type='bibr' target='#b31'>(Little and Rubin, 2019)</ns0:ref> imputation algorithms impute the missing values with mean value <ns0:ref type='bibr'>(Kantardzic, 2011), median value (na Edgar and</ns0:ref><ns0:ref type='bibr' target='#b37'>Caroline, 2004)</ns0:ref>, mode value <ns0:ref type='bibr' target='#b20'>(Donders et al., 2006)</ns0:ref> and last observed valid value <ns0:ref type='bibr' target='#b0'>(Amiri and Jensen, 2016)</ns0:ref>, which may impute the missing value by the same value (for example median value) if the missing rate is very high.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Based Methods</ns0:head><ns0:p>Some researchers impute the missing values with Machine learning algorithm showing that machine learning based imputation methods are useful for time series imputation. K-Nearest Neighbor (KNN) <ns0:ref type='bibr' target='#b30'>(Liew et al., 2011)</ns0:ref> uses pairwise information between the target with missing values and the k nearest reference to impute the missing values. Expectation-Maximization (EM) <ns0:ref type='bibr' target='#b39'>(Nelwamondo et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b2'>(Azur et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b12'>Buuren and Groothuis-Oudshoorn, 2011</ns0:ref>) uses a chained equation to fill the missing values. Autoregressive (S. <ns0:ref type='bibr' target='#b41'>Sridevi et al., 2011)</ns0:ref> estimates missing values using autoregressive-model. 
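Before continuing with the remaining machine-learning methods, the simple statistical baselines listed above (mean, median, and last-observed-value filling) and the KNN imputer can be illustrated with a short sketch. This is only an illustration under our own assumptions, using a hypothetical pandas DataFrame rather than any dataset from this study.

# Minimal sketch of the statistical and KNN baselines described above (illustrative only).
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical two-attribute series; NaN marks a missing observation.
df = pd.DataFrame({
    'temperature': [12.1, np.nan, 13.4, np.nan, 15.0],
    'pressure':    [1012.0, 1011.5, np.nan, 1010.8, np.nan],
})

mean_imputed   = df.fillna(df.mean())      # mean imputation
median_imputed = df.fillna(df.median())    # median imputation
locf_imputed   = df.ffill().bfill()        # last observed valid value; back-fill only for leading gaps
knn_imputed    = KNNImputer(n_neighbors=2).fit_transform(df.values)  # KNN baseline

As noted above, these baselines ignore attribute correlations and temporal dynamics, and at high missing rates they can end up filling long stretches with a single value.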
Vector autoregressive imputation method (VAR-IM) <ns0:ref type='bibr' target='#b3'>(Bashir and Wei, 2018</ns0:ref>) is based on a vector autoregressive (VAR) model by combining an expectation and minimization algorithm with the prediction error minimization method.</ns0:p><ns0:p>Gradient-boosted tree <ns0:ref type='bibr' target='#b23'>(Friedman, 2020)</ns0:ref> model is built in a stage-wise fashion as in other boosting methods, but it generalizes the other methods by allowing optimization of an arbitrary differentiable loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Learning Based Methods</ns0:head><ns0:p>In time series imputation, can be classified into RNN-based methods, VAE-based methods and GAN-based methods.</ns0:p><ns0:p>RNN-Based methods. GRU-D <ns0:ref type='bibr' target='#b15'>(Che et al., 2018)</ns0:ref> predicts the missing variable by the combination of last observed value, the global mean and the time lag. But, it has drawbacks on general datasets <ns0:ref type='bibr' target='#b15'>(Che et al., 2018)</ns0:ref>. M-RNN <ns0:ref type='bibr' target='#b49'>(Yoon et al., 2017)</ns0:ref> utilizes bi-directional RNN to impute missing values since both previous series and future series of missing values are known. BRITS <ns0:ref type='bibr' target='#b14'>(Cao et al., 2018)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>processes for time series data. The VAE maps the missing data from the input space into a latent space where the temporal dynamics are modeled by the GP. GlowImp <ns0:ref type='bibr' target='#b32'>(Liu et al., 2022)</ns0:ref> combines Glow-VAEs and GANs into a generative model that simultaneously learns to encode, generate and compare dataset samples. All these methods only optimize the lower bound and do not specify the goal of learning to generate real samples.</ns0:p><ns0:p>GAN-Based methods. <ns0:ref type='bibr' target='#b24'>Goodfellow et al. (2014)</ns0:ref> introduced the generative adversarial networks (GAN), which trains generative deep models via an adversarial process. GAIN <ns0:ref type='bibr' target='#b48'>(Yoon et al., 2018)</ns0:ref> has some unique features. The generator receives noise and mask as an input data and the discriminator gets some additional information via a hint vector to ensure that the generator generates samples depending on the true data distribution. But GAIN is not suitable for time series. GRUI-GAN <ns0:ref type='bibr' target='#b34'>(Luo et al., 2018)</ns0:ref> proposed a two second stage GAN based. The G tries to generate the realistic time series from the random noise vector z. The D tries to distinguish whether the input data is real data or fake data. The adversarial structure can improve accuracy. But This two-stage training needs a lot more time to train the 'best' matched data and seems not stable with a random noise input. E2GAN <ns0:ref type='bibr' target='#b35'>(Luo et al., 2019)</ns0:ref> can impute the incomplete time series via end-to-end strategy. This work proposes an encoder-decoder GRUI based structure as the generator which can improve the accuracy and stable when training the model. the discriminator consists a GRUI layer and a fully connected layer working as the encoder. 
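To make the mask-and-hint idea behind these GAN-based imputers concrete before turning to the semi-supervised variant, the following is a deliberately simplified, GAIN-style sketch. The network sizes, the hint construction, and all names are our own assumptions; this is not the published GAIN, GRUI-GAN, or E2GAN architecture (in particular, the recurrent GRUI cells are replaced here by plain fully connected layers).

# Rough sketch of mask-based adversarial imputation in the spirit of GAIN (not the published models).
import torch
import torch.nn as nn

d = 11  # number of attributes (hypothetical)
G = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))                 # generator
D = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d), nn.Sigmoid())   # discriminator

def train_step(x, m, opt_g, opt_d, hint_rate=0.9):
    # x: values with zeros at missing positions, m: binary mask (1 = observed), both (batch, d).
    z = torch.rand_like(x)
    x_tilde = m * x + (1 - m) * z                   # fill missing slots with noise
    x_bar = G(torch.cat([x_tilde, m], dim=1))       # generator proposal
    x_hat = m * x + (1 - m) * x_bar                 # keep observed entries untouched

    # Simplified hint vector (the original GAIN paper uses a slightly different construction).
    hint = m * (torch.rand_like(m) < hint_rate).float()

    # Discriminator tries to recover which entries were actually observed.
    d_prob = D(torch.cat([x_hat.detach(), hint], dim=1))
    loss_d = nn.functional.binary_cross_entropy(d_prob, m)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D on the missing entries and reconstruct the observed ones.
    d_prob = D(torch.cat([x_hat, hint], dim=1))
    loss_g = -((1 - m) * torch.log(d_prob + 1e-8)).mean() + ((m * (x_bar - x)) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

As the text above notes, GRUI-GAN searches for a matching noise input in a second training stage, while E2GAN replaces that search with a GRUI-based encoder-decoder generator trained end to end.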
SSGAN <ns0:ref type='bibr' target='#b36'>(Miao et al., 2021)</ns0:ref> propose a novel semi-supervised generative adversarial network model, with t a generator, a discriminator, and a classifier to predict missing values in the partially labeled time series data <ns0:ref type='bibr' target='#b15'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Luo et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY Motivation</ns0:head><ns0:p>In many multivariate time series, it is important to interpret the attribute correlation within time series that naturally arises. Generally, these correlation can be divided into attribute correlation dependencies and temporal auto-correlation dependencies. Hence, our work includes three main considerations: these two types of dependencies plus end-to-end multi-task modeling to properly capture both.</ns0:p><ns0:p>Attributes correlation dependency Typically, this correlation provides information about the contemporaneous and lagged relationships within and between individual series and how these series interact <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018))</ns0:ref>. For example, in bird migration case, the main attribute dependencies are weather factors such as temperature, air pressure, and wind conditions. All can have a substantial impact on evolution of multivariate time series <ns0:ref type='bibr' target='#b17'>(Clairbaux et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Boz&#243; et al., 2018)</ns0:ref>. These therefore need to be considered if one is to accurately impute any missing values. At the same time, there may be false correlation between some attributes. Hence, determining reasonable causal effects among different attributes is also an important issue. We opted for neural Granger causality <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018) )</ns0:ref> to model the correlation dependencies between the variables because it has achieved satisfactory performance on multivariate time series causal inference and it could be easily integrated into the multivariate time series imputation framework.</ns0:p><ns0:p>Temporal auto-correlation dependency. The evolution of multivariate time series changes dynamically over time and patterns are quasi-periodical on different scales of years and days <ns0:ref type='bibr' target='#b1'>(Anghinoni et al., 2021)</ns0:ref>. Additionally, sensor malfunctions and failures, transmission errors, and other factors can mean the recorded time series carries noise <ns0:ref type='bibr' target='#b26'>(Han and Wang, 2013)</ns0:ref>. Effectively exploiting auto-correlation relationships and eliminating sensor noises is therefore a key consideration.</ns0:p><ns0:p>Multitask modeling. Classical time series imputation methods adopt a two-stage modeling approach <ns0:ref type='bibr' target='#b34'>(Luo et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b48'>Yoon et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Miao et al., 2021)</ns0:ref>. First, they analyze the correlations between multiple sequences and then impute the different sequences separately. However, these two-stage methods can not guarantee the global optimum. 
In this paper, we aim to establish an end-to-end model for Granger causal analysis and deep-learning-based time series imputation under the same framework, which will hopefully accelerate the imputation process and provide interpretability.</ns0:p></ns0:div> <ns0:div><ns0:head>Preliminary</ns0:head><ns0:p>Definition 1: Multivariate Time Series. A multivariate time series X = {x 1 , x 2 , . . . , x n } is a sequence with data observed on n timestamps T = (t 0 ,t 1 , . . . ,t n&#8722;1 ). The i &#8722; th observation x i contains d attributes</ns0:p><ns0:formula xml:id='formula_0'>(x 1 i , x 2 i , . . . , x d i ).</ns0:formula></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:1:2:NEW 18 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Example 1: Multivariate Time Series. We give an example of the multivariate time series X with missing values, / indicates the missing value. </ns0:p><ns0:formula xml:id='formula_1'>X = &#63726; &#63728; 5 / / /</ns0:formula><ns0:formula xml:id='formula_2'>M j i = 0, if x j i is null 1, otherwise if the j-th attribute of x i is observed, M j i is set to 1. Otherwise, M j i is set to 0.</ns0:formula><ns0:p>Example 2: Binary Mask Matrix. We can thus compute the binary mask matrix according to the multivariate time series X in example 1 which have missing values.</ns0:p><ns0:formula xml:id='formula_3'>M = &#63726; &#63728; 1 0 0 0 1 1 1 1 0 1 1 0 1 0 1 &#63737; &#63739;</ns0:formula><ns0:p>Definition 3: Time Lag Matrix. In order to record the time lag between current value and last observed value, we introduce the time lag matrix &#948; &#8712; R n * d . The following formation shows the calculation of the &#948; .</ns0:p><ns0:formula xml:id='formula_4'>&#948; d t = &#63729; &#63732; &#63730; &#63732; &#63731; s t &#8722; s t&#8722;1 + &#948; d t&#8722;1 if t &gt; 0 and M d t&#8722;1 == 0 s t &#8722; s t&#8722;1 if t &gt; 0 and M d t&#8722;1 == 1 0 if t == 0</ns0:formula></ns0:div> <ns0:div><ns0:head>CGCNImp model</ns0:head><ns0:p>To impute reasonable values in place of the missing values, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, the model contains an attribute correlation dependency module and a temporal auto-correlation dependency module. The correlation dependency module leverages neural Granger causality and a GCN to capture the correlation dependencies between attributes. The output of this module is passed to the temporal dependency module, which combines an attention-driven LSTM with a time lag matrix to generate the missing values. Last, a noise reduction and smoothness module uses neighbors with similar values to smooth the time series and remove much of the noise, while still preserving occasional rapid variations in the original signal. The details of each of these modules and the framework as a whole are discussed in the following sections.</ns0:p></ns0:div> <ns0:div><ns0:head>Attributes causality modeling</ns0:head><ns0:p>Determining complex correlation dependencies is a key problem in the process of imputing with multivariate time series data. Here we use the neural Granger causality <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018) )</ns0:ref> </ns0:p><ns0:formula xml:id='formula_5'>h t+1 = f (x t , h t ) (1)</ns0:formula><ns0:p>where f is some nonlinear function that depends on the particular recurrent architecture. 
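Before specifying f, the binary mask of Definition 2 and the time lag matrix of Definition 3 can be made concrete with a small sketch; the timestamps and values below are hypothetical and only illustrate the recurrences given above.

# Sketch of the binary mask M (Definition 2) and the time lag matrix delta (Definition 3).
import numpy as np

X = np.array([[5.0,    np.nan, np.nan],   # hypothetical series: n = 3 timestamps, d = 3 attributes
              [np.nan, 2.0,    1.0],
              [4.0,    np.nan, 3.0]])
s = np.array([0.0, 1.0, 2.0])             # observation timestamps t_0, t_1, t_2

M = (~np.isnan(X)).astype(int)            # M[t, j] = 1 if x_t^j is observed, 0 otherwise

delta = np.zeros_like(X, dtype=float)     # delta[0, :] = 0 by definition
for t in range(1, X.shape[0]):
    for j in range(X.shape[1]):
        gap = s[t] - s[t - 1]
        # accumulate the gap while the previous value was missing, otherwise restart from the gap
        delta[t, j] = gap + delta[t - 1, j] if M[t - 1, j] == 0 else gap

delta is later fed to the decay mechanism of Eq. 8, so that the influence of stale observations shrinks as the gap since the last valid value grows.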
We opted for an LSTM to model the recurrent function f due to its effectiveness at modeling complex time dependencies.</ns0:p><ns0:p>The standard LSTM model takes the form Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_6'>f t = &#963; (W f x t +U f h t&#8722;1 ) i t = &#963; (W i x t +U i h t&#8722;1 ) o t = &#963; (W o x t +U o h (t&#8722;1) ) c t = f t &#8857; c t&#8722;1 + i t &#8857; tanh(W c x t +U c h t&#8722;1 ) h t = o t &#8857; tanh(c t )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where &#8857; denotes component-wise multiplication and i t , f t , and o t represent input, forget and output gates, respectively. These control how each component of the state cell c t , is updated and then transferred to the hidden state used for prediction</ns0:p><ns0:formula xml:id='formula_7'>h t . W f ,W i ,W o ,W c ,U f ,U i ,U o ,U c</ns0:formula><ns0:p>are the parameters that need to learn by LSTM. The output for series i is given by a linear decoding of the hidden state at time t :</ns0:p><ns0:formula xml:id='formula_8'>x ti = g i (x &lt;t ) + e ti (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>where the dependency of g i on the full past sequence x &lt;t is due to recursive updates of the hidden state.</ns0:p><ns0:p>The LSTM model introduces a second hidden state variable c t , referred to as the cell state, giving the full set of hidden parameter as (c t ,h t ).</ns0:p><ns0:p>In Eq. 2 the set of input maxtrices,</ns0:p><ns0:formula xml:id='formula_10'>W = ((W f ) T , (W i ) T , (W o ) T , (W c ) T ) T (4)</ns0:formula><ns0:p>controls how the past time series x t , influences the forget gates, input gates, output gates, and cell updates, and, consequently, the update of the hidden representation. A group lasso penalty across the columns of W can be selected to indicate which Granger series causes series i during estimation. The loss function of for modeling the attribute correlation dependencies is as follows:</ns0:p><ns0:formula xml:id='formula_11'>L NG = min W,U,W o T &#8721; t=2 (x it &#8722; g i (x &lt;t )) 2 + &#955; d &#8721; j=1 ||W ||<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_12'>U = (((U f ) T , (U i ) T , (U o ) T , (U c ) T ) T</ns0:formula><ns0:p>) . The adjacent matrix A, which is produced by neural granger causality is stated as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_13'>A i j = ||W i g j || 2 F (6)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>data. As such, they have received widespread attention. GCNs can include spectrum and/or spatial domain convolutions. In this study, we use spectrum domain convolutions. In the Fourier domain, spectral convolutions on graphs are defined as the multiplication of a signal x with a filter with a filter</ns0:p><ns0:formula xml:id='formula_14'>g &#952; : g &#952; * x = Ug &#952; (U T x).</ns0:formula><ns0:p>Here U is the matrix of eigenvectors of the normalized graph Laplacian L =</ns0:p><ns0:formula xml:id='formula_15'>I N &#8722;D &#8722; 1 2 AD &#8722; 1 2 = U&#955; U T , U T</ns0:formula><ns0:p>x is the graph Fourier transform of x ,A &#8712; R d * d is an adjacency matrix and &#955; is diagonal matrix of its eigenvalues. In multivariate time series, x can also be a X &#8712; R n * d , where d refers to the number of features and n refers to the time internals. 
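Before stating the concrete GCN propagation rule, here is a minimal sketch of how the neural Granger component above might be set up: one LSTM per target series, a group-lasso penalty over the columns of the stacked input weight matrices (Eq. 5), and adjacency entries taken as the norms of those column groups (Eq. 6). The class and function names are ours, the penalty is simply added to the loss rather than handled by the proximal update with line search used in the paper, and this is not the authors' released implementation.

# Sketch of a component-wise LSTM with a group-lasso penalty on its input columns (neural Granger causality).
import torch
import torch.nn as nn

class ComponentLSTM(nn.Module):
    """Predicts one target series i from the full d-dimensional past."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, d)
        h, _ = self.lstm(x)
        return self.out(h).squeeze(-1)       # one-step-ahead prediction for series i

def group_lasso_penalty(model):
    # weight_ih_l0 stacks the input matrices W_i, W_f, W_g, W_o row-wise: shape (4 * hidden, d).
    W = model.lstm.weight_ih_l0
    return W.norm(dim=0).sum()               # sum over input series j of ||W[:, j]||_2

def granger_row(model):
    # Row i of the adjacency A: the strength with which each series j Granger-causes series i.
    return model.lstm.weight_ih_l0.norm(dim=0).detach()

def loss_ng(model, x, target_i, lam=1e-2):   # cf. Eq. 5
    pred = model(x[:, :-1, :])               # predict x_{i,t} from x_{<t}
    return ((pred - target_i[:, 1:]) ** 2).mean() + lam * group_lasso_penalty(model)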
Given the adjacent matrix A which is produced by neural granger causality, GCNs can perform the spectrum convolutional operation with consideration capture the correlation characteristics of graph. The GCN model can be expressed as:</ns0:p><ns0:formula xml:id='formula_16'>H G = &#963; ( W &#8722; 1 2 A W &#8722; 1 2 X&#952; )<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where A = A + I N is an adjacent matrix with self-connection structures, I N is an identity matrix, W is a degree matrix, H G &#8712; R n * d is the output of GCN which is the input of the temporal auto-correlation dependency modeling, &#952; is the parameter of GCN, and &#963; (&#8226;) is an activation function used for nonlinear modeling.</ns0:p></ns0:div> <ns0:div><ns0:head>Temporal Auto-correlation Dependency Modeling</ns0:head><ns0:p>Obtaining complex temporal auto-correlation dependencies is another key problem with imputation of multivariate time series data. In particular, sometimes the input decay may not fully capture the missing patterns since not all missingness information can be represented in decayed input values. Due to its effectiveness at modeling complex time dependencies, we choose to model the temporal dependencies using an LSTM <ns0:ref type='bibr' target='#b25'>(Graves, 2012)</ns0:ref>. However, to properly learn the characteristics of the original incomplete time series dataset, we find that the time lag between two consecutive valid observations is always changing due to the nil values. Further, the time lags between observations are very important since they follow an unknown non-uniform distribution. These changeable time lags remind us that the influence of the past observations should decay with time if a variable has been missing for a while.</ns0:p><ns0:p>Thus, a time decay vector &#945; is introduced to control the influence of the past observations. Each value of &#945; should be greater than 0 and fewer than 1 with the larger the &#948; , the smaller the decay vector. Hence, the time decay vector &#945; is modeled as a combination of &#948; :</ns0:p><ns0:formula xml:id='formula_17'>&#945; t = 1/e max(0,W &#945; &#948; t +b &#945; )<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where W &#945; and b &#945; are parameters that need to learn. Once the decay vector has been derived, the hidden state in the LSTM h t&#8722;1 is updated in an element-wise manner by multiplying the decay vector &#945; t to fit the decayed influence of the past observations. Thus, the update functions of the LSTM are as follows:</ns0:p><ns0:formula xml:id='formula_18'>h &#8242; t&#8722;1 = &#945; t &#8857; h t&#8722;1 i t = &#963; (W i [h &#8242; t&#8722;1 ; H G t ] + b i ) f t = &#963; (W f [h &#8242; t&#8722;1 ; H G t ] + b f ) s t = f t &#8857; s t&#8722;1 + i t &#8857; tanh(W s [h &#8242; t&#8722;1 ; H G t ] + b s ) o t = &#963; (W o [h &#8242; t&#8722;1 ; H G t ] + b o ) h t = o t &#8857; tanh(s t ) (9) where W f ,W i ,W o ,W c , b f , b i , b o , b s</ns0:formula><ns0:p>are the parameters that need to learn by LSTM and H G is the output of attributes correlation dependency modeling.</ns0:p><ns0:p>Attentive neural networks have recently demonstrated success in a wide range of tasks and, for this reason, we use one here. Let H L be a matrix consisting of output vectors H L = [h 1 , h 2 , ..., h n ] &#8712; R T &#215;d that the LSTM layer produced, where n is the time series length. 
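The two building blocks just introduced, the GCN propagation rule of Eq. 7 and the time-decay factor of Eq. 8, can be sketched as follows before turning to the attention scores. The sketch treats each attribute as a graph node whose feature vector is its time series, which is one plausible reading of Eq. 7; the shapes, the hidden size, and the mapping of delta_t to the hidden dimension are our assumptions rather than details fixed by the paper.

# Sketch of the GCN propagation of Eq. 7 and the decay factor of Eq. 8 (illustrative shapes).
import torch
import torch.nn as nn

def gcn_layer(X_nodes, A, theta):
    # X_nodes: (d, n) one row per attribute, A: (d, d) adjacency from the Granger step,
    # theta: an nn.Linear mapping the n-dimensional node features to the output width.
    A_tilde = A + torch.eye(A.size(0))                     # add self-connections
    deg = A_tilde.sum(dim=1).clamp(min=1e-8)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt             # symmetric normalization
    return torch.relu(theta(A_norm @ X_nodes))             # sigma(D^-1/2 A~ D^-1/2 X theta)

class TimeDecay(nn.Module):
    # alpha_t = 1 / exp(max(0, W_alpha * delta_t + b_alpha)), Eq. 8
    def __init__(self, d, hidden):
        super().__init__()
        self.lin = nn.Linear(d, hidden)

    def forward(self, delta_t, h_prev):                    # delta_t: (batch, d), h_prev: (batch, hidden)
        alpha = torch.exp(-torch.relu(self.lin(delta_t)))  # each entry lies in (0, 1]
        return alpha * h_prev                              # decayed hidden state h'_{t-1}

In the full model, the decayed state h'_{t-1} is concatenated with the GCN output H_G at step t and passed through the gated updates of Eq. 9 before the attention weights below are computed.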
The representation &#946; i j of the attention score is formed by a weighted sum of these output vectors: In Eq.11, We reconstruct the missing value by some linear transformation of the hidden state H &#8242; at time t. Hence the reconstruction loss is formulated as:</ns0:p><ns0:formula xml:id='formula_19'>&#946; i j = exp(tanh(W [h i |h j ])) &#8721; T k=1 exp(tanh(W [h k |h j ]))<ns0:label>(10</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>L reg = &#8721; x&#8712;D &#8741;x &#8855; m &#8722; x &#8855; m&#8741; 2 (12)</ns0:formula><ns0:p>x represents the input multivariate time series data, x represents the imputed multivariate time series data and m means the masking matrix. The expression in Eq. 12 is the masked reconstruction loss that calculates the squared errors between the original observed data x and the imputed sample. Here, it should be emphasized that when calculating the loss, we only calculate the observed data as previously described in <ns0:ref type='bibr' target='#b14'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Luo et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b35'>Luo et al., , 2019;;</ns0:ref><ns0:ref type='bibr' target='#b32'>Liu et al., 2022)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Reduction and Smoothness Imputation</ns0:head><ns0:p>In the past, reconstructions were performed directly, which ignores the noise in the actual sampling process.</ns0:p><ns0:p>However, in real-world multivariate time series data, when time series are collected the observations may be contaminated by various types of error or noise. Hence, these imputation values may be unreliable.</ns0:p><ns0:p>To ensure the reliability of the imputation results, a total variation reconstruction regularization term is applied to the reconstruction results. The method is based on a smoothing function where neighbors with similar values are used to smooth the time series. When applied to time series data, abrupt changes in trend, spikes, dips and the like can all be fully preserved. This regularization term is formulated as follows:</ns0:p><ns0:formula xml:id='formula_21'>M &#8721; j=1 N&#8722;1 &#8721; i=1 | x j i+1 &#8722; x j i | (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>)</ns0:formula><ns0:p>where M is the number of time series, that is, the number of variables, and N is the length of each time series. Compared to a two-norm smoothing constraint, this total variation reconstruction term can ensure smoothness without losing the dynamic performance of the time series <ns0:ref type='bibr' target='#b10'>(Boyd and Vandenberghe, 2004)</ns0:ref> .</ns0:p><ns0:p>Eq.13 applies this term to the reconstruction results. As a result, noise in the original data is reduced and completion accuracy is improved. The reconstruction loss is formulated as:</ns0:p><ns0:formula xml:id='formula_23'>L SL = M &#8721; j=1 N &#8721; i=1 | x j i+1 &#8722; x j i |<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>The total object function of our model is:</ns0:p><ns0:formula xml:id='formula_24'>L loss = &#945; * L NG + &#946; * L reg + &#952; * L SL (15)</ns0:formula><ns0:p>where &#945;,&#946; ,&#952; indicate the weight among different part of the total loss. We optimize Eq. 
15 using proximal gradient descent with line search.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENT</ns0:head><ns0:p>To verify and measure the performance of the proposed CGCNImp framework accurately, we compare its imputation performance on multivariate time series against several other contemporary methods.</ns0:p><ns0:p>The datasets used in the evaluations are two real-world bird migration datasets focusing on migratory patterns in China -Anser albifrons and Anser fabalis -as well as the KDD CUP 2018 dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Description</ns0:head></ns0:div> <ns0:div><ns0:head>KDD CUP 2018 Dataset</ns0:head><ns0:p>The KDD dataset comes from the KDD CUP Challenge 2018 1 . It is a public meteorological dataset with about 15% missing values, collected hourly between 2017/1/20 and 2018/1/30 in Beijing and covering air quality and weather measurements. Each record contains 12 attributes, for example CO, weather, and temperature. In our experiments, we select 11 common features from these attributes, as the previous methods did. We split this dataset into 48-hour windows; for every 48 hours, we randomly drop p percent of the data, impute these time series with the different models, and calculate the imputation accuracy by RMSE and MAE, where p &#8712; {10, 20, 30, 40, 50, 60, 70, 80, 90}.</ns0:p></ns0:div> <ns0:div><ns0:head>Bird Migration Dataset in China</ns0:head><ns0:p>The Bird Migration Dataset collects migration trace data from the Strategic Priority Research Program of the Chinese Academy of Sciences, recorded hourly between 2017/12/30 and 2018/5/10 for Anser fabalis and Anser albifrons, with about 10% missing values. Each record contains 13 attributes such as longitude, latitude, speed height, speed velocity, heading, and temperature, of which we select 10 common features for our experiments. We split this dataset into 5-minute time series; for every 5 minutes, we randomly drop p percent of the data, impute these time series with the different models, and calculate the imputation accuracy by RMSE and MAE between the original and imputed values, where p &#8712; {10, 20, 30, 40, 50, 60, 70, 80, 90}.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison Methods and Evaluation Metrics</ns0:head><ns0:p>We compare our method to eight current imputation methods, as previously described in <ns0:ref type='bibr' target='#b32'>(Liu et al., 2022)</ns0:ref>.</ns0:p><ns0:p>A brief description of each follows.</ns0:p><ns0:p>&#8226; Statistical imputation methods <ns0:ref type='bibr' target='#b40'>(Rubinsteyn and Feldman, 2016)</ns0:ref>, where we simply impute the missing values with zero, the mean, or the median.</ns0:p><ns0:p>&#8226; KNN <ns0:ref type='bibr' target='#b30'>(Liew et al., 2011)</ns0:ref>, in which the missing data is imputed as the weighted average of the k nearest neighbors found with a k-nearest neighbor algorithm.</ns0:p><ns0:p>&#8226; MF (C.
<ns0:ref type='bibr' target='#b13'>Li et al., 2015)</ns0:ref>, which fills the missing values through factorizing an incomplete matrix into low-rank matrices .</ns0:p><ns0:p>&#8226; SVD (Jingfei <ns0:ref type='bibr' target='#b27'>He and Geng, 2016)</ns0:ref>, which uses iterative singular value decomposition for matrix imputation to impute the missing values.</ns0:p><ns0:p>&#8226; GP-VAE <ns0:ref type='bibr' target='#b22'>(Fortuin et al., 2020)</ns0:ref>, a method that combines ideas from VAEs and Gaussian processes to capture temporal dynamics for time series imputation.</ns0:p><ns0:p>&#8226; BRITS <ns0:ref type='bibr' target='#b14'>(Cao et al., 2018)</ns0:ref>, one of methods that include Unidirectional Uncorrelated Recurrent Imputation,Bidirectional Uncorrelated Recurrent Imputation and Correlated Recurrent Imputation algorithm to impute the missing values.</ns0:p><ns0:p>&#8226; GRUI <ns0:ref type='bibr' target='#b34'>(Luo et al., 2018)</ns0:ref>, which is a two-stage GAN based method that use the generator and discriminator to impute missing values.</ns0:p><ns0:p>&#8226; E2E-GAN <ns0:ref type='bibr' target='#b35'>(Luo et al., 2019)</ns0:ref>. It relies on an end-to-end GAN network that proposes an encoderdecoder GRUI based structure and is one of the state-of-the-art methods.</ns0:p><ns0:p>To evaluate the performance of our methods, we use two metrics to the compare and analyze with the results of previous methods.</ns0:p><ns0:p>(1) RMSE (Root Mean Squared Error) refers to the mean value of the square root of the error between the predicted value and the true value. This kind of measurement method is very popular, it can better describe the data, and it is a quantitative weighing method. The calculation formula is as follows: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_25'>RMSE = 1 n n &#8721; i=1 (x &#8722; x)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(2) MAE(Mean Absolute Error) is the average of the absolute value of the error between the observed value and the real value. it is used to describe the error between the predicted value and the real value.</ns0:p><ns0:p>The formulation is as follows:</ns0:p><ns0:formula xml:id='formula_26'>MAE = 1 n n &#8721; i=1 |x &#8722; x|</ns0:formula></ns0:div> <ns0:div><ns0:head>Implementations Details</ns0:head><ns0:p>All the experimental results are obtained under the same hardware and software environment. The hardware is Intel i7 9700k, 48GB memory, NVIDIA GTX 1080 8GB. And the deep learning framework is PyTorch1.7 and TensorFlow1.15.0.</ns0:p><ns0:p>To maintain the same experiment environment as the contemporary method, the dataset was split the datasets into two parts which first part with 80% of the records is used for the training set and the remaining 20% is used for the test set. All values are normalized within the range of 0 to 1. For training process, 10% of the data of the training set was randomly dropped. When testing dataset, we drop the data with different drop-rate between 10% and 90%, tested each method at a range of levels of missing data between 10% and 90%.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Analysis</ns0:head><ns0:p>The results with the KDD, Anser albifrons and Anser fabalis datasets at a missing value ratio of 10% Generally , the higher the proportion of missing data, the more difficult it is to impute the missing value. However, the proportion of missing data is often uncertain. 
The prediction ability of the model is very important with different missing ratio. To assess the frameworks with different levels of missing data, we then conduct the same experiment with the BRITS,GRUI, E2EGAN and CGCNImp , varying the ratios of missing values from 10% to 90% in steps of 10% as previously described in <ns0:ref type='bibr' target='#b32'>(Liu et al., 2022)</ns0:ref>.</ns0:p><ns0:p>The results are shown in Table <ns0:ref type='table' target='#tab_11'>2 and Table 3</ns0:ref>. Again, our methods return the fewest errors. </ns0:p></ns0:div> <ns0:div><ns0:head>Ablation study</ns0:head><ns0:p>An ablation study is designed to assess the contribution of the attribute causality discovery and the noise reduction and smoothness imputation. This comprised three tests: the first with no ablation; the second where we simply removed the noise reduction and smoothness module and set &#946; to 0 in Eq. 15; plus a third where we simply removed the noise reduction and smoothness module and set &#945; to 0 in in Eq. 15.</ns0:p><ns0:p>All tests are conducted with a range of missing value ratios. Table <ns0:ref type='table' target='#tab_12'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_13'>5</ns0:ref> show the results. What we found with the Anser bird migration data was that, at a missing rate lower than 40%, removing either the noise reduction and smoothness module or the neural Granger causality gives fewer errors. However, at higher missing rates, the tests with both modules returned substantially fewer errors. This verifies the contribution of both modules to the framework. With the KDD data, CGCNImp in full returned substantially fewer errors, again supporting the contribution of both these modules. There are 9 attributes in the bird migration dataset. <ns0:ref type='bibr'>Fig. 1 (b)</ns0:ref> shows the Granger causal matrix derived from the neural Granger causality analysis. It should be noted that the Granger causality should be a one-way relationship, which means that, theoretically, we need to eliminate conflicting edges in the causal graph. However, in practice, the causal graph is derived from the neural Granger causality analysis and the edge indicate there are strong prediction benefit between variables. Therefore, we kept the conflicting edge and placed them into the GCN network for better performance.</ns0:p></ns0:div> <ns0:div><ns0:head>CASE STUDY : Bird migration route analysis</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref> , Fig. <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> and Fig. <ns0:ref type='figure'>8</ns0:ref> show the imputation results of Anser fabalis birds migration routes. What we can see is that the imputed data shows some important wild reserves not seen with the original data.</ns0:p><ns0:p>According to the list of wetlands of international importance in China, for example, Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref> Wanfoshan is now a national forest park, a national nature reserve, and a national geological park which is an important location for bird migration. <ns0:ref type='bibr'>Likewise,</ns0:ref><ns0:ref type='bibr'>Fig. 8 (b)</ns0:ref> shows the imputed location of the Momoge National Nature Reserve <ns0:ref type='bibr' target='#b18'>(Cui et al., 2021)</ns0:ref> not showed in Fig. 
<ns0:ref type='figure'>8 (a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we present a novel imputation model, called CGCNImp , that is specifically designed to imputation of multivariate time series data. CGCNImp considers both attribute correlation and temporal auto-correlation dependencies. Correlation dependencies are captured through neural Granger causality and a GCN, while an attention-driven LSTM plus a time lag matrix captures the temporal dependencies and generates the missing values. Last, neighbors with similar values are used to smooth the time series and reduce noise. Imputation results show that CGCNImp achieves state-of-the-art performance than previous methods. We will explore our model for missing-not-at-random data and we will conduct </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 1 (a) illustrates the causal relationship graph with the KDD time series which collects air quality and weather data. In this data, there are 121 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:1:2:NEW 18 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The causal effect matrix of KDD dataset and bird migration dataset.The X axis indicates attributes. The Y axis indicates attributes. The matrix indicates the causal effect between attributes</ns0:figDesc><ns0:graphic coords='3,174.38,190.31,165.43,158.15' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The CGCNImp framework for multivariate time series missing value imputing.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.56,139.43' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>&#946; 12 , . . . , &#946; 1T &#946; 21 , &#946; 22 , . . . , &#946; 2T &#8226; &#8226; &#8226; &#946; n1 , &#946; n2 , . . . , &#946; nT &#63737; &#63738; &#63738; &#63739; is the attention score.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Fig. 3 ,</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3 , Fig. 4 and Fig. 5 show the imputation results from the KDD datasets for the Tongzhou , Mentougou and Miyun districts, respectively. The blue dots are the ground truth time series and the red curve shows the imputed values. As illustrated, CGCNImp captures the evolution trend and imputes the missing values quite well. Further, it capture the potential probability density distribution of the multivariate time series and makes full use of the interactive information available.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Fig. 1 Figure 3 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Fig.1(a) illustrates the causal relationship graph with the KDD time series. In this data, there are 121 variables in total being 11 different locations, each with 11 different variables. Different attributes for the same places are arranged in adjacent positions. A dark blue element (i, j) means that there is a strong Granger causal effect from variable i to variable j. It can be seen that the causal effect is strong along the diagonal of the matrix, which means that there are strong causal effects among different variables at the same location. 
Furthermore, there are also strong causal effects between different locations, such as</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>(b) shows the ground truth time series with missing values. This time, CGCNImp imputed the location of Binzhou Seashell Island and the Wetland National Nature Reserve not shown in Fig. 6 (a) showing that the bird migration trajectory could be recovered by our methods.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The ground true(blue) and the imputed values(red) in Mentougou,Beijing of KDD dataset.The X axis indicates time step. The Y axis indicates imputed values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Fig. 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Fig. 7 (b) which is the ground truth time series with missing values, CGCNImp method imputed the location of Wanfoshan Nature Reserve which is not noticeable in the original data on its own (Fig. 7 (a)).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The ground true(blue) and the imputed values(red) in Miyun,Beijing of KDD dataset.The X axis indicates time step. The Y axis indicates imputed values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Anser fabalis dataset imputation.</ns0:figDesc><ns0:graphic coords='15,141.73,350.81,413.55,112.77' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,141.73,63.78,413.57,163.07' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,141.73,263.91,413.59,122.38' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>to model the correlation dependency between the attributes of the multivariate time series. Let x t &#8712; R d be a d-dimensional stationary time series and assume we have observed the process at n timestamps T = (t 0 ,t 1 , . . . ,t n&#8722;1 ). The basic idea of neural Granger causality is to gauge the extent to which the past activity of one time series is predictive of another time series. Thus, let h t &#8712; R d represents the d-dimensional hidden state at time t, the represents the historical context of the time series for predicting a component x ti . The hidden state at time</ns0:figDesc><ns0:table><ns0:row><ns0:cell>t + 1 is updated recursively</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>Birds Migration Dataset collects migration trace data which comes from the project Strategic Priority Research Program of Chinese Academy of Sciences which. The dataset was hourly collected between 2017/12/30 to 2018/5/10 of Anser fabalis and Anser albifrons. Each record contains 13 attributes which are longitude, latitude, speed height,speed velocity, heading, temperature etc. The dataset is about 10%missing values. We select 10 common features contains longitude, latitude, speed height,speed velocity, heading, temperature etc. for our experiments. 
We split this dataset for 5 minutes time series, and for every 5 minutes, we randomly drop p percent of the dataset,and then we impute these time series with different models and calculate the imputation accuracy by RMSE and MAE between original values and imputed values where p &#8712; {10</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head /><ns0:label /><ns0:figDesc>2 </ns0:figDesc><ns0:table /><ns0:note>1 KDD CUP. Available on: http://www.kdd.org/kdd2018/, 2018.9/17PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:1:2:NEW 18 Mar 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>appear in Table1. Here, CGCNImp yields significantly fewer errors than the other methods in terms of RMSE and MAE, demonstrating that our method is better than other methods. The RMSE and MAE results of the CGCNImp and other methods on two datasets(lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>dataset</ns0:cell><ns0:cell>KDD dataset</ns0:cell><ns0:cell cols='4'>Anser albifrons dataset Anser fabalis dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='2'>RMSE MAE RMSE</ns0:cell><ns0:cell>MAE</ns0:cell><ns0:cell>RMSE</ns0:cell><ns0:cell>MAE</ns0:cell></ns0:row><ns0:row><ns0:cell>Zero</ns0:cell><ns0:cell cols='2'>1.081 1.041 1.088</ns0:cell><ns0:cell>1.047</ns0:cell><ns0:cell>1.089</ns0:cell><ns0:cell>1.054</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean</ns0:cell><ns0:cell cols='2'>1.063 1.035 1.033</ns0:cell><ns0:cell>1.025</ns0:cell><ns0:cell>1.043</ns0:cell><ns0:cell>1.035</ns0:cell></ns0:row><ns0:row><ns0:cell>Random</ns0:cell><ns0:cell cols='2'>1.821 1.637 1.802</ns0:cell><ns0:cell>1.431</ns0:cell><ns0:cell>1.721</ns0:cell><ns0:cell>1.677</ns0:cell></ns0:row><ns0:row><ns0:cell>Median</ns0:cell><ns0:cell cols='2'>1.009 0.994 1.109</ns0:cell><ns0:cell>1.042</ns0:cell><ns0:cell>1.001</ns0:cell><ns0:cell>0.998</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN</ns0:cell><ns0:cell cols='2'>0.803 0.724 0.758</ns0:cell><ns0:cell>0.714</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.817</ns0:cell></ns0:row><ns0:row><ns0:cell>MF</ns0:cell><ns0:cell cols='2'>0.784 0.627 0.643</ns0:cell><ns0:cell>0.626</ns0:cell><ns0:cell>0.663</ns0:cell><ns0:cell>0.646</ns0:cell></ns0:row><ns0:row><ns0:cell>SVD</ns0:cell><ns0:cell cols='2'>1.043 0.966 1.253</ns0:cell><ns0:cell>1.051</ns0:cell><ns0:cell>1.129</ns0:cell><ns0:cell>1.011</ns0:cell></ns0:row><ns0:row><ns0:cell>GP-VAE</ns0:cell><ns0:cell cols='2'>0.597 0.486 0.693</ns0:cell><ns0:cell>0.572</ns0:cell><ns0:cell>0.534</ns0:cell><ns0:cell>0.375</ns0:cell></ns0:row><ns0:row><ns0:cell>BRITS</ns0:cell><ns0:cell cols='2'>0.156 0.148 0.159</ns0:cell><ns0:cell>0.124</ns0:cell><ns0:cell>0.137</ns0:cell><ns0:cell>0.078</ns0:cell></ns0:row><ns0:row><ns0:cell>GRUI</ns0:cell><ns0:cell cols='2'>0.149 0.102 0.152</ns0:cell><ns0:cell>0.113</ns0:cell><ns0:cell>0.138</ns0:cell><ns0:cell>0.086</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>E2E-GAN 0.133 0.074 0.139</ns0:cell><ns0:cell>0.081</ns0:cell><ns0:cell>0.116</ns0:cell><ns0:cell>0.066</ns0:cell></ns0:row><ns0:row><ns0:cell>Ours</ns0:cell><ns0:cell cols='2'>0.114 0.062 0.128</ns0:cell><ns0:cell>0.072</ns0:cell><ns0:cell>0.107</ns0:cell><ns0:cell>0.059</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The MAE results of the CGCNImp methods on two datasets with different missing rate 
(lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>missing rate (%)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>BRITS 0.1561 0.1721 0.1928 0.2120 0.2571 0.2980 0.3284 0.3625 0.3912</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD</ns0:cell><ns0:cell cols='9'>GRUI 0.1493 0.1527 0.1702 0.1937 0.2098 0.2541 0.2824 0.3051 0.3361</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>E2EGAN 0.1336 0.1457 0.1601 0.1778 0.1926 0.2235 0.2574 0.2808 0.3031</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell cols='8'>0.1142 0.1279 0.1402 0.1610 0.1803 0.2026 0.2263 0.2509 0.2776</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>BRITS 0.1596 0.1706 0.1931 0.2126 0.2398 0.2571 0.2964 0.3351 0.3686</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>albifrons GRUI 0.1394 0.1562 0.1799 0.1971 0.2205 0.2483 0.2670 0.2995 0.3297</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>E2EGAN 0.1289 0.1358 0.1572 0.1704 0.1976 0.2371 0.2480 0.2746 0.3098</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell cols='8'>0.1287 0.1394 0.1589 0.1679 0.1902 0.2163 0.2389 0.2590 0.2921</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>BRITS 0.1372 0.1451 0.1680 0.1901 0.2273 0.2398 0.2647 0.3004 0.3469</ns0:cell></ns0:row><ns0:row><ns0:cell>fabalis</ns0:cell><ns0:cell cols='9'>GRUI 0.1381 0.1483 0.1761 0.2007 0.2492 0.2703 0.2906 0.3209 0.3501</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>E2EGAN 0.1160 0.1246 0.1508 0.1688 0.1898 0.2103 0.2562 0.2953 0.3391</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ours</ns0:cell><ns0:cell cols='8'>0.1076 0.1242 0.1444 0.1588 0.1829 0.2059 0.2323 0.2713 0.3205</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The RMSE results of the CGCNImp methods on two datasets with different missing rate (lower is better).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The ablation study RMSE results of the CGCNImp methods on two datasets(lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>missing rate (%)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>&#952; = 0</ns0:cell><ns0:cell cols='8'>0.1468 0.1607 0.1713 0.1890 0.2071 0.2251 0.2449 0.2634 0.2845</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD</ns0:cell><ns0:cell>&#945; = 0</ns0:cell><ns0:cell cols='8'>0.1278 0.1445 0.1607 0.1788 0.1982 0.2173 0.2372 0.2611 0.2826</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>no ablation 0.1142 0.1279 0.1402 0.1610 0.1803 0.2026 0.2263 0.2509 0.2776</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>&#952; = 0</ns0:cell><ns0:cell cols='8'>0.1240 0.1329 0.1567 0.1726 0.1918 0.2143 0.2416 0.2694 
0.3112</ns0:cell></ns0:row><ns0:row><ns0:cell>albifrons</ns0:cell><ns0:cell>&#945; = 0</ns0:cell><ns0:cell cols='8'>0.1180 0.1301 0.1473 0.1761 0.1873 0.2098 0.2304 0.2639 0.3047</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>no ablation 0.1287 0.1394 0.1589 0.1679 0.1802 0.2063 0.2289 0.2590 0.2921</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>&#952; = 0</ns0:cell><ns0:cell cols='8'>0.1156 0.1296 0.1373 0.1582 0.1827 0.2092 0.2437 0.2833 0.3276</ns0:cell></ns0:row><ns0:row><ns0:cell>fabalis</ns0:cell><ns0:cell>&#945; = 0</ns0:cell><ns0:cell cols='8'>0.1169 0.1294 0.1393 0.1617 0.1781 0.2132 0.2359 0.2781 0.3230</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='9'>no ablation 0.1076 0.1242 0.1444 0.1588 0.1829 0.2059 0.2323 0.2713 0.3205</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The ablation study MAE results of the CGCNImp methods on two datasets(lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>1224 0.1356 0.1492 0.1658 0.1806 0.1866</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Reviewer 1 (Falk Huettmann) Basic reporting Thanks for the submission and for sending me the manuscript (MS) by Liu et al. on Imputations of Geese titled “Bird-CGCNImp: a causal graph convolutional network for multivariate time series imputation” I find the approach is experimental and creative. I am not much against it, but the current MS falls short on a few relevant science points: -Whatever authors propose and defend, unless we see the data and the code with ISO compliant metadata, this work is not repeatable or transparent. It currently does not meet latest principles of science, or best professional practices. It can easily be fixed though as all data can now fully be made open access and with open source code models with ISO standard documentation (metadata), online, in GBIF and in Movebank even. I stress that specifically because work on these two species and the flyway is already full open access and available with 24 years of remote field work data. See GBIF and then Solovyeva et al. 2021. So why not here ? Not sharing data is not defendable whatsoever and it violates any best professional practices and collegiality. Before that is not clearly done I do not approve of the review or publication. A: Thanks the reviewer’s comments. 1) We had uploaded the data and code since the first submission of the MS to guarantee the transparency and repeatability of my method. 2)In the resubmit, I will update the data and upload the data and code again. 3)I have added the statement that all code is available and provided a link to where everything is available at https://github.com/zhewen166/CGCNImp -Imputation is just one of many methods. Personally, I would use the method of predicted ecological niches instead, or in parallel and to compare. Authors brought it down to a very narrow and self-fulfilling prophecy, just as the endless data filtering to get rid of so-called outliers (as often done in such telemetry works). From the MS and figures, authors seem to believe birds fly on a straight line connecting dots, which is highly dubious. How about krigging, Least Cost paths, or Circuitscape or Marxan approaches instead? One certainly needs a CERTAINTY and CONFIDENCE in the approach, outcome and maps; done how ? A: Thanks the reviewer’s comments. 1)These gaps in the data obscure the original data distribution, undermining the accuracy of any subsequent analysis and the efficacy of the final application (Cheema et al.) The most common solution is to impute the missing data via imputation processes in the field computer science, when encountering the missing value of data. 2)These approaches or software, krigging(Chung, Sang Yong et al.), Least Cost paths(Herzog, I), or Circuitscape(Anantharaman et al.) or Marxan capture the linear correlation of the bird migration data, but the complex correlation and temporal dependencies found in this type of data complicates matters: attribute correlation dependencies and temporal auto-correlation dependencies can not be capture by above methods properly. Anyways, if authors stick with imputations, please state citations on the issue, namely Jerome Friedman, classification trees and boosting, and many forest inventory and remote sensing references. Much literature and expertise sit there that was not used here. It should. A: Thanks the reviewer’s comments. 1) I state citations on the issue and thanks the reviewer again. -The way how to best approach the gap filling should be like a scenario, an hypothesis, or a re-analysis of the existing data. 
That should be made clear and pursued that way. A: We thank the reviewer for this comment. 1) In our paper, we use only the non-missing part of the data to train the model that predicts the missing values. Because there is no ground truth for the missing entries, we can only use the observed values to evaluate the validity of the model; when the predictions on the observed values are accurate, we assume that the predictions for the missing values are also reasonably accurate. Recent research follows the same idea (Fortuin et al., 2020; Liu et al., 2022; Miao et al., 2021). -I like Figures 5, 6 and 7 for the concept but those are currently way too small and too selective to be useful or convincing. We need it for all the data, not just some examples. A: We thank the reviewer for this comment. 1) We agree, but bird migration is only the case study here; following common practice in computer science, it is not necessary to show all of the data. -let’s agree that the biggest topics - research design, representative sampling of the tagged species, technical fault patterns in the transmission, and impacts of anesthesia - are not mentioned. They must, as those do overrule the data and policy question one wants to purse, e.g. where are the ‘true’ dots, where are the flyways, habitats and do we have broadfront migration or any mixing ? Getting stuck with a line is just not appropriate for a cohort or population inference. The latter is presumably our all policy goal. A: We thank the reviewer for this comment. 1) We can process the data and uncover some of its regularities, but the ecological questions behind those regularities may go beyond the scope of computer science, and more interdisciplinary knowledge needs to be brought in. We approach this problem from the perspective of computer science and follow the conventions of that field. -the title and concept of “Bird-CGCNImp” reads odd, and is odd. It’s not needed that way and should be improved. Same is true for ‘causal graph convolutional network’. That’s hardly English, not a term, and nobody speaks that way (in England, or the English speaking/reading world). A: We thank the reviewer for this comment. 1) Bird-CGCNImp is not a concept, and we apologize for the confusion. In the computer-science field, a newly designed model is usually given a name. 2) Although the name may carry no special meaning, this naming practice is common in many works, for example GAN, BERT, ResNet, GLOW and RealNVP. -re ‘Identifying bird migration trajectories and discovering habitats is very important for conserving species diversity.’: That’s 100% untrue. Can you show me an example in China, or Malta or anywhere else ? We see flyway declines all over and for many years, see Jiao et al. 2016. The protected areas are way too tiny, paper parks, and do not help. So, this phrase above sets up a strawman; not needed but better to be honest. A: We thank the reviewer for this comment. 1) The claim “Identifying bird migration trajectories and discovering habitats is very important for conserving species diversity” is supported by previous research, including Hays, Graeme et al. and Gilbert, Marius et al. 2) Whether the real situation is exactly as these studies describe may depend on additional factors that need to be considered. -the Literature references are poorly formatted, e.g. first names, abbreviations, and widely incomplete for topics A: We thank the reviewer for this comment. We have revised the citations in the new MS. 
-Finally, the problem we have in flyways and in China is a ruthless habitat conversion and loss scheme; as it is found globally now. Connecting the bird dots on a line for policy helps little. Happy to be shown any other. A: We thank the reviewer for this comment. 1) We can offer suggestions from the perspective of computer science, but whether those suggestions are eventually adopted and produce good results may be a relatively complicated process that depends on many factors, not only the technology itself. Experimental design The article has no typical research design A: We thank the reviewer for this comment. 1) The contribution of this paper is to design a new model to complete the missing values of multivariate time series, which is research in the field of computer science. The research ideas and experimental designs customary in the computer-science field may differ from those in the reviewer’s research area, so the research design may appear unusual when viewed from a different field. Validity of the findings As stated in section 1, authors assume linear movements, which is not justified. A: We thank the reviewer for this comment. 1) The bird migration data are arranged as time series, and in our work they are processed as time series. The MS does not assume linear movements. Additional comments Thanks. I am happy to support creative and experimental work, but the MS - as presented - fails on some basic science issues, namely transparency and repeatability. A: We thank the reviewer for this comment. 1) We uploaded the data and code at the submission of the MS to guarantee the transparency and repeatability of our method. 2) For the resubmission, we have updated the data and uploaded the data and code again. 3) We have added a statement that all code is available and provided a link to where everything can be found: https://github.com/zhewen166/CGCNImp Reviewer 2 (Anonymous) Basic reporting In this paper, the authors propose a causal GCN for bird migration trajectories motivated by the existence of both the attribute correlation and the temporal auto-correlation dependencies. The proposed method is novel and the application of bird migration trajectories shows its effectiveness. Experimental design In the proposed Bird-CGCNImp method, the authors establish an end-to-end multi-task model to capture both attribute correlation and temporal auto-correlation dependencies. Besides, the authors also notice the noise in the actual sampling process and design a total variation reconstruction regularization term to improve the imputation accuracy. The authors clearly present the proposed method's motivation and technology details. Validity of the findings In the experiment, the authors conduct the experiment on one public time-series imputation benchmark and two real-world bird migration trajectory datasets. The experimental results are promising. The authors also demonstrate the effectiveness of each component in Bird-CGCNImp by ablation study. My suggestion of the experiment is that the authors should repeat the experiment several times and report the mean and standard deviation of the metric so that the experimental results would be more convincing. Additional comments Missing citations: Suo Q, Yao L, Xun G, Sun J, Zhang A. Recurrent imputation for multivariate time series with missing values. In 2019 IEEE International Conference on Healthcare Informatics (ICHI) 2019 Jun 10 (pp. 1-3). IEEE. A: We thank the reviewer for this comment. We have added the citation in the new MS. 
minor: Some of the citations are incomplete. For example, the following citations miss the journal name: Nazabal, A., Olmos, P. M., Ghahramani, Z., and Valera, I. (2020). Handling incomplete heterogeneous data using vaes. Yoon, J., Zame, W., and Schaar, M. (2017). Multi-directional recurrent neural networks : a novel method for estimating missing data. A: We thank the reviewer for this comment. We have revised the incomplete citations in the new MS. REFERENCES [1] Suo, Q., Yao, L., Xun, G., Sun, J., and Zhang, A. (2019). Recurrent imputation for multivariate time series with missing values. In 2019 IEEE International Conference on Healthcare Informatics (ICHI), pages 1–3. IEEE. [2] Che, Z., Purushotham, S., Cho, K., Sontag, D., and Liu, Y. (2018). Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8(1):6085. [3] Liu, C., Zhou, H., Sun, Z., and Cui, G. (2022). GlowImp: Combining Glow and GAN for multivariate time series imputation. In Algorithms and Architectures for Parallel Processing, pages 50–64. Cham: Springer International Publishing. [4] Miao, X., Wu, Y., Wang, J., Gao, Y., Mao, X., and Yin, J. (2021). Generative semi-supervised learning for multivariate time series imputation. In Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), pages 8983–8991. [5] Fortuin, V., Baranchuk, D., Raetsch, G., and Mandt, S. (2020). GP-VAE: Deep probabilistic time series imputation. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108, pages 1651–1661. [6] Hays, G. C., Mortimer, J. A., Ierodiaconou, D., and Esteban, N. (2014). Use of long-distance migration patterns of an endangered species to inform conservation planning for the world’s largest marine protected area. Conservation Biology, 28(6):1636–1644. [7] Gilbert, M., Newman, S., and Takekawa (2011). Flying over an infected landscape: distribution of highly pathogenic avian influenza H5N1 risk in South Asia and satellite tracking of wild waterfowl. EcoHealth, 7:448–458. [8] Chung, S. Y., Venkatramanan, S., Elzain, H. E., Selvam, S., and Prasanna, M. V. (2019). Supplement of missing data in groundwater-level variations of peak type using geostatistical methods. In GIS and Geostatistical Techniques for Groundwater Science, pages 33–41. Elsevier. [9] Herzog, I. (2014). Least-cost paths – some methodological issues. Internet Archaeology, (36). https://doi.org/10.11141/ia.36.5 [10] Anantharaman, R., Hall, K., Shah, V. B., and Edelman, A. (2020). Circuitscape in Julia: High performance connectivity modelling to support conservation decisions. Proceedings of the JuliaCon Conferences, 1(1):58. [11] Hastie, T., Tibshirani, R., and Friedman, J. H. (2009). Boosting and additive trees. In The Elements of Statistical Learning (2nd ed.), pages 337–384. New York: Springer. ISBN 978-0-387-84857-0. "
Here is a paper. Please give your review comments after reading it.
414
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Background: Multivariate time series data generally contains missing values, which can be an obstacle to subsequent analysis and compromise downstream applications. One challenge in this endeavor is presence of the missing values brought about by sensor failure and transmission packet loss. Imputation is the usual remedy in such circumstances. However, in some multivariate time series data, the complex correlation and temporal dependencies, coupled with the non-stationarity of the data, make imputation difficult.</ns0:p></ns0:div> <ns0:div><ns0:head>Mehods:</ns0:head><ns0:p>To address this problem, we propose a novel model for multivariate time series imputation called CGCNImp that considers both correlation and temporal dependency modeling. The correlation dependency module leverages neural Granger causality and a GCN to capture the correlation dependencies among different attributes of the time series data, while the temporal dependency module relies on an attention-driven LSTM and a time lag matrix to learn its dependencies. Missing values and noise are addressed with total variation reconstruction.</ns0:p></ns0:div> <ns0:div><ns0:head>Results:</ns0:head><ns0:p>We conduct thorough empirical analyses on two real-world datasets. Imputation results show that CGCNImp achieves state-of-the-art performance when compared to previous methods.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Mehods:</ns0:head><ns0:p>To address this problem, we propose a novel model for multivariate time series imputation called CGCNImp that considers both correlation and temporal dependency modeling. The correlation dependency module leverages neural Granger causality and a GCN to capture the correlation dependencies among different attributes of the time series data, while the temporal dependency module relies on an attention-driven LSTM and a time lag matrix to learn its dependencies. Missing values and noise are addressed with total variation reconstruction.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Multivariate time series data is common to many systems and domains-any data that changes value over time can be most naturally represented as a time series. This includes data captured by a sensor or measured at intervals, for example, traffic monitoring data <ns0:ref type='bibr' target='#b46'>(Wang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b49'>Zhang et al., 2017)</ns0:ref>, healthcare and patient monitoring data <ns0:ref type='bibr' target='#b14'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Suo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Liu and Hauskrecht, 2016)</ns0:ref>, IIoT systems data, financial marketing data <ns0:ref type='bibr' target='#b6'>(Bauer et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b5'>Batres-Estrada, 2015)</ns0:ref> and so on. In these domains, data is typically extracted in the form of multivariate time series data. What is also common is missing values and noise brought about by sensor failure, transmission packet loss, human error, and other issues. Missing values will not only destroy the integrity and balance of original data distributions, but also affect the subsequent analysis and application of related scenarios <ns0:ref type='bibr' target='#b15'>(Cheema, 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Berglund et al., 2015)</ns0:ref>. 
The processing of missing values in time series has become a very important problem. Some researches try to directly model the dataset with missing values <ns0:ref type='bibr' target='#b50'>(Zheng et al., 2017)</ns0:ref>. However, for every dataset, we need to model them separately. In most cases, imputation of the missing values is the standard remedy, but imputing with multivariate time series data is not so easy. The complex correlation and temporal dependencies found in some multivariate time series data complicates matters, and the non-stationarity of the data only exacerbates the issue:</ns0:p><ns0:p>Attribute correlation dependencies : In many multivariate time series, it is important to interpret the attribute correlations within the time series that naturally arise. Typically, this correlation provides information about the contemporaneous and lagged relationships within and between individual series and PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science how these series interact <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018))</ns0:ref>. Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> (a) illustrates the causal relationship graph with the KDD 1 time series which collects air quality and weather data. In this data, there are 121 variables in total consisting of 11 different locations, each with 11 different variables. Different attributes for the same places are arranged in adjacent positions. A dark blue element (i, j) means that there is a strong Granger causal effect from variable i to variable j. It can be seen that the causal effect is strong along the diagonal of the matrix, which means that there are strong causal effects among different variables at the same location. <ns0:ref type='bibr'>Similarly,</ns0:ref><ns0:ref type='bibr'>Fig. 1 (b)</ns0:ref> shows causal effects between variables in a bird migration dataset that we study in this paper. Several research teams have also demonstrated that many aspects of weather, including temperature, precipitation, air pressure, wind speed, and wind direction, have substantial impacts on the migration of birds and that those impacts are inherently nonlinear <ns0:ref type='bibr' target='#b16'>(Clairbaux et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Boz&#243; et al., 2018)</ns0:ref>. Hence, when attempting to impute missing values, all of these factors must be taken into account and the correlations between all these factors needs to be properly modeled to arrive at an accurate result. Temporal auto-correlation dependencies : The evolution of multivariate time series changes dynamically over time and is mainly reflected in auto-correlations and trends <ns0:ref type='bibr' target='#b1'>(Anghinoni et al., 2021)</ns0:ref>. For example, in the bird migration case, factors affecting these correlations can include inadequate food and subsequent starvation, too little energy to travel, bad weather conditions, and others <ns0:ref type='bibr' target='#b45'>(Visser et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Researchers have proposed various methods of imputing missing values for time series data. The most recent techniques include using the complete data of existing observations to build a model or learn the data distribution and then using that distribution to estimate the missing values. 
The current models and algorithms with good prediction performance include imputation methods based on recurrent neural networks (RNNs) <ns0:ref type='bibr' target='#b14'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Suo et al., 2019)</ns0:ref> and generative adversarial networks (GANs) <ns0:ref type='bibr' target='#b21'>(Goodfellow et al., 2014)</ns0:ref>. Recently, autoencoders have also been used to impute missing values in multivariate time series data. These represent the current state-of-the-art. For instance, Fortuin et al.</ns0:p><ns0:p>(2020) proposed a model based on a deep autoencoder that maps the missing values of multivariate time series data into a continuous low-dimensional hidden space. This framework treats the low-dimensional representations as a Gaussian process but does not specify the goal of learning as the generation of real samples. Rather, the model simply tries to generate data that is close to a real sample. The result is a set of fuzzy samples. GlowImp <ns0:ref type='bibr' target='#b31'>(Liu et al., 2022)</ns0:ref> combines Glow-VAEs and GANs into a generative model that simultaneously learns to encode, generate and compare dataset samples. Although all these systems perform well at their intended task, none consider complex attribute correlations or temporal auto-correlation dependencies. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to capture correlation dependencies between the attributes and an attention-driven LSTM plus a time lag matrix to model temporal auto-correlation dependencies and to impute the missing values. Last, neighbors with similar values are used to smooth the time series and reduce noise. In summary, our main contributions include:</ns0:p><ns0:p>&#8226; A novel model for imputing multivariate time series that considers both attribute correlation and temporal auto-correlation dependencies. The combination of neural Granger causality, an attention mechanism, and time lag decay yields satisfactory performance compared to the current methods.</ns0:p><ns0:p>&#8226; An imputation technique based on Granger causality and a GCN that captures attribute correlations for higher accuracy. In addition, an attention mechanism and total variation reconstruction automatically recovers latent temporal information.</ns0:p><ns0:p>&#8226; We conduct thorough empirical analyses on two real-world datasets. 
Imputation results show that CGCNImp achieves state-of-the-art performance when compared to previous methods.</ns0:p><ns0:p>Reproducibility: Our open-sourced code and the data used in this paper are available at https://github.com/zhewen166/CGCNImp.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In recent years, researchers have generated a large body of literature on the imputation of missing values.</ns0:p><ns0:p>Due to the limited space, we only describe a few closely related methods.</ns0:p></ns0:div> <ns0:div><ns0:head>Statistical Methods</ns0:head><ns0:p>Statistical <ns0:ref type='bibr' target='#b30'>(Little and Rubin, 2019)</ns0:ref> imputation algorithms impute the missing values with the mean value <ns0:ref type='bibr' target='#b25'>(Kantardzic, 2011)</ns0:ref>, the median value (na Edgar and Caroline, 2004), the mode value <ns0:ref type='bibr'>(Donders et al., 2006)</ns0:ref> or the last observed valid value <ns0:ref type='bibr' target='#b0'>(Amiri and Jensen, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Machine Learning Based Methods</ns0:head><ns0:p>Some researchers impute the missing values with machine learning algorithms showing that machine learning based imputation methods are useful for time series imputation. K-Nearest Neighbor (KNN) <ns0:ref type='bibr' target='#b29'>(Liew et al., 2011)</ns0:ref> uses pairwise information between the target with missing values and the k nearest references to impute the missing values. Expectation-Maximization (EM) <ns0:ref type='bibr' target='#b39'>(Nelwamondo et al., 2007)</ns0:ref> carries out a multi-step process which predicts the value of the current state and then applies to two estimators refining the predicted values of the given state, maximizing a likelihood function. The Matrix Factorization (MF) (C. <ns0:ref type='bibr' target='#b12'>Li et al., 2015)</ns0:ref> uses a low rank matrix to estimate the missing value. Tensor Singular Value Decomposition (t-SVD) (Jingfei <ns0:ref type='bibr' target='#b24'>He and Geng, 2016)</ns0:ref> initializes the missing values as zeroes. It carries out an the SVD decomposition and selects the k most significant columns of V, using a linear combination of these columns to estimate the missing values. Multivariate Imputation by Chained Equations (MICE) <ns0:ref type='bibr' target='#b2'>(Azur et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b11'>Buuren and Groothuis-Oudshoorn, 2011</ns0:ref>) uses a chained equation to fill the missing values. Autoregressive (S. <ns0:ref type='bibr' target='#b41'>Sridevi et al., 2011)</ns0:ref> modelling estimates missing values using an autoregressive model. The vector autoregressive imputation method (VAR-IM) <ns0:ref type='bibr' target='#b4'>(Bashir and Wei, 2018</ns0:ref>) is based on a vector autoregressive (VAR) model by combining an expectation and minimization algorithm with the prediction error minimization method. The Gradient-boosted tree <ns0:ref type='bibr' target='#b20'>(Friedman, 2020)</ns0:ref> model is built in a stage-wise fashion as in other boosting methods, but it generalizes the other methods by allowing optimization of an arbitrary differentiable loss function.</ns0:p></ns0:div> <ns0:div><ns0:head>Deep Learning Based Methods</ns0:head><ns0:p>In time series imputation, deep learning based approaches can be classified into RNN-based methods, VAE-based methods, and GAN-based methods.</ns0:p><ns0:p>RNN-Based methods. 
GRU-D <ns0:ref type='bibr' target='#b14'>(Che et al., 2018)</ns0:ref> predicts the missing variable by the combination of last observed value, the global mean, and the time lag. But, it has drawbacks on general datasets <ns0:ref type='bibr' target='#b14'>(Che et al., 2018)</ns0:ref>. M-RNN <ns0:ref type='bibr' target='#b48'>(Yoon et al., 2017)</ns0:ref> utilizes a bi-directional RNN to impute missing values since both previous series and next series of missing values may be known in the scenario considered in their work. BRITS <ns0:ref type='bibr' target='#b13'>(Cao et al., 2018)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Correlated Recurrent Imputation algorithms. All these models may suffer from the problem of vanishing or exploding gradients <ns0:ref type='bibr' target='#b7'>(Bengio et al., 2015)</ns0:ref> and error accumulation when encountering the continuous missing values.</ns0:p><ns0:p>VAE-Based methods. VAEs <ns0:ref type='bibr' target='#b28'>(Kingma and Welling, 2014</ns0:ref>) constitute a novel approach for efficient approximate inference with continuous latent variables. HI-VAE <ns0:ref type='bibr' target='#b38'>(Naz&#225;bal et al., 2020)</ns0:ref> deals with missing data on Heterogeneous and Incomplete Data. However, HI-VAE is not suitable for time series data as it does not exploit temporal information. GP-VAE <ns0:ref type='bibr' target='#b19'>(Fortuin et al., 2020)</ns0:ref> combines variational autoencoders and Gaussian processes for time series data. The VAE maps the missing data from the input space into a latent space where the temporal dynamics are modeled by the GP. GlowImp <ns0:ref type='bibr' target='#b31'>(Liu et al., 2022)</ns0:ref> combines Glow-VAEs and GANs into a generative model that simultaneously learns to encode, generate, and compare dataset samples. All these methods only optimize a lower bound and do not specify the goal of learning to generate real samples.</ns0:p><ns0:p>GAN-Based methods. <ns0:ref type='bibr' target='#b21'>Goodfellow et al. (2014)</ns0:ref> introduced the generative adversarial networks (GAN), and train generative deep models via an adversarial process. GAIN <ns0:ref type='bibr' target='#b47'>(Yoon et al., 2018)</ns0:ref> has some unique features. The generator receives random noise and a mask vector as input data and the discriminator gets some additional information via a hint vector to ensure that the generator generates samples depending on the true data distribution. But GAIN is not suitable for time series. GRUI-GAN <ns0:ref type='bibr' target='#b33'>(Luo et al., 2018)</ns0:ref> proposed a two second stage GAN based model. The generator tries to generate a realistic time series from the random noise vector z. The discriminator tries to distinguish whether the input data is real data or fake data. The adversarial structure can improve accuracy. However, this two-stage training needs a lot more time to train the 'best' matched data and seems unstable with a random noise input <ns0:ref type='bibr' target='#b34'>(Luo et al., 2019)</ns0:ref>. E2GAN <ns0:ref type='bibr' target='#b34'>(Luo et al., 2019)</ns0:ref> can impute the incomplete time series via an end-to-end strategy. This work proposes an encoder-decoder GRUI based structure as the generator, which can improve the accuracy and stability when training the model. The discriminator consists of a GRUI layer and a fully connected layer working as the encoder. 
SSGAN <ns0:ref type='bibr' target='#b35'>(Miao et al., 2021</ns0:ref>) is a novel semi-supervised generative adversarial network model, with a generator, a discriminator, and a classifier to predict missing values in the partially labeled time series data <ns0:ref type='bibr' target='#b14'>(Che et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Luo et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>METHODOLOGY Motivation</ns0:head><ns0:p>In many multivariate time series, it is important to interpret the attribute correlations that naturally arise. Generally, these correlation can be divided into attribute correlation dependencies and temporal auto-correlation dependencies. Hence, our work includes three main considerations: these two types of dependencies plus end-to-end multi-task modeling to properly capture both.</ns0:p><ns0:p>Attribute correlation dependency Typically, this correlation provides information about the contemporaneous and lagged relationships within and between individual series and how these series interact <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018))</ns0:ref>. For example, in the bird migration case, the main attribute dependencies are weather factors such as temperature, air pressure, and wind conditions. All can have a substantial impact on evolution of multivariate time series <ns0:ref type='bibr' target='#b16'>(Clairbaux et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Boz&#243; et al., 2018)</ns0:ref>. These therefore need to be considered if one is to accurately impute any missing values. At the same time, there may be false correlation between some attributes. Hence, determining reasonable causal effects among different attributes is also an important issue. We opted for neural Granger causality <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018) )</ns0:ref> to model the correlation dependencies between the variables because it has achieved satisfactory performance on multivariate time series causal inferences, and it could be easily integrated into the multivariate time series imputation framework.</ns0:p><ns0:p>Temporal auto-correlation dependency. The evolution of multivariate time series changes dynamically over time and patterns are quasi-periodical on different scales of years and days <ns0:ref type='bibr' target='#b1'>(Anghinoni et al., 2021)</ns0:ref>. Additionally, sensor malfunctions and failures, transmission errors, and other factors can mean the recorded time series carries noise <ns0:ref type='bibr' target='#b23'>(Han and Wang, 2013)</ns0:ref>. Effectively exploiting auto-correlation relationships and eliminating sensor noise is therefore a key consideration.</ns0:p><ns0:p>Multitask modeling. Classical time series imputation methods adopt a two-stage modeling approach <ns0:ref type='bibr' target='#b33'>(Luo et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Yoon et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Miao et al., 2021)</ns0:ref>. First, they analyze the correlations between multiple sequences and then impute the different sequences separately. However, these two-stage methods can not guarantee the global optimum. In this paper, we aim to establish an end-to-end model for Granger</ns0:p></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science causal analysis and deep-learning-based time series imputation under the same framework, which will hopefully accelerate the imputation process and provide interpretability.</ns0:p></ns0:div> <ns0:div><ns0:head>Preliminary</ns0:head><ns0:p>Definition 1: Multivariate Time Series. A multivariate time series X = {x 1 , x 2 , . . . , x n } is a sequence with data observed at n timestamps T = (t 0 ,t 1 , . . . ,t n&#8722;1 ). The i &#8722; th observation x i contains d attributes</ns0:p><ns0:formula xml:id='formula_0'>(x 1 i , x 2 i , . . . , x d i ).</ns0:formula><ns0:p>Example 1: Multivariate Time Series. We give an example of the multivariate time series X with missing values, / indicates the missing value. </ns0:p><ns0:formula xml:id='formula_1'>X = &#63726; &#63728; 5 / / /</ns0:formula><ns0:formula xml:id='formula_2'>M j i = 0, if x j i is null 1, otherwise if the j-th attribute of x i is observed, M j i is set to 1. Otherwise, M j i is set to 0.</ns0:formula><ns0:p>Example 2: Binary Mask Matrix. We can thus compute the binary mask matrix according to the multivariate time series X in example 1 which have missing values.</ns0:p><ns0:formula xml:id='formula_3'>M = &#63726; &#63728; 1 0 0 0 1 1 1 1 0 1 1 0 1 0 1 &#63737; &#63739;</ns0:formula><ns0:p>Definition 3: Time Lag Matrix. In order to record the time lag between current value and last observed value, we introduce the time lag matrix &#948; &#8712; R n * d . The following formation shows the calculation of the &#948; from the last observation to the current timestamp s t .</ns0:p><ns0:formula xml:id='formula_4'>&#948; d t = &#63729; &#63732; &#63730; &#63732; &#63731; s t &#8722; s t&#8722;1 + &#948; d t&#8722;1 if t &gt; 0 and M d t&#8722;1 == 0 s t &#8722; s t&#8722;1 if t &gt; 0 and M d t&#8722;1 == 1 0 if t == 0</ns0:formula></ns0:div> <ns0:div><ns0:head>CGCNImp model</ns0:head><ns0:p>To impute reasonable values in place of the missing values, as shown in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>, the model contains an attribute correlation dependency module and a temporal auto-correlation dependency module. The correlation dependency module leverages neural Granger causality and a GCN to capture the correlation dependencies between attributes. The output of this module is passed to the temporal dependency module, which combines an attention-driven LSTM with a time lag matrix to impute the missing values. Last, a noise reduction and smoothness module uses neighbors with similar values to smooth the time series and remove much of the noise, while still preserving occasional rapid variations in the original signal. The details of each of these modules and the framework as a whole are discussed in the following sections.</ns0:p></ns0:div> <ns0:div><ns0:head>Attributes causality modeling</ns0:head><ns0:p>Determining complex correlation dependencies is a key problem in the process of imputing with multivariate time series data. Here we use the neural Granger causality <ns0:ref type='bibr' target='#b44'>(Tank et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b43'>(Tank et al., , 2018) )</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_5'>h t+1 = f (x t , h t ) (1)</ns0:formula><ns0:p>where f is some nonlinear function that depends on the particular recurrent architecture. 
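Before detailing the recurrent model, and looking back at the Preliminary section, the following is a minimal sketch of how the binary mask of Definition 2 and the time-lag matrix of Definition 3 can be computed with NumPy. The function name and the NaN encoding of missing entries are our own illustrative choices, not part of the released implementation.

import numpy as np

def mask_and_time_lag(X, timestamps):
    # X          : (n, d) array in which NaN marks a missing entry.
    # timestamps : (n,) array of observation times s_t.
    # Returns the binary mask M (Definition 2) and the time-lag matrix delta (Definition 3).
    n, d = X.shape
    M = (~np.isnan(X)).astype(int)
    delta = np.zeros((n, d))
    for t in range(1, n):
        gap = timestamps[t] - timestamps[t - 1]
        # Accumulate the gap while the previous value was missing, otherwise reset to it.
        delta[t] = np.where(M[t - 1] == 0, gap + delta[t - 1], gap)
    return M, delta

# Toy usage with the matrix of Example 1 (/ encoded as NaN); M reproduces Example 2.
X = np.array([[5.0, np.nan, np.nan, np.nan, 18.0],
              [12.0, 32.0, 9.0, np.nan, 76.0],
              [2.0, np.nan, 24.0, np.nan, 47.0]])
M, delta = mask_and_time_lag(X, np.array([0.0, 1.0, 2.0]))

Keeping delta per attribute is what later allows the temporal module to decay the influence of observations that have been missing for a long time.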
We opted for an LSTM to model the recurrent function f due to its effectiveness at modeling complex time dependencies.</ns0:p><ns0:p>The standard LSTM model takes the form:</ns0:p><ns0:formula xml:id='formula_6'>f t = &#963; (W f x t +U f h t&#8722;1 ) i t = &#963; (W i x t +U i h t&#8722;1 ) o t = &#963; (W o x t +U o h (t&#8722;1) ) c t = f t &#8857; c t&#8722;1 + i t &#8857; tanh(W c x t +U c h t&#8722;1 ) h t = o t &#8857; tanh(c t )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where &#8857; denotes component-wise multiplication and i t , f t , and o t represent input, forget and output gates, respectively. These control how each component of the state cell c t , is updated and then transferred to the hidden state used for prediction</ns0:p><ns0:formula xml:id='formula_7'>h t . W f ,W i ,W o ,W c ,U f ,U i ,U o ,U c</ns0:formula><ns0:p>are the parameters that need to learn by LSTM. The output for series i is given by a linear decoding of the hidden state at time t :</ns0:p><ns0:formula xml:id='formula_8'>x ti = g i (x &lt;t ) + e ti (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>where the dependency of g i on the full past sequence x &lt;t is due to recursive updates of the hidden state.</ns0:p><ns0:p>The LSTM model introduces a second hidden state variable c t , referred to as the cell state, giving the full set of hidden parameter as (c t ,h t ).</ns0:p><ns0:p>In Eq. 2 the set of input maxtrices,</ns0:p><ns0:formula xml:id='formula_10'>W = ((W f ) T , (W i ) T , (W o ) T , (W c ) T ) T (4)</ns0:formula><ns0:p>controls how the past time series x t , influences the forget gates, input gates, output gates, and cell updates, and, consequently, the update of the hidden representation. A group lasso penalty across the columns of W can be selected to indicate which Granger series causes series i during estimation. The loss function for modeling the attribute correlation dependencies is as follows: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_11'>L NG = min W,U,W o T &#8721; t=2 (x it &#8722; g i (x &lt;t )) 2 + &#955; d &#8721; j=1 ||W || 1 (5)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where U = (((U f ) T , (U i ) T , (U o ) T , (U c ) T ) T ) . The adjacent matrix A, which is produced by neural Granger causality, is stated as:</ns0:p><ns0:formula xml:id='formula_12'>A i j = ||W i g j || 2 F (6)</ns0:formula><ns0:p>This adjacency matrix is used in the graph convolution network component discussed in the next subsection.</ns0:p></ns0:div> <ns0:div><ns0:head>Attributes Correlation Dependency Modeling</ns0:head><ns0:p>Convolutional neural networks (CNNs) can derive local correlation features but can only be used in Euclidean space. GCNs, however, are semi-supervised models that can handle arbitrary graph-structured data. As such, they have received widespread attention. GCNs can include spectrum and/or spatial domain convolutions. In this study, we use spectrum domain convolutions. 
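As a concrete illustration of Eqs. 2-6, the sketch below shows one way the per-series componentwise LSTM and its group-lasso penalty could be written in PyTorch. The class name, hidden size, target index handling, and penalty weight are our own illustrative assumptions; this is not the authors' released code, which is available at the linked repository.

import torch
import torch.nn as nn

class GrangerLSTM(nn.Module):
    # One componentwise LSTM per target series i; the stacked input-to-hidden
    # weights weight_ih_l0 play the role of W in Eq. 4.
    def __init__(self, d_in, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(d_in, hidden, batch_first=True)
        self.decode = nn.Linear(hidden, 1)      # linear readout g_i of Eq. 3

    def forward(self, x):                        # x: (batch, T, d_in)
        h, _ = self.lstm(x)
        return self.decode(h).squeeze(-1)        # one-step-ahead prediction of series i

    def input_group_norms(self):
        # Column j of weight_ih_l0 collects every weight through which input series j
        # enters the gates, so its norm is the group used in Eq. 5 and the entry
        # A[i, j] of the Granger adjacency in Eq. 6.
        return self.lstm.weight_ih_l0.norm(dim=0)

def granger_loss(model, x, target_idx, lam=1e-2):
    # Squared one-step prediction error plus the group-lasso penalty (Eq. 5).
    pred = model(x[:, :-1, :])
    target = x[:, 1:, target_idx]
    return ((pred - target) ** 2).mean() + lam * model.input_group_norms().sum()

Training d such models, one per target series, and stacking their input group norms gives the adjacency matrix A that is handed to the graph convolution described next.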
In the Fourier domain, spectral convolutions on graphs are defined as the multiplication of a signal x with a filter g &#952; :</ns0:p><ns0:formula xml:id='formula_13'>g &#952; * x = Ug &#952; (U T x).</ns0:formula><ns0:p>Here U is the matrix of eigenvectors of the normalized graph Laplacian</ns0:p><ns0:formula xml:id='formula_14'>L = I N &#8722;D &#8722; 1 2 AD &#8722; 1 2 = U&#955; U T ,</ns0:formula><ns0:p>U T x is the graph Fourier transform of x, A &#8712; R d * d is an adjacency matrix and &#955; is diagonal matrix of its eigenvalues. In multivariate time series, x can also be a X &#8712; R n * d , where d refers to the number of features and n refers to the time internals. Given the adjacent matrix A which is produced by neural Granger causality, GCNs can perform the spectrum convolutional operation to capture the correlation characteristics. The GCN model can be expressed as:</ns0:p><ns0:formula xml:id='formula_15'>H G = &#963; ( W &#8722; 1 2 A W &#8722; 1 2 X&#952; )<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where A = A + I N is an adjacent matrix with self-connection structures, I N is an identity matrix, W is a degree matrix, H G &#8712; R n * d is the output of GCN which is the input of the temporal auto-correlation dependency modeling, &#952; is the parameter of GCN, and &#963; (&#8226;) is an activation function used for nonlinear modeling.</ns0:p></ns0:div> <ns0:div><ns0:head>Temporal Auto-correlation Dependency Modeling</ns0:head><ns0:p>Obtaining complex temporal auto-correlation dependencies is another key problem with imputation of multivariate time series data. In particular, sometimes the input decay may not fully capture the missing patterns since not all missingness information can be represented in decayed input values. Due to its effectiveness at modeling complex time dependencies, we choose to model the temporal dependencies using an LSTM <ns0:ref type='bibr' target='#b22'>(Graves, 2012)</ns0:ref>. However, to properly learn the characteristics of the original incomplete time series dataset, we find that the time lag between two consecutive valid observations is always changing due to the nil values. Further, the time lags between observations are very important since they follow an unknown non-uniform distribution. These changeable time lags remind us that the influence of the past observations should decay with time if a variable has been missing for a while.</ns0:p><ns0:p>Thus, a time decay vector &#945; is introduced to control the influence of the past observations. Each value of &#945; should be greater than 0 and smaller than 1 with the larger the &#948; , the smaller the decay vector. Hence, the time decay vector &#945; is modeled as a combination of &#948; :</ns0:p><ns0:formula xml:id='formula_16'>&#945; t = 1/e max(0,W &#945; &#948; t +b &#945; )<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where W &#945; and b &#945; are parameters that need to be learned. Once the decay vector has been derived, the hidden state in the LSTM h t&#8722;1 is updated in an element-wise manner by multiplying the decay vector &#945; t to fit the decayed influence of the past observations. Thus, the update functions of the LSTM are as follows:</ns0:p></ns0:div> <ns0:div><ns0:head>7/17</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_17'>h &#8242; t&#8722;1 = &#945; t &#8857; h t&#8722;1 i t = &#963; (W i [h &#8242; t&#8722;1 ; H G t ] + b i ) f t = &#963; (W f [h &#8242; t&#8722;1 ; H G t ] + b f ) s t = f t &#8857; s t&#8722;1 + i t &#8857; tanh(W s [h &#8242; t&#8722;1 ; H G t ] + b s ) o t = &#963; (W o [h &#8242; t&#8722;1 ; H G t ] + b o ) h t = o t &#8857; tanh(s t ) (9) where W f ,W i ,W o ,W c , b f , b i , b o , b s</ns0:formula><ns0:p>are the parameters that need to be learned by the by LSTM and H G is the output of the attribute correlation dependency modeling.</ns0:p><ns0:p>Attentive neural networks have recently demonstrated success in a wide range of tasks and, for this reason, we use one here. Let H L be a matrix consisting of output vectors H L = [h 1 , h 2 , ..., h n ] &#8712; R T &#215;d that the LSTM layer produced, where n is the time series length. The representation &#946; i j of the attention score is formed by a weighted sum of these output vectors:</ns0:p><ns0:formula xml:id='formula_18'>&#946; i j = exp(tanh(W [h i |h j ])) &#8721; T k=1 exp(tanh(W [h k |h j ]))<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>H &#8242; = Atten L &#215; H L (11)</ns0:formula><ns0:p>where the</ns0:p><ns0:formula xml:id='formula_20'>Atten L = &#63726; &#63727; &#63727; &#63728; &#946; 11 , &#946; 12 , . . . , &#946; 1T &#946; 21 , &#946; 22 , . . . , &#946; 2T &#8226; &#8226; &#8226; &#946; n1 , &#946; n2 , . . . , &#946; nT &#63737; &#63738; &#63738; &#63739; is the attention score.</ns0:formula><ns0:p>In Eq.11, We reconstruct the missing value by some linear transformation of the hidden state H &#8242; at time t. Hence the reconstruction loss is formulated as:</ns0:p><ns0:formula xml:id='formula_21'>L reg = &#8721; x&#8712;D x &#8855; m &#8722; x &#8855; m 2 (12)</ns0:formula><ns0:p>x represents the input multivariate time series data, x represents the imputed multivariate time series data and m means the masking matrix. The expression in Eq. 12 is the masked reconstruction loss that calculates the squared errors between the original observed data x and the imputed sample. Here, it should be emphasized that when calculating the loss, we only calculate the observed data as previously described in <ns0:ref type='bibr' target='#b13'>(Cao et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Luo et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b34'>Luo et al., , 2019;;</ns0:ref><ns0:ref type='bibr' target='#b31'>Liu et al., 2022)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Noise Reduction and Smoothness Imputation</ns0:head><ns0:p>In the past, reconstructions were performed directly, which ignores the noise in the actual sampling process.</ns0:p><ns0:p>However, in real-world multivariate time series data, when time series are collected the observations may be contaminated by various types of error or noise. Hence, these imputation values may be unreliable.</ns0:p><ns0:p>To ensure the reliability of the imputation results, a total variation reconstruction regularization term is applied to the reconstruction results. The method is based on a smoothing function where neighbors with similar values are used to smooth the time series. When applied to time series data, abrupt changes in trend, spikes, dips and the like can all be fully preserved. 
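Before turning to the smoothness term in detail, here is a compact sketch of how the decay vector of Eq. 8, the masked reconstruction loss of Eq. 12, and the total-variation penalty just described (formalised as Eq. 13 below) might look in PyTorch. The module and function names are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class TimeDecay(nn.Module):
    # Eq. 8: alpha_t = 1 / e^{max(0, W_a * delta_t + b_a)} = exp(-relu(W_a * delta_t + b_a)).
    def __init__(self, d):
        super().__init__()
        self.lin = nn.Linear(d, d)

    def forward(self, delta):                    # delta: (batch, T, d) time-lag matrix
        return torch.exp(-torch.relu(self.lin(delta)))

def masked_reconstruction_loss(x, x_hat, m):
    # Following Eq. 12, the error is accumulated only where the mask m marks an observed entry.
    return (((x - x_hat) * m) ** 2).sum()

def total_variation_penalty(x_hat):
    # Following Eq. 13, an L1 norm of first-order differences along time suppresses
    # noise while still letting genuine spikes and dips pass through.
    return (x_hat[:, 1:, :] - x_hat[:, :-1, :]).abs().sum()

# Inside the temporal module, the decayed hidden state of Eq. 9 corresponds to
# h_prev = alpha_t * h_prev before each cell update; the three loss terms are then
# combined with the weights alpha, beta, theta of Eq. 14.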
Compared to a two-norm smoothing constraint, the total variation reconstruction term can ensure smoothness without losing the dynamic performance of the time series <ns0:ref type='bibr' target='#b9'>(Boyd and Vandenberghe, 2004)</ns0:ref>. Eq.13 applies the total variation reconstruction term to the reconstruction results. As a result, noise in the original data is reduced and completion accuracy is improved. The reconstruction loss is formulated as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_22'>L SL = M &#8721; j=1 N &#8721; i=1 || x j i+1 &#8722; x j i || 1 (13)</ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where M is the number of time series, that is, the number of variables, and N is the length of each time series.</ns0:p><ns0:p>The total object function of our model is:</ns0:p><ns0:formula xml:id='formula_23'>L loss = &#945; * L NG + &#946; * L reg + &#952; * L SL (<ns0:label>14</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>)</ns0:formula><ns0:p>where &#945;,&#946; ,&#952; indicate the weights among different parts of the total loss. We optimize Eq. 14 by Adam optimizer.</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENT</ns0:head><ns0:p>To verify and measure the performance of the proposed CGCNImp framework, we compare its performance at imputation with multiple time series against several other contemporary methods. The selected datasets used in the evaluations were three real-world bird migration datasets focusing on migratory patterns in China -Anser albifrons and Anser fabalis -as well as the KDD 2018 CUP Dataset.</ns0:p></ns0:div> <ns0:div><ns0:head>Dataset Description</ns0:head></ns0:div> <ns0:div><ns0:head>KDD CUP 2018 Dataset</ns0:head><ns0:p>The KDD dataset comes from the KDD CUP Challenge 2018. The dataset, which is is a public meteorologic dataset, has about 15% missing values. It was collected hourly between 2017/1/20 to 2018/1/30 in Beijing, collecting air quality and weather data. Each record contains 12 attributes, for example CO, weather, temperature etc. In our experiment, we select 11 common features as the comparison methods descried in the next section. We split this dataset for every hour. For every 48 hours, we randomly drop p percent of the dataset, and then we impute these time series with different models and calculate the imputation accuracy by RMSE and MAE where p &#8712; {10, 20, 30, 40, 50, 60, 70, 80, 90}.</ns0:p></ns0:div> <ns0:div><ns0:head>Bird Migration Dataset in China</ns0:head><ns0:p>The <ns0:ref type='bibr'>, 20, 30, 40, 50, 60, 70, 80, 90}</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison Methods and Evaluation Metrics</ns0:head><ns0:p>We compare our methods to eight current imputation methods as previously described in <ns0:ref type='bibr' target='#b31'>(Liu et al., 2022)</ns0:ref>.</ns0:p><ns0:p>A brief description of each follows.</ns0:p><ns0:p>&#8226; Statistical imputation methods <ns0:ref type='bibr' target='#b40'>(Rubinsteyn and Feldman, 2016)</ns0:ref>, where we simply impute the missing values with zero, mean, or median.</ns0:p><ns0:p>&#8226; KNN <ns0:ref type='bibr' target='#b29'>(Liew et al., 2011)</ns0:ref>, which imputes the missing data as the weighted average of k neighbors by using a k-nearest neighbor algorithm to find neighboring data.</ns0:p><ns0:p>&#8226; MF (C. 
<ns0:ref type='bibr' target='#b12'>Li et al., 2015)</ns0:ref>, which fills the missing values through factorizing an incomplete matrix into low-rank matrices.</ns0:p><ns0:p>&#8226; SVD (Jingfei <ns0:ref type='bibr' target='#b24'>He and Geng, 2016)</ns0:ref>, which uses iterative singular value decomposition for matrix imputation to impute the missing values.</ns0:p><ns0:p>&#8226; GP-VAE <ns0:ref type='bibr' target='#b19'>(Fortuin et al., 2020)</ns0:ref>, a method that combines ideas from VAEs and Gaussian processes to capture temporal dynamics for time series imputation.</ns0:p></ns0:div> <ns0:div><ns0:head>9/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; BRITS <ns0:ref type='bibr' target='#b13'>(Cao et al., 2018)</ns0:ref>, one of methods that include Unidirectional Uncorrelated Recurrent Imputation, Bidirectional Uncorrelated Recurrent Imputation, and Correlated Recurrent Imputation algorithm to impute the missing values.</ns0:p><ns0:p>&#8226; GRUI <ns0:ref type='bibr' target='#b33'>(Luo et al., 2018)</ns0:ref>, which is a two-stage GAN based method that uses a generator and discriminator to impute missing values.</ns0:p><ns0:p>&#8226; E2E-GAN <ns0:ref type='bibr' target='#b34'>(Luo et al., 2019)</ns0:ref> which relies on an end-to-end GAN network that includes an encoderdecoder GRUI based structure and is one of the state-of-the-art methods.</ns0:p><ns0:p>To evaluate the performance of our methods, we use two metrics to the compare and analyze with the results of previous methods.</ns0:p><ns0:p>(1) RMSE (Root Mean Squared Error) refers to the mean value of the square root of the error between the predicted value and the true value. The calculation formula is as follows:</ns0:p><ns0:formula xml:id='formula_25'>RMSE = 1 n n &#8721; i=1 (x &#8722; x) 2</ns0:formula><ns0:p>(2) MAE(Mean Absolute Error) is the average of the absolute value of the error between the observed value and the real value. It is used to describe the error between the predicted value and the real value.</ns0:p><ns0:p>The formulation is as follows:</ns0:p><ns0:formula xml:id='formula_26'>MAE = 1 n n &#8721; i=1 |x &#8722; x|</ns0:formula></ns0:div> <ns0:div><ns0:head>Implementations Details</ns0:head><ns0:p>All the experimental results are obtained under the same hardware and software environment. The hardware is Intel i7 9700k, 48GB memory, NVIDIA GTX 1080 8GB. And the deep learning framework is PyTorch1.7 and TensorFlow1.15.0.</ns0:p><ns0:p>To maintain the same experiment environment as the contemporary method, the dataset was split into two parts: the first part with 80% of the records is used for the training set and the remaining 20% is used for the test set. All values are normalized within the range of 0 to 1. For the training process, 10% of the data of the training set was randomly dropped. For the testing dataset, we drop the data with different drop-rate between 10% and 90%, thus testing each method at a range of levels of missing data between 10% and 90%.</ns0:p></ns0:div> <ns0:div><ns0:head>Performance Analysis</ns0:head><ns0:p>The results with the KDD, Anser albifrons and Anser fabalis datasets at a missing value ratio of 10% appear in Table <ns0:ref type='table' target='#tab_6'>1</ns0:ref>. 
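For concreteness, the following is a small sketch of the masked evaluation protocol described above: a fraction p of the observed entries is hidden, the model imputes the series, and RMSE and MAE are computed only on those hidden entries, the only places where ground truth exists. The helper names and the model_impute placeholder are ours, and the per-window splitting used for each dataset is simplified away.

import numpy as np

def make_drop_mask(observed_mask, drop_rate, seed=0):
    # Hide a fraction of the *observed* entries; these become the evaluation targets.
    rng = np.random.default_rng(seed)
    return (rng.random(observed_mask.shape) < drop_rate) & (observed_mask == 1)

def masked_rmse_mae(x_true, x_imputed, eval_mask):
    # RMSE and MAE restricted to the artificially hidden entries.
    err = x_imputed[eval_mask] - x_true[eval_mask]
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))

# Typical sweep over missing rates, as reported in the tables:
# for p in np.arange(0.1, 1.0, 0.1):
#     eval_mask = make_drop_mask(M, p)
#     x_input = np.where(eval_mask, np.nan, X)        # feed the model extra gaps
#     rmse, mae = masked_rmse_mae(X, model_impute(x_input), eval_mask)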
Here, CGCNImp yields significantly lower errors than the other methods in terms of RMSE and MAE, demonstrating that our method is better than the other methods.</ns0:p><ns0:p>Generally, the higher the proportion of missing data, the more difficult it is to impute the missing values. To assess the frameworks with different levels of missing data, we then conduct the same experiment with the BRITS, GRUI, E2EGAN and CGCNImp , varying the ratios of missing values from 10% to 90% in steps of 10% as previously described in <ns0:ref type='bibr' target='#b31'>(Liu et al., 2022)</ns0:ref>. The results are shown in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref> and Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref>. Again, our methods return the smallest errors. </ns0:p></ns0:div> <ns0:div><ns0:head>Ablation study</ns0:head><ns0:p>An ablation study is designed to assess the contribution of the attribute causality discovery and the noise reduction and smoothness imputation. This comprised three tests: the first with no ablation; the second where we simply removed the noise reduction and smoothness module and set &#946; to 0 in Eq. 14; plus a third where we simply removed the noise reduction and smoothness module and set &#945; to 0 in in Eq. 14. All tests are conducted with a range of missing value ratios. Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref> and Table <ns0:ref type='table'>5</ns0:ref> show the results. What we found with the Anser bird migration data was that, at a missing rate lower than 40%, removing either the noise reduction and smoothness module or the neural Granger causality gives lower errors. However, at higher missing rates, the tests with both modules returned substantially lower errors. This verifies the contribution of both modules to the framework. With the KDD data, CGCNImp in full returned substantially lower errors, again supporting the contribution of both these modules. there is a strong Granger causal effect from variable i to variable j. It can be seen that the causal effect is strong along the diagonal of the matrix, which means that there are strong causal effects among different variables at the same location. Furthermore, there are also strong causal effects between different locations, such as variables 66 to 110.</ns0:p><ns0:p>There are 9 attributes in the bird migration dataset. Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> the edge indicate there are strong correlations between variables. Therefore, we kept the conflicting edge and placed them into the GCN network for better performance.</ns0:p></ns0:div> <ns0:div><ns0:head>CASE STUDY : Bird migration route analysis</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>6</ns0:ref>, Fig. <ns0:ref type='figure'>7</ns0:ref> and Fig. <ns0:ref type='figure'>8</ns0:ref> show the imputation results of Anser fabalis birds migration routes. What we can see is that the imputed data shows some important wild reserves not seen with the original data.</ns0:p><ns0:p>According to the list of wetlands of international importance in China, for example, Fig. <ns0:ref type='figure'>6 (b)</ns0:ref> shows the ground truth time series with missing values. This time, CGCNImp imputed the location of Binzhou Seashell Island and the Wetland National Nature Reserve not shown in Fig. <ns0:ref type='figure'>6 (a)</ns0:ref> showing that the bird migration trajectory could be recovered by our methods.</ns0:p><ns0:p>In Fig. 
<ns0:ref type='figure'>7</ns0:ref> (b), the CGCNImp method imputed the location of Wanfoshan Nature Reserve, which is not noticeable in the original data on its own (Fig. <ns0:ref type='figure'>7 (a)</ns0:ref>). Wanfoshan is now a national forest park, a national nature reserve, and a national geological park, which is an important location for bird migration. <ns0:ref type='bibr'>Likewise,</ns0:ref><ns0:ref type='bibr'>Fig. 8 (b)</ns0:ref> shows the imputed location of the Momoge National Nature Reserve <ns0:ref type='bibr' target='#b17'>(Cui et al., 2021)</ns0:ref> not showed in Fig. <ns0:ref type='figure'>8 (a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we present a novel imputation model, called CGCNImp, that is specifically designed to perform imputation of multivariate time series data. CGCNImp considers both attribute correlation and <ns0:ref type='table'>5</ns0:ref>. The ablation study MAE results of the CGCNImp method on the three datasets (lower is better). temporal auto-correlation dependencies. Correlation dependencies are captured through neural Granger causality and a GCN, while an attention-driven LSTM plus a time lag matrix capture the temporal dependencies and impute the missing values. Last, neighbors with similar values are used to smooth the time series and reduce noise. Imputation results show that CGCNImp achieves state-of-the-art performance when compared to previous methods. We will explore our model for missing-not-at-random data, and we will conduct a theoretical analysis of our model for missing values in further works. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The causal effect matrix of KDD dataset and bird migration dataset.The X axis indicates attributes. The Y axis also indicates attributes. The matrix indicates the causal effect between attributes.</ns0:figDesc><ns0:graphic coords='3,174.38,227.46,165.43,158.15' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The CGCNImp framework for multivariate time series missing value imputing.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.59,139.44' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Birds Migration Dataset collects migration trace data which comes from the project Strategic Priority Research Program of the Chinese Academy of Sciences. The dataset was collected hourly between 2017/12/30 to 2018/5/10 based on the species Anser fabalis and Anser albifrons. Each record contains13 attributes which are longitude, latitude, speed height, speed velocity, heading, temperature etc. The dataset is about 10% missing values. We select 10 common features comprising longitude, latitude, speed height, speed velocity, heading, temperature etc. for our experiments. We split this dataset into 5 minutes time series, and for every 5 minutes, we randomly drop p percent of the dataset, and then we impute these time series with different models and calculate the imputation accuracy by RMSE and MAE between original values and imputed values where p &#8712; {10</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Fig. 3 ,</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3 , Fig. 4 and Fig. 5 show the imputation results from the KDD datasets for the Tongzhou, Mentougou and Miyun districts, respectively. 
The blue dots are the ground truth time series and the red curve shows the imputed values. As illustrated, CGCNImp captures the evolution of the values and imputes the missing values quite well.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1 (a) illustrates the causal relationship graph with the KDD time series. In this data, there are 121 variables in total, corresponding to 11 different locations, each with 11 different variables. Different attributes for the same places are arranged in adjacent positions. A dark blue element (i, j) means that</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The ground true(blue) and the imputed values(red) in Tongzhou,Beijing of the KDD dataset.The X axis indicates time step. The Y axis indicates imputed values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The ground true(blue) and the imputed values(red) in Mentougou,Beijing of the KDD dataset.The X axis indicates time step. The Y axis indicates imputed values.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,141.73,63.78,413.57,163.07' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,141.73,263.91,413.59,122.38' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>uses the RNN structure to model the time series including the</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Unidirectional Uncorrelated Recurrent Imputation, Bidirectional Uncorrelated Recurrent Imputation , and</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)</ns0:cell><ns0:cell>3/17</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Time series X may contain missing values, and a binary mask vector R n&#215;d is introduced to indicate the missing positions, which is defined as:</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>18</ns0:cell><ns0:cell>&#63737;</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>/</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>&#63739;</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>/</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>/</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>Definition 2: Binary Mask Matrix.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The RMSE and MAE results of the CGCNImp and other methods on the three datasets (lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>10/17 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022)Manuscript to be reviewed</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The MAE results of the imputation methods on the three datasets with different missing rate (lower is better).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>(b) shows the Granger causal matrix derived from the neural Granger causality analysis. It should be noted that the Granger causality should be a one-way relationship, which means that, theoretically, we need to eliminate conflicting edges in the causal graph. However, in practice, the causal graph is derived from the neural Granger causality analysis and .1561 0.1721 0.1928 0.2120 0.2571 0.2980 0.3284 0.3625 0.3912 KDD GRUI 0.1493 0.1527 0.1702 0.1937 0.2098 0.2541 0.2824 0.3051 0.3361 E2EGAN 0.1336 0.1457 0.1601 0.1778 0.1926 0.2235 0.2574 0.2808 0.3031 The RMSE results of the imputation methods on the three datasets with different missing rate (lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>11/17</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The ablation study RMSE results of the CGCNImp method on the three datasets (lower is better).</ns0:figDesc><ns0:table /></ns0:figure> <ns0:note place='foot' n='17'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67238:2:0:NEW 7 Apr 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
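The evaluation protocol used throughout the CGCNImp experiments above — randomly hiding a fraction p of the observed entries of each series, running an imputation model, and scoring only the hidden entries against the held-back ground truth with RMSE and MAE — can be sketched in Python as follows. This is a minimal illustration under assumed conventions (a [time_steps, attributes] array, NaN marking missing entries, and a placeholder impute_fn standing in for any model); it is not taken from the CGCNImp implementation.

import numpy as np

def evaluate_imputation(x, impute_fn, drop_ratio=0.1, seed=0):
    """Hide `drop_ratio` of the observed entries of x, impute, and score RMSE/MAE on them."""
    rng = np.random.default_rng(seed)
    observed = ~np.isnan(x)                        # entries that actually have values
    drop = observed & (rng.random(x.shape) < drop_ratio)
    x_corrupted = x.copy()
    x_corrupted[drop] = np.nan                     # simulate the extra missingness
    x_imputed = impute_fn(x_corrupted)             # any imputation model goes here
    err = x_imputed[drop] - x[drop]
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))

def mean_impute(x):                                # naive baseline used only for this demo
    col_mean = np.nanmean(x, axis=0)
    out = x.copy()
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = col_mean[cols]
    return out

series = np.random.rand(60, 10)                    # e.g. one window with 10 attributes
print(evaluate_imputation(series, mean_impute, drop_ratio=0.3))

Sweeping drop_ratio from 0.1 to 0.9 and swapping mean_impute for BRITS, GRUI, E2EGAN, or CGCNImp reproduces the kind of missing-rate comparison reported in Tables 2 and 3.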
" Editor comments (Eibe Frank) MINOR REVISIONS Given the change in focus away from bird migration in environmental science, which I had recommended, I have not attempted to send out the submission for review again (which would almost certainly take much too long and yield little benefit), but instead read through it myself and left a number of recommended wording changes, comments, and questions in the attached PDF. Please view the PDF using Acrobat Reader so that all my annotations are shown as intended. If these remaining small problems are fixed, I will recommend publication. A: Thanks the Eibe’ comments. I have modify my paper according to the comments. "
Here is a paper. Please give your review comments after reading it.
415
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A morphological analyzer plays an essential role in identifying functional suffixes of Korean words. The analyzer input and output differ from each other in their length and strings, which can be dealt with by an encoder-decoder architecture. We adopt a Transformer architecture, which is an encoder-decoder architecture with self-attention rather than a recurrent connection, to implement a Korean morphological analyzer. Bidirectional Encoder Representations from Transformers (BERT) is one of the most popular pretrained representation models; it can present an encoded sequence of input words, considering contextual information. We initialize both the Transformer encoder and decoder with two types of Korean BERT, one of which is pretrained with a raw corpus, and the other is pretrained with a morphologically analyzed dataset. Therefore, implementing a Korean morphological analyzer based on Transformer is a fine-tuning process with a relatively small corpus. A series of experiments proved that parameter initialization using pretrained models can alleviate the chronic problem of a lack of training data and reduce the time required for training. In addition, we can determine the number of layers required for the encoder and decoder to optimize the performance of a Korean morphological analyzer.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Korean is an agglutinative language in which words consist of several morphemes, and some verb forms change when conjugated with functional suffixes. A Korean morphological analyzer (KMA) is designed to analyze a word and identify functional morphemes, which can specify the syntactic role of words in a sentence. Although end-to-end approaches are widely used in deep-learning models, some applications such as syntactic parsers require a KMA as a preprocessor to separate functional morphemes before parsing.</ns0:p><ns0:p>In many cases, the productive inflectional system in Korean causes deletion and contraction between a stem and the following morphemes when creating a Korean word. Therefore, a KMA should identify the base form of a morpheme by recovering deleted morphemes and decomposing contracted morphemes <ns0:ref type='bibr' target='#b5'>(Han and Palmer, 2004)</ns0:ref>. Therefore, a KMA output sequence differs from a raw input sequence in both length and surface form. Figure <ns0:ref type='figure'>1</ns0:ref> shows an example of KMA input and output. INPUT is a sequence of words separated by white spaces, and OUTPUT is a morphologically analyzed result. The output is a sequence of a single morpheme and the part-of-speech (POS) that follows the morpheme. The symbol '&lt;SP&gt;' indicates a word boundary. In this example, the input consists of three words, while the output consists of three words or seven morphemes, where some morphemes indicate grammatical relationships in a sentence.</ns0:p><ns0:p>Because the input and output lengths are different, Korean morphological analysis can be defined as an encoder-decoder problem. A raw input sequence is encoded and then decoded into a morphologically analyzed sequence. An encoder-decoder problem can be easily implemented by adopting two recurrent neural networks. 
Recent research in deep learning has proposed a new architecture, Transformer <ns0:ref type='bibr' target='#b15'>(Vaswani et al., 2017)</ns0:ref>, for encoder-decoder problems that can increase the parallelism of learning processes by eliminating recurrent connections. Transformer calculates self-attention scores that can cross-reference between every input. Therefore, we adopt the Transformer architecture to implement a KMA in this work.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed Figure <ns0:ref type='figure'>1</ns0:ref>. The output of a morphological analyzer for the example input sentence 'I lost a black galaxy note.' (The Yale Romanized system is used to transcribe Korean sentences. 'VA' is an adjective, and 'VV' is a verb. 'ETM' is an adnominal ending that attaches to the end of a verb or an adjective. 'NNP' is a proper noun, and 'JKO' is a particle that attaches to the end of nouns indicating an objective case. 'EP' is a verbal ending in the past tense, and 'EF' is a verbal ending to make a sentence declarative.)</ns0:p><ns0:p>To train a KMA based on Transformer from scratch, we need a considerable parallel corpus that includes raw input sentences and their analyzed results.</ns0:p><ns0:p>Since the introduction of pretrained language representation models such as Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b3'>(Devlin et al., 2019)</ns0:ref>, most natural language processing (NLP) applications have been developed based on pretrained models. Pretrained models provide contextdependent embeddings of an input sequence and reduce both the chronic problem of a lack of training data and the time required for training.</ns0:p><ns0:p>In this work, we utilize two types of BERT models to initialize Transformer, the backbone of a KMA.</ns0:p><ns0:p>One is pretrained with Korean raw sentences and the other with morphologically analyzed sentences consisting of morphemes and POS tags. For the sake of clarity, we name the former 'word-based BERT' (wBERT) and the latter 'morpheme-based BERT' (mBERT); mBERT can encode a morphologically analyzed sequence into embedding vectors in the same way wBERT can encode a raw sentence. We initialized the Transformer encoder with wBERT and the Transformer decoder with mBERT.</ns0:p><ns0:p>While it is reasonable to initialize a Transformer encoder with BERT, it may seem unusual to initialize a decoder with BERT. We do not have decoder-based models like GPT <ns0:ref type='bibr' target='#b14'>(Radford et al., 2018)</ns0:ref> pretrained with Korean morphologically analyzed data; instead, only mBERT is pretrained with Korean morphologically analyzed data. Recently, <ns0:ref type='bibr' target='#b0'>Chi et al. (2020)</ns0:ref> demonstrated that initializing an encoder and decoder with XLM <ns0:ref type='bibr' target='#b10'>(Lample and Conneau, 2019)</ns0:ref> produced better results than initializing them with random values.</ns0:p><ns0:p>XLM is a pretrained model for cross-lingual tasks and is implemented based on a Transformer encoder. Therefore, both an encoder and decoder of a KMA can be expected to benefit from initializing parameters with pretrained models. 
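As a concrete illustration of the idea just described — warm-starting both halves of the Transformer from pretrained checkpoints — the sketch below assembles an encoder-decoder model from two BERT checkpoints with the HuggingFace Transformers library. The local paths ./wBERT and ./mBERT are placeholders for the two Korean models, and the snippet is an assumed recipe rather than the authors' implementation; the cross-attention weights are newly (randomly) initialized by the library, which matches the setup described later in the experiments.

from transformers import EncoderDecoderModel, BertTokenizerFast

ENCODER_CKPT = "./wBERT"   # placeholder path: word-based Korean BERT
DECODER_CKPT = "./mBERT"   # placeholder path: morpheme-based Korean BERT

# Encoder weights come from wBERT, decoder weights from mBERT;
# cross-attention between them starts from random values.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(ENCODER_CKPT, DECODER_CKPT)

src_tokenizer = BertTokenizerFast.from_pretrained(ENCODER_CKPT)   # raw-sentence side
tgt_tokenizer = BertTokenizerFast.from_pretrained(DECODER_CKPT)   # morpheme/POS side

# Generation needs explicit start and pad ids; reusing the decoder's [CLS] and [PAD]
# tokens is a common convention here, not something prescribed by the paper.
model.config.decoder_start_token_id = tgt_tokenizer.cls_token_id
model.config.pad_token_id = tgt_tokenizer.pad_token_id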
When Transformer is initialized with pretrained models, implementing a KMA based on the Transformer is a fine-tuning process that can be done with a relatively small corpus.</ns0:p><ns0:p>In addition, employing the fine-tuning process is easier and faster than building a KMA from scratch.</ns0:p><ns0:p>Pretrained models such as BERT generally have 12-24 layers, which is deeper than conventional models. A few studies <ns0:ref type='bibr' target='#b2'>(Clark et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b6'>Jawahar et al., 2019)</ns0:ref> have examined which layers of pretrained models are best suited for which tasks. It has been established that tasks dealing with words and surface forms of a sentence generally perform well on the lower layers rather than on the top layer. In this work, we also investigate the number of layers in a Transformer architecture that obtains the best accuracy for Korean morphological analysis.</ns0:p><ns0:p>Our contributions to achieving high-performance Korean morphological analysis are the following:</ns0:p><ns0:p>1. Because we leverage pretrained Korean language representation models to initialize the encoder and decoder of a morphological analyzer, we can train the morphological analyzer faster and with less training data.</ns0:p><ns0:p>2. We find the most appropriate number of layers in the BERT models for a KMA rather than using all layers in the models.</ns0:p><ns0:p>In the following section, we first explore related studies, and then we present the main architecture of a Korean morphological analyzer in Section 3. Experimental results are described in Section 4, followed by the conclusion in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>2/10</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>SURVEY OF KOREAN MORPHOLOGICAL ANALYZERS</ns0:head><ns0:p>Traditional Korean morphological analysis consists of two pipeline stages: the first step is to separate morphemes from a word and convert them into their stems, and the second step is to assign them POS tags.</ns0:p><ns0:p>Recently, a deep learning-based end-to-end approach has been applied to many applications, including KMAs. A sequence-to-sequence architecture based on recurrent neural networks is most often used to implement a KMA in full end-to-end style. Using this architecture, morphological analyzers can be easily implemented without complicated feature engineering or manually built lexicons. Conventional morphological analysis models suffer from the out-of-vocabulary (OOV) problem. To mitigate this problem, the following models adopted syllable-based sequence-to-sequence architecture. <ns0:ref type='bibr' target='#b11'>Li et al. (2017)</ns0:ref> adopted gated recurrent unit networks to implement a KMA with a syllable-based sequence-to-sequence architecture. In addition, an attention mechanism <ns0:ref type='bibr' target='#b12'>(Luong et al., 2015)</ns0:ref> has been introduced to calculate the information needed by a decoder to ensure the model performs well. <ns0:ref type='bibr' target='#b8'>Jung et al. (2018)</ns0:ref> also used syllable-level input and output for a KMA to alleviate the problem of an unseen word. Even with syllable-level input and output, the model tends not to generate characters that rarely occur in a training corpus. 
Therefore, they supplemented the model with a copy mechanism <ns0:ref type='bibr' target='#b4'>(Gu et al., 2016)</ns0:ref> that copies rare characters to output sequences. A copy mechanism assigns higher probabilities to rare or OOV words to perform better sequence generation during decoding phases. They reported that the accuracy of the KMA improved from 95.92% to 97.08% when adopting input feeding and the copy mechanism. <ns0:ref type='bibr' target='#b1'>Choe et al. (2020)</ns0:ref> proposed a KMA specially designed to analyze Internet text data with several spacing errors and OOV inputs. To handle newly coined words, acronyms, and abbreviations often used in online discourse, they used syllable-based embeddings, syllable bigrams, and graphemes as input features.</ns0:p><ns0:p>The model performed better when the dataset was collected from the Internet.</ns0:p><ns0:p>Since the introduction of BERT <ns0:ref type='bibr' target='#b3'>(Devlin et al., 2019)</ns0:ref> in the field of NLP research, pre-trainingthen-fine-tuning approaches have become prevalent in most NLP applications. In addition, notable improvements have been reported in several studies that have adopted the pre-training-then-fine-tuning framework. However, due to the distinct characteristics of Korean complex morphology systems, previous KMA studies have not adopted large-scale pretrained models such as BERT. <ns0:ref type='bibr' target='#b11'>Li et al. (2017)</ns0:ref> used by wBERT and mBERT. A set of WordPiece that both wBERT and mBERT use does not include a word-separator token to indicate word boundaries. However, when the decoder generates a sequence of morpheme tokens, the word separator token '&lt;SP&gt;' must be specified between morpheme tokens to recover word-level results, as shown in the output of Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Therefore, we adopt a multi-task learning approach to generate morphological analysis results while also inserting word-separators between the results. On the final layer of the decoder, there is an additional binary classifier that can discern whether a word-separator is needed for each output token. The final layer of the decoder produces two types of output, as shown in Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>, which are combined to generate a final morphological analyzed sequence.</ns0:p><ns0:p>Given a raw input sentence, the tokenizer of wBERT splits words into multiple sub-word tokens. A token input to wBERT consists of a vector summation of a token embedding (w), a positional embedding (p), and a segment embedding (E A ), as shown in Equation <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>x i = w i + p i + E A (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>where i is a token index of a sentence. The input of the encoder is denoted by X = {x 1 , x 2 , .., x n }. The encoder of Transformer encodes X into a sequence of contextualized embedding vectors Z = {z 1 , z 2 , ..., z n }.</ns0:p><ns0:p>Let us denote a sequence of hidden states in the decoder by H = {h 1 , h 2 , ..., h m }. As this work adopts a multi-task approach, the decoder simultaneously produces two kinds of output. The output of morpheme tokens is denoted by Y = {y 1 , y 2 , ..., y m } and the output of word separators is denoted by S = {s 1 , s 2 , ..., s m }, where s i &#8712; {0, 1}. 
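Before the formal definitions in Equations 2-4 that follow, the two output streams Y and S can be pictured with a short PyTorch sketch: one linear head (playing the role of W_G) scores the morpheme/POS vocabulary and a second head (W_S) makes the binary word-separator decision from the same decoder hidden states, and the training loss adds the two terms. The sizes follow the mBERT hyperparameters reported in Table 2 of this paper, the separator loss is treated here as ordinary two-class cross-entropy, and the module is an illustrative reading of the equations rather than the authors' code.

import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    def __init__(self, hidden_size=768, vocab_size=30349):
        super().__init__()
        self.gen_head = nn.Linear(hidden_size, vocab_size)   # W_G: morpheme-token logits
        self.sep_head = nn.Linear(hidden_size, 2)             # W_S: word-separator yes/no

    def forward(self, decoder_hidden):                         # [batch, m, hidden_size]
        return self.gen_head(decoder_hidden), self.sep_head(decoder_hidden)

def multitask_loss(gen_logits, sep_logits, gold_tokens, gold_separators):
    """L = L_Gen + L_Sep, both taken over the m decoder time steps."""
    ce = nn.CrossEntropyLoss()
    l_gen = ce(gen_logits.transpose(1, 2), gold_tokens)        # targets: [batch, m] token ids
    l_sep = ce(sep_logits.transpose(1, 2), gold_separators)    # targets: [batch, m] in {0, 1}
    return l_gen + l_sep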
At time t of the decoding phase, an output morpheme and a word-separator are determined by Equation 2 and 3, respectively.</ns0:p><ns0:formula xml:id='formula_2'>P G (y t |X) = P(y t |Z,Y &lt; t) = so f tmax(W T G h t )<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>P S (s t |X) = P(s t |Z,Y &lt; t) = so f tmax(W T S h t )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where W G and W S are learnable parameters for generating both outputs. The objective function L of the model is in Equation 4 Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>L Gen = &#8722; m &#8721; t=1 log P G (y t |X) L Sep = &#8722; m &#8721; t=1 (log P S (s t |X) &#8722; log(1 &#8722; P S (s t |X))) L = L Gen + L Sep</ns0:formula><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS Datasets and Experimental Setup</ns0:head><ns0:p>We used 90,000 sentences for training, 1,000 sentences for validation, and 10,000 sentences for evaluation in this work. They were all collected from the POS-tagged corpus published by the 21st Century Sejong Project <ns0:ref type='bibr' target='#b9'>(Kim, 2006)</ns0:ref>. The sentence lengths were all less than 100 words and 46 POS labels were used in the Sejong corpus. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> shows the hyperparameters of the encoder and decoder, and Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> shows the hyperparameters</ns0:p></ns0:div> <ns0:div><ns0:head>Corpus</ns0:head><ns0:p>for training the model.</ns0:p><ns0:p>The following experiments were designed to find the combination of the numbers of encoder and decoder layers that achieved the best KMA performance. First, we initialized the Transformer encoder and decoder with wBERT and mBERT, respectively, including embeddings of the WordPiece tokens.</ns0:p><ns0:p>The cross attentions between the Transformer encoder and decoder were initialized with random values.</ns0:p><ns0:p>Because the encoder and decoder each had 12 layers, we compared the KMA performances by performing 12 x 12 combinations of encoder and decoder layers. When we adopted fewer than 12 layers of the encoder and decoder, we used the parameters of the corresponding layers from the bottom of wBERT and mBERT. The remaining parameters, such as W G and W S in Equations 2 and 3, were randomly initialized.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Evaluation</ns0:head><ns0:p>The BERT base model has 12 layers. <ns0:ref type='bibr' target='#b6'>Jawahar et al. (2019)</ns0:ref> reported that tasks dealing with surface information performed best in the third and fourth layers of BERT, while tasks related to semantic information performed best in the seventh layer and above.</ns0:p><ns0:p>First, we wanted to find the optimal number of encoder and decoder layers to achieve the best morphological analysis performance while reducing the number of parameters to be estimated. In the first experiment, the encoder and decoder were initialized with wBERT and mBERT, respectively.</ns0:p><ns0:p>Then we compared the KMA performance according to the number of layers. The overall results of 12</ns0:p><ns0:p>x 12 combinations of encoder and decoder layers are shown in Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>. The accuracy improved as the number of encoder layers increased, while it remained nearly the same when the number of decoder layers increased. 
We claim that KMA achieves the best performance with 12 encoder layers.</ns0:p><ns0:p>In the second experiment, we examined the effect of the number of decoder layers on the KMA performance. Based on the results of the first experiment, we set the number of encoder layers at 12 and initialized it with wBERT. Then, we initialized the decoder with mBERT and compared the KMA Manuscript to be reviewed</ns0:p><ns0:p>Computer Science performance while varying the number of decoder layers from 3 to 12. Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> shows the KMA accuracy according to the number of decoder layers. To get definitive results, we obtained the accuracies by averaging three trials for each row in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. Although there seems to be little difference in KMA accuracy according to the number of decoder layers, the KMA performed best with four decoder layers.</ns0:p><ns0:p>Surprisingly, the number of parameters in the decoder can be reduced without deteriorating the KMA performance. Table <ns0:ref type='table'>5</ns0:ref>. The comparison of accuracies according to the number of layers and initialization in the encoder and the decoder.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of layers in the</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> summarizes the experimental results to evaluate the impact of initializing wBERT and mBERT.</ns0:p><ns0:p>In all subsequent experiments, the encoder had 12 layers and the decoder had four. The KMA significantly improved when the encoder was initialized with wBERT, increasing the F1 score by as much as 2.93.</ns0:p><ns0:p>There was a slight performance improvement when the decoder was initialized with mBERT. The KMA with the four-layer decoder outperformed the KMA with the full-layer decoder. In addition, we discovered that the word separator could easily exceed 99% accuracy by adopting multi-task learning. We obtained approximately 93.58% accuracy in word separation when adopting an independent word separator. Table <ns0:ref type='table'>9</ns0:ref> in the appendix provides the accuracy comparisons between the multi-task separator and the independent separator.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>6</ns0:ref> presents the robustness of the KMA to the lengths of the input sentences. The KMA with 12 encoder layers and four decoder layers outperformed the other combinations of models in evaluating longer inputs.</ns0:p><ns0:p>To show the effect of initializing parameters with pretrained models more clearly, we performed the following experiments. We measured changes in the accuracy of the models as the size of the training dataset decreased from 100% to 10%. The results are shown in Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref>. shows that the KMA can be trained robustly and efficiently when its parameters are initialized with the pretrained models wBERT and mBERT. Parameter initialization with pretrained models helps successfully build deep learning models.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_6'>8</ns0:ref> shows a comparison of the F1 scores with those of previous approaches that used sequenceto-sequence architectures. We reimplemented the models from the previous approaches and compared their results directly to those of our model in the same environments. 
The KMA proposed in this study demonstrates competitive end-to-end performance without any additional knowledge or mechanisms.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this work, we suggested adopting the Transformer architecture to implement a KMA. Furthermore, we proposed using two Korean BERTs to initialize the parameters of the Transformer encoder and decoder.</ns0:p><ns0:p>We introduced a multi-task learning approach to specify word boundaries in an output sequence of morpheme tokens. The KMA achieved its best performance when initialized with two types of Korean BERT. In addition, we observed that the accuracy of the KMA was highest when it had four layers in the decoder and 12 layers in the encoder. To conclude this work, we proved that appropriate parameter initialization can help ensure stable, fast training, and good performance of deep-learning models.</ns0:p></ns0:div> <ns0:div><ns0:head>8/10</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed F1 score Models (re-implementation) Sequence-to-sequence (syllable-basis) <ns0:ref type='bibr' target='#b11'>(Li et al., 2017)</ns0:ref> 97.15 (96.58) Sequence-to-sequence (syllable-basis) + input feeding + copy mechanism <ns0:ref type='bibr' target='#b8'>(Jung et al., 2018)</ns0:ref> 97.08 (96.67) </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 2. Transformer: the basis architecture of a Korean morphological analyzer Figure 2 illustrates the basic architecture of a KMA. Our model is implemented based on Transformer, which consists of an encoder and a decoder. The encoder, the inputs of which are raw sentences, is initialized with wBERT and the decoder, the outputs of which are morphologically analyzed sentences, is initialized with mBERT, as shown in Figure 3. A training dataset consists of pairs of raw input sentences with their morphological analyzed sequences.</ns0:figDesc><ns0:graphic coords='4,178.44,525.19,340.24,112.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 provides an example to clearly understand the model. The raw input sentence has three words, which are split into eight WordPiece tokens (shown in INPUT(X)) for the input of the encoder.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The input and the output of the Korean morphological analyzer for 'I lost a black galaxy note.'</ns0:figDesc><ns0:graphic coords='6,141.73,63.71,413.47,121.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of performances according to the number of encoder and decoder layers. The X-axis shows the number of encoder layers, and the Y-axis shows the number of decoder layers.</ns0:figDesc><ns0:graphic coords='7,206.79,398.30,283.48,188.99' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Initializing the model with wBERT and mBERT degraded the accuracy by less than 0.5%, even when only half of the training dataset was used to train the model. 
For the model initialized with wBERT alone, the accuracy deteriorated by more than 0.6%, while the accuracy of the model initialized with random values decreased by more than 3%. The most surprising result is that the model initialized with wBERT and mBERT achieved 95.23% accuracy when trained with only 10% of the training dataset. This result was approximately the same as when the model was trained with the complete training dataset after being initialized with random values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 depicts the training loss curves of the models with different initialization values depending on the number of training steps. The X-axis shows the number of training steps, while the Y-axis shows the loss values of the training steps. At the beginning of the training phase, the losses of the model initialized with random values decreased very sharply. However, as the number of training steps increased, the loss of the model using wBERT and mBERT as parameter initializers dropped faster than the other models. This</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Comparison of loss values according to the number of training steps.</ns0:figDesc><ns0:graphic coords='10,206.79,64.10,283.69,216.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The statistics of the dataset.Table1describes the average number and the maximum number of WordPiece tokens in the sentences. We adopted both wBERT and mBERT, released by the Electronics and Telecommunications Research Institute (https://aiopen.etri.re.kr). 
They were pretrained with approximately 23 GB of data from newspapers and Wikipedia.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Number of</ns0:cell><ns0:cell cols='2'>Average number</ns0:cell><ns0:cell>Maximum number</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>sentences</ns0:cell><ns0:cell /><ns0:cell>of tokens</ns0:cell><ns0:cell>of tokens</ns0:cell></ns0:row><ns0:row><ns0:cell>Train</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>90,000</ns0:cell><ns0:cell /><ns0:cell>34.80 39.22</ns0:cell><ns0:cell>220 185</ns0:cell></ns0:row><ns0:row><ns0:cell>Validation</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>1,000</ns0:cell><ns0:cell /><ns0:cell>40.61 45.16</ns0:cell><ns0:cell>120 126</ns0:cell></ns0:row><ns0:row><ns0:cell>Evaluation</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>10,000</ns0:cell><ns0:cell /><ns0:cell>34.06 38.55</ns0:cell><ns0:cell>126 132</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Hyperparameters</ns0:cell><ns0:cell /><ns0:cell cols='2'>Encoder Decoder</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>(wBERT) (mBERT)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Number of layers</ns0:cell><ns0:cell /><ns0:cell>1-12</ns0:cell><ns0:cell>1-12</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Hidden dimension</ns0:cell><ns0:cell /><ns0:cell>768</ns0:cell><ns0:cell>768</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Intermediate dimension</ns0:cell><ns0:cell>3,072</ns0:cell><ns0:cell>3,072</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Number of attention heads</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Activation function</ns0:cell><ns0:cell /><ns0:cell>Gelu</ns0:cell><ns0:cell>Gelu</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Dropout</ns0:cell><ns0:cell /><ns0:cell>0.1</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Maximum of input length</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Vocabulary size</ns0:cell><ns0:cell /><ns0:cell>30,797</ns0:cell><ns0:cell>30,349</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The hyperparameters of the wBERT and mBERT.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate(encoder, decoder)</ns0:cell><ns0:cell>5e-3, 1e-3</ns0:cell></ns0:row><ns0:row><ns0:cell>Beta1, Beta2</ns0:cell><ns0:cell>0.99, 0.998</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum number of training steps</ns0:cell><ns0:cell>100,000</ns0:cell></ns0:row></ns0:table><ns0:note>5/10PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The hyperparameters for training the model.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of accuracies according to the number of decoder layers.Through these two experiments, we arrived at the preliminary conclusion that KMA performs best when the decoder has only four layers and the encoder has the full number of layers.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Morpheme-level</ns0:cell><ns0:cell cols='2'>Separator</ns0:cell><ns0:cell>Sentence-level</ns0:cell><ns0:cell>Number of</ns0:cell></ns0:row><ns0:row><ns0:cell>Decoder</ns0:cell><ns0:cell cols='2'>F1 score</ns0:cell><ns0:cell cols='2'>accuracy</ns0:cell><ns0:cell>accuracy</ns0:cell><ns0:cell>parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='2'>97.86</ns0:cell><ns0:cell>99.64</ns0:cell><ns0:cell /><ns0:cell>59.48</ns0:cell><ns0:cell>184M</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>98.31</ns0:cell><ns0:cell>99.78</ns0:cell><ns0:cell /><ns0:cell>66.45</ns0:cell><ns0:cell>193M</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='2'>97.99</ns0:cell><ns0:cell>99.67</ns0:cell><ns0:cell /><ns0:cell>60.88</ns0:cell><ns0:cell>203M</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell cols='2'>98.01</ns0:cell><ns0:cell>99.70</ns0:cell><ns0:cell /><ns0:cell>61.33</ns0:cell><ns0:cell>212M</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='2'>98.04</ns0:cell><ns0:cell>99.71</ns0:cell><ns0:cell /><ns0:cell>61.53</ns0:cell><ns0:cell>222M</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell cols='2'>98.07</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.22</ns0:cell><ns0:cell>231M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='2'>98.07</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.07</ns0:cell><ns0:cell>241M</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='2'>98.09</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.46</ns0:cell><ns0:cell>250M</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='2'>98.14</ns0:cell><ns0:cell>99.74</ns0:cell><ns0:cell /><ns0:cell>63.16</ns0:cell><ns0:cell>260M</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='2'>98.29</ns0:cell><ns0:cell>99.73</ns0:cell><ns0:cell /><ns0:cell>64.42</ns0:cell><ns0:cell>269M</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Number of layers &amp; Initialization in the encoder and decoder</ns0:cell><ns0:cell cols='2'>Morpheme-level F1 score</ns0:cell><ns0:cell cols='2'>Separator accuracy</ns0:cell><ns0:cell>Sentence-level accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-random Decoder-4-random</ns0:cell><ns0:cell cols='2'>95.32</ns0:cell><ns0:cell /><ns0:cell>98.84</ns0:cell><ns0:cell>40.49</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT Decoder-4-random</ns0:cell><ns0:cell cols='2'>98.25</ns0:cell><ns0:cell /><ns0:cell>99.79</ns0:cell><ns0:cell>63.91</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT Decoder-4-mBERT</ns0:cell><ns0:cell cols='2'>98.31</ns0:cell><ns0:cell /><ns0:cell>99.78</ns0:cell><ns0:cell>66.45</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT 
Decoder-12-mBERT</ns0:cell><ns0:cell cols='2'>98.29</ns0:cell><ns0:cell /><ns0:cell>99.73</ns0:cell><ns0:cell>64.42</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of accuracies according to the input length and number of layers and initialization in the encoder and the decoder.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>7/10PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison of F1 scores according to the size of the training corpus.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of results of previous approaches to those of the proposed model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sequence-to-sequence Syllable + grapheme + bigram embeddings (Choe et al., 2020)</ns0:cell><ns0:cell>97.93 (96.74)</ns0:cell></ns0:row><ns0:row><ns0:cell>Our model Transformer + wBERT + mBERT</ns0:cell><ns0:cell>98.31</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='10'>/10 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:1:0:NEW 9 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
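The layer-count experiments in the paper above reuse only the parameters of the bottom layers of wBERT and mBERT when fewer than 12 layers are kept. One way to obtain such a truncated model with a HuggingFace-style BertModel is sketched below; the checkpoint path is a placeholder and the recipe is an assumption of this sketch, not the authors' code.

from transformers import BertModel

def keep_bottom_layers(checkpoint_path, k):
    """Load a pretrained BERT and keep only its bottom k Transformer blocks."""
    model = BertModel.from_pretrained(checkpoint_path)
    model.encoder.layer = model.encoder.layer[:k]    # blocks live in an nn.ModuleList
    model.config.num_hidden_layers = k
    return model

decoder_side_4 = keep_bottom_layers("./mBERT", 4)    # e.g. the 4-layer decoder setting of Table 4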
"Editor comments (Imran Ashraf) MAJOR REVISIONS The authors are advised to make Major Revisions to the manuscript, as per the given comments from the reviewers, and resubmit the paper. [# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question, then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #] Thank you for your considerate advice for our paper. We have renumbered the reviewers’ comments to clearly identify the comments for each reviewer. All comments have been addressed and the paper has been revised in response to them. Reviewer 1 (Huaping Zhang) Basic reporting In this paper, a Korean morphological analyzer (KMA) is designed to output a sequence of a single morpheme and its Part-of-speech (POS) that follows the morpheme. As the lengths of the input and output are different, Korean morphological analysis can be defined as an encoder-decoder problem. In this work, the author adopt the Transformer architecture to implement a KMA. The author's contributions are as follows: (1) In this work, the author utilized two kinds of BERT models to initialize Transformer. The wBERT is pre-trained with Korean raw sentences and the mBERT is pre-trained with morphologically analyzed sentences consisting of morphemes and POS tags. And the author initialized an encoder of Transformer with wBERT and a decoder of Transformer with mBERT. (2) The author found the optimal number of layers for the encoder and decoder to achieve the best performance of KMA. (3) The author summarized the experimental results that evaluate the impact of initializing wBERT and mBERT. (4) The author measured changes in accuracy in the models as the size of training dataset decreased from 100% to 10%. For (1)~(4): Thank you for your complete understanding of our contributions. Experimental design Strengths and Weaknesses (5) In the experiment of finding the optimal number of layers for the decoder, the encoder is initialized with wBERT while the decoder is randomly initialized. But there seems to be no reason to do so, so I think the author should explain the specific reasons for it. (6) In the experiment of finding the optimal number of layers for the encoder, the author fixed the number of layers in the decoder at 4. So the conclusion of this experiment is based on the premise that decoder has 4 layers, and it doesn't mean that the optimal number of layers is that(maybe when the decoder has other layers). For comments (5) & (6): I agreed with your considerate comments. We redesigned and performed the experiments to take into your opinion. First, the encoder and the decoder of Transformer are initialized with wBERT and mBERT, respectively. Then we compared 12 x 12 combinations of the layers of the encoder and the decoder to find the optimal layers of KMA. We added the following in the section of EXPERIMENTS of the revised version. (7) In table 7, when wBERT and mBERT are used to initialize the model and the data set is 10%, the data in this cell should be 95.23% instead of 98.23%. Yes, you’re right. We corrected it. 
Thanks… (8) In table 8, the comparison of F1 values in different models is meaningless, because they used different test datasets. Yes, you’re right. We re-implemented the previous approaches to obtain the meaningful comparison results by using the same test datasets. The results are provided in Table 8 in the revised version. Validity of the findings Innovation (9) This paper is the first attempt to initialize both an encoder and decoder for Transformer with two kinds of Korean BERT. Yes, That’s right. I appreciate your thoughtful and considerate comments on this study. Thanks to your comments, we could revise this work more completely. Reviewer 2 (Qiuli Qin) Basic reporting The structure of the overall article is very logical, but there are some problems in the details. (1) In the relevant work, the analysis of the relevant research on Korean at this stage is not very clear, and there is no clear analysis of the research deficiencies at this stage and the innovation of this research. It is necessary to explain the enlightenment or reference of the past literature research to this research. I agreed with your considerate comments. The related studies in the original version do not appear to be relevant to our work. So we added the following at the end of Section Survey in the revised version. (2) The paper does not explain how many words the longest text and the shortest text of the experimental data are respectively. It is necessary to further explain whether the algorithm is suitable for long text or short text. According to the experimental data set in the paper, it seems that it is only suitable for the training of short text, so we need to focus on this aspect again. We added a new Table 6 in the revised version to show the robustness of our KMA to the variations of input lengths. Experimental design (3) The specific process description of the experiment is a little simple, such as pretreatment, model experiment process, etc., which should be connected with the subsequent evaluation. I agreed with your considerate comments. The description of the experimental setup was somewhat obscure. We added the following in the section of EXPERIMENTS of the revised version. (4) In the introduction section, in the second paragraph, in order to verify the problem of 'the output sequence of a KMA differs from a raw input sequence in both length and surface form.', I think that not all sentences are the same problem, please add which one kind of sentence has this problem. Korean is an agglutinative language with a very productive inflectional system. In many cases, in the creation of an inflected word, a syllable or a character is deleted from the stem and/or a morphological contraction occurs between the stem and the inflection. We added the following for helping readers understand Korean better into the second paragraph in the Introduction section. (5) In the section of SURVEY ON KOREAN MORPHOLOGICAL ANALYZER, it only proves the rationality of the transform structure, but does not prove the rationality of 'initialize both an encoder and decoder for Transformer with two kinds of Korean BERT.', please add relevant explanations. I think this comment is almost the same as the comment (1). So, please refer to the response of the comment (1). Validity of the findings (6) The main contribution of this article is 'train the morphological analyzer faster and with less training data'. In Table 7, the size of the data set is divided into 10%, 30%, 50%, 70%, 100%. 
The F1 score first drops and then rises, but the F1 value of the data set at 100% is significantly better than the F1 value of 10%, please Explain why the model proposed in this paper requires a smaller amount of data set. As you said, the more training dataset, the better performance we can expect. However, what we want to say in Table 7 is that our approach (12-encoder-wBERT & 4-decodermBERT) can work better even with small training dataset than other combinations (number of layers and initialization). (7) For the main contribution 2 'find the most appropriate number of layers in the BERT models for a Korean morphological analyzer'&' we observed that the accuracy of the Korean morphological analyzer is highest when it has four layers in the decoder and 12 layers in the encoder.' Compare the experiment, whether the data set in other fields is applicable to the results of this article. We do not think the optimal combination of the number of layers of KMA, which we found in this study, is applicable to other fields of NLP or to other languages. A few studies (Clark et al., 2019; Jawahar et al., 2019) have examined which layers of pre-trained models are best suited for which tasks. Each task has a different layer that works best. It has been established that tasks dealing with words and surface forms of a sentence generally perform well on the lower layers rather than on the top layer. Thanks to your incisive opinion, we have a plan to compare and find the optimal layers of pretrained models for other Korean NLP applications, like syntactic parsers or sentiment analysis, in the near future. Reviewer 3 (Anonymous) Basic reporting (1) some details are missing in this version. the reviewer believes that the introduction is overlength. Yes, I agreed with your considerate comments. We revised the sections of INTRODUCTION and EXPERIMENTS in the revised version. The revised version of EXPERIMENTS is explained in the response of comment (4). Experimental design (2) some mismatched data in the experiment section. Yes, you’re right. The number (95.23%) (line 219) was mismatched with that (98.23%) of in Tab.7 We corrected it. Thanks… Validity of the findings (3) the proposed model is worked in a certain dataset. more experiments are required to validate the proposed model. We do not think the optimal combination of the number of layers of KMA, which we found in this study, is applicable to other fields of NLP or to other languages. A few studies (Clark et al., 2019; Jawahar et al., 2019) have examined which layers of pre-trained models are best suited for which tasks. Each task has a different layer that works best. It has been established that tasks dealing with words and surface forms of a sentence generally perform well on the lower layers rather than on the top layer. Thanks to your incisive opinion, we have a plan to compare and find the optimal layers of pretrained models for other Korean NLP applications, like syntactic parsers or sentiment analysis, in the near future. Additional comments (4) the author should clarify how the encoder and decoder are initialized with the pre-trained BERT model. I agreed with your considerate comments. The description of the experimental setups was somewhat obscure. First, the encoder and the decoder of Transformer are initialized with wBERT and mBERT, respectively. Then we compared 12 x 12 combinations of the layers of the encoder and the decoder to find the optimal layers of KMA. We added the following in the section of EXPERIMENTS of the revised version. 
(5) it is really necessary to define the classification between the word separator token and morpheme tokens as a multi-task learning? The authors should provide sufficient rationale. (6) more experiments should be conducted to validate the multi-task learning in this work. For (5) and (6): Yes, you’re right. Word boundaries are very important in Korean because they can affect the meaning of a sentence. Therefore, Classifying word separator token is not a complex problem. Therefore, we may build a separate module to add word separator tokens into the results of KMA. However, we want to make an endto-end KMA module, so we adopted a multi-tasking learning. We added the following in the revised version. We performed the additional experiments to take into your considerate comments. However, the multi-tasking issue is not main concern of this study. Therefore, we provide the results in Table 9 in the appendix. (7) In section experiment, the data (line 180) is mismatched with that of in Tab.7. Yes, you’re right. The number (95.23%) (line 219 in the revised version) was mismatched with that (98.23%) of in Tab.7 We corrected it. Thanks… (8) more details should be provided to improve the readability of this work, such as the initialization. I think this comment is the same as comment (4). So, please refer to the response of the comment (4). "
Here is a paper. Please give your review comments after reading it.
416
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>A morphological analyzer plays an essential role in identifying functional suffixes of Korean words. The analyzer input and output differ from each other in their length and strings, which can be dealt with by an encoder-decoder architecture. We adopt a Transformer architecture, which is an encoder-decoder architecture with self-attention rather than a recurrent connection, to implement a Korean morphological analyzer. Bidirectional Encoder Representations from Transformers (BERT) is one of the most popular pretrained representation models; it can present an encoded sequence of input words, considering contextual information. We initialize both the Transformer encoder and decoder with two types of Korean BERT, one of which is pretrained with a raw corpus, and the other is pretrained with a morphologically analyzed dataset. Therefore, implementing a Korean morphological analyzer based on Transformer is a fine-tuning process with a relatively small corpus. A series of experiments proved that parameter initialization using pretrained models can alleviate the chronic problem of a lack of training data and reduce the time required for training. In addition, we can determine the number of layers required for the encoder and decoder to optimize the performance of a Korean morphological analyzer.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Korean is an agglutinative language in which words consist of several morphemes, and some verb forms change when conjugated with functional suffixes. A Korean morphological analyzer (KMA) is designed to analyze a word and identify functional morphemes, which can specify the syntactic role of words in a sentence. Although end-to-end approaches are widely used in deep-learning models, some applications such as syntactic parsers require a KMA as a preprocessor to separate functional morphemes before parsing.</ns0:p><ns0:p>In many cases, the productive inflectional system in Korean causes deletion and contraction between a stem and the following morphemes when creating a Korean word. Therefore, a KMA should identify the base form of a morpheme by recovering deleted morphemes and decomposing contracted morphemes <ns0:ref type='bibr' target='#b5'>(Han and Palmer, 2004)</ns0:ref>. Therefore, a KMA output sequence differs from a raw input sequence in both length and surface form. Figure <ns0:ref type='figure'>1</ns0:ref> shows an example of KMA input and output. INPUT is a sequence of words separated by white spaces, and OUTPUT is a morphologically analyzed result. The output is a sequence of a single morpheme and the part-of-speech (POS) that follows the morpheme. The symbol '&lt;SP&gt;' indicates a word boundary. In this example, the input consists of three words, while the output consists of three words or seven morphemes, where some morphemes indicate grammatical relationships in a sentence.</ns0:p><ns0:p>Because the input and output lengths are different, Korean morphological analysis can be defined as an encoder-decoder problem. A raw input sequence is encoded and then decoded into a morphologically analyzed sequence. An encoder-decoder problem can be easily implemented by adopting two recurrent neural networks. 
Recent research in deep learning has proposed a new architecture, Transformer <ns0:ref type='bibr' target='#b15'>(Vaswani et al., 2017)</ns0:ref>, for encoder-decoder problems that can increase the parallelism of learning processes by eliminating recurrent connections. Transformer calculates self-attention scores that can cross-reference between every input. Therefore, we adopt the Transformer architecture to implement a KMA in this work. Manuscript to be reviewed Figure <ns0:ref type='figure'>1</ns0:ref>. The output of a morphological analyzer for the example input sentence 'I lost a black galaxy note.' (The Yale Romanized system is used to transcribe Korean sentences. 'VA' is an adjective, and 'VV' is a verb. 'ETM' is an adnominal ending that attaches to the end of a verb or an adjective. 'NNP' is a proper noun, and 'JKO' is a particle that attaches to the end of nouns indicating an objective case. 'EP' is a verbal ending in the past tense, and 'EF' is a verbal ending to make a sentence declarative.)</ns0:p><ns0:p>To train a KMA based on Transformer from scratch, we need a considerable parallel corpus that includes raw input sentences and their analyzed results.</ns0:p><ns0:p>Since the introduction of pretrained language representation models such as Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b3'>(Devlin et al., 2019)</ns0:ref>, most natural language processing (NLP) applications have been developed based on pretrained models. Pretrained models provide contextdependent embeddings of an input sequence and reduce both the chronic problem of a lack of training data and the time required for training.</ns0:p><ns0:p>In this work, we utilize two types of BERT models to initialize Transformer, the backbone of a KMA.</ns0:p><ns0:p>One is pretrained with Korean raw sentences and the other with morphologically analyzed sentences consisting of morphemes and POS tags. For the sake of clarity, we name the former 'word-based BERT' (wBERT) and the latter 'morpheme-based BERT' (mBERT); mBERT can encode a morphologically analyzed sequence into embedding vectors in the same way wBERT can encode a raw sentence. We initialized the Transformer encoder with wBERT and the Transformer decoder with mBERT.</ns0:p><ns0:p>While it is reasonable to initialize a Transformer encoder with BERT, it may seem unusual to initialize a decoder with BERT. We do not have decoder-based models like GPT <ns0:ref type='bibr' target='#b14'>(Radford et al., 2018)</ns0:ref> pretrained with Korean morphologically analyzed data; instead, only mBERT is pretrained with Korean morphologically analyzed data. Recently, <ns0:ref type='bibr' target='#b0'>Chi et al. (2020)</ns0:ref> demonstrated that initializing an encoder and decoder with XLM <ns0:ref type='bibr' target='#b10'>(Lample and Conneau, 2019)</ns0:ref> produced better results than initializing them with random values.</ns0:p><ns0:p>XLM is a pretrained model for cross-lingual tasks and is implemented based on a Transformer encoder. Therefore, both an encoder and decoder of a KMA can be expected to benefit from initializing parameters with pretrained models. 
When Transformer is initialized with pretrained models, implementing a KMA based on the Transformer is a fine-tuning process that can be done with a relatively small corpus.</ns0:p><ns0:p>In addition, employing the fine-tuning process is easier and faster than building a KMA from scratch.</ns0:p><ns0:p>Pretrained models such as BERT generally have 12-24 layers, which is deeper than conventional models. A few studies <ns0:ref type='bibr' target='#b2'>(Clark et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b7'>Jawahar et al., 2019)</ns0:ref> have examined which layers of pretrained models are best suited for which tasks. It has been established that tasks dealing with words and surface forms of a sentence generally perform well on the lower layers rather than on the top layer. In this work, we also investigate the number of layers in a Transformer architecture that obtains the best accuracy for Korean morphological analysis.</ns0:p><ns0:p>Our contributions to achieving high-performance Korean morphological analysis are the following:</ns0:p><ns0:p>1. Because we leverage pretrained Korean language representation models to initialize the encoder and decoder of a morphological analyzer, we can train the morphological analyzer faster and with less training data.</ns0:p><ns0:p>2. We find the most appropriate number of layers in the BERT models for a KMA rather than using all layers in the models.</ns0:p><ns0:p>In the following section, we first explore related studies, and then we present the main architecture of a Korean morphological analyzer in Section 3. Experimental results are described in Section 4, followed by the conclusion in Section 5.</ns0:p></ns0:div> <ns0:div><ns0:head>2/10</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:2:0:NEW 22 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>SURVEY OF KOREAN MORPHOLOGICAL ANALYZERS</ns0:head><ns0:p>Traditional Korean morphological analysis consists of two pipeline stages: the first step is to separate morphemes from a word and convert them into their stems, and the second step is to assign them POS tags.</ns0:p><ns0:p>Recently, a deep learning-based end-to-end approach has been applied to many applications, including KMAs. A sequence-to-sequence architecture based on recurrent neural networks is most often used to implement a KMA in full end-to-end style. Using this architecture, morphological analyzers can be easily implemented without complicated feature engineering or manually built lexicons. Conventional morphological analysis models suffer from the out-of-vocabulary (OOV) problem. To mitigate this problem, the following models adopted syllable-based sequence-to-sequence architecture. <ns0:ref type='bibr' target='#b11'>Li et al. (2017)</ns0:ref> adopted gated recurrent unit networks to implement a KMA with a syllable-based sequence-to-sequence architecture. In addition, an attention mechanism <ns0:ref type='bibr' target='#b12'>(Luong et al., 2015)</ns0:ref> has been introduced to calculate the information needed by a decoder to ensure the model performs well. <ns0:ref type='bibr' target='#b8'>Jung et al. (2018)</ns0:ref> also used syllable-level input and output for a KMA to alleviate the problem of an unseen word. Even with syllable-level input and output, the model tends not to generate characters that rarely occur in a training corpus. 
Therefore, they supplemented the model with a copy mechanism <ns0:ref type='bibr' target='#b4'>(Gu et al., 2016)</ns0:ref> that copies rare characters to output sequences. A copy mechanism assigns higher probabilities to rare or OOV words to perform better sequence generation during decoding phases. They reported that the accuracy of the KMA improved from 95.92% to 97.08% when adopting input feeding and the copy mechanism. <ns0:ref type='bibr' target='#b1'>Choe et al. (2020)</ns0:ref> proposed a KMA specially designed to analyze Internet text data with several spacing errors and OOV inputs. To handle newly coined words, acronyms, and abbreviations often used in online discourse, they used syllable-based embeddings, syllable bigrams, and graphemes as input features.</ns0:p><ns0:p>The model performed better when the dataset was collected from the Internet.</ns0:p><ns0:p>Since the introduction of BERT <ns0:ref type='bibr' target='#b3'>(Devlin et al., 2019)</ns0:ref> in the field of NLP research, pre-trainingthen-fine-tuning approaches have become prevalent in most NLP applications. In addition, notable improvements have been reported in several studies that have adopted the pre-training-then-fine-tuning framework. However, due to the distinct characteristics of Korean complex morphology systems, previous KMA studies have not adopted large-scale pretrained models such as BERT. <ns0:ref type='bibr' target='#b11'>Li et al. (2017)</ns0:ref> used by wBERT and mBERT. A set of WordPiece that both wBERT and mBERT use does not include a word-separator token to indicate word boundaries. However, when the decoder generates a sequence of morpheme tokens, the word separator token '&lt;SP&gt;' must be specified between morpheme tokens to recover word-level results, as shown in the output of Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Therefore, we adopt a multi-task learning approach to generate morphological analysis results while also inserting word-separators between the results. On the final layer of the decoder, there is an additional binary classifier that can discern whether a word-separator is needed for each output token. The final layer of the decoder produces two types of output, as shown in Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>, which are combined to generate a final morphological analyzed sequence.</ns0:p><ns0:p>Given a raw input sentence, the tokenizer of wBERT splits words into multiple sub-word tokens. A token input to wBERT consists of a vector summation of a token embedding (w), a positional embedding (p), and a segment embedding (E A ), as shown in Equation <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>x i = w i + p i + E A (<ns0:label>1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>)</ns0:formula><ns0:p>where i is a token index of a sentence. The input of the encoder is denoted by X = {x 1 , x 2 , .., x n }. The encoder of Transformer encodes X into a sequence of contextualized embedding vectors Z = {z 1 , z 2 , ..., z n }.</ns0:p><ns0:p>Let us denote a sequence of hidden states in the decoder by H = {h 1 , h 2 , ..., h m }. As this work adopts a multi-task approach, the decoder simultaneously produces two kinds of output. The output of morpheme tokens is denoted by Y = {y 1 , y 2 , ..., y m } and the output of word separators is denoted by S = {s 1 , s 2 , ..., s m }, where s i &#8712; {0, 1}. 
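A minimal PyTorch sketch of the two decoder heads just described, which are formalised in Equations 2-4 below; the hidden size, vocabulary size, and the use of token-level cross-entropy for both heads are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDecoderHeads(nn.Module):
    """Sketch of the two output heads on the final decoder layer."""
    def __init__(self, hidden=768, morph_vocab=30349):
        super().__init__()
        self.W_G = nn.Linear(hidden, morph_vocab)  # morpheme-token generation head (cf. Eq. 2)
        self.W_S = nn.Linear(hidden, 2)            # binary word-separator head (cf. Eq. 3)

    def forward(self, dec_hidden):                 # dec_hidden: (B, T, hidden)
        return self.W_G(dec_hidden), self.W_S(dec_hidden)

def joint_loss(gen_logits, sep_logits, morph_ids, sep_labels):
    """L = L_Gen + L_Sep: sum of the two token-level losses (cf. Eq. 4); padding masking omitted."""
    l_gen = F.cross_entropy(gen_logits.transpose(1, 2), morph_ids)
    l_sep = F.cross_entropy(sep_logits.transpose(1, 2), sep_labels)
    return l_gen + l_sep

# toy shapes: batch of 2 sentences, 11 decoder steps
heads = MultiTaskDecoderHeads()
gen_logits, sep_logits = heads(torch.randn(2, 11, 768))
loss = joint_loss(gen_logits, sep_logits,
                  torch.randint(1, 30349, (2, 11)), torch.randint(0, 2, (2, 11)))
```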
At time t of the decoding phase, an output morpheme and a word-separator are determined by Equation 2 and 3, respectively.</ns0:p><ns0:formula xml:id='formula_2'>P G (y t |X) = P(y t |Z,Y &lt; t) = so f tmax(W T G h t )<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>P S (s t |X) = P(s t |Z,Y &lt; t) = so f tmax(W T S h t )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where W G and W S are learnable parameters for generating both outputs. The objective function L of the model is in Equation 4 Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>L Gen = &#8722; m &#8721; t=1 log P G (y t |X) L Sep = &#8722; m &#8721; t=1 (log P S (s t |X) &#8722; log(1 &#8722; P S (s t |X))) L = L Gen + L Sep</ns0:formula><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS Datasets and Experimental Setup</ns0:head><ns0:p>We used 90,000 sentences for training, 1,000 sentences for validation, and 10,000 sentences for evaluation in this work. They were all collected from the POS-tagged corpus published by the 21st Century Sejong Project <ns0:ref type='bibr' target='#b9'>(Kim, 2006)</ns0:ref>. The sentence lengths were all less than 100 words and 46 POS labels were used in the Sejong corpus. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows the hyperparameters of the encoder and decoder, and Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the hyperparameters</ns0:p></ns0:div> <ns0:div><ns0:head>Corpus</ns0:head><ns0:p>for training the model.</ns0:p><ns0:p>The following experiments were designed to find the combination of the numbers of encoder and decoder layers that achieved the best KMA performance. First, we initialized the Transformer encoder and decoder with wBERT and mBERT, respectively, including embeddings of the WordPiece tokens.</ns0:p><ns0:p>The cross attentions between the Transformer encoder and decoder were initialized with random values.</ns0:p><ns0:p>Because the encoder and decoder each had 12 layers, we compared the KMA performances by performing 12 x 12 combinations of encoder and decoder layers. When we adopted fewer than 12 layers of the encoder and decoder, we used the parameters of the corresponding layers from the bottom of wBERT and mBERT. The remaining parameters, such as W G and W S in Equations 2 and 3, were randomly initialized.</ns0:p></ns0:div> <ns0:div><ns0:head>Results and Evaluation</ns0:head><ns0:p>The BERT base model has 12 layers. <ns0:ref type='bibr' target='#b7'>Jawahar et al. (2019)</ns0:ref> reported that tasks dealing with surface information performed best in the third and fourth layers of BERT, while tasks related to semantic information performed best in the seventh layer and above.</ns0:p><ns0:p>First, we wanted to find the optimal number of encoder and decoder layers to achieve the best morphological analysis performance while reducing the number of parameters to be estimated. In the first experiment, the encoder and decoder were initialized with wBERT and mBERT, respectively.</ns0:p><ns0:p>Then we compared the KMA performance according to the number of layers. The overall results of 12</ns0:p><ns0:p>x 12 combinations of encoder and decoder layers are shown in Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>. The accuracy improved as the number of encoder layers increased, while it remained nearly the same when the number of decoder layers increased. We claim that KMA achieves the best performance with 12 encoder layers. 
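A minimal sketch of the layer truncation used in these experiments, keeping only the bottom k layers (and their pretrained weights) of a HuggingFace BertModel; the checkpoint path is a hypothetical placeholder for the ETRI mBERT model.

```python
from transformers import BertModel

def keep_bottom_layers(model: BertModel, k: int) -> BertModel:
    """Truncate the encoder stack to its lowest k layers, keeping their pretrained weights."""
    model.encoder.layer = model.encoder.layer[:k]   # slicing an nn.ModuleList keeps parameters
    model.config.num_hidden_layers = k
    return model

decoder_init = BertModel.from_pretrained("path/to/korean-morpheme-bert")  # hypothetical mBERT
decoder_init = keep_bottom_layers(decoder_init, k=4)  # best-performing decoder depth (Table 4)
```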
We took a closer look at the results of Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> to examine the effect of the number of decoder layers on the KMA performance. Based on the results of Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>, we set the number of encoder layers at 12 and initialized it with wBERT. Then, we initialized the decoder with mBERT and compared the KMA performance while Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>varying the number of decoder layers from 3 to 12. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> shows the KMA accuracy according to the number of decoder layers. To get definitive results, we obtained the accuracies by averaging three trials for each row in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. Although there seems to be little difference in KMA accuracy according to the number of decoder layers, the KMA performed best with four decoder layers. Surprisingly, the number of parameters in the decoder can be reduced without deteriorating the KMA performance. Table <ns0:ref type='table'>5</ns0:ref>. The comparison of accuracies according to the number of layers and initialization in the encoder and the decoder.</ns0:p></ns0:div> <ns0:div><ns0:head>Number of layers in the</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> summarizes the experimental results to evaluate the impact of initializing wBERT and mBERT.</ns0:p><ns0:p>In all subsequent experiments, the encoder had 12 layers and the decoder had four. The KMA significantly improved when the encoder was initialized with wBERT, increasing the F1 score by as much as 2.93.</ns0:p><ns0:p>There was a slight performance improvement when the decoder was initialized with mBERT. The KMA with the four-layer decoder outperformed the KMA with the full-layer decoder. In addition, we discovered that the word separator could easily exceed 99% accuracy by adopting multi-task learning. We obtained approximately 93.58% accuracy in word separation when adopting an independent word separator. Table <ns0:ref type='table'>9</ns0:ref> in the appendix provides the accuracy comparisons between the multi-task separator and the independent separator.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> presents the robustness of the KMA to the lengths of the input sentences. The KMA with 12 encoder layers and four decoder layers outperformed the other combinations of models in evaluating longer inputs.</ns0:p><ns0:p>To show the effect of initializing parameters with pretrained models more clearly, we performed the following experiments. We measured changes in the accuracy of the models as the size of the training dataset decreased from 100% to 10%. The results are shown in Table <ns0:ref type='table' target='#tab_6'>7</ns0:ref>.</ns0:p><ns0:p>Initializing the model with wBERT and mBERT degraded the accuracy by less than 0.5%, even when only half of the training dataset was used to train the model. For the model initialized with wBERT alone, shows that the KMA can be trained robustly and efficiently when its parameters are initialized with the pretrained models wBERT and mBERT. Parameter initialization with pretrained models helps successfully build deep learning models.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_7'>8</ns0:ref> shows a comparison of the F1 scores with those of previous approaches that used sequenceto-sequence architectures. 
We reimplemented the models from the previous approaches and compared their results directly to those of our model in the same environments. The KMA proposed in this study demonstrates competitive end-to-end performance without any additional knowledge or mechanisms.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this work, we suggested adopting the Transformer architecture to implement a KMA. Furthermore, we proposed using two Korean BERTs to initialize the parameters of the Transformer encoder and decoder.</ns0:p><ns0:p>We introduced a multi-task learning approach to specify word boundaries in an output sequence of morpheme tokens. The KMA achieved its best performance when initialized with two types of Korean BERT. In addition, we observed that the accuracy of the KMA was highest when it had four layers in Manuscript to be reviewed F1 score Models (re-implementation) Sequence-to-sequence (syllable-basis) <ns0:ref type='bibr' target='#b11'>(Li et al., 2017)</ns0:ref> 97.15 (96.58) Sequence-to-sequence (syllable-basis) + input feeding + copy mechanism <ns0:ref type='bibr' target='#b8'>(Jung et al., 2018)</ns0:ref> 97.08 (96.67) </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 2. Transformer: the basis architecture of a Korean morphological analyzer Figure 2 illustrates the basic architecture of a KMA. Our model is implemented based on Transformer, which consists of an encoder and a decoder. The encoder, the inputs of which are raw sentences, is initialized with wBERT and the decoder, the outputs of which are morphologically analyzed sentences, is initialized with mBERT, as shown in Figure 3. A training dataset consists of pairs of raw input sentences with their morphological analyzed sequences.</ns0:figDesc><ns0:graphic coords='4,178.44,525.19,340.24,112.69' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 provides an example to clearly understand the model. The raw input sentence has three words, which are split into eight WordPiece tokens (shown in INPUT(X)) for the input of the encoder.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The input and the output of the Korean morphological analyzer for 'I lost a black galaxy note.'</ns0:figDesc><ns0:graphic coords='6,141.73,63.71,413.47,121.62' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of performances according to the number of encoder and decoder layers. The X-axis shows the number of encoder layers, and the Y-axis shows the number of decoder layers.</ns0:figDesc><ns0:graphic coords='7,206.79,398.76,283.48,188.99' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:65503:2:0:NEW 22 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>the accuracy deteriorated by more than 0.6%, while the accuracy of the model initialized with random values decreased by more than 3%. The most surprising result was that the model initialized with wBERT and mBERT achieved 95.23% accuracy when trained with only 10% of the training dataset. 
This result was approximately the same as when the model was trained with the complete training dataset after being initialized with random values. As the size of training dataset becomes smaller, the performance of the model in the last row decreases faster than the model with 12 and 4 layers in the encoder and decoder initialized with wBERT and mBERT, respectively. The main reason is that the former has more parameters than the latter, and models with more parameters require more training data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 depicts the training loss curves of the models with different initialization values depending on the number of training steps. The X-axis shows the number of training steps, while the Y-axis shows the loss values of the training steps. At the beginning of the training phase, the losses of the model initialized with random values decreased very sharply. However, as the number of training steps increased, the loss of the model using wBERT and mBERT as parameter initializers dropped faster than the other models. This</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:65503:2:0:NEW 22 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Comparison of loss values according to the number of training steps.</ns0:figDesc><ns0:graphic coords='10,206.79,64.10,283.69,216.00' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The statistics of the dataset.Table1describes the average number and the maximum number of WordPiece tokens in the sentences. We adopted both wBERT and mBERT, released by the Electronics and Telecommunications Research Institute (https://aiopen.etri.re.kr). 
They were pretrained with approximately 23 GB of data from newspapers and Wikipedia.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Number of</ns0:cell><ns0:cell cols='2'>Average number</ns0:cell><ns0:cell>Maximum number</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>sentences</ns0:cell><ns0:cell /><ns0:cell>of tokens</ns0:cell><ns0:cell>of tokens</ns0:cell></ns0:row><ns0:row><ns0:cell>Train</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>90,000</ns0:cell><ns0:cell /><ns0:cell>34.80 39.22</ns0:cell><ns0:cell>220 185</ns0:cell></ns0:row><ns0:row><ns0:cell>Validation</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>1,000</ns0:cell><ns0:cell /><ns0:cell>40.61 45.16</ns0:cell><ns0:cell>120 126</ns0:cell></ns0:row><ns0:row><ns0:cell>Evaluation</ns0:cell><ns0:cell>Input Output</ns0:cell><ns0:cell>10,000</ns0:cell><ns0:cell /><ns0:cell>34.06 38.55</ns0:cell><ns0:cell>126 132</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Hyperparameters</ns0:cell><ns0:cell /><ns0:cell cols='2'>Encoder Decoder</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>(wBERT) (mBERT)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Number of layers</ns0:cell><ns0:cell /><ns0:cell>1-12</ns0:cell><ns0:cell>1-12</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Hidden dimension</ns0:cell><ns0:cell /><ns0:cell>768</ns0:cell><ns0:cell>768</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Intermediate dimension</ns0:cell><ns0:cell>3,072</ns0:cell><ns0:cell>3,072</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Number of attention heads</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Activation function</ns0:cell><ns0:cell /><ns0:cell>Gelu</ns0:cell><ns0:cell>Gelu</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Dropout</ns0:cell><ns0:cell /><ns0:cell>0.1</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Maximum of input length</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Vocabulary size</ns0:cell><ns0:cell /><ns0:cell>30,797</ns0:cell><ns0:cell>30,349</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The hyperparameters of the wBERT and mBERT.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate(encoder, decoder)</ns0:cell><ns0:cell>5e-3, 1e-3</ns0:cell></ns0:row><ns0:row><ns0:cell>Beta1, Beta2</ns0:cell><ns0:cell>0.99, 0.998</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum number of training steps</ns0:cell><ns0:cell>100,000</ns0:cell></ns0:row></ns0:table><ns0:note>5/10 PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:09:65503:2:0:NEW 22 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The hyperparameters for training the model.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of accuracies according to the number of decoder layers.Through the above experiment, we arrived at the preliminary conclusion that KMA performs best when the decoder has only four layers and the encoder has the full number of layers.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Morpheme-level</ns0:cell><ns0:cell cols='2'>Separator</ns0:cell><ns0:cell>Sentence-level</ns0:cell><ns0:cell>Number of</ns0:cell></ns0:row><ns0:row><ns0:cell>Decoder</ns0:cell><ns0:cell cols='2'>F1 score</ns0:cell><ns0:cell cols='2'>accuracy</ns0:cell><ns0:cell>accuracy</ns0:cell><ns0:cell>parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='2'>97.86</ns0:cell><ns0:cell>99.64</ns0:cell><ns0:cell /><ns0:cell>59.48</ns0:cell><ns0:cell>184M</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>98.31</ns0:cell><ns0:cell>99.78</ns0:cell><ns0:cell /><ns0:cell>66.45</ns0:cell><ns0:cell>193M</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='2'>97.99</ns0:cell><ns0:cell>99.67</ns0:cell><ns0:cell /><ns0:cell>60.88</ns0:cell><ns0:cell>203M</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell cols='2'>98.01</ns0:cell><ns0:cell>99.70</ns0:cell><ns0:cell /><ns0:cell>61.33</ns0:cell><ns0:cell>212M</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='2'>98.04</ns0:cell><ns0:cell>99.71</ns0:cell><ns0:cell /><ns0:cell>61.53</ns0:cell><ns0:cell>222M</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell cols='2'>98.07</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.22</ns0:cell><ns0:cell>231M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='2'>98.07</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.07</ns0:cell><ns0:cell>241M</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='2'>98.09</ns0:cell><ns0:cell>99.72</ns0:cell><ns0:cell /><ns0:cell>62.46</ns0:cell><ns0:cell>250M</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='2'>98.14</ns0:cell><ns0:cell>99.74</ns0:cell><ns0:cell /><ns0:cell>63.16</ns0:cell><ns0:cell>260M</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='2'>98.29</ns0:cell><ns0:cell>99.73</ns0:cell><ns0:cell /><ns0:cell>64.42</ns0:cell><ns0:cell>269M</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Number of layers &amp; Initialization in the encoder and decoder</ns0:cell><ns0:cell cols='2'>Morpheme-level F1 score</ns0:cell><ns0:cell cols='2'>Separator accuracy</ns0:cell><ns0:cell>Sentence-level accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-random Decoder-4-random</ns0:cell><ns0:cell cols='2'>95.32</ns0:cell><ns0:cell /><ns0:cell>98.84</ns0:cell><ns0:cell>40.49</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT Decoder-4-random</ns0:cell><ns0:cell cols='2'>98.25</ns0:cell><ns0:cell /><ns0:cell>99.79</ns0:cell><ns0:cell>63.91</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT Decoder-4-mBERT</ns0:cell><ns0:cell cols='2'>98.31</ns0:cell><ns0:cell /><ns0:cell>99.78</ns0:cell><ns0:cell>66.45</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Encoder-12-wBERT 
Decoder-12-mBERT</ns0:cell><ns0:cell cols='2'>98.29</ns0:cell><ns0:cell /><ns0:cell>99.73</ns0:cell><ns0:cell>64.42</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of accuracies according to the input length and number of layers and initialization in the encoder and the decoder.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>7/10</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison of F1 scores according to the size of the training corpus.</ns0:figDesc><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of results of previous approaches to those of the proposed model. the decoder and 12 layers in the encoder. To conclude this work, we proved that appropriate parameter initialization can help ensure stable, fast training, and good performance of deep-learning models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Our model Transformer + wBERT + mBERT</ns0:cell><ns0:cell>98.31</ns0:cell></ns0:row></ns0:table><ns0:note>Sequence-to-sequence Syllable + grapheme + bigram embeddings(Choe et al., 2020) 97.93 (96.74) </ns0:note></ns0:figure> <ns0:note place='foot' n='10'>/10 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:65503:2:0:NEW 22 Mar 2022)Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"Editor comments (Imran Ashraf) MAJOR REVISIONS Based on reviewer's comments, changes are required for your manuscript. Dear Professor Ashraf Thank you again for your considerate advice for our paper. All comments have been addressed and the paper has been revised in response to them. Reviewer 1 (Huaping Zhang) Basic reporting Compared with the previous version, the author has made some supplements and modifications to the experiment, but there are still some problems in the details. (1) When exploring the influence of training set size on experimental results, there should be experiments with full-layers for both encoder and decoder, as in the previous experiments. From my point of view, I suggest a few modifications are still needed before publication. First of all, we appreciate your effort and time to improve our work. We could upgrade our manuscript thanks to your considerate review. We have followed your advice and tried to revise the paper as minutely as possible. We performed an additional experiment on the model with full-layers for both encoder and decoder and added the results in Table 7. Experimental design (2) In the experiment of finding the optimal number of encoder and decoder layers , I think the two experiments made by the author can be combined into one, that is, the optimal combination of layers for encoder and decoder can be determined by the first experiment alone. Yes, you are right. I totally agree with your thoughtful comment. We combined the first and the second experiments into one in the revised version. However, since we want to show the readers the underlying reason why we choose 4 layers in the decoder, we leave Table 4 that describes the accuracies and the number of parameters according to the number of decoder’s layers. Validity of the findings Compared with the previous version, the author has made some supplements and modifications to the experiment, but there are still some problems in the details. Although our experimental results cannot be applied to all kinds of NLP applications, we believe that we make an important contribution to finding the optimal number of layers of the encoder and the decoder of Korean Morphological analyzer based on two types of BERT. Additional comments (3) In table 6, the fourth column heading should be '> 100 tokens' not '< 100 tokens'. Corrected "
Here is a paper. Please give your review comments after reading it.
417
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Bilinear pooling (BLP) refers to a family of operations recently developed for fusing features from different modalities predominantly for visual question answering (VQA) models. Successive BLP techniques have yielded higher performance with lower computational expense, yet at the same time they have drifted further from the original motivational justification of bilinear models, instead becoming empircally motivated by task performance. Furthermore, despite significant success in text-image fusion in VQA, BLP has not yet gained such notoriety in video-QA. Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features, BLP has recently been overshadowed by other vision and textual feature fusion techniques in video-QA. We aim to add a new perspective to the empirical and motivational drift in BLP. We take a step back and discuss the motivational origins of BLP, highlighting the often-overlooked parallels to neurological theories (Dual Coding Theory and The Two-Stream Model of Vision). We seek to carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique in video-QA using 2 models (TVQA baseline and heterogeneous-memory-enchanced 'HME' model) and 4 datasets (TVQA, TGif-QA, MSVD-QA, and EgoVQA). We examine the impact of both simply replacing feature concatenation in the existing models with BLP, and a modified version of the TVQA baseline to accommodate BLP that we name the 'dual-stream' model. We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. Using our insights on recent work in BLP for video-QA results and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gain for video-QA benchmarks may be more difficult to achieve than in earlier VQA models. We both share our perspective on, and suggest solutions for, the key issues we identify with BLP techniques for multimodal fusion in video-QA. We look beyond the empirical justification of BLP techniques and propose both alternatives and</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION 39</ns0:head><ns0:p>To solve the growing abundance of complex deep learning tasks, it is essential to develop modelling 40 and learning strategies with the capacity to learn complex and nuanced multimodal relationships and 41 representations. To this end, research efforts in multimodal deep learning have taken aim at the relationship <ns0:ref type='bibr' target='#b7'>Ben-Younes et al. (2019)</ns0:ref>. A bilinear (outer product) expansion is thought to encourage models to learn interactions between two feature spaces and has experimentally outperformed 'simpler' vector operations (i.e. concatenation and element-wise-addition/multiplication) on VQA benchmarks. Though successive BLP techniques focus on leveraging higher performance with lower computational expense, which we wholeheartedly welcome, the context of their use has subtly drifted from application in earlier bilinear models e.g. where in <ns0:ref type='bibr' target='#b60'>Lin et al. (2015)</ns0:ref> the bilinear mapping is learned between convolution maps (a tangible and visualisable parameter), from compact BLP <ns0:ref type='bibr' target='#b30'>Gao et al. 
(2016)</ns0:ref> onwards the bilinear mapping is learned between indexes of deep feature vectors (a much less tangible unit of representation). Though such changes are not necessarily problematic and the improved VQA performance they have yielded is valuable, they represent a broader trend of the use of BLP methods in multimodal fusion being justified only by empirical success. Furthermore, despite BLP's history of success in text-image fusion in VQA, it</ns0:p><ns0:p>has not yet gained such notoriety in video-QA. Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features <ns0:ref type='bibr' target='#b43'>Hu et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b109'>Zhou et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b74'>Pang et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b98'>Xu et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Deng et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b93'>Wang et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Deb et al. (2022)</ns0:ref>; <ns0:ref type='bibr' target='#b86'>Sudhakaran et al. (2021)</ns0:ref>, BLP has recently been overshadowed by other vision and textual feature fusion techniques in video-QA <ns0:ref type='bibr' target='#b49'>Kim et al. (2019)</ns0:ref>; <ns0:ref type='bibr'>Li et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Gao et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b61'>Liu et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Liang et al. (2019)</ns0:ref>.</ns0:p><ns0:p>In this paper, we aim to add a new perspective to the empirical and motivational drift in BLP. Our contributions include the following: I) We carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique on 2 models (TVQA baseline and heterogeneous-memory-enchanced 'HME' model) and 4 datasets (TVQA, TGif-QA, MSVD-QA and EgoVqa). To this end, our experiments include replacing feature concatenation in the existing models with BLP, and a modified version of the TVQA baseline to accommodate BLP that we name the 'dual-stream' model. Furthermore, we contrast BLP (classified as a 'joint' representation by <ns0:ref type='bibr' target='#b4'>Baltru&#353;aitis et al. (2019))</ns0:ref> with deep canonical cross correlation (a 'co-ordinated representation'). We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. II)</ns0:p><ns0:p>We discuss the motivational origins of BLP and share our observations of bilinearity in text-vision fusion. III) By observing trends in recent work using BLP for multimodal video tasks and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gain for video-QA benchmarks may be more difficult to achieve than in earlier VQA models. IV) We identify temporal alignment and inefficiency (computational resources and performance) as key issues with BLP as a multimodal text-vision fusion technique in video-QA, and highlight concatenation and attention mechanisms as an ideal alternative. V) In parallel with the empirically justified innovations driving BLP methods, we explore the often-overlooked similarities of bilinear and multimodal fusion to neurological theories e.g. 
Dual Coding Theory <ns0:ref type='bibr' target='#b71'>Paivio (2013</ns0:ref><ns0:ref type='bibr' target='#b72'>Paivio ( , 2014) )</ns0:ref> and the Two-Stream Model of Vision <ns0:ref type='bibr' target='#b32'>Goodale and Milner (1992)</ns0:ref>; <ns0:ref type='bibr' target='#b65'>Milner (2017)</ns0:ref>, and propose several potential neurologically justified alternatives and improvements to existing text-image fusion. We highlight the latent potential already in existing video-QA dataset to exploit neurological theories by presenting a qualitative analysis of occurrence of psycholinguistically 'concrete' words in the vocabularies of the textual components of the 4 video-QA we experiment with.</ns0:p></ns0:div> <ns0:div><ns0:head>BACKGROUND: BILINEAR POOLING</ns0:head><ns0:p>In this section we outline the development of BLP techniques, highlight how bilinear models parallel the two-stream model of vision, and discuss where bilinear models diverged from their original motivation.</ns0:p></ns0:div> <ns0:div><ns0:head>Concatenation</ns0:head><ns0:p>Early works use Vector concatenation to project different features into a new joint feature space. <ns0:ref type='bibr' target='#b108'>Zhou et al. (2015)</ns0:ref> use vector concatenation on the CNN image and text features in their simple baseline VQA model. Similarly, <ns0:ref type='bibr' target='#b64'>Lu et al. (2016)</ns0:ref> concatenate image attention and textual features. Vector concatenation is a projection of both input vectors into a new 'joint' dimensional space. Vector concatenation as a multimodal feature fusion technique in VQA is considered a baseline and is generally empirically outperformed in VQA by the following bilinear techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>Bilinear Models</ns0:head><ns0:p>Working from the observations that 'perceptual systems routinely separate 'content' from 'style'', <ns0:ref type='bibr' target='#b90'>Tenenbaum and Freeman (2000)</ns0:ref> proposed a bilinear framework on these two different aspects of purely visual Manuscript to be reviewed Computer Science inputs. They find that the multiplicative bilinear model provides 'sufficiently expressive representations of factor interactions'. The bilinear model in <ns0:ref type='bibr' target='#b60'>Lin et al. (2015)</ns0:ref> is a 'two-stream' architecture where distinct subnetworks model temporal and spatial aspects. The bilinear interactions are between the outputs of two CNN streams, resulting in a bilinear vector that is effectively an outer product directly on convolution maps (features are aggregated with sum-pooling). This makes intuitive sense as individual convolution maps represent specific patterns. It follows that learnable parameters representing the outer product between these maps learn weightings between distinct and visualisable patterns directly. Interestingly, both <ns0:ref type='bibr' target='#b90'>Tenenbaum and Freeman (2000)</ns0:ref>; <ns0:ref type='bibr' target='#b60'>Lin et al. (2015)</ns0:ref> are reminiscent of two-stream hypothesises of visual processing in the human brain <ns0:ref type='bibr' target='#b32'>Goodale and Milner (1992)</ns0:ref>; <ns0:ref type='bibr' target='#b65'>Milner and</ns0:ref><ns0:ref type='bibr'>Goodale (2006, 2008)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>Goodale (2014);</ns0:ref><ns0:ref type='bibr'>Milner (2017) (discussed in detail later)</ns0:ref>. 
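For reference, a minimal sketch of outer-product (bilinear) pooling between two convolutional streams in the style of Lin et al. (2015), with sum-pooling over spatial locations followed by the signed square-root and L2 normalisation commonly used with bilinear CNN features; the feature-map sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bilinear_pool(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Outer product between two sets of convolutional feature maps, sum-pooled over
    spatial locations. feat_a: (B, Ca, H, W), feat_b: (B, Cb, H, W)."""
    B, Ca, H, W = feat_a.shape
    Cb = feat_b.shape[1]
    a = feat_a.reshape(B, Ca, H * W)
    b = feat_b.reshape(B, Cb, H * W)
    z = torch.bmm(a, b.transpose(1, 2)) / (H * W)          # (B, Ca, Cb): pairwise map interactions
    z = z.reshape(B, Ca * Cb)
    z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-12)   # signed square-root
    return F.normalize(z, dim=-1)                          # L2 normalisation

# two hypothetical streams over the same frame
z = bilinear_pool(torch.randn(2, 128, 7, 7), torch.randn(2, 64, 7, 7))  # (2, 8192)
```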
Though these models focus on only visual content, their generalisable two-factor frameworks would later be inspiration to multimodal representations. Fully bilinear representations using deep learning features can easily become impractically large, necessitating informed mathematical compromises to the bilinear expansion. <ns0:ref type='bibr' target='#b30'>Gao et al. (2016)</ns0:ref> introduce 'Compact Bilinear Pooling', a technique combining the count sketch function <ns0:ref type='bibr' target='#b10'>Charikar et al. (2002)</ns0:ref> and convolution theorem <ns0:ref type='bibr' target='#b22'>Dom&#237;nguez (2015)</ns0:ref> in order to 'pool' the outer product into a smaller bilinear representation. <ns0:ref type='bibr' target='#b27'>Fukui et al. (2016)</ns0:ref> use compact BLP in their VQA model to learn interactions between text and images i.e. multimodal compact bilinear pooling (MCB). We note that for MCB, the learned outer product is no longer on convolution maps, but rather on the indexes of image and textual tensors. Intuitively, a given index of an image or textual tensor is more abstracted from visualisable meaning when compared to convolution map. As far as we are aware, no research has addressed the potential ramifications of this switch from distinct maps to feature indexes, and later usages of bilinear pooling methods continue this trend. Though MCB is significantly more efficient than full bilinear expansions, they still require relatively large latent dimension to perform well on VQA (d&#8776;16000).</ns0:p></ns0:div> <ns0:div><ns0:head>Compact Bilinear Pooling</ns0:head></ns0:div> <ns0:div><ns0:head>Multimodal Low-Rank Bilinear Pooling</ns0:head><ns0:p>To further reduce the number of needed parameters, <ns0:ref type='bibr' target='#b50'>Kim et al. (2017)</ns0:ref> introduce multimodal low-rank bilinear pooling (MLB), which approximates the outer product weight representation W by decomposing it into two rank-reduced projection matrices: <ns0:ref type='bibr'>(m, n)</ns0:ref> is the output vector dimension, &#8857; is element-wise multiplication of vectors or the Hadamard product, and 1 is the unity vector. MLB performs better than MCB in <ns0:ref type='bibr' target='#b70'>Osman and Samek (2019)</ns0:ref>, but it is sensitive to hyperparameters and converges slowly. Furthermore, <ns0:ref type='bibr' target='#b50'>Kim et al. (2017)</ns0:ref> suggest using Tanh activation on the output of z to further increase model capacity. <ns0:ref type='bibr' target='#b105'>Yu et al. (2017)</ns0:ref> propose multimodal factorised bilinear pooling (MFB) as an extension of MLB. Consider the bilinear projection matrix W &#8712; R m&#215;n outlined above, to learn output z &#8712; R o we need to learn W = [W 0 , ..., W o&#8722;1 ]. We generalise output z:</ns0:p><ns0:formula xml:id='formula_0'>z = MLB(x, y) = (X T x) &#8857; (Y T y) z = x T W y = x T XY T y = 1 T (X T x &#8857;Y T y) where X &#8712; R m&#215;o , Y &#8712; R n&#215;o , o &lt; min</ns0:formula></ns0:div> <ns0:div><ns0:head>Multimodal Factorised Low Rank Bilinear Pooling</ns0:head><ns0:formula xml:id='formula_1'>z i = x T X i Y T i y = k&#8722;1 &#8721; d=0 x T a d b T d y = 1 T (X T i x &#8857; Y T i y) (1)</ns0:formula><ns0:p>Note that MLB is equivalent to MFB where k=1. 
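A minimal PyTorch sketch of MFB as defined in Equation 1, with MLB recovered by setting k = 1; the dropout and the signed square-root/L2 normalisation are additions commonly paired with MFB rather than part of the equation, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFB(nn.Module):
    """Minimal sketch of multimodal factorised bilinear pooling (MLB is the k = 1 case)."""
    def __init__(self, dx, dy, out_dim, k=5):
        super().__init__()
        self.k, self.out_dim = k, out_dim
        self.proj_x = nn.Linear(dx, k * out_dim)   # 'expand' x to k * o dimensions
        self.proj_y = nn.Linear(dy, k * out_dim)   # 'expand' y to k * o dimensions
        self.dropout = nn.Dropout(0.1)

    def forward(self, x, y):
        joint = self.dropout(self.proj_x(x) * self.proj_y(y))      # Hadamard product, (B, k*o)
        joint = joint.view(-1, self.out_dim, self.k).sum(dim=2)    # 'squeeze': sum-pool over k
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-12)  # power normalisation
        return F.normalize(joint, dim=-1)                          # L2 normalisation

mfb = MFB(dx=512, dy=512, out_dim=1000, k=5)
z = mfb(torch.randn(8, 512), torch.randn(8, 512))   # (8, 1000)
```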
MFB can be thought of as a two-part process: features are 'expanded' to higher-dimensional space by W &#963; matrices, then 'squeezed' into a 'compact ouput'.</ns0:p><ns0:p>The authors argue that this gives 'more powerful' representational capacity in the same dimensional space than MLB.</ns0:p></ns0:div> <ns0:div><ns0:head>Multimodal Tucker Fusion</ns0:head><ns0:p>Ben-younes et al. ( <ns0:ref type='formula'>2017</ns0:ref>) extend the rank-reduction concept from MLB and MFB to factorise the entire bilinear tensor using tucker decomposition <ns0:ref type='bibr' target='#b92'>Tucker (1966)</ns0:ref> in their multimodal tucker fusion (MUTAN) Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>model. We will briefly summarise the notion of rank and the mode-n product to describe the tucker decomposition model.</ns0:p><ns0:p>Rank and mode-n product:</ns0:p><ns0:formula xml:id='formula_2'>If W &#8712; R I 1 &#215;,...,&#215;I N and V &#8712; R J n &#215;I n for some n &#8712; {1, ..., N} then rank(W &#8855; n V) &#8804; rank(W)</ns0:formula><ns0:p>where &#8855; n is the mode-n tensor product:</ns0:p><ns0:formula xml:id='formula_3'>(W &#8855; n V)(i 1 , ..., i n&#8722;1 , j n , i n+1 , ..., i N ):=&#8721; I n i n =1 W (i 1 , ..., i n&#8722;1 , i n , i n+1 , ..., i N )V( j n , i n )</ns0:formula><ns0:p>In essence, the mode-n fibres (also known as mode-n vectors) of W &#8855; n V are the mode-n fibres of W multiplied by V (proof here Guillaume OLIKIER ( <ns0:ref type='formula'>2017</ns0:ref>)). See Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> for a visualisation of mode-n fibres. Each mode-n tensor product introduces an upper bound to the rank of the tensor. We note that conventionally, the mode-n fibres count from 1 instead of 0. We will follow this convention for the tensor product portion of our paper to avoid confusion. The tucker decomposition of a real 3 rd order tensor T &#8712; R d 1 &#215;d 2 &#215;d 3 is:</ns0:p><ns0:formula xml:id='formula_4'>T = &#964; &#8855; 1 W 1 &#8855; 2 W 2 &#8855; 3 W 3</ns0:formula><ns0:p>where &#964; &#8712; R d 1 &#215;d 2 &#215;d 3 (core tensor), and</ns0:p><ns0:formula xml:id='formula_5'>W 1 , W 2 , W 3 &#8712; R d 1 &#215;d 1 , R d 2 &#215;d 2 , R d 3 &#215;d 3 (factor matrices) respec-</ns0:formula><ns0:p>tively.</ns0:p></ns0:div> <ns0:div><ns0:head>MUTAN:</ns0:head><ns0:p>The MUTAN model uses a reduced rank on the core tensor to constrain representational capacity, and the factor matrices to encode full bilinear projections of the textual and visual features, and</ns0:p><ns0:p>finally output an answer prediction, i.e:</ns0:p><ns0:formula xml:id='formula_6'>y = ((&#964; &#8855; 1 (q T W q )) &#8855; 2 (v T W v )) &#8855; 3 W o</ns0:formula><ns0:p>Where y &#8712; R |A| is the answer prediction vector and q, v are the textual and visual features respectively.</ns0:p><ns0:p>A slice-wise attention mechanism is used in the MUTAN model to focus on the 'most discriminative interactions'. Multimodal tucker fusion is an empirical improvement over the preceeding BLP techniques on VQA, but it introduces complex hyperparameters to refine that are important for relatively its high performance (R and core tensor dimensions).</ns0:p></ns0:div> <ns0:div><ns0:head>Multimodal Factorised Higher Order Bilinear Pooling</ns0:head><ns0:p>All the BLP techniques discussed up to now are 'second-order', i.e. take two functions as inputs. <ns0:ref type='bibr' target='#b106'>Yu et al. 
(2018b)</ns0:ref> propose multimodal factorised higher-order bilinear pooling (MFH), extending second-order BLP to 'generalised high-order pooling' by stacking multiple MFB units, i.e:</ns0:p><ns0:formula xml:id='formula_7'>z i exp = MFB i exp (I, Q) = z i&#8722;1 exp &#8857; Dropout(U T I &#8857; V T Q) z = SumPool(z exp )</ns0:formula><ns0:p>for i &#8712; {1, ..., p} where I, Q are visual and text features respectively. Similar to how MFB extends MLB, MFH is MFB where p = 1. Though MFH slightly outperforms MFB, there has been little exploration into the theoretical benefit in generalising to higher-order BLP.</ns0:p></ns0:div> <ns0:div><ns0:head>4/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Bilinear Superdiagonal Fusion <ns0:ref type='bibr' target='#b7'>Ben-Younes et al. (2019)</ns0:ref> proposed another method of rank restricted bilinear pooling: Bilinear Superdiagonal Fusion (BLOCK). We will briefly outline block term decomposition before describing BLOCK.</ns0:p><ns0:p>Block Term Decomposition: Introduced in a 3-part paper De Lathauwer (2008a,b); De Lathauwer and Nion (2008), block term decomposition reformulates a bilinear matrix representation as the sum of rank restricted matrix products (contrasting low rank pooling which is represented by only a single rank restricted matrix product). By choosing the number of decompositions in the approximated sum and their rank, block-term decompositions offer greater control over the approximated bilinear model. Block term decompositions are easily extended to higher-order tensor decompositions, allowing multilinear rank restriction for multilinear models in future research. A block term decomposition of a tensor W &#8712; R I 1 &#215;,...,&#215;I N is a decomposition of the form: </ns0:p><ns0:formula xml:id='formula_8'>W = &#8721; R r=1 S r &#8855; 1 U 1 r &#8855; 2 U 2 r &#8855; 3 , ..., &#8855; n U n</ns0:formula><ns0:formula xml:id='formula_9'>z = W &#8855; 1 x &#8855; 2 y</ns0:formula><ns0:p>where z &#8712; R o . The superdiagonal BLOCK model uses a 3 dimensional block term decomposition. The decomposition of W in rank (R 1 , R 2 , R 3 ) is defined as:</ns0:p><ns0:formula xml:id='formula_10'>W = &#8721; R r=1 S r &#8855; 1 U 1 r &#8855; 2 U 2 r &#8855; 3 U 3 r</ns0:formula><ns0:p>This can be written as</ns0:p><ns0:formula xml:id='formula_11'>W = S bd &#8855; 1 U 1 &#8855; 2 U 2 &#8855; 3 U 3 where U 1 =[U 1 1 , ..., U 1 R ],</ns0:formula><ns0:p>similarly with U 2 and U 3 , and now S bd &#8712; R RR 1 &#215;RR 2 &#215;RR 3 . So z can now be expressed with respect to x and y. Let x = U 1 x &#8712; R RR 1 and &#375; = U 2 y &#8712; R RR 2 . These two projections are merged by the block-superdiagonal tensor S bd . Each block in S bd merges together blocks of size R 1 from</ns0:p><ns0:p>x and of size R 2 from &#375; to produce a vector of size R 3 :</ns0:p><ns0:formula xml:id='formula_12'>z r = S r &#8855; x xrR 1 :(r+1)R 1 &#8855; y &#375;rR 2 :(r+1)R 2</ns0:formula><ns0:p>where xi: j is the vector of dimension j &#8722; i containing the corresponding values of x. Finally all vectors z r are concatenated producing &#7825; &#8712; R RR 3 . The final prediction vector is z = U 3 , &#7825; &#8712; R o . 
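Putting the block projections and the superdiagonal core together, a minimal sketch of BLOCK-style fusion using torch.einsum; the number of blocks and the per-block ranks are illustrative assumptions rather than the settings used in the BLOCK paper.

```python
import torch
import torch.nn as nn

class BlockFusion(nn.Module):
    """Minimal sketch of block-superdiagonal (BLOCK-style) bilinear fusion."""
    def __init__(self, dx, dy, dz, R=4, r1=10, r2=10, r3=10):
        super().__init__()
        self.R, self.r1, self.r2, self.r3 = R, r1, r2, r3
        self.U1 = nn.Linear(dx, R * r1, bias=False)        # projects x into R blocks of size r1
        self.U2 = nn.Linear(dy, R * r2, bias=False)        # projects y into R blocks of size r2
        self.cores = nn.Parameter(torch.randn(R, r1, r2, r3) * 0.01)  # one core S_r per block
        self.U3 = nn.Linear(R * r3, dz, bias=False)        # final projection to output size

    def forward(self, x, y):
        bx = self.U1(x).view(-1, self.R, self.r1)          # (B, R, r1)
        by = self.U2(y).view(-1, self.R, self.r2)          # (B, R, r2)
        # per-block bilinear interaction: z_r = S_r x_1 bx_r x_2 by_r
        z = torch.einsum('bri,brj,rijk->brk', bx, by, self.cores)   # (B, R, r3)
        return self.U3(z.reshape(-1, self.R * self.r3))    # concatenate blocks and project

fusion = BlockFusion(dx=512, dy=512, dz=1000)              # e.g. pooled text and image features
z = fusion(torch.randn(8, 512), torch.randn(8, 512))       # (8, 1000)
```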
Similar to tucker fusion, the block term decomposition based fusion in BLOCK theoretically allows more nuanced control on representation size and empirically outperforms previous techniques.</ns0:p></ns0:div> <ns0:div><ns0:head>5/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORKS Bilinear Pooling in Video-QA With Language-Vision Fusion</ns0:head><ns0:p>We aim to highlight and explore a broad shift away from BLP in favour of methods such as attention in video-QA benchmarks. Several video models have incorporated and contrasted BLP techniques to their own model designs for language-vision fusion tasks. <ns0:ref type='bibr' target='#b49'>Kim et al. (2019)</ns0:ref> find various BLP fusions perform worse than their 'dynamic modality fusion' mechanism on TVQA <ns0:ref type='bibr' target='#b55'>Lei et al. (2018)</ns0:ref> and MovieQA <ns0:ref type='formula'>2019</ns0:ref>) is a hierarchical model that aims to dynamically select from the appropriate point across both time and modalities that outperforms an MCB approach on Movie-QA.</ns0:p></ns0:div> <ns0:div><ns0:head>Bilinear Pooling in Video Without Language-Vision Fusion</ns0:head><ns0:p>Where recent research in video-QA tasks (which includes textual questions as input) has moved away from BLP techniques, several video tasks that do not involve language have found success using BLP </ns0:p></ns0:div> <ns0:div><ns0:head>DATASETS</ns0:head><ns0:p>In this section, we outline the video-QA datasets we use in our experiments.</ns0:p><ns0:p>MSVD-QA <ns0:ref type='bibr' target='#b97'>Xu et al. (2017)</ns0:ref> argue that simply extending image-QA methods is 'insufficient and suboptimal' to conduce quality video-QA, and that instead the focus should be on the temporal structure of videos. Using an NLP method to automatically generate QA pairs from descriptions <ns0:ref type='bibr' target='#b39'>Heilman and Smith (2009)</ns0:ref> ('what', 'who', 'how', 'when', 'where').</ns0:p></ns0:div> <ns0:div><ns0:head>TGIF-QA</ns0:head><ns0:p>Jang et al. ( <ns0:ref type='formula'>2017</ns0:ref>) speculate that the relatively limited progress in video-QA compared to image-QA is 'due in part to the lack of large-scale datasets with well defined tasks'. As such, they introduced the TGIF-QA dataset to 'complement rather than compete' with existing VQA literature and to serve as a bridge between video-QA and video understanding. To this end, they propose 3 subsets with specific video-QA tasks that aim to take advantage of the temporal format of videos:</ns0:p><ns0:p>Count: Counting the number of times a specific action is repeated <ns0:ref type='bibr' target='#b56'>Levy and Wolf (2015)</ns0:ref> e.g. 'How many times does the girl jump?'. Models output the predicted number of times the specified actions happened.</ns0:p><ns0:p>(Over 30k QA pairs).</ns0:p><ns0:p>Action: Identify the action that is repeated a number of times in the video clip. There are over 22k multiple choice questions e.g. 'What does the girl do 5 times?'.</ns0:p><ns0:p>Trans: Identifying details about a state transition <ns0:ref type='bibr' target='#b44'>Isola et al. (2015)</ns0:ref>. There are over 58k multiple choice questions e.g. 'What does the man do after the goal post?'.</ns0:p></ns0:div> <ns0:div><ns0:head>6/25</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Frame-QA: An image-QA split using automatically generated QA pairs from frames and captions in the TGIF dataset <ns0:ref type='bibr' target='#b58'>Li et al. (2016)</ns0:ref> (over 53k multiple choice questions).</ns0:p></ns0:div> <ns0:div><ns0:head>TVQA</ns0:head><ns0:p>The TVQA dataset <ns0:ref type='bibr' target='#b55'>Lei et al. (2018)</ns0:ref> YouTube2Text-QA has over 99k questions in 'what', 'who' and 'other' style.</ns0:p></ns0:div> <ns0:div><ns0:head>MODELS</ns0:head><ns0:p>In this section, we describe the models used in our experiments, built from the official TVQA 1 and HME-VideoQA 2 implementations.</ns0:p></ns0:div> <ns0:div><ns0:head>TVQA Model</ns0:head><ns0:p>Model Definition: The model takes as inputs: a question q, five potential answers {a i } 4 i=0 , a subtitle S and corresponding video-clip V, and outputs the predicted answer. As the model can either use the entire video-clip and subtitle or only the parts specified in the timestamp, we refer to the sections of video and subtitle used as segments from now on. Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref> demonstrates the textual and visual streams and their associated features in model architecture.</ns0:p><ns0:p>ImageNet Features: Each frame is processed by a ResNet101 <ns0:ref type='bibr' target='#b38'>He et al. (2016)</ns0:ref> pretrained on ImageNet <ns0:ref type='bibr' target='#b20'>Deng et al. (2009)</ns0:ref> to produce a 2048-d vector. These vectors are then L2-normalised and stacked in frame order:</ns0:p><ns0:formula xml:id='formula_13'>V img &#8712; R f &#215;2048</ns0:formula><ns0:p>where f is the number of frames used in the video segment.</ns0:p><ns0:p>Regional Features: Each frame is processed by a Faster R- <ns0:ref type='bibr'>CNN Ren et al. (2015)</ns0:ref> trained on Visual Genome <ns0:ref type='bibr' target='#b52'>Krishna et al. (2017)</ns0:ref> in order to detect objects. Each detected object in the frame is given a bounding box, and has an affiliated 2048-d feature extracted. Since there are multiple objects detected per frame (we cap it at 20 per frame), it is difficult to efficiently represent this in time sequences <ns0:ref type='bibr' target='#b55'>Lei et al. (2018)</ns0:ref>. The model uses the top-K regions for all detected labels in the segment as in <ns0:ref type='bibr' target='#b2'>Anderson et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b47'>Karpathy and Fei-Fei (2015)</ns0:ref>. Hence the regional features are V reg &#8712; R n reg &#215;2048 where n reg is the number of regional features used in the segment. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_14'>) embeddings V vcpt &#8712; R n vcpt &#215;300 or R n vcpt &#215;768</ns0:formula><ns0:p>respectively, where n vcpt is the number of visual concepts used in the segment.</ns0:p><ns0:p>Text Features: The model encodes the questions, answers, and subtitles using either GloVe (&#8712; R 300 ) or BERT embeddings (&#8712; R 768 ). 
Formally, q &#8712; R n q &#215;d , {a i } 4 i=0 &#8712; R n a i &#215;d , S &#8712; R n s &#215;d where n q , n a i , n s is the number of words in q, a i , S respectively and d = 300, 768 for GloVe or BERT embeddings respectively.</ns0:p><ns0:p>Context Matching: Context matching refers to context-query attention layers recently adopted in machine comprehension <ns0:ref type='bibr' target='#b82'>Seo et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b103'>Yu et al. (2018a)</ns0:ref>. Given a context-query pair, context matching layers return 'context aware queries'.</ns0:p><ns0:p>Model Details: Any combination of subtitles or visual features can be used. All features are mapped into word vector space through a tanh non-linear layer. They are then processed by a shared bi-directional <ns0:ref type='bibr'>LSTM Hochreiter and Schmidhuber (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Graves and Schmidhuber (2005)</ns0:ref> ('Global LSTM' in Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>) of output dimension 300. Features are context-matched with the question and answers. The original context vector is then concatenated with the context-aware question and answer representations and their combined element-wise product ('Stream Processor' in Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>, e.g. for subtitles S, the stream processor outputs [F sub ;A sub,q ;A sub,a 0&#8722;4 ;</ns0:p><ns0:formula xml:id='formula_15'>F sub &#8857; A sub,q ;F sub &#8857; A sub,a 0&#8722;4 ]&#8712; R n sub &#215;1500 where F sub &#8712; R n s &#215;300 .</ns0:formula><ns0:p>Each concatenated vector is processed by their own unique bi-directional LSTM of output dimension 600, followed by a pair of fully connected layers of output dimensions 500 and 5, both with dropout 0.5 and ReLU activation. The 5-dimensional output represents a vote for each answer. The element-wise sum of each activated feature stream is passed to a softmax producing the predicted answer ID. All features remain separate through the entire network, effectively allowing the model to choose the most useful features.</ns0:p></ns0:div> <ns0:div><ns0:head>HME-VideoQA</ns0:head><ns0:p>To better handle semantic meaning through long sequential video data, recent models have integrated Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>In this section we outline our experimental setup and results. We save our insights for the discussion in the next section. See our GitHub repository 3 for both the datasets and code used in our experiments.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> shows the benchmarks and SotA results for the datasets we consider in this paper. </ns0:p></ns0:div> <ns0:div><ns0:head>Concatenation to BLP (TVQA)</ns0:head><ns0:p>As previously discussed, BLP techniques have outperformed feature concatenation on a number of VQA benchmarks. The baseline stream processor concatenates the visual feature vector with question and answer representations. Each of the 5 inputs to the final concatenation are 300-d. We replace the visualquestion/answer concatenation with BLP (Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>). All inputs to the BLP layer are 300-d, the outputs are 750-d and the hidden size is 1600 (a smaller hidden state than normal, however, the input features are also smaller compared to other uses of BLP). 
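For concreteness, the substituted fusion step can be pictured as the low-rank (MLB-style) bilinear layer sketched below, using the dimensions just described (300-d inputs, hidden size 1600, 750-d output). This is an illustrative PyTorch sketch rather than the exact module in our repository; the class name, variable names, and dropout rate are ours.

```python
# Illustrative MLB-style low-rank bilinear fusion with the dimensions used in
# our TVQA substitution (300-d inputs, hidden size 1600, 750-d output).
# A sketch for exposition, not the official implementation.
import torch
import torch.nn as nn

class LowRankBilinearFusion(nn.Module):
    def __init__(self, in_dim=300, hidden_dim=1600, out_dim=750, dropout=0.1):
        super().__init__()
        self.proj_a = nn.Linear(in_dim, hidden_dim)   # projects the question/answer stream
        self.proj_b = nn.Linear(in_dim, hidden_dim)   # projects the visual stream
        self.out = nn.Linear(hidden_dim, out_dim)     # maps the joint space to the fused output
        self.drop = nn.Dropout(dropout)

    def forward(self, a, b):
        # a, b: (batch, seq_len, in_dim). Context matching gives both streams the
        # same temporal length, so the fusion is applied at every time step.
        joint = torch.tanh(self.proj_a(a)) * torch.tanh(self.proj_b(b))
        return self.out(self.drop(joint))             # (batch, seq_len, out_dim)

fusion = LowRankBilinearFusion()
q_ctx = torch.randn(4, 64, 300)    # context-aware question representation
vis = torch.randn(4, 64, 300)      # temporally aligned visual features
fused = fusion(q_ctx, vis)         # (4, 64, 750) fused representation
```

The MCB and MFH variants we test differ in how they approximate the bilinear interaction (count-sketch projection and higher-order factorised pooling respectively), but they slot into the stream processor at the same point.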
We make as few changes as possible to accommodate BLP, i.e. we use context matching to facilitate BLP fusion by aligning visual and textual features temporally.</ns0:p><ns0:p>Our experiments include models with/without subtitles or questions (Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>). </ns0:p></ns0:div> <ns0:div><ns0:head>Dual-Stream Model</ns0:head><ns0:p>We create our 'dual-stream' (Figure <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>, Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>) Manuscript to be reviewed </ns0:p></ns0:div> <ns0:div><ns0:head>CCA</ns0:head><ns0:p>Canonical cross correlation analysis (CCA) <ns0:ref type='bibr' target='#b42'>Hotelling (1936)</ns0:ref> is a method for measuring the correlations between two sets. Let (X 0 , X 1 ) &#8712; R d 0 &#215; R d 1 be random vectors with covariances (&#8721; r=00 , &#8721; r=11 ) and cross-covariance &#8721; r=01 . CCA finds pairs of linear projections of the two views (w &#8242; 0 X 0 , w &#8242; 1 X 1 ) that are maximally correlated: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_16'>&#961; = (w * 0 , w * 1 ) = argmax w 0 ,w 1 corr(w &#8242; 0 X 0 , w &#8242; 1 X 1 )<ns0:label>11</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_17'>= argmax w 0 ,w 1 w &#8242; 0 &#8721; 01 w 1 &#8730; w &#8242; 0 &#8721; 00 w 0 w &#8242; 1 &#8721; 11 w 1</ns0:formula><ns0:p>where &#961; is the correlation co-efficient. As &#961; is invariant to the scaling of w 0 and w 1 , the projections are constrained to have unit variances, and can be represented as the following maximisation:</ns0:p><ns0:formula xml:id='formula_18'>argmax w 0 ,w 1 w &#8242; 0 &#8721; 01 w 1 s.t w &#8242; 0 &#8721; 00 w 0 = w &#8242; 1 &#8721; 11 w 1 = 1</ns0:formula><ns0:p>However, CCA can only model linear relationships regardless of the underlying realities in the dataset.</ns0:p><ns0:p>Thus, CCA extensions were proposed, including kernel CCA (KCCA) <ns0:ref type='bibr' target='#b0'>Akaho (2001)</ns0:ref> and later DCCA.</ns0:p></ns0:div> <ns0:div><ns0:head>DCCA</ns0:head><ns0:p>DCCA is a parametric method used in multimodal neural networks that can learn non-linear transformations for input modalities. Both modalities t, v are encoded in neural-network transformations</ns0:p><ns0:formula xml:id='formula_19'>H t , H v = f t (t, &#952; t ), f v (v, &#952; v )</ns0:formula><ns0:p>, and then the canonical correlation between both modalities is maximised in a common subspace (i.e. maximise cross-modal correlation between</ns0:p><ns0:formula xml:id='formula_20'>H t , H v ). max corr(H t , H v ) = argmax &#952; t ,&#952; v corr( f t (t, &#952; t ), f v (v, &#952; v ))</ns0:formula><ns0:p>We use DCCA over KCCA to co-ordinate modalities in our experiments as it is generally more stable and efficient, learning more 'general' functions.</ns0:p></ns0:div> <ns0:div><ns0:head>DCCA in TVQA</ns0:head><ns0:p>We use a 2-layer DCCA module to coordinate question and context (visual or subtitle) features (Figure <ns0:ref type='figure' target='#fig_17'>8</ns0:ref>, </ns0:p></ns0:div> <ns0:div><ns0:head>Concatenation to BLP (HME-VideoQA)</ns0:head><ns0:p>As described in the previous section, we replace a concatenation step in the HME model between textual and visual features with BLP (Figure <ns0:ref type='figure' target='#fig_11'>5</ns0:ref>, corresponding to the multimodal fusion unit in Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>). 
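Since the HME substitutions use MCB, the swapped-in fusion can be sketched as the count-sketch approximation below. This is an illustrative implementation rather than the code in the official repositories; the sketch size d_out, the feature sizes, and the variable names are assumptions.

```python
# Illustrative compact bilinear (MCB-style) fusion via count sketch and FFT.
# The sketch size d_out and all variable names are assumptions for exposition.
import torch

def count_sketch(x, h, s, d_out):
    # x: (batch, d_in); h: random bucket index per input dim; s: random +/-1 sign per input dim.
    out = torch.zeros(x.size(0), d_out, device=x.device)
    out.index_add_(1, h, x * s)
    return out

def mcb_fuse(text, vis, h_t, s_t, h_v, s_v, d_out=8000):
    # The outer-product pooling is approximated as a circular convolution of the
    # two sketches, computed as an element-wise product in the Fourier domain.
    f_t = torch.fft.rfft(count_sketch(text, h_t, s_t, d_out))
    f_v = torch.fft.rfft(count_sketch(vis, h_v, s_v, d_out))
    return torch.fft.irfft(f_t * f_v, n=d_out)        # (batch, d_out)

d_text, d_vis, d_out = 512, 512, 8000                  # assumed feature sizes
h_t = torch.randint(0, d_out, (d_text,)); s_t = torch.randint(0, 2, (d_text,)).float() * 2 - 1
h_v = torch.randint(0, d_out, (d_vis,));  s_v = torch.randint(0, 2, (d_vis,)).float() * 2 - 1
fused = mcb_fuse(torch.randn(4, d_text), torch.randn(4, d_vis), h_t, s_t, h_v, s_v, d_out)
```

The Fourier-domain product is what keeps the approximated outer product tractable; materialising a full bilinear map between features of this size would be prohibitively large.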
The goal here is to explore if BLP can better facilitate multimodal fusion in aggregated memory features (Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>).</ns0:p><ns0:p>We replicate the results from <ns0:ref type='bibr' target='#b24'>Fan et al. (2019)</ns0:ref> with the HME on the MSVD, TGIF and EgoVQA datasets using the official github repository <ns0:ref type='bibr' target='#b12'>Chenyou (2019)</ns0:ref>. We extract our own C3D features from the frames in the TVQA. </ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>TVQA Experiments</ns0:head><ns0:p>No BLP Improvements on TVQA: On the HME concat-to-BLP substitution model (Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>), MCB barely changes model performance at all. We find that none of our TVQA concat-to-BLP substitutions (Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>) yield any improvements at all, with almost all of them performing worse overall ( 0.3-5%) than even the questionless concatenation model. Curiously, MCB scores the highest of all BLP techniques.</ns0:p><ns0:p>The dual-stream model performs worse still, dropping accuracy by between 5-10% vs the baseline (Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>). Similarly, we find that MCB performs best despite being known to require larger latent spaces to work on VQA.</ns0:p><ns0:p>BERT Impacted the Most: For the TVQA BLP-substitution models, we find the GloVe, BERT and 'no-subtitle' variations all degrade by roughly similar margins, with BERT models degrading more most often. This slight discrepancy is unsurprising as the most stable BERT baseline model is the best, and thus may degrade more on the inferior BLP variations. However, BERT's relative degradation is much more pronounced on the dual-stream models, performing 3% worse than GloVe. We theorise that here, the significant and consistent drop is potentially caused by BERT's more contextual nature is no longer helping, but actively obscuring more pronounced semantic meaning learned from subtitles and questions.</ns0:p><ns0:p>Blame Smaller Latent Spaces?: Naturally, bilinear representations of time series data across multiple frames or subtitles are highly VRAM intensive. Thus we can only explore relatively small hidden dimensions (i.e. 1600). However, we cannot simply conclude our poor results are due to our relatively small latent spaces because: I) MCB is our best performing BLP technique. However, MCB has been <ns0:ref type='formula'>2016</ns0:ref>)). We note that 16000/2048 &#8776; 1600/300, and so our latent-to-input size ratio is not substantially different to previous works.</ns0:p></ns0:div> <ns0:div><ns0:head>13/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Unimodal Biases in TVQA and Joint Representation: Another explanation may come from works exploring textual biases inherent in TVQA to textual modalities <ns0:ref type='bibr' target='#b94'>Winterbottom et al. (2020)</ns0:ref>. BLP has been categorised as a 'joint representation'. <ns0:ref type='bibr' target='#b4'>Baltru&#353;aitis et al. (2019)</ns0:ref> consider representation as summarising multimodal data 'in a way that exploits the complementarity and redundancy of multiple modalities'.</ns0:p><ns0:p>Joint representations combine unimodal signals into the same representation space. 
However, they struggle to handle missing data <ns0:ref type='bibr' target='#b4'>Baltru&#353;aitis et al. (2019)</ns0:ref> as they tend to preserve shared semantics while ignoring modality-specific information <ns0:ref type='bibr' target='#b36'>Guo et al. (2019)</ns0:ref>. The existence of unimodal text bias in TVQA implies BLP may perform poorly on the TVQA as a joint representation of it's features because: I) information from either modality is consistently missing, II) prioritising 'shared semantics' over 'modality-specific' information harms performance on TVQA. Though concatenation could also be classified as a joint representation, we argue that this observation still has merit. Theoretically, a concatenation layer can still model modality specific information (see Figure <ns0:ref type='figure' target='#fig_24'>9</ns0:ref>), but a bilinear representation would seem to inherently entangle its inputs which would make modality specific information more challenging to learn since each parameter representing one modality is by definition weighted with the other. This may explain why our simpler BLP substitutions perform better than our more drastic 'joint' dual-stream model.</ns0:p><ns0:p>What About DCCA?: Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> shows our results on the DCCA augmented TVQA models. We see a slight but noticable performance degradation with this relatively minor alteration to the stream processor.</ns0:p><ns0:p>As previously mentioned, DCCA is in some respects an opposite approach to multimodal fusion than BLP, i.e. a 'coordinated representation'. The idea of a coordinated representations is to learn a separate representation for each modality , but with respect to the other. In this way, it is thought that multimodal interactions can be learned while still preserving modality-specific information that a joint representation may otherwise overlook <ns0:ref type='bibr' target='#b36'>Guo et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b75'>Peng et al. (2018)</ns0:ref>. DCCA specifically maximises cross-modal correlation. Without further insight from surrounding literature, it is difficult to conclude what TVQA's drop in performance using both joint and coordinated representations could mean. We will revisit this when we discuss the role of attention in multimodal fusion. into a new space. We use this technique to prepare our visual and question/answer features because it temporally aligns both features, giving them the same dimensional shape, conveniently allowing us to apply BLP at each time step. Since the representations generated are much more similar than the original raw features and there is some degree of information exchange, it may affect BLP's representational capacity. Though it is worth considering these potential shortcomings, we cannot immediately assume that BiDAF would cause serious issues as earlier bilinear technique were successfully used between representations in the same modality <ns0:ref type='bibr' target='#b90'>Tenenbaum and Freeman (2000)</ns0:ref>; <ns0:ref type='bibr' target='#b30'>Gao et al. (2016)</ns0:ref>. This implies that multimodal interactions can still be learned between the more similar context-matched representations, provided the information is still present. Since BiDAF does allow visual information to be used in the TVQA baseline model, it is reasonable to assume that some of the visual information is in fact intact and exploitable for BLP. 
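To make this concern concrete, the core of context matching can be reduced to the context-to-query attention sketched below. This is a deliberately simplified dot-product stand-in for BiDAF's trilinear similarity (which also attends in the query-to-context direction), with illustrative tensor names.

```python
# Simplified context-to-query attention in the spirit of context matching.
# BiDAF's actual similarity function is trilinear and bidirectional; this
# dot-product version is for illustration only.
import torch
import torch.nn.functional as F

def context_match(context, query):
    # context: (batch, n_c, d), e.g. a visual or subtitle stream
    # query:   (batch, n_q, d), e.g. a question representation
    sim = torch.bmm(context, query.transpose(1, 2))    # (batch, n_c, n_q) similarity scores
    attn = F.softmax(sim, dim=-1)                      # attend over query positions
    return torch.bmm(attn, query)                      # context-aware query, (batch, n_c, d)

vis = torch.randn(2, 64, 300)
question = torch.randn(2, 12, 300)
vis_aware_q = context_match(vis, question)             # aligned to the visual time axis
```

The output is aligned to the context's time axis and shares its dimensionality, which is what lets us apply BLP at every time step, but it is also already question-conditioned, which is precisely the information exchange discussed above.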
However, it is still currently unclear if context matching is fundamentally disrupting BLP and contributing to the poor results we find. We note that in BiDAF, 'memoryless' attention is implemented to avoid propagating errors through time. We argue that though this may be true and help in some circumstances, conversely, this will not allow some useful interactions to build up over time steps.</ns0:p></ns0:div> <ns0:div><ns0:head>Does Context Matching Ruin Multimodal</ns0:head></ns0:div> <ns0:div><ns0:head>The Other Datasets on HME</ns0:head><ns0:p>BLP Has No Effect: Our experiments on the EgoVQA, TGIF-QA and MSVD-QA datasets are on concat-to-BLP substitution HME models. Our results are inconclusive. There is virtually no variation in performance between the BLP and concatenation implementations. Interestingly, EgoVQA consistently does not converge with this simple substitution. We cannot comment for certain on why this is the case.</ns0:p><ns0:p>There seems to be no intuitive reason why it's 1 st person content would cause this. Rather, we believe this is symptomatic of overfitting in training, as EgoVQA is very small and pretrained on a different dataset, for implementing attention mechanisms alongside BLP, so that the theoretically greater representational capacity of BLP is not squandered on less useful noisy information. The TVQA model uses the previously discussed BiDAF mechanism to focus information from both modalities. However, the HME model integrates a more complex memory-based multi-hop attention mechanism. This difference may potentially highlight why the TVQA model suffers more substantially integrating BLP than the HME one.</ns0:p></ns0:div> <ns0:div><ns0:head>BLP in Video-QA: Problems and Recommendations</ns0:head><ns0:p>We have experimented with BLP in 2 video-QA models and across 4 datasets. Our experiments show that the BLP fusion techniques popularised in VQA has not extended to increased performance to video-QA. In the preceding sections, we have supported this observation with experimental results which we contextualise by surveying the surrounding literature for BLP for multimodal video tasks. In this section, we condense our observations into a list of problems that BLP techniques pose to video-QA, and our proposal for alternatives and solutions:</ns0:p><ns0:p>Inefficient and Computationally Expensive Across Time: BLP as a fusion mechanism in video-QA can be exceedingly expensive due to added temporal relations. Though propagating information from each time step through a complex text-vision multimodal fusion layer is an attractive prospect, our experiments imply that modern BLP techniques simply do not empirically perform in such a scenario.</ns0:p><ns0:p>We recommend avoiding computationally expensive fusion techniques like BLP for text-image fusion throughout timesteps, and instead simply concatenate features at these points to save computational resources for other stages of processing (e.g. attention). Furthermore, we note that any prospective fusion technique used across time will quickly encounter memory limitations that could force the hidden-size We demonstrate poor performance using BLP to fuse both 'BiDAF-aligned' (TVQA) and 'raw' (HME) text and video features i.e. temporally aligned and unaligned respectively. As the temporally-aligned modality combinations of video-video and video-audio BLP fusion continue to succeed, we believe that the 'natural alignment' of modalities is a significant contributing factor to this performance discrepancy in video. 
To the best of our knowledge, we are the first to draw attention to this trend. Attention mechanisms continue to achieve state-of-the-art in video-language tasks and have been demonstrated (with visualisable Manuscript to be reviewed</ns0:p><ns0:p>Computer Science attention maps) to focus on relevant video and question features. We therefore recommend using attention mechanisms for their strong performance and relatively interpretable behaviour, and avoiding BLP for specifically video-text fusion.</ns0:p><ns0:p>Empirically Justified on VQA: Successive BLP techniques have helped drive increased VQA performance in recent years, as such they remain an important and welcome asset to the field of multimodal machine learning. We stress that these improvements, welcome as they are, are only justified by their empirical improvements in the tasks they are applied to, and lack strong theoretical frameworks which explain their superior performance. This is entirely understandable given the infamous difficulty in interpreting how neural networks actually make decisions or exploit their training data. However, it is often claimed that such improvements are the result of some intrinsic property of the BLP operator, e.g. creating 'richer multimodal representations': <ns0:ref type='bibr' target='#b27'>Fukui et al. (2016)</ns0:ref> hypothesise that concatenation is not as expressive as an outer product of visual and textual features. <ns0:ref type='bibr' target='#b50'>Kim et al. (2017)</ns0:ref> claim that 'bilinear models provide rich representations compared with linear models'. Ben-younes et al. ( <ns0:ref type='formula'>2017</ns0:ref>) claim MUTAN 'focuses on modelling fine and rich interactions between image and text modalities'. <ns0:ref type='bibr' target='#b106'>Yu et al. (2018b)</ns0:ref> claim that MFH significantly improves VQA performance 'because they achieve more effective exploitation of the complex correlations between multimodal features'. Ben-Younes et al. ( <ns0:ref type='formula'>2019</ns0:ref>) carefully demonstrate that the extra control over the dimensions of components in BLOCK fusion can be leveraged</ns0:p><ns0:p>to achieve yet higher VQA performance, however this is attributed to it's ability 'to represent very fine interactions between modalities while maintaining powerful mono-modal representations'. In contrast, <ns0:ref type='bibr' target='#b105'>Yu et al. (2017)</ns0:ref> carefully assess and discuss the empirical improvements their MFH fusion offers on VQA.</ns0:p><ns0:p>Our discussions and findings highlight the importance of being measured and nuanced when discussing the theoretical nature of multimodal fusion techniques and the benefits they bring.</ns0:p></ns0:div> <ns0:div><ns0:head>THEORETICALLY MOTIVATED OBSERVATIONS AND NEUROLOGICALLY GUIDED PROPOSALS:</ns0:head><ns0:p>BLP techniques effectively exploit mathematical innovations on bilinear expansions represented in neural networks. As previously discussed, it remains unclear why any bilinear representation would be intrinsically superior for multimodal fusion to alternatives e.g. a series of non-linear fully connected layers or attention mechanisms. In this section, we share our thoughts on the properties of bilinear functions, and how they relate to neurological theories for multimodal processing in the human brain. 
We provide qualitative analysis of the distribution of psycholinguistic norms present in the video-QA datasets used in our experiments with which, through the lens of 'Dual Coding Theory' and the 'Two-Stream' model of vision, we propose neurologically motivated multimodal processing methodologies.</ns0:p></ns0:div> <ns0:div><ns0:head>Observations: Bilinearity in BLP</ns0:head><ns0:p>Nonlinearities in Bilinear Expansions: As previously mentioned in our description of MLB, <ns0:ref type='bibr' target='#b50'>Kim et al. (2017)</ns0:ref> suggest using Tanh activation on the output of vector z to further increase model capacity. Strictly speaking, we note that adding the the non-linearity means the representation is no longer bilinear as it is not linear with respect to either of its input domains. It is instead the 'same kind of non-linear' in both the input domains. We suggest that an alternative term such as 'bi-nonlinear' would more accurately described such functions. Bilinear representations are not the most complex functions with which to learn interactions between modalities. As explored by <ns0:ref type='bibr' target='#b106'>Yu et al. (2018b)</ns0:ref>, we believe that higher-order interactions between features would facilitate a more realistic model of the world. The non-linear extension of bilinear or higher-order functions is a key factor to increase representational capacity.</ns0:p><ns0:p>Outer Product Forces Multimodal Interactions: The motivation for using bilinear methods over concatenation in VQA and video-QA was that it would enable learning more 'complex' or 'expressive' interactions between the textual and visual inputs. We note however that concatenation of inputs features should theoretically allow both a weighted multimodal combination of textual and visual units, and allow unimodal units of input features. As visualised in Figure <ns0:ref type='figure' target='#fig_24'>9</ns0:ref>, weights representing a bilinear expansion in a neural network each represent a multiplication of input units from each modalitiy. This appears to, in some sense, force multimodal interactions where it could possibly be advantageous to allow some degree of separation between the text and vision modalities. As discussed earlier, it is thought that 'joint' representations <ns0:ref type='bibr' target='#b4'>Baltru&#353;aitis et al. (2019)</ns0:ref> preserve shared semantics while ignoring modality-specific information <ns0:ref type='bibr' target='#b36'>Guo et al. (2019)</ns0:ref>. Though it is unclear if concatenation could effectively replicate bilinear processing while also preserving unimodal processing, it also remains unclear how exactly bilinear Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Proposals: Neurological Parallels</ns0:head><ns0:p>We have recommended that video-QA models prioritise attention mechanisms over BLP given our own experimental results and our observations of the current state-of-the-art trends. We can however still explore how bilinear models in deep learning are related to 2 key areas of relevant neurological research, i.e. 
the Two-Stream model of vision <ns0:ref type='bibr' target='#b32'>Goodale and Milner (1992)</ns0:ref>; <ns0:ref type='bibr' target='#b65'>Milner (2017)</ns0:ref> and Dual Coding Theory <ns0:ref type='bibr' target='#b71'>Paivio (2013</ns0:ref><ns0:ref type='bibr' target='#b72'>Paivio ( , 2014))</ns0:ref>.</ns0:p><ns0:p>Two-Stream Vision: Introduced in <ns0:ref type='bibr' target='#b32'>Goodale and Milner (1992)</ns0:ref>, the current consensus on primate visual processing is that it is divided into two networks or streams: The 'ventral' stream which mediates transforming the contents of visual information into 'mental furniture' that guides memory, conscious perception, and recognition; and the 'dorsal' stream which mediates the visual guidance of action. There is a wealth of evidence showing that these two subsystems are not mutually insulated from each other, but rather interconnect and contribute to one another at different stages of processing Milner (2017); <ns0:ref type='bibr' target='#b46'>Jeannerod and Jacob (2005)</ns0:ref>. In particular, <ns0:ref type='bibr' target='#b46'>Jeannerod and Jacob (2005)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science task performance previously discussed.</ns0:p><ns0:p>IV) Multimodal cognitive behaviours in people can be improved by providing cues. For example, referential processing (naming an object or identifying an object from a word) has been found to additively affect free recall (recite a list of items), with the memory contribution of non-verbal codes (pictures) being twice that of verbal codes <ns0:ref type='bibr' target='#b73'>Paivio and Lambert (1981)</ns0:ref>. <ns0:ref type='bibr' target='#b5'>Begg (1972)</ns0:ref> find that free recall of 'concrete phrases' (can be visualised) of their constituent words is roughly twice that of 'abstract' phrases. However, this difference increased six-fold for concrete phrases when cued with one of the phrase words, yet using cues for abstract phrases did not help at all. This was named the 'conceptual peg' effect in DCT, and is interpreted as memory images being re-activated by 'a high imagery retrieval cue'. Given such apparent differences in human cognitive processing for 'concrete' and 'abstract' words, it may similarly be beneficial for multimodal text-vision tasks to explicitly exploit the psycholinguistic 'concreteness' word norm. Leveraging existing psycholinguistic word-norm datasets, we identify the relative abundance of concrete words in textual components of the video-QA datasets we experiment with (see Figure <ns0:ref type='figure' target='#fig_27'>12</ns0:ref>).</ns0:p><ns0:p>As the various word-norm datasets use various scoring systems for concreteness (e.g. MTK40 uses a</ns0:p><ns0:p>Likert scale 1-7), we rescale the scores for each dataset such that the lowest score is 0 (highly abstract), and the highest score is 1 (highly concrete). Though we cannot find a concreteness score for every word in each dataset component's vocabulary, we see that the 4 video-QA datasets we experiment with have more concrete than abstract words overall. Furthermore, we see that answers are on-average significantly more concrete than they are abstract, and that (as intuitively expected) visual concepts from TVQA are even more concrete. 
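The preprocessing behind Figure 12 amounts to the short routine below: each norm dataset is min-max rescaled to [0, 1] and scores are averaged when a word appears in more than one source. The file handling, variable names, and toy scores are illustrative assumptions; the raw scales differ per norm dataset (e.g. a 1-7 Likert scale for MT40k) before rescaling.

```python
# Illustrative aggregation of concreteness norms across word-norm datasets
# (0 = most abstract, 1 = most concrete). Names and data layout are assumptions.
from collections import defaultdict

def rescale(norms):
    # Min-max rescale one norm dataset's raw concreteness scores to [0, 1].
    lo, hi = min(norms.values()), max(norms.values())
    return {w: (s - lo) / (hi - lo) for w, s in norms.items()}

def merge_norms(norm_datasets):
    # norm_datasets: list of {word: raw_score} dicts, one per source dataset.
    # Average the rescaled scores when a word appears in more than one source.
    pooled = defaultdict(list)
    for ds in norm_datasets:
        for w, s in rescale(ds).items():
            pooled[w].append(s)
    return {w: sum(scores) / len(scores) for w, scores in pooled.items()}

def score_vocabulary(vocab, merged):
    # Stopwords are removed beforehand; words with no norm entry are skipped.
    return {w: merged[w] for w in vocab if w in merged}

mt40k = {"dog": 5.0, "idea": 1.6, "table": 4.9}        # toy stand-ins for the real norms
glasgow = {"dog": 6.8, "justice": 2.1}
merged = merge_norms([mt40k, glasgow])
```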
Taking inspiration from human processing through DCT, it could be hypothesised that multimodal machine learning tasks could benefit by explicitly learning relations between 'concrete' words and their constituents, whilst treating 'abstract' words and concepts differently.</ns0:p><ns0:p>Recently proposed computational models of DCT have had many drawbacks <ns0:ref type='bibr' target='#b72'>Paivio (2014)</ns0:ref>, we believe that neural networks can be a natural fit for modelling neural correlates explored in DCT and should be considered as a future modelling option.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In light of BLP's empirical success in VQA, we have experimentally explored their use in video-QA on 2 models and 4 datasets. We find that switching from vector concatenation to BLP through simple substitution on the HME and TVQA models does not improve and in fact actively harm performance on video-QA. We find that a more substantial 'dual-stream' restructuring of the TVQA model to accommodate BLP significantly reduces performance on TVQA. Our results and observations about the downturn in successful text-vision BLP fusion in video tasks imply that naively using BLP techniques can be very detrimental in video-QA. We caution against automatically integrating bilinear pooling in video-QA models and expecting similar empirical increases as in VQA. We offer several interpretations and insights of our negative results using surrounding multimodal and neurological literature and find our results inline with trends in VQA and video-classification. To the best of our knowledge, we are the first to outline how important neurological theories i.e. dual coding theory and the two-stream model of vision relate to the history of (and journey to) modern multimodal deep learning practices. We offer a few <ns0:ref type='formula'>2017</ns0:ref>), <ns0:ref type='bibr' target='#b77'>Reilly and Kean (2007)</ns0:ref>, and <ns0:ref type='bibr' target='#b83'>Sianipar et al. (2016)</ns0:ref>. The scores for each word are rescaled from 0-1 such that most abstract = 0 and most concrete = 1, and the result averaged if more than 1 dataset has the same word.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Visualisation of mode-n fibres and matricisation</ns0:figDesc><ns0:graphic coords='6,224.45,239.07,248.15,134.57' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>r</ns0:head><ns0:label /><ns0:figDesc>where R &#8712; N * and for each r &#8712; {1, ..., R}, S r &#8712; R R 1 &#215;,...,&#215;R n where each S r are 'core tensors' with dimen-sions R n &#8804; I n for n &#8712; {1, ..., N} that are used to restrict the rank of the tensor W. U n r &#8712; St(R n , I n ) are the 'factor matrices' that intuitively expand the n th dimension of S back up to the original n th dimension of W. St(a, b) here refers to the Stiefel manifold, i.e. St(a, b):{Y &#8712; R a&#215;b : Y T Y = I p }. Figure 2 visualises the block term decomposition process.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. 
Block Term Decomposition (n=3)</ns0:figDesc><ns0:graphic coords='7,214.11,299.75,268.83,64.16' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b89'>Tapaswi et al. (2016)</ns0:ref>.Li et al. (2019) find MCB fusion performs worse on their model in ablation studies on TGIF-QA<ns0:ref type='bibr' target='#b45'>Jang et al. (2017)</ns0:ref>.<ns0:ref type='bibr' target='#b13'>Chou et al. (2020)</ns0:ref> use MLB as part of their baseline model proposed alongside their 'VQA 360 &#8226; ' dataset. Gao et al. (2019) contrast their proposed two-stream attention mechanism to an MCB model for TGIF-QA, demonstrating a substantial performance increase over the MCB model. Liu et al. (2021) use MUTAN fusion between question and visual features to yield impressive results on TGif-QA, though they are outperformed by an attention based model using element-wise multiplication Le et al. (2020). The Focal Visual-Text Attention network (FVTA) Liang et al. (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>techniques. Zhou et al. (2021) use a multilevel factorised BLP based model to fuse audio and visual features for emotion recognition in videos. Hu et al. (2021) use compact BLP to fuse audio and 'visual long range' features for human action recognition. Pang et al. (2021) use MLB as part of an attentionbased fusion for audio and visual features for violence detection in videos. Xu et al. (2021) use BLP to fuse visual features from different channels in RGBT tracking. Deng et al. (2021) use compact BLP to fuse spatial and temporal representations of video features for action recognition. Wang et al. (2021) fuse motion and appearance visual information together achieving state-of-the-art results on MSVD-QA. Sudhakaran et al. (2021) draw design inspiration from bilinear processing of Lin et al. (2015) and MCB to propose 'Class Activation Pooling' for video action recognition. Deb et al. (2022) use MLB to process video features for video captioning.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>, Xu et al. (2017) create the MSVD-QA dataset based on the Microsoft research video description corpus Chen and Dolan (2011). The dataset is made from 1970 video clips, with over 50k QA pairs in '5w' style i.e.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>is designed to address the shortcomings of previous datasets. It has significantly longer clip lengths than other datasets and is based on TV shows instead of cartoons to give it realistic video content with simple coherent narratives. It contains over 150k QA pairs. Each question is labelled with timestamps for the relevant video frames and subtitles. The questions were gathered using AMT workers. Most notably, the questions were specifically designed to encourage multimodal reasoning by asking the workers to design two-part compositional questions. The first part asks a question about a 'moment' and the second part localises the relevant moment in the video clip i.e.[What/How/Where/Why/Who/...] -[when/before/after] -, e.g.'[What] was House saying [before] he leaned over the bed?'. The authors argue this facilitates questions that require both visual and language information since 'people often naturally use visual signals to ground questions in time'. The authors identify certain biases in the dataset. 
They find that the average length of correct answers are longer than incorrect answers. They analyse the performance of their proposed baseline model with different combinations of visual and textual features on different question types they have identified. Though recent analysis has highlighted bias towards subtitles in TVQA's questions<ns0:ref type='bibr' target='#b94'>Winterbottom et al. (2020)</ns0:ref>, it remains an important large scale video-QA benchmark.EgoVQAMost video-QA datasets focus on video-clips from the 3 rd person.<ns0:ref type='bibr' target='#b23'>Fan (2019)</ns0:ref> argue that 1 st person video-QA has more natural use cases that real-world agents would need. As such, they propose the egocentric video-QA dataset (EgoVQA) with 609 QA pairs on 16 first-person video clips. Though the dataset is relatively small, it has a diverse set of question types (e.g. 1 st &amp; 3 rd person 'action' and 'who' questions, 'count', 'colour' etc..), and aims to generate hard and confusing incorrect answers by sampling from correct answers of the same question type. Models on EgoVQA have been shown to overfit due to its small size. To remedy this,<ns0:ref type='bibr' target='#b23'>Fan (2019)</ns0:ref> pretrain the baseline models on the larger YouTube2Text-QA<ns0:ref type='bibr' target='#b100'>Ye et al. (2017)</ns0:ref>. YouTube2Text-QA is a multiple choice dataset created from MSVD videos Chen and Dolan (2011) and questions created from YouTube2Text video description corpus<ns0:ref type='bibr' target='#b34'>Guadarrama et al. (2013)</ns0:ref>.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. TVQA Model. &#8857;/&#8853; = Element-wise multiplication/addition, &#8865; = context matching Seo et al. (2017); Yu et al. (2018a), &#946; = BLP. Any feature streams may be enabled/disabled.</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.59,224.86' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>external 'memory' units<ns0:ref type='bibr' target='#b96'>Xiong et al. (2016)</ns0:ref>;<ns0:ref type='bibr' target='#b87'>Sukhbaatar et al. (2015)</ns0:ref> alongside recurrent networks to handle input features<ns0:ref type='bibr' target='#b28'>Gao et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b107'>Zeng et al. (2017)</ns0:ref>. These external memory units are designed to encourage multiple iterations of inference between questions and video features, helping the model revise it's visual understanding as new details from the question are presented. The heterogeneous8/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. HME Model</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.59,191.18' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. &#8853; = Concatenation, &#946; = BLP.</ns0:figDesc><ns0:graphic coords='11,265.81,525.11,165.44,144.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Baseline concatenation stream processor from TVQA model (left-A) vs Our BLP stream processor (right-B). 
&#8857; = Element-wise multiplication, &#946; = BLP, &#8865; = Context Matching.</ns0:figDesc><ns0:graphic coords='12,183.09,440.20,330.87,103.79' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>model from the SI TVQA baseline model for 2 main purposes: I) To explore the effects of a joint representation on TVQA, II) To contrast the concatenationreplacement experiment with a model restructured specifically with BLP as a focus. The baseline BLP model keeps subtitles and other visual features completely separate up to the answer voting step. Our aim here is to create a joint representation BLP-based model similar in essence to the baseline TVQA model that fuses subtitle and visual features. As before, we use context matching to temporally align the video and text features.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>3 https://github.com/Jumperkables/trying blp 10/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Our Dual-Stream Model. &#8865; = Context Matching.</ns0:figDesc><ns0:graphic coords='13,183.09,393.23,330.87,93.41' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>/ 25 PeerJ</ns0:head><ns0:label>25</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Baseline concatenation stream processor from TVQA model (left-A) vs Our DCCA stream processor (right-B). &#8857; = Element-wise multiplication, &#8865; = Context Matching.</ns0:figDesc><ns0:graphic coords='14,183.09,501.50,330.87,91.56' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>outperformed by MFH on previous VQA models and it has been shown to require much larger latent spaces to work effectively in the first placeFukui et al. (2016) ( 16000). II) Our vector representations of text and images are also much smaller (300-d) compared to the larger representation dimensions conventional in previous benchmarks (e.g. 2048 inFukui et al. (</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>Integrity?: The context matching technique used in the TVQA model is the birdirectional attention flow (BiDAF) module introduced in Seo et al. (2017). It is used in machine comprehension between a textual context-query pair to generate query-aware context representations. BiDAF uses a 'memoryless' attention mechanism where information from each time step does not directly affect the next, which is thought to prevent early summarisation. BiDAF considers different input features at different levels of granularity. The TVQA model uses bidirectional attention flow to create context aware (visual/subtitle) question and answer representations. BiDAF can be seen as a co-ordinated representation in some regards, but it does project questions and answers representations</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022) Manuscript to be reviewed Computer Science and BLP techniques can sometimes have difficulties converging. 
Does Better Attention Explain the Difference?: Attention mechanisms have been shown to improve the quality of text and visual interactions. Yu et al. (2017) argue that methods without attention are 'coarse joint-embedding models' which use global features that contain noisy information unhelpful in answering fine-grained questions commonly seen in VQA and video-QA. This provides strong motivation</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>used sub-optimally low. Though summarising across time steps into condensed representations may allow more expensive BLP layers to be used on the resultant text and video representations, we instead recommend using state-of-the-art and empirically proven multimodal attention mechanisms instead<ns0:ref type='bibr' target='#b54'>Lei et al. (2021)</ns0:ref>;<ns0:ref type='bibr' target='#b99'>Yang et al. (2021)</ns0:ref>. Attention mechanisms are pivotal in VQA for reducing noise and focusing on specific fine-grained details<ns0:ref type='bibr' target='#b105'>Yu et al. (2017)</ns0:ref>. The sheer increase in feature information when moving from still-image to video further increases the importance of attention in video-QA. Our experiments show the temporal-attention based HME model performs better when it is not degraded by BLP. Our findings are in line with that of<ns0:ref type='bibr' target='#b63'>Long et al. (2018)</ns0:ref> as they consider multiple different fusion methods for video classification, i.e. LSTM, probability, 'feature' and attention. 'Feature' fusion is the direct connection of each modality within each local time interval, which is effectively what context matching does in the TVQA model.<ns0:ref type='bibr' target='#b63'>Long et al. (2018)</ns0:ref> finds temporal feature based fusion sub-par, and speculates that the burden of learning multimodal and temporal interactions is too heavy. Our experiments lend further evidence that for video tasks, attention-based fusion is the ideal choice.Problem with Alignment of Text and Video: As we highlight in the second subsection of our related works, BLP has yielded great performance in video tasks where it fuses the visual features with non-textual features. Audio and visual feature fusion demonstrates impressive performance on action recognitionHu et al. (2021), emotion recognition Zhou et al. (2021), and violence detection Pang et al. (2021). Likewise, different visual representations have thrived in RGBT tracking Xu et al. (2021), action recognition Deng et al. (2021) and video-QA on MSVD-QA Wang et al. (2021). On the other hand, we notice that several recent video-QA works (highlighted in the first section of our related works) have found in ablation that BLP fusion which specifically fuse visual and textual features give poor results Kim et al. (2019); Li et al. (2019); Gao et al. (2019); Liu et al. (2021); Liang et al. (2019). Our observations and our experimental results highlight a pattern of poor performance for BLP in text-video fusion specifically.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:10:67060:1:1:NEW 8 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_24'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. 
Visualisation of the differences between concatenation and bilinear representations for unimodal processing. Concatenation (left-A) can theoretically allow unimodal features from text or vision to process independently of the other modality by reducing it's weighted contribution (see 'V1 Only'). Bilinear representations (right-B) force multimodal interactions. It is less clear how useful 'unimodal' is processed.</ns0:figDesc><ns0:graphic coords='19,224.45,63.78,248.16,93.64' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_25'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure10. Visualisation of the 1 st and 3 rd cross-stream scenarios for the two-stream model of vision described by<ns0:ref type='bibr' target='#b65'>Milner (2017)</ns0:ref>. The early bilinear model proposed by<ns0:ref type='bibr' target='#b90'>Tenenbaum and Freeman (2000)</ns0:ref> strikingly resembles the 1 st (left-A). The 3 rd and more recently favoured scenario features a continuous exchange of information across streams at multiple stages, and can be realised by introducing 'cross-talking' of deep learning features (right-B).</ns0:figDesc><ns0:graphic coords='19,141.73,544.68,413.60,101.70' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_26'><ns0:head /><ns0:label /><ns0:figDesc>experimentally and theoretically guided suggestions to consider for multimodal fusion in video-QA, most notably that attention mechanisms should be prioritised over BLP in text-vision fusion. We qualitatively show the potential for neurologically-motivated multimodal approaches in video-QA by identifying the relative abundance of psycholinguistically 'concrete' words in the vocabularies for the text components of the 4 video-QA datasets we experiment with. We would like to emphasise the importance of related neurological theories in deep learning and encourage researchers to explore Dual Coding Theory and the Two-Stream model of vision.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_27'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The relative abundance of the psycholinguistic 'concreteness' score in the vocabularies of each source of text in the video-QA datasets we experiment with. Stopwords are not included. Concreteness scores are taken from the following datasets: MT40k Brysbaert et al. (2013), USF Nelson et al. (1998), SimLex999 Hill et al. (2015), Clark-Paivio Clark and Paivio (2004), Toronto Word Pool Friendly et al. (1982), Chinese Word Norm Corpus Yee (2017), MEGAHR-Crossling Ljube&#353;i&#263; et al. (2018), Glasgow Norms Scott et al. (2017),<ns0:ref type='bibr' target='#b77'>Reilly and Kean (2007)</ns0:ref>, and<ns0:ref type='bibr' target='#b83'>Sianipar et al. (2016)</ns0:ref>. The scores for each word are rescaled from 0-1 such that most abstract = 0 and most concrete = 1, and the result averaged if more than 1 dataset has the same word.</ns0:figDesc><ns0:graphic coords='27,141.73,150.08,413.58,386.23' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,141.73,363.92,413.59,176.08' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset benchmark and SoTA results to the best of our knowledge. &#8224; = Mean L2 loss. 
* = Results we replicated using the cited implementation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Benchmark</ns0:cell><ns0:cell>SoTA</ns0:cell></ns0:row><ns0:row><ns0:cell>TVQA (Val)</ns0:cell><ns0:cell cols='2'>68.85% Lei et al. (2018) 74.97% Khan et al. (2020)</ns0:cell></ns0:row><ns0:row><ns0:cell>TVQA (Test)</ns0:cell><ns0:cell cols='2'>68.48% Lei et al. (2018) 72.89% Khan et al. (2020)</ns0:cell></ns0:row><ns0:row><ns0:cell>EgoVQA (Val 1)</ns0:cell><ns0:cell>37.57% Fan (2019)</ns0:cell><ns0:cell>45.05%* Chenyou (2019)</ns0:cell></ns0:row><ns0:row><ns0:cell>EgoVQA (Test 2)</ns0:cell><ns0:cell>31.02% Fan (2019)</ns0:cell><ns0:cell>43.35%* Chenyou (2019)</ns0:cell></ns0:row><ns0:row><ns0:cell>MSVD-QA</ns0:cell><ns0:cell>32.00% Xu et al. (2017)</ns0:cell><ns0:cell>40.30% Guo et al. (2021)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Action</ns0:cell><ns0:cell>60.77% Jang et al. (2017)</ns0:cell><ns0:cell>84.70% Le et al. (2020)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Count</ns0:cell><ns0:cell>4.28 &#8224; Jang et al. (2017)</ns0:cell><ns0:cell>2.19 &#8224; Le et al. (2020)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Trans</ns0:cell><ns0:cell>67.06% Jang et al. (2017)</ns0:cell><ns0:cell>87.40% Seo et al. (2021)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-FrameQA</ns0:cell><ns0:cell>49.27% Jang et al. (2017)</ns0:cell><ns0:cell>64.80% Le et al. (2020)</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Concatenation</ns0:figDesc><ns0:table /><ns0:note>replaced with BLP in the TVQA model on the TVQA Dataset. All models use visual concepts and ImageNet features. 'No Q' indicates questions are not used as inputs i.e. answers rely purely on input features.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 )</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Text</ns0:cell><ns0:cell>Val Acc</ns0:cell></ns0:row><ns0:row><ns0:cell>TVQA SI</ns0:cell><ns0:cell>GloVe</ns0:cell><ns0:cell>67.78%</ns0:cell></ns0:row><ns0:row><ns0:cell>TVQA SI</ns0:cell><ns0:cell>BERT</ns0:cell><ns0:cell>70.56%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Dual-Stream MCB GloVe</ns0:cell><ns0:cell>63.46%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Dual-Stream MCB BERT</ns0:cell><ns0:cell>60.63%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Dual-Stream MFH GloVe</ns0:cell><ns0:cell>62.71%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Dual-Stream MFH BERT</ns0:cell><ns0:cell>59.34%</ns0:cell></ns0:row></ns0:table><ns0:note>. Output features are the same dimensions as inputs. Though DCCA itself is not directly related to BLP, it has recently been classified as a coordinated representation<ns0:ref type='bibr' target='#b36'>Guo et al. (2019)</ns0:ref>, which contrasts a 'joint' representation.</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Dual</ns0:figDesc><ns0:table /><ns0:note>-Stream Results Table.'SI' for TVQA models indicates the model is using subtitle and ImageNet feature streams only, i.e. 
the green and pink streams in Figure3</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>DCCA in the TVQA Baseline Model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Text</ns0:cell><ns0:cell cols='3'>Baseline Acc DCCA Acc</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>VI GloVe</ns0:cell><ns0:cell /><ns0:cell cols='2'>45.94% 45.00% (-0.94%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>VI BERT</ns0:cell><ns0:cell /><ns0:cell>-41.70%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SVI GloVe</ns0:cell><ns0:cell /><ns0:cell cols='2'>69.74% 67.91% (-1.83%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SVI BERT</ns0:cell><ns0:cell /><ns0:cell cols='2'>72.20% 68.48% (-3.72%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Fusion Type</ns0:cell><ns0:cell>Val</ns0:cell><ns0:cell>Test</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>TVQA (GloVE) Concatenation</ns0:cell><ns0:cell>41.25%</ns0:cell><ns0:cell>N/A</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>EgoVQA-0 Concatenation</ns0:cell><ns0:cell>36.99%</ns0:cell><ns0:cell>37.12%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>EgoVQA-1 Concatenation</ns0:cell><ns0:cell>48.50%</ns0:cell><ns0:cell>43.35%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>EgoVQA-2 Concatenation</ns0:cell><ns0:cell>45.05%</ns0:cell><ns0:cell>39.04%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>MSVD-QA Concatenation</ns0:cell><ns0:cell>30.94%</ns0:cell><ns0:cell>33.42%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>TGIF-Action Concatenation</ns0:cell><ns0:cell>70.69%</ns0:cell><ns0:cell>73.87%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>TGIF-Count Concatenation</ns0:cell><ns0:cell>3.95 &#8224;</ns0:cell><ns0:cell>3.92 &#8224;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>TGIF-Trans Concatenation</ns0:cell><ns0:cell>76.33%</ns0:cell><ns0:cell>78.94%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>TGIF-FrameQA Concatenation</ns0:cell><ns0:cell>52.48%</ns0:cell><ns0:cell>51.41%</ns0:cell></ns0:row><ns0:row><ns0:cell>TVQA (GloVE)</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell>41.09% (-0.16%)</ns0:cell><ns0:cell>N/A%</ns0:cell></ns0:row><ns0:row><ns0:cell>EgoVQA-0</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell>No Convergence</ns0:cell><ns0:cell>No Convergence</ns0:cell></ns0:row><ns0:row><ns0:cell>EgoVQA-1</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell>No Convergence</ns0:cell><ns0:cell>No Convergence</ns0:cell></ns0:row><ns0:row><ns0:cell>EgoVQA-2</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell>No Convergence</ns0:cell><ns0:cell>No Convergence</ns0:cell></ns0:row><ns0:row><ns0:cell>MSVD-QA</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell cols='2'>30.85% (-0.09%) 33.78% (+0.36%)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Action</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell cols='2'>73.56% (+2.87%) 73.00% (-0.87%)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Count</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell>3.95 &#8224; (+0 &#8224;)</ns0:cell><ns0:cell>3.98 &#8224; (+0.06 &#8224;)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-Trans</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell cols='2'>79.30% (+2.97%) 77.10% (-1.84%)</ns0:cell></ns0:row><ns0:row><ns0:cell>TGIF-FrameQA</ns0:cell><ns0:cell cols='2'>MCB</ns0:cell><ns0:cell cols='2'>51.72% (-0.76%) 52.21% (+0.80%)</ns0:cell></ns0:row></ns0:table><ns0:note>12/25PeerJ Comput. Sci. 
</ns0:note> </ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>HME-VideoQA Model. The default fusion technique is concatenation. &#8224; refers to minimised L2 loss.</ns0:figDesc><ns0:table /></ns0:figure> </ns0:body> "
"Dear Editors, We thank the reviewers for taking the time to consider our manuscript and their helpful and constructive feedback. We have edited our manuscript to address their concerns. In particular: We have augmented our discussion with a clear set of problems and recommendations concerning BLP techniques for video-QA. We have provided further qualitative analysis of psycholinguistic content of the text in each of the datasets we experiment with in order to empower our discussion and proposal of the neurological parallels to BLP. Reviewer 1 (Anonymous) Basic reporting 1. The expression of the article is clear and coherent. However, there are some typos. In line 106, 'abstracted' to 'abstract'. In line 423, 'eachother' to 'each other'. Some terms are not easy to understand, such as in line 455, 'ImageNet-style feature vectors'. Many experiment results in the tables are not rigorous, i.e. the baseline offsets in Tables 3 and 5 have no percentage sign. We have fixed the above typos and the formatting inconsistencies in Tables 3 and 5. We have further explained some of the more ambiguous terms we use, including ‘ImageNet-style’ (see new Figure 11 and the subsection it is embedded within). 2. The overall article is well organized. However, the structure in the discussion section is chaotic. The analysis of experimental results may be more reasonable in the experiment section, for readers to contact contextual content more easily. We separate our 'Experiments and Results' and 'Discussion' sections as we have quite a few different experimental setups, and need to take the time to thoroughly highlight the significance of each result in the discussion. Experimental design 1. The experimental results are relatively single, only quantitative analysis results are shown. There are no more qualitative or analytical experimental results. We have added further qualitative/analytical results to back up our neurological proposals. New Figure 12 details the abundance of the psycholinguistic 'concreteness' norm in the 4 datasets we experiment with. Validity of the findings 1. All experimental results showed no improvement from baselines, which was a little bit strange. I doubt the correctness of the experimental setup, for the BLP has its rationality as the author proposed and has obvious performance improvement in image QA tasks[1]. [1] Ben-younes, H., Cade`ne, R., Cord, M., and Thome, N. (2017). Mutan: Multimodal tucker fusion for visual question answering. 2017 IEEE International Conference on Computer Vision (ICCV), pages 655 2631–2639. We were motivated to write this paper as we began exploring the poor results as detailed in the discussion. Our experimental setup is available for scrutiny in our GitHub repository, a link to which has been placed in a footnote in the experiments section of the paper. 2. The insights given in the article are relatively basic, and the discussion is not deep enough. Given the success of BLP in VQA, we believe that and its relative lack of success in video-QA in recent years should be closely scrutinised. As such paper aims to probe this problem from multiple angles. We combine experimental results across 4 datasets and 2 models, with a survey of the recent video and video-QA tasks using BLP. We discussed how our results compare to nuanced arguments given in recent multimodal taxonomical review papers and incorporated their insights into our experimental design (DCCA, 'joint', and 'co-ordinated'). 
We have discussed in detail many of the components of our models, and have now added a further condensed section discussing 3 main problems with BLP. We augment this by highlighting the empirical-only motivation of BLP results, we take a novel approach and draw the historical similarities to neurological theories, and offer several proposals with respect to each. The author did not give an overall summary or analysis of the multiple BLP methods. The author gives a brief introduction of the neurological parallels, however, their theoretical inspiration is not clearly explained. We have significantly expanded the neurological parallels (moved to a new section at the end of the paper: 'Theoretically Motivated Observations And Neurologically Guided Proposals'). We give a full introduction to Dual coding theory and the Two-stream model of vision in each subsection before making our proposals. Our proposals have been elaborated on, backed up with clear visualisations (see new Figures 10, 11, and 12). We have more clearly linked our proposals to their theoretical inspirations. In addition, the reasons why the BLP does harm on the video QA task didn't convince me very well. If it does do harm, what kind of interaction method should we use in the future for cross-modality representation need to be discussed further. We have been careful to contextualise all of our experimental results with similar trends in the multimodal video field. However, we agree we should be more clear and have added a new section specifically to clearly discuss the broader problems and recommendations with BLP in video-QA: 'BLP in Video-QA: Problems and Recommendations'. This includes suggestions of alternatives to BLP and a more targeted look at the problem of alignment of specifically text-vision modalities. Reviewer 2 (Ramzi Guetari) Basic reporting The paper focuses on the bilinear pooling problem in the context of Video-QA. The problem is indeed topical and subject to many research works. The topic is therefore appropriate and also important. The quality of the English language is not a problem and the paper is clearly written from both a literary and scientific point of view. The structure of the manuscript is quite in accordance with the standard required by the journal and the figures are of good quality with good definition and do not suffer from any particular problem. The datasets used are among the most appropriate and the availability of two of them is well indicated in the paper (TVQA and HME-VideoQA) but not for the others. There is no indication on the availability of the dual stream model proposed by the authors. We have shared instructions for obtaining the other datasets, and source code for the 'dual stream' model through a link to our GitHub repository in a footnote in the experiments section. The title of the paper however raises an interesting and important question but the content of the paper does not provide any explanation or clear answer to the question induced by the title. This is one of the weaknesses of this paper that should be improved in order to have a real added value and a significant contribution of the work. We agree, and have added a new section specifically to clearly explain and discuss our ‘key problems’ and recommendations for BLP in video-QA. 'BLP in Video-QA: Problems and Recommendations'. This section is more targeted, and coalesces and expands on the rest of the discussion. 
We address 3 main weaknesses:'Inefficiency', 'Alignment of text-and-video', and 'Empirical-only justification'. This includes suggestions of alternatives to BLP and a more targeted look at the problem of alignment of specifically text-vision modalities. Experimental design The subject matter is well within the scope of the paper and raises an important issue that is expressed in the title, namely 'The limitations of bilinear pooling in video-QA'. These limitations are indeed shown by means of experiments and comparisons with state-of-the-art methods. Unfortunately, there is no clear explanation of the underlying causes of this weakness of bilinear pooling except for a problem related to datasets, knowing that some of these datasets, such as MovieQA and YouTube2TextQA, seem, according to the literature, to provide the necessary for this kind of studies. Addressed in our previous comment. Though we aim to be measured and complete in discussing the dataset problems, we again agree and have added a more focused and clear explanation as described above. In my humble opinion, it would be necessary to explain the current failures of bilinear pooling which may be due to a problem of alignment of textual and video stream modalities with also temporal alignment or to the expression of queries. A conclusion on this subject would also be welcome and there are probably two among which one should decide. The first conclusion is that the bilinear pooling is not adapted to the problem by explaining why, but this conclusion, in my opinion is not the right one because it is enough to have the adequate datasets to solve part of the problem. The second conclusion should propose corrections to the existing models. We hypothesise 2 functional failures of BLP. The 1st is its inefficiency in terms of both computational resources (when processing across time steps) and performance (as our experiments and literature survey implies). The 2nd is a broad underperformance in text-vision processing for videos. We indeed believe that a component of this is a problem of natural alignment of text-video modalities (as bilinear pooling has worked well for video-audio or video-video fusion). We have understood reviewer 2’s point that “it is enough to have an adequate dataset to solve that problem” as the following: “lack of alignment of text and video is not the fundamental problem as a video dataset could be created with the text and video aligned” e.g. subtitles. If we have understood reviewer 2 correctly, we would intuitively agree and expect video+aligned-text fusion to perform better. However, we note that the specific task of video-QA that we are exploring BLP with by definition has textual questions about a video. While it is interesting to consider if poor text-video BLP performance still exists with closely aligned text like subtitles, questions will not be aligned with video in the majority of cases. As question-video processing is an inherent part of video-QA, we respectfully disagree with the “adequate dataset” premise and favour the first conclusion reviewer 2 outlines. If we have understood reviewer 2’s point here correctly and sufficiently addressed it, we would be happy to add such discussion to the final manuscript. The ideas proposed in the paragraph 'PROPOSED AREAS OF RESEARCH' (line 607) remain mostly general and vague intuitions. More concrete and forceful proposals would give more weight to the paper. We have split 'Proposed Area of Research' into two more targeted sections and greatly expanded on both. 
The first is the 'BLP in Video-QA: Problems and Recommendations' section we addressed in the above comments. We will expand on the second new section in response to the comments under ‘Validity of Findings’ The idea of demonstrating the weaknesses of a model is a very good approach in research, but again, these weaknesses should be clearly expressed, explained and alternatives or corrections proposed. Addressed in previous comments. Validity of the findings The scientific findings are at this stage quite light and need to be improved. The preceding comments may guide the authors in improving their contributions. The authors claim that, to their knowledge, they are the first to describe how important neurological theories, namely the dual coding theory and the two-stream vision model, are related to the history (and background) of modern multimodal deep learning practices. The thinking is certainly good but needs more formal framework and mathematical expression to be effectively exploited. We have expanded on our scientific findings with qualitative analysis of the video-QA datasets through a Dual Coding Theory lens. We do this in our new section: 'Theoretically Motivated Observations And Neurologically Guided Proposals' which expands on and replaces the old 'Neurological Parallels' section. Specifically, we draw from the idea of psycholinguistic ‘concreteness’. Using many available psycholinguistic word datasets, we score the relative ‘concreteness’ of the vocabularies of each of the textual components for each of the 4 video-QA datasets we experiment with (Figure 12). We find concreteness is overly represented in components like ‘answers’ and ‘visual concepts’ (though there are still some abstract answers). Using these new findings and the difference in processing abstract/concrete terms that Dual Coding theory suggests, we highlight the latent potential existing right now for video-QA datasets to begin using Dual coding theory principals. We offer specific neurological proposals using components discussed in the paper backed up with new visualisation figures (Figures 9, 10, 11). My suggestions to authors: 1. Try to explain clearly where the current weaknesses of bilinear pooling lie and not just the dataset problem. We agree and have addressed this point. Both reviewers have asked for this and as such we have addressed in our response to them here, we have replaced our previous ‘PROPOSED AREAS OF RESEARCH’ section with a more targeted and specific new section: BLP in Video-QA: Problems and Recommendations. Here we outline 3 main weaknesses. The third weakness (lack of theoretical motivation) we continue to expand on in the following new section: 'Theoretically Motivated Observations And Neurologically Guided Proposals'. 2. In the light of these weaknesses propose ideas for improvement or correction. In the new BLP in Video-QA: Problems and Recommendations section, we propose corrections through alternatives for the ‘Efficiency’ and ‘Alignment’ problems. We argue that attention mechanisms are more suitable for text-video fusion in video-QA. The 3rd main weakness: (lack of theoretical motivation) we again continue to expand on in the new section: 'Theoretically Motivated Observations And Neurologically Guided Proposals'. Here we offer specific improvements to the usage of BLP. 3. Formalize their idea of the dual coding theory and the dual stream vision model so that it can have a significant contribution in this area. 
We have expanded on our previous discussion about Dual Coding theory and the Two-Stream model of vision. We now give specific neurological proposals using components discussed in the paper, backed up with new qualitative results and visualisation figures (Figures 9, 10, 11, 12). In particular, we formulate Dual Coding Theory to be directly relevant to the video-QA problem we discuss by presenting qualitative psycholinguistic ‘concreteness’ scores of the text components of all 4 video-QA datasets we experiment with. Editor (Yilun Shang): ● Reviewer 1 finds our discussion section 'chaotic' and suggests 'analysis of results may be more reasonable in the experiments section'. We separate our 'Experiments and Results' and 'Discussion' sections as we have quite a few different experimental setups, and need to take the time to thoroughly highlight the significance of each result in the discussion. Given reviewer 2 finds our paper structure 'in accordance with the standard required', we have not actioned this comment. ● Given the expanded focus on the neurological parallels, including our qualitative/analytical results. With the consent of the review team, we propose a more appropriate title: “Bilinear Pooling in Video-QA: Empirical Challenges and Motivational Drift from Neurological Parallels”. "
Here is a paper. Please give your review comments after reading it.
418
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>Software vulnerabilities have led to system attacks and data leakage incidents, and have therefore attracted growing attention. Vulnerability detection has become an important research direction. In recent years, Deep Learning (DL)-based methods have been applied to vulnerability detection. DL-based methods do not require manually defined features and achieve low false-negative and false-positive rates. DL-based vulnerability detectors rely on vulnerability datasets. Recent studies found that DL-based vulnerability detectors perform differently on different vulnerability datasets. They also found that the authenticity, imbalance, and repetition rate of vulnerability datasets affect the effectiveness of DL-based vulnerability detectors. However, existing research only reported simple statistics; it neither characterized vulnerability datasets nor systematically studied the impact of vulnerability datasets on DL-based vulnerability detectors. To solve the above problems, we propose methods to characterize sample similarity and code features. We use sample granularity, sample similarity, and code features to characterize vulnerability datasets. Then, we analyze the correlation between the characteristics of vulnerability datasets and the results of DL-based vulnerability detectors. Finally, we systematically study the impact of vulnerability datasets on DL-based vulnerability detectors from the perspectives of sample granularity, sample similarity, and code features. We obtain the following insights into the impact of vulnerability datasets on DL-based vulnerability detectors: 1) Fine-grained samples are conducive to detecting vulnerabilities. 2) Vulnerability datasets with lower inter-class similarity, higher intra-class similarity, and simple structure help detect vulnerabilities in the original test set. 3) Vulnerability datasets with higher inter-class similarity, lower intra-class similarity, and complex structure can better detect vulnerabilities in other datasets.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Software vulnerabilities refer to specific flaws in software that enable attackers to carry out malicious activities. The threat of system attacks and data leakage makes software security vulnerabilities a vital issue <ns0:ref type='bibr' target='#b6'>(Chen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b52'>Zhu et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b53'>Zhu et al., , 2020))</ns0:ref>. Using source code is effective for open-source vulnerability detection because it can uncover vulnerabilities at their root cause. Source code static vulnerability detection involves code similarity-based methods <ns0:ref type='bibr' target='#b19'>(Kim &amp; Oh, 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Li &amp; Jin, 2016;</ns0:ref><ns0:ref type='bibr' target='#b17'>Jang et al., 2012)</ns0:ref> and pattern-based methods <ns0:ref type='bibr' target='#b45'>(Yamaguchi &amp; Rieck, 2013;</ns0:ref><ns0:ref type='bibr' target='#b34'>Neuhaus &amp; Zeller, 2007;</ns0:ref><ns0:ref type='bibr' target='#b46'>Yamaguchi &amp; Rieck, 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Grieco et al., 2016)</ns0:ref>. Code similarity-based methods have a high rate of false positives and false negatives <ns0:ref type='bibr' target='#b18'>(Johnson et al., 2013)</ns0:ref>. 
Pattern-based methods include rule-based methods and machine learning-based methods. Machine learning-based methods include traditional machine learning-based methods and Deep Learning (DL)-based methods. Rule-based methods and traditional machine learningbased methods rely on experts to manually extract features <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018)</ns0:ref>. The DL-based method does not require a manual definition of features and has a low rate of false negatives and false positives.</ns0:p><ns0:p>In this paper, we studied DL-based vulnerability detectors. DL methods automatically capture and determine features from the training set and then learn to identify vulnerabilities. DL-based vulnerability detectors rely on vulnerability datasets. This paper explored C/C++ vulnerability datasets. The existing C/C++ vulnerability datasets mainly include artificially synthesized data <ns0:ref type='bibr' target='#b2'>(Black, 2018;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2021c)</ns0:ref>, artificially modified data <ns0:ref type='bibr' target='#b3'>(Booth &amp; Witte, 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2021c)</ns0:ref>, and real-world open-source code <ns0:ref type='bibr' target='#b36'>(Russell et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Zhou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Lin et al., 2019a)</ns0:ref>.</ns0:p><ns0:p>A recent study <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> found that the authenticity, imbalance, and repetition rate of the vulnerability dataset will affect the results of the DL-based vulnerability detector. However, the existing research on vulnerability datasets is not comprehensive. There is no characterization of vulnerability datasets or systematic evaluation of the impact of vulnerability datasets on DL-based vulnerability detectors.</ns0:p><ns0:p>Challenges. The major challenges of investigating the impact of vulnerability datasets on DL-based vulnerability detectors are as follows:</ns0:p><ns0:p>&#8226; The challenge of characterizing vulnerability datasets. Vulnerability datasets are different from text, image, and other datasets. The internal structure of the code in the dataset is more complex. It is characterized by a very abstract concept that is difficult to represent with intuitive data.</ns0:p><ns0:p>&#8226; The challenge of vulnerability dataset evaluation methods. The criterion for evaluating the quality of a vulnerability dataset is its impact on the results of the vulnerability detector. However, for the same vulnerability dataset, using different DL-based vulnerability detectors generates significantly different results. Therefore, it is also difficult to study the quality of the dataset by stripping the impact of the performance of DL-based vulnerability detectors.</ns0:p><ns0:p>Contributions. We characterized vulnerability datasets according to three aspects: sample granularity, sample similarity, and code features. We studied the impact of C/C++ vulnerability datasets on DL-based vulnerability detectors to obtain insights. Based on these insights, we provide suggestions for the creation and selection of vulnerability datasets. 
Our main contributions are as follows:</ns0:p><ns0:p>&#8226; We propose methods to characterize sample similarity and code features. We calculated the distance between the sample vectors to obtain the inter-class and intra-class distances. We used the inter-class and intra-class distances to express the similarity between the classes and the similarity within each class of samples. We selected five features to characterize code and measure sample complexity, sample size, and subroutine call-related information.</ns0:p><ns0:p>&#8226; We used sample granularity, sample similarity, and code features to characterize vulnerability datasets. Then we analyzed the characteristics of vulnerability datasets and the results of DL-based vulnerability detectors to study the impact of the vulnerability datasets on DL-based vulnerability detectors.</ns0:p><ns0:p>&#8226; We selected four vulnerability datasets, three methods of representation, and four DL-based vulnerability detectors for experiments. We found that the sample granularity, sample similarity, and code features of the dataset impacted DL-based vulnerability detectors in the following ways: 1) Fine-grained samples were conducive to detecting vulnerabilities; 2) Vulnerability datasets with lower inter-class similarity, higher intra-class similarity, and simple structure were conducive to detecting vulnerabilities in the original test set; and 3) Vulnerability datasets with higher inter-class similarity, lower intra-class similarity, and complex structure helped detect vulnerabilities in other datasets.</ns0:p></ns0:div> <ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>This paper studies the impact of vulnerability datasets on DL-based vulnerability detectors. We discuss related work from three aspects: vulnerability detectors, vulnerability datasets, and research on vulnerability datasets.</ns0:p></ns0:div> <ns0:div><ns0:head>Vulnerability detectors</ns0:head><ns0:p>Source code vulnerability detection involves methods based on code similarity <ns0:ref type='bibr' target='#b19'>(Kim &amp; Oh, 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Li &amp; Jin, 2016;</ns0:ref><ns0:ref type='bibr' target='#b17'>Jang et al., 2012)</ns0:ref> as well as pattern-based methods <ns0:ref type='bibr' target='#b45'>(Yamaguchi &amp; Rieck, 2013;</ns0:ref><ns0:ref type='bibr' target='#b34'>Neuhaus &amp; Zeller, 2007;</ns0:ref><ns0:ref type='bibr' target='#b46'>Yamaguchi &amp; Rieck, 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Grieco et al., 2016)</ns0:ref>. Code similarity-based methods can detect vulnerabilities caused by code cloning, but they have a high false-positive rate and false-negative rate <ns0:ref type='bibr' target='#b18'>(Johnson et al., 2013)</ns0:ref>. The rule-based method is a pattern-based method that relies on experts manually extracting features. Machine learning-based methods are also pattern-based methods that include traditional machine learning-based methods and DL-based methods. Traditional machine learning-based methods also rely on experts to manually extract features. Manually extracting features is time-consuming and laborious, and it is difficult to extract features completely <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018)</ns0:ref>. 
DL-based methods automatically extract features with a low rate of false positives and false positives. DL-based vulnerability detection methods are divided into the following three types according to their feature extraction method.</ns0:p><ns0:p>The first type is sequence-based vulnerability detection. This type of research uses Deep Neural Networks (DNNs) to extract feature representations from sequence code entities, mainly text sequence and function call sequence. The text sequence mainly contains source code text <ns0:ref type='bibr' target='#b38'>(Sestili et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Choi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Peng et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Li &amp; Chen, 2021)</ns0:ref>, assembly instructions <ns0:ref type='bibr' target='#b21'>(Le et al., 2019)</ns0:ref>, and source code processed by the code lexer <ns0:ref type='bibr' target='#b36'>(Russell et al., 2018)</ns0:ref>. The function call sequence includes static calls and dynamic calls <ns0:ref type='bibr' target='#b15'>(Grieco et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wu et al., 2017)</ns0:ref>. Additionally, they allow neural networks to capture flow-based patterns and advanced features <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b54'>Zou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2021c;</ns0:ref><ns0:ref type='bibr' target='#b7'>Cheng et al., 2021)</ns0:ref>.</ns0:p><ns0:p>The second type is Abstract Syntax Tree (AST)-based code vulnerability detection. AST retains the hierarchical structure of sentence and expression organization. It contains a relatively large amount of code semantics and syntax. Therefore, AST can be a valuable source for learning feature representations related to potentially vulnerable patterns. This type of method first extracts the ASTs of the code and then combines them with the seq2seq <ns0:ref type='bibr' target='#b9'>(Dam et al., 2017)</ns0:ref>, bidirectional long short-term memory (BLSTM) <ns0:ref type='bibr' target='#b14'>(Farid AB &amp; LA., 2021;</ns0:ref><ns0:ref type='bibr' target='#b30'>Lin et al., 2018)</ns0:ref>, or other networks <ns0:ref type='bibr' target='#b41'>(Wang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Lin et al., 2017)</ns0:ref> to extract the semantic features of the code.</ns0:p><ns0:p>The third type is graph-based vulnerability detection. These studies use DNN to learn feature representations from different types of graph-based program representations, including AST, Control Flow graphs (CFGs), Program Dependency graphs (PDGs), data-dependent graphs (DDGs), and combinations of these graphs <ns0:ref type='bibr' target='#b39'>(Shar &amp; Tan, 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Duan et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Dong et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Harer et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Lin et al., 2019b)</ns0:ref> as input to DNNs for learning deep feature representations. Based on this, some studies have used multiple types of composite graphs to express richer semantic information <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chakraborty et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang et al., 2020)</ns0:ref>. 
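To make the sequence-based family concrete, the sketch below (our illustration, not code from any of the cited detectors) lexes C functions into token sequences, trains word2vec over them, and averages the token vectors into a function-level embedding. It assumes gensim &gt;= 4.0 and uses a deliberately simplified regex lexer; real detectors such as VulDeePecker and SySeVR use more careful lexing and feed the token vectors to sequence models rather than averaging them.

```python
# Minimal sketch: sequence-based representation of C functions with word2vec.
# Assumes gensim >= 4.0; the regex lexer is a simplification of the code
# lexers used by sequence-based detectors.
import re
import numpy as np
from gensim.models import Word2Vec

TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|->|&&|\|\||[^\s\w]")

def lex(code: str):
    """Split C/C++ source text into a flat token sequence."""
    return TOKEN_RE.findall(code)

def train_token_embeddings(functions, dim=100):
    """Train word2vec over the token sequences of all training functions."""
    corpus = [lex(f) for f in functions]
    return Word2Vec(corpus, vector_size=dim, window=5, min_count=1, sg=1)

def embed_function(model, code, dim=100):
    """Represent one function as the mean of its token vectors."""
    vecs = [model.wv[t] for t in lex(code) if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

if __name__ == "__main__":
    funcs = ["int add(int a, int b) { return a + b; }",
             "void copy(char *dst, char *src) { strcpy(dst, src); }"]
    w2v = train_token_embeddings(funcs)
    print(embed_function(w2v, funcs[1]).shape)   # (100,)
```

AST-based and graph-based detectors replace this flat token view with paths over the AST (e.g., code2vec) or with CFG/PDG/DDG graphs fed to graph neural networks.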
This paper focuses on these three types of DL-based vulnerability detectors.</ns0:p></ns0:div> <ns0:div><ns0:head>Vulnerability datasets</ns0:head><ns0:p>The existing C/C++ vulnerability datasets are mainly divided into the following three types according to the collection method. The first type is artificially synthesized data using known as vulnerability patterns, such as Software Assurance Reference Dataset (SARD) <ns0:ref type='bibr' target='#b2'>(Black, 2018;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2021c)</ns0:ref>. This type of data is relatively simple and has a single vulnerability pattern. The second type is original data that has been manually modified, such as National Vulnerability Dataset (NVD) <ns0:ref type='bibr' target='#b3'>(Booth &amp; Witte, 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Li et al., 2021c;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bhandari et al., 2021)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Russell et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b44'>Xinda Wang &amp; Sun, 2021)</ns0:ref> and open-source software <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Lin et al., 2019a;</ns0:ref><ns0:ref type='bibr' target='#b49'>Zheng et al., 2021)</ns0:ref>. This type of data involves a wide range of vulnerabilities and different structures, reflecting real-world software vulnerabilities. Generally, the unpatched version is regarded as vulnerability data, and the patched version is regarded as non-vulnerability data. This paper studies these three types of C/C++ vulnerability datasets.</ns0:p></ns0:div> <ns0:div><ns0:head>Research on vulnerability datasets</ns0:head><ns0:p>Previous studies have explored which code changes are more prone to contain vulnerabilities in datasets <ns0:ref type='bibr' target='#b4'>(Bosu et al., 2014)</ns0:ref>, and have analyzed the dependencies between vulnerabilities <ns0:ref type='bibr' target='#b22'>(Li et al., 2021a)</ns0:ref> and the vulnerability distribution <ns0:ref type='bibr' target='#b31'>(Liu et al., 2020)</ns0:ref>. A recent study <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> calculated the authenticity of existing vulnerability datasets, the proportion of data with and without vulnerabilities, and repeated samples. That study also found that low authenticity is not conducive to detecting realworld vulnerabilities. Unbalanced datasets usually make DL-based vulnerability detectors ineffective.</ns0:p><ns0:p>Vulnerability datasets with high repeated sample rates may help detect certain vulnerabilities but not others. The current research does not characterize vulnerability datasets or the impact of the vulnerability dataset on DL-based vulnerability detectors.</ns0:p></ns0:div> <ns0:div><ns0:head>DESIGN</ns0:head><ns0:p>The purpose of this paper is to study the impact of vulnerability datasets on DL-based vulnerability detectors. This paper explores the impact of vulnerability datasets on DL-based vulnerability detectors from three aspects: granularity, similarity, and code features. The following are our research motivations:</ns0:p><ns0:p>Granularity. 
When we studied at SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>, we found that there were differences in the results obtained at the slice-level dataset and function-level dataset. And the slicing technology extracts the information related to the vulnerability. Therefore, we believe that granularity will have an impact on the results of vulnerability detectors and do research on granularity in this paper. differences are small, it is easier to learn the features of each class. The vectors come from samples, and the difference between the vectors is not only the representation method but also the difference of samples.</ns0:p><ns0:p>Therefore, we believe that the similarity between the samples themselves may have an impact on the effectiveness of vulnerability detectors. In this paper, we investigate the effect of inter-class similarity and intra-class similarity on vulnerability detector performance.</ns0:p><ns0:p>Code features. A study <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> found that the effects of artificially synthesized datasets and real-world datasets showed differences. We argue that the difference between synthetic and real datasets lies not in their origin, but code features, such as code complexity. Therefore, this paper comprehensively analyzes the code features and studies the impact of code features on the effect of vulnerability detectors.</ns0:p><ns0:p>We achieved insights by answering the following research questions: RQ1: How does the granularity of vulnerability dataset samples affect DL-based vulnerability detectors? The sample granularity of vulnerability datasets is mainly divided into function level and slice level <ns0:ref type='bibr' target='#b25'>(Li et al., 2021b)</ns0:ref>. The source code files are divided by function at the function level and labelled as vulnerable or non-vulnerable. In the slice set, the slices are labelled according to whether there are vulnerable lines in the slices. We processed the same dataset into different granularities. Then, we studied the impact of sample granularity on DL-based vulnerability detectors.</ns0:p><ns0:p>RQ2: How does the similarity of vulnerability dataset samples affect DL-based vulnerability detectors? Vulnerability datasets are divided into two categories: vulnerable data and non-vulnerable data. We consider that sample similarity has two aspects: inter-class similarity and intra-class similarity.</ns0:p><ns0:p>Inter-class similarity refers to the similarity between two classes of samples. Intra-class similarity refers to the similarity between samples of the same class. We studied the impact of sample similarity between classes and within classes on DL-based vulnerability detectors from the vectors.</ns0:p></ns0:div> <ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_6'>2021:11:67742:1:0:NEW 5 Apr 2022)</ns0:ref> Manuscript to be reviewed </ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>STEP I: Generating code samples</ns0:head><ns0:p>This step was to adapt the vulnerability dataset to the input format requirements of the vulnerability detector. We generated a set of vulnerable code samples A = {A 1 , A 2 , ..., A m } and a set of non-vulnerable code samples B = {B 1 , B 2 , ..., B n } from the vulnerability dataset, where m and n were the number of vulnerable code samples and non-vulnerable code samples. 
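As a minimal illustration of this step, the sketch below builds the labelled sets A and B at function level. It assumes the dataset already supplies function boundaries and the known vulnerable line numbers; real pipelines obtain function boundaries with a C/C++ parser, and the helper names here are our own.

```python
# Minimal sketch of STEP I at function level: build the labelled sample sets
# A (vulnerable, label '1') and B (non-vulnerable, label '0'), assuming the
# dataset supplies function line ranges and vulnerable line numbers.
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class Function:
    name: str
    start_line: int   # 1-based, inclusive
    end_line: int     # 1-based, inclusive
    code: str

def split_by_label(functions: List[Function],
                   vulnerable_lines: Set[int]) -> Tuple[List[str], List[str]]:
    """A function is vulnerable if any known vulnerable line falls inside it."""
    A, B = [], []   # A: vulnerable samples, B: non-vulnerable samples
    for f in functions:
        if any(f.start_line <= ln <= f.end_line for ln in vulnerable_lines):
            A.append(f.code)
        else:
            B.append(f.code)
    return A, B

if __name__ == "__main__":
    funcs = [Function("safe", 1, 3, "int safe() { return 0; }"),
             Function("bad", 5, 9, "void bad(char *s) { char b[4]; strcpy(b, s); }")]
    A, B = split_by_label(funcs, vulnerable_lines={7})
    print(len(A), len(B))   # 1 1
```

Slice-level samples are labelled in the same spirit, except that the membership test is whether a program slice contains any vulnerable line, as described next.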
We generated code samples with two granularities: function level and slice level.</ns0:p><ns0:p>When generating function-level code samples, we divided the code files into function units. Then, we labelled the functions according to the information provided by the vulnerability dataset. The labelling method should be determined according to the requirements of the vulnerability detector. Usually, vulnerable functions are labelled '1', and non-vulnerable functions are labelled '0'.</ns0:p><ns0:p>When generating slice-level code samples, the first step is to determine whether the vulnerability dataset contains vulnerability line information, because this information is needed to decide whether a slice contains vulnerable code and thus to label the slices; for a vulnerability detector, only a labelled training set is meaningful. We then generated the PDG of the source code and generated corresponding program slices for the code elements in the PDG. Finally, we labelled the slices according to the vulnerability line information provided by the vulnerability dataset. Slices containing vulnerability lines were labelled as vulnerable data, and slices that did not contain vulnerability lines were labelled as non-vulnerable data.</ns0:p></ns0:div> <ns0:div><ns0:head>STEP II: Characterizing sample similarity</ns0:head><ns0:p>This step was to characterize the sample similarity of the vulnerability dataset. We considered two types of sample similarity: inter-class similarity and intra-class similarity. In order to enable the DL model to learn features better, it was necessary to simplify the complex original data and express the original data as vectors; the representation methods are mainly sequence-based, AST-based, and graph-based. Graph-based representations are more concerned with control flow and data flow information, such as Gated Graph Neural Networks (GGNN) <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref>.</ns0:p><ns0:p>After representing the code samples as vector sets V A and V B , we reduced their dimensions to two-dimensional vector sets v A = {v A1 , v A2 , ..., v Am } and v B = {v B1 , v B2 , ..., v Bn }. We represented the similarity between classes by the inter-class distance, calculated as the average distance between the two types of samples and denoted by D inter ,</ns0:p><ns0:formula xml:id='formula_0'>D_{inter} = \frac{1}{m \, n} \sum_{i=1}^{m} \sum_{j=1}^{n} D(v_{A_i}, v_{B_j}). \quad (1)</ns0:formula><ns0:p>D(v 1 , v 2 ) represents the cosine distance between v 1 and v 2 <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref>,</ns0:p><ns0:formula xml:id='formula_2'>D(v_1, v_2) = 1 - \left| \frac{v_1 \cdot v_2}{\|v_1\| \, \|v_2\|} \right|. \quad (2)</ns0:formula><ns0:p>We represented the similarity within each class by the intra-class distance, calculated as the sum of the average pairwise distances within each class and denoted by D intra ,</ns0:p><ns0:formula xml:id='formula_3'>D_{intra} = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} D(v_{A_i}, v_{A_j}) + \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} D(v_{B_i}, v_{B_j}). \quad (3)</ns0:formula><ns0:p>The larger D inter and D intra are, the lower the corresponding similarity.</ns0:p><ns0:p>We also used relative entropy to measure sample similarity. En(a, b) represents the relative entropy between two samples a and b. 
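The relative-entropy counterparts En inter and En intra are defined next. For the distance-based measures in Eqs. (1)-(3), a minimal sketch over the two-dimensional vectors v A and v B could look as follows; it is an illustrative, quadratic-time implementation, not the exact code used in the experiments.

```python
# Minimal sketch of Eqs. (1)-(3): inter- and intra-class distances over the
# 2-D sample vectors, using the cosine distance D(v1, v2) = 1 - |cos(v1, v2)|.
import numpy as np

def cosine_distance(v1: np.ndarray, v2: np.ndarray) -> float:
    """Eq. (2): 1 - |v1 . v2| / (||v1|| * ||v2||)."""
    return 1.0 - abs(float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def d_inter(vA: np.ndarray, vB: np.ndarray) -> float:
    """Eq. (1): mean pairwise distance between the two classes."""
    return float(np.mean([cosine_distance(a, b) for a in vA for b in vB]))

def d_intra(vA: np.ndarray, vB: np.ndarray) -> float:
    """Eq. (3): sum of the mean pairwise distances within each class."""
    within = lambda V: np.mean([cosine_distance(x, y) for x in V for y in V])
    return float(within(vA) + within(vB))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vA = rng.normal(0.0, 1.0, size=(50, 2))   # vulnerable class, 2-D vectors
    vB = rng.normal(3.0, 1.0, size=(50, 2))   # non-vulnerable class
    print(round(d_inter(vA, vB), 3), round(d_intra(vA, vB), 3))
```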
The inter-class relative entropy is calculated as the average relative entropy between the two types of samples, denoted as En inter ,</ns0:p><ns0:formula xml:id='formula_5'>En_{inter} = \frac{1}{m \, n} \sum_{i=1}^{m} \sum_{j=1}^{n} En(A_i, B_j). \quad (4)</ns0:formula><ns0:p>The intra-class relative entropy is calculated as the sum of the average relative entropy between samples of each class, denoted as En intra ,</ns0:p><ns0:formula xml:id='formula_6'>En_{intra} = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} En(A_i, A_j) + \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} En(B_i, B_j). \quad (5)</ns0:formula><ns0:p>The larger En inter and En intra are, the lower the corresponding similarity.</ns0:p></ns0:div> <ns0:div><ns0:head>STEP III: Characterizing code features</ns0:head><ns0:p>This step was to characterize the code features of the vulnerability dataset, such as code complexity, sample size, and subroutine call-related characteristics. In order to characterize these code features, we chose five features from SciTools 1 : AvgCyclomatic, AvgEssential, AvgLine, AvgCountInput, and AvgCountOutput.</ns0:p></ns0:div> <ns0:div><ns0:head>Vulnerability datasets</ns0:head><ns0:p>We chose four datasets to conduct experiments: SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>, FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>, Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref>, and REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref>. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> contains an overview of these four vulnerability datasets. Here are the reasons for choosing them:</ns0:p><ns0:p>&#8226; In order to study the impact of granularity on vulnerability detectors, it is necessary to generate slice-level and function-level data for the same vulnerability dataset. Therefore, we selected two vulnerability datasets containing vulnerability line information: SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> and FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>&#8226; In order to study the impact of sample similarity and code features on vulnerability detectors, the differences between the vulnerability datasets should be as large as possible. Therefore, we chose vulnerability datasets from different sources. SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> comes from SARD <ns0:ref type='bibr' target='#b2'>(Black, 2018)</ns0:ref> and NVD <ns0:ref type='bibr' target='#b3'>(Booth &amp; Witte, 2013)</ns0:ref>, and FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> comes from GitHub. We then chose Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref>, which comes from Qemu and FFmpeg, as another dataset. We used these three vulnerability datasets from different sources to train and test vulnerability detectors.</ns0:p><ns0:p>&#8226; We chose REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> as the public test set to ensure fairness, since it is a real-world vulnerability dataset and comes from a source different from the other three datasets. A sketch of the evaluation protocol built on these datasets follows below. 
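For concreteness, the evaluation protocol built on these datasets, i.e., the 80%/20% split used in the experiments and the standard Accuracy/Precision/Recall/F1 metrics defined later in the "Evaluation metrics" subsection, can be sketched as follows. The helpers split_80_20 and metrics are illustrative and are not code from the evaluated detectors.

```python
# Minimal sketch of the evaluation protocol: split a dataset 80%/20%, then
# report the standard classification metrics from predicted labels.
from typing import Dict, List, Tuple
import random

def split_80_20(samples: List[Tuple[str, int]], seed: int = 0):
    """Shuffle and split labelled samples into training (80%) and test (20%)."""
    data = samples[:]
    random.Random(seed).shuffle(data)
    cut = int(0.8 * len(data))
    return data[:cut], data[cut:]

def metrics(y_true: List[int], y_pred: List[int]) -> Dict[str, float]:
    """Accuracy, Precision, Recall, and F1 (guards only avoid division by zero)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / max(len(y_true), 1)
    prec = tp / max(tp + fp, 1)
    rec = tp / max(tp + fn, 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-9)
    return {"ACC": acc, "P": prec, "R": rec, "F1": f1}

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0]
    y_pred = [1, 0, 0, 1, 1]
    print(metrics(y_true, y_pred))   # ACC 0.6, P 0.667, R 0.667, F1 0.667
```

In the experiments, a detector trained on the 80% split of one dataset is then tested both on its own held-out 20% split (the original test set) and on the REVEAL dataset.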
</ns0:p></ns0:div> <ns0:div><ns0:head>Representation methods and DL-based vulnerability detectors</ns0:head><ns0:p>We chose three types of representation methods: sequence-based representation method -word2vec <ns0:ref type='bibr' target='#b33'>(Mikolov et al., 2013)</ns0:ref>, AST-based representation method -code2vec <ns0:ref type='bibr' target='#b0'>(Alon et al., 2019)</ns0:ref>, and graph-based representation method -GGNN <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>6/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We chose four vulnerability detectors to conduct experiments: SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>, VulDeePecker <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018)</ns0:ref> , REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref>, and C2V-BGRU. Here are the reasons for choosing them:</ns0:p><ns0:p>&#8226; To study the impact of granularity on vulnerability detectors, a vulnerability detector that can accept both function level and slice level as input should be selected. Therefore, we chose SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> to study the impact of granularity on vulnerability detectors.</ns0:p><ns0:p>&#8226; In order to study the impact of sample similarity and code features, we needed vulnerability detectors that use the three representation methods studied in this paper and different DL models to avoid the bias caused by specific DL models. Therefore, we chose VulDeePecker <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018)</ns0:ref> based on word2vec <ns0:ref type='bibr' target='#b33'>(Mikolov et al., 2013)</ns0:ref> and BLSTM, REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> based on GGNN <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and MLP, and a variant of SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> based on code2vec <ns0:ref type='bibr' target='#b0'>(Alon et al., 2019)</ns0:ref> and BGRU (called C2V-BGRU).</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation metrics</ns0:head><ns0:p>Our approaches were based on four popular evaluation metrics used for classification tasks: Accuracy </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Impact of granularity (RQ1)</ns0:head><ns0:p>This subsection studied the impact of different sample granularity on the DL-based vulnerability detector.</ns0:p><ns0:p>We generated slice-level samples and function-level samples of SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> and FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. Since SySeVR used the ratio of 80%: 20% in original papers, we follow this ratio to ensure maximum restoration of vulnerability detectors. 80% of the samples were training set and 20% of the samples were test set. The training set was used to train the SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> vulnerability detector. The function-level test set was used for testing. The results are shown in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> and Fig. 
<ns0:ref type='figure' target='#fig_5'>2</ns0:ref>.</ns0:p><ns0:p>We observed that the F1-score of the vulnerability detector trained on the slice-level data of the SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c</ns0:ref>) dataset was 9.09% higher than that of the vulnerability detector trained on the function-level data. The F1-score of the vulnerability detector trained on the slice-level data of the FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020</ns0:ref>) dataset was 6.52% higher than that of the vulnerability detector trained on the function-level data. The proportion of vulnerability-related information contained in slice-level samples was larger than that in function-level samples, which allowed the DL model to learn vulnerability-related features more effectively.</ns0:p></ns0:div> <ns0:div><ns0:head>Impact of sample similarity (RQ2)</ns0:head><ns0:p>This subsection studied the impact of sample similarity on DL-based vulnerability detectors. We used word2vec <ns0:ref type='bibr' target='#b33'>(Mikolov et al., 2013)</ns0:ref>, code2vec <ns0:ref type='bibr' target='#b0'>(Alon et al., 2019)</ns0:ref>, and GGNN <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> to represent the samples of the three vulnerability datasets (i.e., SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>, FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>, and Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019</ns0:ref>)) as vectors. To better retain important information, we then used PCA <ns0:ref type='bibr' target='#b42'>(Wold et al., 1987)</ns0:ref> to reduce the dimensionality of the vectors to 15 dimensions. Finally, we used T-SNE <ns0:ref type='bibr' target='#b20'>(Laurens &amp; Hinton, 2008)</ns0:ref> to reduce the dimensionality of the vectors to two dimensions. We calculated the inter-class and intra-class distances for these two-dimensional vectors and used them to represent the inter-class and intra-class similarities. The results are shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> dataset had the lowest inter-class similarity and the highest intra-class similarity. In contrast, the FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> dataset had the highest inter-class similarity and the lowest intra-class similarity across the three representations. The average inter-class distance of SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> was 1.45 times higher than that of Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019</ns0:ref>) and 2.08 times higher than that of FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. The average intra-class distance of FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020</ns0:ref>) was 1.37 times higher than that of Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019</ns0:ref>) and 2.45 times higher than that of SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>. 
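The measurement pipeline behind Table 4 can be sketched as follows. This is illustrative only: scikit-learn's PCA and TSNE stand in for the exact configurations used, X and y stand in for the real sample embeddings and labels, and the pairwise distance helper mirrors Eq. (2).

```python
# Minimal sketch of the Table 4 pipeline: sample embeddings -> PCA (15 dims)
# -> T-SNE (2 dims) -> inter-/intra-class distances based on Eq. (2).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def mean_abs_cosine_distance(U: np.ndarray, V: np.ndarray) -> float:
    """Mean of 1 - |cos(u, v)| over all pairs (u in U, v in V)."""
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.abs(Un @ Vn.T)))

def reduce_to_2d(X: np.ndarray, seed: int = 0) -> np.ndarray:
    """PCA to 15 dimensions, then T-SNE down to 2 dimensions."""
    X15 = PCA(n_components=15, random_state=seed).fit_transform(X)
    return TSNE(n_components=2, random_state=seed, init="pca").fit_transform(X15)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 100))      # e.g. 100-dim function embeddings
    y = rng.integers(0, 2, size=200)     # 1 = vulnerable, 0 = non-vulnerable
    v = reduce_to_2d(X)
    vA, vB = v[y == 1], v[y == 0]
    d_inter = mean_abs_cosine_distance(vA, vB)                     # Eq. (1)
    d_intra = (mean_abs_cosine_distance(vA, vA)
               + mean_abs_cosine_distance(vB, vB))                 # Eq. (3)
    print(round(d_inter, 3), round(d_intra, 3))
```

Lower values of these distances correspond to higher similarity, matching the convention used for Table 4.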
We generated function-level code samples from the three vulnerability datasets (i.e., SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>, FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>, and Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref>) according to the input</ns0:p></ns0:div> <ns0:div><ns0:head>8/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science format required by the three function-level vulnerability detectors (i.e., VulDeePecker <ns0:ref type='bibr' target='#b48'>(Zhen et al., 2018)</ns0:ref>, REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref>, and C2V-BGRU). Since both VulDeePecker and REVEAL used the ratio of 80%: 20% in original papers, we followed this ratio to ensure maximum restoration of vulnerability detectors. For every vulnerability dataset, 80% of the samples were training set and 20% of the samples were original test set. Training sets were used to train DL-based vulnerability detectors, test sets, and the REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> dataset were used to test the effect of the vulnerability detector. The results are shown in Table <ns0:ref type='table' target='#tab_5'>5 and Figs. 3-4.</ns0:ref> From Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>, we observed that for the vulnerability detection of the original test set, the F1-score of VulDeePecker trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> was 9.42% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 20.94% higher than the F1-score trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. The F1-score of C2V-BGRU trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> was 3.25% higher than the one trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 4.89% higher than the one trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. The F1-score of REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> was 9.11% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 15.19% higher than the F1-score trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. The average F1-score of SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> was 7.26% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 13.67% higher than the F1-score trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>. For the vulnerability detection of the REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> datasets, the F1-score of VulDeePecker trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> was 13.22% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 14.58% higher than the F1-score trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>. 
The F1-score of C2V-BGRU trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> was 3.43% higher than the one trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 9.74% higher than the one trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>. The F1-score of REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> trained by FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020</ns0:ref>) was 6.39% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 14.35%</ns0:p><ns0:p>higher than the F1-score trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>. The average F1-score of FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> was 7.68% higher than the F1-score trained by Devign <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2019)</ns0:ref> and 12.89% higher than the F1-score trained by SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref>. By analyzing the above results, we observed that for the vulnerability detection of the original test set, the dataset with lower inter-class similarity and higher intra-class similarity was better. For the vulnerability detection of the REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> dataset, the dataset with higher inter-class similarity and lower intra-class similarity was better.</ns0:p><ns0:p>The training set with lower inter-class similarity made it easier for the DL model to learn the difference between the two data classes. For the vulnerability detection of the original test set, the similarity between the test set was also lower. The difference between the vulnerable and non-vulnerable classes was Manuscript to be reviewed</ns0:p><ns0:p>Computer Science significant and was more conducive to detecting vulnerable samples. However, since the DL model learns different features that are not related to vulnerabilities, the training set with low inter-class similarity will affect the performance of the vulnerability detector when detecting other vulnerability datasets.</ns0:p><ns0:p>The training set with higher intra-class similarity helped the DL model learn the characteristics of the two classes of data. For the vulnerability detection of the original test set, the intra-class similarity of the test set was also higher. The same class of data was slightly different, making it more conducive to detecting vulnerable samples. However, since the higher intra-class similarity meant that the similar data in the dataset had a single feature, the DL model learned features unrelated to the vulnerability. Therefore, for the detection of other vulnerability datasets, the high intra-class similarity of the training set affects the performance of the vulnerability detector.</ns0:p><ns0:p>Insight. For DL-based vulnerability detectors, vulnerability datasets with higher intra-class similarity and lower inter-class similarity are conducive to detecting vulnerabilities in the original test set.</ns0:p><ns0:p>Vulnerability datasets with lower intra-class similarity and higher inter-class similarity are conducive to detecting vulnerabilities in other vulnerability datasets. 
This is because higher intra-class similarity and lower inter-class similarity cause DL-based vulnerability detectors to learn a single feature and features that are unrelated to vulnerabilities.</ns0:p></ns0:div> <ns0:div><ns0:head>Impact of code features (RQ3)</ns0:head><ns0:p>This subsection studied the impact of code features on the DL-based vulnerability detector. We analyzed the samples of vulnerability datasets, extracted and characterized the code features of the vulnerability datasets, and the results are shown in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>. We found that the SySeVR <ns0:ref type='bibr' target='#b26'>(Li et al., 2021c)</ns0:ref> dataset had the lowest complexity, smallest sample size, and minor subroutine calls. The FUNDED <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref> dataset had the highest complexity, largest sample size, and most subroutine calls. From Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>, we observed that for the vulnerability detection of the original test set, a training set with lower complexity, smaller average sample size, and fewer subroutine calls was better.</ns0:p><ns0:p>For vulnerability detection of the REVEAL <ns0:ref type='bibr' target='#b5'>(Chakraborty et al., 2020)</ns0:ref> dataset, a training set with higher complexity, larger average sample sizes, and more subroutine calls generated better results.</ns0:p><ns0:p>The dataset with low complexity, small sample size, and fewer subroutine calls had a simple structure, and it was easier for the DL models to learn simple inputs. For the vulnerability detection of the original test set, the structure of the test set was also simple, so it was more conducive to detecting vulnerable samples. However, the simple structure meant that the dataset had a single feature, which made it difficult for the DL model to learn complex vulnerability features from the training set. Therefore, for the vulnerability detection of other vulnerability datasets, a training set with low complexity, small sample size, and few subroutine calls will affect the performance of the vulnerability detector.</ns0:p><ns0:p>Insight. For DL-based vulnerability detectors, vulnerability datasets with a simple structure are conducive to detecting vulnerabilities in the original test set, and vulnerability datasets with a complex structure are more conducive to detecting vulnerabilities in other vulnerability datasets. This is because it is not easy to detect vulnerabilities in complex data, and their complex features can better train the detection ability of DL-based vulnerability detectors.</ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head></ns0:div> <ns0:div><ns0:head>Suggestions</ns0:head><ns0:p>Based on our research results, we have the following suggestions for creating and selecting vulnerability datasets. 1) Vulnerability datasets should be collected from the real-world environment. 
We need to deduplicate the dataset and remove irrelevant vulnerability information, such as header files and comments.</ns0:p><ns0:p>2) When verifying the feasibility of the DL-based vulnerability detector, the chosen dataset should be relatively simple with less similarity between classes and more significant intra-class similarity under the In the future, we will provide a complete improvement plan.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This paper focuses on using sample granularity, sample similarity, and code features to study the impact of vulnerability datasets on DL-based vulnerability detectors. Our research found: 1) Fine-grained samples were conducive to detecting vulnerabilities; 2) Vulnerability datasets with lower inter-class similarity, higher intra-class similarity, and simple structure helped detect vulnerabilities in the original test set;</ns0:p><ns0:p>and 3) Vulnerability datasets with higher inter-class similarity, lower intra-class similarity, and complex structure could better detect vulnerabilities in other datasets. We also have given suggestions for creating and selecting vulnerability datasets. During the research process, we found that the quality of vulnerability </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>and other vulnerability 2/16 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022) Manuscript to be reviewed Computer Science databases. They annotate and modify the collected data to indicate vulnerabilities. The third type are real-world open-source datasets, such as open-source repositories like GitHub</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Similarity.</ns0:head><ns0:label /><ns0:figDesc>Vulnerability detection is a binary classification problem of deep learning. The deep learning model learns the characteristics of the two types of samples through vectors. So the inter-and intra-class similarity of input vectors will affect the learning of the deep learning model. If the two classes of vectors are very different, it is easier to learn discriminative features. If the same-class vector</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The overview of this paper: Vulnerability datasets and DL-based vulnerability detectors as input. Steps I -III characterized the characteristics of vulnerability datasets. The characteristics of vulnerability datasets and the results of DL-based vulnerability detectors were used for association analysis to obtain the answers to RQ1-3. Insights were achieved through the above analysis.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022)Manuscript to be reviewed Computer Science data as vectors. We represented vulnerable code samples set A and non-vulnerable code samples set B as multi-dimensional vectors sets V A = {V A1 ,V A2 , ...,V Am } and V B = {V B1 ,V B2 , ...,V Bn }.The methods for representing codes as vectors were mainly divided into sequence-based, ASTbased, and graph-based. Sequence-based representation means that the code is treated as text sequences, regardless of the internal structure of the code. For example, word2vec<ns0:ref type='bibr' target='#b33'>(Mikolov et al., 2013)</ns0:ref> encodes tokens in the code. AST-based representation is a tree representation of the abstract syntax structure of the source code. 
First, it decomposes the code into a set of paths in the corresponding AST. Then, it uses the neural network to learn the representation of each path and how to integrate the representation of all paths. Abstract Syntax Tree Neural Network (ASTNN)<ns0:ref type='bibr' target='#b47'>(Zhang et al., 2019)</ns0:ref> and code2vec<ns0:ref type='bibr' target='#b0'>(Alon et al., 2019)</ns0:ref> are two representations based on AST. Graph-based representation is based on multiple graphs that explicitly encode different control dependencies and data dependencies as edges of heterogeneous graphs.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>ACC), Precision (P), Recall (R), and F1-score (F1). Let True Positive (TP) be the number of samples with vulnerabilities detected correctly, True Negative (TN) be the number of samples with non-vulnerabilities detected correctly, False Positive (FP) be the number of samples with false vulnerabilities detected, and False Negative (FN) be the number of samples with true vulnerabilities undetected. Accuracy (ACC) indicates the proportion of all correctly classified samples to total samples, ACC = (T P + T N)/(T P + T N + FP + FN). Precision (P), also known as the Positive Predictive rate, indicates the correctness of predicted vulnerable samples, P = T P/(T P + FP). Recall (R) indicates the effectiveness of vulnerability prediction, R = T P/(T P + FN). F1-score (F1) is defined as the geometric mean of Precision and Recall, F1 = 2 * (P * R)/(P + R).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The ROC curve of three vulnerability detectors tested on the original test set</ns0:figDesc><ns0:graphic coords='9,217.67,188.91,129.60,100.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The ROC curve of three vulnerability detectors tested on the original test set</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The ROC curve of three vulnerability detectors tested on the REVEAL dataset</ns0:figDesc><ns0:graphic coords='12,150.38,450.75,129.60,100.80' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022) Manuscript to be reviewed Computer Science representation of the DL-based vulnerability detector. 3) When optimizing the DL-based vulnerability detector, the chosen dataset should be more complex with more significant similarity between classes and less similarity within classes under the representation of the DL-based vulnerability detector. Limitations This study has several limitations. First, we used three vulnerability datasets and four DL-based vulnerability detectors for research. Due to the limited number of public vulnerability datasets currently available and the inherent limitations of DL-based vulnerability detectors, more DL-based vulnerability detectors and vulnerability datasets should be used to verify the results in the future. Second, we studied the impact of C/C++ vulnerability datasets on DL-based vulnerability detectors. Future research direction should explore Python/Java/PHP vulnerability datasets. 
Third, our work was devoted to the existing vulnerability datasets, but we did not conduct in-depth research on how to improve vulnerability datasets.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>datasets was essential to DL-based vulnerability detectors. It affected the DL-based vulnerability detectors and played a significant role in guiding the optimization of DL-based vulnerability detectors. The lack of vulnerability datasets restricts the development of DL-based vulnerability detectors. We hope to collect better vulnerability datasets to study their relationship with vulnerability detectors and lay a foundation for developing DL-based vulnerability detectors.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Code features of vulnerability datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>AvgCyclomatic</ns0:cell><ns0:cell>Average cyclomatic complexity for all nested functions or methods</ns0:cell></ns0:row><ns0:row><ns0:cell>AvgEssential</ns0:cell><ns0:cell>Average Essential complexity for all nested functions or methods</ns0:cell></ns0:row><ns0:row><ns0:cell>AvgLine</ns0:cell><ns0:cell>Average number of lines for all nested functions or methods</ns0:cell></ns0:row><ns0:row><ns0:cell>AvgCountInput</ns0:cell><ns0:cell>Number of calling subprograms plus global variables read</ns0:cell></ns0:row><ns0:row><ns0:cell>AvgCountOutput</ns0:cell><ns0:cell>Number of called subprograms plus global variables set</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>EXPERIMENTAL SETUP</ns0:cell></ns0:row><ns0:row><ns0:cell>Implementation</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>We used Pytorch 1.4.0 with Cuda version 10.1 and TensorFlow 1.15 (or 1.12) to implement models. We</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ran our experiments on double Nvidia Geforce 2080Ti GPU, Intel(R) Xeon(R) 2.60GHz 16 CPU. 
The</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>time to train a single vulnerability detection model was between 4hours and 17hours.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of vulnerability datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Source</ns0:cell><ns0:cell>Category</ns0:cell><ns0:cell>Vulnerable</ns0:cell><ns0:cell>Non-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>samples</ns0:cell><ns0:cell>vulnerable</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>samples</ns0:cell></ns0:row><ns0:row><ns0:cell>SySeVR</ns0:cell><ns0:cell>SARD + NVD</ns0:cell><ns0:cell>synthesized,</ns0:cell><ns0:cell>2,091</ns0:cell><ns0:cell>13,502</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>manually</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>modified</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FUNDED</ns0:cell><ns0:cell>GitHub</ns0:cell><ns0:cell>open-source</ns0:cell><ns0:cell>5,200</ns0:cell><ns0:cell>5,200</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>repository</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Devign</ns0:cell><ns0:cell>Qemu + FFMPeg</ns0:cell><ns0:cell>open-source</ns0:cell><ns0:cell>10,067</ns0:cell><ns0:cell>12,294</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>software</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>REVEAL</ns0:cell><ns0:cell>Chromium + Debian</ns0:cell><ns0:cell>open-source</ns0:cell><ns0:cell>1,664</ns0:cell><ns0:cell>16,505</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>software</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The result of SySeVR on function-level and slice-level vulnerability samples</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Granularity</ns0:cell><ns0:cell cols='3'>Accuracy (%) Precision (%) Recall (%)</ns0:cell><ns0:cell>F1-score (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>FUNDED</ns0:cell><ns0:cell>function slice</ns0:cell><ns0:cell>72.34 75.48</ns0:cell><ns0:cell>57.02 63.23</ns0:cell><ns0:cell>55.73 59.55</ns0:cell><ns0:cell>56.38 65.47</ns0:cell></ns0:row><ns0:row><ns0:cell>SySeVR</ns0:cell><ns0:cell>function slice</ns0:cell><ns0:cell>80.36 89.57</ns0:cell><ns0:cell>85.13 96.54</ns0:cell><ns0:cell>82.52 84.02</ns0:cell><ns0:cell>83.37 89.89</ns0:cell></ns0:row></ns0:table><ns0:note>Insight. For DL-based vulnerability detectors, a training set with fine-grained code samples is conducive to detecting vulnerabilities. The fine-grained samples make it easier for the DL-based vulnerability detector to learn the characteristics of vulnerabilities.7/16PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67742:1:0:NEW 5 Apr 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The inter-class distance and intra-class distance of the three vulnerability datasets under the three representation methods</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Representation D inter</ns0:cell><ns0:cell>D intra</ns0:cell><ns0:cell>En inter</ns0:cell><ns0:cell>En intra</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>word2vec</ns0:cell><ns0:cell>0.1507</ns0:cell><ns0:cell>0.3578</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FUNDED</ns0:cell><ns0:cell>code2vec</ns0:cell><ns0:cell>0.3201</ns0:cell><ns0:cell>0.4854</ns0:cell><ns0:cell>0.6739</ns0:cell><ns0:cell>0.5317</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GGNN</ns0:cell><ns0:cell>0.3205</ns0:cell><ns0:cell>0.5942</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>word2vec</ns0:cell><ns0:cell>0.4013</ns0:cell><ns0:cell>0.1327</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SySeVR</ns0:cell><ns0:cell>code2vec</ns0:cell><ns0:cell>0.4945</ns0:cell><ns0:cell>0.2343</ns0:cell><ns0:cell>0.8578</ns0:cell><ns0:cell>0.3790</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GGNN</ns0:cell><ns0:cell>0.7556</ns0:cell><ns0:cell>0.2002</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>word2vec</ns0:cell><ns0:cell>0.2529</ns0:cell><ns0:cell>0.2247</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Devign</ns0:cell><ns0:cell>code2vec</ns0:cell><ns0:cell>0.3855</ns0:cell><ns0:cell>0.3576</ns0:cell><ns0:cell>0.7265</ns0:cell><ns0:cell>0.4882</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GGNN</ns0:cell><ns0:cell>0.4942</ns0:cell><ns0:cell>0.4683</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The results of the vulnerability detectors trained by the three training sets on the original test set and REVEAL</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Test set</ns0:cell><ns0:cell>Detector</ns0:cell><ns0:cell>Training set</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell>Precision (%)</ns0:cell><ns0:cell>Recall (%)</ns0:cell><ns0:cell>F1-score (%)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>FUNDED</ns0:cell><ns0:cell>63.75</ns0:cell><ns0:cell>53.45</ns0:cell><ns0:cell>51.78</ns0:cell><ns0:cell>52.65</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VulDeePecker</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>89.52</ns0:cell><ns0:cell>75.62</ns0:cell><ns0:cell>72.37</ns0:cell><ns0:cell>73.59</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>58.57</ns0:cell><ns0:cell>68.43</ns0:cell><ns0:cell>60.36</ns0:cell><ns0:cell>64.17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>FUNDED</ns0:cell><ns0:cell>52.52</ns0:cell><ns0:cell>54.13</ns0:cell><ns0:cell>42.73</ns0:cell><ns0:cell>47.79</ns0:cell></ns0:row><ns0:row><ns0:cell>Original</ns0:cell><ns0:cell>C2V-BGRU</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>84.22</ns0:cell><ns0:cell>56.35</ns0:cell><ns0:cell>49.52</ns0:cell><ns0:cell>52.68</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>53.58</ns0:cell><ns0:cell>52.53</ns0:cell><ns0:cell>46.74</ns0:cell><ns0:cell>49.43</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell 
/><ns0:cell>FUNDED</ns0:cell><ns0:cell>48.84</ns0:cell><ns0:cell>49.44</ns0:cell><ns0:cell>48.94</ns0:cell><ns0:cell>49.23</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>REVEAL</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>79.05</ns0:cell><ns0:cell>56.82</ns0:cell><ns0:cell>74.60</ns0:cell><ns0:cell>64.42</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>66.24</ns0:cell><ns0:cell>47.24</ns0:cell><ns0:cell>65.87</ns0:cell><ns0:cell>55.31</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>FUNDED</ns0:cell><ns0:cell>78.74</ns0:cell><ns0:cell>23.78</ns0:cell><ns0:cell>28.93</ns0:cell><ns0:cell>26.41</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VulDeePecker</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>80.56</ns0:cell><ns0:cell>9.54</ns0:cell><ns0:cell>15.59</ns0:cell><ns0:cell>11.83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>70.08</ns0:cell><ns0:cell>10.56</ns0:cell><ns0:cell>17.58</ns0:cell><ns0:cell>13.19</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>FUNDED</ns0:cell><ns0:cell>89.05</ns0:cell><ns0:cell>21.56</ns0:cell><ns0:cell>19.35</ns0:cell><ns0:cell>20.39</ns0:cell></ns0:row><ns0:row><ns0:cell>REVEAL</ns0:cell><ns0:cell>C2V-BGRU</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>88.41</ns0:cell><ns0:cell>8.33</ns0:cell><ns0:cell>14.78</ns0:cell><ns0:cell>10.65</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>84.23</ns0:cell><ns0:cell>19.72</ns0:cell><ns0:cell>14.88</ns0:cell><ns0:cell>16.96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>FUNDED</ns0:cell><ns0:cell>66.26</ns0:cell><ns0:cell>35.89</ns0:cell><ns0:cell>20.36</ns0:cell><ns0:cell>25.98</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>REVEAL</ns0:cell><ns0:cell>SySeVR</ns0:cell><ns0:cell>72.38</ns0:cell><ns0:cell>8.76</ns0:cell><ns0:cell>17.33</ns0:cell><ns0:cell>11.63</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Devign</ns0:cell><ns0:cell>64.05</ns0:cell><ns0:cell>22.35</ns0:cell><ns0:cell>17.45</ns0:cell><ns0:cell>19.59</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Code features values of three vulnerability datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='4'>AvgCyclomalic AvgEssential AvgLine(num) AvgCountInput(num)</ns0:cell><ns0:cell>AvgCountOutput(num)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>FUNDED 15.37</ns0:cell><ns0:cell>8.09</ns0:cell><ns0:cell>100.05</ns0:cell><ns0:cell>8.97</ns0:cell><ns0:cell>17.17</ns0:cell></ns0:row><ns0:row><ns0:cell>SySeVR</ns0:cell><ns0:cell>8.38</ns0:cell><ns0:cell>4.84</ns0:cell><ns0:cell>51.87</ns0:cell><ns0:cell>5.96</ns0:cell><ns0:cell>7.08</ns0:cell></ns0:row><ns0:row><ns0:cell>Devign</ns0:cell><ns0:cell>9.23</ns0:cell><ns0:cell>4.98</ns0:cell><ns0:cell>74.77</ns0:cell><ns0:cell>7.70</ns0:cell><ns0:cell>11.13</ns0:cell></ns0:row></ns0:table></ns0:figure> </ns0:body> "
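The evaluation metrics reported in Tables 3 and 5 (Accuracy, Precision, Recall and F1-score) follow the confusion-matrix formulas given in the metrics description above. The following is a minimal Python sketch of those formulas; the function name, the label encoding (1 = vulnerable, 0 = non-vulnerable) and the toy example are illustrative assumptions, not taken from the paper's implementation.

```python
def binary_detection_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1-score for a binary vulnerability
    detector, following ACC = (TP+TN)/(TP+TN+FP+FN), P = TP/(TP+FP),
    R = TP/(TP+FN) and F1 = 2*P*R/(P+R).
    Label 1 = vulnerable, 0 = non-vulnerable (assumed encoding)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    total = tp + tn + fp + fn
    acc = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": acc, "precision": precision, "recall": recall, "f1": f1}


# Toy example: six test samples, four predicted correctly
print(binary_detection_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```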
"Response Letter Dear Reviewers, We sincerely thank you for the insightful comments and constructive suggestions on our paper “Investigating the impact of vulnerability datasets on deep learning-based vulnerability detectors”. In this revision, we believe that we have adequately addressed all of the comments. We hope the reviewers find the paper satisfactory. The revised submission includes the revised paper which does not contain any highlighting on the revised portion and an accompanying/supplementary version in which we use blue color to highlight the revisions. In the response, we also use blue color to highlight the item-by-item responses. Best regards, Lili Liu, Zhen Li, Yu Wen and Penglong Chen Reviewer 1 Reviewer 1 (Anonymous) Basic reporting Paper summary: This paper studies the impact of datasets' distribution on the performance of deep learning-based vulnerability detectors, Three different aspects of vulnerability datasets were investigated, including the dataset granularity (function level or slice level), similarity (inter-class similarity and intra-class similarity), and the code features, such as AvgCyclmotic, AveEssential. In the evaluation part, several datasets and deep learning models are evaluated according to the three aspects, and associated insights were proposed. Advantages: + I think this paper has very good research insights on the deep learning model datasets, which does not draw much attention in previous literature. Meanwhile, the distribution of datasets is in fact very important to deep learning models' performance. + The overall experiment design is reasonable and clear. + The datasets used in this paper as well as the deep learning models are very new. Weakness: I have several concerns about this paper, as summarized below: + Overall, the writing of this paper requires improvement. The high-level structures of this paper are relatively clear, but there are flaws: Response: We thank the reviewer for the comment. We have made corresponding changes. We checked the full text and corrected grammar and spelling errors. In addition to this, we used PeerJ's editing services and we believe the language and syntax of this paper have been refined. 1) The abstract needs to be more succinct, especially on the method description. Such as 'The training set is used to train the DL-based vulnerability detector'. Such statements do not need to be in the abstract. Response: We thank the reviewer for the comment. We have made corresponding changes in the abstract. We deleted 'The training set is used to train the DL-based vulnerability detector'. We have abbreviated the description of the research methods and highlighted the coherence. 2) In the Experimental Result Section, I think it will be better to switch the order of Table 4 and Table 5, and their introduction. It is more intuitive to present the representation similarity of the datasets first before giving the corresponding evaluation results. Response: We thank the reviewer for the comment and we have switched the order of Table 4 and Table 5, and their introduction. 3) In Table 2, I think it will be better to add a column to show the category of the dataset, such as synthesized dataset, manually modified, and open-source software. I understand such information can be found in RELATED WORK, doing so will make the table clear and release the unnecessary burden of readers. Response: We thank the reviewer for the comment. We have added a column to show the category of the dataset in Table 2. 
4) Typos such as 'Dulnerability Datasets' IN Section EXPERIMENTAL SETUP Line 186. Response: We thank the reviewer for the comment. We changed 'Dulnerability Datasets' to 'Vulnerability Datasets'. We have double-checked the manuscript and fully optimized spelling and expression. 5) In the INTRODUCTION section, redundant information on 'what's deep learning' is given. I think this part could also be more succinct since the knowledge should be already known to the journal readers. Response: We thank the reviewer for the comment. We have made corresponding changes in the information. We have simplified unnecessary instructions. We describe 'what's deep learning' simply as 'DL methods automatically capture and determine features from the training set to identify vulnerabilities' + Another concern is the motivation to select the three aspects (granularity, similarity, code features) is not clearly presented in the paper. More evidence and motivation should be added to show why these three factors are important for evaluations. In addition, the similarity and code features are in fact characteristics of the raw code datasets, while granularity can be considered as a kind of preprocessing. I think it will be better to discuss them separately. Response: We thank the reviewer for the comment. We have added a description of the motivation in the “Design” section. The following are our research motivations: 1) Granularity. When we studied at SySeVR, we found that there were differences in the results obtained at the slice-level dataset and function-level dataset. And the slicing technology extracts the information related to the vulnerability. Therefore, we believe that granularity will have an impact on the results of vulnerability detectors and do research on granularity in this paper. 2) Similarity. Vulnerability detection is a binary classification problem of deep learning. The deep learning model learns the characteristics of the two types of samples through vectors. So the inter- and intra-class similarity of input vectors will affect the learning of the deep learning model. If the two classes of vectors are very different, it is easier to learn discriminative features. If the same-class vector differences are small, it is easier to learn the features of each class. The vectors come from samples, and the difference between the vectors is not only the representation method but also the difference of samples. Therefore, we believe that the similarity between the samples themselves may have an impact on the effectiveness of vulnerability detectors. In this paper, we investigate the effect of inter-class similarity and intra-class similarity on vulnerability detector performance. 3) Code features. The paper “Deep Learning based Vulnerability Detection: Are We There Yet?” found that the effects of artificially synthesized datasets and real-world datasets showed differences. We argue that the difference between synthetic and real datasets lies not in their origin, but code features, such as code complexity. Therefore, this paper comprehensively analyzes the code features and studies the impact of code features on the effect of vulnerability detectors. We thank this reviewer for the suggestion on granularity, and we agree that granularity is the way of preprocessing. 
Since function-level and slice-level data are two different datasets for text-based vulnerability detectors, we take the preprocessing step to generate slice-level or function-level datasets as a step in generating vulnerability datasets, and use granularity as the feature that expresses how the vulnerability dataset was preprocessed. Each of the three features is a separate class: granularity represents the way of preprocessing, similarity represents the relationship between samples, and code features represent the characteristics of the samples themselves. + The details of experimental implementation are not given. For example, I was expecting the introduction to experiment platforms, the libraries used, experiment data such as running time. Response: We thank the reviewer for the comment. We have added an introduction of the details of experimental implementation in section “EXPERIMENTAL SETUP”. We describe the experiment platforms, the libraries used and the statistic of running time. We used Pytorch 1.4.0 with Cuda version 10.1 and TensorFlow 1.15 (or 1.12) to implement models. We ran our experiments on double Nvidia Geforce 2080Ti GPU, Intel(R) Xeon(R) 2.60GHz 16 CPU. The time to train a single vulnerability detection model was between 4 hours and 17 hours. + Still in the evaluation part, the sizes of the datasets are not given. Also, the parameters (m, n) mentioned in STEP I (Line 146) are not given and not evaluated. Response: We thank the reviewer for the comment and we describe the detailed dataset size in Table 2. The value of “Vulnerable Samples” is the value of m, and the “Non-vulnerable Samples” is the value of n. The parameters (m, n) are used when calculating sample similarity (Dinter, Dintra, Eninter, and Enintra). Experimental design As shown above. Validity of the findings As shown above. Additional comments no comment Reviewer 2 Reviewer 2 (Anonymous) Basic reporting This paper analyzes the inner connection between DL-based vulnerability detectors and datasets. The paper mainly focuses on the following aspects: fine-grained samples, datasets with lower inner-class similarity, and datasets with higher inter-class similarity and lower intra-class similarity. Although many other points can be explored in this kind of research, this paper grants a good view of the relationship between dataset and detector schemes. Also, this is interesting research, giving guidelines to the related research. Experimental design This is the most important part of the paper since the author claims their contributions focus on the evaluation. More effort should be put into this part. Validity of the findings The author needs to put more effort into the evaluation parts, as mentioned above. Some suggestions: 1. using recall rate and an F score is good, but ROC curve, EER, and accuracy are also necessary to evaluate various schemes. Since this part is the paper's focus, authors are suggested to provide detailed evaluation results in the paper. Response: We thank the reviewer for the comment. We have added ROC curve, EER in Fig.2-4, and accuracy in Table 5. 2. How did you divide the dataset? What is the training-testing ratio? This will also impact the detectors' performance. Response: We thank the reviewer for the comment. Since both SySeVR and REVEAL use the ratio of 80%: 20% in original papers, we follow this ratio to ensure maximum restoration of vulnerability detectors. We have added a description of this in the “Experimental Result” section. 3. 
What was information loss when you applied PCA to the dataset? and what is the energy and time consumption when you remove the PCA? Response: We thank the reviewer for the comment. We lost correlation information with tokens that are farther away when we applied PCA to the dataset. We did an experiment using T-SNE directly without PCA. The results show that the execution time without PCA is 0.07 seconds longer than that with PCA, and the energy consumption is almost the same. We use PCA mainly because it can better preserve important information than directly using T-SNE. We have added a description of this in the “Experimental Result” section. 4. Instead of using intra- or extra-class similarity, authors are suggested to evaluate the entropy difference between data samples. Response: We thank the reviewer for the comment. We have added a part on using relative entropy to measure sample similarity in “STEP II: Characterizing Sample Similarity”. The results are in Table 4. Reviewer 3 Reviewer 3 (Anonymous) Basic reporting 1- This paper has not reached to the acceptable level for publication because of lacks originality and novelty. Response: We thank the reviewer for the comment. The field involved in this paper is DL-based vulnerability detection. At present, the research in this area is gradually becoming mature but still needs to be optimized. We mainly have the following innovations: 1) The characterization of similarity and code features is unprecedented. We propose methods to characterize sample similarity and code features. We compute distances between sample vectors to obtain inter- and intra-class distances. We use inter- and intra-class distances to represent the similarity between classes and the similarity within sample classes. We choose five features to characterize the code and measure information about sample complexity, sample size, and subroutine calls. At the suggestion of reviewer 2, we have added a comparison of relative entropy when characterizing similarity, which was also not available before. 2) Our research protocol is novel and completed independently. We characterize the vulnerability dataset using sample granularity, sample similarity, and code features. Then we analyze the characteristics of the vulnerability dataset and the results of the deep learning-based vulnerability detector to study the impact of the vulnerability dataset on deep learning-based vulnerability detectors. 3) We study vulnerability datasets to help develop deep learning-based vulnerability detectors, which have not been studied in depth before. We choose four vulnerability datasets, three representations, and four deep learning-based vulnerability detectors for experiments. We find that the sample granularity, sample similarity, and code characteristics of the dataset affect DL-based vulnerability detectors. 2- The spell-checks, grammatical and writing style errors of the paper must be improved. Response: We thank the reviewer for the comment. We checked the full text and corrected grammar and spelling errors. In addition to this, we used PeerJ's editing services and we believe the language and syntax of this paper have been refined. Experimental design No comment Validity of the findings No comment Additional comments 1- For readers to quickly catch the contribution in this work, it would be better to highlight major difficulties and challenges, and your original achievements to overcome them, in a clearer way in abstract and introduction. Response: We thank the reviewer for the comment. 
We made changes to highlight the challenges and our original achievements in abstract and introduction. The main challenges of investigating the impact of vulnerability datasets on DL-based vulnerability detectors are as follows: 1)The challenge of characterizing vulnerability datasets. Vulnerability datasets are different from text, image, and other datasets. The internal structure of the code in the dataset is more complex. It is characterized by a very abstract concept that is difficult to represent with intuitive data. 2) The challenge of vulnerability dataset evaluation methods. The criterion for evaluating the quality of a vulnerability dataset is its impact on the results of the vulnerability detector. However, for the same vulnerability dataset, using different DL-based vulnerability detectors has significantly different results. Therefore, it is also difficult to study the quality of the dataset by stripping the impact of the performance of DL-based vulnerability detectors. We characterize vulnerability datasets from three aspects: sample granularity, sample similarity, and code features. We study the impact of C/C++ vulnerability datasets on DL-based vulnerability detectors to obtain insights. Based on these insights, we provide suggestions for the creation and selection of vulnerability datasets. Our main contributions are as follows: 1) We propose methods to characterize the sample similarity and code features. We calculate the distance between the sample vectors to obtain the inter-class and intra-class distance. We use the inter-class and intra-class distance to express the similarity between the classes and the similarity within the class of samples. We select five features to characterize code and measure sample complexity, sample size, and subroutine call-related information. 2) We used sample granularity, sample similarity, and code features to characterize vulnerability datasets. Then we analyzed the characteristics of vulnerability datasets and the results of DL-based vulnerability detectors to study the impact of the vulnerability datasets on DL-based vulnerability detectors. 3) We selected four vulnerability datasets, three methods of representation, and four DL-based vulnerability detectors for experiments. We found that the sample granularity, sample similarity, and code features of the dataset impacted DL-based vulnerability detectors in the following ways: a) Fine-grained samples were conducive to detecting vulnerabilities; b) Vulnerability datasets with higher inter-class similarity, lower intra-class similarity, and simple structure were conducive to detecting vulnerabilities in the original test set; and c) Vulnerability datasets with lower inter-class similarity, higher intra-class similarity, and complex structure helped detect vulnerabilities in other datasets. 2- Some references are too old and please add at least five references within the past one year for related work section. Response: We thank the reviewer for the comment. The field of DL-based vulnerability detection is gradually developing but not yet mature. The datasets and detectors we used are the latest in the existing open-source research in this field. We added five references in section “RELATED WORK” and now we cite nine 2021 references: [1] Li, Y., Wang, S., and Nguyen, T. N. (2021b). Vulnerability detection with fine-grained interpretations. 
In Proceedings of the 2021 ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021), Online. [2] Bosu, A., Carver, J. C., Hafiz, M., Hilley, P., and Janni, D. (2021). Identifying the characteristics of vulnerable code changes: An empirical study. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, Hong Kong, China, pages 59–70. [3] Zheng, Y., Pujar, S., Lewis, B., Buratti, L., Epstein, E., Yang, B., Laredo, J., Morari, A., and Su, Z.502(2021). D2A: A dataset built for ai-based vulnerability detection methods using differential analysis. In Proceedings of the 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP ’21), Madrid, Spain, pages 111–120. [4] Xinda Wang, Shu Wang, P. F. and Sun, K. (2021). PatchDB: A large-scale security patch dataset. In Proceedings of the 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and491Networks (DSN), Taipei, China, pages 257–268. [5] Li, Zou, X. and Chen (2021). Vuldeelocator: A deep learning-based fine-grained vulnerability detector. IEEE Transactions on Dependable and Secure Computing. DOI: 10.1109/TDSC.2021.3076142. . [1]-[4] are referenced in the part of dataset, [5] is referenced in the part of vulnerability detector. 3-There are unsatisfactory organization and writing in the 'Conclusion' section. The authors must rewrite and reorganize this section contents. Response: We thank the reviewer for the comment. We have rewritten and reorganized the 'Conclusion' section. We have added insights and contributions. 4- The highlights are not stressed in the paper, and the innovation of the paper weakness. Response: We thank the reviewer for the comment. This paper is an evaluation research article. The highlights and innovations of this paper are as follows: 1) There is currently no method for systematically evaluating vulnerability datasets, and we are the first to formulate unique evaluation rules for vulnerability datasets. From then on, we no longer only look at the surface features such as its scale source when evaluating the vulnerability dataset, but can evaluate the deeper features of the vulnerability dataset in terms of granularity, similarity, and code features. A suitable vulnerability dataset can enable vulnerability detectors to have better performance, and our research lays the foundation for the development of vulnerability detectors. 2) The characterization of similarity and code features is unprecedented. In describing the similarity of samples, we creatively use relative entropy and vector distance to measure the similarity within and between classes of samples, and no previous studies have focused on the similarity of samples. We use a comprehensive analysis method to characterize code features from multiple dimensions, and no previous research has focused on code features. 3) The datasets and vulnerability detectors we use are the latest research in this field and cover all types of datasets and vulnerability detectors, so our research conclusions are credible and general. 5- There are some mistakes in the style of English writing in the text which are to be revised/corrected carefully. Response: We thank the reviewer for the comment. We have made corresponding changes. We have revisited this article, fully optimized spelling and expression. We used PeerJ's editing services and we believe the language and syntax of this paper have been refined. "
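The responses above mention reducing the sample vectors with PCA before applying t-SNE when characterizing sample similarity. The following is a minimal sketch of that two-step projection using scikit-learn; the random stand-in vectors, the component count and the perplexity value are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Toy stand-in for code-sample vectors (e.g. word2vec/code2vec/GGNN embeddings):
# 200 samples, 300 dimensions. Real inputs would come from the representation step.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(200, 300))

# Step 1: PCA keeps the highest-variance directions so t-SNE works on a compact,
# less noisy input (50 components is an assumed, commonly used choice).
reduced = PCA(n_components=50, random_state=0).fit_transform(vectors)

# Step 2: t-SNE maps the PCA-reduced vectors to 2-D for visualising
# inter-class and intra-class structure (perplexity 30 is an assumption).
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(reduced)

print(embedded.shape)  # (200, 2)
```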
Here is a paper. Please give your review comments after reading it.
419
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>IRimage aims at increasing throughput, accuracy and reproducibility of results obtained from thermal images, especially those produced with affordable, consumer-oriented cameras. IRimage processes thermal images, extracting raw data and calculating temperature values with an open and fully documented algorithm, making this data available for further processing using image analysis software. It also allows to make reproducible measurements of the temperature of objects in series of images, and produce visual outputs (images and videos) suitable for scientific reporting. IRimage is implemented in a scripting language of the scientific image analysis software ImageJ, allowing its use through a graphical user interface and also allowing for an easy modification or expansion of its functionality. IRimage's results were consistent with those of standard software for 15 camera models of the most widely used brand. An example use case is also presented, in which IRimage was used to efficiently process hundreds of thermal images to reveal subtle differences in the daily pattern of leaf temperature of plants subjected to different soil water contents. IRimage's functionalities make it better suited for research purposes than many currently available alternatives, and could contribute to making affordable consumer-grade thermal cameras useful for reproducible research.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>IRimage aims at increasing throughput, accuracy and reproducibility of results obtained from thermal images, especially those produced with affordable, consumer-oriented cameras. IRimage processes thermal images, extracting raw data and calculating temperature values with an open and fully documented algorithm, making this data available for further processing using image analysis software. It also allows to make reproducible measurements of the temperature of objects in series of images, and produce visual outputs (images and videos) suitable for scientific reporting. IRimage is implemented in a scripting language of the scientific image analysis software ImageJ, allowing its use through a graphical user interface and also allowing for an easy modification or expansion of its functionality. IRimage's results were consistent with those of standard software for 15 camera models of the most widely used brand. An example use case is also presented, in which IRimage was used to efficiently process hundreds of thermal images to reveal subtle differences in the daily pattern of leaf temperature of plants subjected to different soil water contents. IRimage's functionalities make it better suited for research purposes than many currently available alternatives, and could contribute to making affordable consumer-grade thermal cameras useful for reproducible research.</ns0:p></ns0:div> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Thermal imaging has many uses in biological, medical and environmental research <ns0:ref type='bibr' target='#b16'>(Kastberger and Stachl, 2003)</ns0:ref>. In recent years, thermal cameras have lowered their price, and affordable consumer cameras are now available for as little as 300USD, either as stand-alone devices or smartphone attachments <ns0:ref type='bibr' target='#b13'>(Haglund and Sch&#246;nborn, 2019)</ns0:ref>. 
These cameras, in spite of being marketed as consumer devices, have been proven to be suitable as scientific instruments for research <ns0:ref type='bibr' target='#b27'>(Pereyra Irujo et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Klaessens et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Razani et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Petrie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>van Doremalen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nosrati et al., 2020)</ns0:ref> and have the potential to greatly improve access to thermography in many scientific fields, especially for budget-limited scientists.</ns0:p><ns0:p>Thermal cameras do not measure temperature directly. Temperature is estimated from measured infrared radiation captured by the sensor in the camera, through a series of equations and using a set of parameters, some provided by the user through the camera interface, and others which are set during calibration. The software coupled to these cameras is usually closed-source, which does not allow the user to know the exact algorithms used to obtain the temperature measurements and the final image. For a thermal camera (or any sensor) to be useful for research, the user should be able to have control over (or at least information about) the processing steps between the raw sensor data and the final measurement <ns0:ref type='bibr' target='#b5'>(Dryden et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Software provided with low-cost infrared cameras, besides being closed-source, has usually limited functionality, since it is aimed at non-scientific users. This kind of software only allows for temperature measurements of manually selected points or areas in the image (e.g., <ns0:ref type='bibr' target='#b24'>Nosrati et al., 2020)</ns0:ref>, which is impractical with a large quantity of images, and hinders reproducibility of results. Parameters used PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for temperature calculations, also required for reproducibility, are generally under-reported <ns0:ref type='bibr' target='#b14'>(Harrap et al., 2018)</ns0:ref>, a problem which can be made worse with consumer-oriented analysis software (especially smartphone apps) that hide the real values of parameters behind simple user options. Besides obtaining and analyzing temperature data in numerical form, false-color images representing the temperature values are usually necessary for visualizing and reporting. Similarly to scientific plots of data, it is important that this representation of temperature data is quantitatively accurate. In spite of this, some of the default color palettes used in consumer-oriented software are selected for aesthetic reasons and do not meet the necessary criteria for scientific reporting <ns0:ref type='bibr' target='#b3'>(Crameri et al., 2020)</ns0:ref>.</ns0:p><ns0:p>IRimage was developed with the aim of overcoming those problems and allowing researchers to increase throughput, accuracy and reproducibility of results obtained from thermal images, especially those produced with affordable, consumer-oriented cameras. 
It allows researchers to extract raw data from thermal images and calculate temperature values with an open and fully documented algorithm, making this data available for further processing using standard image analysis or statistical software.</ns0:p><ns0:p>It also allows to make reproducible measurements of the temperature of objects in image sequences, along with other outputs that are useful for further analysis and reporting, such as image timestamps, parameters used for temperature estimations, and customized false-color images and videos that use a scientifically accurate color palette. IRimage was initially developed as an in-house simple tool which was used to benchmark a low-cost thermal camera (the 'FLIR One' smartphone attachment, Pereyra <ns0:ref type='bibr' target='#b27'>Irujo et al., 2015)</ns0:ref> and to analyse thermal images of wheat varieties <ns0:ref type='bibr' target='#b1'>(Cacciabue, 2016)</ns0:ref>, and was later developed further in order to make it suitable for a wider range of scientific applications. In this paper, IRimage implementation and usage is described, along with its comparison against standard software, and an example use case that highlights the utility of some of its functions.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>Theoretical background for temperature calculations</ns0:head><ns0:p>One of the main objectives of IRimage is to provide an implementation of an algorithm which is open not only from a software point of view (as in the definition of open source software), but also open in the sense of being transparent and understandable to the end user, and thus available for scientific scrutiny, customization or extension. To this end, a detailed explanation of the theoretical background of the algorithm used in IRimage is therefore presented here.</ns0:p></ns0:div> <ns0:div><ns0:head>Relationship between temperature and infrared radiation</ns0:head><ns0:p>Thermal cameras are based on the detection of infrared radiation emitted from objects by means of an array of sensors. Each of these sensors generates a digital signal (S), which is a function of radiance (L).</ns0:p><ns0:p>Radiance is the radiant flux (i.e. amount of energy emitted, reflected, transmitted or received per unit time, usually measured in Watts, W) per unit surface and solid angle (in W &#8226; sr &#8722;1 &#8226; m &#8722;2 ). The relationship between the signal (S) resulting from the voltage/current generated by the sensor and the associated electronics (usually quantified as Digital Numbers; DN) and L is usually linear, and gain (G) and offset (O) factors can be calibrated:</ns0:p><ns0:formula xml:id='formula_0'>S = G &#8226; L + O.</ns0:formula><ns0:p>(1)</ns0:p><ns0:p>Measuring radiance can be used to estimate temperature because the total amount of energy emitted by an object is is a function of absolute temperature to the fourth power (according to the Stefan-Boltzmann law). The emission is, however, not equal at different wavelengths (even for a perfect emitter, i.e. a black body): according to Wien's displacement law, the wavelength corresponding to the peak of emission also depends on temperature. For instance, the peak emission of the sun is around 500nm (in the visible portion of the spectrum), while that of a body at 25&#176;C is around 10&#181;m (in the far infrared). 
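As a quick check of the wavelengths quoted in the previous paragraph, Wien's displacement law gives the peak-emission wavelength as lambda_peak = b / T, with b approximately 2.898e-3 m*K. The short sketch below, assuming a solar surface temperature of roughly 5800 K, reproduces the approximate 500 nm and 10 um figures; it is a worked illustration, not part of IRimage's code.

```python
# Wien's displacement law: peak emission wavelength (m) of a black body at T (K).
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant (rounded)

def peak_wavelength(temperature_k):
    return WIEN_B / temperature_k

# Solar surface (~5800 K is an assumed round value) -> ~0.5 micrometres (visible)
print(f"Sun:       {peak_wavelength(5800) * 1e9:.0f} nm")
# Object at 25 degrees C (298.15 K) -> ~10 micrometres (far infrared)
print(f"25 C body: {peak_wavelength(298.15) * 1e6:.1f} um")
```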
Since detectors are only sensitive to part of the spectrum, it is necessary to take into account only the spectral radiance (L &#955; ) for a given wavelength (according to the Lambert's cosine law and the Planck's law) which is equal to: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_1'>L &#955; = &#949; &#8226; 2hc 2 &#955; 5 &#8226; 1 e hc &#955; kT &#8722; 1 ,<ns0:label>(2)</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where &#949; is the emissivity of the surface, h is the Planck constant, c is the speed of light in the medium, &#955; is the wavelength, k is the Boltzmann constant, and T is the absolute temperature of that surface (in kelvins). This equation needs to be integrated over the spectral band corresponding to the detector sensitivity (short-wavelength: 1.4-3&#181;m, mid-wavelength: 3-8&#181;m, or long-wavelength:</ns0:p><ns0:p>8-15&#181;m, depending on the type of sensor) or, for simplicity, be multiplied by the spectral sensitivity range <ns0:ref type='bibr' target='#b11'>(Gaussorgues, 1994)</ns0:ref>. For a given camera (i.e., combination of electronics, sensors and lenses) this equation can be simplified as:</ns0:p><ns0:formula xml:id='formula_2'>L &#955; = &#949; &#8226; 1 R &#8226; (e B T &#8722; 1) ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where B and R are camera calibration parameters (together with G and O). In some cases, the constant 1 is also stored as calibration parameter F.</ns0:p><ns0:p>By combining equations 1 and 3, it is possible to obtain an equation that represents the relationship between S and T for a given sensor, and can be used for its calibration:</ns0:p><ns0:formula xml:id='formula_3'>S = G &#8226; &#949; &#8226; 1 R &#8226; (e B T &#8722; 1) + O.</ns0:formula><ns0:p>(4)</ns0:p></ns0:div> <ns0:div><ns0:head>Sources of radiation</ns0:head><ns0:p>The radiation received by the camera sensor is not equal to the radiation emitted by the object(s) in its field of view. Depending on the emissivity of the object's surface, radiation reflected by the object's surface can contribute significantly to the radiation received by the sensor. Furthermore, this radiation is then attenuated by the atmosphere (mainly by water molecules) even at short distances <ns0:ref type='bibr' target='#b20'>(Minkina and Klecha, 2016)</ns0:ref>. Taking this into account, the signal detected by the sensor (S) can be considered to be composed of three terms:</ns0:p><ns0:formula xml:id='formula_4'>S = &#964; &#8226; S ob j + &#964; &#8226; S re f l + S atm ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where the first term is the equivalent digital signal originating from the target object (S ob j ), attenuated by the atmosphere, which is represented by the atmospheric transmissivity factor tau (&#964;), the second term is the equivalent digital signal from the reflected radiation originating from the target object's surroundings (S re f l ), also attenuated by the atmosphere, and the last term is the equivalent digital signal originated from the atmosphere itself in the path between the object and the sensor (S atm ).</ns0:p></ns0:div> <ns0:div><ns0:head>Estimation of atmospheric transmissivity</ns0:head><ns0:p>There are many different models available to estimate atmospheric transmissivity. For short distances, simple models that take into account the amount of water in the air can provide adequate estimates. For long distances (e.g. 
for infrared cameras used in satellites), more sophisticated models which take into account not only water but also carbon dioxide, ozone, and other molecules, and other atmospheric factors such as scattering are used <ns0:ref type='bibr' target='#b11'>(Gaussorgues, 1994;</ns0:ref><ns0:ref type='bibr' target='#b39'>Zhang et al., 2016)</ns0:ref>. In this paper, the method used in FLIR Systems' cameras was adopted <ns0:ref type='bibr'>(FLIR Systems, 2001)</ns0:ref>, which estimates atmospheric transmissivity (&#964;) based on air water content (H), calculated from air temperature (t) and relative humidity (RH), and the distance between the object and the sensor (d): Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_5'>H = RH &#8226; e (1.5587 + 6.939&#8226;10 &#8722;2 &#8226;t &#8722; 2.7816&#8226;10 &#8722;4 &#8226;t 2 + 6.8455&#8226;10 &#8722;7 &#8226;t 3 ) , (6) &#964; = X &#8226; e [&#8722; &#8730; d&#8226;(&#945; 1 +&#946; 1 &#8226; &#8730; H)] + (1 &#8722; X) &#8226; e [&#8722; &#8730; d&#8226;(&#945; 2 +&#946; 2 &#8226; &#8730; H)] .<ns0:label>(7</ns0:label></ns0:formula></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Estimation of digital signal values for different radiation sources</ns0:head><ns0:p>Assuming all temperatures and emissivities are known, the signal values originating from the different radiation sources, which contribute to the total signal produced by the sensor, can be estimated using equation 4. For the target object, the signal (S ob j ) can be calculated based on the object temperature (T ob j )</ns0:p><ns0:p>and its emissivity (&#949;):</ns0:p><ns0:formula xml:id='formula_6'>S ob j = G &#8226; &#949; &#8226; 1 R &#8226; (e B T ob j &#8722; 1) + O. (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>)</ns0:formula><ns0:p>The signal originated from the atmosphere between the object and the sensor (S atm ) can be calculated based on air temperature (T atm ) and its emissivity, which is equal to 1 &#8722; &#964;:</ns0:p><ns0:formula xml:id='formula_8'>S atm = G &#8226; (1 &#8722; &#964;) &#8226; 1 R &#8226; (e B T atm &#8722; 1) + O. (<ns0:label>9</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>)</ns0:formula><ns0:p>For estimating the signal from radiation reflected by the target object (S re f l ), one must take into account the reflectivity of the object, which is equal to 1 &#8722; &#949;. Also, it should be necessary to know the temperature of the surrounding objects (T re f l ) and their emissivity (&#949; re f l ):</ns0:p><ns0:formula xml:id='formula_10'>S re f l = G &#8226; (1 &#8722; &#949;) &#8226; (&#949; re f l ) &#8226; 1 R &#8226; (e B T re f l &#8722; 1) + O. (<ns0:label>10</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>)</ns0:formula><ns0:p>Since in most cases it would be difficult to determine the temperature and emissivity of all the surrounding objects, the usual procedure is to estimate an 'aparent reflected temperature' (T app.re f l ), by measuring the apparent temperature of a reflective material with &#949; &#8776; 0 (usually aluminium foil). Using this procedure, Eq. 
10 would be replaced by:</ns0:p><ns0:formula xml:id='formula_12'>S re f l = G &#8226; (1 &#8722; &#949;) &#8226; 1 R &#8226; (e B T app.re f l &#8722; 1) + O (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>)</ns0:formula><ns0:p>Object temperature calculation</ns0:p><ns0:p>In order to calculate object temperature (T ob j ), it is necessary to first obtain the signal originating from the object by solving Eq. 5 by S ob j , and usign the total signal S and the results from Eqs. 7, 9, and 11:</ns0:p><ns0:formula xml:id='formula_14'>S ob j = S &#964; &#8722; S re f l &#8722; S atm &#964;<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>Finally, by solving Eq. 8 by T ob j and using the result of Eq. 12 and the sensor's gain, offset and calibration parameters (G, O, B, and R), it is possible to calculate the object temperature as follows:</ns0:p><ns0:formula xml:id='formula_15'>T ob j = B log( G&#8226;&#949; R&#8226;(S ob j &#8722;O) + 1) (13)</ns0:formula></ns0:div> <ns0:div><ns0:head>Implementation of the temperature calculation algorithm</ns0:head><ns0:p>IRimage was implemented in the macro language of the widely used, open source, scientific image analysis software ImageJ <ns0:ref type='bibr' target='#b32'>(Rueden et al., 2017)</ns0:ref> or its distribution FIJI <ns0:ref type='bibr' target='#b34'>(Schindelin et al., 2012)</ns0:ref>, and also uses the open source software ExifTool <ns0:ref type='bibr' target='#b15'>(Harvey, 2003)</ns0:ref> to extract raw values from the thermal images. It was implemented and tested using FLIR brand cameras (FLIR Systems Inc., USA), which is one of the most widely used brands in research <ns0:ref type='bibr' target='#b14'>(Harrap et al., 2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Extraction of parameters from JPEG files.</ns0:p><ns0:p>The temperature calculation method relies on having access to the raw sensor data obtained by the camera.</ns0:p><ns0:p>In the case of FLIR cameras, this data is stored in the 'radiometric JPG' files (the standard file format for these cameras) as metadata tags in 'EXIF' format. This metadata also includes camera-specific and user-set parameters which are also used to calculate temperature. All the parameters that are extracted from the JPG files, and the corresponding variables used in IRimage are detailed in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. First, IRimage uses the ExifTool software <ns0:ref type='bibr' target='#b15'>(Harvey, 2003)</ns0:ref> to processes all images in JPG format within the user-selected folder in order to extract the raw sensor data, which is stored as a PNG image. Next, all camera-specific, atmospheric and user-set parameters are extracted. The next step is the calculation of variables derived from these parameters (detailed in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>), including the calculation of atmospheric transmissivity (using Eq. 6-7) and the estimated signal from reflected objects and the atmosphere (using Eq. 9-11). The byte order (endianness) of the raw image is determined from the image type (PNG or TIFF). This works in almost all cases, but it has been found that this rule does not hold for some (at least 3) camera models. 
In those cases, an exception to this rule is included in the code.</ns0:p></ns0:div> <ns0:div><ns0:head>5/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Depending on the option selected by the user, both the extraction of parameters and the calculation of these variables are either performed for each file or only for the first file in the folder (when the user wants to apply the same set of parameters to all files). In the latter case, the user can also modify the 'user-selected' parameters. The variables used for the final temperature calculation are detailed in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. The PNG image containing the raw sensor data is opened, and each pixel containing the digital signal from the sensor is processed sequentially. First, the object signal is estimated using Eq. 12, and then the temperature value is calculated using Eq. 13. </ns0:p></ns0:div> <ns0:div><ns0:head>Software usage</ns0:head><ns0:p>IRimage is run as a plugin of the scientific image analysis software ImageJ <ns0:ref type='bibr' target='#b32'>(Rueden et al., 2017)</ns0:ref> or its FIJI distribution <ns0:ref type='bibr' target='#b34'>(Schindelin et al., 2012)</ns0:ref>, and also uses the open-source software ExifTool <ns0:ref type='bibr' target='#b15'>(Harvey, 2003)</ns0:ref> to extract raw values from the thermal images. IRimage is available for download at https://github.com/gpereyrairujo/IRimage. When IRimage is installed, an 'IRimage' sub-menu is added to the 'Plugins' menu of ImageJ/FIJI, which allows the user to access the available functions (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). The four functions included in IRimage are: 1) 'Process', which processes the original thermal images to extract raw data and estimate temperature; 2) 'Measure', which allows the user to measure the temperature of different objects in complete sets of images; 3) 'Color', to create false-color images and videos for reporting; and 4) 'Test', to compare IRimage results against other software.</ns0:p></ns0:div> <ns0:div><ns0:head>Processing thermal images</ns0:head><ns0:p>IRimage was implemented and tested using FLIR brand cameras (FLIR Systems Inc., USA), which is one of the most widely used brands in research <ns0:ref type='bibr' target='#b14'>(Harrap et al., 2018)</ns0:ref>, and is able to process FLIR's radiometric JPG thermal image format.</ns0:p><ns0:p>The 'Process' function of IRimage processes complete folders of thermal images, since this is usually the case in research uses. The radiometric JPG image format includes user-set parameters (i.e., emissivity, air temperature and humidity, reflected temperature, and object distance) within individual images, but it is also possible to use different parameter values to calculate temperature from raw data. IRimage can process the images using either of 3 options (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>): 1) process each file using the stored parameters (e.g., when the user has manually set these parameters in the camera according to specific conditions for each image); 2) use a set of global parameters for all images, which is useful when all images were captured under the same conditions, or if parameters need to be modified globally (in this case a dialog box is shown where parameters can be set, Fig. 
<ns0:ref type='figure' target='#fig_0'>1</ns0:ref>); or 3) use parameters stored in a text file, in which specific parameters can be defined for each image. IRimage's functions. The ImageJ/FIJI 'Plugins' menu containing the 'IRimage' sub-menu and functions, and the main windows and dialog boxes that are shown to the user. In any of the functions, the user is first promted to select the folder containing the original thermal images to be processed (not shown). In the 'Process' function, the user is first asked to indicate how parameters for temperature calculation will be determined for each image and, if the user chooses to enter a set of values manually, a second dialog box is shown for the user to do so. In the 'Measure' function, the user is first asked to choose whether to manually select the areas to be measured, or to use a previously saved mask. If the first option is selected, the user first enters the number of areas to measure (not shown) and then a window with the first image of the set is opened so that user can indicate the areas to be measured using any of the available selection tools. In the 'Color' function the user can select the color palette, the contrast level, how to calculate the displayed temperature range, whether to include a temperature scale bar, and whether to produce a video file as output. The 'Test' function does not require additional user input.</ns0:p></ns0:div> <ns0:div><ns0:head>7/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>IRimage processes all images in JPG format within the user-selected folder. Raw data containing the digital signal from the sensor is extracted and temperature is estimated for each pixel using the algorithm detailed in the Article S1. After processing, three images are stored for each input file, corresponding to the raw data, the estimated temperature, and a false-color image. The estimated temperature pixel values are also stored as text, in a .csv (comma-separated values) file that can be opened in a spreadsheet or statistical software. Also, irrespective of the processing option selected, the parameters used are stored in a .csv text file which can be later used to reproduce the same results (with the 'Use parameters from file' option, Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Each of these output file types are stored in different subfolders within the user-selected folder.</ns0:p></ns0:div> <ns0:div><ns0:head>Measuring the temperature of objects</ns0:head><ns0:p>A third function is included to perform reproducible measurements of the temperature of objects in the images. With the 'Measure' function (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), the user can select up to 255 different objects (rectangular, oval, or free-form areas, lines or points) and obtain temperature measurements (mean, minimum, maximum and standard deviation) for each of them, for each of the images in a folder. 
The selected objects are stored in a 'mask' image, which can be later used to reproduce the same measurements.</ns0:p><ns0:p>A mask image can also be modified or created using other methods and used in IRimage.</ns0:p></ns0:div> <ns0:div><ns0:head>Creating customized false-color images and videos</ns0:head><ns0:p>The 'Color' function allows the user to produce false-color images representing the temperature values, which are useful for visualizing and reporting (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>).</ns0:p><ns0:p>The default palette used in IRimage is 'mpl-inferno', one of the default palettes in the popular data visualization Python package Matplotlib (van der Walt and Smith, 2015), which is also included in FIJI.</ns0:p><ns0:p>It is a typical 'thermal' palette that represents colder values in dark blue, and transitions through red and yellow for warmer values (emulating the color of hot liquid metal, or the radiation emitted by a black body). It was selected because it is both perceptually uniform and suitable for color-blind viewers.</ns0:p><ns0:p>Alternatively, the classic greyscale palette, representing colder values as black and warmer values as white, is supported as well. Another important aspect is selecting the appropriate temperature scale, in order to efficiently represent the temperature values in the images using the full color palette. IRimage allows the user to select the contrast level, by automatically adjusting the minimum and maximum displayed values through a 'histogram stretching' algorithm <ns0:ref type='bibr' target='#b6'>(Fisher et al., 2003)</ns0:ref>. It operates by setting an amount of pixels with extreme values (the 'tails' of the histogram) that are excluded (0, 0.3 and 3% for the low, normal and high contrast options, respectively). There are also two ways to calculate the temperature range for a set of images: either use the same scale for all of them (based on the range of temperatures in the full image set, which allows for a better comparison between images), or adjust the scale to the temperature range in each image (which allows for a better visualization of temperature differences within each image). It is also possible to add a scale bar to the images, showing the temperature scale and the color palette, with two different sizes for the scale bar and the font. Lastly, the user can choose whether to produce a video file for the complete set of images as a sequence.</ns0:p><ns0:p>Testing the temperature estimations against other software</ns0:p><ns0:p>The 'Test' function provides a way of testing the algorithm, by comparing the results obtained with the 'Process' function of IRimage against data exported using another software (e.g., that provided by the camera manufacturer). Since IRimage is an open-source software and therefore its modification and customization is possible (and encouraged), this function can be used to check if the calculations have not been altered by any change in the code made by the user (for that purpose, a test image is included with IRimage, along with the temperature data exported using FLIR Tools). Also, it can be used to check whether IRimage functions correctly for a given camera when it is used for the first time. 
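<ns0:p>As an illustration of the contrast adjustment described above for the 'Color' function, the following is a minimal sketch of percentile-based histogram stretching (Python is used here only for brevity; how IRimage distributes the excluded fraction between the two tails is an assumption of this sketch, not something stated in the text).</ns0:p>

```python
import numpy as np

def display_range(temperature, excluded_fraction=0.003):
    # Display limits after excluding a fraction of extreme pixels from the
    # temperature histogram; 0, 0.003 and 0.03 would play the role of the
    # low, normal and high contrast options mentioned in the text.
    # The excluded fraction is assumed to be split evenly between both tails.
    half_percent = excluded_fraction / 2.0 * 100.0
    lo = np.percentile(temperature, half_percent)
    hi = np.percentile(temperature, 100.0 - half_percent)
    return lo, hi
```

<ns0:p>The two values returned would then be mapped to the extremes of the chosen color palette when rendering the false-color image.</ns0:p>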
After the function is run, a scatter plot is drawn for each image for which there is reference data available, comparing temperature values of all the pixels, and a text file is saved indicating the mean and maximum temperature differences.</ns0:p></ns0:div> <ns0:div><ns0:head>Comparison to existing tools</ns0:head><ns0:p>IRimage was evaluated by comparing the resulting temperature values with those exported manually using the FLIR Tools software (FLIR Systems, Inc., USA, version 5.13.18031.2002). The test images were first processed with IRimage using the user-defined parameters stored in the image file. After that, each file was opened using FLIR Tools and the temperature values were manually exported in csv format.</ns0:p><ns0:p>Finally, the temperature values for each image were compared using the 'Test' function in IRimage.</ns0:p></ns0:div> <ns0:div><ns0:head>Example use case</ns0:head><ns0:p>Two Buxus sempervirens (L.) plants, grown in soil-filled, 3.5L pots, were placed inside a greenhouse and imaged with an infrared thermal camera (FLIR E40bx, FLIR Systems, USA) every &#8776;6 min. during 24 h., from a distance of 4 m. The camera was triggered automatically by means of a custom-built device <ns0:ref type='bibr' target='#b26'>(Pereyra Irujo, 2019)</ns0:ref>. Air temperature, relative humidity, and incident photosynthetically active radiation were measured every 1 min. with a datalogger (Decagon Em50, Decagon, Pullman, WA, USA). Plants were not watered during the previous 4 days, and before the onset of measurements, 0.5 L water was added to one of them, so that soil water content was raised from &#8776;0.2 to &#8776;0.4 m3/m3.</ns0:p><ns0:p>Normally, the values of user-defined parameters are set in the camera before the images are captured (estimating or measuring current air temperature, humidity, and reflected temperature), and these values are either kept fixed or are updated manually as conditions change. In this case, images were captured with a fixed set of parameters (since capture was automated) and then processed using IRimage by following two different approaches:</ns0:p><ns0:p>1. Fixed parameters: the values for air temperature and relative humidity used were the mean values measured by the weather sensor during the whole measurement period; the reflected temperature was estimated, similarly to the usual procedure (FLIR Systems, 2016), as the mean temperature of a piece of aluminium foil placed in the camera's field of view, measured in the images processed using an emissivity value of 1. These values were entered in the dialog box in the 'Set global parameters for all images' option of the 'Process' function.</ns0:p><ns0:p>2. Variable parameters: in this case, the reflected temperature was estimated for each image, and the air temperature and relative humidity used were those measured by the weather sensor at the time each image was captured. These values were entered in a .csv text file which was then selected through the 'Use parameters from file' option of the 'Process' function.</ns0:p><ns0:p>After processing, the leaf temperature of each of the two plants was measured in all the images. Due to the intricate shape of the plants, a mask image containing the leaf pixels to be measured was created using ImageJ, using a combination of a thresholding algorithm and manual selection.
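<ns0:p>A minimal sketch of this kind of mask-based measurement is shown below (in Python, for illustration only; in the actual workflow the mask is created in ImageJ and the statistics are obtained with the 'Measure' function), where non-zero mask values identify the objects to be measured.</ns0:p>

```python
import numpy as np

def measure_objects(temperature, mask):
    # Mean, minimum, maximum and standard deviation of temperature for
    # every labeled object in an 8-bit mask (0 = background, 1-255 = objects).
    stats = {}
    for label in np.unique(mask):
        if label == 0:
            continue
        values = temperature[mask == label]
        stats[int(label)] = (values.mean(), values.min(),
                             values.max(), values.std())
    return stats
```

<ns0:p>Because the selection is stored as an image, re-running exactly the same measurement on every image of the set, or reproducing it later, reduces to reusing the same mask file.</ns0:p>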
For these measurements, the images were processed using an emissivity parameter of 0.95 <ns0:ref type='bibr' target='#b33'>(Salisbury and Milton, 1988)</ns0:ref>, and a distance between the camera and the objects of 4 meters.</ns0:p><ns0:p>Leaf temperature was used in combination with sensor data for air temperature to calculate the leaf-toair temperature difference (&#8710;T ), a key variable for analyzing the energy balance of the plant <ns0:ref type='bibr' target='#b10'>(Gates, 1964)</ns0:ref>.</ns0:p><ns0:p>The temperature of the air sensor enclosure (also placed in the camera's field of view) was measured in the images as well, using a similar approach and an emissivity value of 0.84 <ns0:ref type='bibr'>(FLIR Systems, 2016)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Comparison to existing tools</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows a sample of the images used, either in their original form (Fig. <ns0:ref type='figure'>2A-C</ns0:ref>) or processed with IRimage (Fig. <ns0:ref type='figure'>2D-F</ns0:ref>), and scatter plots comparing the temperature values for all pixels obtained with IRimage vs. FLIR Tools, either showing the full range of temperatures in the image (Fig. <ns0:ref type='figure'>2G-I</ns0:ref>)</ns0:p><ns0:p>or a 'zoomed-in' version showing a small range of temperatures in detail (Fig. <ns0:ref type='figure'>2J-L</ns0:ref>). The comparison between IRimage and the manufacturer's software showed an average difference of 0.0002&#176;C, and in all cases below 0.01&#176;C, but only when temperature was above -40&#176;C. When temperature was below that value, temperature obtained using FLIR Tools was always equal to -40&#176;C, irrespective of the initial raw values, whereas IRimage showed values that could reach -70&#176;C, as can be noted in Fig. <ns0:ref type='figure'>2I</ns0:ref>. Out of the 26 images analyzed, in only one case the comparison was not possible because the temperature data could not be extracted using FLIR Tools (although it was possible to process it with IRimage). The comparison between IRimage and FLIR Tools for the 25 images that could be analyzed is presented in Fig. <ns0:ref type='figure' target='#fig_0'>S1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Example use case</ns0:head><ns0:p>Mean air temperature within the greenhouse was 30.8&#176;C and mean air relative humidity was 45%, values which were used for the image processing with fixed parameters. However, air temperature ranged from during the day (Fig. <ns0:ref type='figure' target='#fig_4'>4B</ns0:ref>). A heating effect of solar radiation incident on the leaves could be seen early in the morning (reaching 2&#176;C above air temperature), followed by a cooling effect of transpiration in the following hours (reaching 1 to 2&#176;C below air temperature), with fluctuations that follow the changes in the amount of incident solar radiation. Plants also showed differences in leaf temperature between them, but mainly during the day, indicating a restricted transpiration (and therefore less evaporative cooling) in the water-stressed plant (Fig. <ns0:ref type='figure' target='#fig_4'>4B</ns0:ref>).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> shows a small sample of the 247 thermal images captured during the experiment, from selected moments of the day. In Fig. 
<ns0:ref type='figure' target='#fig_6'>5A</ns0:ref>, the color palette represents the same temperature range in all three images, which aids in visualizing the differences in absolute temperature between the night (00 hours), the early morning (08 hours), and the afternoon (14 hours). Figure <ns0:ref type='figure' target='#fig_6'>5B</ns0:ref> shows the same images, but the color palette was set according to the temperature range in each image, which makes the comparison between them more difficult (or even misleading), but allows for a better perception of temperature differences within each image. For instance, the image taken in the afternoon (at 14 hours)</ns0:p><ns0:p>shows the moment in which leaf temperature differs the most between the well-watered (in the left) and </ns0:p></ns0:div> <ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Open source software is ideal for scientific research because it can be freely inspected, modified, and redistributed <ns0:ref type='bibr' target='#b35'>(Schindelin et al., 2015)</ns0:ref>. IRimage is itself open source, and it was also implemented as a plugin of the open source software ImageJ, a widely used scientific image analysis tool which has been considered among the top 'computer codes that transformed science' <ns0:ref type='bibr' target='#b28'>(Perkel, 2021)</ns0:ref>. This not only provides a clear way of knowing the exact steps taken to estimate temperature from raw sensor data, but also allows the researcher to either use the tool through ImageJ's graphical user interface (without requiring any programming knowledge) or to modify, adapt or expand the functionality of the tool using ImageJ's powerful scripting languages <ns0:ref type='bibr' target='#b2'>(Cacciabue et al., 2019)</ns0:ref>. ImageJ (and especially Manuscript to be reviewed</ns0:p><ns0:p>Computer Science its FIJI distribution) provide a large ecosystem of tools with which IRimage can interact, for example, by assembling different processing steps into pipelines using scripting languages: plugins for image transformation, registration, annotation, enhancement, segmentation, visualization, as well as tools for interoperability with other software <ns0:ref type='bibr' target='#b34'>(Schindelin et al., 2012)</ns0:ref>. Also, IRimage has been developed using the simple ImageJ macro language with the explicit aim of encouraging users to modify and contribute to improving the software.</ns0:p><ns0:p>IRimage provides tools for extracting and analyzing temperature data which are compatible with reproducible image handling recommendations <ns0:ref type='bibr' target='#b21'>(Miura and N&#248;rrelykke, 2021)</ns0:ref>, allowing researchers to avoid difficulties which are common when dealing with thermal images obtained with consumer-grade cameras. 
Software provided with these cameras has functions usually limited to temperature measurements of manually selected points or areas in the image (e.g., <ns0:ref type='bibr' target='#b9'>Garc&#237;a-Tejero et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nosrati et al., 2020)</ns0:ref>. When the number of images is large, researchers resort to building custom ad hoc software, which is frequently not available for other researchers to reuse (e.g., <ns0:ref type='bibr' target='#b31'>Razani et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>van Doremalen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Goel et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Mul Fedele et al., 2020)</ns0:ref>, or using elaborate methods to extract temperature values from color data in false-color images (e.g., <ns0:ref type='bibr' target='#b0'>Alpar and Krejcar, 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Petrie et al., 2019)</ns0:ref>. Moreover, in some cases these custom methods use 'optimized' images produced by many consumer-grade cameras, which are meant for visualization and are a result of blending visible and thermal images for improved resolution, making the resulting data prone to errors.</ns0:p><ns0:p>One important aspect to consider when reporting temperature data as thermal images is choosing the right color palette <ns0:ref type='bibr' target='#b3'>(Crameri et al., 2020)</ns0:ref>. Some color palettes used for scientific visualization, such as the popular 'rainbow' palettes, have a non-linear or unintuitive relationship between intensity and the represented value, and are not suitable for color-blind people <ns0:ref type='bibr' target='#b36'>(Thyng et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nu&#241;ez et al., 2018)</ns0:ref>. Another problem of non-linear palettes is that they are not suitable for conversion to black and white, which can be important in journals that publish images in color online but not in their print versions, or when the reader prints a journal article in black and white. The default palette ('mpl-inferno') used in IRimage was chosen so as to avoid those pitfalls, and therefore aid in accurate data interpretation and communication. Other similar palettes exist, also suitable for scientifically representing thermal data, such as 'mpl-magma', 'mpl-plasma' (van der Walt and Smith, 2015), 'CET L-03', 'CET L-08' <ns0:ref type='bibr' target='#b19'>(Kovesi, 2015)</ns0:ref>, or 'cmocean-thermal' <ns0:ref type='bibr' target='#b36'>(Thyng et al., 2016)</ns0:ref>.</ns0:p><ns0:p>The example use case presented in this paper highlights the utility of some of IRimage's functionality, obtaining data that revealed subtle differences in the daily pattern of leaf temperature of two plants that had received more or less irrigation water. Obtaining this data required analyzing 247 thermal images, each of them with different parameters for temperature estimation. This task would have been extremely impractical with standard software, in which each image has to be manually processed.
Also, measurement of leaf temperature of the irregular-shaped plants should have probably had to be simplified to square or oval shapes, and these areas selected manually for each image, which would have increased measurement error (by including background pixels) and prevent later reproduction of those results (if the selected areas are not repeated exactly). Also, the presented example highlights how the usual procedure of setting camera parameters (air temperature, relative humidity and reflected temperature) before capturing a large set of images (especially in fluctuating environmental conditions) can lead to potentially large measurement errors, even when these parameters represent the true average conditions. IRimage allows to efficiently and reproducibly process images using unique parameters for each image, thus obtaining more accurate results.</ns0:p><ns0:p>One of the main drawbacks of IRimage is that it is now limited to only one brand of thermal cameras, since it has been implemented and tested for FLIR brand cameras. Its utility is, however, large enough, since this brand is, by far, the most widely used in biological research: in a systematic literature review of thermography in biology, 61% of papers reported using FLIR cameras (FLIR Systems Inc., USA), followed by NEC (NEC Ltd., Japan) and Fluke (Fluke Corporation, USA), with 7% each, and InfraTec (InfraTec GmbH, Germany) with 4% <ns0:ref type='bibr' target='#b14'>(Harrap et al., 2018)</ns0:ref>. The algorithms are, nevertheless, potentially adaptable for other cameras for which raw sensor data could be obtained. Among the cameras from this brand, IRimage was able to process all the tested models (15), yielding individual pixel values that did not differ from those obtained with the manufacturer's software by more than 0.01&#176;C, and being able to process raw data that was below of the manufacturer's software limit of -40&#176;C. It should be noted that this value only represents the minimal discrepancy between different processing methods, but not the actual measurement error. Measurement errors can arise due to the hardware itself (the specified sensitivity of a thermal camera can be around 0.07&#176;C, with an accuracy of &#177;2&#176;C, e.g., FLIR Systems, 2016) or the measurement technique, and so a periodic factory calibration or testing against a target of known temperature is advised (e.g., <ns0:ref type='bibr' target='#b17'>Klaessens et al., 2017)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Affordable infrared thermal cameras have proven to be suitable for research, especially in low-resource settings, but the use of closed-source, consumer-oriented or custom software for image analysis can limit the throughput, accuracy and reproducibility of the results. IRimage provides open-source, flexible and documented tools for processing, measurement and reporting of thermal imaging data for research purposes in biological and environmental sciences. This tool includes functionalities that make it better suited for research purposes than many currently available alternatives, and could contribute to making affordable consumer-grade thermal cameras useful for reproducible research.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.IRimage's functions. The ImageJ/FIJI 'Plugins' menu containing the 'IRimage' sub-menu and functions, and the main windows and dialog boxes that are shown to the user. 
In any of the functions, the user is first promted to select the folder containing the original thermal images to be processed (not shown). In the 'Process' function, the user is first asked to indicate how parameters for temperature calculation will be determined for each image and, if the user chooses to enter a set of values manually, a second dialog box is shown for the user to do so. In the 'Measure' function, the user is first asked to choose whether to manually select the areas to be measured, or to use a previously saved mask. If the first option is selected, the user first enters the number of areas to measure (not shown) and then a window with the first image of the set is opened so that user can indicate the areas to be measured using any of the available selection tools. In the 'Color' function the user can select the color palette, the contrast level, how to calculate the displayed temperature range, whether to include a temperature scale bar, and whether to produce a video file as output. The 'Test' function does not require additional user input.</ns0:figDesc><ns0:graphic coords='8,150.07,68.46,396.90,501.66' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>the morning to 43&#176;C at around noon, and air relative humidity from 13% during the day to 78% in the night (Fig.3). Reflected temperature (measured in the images as indicated previously) ranged from 21 to 46&#176;C, with a mean value of 32.0&#176;C. Incident solar radiation reached 1868 &#181;mol &#8226; m &#8722;2 &#8226; s &#8722;1 , totalling 25.5 mol &#8226; m &#8722;2 &#8226; d &#8722;1 . Measuring the temperature of the sensor enclosure using the thermal images yielded temperature values similar to those measured with the sensor, as shown in Fig.3B.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Environmental conditions in the greenhouse during the experiment. (A) Incident photosynthetically active radiation (PAR). (B) Air temperature, measured either with the weather sensor (continuous line) or by measuring the sensor enclosure with the thermal camera (dotted line). (C) Air relative humidity. The gray shaded area indicates the night. Leaf temperature, as measured with the thermal camera, seemed to follow air temperature closely, as shown in Fig. 4A. Nevertheless, &#8710;T curves reveal leaf and air temperature differences, especially</ns0:figDesc><ns0:graphic coords='12,212.59,134.32,271.85,336.40' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022) Manuscript to be reviewed Computer Science water-stressed (in the right) plants; that difference is much more discernible in the image shown in 5B. Two videos are included as supplementary material, showing the complete sequence of 247 images, either using a fixed scale according to the full temperature range (Video S1) or a variable scale according to the individual temperature range in each image (Video S2).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Leaf temperature of well-watered and water-stressed plants. (A) Absolute air temperature (grey solid line) and absolute leaf temperature of the well-watered (blue dashed line) and water-stressed (red dashed-dotted line) plants. 
(B) Leaf-to-air temperature difference (&#8710;T ) of the well-watered (blue dashed line) and water-stressed (red dashed-dotted line) plants. The gray shaded area indicates the night.</ns0:figDesc><ns0:graphic coords='13,216.55,124.43,263.93,333.63' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure6shows the different results obtained with either fixed or variable parameters. While temperature values obtained with these two methods seem quite similar in absolute values (Fig.6A-B), &#8710;T curves reveal the effect of parameter selection on the resulting values (Fig.6C-D). When air temperature and humidity within the greenhouse were close to the mean values, both methods returned similar results, but using fixed parameters yielded leaf temperatures values almost 1&#176;C higher or lower when environmental conditions deviated from the average values.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Example thermal images of well-watered (left) and water-stressed (right) plants in different moments of the day. (A) Images taken in the night (00 hours), the early morning (08 hours), and the afternoon (14 hours), and shown using a temperature scale (19-48&#176;C) set according to the global temperature range of the full set of images. (B) The same three images, shown using individually set temperature scales, according to the temperature range in each of them.</ns0:figDesc><ns0:graphic coords='14,150.07,63.78,396.90,254.34' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Comparison of leaf temperature values obtained with either fixed parameters or variable parameters. (A-B) Absolute air temperature (grey solid line), absolute leaf temperature obtained with fixed parameters (dotted red/blue lines), and absolute leaf temperature obtained with variable parameters (solid red/blue lines), in the well-watered (A) and water-stressed (B) plants. (C-D) Leaf-to-air temperature difference (&#8710;T ) obtained with fixed parameters (dotted red/blue lines), and &#8710;T obtained with variable parameters (solid red/blue lines), in the well-watered (C) and water-stressed (D) plants. The gray shaded area indicates the night.</ns0:figDesc><ns0:graphic coords='15,143.19,182.36,410.65,333.63' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,144.48,199.12,408.08,276.21' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>2/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>) 3/17 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. 
reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Data and parameters extracted from FLIR radiometric JPEG images.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Parameter / Variable (Units)</ns0:cell><ns0:cell>Variable name in IRimage</ns0:cell><ns0:cell>Symbol used in equations</ns0:cell><ns0:cell>EXIF tag name in FLIR JPG file</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Sensor data</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Raw sensor signal (DN) *</ns0:cell><ns0:cell>rawSignal DN</ns0:cell><ns0:cell>S</ns0:cell><ns0:cell>Raw Thermal Image</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Calibration / camera-specific parameters</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Raw Thermal Image Type (PNG or TIFF)</ns0:cell><ns0:cell>imageType</ns0:cell><ns0:cell /><ns0:cell>Raw Thermal Image Type</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Camera Model</ns0:cell><ns0:cell>cameraModel</ns0:cell><ns0:cell /><ns0:cell>Camera Model</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensor gain</ns0:cell><ns0:cell>sensorG</ns0:cell><ns0:cell>G</ns0:cell><ns0:cell>Planck R1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensor offset</ns0:cell><ns0:cell>sensorO</ns0:cell><ns0:cell>O</ns0:cell><ns0:cell>Planck O</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensor calibration parameter B</ns0:cell><ns0:cell>sensorB</ns0:cell><ns0:cell>B</ns0:cell><ns0:cell>Planck B</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensor calibration parameter F *</ns0:cell><ns0:cell>sensorF</ns0:cell><ns0:cell /><ns0:cell>Planck F</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensor calibration parameter R</ns0:cell><ns0:cell>sensorR</ns0:cell><ns0:cell>R</ns0:cell><ns0:cell>Planck R2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Atmospheric parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Atmospheric transmissivity param-eter 1</ns0:cell><ns0:cell>atmAlpha1</ns0:cell><ns0:cell>&#945; 1</ns0:cell><ns0:cell>Atmospheric Trans Alpha 1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Atmospheric transmissivity param-eter 2</ns0:cell><ns0:cell>atmAlpha2</ns0:cell><ns0:cell>&#945; 2</ns0:cell><ns0:cell>Atmospheric Trans Alpha 2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Atmospheric transmissivity param-eter 1</ns0:cell><ns0:cell>atmBeta1</ns0:cell><ns0:cell>&#946; 1</ns0:cell><ns0:cell>Atmospheric Trans Beta 1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Atmospheric transmissivity param-eter 2</ns0:cell><ns0:cell>atmBeta2</ns0:cell><ns0:cell>&#946; 2</ns0:cell><ns0:cell>Atmospheric Trans Beta 2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Atmospheric transmissivity param-eter X</ns0:cell><ns0:cell>atmX</ns0:cell><ns0:cell>X</ns0:cell><ns0:cell>Atmospheric Trans X</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>User-selected parameters</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Apparent reflected temperature (&#176;C)</ns0:cell><ns0:cell>appReflTemp C</ns0:cell><ns0:cell /><ns0:cell>Reflected Apparent Temperature</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Air temperature (&#176;C)</ns0:cell><ns0:cell>airTemp C</ns0:cell><ns0:cell>t</ns0:cell><ns0:cell>Atmospheric Temperature</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Object 
emissivity</ns0:cell><ns0:cell>objEmissivity</ns0:cell><ns0:cell>&#949;</ns0:cell><ns0:cell>Emissivity</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Air relative humidity</ns0:cell><ns0:cell cols='2'>airRelHumidity perc RH</ns0:cell><ns0:cell>Relative Humidity</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Object distance from camera</ns0:cell><ns0:cell>objDistance m</ns0:cell><ns0:cell>d</ns0:cell><ns0:cell>Object Distance</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>* This parameter is included in the JPG EXIF tags but it is (usually) equal to 1, and is equivalent to the value of 1 in</ns0:cell></ns0:row><ns0:row><ns0:cell>the term (e</ns0:cell><ns0:cell>B T &#8722; 1) in Eq. 4</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Calculation of derived variables.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Variables derived from parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter / Variable (Units)</ns0:cell><ns0:cell>Variable name in IRimage</ns0:cell><ns0:cell>Symbol used in equations</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw image byte order / endianness</ns0:cell><ns0:cell>byteOrderLittleEndian</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Aparent reflected temperature (K)</ns0:cell><ns0:cell>appReflTemp K</ns0:cell><ns0:cell>T app.re f l</ns0:cell></ns0:row><ns0:row><ns0:cell>Air temperature (K)</ns0:cell><ns0:cell>airTemp K</ns0:cell><ns0:cell>T atm</ns0:cell></ns0:row><ns0:row><ns0:cell>Air water content</ns0:cell><ns0:cell>airWaterContent</ns0:cell><ns0:cell>H</ns0:cell></ns0:row><ns0:row><ns0:cell>Atmospheric transmissivity</ns0:cell><ns0:cell>atmTau</ns0:cell><ns0:cell>&#964;</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw signal from atmosphere (DN)</ns0:cell><ns0:cell>atmRawSignal DN</ns0:cell><ns0:cell>S atm</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw signal from reflected radiation (DN)</ns0:cell><ns0:cell>reflRawSignal DN</ns0:cell><ns0:cell>S re f l</ns0:cell></ns0:row><ns0:row><ns0:cell>Temperature calculation.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Variables used for temperature calculation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter / Variable (Units</ns0:cell><ns0:cell>Variable name in IRimage</ns0:cell><ns0:cell>Symbol used in equations</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw sensor signal (DN) *</ns0:cell><ns0:cell>rawSignal DN</ns0:cell><ns0:cell>S</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw signal from object (DN)</ns0:cell><ns0:cell>objRawSignal DN</ns0:cell><ns0:cell>S ob j</ns0:cell></ns0:row><ns0:row><ns0:cell>Object temperature (&#176;C)</ns0:cell><ns0:cell>objTemp C</ns0:cell><ns0:cell>T ob j</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='6'>/17 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67901:1:0:NEW 24 Feb 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
"REF: PeerJ Computer Science MS # 67901 - “IRimage: Open source software for processing images from infrared thermal cameras” Feb. 24th, 2022 Dr. Alex James Academic Editor PeerJ Computer Science Dear Editor, I would like to thank the reviewers for their useful and thoughtful comments, and the Editor for the possibility to re-submit a revised manuscript. I have edited the manuscript to address the concerns raised by the Reviewers. Attached is a detailed response to the Reviewers’ comments and a description of the changes made. I believe that the manuscript is now suitable for publication in PeerJ Computer Science Sincerely, Dr. Gustavo Pereyra Irujo INTA-CONICET Reviewer 1 Basic reporting This contribution is about processing of thermal images. In general, this is a complex problem, users usually want to determine the temperature of the object, but thermal cameras measure the thermal radiation of the thin outer layer of the object. The English is ok, but I am not native speaker, and I am not relevant to evaluate English. Literature review is sufficient, the article structure is appropriate. As I am first time as reviewer in this journal, it is a little bit difficult for me to find additional information. Experimental design The research should be supported by a physical overview. The reader appreciated that. You write: the theoretical background of the algorithm used in IRimage and how it was implemented is included in this paper as Article S1 First time I downloaded only the main text.only and I didn’t find more information about physical overview. I don’t find in this paper Article S1 (I find it later), why it is divided on more parts? Maybe it's complicated from the publisher This article (main text) is basic introduction to your system (software) with case projects, but it seems so inappropriate to me as a scientific paper for journal. Only some sentences and reference to S1.are In material and methods, and so the article looks just like a guide to the software and usage examples. However, this research is certainly interesting, usable, and valuable. Validity of the findings The article presents usable software for processing thermal images. Cheap thermal cameras only output to jpg and do not give raw data, this is an example of thermal cameras on a DJI Mavic drone, for example, which complicates scientific use. Additional comments General comments: this article summarizes the state of the art of using low-cost thermal cameras and gives an opportunity to process data from these instruments based on scientific research. Research and information are useful for practice. I recommend the article for publication. But I have a more fundamental note to the overall text: why is the theoretical part outside the article? I would like to at least welcome the extension to essential physical information and fundaments in the Materials and methods section. I appreciate the Reviewer’s comments on the manuscript. I agree with the concern raised by the Reviewer regarding the theoretical background not being part of the main text. As suggested, I included the information from Supplemental Article S.1 in the Materials and methods section in the revised version of the manuscript (lines 75-172). Reviewer 2 Basic reporting This paper describes the functionality of the open source software IRimage which can be used for processing raw thermal images taken with several cameras of one brand FLIR and can potentially be used for other brand cameras. 
Some example results are presented on 14 thermal images obtained from a (non-verified) public source of thermal images ( Wikimedia ). In addition, an example use case of monitoring the temperature of plant leaves is presented of well and non-well hydrated plants during 24 cycles where many parameters vary in time. The paper is clearly written in professional English language. The introduction give a good background in support of the need and usability for the software IRimage presented. Experimental design The author states that in this paper the software is validated against standard software. The reviewer does not agree that the method used can be considered as scientifically validation procedure using some sample images of a non-verified source and presenting the results in graphs at large scales (e.g. figure 2) were a difference of even 5 degree cannot be distinguished. In the example use case, too many variables have an influence on the measured/calculated temperature to prove the accuracy of the software. The details from the case presented distract from the purpose of this paper to present the IRimage software and could be a paper by itself. For a true validation process, the thermal images should be taken with different cameras of the same object under controlled conditions. Not having all these camera available, the reviewer understand that this is a difficult task. However, with only 3 different cameras, the author would already present the validity of the software. As example case, it would be more convincing to have only 1 or 2 variables for the temperature changes in time compared to a temperature reference e.g. a black body source temperature controlled by a current (temperature phantom). The case with the leaf temperature presented is nice as example for the many processing possibilities but cannot be claimed as a confirmation for accuracy. Validity of the findings If the author would adapt the text by stating some examples are presented of processing some random thermal images using IRimage it would be acceptable. Still the temperature comparison graphs like in figure 2 need to be presented zoomed in on the temperature scale of e.g. 10 degree so the accuracy within 0.1 degree can be distinguished. Additional comments The reviewer is convinced the software IRimage will be well received by the scientific community as a useful tool to process raw data from thermal cameras as long as this data is accessible. It would be great if this would also be used for small thermal imagers like the FLIR-One. This paper gives a good introduction in the potentials of the software. However, a scientific validation process is needed in which other researcher could contribute. The paper should be adapted not claiming the software has been validated by the examples presented. The example case could be presented with far less detail or replaced by a more simple case as suggested. I appreciate the Reviewer’s comments on the manuscript. Following is the detail of how these were addressed: - Regarding the “validation process”: I agree with the Reviewer that a complete validation of the measurement process should not only be limited to the image processing, but also should evaluate the accuracy of the hardware against a reference sensor, using a setup as that mentioned by the Reviewer. The evaluation of the harware (i.e., the thermal camera) is, however, outside of the scope of this manuscript. 
The aim of that section of the manuscript is to compare the results of the new software tool (i.e., IRimage) to those of an existing tool (i.e., FLIR Tools), a requirement stated in the “bioinformatics software tools” section in PeerJ’s Author Instructions (https://peerj.com/about/policies-and-procedures/#discipline-standards). As suggested by the Reviewer, the manuscript has been adapted so as not to claim the software has been “validated”. All references to “validation” have been removed from the text, figures, supplemental information, and the online software repository, and the revised version of the manuscript now only refers to a “comparison to existing tools”. Also, a comment was added in the Discussion section to clarify the distinction between the evaluation of the processing software and the overall measurement accuracy (lines 393-398 in the revised manuscript, lines 396-401 in the version with tracked changes). - Regarding the example use case: The Reviewer correctly indicates that this case “cannot be claimed as a confirmation for accuracy”. The evaluation of the software was done by comparison to existing software (as described above), while the example case was included to show the utility of IRimage (as mentioned in the manuscript, lines 66-67), focusing on the functionalities that distinguish IRimage from existing software. The Reviewer also indicates that “The details from the case presented distract from the purpose of this paper to present the IRimage software”. I respectfully disagree with the Reviewer in this case, since I believe that highlighting the utility of IRimage’s functions is important for presenting the IRimage software, and thus serves the purpose of this paper. - Regarding the clarity of Figure 2: While the plots presented in Figure 2 allow to visualize the complete range of temperatures in the image, I agree with the Reviewer that it is not possible to distinguish small temperature differences between the images processed with each software tool. To address this issue, this figure was modified by including another set of “zoomed-in” plots in which the scale is limited to a small range (0.2°C) of the temperatures found in the image, thus clearly showing the minimal differences between the results obtained with both processing tools. - The Reviewer also mentions that “It would be great if this would also be used for small thermal imagers like the FLIR-One”. Even though the set of images used in this work did not include any image taken with this popular smartphone camera, an early version of IRimage was indeed used to process images from this model. This work was cited in the manuscript, but the camera model was not mentioned. This was clarified in the revised version of the manuscript (line 63), so that readers are aware that IRimage is also useful for this kind of camera. "
Here is a paper. Please give your review comments after reading it.
421
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>The pervasive adoption of GPS enabled sensors has lead to an explosion on the amount of geolocated data that captures a wide range of social interactions. Part of this data can be conceptualized as event data, characterized by a single point signal at a given location and time. Event data has been used for several purposes such as anomaly detection and land use extraction, among others. To unlock the potential offered by the granularity of this new sources of data it is necessary to develop new analytical tools stemming from the intersection of computational science and geographical analysis. Our approach is to link the geographical concept of hierarchical scale structures with density based clustering in databases with noise to establish a common framework for the detection of Crowd Activity hierarchical structures in geographic point data. Our contribution is threefold: first, we develop a tool to generate synthetic data according to a distribution commonly found on geographic event data sets; second, we propose an improvement of the available methods for automatic parameter selection in Density Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm that allows its iterative application to uncover hierarchical scale structures on event databases and, lastly, we propose a framework for the evaluation of different algorithms to extract hierarchical scale structures. Our results show that our approach is successful both as a general framework for the comparison of Crowd Activity detection algorithms and, in the case of our automatic DBSCAN parameter selection algorithm, as a novel approach to uncover hierarchical structures in geographic point data sets.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Spatio-Temporal analysis is a rapidly growing field within Geographical Information Science (GIScience).</ns0:p><ns0:p>The rate of increase in the amount of information gathered every day, the pervasiveness of Global Positioning System (GPS) enabled sensors, mobile phones, social networks and the Internet of Things (IoT), demand for robust and efficient analysis techniques that can help us find meaningful insights from large spatio-temporal databases. This diversity of digital footprints can be aggregated and analyzed to reveal significant emerging patterns <ns0:ref type='bibr' target='#b3'>(Arribas-Bel, 2014)</ns0:ref>, but its accidental nature, produced as a sideeffect of the daily operations of individuals, government agencies and corporations, and not according to scientific criteria, calls for new analysis methods and theoretical approaches <ns0:ref type='bibr' target='#b50'>(Zhu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Kitchin, 2013;</ns0:ref><ns0:ref type='bibr' target='#b3'>Arribas-Bel, 2014;</ns0:ref><ns0:ref type='bibr' target='#b30'>Liu et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Within these new data sources, of special interest are those that can be characterized as spatio-temporal events data <ns0:ref type='bibr' target='#b22'>(Kisilevich et al., 2010)</ns0:ref>, observations consisting of points in space and time and possibly with attribute information associated. Geo Social Media messages, cell phone calls, emergency services (911 reports), public service reports and criminal investigations are examples of this kind of data. 
Event data is linked to the social patterns of activity: it represents breadcrumbs that, when aggregated, can help us understand the underlying dynamics of the population. The production of this kind of event data is mediated by the activities people are undertaking and the geographic structure of space <ns0:ref type='bibr' target='#b19'>(Jiang and Ren, 2019)</ns0:ref>.</ns0:p><ns0:p>The granularity of event data allows the researcher to generate arbitrary aggregations and analyze the data with different zoning schemes and scales <ns0:ref type='bibr' target='#b38'>(Robertson and Feick, 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Zhu et al., 2017)</ns0:ref>. However, this freedom to build arbitrary aggregations comes at a cost; for example, the Modifiable Areal Unit Problem (MAUP) <ns0:ref type='bibr' target='#b34'>(Openshaw, 1984)</ns0:ref> links the results of analysis to the specific system of scales and zones used.</ns0:p><ns0:p>Another closely related but different problem is the Uncertain Point Observation Problem <ns0:ref type='bibr' target='#b38'>(Robertson and Feick, 2018)</ns0:ref>, which refers to the uncertainty in the assignation of a point observation to a given contextual area. As <ns0:ref type='bibr' target='#b48'>Wolf et al. (2021)</ns0:ref> point out, there have been important analytic developments to tackle the issues associated with MAUP, but these developments represent empirical answers to a problem that, as has been evident in the work of <ns0:ref type='bibr' target='#b38'>Robertson and Feick (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b25'>Kwan (2012)</ns0:ref>, is in reality theoretically oriented: solving the MAUP through the development of Optimal Zoning Schemes <ns0:ref type='bibr' target='#b6'>(Bradley et al., 2017)</ns0:ref> does not automatically relate those zones to any geographically significant process or structure.</ns0:p><ns0:p>From a Computer Science perspective, the problem of aggregating individual observations into a system of zones has been tackled mainly through clustering algorithms <ns0:ref type='bibr' target='#b22'>(Kisilevich et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b13'>Frias-Martinez et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b12'>Frias-Martinez and Frias-Martinez, 2014;</ns0:ref><ns0:ref type='bibr' target='#b21'>Khan and Shahzamal, 2020;</ns0:ref><ns0:ref type='bibr' target='#b29'>Liao et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b44'>Steiger et al., 2016)</ns0:ref>. Clustering individual observations is, in the language of <ns0:ref type='bibr' target='#b38'>Robertson and Feick (2018)</ns0:ref>, an assignation of points into areal support, and as such it implicitly involves a conceptualization of how the individual behaviors are structured to produce the patterns revealed by clusterization. In this sense, these data-driven algorithms can be thought of as belonging to the same empirical family of methods as Optimal Zoning Algorithms, lacking theoretical support.
As argued by O'Sullivan (2017); O' <ns0:ref type='bibr'>Sullivan and Manson (2015)</ns0:ref>, this lack of grounding on formal geographic knowledge can often lead to spurious or irrelevant conclusions and, in general, hinder the advance of knowledge and the exploitation of new data sources for geographic analysis <ns0:ref type='bibr' target='#b3'>(Arribas-Bel, 2014)</ns0:ref>.</ns0:p><ns0:p>One avenue of research, proposed in <ns0:ref type='bibr' target='#b42'>Singleton and Arribas-Bel (2021)</ns0:ref>, to tackle the problem of the use of data driven algorithms for the development of sound geographic analysis is to develop explicitly spacial algorithms that exploit our knowledge about the processes and structures that organize the spatial activity patterns of society. In our work, we tackle the problem of developing an algorithm that, through the theoretical concept of hierarchical scales, is able to detect patterns that are geographically relevant and not only data driven.</ns0:p><ns0:p>Our contribution is hence threefold. First, since there is often no ground truth available to compare different clustering algorithms on geographical events data, we developed an algorithm to generate synthetic hierarchically structured data; second, we developed an algorithm for the automatic selection of the &#949; parameter in DBSCAN that allows its use for the detection of density based hierarchical structures in geographic point data; and third, we propose a framework to compare the performance of different clustering algorithms. Our work sits at the intersection of computer science and geography, proving that approaching data driven problems from a theory oriented perspective, provides robust analysis frameworks.</ns0:p><ns0:p>The rest of the paper is organized as follows. In Section 2 we present the problem of detecting hierarchical crowd activity structures in geographic events databases; in Section 3, we describe the algorithm to generate synthetic samples to test different algorithms; in Section 4 we describe the algorithms we are going to use; Section 5 describes the experimental setting used to compare different clustering algorithms; Section 6 describes the metrics used to evaluate the performance of the clustering methods; in Section 7 we present our main results and finally in Section 8 we conclude and propose further research.</ns0:p></ns0:div> <ns0:div><ns0:head n='2'>DETECTION OF CROWD ACTIVITY SCALES AND ZONES</ns0:head><ns0:p>In the context of spatio-temporal events, as described by <ns0:ref type='bibr' target='#b22'>Kisilevich et al. (2010)</ns0:ref>, we are going to refer as Crowd Activity to the collective aggregated patterns observed in some spatio-temporal events data sets, specially in data describing some aspect of the behavior of human populations. Although not formally defined, this concept underpins most of the work that we are going to review in the rest of this section.</ns0:p><ns0:p>There is a substantial body of work on techniques for event detection using the geolocated Twitter feed (in this context, event refers to real world occurrences that unfold over space and time, which is different to the use of the term on spatio-temporal databases). <ns0:ref type='bibr' target='#b4'>Atefeh and Khreich (2015)</ns0:ref> present a survey of such techniques. 
Within this field, we are especially interested in works that use only the spatio-temporal signature of events and don't rely on attribute or content information (such as the content of Twitter messages), because this renders the methods easily translatable across different databases. Along this line, in <ns0:ref type='bibr' target='#b26'>Lee et al. (2011)</ns0:ref> geolocated Twitter messages are first clustered, and each cluster is then characterized by the number of users, the amount of messages and a measure of the mobility of the users in each cluster.</ns0:p><ns0:p>Another interesting research avenue is the detection of land use by accounting for the spatio-temporal signature of events data sets. For example, in Frias-Martinez and Frias-Martinez (2014), the authors develop a technique to extract, through Self Organized Maps <ns0:ref type='bibr' target='#b24'>(Kohonen, 1990)</ns0:ref> and spectral clustering, different land uses in urban zones; while <ns0:ref type='bibr' target='#b28'>Lenormand et al. (2015)</ns0:ref> use a functional, network-based approach to detect land use through cell phone records. Along the same line, <ns0:ref type='bibr' target='#b27'>Lee et al. (2012)</ns0:ref> develop a technique to extract significant crowd behavioural patterns and through them generate a characterization of the urban environment.</ns0:p><ns0:p>From this brief review we can infer some generalities involved in the extraction of Crowd Activity patterns from spatio-temporal events data:</ns0:p><ns0:p>&#8226; Time is segmented in intervals and the definition of these intervals is arbitrary and not extracted from the data. In Frias-Martinez and Frias-Martinez (2014), each day is divided into 20-minute intervals; in <ns0:ref type='bibr' target='#b26'>Lee et al. (2011)</ns0:ref>, each day is segmented into four six-hour intervals, while in <ns0:ref type='bibr' target='#b28'>Lenormand et al. (2015)</ns0:ref> each day is divided into hourly intervals.</ns0:p><ns0:p>&#8226; The geographic space is partitioned into a single-scale tessellation of space.</ns0:p><ns0:p>Our work will focus on the second general characteristic: the way in which the space is partitioned to obtain Crowd Activity zones. In the reviewed works, the partitioning algorithm returns a single-scale tessellation around the cluster centroids identified. This partition reflects the differences in point density across the whole space but, since it is flat (it has a single scale), it cannot represent the structures found at different scales; this means that such partitions mix the whole range of scales of the underlying data generating processes into a single tessellation.</ns0:p><ns0:p>However, when addressing Crowd Activity patterns from a geographic perspective, the issue of scale is evident: the underlying processes that generate the observed spatio-temporal distribution of events are organized as a hierarchy of scales, closely related to the urban fabric <ns0:ref type='bibr' target='#b17'>(Jiang and Miao, 2015a;</ns0:ref><ns0:ref type='bibr' target='#b1'>Arcaute et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b46'>van Meeteren and Poorthuis, 2018)</ns0:ref>. This suggests the use of techniques for detecting Crowd Activity zones that explicitly incorporate the concept of hierarchical scales.
Along this line of work, in <ns0:ref type='bibr' target='#b19'>Jiang and Ren (2019)</ns0:ref> the authors prove that a hierarchical structure, based on the Natural Cities algorithm <ns0:ref type='bibr' target='#b17'>(Jiang and Miao, 2015a)</ns0:ref>, is able to predict the location of Twitter messages, while in L&#243;pez-Ram&#237;rez et al.</ns0:p><ns0:p>(2019), the authors use a hierarchical structure to aggregate individual Twitter messages into geographical documents as input to Latent Topic Models. This demonstrates the need to develop further methods to extract hierarchical Crowd Activity patterns from spatio-temporal events data.</ns0:p><ns0:p>Conceptually, hierarchical Crowd Activity detection algorithms are similar to hierarchical clustering algorithms. Crowd Activity is characterized by arbitrary shaped agglomerations in space with potentially noisy samples, so the computational task is similar to the one tackled by DBSCAN <ns0:ref type='bibr' target='#b11'>(Ester et al., 1996)</ns0:ref>. The main difference is that in the case of hierarchical Crowd Activity detection, one needs to find structures within structures to uncover the whole hierarchical structure. Notice that this problem is different from the problem of hierarchical clustering as approached by <ns0:ref type='bibr'>HDBSCAN (McInnes et al., 2017)</ns0:ref> or OPTICS <ns0:ref type='bibr' target='#b0'>(Ankerst et al., 1999)</ns0:ref>, where the task is to find the most significative structures present in a data set, this will be further explained in Section 4.</ns0:p><ns0:p>A central problem to the development of algorithms to detect hierarchical Crowd Activity patterns is the lack of ground truth to test the results. For example, in <ns0:ref type='bibr' target='#b1'>Arcaute et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Jiang and Ren (2019)</ns0:ref>, the authors are interested in providing alternative, tractable, definitions of cities and perform only qualitative comparisons with available data. In <ns0:ref type='bibr' target='#b32'>L&#243;pez-Ram&#237;rez et al. (2019)</ns0:ref>, the authors extract regular activity patterns from the geolocated Twitter feed and also perform only a qualitative comparison to the known urban activity patterns. To provide an alternative that allows the quantitative evaluation of different algorithms to detect hierarchical Crowd Activity patterns, in the next section we describe an algorithm to generate synthetic data that aims to reproduce the most important characteristics found in real world event data.</ns0:p></ns0:div> <ns0:div><ns0:head n='3'>SYNTHETIC DATA</ns0:head><ns0:p>In order to test different hierarchical Crowd Activity detection algorithms, we developed a tool to generate synthetic data. The algorithm creates and populates a hierarchical cluster structure that reproduces the main characteristics of the structures we described in the previous sections. Our synthetic data generator creates a hierarchical structure by first creating a cluster tree where every node represents a cluster of points within a region delimited by a random polygon, within this polygon, a random number of clusters are generated, this procedure is carried on iteratively. For every level in the hierarchy we fill the space with noise points. 
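As a concrete illustration, the following is a minimal Python sketch of this recursive construction; it uses rectangular sub-regions in place of the random polygons and illustrative parameter choices, so it is a simplified sketch of the idea rather than the published implementation (Salazar et al., 2021).

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
cluster_ids = itertools.count()          # running id for every generated cluster

def generate_level(bbox, depth, max_depth, n_children=(2, 5),
                   pts_per_cluster=200, noise_ratio=0.1):
    """Recursively populate a rectangular region with nested clusters and noise.

    Returns rows of (x, y, level, cluster_id); cluster_id == -1 marks noise.
    Rectangular sub-regions stand in for the random polygons of the published
    generator, so this is only a simplified sketch of the construction.
    """
    xmin, ymin, xmax, ymax = bbox
    rows = []

    # Background noise filling the region at this hierarchical level.
    n_noise = int(pts_per_cluster * noise_ratio)
    for x, y in rng.uniform((xmin, ymin), (xmax, ymax), size=(n_noise, 2)):
        rows.append((x, y, depth, -1))

    if depth == max_depth:
        # Leaf: a dense blob of points around the centre of the region.
        cid = next(cluster_ids)
        centre = ((xmin + xmax) / 2, (ymin + ymax) / 2)
        blob = rng.normal(centre, (xmax - xmin) / 8, size=(pts_per_cluster, 2))
        rows.extend((x, y, depth, cid) for x, y in blob)
        return rows

    # Internal node: spawn a random number of child regions (sub-clusters).
    for _ in range(rng.integers(*n_children)):
        w, h = (xmax - xmin) / 3, (ymax - ymin) / 3
        x0, y0 = rng.uniform((xmin, ymin), (xmax - w, ymax - h))
        rows.extend(generate_level((x0, y0, x0 + w, y0 + h), depth + 1,
                                   max_depth, n_children,
                                   pts_per_cluster, noise_ratio))
    return rows

sample = np.array(generate_level((0.0, 0.0, 100.0, 100.0), depth=0, max_depth=3))
print(sample.shape)   # (n_points, 4): x, y, level, cluster_id
```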
Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows an example of the tree structure generated by our algorithm as well as the polygons and points.</ns0:p><ns0:p>Geographic distributions that exhibit hierarchical structure have a characteristic heavy-tailed size distribution <ns0:ref type='bibr' target='#b16'>(Jiang, 2013;</ns0:ref><ns0:ref type='bibr' target='#b1'>Arcaute et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Jiang and Ren, 2019)</ns0:ref>, having many more small objects than large. To show that our synthetic data exhibits this same property, we perform a Delaunay triangulation with the points as vertex, then obtain the lengths of the edges and sort them in descending order. We then proceed to select the length values larger than the mean (the Head) and the values smaller than the mean (the Tail), keeping only the latter in order to perform the Head-Tails break described in <ns0:ref type='bibr' target='#b16'>Jiang (2013)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows three iterations of the Head-Tails break, each of them clearly exhibiting the heavy-tailed distribution characteristic of hierarchical scales. For each iteration we also calculate the HT-index <ns0:ref type='bibr' target='#b20'>(Jiang and Yin, 2014)</ns0:ref> and show that it corresponds with the level in the cluster tree as expected. To facilitate research with our synthetic data generator, we developed a series of helper tools. For example, we include a tool to easily obtain the tags (to which cluster and level a point belongs) for every point and another tool to generate visualizations of the whole structure such as shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p><ns0:p>Our synthetic data can be used to test different clustering or aggregation algorithms in terms of their ability to detect the hierarchical structure in geographic point data sets. To allow the reproductibility of our research and make our methods available to other researchers, we published all our algorithms in a publicly available library in <ns0:ref type='bibr' target='#b39'>Salazar et al. (2021)</ns0:ref>.</ns0:p><ns0:p>Our clusters tree data structure can also be used to tag hierarchical clusters structures obtained by different algorithms. Instead of generating synthetic data, we can pass a hierarchical clusterization on a points sample to create a cluster tree by using the points and the clustering labels. Our helper tools simplify the process of comparison between different hierarchical Crowd Activity detection algorithms.</ns0:p></ns0:div> <ns0:div><ns0:head n='4'>CLUSTERING ALGORITHMS</ns0:head><ns0:p>There are several clustering algorithms available in the literature. For this research, based on the geographic considerations described in Section 2, we will focus on density based algorithms. This family of unsupervised learning algorithms identify distinctive clusters based on the idea that a cluster is a contiguous region with a high point density. 
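As a minimal illustration of this idea, scikit-learn's DBSCAN (the implementation later used in Section 5) separates dense regions from background noise, labelling noise points with −1; the toy data below are an assumption for illustration only, not our synthetic generator.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Two dense regions plus uniform background noise.
blobs, _ = make_blobs(n_samples=300, centers=[(0, 0), (10, 10)],
                      cluster_std=0.8, random_state=0)
noise = np.random.default_rng(0).uniform(-5, 15, size=(60, 2))
X = np.vstack([blobs, noise])

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("points labelled as noise:", int(np.sum(labels == -1)))
```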
Of special importance for our research is the ability of density based clustering algorithms to distinguish between cluster points and noise <ns0:ref type='bibr' target='#b11'>(Ester et al., 1996)</ns0:ref>.</ns0:p><ns0:p>The result of a clustering algorithm strongly depends on parameter selection, thus, a common goal in the Computer Science literature has been the development of algorithms that use the least amount of free parameters, this has lead to the development of algorithms like OPTICS <ns0:ref type='bibr' target='#b0'>(Ankerst et al., 1999)</ns0:ref> and HDBSCAN <ns0:ref type='bibr' target='#b8'>(Campello et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b33'>McInnes et al., 2017)</ns0:ref> that can detect hierarchical cluster structures with very little free parameters, this allows the automatic extraction of patterns from data without input from the researcher.</ns0:p><ns0:p>Hierarchical clustering algorithms focus on the task of detecting the most relevant density structures regardless of the scale. To illustrate this point, in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> Closely related to density based clustering, we have algorithms that extract hierarchical structures in point data by imposing thresholds to the distance between points. Hierarchical Percolation <ns0:ref type='bibr' target='#b1'>(Arcaute et al., 2016)</ns0:ref> and Natural Cities <ns0:ref type='bibr' target='#b18'>(Jiang and Miao, 2015b)</ns0:ref> are examples of this kind of algorithms. The goal in this case is to explicitly extract the hierarchical structure implied by a heavy tailed distance distribution.</ns0:p><ns0:p>In the following Sections we present a brief description of the clustering and hierarchical scales extraction algorithms we are going to compare.</ns0:p></ns0:div> <ns0:div><ns0:head>5/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:ref> Manuscript to be reviewed </ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head n='4.1'>Natural Cities</ns0:head><ns0:p>The Natural Cities algorithm proposed in <ns0:ref type='bibr' target='#b18'>Jiang and Miao (2015b)</ns0:ref> looks to objectively define and delineate human settlements or activities at different scales from large amounts of geographic information. The algorithm exploits the heavy tailed distribution of sizes present in geographic data and uses Head-Tails breaks <ns0:ref type='bibr' target='#b16'>(Jiang, 2013)</ns0:ref> to iteratively extract the structures at successive scales. The algorithm can be summarized as follows:</ns0:p><ns0:p>&#8226; Use all the points and generate a Triangulated Irregular Network (TIN) using the Delaunay triangulation algorithm.</ns0:p><ns0:p>&#8226; Extract the edges of the triangulation and obtain their lengths.</ns0:p><ns0:p>&#8226; Calculate the average length.</ns0:p><ns0:p>&#8226; Use the average as a threshold to split the edges in two categories, the Head with those segments larger than the mean, and the Tail, with the segments smaller than the mean.</ns0:p></ns0:div> <ns0:div><ns0:head>6/25</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>&#8226; Remove the Head.</ns0:p><ns0:p>&#8226; Generate the least amount of continuous polygons from the union of Tail edges.</ns0:p><ns0:p>&#8226; The points that are inside the obtained polygons are kept and the ones outside are consider noise.</ns0:p><ns0:p>&#8226; Repeat the procedure iteratively for each resulting polygon until the size distribution does not resemble a heavy tailed distribution.</ns0:p><ns0:p>At the end of the procedure, a hierarchical structure is obtained that contains structures within structures. This algorithms has been used to examine the spatial structure of road networks and their relation to Social Media messages <ns0:ref type='bibr' target='#b19'>(Jiang and Ren, 2019)</ns0:ref> and the evolution of Natural Cities from geo-tagged Social Media <ns0:ref type='bibr' target='#b17'>(Jiang and Miao, 2015a)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2'>DBSCAN</ns0:head><ns0:p>DBSCAN is an algorithm designed to detect arbitrarily shaped density clusters in databases with noise <ns0:ref type='bibr' target='#b11'>(Ester et al., 1996)</ns0:ref>. In the following we provide a brief description of the algorithm along with its core assumptions. The definitions established here will serve as basis to the description of OPTICS (Section 4.3), HDBSCAN (Section 4.4) and Adaptative DBSCAN (Section 4.5).</ns0:p><ns0:p>DBSCAN uses a threshold distance (epsilon distance &#949;) and a minimum number of points minPts as initial parameters. From this, DBSCAN makes the following basic definitions:</ns0:p><ns0:p>Definition (Neighborhood). Let p &#8712; P be a point in the data set P and &#949; &gt; 0 a threshold distance, then the epsilon-neighborhood of p is</ns0:p><ns0:formula xml:id='formula_0'>N &#949; (p) = {x|d(x, p) &lt; &#949;} for x &#8712; P Definition (Reachable). A point q &#8712; P is a reachable point from a point p &#8712; P if there is a path of points {p 1 , . . . p n &#8722; 1, q} &#8834; P such that the distance between consecutive points is less than &#949; (d(p i , p i+1 ) &lt; &#949;).</ns0:formula><ns0:p>Definition (Core). Let P be a set of points, a point p is a core point of D with respect to &#949; and minPts if</ns0:p><ns0:formula xml:id='formula_1'>|N &#949; (p)| &#8805; minPts where N &#949; &#8834; P.</ns0:formula><ns0:p>Definition (Density reachable). A point p is density reachable from a point q with respect of &#949; and minPts within a set of points P if there is a path {p 1 , . . . p n &#8722; 1, q} &#8834; P such that p i &#8712; P are core points of</ns0:p><ns0:formula xml:id='formula_2'>P and p i+1 &#8712; N &#949; (p i ).</ns0:formula><ns0:p>Definition (Density connected). A point p is density connected to a point q with respect to &#949; and minPts if there is a point o such that both, p and q are density reachable from o with respect to &#949; and minPts.</ns0:p><ns0:p>Definition (Cluster). A cluster C &#8834; P with respect to &#949; and minPts is a non empty subset of P that satisfies the following conditions:</ns0:p><ns0:p>1. &#8704;p, q &#8712; P : if p &#8712; C and q is density connected from p with respect to &#949; and minPts, then q &#8712; C.</ns0:p><ns0:p>2. &#8704;p, q &#8712; C p is density connected to q with respect to &#949; and minPts.</ns0:p><ns0:p>Definition (Noise). Let C 1 , . . . 
,C k be the clusters of a set of points P with respect to &#949; and minPts. The noise is defined as the set of points in P that do not belong to any cluster C i .</ns0:p><ns0:p>Using the above definitions we can summarize the algorithm to cluster a set of points P as:</ns0:p><ns0:p>&#8226; Consider the set P C = {p 1 , . . . p m } &#8834; P of core points of P with respect to &#949; and minPts.</ns0:p><ns0:p>&#8226; For a point p &#8712; P C take all the density connected points as part of the same cluster, remove these points from P and P C .</ns0:p><ns0:p>&#8226; Repeat the step above until P C is empty.</ns0:p><ns0:p>&#8226; If a point can not be reached from any core point is considered as noise. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In order to fit our aim of detecting structures within structures, we apply DBSCAN recursively to each of the discovered clusters. It is important to consider that the &#949; parameter in DBSCAN is related to the relative point density of noise with respect to clusters, this means that in each recursive application one needs to set an appropriate value for &#949;; preserving the same value across iterations would not uncover density structures within clusters since those structures would, by definition, be of higher density than the encompassing cluster. In L&#243;pez-Ram&#237;rez et al. ( <ns0:ref type='formula'>2018</ns0:ref>), &#949; is decreased by a constant arbitrary factor on each iteration, in Section 4.5 we propose a novel method for automatically selecting &#949; for each cluster and iteration that overcomes this limitation.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3'>OPTICS</ns0:head><ns0:p>The OPTICS algorithm is fully described in <ns0:ref type='bibr' target='#b0'>Ankerst et al. (1999)</ns0:ref>, here we will make a brief presentation of its main characteristics, adapted to our research problem. OPTICS works in a similar fashion to DBSCAN and can be seen as an extension of it. The algorithm uses the notion of density-based cluster ordering to extract the corresponding density-based cluster for each point. OPTICS extends the definitions used in DBSCAN by adding:</ns0:p><ns0:p>Definition (Core distance). The core distance (d core (p, &#949;, minPts)) of a point p with respect to &#949; and minPts, is the smallest distance &#949; &#8242; &lt; &#949; such that p is considered a core point with respect to minPts and</ns0:p><ns0:formula xml:id='formula_3'>&#949; &#8242; . If |N &#949; (p)| &lt; minPts then the core distance is undefined.</ns0:formula><ns0:p>Definition (Reachability distance). Let p and o be points in a set P and take</ns0:p><ns0:formula xml:id='formula_4'>N &#949; (o). The reachability- distance (reach &#8722; dist (&#949;,minpts) (p, o)) is undefined if |N &#949; (o)| &lt; minPts, else is max(d core (p, &#949;, minPts), d(p, o)).</ns0:formula><ns0:p>Let P be a set of points, and set values for &#949; and minPts. 
The OPTICS algorithm can be summarized as following:</ns0:p><ns0:p>&#8226; For each point p &#8712; P set the reachability-distance value as UNDEFINED (L reach&#8722;dis (p) = UNDE-FINED &#8704;p &#8712; D ).</ns0:p><ns0:p>&#8226; Set a empty processed list L Pro .</ns0:p><ns0:p>&#8226; Set a empty priority queue S</ns0:p><ns0:p>&#8226; For each unprocessed point p in P do:</ns0:p><ns0:p>- * For each q in S do:</ns0:p><ns0:p>&#8226; Obtain N &#949; (q)</ns0:p><ns0:p>&#8226; Mark q as processed (add q to L Pro ).</ns0:p><ns0:p>&#8226; Push q in the priority queue S.</ns0:p><ns0:p>&#8226; If the d core (p, &#949;, minPts) is not UNDEFINED, use the update function with the N &#949; (q), q, &#949;, and minPts as parameters to update S.</ns0:p><ns0:p>-The algorithm expands S until no points can be added.</ns0:p><ns0:p>The update function for a priority queue S, uses as parameters a neighborhood N &#949; (p), the center of the neighborhood p, and the &#949; and minPts values, and is defined as:</ns0:p><ns0:p>&#8226; Get the core-distance of p (d core (p, &#949;, minPts)). -</ns0:p><ns0:formula xml:id='formula_5'>If L reach&#8722;dis (o) is not UNDEFINED (o is already in S), then if new reach&#8722;dis &lt; L reach&#8722;dis (o)</ns0:formula><ns0:p>update the position of o in S moving forward with the new value new reach&#8722;dis .</ns0:p><ns0:p>After this, we have reachability-distance values (L reach&#8722;dis : D &#8594; R &#8746; UNDEFINED) for every point in P and an ordered queue S with respect to &#949; and minPts. Using this queue, the clusters are extracted using &#949; &#8242; &lt; &#949; distance and minPts by assigning the cluster membership depending on the reachability-distance.</ns0:p><ns0:p>To decide whether a given point is noise or the first computed element in a cluster, is necessary to define that UNDEFINED &gt; &#949; &gt; &#949; &#8242; , After this, the assignation of the elements of S is performed as:</ns0:p><ns0:p>&#8226; Set the Cluster ID = NOISE.</ns0:p><ns0:p>&#8226; Consider the points p &#8712; S.</ns0:p><ns0:formula xml:id='formula_6'>&#8226; If L reach&#8722;dis (p) &gt; &#949; &#8242; then, -If d core (p, &#949; &#8242; , minPts) &#8804; &#949; &#8242;</ns0:formula><ns0:p>, the value of Cluster ID is set to the next value, and the cluster for the element p is set as Cluster ID .</ns0:p><ns0:p>-If d core (p, &#949; &#8242; , minPts) &#8805; &#949; &#8242; , the point p is consider as NOISE.</ns0:p><ns0:p>&#8226; If L reach&#8722;dis (p) &#8804; &#949; &#8242; , the cluster for the element p is Cluster ID .</ns0:p><ns0:p>The order of S guarantees that all the elements in the same cluster are close. The L reach&#8722;dis values allow us to distinguish between clusters, these clusters will depend on the &#949; &#8242; parameter.</ns0:p><ns0:p>In our work we use a recursive application of OPTICS to uncover the hierarchical scale structure, in the same fashion as <ns0:ref type='bibr' target='#b31'>L&#243;pez-Ram&#237;rez et al. (2018)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.4'>HDBSCAN</ns0:head><ns0:p>The algorithm Density Based Clustering based on Hierarchical Density Estimates is presented in <ns0:ref type='bibr' target='#b8'>Campello et al. (2013)</ns0:ref>. The main intuition behind HDBSCAN is that the most significant cluters are those that are preserved along the hierarchical distribution. 
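As with the DBSCAN and OPTICS variants above, in our comparison HDBSCAN is ultimately applied recursively to every discovered cluster. The sketch below illustrates this shared recursion under the assumption of a scikit-learn-style estimator factory; it is not our exact implementation (the per-cluster ε selection of Section 4.5 and the stopping rule of Section 5 differ in detail), and the max_levels cap is an added safety guard for the fixed-parameter case.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def recursive_clustering(X, make_estimator, indices=None,
                         min_points=10, level=0, max_levels=5):
    """Apply a density-based clusterer recursively to every discovered cluster.

    Returns (level, global_indices) pairs describing the nested clusters;
    points labelled -1 by the base clusterer are treated as noise and are
    not passed to deeper levels.
    """
    if indices is None:
        indices = np.arange(len(X))
    if len(indices) < min_points or level >= max_levels:
        return []

    results = []
    labels = make_estimator(X[indices]).fit_predict(X[indices])
    for label in set(labels) - {-1}:              # skip the noise label
        members = indices[labels == label]
        results.append((level, members))
        # Recurse only on the members of this cluster.
        results += recursive_clustering(X, make_estimator, members,
                                        min_points, level + 1, max_levels)
    return results

# Example with a fixed-parameter DBSCAN at every level; the adaptative
# variant of Section 4.5 would instead re-estimate eps for each subset.
points = np.random.default_rng(1).normal(size=(500, 2))
nested = recursive_clustering(points, lambda X: DBSCAN(eps=0.3, min_samples=5))
```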
Before developing an explanation of HDBSCAN we need the following definitions:</ns0:p><ns0:formula xml:id='formula_7'>Definition (Core k-distance).</ns0:formula><ns0:p>Let p be a point in space, the core distance d core k (p) of k is defined by</ns0:p><ns0:formula xml:id='formula_8'>d core k (p) = max({d(x, p)|x &#8712; k-NN(p) if x = p}) where k-NN(p) is the set of the k-nearest neighbors of p for a specific k &#8712; N.</ns0:formula><ns0:p>Definition (Core k-distance point). A point p is a Core k-distance point with respect to &#949; and minPts, if the number of elements in the core neighborhood of p (N core k (p, &#949;, minPts) = {x|d core (x, p) &lt; &#949;} ) is greater or equal than minPts. Where &#949; is greater or equal to the core distance of p for a given k value.</ns0:p><ns0:p>Definition (Mutual Reachability Distance). For every p and q the Mutual Reachability Distance is</ns0:p><ns0:formula xml:id='formula_9'>d mreach (p, q) = max({d core k (p), d core k (q), d(p, q)})</ns0:formula><ns0:p>Consider the weighted graph with every point as vertices and the mutual reachability distance as weights. The resulting graph will be strongly connected, the idea is to find islands within the graph that will be consider clusters.</ns0:p><ns0:p>To classify the vertices of the graph in different islands its easier to determine which do not belong to the same island. This is done by removing edges with larger weights than a threshold value &#949;, by reducing the threshold the graph will start to disconnect and the connected components will be the islands (clusters)</ns0:p><ns0:p>to consider. If a vertex in the graph is disconnected from the graph or the connected component doesn't have the minPts, the corresponding vertex are considered noise. The result will depend on &#949;.</ns0:p></ns0:div> <ns0:div><ns0:head>9/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science To reduce computing time HDBSCAN finds a Minimum spanning tree (MST) of the complete graph.</ns0:p><ns0:p>The tree is built one edge at a time by adding the edge with the lowest weight that connects the current tree to a vertex not yet in the tree.</ns0:p><ns0:p>Using the MST and adding a self edge to all the vertices in the graph with a distance core value(MST ext ), a dendrogram with a hierarchy is extracted. The HDBSCAN hierarchy is extracted using the following rules <ns0:ref type='bibr' target='#b8'>(Campello et al., 2013</ns0:ref>):</ns0:p><ns0:p>&#8226; For the root of the tree assign all objects the same label (single 'cluster')</ns0:p><ns0:p>&#8226; Iteratively remove all edges from MST ext in decreasing order of weights (in case of ties, edges must be removed simultaneously):</ns0:p><ns0:p>-Before each removal, set the dendrogram scale value of the current hierarchical level as the weight of the edge(s) to be removed.</ns0:p><ns0:p>-After each removal, assign labels to the connected component(s) that contain(s) the end vertex(-ices) of the removed edge(s), to obtain the next hierarchical level: assign a new cluster label to a component if it still has at least one edge, else assign it a null label ('noise').</ns0:p><ns0:p>The clusters thus obtained from the dendrogram depend on the selection of a &#955; parameter based on the estimation of the stability of a cluster. 
This is done using a probability density function approximated by the k-nearest neighbors.</ns0:p><ns0:p>Once again, instead of relying on the hierarchical structure obtained from HDBSCAN, to discover our structures within structures we apply the algorithm recursively to every cluster.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.5'>Adaptative DBSCAN</ns0:head><ns0:p>In Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Before explaining our algorithm, let us review how minPts and &#949; values are estimated. For the minPts parameter, the general rule is to select it so minPts &#8805; D + 1, where D is the dimension of the data, so in the recursive application of DBSCAN this parameter is fixed. In general, the problem of selecting &#949; is solved heuristically by the following procedure:</ns0:p><ns0:p>&#8226; Select appropriate minPts.</ns0:p><ns0:p>&#8226; For each point in the data get the kth-nearest neighbors using minPts as the value of K.</ns0:p><ns0:p>&#8226; The different distances obtained are sorted smallest to largest (Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>, K-Sorted distance graph).</ns0:p><ns0:p>&#8226; Good values of &#949; distance are those where there is a big increment in the distance. This increment will correspond to an increment on the curvature in the plot, this point is called the elbow.</ns0:p><ns0:p>An automatic procedure to select &#949; is presented in <ns0:ref type='bibr' target='#b43'>Starczewski et al. (2020)</ns0:ref>, where the authors propose a mathematical formulation to obtain the elbow value of the K-Sorted distance graph once minPts is given.</ns0:p><ns0:p>This formulation tends to find the highest density clusters present in the data, so when applied recursively to hierarchically structured data it will tend to find the clusters in the deepest hierarchical level and will label as noise points belonging to intermediate hierarchical levels.</ns0:p><ns0:p>To overcome this limitation, we propose an alternative procedure to automatically select &#949; that leads to larger distance values and thus less dense clusters in each recursive application of DBSCAN. First, notice that either on the heuristic described above or in the procedure by <ns0:ref type='bibr' target='#b43'>Starczewski et al. (2020)</ns0:ref>, the &#949; value depends on the selection of K, larger values of K will in general lead to larger &#949; values, although not in a strictly monotonic way as seen in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. In principle it is possible to select K as the maximum number of points in the data set and obtain a suitable &#949; value, the problem is that for large data sets this is not feasible due to computational constraints.</ns0:p><ns0:p>To reduce the computational cost of finding appropriate &#949; values, we observe in Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> the similarities in the K-Sorted distance graph for a range of K values, it can be seen that the elbow value is similar for close values of K. Using this observation we obtain the K-Sorted distance graph only for a sample K s &#8834; {1, . . . , N}, where N is the number of points, instead of every possible value, thus reducing the time and computational cost needed to obtain a suitable &#949; distance value.</ns0:p><ns0:p>To find the elbow value for a given k &#8712; K s , we use the library developed in <ns0:ref type='bibr' target='#b40'>Satopaa et al. (2011)</ns0:ref>. 
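One publicly available Python implementation of the Kneedle method of Satopaa et al. (2011) is the kneed package; whether this is the exact library used in our code is an assumption, but it suffices to sketch the elbow-finding step on a k-sorted distance graph. Parameter choices below are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from kneed import KneeLocator   # Kneedle implementation (Satopaa et al., 2011)

def elbow_epsilon(X, k):
    """Estimate a candidate eps from the k-sorted distance graph for one k."""
    # Distance from every point to its k-th nearest neighbour
    # (n_neighbors=k+1 because the query point itself is returned first).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    kth = np.sort(dists[:, -1])                   # ascending k-distances
    x = np.arange(len(kth))
    # The sorted curve is increasing and convex; its knee is the elbow.
    knee = KneeLocator(x, kth, curve="convex", direction="increasing").knee
    return kth[knee] if knee is not None else kth[-1]

X = np.random.default_rng(0).normal(size=(1000, 2))
print(elbow_epsilon(X, k=5))
```

In the full procedure this step is repeated for a sample of k values and the distances are averaged into bins, as described next, to keep the cost manageable.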
The algorithm is O(n 2 ) so reducing the points involved in this step is also important. For each k, the algorithm takes as input the N * k distances. To limit the number of distances passed to the algorithm, we average the N * k distances in bins of size k, thus we only use N values to calculate the elbow.</ns0:p></ns0:div> <ns0:div><ns0:head>11/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Our adaptative DBSCAN algorithm can be summarized as follows:</ns0:p><ns0:p>&#8226; Let N be the number of points in the data set.</ns0:p><ns0:p>&#8226; For every step k in the range {minPts, . . . , &#8970; N 10 &#8971;} obtain the following:</ns0:p><ns0:p>-For each point p, obtain the distances to the k nearest neighbors.</ns0:p><ns0:p>-Sort the distances.</ns0:p><ns0:p>-Divide the distance interval in bins of size k and obtain the average for each bin.</ns0:p><ns0:p>-Get the elbow value for all the average bins as in <ns0:ref type='bibr' target='#b40'>Satopaa et al. (2011)</ns0:ref>.</ns0:p><ns0:p>&#8226; Take &#949; as the maximum of all elbow values from the previous step.</ns0:p><ns0:p>This algorithm for calculating a suitable &#949; value for each identified cluster allow us to find clusters with intermediate densities (correspondingly to intermediate hierarchical levels) and, at the same time, do it in a computationally efficient manner.</ns0:p></ns0:div> <ns0:div><ns0:head n='5'>EXPERIMENTAL SETTING</ns0:head><ns0:p>In order to develop a common framework for the comparison of the different clustering algorithms considered, we use the synthetic data generation algorithm described in Section 3 to generate random scenarios with 3, 4 and 5 levels, for each number of levels we carried out at least 100 experiments. By definition, each random scenario has a different number of clusters per level and thus different numbers of children per cluster, this provides the algorithms with a wide range of situations to tackle. Each one of the scenarios is passed trough the algorithms described in Section 4 to generate statistically significant evaluations of each algorithm along every metric. The algorithms are evaluated using the metrics described in section 6.</ns0:p><ns0:p>For the DBSCAN and OPTICS algorithms, the implementations used are from <ns0:ref type='bibr' target='#b37'>Pedregosa et al. (2011)</ns0:ref>. For all the algorithms the stopping conditions (in the recursive application) are the same: when the number of points in a cluster falls below a fixed minPoints, no further iterations are performed for said cluster.</ns0:p></ns0:div> <ns0:div><ns0:head>12/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head n='6'>EVALUATION</ns0:head><ns0:p>We will evaluate the performance of the clustering algorithms along three complementary tasks: first, as global (across levels) classification algorithms, evaluating the ability of each algorithm to distinguish globally between cluster and noise points; second as per level classification algorithms, evaluating the ability to distinguish, for each level, noise and cluster points; finally, we will introduce a metric to compare the resulting shapes of the clusters obtained at each level. 
These three tasks are all important in the detection of hierarchical crowd activity structures: the first informs us about the global distinction between activity and noise; the second does a similar job at each level; and the last is particularly important from a geographical pattern point of view, since it informs us about the similarity in the shapes of the activity structures detected.

6.1 Global evaluation

To evaluate the performance of the different clustering algorithms as global classification algorithms, we label the obtained clusters using our cluster tree structure. In Figure 7 we show an example output for the different algorithms. For every level we label points as either belonging to a cluster or as noise. This labeling has the property that when a point is labeled as noise in level n, then for every level n + j with j ≥ 1 the point will also be tagged as noise. Thus we can obtain a label for every point as the concatenation of NOISE and SIGNAL tags. For example, in a three-level cluster tree, the possible labels are: SIGNAL SIGNAL SIGNAL, SIGNAL SIGNAL NOISE, SIGNAL NOISE NOISE, and NOISE NOISE NOISE.

To evaluate the algorithms, we use the Normalized Mutual Information (NMI) between the algorithm classification and the ground truth. NMI has the advantage of being independent of the number of classes (Vinh et al., 2009), thus giving a fair evaluation of the performance of the algorithms across experiments.

Definition (Normalized Mutual Information). NMI is defined as:

NMI(X, Y) = 2 I(X, Y) / (H(X) + H(Y))

where H(X) is the Shannon Entropy of the set of labels X and I(X, Y) is the Mutual Information between the sets of labels X and Y.

Definition (Shannon Entropy). The Shannon Entropy of a set of labels X is defined as:

H(X) = − ∑_{x ∈ X} P(x) log P(x)

where P(x) is the probability of label x.

Definition (Mutual Information). Let X = {X_1, . . . , X_l} and Y = {Y_1, . . . , Y_n} be two sets of labels for the same set of points N; the mutual information between the two sets of labels is calculated as:

I(X, Y) = ∑_{i=1}^{|X|} ∑_{j=1}^{|Y|} P(i, j) log [ P(i, j) / (P(i) · P′(j)) ]

where P(i) = |X_i| / |N| is the probability of a random point belonging to class X_i, P′(j) = |Y_j| / |N| is the probability of a random point belonging to class Y_j, and P(i, j) = |X_i ∩ Y_j| / |N| is the probability that a random point belongs to both classes X_i and Y_j.

6.2 Per level evaluation

To reflect the iterative clustering process, we evaluate the classification obtained for each level. In order to carry out this evaluation, we label, for each level, the points as either SIGNAL for points belonging to a cluster or NOISE otherwise.
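For reference, the global NMI of Section 6.1 can be computed directly with scikit-learn, whose default "arithmetic" normalization coincides with the definition above; the concatenated tags below are hypothetical values used only to illustrate the call.

```python
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical concatenated per-level tags (ground truth vs. one algorithm).
truth = ["SIGNAL_SIGNAL_SIGNAL", "SIGNAL_SIGNAL_NOISE",
         "SIGNAL_NOISE_NOISE",  "NOISE_NOISE_NOISE"] * 50
pred  = ["SIGNAL_SIGNAL_SIGNAL", "SIGNAL_NOISE_NOISE",
         "SIGNAL_NOISE_NOISE",  "NOISE_NOISE_NOISE"] * 50

# average_method="arithmetic" gives 2*I(X,Y) / (H(X) + H(Y)).
print(normalized_mutual_info_score(truth, pred, average_method="arithmetic"))
```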
For each level n we include only the points that have the SIGNAL label in n &#8722; 1 level since those are the only points seen by the algorithm at each recursive iteration. Using this approach, the per level evaluation corresponds to the clusters obtained on level n and is performed only for the points that correspond to each level.</ns0:p><ns0:p>In general, the labels obtained for each level are unbalanced with more samples of the Noise class.</ns0:p><ns0:p>Therefore for the evaluation we use the Balanced Accuracy (BA) for binary classification <ns0:ref type='bibr' target='#b7'>(Brodersen et al., 2010)</ns0:ref> on each level.</ns0:p><ns0:p>Definition (Balanced Accuracy). Let X be the ground truth labels in a point set N and Y the set of predicted labels in the same set. The balanced accuracy for a binary labeling is defined as:</ns0:p><ns0:formula xml:id='formula_14'>BA(X,Y ) = 1 2 T P T P + FN + T N T N + FP</ns0:formula><ns0:p>where T P is the true positive set of labels, T N the true negative set of labels, FN is the false negative set of labels, and FP is the false positive set of labels.</ns0:p></ns0:div> <ns0:div><ns0:head n='6.3'>Shape evaluation</ns0:head><ns0:p>For geographic applications such as those described in Section 2, it is important to also evaluate the shape of the clusters obtained. Therefore we propose a measure that compares the shapes of the clusters obtained by each algorithm.</ns0:p><ns0:p>Our Similarity Shape Measure (SSM) compares the shapes of the Concave Hulls of each cluster, obtained by the optimal alpha-shape of the point set <ns0:ref type='bibr' target='#b9'>(Edelsbrunner, 1992;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bernardini and Bajaj, 1997)</ns0:ref>.</ns0:p><ns0:p>Thus, for our SSM, each cluster is represented by a polygon.</ns0:p><ns0:p>A simple way to compare polygon shapes is the Jaccard Index, mostly used in computer vision to compare detection algorithms. Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Definition. [Jaccard Index] The Jaccard Index or Jaccard Similarity Coefficient between polygons P and Q is defined as:</ns0:p><ns0:formula xml:id='formula_15'>Jacc(P, Q) = Area(P &#8745; Q) Area(P &#8746; Q)</ns0:formula><ns0:p>where Area is the area of the polygon.</ns0:p><ns0:p>The main issue in implementing a shape similarity measure is in determining which polygons to compare in each level. As can be seen in Figure <ns0:ref type='figure' target='#fig_12'>7</ns0:ref>, every algorithm outputs different number and shapes of polygons in each level so there is no straightforward way of assigning corresponding polygons across algorithms (or the ground truth). To overcome this issue, our Similarity Shape Measure compares, using the Jaccard Index, the polygons produced by an algorithm with the polygons in the ground truth, weighting the index by the number of points in each polygon and its corresponding intersections.</ns0:p><ns0:p>Definition (Similarity Shape Measurement). Let P O i = {P O 0 , . . . P O n } be the set of polygons for the clusters on the i &#8722; th level in the ground truth and</ns0:p><ns0:formula xml:id='formula_16'>Q C i = {Q C 0 , . . . 
Q C m }</ns0:formula><ns0:p>the polygons to evaluate for the same level, then the similarity between P O i and Q C i is:</ns0:p><ns0:formula xml:id='formula_17'>SSM(P O i , Q C i ) = &#8721; P O l &#8712;P O i Q C k &#8712;Q C i |P O l &#8745; Q C k | * Jacc(P O l , Q C k ) &#8721; P&#8712;P O i |P| + &#8721; Q&#8712;Q not |Q| Where Q not = {Q|Q &#8745; P O i = / 0} &#8834; Q C i for all P O i &#8712; P O i .</ns0:formula><ns0:p>The SSM takes values in [0, 1], the maximum value corresponds to the case when all polygons that intersect across the evaluation sample and the ground truth satisfy Jacc(P O l , Q C k ) = 1, all the points within the polygons P O l are in a corresponding polygon Q C k and the family Q not is empty. Conversely, SSM takes the value 0 when no polygons intersect across the evaluated algorithm and the ground truth.</ns0:p><ns0:p>If the polygons have a large Jaccard Similarity and the number of points in the intersections is large, then SSM will also be large. On the contrary, if the polygons have low Jacacard similarity then a penalization will occur and the SSM will have a lower value even if the cardinality of the intersection is large. This means that our measure will penalize clusterizations that over or under separate points in different clusters with respect to the ground truth.</ns0:p><ns0:p>SSM will penalize for points that belong to a P l for some l &#8712; {0, . . . , n}, that are not inside any</ns0:p><ns0:formula xml:id='formula_18'>Q &#8712; Q C i .</ns0:formula><ns0:p>Also a penalization will occur for the points that belong to Q k for some k &#8712; {0, . . . , m} that not belong to any P &#8712; P O i . This is important to ensure that algorithms that properly classify a grater number of points as signal are not penalized.</ns0:p><ns0:p>Thus our SSM allows for the direct comparison between the polygons in the ground truth and those produced by an arbitrary algorithm, producing a global similarity measure for each level that captures not only how similar the polygons are, but also, how many points are in the most similar polygons.</ns0:p></ns0:div> <ns0:div><ns0:head n='7'>RESULTS AND DISCUSSION</ns0:head><ns0:p>Results for the NMI metric are shown in Figure <ns0:ref type='figure' target='#fig_14'>8</ns0:ref>, displaying the mean NMI values along with the 95% confidence intervals. These results clearly show the poor performance of HDBSCAN and OPTICS as global classification algorithms, that is, they are not able to distinguish, across hierarchical levels, between noise and signal points. This behavior can be explained by the fact that those algorithms are intended to discover structures that are persistent trough the whole clustering hierarchy, which is different from the problem of finding the nested cluster structure, as explained in Section 4. On the other hand, Natural</ns0:p><ns0:p>Cities, DBSCAN and Adaptative DBSCAN consistently perform better across the three number of levels tested. In general the performance of Natural Cities and Adaptative DBSCAN is better than DBSCAN and this later exhibits larger confidence intervals, so its results are less consistent across experiments with the same number of levels. It is also interesting to notice that the results are in general consistent across the number of levels tested. In Table <ns0:ref type='table'>1</ns0:ref> we show the results of the pairwise Welch's t-test for the differences in the mean NMI values between algorithms. 
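The pairwise comparisons reported in Table 1 use Welch's unequal-variance t-test; a minimal SciPy sketch on two hypothetical vectors of per-experiment NMI scores is the following.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-experiment NMI scores for two algorithms.
nmi_natural_cities = rng.normal(0.80, 0.05, size=100)
nmi_adaptative     = rng.normal(0.82, 0.05, size=100)

# equal_var=False selects Welch's t-test (unequal variances).
t_stat, p_value = stats.ttest_ind(nmi_natural_cities, nmi_adaptative,
                                  equal_var=False)
print(t_stat, p_value)
```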
As can be seen, the test indicates that all differences, except that between Natural Cities and Hierarchical DBSCAN are significant. Number <ns0:ref type='table'>1</ns0:ref>. Pairwise Welch's t-tests for the difference of the experimental means for the NMI metric for all three levels considered. Values of t-test above 2 indicate a significative difference between the observed means. and HDBSCAN. Adaptative DBSCAN and Natural Cities are the best algorithms at separating signal from noise on a level by level basis, having a similar performance across levels for the three numbers of levels considered. The wider confidence interval of BA values for DBSCAN reflects the rigidity in the way the algorithm selects &#949;, using the same value for all clusters at the same level, as compared to Adaptative DBSCAN which adapts &#949; for each cluster and Natural Cities whose heads-tails break is also computed for each cluster. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows the results of the pairwise Welch's t-test for the differences in the mean BA values between algorithms. Most of the differences are significative, except for Natural Cities Vs Adaptative DBSCAN at levels 0, 3 and 4 and OPTICS Vs HDBSCAN at level 4.</ns0:p></ns0:div> <ns0:div><ns0:head>Results for the</ns0:head><ns0:p>Finally, in Figure <ns0:ref type='figure' target='#fig_16'>10</ns0:ref> we show the results for the Shape Similarity Measure (SSM). In this case we begin our comparisons in the first hierarchical level, since the level 0 polygons are the same for all algorithms. This results also show the poor performance of OPTICS and HDBSCAN. In this case Adaptative DBSCAN consistently outperforms Natural Cities and DBSCAN, specially as the number of levels increase. The local nature of &#949; in Adaptative DBSCAN, coupled with the capacity of DBSCAN to find arbitrarily shaped density based clusters, allows the algorithm to better reproduce the cluster shapes in the ground truth data. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the results of the pairwise Welch's t-test for the differences in the mean SSM values between algorithms, again, most of the differences are significative, except for Natural Cities Vs DBSCAN at level 2, which is clear from the plot on Figure <ns0:ref type='figure' target='#fig_16'>10</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head n='7.1'>Application on real-world data.</ns0:head><ns0:p>As a use case of the proposed technique we will extract the Crowd Activity Patterns for the geolocated Twitter feed for Central Mexico. The test database consists of all geolocated tweets for 2015-01-20, there are 10366 tweets for this day. Based on the discussion on Sections 2 and 4 we will only compare the results obtained with Natural Cities, recursive DBSCAN and Adaptative-DBSCAN. In order to qualitatively compare the results of the algorithms we will focus on two tasks: first the detection of the greatest cities within the region and, second, the detection of the Central Business District (CBD) structure of Mexico City, widely described in the literature <ns0:ref type='bibr' target='#b10'>(Escamilla et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b45'>Suarez and Delgado, 2009)</ns0:ref>. The idea behind this qualitative comparison is to understand how the different algorithms detect known Crowd Activity Patterns at different scales. 
is that Natural Cities tends to detect many small clusters at both levels, qualitatively these small clusters do not seem to correspond with any known structure in the city.</ns0:p><ns0:p>For the deeper levels, both in Hierarchical DBSCAN and Adaptative DBSCAN, the patterns detected seem to closely follow the job to housing ratio, this is a good qualitative indicator that the structures detected correspond with the known geographic activity patterns. To further stress this point, in Figure <ns0:ref type='figure' target='#fig_19'>14</ns0:ref> we show the different clusters detected with the three algorithms for afternoons (14:00 to 18:00 hours)</ns0:p><ns0:p>and evenings (18:00 to 22:00), all figures exhibit more concentrated patterns for the evenings, reflecting people gathered at their workplaces. It is also interesting that all algorithms seem to follow the T-shaped pattern of the CBD and also detect some smaller job centers to the South and West, but Natural Cities Manuscript to be reviewed Manuscript to be reviewed In the future we will use the Adaptative DBSCAN algorithm presented in Section 4.5 to process data such as 911 geolocated calls (police reports) and 311 reports (public service requests) to develop unusual activity detection algorithms, able to capture unusual activity at different scales. Another interesting avenue for further research is the incremental training of the algorithms, finding structures incrementally as more data is fed into the algorithms. Finally, it is also necessary to explicitly incorporate the time dimension and develop algorithms to uncover hierarchical spatio-temporal structures.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Clusters obtained using the synthetic data generator and tree structure of the hierarchical clusters obtained with our synthetic data generator.</ns0:p><ns0:p>(A) The polygons represent the clusters obtained with our synthetic data generator, points are the original data sample.</ns0:p><ns0:p>(B) Nodes represent cluster polygons while the tree branches are the hierarchical structure. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 11</ns0:note><ns0:p>Levels clusters polygons obtained with Natural Cities compared with metropolitan delimitation and compared with job to housing ratio at the city block level. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 12</ns0:note><ns0:p>Levels cluster polygons obtained with DBSCAN compared with metropolitan delimitation and compared with job to housing ratio at the city block level.</ns0:p><ns0:p>(A) The blue polygons are the clusters obtained with DBSCAN at Level 1 and the grey polygons are the official delimitation of metropolitan areas.</ns0:p><ns0:p>(B) The blue polygons are the clusters obtained with DBSCAN at Level 2 and the grey polygons are the official delimitation of metropolitan areas.</ns0:p><ns0:p>(C) The blue polygons are obtained with DBSCAN at Level 3 and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(D) The blue polygons are obtained with DBSCAN at Level 4 and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio. 
Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 13</ns0:note><ns0:p>Level clusters polygons obtained with Adaptative DBSCAN compared with metropolitan delimitation and at lower scale compared with job to housing ratio at the city block level.</ns0:p><ns0:p>(A) The blue polygons are the clusters obtained with Adaptative DBSCAN and the grey polygons are the official delimitation of metropolitan areas.</ns0:p><ns0:p>(B) The blue polygons are the clusters obtained with Adaptative DBSCAN and the grey polygons are the official delimitation of metropolitan areas.</ns0:p><ns0:p>(C) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(D) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(E) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(F) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(G) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p><ns0:p>(H) The blue polygons are obtained with Adaptative DBSCAN and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, the authors propose a technique for detecting unusually crowded places by extracting the regular activity patterns performing K-Means clustering over the geolocated tweets and 2/25 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Clusters obtained using the synthetic data generator (a) and the corresponding tree structure (b). Nodes in (a) correspond with clusters in (b), the colors represent the different hierarchical levels.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Length of the edges in the Delaunay triangulation for three successive levels in a synthetic data sample.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>we show the original density clusters produced by our synthetic data generator together with the clusters obtained with HDBSCAN (Section 4.4) and our proposed Adaptative DBSCAN (Section 4.5). As can be seen in the Figure, HDBSCAN detects the highest density clusters possible, this becomes even clearer by looking at the Condensed Tree, where the detected structures correspond to the deepest leafs in the tree: the most persisting across scales density structures. 
This focus on identifying persistent structures makes these algorithms unsuited for the task of detecting structures within structures because, as can be seen in the Figure, the intermediate scale levels are not detected since they do not persist in the Condensed Tree.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Original clusters produced with our synthetic data generator together with the clusters identified with HDBSCAN and Adaptative DBSCAN. The HDBSCAN Condensed Tree (bottom left) shows the depth of the structures detected, the two blue clusters shown in HDBSCAN results correspond to the leftmost and rightmost leafs.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Obtain N &#949; (p) -Mark p as processed (add p to L Pro ) -Push p priority queue S -If d core (p, &#949;, minPts) is UNDEFINED move to the next unprocessed point. If not then: * Update the queue S based on the rechability-distance by the update function and use N &#949; (p), p, &#949;, and minPts as input parameters.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022) Manuscript to be reviewed Computer Science &#8226; For all o &#8712; N &#949; (p) if o is not processed, then -Define a new reachability-distance as new reach&#8722;dis = max(d core (p, &#949;, minPts), d(p, o)) -If L reach&#8722;dis (o) is UNDEFINED (is not in S) then L reach&#8722;dis (o) = new reach&#8722;dis and insert o in S with value new reach&#8722;dis .</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. K-sorted distance graph (for k = 5) showing with a dotted line the location of the elbow value obtained by the procedure outlined in Satopaa et al. (2011).</ns0:figDesc><ns0:graphic coords='11,234.79,63.78,227.47,220.95' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b31'>L&#243;pez-Ram&#237;rez et al. (2018)</ns0:ref> the authors propose the recursive application of DBSCAN as a suitable algorithm to uncover the hierarchical structure present in spatio-temporal events data sets. The main drawbacks of this proposal are, on the one hand, the need to select appropriate &#949; and minPts values for each recursive application of DBSCAN and, on the other hand, that once this values are selected they are used for every cluster in a given hierarchical level, thus assuming that every cluster has the same intrinsic density properties. In this paper we propose an algorithm to automatically select &#949; values for each cluster, this algorithm draws on methods proposed in the available literature and adapts them to the problem of identifying clusters on hierarchically structured geographical data.10/25 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. &#949; values obtained by applying the procedure outlined in Satopaa et al. (2011) to a synthetic data set for every possible value of K.</ns0:figDesc><ns0:graphic coords='12,234.79,63.78,227.47,215.03' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 
K-Sorted distance graphs for a range of K values. The red dots show the elbow value calculated as in Satopaa et al. (2011)</ns0:figDesc><ns0:graphic coords='13,234.79,63.78,227.47,217.61' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>The HDBSCAN used is based on<ns0:ref type='bibr' target='#b33'>McInnes et al. (2017)</ns0:ref>, and the Natural cities and adaptive-DBSCAN are our own implementations.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Results of different hierarchical clustering algorithms on a synthetic data set. The colors show the polygons obtained for different levels.</ns0:figDesc><ns0:graphic coords='14,193.43,63.78,310.19,348.65' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Results for the Normalized Mutual Information (NMI) for the experiments with 3, 4 and 5 levels. Bars height represent the mean NMI value while the black lines correspond to the 95% confidence interval.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9. Results for the Balanced Accuracy (BA) for the experiments with 3, 4 and 5 levels, the Level axis indicates the BA value for that specific level. Bars height represent the mean BA value while the black lines correspond to the 95% confidence interval.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_16'><ns0:head>FiguresFigure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figures 11 through 13 show the results for the different clustering algorithms. In the larger hierarchical levels, the figures compare our results with the official delimitation of the metropolitan areas, while in the smaller scales, we show the job to housing ratio as an indicator of potential crowd activity. The first thing to notice is that, although the three algorithms are able to separate the input points into the metropolitan</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_17'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure12. Dark polygons are the clusters obtained with iterative DBSCAN, while the clear polygons correspond to the official metropolitan area delimitation. In the bottom maps the underlying coloring corresponds to the job to housing ratio at the city block level, darker color indicates a higher job to housing ratio. The background map was created with OpenStreetMap.</ns0:figDesc><ns0:graphic coords='21,383.15,355.01,161.81,224.93' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_18'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure13. The dark polygons show the result of the Adaptative DBSCAN algorithm, the clear polygons are the official metropolitan area delimitation, while in the bottom maps the underlying coloring corresponds to the job to housing ratio at the city block level, darker color indicates a higher job to housing ratio. The background map was created with OpenStreetMap.</ns0:figDesc><ns0:graphic coords='22,162.78,438.14,119.05,137.04' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_19'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. 
Comparison of clusters obtained with Hierarchical DBSCAN, Adaptative DBSCAN and Natural Cities for the afternoon (14:00 to 18:00 hours) and evening (18:00 to 22:00). The figures focus on the levels showing the Central Business District area and display in different tones the job to housing ratio. The background map was created with OpenStreetMap.</ns0:figDesc><ns0:graphic coords='23,181.73,376.08,104.24,129.51' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_20'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_21'><ns0:head>Figure 10 Shape</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_22'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>A) The blue polygons are the clusters obtained with Natural Cities at Level 1 and the grey polygons are the official delimitation of metropolitan areas. (B) The blue polygons are the clusters obtained with Natural Cities at Level 2 and the coloring corresponds to the job to housing ratio at the city block level. Red represents higher jobs to housing ratio. PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69859:1:2:NEW 31 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,323.66,525.00,171.75' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,255.37,525.00,218.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,255.37,525.00,218.25' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,70.87,525.00,381.75' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>of Levels</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Natural C Vs DBSCAN</ns0:cell><ns0:cell>-0.46</ns0:cell><ns0:cell>9.37*</ns0:cell><ns0:cell>2.89*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs OPTICS</ns0:cell><ns0:cell>108.72*</ns0:cell><ns0:cell>68.45*</ns0:cell><ns0:cell>79.07*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs HDBSCAN</ns0:cell><ns0:cell>60.97*</ns0:cell><ns0:cell>36.48*</ns0:cell><ns0:cell>44.71*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs Adap DBSCAN</ns0:cell><ns0:cell>-9.08*</ns0:cell><ns0:cell>0.63*</ns0:cell><ns0:cell>-3.54*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs OPTICS</ns0:cell><ns0:cell>54.59*</ns0:cell><ns0:cell>27.14*</ns0:cell><ns0:cell>30.91*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs HDBSCAN</ns0:cell><ns0:cell>32.65*</ns0:cell><ns0:cell>12.12*</ns0:cell><ns0:cell>18.06*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs Adap DBSCAN</ns0:cell><ns0:cell>-5.77*</ns0:cell><ns0:cell>-8.08*</ns0:cell><ns0:cell>-4.92*</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs HDBSCAN</ns0:cell><ns0:cell cols='3'>-102.13* -43.45* -55.77*</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs Adap DBSCAN</ns0:cell><ns0:cell cols='3'>-95.49* -49.31* -67.92*</ns0:cell></ns0:row><ns0:row><ns0:cell>HDBSCAN Vs Adap DBSCAN</ns0:cell><ns0:cell cols='3'>-59.50* -27.76* -41.45*</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>* 
significative at the 95% confidence level</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Pairwise Welch's t-tests for the difference of the experimental means for the BA metric for each level across all experiments. Values of the t-test above 2 indicate a significative difference between the observed means.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Level</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Natural C Vs DBSCAN</ns0:cell><ns0:cell>23.33*</ns0:cell><ns0:cell>32.28*</ns0:cell><ns0:cell>32.08*</ns0:cell><ns0:cell>11.11*</ns0:cell><ns0:cell>6.15*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs OPTICS</ns0:cell><ns0:cell>182.09*</ns0:cell><ns0:cell cols='2'>130.86* 111.64*</ns0:cell><ns0:cell>18.34*</ns0:cell><ns0:cell>11.40*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs HDBSCAN</ns0:cell><ns0:cell>79.84*</ns0:cell><ns0:cell>59.88*</ns0:cell><ns0:cell>88.08*</ns0:cell><ns0:cell>17.36*</ns0:cell><ns0:cell>10.92*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural C Vs Adap DBSCAN</ns0:cell><ns0:cell>-0.53</ns0:cell><ns0:cell>-3.49*</ns0:cell><ns0:cell>-2.05*</ns0:cell><ns0:cell>-0.82</ns0:cell><ns0:cell>0.54</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs OPTICS</ns0:cell><ns0:cell>27.48*</ns0:cell><ns0:cell>20.89*</ns0:cell><ns0:cell>27.42*</ns0:cell><ns0:cell>8.23*</ns0:cell><ns0:cell>5.88*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs HDBSCAN</ns0:cell><ns0:cell>7.93*</ns0:cell><ns0:cell>-2.77*</ns0:cell><ns0:cell>14.13*</ns0:cell><ns0:cell>6.78*</ns0:cell><ns0:cell>5.26*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs Adap DBSCAN</ns0:cell><ns0:cell>-23.24*</ns0:cell><ns0:cell cols='3'>-32.41* -27.64* -11.46*</ns0:cell><ns0:cell>-5.61*</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs HDBSCAN</ns0:cell><ns0:cell>-41.51*</ns0:cell><ns0:cell cols='2'>-54.73* -32.20*</ns0:cell><ns0:cell>-1.61*</ns0:cell><ns0:cell>-0.69</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs Adap DBSCAN</ns0:cell><ns0:cell cols='5'>-168.32* -102.33* -67.58* -18.55* -10.92*</ns0:cell></ns0:row><ns0:row><ns0:cell>HDBSCAN Vs Adap DBSCAN</ns0:cell><ns0:cell>-76.72*</ns0:cell><ns0:cell cols='4'>-52.18* -53.55* -17.59* -10.43*</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>* significative at the 95% confidence level</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Pairwise Welch's t-tests for the difference of the experimental means for the BA metric for each level across all experiments. Values of the t-test above 2 indicate a significative difference between the observed means. areas, Natural Cities tends to detect the inner cores on the first hierarchical level, while hierarchical DBSCAN and Adaptative-DBSCAN detect larger metropolitan structures. Another interesting finding is that Natural Cities detects the T-shaped pattern of Mexico City's CBD (Escamilla et al., 2016; Suarez and Delgado, 2009) in two iterations, while hierarchical DBSCAN and Adaptative DBSCAN require 4 and 8 iterations respectively. 
This means that both DBSCAN based algorithms are detecting intermediate scale structures, while Natural Cities is more aggressively reducing the size of the clusters. Another interesting quality in the clusters detected by Natural Cities in contrast to those detected by the other two algorithms</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Level</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell><ns0:cell>t-test</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Natural Vs DBSCAN</ns0:cell><ns0:cell>-3.82*</ns0:cell><ns0:cell>-0.10</ns0:cell><ns0:cell>-5.34*</ns0:cell><ns0:cell>-7.97*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural Vs OPTICS</ns0:cell><ns0:cell>11.00*</ns0:cell><ns0:cell>9.98*</ns0:cell><ns0:cell>5.35*</ns0:cell><ns0:cell>2.87*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural Vs HDBSCAN</ns0:cell><ns0:cell>10.70*</ns0:cell><ns0:cell>9.85*</ns0:cell><ns0:cell>5.33*</ns0:cell><ns0:cell>2.87*</ns0:cell></ns0:row><ns0:row><ns0:cell>Natural Vs Adap DBSCAN</ns0:cell><ns0:cell cols='4'>-14.62* -22.24* -30.38* -40.76*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs OPTICS</ns0:cell><ns0:cell>14.85*</ns0:cell><ns0:cell>10.45*</ns0:cell><ns0:cell>9.17*</ns0:cell><ns0:cell>9.05*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs HDBSCAN</ns0:cell><ns0:cell>14.59*</ns0:cell><ns0:cell>10.31*</ns0:cell><ns0:cell>9.16*</ns0:cell><ns0:cell>9.05*</ns0:cell></ns0:row><ns0:row><ns0:cell>DBSCAN Vs Adap DBSCAN</ns0:cell><ns0:cell cols='4'>-10.39* -22.35* -23.77* -23.84*</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs HDBSCAN</ns0:cell><ns0:cell>-7.72*</ns0:cell><ns0:cell>-3.56*</ns0:cell><ns0:cell>-2.85*</ns0:cell><ns0:cell>2.21*</ns0:cell></ns0:row><ns0:row><ns0:cell>OPTICS Vs Adap DBSCAN</ns0:cell><ns0:cell cols='4'>-28.00* -32.18* -34.37* -43.09*</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>HDBSCAN Vs Adap DBSCAN -27.75* -32.09* -34.36* -43.09*</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>* significative at the 95% confidence level</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure11. The left side shows, in a dark color, the polygons obtained with Natural Cities and in clear the official delimitation of metropolitan areas. On the right side, the dark polygons are obtained with Natural Cities and the underlying coloring corresponds to the job to housing ratio at the city block level, darker color indicates a higher job to housing ratio. 
The background map was created with OpenStreetMap.</ns0:figDesc><ns0:table><ns0:note>[Table content omitted: the extracted cells contained only latitude/longitude axis tick values and the panel titles Level 1 through Level 4 spilled from the source figure.]</ns0:note></ns0:table><ns0:note>detects many smaller clusters that do not seem to correspond with relevant geographic structures.</ns0:note></ns0:figure> </ns0:body> "
"(52) (99) 994 5150 Ext. 5251 Dear Editors We want to thank the reviewers for their thorough revision and generous comments that have greatly improved the quality of our work. We have edited the manuscript to reflect their comments and insights. We believe that with the contribution of the reviewers and our responses the edited manuscript is now suitable for publication in PeerJ Computer Science Dr. Pablo López Ramírez Associate Professor and Chair of Graduate Studies On behalf of all the authors. Detection of hierarchical crowd activity structures in geographic point data Review response Reviewer 1 comment The authors mentioned that the second contribution is to propose an improvement of the DBSCAN algorithm, but existing clustering algorithms were introduced and used. Please clarify improvement points of the algorithm. The title should also be modified to fit the content because the current one seems to be the proposal of an algorithm. Response We agree that the paper is not entirely clear with respect to this contribution, what we developed is a novel approach to adapt the epsilon parameter in DBSCAN in order to automatically discover density structures within clusters. We included existing algorithms for comparison and reference. We changed the text in the relevant sections (Abstract, Introduction, Adaptative DBSCAN and Conclusions and Further Work) to better reflect the nature of our second contribution. We believe the article title reflects the work we’ve done since it is not only about our Adpaptative DBSCAN algorithm but also about the more general issue of detecting crowd activity structures, proposing a synthetic data generator and an evaluation framework. Reviewer 1 comment Figure 1 should be explained in more detail such as the meaning of colors. Also, the quality of Figure 1(b) should be improved. Response We updated the Figure and made the following change to the caption Original text Figure 1. Clusters (a) and tree structure (b) obtained with the synthetic data generator. Proposed Change: Figure 1. (a) Clusters obtained with the synthetic data generator and (b) the corresponding tree structure. Nodes in (b) correspond with clusters in (a). Reviewer 1 comment The authors discussed the results in Section 7, so the section name should be Results and discussions. Response We accept the proposed change and modify the title of Section 7. Original text: 7 RESULTS Proposed Change: 7 RESULTS AND DISCUSSION Reviewer 1 Comment: There are some typos as follows. Line 20: “even data” would be “event data” Line 166: “as shown in 1” would be “as shown in Figure 1” Line 431: There is a space missing after the comma. Line 492: “This results” should be “These results” Response: We thank the reviewer for the thorough revision of such typos, all of them were fixed. Reviewer 2 comments: Reviewer 2 comment There are several places (Lines 131-137, Lines 186-191, Lines 494-496) where it is argued that the expected differences between different clustering algorithms (e.g., HDBSCAN and OPTICS vs. DBSCAN and its adaptive version). It would be beneficial to use a toy example dataset to demonstrate the differences argued here. (Annotated PDF) 1. It would be beneficial to use a toy example dataset to demonstrate the differences argued here. 2. Again, it is beneficial to demonstrate the argued differences with a toy example dataset. 3. Again, use a toy example dataset to demonstrate this. 
Response We agree that the differences we are stressing here are not entirely clear in the text, we took the reviewer's advice and introduced a figure with the clustering results obtained with HDBSCAN and our Adaptative-DBSCAN, together with the HDBSCAN Condensed Tree. Since the reviewer comments refer to different parts of the text it was challenging to decide upon where the figure and description would make the paper clearer. After thorough consideration, we decided to place the figure and explanation at the beginning of Section 4, before we introduce all the clustering algorithms. We believe that, although the algorithms are not described at this point, the placement of the figure and the accompanying explanation makes for a better understanding of our proposal. Proposed Change Hierarchical clustering algorithms focus on the task of detecting the most relevant density structures regardless of the scale. To illustrate this point, in Figure 3 we show the original density clusters produced by our synthetic data generator together with the clusters obtained with HDBSCAN (Section 4.4) and our proposed Adaptative DBSCAN (Section 4.5). As can be seen in the Figure, HDBSCAN detects the highest density clusters possible, this becomes even clearer by looking at the Condensed Tree, where the detected structures correspond to the deepest leafs in the tree: the most persisting across scales density structures. This focus on identifying persistent structures makes these algorithms unsuited for the task of detecting structures within structures because, as can be seen in the Figure, the intermediate scale levels are not detected since they do not persist in the Condensed Tree. Reviewer 2 comment Some figures (e.g., Figure 1) need to be of higher resolution to be legible (check all other figures for this issue). In Figure 1 caption, please describe how (a) corresponds with (b). For example, is a node in (b) a point in (a)? (Annotated PDF) First of all, the figures need to be higher resolution to be legible (check all other figures for this issue). Second, please describe how (a) corresponds with (b). For example, is a node in (b) a point in (a)? Response: All figures were reviewed and modified to be more readable and the caption on Figure 1 was modified. Original text Figure 1. Clusters (a) and tree structure (b) obtained with the synthetic data generator. Proposed Change Figure 1. (a) Clusters obtained with the synthetic data generator and (b) the corresponding tree structure. Nodes in (b) correspond with clusters in (a). Reviewer 2 Comment Lines 160-161: The reviewer did not quite understand what “three iterations of head-tail breaks” entails. Please elaborate. (Annotated PDF): 1. Need to be clear what this means. Is it a Triangular network created based on the points? 2. Please explain what this entails. Response: Original text didn’t elaborate on how this is done, we corrected the text to better explain the triangulation and what the Head-Tails break entails. Original Text “The edges of a Delaunay Triangulation” Proposed Change “we perform a Delaunay triangulation with the points as vertex, then obtain the lengths of the edges and sort them in descending order” Original Text “Figure 2 shows three iterations of the Head-Tails breaks,” Proposed Change To show that our synthetic data exhibits this same property, we perform a Delaunay triangulation with the points as vertex, then obtain the lengths of the edges and sort them in descending order. 
We then proceed to select the length values larger than the mean (the Head) and the values smaller than the mean (the Tail), keeping only the latter in order to perform the Head-Tails break described in Jiang (2013). Figure 2 shows three iterations of the Head-Tails break… Reviewer 2 comment Line 249: What important factors need to be considered when manually setting parameters? Reviewer 2 comment (Annotated PDF): What important factors need to be considered when manually setting parameters? Original Text “In order to fit our aim of detecting structures within structures, we apply DBSCAN recursively to each of the discovered clusters, manually setting appropriate parameters as described in Lopez-Ramírez et al. (2018)” Proposed Change In order to fit our aim of detecting structures within structures, we apply DBSCAN recursively to each of the discovered clusters. It is important to consider that the ε parameter in DBSCAN is related to the relative point density of noise with respect to clusters, this means that in each recursive application one needs to set an appropriate value for ε; preserving the same value across iterations would not uncover density structures within clusters since those structures would, by definition, be of higher density than the encompassing cluster. In López-Ramírez et al. (2018), ε is decreased by a constant arbitrary factor on each iteration, in Section 4.5 we propose a novel method for automatically selecting ε for each cluster and iteration that overcomes this limitation. Reviewer 2 comment Line 518: Does this 'new method' refer to applying clustering algorithms recursively to discover structure within structure in general, or specifically to applying the adaptive DBSCAN recursively? Need to be clear. (Annotated PDF) Does this 'new method' refer to applying clustering algorithms recursively to discover structure within structure in general, or specifically to applying the adaptive DBSCAN recursively? Need to be clear. Original text In this paper we presented a synthetic data generator that reproduces structures commonly found on geographical events data sets, introduced a new method for detecting hierarchical structures within structures on such data and presented a general evaluation framework for the comparison of hierarchical crowd activity structures detection algorithms Proposed change In this paper we presented a synthetic data generator that reproduces structures commonly found on geographical events data sets; introduced a new method, based on the recursive application of DBSCAN coupled with an adaptative algorithm for selecting appropriate values for ε for each cluster, for detecting hierarchical structures within structures; and presented a general evaluation framework for the comparison of hierarchical crowd activity structures detection algorithms. Reviewer 2 comment Line 404: “statistically significant evaluations”. What does this entail exactly? Later in the results section, only box plots are used to show the differing performance of the algorithms. I would expect some sorts of statistical significance test on the differences (or comparison of confidence intervals?). (Annotated PDF) What does this entail exactly? Later in the results section, only box plots are used to show the differing performance of the algorithms. I would expect some sorts of statistical significance test on the differences (or comparison of confidence intervals?). 
Response We agree that confidence intervals and significance tests will render our results more robust, so we changed the box plots to bar graphs with confidence intervals and included tables with Welch’s t-test for the significance of the differences on the mean values obtained. Reviewer 2 comment Lines 530-531: The reviewer thinks it is necessary to do so in this paper. Apply the adaptive DBSCAN a real-world dataset recursively to discover structures within structures, and see if the results make sense (probably with only qualitative evaluations). (Annotated PDF) I think it is necessary to do so in this paper. Apply the adaptive DBSCAN a real-world dataset recursively to discover structures within structures, and see if the results make sense (probably with qualitative evaluations). Response: We included a section showing the result of using Natural Cities, recursive DBSCAN, and Adaptative DBSCAN on a real world dataset Proposed change “Section 7.1 Application on real-world data” Reviewer 2 comment (Annotated PDF): I think you need to explain what are 911 calls and 311 reports (readers in another country may have no idea what they are). Response: We included a brief description every time we make reference to such data.The comment is accepted. Original text “In the future we will use the Adaptative DBSCAN algorithm presented in Section 4.5 to process real world data such as 911 geolocated calls and 311 reports to develop unusual” Proposed change “In the future we will use the Adaptative DBSCAN algorithm presented in Section 4.5 to process data such as 911 geolocated calls (police reports) and 311 reports (public service requests)” Reviewer 2 comment (Annotated PDF): 1. Page 2 line 85 “an” to “and”. 2. Page 6 line 221: Please correct in-text citation format. 3. Page 9 line 357 “lets” to “let us”. Response: All the comments are accepted and we thank the reviewer for the thorough revision. "
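As an illustrative aside to the ε-selection discussion quoted above, the following minimal Python sketch shows one possible shape of a recursive DBSCAN in which ε is re-estimated for every discovered cluster from the elbow of its sorted k-distance curve (in the spirit of Satopaa et al., 2011). It relies on scikit-learn, NumPy, and the kneed package; the function names, parameter defaults, and the use of kneed are illustrative assumptions and do not reproduce the manuscript's actual implementation.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from kneed import KneeLocator

def elbow_epsilon(points, k=10):
    # Sorted k-distance curve; its elbow is taken as epsilon for this point set.
    dists, _ = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    kdist = np.sort(dists[:, -1])
    knee = KneeLocator(np.arange(len(kdist)), kdist,
                       curve="convex", direction="increasing").knee
    return max(float(kdist[knee if knee is not None else -1]), 1e-9)

def recursive_dbscan(points, indices=None, min_samples=10, min_points=50,
                     level=0, max_level=5):
    # Returns (level, global member indices) pairs describing the hierarchy.
    if indices is None:
        indices = np.arange(len(points))
    if level >= max_level or len(indices) < min_points:
        return []
    sub = points[indices]
    eps = elbow_epsilon(sub, k=min_samples)          # epsilon adapted to this cluster
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(sub)
    clusters = []
    for lab in set(labels) - {-1}:
        members = indices[labels == lab]
        clusters.append((level, members))
        # Recurse: inside a cluster the density is higher, so epsilon shrinks.
        clusters += recursive_dbscan(points, members, min_samples,
                                     min_points, level + 1, max_level)
    return clusters

Called as recursive_dbscan(xy) on an n-by-2 array of coordinates, this sketch yields per-level clusters that could then be turned into polygons and compared against reference delimitations, which is the spirit of the evaluation discussed in the responses above.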
Here is a paper. Please give your review comments after reading it.
422
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper proposes an approach to fill in missing data from satellite images using dataintensive computing platforms. The proposed approach merges satellite imagery from diverse sources to reduce the impact of the holes in images that result from acquisition conditions: occlusion, the satellite trajectory, sunlight, among others. The computational complexity derived from the use of large high-resolution images is addressed by dataintensive computing techniques that assume an underlying cluster architecture. As a start, satellite data from the region of study are automatically downloaded; then, data from different sensors are corrected and merged to obtain an orthomosaic; finally, the orthomosaic is split into user-defined segments to fill in missing data, and then filled segments are assembled to produce an orthomosaic with a reduced amount of missing data. As a proof of concept, the proposed data-intensive approach was implemented to study the concentration of chlorophyll at the Mexican oceans by merging data from MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1 sensors. Results reveal that the proposed approach produces results that are similar to state-of-the-art approaches to estimate chlorophyll concentration but avoid memory overflow with large images. Visual and statistical comparison of the resulting images reveals that the proposed approach provides a more accurate estimation of chlorophyll concentration when compared to the mean of pixels method alone.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Since the first satellite photographs in the 1940s, followed by missions that include Landsat and Suomi NPP from NASA, and Sentinel from ESA, just to mention three of the most popular, satellite imagery has been improved to the point of becoming daily-use information. Moreover, together with the increase of use, challenges have been emerged, presenting an increasing demand for computational resources and algorithms. Nowadays, images of Earth are commonly used to study the atmosphere, land, and oceans, presenting an increasing use in daily life activities. Applications of satellite imagery range from weather forecasting <ns0:ref type='bibr' target='#b26'>(Sato et al., 2021)</ns0:ref>, monitoring natural disasters <ns0:ref type='bibr' target='#b25'>(Said et al., 2019)</ns0:ref>, survey phytoplankton size structure impacts as an ecological indicator for the state of marine ecosystems <ns0:ref type='bibr' target='#b10'>(Gittings et al., 2019)</ns0:ref>, among many others. Presently, various sensors provide different temporal, spatial, and spectral resolutions to study oceans' evolution. The Moderate-Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors are well known in the community, mainly because of PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science data's continuous and free availability. Data from these sensors have been available since 1999 and are still in operation <ns0:ref type='bibr' target='#b14'>(Hu et al., 2010)</ns0:ref>. The MODIS sensors are in orbit aboard the Terra and Aqua satellites <ns0:ref type='bibr'>(NASA, 2020)</ns0:ref>. 
On the other hand, the VIIRS sensors are aboard the Suomi National Polar-Orbiting Partnership (SNPP) and the Joint Polar Satellite System (JPSS-1) <ns0:ref type='bibr' target='#b19'>(Kramer, Herbert J., 2020)</ns0:ref>. Both MODIS and VIIRS sensors are provided with a set of bands commonly employed to study the oceans <ns0:ref type='bibr' target='#b7'>(Datla et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Regardless of the application, processing data from satellites exhibit various challenges that are difficult for the analysis related to physical phenomena <ns0:ref type='bibr' target='#b24'>(Rodriguez-Ramirez et al., 2019)</ns0:ref>. Moreover, one of the most representative issues emerge from acquisition conditions that produce incomplete data from the whole scene, either caused by occlusion (e.g. clouds) or the trajectory of the satellite at the acquisition moment <ns0:ref type='bibr' target='#b30'>(Zhang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>As reported by the scientific community, some approaches to fill in missing data employ machine learning techniques to merge information from multiple sources within the same set of sensors <ns0:ref type='bibr' target='#b30'>(Zhang et al., 2018)</ns0:ref>. Other approaches use regression models to repair single spatial satellite images, presenting a tradeoff between accuracy and computational effort <ns0:ref type='bibr' target='#b29'>(Zhang et al., 2015)</ns0:ref>. Furthermore, one of the most widely employed methods in oceanography to fill in missing data is DINEOF (Data INterpolating Empirical Orthogonal Function) <ns0:ref type='bibr' target='#b20'>(Liu and Wang, 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Liu and Wang, 2019)</ns0:ref>. Examples of the use of DINEOF to study chlorophyll are disseminated in literature. For instance, <ns0:ref type='bibr' target='#b16'>Jayaram et al. (2018)</ns0:ref> implemented the interpolation functions of the orthogonal data to restore the levels of Chl-a at the Arabic sea between 2000 and 2015, using the MODIS sensor. Low-frequency wavelets were employed to estimate the levels of chlorophyll every month, season, and year, and high-frequency wavelets to detect the anomalies in non-stationary lapses. Similarly, <ns0:ref type='bibr' target='#b15'>Jayaram et al. (2021)</ns0:ref> DINEOF was also employed by <ns0:ref type='bibr' target='#b4'>Bouchra et al. (2011)</ns0:ref> to restore the total suspended matter between Belgium and United Kingdom coasts, using MODIS data acquired between 2003 and 2006. Restored data were compared against the measurements in-situ of total suspended matter collected by the Cefas (Centre for Environment Fisheries and Aquatic Sciences); for factor calibration, a linear regression model was employed, considering the highest observed measurements as the reference values. Additionally, during the atmospheric correction, MODIS data pixels were labeled according to the quality of the restoration: those pixels within a 5 &#215; 5 window that present inconsistencies over the Cefas time series were labeled as doubedly or low quality. Finally, DINEOF was used to compute missing data, and atypical values were assessed using spatial coherence.</ns0:p><ns0:p>Despite the approach employed to fill in missing data, and the study region, the challenges remain. Furthermore, improvements in computational efficiency and accuracy are still required to produce reliable studies. 
Indeed, the high computational cost required to analyze multi-temporal and multi-resolution data provided by satellite platforms is far from being solved <ns0:ref type='bibr' target='#b2'>(Babbar and Rathee, 2019)</ns0:ref>. In particular, DINEOF is based on empirical orthogonal functions (EOF) to reconstruct missing data in a set of geophysical data through the calculus of the dominant modes of variability within satellite data <ns0:ref type='bibr' target='#b3'>(Beckers and Rixen, 2003)</ns0:ref>.</ns0:p><ns0:p>The high computational complexity of DINEOF increases with the size of the input images and may be impractical with a large number of high-resolution images. Therefore, DINEOF is usually used to process images with a low spatial resolution of small geographical areas <ns0:ref type='bibr' target='#b8'>(GHER, 2020)</ns0:ref>. This research paper proposes a novel approach for the automated hole filling in satellite imagery, taking advantage of an intensive computing strategy to avoid memory overflow when high-resolution images are processed. The proposed approach starts by downloading the satellite data from different sensors (e.g. MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1), corresponding to the selected region of interest (ROI). Then, data downloaded are corrected and merged using the average method and the nearest neighbors to obtain a partially filled orthomosaic. In the last step, the orthomosaic is divided into manageable segments by the computer employed for processing, and each segment is Manuscript to be reviewed Computer Science processed using DINEOF. The segments are then assembled to produce an orthomosaic with a reduced amount of holes. For proof of concept, the detection of chlorophyll over the exclusive economic zone of Mexico (EEZM) is analyzed. For that purpose, a system was developed to fill in holes produced by clouds or the satellite platforms' proper trajectory followed by the MODIS and VIIRS sensors.</ns0:p></ns0:div> <ns0:div><ns0:head>A COMPUTER-INTENSIVE APPROACH TO FILL IN MISSING DATA</ns0:head><ns0:p>The proposed data-intensive computing approach to fill in the missing geophysical data from satellite imagery comprises three main conceptual modules: (1) Automatic satellite data download, (2) Satellite data merging, and (3) Filling in missing satellite data using an intensive computer approach. Each module considers the output of the previous one, and their operation is detailed in the sections below. The whole process is depicted in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 1.</ns0:head><ns0:p>Proposed approach for filling in missing satellite data. First, L2 data from the region of interest (ROI) are downloaded. Satellite data are then merged using a 2-step strategy, estimating missing pixels as the average of at least three neighbors in same-day images from different sensors. Finally, missing data is filled in using a data-intensive approach that takes advantage of segmented ROIs and DINEOF</ns0:p></ns0:div> <ns0:div><ns0:head>Automatic satellite data download</ns0:head><ns0:p>The automatic satellite data download module is designed to continuously survey changes in the satellite repository and retrieve the most recent satellite imagery from the region of interest (ROI). Without loss of generality, it is assumed that data are retrieved from the OBPG-Ocean color data repository, but other platforms may be configured with the same behavior. 
The three steps established in this module are listed below.</ns0:p><ns0:p>1. The first step consists in querying the repository to retrieve the schedule of both sensors (e.g. MODIS and VIIRS): the time when they passed over the zone of study.</ns0:p><ns0:p>2. In the second step, the schedule information is processed to extract the precise hour when the satellite acquired the region of interest (ROI).</ns0:p><ns0:p>3. Finally, the links to the levels L2 products are built in the third step, and the download process starts. The resulting L2 products are stored in a user-specific path that is accessed by the other two modules.</ns0:p><ns0:p>As a result, the module for satellite data download retrieves the images from the configured sensors, corresponding to the ROI at a determined date.</ns0:p></ns0:div> <ns0:div><ns0:head>3/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Satellite data merging</ns0:head><ns0:p>Every time L2 satellite data corresponding to the ROI are downloaded, a new daily image is created by merging data from the selected sensors according to the application. The procedure to create the combined image involves the two steps described in subsections below (e.g. preprocessing satellite data and merging preprocessed data).</ns0:p></ns0:div> <ns0:div><ns0:head>Preprocessing satellite data</ns0:head><ns0:p>Preprocessing data consists of creating the orthomosaics for each daily scene: one for each sensor (e.g.</ns0:p><ns0:formula xml:id='formula_0'>{I 1 , I 2 , &#8226; &#8226; &#8226; I L }).</ns0:formula><ns0:p>Operations like spatial resampling or scaling are required in some cases to prepare the raw data to assemble a single orthomosaic for each of the L sensors. Subsequently, each orthomosaic is processed to fill missing data during the merging phase; a m &#215; m sliding window is applied to the p empty pixels in the image that accomplish with the criteria of having at least three neighbors (i.e., three pixels with data). Such a criterion was established to avoid simple information duplicity of close pixels. The new value p x of an empty pixel is computed using Eq. 1.</ns0:p><ns0:formula xml:id='formula_1'>p x = &#8721; n i=1 p i n ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where p x is the missing data pixel, p i is one of the n neighbor pixels with data within the m &#215; m sliding window, considering n &#8805; 3.</ns0:p></ns0:div> <ns0:div><ns0:head>Merging preprocessed data</ns0:head><ns0:p>In this step, the preprocessed orthomosaics are merged. In order to obtain the combined image at a selected date (day), the orthomosaic with most data related to chlorophyll is first chosen and tagged as base-image (I b ). Then, it is necessary to define the order of the processing of each orthomosaic. The ordering criteria considers the root-mean-square error (RMSE) between the base-image and each of the remaining orthomosaics (I r ), assigning higher priority to the orthomosaics with lower RMSE, see Eq. 
2.</ns0:p><ns0:formula xml:id='formula_2'>RMSE(I r ) = 1 N 3 &#8721; r=1 (I b &#8722; I r ) 2 (2)</ns0:formula><ns0:p>where I b is the base-image, I r corresponds to each of the other images, and N is the number of valid pixels in both images (i.e., I b and I r ).</ns0:p><ns0:p>Once the priority is established, data from the four images are combined considering I b as the baseline and following the order of priority given by the RMSE: each missing pixel with coordinates (x, y) in I b is substituted with the pixel from I r with the highest priority on the same position. If none of the I r images contains data, it is considered as a missing pixel. Finally, an adjustment is applied to reduce the impact of differences in acquisition conditions from each sensor, such as different acquisition times and the zone dynamics (currents and winds). Such an adjustment between images I b and I r was applied using the Inverse Distance Weighting (IDW) to the four nearest pixels in directions (&#8722;x, x, &#8722;y, and y). In essence, the resulting image I M produced by merging the sensor-wise orthomosaics {I 1 , I 2 , I 3 , I 4 } incorporate the information from all sensors, and hence, includes fewer gaps than any of the individual orthomosaics.</ns0:p></ns0:div> <ns0:div><ns0:head>Filling in missing satellite data</ns0:head><ns0:p>After preprocessing, the merged orthomosaic I M still remains with gaps, and DINEOF is employed to compute and fill in the gaps. In order to address this problem with high-resolution images from wide areas of study, the data-intensive approach is divided into the following three steps: (1) data segmentation,</ns0:p><ns0:p>(2) fill in missing data for each segment, and (3) assemble the segments. The strategy to fill in missing data is shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, and each step is detailed in sections below.</ns0:p></ns0:div> <ns0:div><ns0:head>Data segmentation</ns0:head><ns0:p>The merged orthomosaic I M comprises the whole ROI to be monitored, which may be computationally unmanageable, depending on the area of study and the computer to process DINEOF. Thus, I M is evenly divided into J &#215; K = NS smaller manageable size segments {I S i, j }: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_3'>I M = &#63726; &#63727; &#63727; &#63727; &#63728; I S 1,1 I S 1,2 &#8226; &#8226; &#8226; I S 1,K I S 2,1 I S 2,2 &#8226; &#8226; &#8226; I S 2,K . . . . . . . . . . . . I S J,1 I S J,2 &#8226; &#8226; &#8226; I S J,K &#63737; &#63738; &#63738; &#63738; &#63739;<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>The user-defined values for J and K should be selected according to the computational resources available to execute DINEOF, and indirectly define the size of the segments {I S i, j }. Inspired by binary search, the orthomosaic may be evenly divided into 2 &#215; 2, 4 &#215; 4, 8 &#215; 8, and so on. As soon as the computer system is able to process the images, the divisions are fixed and the monitoring process configured.</ns0:p></ns0:div> <ns0:div><ns0:head>Fill in missing data for each segment</ns0:head><ns0:p>Filling in missing data is a parallel process that is independently applied to all segments in which the merged image was divided (see Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). Using a massive processing configuration (e.g. a computer cluster or a multiprocessor computer) is advantageous to accelerate the complete process. 
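As a minimal sketch of this segment-and-fill strategy, the Python fragment below divides a merged orthomosaic held as a NaN-masked NumPy array into J x K segments (Eq. 3), processes the segments in parallel, and reassembles them in the original order (Eq. 4). The fill_segment helper is a placeholder for the external DINEOF run on one segment (writing the *.gher and time files and reading the result back); the helper, the default J and K values, and the use of multiprocessing are illustrative assumptions rather than the system's actual interface.

import numpy as np
from multiprocessing import Pool

def split_orthomosaic(I_M, J, K):
    # Evenly divide the merged orthomosaic into J x K segments (Eq. 3).
    return [np.array_split(block, K, axis=1)
            for block in np.array_split(I_M, J, axis=0)]

def fill_segment(segment):
    # Placeholder for the per-segment DINEOF call; here the data are returned unchanged.
    return segment

def fill_orthomosaic(I_M, J=8, K=8, workers=4):
    segments = [s for row in split_orthomosaic(I_M, J, K) for s in row]
    with Pool(workers) as pool:                  # independent segments in parallel
        filled = pool.map(fill_segment, segments)
    # Reassemble in the same order the image was divided (Eq. 4).
    return np.vstack([np.hstack(filled[j * K:(j + 1) * K]) for j in range(J)])

When run under a __main__ guard, fill_orthomosaic(I_M) returns an array of the same shape as I_M, with each tile replaced by its filled counterpart once fill_segment wraps the actual DINEOF execution.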
In this step, each segment I S j,k is filled in, and the resulting filled segments I F j,k are stored for posterior processing.</ns0:p></ns0:div> <ns0:div><ns0:head>Assemble segments</ns0:head><ns0:p>At the final module, the resulting I F j,k segments are assembled in the same order that was divided I M , to obtain a new I F orthomosaic without holes.</ns0:p><ns0:formula xml:id='formula_4'>I F = &#63726; &#63727; &#63727; &#63727; &#63728; I F 1,1 I F 1,2 &#8226; &#8226; &#8226; I F 1,K I F 2,1 I F 2,2 &#8226; &#8226; &#8226; I F 2,K . . . . . . . . . . . . I F J,1 I F J,2 &#8226; &#8226; &#8226; I F J,K &#63737; &#63738; &#63738; &#63738; &#63739;<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>A distinct but equivalent way to define the number of segments is to establish the size of each segment I F j,k , assuming all segments are the same size. The size of I F j,k corresponds to a 2-element tuple (width, height) that define the number of pixels per side, considering the ratio between width and height of the segment to be the same of the ratio between the width and the height of the orthomosaic I M :</ns0:p><ns0:formula xml:id='formula_5'>width(I F j,k ) height(I F j,k ) = width(I M )</ns0:formula><ns0:p>height(I M ) .</ns0:p></ns0:div> <ns0:div><ns0:head>5/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div> <ns0:div><ns0:head>STUDY CASE: CHLOROPHYLL ON THE EEZM</ns0:head><ns0:p>The study case used for proof of concept was designed to monitor the chlorophyll-a (Chl-a) over a wide sea area: EEZM. Data from the MODIS and VIIRS sensors were combined to obtain L2 products with the least amount of missing data. The importance of monitoring the Chl-a is related to the dynamics of phytoplankton, which provides the information to predict the impact of climate change in ocean ecosystems. Phytoplankton is composed of microscopic algae and other photosynthetic organisms that inhabit the surface of oceans, rivers, and lakes. These microorganisms constitute the primary source of energy in aquatic systems due to their photosynthetic capacity <ns0:ref type='bibr' target='#b27'>(Winder and Sommer, 2012)</ns0:ref>, and their contribution to preserving the climate balance and the biogeochemical cycle in such ecosystems <ns0:ref type='bibr' target='#b13'>(Hallegraeff, 2010)</ns0:ref>. For some decades now, the Chl-a has been widely used to estimate phytoplankton's biomass in surface water using satellite-based methods <ns0:ref type='bibr' target='#b12'>(Gomes et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b18'>Kramer and Siegel, 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>O'Reilly et al., 1998)</ns0:ref>. Such usage is given in view of the fact that the Chl-a is the main photosynthetic pigment of phytoplankton. 
In fact, Chl-a is used as a photoreceptor and gives the green color to the phytoplankton, and various studies have settled the fundamentals of the impact of Chl-a with light reflectance of water bodies, especially in the visible light and close infrared regions of the electromagnetic spectrum <ns0:ref type='bibr' target='#b9'>(Gitelson, 1992;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dall'Olmo and Gitelson, 2005;</ns0:ref><ns0:ref type='bibr' target='#b28'>Yacobi et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Study area</ns0:head><ns0:p>The area of study selected for the analysis and proof of concept corresponds to the EEZM, and covers the sea region close to the seashore (CONABIO, 2022). The distance covered by the EEZM is up to 370.4 km from the continental and insular seacoast. The surface area of the EEZM is one of the greatest in the world and is estimated to be 3 269 386 km 2 .</ns0:p><ns0:p>The complete satellite images are required to study Chl-a concentrations, e.g. images without missing data over the area of study. The size of such a huge area makes the task prohibitive for the computer facilities available for experimentation. Bands 8 to 16 from the MODIS sensors were used, corresponding to wavelengths from 405 to 877 nm and a spatial resolution of 1 km. These bands are mainly employed for Ocean Color and Phytoplankton and Biogeochemistry. On the other hand, the VIIRS sensor provides measurements from water, land, and atmosphere, with a temporal resolution of 12 hrs for day and night ocean data acquisition.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational details on experiments</ns0:head><ns0:p>A For preprocessing, the Graph Processing Tool (GPT) from the Sentinel Application Platform (SNAP) was used to create the orthomosaics and project the sine wave system's data to the WGS-84. Segmentation was performed over the merged image to speed up the process of filling chlorophyll concentration data, following the NetCDF format. The maximum area of the segments is defined in the system configuration parameters and automatically establishes the number of segments in which the image is divided. The * .gher binary files are then generated with their respective mask of the zone that is not processed (e.g., land), as well as its time file that allows activating the filtering of the temporal covariance matrix.</ns0:p><ns0:p>The * .gher and time files generated at segmentation are then used to execute DINEOF, employing the configuration parameters shown in Table <ns0:ref type='table'>1</ns0:ref>. The proposed algorithm rewrites such parameters in a file with the * .init extension. Afterward, the file is read by the DINEOF program, which computes the missing data for each of the segmented data series. In the fill-in missing data step, DINEOF generates a time series without holes for each segment, and it is stored in * .gher file format. Finally, the orthomosaic is reconstructed using the segmented images without missing data. The resulting image is written in NetCDF format.</ns0:p></ns0:div> <ns0:div><ns0:head>6/14</ns0:head><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Table 1.</ns0:note></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Satellite data merging</ns0:head><ns0:p>The sensibility of the preprocessing satellite data module to the size of the sliding window was studied using nine different square windows: m &#215; m windows with m = {3, <ns0:ref type='bibr'>5, 7, 9, 11, 15, 21, 31 and 51}.</ns0:ref> In this sensibility test, the base image employed for each sensor was composed by the sequence of images for February 2018; and missing samples were generated using the images for February the 1st, 2018, for each sensor.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> shows the RMSE, the percentage of filled data, the computation time that was employed in the nine different window sizes, and the mosaicking SNAP function. The mosaicking results are the reference for data coverage previous to the application of the sliding window. According to Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>, the window sizes 5 &#215; 5 and 7 &#215; 7 presented the lowest RMSE value when compared to other window sizes. However, the latter showed a higher percentage of data coverage (28.71% against 25.52%), although the processing time increases according to the window size. In order to compare the results of the mosaicking and evidence the advantages of applying the sliding window, Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> shows the results of the application of the method to images for February 1 st , 2018. </ns0:p></ns0:div> <ns0:div><ns0:head>The four images in</ns0:head></ns0:div> <ns0:div><ns0:head>Filling in missing satellite data</ns0:head><ns0:p>After preprocessing was applied to the whole temporal series 2018-2019, the impact of the three hyperparameters was evaluated on the proposed system: (1) alpha, (2) numit, and (3) time (see Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>8/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Chl-a The values of the hyperparameters were explored through the application of DINEOF, after splitting the preprocessed orthomosaic into six independent zones (see Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>). Such a division favors the analysis of Chl-a's behavior either in coastal zones or deep sea, separated from coasts. For example, zone 4 presents deep seawater with a concentration of Chl-a that differs from the concentration in zone 5, which is closer to coasts. On the other hand, a high concentration of Chl-a can be observed close to the coasts in zones 1 to 4 and 6. In that sense, Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> presents the six zones in which the area of study was divided for the evaluation of the DINEOF tuning parameters.</ns0:p><ns0:p>The proposed approach was evaluated at two different levels. First, at the adjustment of internal hyperparameters of DINEOF, where image segmentation was adapted to 2 &#215; 3 sub-images, and hyperparameters alpha and numit were evaluated according to ranges in Table <ns0:ref type='table'>1</ns0:ref>. The search for the more suitable hyperparameters for DINEOF was conducted by running two experimental designs: one for time=30, and another for time=60. 
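A minimal sketch of one way such a hyperparameter search could be organized is shown below. It assumes a run_dineof(zone, alpha, numit, time) wrapper that executes DINEOF on one zone and returns the reconstructed field, together with the ground-truth values of the pixels that were deliberately masked out; the wrapper, the rmse helper, and the parameter grids are illustrative placeholders, not the system's actual interface.

import itertools
import numpy as np

def run_dineof(zone, alpha, numit, time):
    # Placeholder for writing the *.init configuration, launching the external
    # DINEOF program on the zone, and reading the reconstructed field back.
    raise NotImplementedError

def rmse(reconstruction, truth, mask):
    # RMSE computed only over the artificially removed (cloud-masked) pixels.
    diff = reconstruction[mask] - truth[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

def grid_search(zone, truth, mask, alphas, numits, times=(30, 60)):
    # Explore every (alpha, numit, time) combination and keep the lowest RMSE.
    best = None
    for alpha, numit, time in itertools.product(alphas, numits, times):
        rec = run_dineof(zone, alpha, numit, time)
        err = rmse(rec, truth, mask)
        if best is None or err < best[0]:
            best = (err, {"alpha": alpha, "numit": numit, "time": time})
    return best

In this layout, calling grid_search once per zone mirrors the structure of the two experimental designs, and the returned dictionary records the parameter combination that minimizes the reconstruction error.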
During the adjustment process, the RMSE was estimated for the distinct possible values of alpha and numit, considering the ranges established in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>The resulting expected error obtained through the search process is shown in With fixed hyperparameters, at the second level of evaluation, the RMSE was computed for distinct areas of the segments on the three cloudy scenarios (e.g. 20%, 30%, and 50% clouds). Four segmentation levels were considered for I M in order to parallelize the process, with image segments represented by the triplets j &#215; k &#215;t, with j and k as described in Section Filling in missing satellite data; and t representing the time in trimesters. The segmentation levels correspond to 312&#215;187&#215;12, 625&#215;374&#215;12, 1250&#215;749&#215;12, and 2500 &#215; 1498 &#215; 12. However, the computer configuration employed to run the software was not able to completely run the system with the latter configuration due to memory overflow. Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> presents the average time and RMSE that were obtained after the execution of the experimentation with the aforementioned segment sizes. According to Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>, the computation of the missing data showed a better performance when the segments size and the amount of data to estimate were rather small compared to the size of I M . In fact, the lowest RMSE was attained when I M was divided into 8 &#215; 8 segments in the three cloud scenarios. In the hardest scenario, with 50% of missing data due to clouds, the proposed approach achieved an RMSE of 0.43, which was lower than all other feasible cases. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this research paper, a new efficient approach is proposed to fill in missing high-resolution data in satellite imagery. The proposed approach is organized in three main modules, which provide the versatility to analyze wide areas of study, with high-resolution images and long time series, supporting the massive data processing using computer clusters.</ns0:p><ns0:p>As proof of concept, the proposed approach was applied to fill in the holes in satellite imagery, using data from MODIS sensors aboard the Terra and Aqua satellite platforms and VIIRS aboard the SNPP and JPSS-1 satellites. The multi-sensor fusion approach implements geospatial techniques such as sliding averages and inverse weights interpolation with the squared distance and approaches to adjust the differences in the time each platform overflights the zone of study. Results showed that the proposed approach overcomes the traditional averaging strategy. When compared to the proposed approach, the traditional average strategy produces zones in which the values of Chl-a are overestimated. In contrast, the proposed approach preserves the oceanic structures required to study the ocean dynamics related to the currents and winds.</ns0:p><ns0:p>Although the DINEOF method is widely employed in several remote sensing studies to fill in missing data, the direct process of high-resolution images with DINEOF results in a high computational cost in terms of memory and computing power. For that reason, the proposed approach divides the merged image, and each segment is processed separately using DINEOF. Finally, the processed segments are assembled without missing data. 
The process of adjusting the input parameters for DINEOF supports its performance in filling in missing data through the time series analysis. The analysis of the results shows, in general terms, As future work, the proposed approach may be applied to distinct scenarios that may provide evidence of the efficiency of the proposed data-intensive approach. Among the application areas, one of the most relevant may be satellite-guided fishing through phytoplankton monitoring. In fact, phytoplankton constitutes the basic nourishment for small fishes, crustaceans, and other sea life forms that are the base food for larger fishes and sea mammals. Although the data-intensive approach was evaluated on sea monitoring, land applications may also benefit from its efficiency. Additionally, with these results, it </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>compute the chlorophyll concentration using DINEOF to fill in gaps produced by clouds, using the Ocean Colour Monitor-2 (OCM-2) onboard Oceansat-2 satellite for the period 2016-2019 over the northern Indian Ocean. On the other hand, Alvera-Azc&#225;rate et al. (2011) implemented the data restoration with DINEOF using the time series with a single variable (monovariate), and several variables (multivariate approach). More recently, Alvera-Azc&#225;rate et al. (2021) reported a Suspended Particulate Matter reconstruction combining Sentinel-2 and Sentinel-3 imagery using DINEOF. The advantage of such a combination is that it retains the high spatial resolution of the Sentinel-2 data while increasing the temporal resolution from Sentinel-3 data.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Filling in missing satellite data takes advantage of parallel processing to independently process previously divided segments and assemble results in a single orthomosaic</ns0:figDesc><ns0:graphic coords='6,245.13,63.78,206.78,216.37' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>multiprocessor computer with distributed memory was used to run the experiments. The so-called Perseo computer is part of the computing network of the Centro Nayarita de Innovaci&#243;n y Transferencia de Tecnolog&#237;a A.C., M&#233;xico. The Perseo cluster is provided with 388 processing cores, 1,280 GB RAM, 356 TB permanent storage, and runs the CentOS 7.0 operating system. The proposed approach was implemented using a combination of scripts written in Python and Matlab 2018. In particular, the automated image download module, written in Python, and the whole processing code, written in Matlab, are freely available to download through GitHub: https://github.com/jroberto37/fill_missing_data.git. The download script takes advantage of the geolocation products that include MOD03, MYD03, VNP03MODLL, and VJ103DNB, for sensors MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1 respectively.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Maps of cloud masks in Mexico economic exclusive zone (EEZM).
(a) Chl-a image composed by 30 scenes (January 2018), (b) Mask with 20% clouds, (c) Mask with 30% clouds and (d) Mask with 50% clouds.</ns0:figDesc><ns0:graphic coords='8,348.56,454.36,208.06,130.71' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 5 shows the orthomosaics obtained for each sensor after preprocessing downloaded samples and applying the 7 &#215; 7 sliding window. Due to the differences in trajectories and climate conditions at the overflight time, all orthomosaics present quite different areas of missing samples. For example, Fig. 5(a) and Fig 5(b) corresponding to MODIS Aqua and MODIS Terra respectively, have a band of missing data at the center of the image, but with distinct orientations. On the other hand, Fig. 5(c) and Fig 5(d) that correspond to VIIRS JPSS-1 and SNPP, do not present clear missing data patterns. Such differences favor the exploitation of the different sources to obtain a more complete resulting orthomosaic I M . Once the four orthomosaics were generated, data from the VIIRS-JPSS-1 sensor was selected as the base image in the merging preprocessed data module. The orthomosaic from the VIIRS-JPSS-1 sensor was chosen as the base image (I b ) because it presents a lower percentage of missing data than the other sensors. Then, the final merged I M is processed with DINEOF with the orthomosaics from previous days, as described in the following section.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Impact of the pre-filling with sliding windows in the region of the Gulf of Mexico. (a) JPSS-1 original, (b) Filling zones, (c) Mosaicking and (d) Windows 7 &#215; 7.</ns0:figDesc><ns0:graphic coords='10,359.47,194.53,124.05,131.07' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Fig. 7 .Figure 5 .</ns0:head><ns0:label>75</ns0:label><ns0:figDesc>Figure 5. Maps of orthomosaics in Mexico economic exclusive zone (January 1st 2019). (a) MODIS Aqua orthomosaic, (b) MODIS Terra orthomosaic , (c) VIRSS JPSS-1 orthomosaic and (d) VIRSS SNPP orthomosaic.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>) 0.44 (0.16) Finally, Fig. 8(a) presents the merged image I M , created with the data acquired by the MODIS and VIIRs sensors on January 1st, 2019. On the other hand, Fig. 8(b) presents the final result of the proposed 10/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:1:2:NEW 4 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Segmentation of the area of study, performed automatically by the proposed approach</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.58,261.52' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. RMSE corresponding to the application of DINEOF for different values of alpha and numit, at distinct zones and time frames; the colorbar represents the RMSE. (a) Zone 1 / time=30, (b) Zone 2 / time=30, (c) Zone 3 / time=30, (d) Zone 1 / time=60, (e) Zone 2 / time=60, (f) Zone 3 / time=60, (g) Zone 4 / time=30, (h) Zone 5 / time=30, (i) Zone 6 / time=30, (j) Zone 4 / time=60, (k) Zone 5 / time=60 and (l) Zone 6 / time=60.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. 
Results of the process of filling in missing data in satellite images. (a) Merged image (I M ). (b) Final Image without missing data (I F ).</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,141.73,211.54,413.57,238.29' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Results of the preprocessing module in terms of RMSE, percentage of data coverage in the resulting orthomosaic, and the preprocessing time in seconds. Bold numbers symbolize the best results when distinct window sizes are compared</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Mosaicking</ns0:cell><ns0:cell>3 &#215; 3</ns0:cell><ns0:cell>5 &#215; 5</ns0:cell><ns0:cell>7 &#215; 7</ns0:cell><ns0:cell>9 &#215; 9</ns0:cell><ns0:cell>11 &#215; 11</ns0:cell><ns0:cell>15 &#215; 15</ns0:cell><ns0:cell>21 &#215; 21</ns0:cell><ns0:cell>31 &#215; 31</ns0:cell><ns0:cell>51 &#215; 51</ns0:cell></ns0:row><ns0:row><ns0:cell>RMSE</ns0:cell><ns0:cell>0.429</ns0:cell><ns0:cell>0.419</ns0:cell><ns0:cell>0.408</ns0:cell><ns0:cell>0.408</ns0:cell><ns0:cell>0.421</ns0:cell><ns0:cell>0.437</ns0:cell><ns0:cell>0.471</ns0:cell><ns0:cell>0.513</ns0:cell><ns0:cell>0.582</ns0:cell><ns0:cell>0.674</ns0:cell></ns0:row><ns0:row><ns0:cell>Data coverage</ns0:cell><ns0:cell>17.35%</ns0:cell><ns0:cell>20.98%</ns0:cell><ns0:cell>25.53%</ns0:cell><ns0:cell>28.72%</ns0:cell><ns0:cell>30.57%</ns0:cell><ns0:cell>32.31%</ns0:cell><ns0:cell>35.01%</ns0:cell><ns0:cell>37.96%</ns0:cell><ns0:cell>41.37%</ns0:cell><ns0:cell>45.56%</ns0:cell></ns0:row><ns0:row><ns0:cell>Time (s)</ns0:cell><ns0:cell>7.753</ns0:cell><ns0:cell>8.748</ns0:cell><ns0:cell>13.402</ns0:cell><ns0:cell>17.583</ns0:cell><ns0:cell>19.314</ns0:cell><ns0:cell>21.640</ns0:cell><ns0:cell>28.685</ns0:cell><ns0:cell>40.228</ns0:cell><ns0:cell>60.736</ns0:cell><ns0:cell>138.663</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Average time and RMSE obtained after the application of the proposed approach with distinct segment sizes. Bold numbers symbolize the lowest RMSE</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Segment size</ns0:cell><ns0:cell cols='2'>312 &#215; 187 &#215; 12</ns0:cell><ns0:cell cols='2'>625 &#215; 374 &#215; 12</ns0:cell><ns0:cell cols='2'>1250 &#215; 749 &#215; 12</ns0:cell></ns0:row><ns0:row><ns0:cell>Cloud test</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell></ns0:row><ns0:row><ns0:cell>20%</ns0:cell><ns0:cell>7.52 (4.59)</ns0:cell><ns0:cell cols='5'>0.45 (0.11) 43.41 (28.82) 0.36 (0.08) 187.17 (111.09) 0.37 (0.08)</ns0:cell></ns0:row><ns0:row><ns0:cell>30%</ns0:cell><ns0:cell cols='4'>10.42 (27.84) 0.47 (0.12) 43.21 (27.76) 0.35 (0.07)</ns0:cell><ns0:cell>175.84 (90.69)</ns0:cell><ns0:cell>0.36 (0.06)</ns0:cell></ns0:row><ns0:row><ns0:cell>50%</ns0:cell><ns0:cell cols='5'>13.86 (31.77) 0.52 (0.16) 58.36 (42.02) 0.43 (0.17) 186.72 (130.17</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> </ns0:body> "
"Subject: Submission of revised paper CS-2020:10:54271:0:0:REVIEW March 3, 2022 Gang Mei PeerJ Computer Science Dear Editor Thank you for allowing us to submit a revised draft of our manuscript titled ”An approach to fill in missing data from satellite imagery using data-intensive computing.” We appreciate the time and effort that you and the reviewers have dedicated to providing your valuable feedback on our manuscript. We are grateful to the reviewers for their insightful comments on our paper. We have been able to incorporate changes to reflect most of the suggestions provided by the reviewers, although experimentation with the suggestions required some months of computer tests. In regard of the changes, the paper’s quality and legibility was significantly improved, and we appreciate the insight of the reviewers’ comments. Here is a point-by-point response to the reviewers’ comments and concerns. Reviewer 1 Basic reporting 1.1 — I support the publication of this work as the authors present a non-easy task through a creative approach. However, I suggest they follow the below suggestions before this research can be considered for publication. I also noted that the quality of the figures provided is not the best and needs to be improved. Reply: Thanks for your comments; we revised and replaced all figures in the paper to ensure good quality. Specifically, Figures 1 and 2 were added; Figures 3, 4, 6, and 7 were improved. Experimental design 1.2 — To improve the experimental design of filling missing (infrared/missing/clouds) satellitedata I suggested following a different approach. It relates to the use of non-missing data from model outputs and adding different percentages of clouds/missing data randomly. Better yet, this new approach can be developed on multiple time steps fields, thus you fully approach the strength of your method by following the existent nature of surface features. 1 Thus you will be able to develop stronger statistics (figure/table) to conclude how well/bad this new approach solves the filling of missing data while ensuring that your newly created data do not create false surface features and it is consistent with the original data. Reply: The experimental design was modified to include monthly images without missing data. Then, three cloud occlusion masks were generated to produce distinct levels of missing data: 20%, 30%, and 50%, see Figure 3. Finally, results were evaluated using the RMSE (Root Mean Squared Error), see Table 3. The experimental results were added to the results section. 1.3 — It is also important that authors provide the original data and the methodology (Python/Matlab) scripts to recreate their analysis. Reply: Data used in experiments was freely downloaded from the Ocean color data repository. Both the Python script to download satellite data, and the Matlab scripts to process the images, was shared on GitHub (https://github.com/jroberto37/fill_missing_data.git). Validity of the findings 1.4 — I believe the validity of the findings is questionable based on the current approach. See above for suggestions on a new approach to better evaluate statistics and skipping the creation of false features, which is conditional to some interpolation approaches. Reply: The experimental design was redefined to address the evaluation issues (Figure 1). This is detailed in response to comment 1.2. Moreover, please see Tables 2 and 3. 
Table 2 shows the RMSE, the percentage of filled data, the computation time that was employed in the nine different window sizes, and the mosaicking SNAP function. The mosaicking results are the reference for data coverage previous to the application of the sliding window. Table 3 presents the average time and RMSE obtained after the experimentation’s execution. Comments for the Author 1.5 — I suggest you invest a little bit more time in this analysis and provide a robust analysis that we all in the satellite-data world will appreciate. Reply: The statistical analysis was included in the new version of the manuscript, using the RMSE and computing time as evaluation indicators; please see Tables 2 and 3. Reviewer 2 Basic reporting 2.1 — The article has a clear and unambiguous English. The English level is sufficient for an overall correct understanding of the text, taking into account that is not likely the mother tongue 2 of any of the authors. However, I have highlighted (in my ”General Comments”) a few terms that should be replaced by more appropriate ones Literature references are sufficient to illustrate the scope of the research and to give adequate background. The structure of the article is correct and the authors share all the material needed for a thorough revision. The objectives of the research and the results are clearly stated Reply: Thanks for your suggestions, the terms you were replaced along the paper. Experimental design 2.2 — The scope of the research fits in the aims and scope of the journal. The authors propose a methodological improvement to well established gap-filling methods for ocean colour satellite data. The experimental design is correct and the obtained processor allows to apply the DINEOF program to high-resolution data through a segmentation procedure. This is the main advance in the developed processor, together with a refinement in the merging procedure that, however, it is not clear to me how different is from the previous method, based on pixel averages (see my comment in the ”General Comments” section). The advance in knowledge is very small, but the results are of interest to the community of satellite data processing. Reply: The authors agree with the reviewer, the paper and experimentation were extensively revised, and statistical analysis was included to provide evidence of the results; this can be seen in the experimental results section. In fact, the problem that is addressed in the paper is related to the impossibility of DINEOF to process large size images (e.g. the exclusive economic zone of Mexico, EEZM) due to memory overflow. The proposed approach segments the overall orthomosaic and takes advantage of parallel computing to process independent segments. Validity of the findings 2.3 — The results presented are insufficient to support the conclusions of the authors. I think this is the weakest part of the article, that needs to be improved. In my general comments below I suggest the authors different validation exercises that would serve to improve the soundness of the conclusions. In summary: - For the assessment of the differences in the merging method, artificially removing a certain number of valid pixels in the original images is proposed. These pixels would be then used for validation - Instead of the visual inspection in Figure 5, some statistical analysis is asked for Reply: We appreciate your valuable comments, which have helped to improve the article. 
New experimentation was designed to use images without missing data, and holes were added to the images using cloud masks based on real data, with 20%, 30%, and 50% missing data. Statistical analysis was added using the RMSE and computation time. 3 Comments for the Author Below, my notes with comments and amendments requests throughout the text: 2.4 — Abstract: Line 19: Replace “capture” by “acquisition” Line 23: Replace “For proof of concept” by “As a proof of concept” Line 26: Replace “chlorophyll level” by “chlorophyll concentration” Throughout the text, the term “acquisition” should be used preferably, instead of “capture”. Reply: Thank you very much for your specific comments; the terms were changed in the whole new version of the paper. 2.5 — Introduction: Overall, the introduction to the oceanographic background of the work needs a revision, preferably by an expert in the field, to correct several misunderstandings on the biological and physical basis of the phytoplankton role in Oceans. Line 28: Replace “plant organisms” by “photosynthetic organisms”. Besides microscopic algae, Cyanobacteria are the main constituents of phytoplankton. The references to the impact of phytoplankton are basically incorrect and should be removed or changed. The authors confound “weather” with “climate”. Moreover, the term “weather balance” is vague and meaningless. Line 31: replace “biochemical cycle” by “biogeochemical cycle” It is not clear how phytoplankton could provide information to predict impact of Climate change. The whole first paragraph needs to be revised. Line 34: The most used abbreviation for Chlorophyll-a is “Chl-a”, instead of “Chlo-a” Line 38: Replace “interaction” with “impact”. Chl-a interacts with light. Its interaction, in turn, determines the magnitude and shape of the reflectance spectra in water bodies. Line 39: Replace “closed” by “close” Line 42: replace “space” by “scene” Line 53: What does “an overall temporal record” means? Line 74: Again the confusion between “weather” and “climate”. The sentence in line 74 could be correct, because there are many meteorological satellites used in atmospheric modelling and weather forecasting. But the three references included are related to ocean color sensors and to the relationship of phytoplankton and climate Reply: The whole paper was revised by experts in the field and modified accordingly. Also, it was revised that the proper terms were used in the new version of the text. Finally, the references mentioned above have been updated. 2.6 — Methodology Lines 89-90: The sentence has no sense: “importance of chlo-a in chlorophyll studies” ?? Line 103: What does “Python is a representative” means? Line 110: Replace “passed by” by “passed over” The formula for filling holes in the preprocessing step consists in calculating the average of the n nearest valid pixels (with data). Which is the difference in this calculation with the so-called “classical” method based on the average? The adjustment applied to reduce the impact of differences in capture conditions is not clearly explained Reply: The writing was changed according to the suggestions. The term “classical” was also substituted by Mosaicking, considering that it was the function used for preprocessing (from GPT-SNAP). Similarly, the explanation of the adjustment process was clarified. 2.7 — Results In the explanation of the merging procedure, it is stated that “the ortho-mosaic with most data related to chlorophyll is chosen and tagged as base-image”. 
However, in the example 4 shown in the results, the VIIRS-SNPP image was taken as the base-image despite not being the one with more valid data. Can the authors explain the reasons for that choice? Reply: The experimentation on the preprocessing process was repeated with distinct window sizes, and the most accurate window size (7 × 7) was selected for merge, and in fact, the VIIRS-JPSS1 resulted with less missing data. It was selected as base image. The explanation was revised and corrected accordingly. 2.8 — Figure 5 shows that the classical method (GPT-SNAP) overestimates values with respect to the proposed method. But, why it is assumed that the proposed method is producing the correct values? A good way of proving this is to produce artificial holes (by removing valid pixels with a simulated pattern) and then use the valid original pixels to test the performance of the merging procedure. Authors are encouraged to do this experiment. Reply: The comparison was improved by introducing controlled holes in the orthomosaics, and estimating RMSE for each case. Results were reported on the results section. 2.9 — Line 209: Replace “fussed” by “fused” In the comparison described lines 209 to 215 (figure 6) it is very difficult to visually observe the differences in concentration claimed by the authors. A comparison of histograms and/or statistic test on differences would be more informative than the visual inspection. Reply: The term “fussed” was replaced by “fused” in the whole paper. Moreover, the comparison was rewritten according to the new results, including the statistic test using the RMSE. Reviewer 3 Basic reporting 3.1 — The manuscript ”An approach to fill in missing data from satellite imagery using dataintensive computing” by Rivera-Caicedo et al is well structured and the scientific investigation is clearly presented, including a basic context and motivation. However, the scoping is not clear and the methodology shows little originality. Reply: The authors appreciate your comments and follow the suggestion. In that sense, the methodology was revised and detailed, and the experiments were updated accordingly. We hope this new version fulfills the quality expectations. 3.2 — The use of the DINEOF method for gap-filling is not new, and the literature contains other titles, including MODIS-based work. Data Interpolating Empirical Orthogonal Functions (DINEOF): a tool for geophysical data analyses (2011) Reconstruction of MODIS total suspended matter time series maps by DINEOF and validation with autonomous platform data (2011) Analysis of gap-free chlorophyll-a data from MODIS in Arabian Sea, reconstructed using DINEOF (2018) Exploratory Analysis of Urban Climate Using a Gap-Filled Landsat 8 Land Surface Temperature Data Set (2020) Reply: The paper’s contribution was clarified along with the text, and the suggested references were included and discussed in the Introduction Section. 5 Experimental design 3.3 — Although the authors claim they introduce ”a new approach for filling in missing data from satellite imagery”, the level of originality is low. The DINEOF method is already well-known, and the authors present some references. For me it is not clear what is new, and the authors do not emphasize the novelty. Reply: The authors agree with the reviewer regarding the small contribution that consists of the design and implementation of the whole processing chain. 
The whole approach includes automatic download of satellite data, preprocessing, and the application of parallel computing to fill in missing data. One of the challenges addressed in the experimentation was finding the limits of DINEOF to process large images. To do that, we proposed an experimental design to tune the hyperparameters of DINEOF, and find the size of the segments to process such large images. Validity of the findings 3.4 — The validation of the findings is a very important part is such studies. I cannot find a distinct validation section, where the results are compared either with other methods or to measurements. This is a fundamental part which makes me have serios doubts about the quality of the work. The so-called new method is divided into modules, which in principle is good, but in fact only one module refers to data-gap filling, and the rest are basic image processing. Reply: As mentioned in the previous comment, we agree that the main contribution of the approach is mainly in the last module. However, we consider that all modules are important for the sake of clarity of the whole approach. Comments for the Author 3.5 — I strongly recommend to justify better your results, refer to more previous work and resubmit the manuscript. Reply: Thanks for all your comments; we revised the whole paper and included the suggested references and more extensive experimentation and statistical validation. Reviewer 4 Basic reporting The authors present an approach to fill in missing data from satellite imagery using data-intensive computing. The approach was divided into three main modules. The idea is interesting; however, the authors must improve several details before accepting the paper. 6 4.1 — The authors must improve the approach implementation explanation. There are gaps in how it was implemented. As explained, it seems that they only divided the data into smaller sets to be processed. Reply: The proposed approach includes automatic download of satellite data, preprocessing, and the application of parallel computing to fill in missing data. The last step was addressed using parallel computing in order to be able to process large images. Although it seems a simple idea, some challenges appear, including the selection of hyperparameters for DINEOF and the size of the segments to avoid memory overflow. In that sense, we revised the whole paper and clarified the methodology, experimentation, and statistical validation. 4.2 — Also, several plots must be explained in another way. For example, in figure 5, the comparison is not understood. Reply: The explanation of all figures was improved, and the previous Figure 5 was deleted because it was considered unnecessary to explain the point of the paper. 4.3 — The authors must justify parameters used in the implementation (for example 3 x 3 window) Reply: The sensibility of the preprocessing satellite data module to the size of the sliding window was studied using nine different square windows: m×m windows with m = {3, 5, 7, 9, 11, 15, 21, 31 and 51}; please review the experimental results section. The process to select the parameters at the different modules was added to the paper, and new experimentation allowed to better select the different values. The explanation of this part was added to the paper. 4.4 — Table 1 is missing Reply: The problem with missing Table 1 was fixed, and its explanation was improved. Experimental design 4.5 — The authors present interesting experiments and results. 
However, they must find a metric to validate the results; a visual comparison is not enough. This metric will allow comparing the proposed approach with other methods. Reply: The authors appreciate your suggestion. The RMSE was employed for comparison along with the experiments. 4.6 — Also, they can report processing time a compare it with the DINEOF algorithm processing time. Reply: The processing time was added to the comparison, and the limits of DINEOF with large images were found using different segment sizes. 7 Comments for the Author 4.7 — The idea is a novelty; however, it needs to be clarified to understand the real impact and originality. Reply: The method was clarified all along with the paper. In particular, Section “A COMPUTER INTENSIVE APPROACH TO FILL IN MISSING DATA” was dedicated to clarifying the whole process. We hope the revised version fulfills the expectations of the editorial team and look forward to hearing from you in due course. Sincerely, Dr. Himer Avila-George Professor Department of Computer Science and Engineering Universidad de Guadalajara, México E-mail: himer.avila@academicos.udg.mx On behalf of all authors. 8 "
Here is a paper. Please give your review comments after reading it.
423
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>This paper proposes an approach to fill in missing data from satellite images using dataintensive computing platforms. The proposed approach merges satellite imagery from diverse sources to reduce the impact of the holes in images that result from acquisition conditions: occlusion, the satellite trajectory, sunlight, among others. The amount of computation effort derived from the use of large high-resolution images is addressed by data-intensive computing techniques that assume an underlying cluster architecture. As a start, satellite data from the region of study are automatically downloaded; then, data from different sensors are corrected and merged to obtain an orthomosaic; finally, the orthomosaic is split into user-defined segments to fill in missing data, and then filled segments are assembled to produce an orthomosaic with a reduced amount of missing data. As a proof of concept, the proposed data-intensive approach was implemented to study the concentration of chlorophyll at the Mexican oceans by merging data from MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1 sensors. Results reveal that the proposed approach produces results that are similar to state-of-the-art approaches to estimate chlorophyll concentration but avoid memory overflow with large images. Visual and statistical comparison of the resulting images reveals that the proposed approach provides a more accurate estimation of chlorophyll concentration when compared to the mean of pixels method alone.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Since the first satellite photographs in the 1940s, followed by missions that include Landsat and Suomi NPP from NASA, and Sentinel from ESA, just to mention three of the most popular, satellite imagery has been improved to the point of becoming daily-use information. Moreover, together with the increase of use, challenges have been emerged, presenting an increasing demand for computational resources and algorithms. Nowadays, images of Earth are commonly used to study the atmosphere, land, and oceans, presenting an increasing use in daily life activities. Applications of satellite imagery range from weather forecasting <ns0:ref type='bibr' target='#b26'>(Sato et al., 2021)</ns0:ref>, monitoring natural disasters <ns0:ref type='bibr' target='#b25'>(Said et al., 2019)</ns0:ref>, survey phytoplankton size structure impacts as an ecological indicator for the state of marine ecosystems <ns0:ref type='bibr' target='#b10'>(Gittings et al., 2019)</ns0:ref>, among many others. Presently, various sensors provide different temporal, spatial, and spectral resolutions to study oceans' evolution. The Moderate-Resolution Imaging Spectroradiometer <ns0:ref type='bibr'>(MODIS)</ns0:ref> and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors are well known in the community, mainly PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science because of data's continuous and free availability. Data from these sensors have been available since 1999 and are still in operation <ns0:ref type='bibr' target='#b13'>(Hu et al., 2010)</ns0:ref>. The MODIS sensors are in orbit aboard the Terra and Aqua satellites <ns0:ref type='bibr'>(NASA, 2020)</ns0:ref>. 
On the other hand, the VIIRS sensors are aboard the Suomi National Polar-Orbiting Partnership (SNPP) and the Joint Polar Satellite System (JPSS-1) <ns0:ref type='bibr' target='#b18'>(Kramer, Herbert J., 2020)</ns0:ref>. Both MODIS and VIIRS sensors are provided with a set of bands commonly employed to study the oceans <ns0:ref type='bibr' target='#b7'>(Datla et al., 2016)</ns0:ref>. Regardless of the application, processing data from satellites exhibit various challenges that are difficult for the analysis related to physical phenomena <ns0:ref type='bibr' target='#b24'>(Rodriguez-Ramirez et al., 2019)</ns0:ref>. Moreover, one of the most representative issues emerge from acquisition conditions that produce incomplete data from the whole scene, either caused by occlusion (e.g. clouds) or the trajectory of the satellite at the acquisition moment <ns0:ref type='bibr' target='#b30'>(Zhang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>As reported by the scientific community, some approaches to fill in missing data employ machine learning techniques to merge information from multiple sources within the same set of sensors <ns0:ref type='bibr' target='#b30'>(Zhang et al., 2018)</ns0:ref>. Other approaches use regression models to repair single spatial satellite images, presenting a tradeoff between accuracy and computational effort <ns0:ref type='bibr' target='#b29'>(Zhang et al., 2015)</ns0:ref>. Furthermore, one of the most widely employed methods in oceanography to fill in missing data is DINEOF (Data INterpolating Empirical Orthogonal Function) <ns0:ref type='bibr' target='#b19'>(Liu and Wang, 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Liu and Wang, 2019)</ns0:ref>. Examples of the use of DINEOF to study chlorophyll are disseminated in literature. For instance, <ns0:ref type='bibr' target='#b15'>Jayaram et al. (2018)</ns0:ref> implemented the interpolation functions of the orthogonal data to restore the levels of chlorophyll-a (Chl-a) at the Arabic sea between 2000 and 2015, using the MODIS sensor. <ns0:ref type='bibr' target='#b14'>Jayaram et al. (2021)</ns0:ref> compute the chlorophyll concentration using DINEOF to fill in gaps produced by clouds, using the Ocean Colour Monitor-2 (OCM-2) onboard Oceansat-2 satellite for the period 2016-2019 over the northern Indian Ocean. On the other hand, <ns0:ref type='bibr' target='#b0'>Alvera-Azc&#225;rate et al. (2011)</ns0:ref> implemented the data restoration with DINEOF using the time series with a single variable (monovariate), and several variables (multivariate approach).</ns0:p><ns0:p>More recently, Alvera-Azc&#225;rate et al. ( <ns0:ref type='formula'>2021</ns0:ref>) reported a Suspended Particulate Matter reconstruction combining Sentinel-2 and Sentinel-3 imagery using DINEOF. The advantage of such a combination allows us to retain both the high spatial resolution of the Sentinel-2 data while increasing the temporal resolution from Sentinel-3 data. DINEOF was also employed by <ns0:ref type='bibr' target='#b4'>Bouchra et al. (2011)</ns0:ref> to restore the total suspended matter between Belgium and United Kingdom coasts, using MODIS data acquired between 2003 and 2006. Restored data were compared against the measurements in-situ of total suspended matter collected by the Cefas (Centre for Environment Fisheries and Aquatic Sciences); for factor calibration, a linear regression model was employed, considering the highest observed measurements as the reference values. 
Additionally, during the atmospheric correction, MODIS data pixels were labeled according to the quality of the restoration: those pixels within a 5 &#215; 5 window that present inconsistencies over the Cefas time series were labeled as doubedly or low quality. Finally, DINEOF was used to compute missing data, and atypical values were assessed using spatial coherence.</ns0:p><ns0:p>Despite the approach employed to fill in missing data, and the study region, the challenges remain. Furthermore, improvements in computational efficiency and accuracy are still required to produce reliable studies. Indeed, the high computational cost required to analyze multi-temporal and multi-resolution data provided by satellite platforms is far from being solved <ns0:ref type='bibr' target='#b2'>(Babbar and Rathee, 2019)</ns0:ref>. In particular, DINEOF is based on empirical orthogonal functions (EOF) to reconstruct missing data in a set of geophysical data through the calculus of the dominant modes of variability within satellite data <ns0:ref type='bibr' target='#b3'>(Beckers and Rixen, 2003)</ns0:ref>. The DINEOF's amount of computation increases with the size of the input images and may be impractical with a large number of high-resolution images. Therefore, DINEOF is usually used to process images with a low spatial resolution of small geographical areas <ns0:ref type='bibr' target='#b8'>(GHER, 2020)</ns0:ref>. This research paper proposes a novel approach for automated hole filling in satellite imagery. The first novelty of the proposed approach compared to previous works is that it uses data from different sensors, while previous works used data from the same set of sensors. Another novelty in our proposal is the way the filling of missing data was performed, which is carried out by chaining three different strategies: (1) The first data with which the gaps in the images are filled comes from the fusion of four data sources (MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1); (2) The next step consists of estimating the missing pixels close to those obtained in the previous step, for which the nearest neighbor approach using multivariate interpolation is employed; and (3) Empirical orthogonal functions are used to fill in the last missing data. Finally, the proposed approach uses an intensive computing strategy to avoid memory overflow when processing high-resolution images. For proof of concept, the detection of chlorophyll over the exclusive economic zone of Mexico (EEZM) is analyzed.</ns0:p></ns0:div> <ns0:div><ns0:head>2/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>A COMPUTER-INTENSIVE APPROACH TO FILL IN MISSING DATA</ns0:head><ns0:p>The proposed data-intensive computing approach to fill in the missing geophysical data from satellite imagery comprises three main conceptual modules: (1) Automatic satellite data download, (2) Satellite data merging, and (3) Filling in missing satellite data using an intensive computer approach. Each module considers the output of the previous one, and their operation is detailed in the sections below. The whole process is depicted in Fig. <ns0:ref type='figure'>1</ns0:ref>. Proposed approach for filling in missing satellite data. First, L2 data from the region of interest (ROI) are downloaded. 
Satellite data are then merged using a 2-step strategy, estimating missing pixels as the average of at least three neighbors in same-day images from different sensors. Finally, missing data is filled in using a data-intensive approach that takes advantage of segmented ROIs and DINEOF</ns0:p></ns0:div> <ns0:div><ns0:head>Automatic satellite data download</ns0:head><ns0:p>The automatic satellite data download module is designed to continuously survey changes in the satellite repository and retrieve the most recent satellite imagery from the region of interest (ROI). Without loss of generality, it is assumed that data are retrieved from the OBPG-Ocean color data repository, but other platforms may be configured with the same behavior. The three steps established in this module are listed below.</ns0:p><ns0:p>1. The first step consists in querying the repository to retrieve the schedule of both sensors (MODIS and VIIRS): the time when they passed over the zone of study.</ns0:p><ns0:p>2. In the second step, the schedule information is processed to extract the precise hour when the satellite acquired the region of interest (ROI).</ns0:p><ns0:p>3. Finally, the links to the levels L2 products are built in the third step, and the download process starts. The resulting L2 products are stored in a user-specific path that is accessed by the other two modules.</ns0:p><ns0:p>As a result, the module for satellite data download retrieves the high-resolution images from the configured sensors, corresponding to the ROI at a determined date.</ns0:p></ns0:div> <ns0:div><ns0:head>Satellite data merging</ns0:head><ns0:p>Every time L2 satellite data corresponding to the ROI are downloaded, a new daily high-resolution image is created by merging data from the selected sensors according to the application. The procedure to create the combined image involves the two steps described in subsections below (preprocessing satellite data and merging preprocessed data).</ns0:p></ns0:div> <ns0:div><ns0:head>3/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Preprocessing satellite data</ns0:head><ns0:p>The first novelty of the proposed approach is that data from different sensors are used when performing satellite data fusion. Preprocessing data consists of creating the orthomosaics for each daily scene: one for each sensor (I 1 , I 2 , &#8226; &#8226; &#8226; I L ). Operations like spatial resampling or scaling are required in some cases to prepare the raw data to assemble a single orthomosaic for each of the L sensors. Subsequently, each orthomosaic is processed to fill missing data during the merging phase; a m &#215; m sliding window is applied to the p empty pixels in the image that accomplish with the criteria of having at least three neighbors (i.e., three pixels with data). Such a criterion was established to avoid simple information duplicity of close pixels. The new value p x of an empty pixel is computed using Eq. 1.</ns0:p><ns0:formula xml:id='formula_0'>p x = &#8721; n i=1 p i n ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where p x is the missing data pixel, p i is one of the n neighbor pixels with data within the m &#215; m sliding window, considering n &#8805; 3.</ns0:p></ns0:div> <ns0:div><ns0:head>Merging preprocessed data</ns0:head><ns0:p>In this step, the preprocessed orthomosaics are merged. 
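Before detailing the merge, a minimal sketch of the neighbor-averaging rule in Eq. 1 above may be useful; it assumes that missing pixels are encoded as NaN in a NumPy array and is illustrative only, not the authors' exact code:

import numpy as np

def sliding_window_fill(img, m=7, min_neighbors=3):
    # Fill each NaN pixel with the mean of the valid pixels inside an m x m
    # window, but only when at least min_neighbors pixels carry data (Eq. 1).
    h = m // 2
    padded = np.pad(img, h, mode='constant', constant_values=np.nan)
    out = img.copy()
    for y, x in zip(*np.where(np.isnan(img))):   # only the empty pixels p_x
        window = padded[y:y + m, x:x + m]        # m x m neighborhood of (y, x)
        valid = window[~np.isnan(window)]
        if valid.size >= min_neighbors:          # the n >= 3 criterion
            out[y, x] = valid.mean()             # p_x = (sum of p_i) / n
    return out

With m = 7 this corresponds to the 7 × 7 window that gave the best trade-off between RMSE and data coverage in the experiments reported later in the paper.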
In order to obtain the combined image at a selected date (day), the orthomosaic with most data related to chlorophyll is first chosen and tagged as base-image (I b ). Then, it is necessary to define the order of the processing of each orthomosaic. The ordering criteria considers the root-mean-square error (RMSE) between the base-image and each of the remaining orthomosaics (I r ), assigning higher priority to the orthomosaics with lower RMSE, see Eq. 2.</ns0:p><ns0:formula xml:id='formula_1'>RMSE(I r ) = 1 N 3 &#8721; r=1 (I b &#8722; I r ) 2 (2)</ns0:formula><ns0:p>where I b is the base-image, I r corresponds to each of the other images, and N is the number of valid pixels in both images (i.e., I b and I r ).</ns0:p><ns0:p>Once the priority is established, data from the four images are combined considering I b as the baseline and following the order of priority given by the RMSE: each missing pixel with coordinates (x, y) in I b is substituted with the pixel from I r with the highest priority on the same position. If none of the I r images contains data, it is considered as a missing pixel. Finally, an adjustment is applied to reduce the impact of differences in acquisition conditions from each sensor, such as different acquisition times and the zone dynamics (currents and winds). Such an adjustment between images I b and I r was applied using the Inverse Distance Weighting (IDW) to the four nearest pixels in directions (&#8722;x, x, &#8722;y, and y). In essence, the resulting high-resolution image I M produced by merging the sensor-wise orthomosaics {I 1 , I 2 , I 3 , I 4 } incorporate the information from all sensors, and hence, includes fewer gaps than any of the individual orthomosaics.</ns0:p></ns0:div> <ns0:div><ns0:head>Filling in missing satellite data</ns0:head><ns0:p>As a second contribution, the proposed approach is able to process high-resolution images of large study areas. After preprocessing, the merged orthomosaic I M still remains with gaps, and DINEOF is employed to compute and fill in the gaps. In order to address this problem with high-resolution images from wide areas of study, the data-intensive approach is divided into the following three steps: (1) data segmentation,</ns0:p><ns0:p>(2) fill in missing data for each segment, and (3) assemble the segments. The strategy to fill in missing data is shown in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, and each step is detailed in sections below.</ns0:p></ns0:div> <ns0:div><ns0:head>Data segmentation</ns0:head><ns0:p>The merged orthomosaic I M comprises the whole ROI to be monitored, which may be computationally unmanageable, depending on the area of study and the computer to process DINEOF. Thus, I M is evenly divided into J &#215; K = NS smaller manageable size segments {I S i, j }: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>I M = &#63726; &#63727; &#63727; &#63727; &#63728; I S 1,1 I S 1,2 &#8226; &#8226; &#8226; I S 1,K I S 2,1 I S 2,2 &#8226; &#8226; &#8226; I S 2,K . . . . . . . . . . . . I S J,1 I S J,2 &#8226; &#8226; &#8226; I S J,K &#63737; &#63738; &#63738; &#63738; &#63739;<ns0:label>(</ns0:label></ns0:formula><ns0:p>Computer Science The user-defined values for J and K should be selected according to the computational resources available to execute DINEOF, and indirectly define the size of the segments {I S j,k }. Inspired by binary search, the orthomosaic may be evenly divided into 2 &#215; 2, 4 &#215; 4, 8 &#215; 8, and so on. 
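A minimal sketch of this doubling strategy is shown below; segment_fits_in_memory is a hypothetical predicate (for instance, a trial run of DINEOF on a single segment), so the sketch is an assumption of this edit rather than the authors' implementation:

def choose_divisions(height, width, segment_fits_in_memory, max_level=6):
    # Double the number of divisions (2 x 2, 4 x 4, 8 x 8, ...) until a single
    # segment can be processed by DINEOF with the available memory.
    for level in range(1, max_level + 1):
        j = k = 2 ** level                        # 2x2, 4x4, 8x8, ...
        seg_h, seg_w = height // j, width // k    # approximate segment size
        if segment_fits_in_memory(seg_h, seg_w):
            return j, k
    raise RuntimeError('no feasible segmentation found up to the tested levels')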
As soon as the computer system is able to process the images, the divisions are fixed and the monitoring process configured.</ns0:p></ns0:div> <ns0:div><ns0:head>Fill in missing data for each segment</ns0:head><ns0:p>Filling in missing data is a parallel process that is independently applied to all segments in which the merged image was divided (see Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>). Using a massive processing configuration (e.g. a computer cluster or a multiprocessor computer) is advantageous to accelerate the complete process. In this step, each segment I S j,k is filled in, and the resulting filled segments I F j,k are stored for posterior processing.</ns0:p></ns0:div> <ns0:div><ns0:head>Assemble segments</ns0:head><ns0:p>At the final module, the resulting I F j,k segments are assembled in the same order that was divided I M , to obtain a new I F orthomosaic without holes.</ns0:p><ns0:formula xml:id='formula_3'>I F = &#63726; &#63727; &#63727; &#63727; &#63728; I F 1,1 I F 1,2 &#8226; &#8226; &#8226; I F 1,K I F 2,1 I F 2,2 &#8226; &#8226; &#8226; I F 2,K . . . . . . . . . . . . I F J,1 I F J,2 &#8226; &#8226; &#8226; I F J,K &#63737; &#63738; &#63738; &#63738; &#63739;<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>A distinct but equivalent way to define the number of segments is to establish the size of each segment I F j,k , assuming all segments are the same size. The size of I F j,k corresponds to a 2-element tuple (width, height) that define the number of pixels per side, considering the ratio between width and height of the segment to be the same of the ratio between the width and the height of the orthomosaic I M :</ns0:p><ns0:formula xml:id='formula_4'>width(I F j,k ) height(I F j,k ) = width(I M )</ns0:formula><ns0:p>height(I M ) .</ns0:p></ns0:div> <ns0:div><ns0:head>Computational complexity analysis</ns0:head><ns0:p>The application of DINEOF to a sequence of T orthomosaics requires to assembly a L &#215; T matrix, with L = width &#215; height representing the number of pixels in I M . After that, the resulting matrix is standardized, and the optimal number of empirical orthogonal functions (EOFs) are by the convergence of a validation process that depends on Singular Value Decomposition (SVD) computation. The computation of SVD Manuscript to be reviewed</ns0:p><ns0:p>Computer Science is in the order O(LT 2 ), and the validation process depends on the maximum number of iterations (Q)</ns0:p><ns0:p>employed to find the optimal number of EOFs. Thus, the whole computation of DINEOF for a sequence of T orthomosaics is in the order O(QLT 2 ), with typical values of L &#8811; T and L &#8811; Q: the number of pixels usually greatly exceeds the time frame T , as well as the iterations Q. Consequently, a significant reduction in the number of pixels L per segment I S j,k causes a consequent reduction in the total number of operations.</ns0:p></ns0:div> <ns0:div><ns0:head>STUDY CASE: CHLOROPHYLL ON THE EEZM</ns0:head><ns0:p>The study case used for proof of concept was designed to monitor the Chl-a over a wide sea area: EEZM.</ns0:p><ns0:p>Data from the MODIS and VIIRS sensors were combined to obtain L2 products with the least amount of missing data. The importance of monitoring the Chl-a is related to the dynamics of phytoplankton, which provides the information to predict the impact of climate change in ocean ecosystems. Phytoplankton is composed of microscopic algae and other photosynthetic organisms that inhabit the surface of oceans, rivers, and lakes. 
These microorganisms constitute the primary source of energy in aquatic systems due to their photosynthetic capacity <ns0:ref type='bibr' target='#b27'>(Winder and Sommer, 2012)</ns0:ref>, and their contribution to preserving the climate balance and the biogeochemical cycle in such ecosystems <ns0:ref type='bibr' target='#b12'>(Hallegraeff, 2010)</ns0:ref>. For some decades now, the Chl-a has been widely used to estimate phytoplankton's biomass in surface water using satellite-based methods <ns0:ref type='bibr' target='#b11'>(Gomes et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kramer and Siegel, 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>O'Reilly et al., 1998)</ns0:ref>. Such usage is given in view of the fact that the Chl-a is the main photosynthetic pigment of phytoplankton. In fact, Chl-a is used as a photoreceptor and gives the green color to the phytoplankton, and various studies have settled the fundamentals of the impact of Chl-a with light reflectance of water bodies, especially in the visible light and close infrared regions of the electromagnetic spectrum <ns0:ref type='bibr' target='#b9'>(Gitelson, 1992;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dall'Olmo and Gitelson, 2005;</ns0:ref><ns0:ref type='bibr' target='#b28'>Yacobi et al., 2011)</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Study area</ns0:head><ns0:p>The area of study selected for the analysis and proof of concept corresponds to the EEZM, and covers the sea region close to the seashore (CONABIO, 2022). The distance covered by the EEZM is up to 370.4 km from the continental and insular seacoast. The surface area of the EEZM is one of the greatest in the world and is estimated to be 3 269 386 km 2 .</ns0:p><ns0:p>The complete satellite images are required to study Chl-a concentrations, e.g. images without missing data over the area of study. The size of such a huge area makes the task prohibitive for the computer facilities available for experimentation. Bands 8 to 16 from the MODIS sensors were used, corresponding to wavelengths from 405 to 877 nm and a spatial resolution of 1 km. These bands are mainly employed for Ocean Color and Phytoplankton and Biogeochemistry. On the other hand, the VIIRS sensor provides measurements from water, land, and atmosphere, with a temporal resolution of 12 hrs for day and night ocean data acquisition.</ns0:p></ns0:div> <ns0:div><ns0:head>Computational details on experiments</ns0:head><ns0:p>A Manuscript to be reviewed is not processed (e.g., land), as well as its time file that allows activating the filtering of the temporal covariance matrix.</ns0:p><ns0:note type='other'>Computer Science Table 1.</ns0:note><ns0:p>The * .gher and time files generated at segmentation are then used to execute DINEOF, employing the configuration parameters shown in Table <ns0:ref type='table'>1</ns0:ref>. The proposed algorithm rewrites such parameters in a file with the * .init extension. Afterward, the file is read by the DINEOF program, which computes the missing data for each of the segmented data series. In the fill-in missing data step, DINEOF generates a time series without holes for each segment, and it is stored in * .gher file format. Finally, the orthomosaic is reconstructed using the segmented high-resolution images without missing data. 
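As a purely illustrative sketch, the per-segment parameter file mentioned above could be regenerated as follows; the parameter names and default values follow Table 1, but the exact syntax expected by the DINEOF executable should be taken from its documentation, so this helper is an assumption rather than the authors' code:

def write_dineof_init(path, data_file, mask_file, time_file,
                      alpha=0.1, numit=3, nev=20, neini=1, ncv=35,
                      tol=1.0e-8, nitemax=300, toliter=1.0e-3,
                      rec=0, eof=0, norm=0, seed=243435):
    # Regenerate the per-segment *.init parameter file read by DINEOF. The
    # parameter names follow Table 1; the exact file syntax is an assumption
    # of this sketch and should be adapted to the DINEOF executable in use.
    params = {'data': "['{}']".format(data_file),
              'mask': "['{}']".format(mask_file),
              'time': "'{}'".format(time_file),
              'alpha': alpha, 'numit': numit, 'nev': nev, 'neini': neini,
              'ncv': ncv, 'tol': tol, 'nitemax': nitemax, 'toliter': toliter,
              'rec': rec, 'eof': eof, 'norm': norm, 'seed': seed}
    with open(path, 'w') as f:
        for name, value in params.items():
            f.write('{} = {}\n'.format(name, value))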
The resulting image is written in NetCDF format.</ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation in cloudy scenarios</ns0:head><ns0:p>For validation, a free of the holes data set was generated for the time frame from January 2017 to December 2019. The data set was composed of 36 complete high-resolution images (without missing pixels), with a spatial resolution of 1 km from the four sensors. The images were composed with the 30</ns0:p><ns0:p>Chl-a daily images from each month. As an example, Fig. <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>Satellite data merging</ns0:head><ns0:p>The sensibility of the preprocessing satellite data module to the size of the sliding window was studied using nine different square windows: m &#215; m windows with m = {3, 5, <ns0:ref type='bibr'>7, 9, 11, 15, 21, 31 and 51}.</ns0:ref> In this sensibility test, the base image employed for each sensor was composed by the sequence of images for February 2018; and missing samples were generated using the images for February the 1st, 2018, for each sensor.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> shows the RMSE, the percentage of filled data, the computation time that was employed in the nine different window sizes, and the mosaicking SNAP function. The mosaicking results are the reference for data coverage previous to the application of the sliding window. According to Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, the window sizes 5 &#215; 5 and 7 &#215; 7 presented the lowest RMSE value when compared to other window sizes. However, the latter showed a higher percentage of data coverage (28.71% against 25.52%), although the processing time increases according to the window size. In order to compare the results of the mosaicking and evidence the advantages of applying the sliding window, Fig. <ns0:ref type='figure'>4</ns0:ref> shows the results of the application of the method to high-resolution images for February the following section.</ns0:p></ns0:div> <ns0:div><ns0:head>Filling in missing satellite data</ns0:head><ns0:p>After preprocessing was applied to the whole temporal series 2018-2019, the impact of the three hyperparameters was evaluated on the proposed system: (1) alpha, (2) numit, and (3) time (see Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p><ns0:p>The values of the hyperparameters were explored through the application of DINEOF, after splitting the preprocessed orthomosaic into six independent zones (see Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>). Such a division favors the analysis of Chl-a's behavior either in coastal zones or deep sea, separated from coasts. For example, zone 4 presents deep seawater with a concentration of Chl-a that differs from the concentration in zone 5, which is closer to coasts. On the other hand, a high concentration of Chl-a can be observed close to the coasts in zones 1 to 4 and 6. In that sense, Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref> presents the six zones in which the area of study was divided for the evaluation of the DINEOF tuning parameters.</ns0:p><ns0:p>The proposed approach was evaluated at two different levels. First, at the adjustment of internal hyperparameters of DINEOF, where image segmentation was adapted to 2 &#215; 3 sub-images, and hyperparameters alpha and numit were evaluated according to ranges in Table <ns0:ref type='table'>1</ns0:ref>. 
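A minimal sketch of such a grid search is given below; it is illustrative only, with run_dineof_segment a hypothetical wrapper around the DINEOF executable, and the held-out pixels being valid pixels that were masked on purpose so that the reconstruction can be scored with the RMSE:

import numpy as np

def grid_search_alpha_numit(zone_stack, heldout_mask, alphas, numits,
                            run_dineof_segment):
    # Try every (alpha, numit) pair, reconstruct the zone, and keep the pair
    # with the lowest RMSE on pixels that were hidden on purpose.
    truth = zone_stack[heldout_mask]            # reference values
    corrupted = zone_stack.copy()
    corrupted[heldout_mask] = np.nan            # simulate missing data
    best = (None, None, np.inf)
    for alpha in alphas:
        for numit in numits:
            filled = run_dineof_segment(corrupted, alpha=alpha, numit=numit)
            rmse = np.sqrt(np.mean((filled[heldout_mask] - truth) ** 2))
            if rmse < best[2]:
                best = (alpha, numit, rmse)
    return best                                 # (best alpha, best numit, RMSE)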
The search for the more suitable hyperparameters for DINEOF was conducted by running two experimental designs: one for time=30, and another for time=60. During the adjustment process, the RMSE was estimated for the distinct possible values of alpha and numit, considering the ranges established in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>The resulting expected error obtained through the search process is shown in lower expected error, e.g., seems to be favorable in both scenarios. On the other hand, Fig. <ns0:ref type='figure' target='#fig_12'>7</ns0:ref>(g) and 7(j) exhibit the lowest expected error, regardless of the value assigned to both alpha and numit, as well as the timeframe. In the rest of the cases and regardless of the time frame, the lowest expected error is attained with alpha=0.1 and numit=3.0, and they were fixed for the application of the segmented fill in the algorithm.</ns0:p><ns0:p>With fixed hyperparameters, at the second level of evaluation, the RMSE was computed for distinct areas of the segments on the three cloudy scenarios (e.g. 20%, 30%, and 50% clouds). Four segmentation levels were considered for I M in order to parallelize the process, with image segments represented by the triplets j &#215; k &#215;t, with j and k as described in Section Filling in missing satellite data; and t representing the time in trimesters. The segmentation levels correspond to 312&#215;187&#215;12, 625&#215;374&#215;12, 1250&#215;749&#215;12, and 2500 &#215; 1498 &#215; 12. However, the computer configuration employed to run the software was not able to completely run the system with the latter configuration due to memory overflow. Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> presents the average time and RMSE that were obtained after the execution of the experimentation with the aforementioned segment sizes. According to Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, the computation of the missing data showed a better performance when the segments size and the amount of data to estimate were rather small compared to the size of I M . In fact, the lowest RMSE was attained when I M was divided into 8 &#215; 8 segments in the three cloud scenarios. In the hardest scenario, with 50% of missing data due to clouds, the proposed approach achieved an RMSE of 0.43, which was lower than all other feasible cases. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div> <ns0:div><ns0:head>Computational complexity analysis</ns0:head><ns0:p>As an example that follows the case study, processing the sequences with the original size (2500 &#215; 1498 &#215; 12) requires as many operations as O(300 &#215; 3, 745, 000 &#215; 12 2 ). On the other hand, by applying the proposed segmentation strategy, the task is divided into sixteen sequences of 312 &#215; 187 &#215; 12, which require as many operations as O(300 &#215; 58, 344 &#215; 12 2 ). Following this example, Figure <ns0:ref type='figure' target='#fig_14'>9</ns0:ref> presents the number of operations expected for each segment size, showing the effect of the split of the whole image into segments. Figure <ns0:ref type='figure' target='#fig_14'>9</ns0:ref>, it is evident the direct relationship between the size of the segments and the number of operations required to complete the segmented DINEOF: the smaller the segments, the fewer operations are required. However, it has to be pointed out that sensitivity analysis is important to verify the performance. 
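The operation counts quoted above can be reproduced with a short back-of-the-envelope calculation, assuming Q = 300 validation iterations and T = 12 time steps as in the text:

# Back-of-the-envelope O(Q * L * T^2) operation counts per segment size.
Q, T = 300, 12                        # validation iterations and time steps
segment_sizes = [(2500, 1498), (1250, 749), (625, 374), (312, 187)]
for height, width in segment_sizes:
    L = height * width                # pixels per segment
    print('{} x {}: ~{:.2e} operations per segment'.format(height, width,
                                                           Q * L * T ** 2))

Because the cost is linear in L, the per-segment effort drops in proportion to the segment area, which is what allows the blocks to be processed in parallel without memory overflow.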
As previously shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, for the experimental settings analyzed in this paper, the best segment size found during experimentation was 625 &#215; 374 &#215; 12. This analysis should be repeated for distinct scenarios, according to the particular requirements of the context.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>This research paper introduced a new efficient approach to filling in missing high-resolution data from different satellite sensors. The proposed approach consists of three general steps: (1) Automatic satellite data download, (2) Satellite data fusion, and (3) Filling in missing satellite data using a computationally intensive approach. The first novelty of the proposed approach is that data from different sensors are used when performing satellite data fusion. As a second contribution, an approach is introduced that is able to process high-resolution images of large study areas. The proposed approach divides the orthomosaic into segments that DINEOF can process; the segments can be processed in parallel using a computer cluster; next, the orthomosaic is reconstructed without missing data. Finally, an analysis of the computational effort of the proposed approach was performed, and it was found to be of quadratic order.</ns0:p><ns0:p>As proof of concept, the proposed approach was applied to fill in the holes in satellite imagery, using data from the MODIS sensors aboard the Terra and Aqua satellite platforms and the VIIRS sensors aboard the SNPP and JPSS-1 satellites. The multi-sensor fusion approach implements geospatial techniques such as sliding averages and inverse distance weighting with the squared distance, together with adjustments for the differences in the time at which each platform overflies the zone of study. Results showed that the proposed approach overcomes the traditional averaging strategy: the traditional average strategy produces zones in which the values of Chl-a are overestimated, whereas the proposed approach preserves the oceanic structures required to study the ocean dynamics related to the currents and winds.</ns0:p><ns0:p>Although the DINEOF method is widely employed in several remote sensing studies to fill in missing data, the direct processing of high-resolution images with DINEOF results in a high computational cost in terms of memory and computing power. For that reason, the proposed approach divides the merged image, and each segment is processed separately using DINEOF. Finally, the processed segments are assembled without missing data.</ns0:p><ns0:p>As future work, the proposed approach may be applied to distinct scenarios that may provide further evidence of the efficiency of the proposed data-intensive approach. Among the application areas, one of the most relevant may be satellite-guided fishing through phytoplankton monitoring. In fact, phytoplankton constitutes the basic nourishment for small fishes, crustaceans, and other sea life forms that are the base food for larger fishes and sea mammals. Although the data-intensive approach was evaluated on sea monitoring, land applications may also benefit from its efficiency. Additionally, with these results, it may be interesting to combine the proposed algorithm with applications that incorporate artificial intelligence (e.g., forecasting).
Finally, incorporating the algorithm in open libraries may favor the comparison with future proposals in this research area.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure1. Proposed approach for filling in missing satellite data. First, L2 data from the region of interest (ROI) are downloaded. Satellite data are then merged using a 2-step strategy, estimating missing pixels as the average of at least three neighbors in same-day images from different sensors. Finally, missing data is filled in using a data-intensive approach that takes advantage of segmented ROIs and DINEOF</ns0:figDesc><ns0:graphic coords='4,183.09,253.05,55.34,53.87' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Filling in missing satellite data takes advantage of parallel processing to independently process previously divided segments and assemble results in a single orthomosaic</ns0:figDesc><ns0:graphic coords='6,245.13,63.78,206.78,216.37' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>multiprocessor computer with distributed memory was used to run the experiments. The so called Perseo computer, is part of the computing network of the Centro Nayarita de Innovaci&#243;n y Transferencia de Tecnolog&#237;a A.C., M&#233;xico. The Perseo cluster is provided with 388 processing cores, 1,280 GB Ram, 356 TB permanent storage, and runs the CentOS 7.0 operating system. The proposed approach was implemented using a combination of scripts written in Python and Matlab 2018. In particular, the automated image download module written in Python, and the whole processing code written in Matlab are freely available to download through GitHUB: https://github.com/jroberto37/ fill_missing_data.git. The download script takes advantage of the geolocation products that include MOD03, MYD03, VNP03MODLL, and VJ103DNB, for sensors MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1 respectively.For preprocessing, the Graph Processing Tool (GPT) from the Sentinel Application Platform (SNAP) was used to create the orthomosaics and project the sine wave system's data to the WGS-84. Segmentation was performed over the merged high-resolution image to speed up the process of filling chlorophyll concentration data, following the NetCDF format. The maximum area of the segments is defined in the system configuration parameters and automatically establishes the number of segments in which the image is divided. The * .gher binary files are then generated with their respective mask of the zone that6/15 PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>number of modes you allow to compute 20 neini The minimum number of modes you want to compute 1 ncv The maximal size for the Krylov subspace 35 tol The threshold for Lanczos convergence 1.0e-8 nitemax The maximum number of iteration allowed for the stabilization of eofs obtained by the cycle 300 toliter Precision criteria defining the threshold of automatic stopping of DINEOF iterations 1.0e-3 rec For complete reconstruction of the matrix 0 eof Writing the left and right modes of the input matrix 0 norm Activate the normalization of the input matrix 0 seed Seed to initialize the random number generator 243435</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a) shows that the Chl-a composed image corresponding to January 2018 does not present black or white regions, corresponding to zones with missing data. Three scenarios were prepared to test the system under occlusion conditions by adding different levels of synthetic clouds to the composed images. The three levels of missing data were arbitrarily selected to represent different typical scenarios that are common in real data. Figs.3(b), 3(c), and 3(d) show the composed image corrupted with the synthetic cloud masks, covering 20%, 30%, and 50% respectively. The generation of synthetic masks was based on real cloud images from the same scene at different dates (e.g. climate conditions), and the percentage of clouds was computed based on pixel counts. Regarding the cloud coverage in Fig. 3(b), a few clouds scarcely cover different regions of the sea, shaping natural clouds. Increasingly dense clouds are shown in Figs. 3(c), and 3(d), according to the corresponding percentage of the cloud masks.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Maps of cloud masks in Mexico economic exclusive zone (EEZM). (a) Chl-a image composed by 30 scenes (January 2018), (b) Mask with 20% clouds, (c) Mask with 30% clouds and (d) Mask with 50% clouds</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_7'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>st , 2018. The four images in Fig 4 correspond to the central region of the Gulf of Mexico, with coordinates North = 30.40 o , South = 18.60 o , East = &#8722;83.30 o and West = &#8722;94.90 o . Fig. 4(a) shows the original image from sensor VIIRS-JPSS-1. Fig. 4(b) shows the spatial distribution of the pixels from both methods and the pixels that were filled with the proposed approach. Fig. 4(c) shows the results of the mosaicking function from the SNAP software; and Fig. 4(d) shows the results obtained with the window size 7 &#215; 7. A visual comparison of Fig. 4(d) and Fig.4(a) evidences the advantage of using the sliding window to reduce the amount of missing data, even when compared to commercial software (Fig.4(c)). Images from the four sensors were complemented with different percentages of missing data: 67.72% for MODIS-AQUA, 65.93% for MODIS-TERRA, 58.55% for VIIRS-SNPP, and 58.29% for VIIRS-JPSS-1.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_8'><ns0:head>Fig. 5 Figure 4 .</ns0:head><ns0:label>54</ns0:label><ns0:figDesc>Fig. 5 shows the orthomosaics obtained for each sensor after preprocessing downloaded samples and applying the 7 &#215; 7 sliding window. 
Due to the differences in trajectories and climate conditions at the overflight time, all orthomosaics present quite different areas of missing samples. For example, Fig. 5(a) and Fig 5(b) corresponding to MODIS Aqua and MODIS Terra respectively, have a band of missing data at the center of the image, but with distinct orientations. On the other hand, Fig. 5(c) and Fig 5(d) that correspond to VIIRS JPSS-1 and SNPP, do not present clear missing data patterns. Such differences favor the exploitation of the different sources to obtain a more complete resulting orthomosaic I M . Once the four orthomosaics were generated, data from the VIIRS-JPSS-1 sensor was selected as the base image in the merging preprocessed data module. The orthomosaic from the VIIRS-JPSS-1 sensor was chosen as the base image (I b ) because it presents a lower percentage of missing data than the other sensors. Then, the final merged I M is processed with DINEOF with the orthomosaics from previous days, as described in</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_9'><ns0:head>Fig. 7 .Figure 5 .</ns0:head><ns0:label>75</ns0:label><ns0:figDesc>Figure 5. Maps of orthomosaics in Mexico economic exclusive zone (January 1st 2019). (a) MODIS Aqua orthomosaic, (b) MODIS Terra orthomosaic , (c) VIRSS JPSS-1 orthomosaic and (d) VIRSS SNPP orthomosaic.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>) 0.44 (0.16) Finally, Fig. 8(a) presents the merged image I M , created with the data acquired by the MODIS and VIIRs sensors on January 1st, 2019. On the other hand, Fig. 8(b) presents the final result of the proposed 10/15 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Segmentation of the area of study, performed automatically by the proposed approach</ns0:figDesc><ns0:graphic coords='12,183.09,63.78,330.87,209.22' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. RMSE corresponding to the application of DINEOF for different values of alpha and numit, at distinct zones and time frames; the colorbar represents the RMSE. (a) Zone 1 / time=30, (b) Zone 2 / time=30, (c) Zone 3 / time=30, (d) Zone 1 / time=60, (e) Zone 2 / time=60, (f) Zone 3 / time=60, (g) Zone 4 / time=30, (h) Zone 5 / time=30, (i) Zone 6 / time=30, (j) Zone 4 / time=60, (k) Zone 5 / time=60 and (l) Zone 6 / time=60.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Results of the process of filling in missing data in satellite images. (a) Merged image (I M ). (b) Final Image without missing data (I F ).</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Number of operations expected as the segments are reduced, or equivalently, the number of segments are augmented.</ns0:figDesc></ns0:figure> <ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Results of the preprocessing module in terms of RMSE, percentage of data coverage in the resulting orthomosaic, and the preprocessing time in seconds. 
Bold numbers symbolize the best results when distinct window sizes are compared</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Mosaicking</ns0:cell><ns0:cell>3 &#215; 3</ns0:cell><ns0:cell>5 &#215; 5</ns0:cell><ns0:cell>7 &#215; 7</ns0:cell><ns0:cell>9 &#215; 9</ns0:cell><ns0:cell>11 &#215; 11</ns0:cell><ns0:cell>15 &#215; 15</ns0:cell><ns0:cell>21 &#215; 21</ns0:cell><ns0:cell>31 &#215; 31</ns0:cell><ns0:cell>51 &#215; 51</ns0:cell></ns0:row><ns0:row><ns0:cell>RMSE</ns0:cell><ns0:cell>0.429</ns0:cell><ns0:cell>0.419</ns0:cell><ns0:cell>0.408</ns0:cell><ns0:cell>0.408</ns0:cell><ns0:cell>0.421</ns0:cell><ns0:cell>0.437</ns0:cell><ns0:cell>0.471</ns0:cell><ns0:cell>0.513</ns0:cell><ns0:cell>0.582</ns0:cell><ns0:cell>0.674</ns0:cell></ns0:row><ns0:row><ns0:cell>Data coverage</ns0:cell><ns0:cell>17.35%</ns0:cell><ns0:cell>20.98%</ns0:cell><ns0:cell>25.53%</ns0:cell><ns0:cell>28.72%</ns0:cell><ns0:cell>30.57%</ns0:cell><ns0:cell>32.31%</ns0:cell><ns0:cell>35.01%</ns0:cell><ns0:cell>37.96%</ns0:cell><ns0:cell>41.37%</ns0:cell><ns0:cell>45.56%</ns0:cell></ns0:row><ns0:row><ns0:cell>Time (s)</ns0:cell><ns0:cell>7.753</ns0:cell><ns0:cell>8.748</ns0:cell><ns0:cell>13.402</ns0:cell><ns0:cell>17.583</ns0:cell><ns0:cell>19.314</ns0:cell><ns0:cell>21.640</ns0:cell><ns0:cell>28.685</ns0:cell><ns0:cell>40.228</ns0:cell><ns0:cell>60.736</ns0:cell><ns0:cell>138.663</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Average time and RMSE obtained after the application of the proposed approach with distinct segment sizes. Bold numbers symbolize the lowest RMSE</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Segment size</ns0:cell><ns0:cell cols='2'>312 &#215; 187 &#215; 12</ns0:cell><ns0:cell cols='2'>625 &#215; 374 &#215; 12</ns0:cell><ns0:cell cols='2'>1250 &#215; 749 &#215; 12</ns0:cell></ns0:row><ns0:row><ns0:cell>Cloud test</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell><ns0:cell>Time (&#963; )</ns0:cell><ns0:cell>RMSE (&#963; )</ns0:cell></ns0:row><ns0:row><ns0:cell>20%</ns0:cell><ns0:cell>7.52 (4.59)</ns0:cell><ns0:cell cols='5'>0.45 (0.11) 43.41 (28.82) 0.36 (0.08) 187.17 (111.09) 0.37 (0.08)</ns0:cell></ns0:row><ns0:row><ns0:cell>30%</ns0:cell><ns0:cell cols='4'>10.42 (27.84) 0.47 (0.12) 43.21 (27.76) 0.35 (0.07)</ns0:cell><ns0:cell>175.84 (90.69)</ns0:cell><ns0:cell>0.36 (0.06)</ns0:cell></ns0:row><ns0:row><ns0:cell>50%</ns0:cell><ns0:cell cols='5'>13.86 (31.77) 0.52 (0.16) 58.36 (42.02) 0.43 (0.17) 186.72 (130.17</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:note place='foot' n='15'>/15 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54271:2:0:NEW 17 Apr 2022) Manuscript to be reviewed Computer Science</ns0:note> </ns0:body> "
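To complement Table 2, the windowed neighbour-averaging step used during sensor fusion can be sketched in a few lines of Python/NumPy. This is not the authors' implementation (theirs is written in Matlab and released in their repository); it is a minimal, hedged illustration of the idea described above: a missing pixel of the base orthomosaic is filled with the average of at least three valid same-day pixels taken from another sensor inside a w × w window (for instance, the 7 × 7 window that performed well in Table 2). The array names and the minimum-neighbour threshold are illustrative assumptions.

import numpy as np

def fill_from_other_sensor(base, other, window=7, min_neighbors=3):
    # Fill NaN pixels of `base` with the mean of the valid pixels of `other`
    # found inside a window x window neighbourhood, if enough neighbours exist.
    half = window // 2
    filled = base.copy()
    rows, cols = base.shape
    for i, j in zip(*np.where(np.isnan(base))):   # visit only missing pixels
        patch = other[max(0, i - half):min(rows, i + half + 1),
                      max(0, j - half):min(cols, j + half + 1)]
        valid = patch[~np.isnan(patch)]
        if valid.size >= min_neighbors:           # require at least three neighbours
            filled[i, j] = valid.mean()
    return filled

# Usage sketch: start from the base orthomosaic (the one with fewest gaps,
# VIIRS-JPSS-1 in the case study) and apply the remaining sensors in turn,
# before handing the merged image I_M to the segmented DINEOF step.
# merged = fill_from_other_sensor(viirs_jpss1, viirs_snpp, window=7)
# merged = fill_from_other_sensor(merged, modis_aqua, window=7)
# merged = fill_from_other_sensor(merged, modis_terra, window=7)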
"Subject: Submission of revised paper CS-2020:10:54271:1:2:REVIEW April 17, 2022 Gang Mei PeerJ Computer Science Dear Editor Thank you for allowing us to submit a revised draft of our manuscript titled “An approach to fill in missing data from satellite imagery using data-intensive computing and DINEOF.” We appreciate the time and effort you and the reviewers have dedicated to providing your valuable feedback on our manuscript. In regard of the changes, the paper’s quality and legibility was significantly improved, and we appreciate the insight of the reviewers’ comments. Here is a point-by-point response to the editor and reviewers’ comments and concerns. Editor 1.1 — Compared with the previous version, the revised version has been obviously improved. However, as also pointed out by the reviewer, the authors are suggested to describe more details of the missing value imputation method to justify the main novelty. So, in the related sections, such as the last paragraph in the section of Introduction, and the section of Method, authors are strongly advised to clearly point out the contributions and novelty when comparing with existing related work. Reply: Following your suggestions, we have rewritten part of the following sections of the paper (Introduction, Method, Experimental Results, and Conclusions) in order to point out the contributions and novelty of the work reported. In summary, we propose a product that automates the process of filling in missing data in a satellite image. The first novelty of our approach compared to previous works is that we use data from different sensors, while previous works used data from the same set of sensors. Another novelty in our proposal is the way the filling of missing data was performed, which is carried out by chaining three different strategies: • The first data with which the gaps in the images are filled comes from the fusion of four data sources (MODIS-TERRA, MODIS-AQUA, VIIRS-SNPP, and VIIRS-JPSS-1). • The next step consists of estimating the missing pixels close to those obtained in the previous step, for which the nearest neighbor approach using multivariate interpolation is employed. • Empirical orthogonal functions are used to fill in the last missing data, for which the DINEOF software is used. 1 It is well known that the amount of computations performed by DINEOF increases when using high-resolution images; therefore, when the study area is as large as the exclusive economic zone of Mexico (EEZM), processing an image of this size with DINEOF causes memory overflow. To address the memory overflow problem, an approach was proposed that uses data-intensive computing techniques that assume an underlying cluster architecture. Reviewer 4 Basic reporting 4.1 — The authors present an approach to filling in missing data from satellite imagery using dataintensive computing. The approach was divided into three main modules. The idea is interesting; however, the authors must improve several details before accepting the paper. Reply: Dear reviewer, thank you very much for your review work; we are sure that your suggestions and comments helped improve the manuscript’s quality; the following explains how each of the points suggested by you has been addressed. 4.2 — The authors must improve the approach implementation explanation a justify what is the main novelty of their method compared to state of the art. As explained, it seems that they only divided the data into smaller sets to be processed and this is not enough. 
Reply: Following your suggestions, we have rewritten part of the following sections of the paper in order to detail the work done: Introduction, Method, Experimental Results, and Conclusions. In particular, this paper presents an approach for filling in missing data in high-resolution satellite images. Unlike the approaches found in the literature review, which address the missing data filling problem using data from the same set of sensors, this paper proposes an algorithm that merges data from 4 different sources and creates an orthomosaic with merged data. To perform the data merge, a module is proposed that selects the image with less missing data and establishes the order in which the three remaining images will be processed; the processing order is established using the root mean square error measure, while the filling of the missing data is performed using the nearest neighbor technique. However, the approach mentioned above is usually not sufficient to fill all the missing data; therefore, like many of the works found in the literature, we use DINEOF to finish filling the missing data. The problem with DINEOF is that it consumes a lot of computational resources and is usually limited to studying small geographic areas. To address these restrictions of DINEOF, this paper proposes a method that divides the work into segments that DINEOF can process and then integrates the results into an orthomosaic without missing data. The resulting orthomosaic has real information from 4 different sources and information extracted from time series using DINEOF. In summary, this paper presents a product that automates the process of filling in missing data in a satellite image using data from 4 different sources and DINEOF. 4.3 — Also, a computational complexity analysis must be added to understand and clarify the contribution of the approach. Reply: We appreciate your suggestion on computational complexity. Computational complexity was evaluated considering your comment following a theoretical approach; therefore, a section entitled “Computational complexity analysis” was included. 2 Experimentally, the time was evaluated after each trial, and the average duration and standard deviation are included in Tables 2 and Table 3. Spatial complexity was studied in terms of the size of the segments in which the orthomosaic IM was divided. Finally, it is important to mention that the DINEOF algorithm was not able to finish its execution with the original sequences (2500 × 1498 × 12) due to memory overload. This comment was emphasized in the text. Experimental design 4.4 — The authors present interesting experiments and results. However, they must find a metric to validate the results; a visual comparison is not enough. This metric will allow comparing the proposed approach with other methods reported in the literature. for example the complexity of their approach. Reply: The authors appreciate the comment of the reviewer and highlight that numerical comparison was included in Table 2 and Table 3. Table 2 presents the results in terms of RMSE, percentage of data coverage, and execution time. And Table 3 presents the average time and RMSE obtained after applying the proposed approach to three distinct cloudy scenarios. In addition, an analysis of the impact of the size of the selected segment on the number of operations required to process a high-resolution image was included, see Fig 9. 
4.5 — Also, they can report processing time a compare it with the DINEOF algorithm processing time with previous work reported. Reply: We appreciate your comments and find them very interesting. However, we consider it impossible for this particular study to do a processing time analysis. In a strict sense, most of the previous studies start from the information of one sensor and from there use DINEOF to fill in the missing data. We merge the information from 4 different sources and create an orthomosaic with merged data, which has less missing data than any of the four original images; next, we use DINEOF to finish filling in the missing data. On the other hand, we found a way to process with DINEOF large geographic regions such as the EEZM, automating the segmentation of the data in such a way that DINEOF does not exhaust the computational resources. However, a comparison of the proposed approach using nine different window sizes to fill data was performed and compared against the classical method implemented in SNAP. The results are reported in Table 2, where the window sizes 5 × 5 and 7 × 7 presented the lowest RMSE value when compared to other window sizes. Comments for the Author 4.6 — the authors must clarify the real impact and originality of the proposed method. Currently, this is not enough to accept the paper. Reply: Following your suggestions, we have rewritten part of the following sections of the paper in order to detail the work done: Introduction, Method, Experimental Results, and Conclusions. 3 We hope the revised version fulfills the expectations of the editorial team and look forward to hearing from you in due course. Sincerely, Dr. Himer Avila-George Professor Department of Computer Science and Engineering Universidad de Guadalajara, México E-mail: himer.avila@academicos.udg.mx On behalf of all authors. 4 "
Here is a paper. Please give your review comments after reading it.
424
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>When modelling epidemics, the outputs and techniques used may be hard for the general public to understand. This can cause fear mongering and confusion on how to interpret the predictions provided by these models. This article proposes a solution for such a model that was created by a Canadian institute for COVID-19 in their region-namely, the NorthCOVID-19 model. In taking these ethical concerns into consideration, first the web interface of this model is analyzed to see how it may be difficult for a user without a strong mathematical background to understand how to use it. Second, a system is developed that takes this model's outputs as an input and produces a video summarization with an autogenerated audio to address the complexity of the interface, while ensuring that the end user is able to understand the important information produced by this model. A survey conducted on this proposed output asked participants, on a scale of 1 to 5, whether they strongly disagreed (1) or strongly agreed (5) with statements regarding the output of the proposed method. The results showed that the audio in the output was helpful in understanding the results (80% responded with 4 or 5) and that it helped improve overall comprehension of the model (85% responded with 4 or 5). For the analysis of the NorthCOVID-19 interface, a System Usability Scale (SUS) survey was performed where it received a scoring of 70.94 which is slightly above the average of 68.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The spread of COVID-19 has had a great impact around the world and was classified as a global pandemic in March 2020 as a result <ns0:ref type='bibr'>(World Health Organization et al. (2020)</ns0:ref>). The exponential rate at which this disease spread has resulted in an abundance of research into how populations can try to handle the disease and do their part in preventing it's transmission <ns0:ref type='bibr'>(World Health Organization et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Remuzzi and Remuzzi (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Wong et al. (2020)</ns0:ref>). In the past, the idea of modelling these types of diseases has been a prominent area of interest in the mathematical modeling field as it can assist in future planning and preparation <ns0:ref type='bibr' target='#b28'>(Roddam (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b7'>Garibaldi et al. (2020)</ns0:ref>). A basic-yet very populartype of modelling technique known as the Susceptible Infectious Recovered (SIR) model <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>) has been used in the past to model diseases such as Ebola <ns0:ref type='bibr' target='#b4'>(Berge et al. (2017)</ns0:ref>), influenza <ns0:ref type='bibr' target='#b24'>(Osthus et al. (2017)</ns0:ref>), and measles <ns0:ref type='bibr' target='#b5'>(Bj&#248;rnstad et al. (2002)</ns0:ref>) for example. However, the outputs of these models might be difficult for the general public to understand as it can not be assumed that everyone will have the necessary mathematical background needed to properly interpret these results <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>). 
This is an important issue to address for two main reasons: (1) to prevent fear mongering <ns0:ref type='bibr' target='#b3'>(Begley et al. (2007)</ns0:ref>) from not being able to fully understand how these models are developing their conclusions and (2) to help the population understand the impact of having appropriate community-level prevention measures for these diseases be followed. This work will look at the NorthCOVID-19 model that was created by a Canadian institute <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) for epidemic modeling in the North Ontario region. The authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) extended the SIR model <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>) to include a second PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:1:1:NEW 17 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science population to run in parallel with the first, as well as additional states to assist in forecasting the hospital demand for the region which resulted in a total of 32 configurable parameters. Moreover, a web interface is publically available 1 where users can configure these parameters and run a simulation to see the results.</ns0:p><ns0:p>The layout of this article is as follows: it will start with a basic introduction to the SIR model, as well as the motivation behind assisting in public understanding. Next, a brief overview of the NorthCOVID-19 model will be provided, as well as an analysis of the current interface to see how difficult it may be to use.</ns0:p><ns0:p>Next, our proposed video-generation-solution will be discussed that utilizes artificial intelligence (AI) to enhance explainability, along with frames from an example of the output to show the ethical decisions made throughout its development. We refer to this as an ethical visualization which, in the context of this work, is a visual representation of data that takes ethics into consideration to ensure that any user interpreting it is not left confused or with unanswered questions. A more in-depth explanation is presented in the issues section where we show specifically what was taken into consideration for this visualization.</ns0:p><ns0:p>Lastly, the results of a survey presented to users outside of this project is shown, with the purpose being to gather their opinions of this proposed output and compare it to the currently available output on the web interface. The contributions of this work are as follows:</ns0:p><ns0:p>1. A method entirely motivated by ethics to visualize epidemic modelling 2. An application of AI for social good <ns0:ref type='bibr' target='#b17'>(Inkpen et al. (2019))</ns0:ref> 3. An analysis of a public interface for epidemic modelling from an ethical standpoint In the context of epidemic modelling, it is important to ensure that the data-particularly for COVID-19 <ns0:ref type='bibr' target='#b29'>(Roser et al. (2020)</ns0:ref>; Hoseinpour Dehkordi et al. ( <ns0:ref type='formula'>2020</ns0:ref>))-is represented using ethical visualization techniques because it is projecting the outcome of the entire population. 
Therefore, the output should be represented in a way that can not only be interpreted easily by any user but also visualized ethically.</ns0:p></ns0:div> <ns0:div><ns0:head>SIR Model</ns0:head><ns0:p>The origin of the SIR model dates back to 1927 as a contribution to the mathematical theory of epidemics <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>). Since then, there have been many works that have expanded on the core concepts of the SIR model and have proven to be a simple yet effective way to predict the course of various epidemics <ns0:ref type='bibr' target='#b28'>(Roddam (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Osthus et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b7'>Garibaldi et al. (2020)</ns0:ref>). For the NorthCOVID-19 model <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>), a time-series variation was used as the basis where the differential equations represent a population's change over a set time period <ns0:ref type='bibr' target='#b1'>(Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Osthus et al. (2017)</ns0:ref>). In the core equations of the model, there are a total of four parameters; contact rate (the number of people an infectious individual can infect on average), infectivity rate (the percent chance a susceptible individual can become infected by an infectious individual), recovery rate (the percent of infectious individuals that recover from the infection after one time period), and the total population size <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Roddam (2001)</ns0:ref>). One time period, for this article's purpose, is a tenth of a day (i.e., an update is made to the system every 0.1 days) as it is defined as such in the NorthCOVID-19 model.</ns0:p></ns0:div> <ns0:div><ns0:head>Public Understanding</ns0:head><ns0:p>When discussing the results of these epidemic models with the general public, it may be difficult to convey the reasoning behind their outputs and why it is important to fully understand. To further the motivation behind this article's purpose, the ACM Code of Ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) were reviewed and the following four codes were found to be the most relevant to this work: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>With the output of the proposed method, a great deal of care is taken to ensure that no important information is missed or downplayed to the end-users <ns0:ref type='bibr' target='#b13'>(Hoseinpour Dehkordi et al. (2020)</ns0:ref>). The goal is to help the general public understand the outputs from these types of models so that they can interpret it without any confusion.</ns0:p></ns0:div> <ns0:div><ns0:head>Prior Works</ns0:head><ns0:p>When searching for related works in visualizing epidemic models we have found that, to the best of our knowledge, next to none exists in recent literature. Looking further back, however, work from <ns0:ref type='bibr' target='#b12'>(H&#246;hle and Feldmann (2007)</ns0:ref>) in 2007 proposed a R (programming language) package that included visuals for 'stochastic epidemic models'. The authors did not incorporate ethics into the work as it simply displayed the raw data in a graphical form. In 2011, the authors of <ns0:ref type='bibr' target='#b23'>(Maciejewski et al. 
(2011)</ns0:ref>) presented a computer application called 'PanViz' that was targeted towards public health officials in the USA to simulate a pandemic. The visualization aspect of this program showed graphs of the statistics and, most notably, how the disease may spread throughout various districts in the country-which was further detailed in another work by two of the authors <ns0:ref type='bibr' target='#b0'>(Afzal et al. (2011)</ns0:ref>). In both cases, though, ethics was not taken into consideration and it was not targeted towards the general public.</ns0:p><ns0:p>When interpreting the outputs of an epidemic model, a prominent thought may be how trustworthy the results are. This is a valid concern as these models are trying to predict the future outcome of a given scenario. In recent literature, visualization of such 'uncertainty' has been explored as it affects the trust and understanding of the individual interpreting it <ns0:ref type='bibr' target='#b10'>(Greis et al. (2017)</ns0:ref>; Hullman (2019); Hofman et al. ( <ns0:ref type='formula'>2020</ns0:ref>)). In the work presented by <ns0:ref type='bibr' target='#b18'>(Kale et al. (2019)</ns0:ref>), the authors looked at the complexity that researchers face in managing such uncertainties in a way that does not discredit their work. They provide solutions such as showing all possible outcomes based on the uncertainty or simply disclosing the reason it exists in the first place <ns0:ref type='bibr' target='#b18'>(Kale et al. (2019)</ns0:ref>). In either case, it should not be too complex for the end-user to understand to prevent confusion or distrust <ns0:ref type='bibr' target='#b10'>(Greis et al. (2017)</ns0:ref>). In the context of our specific work, a number of uncertainties exist when projecting for COVID-19; especially when explaining it to lay users. Such issues will be raised along with the solutions that were incorporated in the proposed output.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>NorthCOVID-19</ns0:head><ns0:p>The NorthCOVID-19 model was created by the authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) as a variation of the SIR model that considered both urban and rural populations when simulating an epidemic. The foundation of this model uses sets of differential equations to move individuals from one state to another. It also takes into account resources reaching maximum capacity by having 'overflow' equations to emulate the path individuals will take in that instance <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). For example, if an ICU has reached maximum capacity but an individual is set to enter it, they will, instead, go through these 'overflow' paths. Furthermore, when compared to the SIR model, an additional five states (Hospitalized, ICU, ICU Discharge, Ward, and Death) have been added within two populations running in parallel because, for the target location of the Northern Ontario region in Canada, rural communities commonly access urban health services for more specialized care <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). 
However, for the purposes of this research, the proposed output to be discussed will only take one population into consideration-specifically, the urban population to improve the understanding of outputs for one location at a time without the added complexity of another population, thus affecting the results.</ns0:p><ns0:p>Another important aspect of this model is the intermediate states denoted as purple squares. These are sub-states where individuals need to be held for a number of time periods before travelling down the flow they are connected to. For example, in the 'ICU' state, one of the parameters used is the number of days an individual will need to stay there before being moved to the 'Ward'. If an individual does not fall under the flow leading to the 'Death' state, they will then be held in the intermediate state so that they are only affected by the flow leading to the 'Ward' state while still occupying a spot within the 'ICU' state. The main limitation of this model is that the parameters are entirely static in nature-which is different in reality with public health policies such as lockdowns, for example, which would change the population's average contact rate. Regardless, this model is important as it is able to project what would happen if no changes were made to curb the spread of the modelled disease. Further details on the model and parameters can be found in <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:1:1:NEW 17 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Current Interface and Output</ns0:head><ns0:p>When the authors of NorthCOVID-19 <ns0:ref type='bibr'>(Savage et al. (</ns0:ref> <ns0:ref type='formula'>2020</ns0:ref>)) made the model, a web interface was designed alongside it to provide researchers in the field an accessible means of interacting with it. This interface needed to accommodate a total of 32 parameters across two populations so that individual aspects of the model could be modified. Once this has been configured to the user's specifications, it can be submitted where the simulation is run in a matter of seconds before being returned to the user shortly after.</ns0:p><ns0:p>Once the results have been computed, the first thing displayed to the user is a summary of what occurred in the simulation. Second, the user is shown the number of susceptible, infectious, and recovered individuals per capita of 10,000 over time. Next, the user is shown the number of deaths per capita of 10,000 over time. Lastly, a graph showing the ICU usage over time is presented to the user, as well as a plot of how many care units would be in place, ideally, to accommodate everyone. The user is given the option to download a spreadsheet of the raw results from the simulation that shows the fractional, point-in-time counts, for each state, every 0.1 timesteps.</ns0:p></ns0:div> <ns0:div><ns0:head>Issues</ns0:head><ns0:p>To get an initial idea of how the NorthCOVID-19 interface was being perceived, we asked the authors <ns0:ref type='bibr'>(Savage et al. (</ns0:ref> <ns0:ref type='formula'>2020</ns0:ref>)) for their feedback on possible issues that may surface for the general public. 
It was noted by them that it had already been demonstrated to other researchers, public health officers, policy makers, and physicians so they had a good idea of how users were assessing their system <ns0:ref type='bibr' target='#b25'>(Pearce et al. (2020)</ns0:ref>). Taking their points and observations into consideration, a few issues became apparent:</ns0:p><ns0:p>1. The initial interface of 32 parameters could be overwhelming to a user that does not understand how each parameter works 2. The raw results, being so granular and in decimal format, could be confusing to a user that does not understand the implementation of mathematics in the model 3. The graphs, although valuable to researchers in the field as the results have been generalized to per capita values, could be confusing to an end user that does not understand how these ratios work or what they mean to the actual population size These issues can be further addressed by the ACM Code of Ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) discussed before. For issue 1, codes 2.7 and 3.1 can be applied as we want to ensure that the users understand the reasoning behind these results. Although there are 32 parameters, the additions on top of the base SIR model come from the hospitalization states that have been added to NorthCOVID-19 <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). Since this aspect of the model would be specific to the regions it is being applied to, the general public would assume that these states have been configured to what their area expects. Therefore, in the proposed output, we only display parameters from the model that relate to the disease itself-contact rate, infectivity, and recovery time-as they would be most likely to vary based on location-specific statistics.</ns0:p><ns0:p>Additionally, however, the NorthCOVID-19 model also has parameters for how many individuals are initially infected as well as what the capacity limit of the intensive care unit (ICU) is <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>).</ns0:p><ns0:p>These parameters are also important for the general public to know as it shows how the disease begins in their region as well an important value showing the handling capacity of their hospitals. For issue 2, code 1.3 and 3.1 can be applied as we want ensure that the general public doesn't dismiss the results of these models by helping them understand results in a better format. Lastly, for issue 3, all of the codes discussed in section can be applied as the output of the model should be understandable for all individuals since it has a great deal of importance to the general public.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Epidemic Model Output</ns0:head></ns0:div> <ns0:div><ns0:head>Video Output</ns0:head><ns0:p>The first aspect to the proposed solution is a video output that is generated based on the following values made available by the NorthCOVID-19 model: the parameters used, summarized results, and raw data. In practice, these values would be based on local statistics that are relevant to the audience being targeted.</ns0:p><ns0:p>To demonstrate the proposed method, arbitrary values will be used to give an idea of how the video output would look. In the very first frame, we display to the user the preset values for the important parameters identified. 
This frame stays up long enough for the audio output (discussed in the next section) to read all values out to the viewer-an example can be seen in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. This is to ensure that the viewer understands the setup of this particular output before any of the results are presented.</ns0:p><ns0:p>In the next frame, the proposed solution first presents the raw infectious numbers over time. The scale on this graph is an absolute value instead of being a per capita value so it is much more transparent and easier to understand <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>Kostelnick (2008)</ns0:ref>)-an example of this is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. However, it does not convey another crucial design choice: the graph is animated to show the numbers over time from the first day to the last day. During the animation, the audio output discussed in the next section plays to walk the viewer through what happened as a result, with the total number fading in for the conclusion of this frame. The reasoning for this visualization setup is to ease the viewer into the results instead of abruptly showing a graph that they need to interpret by themselves <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>).</ns0:p><ns0:p>The same design choice was made for the frames following this one as shown in Figures <ns0:ref type='figure' target='#fig_4'>3 and 4</ns0:ref>. Once the last graph has been shown to the viewer, a concluding frame is shown where all peaks that were previously shown are presented, on the same scale, in a bar chart. As with the other slides and to be discussed in the next section, an audio output accompanies this to reiterate each of the results to the viewer. By displaying the peaks in this format, one can view the magnitude of each peak relative to one another with little to no confusion <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>). This is particularly important for the infectious, recovered, and death peaks as this will typically be what the general public is most concerned with <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Begley et al. (2007)</ns0:ref>). To conclude this section, the code for the proposed output was created in Python and uses the libraries matplotlib <ns0:ref type='bibr' target='#b16'>(Hunter (2007)</ns0:ref>) along with moviepy 2 to generate the result discussed.</ns0:p></ns0:div> <ns0:div><ns0:head>Audio Output</ns0:head><ns0:p>To further enhance the video output, the choice was made to have a text-to-speech library interpret the results for the viewer. However, a simple, robotic voice was not desirable as it may sound too monotonous for the context and not feel trustworthy as a result. Therefore, this work uses an artificial intelligence assisted solution from <ns0:ref type='bibr' target='#b31'>(Tachibana et al. (2018)</ns0:ref>). To summarize, this novel approach uses deep convolution networks to adjust the actual audio spectrograms from the text-to-speech to make it sounds more realistic and less robotic. The authors made this codebase publically available along with a pre-trained model that is used in this work. In the introductory slide (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>), the following text was converted to audio using this library:</ns0:p><ns0:p>Visualization of NorthCOVID-19. 
The results in this video are based on the following parameters: an initial population of 'N', initial infected 'N_i', ICU capacity of 'N_c', contact rate of 'c', an infectivity of 'tau' percent, and an illness duration of 'nu' days</ns0:p><ns0:p>The parameters in this text and the ones to follow use the notation from NorthCOVID-19 <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). Since this output is a generated dynamically, the text is updated based on these values. In the next frame (Figure <ns0:ref type='figure'>2</ns0:ref>) the following text is spoken: This scenario started with 'N_i' infected individuals. Once the infection ended after 'INFECTION_DURATION' days, there was a total of 'TOTAL_INFECTED_COUNT' individuals that had become infected</ns0:p><ns0:p>Where 'INFECTION DURATION' and 'TOTAL INFECTED COUNT' come from their respective values but as the absolute values instead of per capita. In the next frame (Figure <ns0:ref type='figure'>3</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>Over the course of the infection, the ICU capacity peaked at 'ICU_PEAK_COUNT' individuals while the Ward peaked at 'WARD_PEAK_COUNT' individuals. The ICU limit in this scenario was set to 'N_c' individuals</ns0:p><ns0:p>Where 'ICU PEAK COUNT' and 'WARD PEAK COUNT' come from the maximum value in their respective sections of the model. In the next frame (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>Out of the 'TOTAL_INFECTED_COUNT' individuals that become infected, a total of 'TOTAL_RECOVERED_COUNT' individuals recovered from the infection while 'TOTAL_DEATH_COUNT' deaths were recorded</ns0:p><ns0:p>Where 'TOTAL RECOVERED COUNT' and 'TOTAL DEATH COUNT' come from their respective values but as the absolute values instead of per capita. Lastly, in the conclusion slide (Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>After the infection ended in 'INFECTION_DURATION' days. The total infected peaked at 'TOTAL_INFECTED_COUNT', the ICU peaked at 'ICU_PEAK_COUNT', the Ward peaked at 'WARD_PEAK_COUNT', the recovered peaked at 'TOTAL_RECOVERED_COUNT', and the deaths peaked at 'TOTAL_DEATH_COUNT' By having this audio accompany the video output, the goal is to assist the user in their understanding of the results in a simple, to the point, format. The result is an informative 2 minute video (with audio) that, from start to finish, takes around 10 minutes to generate. </ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>User Study</ns0:head><ns0:p>To evaluate the web interface of the NorthCOVID-19 model and the proposed video output, a 15 question survey was created. The link to this form was posted on the authors' Linkedin (total of 38 individuals responded) and Slack channels (total of 2 individuals responded) with the only restriction being that it could be filled out only once. When an individual was first presented with the survey, they were shown a 4-minute video that walked them through how to use the web interface that was provided by the authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). Then, they were asked to use the platform on their own and answer 10 questions that were adapted from the System Usability Scale (SUS) <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>) as per Figures <ns0:ref type='figure'>6 (a-j</ns0:ref>). 
An answer of 1 meant they 'Strongly Disagree' with the statement and 5 meant 'Strongly Agree'. Lastly, they were shown a sample output of the proposed video 3 in this article, and asked 5 questions regarding their thoughts about it as per Figures <ns0:ref type='figure'>7 (a-e</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Survey</ns0:head><ns0:p>After completing the survey, a total of 40 individuals responded. Since the first ten questions (Figures 6a through 6j) were directly adapted from SUS <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>), a score was derived to see how simple the interface is to use. Once these values were calculated for each individual response, the average was determined to get the final result where 0 means that the interface is not usable at all and 100 means that it is extremely easy to use. From studies done on this survey, a score of 68 is noted as being average for this measure <ns0:ref type='bibr' target='#b2'>(Bangor et al. (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Lewis (2018)</ns0:ref>). This survey conducted on NorthCOVID-19 scored the interface at 70.94 which is slightly above the average. This result, along with the issues raised show the need to create a different way of interfacing with this model.</ns0:p><ns0:p>In analyzing these results further <ns0:ref type='bibr'>(Figures 7(a)</ns0:ref> through 7(e)), it can be seen that a majority of the responses leaned towards a confident understanding of the video output and improved comprehension of the results. An interesting observation, however, can be seen in Figure <ns0:ref type='figure'>7b</ns0:ref> when the individuals were asked if the website output was easier to understand. Most answers fell under a neutral response (i.e., score of 3) but enough fell under scores 4 and 5 (i.e., 'Agree' and 'Strongly Agree') that it leaned more towards agreeing than disagreeing with the statement. This response could be attributed to the fact that the website is entirely interactive with its graphs and transparency in regards to the parameter configuration.</ns0:p><ns0:p>Regardless, from Figures <ns0:ref type='figure'>7a, 7c</ns0:ref>, 7d, and 7e, a strong leaning towards the video output being easy to understand, improving comprehension, and being helpful to the individuals who took the survey can be Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>observed in these results. This shows achievement in the goal of meeting ACM codes 1.3, 2.7, 3.1, and 3.2 as discussed before.</ns0:p></ns0:div> <ns0:div><ns0:head>LIMITATIONS</ns0:head><ns0:p>Although the responses to the survey seem promising, we do acknowledge that the collection technique used is a limitation in itself. We had decided against recording demographic and educational information of the respondents to ensure anonymity, but can see how that could've provided further insight to these ratings. Furthermore, the number of individuals that responded was relatively low, and is another limitation of this work. We welcome more feedback on this visualization and hope to continue improving its output so that all can understand.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, the outputs of an epidemic model called 'NorthCOVID-19' (Savage et al. ( <ns0:ref type='formula'>2020</ns0:ref>)) along with its web interface was examined to see what issues it may have when being interpreted by the general public. 
The interface was created with researchers in the field in mind but the issues affects even the general population regardless of their background. After identifying three main issue that could cause anxiety and confusion, this article focused on creating a solution that would thoroughly explain the results in an ethical, easy to understand format. The result was a video that eases in to each of the results while having an artificial intelligence assisted voice further explaining the proposed output the viewer is seeing.</ns0:p><ns0:p>This feature was designed with the ACM code of ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) in mind as we wanted to ensure that the output presented was easy for the general public to understand <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; Pickering and Kara ( <ns0:ref type='formula'>2017</ns0:ref>)).</ns0:p><ns0:p>To evaluate the NorthCOVID-19 interface, questions were adapted from the System Usability Scale (SUS) <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>) to get a score out of 100 that describes how easy it is to use. The resulting score was calculated to be 70.94 which is slightly above average <ns0:ref type='bibr' target='#b2'>(Bangor et al. (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Lewis (2018)</ns0:ref>).</ns0:p><ns0:p>When users were asked about the video output in comparison to the website's output, the results showed most answers being neutral with a leaning towards the website's output being easier to understand. This could be attributed to the interactive graphs provided and transparency of the parameter configuration.</ns0:p><ns0:p>However, when discussing the video output itself, the results strongly leaned towards it helping improve the comprehension of the results and being easy to understand. This shows that the proposed output does add value to NorthCOVID-19 and that the design choices involved (having an animated video output with audio) achieved their intended purpose of providing the user with the information needed to understand the results without an interface. We have opted to make the simulation model, video generation script,</ns0:p><ns0:p>and survey results open-sourced at the following repository: https://github.com/andrfish/NorthCOVID19</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>public awareness and understanding of computing, related technologies, and their consequences &#8226; 3.1: Ensure that the public good is the central concern during all professional computing work &#8226; 3.2: Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group 1 https://covid.datalab.science/ 2/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:1:1:NEW 17 Mar 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The first frame in the proposed solution highlighting the initial parameters used in this example simulation</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. The second frame in the proposed solution</ns0:figDesc><ns0:graphic coords='6,170.92,470.51,355.20,198.96' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>2 https://zulko.github.io/moviepy/ 6/12 PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The fourth frame in the proposed solution</ns0:figDesc><ns0:graphic coords='8,172.60,77.48,351.84,196.56' type='bitmap' /></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The final frame in the proposed solution</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .Figure 7 .</ns0:head><ns0:label>67</ns0:label><ns0:figDesc>Figure 6. The results of each question in the first part of the survey</ns0:figDesc></ns0:figure> </ns0:body> "
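Because the usability result above (a SUS score of 70.94) is derived from the ten SUS items, a short Python sketch of the standard Brooke (1996) scoring rule may help readers recompute it from the raw survey responses released in the authors' repository. This is not the authors' analysis script; it assumes the conventional SUS rule (odd items contribute rating minus 1, even items contribute 5 minus rating, and the raw sum is scaled by 2.5) and a response format of ten 1-5 ratings per participant, which is an assumption about how the spreadsheet is laid out.

def sus_score(ratings):
    # Standard SUS scoring for one participant: `ratings` holds the ten 1-5
    # answers to items 1..10, in order.
    assert len(ratings) == 10
    raw = 0
    for item, rating in enumerate(ratings, start=1):
        raw += (rating - 1) if item % 2 == 1 else (5 - rating)
    return raw * 2.5  # maps the 0-40 raw sum onto the usual 0-100 scale

def mean_sus(all_ratings):
    # Average SUS score over every participant, as reported for the interface.
    return sum(sus_score(r) for r in all_ratings) / len(all_ratings)

# Hypothetical example with two participants:
# mean_sus([[4, 2, 4, 2, 5, 1, 4, 2, 4, 2],
#           [3, 3, 4, 2, 4, 2, 3, 3, 4, 3]])   # -> 71.25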
"DATE: March 14, 2022 TO: Academic Editor, PeerJ Computer Science FROM: Andrew Fisher, Department of Computer Science Lakehead University, Thunder Bay, ON, Canada P7B 5E1 RE: Article resubmission. Dear Editor, We wish to submit a revised copy of our article entitled ' An ethical visualization of the NorthCOVID-19 model' for consideration by PeerJ Computer Science. We can attest that this is an original article that has not been published elsewhere, nor is it currently under submission for publication elsewhere. We have carefully reviewed the concerns and believe they were addressed with the addition of an open-source repository as well as integration to the covid.datalab.science website. Please see our point-by-point response in the pages following this cover letter. We have no conflicts to disclose. Please address all correspondence concerning this article to me at afisher3@lakeheadu.ca. Thank you for taking the time to consider this article. Sincerely, Andrew Fisher Research Assistant, Department of Computer Science Lakehead University, Thunder Bay, ON Tel:204-761-9726 Web:http://www.datalab.science Reviewer 1 (Monika Heiner) Basic reporting I’m impressed how many words can be spent for explaining and justifying the design of a simplified user interface. The paper reads very well, however, I can’t really assess its value as the videos themselves, not to speak of the system generating the videos on the fly, are not available, in contrast to the original website, https://covid.datalab.science, for which the simplified user interface has been developed for. I suggest to add a few words explaining how the outcome of the reported work will be used/made available in the future. Experimental design no comment Validity of the findings Can't be assessed as the main result of the paper (a video-generating add-on to a given web interface) is not available. Thank you for your comments. We have integrated the video generation feature to the website and invite you to try it out https://covid.datalab.science. Please see the additional video download button for both urban and rural results after running a simulation. Additionally, we have made a repository containing both the NorthCOVID-19 and video generation script so future research can re-use it: https://github.com/andrfish/NorthCOVID19 Additional comments This paper has nine authors; it could be helpful to see how each of the authors contributed to the reported work. The first 7 authors contributed to the video generation script (the specificity of this can be seen in the repository, please see the example figure below) as well as the writing of the paper. The last 2 authors supervised the work and provided revisions as the paper was being developed. Additionally, the contribution breakdown has been described in the PeerJ submission itselfplease see the screenshot below Example contribution breakdown from the repository Contribution breakdown from the PeerJ submission There are no raw data. We have also added the raw survey data to our repository: https://github.com/andrfish/NorthCOVID19 minor: line 119: there seems to be something missing after ‘section’; Thank you for noticing this, we have resolved it. Reviewer 2 (Rashid Mehmood) Basic reporting The authors examine the outputs of an epidemic model called “NorthCOVID-19” along with its web interface to see what issues it may have when being interpreted by the general public. 
They identified three main issues that could cause anxiety and confusion and focused on creating a solution that would thoroughly explain the results in an ethical, easy-to-understand format. They built audio and video methods including using AI to communicate the epidemic model results. This is a well-written paper that I recommend to be accepted. Experimental design The experimental design is clear with sufficient details to reproduce the experiments and results. Validity of the findings Conclusions are well-stated. Thank you for your comments. Reviewer 3 (Anonymous) Basic reporting Overall it looks interesting. However, I am not too sure if the assumptions made are true. Experimental design Assuming that the proposed model is for public use. I don't think so. You cannot find the model anymore. Thank you for your comments. We have integrated the video generation feature to the website and invite you to try it out https://covid.datalab.science. Please see the additional video download button for both urban and rural results after running a simulation. Validity of the findings How independent the study can be if the first author of this paper is the second author of the published model paper? These results were completely unmoderated as the users were simply given a set of instructions and asked a series of questions via Google Forms. Therefore we do not believe that the authorship would have an impact on the outcome. One needs to provide some facts about how much the model was used by the general public. A sample of 40 is not too bad, but not knowing who the participants are in real trouble. Since our survey was focused solely on usability feedback from the users, we did not collect demographic nor general public usage information. The raw dataset, however, has been added to a repository we have made for this project: https://github.com/andrfish/NorthCOVID19 "
Here is a paper. Please give your review comments after reading it.
425
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>When modelling epidemics, the outputs and techniques used may be hard for the general public to understand. This can cause fear mongering and confusion on how to interpret the predictions provided by these models. This article proposes a solution for such a model that was created by a Canadian institute for COVID-19 in their region-namely, the NorthCOVID-19 model. In taking these ethical concerns into consideration, first the web interface of this model is analyzed to see how it may be difficult for a user without a strong mathematical background to understand how to use it. Second, a system is developed that takes this model's outputs as an input and produces a video summarization with an autogenerated audio to address the complexity of the interface, while ensuring that the end user is able to understand the important information produced by this model. A survey conducted on this proposed output asked participants, on a scale of 1 to 5, whether they strongly disagreed (1) or strongly agreed (5) with statements regarding the output of the proposed method. The results showed that the audio in the output was helpful in understanding the results (80% responded with 4 or 5) and that it helped improve overall comprehension of the model (85% responded with 4 or 5). For the analysis of the NorthCOVID-19 interface, a System Usability Scale (SUS) survey was performed where it received a scoring of 70.94 which is slightly above the average of 68.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The spread of COVID-19 has had a great impact around the world and was classified as a global pandemic in March 2020 as a result <ns0:ref type='bibr'>(World Health Organization et al. (2020)</ns0:ref>). The exponential rate at which this disease spread has resulted in an abundance of research into how populations can try to handle the disease and do their part in preventing it's transmission <ns0:ref type='bibr'>(World Health Organization et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Remuzzi and Remuzzi (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Wong et al. (2020)</ns0:ref>). In the past, the idea of modelling these types of diseases has been a prominent area of interest in the mathematical modeling field as it can assist in future planning and preparation <ns0:ref type='bibr' target='#b28'>(Roddam (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b8'>Garibaldi et al. (2020)</ns0:ref>). A basic-yet very populartype of modelling technique known as the Susceptible Infectious Recovered (SIR) model <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>) has been used in the past to model diseases such as Ebola <ns0:ref type='bibr' target='#b4'>(Berge et al. (2017)</ns0:ref>), influenza <ns0:ref type='bibr' target='#b24'>(Osthus et al. (2017)</ns0:ref>), and measles <ns0:ref type='bibr' target='#b5'>(Bj&#248;rnstad et al. (2002)</ns0:ref>) for example. However, the outputs of these models might be difficult for the general public to understand as it can not be assumed that everyone will have the necessary mathematical background needed to properly interpret these results <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>). 
This is an important issue to address for two main reasons: (1) to prevent fear mongering <ns0:ref type='bibr' target='#b3'>(Begley et al. (2007)</ns0:ref>) from not being able to fully understand how these models are developing their conclusions and (2) to help the population understand the impact of having appropriate community-level prevention measures for these diseases be followed. This work will look at the NorthCOVID-19 model that was created by a Canadian institute <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) for epidemic modeling in the North Ontario region. The authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) extended the SIR model <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>) to include a second PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:2:0:NEW 20 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science population to run in parallel with the first, as well as additional states to assist in forecasting the hospital demand for the region which resulted in a total of 32 configurable parameters. Moreover, a web interface is publically available (https://covid.datalab.science/) where users can configure these parameters and run a simulation to see the results.</ns0:p><ns0:p>The layout of this article is as follows: it will start with a basic introduction to the SIR model, as well as the motivation behind assisting in public understanding. Next, a brief overview of the NorthCOVID-19 model will be provided, as well as an analysis of the current interface to see how difficult it may be to use.</ns0:p><ns0:p>Next, our proposed video-generation-solution will be discussed that utilizes artificial intelligence (AI) to enhance explainability, along with frames from an example of the output to show the ethical decisions made throughout its development. We refer to this as an ethical visualization which, in the context of this work, is a visual representation of data that takes ethics into consideration to ensure that any user interpreting it is not left confused or with unanswered questions. A more in-depth explanation is presented in the issues section where we show specifically what was taken into consideration for this visualization.</ns0:p><ns0:p>Lastly, the results of a survey presented to users outside of this project is shown, with the purpose being to gather their opinions of this proposed output and compare it to the currently available output on the web interface. The contributions of this work are as follows:</ns0:p><ns0:p>1. A method entirely motivated by ethics to visualize epidemic modelling 2. An application of AI for social good <ns0:ref type='bibr' target='#b17'>(Inkpen et al. (2019))</ns0:ref> 3. An analysis of a public interface for epidemic modelling from an ethical standpoint In the context of epidemic modelling, it is important to ensure that the data-particularly for COVID-19 <ns0:ref type='bibr' target='#b29'>(Roser et al. (2020)</ns0:ref>; Hoseinpour Dehkordi et al. ( <ns0:ref type='formula'>2020</ns0:ref>))-is represented using ethical visualization techniques because it is projecting the outcome of the entire population. 
Therefore, the output should be represented in a way that can not only be interpreted easily by any user but also visualized ethically.</ns0:p></ns0:div> <ns0:div><ns0:head>SIR Model</ns0:head><ns0:p>The origin of the SIR model dates back to 1927 as a contribution to the mathematical theory of epidemics <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>). Since then, there have been many works that have expanded on the core concepts of the SIR model and have proven to be a simple yet effective way to predict the course of various epidemics <ns0:ref type='bibr' target='#b28'>(Roddam (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Osthus et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b8'>Garibaldi et al. (2020)</ns0:ref>). For the NorthCOVID-19 model <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>), a time-series variation was used as the basis where the differential equations represent a population's change over a set time period <ns0:ref type='bibr' target='#b1'>(Angstmann et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Osthus et al. (2017)</ns0:ref>). In the core equations of the model, there are a total of four parameters; contact rate (the number of people an infectious individual can infect on average), infectivity rate (the percent chance a susceptible individual can become infected by an infectious individual), recovery rate (the percent of infectious individuals that recover from the infection after one time period), and the total population size <ns0:ref type='bibr' target='#b19'>(Kermack and McKendrick (1927)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Roddam (2001)</ns0:ref>). One time period, for this article's purpose, is a tenth of a day (i.e., an update is made to the system every 0.1 days) as it is defined as such in the NorthCOVID-19 model.</ns0:p></ns0:div> <ns0:div><ns0:head>Public Understanding</ns0:head><ns0:p>When discussing the results of these epidemic models with the general public, it may be difficult to convey the reasoning behind their outputs and why it is important to fully understand. To further the motivation behind this article's purpose, the ACM Code of Ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) were reviewed and the following four codes were found to be the most relevant to this work: Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>With the output of the proposed method, a great deal of care is taken to ensure that no important information is missed or downplayed to the end-users <ns0:ref type='bibr' target='#b13'>(Hoseinpour Dehkordi et al. (2020)</ns0:ref>). The goal is to help the general public understand the outputs from these types of models so that they can interpret it without any confusion.</ns0:p></ns0:div> <ns0:div><ns0:head>Prior Works</ns0:head><ns0:p>When searching for related works in visualizing epidemic models we have found that, to the best of our knowledge, next to none exists in recent literature. Looking further back, however, work from <ns0:ref type='bibr' target='#b12'>(H&#246;hle and Feldmann (2007)</ns0:ref>) in 2007 proposed a R (programming language) package that included visuals for 'stochastic epidemic models'. The authors did not incorporate ethics into the work as it simply displayed the raw data in a graphical form. In 2011, the authors of <ns0:ref type='bibr' target='#b23'>(Maciejewski et al. 
(2011)</ns0:ref>) presented a computer application called 'PanViz' that was targeted towards public health officials in the USA to simulate a pandemic. The visualization aspect of this program showed graphs of the statistics and, most notably, how the disease may spread throughout various districts in the country-which was further detailed in another work by two of the authors <ns0:ref type='bibr' target='#b0'>(Afzal et al. (2011)</ns0:ref>). In both cases, though, ethics was not taken into consideration and it was not targeted towards the general public.</ns0:p><ns0:p>When interpreting the outputs of an epidemic model, a prominent thought may be how trustworthy the results are. This is a valid concern as these models are trying to predict the future outcome of a given scenario. In recent literature, visualization of such 'uncertainty' has been explored as it affects the trust and understanding of the individual interpreting it <ns0:ref type='bibr' target='#b10'>(Greis et al. (2017)</ns0:ref>; Hullman (2019); Hofman et al. ( <ns0:ref type='formula'>2020</ns0:ref>)). In the work presented by <ns0:ref type='bibr' target='#b18'>(Kale et al. (2019)</ns0:ref>), the authors looked at the complexity that researchers face in managing such uncertainties in a way that does not discredit their work. They provide solutions such as showing all possible outcomes based on the uncertainty or simply disclosing the reason it exists in the first place <ns0:ref type='bibr' target='#b18'>(Kale et al. (2019)</ns0:ref>). In either case, it should not be too complex for the end-user to understand to prevent confusion or distrust <ns0:ref type='bibr' target='#b10'>(Greis et al. (2017)</ns0:ref>). In the context of our specific work, a number of uncertainties exist when projecting for COVID-19; especially when explaining it to lay users. Such issues will be raised along with the solutions that were incorporated in the proposed output.</ns0:p></ns0:div> <ns0:div><ns0:head>MATERIALS &amp; METHODS</ns0:head></ns0:div> <ns0:div><ns0:head>NorthCOVID-19</ns0:head><ns0:p>The NorthCOVID-19 model was created by the authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>) as a variation of the SIR model that considered both urban and rural populations when simulating an epidemic. The foundation of this model uses sets of differential equations to move individuals from one state to another. It also takes into account resources reaching maximum capacity by having 'overflow' equations to emulate the path individuals will take in that instance <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). For example, if an ICU has reached maximum capacity but an individual is set to enter it, they will, instead, go through these 'overflow' paths. Furthermore, when compared to the SIR model, an additional five states (Hospitalized, ICU, ICU Discharge, Ward, and Death) have been added within two populations running in parallel because, for the target location of the Northern Ontario region in Canada, rural communities commonly access urban health services for more specialized care <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). 
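To make the mechanics of the underlying SIR core concrete, the sketch below shows the kind of discrete time-stepping described in the SIR Model section, assuming simple updates every 0.1 days. The parameter values and function name are illustrative only; they do not reproduce the NorthCOVID-19 implementation, which adds the hospital states, overflow paths, and the second population described above.

```python
# Minimal SIR time-stepping sketch (illustrative only, not the NorthCOVID-19 code).
# S, I, R are absolute counts; dt = 0.1 days, matching the article's time period.

def run_sir(population, initial_infected, contact_rate, infectivity,
            illness_duration_days, total_days, dt=0.1):
    recovery_rate = 1.0 / illness_duration_days   # fraction of infectious recovering per day
    beta = contact_rate * infectivity             # effective transmission rate per day
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    history = [(0.0, s, i, r)]
    steps = int(total_days / dt)
    for step in range(1, steps + 1):
        new_infections = beta * s * i / population * dt
        new_recoveries = recovery_rate * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return history

if __name__ == "__main__":
    # Hypothetical parameters: 10,000 people, 5 initially infected,
    # 10 contacts/day, 2% infectivity, 14-day illness, 300 simulated days.
    trace = run_sir(10_000, 5, 10, 0.02, 14, 300)
    peak_day, _, peak_infected, _ = max(trace, key=lambda row: row[2])
    print(f"Infections peak at ~{peak_infected:.0f} around day {peak_day:.1f}")
```

Each additional NorthCOVID-19 state (Hospitalized, ICU, ICU Discharge, Ward, Death) would add analogous flow terms to this loop.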
However, for the purposes of this research, the proposed output to be discussed will only take one population into consideration-specifically, the urban population to improve the understanding of outputs for one location at a time without the added complexity of another population, thus affecting the results.</ns0:p><ns0:p>Another important aspect of this model is the intermediate states denoted as purple squares. These are sub-states where individuals need to be held for a number of time periods before travelling down the flow they are connected to. For example, in the 'ICU' state, one of the parameters used is the number of days an individual will need to stay there before being moved to the 'Ward'. If an individual does not fall under the flow leading to the 'Death' state, they will then be held in the intermediate state so that they are only affected by the flow leading to the 'Ward' state while still occupying a spot within the 'ICU' state. The main limitation of this model is that the parameters are entirely static in nature-which is different in reality with public health policies such as lockdowns, for example, which would change the population's average contact rate. Regardless, this model is important as it is able to project what would happen if no changes were made to curb the spread of the modelled disease. Further details on the model and parameters can be found in <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:2:0:NEW 20 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head></ns0:div> <ns0:div><ns0:head>Current Interface and Output</ns0:head><ns0:p>When the authors of NorthCOVID-19 <ns0:ref type='bibr'>(Savage et al. (</ns0:ref> <ns0:ref type='formula'>2020</ns0:ref>)) made the model, a web interface was designed alongside it to provide researchers in the field an accessible means of interacting with it. This interface needed to accommodate a total of 32 parameters across two populations so that individual aspects of the model could be modified. Once this has been configured to the user's specifications, it can be submitted where the simulation is run in a matter of seconds before being returned to the user shortly after.</ns0:p><ns0:p>Once the results have been computed, the first thing displayed to the user is a summary of what occurred in the simulation. Second, the user is shown the number of susceptible, infectious, and recovered individuals per capita of 10,000 over time. Next, the user is shown the number of deaths per capita of 10,000 over time. Lastly, a graph showing the ICU usage over time is presented to the user, as well as a plot of how many care units would be in place, ideally, to accommodate everyone. The user is given the option to download a spreadsheet of the raw results from the simulation that shows the fractional, point-in-time counts, for each state, every 0.1 timesteps.</ns0:p></ns0:div> <ns0:div><ns0:head>Issues</ns0:head><ns0:p>To get an initial idea of how the NorthCOVID-19 interface was being perceived, we asked the authors <ns0:ref type='bibr'>(Savage et al. (</ns0:ref> <ns0:ref type='formula'>2020</ns0:ref>)) for their feedback on possible issues that may surface for the general public. 
It was noted by them that it had already been demonstrated to other researchers, public health officers, policy makers, and physicians so they had a good idea of how users were assessing their system <ns0:ref type='bibr' target='#b25'>(Pearce et al. (2020)</ns0:ref>). Taking their points and observations into consideration, a few issues became apparent:</ns0:p><ns0:p>1. The initial interface of 32 parameters could be overwhelming to a user that does not understand how each parameter works 2. The raw results, being so granular and in decimal format, could be confusing to a user that does not understand the implementation of mathematics in the model 3. The graphs, although valuable to researchers in the field as the results have been generalized to per capita values, could be confusing to an end user that does not understand how these ratios work or what they mean to the actual population size These issues can be further addressed by the ACM Code of Ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) discussed before. For issue 1, codes 2.7 and 3.1 can be applied as we want to ensure that the users understand the reasoning behind these results. Although there are 32 parameters, the additions on top of the base SIR model come from the hospitalization states that have been added to NorthCOVID-19 <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). Since this aspect of the model would be specific to the regions it is being applied to, the general public would assume that these states have been configured to what their area expects. Therefore, in the proposed output, we only display parameters from the model that relate to the disease itself-contact rate, infectivity, and recovery time-as they would be most likely to vary based on location-specific statistics.</ns0:p><ns0:p>Additionally, however, the NorthCOVID-19 model also has parameters for how many individuals are initially infected as well as what the capacity limit of the intensive care unit (ICU) is <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>).</ns0:p><ns0:p>These parameters are also important for the general public to know as it shows how the disease begins in their region as well an important value showing the handling capacity of their hospitals. For issue 2, code 1.3 and 3.1 can be applied as we want ensure that the general public doesn't dismiss the results of these models by helping them understand results in a better format. Lastly, for issue 3, all of the codes discussed in Public Understanding section can be applied as the output of the model should be understandable for all individuals since it has a great deal of importance to the general public. This can be accomplished by using the actual values output from the model rather than per-capita or ones that have been normalized.</ns0:p></ns0:div> <ns0:div><ns0:head>Proposed Epidemic Model Output</ns0:head></ns0:div> <ns0:div><ns0:head>Video Output</ns0:head><ns0:p>The first aspect to the proposed solution is a video output that is generated based on the following values made available by the NorthCOVID-19 model: the parameters used, summarized results, and raw data. In practice, these values would be based on local statistics that are relevant to the audience being targeted.</ns0:p><ns0:p>To demonstrate the proposed method, arbitrary values will be used to give an idea of how the video output would look. In the very first frame, we display to the user the preset values for the important parameters identified. 
This frame stays up long enough for the audio output (discussed in the next section) to read all Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>values out to the viewer-an example can be seen in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. This is to ensure that the viewer understands the setup of this particular output before any of the results are presented. In the next frame, the proposed solution first presents the raw infectious numbers over time. The scale on this graph is an absolute value instead of being a per capita value so it is much more transparent and easier to understand <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>Kostelnick (2008)</ns0:ref>)-an example of this is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. However, it does not convey another crucial design choice: the graph is animated to show the numbers over time from the first day to the last day. During the animation, the audio output discussed in the next section plays to walk the viewer through what happened as a result, with the total number fading in for the conclusion of this frame. The reasoning for this visualization setup is to ease the viewer into the results instead of abruptly showing a graph that they need to interpret by themselves <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>).</ns0:p><ns0:p>The same design choice was made for the frames following this one as shown in Figures <ns0:ref type='figure' target='#fig_5'>3 and 4</ns0:ref>. Once the last graph has been shown to the viewer, a concluding frame is shown where all peaks that were previously shown are presented, on the same scale, in a bar chart. As with the other slides and to be discussed in the next section, an audio output accompanies this to reiterate each of the results to the viewer. By displaying the peaks in this format, one can view the magnitude of each peak relative to one another with little to no confusion <ns0:ref type='bibr' target='#b26'>(Pickering and Kara (2017)</ns0:ref>). This is particularly important for the infectious, recovered, and death peaks as this will typically be what the general public is most concerned with <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Begley et al. (2007)</ns0:ref>). To conclude this section, the code for the proposed output was created in Python and uses the libraries matplotlib <ns0:ref type='bibr' target='#b16'>(Hunter (2007)</ns0:ref>) along with moviepy 1 to generate the result discussed.</ns0:p></ns0:div> <ns0:div><ns0:head>Audio Output</ns0:head><ns0:p>To further enhance the video output, the choice was made to have a text-to-speech library interpret the results for the viewer. However, a simple, robotic voice was not desirable as it may sound too monotonous for the context and not feel trustworthy as a result. Therefore, this work uses an artificial intelligence assisted solution from <ns0:ref type='bibr' target='#b31'>(Tachibana et al. (2018)</ns0:ref>). To summarize, this novel approach uses deep convolution networks to adjust the actual audio spectrograms from the text-to-speech to make it sounds more realistic and less robotic. The authors made this codebase publically available along with a pre-trained model that is used in this work. 
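As a rough illustration of how these pieces fit together, the sketch below renders animated matplotlib frames and attaches a pre-generated narration track using moviepy 1.x (the two libraries named earlier). The file names, frame counts, and function names are assumptions made for illustration and are not the authors' implementation; the narration strings themselves are filled from the simulation outputs, as in the templates that follow.

```python
# Illustrative sketch: render animated frames with matplotlib, then attach a
# pre-generated narration track using moviepy 1.x (file names are hypothetical).
import os
import matplotlib
matplotlib.use("Agg")                             # render off-screen
import matplotlib.pyplot as plt
from moviepy.editor import ImageSequenceClip, AudioFileClip

def render_frames(days, infectious, out_dir="frames", n_frames=120):
    """Draw the infectious curve progressively so the graph animates over time."""
    os.makedirs(out_dir, exist_ok=True)
    step = max(1, len(days) // n_frames)
    paths = []
    for k in range(step, len(days) + 1, step):
        fig, ax = plt.subplots(figsize=(6.4, 3.6))
        ax.plot(days[:k], infectious[:k], color="tab:red")
        ax.set_xlim(days[0], days[-1])
        ax.set_ylim(0, max(infectious) * 1.1)
        ax.set_xlabel("Day")
        ax.set_ylabel("Infectious individuals (absolute count)")
        path = os.path.join(out_dir, f"frame_{k:06d}.png")
        fig.savefig(path, dpi=100)
        plt.close(fig)
        paths.append(path)
    return paths

def build_slide(frame_paths, narration_path, fps=60):
    """Turn the rendered frames into a clip and attach the narration audio."""
    clip = ImageSequenceClip(frame_paths, fps=fps)
    audio = AudioFileClip(narration_path)         # e.g., the output of the TTS model
    # In practice the frame count is chosen so the slide lasts at least as long
    # as the narration; here we simply attach the audio to the clip.
    return clip.set_audio(audio)

# Hypothetical usage for one slide, written at 60 fps:
# slide = build_slide(render_frames(days, infectious), "narration_slide2.wav")
# slide.write_videofile("slide2.mp4", fps=60)
```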
In the introductory slide (Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>), the following text was converted to audio using this library:</ns0:p><ns0:p>Visualization of NorthCOVID-19. The results in this video are based on the following parameters: an initial population of 'N', initial infected 'N_i', ICU capacity of 'N_c', contact rate of 'c', an infectivity of 'tau' percent, and an illness duration of 'nu' days</ns0:p><ns0:p>The parameters in this text and the ones to follow use the notation from NorthCOVID-19 <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). Since this output is a generated dynamically, the text is updated based on these values. In the next frame (Figure <ns0:ref type='figure'>2</ns0:ref>) the following text is spoken: This scenario started with 'N_i' infected individuals. Once the infection ended after 'INFECTION_DURATION' days, there was a total of 'TOTAL_INFECTED_COUNT' individuals that had become infected</ns0:p><ns0:p>Where 'INFECTION DURATION' and 'TOTAL INFECTED COUNT' come from their respective values but as the absolute values instead of per capita. In the next frame (Figure <ns0:ref type='figure'>3</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>Over the course of the infection, the ICU capacity peaked at 'ICU_PEAK_COUNT' individuals while the Ward peaked at 'WARD_PEAK_COUNT' individuals. The ICU limit in this scenario was set to 'N_c' individuals</ns0:p><ns0:p>Where 'ICU PEAK COUNT' and 'WARD PEAK COUNT' come from the maximum value in their respective sections of the model. In the next frame (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>Out of the 'TOTAL_INFECTED_COUNT' individuals that become infected, a total of 'TOTAL_RECOVERED_COUNT' individuals recovered from the infection while 'TOTAL_DEATH_COUNT' deaths were recorded</ns0:p><ns0:p>Where 'TOTAL RECOVERED COUNT' and 'TOTAL DEATH COUNT' come from their respective values but as the absolute values instead of per capita. Lastly, in the conclusion slide (Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>) the following text is spoken:</ns0:p><ns0:p>After the infection ended in 'INFECTION_DURATION' days. The total infected peaked at 'TOTAL_INFECTED_COUNT', the ICU peaked at 'ICU_PEAK_COUNT', the Ward peaked at 'WARD_PEAK_COUNT', the recovered peaked at 'TOTAL_RECOVERED_COUNT', and the deaths peaked at 'TOTAL_DEATH_COUNT' By having this audio accompany the video output, the goal is to assist the user in their understanding of the results in a simple, to the point, format. The result is an informative 2 minute video (with audio) that, from start to finish, takes around 10 minutes to generate. </ns0:p></ns0:div> <ns0:div><ns0:head>RESULTS</ns0:head></ns0:div> <ns0:div><ns0:head>User Study</ns0:head><ns0:p>To evaluate the web interface of the NorthCOVID-19 model and the proposed video output, a 15 question survey was created. The link to this form was posted on the authors' Linkedin (total of 38 individuals responded) and Slack channels (total of 2 individuals responded) with the only restriction being that it could be filled out only once. When an individual was first presented with the survey, they were shown a 4-minute video that walked them through how to use the web interface that was provided by the authors of <ns0:ref type='bibr' target='#b30'>(Savage et al. (2020)</ns0:ref>). 
Then, they were asked to use the platform on their own and answer 10 questions that were adapted from the System Usability Scale (SUS) <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>) as per Figures <ns0:ref type='figure'>6 (a-j</ns0:ref>). An answer of 1 meant they 'Strongly Disagree' with the statement and 5 meant 'Strongly Agree'. Lastly, they were shown a sample output of the proposed video 2 in this article, and asked 5 questions regarding their thoughts about it as per Figures <ns0:ref type='figure'>7 (a-e</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Survey</ns0:head><ns0:p>After completing the survey, a total of 40 individuals responded. Since the first ten questions (Figures 6a through 6j) were directly adapted from SUS <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>), a score was derived to see how simple the interface is to use. Once these values were calculated for each individual response, the average was determined to get the final result where 0 means that the interface is not usable at all and 100 means that it is extremely easy to use. From studies done on this survey, a score of 68 is noted as being average for this measure <ns0:ref type='bibr' target='#b2'>(Bangor et al. (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Lewis (2018)</ns0:ref>). This survey conducted on NorthCOVID-19 scored the interface at 70.94 which is slightly above the average. This result, along with the issues raised show the need to create a different way of interfacing with this model.</ns0:p><ns0:p>In analyzing these results further <ns0:ref type='bibr'>(Figures 7(a)</ns0:ref> through 7(e)), it can be seen that a majority of the responses leaned towards a confident understanding of the video output and improved comprehension of the results. An interesting observation, however, can be seen in Figure <ns0:ref type='figure'>7b</ns0:ref> when the individuals were asked if the website output was easier to understand. Most answers fell under a neutral response (i.e., score of 3) but enough fell under scores 4 and 5 (i.e., 'Agree' and 'Strongly Agree') that it leaned more towards agreeing than disagreeing with the statement. This response could be attributed to the fact that the website is entirely interactive with its graphs and transparency in regards to the parameter configuration.</ns0:p><ns0:p>Regardless, from Figures <ns0:ref type='figure'>7a, 7c</ns0:ref>, 7d, and 7e, a strong leaning towards the video output being easy to understand, improving comprehension, and being helpful to the individuals who took the survey can be Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>observed in these results. This shows achievement in the goal of meeting ACM codes 1.3, 2.7, 3.1, and 3.2 as discussed before.</ns0:p></ns0:div> <ns0:div><ns0:head>LIMITATIONS</ns0:head><ns0:p>There are three limitations that we encountered with this study. First, although the responses to the survey seem promising, we do acknowledge that the collection technique used is a limitation in itself.</ns0:p><ns0:p>We had decided against recording demographic and educational information of the respondents to ensure anonymity, but can see how that could've provided further insight to these ratings. Second, the number of individuals that responded was relatively low, and is therefore another limitation of this work. 
Lastly, we have not considered optimizing the video generation process which would be ideal for a real-time implementation as it takes up to 10 minutes for the script to generate roughly 7,200 frames. We welcome more feedback on this visualization and hope to continue improving its output so that all can understand.</ns0:p></ns0:div> <ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this article, the outputs of an epidemic model called 'NorthCOVID-19' (Savage et al. ( <ns0:ref type='formula'>2020</ns0:ref>)) along with its web interface was examined to see what issues it may have when being interpreted by the general public. The interface was created with researchers in the field in mind but the issues affects even the general population regardless of their background. After identifying three main issue that could cause anxiety and confusion, this article focused on creating a solution that would thoroughly explain the results in an ethical, easy to understand format. The result was a video that eases in to each of the results while having an artificial intelligence assisted voice further explaining the proposed output the viewer is seeing.</ns0:p><ns0:p>This feature was designed with the ACM code of ethics <ns0:ref type='bibr' target='#b9'>(Gotterbarn et al. (2017)</ns0:ref>) in mind as we wanted to ensure that the output presented was easy for the general public to understand <ns0:ref type='bibr' target='#b20'>(Kienzler (1997)</ns0:ref>; Pickering and Kara ( <ns0:ref type='formula'>2017</ns0:ref>)).</ns0:p><ns0:p>To evaluate the NorthCOVID-19 interface, questions were adapted from the System Usability Scale (SUS) <ns0:ref type='bibr' target='#b6'>(Brooke (1996)</ns0:ref>) to get a score out of 100 that describes how easy it is to use. The resulting score was calculated to be 70.94 which is slightly above average <ns0:ref type='bibr' target='#b2'>(Bangor et al. (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Lewis (2018)</ns0:ref>).</ns0:p><ns0:p>When users were asked about the video output in comparison to the website's output, the results showed most answers being neutral with a leaning towards the website's output being easier to understand. This could be attributed to the interactive graphs provided and transparency of the parameter configuration.</ns0:p><ns0:p>However, when discussing the video output itself, the results strongly leaned towards it helping improve the comprehension of the results and being easy to understand. This shows that the proposed output does add value to NorthCOVID-19 and that the design choices involved (having an animated video output with audio) achieved their intended purpose of providing the user with the information needed to understand the results without an interface. We have opted to make the simulation model, video generation script,</ns0:p><ns0:p>and survey results open-sourced at the following repository: https://github.com/andrfish/NorthCOVID19</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>public awareness and understanding of computing, related technologies, and their consequences &#8226; 3.1: Ensure that the public good is the central concern during all professional computing work &#8226; 3.2: Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group 2/13 PeerJ Comput. Sci. reviewing PDF | (CS-2021:08:64699:2:0:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. 
Figure 1. The first frame in the proposed solution highlighting the initial parameters used in this example simulation
Figure 2. The second frame in the proposed solution
(Fragment spilled into the figure list: "Additionally, the ordering of these frames took ethical thinking into mind as we wanted to properly lead up the morbid results (i.e., deaths) by not displaying it first to the users. To do so, the video first ...")
Footnote 1: https://zulko.github.io/moviepy/
Figure 4. The fourth frame in the proposed solution
Figure 5. The final frame in the proposed solution
Figure 6. The results of each question in the first part of the survey
"
"DATE: April 20, 2022 TO: Academic Editor, PeerJ Computer Science FROM: Andrew Fisher, Department of Computer Science Lakehead University, Thunder Bay, ON, Canada P7B 5E1 RE: Article resubmission. Dear Editor, We wish to submit a revised copy of our article entitled ' An ethical visualization of the NorthCOVID-19 model' for consideration by PeerJ Computer Science. We can attest that this is an original article that has not been published elsewhere, nor is it currently under submission for publication elsewhere. We have carefully reviewed the concerns and believe they were addressed with an update of our open-source repository as well as minor changes to the manuscript (highlighted in blue). Please see our point-by-point response in the pages following this cover letter. We have no conflicts to disclose. Please address all correspondence concerning this article to me at afisher3@lakeheadu.ca. Thank you for taking the time to consider this article. Sincerely, Andrew Fisher Research Assistant, Department of Computer Science Lakehead University, Thunder Bay, ON Tel:204-761-9726 Web:http://www.datalab.science Reviewer 1 (Monika Heiner) Basic reporting () re individual contributions of all authors: I assume that this will be visible in the final paper; so far this information is not included. Yes, the author contribution breakdown was provided in the PeerJ submission itself. () https://github.com/andrfish/NorthCOVID19, Video Generation Script for the COVID Crushers, introductory comment: “This script takes input from NorthCOVID-19 (see model here) and produces a video animation (see sample_format.pdf) of the results. The file 'sample_output.csv' is an example of what the output will look like, 'sample_parameters.json' is an example saved file of the parameters used, and 'sample_results.json' is an example of the results output to the website from the simulation.” -> none of the links work - ‘NorthCOVID-19’ , ‘model’; for some of the pointers there are no links given at all, eg. sample_format.pdf, sample_results.json, sample_parameters.json. Thank you for pointing this out, we have corrected the links for “NorthCOVID-19” and “model”. For the files, we have now included them in our repository (https://github.com/andrfish/NorthCOVID19). () “The result was a video” thanks for adding the video generation feature to the original website https://covid.datalab.science; I suggest to move the link to this website, hidden so far in a footnote, to a more prominent position. We have moved this link in-line towards the top of page 2. () “Please do not leave this page, the generation may take up to 10 minutes and the download will start automatically.” -> having to wait up to 10’ (it actually took almost as long) is a rather long delay for getting something simplified compared to what I immediately get and comprises actually more information. For the video generation process, each slide is first rendered and saved based on their respective statistics using the OpenCV and Matplotlib libraries. Then, each slide is combined and rendered again to get the final result that the user receives. We opted for it to play back at 60 frames per second to give smooth animations, for a total of roughly 7,200 frames to render in up to 10 minutes. A real-time implementation wasn’t considered for this study so we have added this optimization constraint to our limitations towards the top of page 11. () the last slide in the generated “video” (is more a sequence of slides) needs some normalisation to make the results visible. 
We appreciate this feedback but felt that normalizing it would add another complexity for the general public to understand rather than displaying the actual values. This is now emphasized towards the end of the “Issues” subsection on page 4. in summary, I’m still amazed how many words one can spend for explaining and justifying the design of a simplified user interface. I personally would prefer the direct output as offered by the original website https://covid.datalab.science; not to say that I somehow feel offended that I - as a useris considered not to be able to understand what the original diagrams are telling me. But I’m happy to accept that I might not belong to the actual target group of the tool. Thank you for your understanding, we certainly did not intend for the output to be directed towards researchers. Rather, it is for the general public which we hope is clear to readers from our “Public Understanding” and “Issues” sections. "
Here is a paper. Please give your review comments after reading it.
426
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>To solve the problems of poor stability and low modularity (Q) of community division results caused by the randomness of node selection and label update in the traditional label propagation algorithm, an improved two-stage label propagation algorithm based on LeaderRank was proposed in this study. In the first stage, the order of node updating is determined by the participation coefficient (PC). Then, a new similarity measure is defined to improve the label selection mechanism so as to solve the problem of label oscillation caused by multiple labels of the node with the most similarity to the node.</ns0:p><ns0:p>Moreover, the influence of the nodes is comprehensively used to find the initial community structure. In the second stage, the rough communities obtained in the first stage are regarded as nodes, and their merging sequence is determined by the PC. Next, the nonweak community and the community with the largest number of connected edges are combined. Finally, the community structure is further optimized to improve the modularity so as to obtain the final partition result. Experiments were performed on six real networks and fifteen artificial datasets with different scales, complexities, and densities. The modularity and normalized mutual information (NMI) were considered as evaluation indexes for comparing the improved algorithm with dozens of relevant classical algorithms. The results showed that the proposed algorithm yields superior performance, and the results of community partitioning obtained using the improved algorithm were stable and more accurate than those obtained using other algorithms. In those classical real networks, the modularity of community division results of the proposed algorithm was higher than that of other algorithms. And the NMI values were above 0.9943 on eight artificial datasets containing 1000 nodes, and 0.986 and 0.953 on two large-scale complex networks containing 5000 nodes. In addition, the proposed algorithm always performs well in community detection in five large-scale artificial data sets with 6000 to 10000 nodes.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With the rapid development of the Internet and big data technology, research on complex networks has gradually penetrated into many fields, such as information science and biological science, and has thus become a very challenging research topic <ns0:ref type='bibr' target='#b5'>(5)</ns0:ref>. In social networks, such as scientific research cooperation and transportation networks, objects are usually represented as nodes, and relationships between objects are represented as edges <ns0:ref type='bibr' target='#b45'>(40)</ns0:ref>. Real-world networks have one important feature, community structure, that is, a network is usually composed of several communities, with relatively close node connections within the community and relatively sparse node connections between the communities. The discovery of community structure is an important basis for exploring the formation principle 0 and function of complex network structures <ns0:ref type='bibr' target='#b25'>(22)</ns0:ref> and plays a vital role in many fields. 
For instance, in the field of biology <ns0:ref type='bibr' target='#b22'>(19)</ns0:ref>, community detection is of great significance for understanding the specific organizational structure, functional analysis, and behavior prediction of biological systems. In the field of ecommerce, consumers with similar purchasing habits can be mined through community detection, thus creating greater business value through the establishment of efficient recommendation systems <ns0:ref type='bibr' target='#b12'>(11)</ns0:ref>. In the field of infectious diseases, community detection can be used to analyze and identify the key population of infectious diseases so as to effectively control the spread of diseases <ns0:ref type='bibr' target='#b3'>(3)</ns0:ref>. Therefore, the quick and effective discovery of the community structure of networks has become the primary task and an important branch of social network research.</ns0:p><ns0:p>With the extensive research on social network analysis, many community detection algorithms have emerged, but most of them suffer from limitations such as high complexity, low accuracy of community division, and unstable results. Label propagation algorithm (LPA) has attracted attention due to its advantages of low time complexity, no prior conditions, and suitability for community detection in large-scale networks <ns0:ref type='bibr' target='#b11'>(10)</ns0:ref>. However, the traditional LPA has the following disadvantages: 1) LPA adopts a random strategy in the updating sequence of nodes, resulting in randomness in the community partition results; 2) LPA treats every node as equally important and does not distinguish the importance of each node; 3) LPA assigns a unique label to each node and fails to identify overlapping communities <ns0:ref type='bibr' target='#b20'>(17)</ns0:ref>.</ns0:p><ns0:p>In view of the abovementioned shortcomings, numerous improved algorithms have been proposed. <ns0:ref type='bibr' target='#b4'>(4)</ns0:ref> proposed an improved LPA algorithm based on label propagation ability, developed a calculation method based on a k-shell decomposition algorithm for determining the importance of individual nodes <ns0:ref type='bibr' target='#b24'>(21)</ns0:ref>, and formulated a label update strategy through the importance ranking of nodes and label propagation ability. <ns0:ref type='bibr' target='#b33'>(30)</ns0:ref> proposed a community detection algorithm based on node influence and similarity (NIS-LPA), wherein the selected seed nodes are used to expand into seed regions, and then the similarity between nodes is calculated based on the network topology and real attributes of nodes, thus improving the stability and accuracy of the algorithm. <ns0:ref type='bibr' target='#b38'>(34)</ns0:ref> proposed a community detection algorithm integrating LeaderRank and label propagation (LLPA) wherein the three aspects of node label initialization, node update sequence, and label propagation selection process are improved. The LeaderRank algorithm is adopted to select key nodes, and labels are assigned to them by calculating the influence of the nodes. Thereafter, the nodes are updated according to the influence of the nodes, and the propagation ability between nodes is considered in the process of label propagation. 
<ns0:ref type='bibr' target='#b7'>(7)</ns0:ref> proposed a community detection algorithm based on boundary nodes and label propagation (LBN), which determines core nodes and boundary nodes, respectively, and then determines the community to which they belonged according to the weight of the boundary nodes, thus improving the stability of the algorithm. However, the values of Q <ns0:ref type='bibr' target='#b37'>(33)</ns0:ref> and NMI <ns0:ref type='bibr' target='#b39'>(35)</ns0:ref> are still unsatisfactory. <ns0:ref type='bibr' target='#b42'>(38)</ns0:ref> proposed label importance-based label propagation algorithm (LILPA) for community detection for application in core drug detection. In LILPA, when labels are transmitted to other nodes, the label updating process based on node importance, node attractiveness, and label importance is used to improve the label instability and the accuracy and efficiency of community division. For overlapping communities, ( <ns0:ref type='formula'>14</ns0:ref>) proposed an efficient community detection algorithm based on label propagation with community kernel (CK-LPA), which assigns a corresponding weight to each node according to the importance of the node in the network and updates node labels according to the weight order. They also discussed the composition of weights, label updating, propagation strategies, and convergence conditions. ( <ns0:ref type='formula'>18</ns0:ref>) improved the label update order and propagation threshold. They proposed an overlapping community detection algorithm based on label propagation by using the PageRank and node clustering coefficients algorithms (COPRAPC), wherein nodes with low influence are selected for label propagation, and the node clustering coefficient is used to control the maximum number of communities that nodes belong to. <ns0:ref type='bibr' target='#b28'>(25)</ns0:ref> proposed an overlapping community detection algorithm integrating label preprocessing and node influence (FLPNI), thereby greatly reducing the randomness of label propagation. <ns0:ref type='bibr' target='#b32'>(29)</ns0:ref> proposed an improved label propagation algorithm for community detection based on two-level neighborhood similarity (TNS-LPA); defined a new two-level neighborhood similarity measurement, which selected the initial community center by considering the minimum distance and local centrality index; and optimized the algorithm by adopting the asynchronous updating label strategy according to the importance of nodes, thereby further improving the accuracy of community division. <ns0:ref type='bibr' target='#b13'>(12)</ns0:ref> proposed an improved label propagation algorithm based on modularity and node importance (LPA-MNI) wherein the initial community is identified based on the value of modularity, and then the remaining nodes that have not been assigned to the initial community are clustered through label propagation. Node importance is used to improve the label update sequence, and the label selection mechanism is used when the majority of nodes contain multiple labels. Experimental results showed that LPA-MNI is more robust than the traditional LPA algorithm. <ns0:ref type='bibr' target='#b8'>(8)</ns0:ref> proposed the node importance-based label propagation algorithm (NI-LPA) to identify overlapping communities to address the problem of instability in the LPA algorithm caused by random updating. NI-LPA uses information derived from node attributes to simulate special propagation and filtering processes. 
Experiments on artificial and real networks of different sizes, complexities, and densities revealed the high efficiency of NI-LPAT for overlapping community detection. <ns0:ref type='bibr' target='#b27'>(24)</ns0:ref> proposed another label propagation algorithm based on node importance (NI-LPA) wherein the importance of nodes is defined by combining the signal propagation of nodes, the value of K-shell nodes themselves, and the Jaccard distance between adjacent nodes, which better avoids the instability caused by random selection of nodes in the traditional LPA algorithm. <ns0:ref type='bibr' target='#b17'>(15)</ns0:ref> encoded both semantic and geometric information of the environment in a weighted colored graph, in which the edges were partitioned into a finite set of ordered semantic classes (e.g., colors), and then incrementally searched for the shortest path among the set of paths with minimal inclusion of inferior classes, using information from the previous search by ideas similar to the LPA. <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref>In the first and second iterations (t &lt;= 2) of the propagation, if the number of maximum label frequencies in neighbor nodes was equal, the Adamic/Adar index was used to select the appropriate label. For the other iterations (t &gt; 2), a new criterion, known as label strength, was applied to select the label with the highest strength of a node. <ns0:ref type='bibr' target='#b39'>(35)</ns0:ref>proposed a new node similarity metric, and the label was updated according to the similarity between the current node and neighbor nodes.</ns0:p><ns0:p>The abovementioned algorithms focus on the calculation of the node importance and seed node selection and consider the randomness of node update order but ignore the importance of label update strategy, resulting in the unstable and less accurate community division. Therefore, this study focused on the updating strategy of nodes and labels to achieve efficient and accurate community division. The two-stage community detection algorithm based on the label propagation algorithm (39) (LPA-TS) has the following problems. 1) In the first stage, the algorithm determines the node update sequence from the descending participation coefficient (PC) and then updates the node label to that with the largest similarity so as to obtain the initial partition result. However, only the number and degree of common neighbors are considered in the definition of similarity. There may be multiple nodes with maximum similarity with the same number and degree of common neighbors. If one node is randomly selected for label update, the result of community division will be unstable. 2) In the second stage, the algorithm first regards the initial community as nodes and then determines the order of community mergers from the PC. Then, the algorithm performs merging according to the conditions of a weak community and finally obtains the community structure. However, in some classical networks, the community division results are not ideal, and the modularity is low because LPA-TS has some shortcomings in the updating strategy of nodes and labels and the definition of initial community merge conditions. To solve these problems, an improved two-stage label propagation algorithm (LPA-ITSLR) was proposed in this paper. 
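For reference, all of the methods discussed above build on the classic label propagation update: every node starts with its own label and repeatedly adopts the label that is most frequent among its neighbors, with the visiting order and ties resolved at random, which is precisely the source of the instability addressed here. The following is a minimal sketch of this baseline, assuming a plain dictionary-based adjacency list; the function name and data layout are illustrative and not taken from any of the cited implementations.

```python
import random
from collections import Counter

def traditional_lpa(adjacency, max_iter=100, seed=None):
    """Classic LPA: each node adopts the most common label among its neighbors.
    adjacency: dict mapping node -> iterable of neighbor nodes.
    The random visiting order and random tie-breaking are what make the
    resulting partition unstable from run to run."""
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}            # every node starts with its own label
    nodes = list(adjacency)
    for _ in range(max_iter):
        rng.shuffle(nodes)                        # random update order (a known weakness)
        changed = False
        for v in nodes:
            neighbors = adjacency[v]
            if not neighbors:
                continue
            counts = Counter(labels[u] for u in neighbors)
            best = max(counts.values())
            candidates = [lab for lab, c in counts.items() if c == best]
            new_label = rng.choice(candidates)    # random tie-breaking (label oscillation risk)
            if new_label != labels[v]:
                labels[v] = new_label
                changed = True
        if not changed:                           # stop when no node updates its label
            break
    return labels
```

The improvements surveyed above replace the random visiting order with importance-based ordering (e.g., degree, k-shell, PageRank, LeaderRank, or the participation coefficient) and replace the random tie-breaking with similarity- or influence-based label selection.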
The contributions and innovations of this paper are as follows.

(1) To solve the problem of unstable and inaccurate community division results yielded by the LPA-TS algorithm, a new similarity measurement index between nodes was proposed to optimize the node label updating strategy. In the initial stage of community division, the number and degree of common neighbors of nodes and the similarity of structural information between common neighbors are considered comprehensively. In view of the situation that multiple nodes may have the maximum similarity value, the importance of nodes is sorted by calculating the LeaderRank value so as to avoid the randomness of the node label update order and ensure the stability of the initial community division result.

(2) To address the problem of low modularity in LPA-TS, the optimal parameter value was determined by improving the definition of weak community in the original algorithm, and the evaluation function based on complementary entropy was changed to an objective function based on modularity optimization in the community merging stage so as to further improve the quality of community division and the accuracy of the final division result.

(3) Experiments were conducted on six real networks and 15 artificial datasets with different scales and complexities. The Q and NMI values were used as evaluation indexes to compare the proposed algorithm with several recent label propagation algorithms. Experimental results showed that the improved algorithm has higher quality and stability in community division.

Theoretical Basis

Community division
A complex network is generally represented by G(V, E), where $V = \{v_1, v_2, \ldots, v_n\}$ is the node set and $E = \{e_1, e_2, \ldots, e_m\}$ is the edge set. $\Omega = \{\Omega_1, \Omega_2, \ldots, \Omega_k\}$ is one of the divisions of G if and only if: 1) $\forall \Omega_r \in \Omega$, $\Omega_r \neq \emptyset$; 2) $\bigcup_{\Omega_r \in \Omega} \Omega_r = V$; 3) $\forall \Omega_p, \Omega_q \in \Omega$ with $p \neq q$, $\Omega_p \cap \Omega_q = \emptyset$.

Participation coefficient
The participation coefficient $PC_i$ of node $v_i$ (39) describes how the edges of the node are distributed over the different communities in the network; it is defined as Eq. (1), where k is the number of communities, $d_i$ is the degree of node $v_i$, and $d_i(\Omega_r) = |\{v_j \mid (v_i, v_j) \in E \wedge v_j \in \Omega_r\}|$ is the number of neighbors of $v_i$ that belong to community $\Omega_r$. A high PC value indicates that the node is connected with more communities and that the node has a low degree of belonging to each community. In contrast, a low PC value indicates that the node is connected to fewer communities and that the node has a high degree of belonging to each community.
When community detection is performed, nodes with low PC and obvious community affiliation are selected to start traversal, which is more conducive for finding the correct community structure.</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_1'>&#119875;&#119862; &#119894; = 1 -&#8721; &#119896; &#119903; = 1 ( &#119889; &#119894; (&#120570; &#119903; ) &#119889; &#119894; ) 2</ns0:formula></ns0:div> <ns0:div><ns0:head>Strong and weak communities</ns0:head><ns0:p>Community structures can be strong or weak <ns0:ref type='bibr' target='#b43'>(39)</ns0:ref>. A strong community means that the number of links between any node in the community and the inside of the community is greater than the number of links between the node and the outside of the community. It can be defined as Eq. ( <ns0:ref type='formula'>2</ns0:ref>). A weak community means that the sum of the edges of all nodes in the community and the nodes inside the community is greater than the sum of the edges of all nodes outside the community. It can be defined as Eq. ( <ns0:ref type='formula'>3</ns0:ref>). In general, a community should satisfy at least the character of weak community.</ns0:p><ns0:p>( and then uses Eq. ( <ns0:ref type='formula'>5</ns0:ref>) to divide the background nodes evenly among all the nodes.</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_2'>) &#119871;&#119877; &#119894; ( &#119905; ) = &#8721; &#119873; &#119895; = 0 &#119886; &#119894;&#119895; &#119889; &#119895; &#119871;&#119877; &#119895; (&#119905; -1)<ns0:label>4</ns0:label></ns0:formula><ns0:p>Where t is the number of iterations and N is the number of nodes in the network. <ns0:ref type='bibr'>If</ns0:ref> </ns0:p></ns0:div> <ns0:div><ns0:head>Evaluation Indicators</ns0:head><ns0:p>Modularity. Modularity (Q), proposed by Newman et al., is commonly used for measuring the strength of community structures. The closer its value is to 1, the higher the strength of community structures, that is, the better the quality of community division <ns0:ref type='bibr' target='#b37'>(33)</ns0:ref>. Q can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>(6) &#119876; = 1 2&#119898; &#8721; &#119894;,&#119895; (&#119886; &#119894;&#119895; - &#119889; &#119894; &#119889; &#119895; 2&#119898; )&#120575; &#119894;,&#119895;</ns0:formula><ns0:p>Where m is the total number of edges in the network, is an element in the adjacency matrix A &#119886; &#119894;&#119895; of network G, and is the degree of node . When and belong to the same community, &#119889; &#119894; &#119907; &#119894; &#119907; &#119894; &#119907; &#119895; &#120575; &#119894;,&#119895; 1; otherwise 0. = &#120575; &#119894;,&#119895; = Normalized mutual information. For networks with a known community structure, NMI is generally used to evaluate the community division effect. The higher the NMI value, the more similar the result is to the real community structure. A value of 1 indicates that the partition result is completely consistent with the actual community structure <ns0:ref type='bibr' target='#b39'>(35)</ns0:ref>. 
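Before the NMI definition resumes below, the LeaderRank iteration of Eqs. (4) and (5) can be made concrete. The following sketch is a direct reading of those two equations under stated assumptions (an unweighted, undirected graph stored as an adjacency dictionary, a ground node added internally, and an arbitrarily chosen convergence tolerance); it is not the reference implementation of the original LeaderRank authors.

def leaderrank(adj, tol=1e-9, max_iter=1000):
    """LeaderRank scores following Eqs. (4)-(5), as read from this section.

    adj: dict node -> iterable of neighbors (undirected, unweighted).
    A ground node g is linked to every node; ordinary nodes start with
    score 1 and g starts with 0.  Scores flow along edges weighted by
    1/degree until a fixed point, then g's score is shared equally.
    """
    g = object()                                # synthetic ground node
    nodes = list(adj)
    nbrs = {v: set(adj[v]) | {g} for v in nodes}
    nbrs[g] = set(nodes)
    lr = {v: 1.0 for v in nodes}
    lr[g] = 0.0
    for _ in range(max_iter):
        new = {v: sum(lr[u] / len(nbrs[u]) for u in nbrs[v]) for v in nbrs}
        diff = max(abs(new[v] - lr[v]) for v in nbrs)
        lr = new
        if diff < tol:                          # steady state reached
            break
    share = lr[g] / len(nodes)                  # Eq. (5): split the ground score
    return {v: lr[v] + share for v in nodes}

scores = leaderrank({0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
                     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]})
print(sorted(scores.items(), key=lambda kv: -kv[1]))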
Assuming that &#119860; = {&#119860; Manuscript to be reviewed of communities under the two divisions, NMI can be defined as follows:</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_4'>(7) NMI = 2&#8721; &#119896; &#119894; = 1 &#8721; &#119896;' &#119895; = 1 &#119879; &#119894;&#119895; &#119897;&#119900;&#119892; &#119899;&#119879; &#119894;&#119895; &#119879; &#119894; &#119879; &#119895; -&#8721; &#119896; &#119894; = 1 &#119879; &#119894; &#119897;&#119900;&#119892; &#119879; &#119894; &#119899; -&#8721; &#119896;' &#119895; = 1 &#119879; &#119895; &#119897;&#119900;&#119892; &#119879; &#119895;</ns0:formula></ns0:div> <ns0:div><ns0:head>&#119899;</ns0:head><ns0:p>Where n is the total number of nodes in the network, is the confusion matrix, is the &#119879; &#119879; &#119894;&#119895; number of common nodes included in the real divided communities and , and is the sum &#119860; &#119894; &#119861; &#119895; &#119879; &#119894; of the elements in the i-th row of the confusion matrix.</ns0:p></ns0:div> <ns0:div><ns0:head>The Proposed Algorithm</ns0:head></ns0:div> <ns0:div><ns0:head>Question-posing</ns0:head><ns0:p>In the first stage of the LPA-TS algorithm, when there are two or more nodes with the largest similarity with the current node, the algorithm randomly selects one node for label update; this may lead to unstable partition results. LPA-TS algorithm expresses the similarity of nodes as CN, which can be expressed as Eq. ( <ns0:ref type='formula' target='#formula_5'>8</ns0:ref>), where represents the neighbor nodes of node and &#119873; &#119894; &#119907; &#119894; &#119889; &#119894; represents the degree of node . community. As shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, . </ns0:p><ns0:formula xml:id='formula_5'>&#119907; &#119894;<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>&#119873; 0 = {&#119907; 2 ,&#119907;</ns0:formula></ns0:div> <ns0:div><ns0:head>Figure 2 Results of 100 experiments of the two algorithms on two networks</ns0:head><ns0:p>The LPA-TS algorithm only yielded an initial community division result in the first stage, and there were still many small communities. In the Karate network shown in Figure <ns0:ref type='figure'>3</ns0:ref>, some nodes with higher degrees have greater similarities with many nodes. For example, node can easily &#119907; 34 pass its label to neighboring nodes, while those at the edge of the network have low similarity to central nodes higher degrees. For example, nodes and can easily form small-scale &#119907; <ns0:ref type='bibr' target='#b28'>25</ns0:ref> &#119907; 26 communities, such as triangle nodes and diamond nodes in Figure <ns0:ref type='figure'>3</ns0:ref>. To merge these small communities, LPA-TS uses the definition of weak communities and the evaluation function based on complementary entropy in the second stage. However, in the definition of weak communities &#945; is set as 2, which leads to unstable division results in some networks, that is, the final community division results are not ideal, and the degree of modularity is low. 
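To see the role of the parameter α in the weak-community condition of Eq. (3) more concretely before the improved strategy is introduced, a small illustrative check follows. It assumes that Eq. (3) requires α times the sum of internal degrees of a community to exceed the sum of its external degrees; the function name, data structures, and toy graph are illustrative, and α is exposed as an argument so that the sensitivity analysis reported later (values from 0.5 to 2) can be mimicked.

def is_weak_community(adj, community, alpha=2.0):
    """Weak-community test, assuming Eq. (3) reads
    alpha * (sum of internal degrees) > (sum of external degrees).

    adj: dict node -> iterable of neighbors; community: collection of nodes.
    """
    community = set(community)
    internal = sum(1 for v in community for u in adj[v] if u in community)
    external = sum(1 for v in community for u in adj[v] if u not in community)
    return alpha * internal > external

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(is_weak_community(adj, {0, 1, 2}, alpha=2.0))   # True: one triangle, one outgoing edge
print(is_weak_community(adj, {2, 3}, alpha=0.5))      # False: mostly external edges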
Therefore, in this study, the parameters and the objective function in the second-stage community merger strategy were improved and a new community division method, LPA-ITSLR, was developed to achieve stable and more accurate community division results.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 3 LPA-TS algorithm partitioning results for Karate network in the first stage</ns0:head></ns0:div> <ns0:div><ns0:head>Node similarity definition</ns0:head><ns0:p>To solve the abovementioned problems, a new similarity index was proposed, which comprehensively considers the common neighbors, degrees of nodes, and the structural relationship between common neighbors. The more common neighbors two nodes have, the more similar they are. The higher the degree of a node, the more the number of nodes it shares its edges with, that is, the similarity of two nodes is inversely proportional to the degree of the node itself. The number of connected edges between neighboring nodes is combined to avoid multiple nodes with the same similarity value as the original node. The improved similarity index can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_7'>(9) &#119878; &#119868;&#119862;&#119873; (&#119894;, &#119895;) = |&#119873; &#119894; &#8745; &#119873; &#119895; | + 1 |&#119873; &#119894; &#8746; &#119873; &#119895; | + 1 &#119889; &#119894; &#119889; &#119895; + &#119862;(&#119907; &#119894; , &#119907; &#119895; ) |&#119873; &#119894; &#8745; &#119873; &#119895; |</ns0:formula><ns0:p>Where represents the number of edges between the common neighbor nodes of nodes &#119862;(&#119907; &#119894; , &#119907; &#119895; ) and . The numerator of the first term of the equation is increased by 1 so that the improved &#119907; &#119894; &#119907; &#119895; similarity index is not 0 when there is no public neighbor. According to the definition of similarity in Eq. ( <ns0:ref type='formula'>9</ns0:ref>), the similarity between nodes and in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> <ns0:ref type='table' target='#tab_5'>1</ns0:ref>. First, each node is assigned a unique label, and the similarity between nodes is calculated according to Eq. ( <ns0:ref type='formula'>9</ns0:ref>). Then, the PC value of each node is calculated and sorted in ascending order. Next, the labels are updated according to the sorted nodes. In the label updating strategy, the similarity between the current node and other nodes is compared. If the node with the largest similarity is not unique, the LR is further compared; if not, one is randomly selected to obtain the rough initial community structure in the first stage. <ns0:ref type='table' target='#tab_6'>2</ns0:ref>. In view of the problem that small communities may cause low network modularity in the first stage, whether the initial community meets the weak community condition is judged first. If the condition is not met, the community with the largest number of connected edges is selected for merging; this process is repeated until the entire network meets the weak community condition. The research of LPA-TS ( <ns0:ref type='formula'>39</ns0:ref>) algorithm shows that &#120572; is generally set as 2 in Eq. ( <ns0:ref type='formula'>3</ns0:ref>). In order to achieve more accurate community detection results, &#120572; is set as 0.5, 1, 1.2, 1.5 and 2, respectively for the 8 real data sets used in this paper. 
Through repeated experiments, it can be known that when &#120572; is set as 1.5, good division results are achieved on the 8 networks.</ns0:p><ns0:p>Moreover, experiments are also carried out on 15 artificial data sets to verify the rationality of the value of &#120572;. Therefore, in our study, &#120572; is determined to be 1.5 to achieve better performance in all networks. Each community is regarded as a node, and its PC value is calculated using Eq.</ns0:p><ns0:p>(1) to determine the community with the largest PC value; then, the community with the most links is determined for merging. If the modularity increases after the merge, the merge will be selected; otherwise, it will not be merged, thus ensuring that the community structure after the second stage merge will have a higher modularity and be closer to the real community structure. </ns0:p></ns0:div> <ns0:div><ns0:head>Experiment and Analysis</ns0:head><ns0:p>In this study, numerous experiments are conducted on real networks and artificial datasets with different structural parameters. The classical LPA algorithm, LPA-TS algorithm, and several community detection algorithms based on label propagation were compared; moreover, the effectiveness, correctness, stability, and accuracy of the proposed algorithm were verified.</ns0:p></ns0:div> <ns0:div><ns0:head>Analysis of experimental results on real networks</ns0:head><ns0:p>Real dataset. In total, six classic real social network datasets were used in the experiment; their attribute characteristics are presented in Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref>. In Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref>, |V| represents the total number of nodes in the network, |E| represents the total number of edges, | | represents the number of &#120570; communities included in the network, max(k) represents the maximum node degree, &lt;k&gt; represents the average node degree, &lt;L&gt; represents the average path length, and &lt;c&gt; represents the clustering coefficient.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 3 Basic structural parameters of real datasets</ns0:head><ns0:p>Community division results. The proposed LPA-ITSLR algorithm was used to divide communities in the six abovementioned real datasets. The results are illustrated in Figure <ns0:ref type='figure'>4</ns0:ref>, where nodes in different communities are represented by different color.</ns0:p></ns0:div> <ns0:div><ns0:head>Figure 4 Community detection results of real networks</ns0:head><ns0:p>Stability analysis of LPA-ITSLR. The proposed LPA-ITSLR algorithm and the LPA and LPA-TS algorithms were compared and analyzed in the six abovementioned real datasets. Each dataset was run independently for 10 times, and the average value of the three algorithms on the six datasets was obtained (denoted as &lt;Q&gt;), as shown in Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref>. The independent experimental results for each time are shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>.</ns0:p><ns0:p>As can be seen from the experimental results presented in Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>, LPA-ITSLR performed well in all datasets, with the exception that the average module degree on the NetScience network was slightly lower than that obtained using the other two algorithms.</ns0:p><ns0:p>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div> <ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Moreover, LPA-ITSLR yielded more stable community partitioning results and a higher module degree than the other two algorithms. NetScience is a weighted network; however, in the experiment, the weight was ignored, and it was transformed into a powerless network for community division. Therefore, the quality of community division on this network obtained using LPA-ITSLR was slightly lower than that obtained using the other two algorithms. However, in the 10 independent experiments, the results of the LPA and LPA-TS algorithms exhibited fluctuations, indicating that the two algorithms are unstable due to the randomness of node and label update. The module-degree value of the proposed LPA-ITSLR algorithm always remained stable for every network, indicating that LPA-ITSLR effectively solves the oscillation problem in the process of label propagation and has higher accuracy and stability. To further verify the robustness of LPA-ITSLR, 100 independent experiments were conducted on the Karate, Dolphin, and Football networks; the results are presented in Figure <ns0:ref type='figure'>6</ns0:ref>. The community division results obtained using the LPA algorithm exhibited the most serious fluctuations in the modularity value, followed by the LPA-TS algorithm. In contrast, LPA-ITSLR maintained the same community division results in 100 experiments, and the modularity was higher than that of LPA and LPA-TS. </ns0:p></ns0:div> <ns0:div><ns0:head>Table 4 Average modularity values of 10 experiments for the three algorithms on real datasets</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> Comparison of modularity of LPA, LPA-TS, and LPA-ITSLR To further evaluate the performance of the LPA-ITSLR algorithm, it was compared with four recent community detection algorithms based on label propagation. Among them, the COPRA algorithm (6) realizes community division by assigning multiple labels with attribution coefficients to a node. The WLPA algorithm <ns0:ref type='bibr' target='#b26'>(23)</ns0:ref> first selects the label with a larger weight for propagation during the label propagation process. The LINSIA <ns0:ref type='bibr' target='#b29'>(26)</ns0:ref> algorithm is based on node importance and employs label importance to complete the community division. The LILPA <ns0:ref type='bibr' target='#b42'>(38)</ns0:ref> algorithm uses a fixed label update sequence based on the ascending order of node importance for discovering communities. The modularity of the results obtained using the five algorithms on the four real datasets is presented in Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 5 Modularity comparison of five algorithms</ns0:head><ns0:p>From Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref>, it can be seen LPA-ITSLR yielded the highest modularity and most stable in community division results. Thus, instability caused by label oscillation is avoided effectively by using LPA-ITSLR. Performance comparison of LPA-ITSLR with other algorithms. 
For further analysis of the effectiveness of the proposed algorithm for community partition and correctness, three classic datasets of Karate, Dolphins, and Football were used, and the LPA-ITSLR algorithm and seven classic community detection algorithms were employed for obtaining the division results for correlation analysis in terms of the number of communities and module Q as evaluation |&#120570;| indicators, The results are presented in Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 6 Results of eight algorithms on classical networks</ns0:head><ns0:p>From Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>, it can be seen that the number of communities and modularity of the partition results of the eight algorithms on the three classical networks were different, but LPA-ITSLR exhibited good performance on these datasets; moreover, the partition results and the number of communities were consistent with the real network structure, and the modularity was higher than that obtained using other algorithms. Analysis of experimental results of artificial datasets Artificial datasets. Ten artificial networks were generated using the LFR benchmark (31) ; the basic information is presented in Table <ns0:ref type='table'>7</ns0:ref> The number of nodes is 5000, the community size is 50, the average degree of nodes is 10, and the maximum degree is 50. The mixing parameter &#181; was 0.1 and 0.3, respectively, and these two networks were denoted as LFR-9 and LFR-10.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 7 Description of synthetic networks</ns0:head><ns0:p>Comparative analysis of algorithm performance. For the first eight artificial datasets, the proposed LPA-ITSLR algorithm was compared with the LPA and LPA-TS algorithms in terms of the community division results. The average modularity &lt;Q&gt; and NMI were used as evaluation indicators. The experimental results are shown in Figure <ns0:ref type='figure'>7</ns0:ref>. As the value of &#181; increased, the network became more complex. The modularity of the community division results of the three algorithms on the corresponding network decreased by varying degrees, but LPA-ITSLR yielded higher modularity than the other algorithms. Moreover, the NMI value of LPA-ITSLR on the first seven networks was 1, and the NMI value of the network with a &#181; value of 0.45 was 0.9943, showing extremely strong stability and higher quality of community division.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> Comparison of modularity and NMI on eight synthetic datasets For large-scale artificial networks LFR-9 and LFR-10 with high complexity, the proposed LPA-ITSLR algorithm was compared with seven recent label propagation algorithms for community division. Q and NMI were considered as evaluation parameters. The results are presented in Table <ns0:ref type='table'>8</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 8 Results for LFR9 and LFR10</ns0:head><ns0:p>From Table <ns0:ref type='table'>8</ns0:ref>, it can be seen that the community division results obtained using the seven algorithms were not stable, and the algorithm proposed in this paper maintains stable community division results on the two complex artificial data sets. Although the NMI value was slightly lower than that for other algorithms, the modularity was far higher. 
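The LFR experiments summarized above can be reproduced in outline with standard tooling. The following sketch is not the pipeline used in the paper: it relies on networkx's LFR generator, on networkx's built-in label-propagation routine merely as a stand-in detector in place of LPA-ITSLR, and on scikit-learn for NMI. The parameters echo the reported LFR-1 setting (1000 nodes, average degree 20, maximum degree 50, community sizes 10-50, µ = 0.1), while the two power-law exponents are assumed values because they are not stated in the text.

import networkx as nx
from networkx.algorithms import community as nx_comm
from sklearn.metrics import normalized_mutual_info_score

# Exponents tau1 and tau2 are assumptions; the remaining values follow LFR-1.
G = nx.LFR_benchmark_graph(
    n=1000, tau1=3, tau2=1.5, mu=0.1,
    average_degree=20, max_degree=50,
    min_community=10, max_community=50, seed=42)

# Ground-truth communities are stored by the generator as a node attribute.
truth = {frozenset(G.nodes[v]["community"]) for v in G}
true_labels = {v: i for i, c in enumerate(truth) for v in c}

# Stand-in detector: networkx's plain label propagation (not LPA-ITSLR).
found = list(nx_comm.label_propagation_communities(G))
found_labels = {v: i for i, c in enumerate(found) for v in c}

nodes = sorted(G)
q = nx_comm.modularity(G, found)                      # modularity, Eq. (6)
nmi = normalized_mutual_info_score(                   # NMI, Eq. (7)
    [true_labels[v] for v in nodes],
    [found_labels[v] for v in nodes])
print(f"communities found: {len(found)}, Q = {q:.4f}, NMI = {nmi:.4f}")

Generation may occasionally fail to converge for a particular seed; trying a different seed is usually sufficient.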
In the community merge phase optimization strategy based on modularity, LPA-ITSLR is superior as it can yield stable and high-quality community division results.</ns0:p><ns0:p>For the above 10 artificial networks, the experimental results show that the proposed algorithm is superior to other algorithms in both Q and NMI. In order to further verify the superiority of the proposed algorithm, we compare the number of communities detected by LPA-ITSLR algorithm with the actual number of communities of the ten networks, and the results are shown in Table <ns0:ref type='table'>9</ns0:ref>.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 9 Actual number of communities and the number of communities detected by LPA-ITSLR</ns0:head><ns0:p>It can be seen from table <ns0:ref type='table'>9</ns0:ref> that the number of communities detected by the algorithm proposed in this paper is basically consistent with the actual number of communities. In general, good results are obtained except for small deviations in some networks. Performance analysis of the algorithm for large data sets. In order to further verify the effectiveness of the algorithm, several large-scale data sets were used for experiments in this paper. The number of nodes of these artificial networks were 6000, 7000, 8000, 9000 and 10000, respectively, and these networks were denoted as LFR-11 to LFR-15, respectively. Table <ns0:ref type='table' target='#tab_5'>10</ns0:ref> shows the experimental results of the algorithm on these five large-scale networks, including the actual number of communities, the number of communities detected by the LPA-ITSLR algorithm, Q and NMI.</ns0:p></ns0:div> <ns0:div><ns0:head>Table 10 Community detection results of five large-scale artificial networks</ns0:head><ns0:p>It can be seen from table 10 that, the algorithm performs well on these large data sets. With the increase of the number of nodes, the network scale and complexity continues to expand, and the modularity and NMI all showed a trend of decline. But the Q is always above 0.86, and the NMI is more than 0.95 by and large. In addition, the number of communities detected by the algorithm proposed is basically consistent with the actual number of communities, which further verifies the effectiveness and superiority of the LPA-ITSLR algorithm.</ns0:p></ns0:div> <ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>To solve the problem of unstable results and low modularity of the LPA-TS algorithm in community detection on some networks, an improved LeaderRank-based two-stage label propagation algorithm named LPA-ITSLR was proposed in this study. In the first stage, the order of node updating is determined by descending order of the PC values. In the label propagation strategy, the improved similarity index is used, and then the influence of the nodes is compared so as to obtain the initial community division. In the second stage, the community is regarded as a node, and the PC is calculated again and sorted in ascending order. For determining the optimal parameter value in the weak community condition, the community is merged. Finally, the community structure is further improved based on the modularity optimization, and the final community division result is obtained. The proposed LPA-ITSLR algorithm solves the problem that the randomness of LPA-TS algorithm may yield unstable community partition results. Moreover, LPA-ITSLR yielded higher modularity than other algorithms on six real networks and 15 artificial datasets and achieved a more stable community division. 
However it has a higher time complexity in the case of certain networks with special community structures such as when the network community structure is complex, when there are many small communities and less contact between communities, and for nonequilibrium size distribution networks, a community detection method based on label propagation integrated deep learning and optimization could be employed to determine the node similarities and label influence. In the future, community detection in large-scale networks will be further studied to reduce the time complexity of the algorithm, and to achieve more accurate and efficient community detection results. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>and are the real community structure of the network and the , &#119860; 2 , &#8230;, &#119860; &#119896; } &#119861; = {&#119861; 1 , &#119861; 2 , &#8230;, &#119861; &#119896; ' } community division result of the network by an algorithm, respectively, and are the number &#119896; &#119896;'</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>&#119862;&#119873;(&#119907; &#119894; , &#119907; &#119895; ) = |&#119873; &#119894; &#8745; &#119873; &#119895; | + 1 &#119889; &#119894; &#119889; &#119895; In the network diagram shown in Figure 1, when the LPA-TS algorithm is used for the first stage of community division, two initial rough communities are obtained: &#120570; 1 = {&#119907; 7 ,&#119907; 8 , &#119907; 9 , &#119907; 2 } and . The nodes in different communities are represented by different &#120570; 2 = {&#119907; 1 ,&#119907; 3 , &#119907; 4 , &#119907; 5 , &#119907; 6 } shapes and colors in Figure 1. At this time, node has not yet been merged into any &#119907; 0</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_2'><ns0:head>Figure 1 A</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 A network instance with two communitiesThe traditional LPA algorithm and the LPA-TS algorithm were used to conduct 100 experiments on classic Karate and Football networks. The corresponding module degree Q (33) of the community division results is shown in Figure2. Both algorithms exhibited obvious oscillations, indicating that the community division results of the algorithms are unstable.</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_4'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Comparison of algorithm stability (a) (b) (c)</ns0:figDesc></ns0:figure> <ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>. The number of nodes |V| in the top eight artificial networks is 1000, and the community size | | is 10-50, that is, min = 10, max = 50. The &#120570; |&#120570;| |&#120570;| average degree of nodes &lt;k&gt; is 20, and the maximum degree max(k) is 50. The values of 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, and 0.45 were employed as the mixing parameter &#181;, and the eight networks were denoted as LFR-1-LFR-8. 
The latter two artificial networks are more complicated.</ns0:figDesc></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,229.87,480.75,291.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,255.37,419.25,231.00' type='bitmap' /></ns0:figure> <ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,204.37,525.00,187.50' type='bitmap' /></ns0:figure> <ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>2) &#120572; * &#119889; &#119894;&#119899; &#119894; (&#120570; &#119903; ) &gt; &#119889; &#119900;&#119906;&#119905; &#119894; (&#120570; &#119903; ) , &#8704; &#119894; &#1013; &#120570; &#119903; The LeaderRank algorithm(34) is used to calculate the influence of nodes in the network. A background node is added to the network and connected with all the nodes in the network to &#119907; &#119892; form a new network. The algorithm assigns 1 unit of LeaderRank (LR) value to all nodes except the background node in advance, assigns 0 unit of LR value to node , uses Eq. (4) to calculate &#119907; &#119892; the LR value of node in each iteration, and iterates repeatedly until reaches a steady state &#119907; &#119894; &#119907; &#119894;</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>(3) represents the number of connected edges between the node in &#120572; * &#8721; &#119894;&#1013;&#120570; &#119903; &#119889; &#119894;&#119899; &#119889; &#119900;&#119906;&#119905; &#119894; (&#120570; &#119903; ) &#119894; (&#120570; &#119903; ) &gt; &#8721; &#119894;&#1013;&#120570; &#119903; &#119894; (&#120570; &#119903; ) &#119889; &#119894;&#119899; &#119907; &#119894; and the internal nodes of , In Eqs. (2) and (3), the community represents the number of connected &#120570; &#119903; &#120570; &#119903; &#119889; &#119900;&#119906;&#119905; &#119894; (&#120570; &#119903; )</ns0:cell></ns0:row><ns0:row><ns0:cell>edges between node in &#119907; &#119894; &#120570; &#119903;</ns0:cell><ns0:cell>and other nodes except . In general, &#120572; = 2. &#120570; &#119903;</ns0:cell></ns0:row><ns0:row><ns0:cell>LeaderRank algorithm</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>8 , &#119907; 1 , &#119907; 5 } &#8743; (&#119907; 2 ,&#119907; 8 &#8712; &#120570; 1 ) &#8743; (&#119907; 1 ,&#119907; 5 &#8712; &#120570; 2 ) At this time, the LPA-TS algorithm randomly selects a community to &#119862;&#119873;(&#119907; 9 ,&#119907; 0 ) = &#119862;&#119873;(&#119907; 3 ,&#119907; 0 ) merge into it and finally yields two community division results, &#119907; 0 &#120570; = {{&#119907; 7 ,&#119907; 8 , &#119907; 9 , &#119907; 2 }, {&#119907; 0 ,&#119907; 1 , and , resulting in instability. &#119907; 3 , &#119907; 4 , &#119907; 5 , &#119907; 6 }} &#120570;' = {{&#119907; 7 ,&#119907; 8 ,&#119907; 9 , &#119907; 2 , &#119907; 0 }, {&#119907; 1 ,&#119907; 3 , &#119907; 4 , &#119907; 5 , &#119907; 6 }}</ns0:figDesc><ns0:table><ns0:row><ns0:cell>According to the similarity calculation formula of the LPA-TS algorithm, the node has a large &#119907; 0</ns0:cell></ns0:row><ns0:row><ns0:cell>similarity with in the community ; in addition, has a large similarity with in the &#119907; 9 &#120570; 1 &#119907; 0 &#119907; 3</ns0:cell></ns0:row><ns0:row><ns0:cell>community . 
Moreover, both and have the same neighbor attributes as node , that is, &#120570; 2 &#119907; 9 &#119907; 3 &#119907; 0</ns0:cell></ns0:row><ns0:row><ns0:cell>.</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 First stage of the LPA-ITSLR algorithm Stage</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>2: Community merge. The second stage of the proposed LPA-ITSLR algorithm is shown in Table</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 Second stage of the LPA-ITSLR algorithm</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure> <ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 Basic structural parameters of real datasets</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>|&#119881;|</ns0:cell><ns0:cell>|&#119864;|</ns0:cell><ns0:cell>|&#120570;|</ns0:cell><ns0:cell>max(k)</ns0:cell><ns0:cell>&lt;k&gt;</ns0:cell><ns0:cell>&lt;d&gt;</ns0:cell><ns0:cell>&lt;c&gt;</ns0:cell></ns0:row><ns0:row><ns0:cell>Karate</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>4.588</ns0:cell><ns0:cell>2.408</ns0:cell><ns0:cell>0.588</ns0:cell></ns0:row><ns0:row><ns0:cell>Dolphin</ns0:cell><ns0:cell>62</ns0:cell><ns0:cell>159</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>5.129</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.309</ns0:cell></ns0:row><ns0:row><ns0:cell>Polbooks</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>441</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>8.4</ns0:cell><ns0:cell>3.079</ns0:cell><ns0:cell>0.448</ns0:cell></ns0:row><ns0:row><ns0:cell>Football</ns0:cell><ns0:cell>115</ns0:cell><ns0:cell>613</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>10.661</ns0:cell><ns0:cell>2.508</ns0:cell><ns0:cell>0.403</ns0:cell></ns0:row><ns0:row><ns0:cell>Les_Miserable</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>254</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>6.597</ns0:cell><ns0:cell>2.641</ns0:cell><ns0:cell>0.736</ns0:cell></ns0:row><ns0:row><ns0:cell>NetScience</ns0:cell><ns0:cell>379</ns0:cell><ns0:cell>914</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>3.451</ns0:cell><ns0:cell>6.042</ns0:cell><ns0:cell>0.798</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Average modularity values of 10 experiments for the three algorithms on real datasets</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. 
reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022)</ns0:note></ns0:figure> <ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 Average modularity values of 10 experiments for the three algorithms on real datasets</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Dataset/&lt;Q&gt;</ns0:cell><ns0:cell>LPA</ns0:cell><ns0:cell>LPA-TS</ns0:cell><ns0:cell>LPA-ITSLR</ns0:cell></ns0:row><ns0:row><ns0:cell>Karate</ns0:cell><ns0:cell>0.3174</ns0:cell><ns0:cell>0.3716</ns0:cell><ns0:cell>0.4242</ns0:cell></ns0:row><ns0:row><ns0:cell>Dolphin</ns0:cell><ns0:cell>0.4920</ns0:cell><ns0:cell>0.3759</ns0:cell><ns0:cell>0.5418</ns0:cell></ns0:row><ns0:row><ns0:cell>Polbooks</ns0:cell><ns0:cell>0.3801</ns0:cell><ns0:cell>0.4569</ns0:cell><ns0:cell>0.5207</ns0:cell></ns0:row><ns0:row><ns0:cell>Football</ns0:cell><ns0:cell>0.5819</ns0:cell><ns0:cell>0.6010</ns0:cell><ns0:cell>0.6068</ns0:cell></ns0:row><ns0:row><ns0:cell>Les_Miserable</ns0:cell><ns0:cell>0.2719</ns0:cell><ns0:cell>0.5007</ns0:cell><ns0:cell>0.5102</ns0:cell></ns0:row><ns0:row><ns0:cell>NetScience</ns0:cell><ns0:cell>0.7769</ns0:cell><ns0:cell>0.7573</ns0:cell><ns0:cell>0.7567</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 Modularity comparison of five algorithms</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>&#22303; 0.10187 0.3741 &#22303; 0.03946 0.4884 &#22303; 0.03215 0.5972 &#22303; 0.02115 WLPA 0.3682 &#22303; 0.08176 0.3695 &#22303; 0.02517 0.5070 &#22303; 0.00622 0.5981 &#22303; 0.01374 LINSIA 0.3989 &#22303; 0.00004 0.3878 &#22303; 0.00005 0.4521 &#22303; 0.00007 0.5853 &#22303; 0.00007 LILPA 0.4213 &#22303; 0.0029 0.4003 &#22303; 0.00214 0.4635 &#22303; 0.00646 0.6061 &#22303; 0.00151</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>Karate</ns0:cell><ns0:cell>Dolphin</ns0:cell><ns0:cell>Polbooks</ns0:cell><ns0:cell>Football</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>COPRA 0.2348 LPA-ITSLR 0.4242</ns0:cell><ns0:cell>0.5418</ns0:cell><ns0:cell>0.5207</ns0:cell><ns0:cell>0.6068</ns0:cell></ns0:row></ns0:table></ns0:figure> <ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 Results of eight algorithms on classical networks</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Network Criteria</ns0:cell><ns0:cell>|&#120570;|</ns0:cell><ns0:cell>Karate</ns0:cell><ns0:cell>Q</ns0:cell><ns0:cell>|&#120570;|</ns0:cell><ns0:cell>Dolphin</ns0:cell><ns0:cell>Q</ns0:cell><ns0:cell>|&#120570;|</ns0:cell><ns0:cell>Football</ns0:cell><ns0:cell>Q</ns0:cell></ns0:row><ns0:row><ns0:cell>Fastgreedy</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell>0.38</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>0.495</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='2'>0.549</ns0:cell></ns0:row><ns0:row><ns0:cell>LPA</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>0.292</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>0.492</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell cols='2'>0.576</ns0:cell></ns0:row><ns0:row><ns0:cell>Leading Eigenvector</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>0.393</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>0.491</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell cols='2'>0.492</ns0:cell></ns0:row><ns0:row><ns0:cell>Walktrap</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>0.353</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>0.489</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell 
cols='2'>0.602</ns0:cell></ns0:row><ns0:row><ns0:cell>NIBLPA</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>0.352</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>0.452</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell cols='2'>0.542</ns0:cell></ns0:row><ns0:row><ns0:cell>EdMot</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>0.412</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>0.518</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell cols='2'>0.604</ns0:cell></ns0:row><ns0:row><ns0:cell>LPA-MNI</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>0.372</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>0.527</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell cols='2'>0.582</ns0:cell></ns0:row><ns0:row><ns0:cell>LPA-ITSLR</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>0.4242</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>0.5418</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell cols='2'>0.6068</ns0:cell></ns0:row></ns0:table><ns0:note>2 PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022)</ns0:note></ns0:figure> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022) Manuscript to be reviewed Computer Science</ns0:note> <ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:11:67478:1:1:NEW 7 Jan 2022)Manuscript to be reviewed</ns0:note> </ns0:body> "
"Dear Editor, We are truly grateful to the reviewers for the critical evaluation and thoughtful suggestions on our manuscript (“An improved two-stage label propagation algorithm based on LeaderRank”). According to the comments of the three reviewers, we have made careful revisions to the manuscript. Our point-by-point responses to the review’s comments are as given below. We hope that these revisions are satisfactory and the revised version will be acceptable for publication in “PeerJ Computer Science”. Thank you very much for your attention to our paper. I wish you all the best!  Response to the first reviewer’s comments Dear reviewers, thank you very much for your valuable comments. According to the suggestion, with the help of the professional SCI thesis polishing organization, we have rewritten the paper to improve the presentation. Since some contents have been modified and supplemented in the paper, the line number may have changed. Our point-by-point responses to the comments are as given below. 1. Line 87: The 'Q' has been italicized. 2. Line 147: We have revised the expression here to describe the content of this article more clearly. 3. Line 216: “Real” has been removed. 4. Line 297: The research shows that for most networks, is generally set as 2. In order to achieve more accurate community division results, is set as 0.5, 1, 1.2, 1.5, and 2, respectively, for the 8 real data sets used in this paper. Through repeated experiments, it can be seen that when is set to 1.5, good division results can be achieved on the 8 networks.  At the same time, experiments are also carried out on 15 artificial data sets to verify the rationality of .  Therefore, in this study, is determined to be 1.5 to achieve better performance in all networks. The rationality of the values is also verified through the experiment on 15 artificial networks.   5. Table 5 shows the stability verification results of community detection of the proposed algorithm. The four compared algorithms all have certain instability in the modularity of community partition results in terms of the four networks, while the results of the algorithm proposed always remain the same. Table 6 shows the verification of the community partitioning quality of the algorithm. The modularity of the partitioning results of the seven compared algorithms is stable in terms of the three networks, but that is lower than that of the algorithm proposed in this paper, which verifies the superiority of the proposed algorithm. 6. Line 380: As the value of μ increases, the network will become more and more complex. When the value of μ exceeds 0.5, the community structure is not obvious, and Q and NMI will plummet or even drop to 0, which is not of great research value. Therefore, the experiments were carried out when μ is 0.1 to 0.5 in our studies. In the next step, the impact of the value of μ on the community detection results will be further studied to get more accurate results. 7. According to your suggestion, the actual number of communities in the LFR networks and the number of communities detected are listed in the table 9. Table 9 Actual number of communities and the number of communities detected by LPA-ITSLR LFR networks Actual number of communities Number of communities divided by LPA-ITSLR LFR-1 35 40 LFR-2 35 35 LFR-3 38 38 LFR-4 45 45 LFR-5 39 39 LFR-6 42 42 LFR-7 42 42 LFR-8 42 40 LFR-9 85 81 LFR-10 98 69 Response to the second reviewer’s comments Dear reviewers, thank you very much for all your valuable comments. 
According to your suggestion, we have revised the paper. The amendments made are presented below. Meanwhile, in the last part of the paper, the work was summarized, and the possible future work direction was also pointed out. Our point-by-point responses to the comments are as given below. 1. We have studied the recent related papers and supplemented the introduction part of this manuscript according to your suggestion, as follows: (Lim, Salzman & Tsiotras, 2021) encoded both semantic and geometric information of the environment in a weighted colored graph, in which the edges were partitioned into a finite set of ordered semantic classes (e.g., colors), and then incrementally searched for the shortest path among the set of paths with minimal inclusion of inferior classes, using information from the previous search by ideas similar to the LPA. (Aghaalizadeh et al., 2021) In the first and second iterations (t <= 2) of the propagation, if the number of maximum label frequencies in neighbor nodes was equal, the Adamic/Adar index was used to select the appropriate label. For the other iterations (t > 2), a new criterion, known as label strength, was applied to select the label with the highest strength of a node. (Zhang & Xia, 2021) proposed a new node similarity metric, and the label was updated according to the similarity between the current node and neighbor nodes. 2. Although the time complexity of the proposed algorithm is slightly higher, it improves the randomness and stability of the community detection results. The proposed algorithm achieves higher community partition performance on 6 classical real networks and 15 artificial networks with different scale and complexity. Further studies will be conducted in terms of large-scale networks to reduce the time complexity of the algorithm, which has been pointed out in the summary of the paper. 3. According to your suggestion, we conduct the experiments for large-scale data sets with 6000, 7000, 8000, 9000 and 10000 nodes, respectively, and the results are shown as follows. Table 10 Community detection results of five large-scale artificial networks Dataset |V| <k> max(k) actual number of communities number of communities found <Q> NMI LFR-11 6000 10 50 30 60 0.1 125 128 0.8730 0.9762 LFR-12 7000 10 50 30 60 0.1 130 133 0.8686 0.9510 LFR-13 8000 10 50 30 60 0.1 176 176 0.8828 0.9805 LFR-14 9000 10 50 30 60 0.1 175 178 0.8722 0.9629 LFR-15 10000 10 50 30 60 0.1 175 180 0.8645 0.9478 Response to the third reviewer’s comments Dear reviewers, thank you for all your valuable comments. According to your suggestion, with the help of the professional SCI thesis polishing organization, we have improved the presentation of our paper. The following is a description of the amendment. Since some contents have been modified and supplemented in the paper, the line number may have changed. 1. We have supplemented the experimental results of five large-scale artificial networks with 6000, 7000, 8000, 9000 and 10000 nodes. 2. All occurrences of “community discovery” have been corrected to “community detection”. 3. In table 3, the letter representing the average path length has been corrected as L to avoid ambiguity, and the expression of clustering coefficient has been changed.   4. At line 49, the place of the citation number 1 has been revised, and the expression problem at line 241 has also been corrected.   "
Here is a paper. Please give your review comments after reading it.
427
"<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:p>To solve the problems of poor stability and low modularity (Q) of community division results caused by the randomness of node selection and label update in the traditional label propagation algorithm, an improved two-stage label propagation algorithm based on LeaderRank was proposed in this study. In the first stage, the order of node updating is determined by the participation coefficient (PC). Then, a new similarity measure is defined to improve the label selection mechanism so as to solve the problem of label oscillation caused by multiple labels of the node with the most similarity to the node.</ns0:p><ns0:p>Moreover, the influence of the nodes is comprehensively used to find the initial community structure. In the second stage, the rough communities obtained in the first stage are regarded as nodes, and their merging sequence is determined by the PC. Next, the nonweak community and the community with the largest number of connected edges are combined. Finally, the community structure is further optimized to improve the modularity so as to obtain the final partition result. Experiments were performed on 6 real classic networks and 19 artificial datasets with different scales, complexities, and densities. The modularity and normalized mutual information (NMI) were considered as evaluation indexes for comparing the improved algorithm with dozens of relevant classical algorithms. The results showed that the proposed algorithm yields superior performance, and the results of community partitioning obtained using the improved algorithm were stable and more accurate than those obtained using other algorithms. In those classical real networks, the modularity of community division results of the proposed algorithm was higher than that of other algorithms. And the NMI values were all above 0.9943 on eight artificial datasets containing 1000 nodes, and 0.986 and 0.953 on two complex networks containing 5000 nodes. In addition, the proposed algorithm always performs well in nine large-scale artificial data sets with 6000 to 50000 nodes, which verifies its computational performance and utility in community detection for large-scale network data sets.</ns0:p></ns0:div> </ns0:abstract> <ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'> <ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With the rapid development of the Internet and big data technology, research on complex networks has gradually penetrated into many fields, such as information science and biological science, and has thus become a very challenging research topic <ns0:ref type='bibr' target='#b5'>(5)</ns0:ref>. In social networks, such as scientific research cooperation and transportation networks, objects are usually represented as nodes, and relationships between objects are represented as edges <ns0:ref type='bibr' target='#b44'>(40)</ns0:ref>. Real-world networks have one important feature, community structure, that is, a network is usually composed of several communities, with relatively close node connections within the community and relatively sparse node connections between the communities. The discovery of community structure is an important basis for exploring the formation principle 0 and function of complex network structures <ns0:ref type='bibr' target='#b25'>(22)</ns0:ref> and plays a vital role in many fields. 
For instance, in the field of biology <ns0:ref type='bibr' target='#b22'>(19)</ns0:ref>, community detection is of great significance for understanding the specific organizational structure, functional analysis, and behavior prediction of biological systems. In the field of ecommerce, consumers with similar purchasing habits can be mined through community detection, thus creating greater business value through the establishment of efficient recommendation systems <ns0:ref type='bibr' target='#b12'>(11)</ns0:ref>. In the field of infectious diseases, community detection can be used to analyze and identify the key population of infectious diseases so as to effectively control the spread of diseases <ns0:ref type='bibr' target='#b3'>(3)</ns0:ref>. Therefore, the quick and effective discovery of the community structure of networks has become the primary task and an important branch of social network research.</ns0:p><ns0:p>With the extensive research on social network analysis, many community detection algorithms have emerged, but most of them suffer from limitations such as high complexity, low accuracy of community division, and unstable results. Label propagation algorithm (LPA) has attracted attention due to its advantages of low time complexity, no prior conditions, and suitability for community detection in large-scale networks <ns0:ref type='bibr' target='#b11'>(10)</ns0:ref>. However, the traditional LPA has the following disadvantages: 1) LPA adopts a random strategy in the updating sequence of nodes, resulting in randomness in the community partition results; 2) LPA treats every node as equally important and does not distinguish the importance of each node; 3) LPA assigns a unique label to each node and fails to identify overlapping communities <ns0:ref type='bibr' target='#b20'>(17)</ns0:ref>.</ns0:p><ns0:p>In view of the abovementioned shortcomings, numerous improved algorithms have been proposed. <ns0:ref type='bibr' target='#b4'>(4)</ns0:ref> proposed an improved LPA algorithm based on label propagation ability, developed a calculation method based on a k-shell decomposition algorithm for determining the importance of individual nodes <ns0:ref type='bibr' target='#b24'>(21)</ns0:ref>, and formulated a label update strategy through the importance ranking of nodes and label propagation ability. <ns0:ref type='bibr' target='#b33'>(30)</ns0:ref> proposed a community detection algorithm based on node influence and similarity (NIS-LPA), wherein the selected seed nodes are used to expand into seed regions, and then the similarity between nodes is calculated based on the network topology and real attributes of nodes, thus improving the stability and accuracy of the algorithm. <ns0:ref type='bibr' target='#b37'>(34)</ns0:ref> proposed a community detection algorithm integrating LeaderRank and label propagation (LLPA) wherein the three aspects of node label initialization, node update sequence, and label propagation selection process are improved. The LeaderRank algorithm is adopted to select key nodes, and labels are assigned to them by calculating the influence of the nodes. Thereafter, the nodes are updated according to the influence of the nodes, and the propagation ability between nodes is considered in the process of label propagation. 
<ns0:ref type='bibr' target='#b7'>(7)</ns0:ref> proposed a community detection algorithm based on boundary nodes and label propagation (LBN), which determines core nodes and boundary nodes, respectively, and then determines the community to which they belonged according to the weight of the boundary nodes, thus improving the stability of the algorithm. However, the values of Q <ns0:ref type='bibr' target='#b36'>(33)</ns0:ref> and NMI <ns0:ref type='bibr' target='#b38'>(35)</ns0:ref> are still unsatisfactory. <ns0:ref type='bibr' target='#b41'>(38)</ns0:ref> proposed label importance-based label propagation algorithm (LILPA) for community detection for application in core drug detection. In LILPA, when labels are transmitted to other nodes, the label updating process based on node importance, node attractiveness, and label importance is used to improve the label instability and the accuracy and efficiency of community division. For overlapping communities, ( <ns0:ref type='formula'>14</ns0:ref>) proposed an efficient community detection algorithm based on label propagation with community kernel (CK-LPA), which assigns a corresponding weight to each node according to the importance of the node in the network and updates node labels according to the weight order. They also discussed the composition of weights, label updating, propagation strategies, and convergence conditions. ( <ns0:ref type='formula'>18</ns0:ref>) improved the label update order and propagation threshold. They proposed an overlapping community detection algorithm based on label propagation by using the PageRank and node clustering coefficients algorithms (COPRAPC), wherein nodes with low influence are selected for label propagation, and the node clustering coefficient is used to control the maximum number of communities that nodes belong to. <ns0:ref type='bibr' target='#b28'>(25)</ns0:ref> proposed an overlapping community detection algorithm integrating label preprocessing and node influence (FLPNI), thereby greatly reducing the randomness of label propagation. <ns0:ref type='bibr' target='#b32'>(29)</ns0:ref> proposed an improved label propagation algorithm for community detection based on two-level neighborhood similarity (TNS-LPA); defined a new two-level neighborhood similarity measurement, which selected the initial community center by considering the minimum distance and local centrality index; and optimized the algorithm by adopting the asynchronous updating label strategy according to the importance of nodes, thereby further improving the accuracy of community division. <ns0:ref type='bibr' target='#b14'>(12)</ns0:ref> proposed an improved label propagation algorithm based on modularity and node importance (LPA-MNI) wherein the initial community is identified based on the value of modularity, and then the remaining nodes that have not been assigned to the initial community are clustered through label propagation. Node importance is used to improve the label update sequence, and the label selection mechanism is used when the majority of nodes contain multiple labels. Experimental results showed that LPA-MNI is more robust than the traditional LPA algorithm. <ns0:ref type='bibr' target='#b8'>(8)</ns0:ref> proposed the node importance-based label propagation algorithm (NI-LPA) to identify overlapping communities to address the problem of instability in the LPA algorithm caused by random updating. NI-LPA uses information derived from node attributes to simulate special propagation and filtering processes. 
Experiments on artificial and real networks of different sizes, complexities, and densities revealed the high efficiency of NI-LPAT for overlapping community detection. <ns0:ref type='bibr' target='#b27'>(24)</ns0:ref> proposed another label propagation algorithm based on node importance (NI-LPA) wherein the importance of nodes is defined by combining the signal propagation of nodes, the value of K-shell nodes themselves, and the Jaccard distance between adjacent nodes, which better avoids the instability caused by random selection of nodes in the traditional LPA algorithm. <ns0:ref type='bibr' target='#b17'>(15)</ns0:ref> encoded both semantic and geometric information of the environment in a weighted colored graph, in which the edges were partitioned into a finite set of ordered semantic classes (e.g., colors), and then incrementally searched for the shortest path among the set of paths with minimal inclusion of inferior classes, using information from the previous search by ideas similar to the LPA. <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref>In the first and second iterations (t &lt;= 2) of the propagation, if the number of maximum label frequencies in neighbor nodes was equal, the Adamic/Adar index was used to select the appropriate label. For the other iterations (t &gt; 2), a new criterion, known as label strength, was applied to select the label with the highest strength of a node. <ns0:ref type='bibr' target='#b38'>(35)</ns0:ref> proposed a new node similarity metric, and the label was updated according to the similarity between the current node and neighbor nodes.</ns0:p><ns0:p>The abovementioned algorithms focus on the calculation of the node importance and seed node selection and consider the randomness of node update order but ignore the importance of label update strategy, resulting in the unstable and less accurate community division. Therefore, this study focused on the updating strategy of nodes and labels to achieve efficient and accurate community division. The two-stage community detection algorithm based on the label propagation algorithm (39) (LPA-TS) has the following problems. 1) In the first stage, the algorithm determines the node update sequence from the descending participation coefficient (PC) and then updates the node label to that with the largest similarity so as to obtain the initial partition result. However, only the number and degree of common neighbors are considered in the definition of similarity. There may be multiple nodes with maximum similarity with the same number and degree of common neighbors. If one node is randomly selected for label update, the result of community division will be unstable. 2) In the second stage, the algorithm first regards the initial community as nodes and then determines the order of community mergers from the PC. Then, the algorithm performs merging according to the conditions of a weak community and finally obtains the community structure. However, in some classical networks, the community division results are not ideal, and the modularity is low because LPA-TS has some shortcomings in the updating strategy of nodes and labels and the definition of initial community merge conditions. To solve these problems, an improved two-stage label propagation algorithm (LPA-ITSLR) was proposed in this paper. 
The contributions and innovations of this paper are as follows.

(1) To solve the problem of unstable and inaccurate community division results yielded by the LPA-TS algorithm, a new similarity measurement index between nodes was proposed to optimize the node label updating strategy. In the initial stage of community division, the number and degree of common neighbors of two nodes and the similarity of the structural information between the common neighbors are considered comprehensively. Because several nodes may share the maximum similarity value, node importance is ranked by the LeaderRank value, which avoids the randomness of the label update order and keeps the initial community division stable.

(2) To address the problem of low modularity in LPA-TS, the optimal parameter value was determined by improving the definition of a weak community in the original algorithm, and the evaluation function based on complementary entropy was replaced by an objective function based on modularity optimization in the community merging stage, further improving the quality of community division and the accuracy of the final result.

(3) Experiments were conducted on six real networks and 19 artificial datasets of different scales (1000 to 50000 nodes) and complexities. The Q and NMI values were used as evaluation indexes to compare the proposed algorithm with several recent label propagation algorithms. The experimental results showed that the improved algorithm divides communities with higher quality and stability than the comparative algorithms, and that it still achieves high-quality divisions on large-scale datasets.

Theoretical Basis

Community division
A complex network is generally represented by G(V, E), where V = {v_1, v_2, ..., v_n} is the node set and E = {e_1, e_2, ..., e_m} is the edge set. Ω = {Ω_1, Ω_2, ..., Ω_k} is a division of G if and only if:
1) ∀ Ω_r ∈ Ω, Ω_r ≠ ∅;
2) ⋃_{Ω_r ∈ Ω} Ω_r = V;
3) ∀ Ω_p, Ω_q ∈ Ω (p ≠ q), Ω_p ∩ Ω_q = ∅.

Participation coefficient
The participation coefficient PC_i of node v_i (39) is used to describe how the edges of the node are distributed among the different communities in the network. It is defined as Eq. (1), where k is the number of communities, d_i is the degree of node v_i, and d_i(Ω_r) = |{v_j | (v_i, v_j) ∈ E ∧ v_j ∈ Ω_r}|. A high PC value indicates that the node is connected with more communities and that the node has a low degree of belonging to each community.
In contrast, a low PC value indicates that the node is connected to fewer communities and has a high degree of belonging to each community. When community detection is performed, nodes with a low PC and an obvious community affiliation are selected to start the traversal, which is more conducive to finding the correct community structure.

(1)  PC_i = 1 - Σ_{r=1}^{k} ( d_i(Ω_r) / d_i )^2

Strong and weak communities
Community structures can be strong or weak (39). A strong community means that, for every node in the community, the number of links between the node and the inside of the community is greater than the number of links between the node and the outside of the community; it can be defined as Eq. (2). A weak community means that the sum, over all nodes in the community, of the edges to nodes inside the community is greater than the sum of the edges to nodes outside the community; it can be defined as Eq. (3). In general, a community should satisfy at least the weak community condition.

(2)  α · d_i^in(Ω_r) > d_i^out(Ω_r),  ∀ v_i ∈ Ω_r
(3)  α · Σ_{v_i ∈ Ω_r} d_i^in(Ω_r) > Σ_{v_i ∈ Ω_r} d_i^out(Ω_r)

In Eqs. (2) and (3), d_i^in(Ω_r) represents the number of edges between node v_i and the nodes inside community Ω_r, and d_i^out(Ω_r) represents the number of edges between node v_i and the nodes outside Ω_r. In general, α = 2.

LeaderRank algorithm
The LeaderRank algorithm (34) is used to calculate the influence of the nodes in the network. A background node v_g is added to the network and connected with all the nodes to form a new network. The algorithm assigns 1 unit of LeaderRank (LR) value to every node except the background node in advance, assigns 0 units of LR value to node v_g, uses Eq. (4) to calculate the LR value of node v_i in each iteration, iterates repeatedly until a steady state is reached, and then uses Eq. (5) to divide the LR value of the background node evenly among all the nodes.

(4)  LR_i(t) = Σ_{j=0}^{N} (a_ij / d_j) · LR_j(t - 1)

where t is the number of iterations and N is the number of nodes in the network. If there is an edge between nodes v_i and v_j, then a_ij = 1; otherwise, a_ij = 0. d_j represents the degree of node v_j.

Evaluation Indicators
Modularity. Modularity (Q), proposed by Newman et al., is commonly used for measuring the strength of community structures. The closer its value is to 1, the stronger the community structure, that is, the better the quality of the community division (33). Q can be calculated as follows:

(6)  Q = (1 / 2m) · Σ_{i,j} [ A_ij - (k_i · k_j) / (2m) ] · δ(c_i, c_j)

where m is the number of edges in the network, A_ij is the adjacency matrix, k_i is the degree of node v_i, and δ(c_i, c_j) equals 1 if nodes v_i and v_j belong to the same community and 0 otherwise.
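To illustrate these definitions, the following sketch (assuming networkx; the function names and the α = 1.5 default are ours, the latter chosen to match the value adopted later in the paper) computes the participation coefficient of Eq. (1), the weak-community test of Eq. (3), and the LeaderRank iteration of Eqs. (4) and (5).

```python
# Illustrative sketch of Eqs. (1), (3) and (4)-(5); not the authors' implementation.
import networkx as nx

def participation_coefficient(G, node, partition):
    """Eq. (1): PC_i = 1 - sum_r (d_i(Omega_r) / d_i)^2."""
    d_i = G.degree(node)
    if d_i == 0:
        return 0.0
    total = 0.0
    for community in partition:
        d_ir = sum(1 for nbr in G.neighbors(node) if nbr in community)
        total += (d_ir / d_i) ** 2
    return 1.0 - total

def is_weak_community(G, community, alpha=1.5):
    """Eq. (3): alpha * (sum of internal degrees) > sum of external degrees."""
    internal = sum(1 for u in community for v in G.neighbors(u) if v in community)
    external = sum(1 for u in community for v in G.neighbors(u) if v not in community)
    return alpha * internal > external

def leaderrank(G, iterations=100):
    """Eqs. (4)-(5) with a background node connected to every node."""
    H = G.copy()
    ground = "_ground"
    H.add_edges_from((ground, v) for v in G.nodes())
    lr = {v: 1.0 for v in G.nodes()}
    lr[ground] = 0.0
    for _ in range(iterations):
        lr = {v: sum(lr[u] / H.degree(u) for u in H.neighbors(v)) for v in H.nodes()}
    # Distribute the background node's score evenly over the real nodes (Eq. (5)).
    share = lr.pop(ground) / G.number_of_nodes()
    return {v: score + share for v, score in lr.items()}

G = nx.karate_club_graph()
partition = [set(range(0, 17)), set(range(17, 34))]   # a toy two-way split
print(participation_coefficient(G, 0, partition))
print(is_weak_community(G, partition[0]))
print(sorted(leaderrank(G).items(), key=lambda kv: -kv[1])[:3])
```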
NMI. Normalized mutual information (NMI) measures how similar the community division produced by an algorithm is to the real community structure; its value is 1 when the division result is completely consistent with the actual community structure (35). Assuming that A = {A_1, A_2, ..., A_k} and B = {B_1, B_2, ..., B_k'} are the real community structure of the network and the community division result produced by an algorithm, respectively, and k and k' are the numbers of communities under the two divisions, NMI can be defined as follows:

(7)  NMI = [ 2 · Σ_{i=1}^{k} Σ_{j=1}^{k'} T_ij · log( n·T_ij / (T_i · T_j) ) ] / [ - Σ_{i=1}^{k} T_i · log(T_i / n) - Σ_{j=1}^{k'} T_j · log(T_j / n) ]

where n is the total number of nodes in the network, T is the confusion matrix, T_ij is the number of nodes shared by the real community A_i and the detected community B_j, T_i is the sum of the elements in the i-th row of the confusion matrix, and T_j is the sum of the elements in the j-th column.

The Proposed Algorithm

Question-posing
In the first stage of the LPA-TS algorithm, when there are two or more nodes with the largest similarity to the current node, the algorithm randomly selects one of them for the label update; this may lead to unstable partition results. The LPA-TS algorithm expresses the similarity of nodes as CN, as in Eq. (8), where N_i represents the set of neighbor nodes of node v_i and d_i represents the degree of node v_i.

(8)  CN(v_i, v_j) = ( |N_i ∩ N_j| + 1 ) / ( d_i · d_j )

In the network shown in Figure 1, when the LPA-TS algorithm is used for the first stage of community division, two initial rough communities are obtained: Ω_1 = {v_7, v_8, v_9, v_2} and Ω_2 = {v_1, v_3, v_4, v_5, v_6}. The nodes in different communities are represented by different shapes and colors in Figure 1. At this point, node v_0 has not yet been merged into any community. As shown in Figure 1, N_0 = {v_2, v_8, v_1, v_5}, with v_2, v_8 ∈ Ω_1 and v_1, v_5 ∈ Ω_2.

According to the similarity calculation formula of the LPA-TS algorithm, node v_0 has a large similarity with v_9 in community Ω_1; in addition, v_0 has a large similarity with v_3 in community Ω_2. Moreover, both v_9 and v_3 have the same neighbor attributes as node v_0, that is, CN(v_9, v_0) = CN(v_3, v_0). At this time, the LPA-TS algorithm randomly selects one of the two communities to merge v_0 into and finally yields one of two division results, Ω = {{v_7, v_8, v_9, v_2}, {v_0, v_1, v_3, v_4, v_5, v_6}} or Ω' = {{v_7, v_8, v_9, v_2, v_0}, {v_1, v_3, v_4, v_5, v_6}}, resulting in instability.

Figure 1 A network instance with two communities

The traditional LPA algorithm and the LPA-TS algorithm were used to conduct 100 experiments on the classic Karate and Football networks. The corresponding modularity Q (33) of the community division results is shown in Figure 2. Both algorithms exhibited obvious oscillations, indicating that their community division results are unstable.

Figure 2 Results of 100 experiments of the two algorithms on two networks

The LPA-TS algorithm only yields an initial, rough community division in the first stage, and many small communities remain. In the Karate network shown in Figure 3, some nodes with higher degrees have greater similarities with many nodes; for example, node v_34 can easily pass its label to its neighboring nodes. Nodes at the edge of the network, in contrast, have low similarity to the central nodes with higher degrees; for example, nodes v_25 and v_26 can easily form small-scale communities, such as the triangle nodes and diamond nodes in Figure 3.
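The similarity tie described above for Figure 1 is easy to reproduce in code. The sketch below evaluates the CN similarity of Eq. (8) on a small symmetric toy graph (not the exact network of Figure 1) and shows two candidate nodes ending up with identical similarity, which is precisely the situation in which LPA-TS falls back to a random choice.

```python
# Sketch of the CN similarity in Eq. (8) and the tie situation it can produce.
# The toy graph below is illustrative; it is not the exact network of Figure 1.
import networkx as nx

def cn_similarity(G, i, j):
    """CN(v_i, v_j) = (|N_i ∩ N_j| + 1) / (d_i * d_j), as in Eq. (8)."""
    common = len(set(G.neighbors(i)) & set(G.neighbors(j)))
    return (common + 1) / (G.degree(i) * G.degree(j))

# A deliberately symmetric toy graph (a 5-node path): 0 - 1 - 2 - 3 - 4.
# By symmetry, node 2 is equally similar to nodes 1 and 3.
G = nx.path_graph(5)
print(cn_similarity(G, 2, 1))   # 0.25
print(cn_similarity(G, 2, 3))   # 0.25 -> a tie, so LPA-TS would pick at random
```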
To merge these small communities, LPA-TS uses the definition of weak communities and an evaluation function based on complementary entropy in the second stage. However, in the definition of weak communities, α is set to 2, which leads to unstable division results on some networks; that is, the final community division results are not ideal and the modularity is low. Therefore, in this study, the parameters and the objective function of the second-stage community merging strategy were improved, and a new community division method, LPA-ITSLR, was developed to achieve stable and more accurate community division results.

Figure 3 LPA-TS algorithm partitioning results for Karate network in the first stage

Node similarity definition
To solve the abovementioned problems, a new similarity index is proposed that comprehensively considers the common neighbors, the degrees of the nodes, and the structural relationship between the common neighbors. The more common neighbors two nodes have, the more similar they are. The higher the degree of a node, the more nodes it shares edges with; that is, the similarity of two nodes is inversely proportional to the degrees of the nodes themselves. The number of edges between the common neighbors is also taken into account to avoid multiple nodes having the same similarity value with respect to the original node. The improved similarity index can be expressed as follows:

(9)  S_ICN(i, j) = ( |N_i ∩ N_j| + 1 ) / ( (|N_i ∪ N_j| + 1) · d_i · d_j ) + C(v_i, v_j) / |N_i ∩ N_j|

where C(v_i, v_j) represents the number of edges between the common neighbor nodes of nodes v_i and v_j. The numerator of the first term is increased by 1 so that the improved similarity index is not 0 when there is no common neighbor. According to the definition of similarity in Eq. (9), the similarities between node v_0 and nodes v_9 and v_3 in Figure 1 can now be distinguished. The first stage of the LPA-ITSLR algorithm is shown in Table 1. First, each node is assigned a unique label, and the similarity between nodes is calculated according to Eq. (9). Then, the PC value of each node is calculated, and the nodes are sorted in ascending order of PC. Next, the labels are updated in the sorted node order. In the label updating strategy, the similarity between the current node and the other nodes is compared. If the node with the largest similarity is not unique, the LR values are further compared; if these are also equal, one node is randomly selected. This yields the rough initial community structure of the first stage.
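A condensed sketch of this first stage is given below. It reuses the participation_coefficient and leaderrank helpers sketched earlier, restricts the candidate nodes to neighbors, and performs a single pass for brevity, whereas the full algorithm iterates until the labels stabilize; the reading of Eq. (9) and the choice of the larger LR value for tie-breaking are our assumptions.

```python
# Illustrative single-pass sketch of Stage 1 (not the authors' implementation).
import networkx as nx

def s_icn(G, i, j):
    """Our reading of the improved similarity in Eq. (9)."""
    Ni, Nj = set(G.neighbors(i)), set(G.neighbors(j))
    common = Ni & Nj
    first = (len(common) + 1) / ((len(Ni | Nj) + 1) * G.degree(i) * G.degree(j))
    # Second term: edges among common neighbors; treated as 0 when there are none.
    second = G.subgraph(common).number_of_edges() / len(common) if common else 0.0
    return first + second

def stage_one(G, lr, pc_order):
    """lr: LeaderRank scores; pc_order: nodes in ascending participation coefficient."""
    labels = {v: v for v in G.nodes()}           # every node starts with its own label
    for v in pc_order:
        candidates = list(G.neighbors(v))
        if not candidates:
            continue
        sims = {u: s_icn(G, v, u) for u in candidates}
        best = max(sims.values())
        tied = [u for u in candidates if sims[u] == best]
        winner = max(tied, key=lambda u: lr[u])  # break similarity ties by influence
        labels[v] = labels[winner]
    return labels
```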
Stage 2: Community merge. The second stage of the proposed LPA-ITSLR algorithm is shown in Table 2. In view of the problem that small communities may cause low network modularity after the first stage, whether each initial community meets the weak community condition is judged first. If the condition is not met, the community with the largest number of connecting edges is selected for merging; this process is repeated until the entire network meets the weak community condition. Research on the LPA-TS (39) algorithm shows that α is generally set to 2 in Eq. (3). In order to achieve more accurate community detection results, α was set to 0.5, 1, 1.2, 1.5, and 2 for the 8 real datasets used in this paper. Repeated experiments showed that good division results are achieved on the 8 networks when α is set to 1.5. Experiments were also carried out on 15 artificial datasets to verify the rationality of this value. Therefore, in our study, α was set to 1.5 to achieve better performance on all networks. Each community is then regarded as a node, and its PC value is calculated using Eq. (1) to determine the community with the largest PC value; the community with the most links to it is then selected for merging. If the modularity increases after the merge, the merge is accepted; otherwise, it is not performed. This ensures that the community structure obtained after the second-stage merging has higher modularity and is closer to the real community structure.

Experiment and Analysis
In this study, numerous experiments were conducted on real networks and artificial datasets with different structural parameters. The classical LPA algorithm, the LPA-TS algorithm, and several other community detection algorithms based on label propagation were compared, and the effectiveness, correctness, stability, and accuracy of the proposed algorithm were verified.

Analysis of experimental results on real networks
Real datasets. In total, six classic real social network datasets were used in the experiments; their attribute characteristics are presented in Table 3. In Table 3, |V| represents the total number of nodes in the network, |E| the total number of edges, |Ω| the number of communities included in the network, max(k) the maximum node degree, <k> the average node degree, <d> the average path length, and <c> the clustering coefficient.

Table 3 Basic structural parameters of real datasets

Community division results. The proposed LPA-ITSLR algorithm was used to divide communities in the six abovementioned real datasets. The results are illustrated in Figure 4, where the nodes of different communities are shown in different colors.

Figure 4 Community detection results of real networks

Stability analysis of LPA-ITSLR. The proposed LPA-ITSLR algorithm and the LPA and LPA-TS algorithms were compared and analyzed on the six abovementioned real datasets. Each dataset was run independently 10 times, and the average modularity of the three algorithms on the six datasets (denoted as <Q>) was obtained, as shown in Table 4.
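All of the comparisons that follow are reported in terms of Q and NMI. As a generic illustration (not the authors' evaluation script), both indicators can be computed for any detected partition with standard tooling; note that scikit-learn's default NMI normalization may differ slightly from the form in Eq. (7).

```python
# Generic evaluation sketch for a community partition: modularity Q and NMI.
import networkx as nx
from networkx.algorithms.community import label_propagation_communities, modularity
from sklearn.metrics import normalized_mutual_info_score

def evaluate(G, detected, ground_truth):
    """detected / ground_truth: lists of node sets that each cover all nodes of G."""
    q = modularity(G, detected)
    nodes = sorted(G.nodes())
    # Convert both partitions to per-node label vectors for NMI.
    to_labels = lambda comms: [next(i for i, c in enumerate(comms) if v in c) for v in nodes]
    nmi = normalized_mutual_info_score(to_labels(ground_truth), to_labels(detected))
    return q, nmi

G = nx.karate_club_graph()
truth = [{v for v in G if G.nodes[v]["club"] == "Mr. Hi"},
         {v for v in G if G.nodes[v]["club"] == "Officer"}]
detected = [set(c) for c in label_propagation_communities(G)]
print(evaluate(G, detected, truth))
```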
The independent results of each run are shown in Figure 5. As can be seen from the results presented in Table 4 and Figure 5, LPA-ITSLR performed well on all datasets, with the exception that its average modularity on the NetScience network was slightly lower than that obtained using the other two algorithms. Moreover, LPA-ITSLR yielded more stable community partitioning results and higher modularity than the other two algorithms. NetScience is a weighted network; in the experiment, however, the weights were ignored and it was treated as an unweighted network for community division. For this reason, the quality of the community division obtained on this network using LPA-ITSLR was slightly lower than that obtained using the other two algorithms. In the 10 independent experiments, the results of the LPA and LPA-TS algorithms fluctuated, indicating that the two algorithms are unstable owing to the randomness of the node and label updates. The modularity value of the proposed LPA-ITSLR algorithm always remained stable on every network, indicating that LPA-ITSLR effectively solves the oscillation problem in the process of label propagation and has higher accuracy and stability.

Figure 5 Comparison of algorithm stability

To further verify the robustness of LPA-ITSLR, 100 independent experiments were conducted on the Karate, Dolphin, and Football networks; the results are presented in Figure 6. The community division results obtained using the LPA algorithm exhibited the most serious fluctuations in modularity, followed by those of the LPA-TS algorithm. In contrast, LPA-ITSLR maintained the same community division results in all 100 experiments, and its modularity was higher than that of LPA and LPA-TS.

Figure 6 Comparison of modularity of LPA, LPA-TS, and LPA-ITSLR

To further evaluate the performance of the LPA-ITSLR algorithm, it was compared with four recent community detection algorithms based on label propagation. Among them, the COPRA algorithm (6) realizes community division by assigning multiple labels with attribution coefficients to each node. The WLPA algorithm (23) preferentially selects labels with larger weights during label propagation. The LINSIA algorithm (26) is based on node importance and employs label importance to complete the community division. The LILPA algorithm (38) uses a fixed label update sequence based on the ascending order of node importance for discovering communities. The modularity of the results obtained using the five algorithms on four real datasets is presented in Table 5.

Table 4 Average modularity values of 10 experiments for the three algorithms on real datasets

Table 5 Modularity comparison of five algorithms

From Table 5, it can be seen that LPA-ITSLR yielded the highest modularity and the most stable community division results. Thus, the instability caused by label oscillation is effectively avoided by LPA-ITSLR.
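A repeated-run study such as the one summarized in Figures 5 and 6 can be scripted along the following lines; detect_communities is a placeholder for whichever algorithm is under test (classic LPA is used here as a stand-in, since an LPA-ITSLR implementation is not reproduced in this paper).

```python
# Sketch of a repeated-run stability study in the spirit of Figures 5-6.
import statistics
import networkx as nx
from networkx.algorithms.community import asyn_lpa_communities, modularity

def detect_communities(G, seed):
    # Placeholder algorithm under test; classic LPA is used as a stand-in here.
    return [set(c) for c in asyn_lpa_communities(G, seed=seed)]

def stability_study(G, runs=100):
    qs = [modularity(G, detect_communities(G, seed)) for seed in range(runs)]
    return min(qs), statistics.mean(qs), max(qs), statistics.pstdev(qs)

for name, G in [("Karate", nx.karate_club_graph())]:
    print(name, stability_study(G))
# A stable algorithm should show (near-)zero spread between the min and max Q.
```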
Performance comparison of LPA-ITSLR with other algorithms. For further analysis of the effectiveness and correctness of the proposed algorithm for community partitioning, three classic datasets, Karate, Dolphins, and Football, were used, and the LPA-ITSLR algorithm and seven classic community detection algorithms were compared in terms of the number of communities |Ω| and the modularity Q as evaluation indicators. The results are presented in Table 6.

Table 6 Results of eight algorithms on classical networks

From Table 6, it can be seen that the number of communities and the modularity of the partition results of the eight algorithms on the three classical networks differ, but LPA-ITSLR exhibited good performance on these datasets; moreover, its partition results and numbers of communities were consistent with the real network structures, and its modularity was higher than that obtained using the other algorithms.

Analysis of experimental results on artificial datasets
Artificial datasets. Ten artificial networks were generated using the LFR benchmark (31); their basic information is presented in Table 7. The number of nodes |V| in the first eight artificial networks is 1000, and the community size |Ω| is 10-50, that is, min|Ω| = 10 and max|Ω| = 50. The average node degree <k> is 20, and the maximum degree max(k) is 50. The values 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, and 0.45 were employed for the mixing parameter µ, and the eight networks were denoted LFR-1 to LFR-8. The latter two artificial networks are more complicated: the number of nodes is 5000, the community size is 50, the average node degree is 10, and the maximum degree is 50. The mixing parameter µ was 0.1 and 0.3, respectively, and these two networks were denoted LFR-9 and LFR-10.

Table 7 Description of synthetic networks

Comparative analysis of algorithm performance. For the first eight artificial datasets, the proposed LPA-ITSLR algorithm was compared with the LPA and LPA-TS algorithms in terms of the community division results, using the average modularity <Q> and NMI as evaluation indicators. The experimental results are shown in Figure 7. As the value of µ increases, the network becomes more complex, and the modularity of the community division results of the three algorithms on the corresponding networks decreases to varying degrees; however, LPA-ITSLR yielded higher modularity than the other algorithms. Moreover, the NMI value of LPA-ITSLR on the first seven networks was 1, and its NMI value on the network with µ = 0.45 was 0.9943, showing extremely strong stability and higher quality of community division.

Figure 7 Comparison of modularity and NMI on eight synthetic datasets

For the more complex, larger artificial networks LFR-9 and LFR-10, the proposed LPA-ITSLR algorithm was compared with seven recent label propagation algorithms for community division, with Q and NMI as evaluation parameters. The results are presented in Table 8.

Table 8 Results for LFR9 and LFR10

From Table 8, it can be seen that the community division results obtained using the seven comparison algorithms were not stable, whereas the algorithm proposed in this paper maintained stable community division results on the two complex artificial datasets. Although its NMI value was slightly lower than that of some other algorithms, its modularity was far higher. With the community merging phase optimized on the basis of modularity, LPA-ITSLR is superior in that it yields stable and high-quality community division results.
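For readers who wish to generate comparable synthetic data, networkx provides an LFR generator. The call below sketches parameters in the spirit of LFR-1: n, the degrees, the community sizes, and µ are taken from the description above, while the power-law exponents tau1 and tau2 are our assumptions, since the paper does not state them.

```python
# Sketch: generating an LFR benchmark graph similar in spirit to LFR-1.
# tau1/tau2 are assumed values; generation can occasionally fail to converge,
# in which case retrying with another seed usually helps.
import networkx as nx

G = nx.LFR_benchmark_graph(
    n=1000, tau1=3, tau2=1.5, mu=0.1,
    average_degree=20, max_degree=50,
    min_community=10, max_community=50, seed=42,
)
# Ground-truth communities are attached to the nodes by the generator.
truth = {frozenset(G.nodes[v]["community"]) for v in G}
print(G.number_of_nodes(), G.number_of_edges(), len(truth))
```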
For the above 10 artificial networks, the experimental results show that the proposed algorithm is superior to the other algorithms in terms of both Q and NMI. To further verify the superiority of the proposed algorithm, the number of communities detected by the LPA-ITSLR algorithm was compared with the actual number of communities of the ten networks; the results are shown in Table 9.

Table 9 Actual number of communities and the number of communities detected by LPA-ITSLR

It can be seen from Table 9 that the number of communities detected by the proposed algorithm is basically consistent with the actual number of communities. In general, good results are obtained, with only small deviations on some networks.

Performance analysis of the algorithm on large datasets. To further verify the effectiveness, computational performance, and utility of the proposed algorithm on large-scale network datasets, nine additional artificial datasets were used in the experiments. The numbers of nodes of these large-scale networks were 6000, 7000, 8000, 9000, 10000, 20000, 30000, 40000, and 50000, and the networks were denoted LFR-11 to LFR-19. Table 10 shows the experimental results of the algorithm on these nine large-scale networks, including the actual number of communities, the number of communities detected by the LPA-ITSLR algorithm, Q, and NMI. It can be seen from Table 10 that the algorithm performs well on these large datasets. As the number of nodes increases, the network scale and complexity continue to grow, and there is some discrepancy between the actual number of communities and the number of communities obtained by the algorithm; however, Q is always above 0.86, and NMI is above 0.96 by and large. In particular, on the datasets containing 20000 to 50000 nodes, NMI basically reaches more than 0.98, which shows the utility of the proposed algorithm for community division on large-scale network datasets. In addition, the number of communities detected by the proposed algorithm is basically consistent with the actual number of communities, which further verifies the effectiveness and superiority of the LPA-ITSLR algorithm.

Conclusions
To solve the problem of unstable results and low modularity of the LPA-TS algorithm in community detection on some networks, an improved LeaderRank-based two-stage label propagation algorithm named LPA-ITSLR was proposed in this study. In the first stage, the order of node updating is determined by the descending order of the PC values; in the label propagation strategy, the improved similarity index is used, and the influence of the nodes is then compared so as to obtain the initial community division. In the second stage, each community is regarded as a node, the PC is calculated again and sorted in ascending order, and communities are merged using the optimal parameter value determined for the weak community condition. Finally, the community structure is further refined based on modularity optimization, and the final community division result is obtained. The proposed LPA-ITSLR algorithm solves the problem that the randomness of the LPA-TS algorithm may yield unstable community partition results.
Moreover, LPA-ITSLR yielded higher modularity than the other algorithms on six real networks and 19 artificial datasets and achieved a more stable community division. However, it has a higher time complexity on certain large-scale networks with special structures, such as networks with a complex community structure, networks with many small and loosely connected communities, and networks with a non-equilibrium community size distribution. A community detection method based on label propagation that integrates deep learning and optimization could therefore be employed to determine node similarities and label influence. In future work, community detection in large-scale networks will be studied further in order to reduce the time complexity of the algorithm and to achieve more accurate and efficient community detection results.

Table 1 First stage of the LPA-ITSLR algorithm

Table 2 Second stage of the LPA-ITSLR algorithm

Table 3 Basic structural parameters of real datasets

Dataset        |V|   |E|   |Ω|  max(k)  <k>     <d>    <c>
Karate         34    78    2    17      4.588   2.408  0.588
Dolphin        62    159   2    12      5.129   -      0.309
Polbooks       105   441   3    25      8.4     3.079  0.448
Football       115   613   12   12      10.661  2.508  0.403
Les_Miserable  77    254   -    36      6.597   2.641  0.736
NetScience     379   914   16   34      3.451   6.042  0.798

Table 4 Average modularity values of 10 experiments for the three algorithms on real datasets

Dataset/<Q>    LPA     LPA-TS  LPA-ITSLR
Karate         0.3174  0.3716  0.4242
Dolphin        0.4920  0.3759  0.5418
Polbooks       0.3801  0.4569  0.5207
Football       0.5819  0.6010  0.6068
Les_Miserable  0.2719  0.5007  0.5102
NetScience     0.7769  0.7573  0.7567

Table 5 Modularity comparison of five algorithms

Network     Karate             Dolphin            Polbooks           Football
COPRA       0.2348 ± 0.10187   0.3741 ± 0.03946   0.4884 ± 0.03215   0.5972 ± 0.02115
WLPA        0.3682 ± 0.08176   0.3695 ± 0.02517   0.5070 ± 0.00622   0.5981 ± 0.01374
LINSIA      0.3989 ± 0.00004   0.3878 ± 0.00005   0.4521 ± 0.00007   0.5853 ± 0.00007
LILPA       0.4213 ± 0.0029    0.4003 ± 0.00214   0.4635 ± 0.00646   0.6061 ± 0.00151
LPA-ITSLR   0.4242             0.5418             0.5207             0.6068

Table 6 Results of eight algorithms on classical networks

Criteria              Karate |Ω|  Karate Q  Dolphin |Ω|  Dolphin Q  Football |Ω|  Football Q
Fastgreedy            3           0.38      4            0.495      6             0.549
LPA                   2           0.292     3            0.492      9             0.576
Leading Eigenvector   4           0.393     5            0.491      8             0.492
Walktrap              5           0.353     4            0.489      10            0.602
NIBLPA                3           0.352     5            0.452      9             0.542
EdMot                 3           0.412     4            0.518      9             0.604
LPA-MNI               2           0.372     4            0.527      11            0.582
LPA-ITSLR             2           0.4242    2            0.5418     10            0.6068

Table 10 Community detection results of 9 large-scale artificial networks
"Dear Editor, We are truly grateful to the reviewers for the critical evaluation and thoughtful suggestions on our manuscript (“An improved two-stage label propagation algorithm based on LeaderRank”). According to the comments of the three reviewers, we have made careful revisions to the manuscript. I'm glad that after the first round of revision, two reviewers have approved the article and thought the article is well written. In the second round of review, the second reviewer made a comment on the experimental results of large-scale data sets. And our point-by-point responses to this review’s comments are as given below. We hope that these revisions are satisfactory and the revised version will be acceptable for publication in “PeerJ Computer Science”. Thank you very much for your attention to our paper. I wish you all the best!  Response to the second reviewer’s comments Dear reviewers, thank you very much for all your valuable comments. According to your suggestion, we further improved the algorithm, supplemented the corresponding experiments, and revised the paper using track mode. Our point-by-point responses to the comments are as given below. 1.  I will add the supplementary files again to make the code available. 2. According to your suggestion, we improved the algorithm and tried to carry out experiments on large-scale networks again. The largest data set in the original paper contains 10000 nodes. To verify the computational performance and utility of the proposed algorithm for large-scale network data sets, we have generated simulation data sets containing 20000, 30000, 40000 and 50000 nodes respectively. The experiment has been carried out again, and the results are shown in the table 10. From the experimental results, we draw the conclusion that, as the size of the data set increases, the time complexity of this algorithm increases, but it can still achieve a high modularity and NMI in terms of large-scale data sets, which verifies the effectiveness of the proposed algorithm. However, in the future, we will further improve the algorithm to improve its computational performance on large-scale data sets. Meanwhile, in the last part of the paper, we summary our work, the shortcomings of the algorithm and point out the further research work in the future. It is hoped that the algorithm can have a greater breakthrough in its computational performance and community division quality. Table 10 Community detection results of 9 large-scale artificial networks. Dataset |V| <k> max(k) actual number of communities number of communities found <Q> NMI LFR-11 6000 10 50 30 60 0.1 125 128 0.8730 0.9762 LFR-12 7000 10 50 30 60 0.1 130 133 0.8686 0.9510 LFR-13 8000 10 50 30 60 0.1 176 176 0.8828 0.9805 LFR-14 9000 10 50 30 60 0.1 175 178 0.8722 0.9629 LFR-15 10000 10 50 30 60 0.1 175 180 0.8775 0.9678 LFR-16 20000 10 50 30 60 0.1 436 464 0.8842 0.9844 LFR-17 30000 10 50 30 60 0.1 668 683 0.8846 0.9851 LFR-18 40000 10 50 30 60 0.1 1058 1049 0.8837 0.9793 LFR-19 50000 10 50 30 60 0.1 1382 1341 0.8643 0.9637 "
Here is a paper. Please give your review comments after reading it.