Dataset columns: instruction (string, 1 distinct value), input (string, 53-283 characters), output (string, 92-42.8k characters).
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: VEGETATION INDICES’ SPATIAL PREDICTION BASED NOVEL ALGORITHM FOR DETERMINING TSUNAMI RISK AREAS AND RISK VALUES Review round: 1 Reviewer: 2
Basic reporting: This paper studies a new algorithm to detect tsunami risk areas based on spatial modelling of a vegetation index. I think three questions must be answered in this paper. 1. I suggest adding more explanation of the flowchart/algorithm to make it clearer and more understandable. 2. In Eq. (8), the closest-point search is performed using the Euclidean distance formula. Why did you prefer the Euclidean distance over the Hamming or Hausdorff distance? 3. Would it be possible to perform the closest-point search using the similarity between data points instead of the Euclidean distance? (A small sketch contrasting the two options follows this review.) Experimental design: No comments Validity of the findings: No comments Additional comments: No additional comments
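To make questions 2 and 3 concrete, here is a minimal sketch contrasting the two search strategies; the arrays and data are illustrative, not taken from the paper:

```python
import numpy as np

points = np.random.rand(100, 2)  # candidate data points (illustrative)
query = np.array([0.3, 0.7])     # point whose closest neighbour is sought

# Euclidean closest-point search, as in the paper's Eq. (8)
nearest_euclidean = np.argmin(np.linalg.norm(points - query, axis=1))

# Similarity-based alternative (cosine similarity): take the MOST similar point
sims = points @ query / (np.linalg.norm(points, axis=1) * np.linalg.norm(query))
nearest_by_similarity = np.argmax(sims)
```

Note that the Hamming distance is only defined for discrete, equal-length vectors and the Hausdorff distance compares point sets rather than single points, which may be why the Euclidean metric was chosen; question 2 asks the authors to state this.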
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: VEGETATION INDICES’ SPATIAL PREDICTION BASED NOVEL ALGORITHM FOR DETERMINING TSUNAMI RISK AREAS AND RISK VALUES Review round: 2 Reviewer: 1
Basic reporting: The English structure of the revised paper is clearer and much improved over the initial submission. Figures and tables are given appropriate titles, and both the results data and the raw data were shared. All terms and acronyms used in this paper are defined according to the context of the paper. Experimental design: No comment; everything is accounted for. Validity of the findings: Findings are valid, and conclusions are well stated with future recommendations. Additional comments: Dear PeerJ Team, comparing my suggested comments and revisions from my previous review of this paper with the newly submitted revised paper, all my comments and suggestions were addressed, which greatly improved the quality and consistency of this paper.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: VEGETATION INDICES’ SPATIAL PREDICTION BASED NOVEL ALGORITHM FOR DETERMINING TSUNAMI RISK AREAS AND RISK VALUES Review round: 2 Reviewer: 2
Basic reporting: The authors have addressed my concerns. Experimental design: ok Validity of the findings: ok Additional comments: n/a
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A DEEP CROWD DENSITY CLASSIFICATION MODEL FOR HAJJ PILGRIMAGE USING FULLY CONVOLUTIONAL NEURAL NETWORK Review round: 1 Reviewer: 1
Basic reporting: In their work, the authors propose a new dataset for crowd analysis and, in addition, a fully convolutional neural network (FCNN)-based method to monitor the crowd. There is merit in the dataset annotation, but the manuscript still needs a lot of work. 1) Literature references are not sufficient. The related work section needs more work; a lot of the work done in this field is reviewed in the following papers [1,2]. [1] Sindagi, Vishwanath A., and Vishal M. Patel. "A survey of recent advances in CNN-based single image crowd counting and density estimation." Pattern Recognition Letters 107 (2018): 3-16. [2] Gao, Guangshuai, et al. "CNN-based density estimation and crowd counting: A survey." arXiv preprint arXiv:2003.12783 (2020). 2) The English language should be improved to ensure that readers clearly understand the text. Some examples where the language could be improved include lines 80, 93, and 157 (what is L denoting here?). Mathematical notation should be well defined (for example, in line 228, what are m and n?); the current phrasing makes comprehension difficult. Experimental design: There are major issues with the experimental setup. 1. Experiment design and experimental results. Both the proposed method and the benchmark methods are stochastic, so results from multiple independent runs are expected. What is currently reported in the paper is the result of a single run, which is not enough to draw concrete conclusions. Furthermore, multiple runs will be needed to conduct a statistical significance test. 2. Performance metrics. The typical accuracy is inappropriate when the dataset is imbalanced (ShanghaiTech and UCSD) and multiclass (the proposed dataset, ShanghaiTech, and UCSD). We know for sure that the benchmark datasets in this study fall into that category, so why is the typical accuracy still used to assess the effectiveness of the experimented methods? 3. Benchmark methods and fairness of the comparisons. 3.1 Benchmark methods. I do not think any of the methods in the experiments was specifically proposed for crowd density classification. A number of studies have proposed techniques similar to the proposed method [1,2], e.g., utilising CNNs or other machine learning methods to classify/estimate crowd density. Why were none of these included in the comparisons, despite some such methods being discussed in the related work section? [1] Gao, Guangshuai, et al. "CNN-based density estimation and crowd counting: A survey." arXiv preprint arXiv:2003.12783 (2020). 3.2 Fairness. (a) The proposed method utilises pre-trained models (transfer learning), whereas the benchmark methods are trained from scratch. I do not think this is a fair comparison unless the study is about transfer learning vs. conventional learning. (b) Overall, the dataset annotation looks fair, except that there is a chance of human bias with 5 classes, as it is very difficult to see the difference between low and medium (3rd image), and the same is the case between medium and high. In my personal opinion, having three classes (low, medium, and high) would be more appropriate than five to reduce human bias or error. Validity of the findings: Experiments 1 and 2 have fundamental issues that need to be resolved before making any valid conclusions. 1. The main issue with stochastic methods is that different results are produced depending on the starting point of the search.
In neural networks, the random value generator (more specifically, the seed of the random value generator) initialises the weights, causing the network to start the process from a different point in the search space. Therefore, the method must be rerun multiple times using different seed values while keeping everything else identical (a sketch follows this review). 2. How were the other benchmark datasets, for which category-wise evaluation (mentioned in Table 1) is not available, modified into a classification problem? 3. What are the hyperparameters for all the benchmarks and the proposed model? 4. What are the train and test sizes for the other benchmark datasets? 5. Why are recall, F-score, etc. not reported? 6. All benchmark models come with pre-trained weights, which means they differ from each other. For example, I can take a method that was trained to do anomaly detection and fine-tune it (re-train it) for crowd classification, and then compare it to a method that was trained to perform cancer image segmentation in MRIs after re-training it for crowd classification. Can you see the difference between the base models? Additional comments: No additional comments
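A minimal sketch of the multi-seed protocol requested in point 1; the training routine is a placeholder stub, not the paper's model:

```python
import numpy as np
import torch

def train_and_evaluate(seed: int) -> float:
    """One full training/evaluation run; everything except the seed stays fixed."""
    torch.manual_seed(seed)      # controls weight initialisation
    np.random.seed(seed)         # controls any numpy-driven randomness
    # ... build, train, and test the model here (omitted placeholder) ...
    return float(torch.rand(1))  # stand-in for the real test metric

# Rerun with different seeds while keeping everything else identical,
# then report mean and standard deviation for significance testing.
scores = [train_and_evaluate(s) for s in range(5)]
print(f"mean = {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```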
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A DEEP CROWD DENSITY CLASSIFICATION MODEL FOR HAJJ PILGRIMAGE USING FULLY CONVOLUTIONAL NEURAL NETWORK Review round: 1 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: This article needs important modifications to be suitable for this journal. I suggest a major revision for this paper. The main comments are: 1) In this study, the authors propose a deep crowd density classification model for the Hajj pilgrimage using a fully convolutional neural network. It can be noticed that the CNN used in this manuscript has been studied and applied in the previous literature. 2) The novelty of this paper should be further justified in order to establish its contributions to the body of knowledge. 3) The Abstract section should be improved, following the structure proposed by the journal. 4) In the Introduction section, the authors should improve the research background, the review of significant works in the specific study area, the knowledge gap, the problem statement, and the novelty of the research. 5) The presentation of the results and conclusions is not sufficient; it should be strengthened. 6) In the Conclusions section, the findings should be explained clearly. 7) The authors should elaborate more on the practical implications of their study, as well as its limitations and further research opportunities. 8) The English writing is not fluent throughout the paper. There are a lot of grammatical errors, which should be revised by the authors; the paper needs a professional English revision. The authors should follow the journal's author guide for writing style throughout the paper.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A DEEP CROWD DENSITY CLASSIFICATION MODEL FOR HAJJ PILGRIMAGE USING FULLY CONVOLUTIONAL NEURAL NETWORK Review round: 2 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: This paper needs to consider some suggestions for improvement that were not taken into account in the first revision: 1- In the Conclusions section, the findings should be explained clearly. 2- The conclusions have significantly improved, but the authors should elaborate more on the practical implications of their study, as well as its limitations and further research opportunities. After these modifications, I suggest accepting this paper.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A HYBRID FEATURE SELECTION ALGORITHM AND ITS APPLICATION IN BIOINFORMATICS Review round: 1 Reviewer: 1
Basic reporting: Wang et al. developed a hybrid MMPSO method combining feature ranking and heuristic searching, which yields higher classification accuracy. To demonstrate the accuracy of the method, the authors applied it to eleven datasets and identified 18 tumor-specific genes, nine of which were further confirmed in an HCC TCGA dataset. Together, the study presents a highly effective MMPSO algorithm with great potential to identify novel biomarkers and therapeutic targets for cancer research and treatment. Experimental design: The study is original and novel in the sense that it develops the hybrid algorithm by combining the mRMR and MIC methods. The methodology is very concrete, as the authors compared the MMPSO method to eight other methods and clearly showed its superiority. Furthermore, MMPSO was applied to the HCC biological dataset and gave an 18-gene signature which successfully separates 531 samples into cancer and normal groups. Many of the targets in the signature were validated by published research and/or the authors' own Kaplan-Meier analysis. Therefore, the study is well within the Aims and Scope of PeerJ Computer Science. Validity of the findings: The findings in this study are thoroughly validated. The conclusions are well stated. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A HYBRID FEATURE SELECTION ALGORITHM AND ITS APPLICATION IN BIOINFORMATICS Review round: 1 Reviewer: 2
Basic reporting: The authors aimed to develop an improved feature selection algorithm. They applied the algorithm to a public gene expression dataset and found some candidate biomarker genes for classifying tumor versus normal samples. This is incremental work based on previous methods and has potentially interesting applications in tumor diagnosis. However, the following concerns should be addressed to meet the basic requirements of a bioinformatics methods paper. Experimental design: Major concern #1: the authors used the same dataset (LIHC) for selecting the tumor/normal gene signatures and for testing their performance. This does not prove the superiority of the algorithm, since it can simply overfit this dataset. I suggest 1) selecting features based on a subset of samples and validating on the left-out samples; 2) using another cohort of normal/tumor gene expression datasets for testing the selected features. Major concern #2: the authors did not compare the gene signatures selected by their method against other methods. For instance, using the top differential genes from a simple differential gene expression analysis with DESeq2 or edgeR, do they separate the tumor samples better than these 18 genes? Given that in many of the ten datasets in Figure 1 and Table 2 mRMR performs very similarly to MMPSO, what is the performance on the LIHC dataset if only mRMR is used to select features? Validity of the findings: Major concern #3: In line 210, the authors listed the 18 signature genes and stated that "nine genes were significantly upregulated in tumors compared to normal samples" but did not clarify which genes were upregulated or downregulated and what the criteria were. The authors also did not provide methodological detail on how they determined the high and low expression levels for sample stratification in Figure 4. For instance, I checked the average values of some of the genes in the two raw data files (data_tumor.csv, data_normal.csv) provided by the authors: ACTN1 (ENSG00000072110): Tumor = 12.78586, Normal = 12.33047; CACHD1 (ENSG00000158966): Tumor = 7.624089, Normal = 7.308306. In lines 292-296, the authors discussed literature stating that a higher level of ACTN1 is associated with HCC size, and in Figure 4, CACHD1 was described as an upregulated gene associated with worse prognosis; however, I do not consider a difference of < 0.5 to be significant in gene expression. At the very least, the authors should provide a box-whisker plot with proper statistics of the expression of these 18 genes across the normal/tumor samples they used to demonstrate their points (a sketch follows this review). Additional comments: Minor comments. Please read carefully for typos. Line 18: "a optimal subset". Line 70: "Yuanyuan et al. 2021". In line 17, MMPSO should not appear only as an abbreviation, because this is its first appearance in the whole manuscript.
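For major concern #3, a small sketch of the requested box-whisker plot with statistics, assuming the shared CSVs have samples as rows and Ensembl gene IDs as columns (the file names and gene ID follow the review; the column layout is an assumption):

```python
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# File names from the authors' shared raw data; column layout is assumed
tumor = pd.read_csv("data_tumor.csv")
normal = pd.read_csv("data_normal.csv")

gene = "ENSG00000072110"  # ACTN1, one of the 18 signature genes
t, n = tumor[gene].dropna(), normal[gene].dropna()

# Non-parametric test of differential expression between the two groups
stat, p = stats.mannwhitneyu(t, n, alternative="two-sided")
print(f"{gene}: tumor mean = {t.mean():.2f}, normal mean = {n.mean():.2f}, p = {p:.2e}")

# The box-whisker plot requested above
plt.boxplot([t, n], labels=["Tumor", "Normal"])
plt.ylabel("Expression")
plt.title(gene)
plt.show()
```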
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A HYBRID FEATURE SELECTION ALGORITHM AND ITS APPLICATION IN BIOINFORMATICS Review round: 2 Reviewer: 1
Basic reporting: The authors have properly addressed the questions I raised. Experimental design: The authors have properly addressed the questions I raised. Validity of the findings: The authors have properly addressed the questions I raised. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 1 Reviewer: 1
Basic reporting: The authors should make changes to improve the readability of the paper. The references are insufficient; the authors need to add popular related articles to support the proposed arguments. The authors also need to add more detailed content to enrich the proofs. Experimental design: No comments. Validity of the findings: The paper proposes a conceptual approach. If possible, the authors should verify the applicability of the proposed algorithm. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 1 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 1 Reviewer: 3
Basic reporting: In this article, the authors propose a blockchain-based multi-keyword verifiable symmetric searchable encryption scheme. Compared with previous work, it improves search efficiency, ensures the fairness of the search, and can perform multi-keyword search result verification. Finally, the authors use experiments to show that this method is safe and effective in practical applications. Here are some comments for the authors to improve the quality of this manuscript. Experimental design: The experimental design should be more detailed; adding the figures would make it more convincing. Validity of the findings: This is the same problem: adding your figures would make your findings more convincing. Additional comments: 1. I don't know if it's due to the submission system or other reasons, but I did not see your figures in the article. 2. In the introduction, I think you can write more compactly. For example, at line 45 the shortcomings of SSE have already been introduced earlier, and dynamic updates are also very important later on. 3. It is recommended to use subtitles to make your article clearer. 4. Please use the correct template of this journal.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 2 Reviewer: 1
Basic reporting: The revised draft made progress. Experimental design: It is acceptable. Validity of the findings: It is acceptable. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 2 Reviewer: 2
Basic reporting: The authors modified the manuscript according to the comments. It has improved a lot compared to the previous version. However, there are still some details to be corrected to make this manuscript better. Would the authors use the LaTeX template if possible, since the typesetting is not neat, especially for the equations? Experimental design: The proof of the scheme is sound. The simulation results can be improved if possible. Validity of the findings: The novelty is fine. Are there any data in the simulation? Please list them or state their origin if possible. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 3 Reviewer: 1
Basic reporting: The revised draft made progress. Experimental design: It is acceptable. Validity of the findings: It is acceptable. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TOWARDS EFFICIENT VERIFIABLE MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA BASED ON BLOCKCHAIN Review round: 3 Reviewer: 2
Basic reporting: I recommend accepting it in the current version. Experimental design: The experiments in this paper are fine. Validity of the findings: The findings are sound. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DMPNET: DENSELY CONNECTED MULTI-SCALE PYRAMID NETWORKS FOR CROWD COUNTING Review round: 1 Reviewer: 1
Basic reporting: The paper, at a high level, presents a way to cope with the scale variation problem in crowd counting research. The authors clearly write about their motivation for coming up with the proposals and also compare the counting performance with state-of-the-art methods on 4 major datasets. The main contributions of the paper are the proposed Multi-scale Pyramid Networks (MPN), which are densely connected for better information retention throughout the deep networks. I shall comment on the basic components of the paper that could be improved as follows: 1. I believe that the very first paper that uses dense connections is "Dense Scale Network for Crowd Counting" (DSNet in Table 2) [1]. Thus, (Dai et al., 2021) should be mentioned when introducing the dense connections. 2. The published year of DSNet is not consistent: (Dai et al., 2019) or (Dai et al., 2021)? In crowd counting research, to "keep the input and output resolutions unchanged", the encoder-decoder structure (see [3] and [5]) is usually utilized. An introduction to this structure is worth adding for better context. 3. In the RELATED WORK section, to better support the difference between this work and others, there should be examples of previous papers that add "extra perspective maps or attention maps", for instance [2], [3], and [4]. Specifically, [3] also adopts multiple branches of convolution with different kernel sizes, namely the ASPP structure. 4. In Figure 4, please carefully check whether #groups should be 2 or 4. Based on Figure 4, are the incoming feature channels divided into 2 groups? 5. When describing the configurations (kernel_size, #output_channel, ...) of a convolution operation, it is preferable to use \times over the "*" notation; the "*" notation may be reserved for the convolution operation itself. 6. In Figure 5, is there any particular reason that the larger the kernel_size, the smaller the output channels and the groups? Is this just to balance computational resources between branches? 7. (Typo) Change "MFFE" to "MFFN"? 8. (Typo) What is G (with \theta parameterization) in the density level consistency loss? Should G be F? 9. In the calculation of the MAE loss, is C the summation over all pixels? The definition of C can be made clearer. Do the I and D^{GT} notations have the same meaning? As I_{i} cannot appear twice, please revise the MAE loss formulation (one consistent formulation is sketched below). 10. Based on what criterion are \alpha and \beta for weighting the losses chosen? Experience? Random choice? Or validation counting performance? References [1] Dai, Feng, et al. "Dense scale network for crowd counting." Proceedings of the 2021 International Conference on Multimedia Retrieval. 2021. [2] Zhu, Liang, et al. "Dual path multi-scale fusion networks with attention for crowd counting." arXiv preprint arXiv:1902.01115 (2019). [3] Thanasutives, Pongpisit, et al. "Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting." 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. [4] Shi, Miaojing, et al. "Revisiting perspective information for efficient crowd counting." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [5] Jiang, Xiaolong, et al. "Crowd counting and density estimation by trellis encoder-decoder networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [6] Chen, Liang-Chieh, et al. "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs." IEEE Transactions on Pattern Analysis and Machine Intelligence 40.4 (2017): 834-848.
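Regarding point 9, one internally consistent way to write the loss, using the review's symbols F, I_i, and D^{GT} (a sketch of what the revision could look like, not the paper's actual formulation):

```latex
L_{MAE}(\theta) = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{C}\sum_{c=1}^{C}
\left| F(I_i;\theta)_c - D^{GT}_{i,c} \right|
```

Here N is the number of training images and C the number of pixels in each density map, so C's role as a per-pixel summation is explicit and I_i appears only once.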
Experimental design: 1. When defining the evaluation metrics, you can use D^{GT} instead of Z'; the notations need consistency. There are no bolded numbers indicating the best results in any table. 2. The list of current SOTA methods is incomplete: [2], [3], [4], and [5] are missing. 3. In Table 3, it is not clear whether (w/ LPN, w/o GPN) and (w/ GPN, w/o LPN) are trained with MFFN or not. In the 3rd row, should MPN be MFFN? Another typo? 4. From Table 4, it is still unclear how many MPN blocks should be densely connected. In this paper it is 3, but how about 1, 2, or even 4? This may be addressed in the ablation study section. 5. This paper recommends the use of group convolution (over the normal one) but does not compare the number of trainable parameters of the proposed Multi-scale Pyramid Network when G varies, including the case where G = 1 for all the branches (a parameter-count sketch follows this review). References [1] Dai, Feng, et al. "Dense scale network for crowd counting." Proceedings of the 2021 International Conference on Multimedia Retrieval. 2021. [2] Zhu, Liang, et al. "Dual path multi-scale fusion networks with attention for crowd counting." arXiv preprint arXiv:1902.01115 (2019). [3] Thanasutives, Pongpisit, et al. "Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting." 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. [4] Shi, Miaojing, et al. "Revisiting perspective information for efficient crowd counting." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [5] Jiang, Xiaolong, et al. "Crowd counting and density estimation by trellis encoder-decoder networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [6] Chen, Liang-Chieh, et al. "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs." IEEE Transactions on Pattern Analysis and Machine Intelligence 40.4 (2017): 834-848. Validity of the findings: 1. (Important) Since this paper would be perceived (if it gets published) as an incrementally improved version of DSNet, some audiences might be confused by the fact that DMPNet only outperforms DSNet in terms of lowering the MSE computed on the ShanghaiTech Part_B dataset. Hence, I advise that you also compare the number of trainable parameters or the inference speed for better justification of the proposed DMPNet. I guess that DMPNet might contain fewer parameters, as you employed the group convolution operation. 2. As I mentioned above, it is hard to justify that DMPNet is "better in both mean absolute error (MAE) and mean squared error (MSE)" in the current state of the paper. So there may be doubts about whether "MPN can effectively extract multi-scale features" or not. 3. On lines 290-291, the claim that "Our DMPNet has well solved the problems of crowd occlusion, perspective distortion, and scale variations" is a bit exaggerated. It is unclear how exactly you deal with perspective distortion; is this claim solely based on the experimental results on the UCF_QNRF dataset? Additional comments: No additional comments
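For point 5 under Experimental design, a minimal PyTorch sketch of the parameter comparison being requested; the layer sizes are illustrative, not DMPNet's actual configuration:

```python
import torch.nn as nn

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

in_ch, out_ch, k = 64, 64, 3
for g in (1, 2, 4):  # g = 1 is the ordinary convolution
    conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, groups=g)
    # a grouped convolution stores out_ch * (in_ch / g) * k * k weights + out_ch biases
    print(f"groups={g}: {n_params(conv)} trainable parameters")
```

Such a table would substantiate the intuition in the Validity section that DMPNet may contain fewer parameters than DSNet.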
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DMPNET: DENSELY CONNECTED MULTI-SCALE PYRAMID NETWORKS FOR CROWD COUNTING Review round: 1 Reviewer: 2
Basic reporting: The authors propose a Densely connected Multi-scale Pyramid Network (DMPNet) based on VGG and the Multi-scale Pyramid Network (MPN). The latter consists of three modules that aim to extract features at different scales. The authors compared the performance of DMPNet with other state-of-the-art methods. 1. Figures and tables should be placed before the bibliography. 2. The related work section should include all methods used for comparison. 3. Introduce every acronym before using it (e.g., CNN, ASNet, etc.). 4. Viresh et al. (lines 97-98) and the CP-CNN method (Table 2) are not present in the bibliography. 5. Figures 3 and 4 have the same caption. 6. Captions should be improved. 7. In Table 2, no results are highlighted in bold. 8. Figure 1 presents "Table 1" in the caption. 9. In the manuscript and Figure 5, the symbol "*" was used; it should be replaced with "x" (in LaTeX, \times). 10. Lines 35-36, 53-54, 103-106, 107-108, and 195-196 need references. Experimental design: The method is described in sufficient detail. Moreover, the authors provide the code. 1. The training methods section can be integrated into Experiments and Discussion. 2. The measure MSE (Section "Evaluation Metrics") is actually the root mean squared error (RMSE). 3. Lines 267-271 and 280-282 can be improved. Validity of the findings: 1. The ablation experiment section can be improved by studying these effects on the other considered datasets. Additional comments: 1. There are some typos (e.g., SOAT --> SOTA).
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DMPNET: DENSELY CONNECTED MULTI-SCALE PYRAMID NETWORKS FOR CROWD COUNTING Review round: 2 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: INTERPRETABLE DEEP LEARNING FOR THE PREDICTION OF ICU ADMISSION LIKELIHOOD AND MORTALITY OF COVID-19 PATIENTS Review round: 1 Reviewer: 1
Basic reporting: The authors should reduce the number of tables as much as possible. Some tables could be merged or moved to supplementary files, such as those for hyperparameter tuning. They should keep only the most important parts for readers. Experimental design: The research question centers on "interpretability". The authors only pick the important features from the model, which does not get to the point of being "interpretable". They should explain their definition of "interpretable". Validity of the findings: Nil. Additional comments: The analytic process could be more concise and to the point.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: INTERPRETABLE DEEP LEARNING FOR THE PREDICTION OF ICU ADMISSION LIKELIHOOD AND MORTALITY OF COVID-19 PATIENTS Review round: 1 Reviewer: 2
Basic reporting: * The report is too detailed and includes trivial information, which makes for an exhausting reading experience. In scientific papers many different methods are used these days, and it is necessary to think about what information should be included, what should be cited, and what should be left out (as trivial information). This is a process from which the current report can greatly benefit. More focus needs to be directed towards the main findings of the paper and the methods that ended up being used in the most optimal model that was eventually applied. * The figures and tables lack detailed descriptions. Furthermore, they are not completely labelled; for instance, the x-axis and y-axis labels are missing in the feature importance plots (i.e., Figures 2 and 4). * The information provided in many tables is redundant; for instance, in tables such as Table 6, the "Without symptoms" percentages are the complements of the "With symptoms" ones. Better and briefer means of describing the 'Description of data sets' and the 'statistical analysis' must be used than listing all this information in a sequence of tables. If some of this information is not relevant or significant, it can be moved to supplementary documents rather than the main paper. * The following sentences require rewriting: - Line 145: "ADASYN (He et al., 2008) is a synthetic data generation algorithm that has the benefits of not copying the same minority data and producing more data which are harder to learn." - In equation 2: Xk is not defined. What is Xk? - Line 153: "… same proportion of observations, and also due to the dataset 154 being imbalanced. The fold used was k=5 which means …" - Top of page 4: "These operations are done sequentially in the order as shown in Equations 4, and 5: FC =W(x) (10) …" Do the equations need to be repeated?! Please clarify how they relate to formulas 4 and 5. In Tables 5 and 8: both columns are labelled as "no-ICU"! Line 273: "Overall, over 50% of the individuals acquired a symptom of a disease before they died." This does not seem correct! How far over 50%? And by "a symptom" do the authors mean "at least one symptom"? This sentence suggests that a bit less than 50% of patients died without showing any symptoms?! Line 392: "The key performance metric in this analysis is the AUC score and it can be plotted using the precision-recall curve which demonstrates a trade-off between the recall score (True Positive Rate), and the precision score (Positive Predictive Value)." AUC usually refers to the area under the ROC curve. Do you mean the area under the precision-recall curve here? If yes, you can use AUCPR to avoid confusion. Experimental design: * I was not convinced of, or completely informed about, how the authors avoided overfitting to the data throughout the whole study. They mention that they use 5-fold cross-validation with stratified k-fold: "The fold used was k=5 which means that the dataset was divided into 5 folds with each fold being utilized once as a testing set, with the remaining k - 1 folds becoming the training set." However, further down in the report (line 299) they mention: "The ENTIRE dataset is split into training and validation in a 90%-10% ratio respectively, to enable a good number of data points to be trained to get better results." If the entire dataset is divided into training and validation, how is the 5-fold cross-validation implemented? (A sketch of one consistent protocol follows this review.) * The p-values and FDRs of the correlations are missing throughout the report.
Furthermore, the correlation (distance) method that was used should be clarified; was it Pearson? Validity of the findings: * My biggest issue with the study is that, as the authors have also mentioned, the data is small for the kind of prediction model the authors are aiming for. Furthermore, the data is imbalanced, and computational methods have been used to overcome the effects that this imbalance may have on the training and testing of the model. All this, together with the fact that only one dataset was used for each prediction model (one for ICU admission prediction of COVID-19 and another for mortality), makes any findings about the most effective prediction factors imprecise and unreliable if they depend solely on the computational methods used by the authors. At the very least, these findings should be directly supported by other literature and studies. For instance, has the direct association of ferritin with ICU visits or mortality of COVID-19 patients (rather than the indirect association through COPD mentioned by the authors) been discovered by other studies as well? Additional comments: My criticism of the study aside, the inclusion and possible publication of all the code used for analyzing the data and producing the figures by the authors is an excellent practice that should become more common in the scientific community.
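To illustrate the fold protocol the Experimental design question asks about, here is a minimal sketch in which ADASYN oversampling is confined to the training folds and the AUPRC is aggregated across folds; the data and classifier are placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from imblearn.over_sampling import ADASYN

rng = np.random.default_rng(0)
X = rng.random((200, 10))           # placeholder features
y = np.array([0] * 160 + [1] * 40)  # illustrative imbalanced labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    # Oversample ONLY the training folds; the held-out fold stays untouched
    X_res, y_res = ADASYN(random_state=0).fit_resample(X[train_idx], y[train_idx])
    model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    probs = model.predict_proba(X[test_idx])[:, 1]
    scores.append(average_precision_score(y[test_idx], probs))  # AUPRC per fold

print(f"AUPRC = {np.mean(scores):.3f} +/- {np.std(scores):.3f} over 5 folds")
```

Reporting the mean and standard deviation across folds would also answer the later question about which of the 5 PRC/ROC curves is being shown.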
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: INTERPRETABLE DEEP LEARNING FOR THE PREDICTION OF ICU ADMISSION LIKELIHOOD AND MORTALITY OF COVID-19 PATIENTS Review round: 2 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: The authors answered the questions
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: INTERPRETABLE DEEP LEARNING FOR THE PREDICTION OF ICU ADMISSION LIKELIHOOD AND MORTALITY OF COVID-19 PATIENTS Review round: 2 Reviewer: 2
Basic reporting: * This is a suggestion (and it is OPTIONAL!): figures have titles only and lack descriptions! Legends are described in the PeerJ "Instructions for Authors" as optional; however, I think they would improve the readability of the paper. Legends and table/figure descriptions allow readers to fully understand a plot/illustration without needing to read the text. For instance, in Figure 2 you can mention what the heat map means (e.g., whether a feature is selected at a given decision step in the model), what the x and y axes are, what the brighter colours stand for, etc. Experimental design: * It would be interesting if the authors could elaborate on how factors such as COPD and myalgia can be amongst the 3 highest predictors of mortality but not amongst the highest factors for ICU admission. Are they seen in the later and more advanced stages of COVID? * I understand that neural networks do not necessarily model in the same manner as linear regression, etc.; however, it would still be interesting if the authors could elaborate on how the shortness of breath (SOB) feature, which has the highest correlation with admission to the ICU, is not amongst its 3 highest predictors! Validity of the findings: * Something that is still not fully clear to me is how the stratified k-fold split of 5 has been implemented in the training and testing of the models, using the most optimal model for ICU admission as an example. The applied cross-validation (k-fold split of 5) involves 5 replications; therefore, I assume that 5 sets of training and testing have been executed for each prediction model? An ROC curve can be plotted for each prediction on each test subset, so this means that for each prediction model 5 PRC/ROC curves result and 5 AUPRC/AUC values are measured? The authors, however, describe only one PRC/ROC curve and one AUPRC/AUC. Are these the PRC/ROC curves of the most optimal run (with the highest AUPRC/AUC)? If that is true, it should be written clearly, and optionally the authors may want to mention the variance of the AUPRC/AUC across the 5 cross-validation runs, or report the mean AUPRC/AUC as well. Additional comments: * Correct m_l and m_s in the line beneath equation 1, beneath line 153 (some lines don't have numbers!). * Line 167: "TabNet deep learning model 15 is". Is "15" a typo?
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: INTERPRETABLE DEEP LEARNING FOR THE PREDICTION OF ICU ADMISSION LIKELIHOOD AND MORTALITY OF COVID-19 PATIENTS Review round: 3 Reviewer: 1
Basic reporting: No comment. Experimental design: No comment. Validity of the findings: No comment. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TEMPORAL DYNAMICS OF REQUIREMENTS ENGINEERING FROM MOBILE APP REVIEWS Review round: 1 Reviewer: 1
Basic reporting: Line 73: "reviews. First, we collect, pre-process and extract software requirements from large review datasets." The authors might need to add the names of the apps and explain the process in detail in Section 3. Line 74: "Then, the software requirements associated with negative reviews are organized into groups according to their content similarity." The authors might need to explain how this grouping is done. Figure 1: It is very high level and lacks analytical detail; the steps need to be shown more explicitly. Line 201: "The reviews are from 8 apps of different categories." What are the names of these apps and their categories? Lines 318-319: It would be better to move this up, after you introduce the model. Line 321: Does it mean that you consider the negative review as the change-point? This part lacks a clear explanation. Moreover, it needs more supporting examples (e.g., why did the authors consider the negative review as the change-point?). Experimental design: Lines 76 and 217: software requirements from negative reviews. The authors need to explain in more detail how they classify reviews as negative (not only by considering a low rating as negative). The authors might also need to read the reviews to check whether they can be considered negative or positive, even if the rating is low/high (1/5). For example, a reviewer might rate the app 3 out of 5, and it might still be considered a negative review. Line 219: "Thus, we use RE-BERT to extract …" It is not clear how this step is performed. How did the authors classify the requirements into complaints, bad usage experience, or malfunction? Line 193: Does the training data contain negative reviews? How would the model be able to predict them? The authors need to explain these points. Line 233: It is not clear how BERT would help your model; why do you need it in the first place? Line 234: It is not clear how this objective relates to the proposed model. Validity of the findings: In general: lack of a comprehensive experiment. The model should be compared with some of the existing solutions to assess its performance. Line 355: "we used a dataset with 86,610 reviews of": Did the authors collect the dataset, or is it publicly available? If it is publicly available, where is the reference, and why did they choose it? What is the dimensionality of the dataset? What are the names of these apps? Line 358: clustering stage (with k = 300 clusters): why k = 300? (A generic selection sketch follows this review.) Additional comments: No additional comments
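One concrete way the authors could justify the choice of k = 300 (line 358) is a score sweep over candidate values; a generic sketch, with random vectors standing in for the requirement embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.rand(500, 50)  # stand-in for the requirement/review embeddings

for k in (100, 200, 300, 400):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))  # prefer the k with the highest score
```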
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TEMPORAL DYNAMICS OF REQUIREMENTS ENGINEERING FROM MOBILE APP REVIEWS Review round: 1 Reviewer: 2
Basic reporting: The paper does not follow the standard sections for the journal. Furthermore, the Acknowledgments acknowledge funders, in further contradiction of the guidelines. The authors do not share any raw data for their evaluation. Neither the app reviews nor the results are shared; therefore, none of their results can be judged. Experimental design: no comment Validity of the findings: The data used for the evaluation was not made available; therefore the validity cannot be judged. Explaining the dataset as well as sharing it online would improve the paper a lot. Furthermore, the authors could also publish the results obtained on the data after applying the MAPP method. Additional comments: The paper covers an interesting and promising topic. It is written in clear and well understandable language. The two problems mentioned can easily be addressed with a revision of the paper. As a minor addition, I would also suggest extending the motivation with references that support the goal of continuously evaluating user feedback alongside the development process, as Palomba et al. [1], Hassan et al. [2], and Scherr et al. [3] do. 1. Palomba, F., Linares-Vásquez, M., Bavota, G., Oliveto, R., Di Penta, M., Poshyvanyk, D., De Lucia, A.: Crowdsourcing User Reviews to Support the Evolution of Mobile Apps. Journal of Systems and Software 137 (March 2018), 143-162. DOI: 10.1016/j.jss.2017.11.043 2. Hassan, S., Tantithamthavorn, C., Bezemer, C.-P., Hassan, A.: Studying the dialogue between users and developers of free apps in the Google Play Store. Empirical Software Engineering 23(3), 1275-1312 (2018). https://doi.org/10.1007/s10664-017-9538-9 3. Scherr, S., Hupp, S., Elberzhager, F.: Establishing Continuous App Improvement by Considering Heterogenous Data Sources. International Journal of Interactive Mobile Technologies (iJIM) 15(10), 66-86 (2021)
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TEMPORAL DYNAMICS OF REQUIREMENTS ENGINEERING FROM MOBILE APP REVIEWS Review round: 1 Reviewer: 3
Basic reporting: - Missing a threats to validity and/or limitations section. - Missing a discussion and/or implications section. - It still isn't clear to me how this can be used concretely, as it is not detecting emerging issues but rather predicting the future frequency of a requirement in negative reviews. Could you elaborate more on why this prediction is needed, given that the requirements from negative reviews should have already impacted the app's rating? Additionally, as a developer looking at the results, when would I say that a problem/requirement is becoming major (e.g., when the slope of change/the change in frequency is greater than some particular threshold)? These points could be addressed in the discussion and/or implications section. Experimental design: - Some methodological decisions need more details (e.g., I'm not quite sure how you decide the starting number k; does selecting a different starting number k affect the result in any way? What is the rationale behind selecting six clusters in your experiment? Can a requirement belong to more than one cluster?) - Were reviews cleaned (e.g., based on length)? - Your experiment should also include using app update dates as changepoints, because prior research suggests that you will naturally see a spike of user reviews shortly after app updates (a small illustration follows this review). Validity of the findings: - For a journal paper, a further extensive evaluation is needed. I would argue that reviews from food ordering apps do not contain rich data regarding problems/requirements with the app itself but rather a great deal of information/complaints about the services. - How applicable is this to apps in other categories? Your experimentation is done only on one type of app, and only popular ones (please discuss this in the threats to validity and/or limitations section). - The experimental evaluation is quite lacking compared to other sections. You could revisit the research question you introduced in the introduction. - A sensitivity analysis may be needed. Additional comments: - Interesting research, relevant to the journal. - It is well written, structured, and presented. - Extensive literature review. - The authors provided code to examine.
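To make the changepoint suggestion concrete, a small sketch using the ruptures library on a synthetic weekly frequency series; the library choice and the data are this review's illustration, not the paper's pipeline:

```python
import numpy as np
import ruptures as rpt  # pip install ruptures

# Synthetic weekly frequency of one requirement in negative reviews,
# with a jump such as might follow an app update
rng = np.random.default_rng(1)
signal = np.r_[rng.poisson(5, 30), rng.poisson(20, 20)].astype(float)

algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
changepoints = algo.predict(pen=5)  # indices where the frequency regime changes
print(changepoints)
```

Comparing the detected indices against the app's actual update dates would test whether spikes follow releases, as suggested above.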
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TEMPORAL DYNAMICS OF REQUIREMENTS ENGINEERING FROM MOBILE APP REVIEWS Review round: 2 Reviewer: 1
Basic reporting: All good Experimental design: All good Validity of the findings: All good Additional comments: All good
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: TEMPORAL DYNAMICS OF REQUIREMENTS ENGINEERING FROM MOBILE APP REVIEWS Review round: 2 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: The findings are explained in a more solid way due to the introduction of a discussion section. Nevertheless, half of this rather short section is spent on the limitations and not on discussing the findings. I would like to see a deeper discussion of the interesting findings. Additional comments: The paper covers an interesting and promising topic. It is written in clear and well understandable language. The authors provided a significant improvement over the first version.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACO-KINEMATIC: A HYBRID FIRST OFF THE STARTING BLOCK Review round: 1 Reviewer: 1
Basic reporting: 1. When several titles are cited in the text, they should be listed in increasing order (lines 33, 53, 103, 114). 2. Use the same format when several titles are cited: [a] [b] [c] or [a,b,c] (line 103). 3. Line 134: Section 4 is too short; Section 3 can become "Problem statement and objectives". 4. Line 140: After mentioning Dorigo, include a citation of his work. 5. Line 141: The first to propose a variant of ACO for continuous optimisation was Patrick Siarry (Continuous interacting ant colony algorithm based on dense heterarchy, J. Dréo, P. Siarry, Future Generation Computer Systems 20 (5), 841-856, 2004); cite him together with [32]. 6. Line 169: Remove Section 6; kinematic equations are discussed in Section 9. 7. Line 207, equation (13): This equation does not express the path length, and minimising this function does not give the shortest path. The objective function must be the path length, with the end point at the target. Collision avoidance can be a constraint; safe avoidance of obstacles can be handled by defining a safety distance from the obstacles (an illustrative formulation follows this review). Experimental design: The paper needs new experiments after rewriting the objectives and constraints. Validity of the findings: The findings will be valid after recalculating with the new objective function. Additional comments: No additional comments
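A sketch of the kind of formulation point 7 asks for; the notation is illustrative and not taken from the paper's equation (13):

```latex
\min_{p_1,\dots,p_n} \; L = \sum_{i=1}^{n-1} \lVert p_{i+1} - p_i \rVert_2
\quad \text{s.t.} \quad p_n = p_{\mathrm{target}},
\qquad d(p_i, O_j) \ge d_{\mathrm{safe}} \;\; \forall i, j
```

Here the p_i are path waypoints, O_j the obstacles, d(.,.) the point-to-obstacle distance, and d_safe the safety margin, so the objective is the path length while collision avoidance enters as a constraint.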
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACO-KINEMATIC: A HYBRID FIRST OFF THE STARTING BLOCK Review round: 1 Reviewer: 2
Basic reporting: No Comment Experimental design: No Comment Validity of the findings: No Comment Additional comments: This paper presents a combined technique based on the ant colony optimization (ACO) algorithm and kinematic equations, named ACO-Kinematic, to solve the robot navigation problem in static environments. In this method, ACO is used to find a collision-free route to the next step, while kinematic equations are used to control and move the robot to the newly selected step. The paper can be considered for publication in PeerJ Computer Science if the following minor and major comments are carefully addressed: 1) The main limitation of this study is that the ACO-Kinematic algorithm is used for path planning of the mobile robot in static environments, i.e., considering only static obstacles with known locations. Although the authors correctly addressed this as a limitation of their work in the Conclusion, there is still a major issue: what about static obstacles with unknown locations? If the robot is not aware of the locations of obstacles (even static ones), it cannot run ACO beforehand to find the full path for the mobile robot. 2) In the original ACO, pheromone and heuristic information (where available) are used to calculate the probability of the different choices (i.e., the next steps in your study) using the ACO selection rule (the standard rule is recalled after this review). Please provide more details about how the next steps are selected via ACO. Why did you not consider heuristic information in your model? For example, the distance to the obstacles and the angle to the target can be used as very informative heuristic information, not only to speed up the algorithm but also to improve the solution quality. There is a fuzzy-heuristic-based ACO (combining fuzzy heuristic information and pheromone): "FH-ACO: Fuzzy heuristic-based ant colony optimization for joint virtual network function placement and routing", recently published in Applied Soft Computing, 107, 107401. It utilizes a multi-criteria fuzzy heuristic as the heuristic information to guide ACO in finding better solutions, together with pheromone, to construct the full path. You should mention this paper in the Introduction or Literature Review and discuss why you did not consider heuristic information in the ACO selection rule. 3) How are uncertainties handled in your model? You should have a plan to handle uncertainties of the environment, or at least discuss this as a limitation of your work in the Conclusion. Even for static obstacles, what is your plan if there are uncertainties in the locations of the obstacles? 4) Please provide a time complexity analysis for the different phases of ACO-Kinematic. 5) The results of ACO-Kinematic should be compared with and justified against recently published heuristic- or metaheuristic-based path planning techniques. 6) Finally, the paper should be carefully double-checked to be free of errors. This manuscript can be reconsidered for publication in PeerJ Computer Science if the above issues are carefully addressed.
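For reference, the textbook ACO transition rule that comment 2 refers to, in which pheromone and heuristic information jointly determine the choice probability (standard notation, not the paper's):

```latex
p_{ij} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
              {\sum_{l \in \mathcal{N}_i} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}}
```

Here N_i is the set of feasible next steps from state i, tau the pheromone, and eta the heuristic information; beta = 0 yields a pheromone-only rule, while eta_ij could, as suggested above, encode clearance from obstacles or the angle to the target.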
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACO-KINEMATIC: A HYBRID FIRST OFF THE STARTING BLOCK Review round: 2 Reviewer: 1
Basic reporting: The authors corrected the paper according to the reviewers' comments. Experimental design: The authors corrected the paper according to the reviewers' comments. Validity of the findings: The authors corrected the paper according to the reviewers' comments. Additional comments: The paper was corrected according to the reviewers' comments.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACO-KINEMATIC: A HYBRID FIRST OFF THE STARTING BLOCK Review round: 2 Reviewer: 2
Basic reporting: No comment Experimental design: No comment Validity of the findings: No comment Additional comments: The revised version has been effectively improved, and all of my comments have been addressed. The current version of the manuscript can be accepted for publication.
You are one of the reviewers; your task is to write a review for the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 1 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: Given the complexity involved, the authors have produced a number of positive and welcome outcomes, including the literature review, which offers a useful overview of current research and policy, and the resulting bibliography, which provides a very useful resource for current practitioners. The most relevant papers (only relevant and state-of-the-art ones) should be qualitatively and quantitatively analyzed, stating what gaps were left in those works and how this work proposes to overcome those gaps/challenges. Experimental results should be rigorously analyzed, both theoretically and visually, with proper justification of the obtained results as well as potential comparative studies; future work can also be added.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 1 Reviewer: 2
Basic reporting: This study implements various machine learning models to measure the polarity of the sentiment presented in user reviews on the IMDB website. For this purpose, the reviews are first preprocessed to remove redundant information and noise, and then various classification models, such as support vector machines (SVM), the Naive Bayes classifier, random forest, and the gradient boosting classifier, are used to predict the sentiment of these reviews. The objective is to find the optimal process and approach to attain the highest accuracy with the best generalization. Various feature engineering approaches, such as term frequency-inverse document frequency (TF-IDF), bag of words, global vectors for word representations, and Word2Vec, are applied along with hyperparameter tuning of the classification models to enhance the classification accuracy. Experimental design: Satisfactory Validity of the findings: Satisfactory Additional comments: 1. The authors should summarize the findings/gaps from the recent literature in the form of a table. 2. Some recent works on NLP and feature extraction, such as the following, can be discussed: "An ensemble machine learning approach through effective feature extraction to classify fake news", "Analysis of dimensionality reduction techniques on big data". 3. In the proposed methodology section, the authors should clearly map the proposed work to the limitations of the existing works and discuss how the proposed work overcomes them. 4. Present a detailed analysis of the results obtained. 5. Discuss the limitations of the current work.
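As a rough illustration of the pipeline this summary describes, TF-IDF features feeding a hyperparameter-tuned classifier, here is a minimal scikit-learn sketch; the toy corpus, labels, and grid values are invented for the example and are not the paper's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy corpus standing in for the IMDB reviews; 1 = positive, 0 = negative.
texts = ["a wonderful, moving film", "dull plot and wooden acting",
         "brilliant performances throughout", "a tedious, forgettable movie"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),  # TF-IDF feature engineering
    ("clf", SVC()),                # classifier whose hyperparameters are tuned
])
param_grid = {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipeline, param_grid, cv=2)
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```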
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 1 Reviewer: 3
Basic reporting: The literature review is very superficial, and the most recent paper reviewed was published in 2019. The literature review is not well guided, and the authors have made very generic assumptions. Experimental design: The authors are advised to try recent LSTM models, which are more suitable for this type of problem (see the sketch below). Validity of the findings: There isn't any significant contribution in terms of methodology or results. The authors need to justify the results and explain why a particular model should yield better results. Additional comments: The paper is basically a comparative analysis of various machine learning algorithms on a published dataset and has no real scientific contribution.
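For readers weighing the LSTM recommendation, here is a minimal Keras sketch of an LSTM sentiment classifier; the vocabulary size, sequence length, and layer widths are placeholder values, not settings tuned for the paper's dataset:

```python
import tensorflow as tf

vocab_size, seq_len = 10000, 200  # illustrative values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),                # integer-encoded review
    tf.keras.layers.Embedding(vocab_size, 128),      # learned word embeddings
    tf.keras.layers.LSTM(64),                        # recurrent layer over the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```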
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 1 Reviewer: 4
Basic reporting: No comment. Experimental design: 1. There is no novelty in this research. The algorithms used (for preprocessing and classification) are common algorithms that are widely used for sentiment analysis. 2. The third contribution mentions the use of deep learning in this work, which is not correct, since the results only cover four traditional ML algorithms. 3. The authors should specify the type of decision tree used in this paper and also other parameters, such as the maximum depth of the tree or the number of features to consider when looking for the best split (a sketch of the parameters in question is given below). 4. The authors should also specify the hyperparameters and kernels used in this paper. Validity of the findings: 1. The results in Table 15 are comparable to those of previous work. Hence, the impact and novelty of this work are not evident. The authors should indicate the strengths of this work. 2. The use of the term "optimized machine learning algorithms" in the title does not reflect the actual research outcome. The authors should at least describe the optimization process involved in this work. 3. In terms of evaluation validity, how do the authors ensure that a similar evaluation method was used by all the papers in Table 15? Additional comments: No additional comments
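To illustrate the level of detail points 3 and 4 ask for, here is a scikit-learn sketch that names the parameters in question; the values shown are illustrative, not the paper's settings:

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Every parameter that affects the result should be reported in the paper.
tree = DecisionTreeClassifier(criterion="gini", max_depth=10, max_features="sqrt")
forest = RandomForestClassifier(n_estimators=200, max_depth=None, max_features="sqrt")
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
gbc = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
```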
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 2 Reviewer: 1
Basic reporting: The authors have addressed all my comments. The manuscript may be accepted for publication. Experimental design: Satisfactory Validity of the findings: No comments Additional comments: The authors have addressed all my comments. The manuscript may be accepted for publication.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 2 Reviewer: 2
Basic reporting: The paper presents a study to find the optimal process and approach to measure the polarity of the sentiments in user reviews on the IMDB website. In the "Related Work" section, the authors present wide-ranging research in the context of the knowledge gap they want to fill. However, two of the works in Table 1 are not reviewed in the text above: Giatgoglou et al. (2017) and Mathapati et al. (2018). I thank you for providing the code; however, your code file needs more comments to be useful to future readers. For example, the introduction does not match the code (Multi-layer Perceptron is not used, according to the paper). Besides, there are no comments about the kind of feature engineering method used at each point. The way of citing the bibliographic references is incorrect most of the time. Citations must be placed between parentheses in order to make the text more readable. For example, in line 48, instead of "...as a significant research are during the last few years Hearst (2003)" it should be "...as a significant research are during the last few years (Hearst, 2003)". The form of citation with the name of the author outside the parentheses should only be used if the name of the author is part of the sentence. For example: Singh et al. (2013) conduct experimental work about... Some minor errors are: a) Bag of Words is abbreviated as BoW, but sometimes it is written as Bow. The authors should revise the text to homogenize this acronym and others (RF). b) Some blank spaces are needed in lines 196 (between "algorithms" and "Oghina") and 201 (between "consistency" and "Lee"). c) Equations must be referenced by their number between parentheses instead of saying "the following". For example, in lines 219 and 220: TF-IDF is calculated using TF and IDF as (3). d) The sample reviews in Tables 5, 6, 7 and 8 contain a period at the end of the sentences, although these were removed in Table 4. e) In Table 6, the first sample review after case lowering still contains a "P". f) In line 325, it is indicated that Tables 11 and 13 show the BoW and TF-IDF features, but the correct tables are 9 and 10. Experimental design: In line 181, a bibliographic reference to the dataset is missing. A bibliographic reference to TextBlob is also missing. It is a Python library for processing textual data that provides a simple API for diving into common natural language processing (NLP) tasks, such as sentiment analysis, among others. It is not a lexicon-based technique. TextBlob is used to annotate the dataset with sentiments (negative or positive) in order to tackle "the contradiction in the user sentiments in the reviews and assigned labels". How do the authors know that the original classification of the reviews is contradictory and that the TextBlob result is better? The authors must explain this point. In line 197, what does "authenticity" of a learning algorithm mean? Or is it a mistake, and it should be "accuracy"? In the subsection "Feature Engineering Methods", only Bag of Words and TF-IDF are explained. The authors should also explain and reference the other two methods used in their study (Global Vectors and Word2Vec). The authors should revise equation (1) and its explanation in line 217. In it, TFij appears, but the explanation talks about term "t" and document "d". In the subsection "Supervised machine learning Models", the authors explain Naive Bayes, but in the Introduction and in the results they use the Gradient Boosting Classifier.
Since gradient boosting classifiers are a group of machine learning algorithms that combine many weak learning models to create a strong predictive model, which weak learning models does the gradient boosting classifier used here combine? The title of the paper contains "optimized machine learning algorithms", and Table 3 shows the hyperparameters used for optimizing the performance of the models. The authors should explain how and why they chose these parameters to ensure the use of optimized machine learning algorithms. The BoW features shown in Table 9 do not match those in the preprocessed text of the sample reviews in Table 8. The TF-IDF values in Table 10 are miscalculated: if a word has a frequency of 0 in a document, its TF-IDF cannot be greater than 0 (see the check below). In Figure 3, the rows of the matrix represent actual labels and the columns represent predicted labels; however, in lines 337 and 338 the opposite is stated. Validity of the findings: The results are not very conclusive, and the analysis of them is merely a reading of the table, with no discussion or deep explanation. There are no statistical tests to see whether the differences are significant. Additional comments: No additional comments
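The TF-IDF point about Table 10 can be verified with a few lines of code. This sketch uses the plain tf x idf definition (one of several common smoothing variants); the toy corpus is invented for illustration:

```python
import math

def tf_idf(term, document, corpus):
    """Plain TF-IDF: tf(term, document) * idf(term)."""
    tf = document.count(term) / len(document)
    doc_freq = sum(term in doc for doc in corpus)
    idf = math.log(len(corpus) / (1 + doc_freq))  # smoothed IDF
    return tf * idf

corpus = [["good", "movie"], ["bad", "movie"], ["good", "plot"]]
# A term absent from a document has tf = 0, so its TF-IDF must be 0,
# never a positive value -- the point the review makes about Table 10.
print(tf_idf("good", ["bad", "movie"], corpus))  # -> 0.0
```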
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 3 Reviewer: 1
Basic reporting: The authors have addressed all my comments. Experimental design: Comment 10: A bibliographic reference to TextBlob is missing. It is a Python library for processing textual data that provides a simple API for diving into common natural language processing (NLP) tasks, such as sentiment analysis, among others. It is not a lexicon-based technique. The bibliographic reference to TextBlob must indicate that it is an online document, the URL, and the last query date. Author response to comment 11: "Reviewer’s is concern is genuine. For clarification we added Table 3 in the revised manuscript which contains the ’original’ label from the dataset and label from the Textblob. Original label is ’Negative’ while Textblob assigned label is ’positive’. Textblob assigned labels are more reliable as is confirmed by the higher classification accuracy from the models when used with Textblob labeled dataset." It makes no sense to rely on the classification results to claim that the labels in the original dataset are contradictory. I am sorry, but this point is not clear to me. Why do we have to give more confidence to a machine than to a human? I will try to set this out clearly. From what I read, I understand that you are using the predictions from another tool (TextBlob) to train and evaluate your proposal. In this way, you can say that your proposal is similar to a model created by TextBlob, but it may be a poor model for the real data. In fact, in this way the TextBlob model could never be improved upon, because that model is the goal. And if that model is supposed to be perfect, what is the point of the research (we already have the model from TextBlob)? So the main question is: why do you say the labels from TextBlob are better than the original data? I am sorry to say that Table 3 is not clarifying, because the text seems to have been processed and cannot be interpreted. How do we know if it is positive or negative? Validity of the findings: In the new discussion of results, the paper says: "Performance of machine learning models(*1*) is improved with Textblob data annotation(*2*)". In the response to comment 11: "Textblob assigned labels(*2*) are more reliable as is confirmed by the higher classification accuracy(*1*)". This is a case of circular reasoning: to prove *1*, we need *2*; to prove *2*, we need *1*. We enter a recursive, nonsensical proof. Regarding this, the conclusions talk about contradictions found between sentiments and assigned labels, but this is not explained in the paper. Is the dataset wrongly labeled? You seem to mean that TextBlob is detecting these mislabeled instances, but how has this been checked? Has anybody actually read the reviews and confirmed that they were mislabeled? If not, you cannot say so, because it will just create confusion, in a similar way as fake news does. Besides, more care should be taken when writing: if the dataset is about movie reviews, why mention "tweets" in the discussion? The statistical test applied is not clearly explained. What differences have been tested? A t-test is usually applied to see whether two means are different. Which two means have been compared? Additional comments: No additional comments
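For context on the labeling step being debated, this is roughly how TextBlob-based annotation works; the threshold and the binary label mapping are modelling choices the authors would make, not properties of the library:

```python
from textblob import TextBlob

def textblob_label(review, threshold=0.0):
    # TextBlob returns a polarity score in [-1, 1]; collapsing it to a
    # binary label (and the choice of threshold) is the annotator's decision.
    polarity = TextBlob(review).sentiment.polarity
    return "positive" if polarity > threshold else "negative"

print(textblob_label("The film was surprisingly good."))   # e.g. "positive"
print(textblob_label("A dull and disappointing sequel."))  # e.g. "negative"
```

This makes the circularity concern visible: if these machine-assigned labels are used for both training and evaluation, higher accuracy shows agreement with TextBlob, not correctness with respect to the human-assigned ground truth.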
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CLASSIFICATION OF MOVIE REVIEWS USING TERM FREQUENCY-INVERSE DOCUMENT FREQUENCY AND OPTIMIZED MACHINE LEARNING ALGORITHMS Review round: 4 Reviewer: 1
Basic reporting: The authors have addressed my comments. Experimental design: The authors have addressed my comments. Validity of the findings: The authors have addressed all my comments. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 1 Reviewer: 1
Basic reporting: "The authors introduced Whale Optimization Algorithm (WOA) and compare it with Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO) algorithms in the context of NOMA-based wireless system. The manuscript is easy to follow. The authors can consider the below suggestions to improve the quality of the paper. As for the writing: - abstract: "A user grouping,... assume..." to passive voice. - please avoid using ambiguous expressions such as "high", "significantly comparable", etc. - line 144 two "at". - line 151 unfinished sentence. - avoid starting sentences with Greek symbols or "Which". - Grey Wolf Optimizer (GWO): The gender of the wolve is emphasized. How does it contribute to the algorithm? - "Figure 2: Grey wolf hierarchy ?" - line 379 "is assume" to passive voice. - line 412 "simulations results" Experimental design: The authors proper explanation NOMA-based wireless system performance and detailed analysis. Neither experimental verification nor comparison to other methods/models is presented. and more comparison to recently published papers that may prove scientific value is not provided Validity of the findings: Please restructure the paper, as sections "Spectral efficiency maximization" and "Solution of proposed model" can be subsections instead of being equal to sections "Introduction" and "Discussion". If possible, please elaborate the discussion on the Simulation results and add application scenarios and/or future works in the Conclusion. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 1 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: This paper studies the uplink NOMA system with the WOA algorithm, but the reviewer cannot accept this manuscript. Comments are as follows: 1. The authors merely applied the WOA algorithm to uplink NOMA systems, and the reviewer could not find any interesting idea or intuition in this manuscript. 2. Fig. 7 shows no performance gain of the WOA algorithm compared with the PSO or GWO algorithms. 3. Future work should be added in the last section.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 1 Reviewer: 3
Basic reporting: Although the language of the article is understandable, it needs some enhancement. For example, when the authors wrote "In this work, a newly developed Whale Optimization Algorithm (WOA) is implemented", they did not mention the purpose of using this algorithm specifically. The article also lacks clarity when talking briefly about results in the Abstract section, as in the sentence "Also, WOA attains improved results in compliance with system complexity". Moreover, there are some grammatical mistakes and typos, such as: "is capable to chose", "would honour the", "where as", "response is assume to be", "are describe as"... Experimental design: The WOA algorithm has to be written out and described taking into account the NOMA uplink system. One of the drawbacks of the article is that the baselines are described in more detail than the proposed algorithm. Validity of the findings: I think it would be good to compare your results against other algorithms such as Cuckoo Search Optimization, the Ant Colony Algorithm, and other state-of-the-art algorithms. Additional comments: The article is missing future work.
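Since the review asks for the WOA to be written out explicitly, a minimal sketch of its position-update step may be useful. It follows Mirjalili & Lewis (2016); treating the |A| < 1 test per whole vector is a simplification, and none of the symbols come from the paper under review:

```python
import numpy as np

def woa_update(whale, best, population, a, rng, b=1.0):
    """One WOA position update. `a` decreases linearly from 2 to 0 over the
    iterations; A, C, l, and b follow the notation of the original paper."""
    r1, r2 = rng.random(whale.shape), rng.random(whale.shape)
    A, C = 2 * a * r1 - a, 2 * r2
    if rng.random() < 0.5:
        if np.all(np.abs(A) < 1):                         # encircling the prey
            return best - A * np.abs(C * best - whale)
        rand = population[rng.integers(len(population))]  # exploration phase
        return rand - A * np.abs(C * rand - whale)
    l = rng.uniform(-1, 1)                                # spiral bubble-net attack
    return np.abs(best - whale) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
```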
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 1 Reviewer: 4
Basic reporting: The problem under study is interesting and trendy, but the major issue is that there is no significant and convincing novelty. In the following, some concerns are discussed which hopefully can help the authors. 1. The main concern about this work is that the manuscript contains contradictory statements about the contribution. It seems the authors investigated the usage of three algorithms in the NOMA application, rather than proposing a new variant of WOA. The authors should specify their novelty and contribution with clarity. For instance, the reader encounters the following inconsistent sentences: - “In this work PSO, GWO, and WOA are employed”, “a newly developed WOA is implemented”, “To solve the issue of complexity, a WOA with low complexity is investigated for an efficient opt…”, “The proposed algorithm for user pairing/grouping”, “… reduce the system complexity, an innovative existence metaheuristic optimization technique named WOA is proposed in this work.”, “This section evaluates the proposed algorithms WOA, GWO, and PSO…”. 2. The abstract should be revised to reflect the main novelty and contribution of the paper. 3. The literature overview in the Introduction section is shallow; the authors should add an in-depth literature review of recent optimization algorithms such as the MTDE and I-GWO (with DLH search strategy) algorithms. 4. It is recommended to provide a more informative literature review on the usage of metaheuristic algorithms for solving the NOMA uplink system. 5. The section entitled “Optimal and Sub-optimal User Grouping using WOA” is vague and does not show the methodology of using WOA for the user grouping task. The authors should describe the proposed algorithm step by step and show the sequence using a flowchart or pseudo-code. (Algorithm 1 is the pseudo-code of WOA, not of its usage for user grouping.) 6. The authors should clarify what “WOA NOMA” is. There is no explanation of “WOA NOMA” in the manuscript until the Simulation results section. Experimental design: 7. The authors mentioned in the Introduction that “The results obtained through the algorithms proposed in (Sedaghat and Müller (2018)), WOA, GWO and the popular PSO are exclusively compared in this study.”, but there are no results from (Sedaghat and Müller, 2018) in the Simulation results section. 8. The authors should correct the legends of Figures 4-6 to remove “[19]” and add a proper reference. 9. The selection of comparative algorithms is not satisfactory. It is necessary to compare the proposed algorithm with recently proposed and enhanced variants of algorithms. 10. I could not find the number of iterations, runs, and population size used in the experiments; are they missing? Validity of the findings: 11. The authors should statistically analyze the proposed and comparative algorithms using the Wilcoxon and Friedman tests (see the sketch below). 12. The authors have made claims about the aim of using WOA and “obtaining high spectral-efficiency with lower computational complexity”, but this complexity is inherent to WOA, and the authors make no contribution to achieving or even reducing it. 13. It is recommended to analyze the performance and effectiveness of the proposed and comparative algorithms using the performance index (PI), as shown in Subsection 5.3.5 of the QANA paper. Additional comments: 14. It is noted that Figures 4, 5, 6, and 7 are presented before their description.
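Point 11 can be implemented directly with SciPy. A minimal sketch with hypothetical per-run results (the numbers are invented purely for illustration):

```python
from scipy import stats

# Final objective values from ten independent runs per algorithm
# (hypothetical numbers, not from the paper under review).
woa = [0.91, 0.93, 0.90, 0.94, 0.92, 0.95, 0.91, 0.93, 0.92, 0.94]
gwo = [0.89, 0.91, 0.88, 0.92, 0.90, 0.93, 0.89, 0.91, 0.90, 0.92]
pso = [0.88, 0.90, 0.87, 0.91, 0.89, 0.92, 0.88, 0.90, 0.89, 0.91]

# Wilcoxon signed-rank test: pairwise comparison over matched runs.
print(stats.wilcoxon(woa, gwo))
# Friedman test: joint comparison of all three algorithms.
print(stats.friedmanchisquare(woa, gwo, pso))
```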
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 2 Reviewer: 1
Basic reporting: The article appears much better. The content is clear and easy to understand. However, I recommend referring to some recent research papers that are related to this one: - Alawad, N. A., & Abed-alguni, B. H. (2021). Discrete Island-Based Cuckoo Search with Highly Disruptive Polynomial Mutation and Opposition-Based Learning Strategy for Scheduling of Workflow Applications in Cloud Environments. Arabian Journal for Science and Engineering, 46(4), 3213-3233. - Panda, S. (2020). Joint user patterning and power control optimization of MIMO–NOMA systems. Wireless Personal Communications, 1-17. - Zheng, G., Xu, C., & Tang, L. (2020, May). Joint User Association and Resource Allocation for NOMA-Based MEC: A Matching-Coalition Approach. In 2020 IEEE Wireless Communications and Networking Conference (WCNC) (pp. 1-6). IEEE. - Abed-alguni, B. H., & Alawad, N. A. (2021). Distributed Grey Wolf Optimizer for scheduling of workflow applications in cloud environments. Applied Soft Computing, 102, 107113. Experimental design: The methods and figures are described and formulated well. Validity of the findings: No comment. Additional comments: No comment.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: JOINT USER GROUPING AND POWER CONTROL USING WHALE OPTIMIZATION ALGORITHM FOR NOMA UPLINK SYSTEMS Review round: 3 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: no comment. Everything has been addressed well.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 1 Reviewer: 1
Basic reporting: The paper is well written and easy to follow. The results are reported with professional diagrams and tables. Improvement suggestions: 1. The figures should be vector graphics. I suggest the authors regenerate them as PDF or PS graphics. 2. Citations from top security conferences (e.g., NDSS, IEEE S&P) are largely missing. The authors are advised to look through the literature, starting with the following paper: AuthScan: Automatic Extraction of Web Authentication Protocols from Implementations. Experimental design: The paper makes the hypotheses clear and designs user studies to validate them. I am very happy to see that the experiments carefully follow research ethics. Improvement suggestion: 1. The privacy concerns (Table 3) in the questionnaire seem incomplete. For example, the relying party (RP) may access the user's data on the identity provider (IdP). It would be good if the authors considered including these. Validity of the findings: The conclusions are well stated and backed by the user studies. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 1 Reviewer: 2
Basic reporting: Thank you so much to the editor for giving me an opportunity to review this article. A very well written article in professional English. The research is clear and unambiguous, and very relevant to the current topic of information security. SSO is an area where many security attacks can happen in organizations. The educational sector often overlooks the security and privacy of information. This article is timely in elaborating, with appropriate data, on the security mindset of participants in an educational institution. Please verify spacing and formatting issues before sending the final copy. Experimental design: The design and research fit within the information security domain. The research is well defined, and the attached summary and survey results fill the identified knowledge gap on SSO. Validity of the findings: The data have been well analysed, and the conclusions are linked to the original research questions. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 1 Reviewer: 3
Basic reporting: There are many papers in the usable security literature that focus on 2FA evaluations, for example Das, “Why Johnny Doesn’t Use Two Factor: A Two-Phase Usability Study of the FIDO U2F Security Key” - https://link.springer.com/chapter/10.1007%2F978-3-662-58387-6_9 - and since the paper mentions 2FA, a couple more well-known references could be included. There is a similar paper by the same principal author related to older users, and this could complement the Pratama and Firmansyah paper mentioned on lines 85-86. The authors mention privacy concerns, and it would be good to make reference to https://cups.cs.cmu.edu/soups/2007/posters/p173_heckle.pdf (Heckle and Lutters, SOUPS 2007). The level of the grammar and the flow of the paper were good. The strength of the paper is the way the authors have drawn broadly from the literature, since there is not a great deal of literature on SSO evaluations in higher education. Experimental design: It would have been interesting to see a hypothesis which took into account whether the subjects' field of study had an impact on their awareness, for example whether they were from an arts background or a science background. This relates to line 142, so it would be interesting to know why the authors chose that hypothesis and not one linked to the participants' academic backgrounds. For example, in Figure 3 security awareness is compared across behaviour, attitude and knowledge. I would like to have seen more explanation as to why this aspect was not expanded to include the type of faculty the participants were affiliated with. If some of the staff were professional services staff, then it would have been good to split the categorisation of people more finely. It was not entirely clear what the differences were between students, faculty and staff on line 73. The methodology did not indicate on line 188 what the size of the overall population was or what the limitations of the survey were. Did everyone complete all the answers, and were the answers therefore normalised? Validity of the findings: Thank you for providing the tables at the back of the article, but in the main body of the discussion no specific numbers relating to the tables were presented. I would have expected the discussion section to make reference to the tables. The commentary in the discussion did match the tables, but the narrative would have been strengthened by more specific referencing of the tables. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 1 Reviewer: 4
Basic reporting: Background information is covered fairly well. Please see the attached PDF for specific comments, though. Experimental design: The rationale and procedures seem good. Please see the attached PDF for specific comments about how some of the DVs and IVs are being reported, though. Validity of the findings: The findings seem to be on target. I suggest modifying some of the figures, and note that two figures are not mentioned in the text. Also, some of the implications that pertain directly to personality (Big 5) are not well developed and could be made stronger. Please see the attached PDF for specific comments, though. Additional comments: Please see the attached PDF for specific comments. One of the biggest issues is to make sure the article gets proofread by a native English speaker to ensure grammatical consistency.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 2 Reviewer: 1
Basic reporting: The paper is well written and easy to follow. The results are reported with professional diagrams and tables. Experimental design: The paper makes the hypotheses clear, and designs user studies to validate them. Validity of the findings: The conclusions are well stated and backed by the user studies. Additional comments: I thank the authors for the revision. I am happy for this version to be published.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SECURITY AWARENESS OF SINGLE SIGN-ON ACCOUNT IN THE ACADEMIC COMMUNITY: THE ROLES OF DEMOGRAPHICS, PRIVACY CONCERNS, AND BIG-FIVE PERSONALITY Review round: 2 Reviewer: 2
Basic reporting: The authors fixed the changes mentioned by the reviewers. Experimental design: The authors fixed the changes mentioned by the reviewers. Validity of the findings: The authors fixed the changes mentioned by the reviewers. Additional comments: None
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 1 Reviewer: 1
Basic reporting: Strengths: S1. The paper shows the context (scientific experiments) and motivates the need for collecting computational and non-computational provenance from the early stages of the experiments to support their reproducibility. S2. Figures are relevant, high quality, well labeled, and described. The architecture figure provides a good overview of the approach that is helpful for following the description. The other three figures provide examples of analyses that CAESAR supports. Weaknesses: W1. The related work section seems outdated. The paper states "Only a few research works have attempted to track provenance from computational notebooks (Hoekstra 2014; Pimentel et al., 2015; Carvalho et al., 2017)", but there are newer approaches that also track provenance from notebooks: - Koop, David, and Jay Patel. "Dataflow notebooks: encoding and tracking dependencies of cells." 9th {USENIX} Workshop on the Theory and Practice of Provenance (TaPP 2017). 2017. - Kery, Mary Beth, and Brad A. Myers. "Interactions for untangling messy history in a computational notebook." 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 2018. - Petricek, Tomas, James Geddes, and Charles Sutton. "Wrattler: Reproducible, live and polyglot notebooks." 10th {USENIX} Workshop on the Theory and Practice of Provenance (TaPP 2018). 2018. - Head, Andrew, et al. "Managing messes in computational notebooks." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019. - Wenskovitch, John, et al. "Albireo: An Interactive Tool for Visually Summarizing Computational Notebook Structure." 2019 IEEE Visualization in Data Science (VDS). IEEE, 2019. - Wang, Jiawei, et al. "Assessing and Restoring Reproducibility of Jupyter Notebooks." 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2020. W2. Some older approaches are also missing in the related work: approaches that collect provenance at the OS level are usually able to interlink data, steps, and results from computational and non-computational processes, once the non-computational processes are stored on the computer. Most notably, Burrito collects OS provenance and provides a GUI for documenting the non-computational processes and annotating the provenance: - Guo, Philip J., and Margo I. Seltzer. "Burrito: Wrapping your lab notebook in computational infrastructure." (2012). W3. The "Design and Development" subsection of "Background" is confusing. On one hand, it is dense and goes into implementation details of the approach that are later explained in the "Results" section. On the other, it is too shallow and does not explain important concepts and tools used by the paper. For a better comprehension of the paper, the background section could answer the following questions: - What are the main features of REPRODUCE-ME? How does it compare to other provenance ontologies such as PROV and P-Plan? - What is OBDA? What are the benefits of using it? W4. The structure of the paper does not conform to the PeerJ standards (https://peerj.com/about/author-instructions/#standard-sections): it has a "Background" section instead of a "Materials & Methods" section. Additionally, a big part of the "Results" section describes the approach in detail instead of describing the experimental results. Both the "Background" and the approach proposal could go in the "Materials & Methods" section and partially solve W3. W5.
While the raw data was supplied (great!), the cited page (Samuel, 2019b) is generic for multiple research projects, and finding the raw data associated with this specific paper requires navigating through several links to reach the GitHub repository with the data (https://github.com/Sheeba-Samuel/CAESAREvaluation). This repository could be cited directly. Additionally, there is no description in the repository of its structure and data, making it hard to validate. Experimental design: Strength: S3. The supplemental files are well described and allow the replication of the usability experiment. Weaknesses: W6. The originality of the paper's evaluation is not clear. It seems that CAESAR was introduced and partially evaluated in (Samuel et al., 2018), where the first half of the evaluation occurred (the definition and evaluation of the competency questions). However, the usability evaluation seems original. The paper should state clearly what is original in this paper and how it improves on (Samuel et al., 2018). W7. The paper does not report the results of the competency query evaluation. It indicates that each question addressed different elements of the REPRODUCE-ME data model, but it presents neither the questions nor the results. (Disregard this weakness if the evaluation is really part of another paper; in this case, reinforce it in line 634.) W8. The research questions are not well defined. While the main motivation of the paper is reproducibility, the current evaluations seem to assess understandability and usability instead. Validity of the findings: Weaknesses: W9. The paper should indicate the threats to the validity of the experiments. Given the size of the population, it is likely that the experiment has an external threat to validity, with statistical results that are not sound. Additionally, the usage of CAESAR did not occur in a controlled environment, which also leads to a threat to internal validity. W10. The conclusion claims that the approach addresses understandability, reproducibility, and reuse of scientific experiments, but the experiments do not support these claims. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 1 Reviewer: 2
Basic reporting: The report is mostly there, but I think it could use some additional support. The paper states: "However, this is too little too late: These measures are usually taken at the point in time when papers are being published". Give some examples of what these insufficient measures are. In line 117 - "To the best of our knowledge, no work has been done to track the provenance of results generated from the execution of these notebooks and make available this provenance information in an interoperable way". I think this claim of novelty need not be made so boldly. For example, I think NBSafety (http://www.vldb.org/pvldb/vol14/p1093-macke.pdf) and Vizier (https://vizierdb.info) are both related work in the same area as the reported tool. Here I think the important thing is that PeerJ is not about novelty but rather whether the work is sound. The particular work has its own perspective and is useful, but I think it is unnecessary to make these claims of research novelty. At least provide some deeper justification. Lastly, when introducing the FAIR principles, say what they stand for. Experimental design: The experimental design and reporting need to be better explained. In the Background Evaluation section, you seem to describe a number of different evaluation strategies. I had trouble figuring out the various strategies and telling them apart. It would be beneficial to distinguish each of the evaluation strategies and make clear what approach you used in each of them. Simply labeling them might help. For example, you seem to have an application evaluation, a competency-question-based evaluation, and a user-based evaluation? The lack of organization of the material is also present in the presentation of the evaluation results. Please clearly describe and distinguish which evaluation strategies were used and what the specific evaluation results were. For example, it was unclear why a notebook with face recognition (line 574) was being used for evaluation. Likewise, I am not sure whether a user survey study with 6 participants is enough when the other evaluation approaches are not clearly demarcated. Be careful of claims made in the system description without evidence, for example "The provenance information is stored in CAESAR to query this data from different sources efficiently." Such claims, when made, need to be substantiated. Validity of the findings: I believe the majority of the findings are valid, given that this is a report on a software platform, but given the reporting on the method, it was hard to determine whether the evaluation results in their totality show what the authors say they show. Additional comments: In general, a nice contribution of a system, but the reporting around the evaluation needs to be clearer for the reader. Also, I do not think the claim of novelty needs to be made; instead, the focus should be on the contribution of the system and validating its claims.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 1 Reviewer: 3
Basic reporting: The paper's overall structure is fine, but the writing should be improved. There is a lot of passive voice that should be changed. For example, "it is also required to represent and express this information in an..." could be restated as "information should be represented and expressed in an..." Also, there are places where "got" is used that could use more precise language (e.g., obtained). I also advise against using quotation marks for defining terms (e.g., "provenance"), as that brings a connotation that is likely not intended; use italics or bold instead. In general, the paper is wordy, and sentences or phrases can be shortened or even omitted. In addition, the results of the evaluation seem to be discussed twice (once in the Evaluation subsection [p. 11] and again in the Discussion section [p. 14]). The paper is professional, the raw survey results are shared, and the source code is available. There are some places where terms could be defined earlier (e.g., FAIR is not explained in the introduction; JupyterHub [p. 8]). Experimental design: The paper is a research paper detailing a framework and its evaluation. The framework, CAESAR, seeks to help users create and maintain reproducible experimental workflows. The introduction lays out the reasons why reproducibility is important, how provenance can help, and the contributions of CAESAR, ProvBook, and REPRODUCE-ME. Much of the paper provides details on the architectures of these frameworks, which is expected. To me, the paper does not do well enough at describing the experimental design. The paper states that competency questions were used to evaluate CAESAR, then discusses the data- and user-based evaluation of ProvBook, and finally discusses a user-based evaluation. It is not clear to me what the competency questions are, nor what "plugged the ontology in CAESAR by using these competency questions" means. I was unsure whether the paragraphs describing the evaluation concerned related or separate experiments. It is also unclear what the user-based evaluation measured. Users had to upload data, but nothing else was mandatory? The number of users (6) is too low to draw significant statistical conclusions, and the evaluation seems entirely subjective. Validity of the findings: I do not think the conclusions are overstated; rather, the conclusions seem quite limited. Due to some issues with the experimental design (see 2), I think it is difficult to judge the conclusions. It is important for users to "like" a system, but it would be helpful to know more about the use cases where the framework has been used. Additional comments: This paper describes a lot of work in the areas of reproducibility and provenance, which are important areas for computational science. The paper describes how domain scientists in biological imaging face particular challenges and how CAESAR, ProvBook, and REPRODUCE-ME were designed to help them. It seems that a lot of work has been done which is important to both the bioimaging community and the broader community interested in reproducibility and provenance. As a paper describing these frameworks, it goes into great detail about the particular implementation, which is important documentation but is not particularly useful for readers who do not use this specific system. There is certainly a tension between not providing enough detail on how the system works and writing user documentation in a research paper, but I feel the paper has too much of the latter.
The long lists describing panels and the REPRODUCE-ME schema should be moved to supplemental material. The threads related to the core problems of supporting reproducibility in the bioimaging workflows are sometimes lost in the details of the system. I would strongly encourage adding a running example to the text to help readers understand specific use cases and how the frameworks help users to address them. In this context, specific features can be detailed. This should also lead to greater coherence among the three pieces that are detailed in the paper.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 2 Reviewer: 1
Basic reporting: no comment Experimental design: The research question ("it is possible to capture, represent, manage and visualize a complete path taken by a scientist in an experiment including the computation and non-computation steps to derive a path towards experimental results") is too broad, and the evaluation does not seem to be fully aligned with it. The evaluation had three parts, which considered distinct aspects: - Competency-question-based evaluation: the competency questions evaluate the possibility of capturing and representing the complete path. Arguably, the system also had to manage it, but there is no visualization evaluation in this part. - Data-based evaluation: this evaluation was related to reproducibility, performance, and scalability, which are not part of the main research question. - User-based evaluation: this evaluation was related to representing and visualizing the complete path to users in a usability study. The paper could break the main research question into smaller questions for each of these parts and discuss the results and implications separately. Validity of the findings: no comment Additional comments: The paper evolved well, and the authors addressed most of my comments appropriately. However, the paper still has a weakness in the experimental design.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 2 Reviewer: 2
Basic reporting: The paper addresses the concerns I had. One minor addition might be a reference to a survey on provenance, such as: Wellington Oliveira, Daniel De Oliveira, and Vanessa Braganholo. 2018. Provenance Analytics for Workflow-Based Computational Experiments: A Survey. ACM Comput. Surv. 51, 3, Article 53 (July 2018), 25 pages. DOI: https://doi.org/10.1145/3184900 or Herschel, M., Diestelkämper, R. & Ben Lahmar, H. A survey on provenance: What for? What form? What from? The VLDB Journal 26, 881–906 (2017). https://doi.org/10.1007/s00778-017-0486-1 Experimental design: no comment Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 2 Reviewer: 3
Basic reporting: The text is improved, and much of the passive voice has been addressed, but there are still a few places where the text could be improved. Specifically, in the evaluation: - "The evaluation was done": Who did this evaluation? - "Several runs were attempted to solve the issue": Who attempted these runs? User 1? - "This was resolved by installing the sci-kit-learn module": Was this User 2, or did ProvBook do this? Also a few other spots in the text: - "We, therefore, argue": I would actually like this without commas, or swap the order to "Therefore, we argue" - "we aim to create a conceptual model" -> "we create a conceptual model" - "We use D3 JavaScript library" -> "the" - "obtained 3 responses" -> "received three responses" (got doesn't always translate to obtained) Experimental design: I didn't see Brank et al. define competency questions in the survey paper, so I was still a bit lost about how "produce good results" (lines 560-561) is actually evaluated. Reading further, and from the 2018 paper "The Story of an Experiment", it sounds like domain experts reviewed the results from the queries to verify them. There are some details about some queries not working and requiring OPTIONAL, but should we assume that the domain experts were finally happy with all competency queries? In any case, moving this domain-expert verification statement up earlier would help clarify who is doing the evaluation. I still don't understand the "Data and user-based evaluation of ProvBook in CAESAR" section. The general factors seem reasonable to care about, but I don't understand how this was *evaluated*. Also, the passive voice (see Basic Reporting) makes this harder to understand. Who did the evaluation? How was it evaluated? The description of how notebooks were used and updated is interesting, but the focus should be on ProvBook. The one sentence here is "Using ProvBook, Users 1 and 2 could track the changes and compare the original script with the new one". How did this help them during the study? Was one user able to use ProvBook and the other not? That type of study would allow conclusions that ProvBook improved reproducibility because, for example, User 2 took less time to reproduce the original notebook. In the user-based evaluation of CAESAR, I was also left wondering how the "None of the questions were mandatory" piece is addressed. Were the participants required to use the tool for a certain amount of time? Are the "questions" the competency questions or the survey questions? If the survey questions were not required to be answered, why? If the competency questions were not required, how did you verify whether the users used the tool for its intended purpose? Validity of the findings: Unfortunately, I still feel that the conclusions are rather limited, and it is unclear how they follow from the evaluations as described. This is likely due to the experimental design notes above, but the discussion section still states conclusions that are not in evidence: "The results of the data and user-based evaluation of ProvBook in CAESAR show how it helps in supporting computational reproducibility." I suspect that ProvBook in CAESAR does help support computational reproducibility, but I don't see how what is described in that evaluation subsection validates that. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 3 Reviewer: 1
Basic reporting: As a minor issue, the main research "question" in line 466 is not phrased as a question. Experimental design: The paper has improved in reporting the experiment design, but the wording of the evaluation section can still be improved: the paper divides the main question into three parts in the first paragraph of the Evaluation, but it does not reference these parts explicitly. For instance, the paper states that in the first part the authors "address the question of capturing and representing the complete path of a scientific experiment" and, for that, they evaluate: - the role of CAESAR in capturing the non-computational data - the role of ProvBook in capturing the computational data - the role of the REPRODUCE-ME ontology in semantically representing the complete path ... using the competency-question-based evaluation. Then, the next subsection is called "Competency question-based evaluation", indicating that it only covers the third item (the role of the REPRODUCE-ME ontology). Moreover, reading this subsection, it seems that it in fact does not cover the role of ProvBook in capturing the computational data (which is somewhat discussed in the next subsection), but it does discuss the role of CAESAR in capturing the non-computational data. Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 3 Reviewer: 2
Basic reporting: The text is much improved, and the reduction in passive voice makes it easier to read. I would recommend a better inline summary of results in the evaluation section. I'm not sure why the reader is expected to go to a different file to read about them. I realize that having the open, raw results available is great for those who want to poke around, but I think a summary for readers is useful. - L563: "The competency questions, the RDF data used for the evaluation, the SPARQL queries, and their results are publicly available (Samuel, 2021)." (What are the results?) - L604: "The files in the Supplementary information provides the information on this evaluation by showing the difference in the execution time of the same cell in a notebook in different execution environments." (What do they show?) - L630: "The results of the evaluation are available in the Supplementary file." (What are the results?) The user-based evaluation subsection does have results in the manuscript. Typo in L581: "uses face recognition example" -> "uses a facial recognition example". Experimental design: The additional information about the use case in the "Data and user-based evaluation of ProvBook" section (L581) is helpful, but I would argue this is more of an example and should probably be tagged as such. We don't know, for example, that User 2 needed to look at User 1's provenance to fix the error. We also don't know if User 2 is a more experienced Python programmer, or even whether User 2's environment (Fedora) made this easier. Because n=1, I don't think we can conclude that ProvBook is the reason for the improved performance, but this use case is instructive in understanding why we should expect ProvBook to help. Validity of the findings: I think the text reads OK here now. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 4 Reviewer: 1
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: The authors have addressed all of my previous comments, and the paper is ready for publication.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 4 Reviewer: 2
Basic reporting: With respect to the results, I think my previous comments may have been misleading. I care about what the evaluation **shows**, not specifically what is contained in the supplemental files. The paper states "we evaluated CAESAR with the REPRODUCE-ME ontology using competency questions collected from different scientists in our requirement analysis phase"; "The domain experts reviewed, manually compared, and evaluated the correctness of the results from the queries using the Dashboard"; and "The domain experts manually compared the results of SPARQL queries using Dashboard and ProvTrack and evaluated their correctness." I see the text regarding issues with null results and modifications, but I don't see any statement summarizing the results of the evaluation. Did the domain experts find that, after the modifications, all the competency questions were satisfactorily answered? If so, please state that. If not, it seems that details about what was problematic would be helpful. (If I'm missing how the competency evaluation works, perhaps clarifying that in the text would also help. Evaluation, to me, implies there are results that give an indication of how well something works.) "We used an example Jupyter Notebook which uses a face recognition example applying eigenface algorithm and SVM using scikit-learn" is still not quite right to my ears. Perhaps "... example where eigenface and SVM algorithms from scikit-learn are applied." Experimental design: No comment Validity of the findings: No comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A COLLABORATIVE SEMANTIC-BASED PROVENANCE MANAGEMENT PLATFORM FOR REPRODUCIBILITY Review round: 5 Reviewer: 1
Basic reporting: No Comment Experimental design: No Comment Validity of the findings: No Comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE RISE OF OBFUSCATED ANDROID MALWARE AND IMPACTS ON DETECTION METHODS Review round: 1 Reviewer: 1
Basic reporting: There is a fair number of English language mistakes and ambiguous sentences. Some of them are highlighted and annotated in the pdf file. The in-text citation style is inconsistent in places (highlighted in the pdf). Experimental design: The design of the study is satisfactory. However, one of the major concerns is that the study does not report the evaluation results of the surveyed papers. The authors should include a table of evasion techniques and the corresponding evasion results obtained in the literature, e.g., study A uses evasion technique X and successfully evaded n% of the samples under observation. Moreover, at multiple places in the article, headings are missing (highlighted in the pdf). Validity of the findings: Lessons learned and future directions are not presented clearly. Additional comments: The paper needs major revision. Around 39 points are annotated in the pdf file and should be addressed clearly before publication.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE RISE OF OBFUSCATED ANDROID MALWARE AND IMPACTS ON DETECTION METHODS Review round: 1 Reviewer: 2
Basic reporting: The English language should be improved to ensure that an international audience can clearly understand your text. • Incomplete sentences: o In the Abstract at Line 20. For example, "Every day, thousands of new Android malware applications" o Line 257, "Several untrusted or well-verified…" • Too-long sentences: o 1. Line 26 "The concern of encountering difficulties in malware reverse engineering motivates researchers to maintain the gap between securing the source code of benign Android applications by using evasion techniques and analyzing the obfuscated Android malware applications" o 2. Line 38 "With many open-source libraries for Android....." • Line 38: "Since" — add a comma. • Line 46: e.g., contact lists, photos, videos, documents, AND account details • Line 48: reference missing. "Android applications use Java as a developing language because ..." o Line 54: reference for Google Bouncer? o Line 131: reference for FeCO? • The word Obfuscation is used sometimes with a capital O and sometimes with a small o. • "The Rationale" – "The rationale"; correct it. • Line 52: ".;" — correct it. • Line 23: "The malware authors adopt the obfuscations" – correct it. • Line 81: False Negative FN detection - False Negative (FN) detection; Line 83: use FN instead of abbreviating it again. • Line 91: You and Yim (You & Yim 2010) – correct it. • Line 93: and polymorphic malware types • Line 93: please explain the word "these". • Line 107: correction required – "Our goal is to systematize these evasion techniques using a taxonomy methodology…"; also explain the term "these". • Line 152: "we conclude the paper on conclusion section" – correct it. • The abbreviation WOS used in line 188 should be introduced properly at Line 159. • Line 208: "In the section, we first study the Android application" – correct it. • Line 209: "we investigate the Android Operating System's (OS) weaknesses." – correct it. • Repeated sentence: check lines 218 and 228, "Android apps require a virtual machine to run, called Dalvik or ART, depending on the OS version". • Line 244: "Therefore, The Android community" – correct it. • Formatting issue in Line 264, "The Coarse Granularity of Android Permissions": label correction required; also explain this point to give a clear understanding of this Android weakness. • References missing in the paragraph at lines 270-274. • Lines 344, 406, 410, and many others: labels missing, references missing. • Line 534: "The classification techniques base the decisions on many criteria" – correct it. • Figure ordering: Figures 7 and 8 are mentioned after Figures 9 and 10. Experimental design: • Please explain in your introduction why you chose the Android platform in particular for observing obfuscation techniques. • The second and third contribution points (Line 110) should be more to the point and give a clear depiction of your work. Moreover, contribution points 1 and 2 overlap, such as "This study examines different evasion techniques that hinder detecting malicious parts of applications and affect detection accuracy" and "Our goal is to systematize these evasion techniques using a taxonomy methodology, which clearly shows various evasion techniques and how they affect malware analysis and detection accuracy". • The authors did not state the number of papers they investigated, e.g., at Line 117: "our investigation focuses on studies written between 2011 and early 2021". • Table 1 shows only 11 comparisons, whereas the authors investigated studies from 2011-2021; did only 11 papers focus on evasion techniques? Or did the authors mention only these papers, and if so, why?
o In Table 1, only two papers are from 2021, and none from 2020, 2019, or 2018, which indicates either that there are no studies from this era or that the authors did not survey properly. • Line 131: "FeCO focused on features selections and machine learning models…." — which models? And of which type? • Table 2: please fill in the publisher column for the remaining journals. • During the identification and screening phase, the authors did not use the term machine learning; however, the title strongly suggests machine learning. As a reviewer, I did not find the term machine learning anywhere in the introduction or related review sections. Moreover, in line 660 the authors mention, "However, deep insight into machine learning techniques is outside the scope of this study"; the title of the paper should therefore be revised. • The authors mention that they are the only ones who scrutinize academic and commercial Android malware detection frameworks; are you claiming that there are no studies focused on academic frameworks? • Line 284: "On the other hand, malware writers trap these features to gain remote access to install malicious applications" — explanation required: how? • Line 307: explain the words THEM and THESE in the sentences "Android applications have powerful tools and techniques to secure and protect them from being reverse-engineered. Conversely, malware authors are using these tools and techniques to evade detection". • Line 311: "As displayed in Figure 2, we categorize evasion techniques into two main types" — there are other evasion techniques too, such as oligomorphic, etc., so did the authors consider only polymorphic and metamorphic intentionally? If so, please explain why. • Please extend the study by mentioning, through references, which papers focused on which evasion techniques, e.g., Line 326 RPK: how many of the scrutinized papers use RPK, etc. (a toy illustration of a renaming-style transformation is given at the end of this review). • Most of the references used in the paper are old (2015-2016), and I barely found updated references from 2019-2021, which suggests that the survey draws mainly on papers from 2014-2016. Validity of the findings: No comment Additional comments: No comments
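To make the renaming-style transformations mentioned above concrete, here is a toy Python sketch (hypothetical code, not taken from the paper under review): identifier renaming preserves behavior exactly while destroying the meaningful names that a string- or signature-based detector might match on.

```python
# Toy illustration of identifier renaming (hypothetical example, not from the
# surveyed paper): both functions behave identically, but the renamed version
# exposes no meaningful identifiers to a signature-based detector.
def collect_contacts(address_book):
    return [entry["phone"] for entry in address_book]

def a(b):
    return [c["phone"] for c in b]

book = [{"phone": "+1-555-0100"}, {"phone": "+1-555-0101"}]
assert collect_contacts(book) == a(book)
```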
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE RISE OF OBFUSCATED ANDROID MALWARE AND IMPACTS ON DETECTION METHODS Review round: 2 Reviewer: 1
Basic reporting: The basic reporting of the paper is up to standard, and all the references are concrete and relevant to the field of study. Experimental design: Good Validity of the findings: Fair Additional comments: All the concerns I raised have been addressed, and I am happy to accept the paper for publication.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE RISE OF OBFUSCATED ANDROID MALWARE AND IMPACTS ON DETECTION METHODS Review round: 2 Reviewer: 2
Basic reporting: The required changes have been made. No comments Experimental design: The required changes have been made. No comments Validity of the findings: no comment Additional comments: no comment
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DYNAMIC GUIDED METRIC REPRESENTATION LEARNING FOR MULTI-VIEW CLUSTERING Review round: 1 Reviewer: 1
Basic reporting: This paper proposes a framework called Dynamic Guided Metric Representation Learning for Multi-View Clustering, which is clearly and unambiguously expressed. The literature references are sufficient. The structure, figures, and tables are professional. Experimental datasets are shared. However, the writing could be further improved. More explanation of the experimental comparison should be added, beyond reporting the best results. Moreover, well-designed ablation studies could further demonstrate the effectiveness of the proposed framework. Therefore, I recommend this paper be accepted after minor revision. Experimental design: Overall, the experiments are well designed and meaningful. Several comments are listed as follows: 1. More explanation of the experimental comparison should be added, beyond reporting the best results. 2. Well-designed ablation studies could further demonstrate the effectiveness of the proposed framework. Validity of the findings: no comment Additional comments: The writing of the manuscript should be carefully checked and improved. Several examples of typos are listed here: 1. In the abstract, check whether "... 1rt" is a mistake; 2. Lines 259-260, "and et al."
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DYNAMIC GUIDED METRIC REPRESENTATION LEARNING FOR MULTI-VIEW CLUSTERING Review round: 1 Reviewer: 2
Basic reporting: This paper proposed a method with multiple learning steps to cluster multi-view data, including inter-intra learning, dynamic guided deep learning, and shared representation. It proposed an effective mechanism that considers the discriminant characteristics within a single view and the correlation between views simultaneously. Specifically, it combines the effectiveness of FDA-HSIC and dynamic routing learning, and solves issues that occur with existing methods. In the end, experiments on four datasets demonstrated the effectiveness of the proposed method for clustering tasks. This paper needs to be improved in the following aspects: 1. In Figure 2 of the experiments section, the silhouette_score of this method is lower than that of the baselines; the authors should give a more detailed analysis of this (a reference sketch of how the metric is computed is given at the end of this review). 2. There are some grammatical and spelling mistakes that need polishing. For example, what are the variables t and b in formulas 2 and 3? Is "Deep representation based on and dynamic guided deep learning" correct? 3. In Section 4.3, references for the baseline algorithms should be added. 4. In the related work, a paragraph summarizing the main technical challenges of the related work should be added. Experimental design: Experiments are well designed. Datasets and experimental results are presented clearly. Validity of the findings: Conclusions are well stated. Additional comments: No additional comments
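Regarding point 1 above, for reference, this is how the silhouette score is typically computed (a minimal sketch with placeholder data, assuming scikit-learn; not the authors' pipeline). Scores lie in [-1, 1], with higher values indicating tighter, better-separated clusters, which is why a value below the baselines deserves analysis.

```python
# Minimal sketch (placeholder data, not the authors' pipeline): computing the
# silhouette score of a clustering with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.rand(100, 16)                       # toy single-view features
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
print(silhouette_score(X, labels))                # value in [-1, 1]
```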
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DYNAMIC GUIDED METRIC REPRESENTATION LEARNING FOR MULTI-VIEW CLUSTERING Review round: 1 Reviewer: 3
Basic reporting: This paper presents an interesting study on a multi-view clustering method. In particular, the authors have proposed a novel framework called DGMRL-MVC, which can cluster multi-view data in a learned, latent, discriminated embedding space. The data representation can be enhanced in multiple steps. The approach is proposed to address issues that arise with existing methods, by considering the discriminant characteristics within a single view and the correlation across multiple views rather than using this information from a single view only. The effectiveness of the proposed approach has been validated experimentally by comparison against baseline methods on four multi-view datasets. The results show some improvement in performance in terms of precision, recall, F1, RI, NMI, and Silhouette_score in comparison with existing methods. This paper needs to be improved in the following aspects: 1) The proposed approach learns the embedding space for clustering. Therefore, the authors should add some relevant details on multi-view subspace clustering in Section 2.1.2. 2) Please check the last sentence in Sections 2.2.2 and 3.3.1. Experimental design: 1) In Experiments, the authors state that the evaluation measures include precision, recall, F1, RI, and NMI. In order to make the effect clearer to the reader, the authors should add the formulas (standard definitions are sketched after this review). 2) In Section 4.3, Table 1 shows the differences among the baselines. The authors should add more relevant details (e.g., the optimization function). Validity of the findings: No Additional comments: 1) Please check the abbreviation of the model in Fig. 2 and Fig. 3. 2) Please standardize the punctuation after the formulas.
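For reference, the standard definitions being requested are as follows (a sketch; NMI is shown with one common normalization, and in the clustering setting TP/TN/FP/FN are counted over pairs of samples):

$$\mathrm{Precision}=\frac{TP}{TP+FP},\qquad \mathrm{Recall}=\frac{TP}{TP+FN},\qquad F_1=\frac{2\cdot \mathrm{Precision}\cdot \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},$$

$$\mathrm{RI}=\frac{TP+TN}{TP+TN+FP+FN},\qquad \mathrm{NMI}(U,V)=\frac{I(U;V)}{\sqrt{H(U)\,H(V)}}.$$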
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DYNAMIC GUIDED METRIC REPRESENTATION LEARNING FOR MULTI-VIEW CLUSTERING Review round: 2 Reviewer: 1
Basic reporting: In this revision, the authors have answered the questions I asked in the last review. I have no further questions on this revision. Experimental design: The authors have explained the experimental results in Figure 2 well. Validity of the findings: Same as the last review. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DYNAMIC GUIDED METRIC REPRESENTATION LEARNING FOR MULTI-VIEW CLUSTERING Review round: 2 Reviewer: 2
Basic reporting: OK. Experimental design: OK. Validity of the findings: OK. Additional comments: OK
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: TOWARDS A COMPONENT-BASED SYSTEM MODEL TO IMPROVE THE QUALITY OF HIGHLY CONFIGURABLE SYSTEMS Review round: 1 Reviewer: 1
Basic reporting: Needs a thorough proofread Experimental design: OK Validity of the findings: Can be enhanced further. The novelty and impact require elaboration. Additional comments: The work "Towards a component-based system model to improve the quality of highly configurable systems" is timely and well presented in most aspects. However, the authors can further improve their work in the following respects: 1. The abstract can be improved by stating the significance of the results. 2. The introduction requires modification, such as being made more concise. 3. The authors may further elaborate on the research contribution in the introduction. 4. The literature provided is adequate, but the authors may add a bit more to it. 5. The proposed framework is not easily readable. 6. The authors may elaborate on the data set they used. 7. A few of the references need attention to complete the required information. 8. A few sentences need careful attention and should be rephrased, such as "Hence, results show that the easy of ease of understanding and adaptability, required effort, high-quality achievement, and version management is significantly improved i.e., more the 50 per cent as compared to exiting method i.e., less than 50 per cent". 9. Please carefully check the sentence structure of "da Silva et al. [35] presented a 185 new ability-based approach for scoping the product line details". 10. A few references are not relevant, e.g., 39 and 40. 11. Please use the words "Hypotheses" and "Hypothesis" carefully.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: TOWARDS A COMPONENT-BASED SYSTEM MODEL TO IMPROVE THE QUALITY OF HIGHLY CONFIGURABLE SYSTEMS Review round: 1 Reviewer: 2
Basic reporting: The paper presents a Component-Based System Model to Improve the Quality of Highly Configurable Systems to support IT-related organizations. The paper is very well written and presents an interesting concept. The following suggestions should be incorporated to improve the quality of the paper. 1. There are a few unusually long statements that exceed three lines. A thorough English language revision is a must, as many spelling, grammar, and punctuation errors are found. 2. The Abstract should clarify the evaluation criteria of the Component-Based System Model and quantitatively state the main findings and results of the evaluation. 3. The title should be clear and informative, and should reflect the aim and approach of the study. 4. A section should be added between the introduction and proposed model sections to clarify and emphasize the main contributions of this study, together with their scientific justification and applicability. 5. There are some grammatical mistakes and typos in the paper that need minor adjustments. 6. There are some formatting issues in the captions of figures and tables; please address them. 7. Enhance the pictures' size for better presentation and readability. Experimental design: satisfactory Validity of the findings: Done in a better way Additional comments: The "Research Problem" section should be placed under the Introduction heading for a better understanding of the proposed research. The "Literature Study" section is not exhaustive in covering existing work in the identified domain, so it would be better to include 2 or 3 more paragraphs with their references. Include more references focusing on highly configurable systems through software product lines.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: TOWARDS A COMPONENT-BASED SYSTEM MODEL TO IMPROVE THE QUALITY OF HIGHLY CONFIGURABLE SYSTEMS Review round: 1 Reviewer: 3
Basic reporting: The authors' work is good. The abstract covers the complete paper. The authors provide a detailed background of the work and also cover the related literature. The authors produce the figures and tables that are necessary for the explanation of the work. Experimental design: The work is within the scope of the journal. The authors identify valid research gaps, and the findings add to the domain of knowledge. The methodology is designed in detail, and each component is well explained. Validity of the findings: The data and processes claimed by the authors are valid and appropriate to the domain of the work. The results are verified statistically using different statistical tools. Additional comments: The authors conclude the work in detail. The conclusion also gives readers ideas about future work in this research direction. Some comments are given below and should be addressed in the final submission. 1) Correct the spelling of "exiting" to "existing" in lines 28 and 30. 2) The authors should adopt one convention, i.e., either use the abbreviation together with the full text everywhere, or use only the abbreviation after the first occurrence, e.g., SPLE, HCS, QeAPLE, etc. 3) Complete the sentence at line 90: "State-of-the-art ….." — what next? 4) The authors should read the full paper for the required English grammar corrections.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: TOWARDS A COMPONENT-BASED SYSTEM MODEL TO IMPROVE THE QUALITY OF HIGHLY CONFIGURABLE SYSTEMS Review round: 2 Reviewer: 1
Basic reporting: OK Experimental design: OK Validity of the findings: OK Additional comments: The authors have incorporated the suggested changes.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: TOWARDS A COMPONENT-BASED SYSTEM MODEL TO IMPROVE THE QUALITY OF HIGHLY CONFIGURABLE SYSTEMS Review round: 2 Reviewer: 2
Basic reporting: The authors made a good effort in updating the paper according to the comments. They also highlighted the changes in the paper by submitting a tracked-changes file, and they responded to the comments in detail. Experimental design: The experimental design is satisfactory. The flaws have been removed in the revised version. Validity of the findings: The focus is on solving a real-world problem, which supports the validity of the authors' findings. Additional comments: The paper is in good shape after the required changes.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: NEGATION AND UNCERTAINTY DETECTION IN CLINICAL TEXTS WRITTEN IN SPANISH: A DEEP LEARNING-BASED APPROACH Review round: 1 Reviewer: 1
Basic reporting: The authors describe an approach for the detection of negation and uncertainty in clinical texts written in Spanish using deep learning. The authors have used two deep learning approaches, namely BiLSTM-CRF and BERT. The paper builds on various similar works that exploit deep learning approaches and annotated data for Spanish, and advances the state of the art in an incremental way. Although the paper does not present an entirely new breakthrough idea, in my view, it is still of general interest. The authors demonstrate that either better or competitive results can be achieved with the approaches based on deep learning models on both negation and uncertainty detection in clinical texts. Experimental design: The experimental design is, on the whole, well done, and the paper requires relatively minor modifications. Some issues that could be addressed are mentioned below. Non-exploitation of syntactic properties: In Section 2.1 (lines 149-150) the authors mention some works that exploit syntactic properties of the sentence to extract the scope (e.g. Cotik et al., 2016; Zhou et al., 2015; Peng et al., 2018). Although this was used in the context of rule-based approaches, a question arises as to whether this would not be useful also for approaches that use deep learning. I recommend that the authors address this issue in, e.g., Section 5 (Discussion). Deep learning approaches require data-labelling effort: I tend to agree with the authors that creating rules manually can be a lengthy process (Section 2.1, lines 151-152). I also agree that the feature engineering used in many machine learning approaches can be a time-consuming task. For instance, you mention that the NUBES corpus includes 29,682 labelled records. I imagine that labelling this data must also be a time-consuming task. Although in the case presented in this paper this data is already available, this might not be the case if this solution were applied to other languages for which such a labelled corpus is not available. So, I find it unfair that the effort of labelling large amounts of training data has not been mentioned by the authors. Future evaluations and comparisons of different systems should take into account not just F1, but various other costs, such as those associated with the manual preparation of auxiliary labelled data, features, or rules. But this is future work. As for this paper, can you please mention that the solution presented is not without extra costs? Explain better how the labels are handled by the deep learning systems: Section 3.2.1 discusses the BiLSTM-CRF model, and this is accompanied by Figure 2. Similarly, Section 3.2.2 describes the BERT model, and this is accompanied by Figure 3. In both figures we can see how one particular sentence (i.e. "La biopsia no muestra células cancerígenas" in the case of BiLSTM-CRF) is fed in. At the beginning of Section 3.2 (lines 328-330) the authors show how one sentence is annotated: (['Paciente : O', 'Sin : B-NegCue', 'dolor : B-NegScope', 'toráxico : I-NegScope', '. : O']). From the description it is not clear how the labels are handled. Are they also fed into BiLSTM-CRF and BERT, or just "attached"? This should be clarified. Improve the description of the BERT representation: The authors mention that they use three embeddings: token, segment, and position embeddings. As for the token embedding, the authors use the symbol E_n to represent it. From the context it seems that the other two embeddings are represented using the symbols E_1 and E_2.
This could be clarified. The authors then discuss the output representation (R_1, R_2, R_n). Equation (1) should make it clear that the index i spans the values 1, 2, and n - if I interpreted this correctly. So, as we can see, each word representation in this scheme is associated with a particular label, which probably answers my doubt raised in the previous paragraph (i.e., how the labels are handled). But what are the exact values of w_0 and b_0 used in the implementation? Evaluation in the scope task: Section 4.1 discusses the evaluation measures used, which are pretty standard. It appears that a particular scope task can have a binary result determining whether the system did this right or not. However, the scope task involves determining all elements in the scope. So, for instance, when considering your example sentence in Section 3.2 (lines 328-330), i.e., (['Paciente : O', 'Sin : B-NegCue', 'dolor : B-NegScope', 'toráxico : I-NegScope', '. : O']), we see that there are two elements within the NegScope. It could happen that the system identifies just one of them correctly. Would this, in that case, be considered an error? This point could be explained better. Also, other measures could be used in case the answer is only partially correct. When is the WordPiece tokenization activated? This issue is discussed in Section 3.2.2, and an example is given of how some words are separated into morphological units. For instance, "biopsia" is represented as "bio ##psia". But what mechanism determines which tokens should (or should not) be decomposed this way? Why, for instance, is the word "muestra" (or "células") not decomposed in a similar manner? (A small sketch of the mechanism is given at the end of this review.) Validity of the findings: The paper and the results presented are of general interest. It shows how some existing techniques can be adapted to new settings and improve the results of state-of-the-art approaches. Additional comments: Minor points on presentation and wording: Table 2 summarizing the datasets includes two different kinds of information (on size, and on cues and scope). It seems that it would be better to separate this into two tables. Section 2.2, line 157: in the second step those features are trained into a classifier to perform predictions => in the second step those features are used to train a classifier to perform predictions Section 2.3, line 193: from syntactic trees with a convolutional layer that's features are concatenated => from syntactic trees with a convolutional layer whose features are concatenated Figure 1: The labels NegCue and NegScope appear in rectangles that are shaded. If this is printed on a black-and-white printer, the labels are hardly legible, as the shading is too dark. Title of Section 4.5.3, "Validation Results": the title "Transfer results" would seem clearer, given the contents. The reference Santiso et al. (2018) is incomplete.
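As an aside on the WordPiece question raised above: the mechanism is purely vocabulary-driven. A word is kept whole if it appears in the tokenizer's subword vocabulary; otherwise it is split greedily, longest-match-first, into in-vocabulary pieces prefixed with "##". A minimal sketch (the checkpoint name is illustrative, not necessarily the one used in the paper):

```python
# Minimal sketch of WordPiece tokenization (illustrative checkpoint choice).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
print(tok.tokenize("La biopsia no muestra células cancerígenas"))
# In-vocabulary words stay whole; out-of-vocabulary words are decomposed
# greedily into subword pieces marked with the '##' prefix.
```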
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: NEGATION AND UNCERTAINTY DETECTION IN CLINICAL TEXTS WRITTEN IN SPANISH: A DEEP LEARNING-BASED APPROACH Review round: 1 Reviewer: 2
Basic reporting: In the Introduction section, the motivation of the work focuses on the improvements brought by deep learning approaches to several NLP tasks, but not to negation and speculation detection. There are works that show improvements on negation and speculation tasks, and these should also be included as motivation for your work. The following are some of the works focused on this topic: -Zavala, R. R., & Martinez, P. (2020). The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study. JMIR Medical Informatics, 8(12), e18953. -Britto, B. K., & Khandelwal, A. (2020). Resolving the Scope of Speculation and Negation using Transformer-Based Architectures. arXiv preprint arXiv:2001.02885. -Fei, H., Ren, Y., & Ji, D. (2020). Negation and speculation scope detection using recursive neural conditional random fields. Neurocomputing, 374, 22-29. -Al-Khawaldeh, F. T. (2019). Speculation and Negation Detection for Arabic Biomedical Texts. World of Computer Science & Information Technology Journal, 9(3). -Dalloux, C., Claveau, V., & Grabar, N. (2019, September). Speculation and negation detection in French biomedical corpora. In RANLP 2019-Recent Advances in Natural Language Processing (pp. 1-10). In the Related works section you present some of them, but you should indicate the differences and similarities between your work and the existing ones. The negation overview in which all existing corpora in all languages are compiled is missing: Jiménez-Zafra, S. M., Morante, R., Teresa Martín-Valdivia, M., & Ureña-López, L. A. (2020). Corpora annotated with negation: An overview. Computational Linguistics, 46(1), 1-52. Typos: - The sentence "In these cases, the scope recognition fails because all tokens in a sentence can be taken as the scope." appears twice. Lines 146-148. - BiLSMT ---> BiLSTM (Line 578). Experimental design: In the description of the datasets it would be convenient to add the total number of negation and speculation cues of each corpus. Regarding the Cancer dataset, as it is presented here for the first time, the number of annotators and the inter-annotator agreement for both negation and speculation should be included. How did you resolve the disagreement cases? Why did you explore BiLSTM-CRF and BERT for negation and uncertainty detection? This should be justified in subsection 3.2. This is very important, as it will make the research rigorous and not a simple test of algorithms. There is one part that is not clear to me. If I understand correctly, the detection of cues and scopes is modeled jointly. How do you then know which scope corresponds to which cue? (See the toy sketch at the end of this review.) In order for the approach to be replicable, it must be indicated which probability values have been used to determine the predicted labels (B-Cue, I-Cue, O, B-CueScope,...). Validity of the findings: Comparison with other proposals regarding cue detection should also be included, as was done for scope identification in Table 6. The discussion section is very interesting because it helps to understand how the studied algorithms work and the advantages and shortcomings of each one. When talking about limitations that need to be addressed, do these limitations correspond to BiLSTM, BERT, or both? This should be clarified in the paper.
On the other hand, when it is said that most of the scope errors are due to discontinuous scopes, some examples of discontinuous scope annotation are presented, but how the system labeled those scopes should also be included, so that we can see where the system is failing. Additional comments: The paper deals with a very interesting and important topic that affects all natural language processing tasks. Moreover, it does so in Spanish, a language less studied in this respect. In general, a good job has been done, but the aspects that I have indicated in the previous sections should be addressed to improve the paper before publication. Therefore, my recommendation is major revisions.
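To make the cue–scope question above concrete, here is a toy sketch of the labelled output the paper describes, reusing the paper's own example sentence: with a single flat BIO layer, nothing in the labels themselves links a scope to a particular cue when a sentence contains several, which is exactly why the question matters.

```python
# Toy sketch of the BIO-labelled output described in the paper; note that the
# labels alone do not say WHICH cue a scope token belongs to.
tagged = [("Paciente", "O"),
          ("Sin", "B-NegCue"),
          ("dolor", "B-NegScope"),
          ("toráxico", "I-NegScope"),
          (".", "O")]
cues = [tok for tok, lab in tagged if lab.endswith("NegCue")]
scope = [tok for tok, lab in tagged if lab.endswith("NegScope")]
print(cues, scope)  # ['Sin'] ['dolor', 'toráxico']
```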
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: NEGATION AND UNCERTAINTY DETECTION IN CLINICAL TEXTS WRITTEN IN SPANISH: A DEEP LEARNING-BASED APPROACH Review round: 2 Reviewer: 1
Basic reporting: The paper discusses the problem of detecting negation and uncertainty in clinical texts in Spanish. The topic is general and should be of interest to others. The Related work section presents a comprehensive account of current work. The methodology is described in a clear manner, and the results obtained are competitive and of interest. The authors have responded well to my previous comments. My recommendation is to accept the paper. Experimental design: No additional comment Validity of the findings: No additional comment Additional comments: Some suggestions to improve the formulation: Line 252: model, which results showed => model whose results showed Line 286: The results obtained will show => The results obtained show Lines 411-412: On the other hand, the BERT model is based on transformer architectures which also have been improved sequence-labeling tasks in the biomedical domain => ?? On the other hand, the BERT model is based on transformer architectures which also has been improved for sequence-labeling tasks in the biomedical domain Figure 2: The arrow below Matrix T' and Matrix L' in the direction of Matrix H should be moved. It should join Matrix L' and Matrix H.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: NEGATION AND UNCERTAINTY DETECTION IN CLINICAL TEXTS WRITTEN IN SPANISH: A DEEP LEARNING-BASED APPROACH Review round: 2 Reviewer: 2
Basic reporting: The authors have taken into account the reviewers' suggestions and have made the appropriate changes, improving the article. They have done a great job. Therefore, I recommend its acceptance, but there are two details to be addressed. Typos: Line 84: allow to advances —> allow to advance Line 243: (Zavala and Martínez, 2020) —> Zavala and Martínez (2020) Please add to Table 4 the total number of negation and uncertainty cues. Experimental design: - Validity of the findings: - Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: MACHINE-LEARNING-BASED AUTOMATED QUANTIFICATION MACHINE FOR VIRUS PLAQUE ASSAY COUNTING Review round: 1 Reviewer: 1
Basic reporting: • The language used is NOT clear throughout the article; several ambiguities are present (please see general comments). • Literature is well referenced & relevant. • Structure conforms to PeerJ standards. • Figures are high quality and described, but there are too many (please see general comments). • Complete raw data is supplied. Experimental design: • The original primary research is NOT within the scope of the journal. • Research question well defined, relevant & meaningful. • It is stated how the research fills an identified knowledge gap. • Rigorous investigation performed to a high technical & ethical standard. • Methods described with sufficient detail & information, but they require specific elements and machinery to replicate accordingly (please see general comments). • Sample size well chosen for the problem at hand. Validity of the findings: • The data is robust, but critical controls, e.g., different forms of plaques (from different viruses), are not present. • Conclusions are well stated, linked to the original research question & limited to supporting results. Additional comments: The manuscript by Phanomchoeng et al. presents an automated quantification machine for viral plaque counting. This machine is a convenient way to reduce the workload in plaque counting. The authors show the performance of the machine on a Dengue virus dataset that they have produced, comparing it to manual (expert) counting. I commend the authors for their extensive work and tutorial videos, and I found their proposed method interesting and useful. However, I believe there are several concerns that should be addressed before acceptance. Major concerns: 1. As the contribution of the authors is a bioinformatics software tool aimed at virologists with no prior experience in image analysis, the Computer Science category of the PeerJ journal seems inappropriate. 2. The English language is not clear enough and should be improved. Some examples where the language could be improved include lines 57, 58, 81, 82, 85, 100, 140, 170, 171, 191, 192, 260, 261, 282, 308, 370, 373 and 407. The current phrasing makes comprehension difficult. I suggest you have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or contact a professional editing service. 3. The authors propose an automated quantification machine and the software to operate it. Although the authors claim that the hardware is relatively simple to set up, I could not test it accordingly due to not having access to the specific instruments. 4. It is not clear how the machine would perform if presented with plaque assays from different viruses, or with differently shaped plaques. In fact, the software was only evaluated using Dengue virus. The authors should discuss this or change the scope of the main title accordingly. 5. The authors should consider reducing the total number of main figures. Perhaps some figures could be supplementary? For example Figures 5, 6, 10, 12. Also, Figures 13 and 14 could be merged into a single image. 6. How was the "Maximum Number of Errors" for the expert defined (Table I)? This relevant point seems too arbitrary. Minor points: 1. The phrase "The viral plaques appear in the image as white circled areas (Fig. 3) since the viruses eat the cell around themselves." should be rewritten. 2.
The phrase “Thus, when the number of viral plaques is large, it is more difficult to justify the number of viral plaques.” should be rewritten, keeping in mind that the authors should not justify their results but rather present them. 3. Typo: in line 98 the number 4 should be superscripted (“cells at 1 x 104 cells”). 4. Typo: in line 310 there seems to be an extra period (“the counting by the expert and machine. Pearson's”). 5. The authors should be more explicit about the future developments regarding the Firebase database. 6. The authors should be more explicit about which filters are applied to the binary image (line 216). (A generic sketch of such a step is given at the end of this review.)
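On minor point 6, for illustration, the kind of binary-image step being asked about usually looks like the following: a generic thresholding-plus-connected-components sketch with an assumed OpenCV setup and a hypothetical input file, not the authors' implementation.

```python
# Generic sketch (hypothetical input image, not the authors' pipeline):
# count bright blobs via Otsu thresholding + connected components.
import cv2

gray = cv2.imread("well.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
# Skip label 0 (background) and filter out tiny noise specks by area.
plaques = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]
print(len(plaques))
```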
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: MACHINE-LEARNING-BASED AUTOMATED QUANTIFICATION MACHINE FOR VIRUS PLAQUE ASSAY COUNTING Review round: 1 Reviewer: 2
Basic reporting: 1. Is an automated quantification machine really needed for counting plaques? The authors should provide more application scenarios for the automatic quantification of viral plaques. It is easy to count 1-10 plaques in one well at an appropriate dilution of the viral stock. The authors focus on the statistics and algorithms of image recognition, ignoring the repeatability of the viral plaque assay itself. 2. On the other hand, the shapes of plaques are variable for some viruses. How can overlapping plaques (two or more) be distinguished from a single enlarged plaque caused by viral mutations? What about smaller plaques caused by attenuated viruses in the plaque assay image? Experimental design: The experiments on image recognition are well designed and within the scope of the journal. Validity of the findings: I have no questions about this part. Additional comments: Citation 1 (Delbruck, 1940) is inappropriate for the first sentence of the introduction section. It is well accepted that the first viral plaque assay on eukaryotic cell lines was described by Dulbecco et al. (Dulbecco, R., 1952. Production of Plaques in Monolayer Tissue Cultures by Single Particles of an Animal Virus. Proc. Natl. Acad. Sci. U. S. A., 38 (8), 747-752.)
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: MACHINE-LEARNING-BASED AUTOMATED QUANTIFICATION MACHINE FOR VIRUS PLAQUE ASSAY COUNTING Review round: 1 Reviewer: 3
Basic reporting: From Figure 5 onwards, the figure numbers do not match the descriptions, and it becomes very difficult to follow. Some sentences in the text are not finished (e.g. line 58: "Recently, automated imaging-based counters for plaque assays have been employed, but the current versions."). There are also parts that are vague regarding the description of the algorithm. For instance, in line 194, the authors write "a shape-based matching algorithm is employed". Experimental design: see next section Validity of the findings: Phanomchoeng et al. describe a new method to enumerate viral plaques in well plates. Their work is original in that it combines hardware and software into a single solution. There were, however, several points that I found would need to be addressed. Accessibility -- For a publication such as this one, the software described should be published as well. I could not find in the manuscript a link to a repository and a set of data/images. The PeerJ editorial policy states: "For software papers, 'materials' are taken to mean the source code and/or relevant software components required to run the software and reproduce the reported results. The software should be open source, made available under an appropriate license, and deposited in an appropriate archive. Data used to validate a software tool is subject to the same sharing requirements as any data in PeerJ publications." Furthermore, I advise the authors to name their method and to provide online documentation (otherwise, I anticipate low adoption). In several instances, the authors describe their method as cost-effective. It would be necessary to give an estimate of the overall price. Performance evaluation -- The number of plaques is the variable of interest, and the authors find a good correlation between expert and machine. It is difficult for the reader to understand why errors are negligible for large plaque counts (>12). Could the authors explain or provide references? In several instances, the authors describe the correlation between expert and machine as significant, which is trivial. A fairer assessment would be to compare the machine-vs-human error with the human-vs-human error (in other words, is the error/bias due to automation close to the error between experimenters?). What is the justification for Table 1? It seems extremely arbitrary and unnecessary. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: MACHINE-LEARNING-BASED AUTOMATED QUANTIFICATION MACHINE FOR VIRUS PLAQUE ASSAY COUNTING Review round: 2 Reviewer: 1
Basic reporting: • Language used is clear throughout the article. • Literature is well referenced & relevant. • Structure conforms to PeerJ standards. • Figures are high quality and described, but there are too many (please see general comments). • Complete raw data is supplied. Experimental design: • The original primary research is NOT within the scope of the journal. • Research question well defined, relevant & meaningful. • It is stated how the research fills an identified knowledge gap. • Rigorous investigation performed to a high technical & ethical standard. Validity of the findings: • Conclusions are well stated, linked to the original research question & limited to supporting results. Additional comments: I would like to thank the authors for their extensive work and the quality of their manuscript. However, I disagree with the authors, and I still think that the original primary research is NOT within the Aims and Scope of the journal, as stated in point 5 of the Aims and Scope page (please see https://peerj.com/about/aims-and-scope/cs). For clarity, I have copied the cited paragraph: "Submissions should be directed to an audience of Computer Scientists. Articles that are primarily concerned with biology or medicine and do not have a clearly articulated applicability to the broader field of computer science should be submitted to PeerJ - the journal of Life and Environmental Sciences. For example, bioinformatics software tools should be submitted to PeerJ, rather than to PeerJ Computer Science." Finally, as I understand that the editor considers that the paper IS within the scope of the journal (given the reviewing process), I recommend the paper for publication as is.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: MACHINE-LEARNING-BASED AUTOMATED QUANTIFICATION MACHINE FOR VIRUS PLAQUE ASSAY COUNTING Review round: 2 Reviewer: 2
Basic reporting: The authors have answered all my questions, but the importance of automatic quantification has not been well explained. I believe that the improved English language will not be a problem for understanding. They have also corrected the wrong references. Experimental design: no comment Validity of the findings: I believe the validity of the image algorithm on certain plaque plates (96-well or other sizes) is not a problem. On the contrary, the repeatability of the plaque assay itself should be addressed. Additional comments: I have said that the authors should provide more application scenarios for the automatic quantification of viral plaques. The authors have not provided substantial applications and have failed to emphasize the importance of automatic quantification. In other words, they should not limit it to counting viral plaques. The sizes of plaques produced under antivirals should be included. In fact, a high-throughput antiviral drug screening using plaque assays in 96-well plates can provide this scenario (DOI: 10.1002/jmv.25463).
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: EVALUATING GRAPH NEURAL NETWORKS UNDER GRAPH SAMPLING SCENARIOS Review round: 1 Reviewer: 1
Basic reporting: This is an experimental analysis paper evaluating different baseline GNNs under graph sampling. The motivation of the work seems to be to inform the community of an experimental benchmark that reports the trends of using different sampling methods with several GNN instances. The writing of the paper is clear, and the experiments are fairly compared when the models are evaluated on the 4 datasets considered. The code framework is public, and from my observation it follows a similar layout/setting to Benchmarking GNNs [Dwivedi et al., 2020], following the principles of fair GNN benchmarking. However, the paper has no technical contribution apart from the aforementioned points, and this is obvious from the introduction and setting of the paper. The following are my concerns, which may limit the usefulness of this paper as a benchmark of sampling methods using different GNNs: * The datasets used are quite small, and there is well-known agreement in the community to move on to larger and more complex datasets than Cora, Citeseer, etc., which are used in the paper. See Dwivedi et al., 2020, and Open Graph Benchmark: Hu et al., 2020, which describe the need to use datasets other than the aforementioned ones to evaluate GNN trends fairly. It is also clear that this paper is aware of the previous works on GNN benchmarks. Hence, I do not understand the need to make a graph sampling benchmark using the same small datasets! * The paper follows the setting of Dwivedi et al., 2020 but does not consider GatedGCNs, which were the best performing in that work. This may question the validity of the paper's insight where it is written that GCN/BFS are generally the best performing (L65), since a previously well-performing GNN baseline, GatedGCN, is not considered for evaluation in this work. * If the paper's intent was to establish an evaluation trend of different GNNs under sampling, it is necessary to consider the robustness of the experiments to make the results reliable and useful for the community. While the experiments in the paper are fair and unquestionable, they are not necessarily robust. For example, the results do not seem reliable if they are not evaluated on more complex and diverse datasets, and also on diverse tasks (such as graph prediction, link prediction, node prediction). This work only considers node prediction on small datasets. Experimental design: * The research questions are well defined and meaningful. If the work were more robust, it could fill the research gaps. However, see my concerns above in "Basic Reporting", which may prevent this paper from being robust. * For the experiments considered in the paper, the settings are well defined for reproducibility. Similarly, the methods are sufficiently detailed to replicate. However, the concerns are with the datasets used and the lack of robustness in such an experimental work. Please refer to the comments in the "Basic Reporting" section on this. Validity of the findings: * For the experiments performed in the paper, the underlying data is provided, but the findings are questionable given the current state and trend of the graph deep learning field in general. In particular, the community is slowly transitioning to evaluating GNNs on more diverse, complex, and medium-to-large-scale datasets. The datasets used in the paper are small, and I believe that the paper is aware of that. Additional comments: * In the writing of the paper, several references do not mention a date and have "n.d." instead, e.g., Bruna et al., n.d.
It seems this is a minor issue that the paper should have taken care of before submission.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: EVALUATING GRAPH NEURAL NETWORKS UNDER GRAPH SAMPLING SCENARIOS Review round: 1 Reviewer: 2
Basic reporting: The authors provide a systematic review of graph-based methods. They tried to identify the performance of graph neural network models on graphlets. The authors performed an evaluation of five Graph Neural Networks. The idea is good and interesting. However, I have a few comments on this article. Experimental design: 1- In the experimental setup, the authors did not mention how many hidden layers are in the network (a minimal sketch of a standard two-layer setup is given at the end of this review). 2- What information/features are you extracting in the layers of the network? 3- Please mention the number of training, testing, and validation samples you are using in the experiments. 4- Can you elaborate on line 138, "there is no missing edges among..."? Why? 5- What effects did you notice over 1000 epochs for the small and large datasets? Any visualization graphs, etc.? 6- Among the five GNNs, you did not consider the Siamese GNN; any reason for that? For example, "Riba, Pau, et al. "Learning graph distances with message passing neural networks." 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018." and NI Kajla et al., "Graph Neural Networks Using Local Descriptions in Attributed Graphs: An Application to Symbol Recognition and Hand Written Character Recognition". Validity of the findings: 1- In the Abstract, line 27, the authors say that completing a sampled subgraph does not improve the performance, adding that it depends on the dataset. However, I am not clear about this statement; please elaborate on it. Additional comments: No additional comments
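For reference on the setup questions above, node-classification benchmarks of this kind typically use a small number of hidden layers and the public splits shipped with the datasets. A minimal two-layer GCN sketch with PyTorch Geometric on Cora (illustrative only, not the paper's benchmark code):

```python
# Minimal two-layer GCN for node classification (illustrative sketch).
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

data = Planetoid(root="data", name="Cora")[0]  # ships with a standard split

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = GCN(data.num_features, 16, int(data.y.max()) + 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for _ in range(200):  # training epochs
    opt.zero_grad()
    out = model(data.x, data.edge_index)
    F.cross_entropy(out[data.train_mask], data.y[data.train_mask]).backward()
    opt.step()
```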
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: EVALUATING GRAPH NEURAL NETWORKS UNDER GRAPH SAMPLING SCENARIOS Review round: 2 Reviewer: 1
Basic reporting: The authors provide a systematic review of graph-based methods. They tried to identify the performance of graph neural network models on graphlets. The authors performed an evaluation of five Graph Neural Networks. The idea is good and interesting. Experimental design: My previous comments on the experimental portion have been addressed. Validity of the findings: The findings of this study are significant. Additional comments: All my previous comments have been addressed. I have no further comments.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SCHEMA AND CONTENT AWARE CLASSIFICATION FOR PREDICTING THE SOURCES CONTAINING AN ANSWER OVER CORPUS AND KNOWLEDGE GRAPHS Review round: 1 Reviewer: 1
Basic reporting: The article is well thought out, well-contextualized, and innovative. The references provide sufficient background to the field, but some references are relatively old; recent research results in the QA field should be added and compared against. The text of the article is clearly structured and rich in content. However, as shown in Figure 2, the image descriptions are too simple, the diagrams are not clear, and the text is rather blurred. The terminology in the article should be explained and illustrated more precisely. In addition, the English language should be improved to ensure that an international audience can clearly understand the text. Experimental design: The motivation behind the problem investigated in this manuscript is interesting and meaningful. However, the detailed application background of this issue is not described. The article shows a good grasp of and investigation into existing QA research but lacks comparison with and discussion of recent work. There are too many comparison experiments using combinations of machine learning methods, and too many repetitive experiments. It is recommended to compare with existing state-of-the-art methods. Validity of the findings: The data provided in the article is relatively reliable and abundant, and the conclusions are related to the original research questions. However, the article should emphasize the impact of the proposed method on the field and its novelty. Additional comments: no comment
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SCHEMA AND CONTENT AWARE CLASSIFICATION FOR PREDICTING THE SOURCES CONTAINING AN ANSWER OVER CORPUS AND KNOWLEDGE GRAPHS Review round: 1 Reviewer: 2
Basic reporting: - In section 1, the authors should introduce models that have done work like this study before. How is the authors' model different from the previous models? - In section 3, the authors should define "general metadata" and "detail metadata" and give some examples. - Figure 2 should label the shapes and give some explanations. Experimental design: - In lines 194-196, expansion of the existing triples aims to increase the efficiency of the system in answering multi-hop questions. The authors should give some examples of multi-hop answers and explain how the model finds multi-hop answers from the expanded triples. - In lines 321-322, the authors used 150-dimensional word and part-of-speech (POS) embeddings with the word2vec tool. As far as I know, the word2vec model appeared in 2013. Why did the authors not use fastText (2016) or BERT (2018)? - In lines 329-330, the authors hired two annotators to evaluate the question schema extraction task. Why don't the authors use a computerized model instead of humans for objective judgment? - In line 344, the authors used naïve Bayes and SVM binary models for question-level classification. Why don't the authors use deep learning, such as LSTM, CNN, or BERT, for classification to achieve higher accuracy? - In lines 372-374, the scores such as accuracy, precision, recall, and F1 should be presented in equation form and numbered. - In subsection 5.7, the authors should present tables that compare the response time of this study across datasets. These tables would demonstrate that the QAnswer framework achieves better results in the two categories, with and without source prediction. Validity of the findings: No comment Additional comments: This paper provides a solution that combines knowledge graphs and text to predict the source(s) containing the response from a number of structured data sources and an unstructured data source. This is an interesting method.
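For reference, the standard equation forms the reviewer is requesting for lines 372-374 (assuming the usual true/false positive/negative counts TP, FP, TN, FN of a classifier) are:

```latex
% Standard classification metrics; requires amsmath.
\begin{align}
  \text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN} \\
  \text{Precision} &= \frac{TP}{TP + FP} \\
  \text{Recall}    &= \frac{TP}{TP + FN} \\
  F_1 &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{align}
```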
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SCHEMA AND CONTENT AWARE CLASSIFICATION FOR PREDICTING THE SOURCES CONTAINING AN ANSWER OVER CORPUS AND KNOWLEDGE GRAPHS Review round: 2 Reviewer: 1
Basic reporting: The authors addressed all my previous comments. Experimental design: The authors addressed all my previous comments. Validity of the findings: The authors addressed all my previous comments. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEPENDENCY MANAGEMENT BOTS IN OPEN-SOURCE SYSTEMS—PREVALENCE AND ADOPTION Review round: 1 Reviewer: 1
Basic reporting: This paper presents a mixed-methods study on the impacts of adopting a specific type of bot: dependency management bots. The study examines an important and timely topic. Bots as software maintenance task automation tools have become increasingly popular, as discussed in the paper. The current state of the art is clearly and accurately summarized. The paper is very well written, easy to read and understand. In addition, the problem is well motivated. Furthermore, the paper outlines the main takeaways and implications for the open-source community and bot researchers. Regarding the replication package and supplemental materials, I would also recommend sharing a codebook with code names, descriptions, and example quotations for all themes and conversation labels. It would facilitate the understandability of each one of them. Experimental design: Regarding the experimental design, there are a few places to clarify or improve. First, it would make the paper stronger if the authors could clarify the motivation for focusing on dependency management bots after conducting a broader study that attempted to categorize several "bots" and assess how this classification would provide meaningful insights about bot adoption and its impacts. Even though all encountered bots are dependency management bots, it still seems the paper has two disjoint parts. The first part focused on classifying the bots from the BIMAN dataset using the previous categorization by Erlenhov et al., 2020. The second one restricted the analysis to a specific type of bot: dependency management bots. I believe the findings reported in RQ1, which resulted in the decision to focus on dependency management bots, are closely related to the types of bots/automation present in the BIMAN dataset. Those bots/automations are the ones that commit code, which is also the case for dependency management bots. It is not expected, for example, to see any bot that only automates tasks and posts comments on issues. Another opportunity to strengthen the paper is to clarify the qualitative analysis method in Section 6.2. Overall, it is not clear from the description how the authors conducted the coding process. Did multiple authors participate in the coding process? Did they discuss the codes? Did they reach an agreement? The authors should enhance the analysis method in the revision if it was not rigorous, and describe the method clearly. In the revision, I also encourage the authors to articulate in the discussion *how* this study complements the findings from previous works: Which prior findings are confirmed? Which prior findings are challenged or extended? Which findings are new? How do the new findings fit (or not fit) into previous works? Validity of the findings: Overall, the paper is clean and readable, addresses a significant problem, and presents valid results grounded in the data provided. I do feel the paper has the weaknesses above that should be fixed before publication. I recommend major revisions, as I consider that all of the comments could be addressed over the revision period. Additional comments: Additional (minor) comments: - Line 28: "creating pull requests (PRs) (Wessel et al., 2020a) [...] or code contributions over time (Wessel et al., 2018)." -> double-check the references and their examples. - Line 30: The last sentence of the first paragraph of the introduction lacks a reference.
- The link in the first quotation of Section 6.3 (https://github.com/pydanny/cookiecutter-django/pull/2872#issuecomment-702824915) redirects to the wrong URL (https://github.com/pydanny/). The same occurs with the link in the third quotation. - Line 490: "[...] have been used in previous work (Wessel and Steinmacher, 2020; Wyrich et al., 2021; Wessel and Steinmacher, 2020)." -> Wessel and Steinmacher, 2020 is cited twice.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEPENDENCY MANAGEMENT BOTS IN OPEN-SOURCE SYSTEMS—PREVALENCE AND ADOPTION Review round: 1 Reviewer: 2
Basic reporting: *The English language should be improved to ensure that an international audience can clearly understand your text. Some examples where the language could be improved include sections 3, 5.1, 6.1, and 7. *The problem definition has not been formulated clearly. I think the motivation for this study needs to be made clearer. The authors must do a better job of explaining why their work improves on what came before. *This paper is not self-contained. In almost all sections, the authors refer to their previous publications. I think this paper suffers from extensive self-plagiarism. Additionally, the main contribution and innovation of this paper are not clear enough. *The audience of this paper cannot understand it without reading prior publications from these authors. *The citation format is sometimes inconsistent, for instance, Erlenhov et al. (2020) versus (Erlenhov et al., 2020). *In section 4.2, there are three self-citations to the same paper in one paragraph. The authors refer to this paper as one of the references in the study methodology section and in section 6.1, Data Collection (Erlenhov et al., 2021). Experimental design: *In Figure 2, the flowchart is not clear enough. A better explanation is required to indicate the sources of knowledge for designing such a decision model. Would you please add your evidence for this figure? *The authors mentioned the BIMAN approach, but they should elaborate on this dataset further. Currently, no explanation is given. *The authors mentioned several taxonomies in the related work section, and they introduced their own paper as one of the related works. I think the paper needs more effort and citations from other studies to explain the taxonomies. Validity of the findings: *How did the study address these research questions? More detail about the research methods is needed. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEPENDENCY MANAGEMENT BOTS IN OPEN-SOURCE SYSTEMS—PREVALENCE AND ADOPTION Review round: 2 Reviewer: 1
Basic reporting: Thank you to the authors for providing this detailed revision. I would reiterate that I find the topic very important and timely. I also think the study is well executed, and the paper is well written, easy to read and understand. In addition, the problem is well motivated. Given the detailed revision provided by the authors, I recommend acceptance of the paper. The authors addressed all the priority and significant review points I raised. I see no major issues related to the approach followed in conducting the study or the presentation of the results. Experimental design: There are no further suggestions for improvement. Validity of the findings: Overall, the paper is clean and readable, addresses a significant problem, and presents valid results grounded in the data provided. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEPENDENCY MANAGEMENT BOTS IN OPEN-SOURCE SYSTEMS—PREVALENCE AND ADOPTION Review round: 2 Reviewer: 2
Basic reporting: - Experimental design: - Validity of the findings: - Additional comments: The authors have significantly improved the manuscript by restructuring and condensing the previous draft.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 1 Reviewer: 1
Basic reporting: The article uses clear, unambiguous, technically correct text and has a good introduction that clearly mentions the disadvantages of manual Snort rule generation. However, the paper is a little long and wordy, and some sections are overstated. The literature review is not sufficient to show the disadvantages of existing automatic Snort rule generation approaches and the need for the current work to fill the gap. For example, see https://doi.org/10.1109/IranianCEE.2016.7585840 and https://doi.org/10.1109/ICITEED.2015.7409013, which are also related to this work; many more examples exist. The structure of the paper is not bad, and the figures and tables look fine. The results partly support the hypothesis proposed by the authors. Experimental design: The design of the experiment for evaluating the proposed Snort automatic rule generator and security event correlation is outlined in much detail. However, there is no proper explanation and discussion of the attack types under which SARG-SEC is evaluated. The attack types under which the proposed strategy is evaluated need to be explained. Validity of the findings: The abstract states in lines 50-52: "It is evident from the experimental results that SARG-SEC has demonstrated impressive performance and can provide competitive advantages compared to other related approaches." However, this claim is not linked to the conclusion, as no comparison with other related approaches is shown. The authors need to show a comparison of the results of manual rules vs. auto-generated Snort rules. Additional comments: No additional comments
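To make the requested manual-vs-automatic comparison concrete, below is a minimal sketch of what programmatic Snort rule generation involves. This is illustrative Python only; the `make_snort_rule` helper, the field values, and the SID are hypothetical and are not taken from SARG-SEC.

```python
# Minimal sketch of emitting a Snort alert rule from extracted traffic
# features (illustrative; not the SARG-SEC algorithm).

def make_snort_rule(sid, proto, src, src_port, dst, dst_port, msg, content):
    """Format one Snort alert rule string from the given fields."""
    options = f'msg:"{msg}"; content:"{content}"; sid:{sid}; rev:1;'
    return f"alert {proto} {src} {src_port} -> {dst} {dst_port} ({options})"

rule = make_snort_rule(
    sid=1000001,               # local rules conventionally use SIDs >= 1,000,000
    proto="tcp",
    src="any", src_port="any",
    dst="$HOME_NET", dst_port="80",
    msg="Suspicious HTTP payload",
    content="/etc/passwd",
)
print(rule)
# alert tcp any any -> $HOME_NET 80 (msg:"Suspicious HTTP payload"; content:"/etc/passwd"; sid:1000001; rev:1;)
```

An evaluation along these lines could then compare the alerts and false positives produced by hand-written rules against rules emitted by the generator on the same traffic.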
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 1 Reviewer: 2
Basic reporting: How to define the Snort rules is one of the critical issues when deploying Snort-based IDSs. This paper made a good attempt at automatically generating rules for such IDSs, which I think is an interesting, challenging, and meaningful effort. Experimental design: no comment Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 1 Reviewer: 3
Basic reporting: The authors present a novel hybrid-based approach to Snort automatic rule generation and security event correlation (SARG-SEC). In general, the work is interesting, and the proposed methodology is adequately explained. The authors rely on Snort, a well-known Network Intrusion Detection System (NIDS). Although the contributions of the paper are mentioned in the introductory part, it is not very clear how the paper is differentiated with respect to other works. Moreover, the introductory part can be further enhanced by including some statistics about NIDSs and real cybersecurity incidents. The authors provide a detailed background on NIDSs and Snort. Finally, despite the fact that the evaluation results demonstrate the efficiency of this work, some further clarification should be provided. Experimental design: Although the authors present a detailed overview of NIDSs and Snort, the paper does not include a discussion of similar works. Some indicative references are given below. Moreover, it is not very clear why the authors chose Snort; why not other signature-based NIDSs, such as Suricata? [1] Radoglou-Grammatikis, Panagiotis, et al. "SPEAR SIEM: A Security Information and Event Management system for the Smart Grid." Computer Networks 193 (2021): 108008. [2] Grammatikis, Panagiotis Radoglou, et al. "SDN-Based Resilient Smart Grid: The SDN-microSENSE Architecture." Digital 1.4 (2021): 173-187. [3] Grammatikis, Panagiotis Radoglou, et al. "Secure and private smart grid: The spear architecture." 2020 6th IEEE Conference on Network Softwarization (NetSoft). IEEE, 2020. [4] Sekharan, S. Sandeep, and Kamalanathan Kandasamy. "Profiling SIEM tools and correlation engines for security analytics." 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET). IEEE, 2017. [5] Suarez-Tangil, Guillermo, et al. "Providing SIEM systems with self-adaptation." Information Fusion 21 (2015): 145-158. [6] Suarez-Tangil, Guillermo, et al. "Automatic rule generation based on genetic programming for event correlation." Computational Intelligence in Security for Information Systems. Springer, Berlin, Heidelberg, 2009. 127-134. [7] Kotenko, Igor, et al. "Parallelization of security event correlation based on accounting of event type links." 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP). IEEE, 2018. [8] Fedorchenko, Andrey, and Igor Kotenko. "IOT Security event correlation based on the analysis of event types." Dependable IoT for Human and Industry: Modeling, Architecting, Implementation 147 (2018). [9] Ferebee, Denise, et al. "Security visualization: Cyber security storm map and event correlation." 2011 IEEE Symposium on Computational Intelligence in Cyber Security (CICS). IEEE, 2011. [10] Dwivedi, Neelam, and Aprna Tripathi. "Event correlation for intrusion detection systems." 2015 IEEE International Conference on Computational Intelligence & Communication Technology. IEEE, 2015. Validity of the findings: Although the authors provide many figures related to the effectiveness of the proposed methods, it is not clear how the authors establish the validity of the findings. For instance, the authors could compare their implementation with other works, other IDSs, and Security Information and Event Management (SIEM) systems, using some baseline metrics. Additional comments: The paper should be re-checked entirely for potential writing errors and typos.