Dataset columns: instruction (string, 1 unique value), input (string, 53–283 characters), output (string, 92–42.8k characters).
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 1 Reviewer: 4
Basic reporting: Spelling mistakes should be avoided, such as the one in line 594, where "A priori" is misspelt as "Apriorior". In addition, the grammar within this article needs another round of proofreading. In line 176, instead of just "propose", one could use "to propose". In the same line, the conjunction does not really do justice to an intuitive reading of the line and feels awkward. Multiple such issues were observed all over the manuscript. The figures and graphs look somewhat low in quality (both in terms of resolution and the font used). It is advised to export these at 300 dpi for publishable quality. Experimental design: no comment Validity of the findings: no comment Additional comments: No additional comments
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 2 Reviewer: 1
Basic reporting: The authors addressed all the comments. Therefore, the paper is accepted for publication Experimental design: The authors addressed all the comments. Therefore, the paper is accepted for publication Validity of the findings: The authors addressed all the comments. Therefore, the paper is accepted for publication Additional comments: The authors addressed all the comments. Therefore, the paper is accepted for publication
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: A NOVEL HYBRID-BASED APPROACH OF SNORT AUTOMATIC RULE GENERATOR AND SECURITY EVENT CORRELATION (SARG-SEC) Review round: 2 Reviewer: 2
Basic reporting: Looks good Experimental design: Looks good Validity of the findings: Looks good Additional comments: No additional comments
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DEVELOPMENT OF A STOCK TRADING SYSTEM BASED ON A NEURAL NETWORK USING HIGHLY VOLATILE STOCK PRICE PATTERNS Review round: 1 Reviewer: 1
Basic reporting: This paper proposes a pattern-based stock trading system using ANN-based deep learning and utilizing the results to analyze and forecast highly volatile stock price patterns. Three highly volatile price patterns containing at least a record of the price hitting the daily ceiling in the recent trading days are defined. The implications of each pattern are briefly analyzed using chart examples. Here, the training of the neural networks has been conducted with stock data filtered into the three patterns, and trading signals were generated using the prediction results of those neural networks. Using data from the KOSPI and KOSDAQ markets, this paper shows that the proposed pattern-based trading system can achieve better trading performance than domestic and overseas stock indices. Experimental design: 1. How was the experimental environment developed? 2. Why is the dataset divided into four parts? Validity of the findings: 1. How was the accuracy of 96.23% obtained? 2. The proposed scheme must be compared with at least two existing schemes. 3. There must be a discussion on how the results are generated. Technical details are missing. Additional comments: 1. The motivations of the paper are not clear. 2. Contributions must be presented point-wise. 3. In the "Related Works" section, the existing schemes must be discussed one by one. Delete the limitations from this section and add them to the Introduction section. 4. How is the price pattern chosen? 5. The proposed scheme is unstructured. Divide the proposed scheme into several sub-sections, and then discuss the entire proposed scheme under those sub-sections. 6. Equations and figures are not represented properly. All the key terms must be defined. 7. It is hard to identify the novelty of the proposed work. 8. The English language is very poor. Never use I, we, or our in a Research Article. 9. The organization of the paper is poor. Add section numbers. 10. Important references are missing. Add the following references: “Efficient algorithm for big data clustering on single machine”, CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 9-14, 2020. “Ensemble algorithm using transfer learning for sheep breed classification”, Proc. of the 2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI), IEEE, Timisoara, Romania, pp. 199-204, 19-21 May 2021. “Enhanced neural network based univariate time series forecasting model”, Distributed and Parallel Databases, 2021. DOI: 10.1007/s10619-021-07364-9 “Deobfuscation, unpacking, and decoding of obfuscated malicious JavaScript for machine learning models detection performance improvement”, CAAI Transactions on Intelligence Technology, vol. 5, no. 3, pp. 184-192, 2020. “Fast and secure data accessing by using DNA computing for the cloud environment”, IEEE Transactions on Services Computing, 2020. DOI: 10.1109/TSC.2020.3046471 “Feature selection approach using ensemble learning for network anomaly detection”, CAAI Transactions on Intelligence Technology, vol. 5, no. 4, pp. 283-293, 2020. “Securing multimedia by using DNA based encryption in the cloud computing environment”, ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 3s, 2020. DOI: https://doi.org/10.1145/3392665 “Nonlinear neural network based forecasting model for predicting COVID-19 cases”, Neural Processing Letters, 2021.
DOI: 10.1007/s11063-021-10495-w “IFODPSO-based multi-level image segmentation scheme aided with Masi entropy”, Journal of Ambient Intelligence and Humanized Computing, vol. 12, pp. 7793-7811, 2021. DOI: https://doi.org/10.1007/s12652-020-02506-w
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DEVELOPMENT OF A STOCK TRADING SYSTEM BASED ON A NEURAL NETWORK USING HIGHLY VOLATILE STOCK PRICE PATTERNS Review round: 1 Reviewer: 2
Basic reporting: The author did not introduce the problem properly. There is a lot of finance-related technical information presented to the reader from the beginning without properly clarifying what it is. The author should work on easing the reader into the topic. The figures presented are not of publishable quality and should be redone as high-quality images. Experimental design: Each stock market index uses proprietary methods to determine which companies or investments to include, and that can change over time. How is that handled here? Validity of the findings: This is a time-series prediction problem; why not use a variation of LSTMs? LSTMs almost always yield better results than DNNs. The authors should experiment with modern methods rather than stopping at a DNN. Additional comments: No additional comments
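To make the LSTM suggestion in the review above concrete, here is a minimal sketch of a next-step price predictor. It is not the authors' system; the synthetic random-walk price series, the look-back window, and the layer sizes are illustrative assumptions only.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 20  # assumed look-back window of past closing prices

# Synthetic random-walk prices standing in for the (unavailable) KOSPI/KOSDAQ data.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.0, 1.0, 2000)) + 100.0

def make_windows(series, window):
    # Slice a 1-D series into (samples, window, 1) inputs and next-step targets.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., np.newaxis], series[window:]

X, y = make_windows(prices, WINDOW)
split = int(0.8 * len(X))

model = keras.Sequential([
    layers.LSTM(32),   # the recurrent variant the reviewer suggests
    layers.Dense(1),   # next-step price estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=64,
          validation_data=(X[split:], y[split:]), verbose=0)
print("Held-out MSE:", model.evaluate(X[split:], y[split:], verbose=0))

A fair comparison along the reviewer's line would report the same trading metrics for such an LSTM variant and the paper's DNN on identical data splits.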
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DEVELOPMENT OF A STOCK TRADING SYSTEM BASED ON A NEURAL NETWORK USING HIGHLY VOLATILE STOCK PRICE PATTERNS Review round: 1 Reviewer: 3
Basic reporting: The impact of volatility on asset price changes plays a crucial role for traders/investors in capturing positive returns on their investments, and with the advancement of technology a revolution has crept into the stock market arena. Since the turn of the century, financial markets have undergone two major changes: (a) massive information and (b) the speed of strategy execution. The paper under review speaks in the same tone, picking up a strategy to discover a price amidst volatility. However, the paper needs to speak conceptually more about the drawbacks of the currently existing patterns while proposing the new trading mechanism. Experimental design: 1. Provide the complete implementation process. 2. Is there any specific reason the dataset is divided into multiple parts? Validity of the findings: 1. How was the 96.23% accuracy arrived at, and what measuring method was used to arrive at it? 2. The proposed scheme must be compared with at least ‘four’ traditional methods. 3. More technical novelty details need to be added. Additional comments: 1. The motivation of the paper is not clear. 2. More references on related work are needed; on the other hand, for whatever reference works were quoted, the authors have not touched upon the criticisms/drawbacks/limitations of those related works. 3. The research work is silent on recommending a particular method/process for measuring accuracy.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DEVELOPMENT OF A STOCK TRADING SYSTEM BASED ON A NEURAL NETWORK USING HIGHLY VOLATILE STOCK PRICE PATTERNS Review round: 2 Reviewer: 1
Basic reporting: This paper proposes a pattern-based stock trading system using ANN-based deep learning and utilizing the results to analyze and forecast highly volatile stock price patterns. Experimental design: The experimental design is convincing. Validity of the findings: The results show the efficiency of the proposed scheme. Additional comments: The paper can be accepted.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DEVELOPMENT OF A STOCK TRADING SYSTEM BASED ON A NEURAL NETWORK USING HIGHLY VOLATILE STOCK PRICE PATTERNS Review round: 2 Reviewer: 2
Basic reporting: The author has been clear in presenting his thoughts, and the literature references are sufficient. Experimental design: 1. The author has provided the complete implementation process; 2. The author has clarified the specific reason for the division of the datasets. Validity of the findings: 1. The validation is presented in a much better way, substantiating the argument. 2. They have clearly compared with a few methods, which adds novelty to their approach. Additional comments: The motivation for taking up the paper and the summary of the paper are clear.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 1 Reviewer: 1
Basic reporting: The authors of this paper have focused on a very important aspect of our society, the increasing prevalence of racism and xenophobia on social media. More specifically, the authors focus on the Twitter platform and the Spanish language, where they use three different deep learning models, CNN, LSTM, and BERT, with the last one being the most effective during experimentation. The paper is well structured and well written. This is another important paper in the area that utilizes NLP and behavioural characteristics, showcasing existing problems. However, there are a few issues that the reviewer would like to raise with the authors. The contributions of the paper are three: the assembly of the datasets, the experimentation on top of the datasets, and lastly the critical comparison and analysis. A table at the end of the related literature can help the authors summarize their novelties and help the potential reader understand them. Experimental design: The hardware and software specifications of the testbed environment are not mentioned, and they should be, for the replicability of the results. Validity of the findings: The discussion section should be renamed to critical evaluation and comparison of the results. In the same section, the authors should further explain their results, not just demonstrate the numbers. They should clarify why each methodology had different results and how they could potentially be used in an automated environment. There is no mention of the overhead introduced, if any, by the different deep learning models. Additional comments: There is no mention of privacy. This work clearly demonstrates that Cambridge Analytica is still alive and that we can extract people’s behavioural characteristics without their consent just by acquiring publicly available data (e.g. https://doi.org/10.1109/MC.2018.3191268 , https://doi.org/10.1109/UIC-ATC.2013.12 , https://doi.org/10.3390/make2030011 ).
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 1 Reviewer: 2
Basic reporting: This manuscript proposes several deep learning models to classify Spanish texts on xenophobia and racism. Among the objectives of the research presented is to generate a dataset of Spanish-language tweets labelled as xenophobic or racist, or not. To carry out this research, the authors have compiled their own dataset, which they have shared with the rest of the scientific community, and on this they have applied CNN, LSTM and BERT models. The manuscript is well-structured, state-of-the-art, scientifically justified, reproducible and novel. The research focuses on generating models with a Spanish dataset, which, a priori, is often difficult to justify. However, the authors mention other similar research such as HaterNet and HatEval, highlighting the possible weaknesses of these experiments and thus justifying the experiments presented in this manuscript. Experimental design: At the methodological level, the authors mention the methodology applied and how they obtained the dataset and, in addition, they have shared the Jupyter notebooks containing the code of the models generated and the results obtained. Validity of the findings: The authors need to address some minor weaknesses before the manuscript can be considered for publication in a journal. 1. Although CNN, LSTM and BERT techniques are indicated in the title, the CNN and LSTM techniques are not mentioned in the abstract. It is recommended that the authors modify the abstract by adding this information and other interesting information regarding the final results obtained, for example, the % accuracy obtained by the models. 2. To improve the interpretability of the results, the authors are advised to add some figures of the confusion matrices corresponding to the models that obtained the best results. 3. On the other hand, they should revise the manuscript and modify some concepts that appear written in two different ways, for example "tagged" and "labelled". Additional comments: No additional comments
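As an illustration of the confusion-matrix figures suggested in point 2 of the review above, a minimal sketch follows; the labels and predictions are dummy placeholders, not the authors' results.

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Dummy binary labels: 1 = racist/xenophobic, 0 = not (illustrative only).
y_true = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]

disp = ConfusionMatrixDisplay.from_predictions(
    y_true, y_pred, display_labels=["not hateful", "hateful"], cmap="Blues")
disp.ax_.set_title("Confusion matrix (best model)")
plt.tight_layout()
plt.savefig("confusion_matrix.png", dpi=300)  # 300 dpi keeps the figure at publishable quality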
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 1 Reviewer: 3
Basic reporting: This paper evaluates different approaches to detect hate speech motivated by racism or xenophobia in Spanish using Twitter data. The structure of the manuscript is clear and there is an extensive literature review. The authors justify the need for an adapted model for supervised text classification in Spanish to detect hateful messages particularly aimed at migrants or non-Caucasian people. I consider, however, that the rationale is still weak and needs arguments in the following directions: - If the problem is language-based, what other developments have shown that native models for hate speech detection are better than general ones? That is, show examples of attempts in languages other than English that achieve good performance (and not only in Spanish). - Compare the results of this paper to the best performances obtained in other languages. This would make the manuscript more useful for international audiences. Regarding previous attempts, the authors mention HaterNet and HatEval, but are missing Pharm. This project develops deep learning models to classify tweets with hate speech towards migrants and refugees in Spanish, Greek and Italian. In Spanish, they manually labelled more than 12,000 tweets, finding 1,390 hateful messages, to build the classification model with an RNN. This approach does not use BERT, but still gets good metrics (f1-score = 0.87 for Spanish). The classification interface (http://pharm-interface.usal.es) offers more information (probably the training corpus upon request too) and these two papers: Vrysis, L.; Vryzas, N.; Kotsakis, R.; Saridou, T.; Matsiola, M.; Veglis, A.; Arcila, C.; Dimoulas, C. (2021). A Web Interface for Analyzing Hate Speech. Future Internet, 13(3), 80. https://doi.org/10.3390/fi13030080 Arcila, C., Sánchez, P., Quintana, C., Amores, J. & Blanco, D. (2022, Online preprint). Hate speech and social acceptance of migrants in Europe: Analysis of tweets with geolocation. Comunicar, 71. On the other hand, what is the goal of point 2.2? Why is it called "Sentiment analysis"? Do the authors perhaps mean text classification? The title is confusing since SA is not the scope of this paper. Experimental design: The method is correctly included in the paper and shows that the research is in line with the scope of the Journal. The models are well described, although I am not sure if most of the details are necessary for a Computer Science specialized audience (e.g. the description of what BERT is or the definition of the evaluation metrics). The training corpus seems to be labelled by specialists in Psychology and has specific examples of racism and xenophobia. This is a good value of the paper. However, the reliability of this manual classification is not reported (i.e. an inter-coder reliability measure), which seriously compromises the quality of the ad hoc data set. This is extremely important since this test can tell whether different coders were really considering the same things as hate, and thus validate the qualitative category. In addition, there is a relevant privacy concern in the Twitter data. In Zenodo, the tweets include the usernames and the given label (hate/no hate). This does not meet the ethical and data protection standards for Twitter analysis. Validity of the findings: The findings are relevant for the hate speech detection field, since they can be used to build better models in different languages.
In particular, the use of BETO and the verification of its enormous advantage can help other researchers and practitioners to create better models in the future. My concern is the validation of the results on other, different data. This means, for example, collecting new Twitter data (with another timeframe or other word filters) and validating the obtained models on it. I consider this external validation phase extremely important for ML models. Regarding the conclusions, I think that the limitations and future research should be better developed. Is the size of the dataset really the only limitation? What specific applications can this model generate? How might the model deteriorate over time as new words/sentences come into use? How often should this model be re-trained to be really useful in real-life applications? Additional comments: None
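One common way to report the inter-coder reliability requested in the review above is Cohen's kappa; the sketch below uses two hypothetical annotators' dummy labels, not the paper's corpus.

from sklearn.metrics import cohen_kappa_score

# Dummy labels from two hypothetical coders: 1 = hateful, 0 = not hateful.
coder_a = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
coder_b = [1, 0, 1, 1, 1, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values around 0.8 or higher are usually read as strong agreement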
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 2 Reviewer: 1
Basic reporting: The paper was already clear and its quality was improved according to the comments. Experimental design: The experimental design was improved according to the comments. New measurements were added as requested. Validity of the findings: The impact of the findings was addressed and extra discussion was added. Additional comments: All comments were addressed.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 2 Reviewer: 2
Basic reporting: The authors have accomplished all the suggested improvements and changes. In my opinion, the paper should be accepted in its current state. Experimental design: ok Validity of the findings: ok Additional comments: None
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: DETECTING RACISM AND XENOPHOBIA USING DEEP LEARNING MODELS ON TWITTER DATA: CNN, LSTM AND BERT Review round: 2 Reviewer: 3
Basic reporting: The authors have addressed my main concerns and the manuscript is much better now. The missing reference can be found in: Arcila, C., Sánchez, P., Quintana, C., Amores, J. & Blanco, D. (2022, Online preprint). Hate speech and social acceptance of migrants in Europe: Analysis of tweets with geolocation. Comunicar, 71. https://www.revistacomunicar.com/index.php?contenido=detalles&numero=71&articulo=71-2022-02&idioma=en Experimental design: The authors have addressed my main concerns in the experimental design. Validity of the findings: The authors have addressed my main concerns in the validity of findings. Additional comments: None
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: STARMC: AN AUTOMATA BASED CTL* MODEL CHECKER Review round: 1 Reviewer: 1
Basic reporting: This paper presents the ideas and algorithms behind the starMC tool, one of the few CTL* model checkers in existence. The tool uses decision diagrams for its symbolic implementation and accepts Petri nets as input. The implementation is evaluated in extensive experiments, using many models and formulas. This paper builds on an earlier demo paper, and extends it with separate logical and implementation views of the tool. Furthermore, it contains extensive references to original works, from which the ideas originated. The experimental evaluation is more rigorous than in the previous paper. The paper is very clearly written and pleasant to read. The paper is mostly self-contained. There are plenty of references to previous work; the related work section covers more than two pages. I think this gives a lot of insight into the history of CTL* model checking and also its current state. The source code and files used for the experiments are available online, which is commendable. Experimental design: The main questions posed in the paper are whether it's possible to design and implement a CTL* model checker and whether techniques like variable ordering and the extraction of counter-examples can be applied to it. As mentioned in the intro, the authors only answer the first half of this question; the second half is left for future work. In my view, the setup of the experiments is appropriate for the question that the researchers try to answer. The set of benchmarks is based on the widely used MCC collection of models and formulas. The experimental results indicate that this is a varied set of benchmarks, which helps validity. My only criticism here is that the first research question relates to models of 'industrial interest', while it's not clear that such models are contained in the MCC benchmark set. Validity of the findings: I think the main achievement of this paper is not the finding that CTL* model checking is possible in practice, but the fact that there is now a tool to do just that. The conclusion that starMC can be used for large state spaces is fair, given the experimental results. The authors also highlight the shortcomings of their tool. Additional comments: The paper was truly enjoyable to read, and I think that not much needs to be done for acceptance. I do have a few small remarks, which are listed below. The paper can be accepted when these are addressed. Minor remarks/typos: - p2, l92: distint -> distinct - Sect. 2.2: in my opinion, it's nice to be consistent with singular/plural of abbreviated nouns. You use the abbreviation MDD both for singular and plural, but for plural of MxD, you use MxDs. - p3: it might help the reader if you emphasize that the 'fully reduced rule' of skipping levels does not apply to MxDs. For example, the MxD of T0 does not restrict the relation between P2 and P2' if interpreted as fully reduced. - Def. 7: braces for sets in the wrong place; should be Q = S \cup {Spre}, with Spre \notin S. - p8, l287: s1 -> s_1 - Alg.3, l11: it was not immediately clear to me that this line is a comment. Perhaps there is a better syntax? - p13, l363: Some aspects of this procedure needs -> Some aspects of this procedure need - p13, l370: if M ⊗ A is a BA [..] weak or terminal -> if M ⊗ A is a weak or terminal BA - p13, l374: "whenever Qi ≤ Qj , there is no transition from Q j to Q i." Perhaps this should be the other way around? Otherwise the partial order is allowed to be the identity relation. - p13, l385: "onto the marking of s j."
-> Do you mean "onto the marking of s_i"? - Alg. 5, l9: perhaps you mean S = S'? - p16, l528: "Most computations of the model checker works with DD" -> Most computations of the model checker work with DDs - p17, l548: It is not immediately clear what "2n + 2 resp." refers to. - Tables 1, 2 and 3: "Both terminates" -> Both terminate - p23, l742: "LTSmin is faster then starMC" -> than
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: STARMC: AN AUTOMATA BASED CTL* MODEL CHECKER Review round: 1 Reviewer: 2
Basic reporting: - The treatment of deadlocks is unclear. On one hand, a Kripke structure is defined (lines 169-170 and 173-175) to be deadlock-free, enforcing infinite paths. On the other hand, the experiments section reports "stuttering enabled for LTL, and disabled for CTL" (lines 631-632), which seems inconsistent with the given definition of a Kripke structure, and raises the question of what the decision then is for CTL* and whether the decision for LTL and CTL follows that of line 631. - Fairness is presented in a confusing manner. The acceptance sets of the constructed GBA are used as "fairness constraints" (line 14 of Algorithm 4), but given that I have limited knowledge of and experience with fairness assumptions, why this is the case is completely opaque to me. A clarification either way would likely help make the theory much more digestible to a reader not familiar with the original work by Emerson and Lei. The role of the fairness assumption could also be clarified to make the algorithm more transparent. - Another avenue for improving the transparency of the algorithm is a more thorough overview of the algorithm before going into the details. Lines 300-314 attempt this, but the takeaways from the example are not particularly clear, as lines 303-310 mainly demonstrate how the example cannot be checked just with LTL or CTL checking. More elaboration on how the different subformulae are dealt with would be nice here. - There are a handful of typos. Definition 3 lists that $s \models a$ iff $a \in L(a)$ when clearly it should be $a \in L(s)$, and Algorithm 5 writes $\mathbf{repeat} ... \mathbf{until} S \neq S'$ as a fixed-point computation, when it probably should be $\ldots \mathbf{until} S = S'$. The example on line 302 has a formula mentioning APs alpha and beta, but the text describes an AP gamma which is not present. - Algorithm 5 details a function $CheckE_{fair}G$ that returns a sat-set, while other functions that compute sat-sets are called, e.g., $SatCTL*$ (Algorithms 2, 4). Experimental design: - The comparison to LTSmin is not particularly convincing. The paper does not appear to give a convincing argument for why it is sensible to consider the state space generation step an important part of the verification procedure. Lines 638/639 specify 60 seconds for state space generation and 60 seconds per formula, but the paper focuses mainly on the formula verification half of this, in which case the state space generation step is not important. Because of this, the comparison to LTSmin should probably have been restricted to models where both starMC and LTSmin generated state spaces, rather than models where either of them did. The text also places some emphasis on the worst-case exponential translation to $\mu$-calculus done in LTSmin; it should be clarified whether this time was contained in the 60 seconds for state space generation or the 60 seconds per query. - The method of constructing CTL* queries from MCC should be noted already on line 633; right now it is first introduced on lines 735-738. The sentence on line 633 should also be revised; it seems to have grammatical issues. - For the sake of completeness, the Spot/Meddly versions and the LTSmin version used should be noted. - It seems “unfair” that LTSmin gets 4 cores. Sylvan can utilize 4 cores, but this seems to be an unfair comparison for starMC (which is single-core), and it compares the underlying DD framework rather than the algorithm (translation via mu-calculus vs. the Emerson-Lei algorithm).
Validity of the findings: - The experimental setup itself is missing. The generated data is provided, but the experiment cannot be repeated without significant effort by a peer. This is particularly problematic as LTSmin is forced to adopt a specific variable ordering, which is known to be a significant factor in the efficiency of symbolic model checkers. - As the authors motivate their work by the rarity of existing CTL* model checkers for which executable code is available, they should ensure that their entire setup is provided in a repeatable format. This would include source code, models, and scripts for execution and data collection. Currently, links are provided to the raw results (but not scripts, models or queries), along with links to mutable/volatile places on the internet. A link to a permanent data-storage service such as Zenodo is strongly recommended. - "the experiments conducted in the testing phase indicate that starMC can solve (significantly) more models than LTSmin" is a strong statement given the selection criteria of the models. The follow-up comment on the speed of LTSmin also seems arbitrary, given the difference in resource allocation for the two tools. Additional comments: The paper is recommended to be revised, in particular the experimental section. Furthermore, the text as a whole could have been written more clearly and needs an additional proof-reading pass; a few examples are provided in the notes.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: STARMC: AN AUTOMATA BASED CTL* MODEL CHECKER Review round: 1 Reviewer: 3
Basic reporting: The paper is detailed and focuses mostly on the implementation of the starMC tool, which is part of the greatSPN graphical Petri net tool, although it can also be used standalone. The paper describes the tool and the implemented algorithms in sufficient detail. There is plenty of background and there is also a reasonable section on the related work in the literature. The work is self-contained. Not many proofs are included, but this is not necessary since there are no truly new algorithms proposed, merely existing algorithms that have been implemented using symbolic methods. A virtual machine with the tool, CSV summaries of the benchmarks, etc. can be downloaded, although it is a bit tricky because the link in the review form is broken. Experimental design: The research questions are sufficient and the conclusions appear supported by the empirical evidence. They appear to be replicable. I would, however, recommend that the authors store an artifact of the benchmarks, raw data, and processing scripts on an external platform such as Zenodo for long-term persistence. Validity of the findings: As reported in section 2, I think the conclusions are supported by the evidence; they appear to be scientifically accurate. Additional comments: I found a minor spelling error, "Sumamry", in the caption of Table 3. Other than that the paper is reasonable. I would personally have appreciated more discussion of variable orderings and their impact, because I like that in BDD papers, but in this case I can see that it does not add much to the paper.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: STARMC: AN AUTOMATA BASED CTL* MODEL CHECKER Review round: 2 Reviewer: 1
Basic reporting: This paper presents the ideas and algorithms behind the starMC tool, one of the few CTL* model checkers in existence. The tool uses decision diagrams for its symbolic implementation and accepts Petri nets as input. The implementation is evaluated in extensive experiments, using many models and formulas. This paper builds on an earlier demo paper, and extends it with separate logical and implementation views of the tool. Furthermore, it contains extensive references to original works, from which the ideas originated. The experimental evaluation is more rigorous than in the previous paper. The paper is very clearly written and pleasant to read. The paper is mostly self-contained. There are plenty of references to previous work; the related work section covers more than two pages. I think this gives a lot of insight into the history of CTL* model checking and also its current state. The source code and files used for the experiments are available online, which is commendable. Experimental design: The main questions posed in the paper are whether it's possible to design and implement a CTL* model checker and whether techniques like variable ordering and the extraction of counter-examples can be applied to it. As mentioned in the intro, the authors only answer the first half of this question; the second half is left for future work. In my view, the setup of the experiments is appropriate for the question that the researchers try to answer. The set of benchmarks is based on the widely used MCC collection of models and formulas. The experimental results indicate that this is a varied set of benchmarks, which helps validity. My only criticism here is that the first research question relates to models of 'industrial interest', while it's not clear that such models are contained in the MCC benchmark set. Validity of the findings: I think the main achievement of this paper is not the finding that CTL* model checking is possible in practice, but the fact that there is now a tool to do just that. The conclusion that starMC can be used for large state spaces is fair, given the experimental results. The authors also highlight the shortcomings of their tool. The experimental results are easy to reproduce with the virtual machine the authors provided. The VM includes a script for running a subset of the experiments in a reasonable amount of time. The results appear to match those in the paper. Additional comments: I want to thank the authors for carefully addressing all the comments I listed for the initial version.
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: AN ENSEMBLE BASED APPROACH USING A COMBINATION OF CLUSTERING AND CLASSIFICATION ALGORITHMS TO ENHANCE CUSTOMER CHURN PREDICTION IN TELECOM INDUSTRY Review round: 1 Reviewer: 1
Basic reporting: The authors' proposed churn prediction model is a hybrid model that is based on the combination of clustering and classification algorithms using ensembles. First, different clustering algorithms, i.e. K-means, K-medoids, X-means, and Random clustering, are evaluated individually on two churn prediction datasets. Then hybrid models are introduced by combining clustering with seven different classification algorithms individually, and then evaluation is performed using ensembles. The authors have done some good work, but the manuscript still needs improvements: 1- The abstract does not contain much information about the proposed ensemble model; it contains unneeded information rather than meaningful information. Please update it. 2- There is no need to add a single sub-section 1.1 heading; simply add the contributions. 3- The visualization is very poor. 4- In Section 3.4.8 (Ensemble classifiers), the authors mention the ensemble classifiers voting, bagging, AdaBoost, and stacking without a proper architecture discussion. 5- The flow of the experiments should be shown with diagrams. 6- Consider these ensemble architectures in the literature and in comparison with your proposed model. Rupapara, V., Rustam, F., Shahzad, H.F., Mehmood, A., Ashraf, I. and Choi, G.S., 2021. Impact of SMOTE on Imbalanced Text Features for Toxic Comments Classification using RVVC Model. IEEE Access. Jamil, R., Ashraf, I., Rustam, F., Saad, E., Mehmood, A. and Choi, G.S., 2021. Detecting sarcasm in multi-domain datasets using convolutional neural networks and long short term memory network model. PeerJ Computer Science, 7, p.e645. Rustam, F., Ashraf, I., Mehmood, A., Ullah, S. and Choi, G.S., 2019. Tweets classification on the base of sentiments for US airline companies. Entropy, 21(11), p.1078. Rustam, F., Mehmood, A., Ullah, S., Ahmad, M., Khan, D.M., Choi, G.S. and On, B.W., 2020. Predicting pulsar stars using a random tree boosting voting classifier (RTB-VC). Astronomy and Computing, 32, p.100404. Experimental design: The experimental design of the manuscript is not clear. Please add a graphical representation of the proposed methodology. The hyperparameter settings of the models should be given in tabular form. How did you tune your models, and which method did you use? Validity of the findings: Why did the authors use accuracy, precision, recall, and F1 score for clustering method evaluation? There is a big difference between the accuracy and F1 score values, which indicates model overfitting; please justify it. Compare your proposed approach's results with the above-mentioned studies. Additional comments: Dear Editor, I hope you are doing well. The manuscript is good but needs some changes, which I have mentioned above. Kind regards
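For point 4 of the review above, a minimal sketch of one of the named ensemble types (soft voting over a few base classifiers) is shown below; the synthetic, churn-like dataset and the chosen base learners are assumptions for illustration, not the authors' architecture.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data mimicking churn (roughly 15% positives).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("bag", BaggingClassifier(random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0))],
    voting="soft")
ensemble.fit(X_tr, y_tr)

# Reporting precision/recall/F1 next to accuracy makes the accuracy-vs-F1 gap
# raised by the reviewer visible when the classes are imbalanced.
print(classification_report(y_te, ensemble.predict(X_te), digits=3))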
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: AN ENSEMBLE BASED APPROACH USING A COMBINATION OF CLUSTERING AND CLASSIFICATION ALGORITHMS TO ENHANCE CUSTOMER CHURN PREDICTION IN TELECOM INDUSTRY Review round: 1 Reviewer: 2
Basic reporting: The authors proposed an approach for customer churn prediction. They proposed a hybrid model using clustering and classification algorithms. The proposed research is evaluated on two different benchmark telecom data sets obtained from the GitHub and BigML platforms. The manuscript needs improvement to make it significant. How do the authors combine the clustering and classification algorithms? Please describe it in more detail. The combination of several models will increase the complexity, so please work on the trade-off between accuracy and efficiency. Evaluate the clustering model in terms of RMSE, MSE, and MAE. Why did you select these two datasets? Apply some validation techniques to show the significance of the proposed approach in terms of standard deviation. What is the impact of preprocessing on the models' performance? Have you tried some variations in preprocessing? Apply state-of-the-art deep learning models (LSTM, CNN, GRU) in comparison. Update the literature review with recent studies. Rupapara, V., Rustam, F., Shahzad, H.F., Mehmood, A., Ashraf, I. and Choi, G.S., 2021. Impact of SMOTE on Imbalanced Text Features for Toxic Comments Classification using RVVC Model. IEEE Access. Rustam, F., Khalid, M., Aslam, W., Rupapara, V., Mehmood, A. and Choi, G.S., 2021. A performance comparison of supervised machine learning models for Covid-19 tweets sentiment analysis. Plos one, 16(2), p.e0245909. Omar, B., Rustam, F., Mehmood, A. and Choi, G.S., 2021. Minimizing the overlapping degree to improve class-imbalanced learning under sparse feature selection: application to fraud detection. IEEE Access, 9, pp.28101-28110. Rustam, F., Ashraf, I., Shafique, R., Mehmood, A., Ullah, S. and Sang Choi, G., 2021. Review prognosis system to predict employees job satisfaction using deep neural network. Computational Intelligence, 37(2), pp.924-950. Experimental design: No comments Validity of the findings: No comments Additional comments: No comments
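For the request above to report significance in terms of standard deviation, here is a minimal sketch of repeated cross-validation on dummy data; the classifier, scoring metric, and dataset are placeholders, not the authors' setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, scoring="f1", cv=cv)
print(f"F1: {scores.mean():.3f} +/- {scores.std():.3f}")  # mean and standard deviation over the folds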
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: AN ENSEMBLE BASED APPROACH USING A COMBINATION OF CLUSTERING AND CLASSIFICATION ALGORITHMS TO ENHANCE CUSTOMER CHURN PREDICTION IN TELECOM INDUSTRY Review round: 2 Reviewer: 1
Basic reporting: The revised version is good and the authors have also resolved all my concerns; it is now ready for publication. Experimental design: no comment Validity of the findings: no comment Additional comments: no comment
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: AN ENSEMBLE BASED APPROACH USING A COMBINATION OF CLUSTERING AND CLASSIFICATION ALGORITHMS TO ENHANCE CUSTOMER CHURN PREDICTION IN TELECOM INDUSTRY Review round: 2 Reviewer: 2
Basic reporting: The authors have done good work in the revision and resolved all the issues I mentioned. Experimental design: no comment Validity of the findings: no comment Additional comments: no comment
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACCESSIBILITY CHALLENGES OF E-COMMERCE WEBSITES Review round: 1 Reviewer: 1
Basic reporting: This paper is well written. In addition to the introduction and conclusion, it is composed of method and material, results, and discussion sections; hence, it is well structured. The literature references are relevant and recent. However, most of them are reports and guides from international organizations like the WHO. In fact, it would be more suitable to cite academic papers. The authors follow a clear methodology. The obtained results are analyzed and discussed in order to give solutions to the accessibility problem. Accessibility solutions are described in the conclusion section. I think that it would be more suitable to put them in the discussion section. Also, I think that the authors have to describe the followed methodology and the proposed solutions in detail. The authors have to explain some abbreviations (like ARIA attributes and the A, AA and AAA levels). I think that the introduction is very long. The authors touch on many subjects to show the importance of the problem (like COVID-19, e-commerce, and accessibility). I suggest rewriting the introduction to be more consistent. I note that there is no related work section presented in this paper! Experimental design: The paper presents the accessibility challenges of e-commerce websites. Through this study, the authors have presented several contributions: 1 - the correlation between the ranking of e-commerce websites and their accessibility, 2 - the most significant accessibility barriers and neglected principles, 3 - some proposed solutions to improve the accessibility of e-commerce websites. I think that the research is well designed. Also, the research question is relevant, especially during the COVID-19 period, where e-commerce is considered a main means of coping with confinement. The applied methodology is rigorous. Also, the authors have mentioned the main limits of this methodology, which consist mainly in the inability of automated tools to test content accessibility. I think that the authors have to explain the followed methodology in detail. Compared to some cited works (like Oliveira et al. (2020)), I did not understand exactly the novelty of this study. Is it the proposal of solutions to accessibility? Because the same tool (WAVE) and field (e-commerce) are used in both studies. Validity of the findings: The obtained results are very important and they allow improving e-commerce websites (and indirectly improving the global economy). The first remark of the authors is the correlation between e-commerce website ranking and accessibility. This remark guides the business sector to improve the accessibility of their websites in order to improve their rank. Also, the authors describe the main accessibility barriers and the most neglected principles of this attribute. Moreover, the authors present the solutions to this issue. In terms of novelty and originality, the authors have mentioned several recent works that deal with this problem (using the same tools and applied in the e-commerce field). I think that the authors have to clearly state the novelty and originality points compared to these studies. I suggest adding a related work section to address this point. The authors provided all the data required to replicate. For future work, the authors suggest using a hybrid method, applying an automated tool together with a manual review method. I think that it is difficult to test the accessibility of 50 websites manually. Also, the manual method causes subjectivity problems. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACCESSIBILITY CHALLENGES OF E-COMMERCE WEBSITES Review round: 1 Reviewer: 2
Basic reporting: The manuscript titled "Accessibility challenges of e-commerce websites" evaluates the accessibility of 50 e-commerce websites in the top rankings based on the classification proposed by ecommerceDB. The evaluation was made with the WAVE tool. The MS Excel pivot table tool was applied for analyzing the accessibility of the selected e-commerce websites. In contrast, the normality test and the correlation between the ranking of e-commerce websites and accessibility barriers were analyzed using IBM SPSS (v25). Overall, the authors’ contribution was not entirely clear, since the study analyses the outputs of the WAVE tool on 50 top-ranking e-commerce websites. The proposed methodology for evaluating e-commerce websites needs further highlighting. For example, in the third phase (Define the test scenario), from a software engineering point of view there is no test scenario provided. Also, is there any potential back-tracking in the proposed methodology in the case where WAVE does not exhibit relevant information (potential failures, new accessibility improvement purposes, etc.)? Also, in Phase 2, which user categories were selected? This should be discussed. The authors are strongly invited to make a comparison between their findings and the study of (Xu, 2020), which is the only work presented in the state-of-the-art section that uses a WAVE-based evaluation of 45 e-commerce websites. It would be interesting for the authors to highlight the WAVE analysis method and the additional modification indicated in WCAG-EM 1.0, since it impacts the results and the proposed accessibility improvements. In Table 2 the column headings are not explained (Errors, Contrast Errors, Alerts, Features, Structural Elements and ARIA). For instance, the authors indicate errors and contrast errors; what is the difference between them? Also, it seems that ARIA, Structural Elements, Features and Alerts are warning barrier types. I suggest explaining these concepts when introducing Table 2. Table 3 is presented in a summative manner and needs more clarification about Success criteria and Level. The success criteria are presented as numbers in an X.X.X format, which is not clear (the meaning of the first, second and third number is missing). Also, the level takes A or AA values, which is not explained before Table 3 either. In short, the success criteria according to WCAG 2.1 should be outlined first, in the appropriate position in the paper. The authors should mention how the accessibility barriers have been identified; if the identification process comes from the WAVE findings, this must be noted. In Table 4, the authors are invited to justify why the normality testing is required. Also, the result of the normality test should be commented on, stating whether or not the data follow the normal distribution (i.e., the Lilliefors test). In the discussion section, in addition to what is raised about accessibility barriers (low vision, ocular degenerative diseases, cardiovascular accidents, etc.), hardware limitations were not discussed. Sometimes end-users’ devices (smartphones, tablets, …) may hide some significant information on the website due to automatic brightness functionality, which is provided mainly for preserving human vision. Interstitial advertising was not discussed either. In Table 5, when calculating the Pearson correlation, the authors should state at which level the correlation is significant. Using SPSS, the significance level defaults to 0.01; was there any modification of that level?
Page 11, line 282 ("In analyzing the accessibility of e-commerce websites, we applied descriptive statistics …"): the authors should clarify which type of descriptive statistics is used, univariate or multivariate. Finally, as a naive question: is it possible to take web applications into account in a WAVE-based evaluation? Although the figures and tables are well labelled and described, some minor issues are: • Line 94: the title "The state of the art" is missing, since what follows discusses the literature review. • Second paragraph after Figure 1 ("there are many; many people with hearing roblems"): redundancy. • The Table 3, Figure 5, Figure 6 and Figure 7 captions are written in bold. They should conform to the template. • "…, a sample of the top 50 e-commerce websites was taken from the EcommerceDB ranking site": a recurrent phrase in the manuscript (in the Introduction section, in the Materials & Methods section, and in the Results section). • Page 8, line 125 ("Our research proposes an automatic review method using the WAVE Web Accessibility Evaluation Tool (15) to solve …"): what does (15) stand for? Is it the tool version? • Page 7, line 85: "WCAG 2.1 (World Wide Web Consortium, 2018) consists of 4 principles, 13 guidelines and conformance criteria, plus an undetermined number of suitable techniques": is there really an undetermined number? Can it be bounded? • Page 7, line 88: "Principle 1, related to perceptibility, refers to information, and user interface components should be presented most simply. Principle 2 focuses on operability - it comprises the user interface components, and navigation should be between each page": the difference between principles 1 and 2 is not clear. • The first paragraph in the discussion section ("it was revealed that there is a need for web developers and web designers to apply WCAG 2.1 …"): incomplete phrase, a descriptive word is missing after WCAG 2.1, such as guidelines, principles, or requirements. • Page 13, line 340 ("This research can guide developers and designers of e-commerce websites to spread the use of WCAG 2.1 (6), .."): what does (6) stand for? Experimental design: A rigorous investigation is performed in a simple way. Overall, the authors’ contribution was not entirely clear, since the study analyses the outputs of the WAVE tool on 50 top-ranking e-commerce websites. The proposed methodology for evaluating e-commerce websites needs further highlighting. The introduction should also highlight the gaps left by existing solutions and how the presented proposal addresses them. Validity of the findings: In the conclusion section, the provided recommendations are suitable for any type of website. I would have liked them to be dedicated to e-commerce sites, since the core matter of the study is the accessibility issue in e-commerce websites. Generally, e-commerce websites present images of products. I suppose that an image processing issue should be tackled. The authors suggest future works in two places in the manuscript: before and within the conclusion section. It would be better to combine the suggested future works at the end of the conclusion section. Additional comments: No additional comments
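To illustrate the request about Table 5 in the review above, the sketch below shows how the Pearson correlation could be reported together with its p-value and a normality check, instead of relying on the SPSS default significance level; the rank and error counts are made-up numbers, not the study's data.

from scipy import stats

ranking = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]          # hypothetical website ranks
errors = [12, 15, 9, 20, 18, 25, 22, 30, 27, 35]   # hypothetical WAVE error counts

# Shapiro-Wilk is one possible normality check before relying on Pearson's r.
print("Shapiro-Wilk on errors:", stats.shapiro(errors))

r, p = stats.pearsonr(ranking, errors)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # e.g. significant at the 0.01 level if p < 0.01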
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACCESSIBILITY CHALLENGES OF E-COMMERCE WEBSITES Review round: 2 Reviewer: 1
Basic reporting: Overall, the raised comments have been handled. The paper formatting needs more adjustments, as well as English improvement. For example, on page 10, line 211, "This phase starts the evaluation process is defined, the level of compliance with WCAG 2.1 for the evaluation is defined (World Wide Web Consortium, 2018)" should be reformulated. In the Materials & Methods section, page 195, "We used (WebAIM, 2021) Web Accessibility Evaluation Tool (WAVE)" -> the citation should be put after the tool name. Experimental design: The authors are invited to justify why they cite the works ((Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020)) to denote the impact of COVID-19 on the growth of e-commerce websites. Is there a need for that? Also, the main issue of the paper is accessibility, not the impact of COVID-19 on the evolution of e-commerce websites. I suggest making just a short paragraph of the form "according to ((Villa & Monzón, 2021), (Pollák et al., 2021) and (Paștiu et al., 2020)), COVID-19 impacts the growth of e-commerce websites in the way …" rather than describing each work separately. In the "Literature review of COVID-19, e-commerce, and web accessibility" section, I suggest removing "COVID-19, e-commerce, and web accessibility" and keeping only the title "Literature review". From lines 103 to 136, the authors present the WCAG 2.1 principles in the literature review section after introducing COVID-19 works related to web accessibility. I suggest putting a brief description of the WCAG principles in the introduction section and restructuring it better, or making a separate section that outlines the WCAG principles. The paper's contribution is well designed. Validity of the findings: The previous remarks and suggestions have been reviewed. Additional comments: No additional comments
You are one of the reviewers; your task is to write a review of the article. You will be given the title of the article, the number of the review round, and your order among the reviewers.
Title: ACCESSIBILITY CHALLENGES OF E-COMMERCE WEBSITES Review round: 3 Reviewer: 1
Basic reporting: I have just one last remark regarding the order of sections: it would be better to put the Web accessibility principles section (lines 92 to 125) after the literature review section and before the Materials & Methods section, since it is inappropriate in scientific writing to intertwine sections. I think that my suggestion in the last review was not clearly understood. For the rest, I’m satisfied and I think that the raised comments have been handled. Experimental design: / Validity of the findings: / Additional comments: /
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CURRENT APPROACHES FOR EXECUTING BIG DATA SCIENCE PROJECTS—A SYSTEMATIC LITERATURE REVIEW Review round: 1 Reviewer: 1
Basic reporting: The manuscript presents a systematic review of frameworks for executing big data science projects. The contributions of the article are well presented. The manuscript concisely summarizes the current state of the art in data science project management frameworks. Furthermore, the findings and conclusions derived from the systematic review are of interest to the research community on organizational and management frameworks for big data science projects. The structure of the article is correct, and there are sufficient background details and literature references provided. The article introduces the topic adequately. The lack of established and mature methodologies justifies the need for information about how data science projects are organized, managed, and coordinated. Thus, it adequately justifies the need for a systematic review in this field. The paper is well written and well organized, uses professional, clear, and unambiguous English. Figures and tables in general are comprehensive and helpful. Experimental design: The content of the article is consistent with the journal's Aims and Scope. Furthermore, the methodology for the systematic review is described with sufficient detail: both the search space, the search strategy, and the selection criteria & procedure are well justified and described to be reproducible by another investigator (given access to the mentioned digital libraries). The references are correctly cited. The review is also organized logically into coherent paragraphs. The reviewer noted a possible mismatch in the total number of papers: - In Figure 1, in the Google Scholar search space, the number of retrieved studies is 37600. However, the total number of retrieved papers in Table 1 is 44600 (9200 + 17800 + 17600). Is this mismatch due to repeated articles? This difference should be clarified. - In line 262, it is mentioned that "After executing the title and abstract screen, 98 papers were selected for candidates for primary studies.". The same number of articles (98) appears in Figure 1. However, the total number of candidate papers in table 2 is 135 (52+18+24+36+5). Again, is this mismatch due to repeated articles on different digital libraries? This difference should be clarified. Validity of the findings: The novelty of the study is assessed in the manuscript. The conclusions are appropriately stated, connected to the three research questions, and are supported by the results. The conclusion identifies very interesting future directions for research in the field. The limitations of the study are also presented in the Discussion section. - Lines 299-304: "While the six themes that we identified in our SLR are all relevant to project execution, there was a wide range in the number of papers published for the different themes. Exploring this ratio of publications across the different themes provides a high-level view of research activity for the different themes. In other words, the number of articles for a particular theme is indicative of the current focus, with respect to current research efforts for that theme regarding the execution of data science projects." The reviewer found this paragraph not straightforward and difficult to read, with the same idea repeated twice. Therefore, the reviewer recommends simplifying this paragraph and making it easier to understand. 
Additional comments: Typographical errors: - Lines 511-513: "Another limitation is that while authors explored ACM Digital Library, IEEEXplore, Scopus, ScienceDirect, and Google Scholar databases, which index high impact journals and conference papers from IEEE, ACM, SpringerLink, and Elsevier to identify all possible relevant articles.": The while clause has not been used appropriately; there is no second part to the sentence. - Line 387: the text should be in bold. - Caption of Figure 1: csteps => steps. - Table 4: There are unnecessary parentheses on Extension / CRISP-DM / Primary Studies.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: CURRENT APPROACHES FOR EXECUTING BIG DATA SCIENCE PROJECTS—A SYSTEMATIC LITERATURE REVIEW Review round: 1 Reviewer: 2
Basic reporting: This is a very interesting piece of research. The use of English is clear, professional, and unambiguous. The review of the literature is carried out following a logical approach and a consistent methodology. Reasonable exclusions are adequately justified. The structure of the paper is good; however, Table 3 shows missing references in the first two lines (an error is displayed). The article is aligned with the aims and scope of the journal and is potentially impactful, as its novelty lies in a broad review (based on recent, up-to-date research) of the approaches for executing big data science projects. However, the article assesses the methods and approaches generically, underestimating the importance (and relevance) of the industries in which big data can be applied. The introduction adequately presents the subject and sufficiently supports the research rationale and scope. Experimental design: Rigorous investigation, excellent description of the methods, and appropriate citations. Validity of the findings: The validity of the findings is supported by an adequate and complete methodology. I encourage the authors to further investigate the industries (sectors) where (and why) big data can best be applied. Conclusions are well stated; however, they could be further developed. (Gaps and limitations are adequately identified.) Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DBFU-NET: DOUBLE BRANCH FUSION U-NET WITH HARD EXAMPLE WEIGHTING TRAIN STRATEGY TO SEGMENT RETINAL VESSEL Review round: 1 Reviewer: 1
Basic reporting: - The English writing is extremely problematic and hard to follow; grammatical errors appear throughout the article. The submission should be thoroughly refined and proofread. - The description in the related works section is not informative enough to outline the pros and cons of the different approaches and their design choices, which makes it hard to clarify the contribution of the proposed method. Experimental design: - The train/test split is not well reported in the experimental settings. Only the cross-validation is described, but we have no idea how many testing samples there are in each dataset. The authors should clarify whether they follow exactly the same experimental settings as the other baselines used for comparison. Validity of the findings: - Although the reported performance of the proposed method is superior to various baselines on three datasets (i.e. DRIVE, STARE, and CHASE), the improvement is actually quite marginal. The ablation study for verifying the contribution of the design choices (e.g. random channel attention, hard example weighting) shows the same problem of only incremental (almost negligible) improvement with respect to the other model variants. - The authors should clarify whether they adopt the same binarization threshold for all the baselines while using the F1 score, Sn, Sp, ACC, G-Mean, and MCC metrics for evaluation. As the proposed method currently "chooses the threshold that has the highest F1-score as the optimal threshold value" (stated in Line 281), this would be inappropriate and unfair, since the threshold is chosen according to the evaluation result (based on testing data?). Additional comments: This paper proposes to tackle the problem of segmenting retinal vessels from fundus images. The proposed method stems from the U-Net architecture and is extended to consist of one encoder and two decoder branches, where one decoder branch is trained with the typical cross-entropy objective while the other is trained with a loss function that puts more weight on hard pixels (e.g. the boundary pixels of vessels). Moreover, the authors propose a "random channel attention" mechanism serving as regularization to improve the model training. The outputs of the two decoder branches are merged by a fusion layer to produce the final segmentation output. Pros: + The idea of adopting hard examples/pixels is not new but is shown to be beneficial for the task of retinal vessel segmentation. The regularization based on the "random channel attention" mechanism also shows better (but quite marginal) performance. Cons: In addition to the issues commented on in "Basic reporting", "Experimental design", and "Validity of the findings", there are more problems that the authors should take into consideration for improving their paper. - The performance of each decoder branch should be reported in order to better distinguish between the boost contributed by the fusion layer and that contributed by the hard examples/pixels. - In the ablation study comparing against the focal loss and dice loss, the parameter settings for the focal loss should be clarified. - The superior performance in comparison to other baselines seems to stem mostly from the network's larger capacity, as the model variant using a single branch (in Table 2) already outperforms many baselines. There should be a more detailed discussion/comparison of model capacity between the proposed method and the baselines to better clarify the contribution of the proposed method.
Overall, given the issues of marginal improvement, unclear and potentially problematic experimental settings, insufficient experimental results, and poor paper organization/writing, I regret to suggest rejection for this submission.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DBFU-NET: DOUBLE BRANCH FUSION U-NET WITH HARD EXAMPLE WEIGHTING TRAIN STRATEGY TO SEGMENT RETINAL VESSEL Review round: 1 Reviewer: 2
Basic reporting: The paper proposes an automatic fundus image retinal vessel segmentation (RVS) method, consisting of a 2-branch fusion U-Net with the proposed random channel weighting regularization mechanism. While one branch is trained with a segmentation criterion, the other is trained with a morphological-mask-weighted criterion. Related works are categorized and compared with the proposed method in a reasonable manner. Figures for the different parts of the proposed method are shown with clarity. However, the experiments provide no significant proof that the proposed methods (regularization and weighting loss) perform better than existing deep learning methods. Also, there are a lot of grammatical errors that need to be fixed. Experimental design: The experiments consist of baseline and ablation comparisons on 3 different datasets, which are technically sound and adequate. Also, the description of the method and all experimental settings are provided in sufficient detail. Validity of the findings: The proposed method should be considered competitively effective, but it is better not to claim it is superior to the existing methods. According to the results, the improvements of DBFU-Net over other regularization methods (L2, Dropout, etc.), training losses (Dice, Focal), and existing deep learning segmentation baselines are quantitatively negligible (<1%). Moreover, the necessity of two branches is somewhat questionable, as applying the mask weighting loss with a smaller "weight" in Eq. 3 on a single U-Net could be equivalent to the effect of fusing two branches. Last but not least, the computational cost of generating the "morphological mask weighted cross entropy loss" should be greater than that of the Focal and Dice losses because of the additional erosion/dilation operations. Additional comments: The authors address a good problem, as automatic RVS should be beneficial for ophthalmology disease screening. A fair amount of experiments were conducted to show the performance of the proposed method. However, the following are a few cons to be fixed: 1. According to the results, the performance and improvement of the proposed methods over other existing methods should be claimed conservatively. 2. Additional experiments should be conducted to better demonstrate the contribution of this work: 2.1. single-branch U-Net vs. DBFU-Net, using the same loss functions (mask weighted, Focal, Dice); 2.2. single-branch U-Net vs. DBFU-Net, using the same regularizations (RCA, dropout, etc.). 3. There are too many grammatical errors. I strongly suggest the authors proofread the paper thoroughly, e.g.: 3.1. missing "a" and "the"; 3.2. wrong usage of commas and multiple verbs in a sentence; 3.3. Fig. 2, Func. 3; 3.4. typos, e.g. Line 61: patient -> procedure; Line 188: define -> definition; Line 424: feature -> future
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DBFU-NET: DOUBLE BRANCH FUSION U-NET WITH HARD EXAMPLE WEIGHTING TRAIN STRATEGY TO SEGMENT RETINAL VESSEL Review round: 2 Reviewer: 1
Basic reporting: - The English writing is still problematic, as grammatical errors remain throughout the article, even in the revised parts. The submission should be thoroughly refined and proofread. - The categorization in the related works section seems awkward; the boundary between non-learning-based methods and some unsupervised methods is unclear. Experimental design: no comment Validity of the findings: - If the authors claim that their design choices (e.g. random channel attention, hard example weighting) can be combined with the techniques proposed in other articles to get better results, there should be such experiments in the revision. Additional comments: This paper proposes to tackle the problem of segmenting retinal vessels from fundus images. The proposed method stems from the U-Net architecture and is extended to consist of one encoder and two decoder branches, where one decoder branch is trained with the typical cross-entropy objective while the other is trained with a loss function that puts more weight on hard pixels (e.g. the boundary pixels of vessels). Moreover, the authors propose a "random channel attention" mechanism serving as regularization to improve the model training. The outputs of the two decoder branches are merged by a fusion layer to produce the final segmentation output. Pros: + The idea of adopting hard examples/pixels is not new but is shown to be beneficial for the task of retinal vessel segmentation. The regularization based on the "random channel attention" mechanism also shows better (but quite marginal) performance. Cons: Please refer to my comments on "Basic reporting" and "Validity of the findings". Overall, I suggest a major revision for this submission.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DBFU-NET: DOUBLE BRANCH FUSION U-NET WITH HARD EXAMPLE WEIGHTING TRAIN STRATEGY TO SEGMENT RETINAL VESSEL Review round: 2 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: 1. The authors have clarified the incremental contributions of this paper: RCA regularization, the hard example weighting training strategy, and DBFU-Net. Also, the results are claimed to be comparable with existing methods. 2. The paper has been proofread. 3. Additional experiments were conducted, and the results were demonstrated to be competitive with the existing methods. A follow-up concern is that the tables contain some "-" entries (missing data). I suggest the authors fill them in, as some might alter the conclusions (e.g. in Table 7, [21, 22] might be better than the proposed method).
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ULTRASOUND IMAGE DENOISING USING GENERATIVE ADVERSARIAL NETWORKS WITH RESIDUAL DENSE CONNECTIVITY AND WEIGHTED JOINT LOSS Review round: 1 Reviewer: 1
Basic reporting: The authors of the article entitled "Ultrasound Image Denoising Using Generative Adversarial Networks with residual dense connectivity and weighted joint loss" propose a novel GAN-based method for denoising speckle noise in images. They train and evaluate their method using the SNR and SSIM metrics. They compare the results with other state-of-the-art image denoising methods and show that their GAN outperforms them. The language is clear, consistent, and professional English. The literature overview is sufficient; however, there could be a few improvements and additional references. On line 78 you write "(...) methods (...), which 'are' divided into two categories." If it is common to divide these methods into these categories, then use a reference; if not, write "could be" instead of "are". There should be a citation for SSIM. In line 252 you should cite all of the named models. The article is well structured. The figures could be improved. Figures 9 and 12 are referred to in the article as "Figure", while the others are referred to as "Fig."; the authors should fix that. In Figure 12, the authors should state which dataset the bar plot refers to. Experimental design: The research is well defined and structured. The results are interesting and well presented. However, in Table I the authors should add the average value and standard deviation. Models should be trained multiple times to avoid accidental results. The authors should mention what the skip pathway refers to: concatenation or addition? The authors could try to squeeze the whole model into one or two figures for better clarity and generally improve Figure 1. The methods to which the GAN is compared could be better described. Validity of the findings: The article proposes original work; it is impactful and has some novelty. All data have been provided. Conclusions are well stated and limited to supporting results. Additional comments: The general impression of the article is good; I hope the authors will submit a revised version. The title of the article refers to ultrasound images; however, the method has only been visually evaluated on the ultrasound dataset, and all analysis has been done on other publicly available datasets.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ULTRASOUND IMAGE DENOISING USING GENERATIVE ADVERSARIAL NETWORKS WITH RESIDUAL DENSE CONNECTIVITY AND WEIGHTED JOINT LOSS Review round: 1 Reviewer: 2
Basic reporting: The literature review needs improvement. The writing style is OK, but the figures are poorly drawn and require directional arrows to illustrate the dataflow. Experimental design: OK Validity of the findings: It is OK. Please refer to my final comments related to the noise levels. Additional comments: Major concerns: P8, line 79 - The authors classify the wavelet method as a frequency-domain technique, which is absurd; wavelets provide both time and frequency information. This statement and the preceding ones should be modified accordingly. P9, line 106 - What does "modified UNET" mean? Provide reasonable information about the modifications. Also, there are many UNET variations which need to be mentioned in the literature - RDAU-NET, RDA-UNET-WGAN, ADID UNET, UNET++, VGGUNET, Ens4B-UNet, etc. Though these models are NOT directly related to denoising, it is worth mentioning them at least in the literature review. P10, line 138 - The speckle noise model requires more explanation; the current information is minimalistic. P11, line 155 - Please change "loss of image information" to "loss of spatial information". P11, line 158 - The use of residual blocks and dense nets is known to avoid the vanishing gradient problem, as stated in RDAU-NET and ADID UNET, so what additional improvement does RDCB provide? P14, line 212 - I am confused as to why the authors are explaining the VGG 19 model. I feel the VGG part can be removed or rewritten in a way that relates it to the proposed architecture. P15, line 245 - Is "visual effect" (the performance metric as explained by the authors) used elsewhere in the literature? If so, please cite those papers. Figure 1 requires directions (arrows) to illustrate the flow of the data; at the moment, it is really confusing for readers to understand how the entire model works. Figure 12 - Why do the noise levels vary for the same sigma value between PSNR and SSIM? Should the noise level expressed in dB be the same (e.g. 32 dB for PSNR and 0.8377 for SSIM)? In general, the paper requires additional literature regarding UNETs and GANs, and the authors have to clearly explain the term "modified" that is used for their model.
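As background for the comments above on the speckle noise model and on comparing PSNR with SSIM values, the following are standard textbook formulations (editorial illustrations, not equations taken from the manuscript under review). A commonly assumed speckle model for ultrasound images is multiplicative, $g(x,y) = f(x,y)\,\eta(x,y) + \xi(x,y)$, where $f$ is the noise-free image, $\eta$ the multiplicative speckle component, and $\xi$ an additive term that is often neglected. PSNR is reported in decibels, $\mathrm{PSNR} = 10\log_{10}\!\left(MAX_I^{2}/\mathrm{MSE}\right)$, whereas $\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$ is dimensionless and bounded by 1, so a PSNR value in dB and an SSIM value in [0, 1] obtained for the same noise level $\sigma$ are not expected to coincide numerically.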
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ULTRASOUND IMAGE DENOISING USING GENERATIVE ADVERSARIAL NETWORKS WITH RESIDUAL DENSE CONNECTIVITY AND WEIGHTED JOINT LOSS Review round: 2 Reviewer: 1
Basic reporting: The references are improved. Professional English is used. Experimental design: The research question is now well defined. The description of the methods is improved, as are the figures. However, the authors do not state how many times the experiments were run to demonstrate the robustness of the methods, at least as far as I can see. Validity of the findings: Conclusions are well stated, and all underlying data have been provided. Additional comments: If these small modifications are made, the article is suitable for publication, in my opinion.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: 2D FACIAL LANDMARK LOCALIZATION METHOD FOR MULTI-VIEW FACE SYNTHESIS IMAGE USING A TWO-PATHWAY GENERATIVE ADVERSARIAL NETWORK APPROACH Review round: 1 Reviewer: 1
Basic reporting: - Clarity in the overall structure of the manuscript's sentences. - The literature study can be further extended, as no studies from 2021 are cited. - Tables and figures are sufficiently detailed, which is a plus point. - The abstract mentions "improved the encoder-decoder global pathway structure"; however, it fails to mention the details of this improvement in terms of accuracy or operational workflow. - The authors mention throughout the manuscript the need for the global structure; however, the reader still does not get a clear idea of its importance and impact on the overall verification process. - Line 459 mentions 20 illumination levels for the image dataset; mention the details of these levels. Experimental design: - Lines 5, 6, and 7 mention that there is a drop of 10% in recognition. Can the authors present and validate this point through focused experiments and also show the gain of the proposed methods over the same set of images (for frontal-profile versus frontal-frontal face verification)? Validity of the findings: - The reported results are promising. It would be great if the authors could add some more experiments to show the effect of the 20 different illumination levels, as done for the angles presented in Table 3. Additional comments: - Related work should be summarised, and a small paragraph should be added at the end of the section to discuss the key takeaways of the literature study to help the reader. - The proposed method should be presented in one concrete table with all the operational steps involved (all in one place). Overall, the research is well defined and has a high potential to fill the identified research gap.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: 2D FACIAL LANDMARK LOCALIZATION METHOD FOR MULTI-VIEW FACE SYNTHESIS IMAGE USING A TWO-PATHWAY GENERATIVE ADVERSARIAL NETWORK APPROACH Review round: 1 Reviewer: 2
Basic reporting: - The English language should be improved. There are still some grammatical errors, typos, and ambiguous parts. - The "Introduction" is verbose; it could be revised to make it more concise. - The quality of the figures should be improved significantly; some figures have low resolution. Experimental design: - Source code should be provided for replicating the methods. - Statistical tests should be conducted in the comparison to assess the significance of the results. - The authors should describe the hyperparameter optimization of the models in more detail. - Deep learning is well known and has been used in previous studies, e.g., PMID: 31920706 and PMID: 32613242. Thus, the authors are encouraged to refer to more works in this description to attract a broader readership. - Cross-validation should be conducted instead of a train/val split. Validity of the findings: - The authors should compare the performance results to previously published works on the same problem/dataset. - The discussion of the results is lacking. - The model shows a little overfitting; how can this be explained? Additional comments: No comment.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: 2D FACIAL LANDMARK LOCALIZATION METHOD FOR MULTI-VIEW FACE SYNTHESIS IMAGE USING A TWO-PATHWAY GENERATIVE ADVERSARIAL NETWORK APPROACH Review round: 2 Reviewer: 1
Basic reporting: No comment. Experimental design: No comment. Validity of the findings: No comment. Additional comments: No comment.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: INTEGER PARTICLE SWARM OPTIMIZATION BASED TASK SCHEDULING FOR DEVICE-EDGE-CLOUD COOPERATIVE COMPUTING TO IMPROVE SLA SATISFACTION Review round: 1 Reviewer: 1
Basic reporting: 1. The abstract has to be rewritten; a concise and factual abstract is required. 2. The problem/gap is not properly highlighted in the abstract, and quantitative results/improvements are also missing from the abstract. 3. There are a number of English language issues that need to be revised; in particular, many conjunctions are used, and sentences are too long and difficult to follow. 4. "A meta-heuristic algorithm …. which can have a better performance than heuristic methods" (lines 45-47) — how? Some references or proper justification are needed. 5. According to the assumption at line 109 that tasks assigned to a core are executed based on EDF, how do you handle starvation in such a case? 6. Do you think that the branch-and-bound technique cannot be applied to large-scale problems (line 154)? 7. The sentence “Then, their method used Kuhn-Munkres …” needs to be rephrased (line 364). 8. The sentence “Different from these … “ needs to be rephrased (line 393). 9. No paper from 2021 is cited, and the single paper you have mentioned as being from 2021 was actually published in 2018. 10. Add papers from 2021. 11. The figure number should be mentioned (line 339). Experimental design: 1. The number of subscripts used for variables and symbols is very high; reduce the number of subscripted variables and symbols as much as possible. For this, you can divide the “problem formulation” section into subsections. 2. Do you think the inertia weight has a role in the PSO algorithm? Which variant of inertia weight have you used in your algorithm? Details about the inertia weight and other terms like r1, r2, c1, and c2 are not given (lines 212-213). 3. The complexity of the proposed approach has been calculated but not compared with that of its counterparts. 4. Which encoding scheme have you used? (Figure 2) 5. “Referring to published ……” — only mentioning that it is taken from the literature is not sufficient; you need to provide proper references (lines 247-250). Validity of the findings: 1. References are not provided for the first three approaches (FF, FFD, and EDF) used for comparison (lines 253, 254-255, and 256-258); you need to provide proper references. 2. How can the computing time of a task on the cloud be greater than on the edge? (lines 330-331) Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: INTEGER PARTICLE SWARM OPTIMIZATION BASED TASK SCHEDULING FOR DEVICE-EDGE-CLOUD COOPERATIVE COMPUTING TO IMPROVE SLA SATISFACTION Review round: 1 Reviewer: 2
Basic reporting: The authors claim that their approach has better resource efficiency. However, the results concerning resource efficiency are not meaningful. The authors are required to obtain results concerning average resource utilization, which should lie between 0 and 1 or between 0 and 100%. None of the figures has a y-axis caption. Similarly, some of the results look the same even though their titles are different, for instance Figure 3 (a) and Figure 3 (c). The authors are required to add a Results and Discussion section that shows why their proposed approach is better than the existing ones, and the results need to be clearly described. Experimental design: The authors claim that their approach has better resource efficiency. However, the results concerning resource efficiency are not meaningful. The authors are required to obtain results concerning average resource utilization, which should lie between 0 and 1 or between 0 and 100%. None of the figures has a y-axis caption. Similarly, some of the results look the same even though their titles are different, for instance Figure 3 (a) and Figure 3 (c). The authors are required to add a Results and Discussion section that shows why their proposed approach is better than the existing ones, and the results need to be clearly described. Validity of the findings: The results need to be discussed properly. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: INTEGER PARTICLE SWARM OPTIMIZATION BASED TASK SCHEDULING FOR DEVICE-EDGE-CLOUD COOPERATIVE COMPUTING TO IMPROVE SLA SATISFACTION Review round: 2 Reviewer: 1
Basic reporting: 1. Previous comments regarding the abstract have been incorporated; however, it is recommended to start your abstract with an introductory sentence (as in the previous version) instead of a sentence like “In this paper”. 2. The authors have provided quantitative results in the abstract and in the contribution section; however, the improvements shown in the different sections differ. Moreover, in the abstract the term “x” is used while in the contribution section the symbol “%” is used; correct this or justify the difference. 3. In the last paragraph of Section 1, “Introduction”, Section 6 is mentioned twice. 4. The symbols “Z” and “U” used in Equations 17 and 19 have not been explained in the text. 5. In response to the previous comment “Do you think inertia weight has a role in PSO algorithm? Which variant of inertia weight you have used for your algorithm. Detail about inertia weight and other terms like r1, r2, c1, c2 not given (lines 212-213)”, it is mentioned that this is out of the scope of this paper. I agree to some extent that you are not working on inertia weight strategies, but the inertia weight is one of the most important control parameters for maintaining the balance between the global and local search of the PSO, and it can affect the performance of the PSO. Moreover, the values of constants like c1 and c2 also matter. Therefore, it is suggested to use the latest variant of the inertia weight strategy, or at least to refer to/cite some recent papers in the text so that interested readers can explore the details; some of the latest papers in this regard are given below. https://doi.org/10.1007/s11227-021-04062-2 https://doi.org/10.1007/s10586-019-02983-5 Experimental design: N/A Validity of the findings: N/A Additional comments: No additional comments
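For background on the role these parameters play (this is the canonical textbook formulation of PSO, not an equation taken from the paper under review), the velocity and position updates for particle $i$ are $v_i^{(t+1)} = w\,v_i^{(t)} + c_1 r_1\,(p_i - x_i^{(t)}) + c_2 r_2\,(g - x_i^{(t)})$ and $x_i^{(t+1)} = x_i^{(t)} + v_i^{(t+1)}$, where $w$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social acceleration coefficients, $r_1, r_2 \sim U(0,1)$ are random numbers drawn per dimension, $p_i$ is the particle's personal best position, and $g$ is the swarm's global best position; a larger $w$ favours global exploration while a smaller $w$ favours local exploitation.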
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: INTEGER PARTICLE SWARM OPTIMIZATION BASED TASK SCHEDULING FOR DEVICE-EDGE-CLOUD COOPERATIVE COMPUTING TO IMPROVE SLA SATISFACTION Review round: 2 Reviewer: 2
Basic reporting: The language used in the paper is very poor; the language of the whole paper still needs to be improved. Experimental design: The authors have not incorporated changes to the article based on my first comment: "The authors claimed that his approach has better resource efficiency. However, the results concerning resource efficiency are not meaningful. The authors are required to obtain results concerning Average resource utilization which will be between 0 to 1 or 0 to 100 %." Validity of the findings: The authors have claimed almost a 10-times improvement over the state-of-the-art approaches, which seems incorrect. The authors are required to perform the experiments again and check whether the claimed improvement holds. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ACCELERATING COVERING ARRAY GENERATION BY COMBINATORIAL JOIN FOR INDUSTRY SCALE SOFTWARE TESTING Review round: 1 Reviewer: 1
Basic reporting: The paper is very difficult to understand due to many spelling and grammar mistakes: spelling mistakes on line 24 (compbinatorial -> combinatorial), line 144 (chat -> that), line 617 (movers -> moves), and line 834 (Bbut -> but). The listings in the paper are not numbered and referred to. A good way is to cross-reference the listings, e.g., use ”Listing X shows something” instead of “in the listing bellow”. On line 930, the actual percentages are missing and question marks are shown instead; it could be that the authors forgot to write the actual numbers. Why is “weaken-product based combinatorial join” in quote marks? “There are three streams of methods 47 have been studied” should be “There are three categories of methods that have been studied”. Many of the figures are also too small to be readable (e.g., Figure 1, Figures 14 and 15, etc.). The obtained data is provided, but I expected a more thorough experimental setup to be provided, including how the data should be obtained from these tools, the input models used, and so on. In this form, the results are not really replicable. Experimental design: The experimental study relies only on some rather simplistic examples. Hence, whether the results can be generalized remains open to some extent. An extension to include more examples/higher strengths and input space models of larger size would be highly recommended. I am missing the results for PICT in Section 5. Causality and dependence between different variables are not thoroughly investigated. The research questions should be rewritten since they contain mistakes, and I am more confused after reading them. For example, the first research question actually contains two sub-questions. The first question is binary in nature, which seems quite simplistic given the limitations of such an investigation. No relation between these proxy measures of test efficiency and actual cost measures is given. Some useful advice can be found here: http://www.cse.chalmers.se/~feldt/advice/guide_to_creating_research_questions.pdf The structure of the evaluation methodology and results sections could benefit if these were explicitly aligned with the structure proposed in the guidelines for conducting case study research in software engineering by Runeson et al., who consider, for instance, sections on the data collection procedure and the data analysis procedure (which seem to be implicitly mentioned in the paper). The threats to validity section needs to be extended with regard to: Conclusion validity: does the treatment/change we introduced have a statistically significant effect on the outcome we measure? Internal validity: did the treatment/change we introduced cause the effect on the outcome? Can other factors also have had an effect? Construct validity: does the treatment correspond to the actual cause we are interested in? Does the outcome correspond to the effect we are interested in? External validity/transferability: is the cause-and-effect relationship we have shown valid in other situations? Can we generalize our results? Do the results apply in other contexts? Credibility: are we confident that the findings are true? Why? Dependability: are the findings consistent? Can they be repeated? Confirmability: are the findings shaped by the respondents and not by the researcher? The relationship between the independent and dependent variables is not really explained in the context of the research questions. It would be good to explain the stated hypotheses on test efficiency and effectiveness.
How about the time needed to generate the existing covering arrays that are used for the combinatorial join? The experiment seems planned in a very ad-hoc manner. For example, the execution time measures are rather unscientific, and there is no possibility for any causality or actionable results. Potentially, the algorithm could be useful and the questions are interesting. However, IMHO it is unfortunately back to the drawing board for designing and reporting the experimental evaluation. Validity of the findings: The generation of covering arrays is an interesting area of research in combinatorial testing. This paper proposes an algorithm to accelerate the generation of covering arrays using joins. The algorithm takes two existing covering arrays of the same size and a strength t to perform the join “operation”. First, the strength of the input covering arrays is reduced, and a Cartesian product is taken between them. The resulting array is then merged with the unused rows to obtain the final output. The algorithm is then evaluated based on the time taken, the size of the covering array, the constraint-handling capability, and the capability of variable-strength covering array generation. The paper also discusses the potential oracle reuse enabled by the proposed algorithm. Covering array generation is an open area of research. The authors have done a good job of explaining how the approach can be used with a diverse set of CIT tools for the generation of the input covering arrays. The evaluation includes most generation scenarios. Generally, the paper covers the background of the area well. The authors explain the approach quite well, but there are major points that must be addressed. While RQ1 shows promising results, the answers to the other research questions are not convincing. For example, RQ2 clearly shows that if t is 2, the size of the covering array generated by the proposed approach is at least 47% larger than that of the other approaches. If t is 3, the size increase is dramatic (at least around 264%). The authors’ argument in explaining this finding is that test execution is automated anyway. Even if the approach reduces the generation time by 11 to 88% (which would be a few minutes), this could add hours of test execution time. Section 5.3 discusses how this approach can help in reusing test oracles for three kinds of faults. Concrete examples are needed here for each of the defects and for which oracle can be reused. Also, the answer to RQ3 is not justifiable unless some results are provided demonstrating that reuse of test oracles using the combinatorial join has indeed caught new bugs. The algorithm assumes that the covering arrays have the same number of columns, which makes the approach impossible for many realistic scenarios, e.g., regression testing scenarios or evolving software. The practicality of this algorithm needs to be discussed. The creation of the basic and basic+ baselines is questionable. For example, how did the authors end up with formulae 16 and 17 for defining the constraints? How well do these constraints really represent the constraints in real applications? The algorithm also does not handle inter-covering-array constraints, which might limit the applicability of the approach. The running example for the demonstration of the algorithm considers two covering arrays with the same values and parameters. It makes sense to use a better example with different values in the covering arrays. Additional comments: No additional comments
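To make the join step summarized above concrete, the following is a minimal editorial sketch (not the authors' implementation) of the row-wise Cartesian product of two covering arrays over disjoint parameter sets; the example arrays are hypothetical, and the strength-reduction and unused-row-merging steps the review refers to are omitted.

```python
from itertools import product

# Hypothetical 2-level covering arrays over two disjoint pairs of parameters.
ca_left = [[0, 0], [0, 1], [1, 0], [1, 1]]
ca_right = [[0, 1], [1, 0], [0, 0], [1, 1]]

def cartesian_join(left, right):
    """Concatenate every row of `left` with every row of `right`.

    The full product has len(left) * len(right) rows, which is why the
    reviewed paper reduces strength and reuses rows instead of keeping
    the whole product.
    """
    return [row_l + row_r for row_l, row_r in product(left, right)]

joined = cartesian_join(ca_left, ca_right)
print(len(joined), joined[0])  # 16 rows; the first row combines [0, 0] and [0, 1]
```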
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ACCELERATING COVERING ARRAY GENERATION BY COMBINATORIAL JOIN FOR INDUSTRY SCALE SOFTWARE TESTING Review round: 1 Reviewer: 2
Basic reporting: 1) This paper seems to be an extension of the authors’ previous paper (Ukai et al. 2019), but the difference is unclear. The authors should better discuss and highlight the novelty of this study. 2) There is no background on covering arrays (e.g., definition, notation, example, etc.), nor on the constraints in combinatorial interaction testing. The paper should include these to make it self-contained. 3) This paper discusses the different modelling languages used by different tools to describe constraints. I do not think the proposed algorithm makes a contribution in this aspect. 4) Figures 1 and 2 are hard to read; the p_{i,k} notation in Equation (16) is unclear. 5) This paper is too long. For example, the experiment design is too verbose; Figures 3, 4, 5, and 6 are unnecessary. Experimental design: 1) The largest problem instance used in the experiment has only 200 parameters, which is far from the “industry scale” that might have thousands of parameters. 2) There are many real-world benchmarks for covering array generation, but the experiment includes synthetic problem instances only. 3) The experiment investigates coverage strengths two and three. The reduction of covering array generation time is more important and beneficial for higher strengths, say five and six. 4) As discussed in the related works, there are also algorithms that seek to reuse existing covering arrays to generate new arrays. The proposed algorithm is not compared with them in the experiment. Validity of the findings: 1) The proposed algorithm assumes that there is no constraint across pre-constructed covering arrays. This is a very strict assumption, so the applicability of the proposed algorithm in the real world is limited. 2) The results are not very promising. Although the proposed algorithm can reduce the time cost of covering array generation, it produces much larger arrays and, accordingly, much higher testing cost. 3) There are no data to support the claim made in the answer to RQ3. Additional comments: This paper addresses a very important topic in Combinatorial Interaction Testing (CIT). The proposed algorithm, which seeks to reuse existing covering arrays, could be an interesting direction in the field of CIT. However, this paper is too long; further improvements are also needed in the experiment design and the discussion of results.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ACCELERATING COVERING ARRAY GENERATION BY COMBINATORIAL JOIN FOR INDUSTRY SCALE SOFTWARE TESTING Review round: 2 Reviewer: 1
Basic reporting: Although the README file is useful, I could not find the suggested zip file, only a LaTeX project, and I was unable to replicate the results. The README should also include details about the CASA benchmark and how one could apply your approach to it. Since the last review, several readability improvements have been made and some of the comments have been satisfactorily addressed. It would be good to emphasise in the summary of contributions the results related to RQ2, as you do for RQ1, RQ3, and RQ4. Still, the listings in the paper are not numbered (e.g., Listing 1) and referenced in the text. There are still places where I found writing issues; another thorough proofreading would be helpful for fixing them (articles, punctuation). Figures 6 to 11 can be subfigures of a single figure on from-scratch generation; the same holds for the other figures related to VSCA generation and incremental generation. Just a few examples: “superrior” should be “superior”; “ with a focus in their 186 different characteristics in defining constraints” should be “with a focus on their 186 different characteristics in defining constraints”; “can be expressed using the < and negate operator” should be “can be expressed using the < and negate operators”; “based combinatorial 595 join approach.” should be “based combinatorial 595 join approach:”. Experimental design: Improvements have been made to include input space models of larger size and the results for PICT. Substantial changes have been made regarding the research questions. Substantial evaluation experiments have been conducted using the CASA benchmark. Unfortunately, not many details about this benchmark are given. Also, the link in Shaowei Cai (2020) does not work. A description of this benchmark and some “size” metrics should be given in order to assess how realistic these benchmarks are. In Section 4, one should give more details on the data sets used in terms of size, relevance, variability, and so on. A specific section on the selection of the data sets should be included. Several improvements have been made in the threats to validity section. Validity of the findings: Good clarifications are given, and I appreciate the improvements made to the algorithm as well as the discussion related to a trade-off analysis between test execution time and the number of test cases. The examples of oracle classes are useful in the discussion of fault detection capability. It would be good to clarify how the two approaches outlined in 4.3.1 Generation Scenario are useful in practice. When would an engineer using your approach benefit from generating a CA from scratch incrementally? Clarify the statement “Our tool makes reuse of oracles with higher possibilities”. Section 5 could benefit from better explanations and interpretations of the results shown in each table and figure. Additional comments: I notice with satisfaction that most of the comments made in my first review have been taken into account. Nevertheless, there are still some issues that remain to be resolved.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ACCELERATING COVERING ARRAY GENERATION BY COMBINATORIAL JOIN FOR INDUSTRY SCALE SOFTWARE TESTING Review round: 2 Reviewer: 2
Basic reporting: The presentation of the revised version is now clear and self-contained. Experimental design: The experiment design is improved; in particular, real-world problem instances are now included. Validity of the findings: The potential threats to validity have been well discussed. Additional comments: The authors have made a good effort to address my previous comments, and I think this paper is now ready for publication.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ACCELERATING COVERING ARRAY GENERATION BY COMBINATORIAL JOIN FOR INDUSTRY SCALE SOFTWARE TESTING Review round: 3 Reviewer: 1
Basic reporting: Since the last review, several improvements have been made. Experimental design: I notice with satisfaction that most of the comments related to the experiment design have been taken into account. Validity of the findings: The paper now appears as a well-rounded piece of work. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ROBUSTNESS OF AUTOENCODERS FOR ESTABLISHING PSYCHOMETRIC PROPERTIES BASED ON SMALL SAMPLE SIZES: RESULTS FROM A MONTE CARLO SIMULATION STUDY AND A SPORTS FAN CURIOSITY STUDY Review round: 1 Reviewer: 1
Basic reporting: The article under review tries to compare the performance of PCA and autoencoders in the field of questionnaire analysis. Although the work seems serious and well done, I have some reservations about the general form of the article, which made it difficult to read, as well as about the exact definitions of the autoencoders (see below). Some definitions are not clear and need a precise mathematical definition: - What are communality and non-communality (see the reference problem below)? - The formula on line 325 (and in general all the metrics used) should be clarified, as one wonders whether the sum runs across the data set or across the dimensions of the data. - In section 2.4: what does the term "increased by" refer to? n is the number of samples; what is there to increase? In general, a clearer mathematical presentation is needed. I am also concerned by the exact definitions of the autoencoders in the submitted Colab code (see below). First, many references are not listed in the reference section (for example, Forcino (2012) and Manjarres-Martinez et al. (2012) cannot be found in the reference section). Some citations follow the form [number] and others follow the form "Author (Year)"; it is important to fix this citation problem. Likewise, some figures are mis-referenced (Figure 8 in line 403 does not exist, and surely the authors meant Figure 6 in line 390). There is a bullet point in line 295 that has no text. Experimental design: My main concern is about the definition of the autoencoders: in the submitted code the activations are "linear", which means that the autoencoder boils down to an affine transform. How can an affine transform do better than PCA in terms of MSE? I think this is because non-linear autoencoders overfit on the small number of samples used for training. Judging by the submitted code, what is called the deep autoencoder has only one layer more than the autoencoder; this is not a really deep architecture. In the submitted code the deep autoencoder has two layers of size 3, while the text mentions one layer with dimension 6. I also encourage the authors to disclose the data generation code (for the synthetic data). Validity of the findings: Pending the experimental clarification about the architectures, I cannot comment on the validity of the findings. Additional comments: No additional comments
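To illustrate the point about linear activations, the following is an editorial sketch (not the submitted Colab code; the layer sizes are hypothetical): a fully linear autoencoder composes to a single affine map of its input, so its best reconstruction MSE can at most match the PCA solution with the same bottleneck dimension.

```python
# A minimal sketch (not the submitted Colab code; sizes are hypothetical) showing
# that an autoencoder built only from Dense layers with linear activations composes
# to a single affine transform of its input, so its best reconstruction MSE can at
# most match the PCA solution with the same bottleneck dimension.
import numpy as np
import tensorflow as tf

n_items, bottleneck = 15, 3                      # hypothetical sizes
inputs = tf.keras.Input(shape=(n_items,))
code = tf.keras.layers.Dense(bottleneck, activation="linear")(inputs)
outputs = tf.keras.layers.Dense(n_items, activation="linear")(code)
autoencoder = tf.keras.Model(inputs, outputs)

# Composing the two linear layers by hand reproduces the network exactly.
x = np.random.randn(4, n_items).astype("float32")
(w1, b1), (w2, b2) = [layer.get_weights() for layer in autoencoder.layers[1:]]
affine = (x @ w1 + b1) @ w2 + b2
assert np.allclose(autoencoder.predict(x, verbose=0), affine, atol=1e-4)
```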
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ROBUSTNESS OF AUTOENCODERS FOR ESTABLISHING PSYCHOMETRIC PROPERTIES BASED ON SMALL SAMPLE SIZES: RESULTS FROM A MONTE CARLO SIMULATION STUDY AND A SPORTS FAN CURIOSITY STUDY Review round: 1 Reviewer: 2
Basic reporting: The paper compares PCA and autoencoders as dimensionality reduction techniques, aiming to show the superiority of autoencoders in the case of small sample sizes (and departures from normality) in the context of psychometric survey data. As a preliminary observation, the title of the paper does not seem appropriate, since its main content is a Monte Carlo simulation study and only one application to psychometric data is presented. Moreover, this real-data example is based on a single random sample drawn from the original data set. The authors should either consider more real data sets or resample several times from the single one considered, averaging the results; also, the results on the whole data set (400 observations) should be reported. When applying PCA, since the items are ordinal (the indication of the scale is missing…), I suggest employing polychoric correlations. In general, the paper lacks definitions and details. While PCA is a very well-known multivariate method, NNs are in general considered black boxes; several structures are possible, and none of the considered architectures are displayed in the paper, making it very difficult to understand them and their differences. Note that there is no Figure 8 in the paper and that the citations need to be fixed, as they follow different citation standards. Also, on line 295 there is an extra bullet point. Experimental design: The three correlation structures are designed to mimic a 3-factor solution organizing the 15 items into blocks of 5 showing the same correlation. For high communality, for example, the value is 0.8 within blocks and always 0.1 for cross-block items. I think that this design is too extreme and far from any real situation. In section 2.4 (sample size) I do not understand what the sample size increments are meant for. Validity of the findings: The paper is not clear enough to let me judge the validity of the findings. The architectures of the NNs should be displayed, and the suggestions about the real-data application should be followed, to facilitate reading and evaluation. Additional comments: No additional comments
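As an editorial illustration of the kind of block correlation structure described above (not the authors' simulation code; the within- and cross-block values follow the figures quoted in the review), a minimal sketch of how such a matrix can be built and sampled from:

```python
# An editorial sketch (not the authors' simulation code) of a block correlation
# structure with 15 items in three blocks of 5: within-block correlation 0.8,
# cross-block correlation 0.1, following the figures quoted in the review.
import numpy as np

n_items, block_size = 15, 5
within, across = 0.8, 0.1

corr = np.full((n_items, n_items), across)
for start in range(0, n_items, block_size):
    corr[start:start + block_size, start:start + block_size] = within
np.fill_diagonal(corr, 1.0)

# Monte Carlo data for one replication could then be drawn as multivariate normal
# with this correlation matrix (the matrix above is positive definite).
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(np.zeros(n_items), corr, size=50)
print(sample.shape)  # (50, 15)
```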
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: ROBUSTNESS OF AUTOENCODERS FOR ESTABLISHING PSYCHOMETRIC PROPERTIES BASED ON SMALL SAMPLE SIZES: RESULTS FROM A MONTE CARLO SIMULATION STUDY AND A SPORTS FAN CURIOSITY STUDY Review round: 2 Reviewer: 1
Basic reporting: The authors have satisfactorily responded to all my questions. From my point of view, the paper is now suitable for publication. Experimental design: The authors have modified the experimental design according to my remarks. Validity of the findings: In this new version, the validity of the findings emerges more clearly. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: BOTS IN SOFTWARE ENGINEERING: A SYSTEMATIC MAPPING STUDY Review round: 1 Reviewer: 1
Basic reporting: The introduction provides a good, generalized background of the topic of Bots in Software Engineering that gives an appreciation of the wide range of applications for this technology. However, the motivation for this study needs to be made clearer. Experimental design: The survey methodology is consistent with a comprehensive coverage of the subject. Validity of the findings: It may be worthwhile to mention the trade-offs involved in choosing the inclusion/exclusion criteria, as opposed to other criteria, to filter out articles irrelevant to the mapping study. Additional comments: Overall, this is a clear manuscript whose study has implications for the development of Bots in Software Engineering.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: BOTS IN SOFTWARE ENGINEERING: A SYSTEMATIC MAPPING STUDY Review round: 1 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: In this paper, complete information related to bots is provided, along with empirical data. It is a good source of information for developing new bots.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: BOTS IN SOFTWARE ENGINEERING: A SYSTEMATIC MAPPING STUDY Review round: 1 Reviewer: 3
Basic reporting: In general, the article is written in a clear and reproducible manner. There is an overall storyline, which supports the reader. Besides a few minor issues, however, the report lacks information in two key aspects. Minor issues: - There is an inconsistency: the cover page abstract talks about a systematic literature review, while the rest of the paper talks about a systematic mapping study (more on that in the research design section). - Since this article does not have any page restrictions, authors should avoid the heavy use of abbreviations, e.g., SE, NL, PR, etc. Furthermore, it happens fairly often that, when abbreviations are used, no spaces have been left. For example, p. 2, l. 55: “study(SMS)”. Authors should thus carefully read the text and fix those issues. - In section 1 (introduction), authors should provide a bit more structure. A structured introduction with subsections “objective”, “problem statement”, “context”, “contribution” would help better support the reader in getting key information quickly. - Also, in section 1, authors should provide a subsection “outline” in which they guide the reader in terms of what can be found where in the article. - Fig. 2 is of limited use as the authors do not provide numbers at the bars. Hence, this whole figure can only be used as a trend; authors should add the numbers to the bars and also to the trendline in order to provide the details required to understand/follow the text. - Fig. 3 and Fig. 4: Authors should revise these figures. In their current form, especially with the color scheme applied, the figures are (in the best case) hard to read (in a gray-scaled printout they are unreadable). If I may, I’d suggest converting Fig. 3 into a tree to better visualize the topics and hierarchies. Fig. 4 should be converted into simple tables. Critical key aspects: Even though the general reporting is of good quality, the article suffers in two key aspects: 1) Related work: the related work should be revised substantially. About a third of the related work is only concerned with pointing to an ICSE workshop and dropping the 3 categories discussed at this workshop. It continues with collecting further terms, but the related work does not provide a detailed description. I wonder if something like Fig. 5 would be beneficial here. In particular, I miss a clear approach in setting the scene in the related work. A few aspects are mentioned in the introduction. However, I would expect a clear background and terminology building in this section, i.e., authors should not only name, e.g., “code bots”, but should also explain what they are, which tasks they perform, etc. Long story short, I consider the related work insufficient and suggest the authors create the two subsections “background” and “related work”, such that the scene is set, the terminology is introduced, and the state of the art is properly presented. Finally, authors should discuss gaps in the field to provide an argument for why the study is necessary (more on that in the study design section). 2) I apologize for the clear and hard statement, but the research design is poor. Everything about the research design seems to be located in section 3. This section needs to be completely revised. Details on this are given in the study design section of this review. Experimental design: The study design is the weak point of this paper. In the submission-system abstract, the authors state that they report a systematic literature review (SLR). In the actual paper, authors talk about a systematic mapping study (SMS).
Unfortunately, the paper does not allow for judging which of these study types has been applied. The results section reads like an SLR. However, if the authors conducted a mapping study, key aspects are missing, notably: - A classification schema - A collection of maps To make my point clear, SMS and SLR both have different targets. An SMS aims to explore a domain of interest and to provide an overview of the field by classifying the publications, while an SLR explicitly aims to generate new knowledge from analyzing a publication body. In the current version, neither of these goals is properly addressed. Moreover, the research design is weak, missing essential parts. Let’s start with the research questions. On page 2, l. 98, authors state “We surveyed the research articles pondering upon these three research questions” – however, even with a text search I was not able to find the research questions. This is the showstopper. Since I don’t know what the research questions are, I just cannot evaluate the paper properly and, therefore, I cannot provide much of a review of the findings (see respective section). Furthermore, the research design lacks important information: - Rationale for the study - Search queries – the construction procedure/rationale is missing - Search strategy: incomplete and inconsistent. Whether or not Google Scholar is a proper (meta) search engine is still subject to debate. An argument why “traditional” literature databases have been excluded from the search is, however, missing. The search strategy is also inconsistently described since the snowballing is kind of located after the in-/exclusion, which is kind of “creative”. This decision was, unfortunately, not explained. - Paper selection: while in-/exclusion criteria are listed, the actual paper selection procedure is not explained. - The data extraction procedure is not explained. - Validity procedures and the actual threats to validity are not discussed Therefore, I suggest that the authors first clarify which kind of study they are reporting and, second, after having selected a study type, I suggest consulting the major references for executing and reporting these studies, i.e.: - Kitchenham et al.: Evidence-Based Software Engineering and Systematic Reviews - Petersen et al.: Systematic mapping studies in software engineering I apologize again for my hard statement. Without this information, the article just cannot be accepted since the requirements for transparent and reproducible research are not met. Validity of the findings: Well, I have to say the text in the results section is quite informative and, so, authors did a lot of work. However, since the research questions are missing, I just cannot seriously review this section. Since I just don’t know the exact questions, every review would be grounded in my personal interpretation. I kindly ask the authors for understanding that I cannot do this. Additional comments: In general, the overall writing of the paper is good and I have no further comments to be added to my list in the basic reporting section. My only general comment is that authors need to revise and probably rethink this article. It might happen that, after deciding on an actual study type, a considerable amount of rework is necessary, maybe including another data collection stage. Therefore, authors should critically discuss and revise their paper.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 1 Reviewer: 1
Basic reporting: The paper provides a good and sufficient background of deep CNNs that can classify regular and irregular texture images. Though the approach is relatively simple to implement, it is quite powerful. No significant limitations were discussed. It may be worthwhile to mention the tradeoffs involved in choosing the deep CNN as opposed to some other technique. Professional English is used throughout, with no ambiguity. Experimental design: The experimental setup is quite standard and is appropriate for the study, especially the evaluation and comparison of several different classical CNN models, but is the sample size taken for testing sufficient to represent a large database of images? Validity of the findings: I think the motivation for this study needs to be made still clearer. In particular, the connection between (a) the necessity of recognizing and locating regular textures in the given applications, and (b) the necessity of choosing regular texture recognition should be made clearer. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 1 Reviewer: 2
Basic reporting: - This paper proposed a new texture database that includes 2800 images (1300 regular and 1500 irregular images). The author proposed using five state-of-the-art CNN models: AlexNet, InceptionV3, ResNet50, InceptionResNetV2, and texture CNN (T-CNN), to examine the texture database. The result showed that the InceptionResNetV2 obtained the highest accuracy on the texture database, with an accuracy of 97.0%. - However, this paper did not present any novel method or any complex experiments, such as experiments on insufficient data (10%, 20%, 30% of the training set), a newly proposed CNN method (e.g., fusion CNNs, 1D-CNN), or parameter tuning. Experimental design: This paper doesn't show any new experiments. It presented only which CNN model performs better on the proposed texture dataset and showed only one experiment (see Table 3). The author should do more experiments, at least three experiments. The author should focus on a novel method or more interesting experiments. Validity of the findings: This paper showed only which CNN model performs best on the proposed database. The author should therefore provide a discussion section on why the InceptionResNetV2 outperformed the other CNNs. Additional comments: - The author should pay more attention to the novelty of this work. Providing only a new texture database is not sufficient for publication in an outstanding journal. - The abstract section is relatively short. It did not include any new information about the proposed deep convolutional neural networks (CNNs). However, this paper provides a new regular texture database and uses only standard CNNs in the experiments. - The author should do more experiments, at least three experiments. The author should focus on a novel method or more interesting experiments.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 1 Reviewer: 3
Basic reporting: No significant constraints were discussed. It is probably worth mentioning the tradeoffs involved in using a deep CNN instead of another approach. Experimental design: This article does not add many new experiments over traditional ones. It only shows which CNN model performs better on the proposed texture dataset and presents only one experiment. It should be compared with more methods (see Table 3). The author should do some more experiments, at least three experiments. Validity of the findings: The entire article discusses only which CNN model performs best on the proposed database. The author should therefore summarise why Inception-ResNetV2 outperformed the other CNNs. Additional comments: The author should pay more attention to the novelty of this work. Providing only a new texture database does not carry enough technical strength for publication in an outstanding journal. The abstract section is relatively short; it should be about 300 words. The authors should do more experiments, with at least three methods.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 2 Reviewer: 1
Basic reporting: The revised manuscript provides a good and sufficient background of deep CNNs that can classify regular and irregular texture images. The author also mentioned the tradeoffs involved in choosing the deep CNN as opposed to some other technique. Experimental design: The revised manuscript provides more complex experiments using 10-fold cross-validation, which is sufficient to validate the approach. Validity of the findings: The two points, (a) the necessity of recognizing and locating regular textures in the given applications, and (b) the necessity of choosing regular texture recognition, have now been made clearer in the revised manuscript. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 2 Reviewer: 2
Basic reporting: In this paper, the author proposed a new texture database collected from Flickr and Google Images, including 1230 regular textures and 1230 irregular textures. The author proposed using the Fisher vector pooling CNN (FV-CNN) to extract local features from the images. In the experimental results, three CNN architectures were evaluated: ResNet-50, InceptionV3, and Inception-ResNetV2. The result showed that the Inception-ResNetV2 outperformed all CNNs with an accuracy of 96% under 10-fold cross-validation. Regarding Figure 6, however, could the author show both the training and validation losses and the training and validation accuracies? The author showed the correct classification of regular and irregular textures in Figures 7 and 9. However, could the author show the confusion matrix, so the audience can see all the correct and incorrect counts? In addition, the results of the Inception-ResNetV2 and FV-CNNreg-IncepResv2 were 96% and 97% accuracy, respectively. Is this a significant difference? Experimental design: The author provided the experimental results of the FV-CNN. Also, a 10-fold cross-validation method was proposed. The author also compared the FV-CNN with the FV-SIFT method. However, the experimental results are not enough. The author should do more experiments as suggested before. Validity of the findings: In the new manuscript, the author proposed the FV-CNN to classify the texture images. In the experimental results, the author showed that the FV-CNN outperformed the original CNNs and the FV-SIFT method. So, could the author report the kernel of the SVM used in the experiment and the hyperparameter values? Could the author add the "Fisher Vector" into the framework (Figure 5)? In the framework, it is unclear how the author attached the FV to the CNN. Additional comments: - I suggest the author do more experiments, such as data augmentation techniques, tuning hyperparameters, etc. - Would you please provide the Discussion section?
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP CONVOLUTIONAL NEURAL NETWORKS FOR REGULAR TEXTURE RECOGNITION Review round: 3 Reviewer: 1
Basic reporting: Further points of concern are as follows. - The author responded that the confusion matrix was added as Table 4. However, Table 4 shows the images misclassified by Inception-ResNet-v2. - The author presented training and validation results in Figures 6, 10, and 11. However, the figures are of relatively low resolution and show some noise. The legend should not overlap the graph. - In Figures 7, 8, and 9, what are the numbers under each pattern? The author should describe them in the caption or in the text. - Up-to-date references (2019-2021) are required. Experimental design: The author clearly explained the experimental results. The experiments cover the contribution. Validity of the findings: In Figure 5, please check the Fisher vector pooling block. Something appears before the word "pooling." Additional comments: - The author responded by adding the data augmentation experiments. I found the words "data augmentation features" in line 234. However, I cannot see the comparison results. Could the author recheck this? - The discussion section is presented in this revised manuscript.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 1 Reviewer: 1
Basic reporting: 1.1. Clear and unambiguous, professional English used throughout. I find the English acceptable for publication. However, I suggest adding some definitions and unifying some concepts. I found it difficult to understand whether some expressions were used interchangeably or not, which made it difficult to grasp the scope of the paper regarding the type of software included in the study. In particular, * Is “computer systems” used as a synonym for “computer science”, “software”, “system software”, some of them or none? * Does system software correspond to software used as a platform to develop other software, including operating systems, software languages and/or utility software? * Is the paper about system software in general or only system software created as part of a research process? Is the paper about software artifacts or research artifacts (which also include, for instance, data)? * What is the field the author refers to? Is it computer science, computer systems, system software, research software? Some typos: * Line 11, “research’s results”, possibly no need for the possessive form * Line 60, “are valued an are an important contribution”, possibly “and” rather than “an” * Lines 443 and 444, “many—-f not most—”, possibly “if” rather than “f” I suggest a full proof-reading to catch and fix those issues. 1.2. Literature references, sufficient field background/context provided. The article is accompanied by a good number of references. However, the author should double check the citation information. For instance: * Line 472, reference ACM, URL not working (https://www.acm.org/publications/policies/artifact-review-and-badging-current). * Whenever possible, even for web pages, please include information about the author and publication date. * The reference used for FAIR applied to research software (line 319, Katz et al., 2021) corresponds to a (partial) outcome of a Research Data Alliance Working Group and could be updated to the now first formal output produced by this group (see DOI:10.15497/RDA00065 and corresponding PDF https://www.rd-alliance.org/system/files/FAIR4RS_Principles_v0.3_RDA-RFC.pdf) * I suggest also including a reference to the code supporting this publication. I know it is provided as supplementary material, but getting a permanent identifier together with citation data for this particular release is also possible (and slowly becoming a common practice). I would also like to see the GitHub pages working and linked to from the publication (possibly with a note regarding differences that might appear as the repo evolves post-publication). 1.3. Professional article structure, figures, tables. Raw data shared. The article structure is appropriate, as are the figures and tables. Raw (and processed) data is included in the supplementary materials. 1.4. Self-contained with relevant results to hypotheses. It was possible to read the article on its own. Results are aligned with the hypotheses. 1.5. Formal results should include clear definitions of all terms and theorems, and detailed proofs. Code to process raw data and analyze results is shared, including a set of instructions on how to use it. The presentation of results is clear, detailed and supported by the presented data. Experimental design: 2.1. Original primary research within Aims and Scope of the journal. The article aligns with the aims and scope of the journal. 2.2. Research question well defined, relevant & meaningful. It is stated how research fills an identified knowledge gap. Yes 2.3.
Rigorous investigation performed to a high technical & ethical standard. It appears so. 2.4. Methods described with sufficient detail & information to replicate. It appears so. Validity of the findings: 3.1. Impact and novelty not assessed. Meaningful replication encouraged where rationale & benefit to literature is clearly stated. Replication is possible thanks to the available code and the basic set of instructions on how to run it. 3.2. All underlying data have been provided; they are robust, statistically sound, & controlled. Raw data has been provided. 3.3. Conclusions are well stated, linked to original research question & limited to supporting results. I would say yes. Additional comments: I find this article of broad interest for the research community. Sharing software and, in general, research artifacts is slowly becoming a common practice aligned with Open Science and the FAIR principles. I hope to see more research on this topic. As the authors stated, one of the main challenges was the curation of the data, which was mainly manual. Although outside the scope of this paper, I hope research in this area will motivate not only conferences but also preprints and journals to be more proactive regarding the metadata accompanying research artifacts.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 1 Reviewer: 2
Basic reporting: [Introduction] The list of detailed research questions presented in the Introduction contrasts with the regular scope of an initial section. These questions should be properly justified and connected, which should be done in a further section. Alternatively, I suggest the authors expand the "interesting" question (line 66) by exemplifying other possible indicators that would be influenced by artifact sharing. I suggest the authors summarize the main findings of the study in the Introduction. [Background and Related Work] The author cites several references addressing open science and initiatives promoting artifact sharing, which address the research background. However, I missed a section discussing related work on the topic, for instance, previous investigations/discussions on the maturity of open science in the field. [Figures/Tables] I suggest presenting the Figure 1 data using tables or grouping conferences by frequency. Yet regarding Figure 1, the relevance of identifying the "Organization" of the conferences is not clear to me. For instance, several conferences continuously alternate between ACM and IEEE. I understand Table 1 should adopt a better-contextualized criterion than alphabetical order for listing conferences. For instance, the author may use the number of papers or the acceptance rate. Each conference has a variable time of running/publication, which may even lead to its publishing in the next year. I would like to understand how the author reached "42 months of publication" for all publications and why he opted to follow this particular number. [Verifiability] I was expecting a user-friendly dataset available from a research paper addressing the artifacts' availability in research papers. Experimental design: Before reading about the study results (Section 3), I was expecting a section reporting the study design. After reading the entire "Data and Methods" section (Section 2), I could not find any reference to the research goals/hypotheses. Besides, the author should properly address the study population and sample investigated. The text of section 2 is too monolithic and hard to follow. It does not address the basic content expected from a study design section. After reading it, I only got a vague idea about what the authors had done (execution) and the main data types they collected. The criteria followed by the author to reach the "hand-curated collection" of 56 peer-reviewed "systems conferences" are not clear. Even if the sample was established by convenience from Google Scholar, the authors should make this clear in the paper. I could not work out what the author means by "systems conferences." Besides, I am afraid of the lack of representativeness of the sample addressed in the study. For instance, I could find only one relevant conference from the Software Engineering field. The methodology followed for identifying artifacts in the papers is not clear. Besides, I understand that the quality of the artifacts available in research papers should also be addressed. I recommend the authors plan and perform an evaluation in this direction. Validity of the findings: Section 3.2 illustrates my difficulty in understanding the research methodology. It is a subsection about results. However, it starts by presenting a new research question: "does the open availability of an artifact affect the citations of a paper in computer systems?". Besides, this RQ is presented as "the main research question of this paper."
Contrastingly, several of the eight research questions presented in the paper Introduction are barely addressed in the paper. The findings of the paper are not discussed. For instance, what are the possible implications of these findings for the practice of sharing artifacts? What are possible ways of filling the gaps observed? Section 4.1 "Discussion" basically summarizes the several statistical tests and analyses performed. However, even here I could not see a clear connection among them. Besides, without a proper research plan, I could not understand why all these tests and analyses were performed. Only at the end of Section 5 could I observe some links between the study results and some implications. Additional comments: Unfortunately, the lack of proper reporting of basic content expected from a research paper prevented me from performing a more accurate review and indicating further suggestions for improvement. Despite that, I recognize the relevance of the research topic. Therefore, I encourage the author to keep on with this research. Maybe the author could rethink the design of this study as a systematic review addressing other research artifacts beyond the papers.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 1 Reviewer: 3
Basic reporting: This article is well-written and addresses statistically the very important question of whether sharing of software artifacts has an effect on citation metrics in computer science, as well as the need for proper archiving of software artefacts for longevity. The argument is well put forward and supported by a thorough data investigation for the given sample set. A couple of smaller typos were found. As the article points out, the availability of software artefacts is important - however this article itself does not have an explicit "Data and software availability" section - which would be very valuable given the data quality and reproducibility provided for this manuscript. I would also expect the data to be archived in Zenodo with a DOI rather than a fluctuating GitHub reference - I know the author has also uploaded a snapshot of said GitHub repository along with the article - but this would not have been detected by their own measures. As the data seems to be used for multiple articles under review, a separate version-less Zenodo DOI would be able to bring these articles together for describing different aspects of the same data (e.g. using the Related Identifiers mechanism in Zenodo). Only one of these other articles is cited from this manuscript. Kelly McConville is acknowledged for assisting with the statistical analysis - this article is quite strong on statistics - I might have added McConville as co-author to recognize this fact. Experimental design: This research is rigorous and the experimental design is well explained. Validity of the findings: I have not assessed the choices of statistical methods, but the authors have been thorough throughout, e.g. providing p-values and describing method choices. The actual values and figures are also calculated dynamically by the R markdown for the manuscript, and so can be reproducibly verified. In order to verify the reproducibility I had some challenges in getting the R environment set up - however the author quickly responded to my GitHub issue and they added detailed installation instructions which helped me. The findings are based on a large subset of the CS systems conferences - perhaps more could be added on why these particular conferences (e.g. practicality of access, notability, familiarity to the authors). The authors have done a good selection time-wise (all considered data in 2017) in order to consider citations building over time. The data set can be hard to navigate and use as it is shared across multiple publications and corresponding R markdown environments. README files have been provided, but the manuscript itself does not detail which of the data is relevant to this particular manuscript. For instance it is possible for someone proficient in R to analyse the R Markdown of pubs/artifact/artifact.Rmd where the statistical analysis is embedded. The data/ CSV files are also well documented in that they each have a little file describing their schema - some of these state their provenance, which is nice. Additional comments: Hi, I am Stian Soiland-Reyes http://orcid.org/0000-0001-9842-9718 and believe in open reviews. This article is a very welcome addition to the CS field, where sharing of software artefacts has only recently emerged as a practice, catching up with fields like bioinformatics. The key message from this article for me is showing evidence for CS authors that sharing artefacts is beneficial for science and for their articles to be cited.
I therefore think this is a very important article for PeerJ CS to publish. My recommendation would have been Accept, except that the article should more clearly promote its Data/Software Availability as well as archive those with their own DOI at Zenodo or a similar repository.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 2 Reviewer: 1
Basic reporting: I appreciate most of the improvements made by the author to the paper. Now the paper reads better. The main research goal and study hypothesis are clear. Some design decisions are now better contextualized. However, there are still relevant remaining issues to be addressed. In the following, I summarize my key recommendations for this paper. 1- The research steps should be completely described, especially in the case of following a systematic design (which is still not clear to me). 2- The author should contextualize the detailed research questions and corresponding data analysis procedures, considering the main study goals and hypotheses. 3- Discussions should focus more on interpreting the implications of the study findings rather than adding more ad hoc data analysis and statistical tests. 4- The author should significantly improve the discussions about threats to validity. Experimental design: The author does not report the systematic steps followed for gathering papers' data. For instance, "....several search terms were used to assist in identifying artifacts, such as "github," "gitlab," "bitbucket,", "sourceforge," and "zenodo" for repositories; variants of "available," "open source," and "download" for links; and variations of "artifact," "reproducibility," and "will release" for indirect references." Besides, it is not clear how the author checked whether a paper received an award. At the beginning of the paper, several research questions are presented without a proper justification. These detailed RQs should be addressed and contextualized in the proper section. I understand that the authors followed a partially ad-hoc process for gathering papers' data without double-checking, instead of following a systematic one. If true, the author should make these relevant threats to validity explicit. Otherwise, the author should report step by step how they obtained each piece of data to make the package reproducible. Validity of the findings: I could not find any mention of double-checking of the gathered data. It is relevant at least to support data gathering through non-systematic steps. What was the normality test applied? p-value? Yet about normality, the authors state that the data was normalized. However, after that the author argues that "...omit the 49 papers with zero citations to improve the linear fit with the predictors." Outliers? And what about those with >2000 citations? "...It is worth noting, however, that most of the source-code repositories in these artifacts showed no development activity—commits, forks, or issues—after the publication of their paper, suggesting limited impact for the artifacts alone". I disagree about this example of poor impact. Please note that commits and forks are typically made for source code, which is not necessarily the case for research data. I miss a discussion of the findings towards causality (which seems to be unfeasible) rather than more statistical tests (which do not help to understand the original results). Some of the complementary analyses seem to address irrelevant paper features (at least without proper contextualization). For instance: "Incidentally, papers with released artifacts also tend to incorporate significantly more references themselves (mean: 32.31 vs. 28.71; t = 5.25, p < 10−6 )." I definitely could not see why this information is relevant in the context of the research. The same for "using colons" and "paper length," among others. "...
In contradistinction, some positive attributes of artifacts were actually associated with fewer citations. For example, the mean citations of the 573 papers with a linked artifact, 47, was much lower than the 71.3 mean for the 102 papers with artifacts we found using a Web search (t = −2.02, p = 0.04; W = 22865, p < 10−3247 ). Curiously, the inclusion of a link in the paper, presumably making the artifact more accessible, was associated with fewer citations. This finding strengthens the need for going beyond a quantitative analysis over a single sample established by convenience. For instance, I did not see any treatment of self-citations. "Of the 2439 papers, 292 91.8% displayed at some point an accessible link to the full text on GS. What is the impact of these 9.2% on the findings of the citation analysis? Accessibility is definitely relevant in this context. "These textual relationships may not be very insightful, because of the difficulty to ascribe any causality them, but they can clue the paper's reader to the possibility of an artifact, even if one is not linked in the paper." I am afraid I could not get the point here. "As a crude approximation, a simple search for the string "github" in the full-text of all the papers yielded 900 distinct results. Keep in mind, however, that perhaps half of those could be referring to their own artifact rather than another paper's, and that not all cited github repositories do indeed represent paper artifacts." I could not see how this analysis is relevant to support the discussions. Additional comments: Figure 1 is not cited. Besides, I understand that this figure could be omitted from the paper. I suggest combining the analysis presented in Figure 2 and Figure 3 into a single one. Table 1: please standardize the number of decimal places. 2439 papers => 2,439 papers. "...when skimming these paper*s*...". The Zenodo package is not cited in the paper.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 2 Reviewer: 2
Basic reporting: This article is well-written and addresses statistically the very important question of whether sharing of software artifacts has an effect on citation metrics in computer science, as well as the need for proper archiving of software artefacts for longevity. The argument is well put forward and supported by a thorough data investigation for the given sample set. The overall language, background, and methodology details have been significantly improved in response to peer review. Experimental design: This research is rigorous and the experimental design is well explained. The author has responded well to my change suggestions in GitHub and has now added a Zenodo DOI with an archive of the data and software. Validity of the findings: I have not assessed the choices of statistical methods, but the authors have been thorough throughout, e.g. providing p-values and describing method choices. The actual values and figures are also calculated dynamically by the R markdown for the manuscript, and so can be reproducibly verified. The data set can still be a bit hard to navigate as it is intermingled with the analysis, but the strength of that is that the data results are (for this paper) reproducible. The install instructions have been improved. Additional comments: Thank you for improving this submission; this is now a very strong contribution. Repeating from my previous review: This article is a very welcome addition to the CS field, where sharing of software artefacts has only recently emerged as a practice, catching up with fields like bioinformatics. The key message from this article for me is showing evidence for CS authors that sharing artefacts is beneficial for science and for their articles to be cited. I therefore think this is a very important article for PeerJ CS to publish.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 3 Reviewer: 1
Basic reporting: The article is presented in a clear way, with definitions and references useful to the reader. I would still suggest a quick proof-reading to double check for typos and so on. For instance, an article seems to be missing in the sentence starting with "An additional goal of is an exploratory". The first mention of a research artifact (including data, publication, software, etc.) should be accompanied by a reference, even if detailed later (this makes it easier for readers interested in that element to go directly to it rather than looking for the reference somewhere else in the text). This is not the case for the dataset produced together with this study, first mentioned on line 115. Regarding the sharing of data and software: for any future occasion, I suggest separating data from software as they might have different licenses (unless using a data version for both is indeed intended). Some conferences might have the same acronym. I suggest expanding the name and adding a link to their websites (if still available). Experimental design: I suggest making clear from the beginning that the only research artifacts analyzed in this study are in the form of software (or systems software), excluding data, workflows and others. Step 6 (line 200) needs a more detailed explanation. I (randomly) checked the file https://github.com/eitanf/sysconf/blob/master/data/papers/ASPLOS.json from the GitHub repo corresponding to this study; it includes some "citedBy" information. The first paper "cherupalli2017determining" appears with 21 citations in Google Scholar, but that does not seem to coincide with the "citedBy" in that file or with the "outCitations" in the file https://raw.githubusercontent.com/eitanf/sysconf/master/data/s2papers.json. Based on the description on the GitHub repo "papers/: A collection of JSON files, one per conference, with Google Scholar statistics on each paper in the conference", I was expecting to find 21 "citedBy" items there for the mentioned paper but I saw more. Again, this was a random and quick look, but still it might be worth adding some more information on this step (and maybe the others). Validity of the findings: I find this study of value as we need to understand better how research artifacts beyond the paper itself are shared and what impact they represent in terms of recognition (e.g., in the form of citations) and reproducibility. One of the limitations of this paper that is not discussed at all (it is maybe outside the scope) is the exclusive focus on software as the research artifact. Sharing and citing data has been encouraged longer than sharing and citing software. One question that arises is whether the analysis of citation count would change across papers only sharing data, only sharing software, or sharing both. One of the premises in this study is that a paper sharing software is more reproducible and would therefore be preferred by researchers, i.e., translating into more citations. I find it difficult to reproduce a paper with data but not software, or vice versa. Additional comments: Thanks to the author for taking the time to address the reviewers' comments. I find the current version an improvement over the previous one.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 3 Reviewer: 2
Basic reporting: I understand the author made considerable improvements in the manuscript, especially those addressing the study's characterization and reproducibility. Therefore, I recommend accepting the paper. Besides, I invite the author to check the grammar of the following sentences: "These textual relationships may not be very insightful, because of the difficulty to ascribe any causality them, but they can clue the paper's reader to the possibility of an artifact, even if one is not linked in the paper." "…Using a search function on each document ("pdfgrep") on each of search terms listed above…", please check the grammar. Yet regarding the last sentence, please note that the paragraph where the terms are listed reports that "...several search terms were used to assist in identifying artifacts, *such as* "github," "gitlab," "bitbucket,", "sourceforge," and "zenodo"…." Here, I am afraid the use of the expression "such as" may not make it clear whether all the terms applied in the searches are listed (as expected). It may represent a reproducibility issue to fix. Experimental design: ok Validity of the findings: ok Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 4 Reviewer: 1
Basic reporting: No comments Experimental design: No comments Validity of the findings: No comments Additional comments: Thanks for taking into account all the comments raised by reviewers. Two comments on this new version. In line 202 “to” is duplicated, see “closest to to the”. One more comment related to the rebuttal letter. Comment by reviewer: I suggest making clear from the beginning that the only research artifacts analyzed in this study are in the form of software (or systems software) excluding data, workflows and others. Response by the author: That is not quite the case. I looked at all the research artifacts I could find in repositories and digital libraries. A minority of those were pure data or configuration files (even when hosted on github.com). They are treated equally to the code artifacts in this study. The article is titled “Software artifacts and citations in computer systems papers”. The abstract mentions software again “The availability of these software artifacts is critical” and the Introduction section mentions code “an important step towards this goal is the sharing of artifacts associated with the work, including computer code” and then the rest of the article mentions “artifacts” in general. It puzzles me that the title is about software artifacts but the article is about research/experimental artifacts.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: RESEARCH ARTIFACTS AND CITATIONS IN COMPUTER SYSTEMS PAPERS Review round: 5 Reviewer: 1
Basic reporting: Thanks for the revised version. I think all points raised by the reviewers have now been addressed. Experimental design: No comment Validity of the findings: No comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A MATHEMATICAL FORMULATION AND AN NSGA-II ALGORITHM FOR MINIMIZING THE MAKESPAN AND ENERGY COST UNDER TIME-OF-USE ELECTRICITY PRICE IN AN UNRELATED PARALLEL MACHINE SCHEDULING Review round: 1 Reviewer: 1
Basic reporting: The language used in the study is very clear and unambiguous. In the introduction, the topics are discussed in an orderly manner. There are enough figures to make the paper understandable. The experiment and test part, in particular, has been prepared in detail and with care. Necessary definitions that make the equations more understandable are given in tables. Experimental design: It is a study in accordance with the aims and scope of the journal. Validity of the findings: The conclusion part is detailed and explanatory. The curves obtained prove the effectiveness of the proposed method. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A MATHEMATICAL FORMULATION AND AN NSGA-II ALGORITHM FOR MINIMIZING THE MAKESPAN AND ENERGY COST UNDER TIME-OF-USE ELECTRICITY PRICE IN AN UNRELATED PARALLEL MACHINE SCHEDULING Review round: 1 Reviewer: 2
Basic reporting: no comment Experimental design: no comment Validity of the findings: no comment Additional comments: This study addresses a new bi-objective unrelated parallel machine scheduling problem with sequential setup times, aiming to minimize the makespan and the total energy cost. The motivation, importance, and flow of the article are quite good. The contribution of the article appears to meet the standards of the "PeerJ Computer Science" journal.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A MATHEMATICAL FORMULATION AND AN NSGA-II ALGORITHM FOR MINIMIZING THE MAKESPAN AND ENERGY COST UNDER TIME-OF-USE ELECTRICITY PRICE IN AN UNRELATED PARALLEL MACHINE SCHEDULING Review round: 1 Reviewer: 3
Basic reporting: The abstract should not be expressed in general sentences. Instead, it should explain as much as possible about the work done and the conditions under which the work takes place. It will be much more convenient and useful for the reader/researcher to provide information containing answers to questions about effects, values, results, etc. The introduction section should be developed more clearly and concretely for those who are going to read this article. There are disconnects between paragraphs. It should be rewritten more fluently. Experimental design: Similar articles in this field should be summarized and tabulated for readers at the end of the literature review section. A flowchart that shows the general structure of the work should be attached to the article. The sections proposed in the flowchart can be shown in more detail. The dataset studied and given as an example should be specified in the article and added as a reference. The success of the NSGA-II algorithm should be compared with other basic algorithms in this field. The advantages and disadvantages should be revealed by examining the results obtained. Validity of the findings: The results should be compared with similar results obtained in similar studies conducted earlier. The effective aspects of the study should be highlighted. Considering the situations arising as a result of this study, new ideas should be offered to readers. Additional comments: no comment
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: A MATHEMATICAL FORMULATION AND AN NSGA-II ALGORITHM FOR MINIMIZING THE MAKESPAN AND ENERGY COST UNDER TIME-OF-USE ELECTRICITY PRICE IN AN UNRELATED PARALLEL MACHINE SCHEDULING Review round: 2 Reviewer: 1
Basic reporting: The authors correctly understood the changes I recommended and responded appropriately to all of them. Considering the necessary explanations, the article was revised as desired. Experimental design: The authors correctly understood the changes I recommended and responded appropriately to all of them. Considering the necessary explanations, the article was revised as desired. Validity of the findings: The authors correctly understood the changes I recommended and responded appropriately to all of them. Considering the necessary explanations, the article was revised as desired. Additional comments: The authors correctly understood the changes I recommended and responded appropriately to all of them. Considering the necessary explanations, the article was revised as desired.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 1 Reviewer: 1
Basic reporting: In investigating if and how syntactic information can be used to de-bias hate speech recognizers, this paper presents negative results: even when just syntactic features are considered, prejudice is captured from training corpora and makes the application to new datasets not very fruitful. The advantage of Kermit, the model proposed by the authors, is that it enables a visual, post hoc analysis of the results by exploring the syntactic information used in the decision process of a neural network. Consequently, it is possible as future work to define ad-hoc rules for mitigating the bias, even if the authors do not clarify how this feedback can be incorporated into the model. The paper is sometimes unclear or incomplete. In my opinion, the authors should provide - a short definition of hate speech recognizers in the introduction - in the related work section, citations of works about gender-based bias in neural models A native speaker should check the paper; here is a list of several mistakes: Line 41- 42 There is a repetition "aiming in transformers aiming to put trigger words in contexts" Line 90 - 91 "learned models fail to capture another important fact: learned models can "right from the wrong reasons" -> "learned models fail to capture another important fact: learned models can be "right from the wrong reasons" Line 100 - 101 "help in shedding the light" -> "help in shedding light" Line 113 - 114 "Zanzotto et al. (2020)" is not reported correctly (add a verb) Line 130 "Devlin et al. (2018)" is not reported in the correct way Could you standardize the name using a letter for the model Kermit with a thunder symbol? Experimental design: line 35-36 The authors wrote "Ethically-charged biases are extremely more dangerous than simple probabilistic biases, which can lead to a conclusion utilizing wrong premises." However, I think that even the first type of bias is probabilistic in nature, in the sense that the model created on training data acquires its bias probabilistically. In other words, I don't see this dichotomy between probabilistic vs. ethical bias since even ethical bias is acquired through a probabilistic process. Statistical bias becomes ethical when we recognize it as such. Kermit, a syntax-based hate speech recognizer, is trained on a possibly unbiased training set and tested on two newly created datasets. The description of the datasets is not clear, and sometimes they are referred to as corpora. I do not think that the way the new datasets have been created makes them meaningful for the reported experiments. For example, the authors suppose that the Black Lives Matter corpus is composed of tweets and Instagram comments about a movie, produced by users from the black community. However, comments and tweets about the movie can be produced by users of different ethnicities. Since the authors cannot be sure that the majority of the content came from the black community, the experiment is flawed. Validity of the findings: Since the datasets created specifically to investigate the research questions are not well designed, it is not clear what the experiments are showing. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 1 Reviewer: 2
Basic reporting: - The structure of the paper is clear and reasonable. - The form of the sentences needs to be corrected: the beginning of a sentence must be capitalized and the end of a sentence must have a period, e.g., lines 52, 53, 106-111, ... - I don't understand the "SaettaKermit" concept because it only appears in Table 2. - The sentences and grammar need proof editing to improve the text. Experimental design: - You need to describe more clearly the process of creating the dataset: how was it annotated, and how was inter-annotator agreement calculated? It is necessary to analyze and compare datasets on many different linguistic aspects. For example, this paper (https://arxiv.org/pdf/2103.11528.pdf) describes how to create a dataset for HSD. - The evaluation indicators have not been clearly described. - There should be an ablation test to clarify the role of syntax integration. - The perturbation contribution needs to be further analyzed with clearer data. I don't know what the number is or how much your method solves. Validity of the findings: Integrating syntactic information into models is essential. This paper has proved its effectiveness on the HSD task. But you need to address my suggestions to make the paper more persuasive. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 1 Reviewer: 3
Basic reporting: - p.1: what the authors mean by the phrase "ethically charged" can be misleading. The expression in itself can make one think of something that bears a positive connotation (i.e. something that is supposed to do some good and has a positive societal impact), but from what is stated in the immediately following sentences, it would seem to indicate the exact opposite, that is, a type of bias that brings potential representational harms to certain groups - nodes in the parse trees should be made more visible; I understand that the purpose is to provide a visual representation of the different activation values, by foregrounding the active nodes, but in some figures the other nodes (especially the black ones) are barely readable, while I think it's important to give a clear picture of the whole sentence, also showing the nodes that remained inactive (the color scale used by the authors is indeed helpful in this) - p.3, ll. 82-3: is it "unintended" bias, instead of "untended"? I don't seem to find the reported quotation in Hardt et al. 2016 - p.8, ll. 304-5: I don't fully understand what the authors mean when they say that Davidson et al.'s dataset is not sensitive to syntax; they motivate this claim commenting on the same accuracy reported for the three BERT-based models, but they should probably try to elaborate more on this, maybe explaining in particular what the BERTreverse and BERTrandom models are useful for (besides just pointing to Zanzotto et al. 2020). - ll. 310-1: it is not clear why the dataset of Davidson et al. should be a good candidate for experiments related to the second research question, while it is not the case for that of Waseem and Hovy, which is the one on which SaettaKermit gave its best results. Wouldn't it be worth replicating the experiments of round 2 on this dataset as well? - ll. 314-15: the data shown in the figure seem to contradict what is stated in this sentence: from what I see, in the other two datasets BERTbase also prefers, in non-"neutral" tweets, the classification of the tweet content as hateful, rather than as offensive (although in the Democrats dataset this difference is much less marked). - Appendix C: 1) it should be made clearer in the text that sub-figures b in the examples are those where one or more words were changed 2) line 472 says "parse trees obtained from KERMITbert", but based on what is described in sect. 4.3 the qualitative analysis was carried out on the parse trees from SaettaKermit, so using examples from a different model sounds very confusing. Should I assume this is just a typo or is there any other reason? Typos: - l. 75: dialectic -> dialect (?) - non-syntactic citations should be put in brackets, e.g. in line 114: " [...] for decisions in neural networks (Zanzotto et al. 2020). " - l.91: models can "right for the wrong reasons" -> can be right (?) - l.116: model for hate speech recognizer -> recognition - l. 162: it should helps -> help - l. 319: has matches -> matches (?) - Appendix C: the parse tree in Fig. 6a is exactly the same as the one in Figure 6b Experimental design: - p.6+Appendix A: (on the creation of the US elections dataset) a hashtag is not necessarily an endorsement, so it is not clear on what basis the presence of a hashtag referring to Biden or Trump can establish the political orientation of the user in such a deterministic way.
Further details are provided in Appendix A about this, but it is not clear what the percentages in Table 5 actually refer to; is it the proportion of tweets in the respective dataset? Furthermore, the authors try to show that the choice of using the hashtag as a proxy for the assignment to one of the two datasets is reflected in the data on exit polls. However, the data given in the dedicated table 1) are not systematic (for some states the proportions are reversed), 2) the numerical difference does not appear so striking, except in a few cases, 3) the table is incomplete, in that the data are not shown for all states, but only for those in which - according to the authors - the gap is greater. If, as I understand it, the gap refers precisely to the percentage of votes given to one of the two parties, the use of the hashtag as a discriminating factor seems to be a fairly weak motivation - p.7: the sample used for the Blind test and follow-ups is quite small, especially considering the original size of the novel datasets created for the experiments. Is there a specific reason for such a choice? Also, what background did the annotators have? Were they under/graduate students, crowdworkers, experts on syntax/deep learning, other? This seems to be relevant information to add, especially to help the reader understand the possible reasons behind the low IAA results. - Appendix C: Fig. 5a: I don't see how this can be considered an instance of an offensive tweet. Is the example taken from the sample of tweets used for the second round of experiments? What I mean is whether human annotators actually agreed on the final label provided by Kermit. Validity of the findings: no comment Additional comments: The paper introduces KERMITz (or SaettaKermit, as I will call it henceforth in the detailed comments), a visual tool that extends a syntax-based hate speech recognizer (Kermit), and describes the experiments carried out with such a tool. More specifically, the two research questions the authors aimed to explore in this work are 1) whether the use of syntactic information is helpful and contributes to the improvement of the performance of a hate speech classifier, 2) whether the use of such information can help remove possible biases, which other models, based mainly on lexical information, may tend instead to reproduce and amplify. With respect to the first point, the reported experiments show how the use of syntax - in the form of constituent trees - significantly increases performance. As for the second, however, the experiments carried out show that even the use of syntax is not able to reduce the presence of biases in the system's decision-making process. Precisely the negative results detected with respect to the latter point represent the main contribution of this work, together with the possibility of using a hate speech recognition system that also provides a visualization tool for post-hoc analyses of the system's decisions. The paper, however, is not always easy to follow, and relevant information is sometimes reported in a confusing or incomplete way.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 1 Reviewer: 4
Basic reporting: The article is generally well written and clear, but there are some typos: Line 141: the definition [...] that focusES (not "focus") Line 156: language is THE real reason Line 162: as it should help (not "helps") Line 187: 3.4.2 Black Lives Matter corpus (with capital "L" and "M") Note 3, Line 202: Available at (with capital "A") Line 205: was the United States presidential election (without "about") Line 315: and this predisposition (without the "-") Appendix A, Line 458: two datasets; (2) states (put a semicolon between point 1 and 2!) Appendix B, Line 463: if they agreed (using "he/she" is correct too, but the neutral "they" is also a good choice in standard formal English) Also, pay attention to bibliographic references, because some of them are not between brackets: Line 114: for decisions of neural networks Zanzotto et al. (2020) > for decisions of neural networks (Zanzotto et al. 2020). Line 130: The transformer component consists of BERT Devlin et al. (2018) > The transformer component consists of BERT (Devlin et al. 2018). Tables and figures are clear, but it is fundamental to correct: Table 2, bottom line: Is "SaettaKermit" KERMIT (with the bolt in subscript)? You never call it SaettaKermit in the rest of the article, so please align "SaettaKermit" with your usual terminology, or at least explain in the notes what "SaettaKermit" is; Appendix C, Figure 6 (FUNDAMENTAL!): Example (a) has the same image as example (b). Please replace it with the correct one (Mary is American), because Figure 6's examples are really powerful and useful for the paper's comprehension. The Introduction should contain examples of the prejudice you are describing, especially at Line 33 and at Lines 39-40-41. Otherwise, it might appear that you believe that hate speech producers are somehow unfairly censored. For this reason, it might be useful for you to specify that this kind of unjust censorship is applied to those who produce non-hateful messages, thus also involving marginalized people who are simply describing their experiences (such as "Mary is African", from Appendix C). Also, it might be a good idea to specify that hate speech moderation is not censorship. This is motivated by the fact that hateful messages make their targets less prone to communicate and to express their point of view, thus de facto censoring them. This is the reason why preventing hate speech is a means to protect freedom of speech. Useful bibliography on this subject: West, C. (2012), Words That Silence? Freedom of Expression and Racist Hate Speech, in I. Maitra & M. K. McGowan (eds.) Speech and Harm. Controversies Over Free Speech, Oxford, University Press Scholarship Online. VERY IMPORTANT: The Introduction should contain a definition of hate speech, as hate speech currently does not have a universally accepted definition. In 4.1 Experimental Set-up you should include the inter-annotator agreement in the description of the Blind/Inside-out/Prejudice Tests, in each passage. Experimental design: No comment Validity of the findings: All underlying data have been provided; they are robust, statistically sound, & controlled: I am uncertain about this. The methodology used is generally solid, the underlying data have been provided and the experiments can be replicated. However, in my opinion, some sections could be improved.
1) 3.4.2 Black Lives Matter corpus: I am not convinced by the choice of investigating "non-offensive utterances produced during the Black Lives Matter (BLM) protest and written by the black community" through gathering tweets and Instagram posts about a movie. "Black Is King" is indeed a significant movie for the Black community, but you cannot be sure that only Black people commented on it on Twitter/Instagram. Although this corpus is not without potential for studying the BLM movement, perhaps a corpus composed of tweets directly about the BLM movement could have been more adequate, albeit also with more noise and hate speech. 2) 4.1 Experimental Set-up (from Line 272): The Blind/Inside-out/Prejudice Test is excellent, but I am not convinced by the number of sentences analysed, especially in the case of the Blind test, from which the sentences analysed in subsequent tests derive. I think 34 sentences are too few, especially if selected randomly from both corpora and if you want to compare the human annotation of HS with the annotation of BERT and KERMIT. There is no need to annotate manually over 6,000 tweets, like in the HaSpeeDe 2 task from EVALITA 2020 (Sanguinetti et al., 2020), for example, but at least 100. Also, who are the annotators? You need to include some information about them, such as: - Are they English native speakers? Or do they at least have a good comprehension of written English? - What is their nationality? Are they from the United States, and thus emotionally involved in the BLM/presidential election? - Are there black people among them? There is a study on the recognition of misogynistic hate speech which highlights that annotators who are part of the group targeted by the specific hate they are annotating annotate hate speech differently from annotators who are not targeted by the same hate speech (see Wojatzki et al. (2018) for the full study). Could we be facing a similar case with black people and white people annotating racist hate speech? Even without answering this question, this possibility should be taken into consideration in the paper. - Do they have any training in annotating hate speech? Have you given them a definition of hate speech? Sanguinetti M, Comandini G, Nuovo ED, Frenda S, Stranisci M, Bosco C, Caselli T, Patti V, Russo I (2020) Haspeede 2 @ EVALITA2020: overview of the EVALITA 2020 hate speech detection task. In: Basile V, Croce D, Maro MD, Passaro LC (eds) Proceedings of the seventh evaluation campaign of natural language processing and speech tools for Italian. Final Workshop (EVALITA 2020), Online event, December 17th, 2020, CEUR Workshop Proceedings, vol 2765. CEUR-WS.org. http://ceur-ws.org/Vol-2765/paper162.pdf Wojatzki et al. (2018), Do Women Perceive Hate Differently: Examining the Relationship Between Hate Speech, Gender, and Agreement Judgement, in A. Barbaresi, H. Biber, F. Neubarth & R. Osswald (eds.) Proceedings of the 14th conference on Natural Language Processing (KONVENS 2018), pp. 110-120. Additional comments: This paper discloses important findings for the study and the automatic recognition of hate speech. It needs to revise its language in a few paragraphs and to improve its data, in order to make them more solid. I think that the authors have done an excellent job and I believe they should be given the opportunity to revise their text for publication.
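As an illustration of the inter-annotator agreement reporting requested in the reviews above, here is a minimal sketch of computing Cohen's kappa between two annotators with scikit-learn; the label lists are hypothetical placeholders, not data from the paper:

from sklearn.metrics import cohen_kappa_score

# hypothetical binary labels from two annotators over the same tweets (1 = hateful, 0 = not)
annotator_1 = [1, 0, 0, 1, 1, 0, 1, 0]
annotator_2 = [1, 0, 1, 1, 0, 0, 1, 0]
kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")

For more than two annotators, a multi-rater coefficient such as Fleiss' kappa (e.g., statsmodels.stats.inter_rater.fleiss_kappa) is a common choice.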
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 2 Reviewer: 1
Basic reporting: The paper is significantly improved with respect to the previous version, but several sentences are hard to read and there are some typos. For example: Line 35 "HSRs aim to recognized posts" -> "HSRs aim to recognize posts". Reformulate the whole sentence. Lines 51-54: reformulate the sentence. Line 373 "Hate speech recognizers (HSRs) are a dual-use technology as these" -> "Hate speech recognizers (HSRs) are a dual-use technology that" Experimental design: I think that more details about the datasets created (a couple of concrete examples of the texts they contain) could be useful to better understand the complexity of the task. Validity of the findings: I like the fact that the focus of the paper is on negative results. In this version it is more evident. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SYNTAX AND PREJUDICE: ETHICALLY-CHARGED BIASES OF A SYNTAX-BASED HATE SPEECH RECOGNIZER UNVEILED Review round: 2 Reviewer: 2
Basic reporting: The paper still needs thorough proofreading in a number of points. For example, the paragraph in ll.34-45 is very confusing and not really easy to follow, especially the discussion on censorship and free speech. I see the point, but the very notion of censorship here seems to have first a positive and then a negative connotation. The authors should try to organize the section such that concepts and ideas flow in a more logical way. In addition, the authors didn't fully address reviewers' request to introduce an operational definition of HS. They did, in fact, add references to relevant literature on this matter, but failed to provide a clearer picture of the phenomenon. Highlighting - as the authors rightfully did - the struggles in finding a common definition and the blurry boundaries with related phenomena is perfectly fine, but there actually are some distinguishing traits in HS (e.g. incitement to hate and violence against a specific target), and they should be properly defined in the paper. - l.107-8: "the above measures of prejudice for learned models fail to capture another important fact: learned models can be “right from the wrong reasons”": this statement could be further expanded and motivated. Other typos and errors spotted here and there: - l.89: "inside a neural networks" - l.92: "Afro American English" -- African-American - l.99: "untended bias" -- unintended bias (?) - l.159: "statistical functions Kamishima et al. (2012)" -- (Kamishima et al., 2012) - ll.243-5: "Finally, we generated our political datasets filtering from the Kaggle dataset holding only the tweets geolocated in the US, all the tweets with at least one of the hashtags present in the lists." -- this sentence should be rephrased - l.251: "the two datasets made up the corpus actually reflect" -- the two datasets that make up (?) the corpus - l.293: "proposional to its class" -- proportional - l.304: "since it is for all different with greater or lesser knowledge of syntax, machine learning [...]" -- please rephrase this sentence fragment - l.309: "led to the definition of the next." -- next one - l.310: "questions [...] depends on" -- depend - Appendix A, l.507: "states those constitute" -- that constitute Experimental design: - Sect. 3.4.3: The list of hashtags used to filter the tweets should be made available, maybe along with the supplementary material in Appendix A - Sect. 4.1 and the Blind/Inside-Out/Prejudice tests: in the rebuttal the authors responded to reviewers' comments by motivating the difficulties in getting a proper pool of annotators, but the main point in those comments didn't have to do with the number of annotators - which is more than reasonable - but with the sample size (i.e. the number of annotated items) used for the analysis, which is very small. With respect to this point, no action has apparently been taken. Also, for a more systematic description of the annotation process in that step, as well as in the creation of the BLM and US Election datasets, I recommend that the authors follow the scheme and guidelines proposed here: https://techpolicylab.uw.edu/wp-content/uploads/2021/11/Data_Statements_Guide_V2.pdf While this can be done in a concise way, I believe that addressing all the applicable points in these guidelines would help both the authors and the reader in getting a clearer and more exhaustive picture of the data at hand. Validity of the findings: - Sect.
4.2, l.340: It is not clear what kind of "gap" the authors refer to when commenting on the results in Table 2; more generally, the whole sentence should be rephrased - l. 354-5: the authors say that BERTbase is more prone to classify tweets as offensive in the Democratic and Republican datasets, but according to Fig. 3 it's the opposite (the HS tag is preferred, between HS and offensive language, especially in the Republican dataset) - Sect. 4.3 (and examples in Appendix C): the authors' response to reviewer #3 on this point should be made more explicit in the paper, as it would help the reader understand what actually motivated the perturbation analysis and why the authors picked precisely those examples. Additional comments: Despite the tentative improvements made to the revised version, the paper still needs further adjustments in order to be considered for full acceptance. Nonetheless, the work is of great interest to the community, as it sheds light on how and to what degree syntax can be helpful in HS detection. Therefore, I highly recommend that the authors address reviewers' comments more thoroughly.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: COMPOSITE LEARNING TRACKING CONTROL FOR UNDERACTUATED MARINE SURFACE VESSELS WITH OUTPUT CONSTRAINTS Review round: 1 Reviewer: 1
Basic reporting: This article addresses composite learning tracking control for underactuated marine surface vessels. In particular, the output constraints are addressed in the control design procedure. Underactuated ship motion control is a challenging problem. This article is well written and interesting. The composite learning technique is employed such that the control system has learning capability. It can be considered for acceptance after some revisions. Some comments are given as follows: 1. It seems that the mass parameters m_jj should be known, but some other parameters are unknown. Some remarks are needed on how these parameters are determined. 2. The output constraints should be explained from a practical viewpoint. 3. The ocean disturbances acting on the ships are assumed to be bounded and are then attenuated together with the NN approximation errors. This has some conservativeness. Disturbance estimation schemes have been provided, such as 10.1109/TITS.2021.3054177. The authors should mention this issue in the context or leave it for future research. Experimental design: No Validity of the findings: No Additional comments: No
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: COMPOSITE LEARNING TRACKING CONTROL FOR UNDERACTUATED MARINE SURFACE VESSELS WITH OUTPUT CONSTRAINTS Review round: 1 Reviewer: 2
Basic reporting: This paper develops a composite learning trajectory tracking control scheme for underactuated marine surface vehicles (MSVs) with unknown dynamics and time-varying disturbances under output constraints. The considered issue is interesting and the paper has an acceptable structure and presentation. However, the following comments need to be considered in the revision: 1. It is suggested to supplement more related works on composite adaptive control, so that the advantages of the composite learning control scheme can be clearly understood by readers. 2. How were the design parameters of the controller selected in the simulation? Please give some remarks about it. 3. Please add some explanation of how the MSV obtains its positions, orientations, velocities and angular velocities. 4. It is suggested to supplement a block diagram of the control architecture. Experimental design: 1. An extensive comparative study may help the authors to improve the paper presentation. 2. The format of the figures should be unified. Validity of the findings: The trajectory tracking control for underactuated marine surface vehicles subject to unknown dynamics and time-varying disturbances under output constraints is considered. The problem investigated is practical and valuable. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: COMPOSITE LEARNING TRACKING CONTROL FOR UNDERACTUATED MARINE SURFACE VESSELS WITH OUTPUT CONSTRAINTS Review round: 2 Reviewer: 1
Basic reporting: NA Experimental design: NA Validity of the findings: NA Additional comments: NA
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: COMPOSITE LEARNING TRACKING CONTROL FOR UNDERACTUATED MARINE SURFACE VESSELS WITH OUTPUT CONSTRAINTS Review round: 2 Reviewer: 2
Basic reporting: The article meets the PeerJ criteria and should be accepted as is. Experimental design: The article meets the PeerJ criteria and should be accepted as is. Validity of the findings: The article meets the PeerJ criteria and should be accepted as is. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DETECTING PHISHING WEBPAGES VIA HOMOLOGY ANALYSIS OF WEBPAGE STRUCTURE Review round: 1 Reviewer: 1
Basic reporting: - The problem statement should be included implicitly in the introduction section. There is no need to repeat it again in the Method section. - It is mentioned (in the abstract and other sections) that "the proposed method can accurately locate the family of phishing webpages and can detect phishing webpages efficiently". However, the proposed method is based on clustering, which has a high computational cost. Please justify how the proposed method is more efficient. - Is it possible to discuss the benefits of this kind of detection method compared to using ML methods? Experimental design: - The experimental design is described well. However, the discussion of detection time should include the clustering time, which is necessary to perform this method. Also, discuss the cost/time of generating the fingerprint. - Will the fingerprint be the same even if you use different websites in testing? When should it be updated? Will the detection rate be enhanced if more websites are used in training? Discuss this. Validity of the findings: - The validation (not evaluation) of the proposed method is not clear. Example: how do you ensure that the proposed fingerprint is valid? The authors compared the proposed method using different parameters (for the same method). Is it possible to compare the performance of the proposed method with other previous methods to show the enhancements obtained? Additional comments: - The best values in all tables should be highlighted (bold) to make the comparison easy.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DETECTING PHISHING WEBPAGES VIA HOMOLOGY ANALYSIS OF WEBPAGE STRUCTURE Review round: 1 Reviewer: 2
Basic reporting: Certain sections of the paper require major revision before the manuscript can be considered for publication. Experimental design: Necessary references to the existing techniques should be provided in the experimental results section. Moreover, a comparative analysis of the proposed technique needs to be carried out with other techniques proposed in the literature to validate the effectiveness of the proposed model. Validity of the findings: No comment Additional comments: Some sections of the paper require proofreading to fix grammatical and typographical errors.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DETECTING PHISHING WEBPAGES VIA HOMOLOGY ANALYSIS OF WEBPAGE STRUCTURE Review round: 2 Reviewer: 1
Basic reporting: No comment Experimental design: No comment Validity of the findings: No comment Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 1 Reviewer: 1
Basic reporting: The author developed a depth estimation algorithm using self-supervised attention mechanisms. By adopting ConvGRU layers, the networks can capture temporal information even better. Also, the proposed algorithm is thoroughly compared with state-of-the-art algorithms in various aspects. In the introduction section, the author organized the related works clearly, which is quite reasonable to me. Experimental design: 1. Table 1 contains too much information. Some of it is useful to some readers, but it appears to weaken the arguments. The author needs to simplify the table for more clarity. 2. If the author can upload the code to GitHub, then the previous comments can be removed. The author can provide detailed information on GitHub rather than describing many details of the networks in the paper. Validity of the findings: 1. Table 3 is not clearly explained. The author needs to specify which algorithms are based on single images. 2. The proposed method shows worse performance than some single-image baselines even though it uses more information. Then, what is the advantage of using the proposed algorithm beyond the performance? 3. How long does a single feedforward pass take? The author assumes that the proposed method has the potential to be used in autonomous driving environments. Then the algorithm should run in real time; otherwise the paper fails to show its validity. 4. What is the performance of a depth estimation algorithm without using temporal information? The author needs to show the comparison results in the ablation study, as the author states that using temporal information is the key to the algorithm. 5. In the conclusion section, the author compares the performance with Monodepth2, which does not seem to be in the right section. The author can move the statement to the discussion section for clarity. 6. In the conclusion section, the author mentioned the Frechet Inception Distance (FID) as an evaluation metric. What is the meaning of it? Can it be a valid metric for comparison? If it is, what is the purpose of selecting FID as a metric? Additional comments: This paper applies ConvGRU with a self-attention mechanism, which is quite novel to me. However, the results section fails to illustrate the validity of the proposed framework, as the advantage of the algorithm is not clearly described.
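For reference on the FID question raised in point 6, a minimal sketch of how FID is conventionally computed from Gaussian fits to two sets of Inception feature vectors, assuming hypothetical feature arrays feats_real and feats_gen (not data from the paper):

import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real, feats_gen):
    # fit a Gaussian (mean, covariance) to each set of feature vectors
    mu1, cov1 = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
    mu2, cov2 = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * covmean)

Because FID measures the distance between two feature distributions rather than per-pixel depth error, its suitability as a depth-estimation metric is exactly the kind of question the reviewer is asking the authors to justify.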
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 1 Reviewer: 2
Basic reporting: The writing overall is comprehensible, but the authors could improve it by better conforming to professional standards of expression. There are a few expressions throughout the manuscript that do not seem suitable for academic writing; "a bunch of" would be one example. The authors also have a mixed usage of British and American English, such as 'regularisation' and 'regularization', or color. I suggest that the authors proofread the manuscript and address such issues prior to the final submission. The literature review and introduction seem to provide sufficient background and context regarding the subject. While most figures are able to provide enough context to the readers and are placed well, the figure describing the ConvGRU block (Fig. 2) appears to be identical to that of a conventional GRU block. The difference between a ConvGRU block and a conventional GRU block is adequately described in lines 263 through 269, as well as Equations 8 through 11, hence Fig. 2 may be unnecessary as GRU blocks are typically well-known throughout the field. Experimental design: The problem formulation seems to be well defined, yet some sections fail to provide enough logical reasoning behind the selection of each method. While each of the methods separately is described with sufficient detail, it would be clearer for the readers if the authors outlined the overall methodology, in terms of presentation of the contents in the methodology section. This will also help with the transition from each subsection to another. The experimental details are well described and comprehensive enough for replication. Validity of the findings: The ablation study seems to provide a detailed analysis and justification for the models/methods used in this work. However, the authors should better emphasize the contributions of this paper, as the results generally seem to be less convincing to the potential audience. Although the results do suggest that the usage of temporal information can enhance the performance as the authors claim, the proposed methodology performs worse compared to most of the 'u-seq' models (which are in the same category), and the qualitative comparison of the view synthesis quality was done with a model from a different category. Despite the relative weakness in terms of RMSE compared to other models within the same category, conducting a qualitative comparison with one of the state-of-the-art methods and showing how similarly the proposed method can perform would help potential readers to be more persuaded. Another suggestion would be to point out and highlight any other advantages the proposed methodology has over the other methods within the same category to compensate for the weakness in performance. Additionally, for Fig. 5, what are the ranges of the error maps? While the proposed methodology does seem to have less intense values in the error map, it would be worth indicating the levels by having a colorbar next to the plots in J through O. Additional comments: No additional comments
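As an illustration of the colorbar suggestion, a minimal matplotlib sketch; the error-map array and value range here are hypothetical placeholders, and the key point is fixing vmin/vmax so all panels share one scale:

import numpy as np
import matplotlib.pyplot as plt

error_map = np.random.rand(192, 640) * 0.3  # placeholder for a per-pixel error map
fig, ax = plt.subplots()
im = ax.imshow(error_map, cmap='magma', vmin=0.0, vmax=0.3)  # same range across panels
fig.colorbar(im, ax=ax, label='per-pixel error')
plt.show()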
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 2 Reviewer: 1
Basic reporting: The responses of the author have resolved my questions. It could give potential readers a better understanding if some of the contents were reinforced. Experimental design: no comment Validity of the findings: 1. The author mentioned that 'supervised learning methods are very heavy and self-supervised learning methods are light.' This could be seen as a hasty generalization, as the computational burden varies with the structure and design of the networks. If there are any references that support your claim, then adding them in the introduction section could eliminate the concerns. 2. The captions (or y-axis) of figures 6 and 8 are mislabeled. The explanation of those two figures does not match the captions. The author should check the 'D, E, F' and 'G, H, I' parts. 3. The computational speed of the proposed algorithm is quite astonishing and applicable in real time, which could be considered an advantage of using your algorithm. It would be much better if this content were emphasized in the manuscript. Additional comments: The author mentioned that the code is included in the original paper, but it would be better to add a link to the code in the manuscript so that potential readers can refer to your methods easily.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 2 Reviewer: 2
Basic reporting: The writing overall has been improved, though the authors could proofread the manuscript carefully prior to the final submission. For example, in the first sentence of Section 4, the authors have written "we first our experimental setup", and it seems like there could be a missing verb. Again, it is recommended that the authors review the manuscript for any grammatical or spelling errors. Experimental design: No comment Validity of the findings: 1. Though the authors claim the contributions of the proposed methodology to be the utilization of new network structures and new training strategies, how are these findings reflected in terms of performance? As the authors describe in Section 5.2, the proposed model performs worse than most of the multi-frame and sequential approaches, possibly weakening the authors' claim that incorporating temporal information could be beneficial. Novelty alone may not be sufficient for the justification and necessity of this method, and the fact that the proposed method perhaps underperforms most of the state-of-the-art methods would be less convincing to potential readers. Thus it is highly recommended that the authors either (1) highlight any advantages the proposed method holds over other existing methods (why this method might be useful despite worse performance) or (2) enhance the performance of the model such that acceptable performance is achieved. 2. The qualitative comparison with Monodepth 2 does not help in convincing potential readers of the usefulness of the proposed method. One of the main contributions that the authors claim is that sequential/temporal information can be utilized; it would logically make more sense to have a comparison with another sequential model. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 3 Reviewer: 1
Basic reporting: No comment Experimental design: No comment Validity of the findings: No comment Additional comments: No comment
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: SELF-SUPERVISED RECURRENT DEPTH ESTIMATION WITH ATTENTION MECHANISMS Review round: 3 Reviewer: 2
Basic reporting: The authors seem to have removed all typos and grammatical errors. Experimental design: No comment Validity of the findings: The authors seem to have addressed the previous comments and questions in the revised manuscript and rebuttal form. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: OPTIMAL POWER ALLOCATION FOR A WIRELESS COOPERATIVE NETWORK WITH UAV Review round: 1 Reviewer: 1
Basic reporting: The paper studies the optimal power allocation strategy in wireless networks where the energy-constrained relay nodes harvest RF energy from the UAVs. The addressed problem is interesting and the proposed solution seems effective. However, there are some weaknesses in the paper that the authors must particularly pay attention to and handle: (1) (Line 461 on page 15) The format of the references is inconsistent; for example, the capitalization of reference article titles is inconsistent. (2) The logic and English usage of this paper are poor. The quality of written English should be improved. Experimental design: The paper refers to the need to find a suitable α to make a compromise between the outage probability and the throughput; is it necessary to give the appropriate α in the form of a simulation? Validity of the findings: no comment Additional comments: (1) (Line 2 on page 1) "UAV cooperative wireless DF relay network." What is DF? (2) The expression of the formulas in this paper contains errors, and the physical meaning of some symbols is unclear. (3) (Line 224 on page 7 and line 242 on page 8) Why are the second-level headings B and C the same?
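As a generic illustration of the α trade-off simulation raised above, a sketch of a grid sweep that picks the power-splitting ratio maximizing throughput subject to an outage ceiling; outage_probability and throughput here are hypothetical placeholders, not the paper's closed-form expressions:

import numpy as np

def outage_probability(alpha):   # placeholder for the paper's closed-form expression
    return 0.02 + 0.1 * (1 - alpha) ** 2

def throughput(alpha):           # placeholder for the paper's closed-form expression
    return (1 - alpha) * np.log2(1 + 10 * alpha)

alphas = np.linspace(0.01, 0.99, 99)
feasible = [a for a in alphas if outage_probability(a) <= 0.05]  # outage ceiling
best_alpha = max(feasible, key=throughput) if feasible else None
print(best_alpha)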
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: OPTIMAL POWER ALLOCATION FOR A WIRELESS COOPERATIVE NETWORK WITH UAV Review round: 1 Reviewer: 2
Basic reporting: Yes Experimental design: Yes Validity of the findings: Yes Additional comments: 1. In the system model, why choose the DF protocol instead of the AF protocol? Any justification? 2. Why a Rayleigh channel, especially when a LoS link exists? 3. The UAV itself has very limited energy, so it would be very impractical to use it as an energy source to power the relay node. The authors must provide substantial evidence to show the effectiveness. 4. What is the computational complexity of the solutions used to solve the identified optimization problems? 5. Please provide more baseline strategies in the simulation. 6. Presentation: (1) Undefined abbreviation in the abstract: "DF". (2) Too many typos and grammar problems. (3) Please put the figures at appropriate places instead of at the end of the manuscript. (4) Provide a table including all parameters to better help the reviewer understand.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: OPTIMAL POWER ALLOCATION FOR A WIRELESS COOPERATIVE NETWORK WITH UAV Review round: 2 Reviewer: 1
Basic reporting: Good Experimental design: Good Validity of the findings: Good Additional comments: Thanks for addressing my comments. Here, more justification is needed for the following questions: (1) Why a Rayleigh channel, especially when a LoS link exists? Given the LoS links, a Ricean channel would be more suitable than Rayleigh. After reading the authors' feedback, I am still not convinced about the use of the Rayleigh channel. (2) Instead of only text, could the authors provide some simulation results for the baseline strategy? (3) Please insert the table of all parameters somewhere in the paper instead of listing it only in the response letter.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE USE OF STATISTICAL AND MACHINE LEARNING TOOLS TO ACCURATELY QUANTIFY THE ENERGY PERFORMANCE OF RESIDENTIAL BUILDINGS Review round: 1 Reviewer: 1
Basic reporting: In the present work, the authors applied and compared two machine learning models for better prediction of heating load (HL) and cooling load using eight input variables, namely Building Size, Floor Height, Glazing Area, Wall Area, Window to Wall Ratio, Win Glazing U-value, Roof U-value, and the External Wall U-value. The proposed models were the multilayer perceptron neural network (MLPNN) model and the multiple linear regression (MLR) model. The two models were developed using more than 3840 building samples, and three modelling strategies were analyzed based on different splitting ratios: 70/30, 80/20, 90/10. Three performance metrics were used for model evaluation and comparison: the RMSE, MAE and R2. The investigation showed that the MLPNN was more accurate than the MLR, which is a logical finding. The focus of the paper is very interesting, worthy of investigation and relevant to the PeerJ journal. The necessary information for such a study was provided by the authors and clearly presented. The writing style is good and the reader can easily understand what the authors tried to explain and to demonstrate. I have no problem with the paper structure, except for some figures that I consider to add no value, as there are no comments or discussion about the information contained in these figures (5-11). I would like to follow the positive aspects of the paper by mentioning some weak points. Indeed, the results and discussion section still seems to be the weak part of the investigation; it has significantly contributed to degrading the overall paper quality, caused information losses, worsened understanding of the overall results of the investigation, and has not made the paper more readable. In conclusion, the paper needs in-depth amendments before it can be accepted. 1. A theoretical description of the MLPNN should be presented and clearly described. 2. In such a modelling study, a comparison between models having different input combinations should be conducted (at least six combinations). 3. The contribution of the input variables, calculated as a percent ratio, can help in better understanding the manuscript; see for example the connection weights approach (CW-AP) (Olden and Jackson 2002; Olden et al. 2004) and the Garson approach (G-AP) (Garson 1991). 4. The obtained results should be discussed in depth. 5. A scatterplot between measured and predicted data should be provided (only for the testing data). 6. The following figures are necessary: boxplot, violin plot, and Taylor diagram (only for the testing data). Experimental design: see comments above Validity of the findings: see comments above Additional comments: see comments above
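As an illustration of the variable-contribution analysis suggested in point 3, a minimal sketch of Garson's algorithm for a single-hidden-layer MLP; the weight matrices W (input-to-hidden) and v (hidden-to-output) are assumed inputs, not values from the paper:

import numpy as np

def garson_importance(W, v):
    # W: (n_inputs, n_hidden) input-to-hidden weights; v: (n_hidden,) hidden-to-output weights
    c = np.abs(W) * np.abs(v)             # contribution of input i through hidden unit j
    r = c / c.sum(axis=0, keepdims=True)  # share of each input within every hidden unit
    s = r.sum(axis=1)                     # total share per input
    return 100.0 * s / s.sum()            # relative importance in percent

The connection weights approach of Olden and Jackson differs mainly in using the signed products W * v summed over hidden units rather than normalized absolute shares.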
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE USE OF STATISTICAL AND MACHINE LEARNING TOOLS TO ACCURATELY QUANTIFY THE ENERGY PERFORMANCE OF RESIDENTIAL BUILDINGS Review round: 1 Reviewer: 2
Basic reporting: This paper presents a new building energy consumption dataset and investigates the impact of eight input variables on residential buildings' heating load (HL) and cooling load (CL), respectively. A variety of classical and non-parametric statistical analytic tools were used to find the input variables most strongly associated with each of the output variables. Then, using the performance measures Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and coefficient of determination (R2), two machine learning statistical methods to estimate HL and CL were compared: multiple linear regression (MLR) and the multilayer perceptron (MLP). The contribution of the paper is interesting and relevant to the building energy research community. The proposed solution and evaluation methodology are original. However, the manuscript can still be improved through the consideration of the following aspects: 1) The authors failed to motivate their proposed work in the Introduction. A paragraph should be added to the introduction highlighting the applications of building energy consumption datasets, such as energy prediction, energy disaggregation (or non-intrusive load monitoring), anomaly detection, fault diagnosis, etc. To that end, I suggest the following references when addressing this comment: - Identifying key determinants for building energy analysis from urban building datasets - Artificial intelligence based anomaly detection of energy consumption in buildings: A review, current trends and new perspectives - A hybrid data mining approach for anomaly detection and evaluation in residential buildings energy data - Predicting energy consumption in multiple buildings using machine learning for improving energy efficiency and sustainability - Robust event-based non-intrusive appliance recognition using multi-scale wavelet packet tree and ensemble bagging tree 2) The literature review is very terse and some new articles discussing building energy consumption datasets are missing, such as: - Building power consumption datasets: Survey, taxonomy and future directions - A novel approach for detecting anomalous energy consumption based on micro-moments and deep neural networks, where a new dataset, called the Qatar University dataset (QUD), has been presented. 3) The drawbacks and limitations of the proposed work should be discussed in the conclusion along with the most important findings. 4) The paper should be carefully proofread as there are some typos and grammatical issues. Experimental design: The experimental design is well discussed in the paper. The latter explains well the procedure used to gather data and validate the dataset with two machine learning models. Validity of the findings: More discussion is required to highlight the drawbacks and limitations of the proposed approach, and the future directions to overcome these issues. Additional comments: No additional comments
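For the three performance measures named above, a minimal sketch of how they are typically computed on a held-out set; y_true and y_pred are hypothetical HL (or CL) values and predictions, not data from the paper:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([15.2, 20.1, 32.4, 11.7])  # placeholder observed loads
y_pred = np.array([14.8, 21.0, 30.9, 12.3])  # placeholder model predictions
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(f"MAE={mae:.3f}, RMSE={rmse:.3f}, R2={r2:.3f}")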
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE USE OF STATISTICAL AND MACHINE LEARNING TOOLS TO ACCURATELY QUANTIFY THE ENERGY PERFORMANCE OF RESIDENTIAL BUILDINGS Review round: 1 Reviewer: 3
Basic reporting: The article is clear, but requires a slight correction of the English language. Literature references are sufficient. Results are supported by examples. Experimental design: The paper is an interesting approach in terms of forecasting energy consumption. Nevertheless, in methods based on machine learning, an essential element is the training and validation set. The effectiveness of forecasting, and the thesis formulated, depend mainly on the selection criteria, the amount of data, and the appropriate data selection. Therefore, the universality of the algorithm has some limitations. Validity of the findings: 1) I propose describing in more detail on what basis, and with what scope, the input data for the given research problem were selected. 2) The authors could refer to other machine learning methods and justify the choice of the methods presented. Additional comments: No additional comments
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE USE OF STATISTICAL AND MACHINE LEARNING TOOLS TO ACCURATELY QUANTIFY THE ENERGY PERFORMANCE OF RESIDENTIAL BUILDINGS Review round: 2 Reviewer: 1
Basic reporting: The authors have significantly improved the paper and the necessary revisions were correctly done. The paper is ready for publication; no further revision is necessary. Experimental design: Well conducted Validity of the findings: Well presented and discussed Additional comments: The authors have significantly improved the paper and the necessary revisions were correctly done. The paper is ready for publication; no further revision is necessary.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: THE USE OF STATISTICAL AND MACHINE LEARNING TOOLS TO ACCURATELY QUANTIFY THE ENERGY PERFORMANCE OF RESIDENTIAL BUILDINGS Review round: 2 Reviewer: 2
Basic reporting: The paper has been significantly improved after revision. Experimental design: The experimental design part has been improved after revision. Validity of the findings: The main findings of the paper are innovative. Additional comments: The authors have addressed all my comments; I have no further suggestions.
You are one of the reviewers, your task is to write a review for the article. You will be given the title of the article, the number of the round in which the article is located, and your order among the reviewers.
Title: DEEP LEARNING-BASED ELECTROCARDIOGRAM RHYTHM AND BEAT FEATURES FOR HEART ABNORMALITY CLASSIFICATION Review round: 1 Reviewer: 1
Basic reporting: The manuscript is well written, has good presentation, and thoroughly covers the literature. Experimental design: The manuscript should be clearer in stating the novelty of the proposed method versus the existing state-of-the-art. Also, the architecture of the proposed network should be more clearly presented. Validity of the findings: Results seem good, and the experiments use plenty of databases. However, the comparison with the state-of-the-art is flawed and does not suffice to assess the relative quality of the proposed method. The most promising literature approaches should be tested in the same conditions as the proposed method to directly compare their performance results. Additional comments: The exact contribution of this work is not very clear. Throughout the abstract and introduction, the authors state that the literature is composed of either rhythm- or beat-based methods, which leads me to think that this would be a hybrid method combining rhythm and beat features. However, both tasks are addressed separately, which was already studied in (Nurmaini 2020) and (Tutuko 2021). The introduction should state more clearly what the difference is between this and the previous works. Figure 3, depicting the CNN architecture, needs to be improved. As it stands, it seems there is an FC neuron (at the bottom) for each convolutional layer, and somehow information flows forward and backward through the model using those neurons. Also, it is not clear what the orange squares represent. Were the state-of-the-art methods (in Table 7) evaluated with the same datasets as the proposed method? And the same data train-test splits? If not, and since they even consider different numbers of target classes, the results are not really comparable. Hence, we do not have a robust way to assess if the proposed method is, in fact, better than the state-of-the-art. The authors should implement a couple of the most promising literature methods and test them in the same conditions as the proposed method. In Figure 6, since the AUC is very close to 1, perhaps it would be useful to use log-log scale axes for the ROC curve. Or, there could be a box zooming in on the [1.0, 1.0] area, so readers can see clearly the difference between the curves.
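As an illustration of the zoom-box suggestion for near-perfect ROC curves, a minimal matplotlib sketch; the fpr/tpr arrays are hypothetical placeholders, and the zoomed region can be shifted to whichever corner best separates the curves:

import numpy as np
import matplotlib.pyplot as plt

# hypothetical ROC points for two classifiers with AUC close to 1
fpr_a, tpr_a = np.array([0, 0.01, 0.05, 1]), np.array([0, 0.95, 0.99, 1])
fpr_b, tpr_b = np.array([0, 0.02, 0.08, 1]), np.array([0, 0.90, 0.97, 1])

fig, ax = plt.subplots()
ax.plot(fpr_a, tpr_a, label='model A')
ax.plot(fpr_b, tpr_b, label='model B')
ax.set_xlabel('False positive rate'); ax.set_ylabel('True positive rate'); ax.legend()

axins = ax.inset_axes([0.45, 0.15, 0.5, 0.5])  # inset placed inside the main axes
axins.plot(fpr_a, tpr_a); axins.plot(fpr_b, tpr_b)
axins.set_xlim(0.0, 0.1); axins.set_ylim(0.9, 1.0)  # region where the curves actually differ
ax.indicate_inset_zoom(axins, edgecolor='gray')
plt.show()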