Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
File size: 15,241 Bytes
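The row below is a single record from this dataset. A minimal sketch for reading such records, assuming the file is stored as JSONL (one JSON object per line); the filename `train.jsonl` is a placeholder, while the field names match the record shown:

```python
import json

# Placeholder filename; substitute the actual data file from this repository.
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Top-level fields visible in the record below.
        title = record["submission_content"]["title"]
        ratings = [review["rating"] for review in record["review_content"]]
        print(title, ratings, record["decision"])
```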
{"forum": "B1g5dgfee4", "submission_url": "https://openreview.net/forum?id=B1g5dgfee4", "submission_content": {"title": "Title", "authors": ["Authors"], "authorids": ["mail@gmail.com"], "keywords": [], "TL;DR": "Segmenting minimally invasive surgery task in sub-gestures using temporal convolutional neural network", "abstract": "abstract", "pdf": "/pdf/a5f2103d69c45e7bd9e18c55970886ac26e109a5.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "authors|title"}, "submission_cdate": 1544720497731, "submission_tcdate": 1544720497731, "submission_tmdate": 1562169083217, "submission_ddate": null, "review_id": ["HJeZEs_h7E", "BkepY4ti7V", "rkgBbSnDfV"], "review_url": ["https://openreview.net/forum?id=B1g5dgfee4¬eId=HJeZEs_h7E", "https://openreview.net/forum?id=B1g5dgfee4¬eId=BkepY4ti7V", "https://openreview.net/forum?id=B1g5dgfee4¬eId=rkgBbSnDfV"], "review_cdate": [1548679977424, 1548616837231, 1547318524747], "review_tcdate": [1548679977424, 1548616837231, 1547318524747], "review_tmdate": [1548856754495, 1548856744187, 1548856703983], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper69/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper69/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper69/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1g5dgfee4", "B1g5dgfee4", "B1g5dgfee4"], "review_content": [{"pros": "The authors present a temporal convolutional neural network for the segmentation of surgical tasks and validate it against the JIGSAWS dataset. The problem they present is of relevance and of current active research. For instance, several challenges have been held in the past years to assess the quality of state of the art methods addressing the same problem.\n\nBeing an active area of research, the field is quite rich in literature. For a conference paper, it is quite difficult to cover all the existing works. The authors have made a good effort to present a concise summary of relevant related work. \n\nThe obtained results present an accuracy higher than other methods from the state of the art. ", "cons": "- Clarity: \n** The authors do not deliver very well the message. After reading the paper, I find it difficult to establish what exactly is their contribution. From my understanding, they have used previously proposed network architectures for the task they aim to solve being their main contribution to add skip connection to an encoder/decoder architecture previously proposed by Lea et al, 2016. The authors should try to make this quite clear in the paper. \n** The authors claim that a second contribution is to add a parallel convolution layer (Fig 3) to the modified ED-TCN network (Figure 2). However, I have the impression that these two networks are nearly equivalent. The sole difference is an extra parallel convolution layer that starts at layer 2 and connects at the output of the encoder. \n** Could you please explain why you chose the filter sizes as such? It seems that you have replicated the architecture proposed by Lea et al. While this is a valid choice, it is important to justify why the very same network architecture works well for your problem.\n** In general, the methods should be better explained as to try to justify the different methodological choices (e.g. 
why you need to add skip layers, why the ED-TCN was not changed at all or whether experiments prove it works well as such, how is the kinematic data combined with the video data frames?). This would make the paper clearer.\n** The paper has numerous errors in the use of the English language. I recommend a careful review of it and/or having it proofread by a native speaker. Some common mistakes I have found:\n 1) Using \"an\" before a word starting with h. It should be \"a\".\n 2) Wrong use of verb tenses. Many times the authors use the tense for the third singular person when the noun is plural or the opposite. Examples: \"which would allows\", \"Measurements from the dataset includes\", and \"but it also increase\" (second case). \n 3) Using the singular form of a noun when the plural should be used. Examples: \"surgical task\", \"autonomous vehicle\", \"field\", \"block\", \"layer\", among others.\n 4) There are multiple cases where the wrong indefinite or definite article is used, or the article is missing.\n 5) results are not as high -> Good or accurate should be preferred.\n\n** The images from Figure 4 have been taken from the paper of Ahmidi et al., TBME 2017. Please give proper credit.\n\n- Quality of the evaluation:\n** Given that the benchmark from Ahmidi et al. uses the very same dataset, one would expect the authors to use the same setup and metrics used there. Is there any particular reason why the authors decided to exclude some of the metrics and experimental setup (leave-one-super-trial-out)?\n** Table 3: The work from Lea et al. 2016 (ED-TCN) has no reported accuracies. Is this an error?\n\n- Originality:\nAs previously mentioned, it is difficult to establish the original contributions of this paper. Currently, I consider its contributions mainly incremental, as the paper mainly reuses state-of-the-art work. I encourage the authors to re-structure their paper so that one can easily assess their unique contributions.\n\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "\nSummary: The authors present an approach for surgical activity segmentation using fully convolutional neural networks (FCNN), i.e. an hourglass architecture i) in its vanilla form, ii) with direct skip connections from the down-sampling to the up-sampling path, and iii) skip connections with incorporated convolution+pooling+normalization blocks.\nThe paper is written clearly. Methods, materials and validation are of sufficient quality. There are certain original aspects in this work (hourglass networks with skip connections, once direct and once with additional convolution operations), but overall, the novelty is limited. The evaluation is performed on the publicly available JHU-ISI JIGSAWS dataset, for which competitive methods and results are available.\n\nPros:\n- Good overview of related literature on action segmentation from kinematic data\n- Validation on JIGSAWS data is directly comparable to other methods in the literature.\n- Comparison of several hourglass architectures and kinematics representations in the experiments.", "cons": "\nRemaining questions / clarity:\n- While HMMs and RNN/LSTMs are designed to handle temporal sequences of varying length T, an FCNN architecture as proposed here requires a fixed-length input. 
The authors try different lengths in this work (10/20/.../50), but it is not clear 1) what the unit of this temporal window length is (10 seconds, or 10 samples of 76D kinematic vectors), 2) if 10 means \"10 samples\", at what framerate were the kinematics recorded, and how many seconds of kinematic data are covered by 10/20/.../50 samples?, 3) whether the inference was performed in a sliding-window fashion with striding, and what the striding factor was (dense sliding, or every n samples, or windows with 50% overlap)?\n- The comparison to other methods (Table 3) does not feature results for ED-TCN, but the authors could include this with little effort, by removing the encoder-to-decoder skip connections from the ED-TCN-Link network and re-training.\n\nCons:\n- The ED-TCN-Link network architecture is an hourglass network with skip connections from the down-sampling to the up-sampling layers. This idea is not novel though, and the resulting architecture is in principle identical to a 1D U-Net with summation instead of concatenation of feature maps in the up-sampling path [1]. Could the authors please discuss this similarity and explain whether and in which way their architecture is different from a 1D U-Net?\n- Three different kinematics representations were tested (All/Slave/PVG), as originally proposed by Lea et al. Results confirm the previous finding by Lea et al. that PVG performs better than the other two representations, but no further insight beyond this is gained from this experiment. For example, in future work, it could be more interesting to investigate whether more efficient latent representations of \"All\" can be achieved. One interesting direction could be, e.g., deep Bayesian state-space models [2].\n- The ED-TCN-ConvLink architecture is similar to ED-TCN-Link, but with convolutional and pooling layers put into the forward links. In the experiments, this architecture almost consistently performs worse than ED-TCN-Link. I can imagine that this is due to the incorporation of pooling (downsampling), and I would recommend trying to leave the pooling out and perform only convolution instead (the link needs to be summed into one higher layer in the up-sampling path though, right after the up-sampling layer, to match resolution). In U-Net and comparable architectures, horizontal links preserve spatial resolution and high-frequency features from the down-sampling path. Maybe the loss in accuracy is due to this loss in resolution.\n\n[1] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI. 2015;234\u201341.\n[2] Karl M., Soelch M., Bayer J., van der Smagt P., Deep Variational Bayes Filters: Unsupervised Learning of State Space Models From Raw Data, ICLR, 2017, https://arxiv.org/abs/1605.06432", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "Summary: This paper discusses an approach to automatically recognise surgical gestures from temporal kinematics data. The authors propose to extend an existing method (Lea et al. 2016) with skip connections and test on a newly available dataset (JIGSAWS).\n\n- The authors use a new dataset to test a convolutional neural network approach for action recognition\n- This work extends the work of Lea et al. 
2016 on action recognition by introducing skip connections\n- The presented results outperform previous work", "cons": "- My understanding of the JIGSAWS dataset is that it also comes with video data. Why hasn't this data been used as an additional source of rich information? Only kinematics data has been used for 1D segmentation.\n- What is the difference between a U-Net (Ronneberger 2015) and the proposed Lea et al. 2016 approach with skip connections? Wouldn't a 1D U-Net be better suited for this job, or do the same?\n- While the method is very straightforward and easy to understand, the paper is difficult to read, mainly because of language and grammar shortcomings.\n- The authors describe the problem as a dictionary recognition problem in 1D. My feeling is that methods from the domain of natural language processing would be promising for the targeted problem (1D high-dimensional features, dictionaries, grammars, etc.). Using a conventional convolutional segmentation approach might not be ideal for this class of problems.\n- There is a lot of white space, especially around the figures, that could have been used more efficiently.\n\nMinor points:\n- abstract: \" Automatic segmentation of continuous scene\" -> scenes\n- abstract: \"it is important that they understand the scene they are in, which imply the need to recognize action\". This sentence does not make any sense. Understanding a scene does not imply understanding actions; understanding actions usually requires understanding scenes...\n- abstract: \"specifically 1D Convolutional layer\" -> a layer? layers?\n- p1: \"it is crucial to be able to segment the scene in smaller segment\" ?? segments?\n- At this point I gave up on suggesting detailed language improvements. It feels like every sentence is grammatically wrong in the abstract and large parts of the remaining paper.\n- p3: 'an high level representation' -> 'a high level representation'\n- p4: \"ski connections\" -> \"skip connections\"\n- p7: \"This results could be because\" -> these results, this result...?\n- p7: \"which means their are not many\" -> their -> there\n- p.2: \"Unsupervised methods are what everyone is aiming for, however for now, the results are not as high as with supervised methods.\" -- what does this sentence contribute to the paper?\n- p10: \"Hochschreiter S. and Schmidhuber S. ...\" reference format is inconsistent with the other references.\n\nOverall, this paper describes a trivial extension of an existing approach. The paper seems to have been written in a rush and would need major revision, regarding both the presentation and the methodology. I would suggest condensing this paper and submitting it to ISBI as a 1-page abstract.", "rating": "1: strong reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["B1gMvaMySN", "B1eCMChZrV"], "comment_cdate": [1549901145892, 1550073366166], "comment_tcdate": [1549901145892, 1550073366166], "comment_tmdate": [1555945966545, 1555945958201], "comment_readers": [["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper69/AnonReviewer3", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper69/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "comments", "comment": "I appreciate the effort of the authors to answer the observations raised. 
\nGiven the number of concerns raised by the reviewers, I consider that the paper is in a state that would require a second revision before publication. As this is not an option for the conference, I would rather maintain my initial score.\nI would recommend that the authors work on the recommendations and consider re-submission to another conference (e.g. SPIE)."}, {"title": "comments", "comment": "I think the direction is interesting, but I would like to see evidence for the hopes and assumptions. Also, seeing the results from the HMM experiment would be insightful.\nI agree with R3 that a major revision, perhaps submitted to SPIE, would be necessary before publication, and I will maintain my initial score."}], "comment_replyto": ["SylrgMqSE4", "r1ldKHqH44"], "comment_url": ["https://openreview.net/forum?id=B1g5dgfee4&noteId=B1gMvaMySN", "https://openreview.net/forum?id=B1g5dgfee4&noteId=B1eCMChZrV"], "meta_review_cdate": 1551356617765, "meta_review_tcdate": 1551356617765, "meta_review_tmdate": 1551703167048, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers have all agreed on the value of the state-of-the-art review on action segmentation from kinematic data. Unfortunately, they have also identified several issues regarding the lack of clarity of the contribution, as well as language and presentation problems.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1g5dgfee4&noteId=SJlfRzLBUN"], "decision": "Reject"}
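Reviewers 2 and 3 both ask how the paper's skip-connected ED-TCN ("ED-TCN-Link") differs from a 1D U-Net that sums rather than concatenates encoder features. A minimal PyTorch sketch of that reading, for orientation only: the class name, layer widths, kernel sizes, and class count are illustrative guesses rather than the paper's configuration; only the 76-D kinematics input dimension comes from the reviews.

```python
import torch
import torch.nn as nn

class EDTCNLink(nn.Module):
    """Hypothetical 1D encoder-decoder TCN with additive skip connections.

    In effect a 1D U-Net that sums (rather than concatenates) encoder
    feature maps into the decoder, as Reviewer 2 points out. All sizes
    are illustrative, not taken from the paper under review.
    """

    def __init__(self, in_ch=76, c1=64, c2=96, n_classes=10):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(in_ch, c1, 25, padding=12), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(c1, c2, 25, padding=12), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.up = nn.Upsample(scale_factor=2)
        self.dec2 = nn.Sequential(nn.Conv1d(c2, c1, 25, padding=12), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv1d(c1, c1, 25, padding=12), nn.ReLU())
        self.head = nn.Conv1d(c1, n_classes, 1)  # per-frame class logits

    def forward(self, x):                 # x: (batch, 76, T), T divisible by 4
        e1 = self.enc1(x)                 # (B, c1, T)
        e2 = self.enc2(self.pool(e1))     # (B, c2, T/2)
        b = self.pool(e2)                 # (B, c2, T/4) bottleneck
        d2 = self.dec2(self.up(b) + e2)   # additive skip, as in a summing U-Net
        d1 = self.dec1(self.up(d2) + e1)  # second additive skip
        return self.head(d1)              # (B, n_classes, T)

x = torch.randn(2, 76, 100)               # 100 frames of 76-D kinematics
print(EDTCNLink()(x).shape)                # torch.Size([2, 10, 100])
```

Dropping the two `+ e*` summations in `forward` recovers the plain hourglass, which is essentially the retraining Reviewer 2 suggests for filling the missing ED-TCN row in Table 3.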