AMSR / conferences_raw / midl20 / MIDL.io_2020_Conference_43Xl6lK2q8.json
{"forum": "43Xl6lK2q8", "submission_url": "https://openreview.net/forum?id=klRdXypxV", "submission_content": {"title": "Towards Multiple Enhancement Styles Generation in Mammography", "authors": ["Shilin Chen", "Ji He", "Jianhua Ma"], "authorids": ["shelena.chen@foxmail.com", "heji@smu.edu.cn", "jhma@smu.edu.cn"], "keywords": ["mammogram enhancement", "deep learning"], "TL;DR": "we present a deep learning (DL) framework to achieve multiple enhancement styles generation for mammogram enhancement.", "abstract": "Mammography is a well-established imaging modality for early detection and diagnosis of breast\ncancer. The raw detector-obtained mammograms are difficult for radiologists to diagnose due to the\nsimilarity between normal tissues and potential lesions in the attenuation level and thus mammogram\nenhancement (ME) is significantly necessary. However, the enhanced mammograms obtained\nwith different mammography devices can be diverse in visualization due to different enhancement\nalgorithms adopted in these mammography devices. Different styles of enhanced mammograms\ncan provide different information of breast tissue and lesion, which might help radiologists to\nscreen breast cancer better. In this paper, we present a deep learning (DL) framework to achieve\nmultiple enhancement styles generation for mammogram enhancement. The presented DL framework\nis denoted as DL-ME for simplicity. Specifically, the presented DL-ME is implemented with\na multi-scale cascaded residual convolutional neural network (MSC-ResNet), in which the output\nin the coarser scale is used as a part of inputs in the finer scale to achieve optimal ME performance.\nIn addition, a switch map is input into the DL-ME model to control the enhancement style of the\noutputs. To reveal the multiple enhancement styles generation ability of DL-ME for mammograms,\nclinical mammographic data from mammography devices of three different manufacturers are used\nin the work. 
The results show that the quality of the mammograms generated by our framework can\nreach the level required for clinical diagnosis, and enhanced mammograms with different styles can provide\nadditional information, which can help radiologists screen for breast cancer more efficiently.", "pdf": "/pdf/492adb4e70e88ce3bbc9fcd6d46fc7d6a6e50790.pdf", "track": "short paper", "paperhash": "chen|towards_multiple_enhancement_styles_generation_in_mammography", "paper_type": "methodological development", "_bibtex": "@misc{\nchen2020towards,\ntitle={Towards Multiple Enhancement Styles Generation in Mammography},\nauthor={Shilin Chen and Ji He and Jianhua Ma},\nyear={2020},\nurl={https://openreview.net/forum?id=klRdXypxV}\n}"}, "submission_cdate": 1579955621742, "submission_tcdate": 1579955621742, "submission_tmdate": 1587172156769, "submission_ddate": null, "review_id": ["6pasjKg1tV", "h4UrbfE6-l", "6Zme5F7Idr", "0exWDzZbL"], "review_url": ["https://openreview.net/forum?id=klRdXypxV&noteId=6pasjKg1tV", "https://openreview.net/forum?id=klRdXypxV&noteId=h4UrbfE6-l", "https://openreview.net/forum?id=klRdXypxV&noteId=6Zme5F7Idr", "https://openreview.net/forum?id=klRdXypxV&noteId=0exWDzZbL"], "review_cdate": [1584129868179, 1584100160603, 1584042099082, 1583355673557], "review_tcdate": [1584129868179, 1584100160603, 1584042099082, 1583355673557], "review_tmdate": [1585229783801, 1585229783286, 1585229782783, 1585229782276], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper7/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper7/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper7/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper7/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["43Xl6lK2q8", "43Xl6lK2q8", "43Xl6lK2q8", "43Xl6lK2q8"], "review_content": [{"title": "Interesting application. Lack of information about the training process and the dataset used to train the models", "review": "The paper presents a method to generate mammography images with styles from different acquisition devices (Hologic, Gioto, Anke). The main motivation is to adapt different images to the preference of the expert reader.\n\nThe idea is interesting but the paper lacks any information regarding the training process or the training dataset used to generate the transformation models. Also, the experiment with the two experts contains a small number of images and the distribution of the classes is not described. The results do not support the claim that using the style generation model to generate images matching the expert\u2019s preference improves the expert\u2019s performance.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting idea but very weak validation", "review": "Different mammogram vendors have different proprietary algorithms to post-process raw photon counts in digital mammography. This short paper proposes to learn to switch from one post-processing to another by learning such algorithms with a CNN. I find the idea interesting in some sense, but I have some doubts regarding both the technical side and the real impact of such a tool, if it were to be further developed.\n\nThe architecture is more or less well detailed, but no mention of any training details is made, nor is the data used to learn these transformations described. 
Did the authors have a database of raw mammograms and corresponding post-processed ones for each vendor? What was the resolution that their network admitted as input? Was the output a low-resolution mammogram that was then upscaled to the original resolution (which was probably quite large)? If that is the case, that would be quite concerning, as objects of interest in these scans can be of a very small size (micro-calcifications). \n\nRegarding validation of the technique, by just looking at a small image in Fig. 2 one cannot say anything about this technique. There is a brief comment about showing 10 mammograms to two experts, but the accuracy of these two experts on only ten mammograms is very weak evidence of the usefulness of this technique, more so considering that introducing this in a clinical workflow would lead to experts taking considerably more time to read a scan. On the other hand, at the very least I was expecting a numerical comparison between the output of the network and the reference mammogram (in terms of SSIM or other metrics commonly used in reconstruction or super-resolution papers).\n\nIn my opinion, this abstract lacks essential details to really understand the correctness and interest of the proposed technique, and the validation is too weak at this moment.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting approach to mammography processing", "review": "The problem addressed is how to map a raw mammogram (the data as measured) to an image suitable for viewing by the radiologist. This is by itself an important task. However, to my knowledge, the greatest difficulty radiologists have is when they have to judge priors made with a different processing, not so much the images themselves with a specific processing (although they do have a preference).\n\nFurthermore, what is lacking:\n- How is the model trained?\n- What vendor and machine were used specifically?\n- How is the ground truth obtained?\n- What image resolution was used, and does it generalize over different detectors?\n- The reader study is not convincing. What pathologies were present? Who were these expert readers? How does the model perform with calcified lesions or DCIS lesions?", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Method to generate multiple enhancement styles in mammography using deep learning.", "review": "The paper proposes a CNN-based method where a switch map is used to generate multiple enhancement styles from raw mammography images. The idea of the paper is simple and interesting. The paper lacks some details, such as how the switch map is generated, how the training of the network was done, and whether paired images from different mammography devices were used to train the network. 
The quantitative results are not convincing and details about the dataset are missing.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1585367816676, "meta_review_tcdate": 1585367816676, "meta_review_tmdate": 1585369717690, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper7 by AreaChair1", "meta_review_metareview": "All reviewers of the paper wrote more or less the same message. The idea is simple, interesting and useful. However, many details about the training and testing procedures are missing, in particular the machines used for image acquisition, how the ground truth was obtained, the image resolution used, and whether pathologies were present in the images. Moreover, the assessment based on two experts and a small number of images did not seem to convince the reviewers. Even though short papers do not need extensive validation, they must provide enough preliminary evidence to show that the proposed method has potential. Considering these issues, I agree with the reviewers on their weak reject rating.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper7/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=klRdXypxV&noteId=fO7cfgsuaFR"], "decision": "reject"}
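
Editorial note on the proposed architecture: the abstract above describes DL-ME only at a high level (a multi-scale cascaded residual CNN in which the coarser-scale output is fed into the finer scale, conditioned on a "switch map" that selects the target enhancement style), and, as the reviewers point out, the record contains no implementation or training details. The PyTorch sketch below is therefore only an illustrative reading of that description; the framework choice, layer widths, number of scales, and the way the switch map is concatenated with the raw image are all assumptions, not the authors' actual MSC-ResNet.

# Illustrative sketch only: the layer sizes, the number of scales, the framework (PyTorch),
# and the way the switch map is injected are assumptions based on the abstract's description
# of a multi-scale cascaded residual CNN conditioned on a one-hot "switch map".
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Plain residual block with two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))


class ScaleBranch(nn.Module):
    """One scale of the cascade: raw mammogram + switch map (+ coarser output) -> enhanced image."""
    def __init__(self, in_channels, features=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(features) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(features, 1, 3, padding=1)

    def forward(self, x):
        return self.tail(self.body(F.relu(self.head(x))))


class MSCResNetSketch(nn.Module):
    """Coarse-to-fine cascade: each coarser-scale output is upsampled and fed to the finer scale."""
    def __init__(self, n_styles=3, n_scales=3):
        super().__init__()
        self.n_scales = n_scales
        # Coarsest scale sees raw image + switch map; finer scales also see the upsampled coarser output.
        self.branches = nn.ModuleList(
            [ScaleBranch(1 + n_styles)] +
            [ScaleBranch(1 + n_styles + 1) for _ in range(n_scales - 1)]
        )

    def forward(self, raw, switch_map):
        # raw: (B, 1, H, W) raw detector mammogram; switch_map: (B, n_styles, H, W) one-hot style code.
        outputs = []
        for s in reversed(range(self.n_scales)):             # s = n_scales - 1 is the coarsest scale
            scale = 1 / (2 ** s)
            x = F.interpolate(raw, scale_factor=scale, mode="bilinear", align_corners=False)
            m = F.interpolate(switch_map, scale_factor=scale, mode="nearest")
            if outputs:                                       # inject the previous (coarser) result
                prev = F.interpolate(outputs[-1], scale_factor=2, mode="bilinear", align_corners=False)
                x = torch.cat([x, m, prev], dim=1)
            else:
                x = torch.cat([x, m], dim=1)
            outputs.append(self.branches[len(outputs)](x))
        return outputs[-1]                                    # finest-scale enhanced mammogram


if __name__ == "__main__":
    model = MSCResNetSketch(n_styles=3)
    raw = torch.randn(1, 1, 256, 256)                         # toy resolution; clinical mammograms are far larger
    style = torch.zeros(1, 3, 256, 256)
    style[:, 0] = 1.0                                         # select the first vendor's enhancement style
    print(model(raw, style).shape)                            # torch.Size([1, 1, 256, 256])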