forum_id
stringlengths 10
10
| forum_title
stringlengths 5
188
| forum_authors
sequencelengths 0
98
| forum_abstract
stringlengths 3
4.69k
| forum_keywords
sequencelengths 0
29
| forum_pdf_url
stringlengths 40
40
| note_id
stringlengths 10
10
| note_type
stringclasses 3
values | note_created
int64 1,695B
1,737B
| note_replyto
stringlengths 10
10
| note_readers
sequencelengths 1
6
| note_signatures
sequencelengths 1
1
| note_text
stringlengths 14
30.1k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | cLUJZOD8Rd | review | 1,708,334,328,248 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_gphm"
] | title: Review
review: Summary: This paper trained the RoBERTa model to predict whether individuals discussing anxiety in their posts will subsequently express interest in ADHD. They showed that it shows high performance (76% correct), which can give insights into their comorbidity.
Comments:
1. It is unclear what the ability of the RoBERT model to classify the groups implies. There could be many ways that drawing some clinical insights from the model performance can go wrong or be indefinite. The examples are below
- Some symptoms of anxiety are also indicative of ADHD. The RoBERT model captures the terms related to the comorbidity. However, it still seems unclear if they are just associative or if they have a causal relationship.
- ADHD patients have some features in common in their posts (not related to disorder symptoms)
- Or it could just imply selection bias as the reddit is not prospective data.
I think the implication should be more clearly stated. Also, I think modification of the experimental design could be necessary.
2. “Social media such as Reddit provides publicly available text data of anonymous
first-person experiences (Low et al. 2020).” This sentence at the end of the first paragraph in the introduction section looks abrupt. The first paragraph is mainly about the problem of misdiagnosis of ADHD and anxiety, so I think this sentence on the data source of this study should be discussed in the next paragraph.
rating: 6
confidence: 4 |
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | JkaG5A9Kq2 | review | 1,708,530,332,973 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_sZzm"
] | title: Good research work done to evaluate efficacy of RoBERTa for a specific use case.
review: The paper provided clear understanding of problem statement, and necessary background information to understand the challenge faced by less accurate techniques for detecting co-morbid ADHD. The data used for training, however, is from a platform that is biased in representation of general population, which is also acknowledged by the authors, and also appropriately explained the limited scope of the application of the results discovered by the authors.
The quality of work is good and meets the expectation. I do not consider myself to be able to comment on the originality of the work, as I need more experience in the field to be fair in my evaluation, however, the work is fairly original in my opinion. Significance of study presented is that it provides the comparison of three different models in performing classification for the task at hand, and find a model that outperforms the other two by a significant margin.
Pro of the paper is that it has found a model that has significantly higher prediction accuracy over other models.
Con of the paper is that the data set is biased and the visualizations cannot be published to protect the patients.
However, the results of the study are significant enough to outweigh the cons. This paper deserves publication in the esteemed conference.
rating: 9
confidence: 4 |
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | Eygml3g16n | review | 1,708,668,198,293 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_LMhd"
] | title: The authors describe their goal to identify a proxy of comorbid ADHD through reddit posts in the Anxiety and ADHD subreddits. The object to find posters that initially post in the Anxiety then in the ADHD subreddit is solved much better with a finetuned RoBERTa than the baseline when using keywords.
review: Clarity: The authors clearly describe their goal of assessing two groups of reddit posters the posters in the Anxiety subreddit and that then post in the ADHD subreddit and the posters in the Anxiety that do not post in the ADHD subreddit.
Originality: the task of determining a proxy of possible comorbid ADHD is novel.
Significance: The link between the proxy of possible comorbid ADHD is not well discussed in the manuscript and therefore it is hard to say what impact this will have in the clinical world. However, it is an interesting approach to the use of foundation models within a ‘semi’-clinical setting.
Quality: the study quality appears good as the method, data and performance of the model has been clearly stated.
Major points
1. You point towards it yourself in the data collection section of the paper where you note that the reddit posts are not a clinical diagnosis. I think you must discuss the implications on the significance of your work.
2. Why did you choose to remove data from posters that posted in the ADHD subreddit within 6 months after their post in the Anxiety subreddit?
3. Clarify if you download posts from the ADHD subreddit. In the data preprocessing section, you state that you don’t use the posts and then that you use the posts from the ADHD subreddit.
4. You reference a figure 6 that is not present in the manuscript.
5. You state that you visualize the phrases leading to “will post in ADHD” or “will not post in ADHD” for any given post but this is not presented anywhere.
Minor points
1. In section Data Preprocessing is it correctly understood that the posters who posted anywhere else than the Anxiety or/and ADHD subreddits were removed from the dataset? This should be clearer.
2. You mention the base rate of the test set, but you provide no detail on the distribution of the training set or if you have done any weighted sampling or indeed how you sampled the test dataset.
Pros
• Well written and concise.
• Interesting subject sure to spark interest even for people who are not experts in psychiatric disorders.
• Interesting application of existing models to proxy ADHD comorbidity
• In the appendix of the paper the limitations section does a good job of explaining the cons of the study in non-bias manner.
Cons
• The ADHD comorbidity proxy is not well discussed.
• The authors state that they have visualizations multiple times where none are shown.
rating: 2
confidence: 4 |
uwxUbSmhmc | LLMs Pick Up Cues of Potential Comorbid ADHD in People Reporting Anxiety when Keywords Are Not Enough | [
"Michael Guerzhoy"
] | We present a novel task that can elucidate the connection between anxiety and ADHD; use Transformers to make progress
toward solving a task that is not solvable by keyword-based classifiers; and discuss a method for visualization of our classifier
illuminating the connection between anxiety and ADHD presentations.
Up to approximately 50\% of adults with ADHD may also have an anxiety disorder and approximately 30\% of adults with anxiety may also have ADHD. Patients presenting with anxiety may be
treated for anxiety without ADHD ever being considered, possibly affecting treatment. We show how data that bears on ADHD that is comorbid with anxiety can be obtained from social media data, and show that Transformers can be used to detect a proxy for possible comorbid ADHD in people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We identified posters who first started posting in the
Anxiety subreddit and later started posting in the ADHD subreddit as well. We use this subset of the posters as a proxy for
people who presented with anxiety symptoms and then became aware that they might have ADHD.
We fine-tune a Transformer architecture-based classifier to classify people who started posting in the Anxiety subreddit and then
started posting in the ADHD subreddit vs. people who posted in the Anxiety subreddit without later posting in the ADHD
subreddit.
We show that a Transformer architecture is capable of achieving reasonable results (76\% correct for RoBERTa vs.
under 60\% correct for the best keyword-based model, both with 50\% base rate).
Disclosure: this paper was accepted at CLPsych @ EACL with the title ``Detecting a Proxy for Potential Comorbid ADHD in People Reporting Anxiety Symptoms from Social Media Data" | [
"adhd",
"anxiety",
"reddit"
] | https://openreview.net/pdf?id=uwxUbSmhmc | DbDwlbaReI | review | 1,708,637,683,004 | uwxUbSmhmc | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission47/Reviewer_VnGH"
] | title: Online forum text based classification for a weak proxy task is better solved by RoBERTa as compared to keyword based models.
review: ## Paper summary
The task of predicting whether a reddit user who first starts posting on Anxiety subreddits will later also start posting on ADHD subreddits is used as a proxy for identifying people with anxiety who might also have ADHD. Classification performance of a fine-tuned RoBERTa model is presented which is shown to be better than keyword based baselines. Some explainability experiments are promised in the future.
### Strengths
1. Misdiagnosed comorbid ADHD is an important issue.
1. Data collection and processing is sound -- the 6 month gap in user postage history between their first post in ADHD subreddits and the data gathered from anxiety subreddits is a reasonable choice.
### Weaknesses
1. The authors have acknowledged the weaknesses and assumptions in the proxy task -- subreddit posting behavior is a very weak link to whether the user actually has a high risk of having comorbid ADHD. There seems to be no method for verifying the link between this proxy task and actual comorbid ADHD.
1. The practical benefits of how the proposed study can better enable diagnosis of comorbid ADHD in the future is not discussed.
### Feedback to authors
* In order to refine the ground truth for the proxy task, a Chat-GPT or equivalent LLM can be used to query whether the user believes that they have ADHD and/or anxiety from their posts alone. This may clean up the collected data significantly.
rating: 4
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | kAhWnnJc7p | review | 1,708,954,777,061 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_Apz6"
] | title: A simple paper with good evaluations of current SOTA models
review: An interesting benchmark with good reproducibility. Clear explanation and well written demonstrating how LLMS can be used to detect instances of race-based medicine.
Pros - Potentially an interesting read for a clinical audience who would like to know the feasibility of having such a model being run 'in clinical practise'.
Cons - This style of paper is very common (prompt crafting + evaluation) and can be criticised as not contributing anything novel to the field.
rating: 5
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | Q0Jej06fAV | review | 1,708,639,286,393 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_LYU7"
] | title: Detecting race-based LLM responses
review: The authors experimented with using LLM to detect whether LLM responses contain debunked race-based content. It's an important topic to explore and the results help us to better understand how current available LLMs may respond to race-based medical questions.
13 questions were used to generate the responses from LLMs. It's unclear whether the set of the questions are representative enough for the study. 11 out of the 13 questions contained direct mentions of races in the question themselves, while 2 questions were fairly general. There is no comparison between the two to see if there is significant difference. Since direct mentions of the races may be more likely to generate race-based responses, the result responses set may have significant higher race-based responses compared to real life clinical uses cases, which may bias the evaluation results.
The paper also didn't discuss on how the safe guard mechanism in proprietary and/or open source models may affect the model responses. In appendix, the author mentioned that MedPalm-2 simply rejects to answer some of the questions due to the content filter. This type of content filtering is a common practice and it may change constantly without any notice, especially for proprietary models, which puts a lot of uncertainty on the evaluation results.
rating: 6
confidence: 4 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | OCXbQXnaNd | review | 1,708,497,525,587 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_GZnZ"
] | title: Review of "Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models" - Needs revision
review: # Peer Review for the Manuscript: "Evaluating Large Language Models for Race-Based Medical Content"
## General Evaluation
The paper presents an interesting study on the use of Large Language Models (LLMs) for identifying and evaluating race-based content in medical contexts. This topic is both timely and relevant, given the increasing reliance on LLMs across various sectors, including healthcare. The authors have made a commendable effort to highlight the challenges and nuances associated with race-based content in LLM outputs.
## Specific Feedback
### Strengths
1. **Relevance and Novelty:** The study addresses a critical gap in the current understanding and evaluation of LLMs in handling sensitive and crucial topics such as impact of racial stereotypes in medical advice.
2. **Methodological Approach:** The structured comparison across different LLMs provide a solid basis for the study's findings.
### Areas for Improvement
1. **Typos and Clarifications:** The paper mentions "nine unique LLM-prompt combinations" which should correctly be "twelve unique combinations." Attention to such details is crucial for the accuracy of the paper.
2. **Consideration of Skin Tone Variability:** The use of a more comprehensive skin tone classification, such as the Monk Skin Tone Scale ([Monk Skin Tone Scale](https://skintone.google/)), would enrich the study by providing a more nuanced understanding of race as it pertains to medical content. The current set of 13 prompts is limited, it is predominately representing black, white and some asian race. Recommend the authors to formulate a more diverse prompts by considering more examples of race-stereotype combinations.
3. **Benchmark Evaluation:** The paper states that there is a lack of methods to evaluate harmful content regarding race. This is not true, major players in Generative AI space like OpenAI, Meta, google all of them have released trust and safety scorecards and responsible AI covering bias and stereotypes is a big focus. However, existing benchmarks and datasets could be explored for their representation of race-related medical data. The absence of this exploration is a missed opportunity to contextualize the study's findings within the broader research landscape.
4. **Physician Backgrounds:** The background of physicians involved in the original research, particularly their awareness of bias and civil rights, is not detailed. This information is crucial for understanding the potential biases in the study's setup and interpretation.
5. **Statistical Measures:** A clearer explanation of the statistical measures used (Sensitivity, Specificity, NPV, PPV, F1) would make the paper accessible to a broader audience, including those not familiar with these terms.
6. **Methodology Suggestion:** Given the limitations of zero-shot prompting in niche domains like race-related medical data, exploring few-shot prompting or fine-tuning the models might yield more accurate results.
### Recommendations for Further Research
The authors are encouraged to explore the representation of race-related bias in publicly available datasets and benchmarks, such as those hosted on platforms like Hugging Face and Stanford's CRFM. Investigating these resources could provide insights into the current state of race representation in LLM training data and benchmarks. Furthermore, the authors should consider building and open-sourcing a dataset specifically for evaluating race-related content in medical advice. This contribution would significantly benefit the research community by providing a specialized resource for further studies.
## Some benchmarks for reference
- [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
- [Stanford CRFM HELM Lite](https://crfm.stanford.edu/helm/lite/latest/)
- [Hugging Face Chatbot Arena Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
- [Artificial Analysis](https://artificialanalysis.ai/)
- [Martian Leaderboard](https://leaderboard.withmartian.com/)
- [Hugging Face Enterprise Scenarios Leaderboard](https://huggingface.co/spaces/PatronusAI/enterprise_scenarios_leaderboard)
## Conclusion
The paper "Evaluating Large Language Models for Race-Based Medical Content" contributes important insights into the evaluation of LLMs for sensitive content. With the recommended revisions and further exploration of the highlighted areas, this paper has the potential to significantly impact the field.
rating: 6
confidence: 3 |
sm9Udj2c6u | Feasibility of Automatically Detecting Practice of Race-Based Medicine by Large Language Models | [
"Akshay Swaminathan",
"Sid Salvi",
"Philip Chung",
"Alison Callahan",
"Suhana Bedi",
"Alyssa Unell",
"Mehr Kashyap",
"Roxana Daneshjou",
"Nigam Shah",
"Dev Dash"
] | One challenge in integrating large language models (LLMs) into clinical workflows is ensuring the appropriateness of generated content. This study develops an automated evaluation method to detect if LLM outputs contain debunked stereotypes that perpetuate race-based medicine. To develop a race-based medicine evaluator agent, we selected the top performing (F1) LLM-prompt combination among 4 LLMs (GPT-3.5, GPT-4, GPT-4-0125 and GPT-4-1106) and three prompts, using a physician-labeled dataset of 181 LLM responses as the gold standard. This evaluator agent was then used to assess 1300 responses from ten LLMs to 13 questions (10 iterations each) related to race-based medicine. Across the nine candidate LLMs, the percentage of LLM responses that did not contain debunked race-based content ranged from 22% in falcon-7b-instruct to 76% in claude-2. This study demonstrates the potential of LLM-powered agents to automate the detection of race-based medical content. | [
"large language models",
"evaluation",
"race-based medicine"
] | https://openreview.net/pdf?id=sm9Udj2c6u | 2PHRJ6DmmE | review | 1,708,803,195,014 | sm9Udj2c6u | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission36/Reviewer_8Ds8"
] | title: This paper investigates the capability of GPT-3.5/4 as an evaluator to detect potential race-related bias in medicine. Overall this is a good work studying important problems in medicine.
review: Quality: The evaluation is comprehensive and convincing. The authors first chose the best evaluator agent using dataset from Omiye et al. Then the evaluator was used to assess candidate responses from 10 LLMs across 13 questions. The authors also studied different combinations of prompts, and showed GPT-4 with simple prompts is the best evaluator.
Clarity: The paper is well structured and easy to follow.
Significance: Race-based beliefs in healthcare can be harmful and it constructs a major concerns for doctors to apply LLMs in clinical settings. Conclusion from this paper is important to guide doctors to choose the best LLMs in practice.
rating: 7
confidence: 4 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | xySHQnT9jt | review | 1,707,875,867,662 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_x5Nf"
] | title: Good pilot study focusing on caption hallucination of LVLMs. However, the manuscript needs further polish to be published.
review: The paper proposes two automatic metrics for evaluating LVLMs hallucination degree in the medical domain.
## Pros
- hallucination evaluation in the medical domain is an important research question. And the authors motivate it well.
- the proposed object hallucination and "domain knowledge hallucination" are relevant to the medical domain
## Cons
- The proposed "domain knowledge hallucination" indeed only focuses on the diagnosis. Other medical concepts such as procedures, medications, medical conditions are not included. To my understanding, it is more appropriate to use the term "diagnosis hallucination".
- The automatic evaluation is great for scaling. Meanwhile, it would be interesting to know whether these model-based metrics are really reliable (i.e. the model-derived evaluations themselves do not contain hallucination). I suggest adding some human evaluation to see (1) whether LLM-based NER is reliable; (2) whether cosine-similarity and threshold are reasonable.
- Only LLaVA-Med is evaluated, which weakens the argument presented.
- Clarification needed:
- how to combine cosine similarity and the ICD-10-based distance?
- On the second round inference, would the ground truth diagnosis be leaked to the LVLM in the enhanced prompt?
## Misc.
- Figure 2 is presented, but never mentioned in the text.
Overall, the proposed metrics are variants of existing metrics, with a focus on the medical domain.
The clarity can be improved to make the manuscript stronger. Certain human evaluations and LVLM evaluations are needed to thoroughly validate the technical designs.
rating: 4
confidence: 5 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | gsLxEkQGzl | review | 1,708,640,337,398 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_iiRS"
] | title: Review
review: Contribution:
The authors made two contributions in their work: 1) clear definition of hallucination within the medical field, breaking it down into two types: Object Hallucination, which involves incorrect or fabricated details about objects, and Domain Knowledge Hallucination, which pertains to inaccuracies in medical knowledge or practices. 2) they developed a new tool called Med-HVL, designed specifically for the medical domain, to detect and evaluate these hallucinations.
Pros:
Its an interesting glimpse into the bias of LLM pretrained into a large amount of medical data. A nice extension of the work would be to systematically compare and benchmark those hallucinations across multiple dataset and models.
A few remarks:
- in the figure, gt caption and gt observation are the same. Is there a gt caption without irrelevant info that would not be the gt observation? it would be nice to have a different example in the figure
- "Object" in this context seems incorrect and can be confused with support device. Maybe a more suitable term would be anatomical structures?
rating: 6
confidence: 3 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | RoLKKrDJED | review | 1,708,715,906,961 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_FCvE"
] | title: Relevant topic, interesting approach that requires some discussion
review: In this work, the authors address a critical issue of evaluating the hallucination of LVLMs in a clinical context. As the authors assert, developing methods to assess hallucinations of these models in a quantitative and automated way is important. For this, the authors employ the CHAIR metric used in image captioning and propose a new domain knowledge hallucination metric. The authors present an initial evaluation of LLaVA-Med using the metric on the MedICAT dataset.
Comments:
- The authors address an important issue regarding hallucination of LVLMs in a clinical context.
- The proposed metric seems reasonable, however, the fact that an additional LLM is used obtain ground truth object observations is questionable. LLMs themselves are not fully validated in terms of their performance, so a proper evaluation/validation study of this step is necessary
- For assessing object hallucination and domain knowledge, the authors utilize cosine similarity of embeddings with BioBERT. Again, although the method addresses scalability, careful validation of this method for assessment is needed.
- Minor: the figure seems to be assessing GPT-4V whereas the text is evaluating LLaVA-Med
Overall, the work addresses an important issue and presents a reasonable set of metrics for assessing hallucination. Although the study is preliminary, the work will garner relevant discussion, in particular regarding the usage of existing models (such as BioBERT and GPT-4) for assessing other LVLMs.
rating: 6
confidence: 4 |
rxx8leoPy0 | Med-HVL: Automatic Medical Domain Hallucination Evaluation for Large Vision-Language Models | [
"Qianqi Yan",
"Xuehai He",
"Xin Eric Wang"
] | Advancements in Large Vision-Language Models (LVLMs) have made significant progress in integrating visual and textual data. However, their deployment in the medical domain is impeded by critical issues of hallucinations, asking for reliable evaluation metrics and methods. We define two novel metrics: Object Hallucination and Domain Knowledge Hallucination, to quantify the hallucination of LVLMs in the medical domain. We propose a scalable, automated evaluation framework, Med-HVL, to assess and mitigate hallucinations at both object and domain knowledge levels. We reveal a significant presence of hallucinations in the LVLMs, emphasizing the need for domain-specific adaptations and finetuning to enhance their reliability for medical applications. | [
"Large Vision-Language Models",
"Hallucination",
"Medical"
] | https://openreview.net/pdf?id=rxx8leoPy0 | Ofu58Lav78 | review | 1,708,663,940,147 | rxx8leoPy0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission41/Reviewer_Bbeo"
] | title: This paper proposes two potentially useful metrics for evaluating medical LVLMs
review: This paper proposes two potentially useful metrics, CHAIR and DVH, for evaluating medical LVLMs. Overall this is a good metric proposal, although given the use of chest X-rays, some comparison to other metrics like RadGraph and CheXBert would be useful here.
rating: 7
confidence: 4 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | hS5Xh0Gd2e | review | 1,707,950,770,013 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_tM9p"
] | title: A nice study with impressive-looking results, but difficult to follow
review: This is an interesting demonstration of an application of foundation language models cost-effectively fine-tuned to a clinical task and performing impressively, especially compared to language models without fine-tuning.
However, it is hard to follow for someone like myself with machine learning but not medical expertise. For instance, I don't know what TNM or triple-negative mean, and my lack of familiarity with medical reports make the problem and the description of data preparation, which are important to understand the implications of the results, hard to understand. In the results, it is unclear what the difference is between the sample count indicated in the model column vs the samples column. If the former is the number of samples used for training, then the claim that low sample count is sufficient for training seems unsupported, as the accuracy is substantially higher with greater sample count. Also, if possible, it would be helpful if the results included some model trained only on the cancer data so we can tell how much is gained by starting with a foundation model. Further, what are the practical implications of these results? What would be the impact of deploying this model in particular?
A small discussion of prior work in fine-tuning foundation models for clinical tasks and the gap that this work fills could help contextualize this work and understand its contribution. Much of the last page is speculative, which is, I think, less valuable than further supporting the experimental study - the main contribution of this paper - with more details as mentioned above.
rating: 5
confidence: 2 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | XGExrsQ0a4 | review | 1,708,658,608,220 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_opkq"
] | title: Meaningful efforts to apply multiple language models to TNM Tumor classification from unstructured reports
review: The paper explored the application of LoRA fine-tuning of Mistral 7B Instruct to classify TNM staging from unstructured pathology reports for triple negative breast cancers. Several baselines and tuning strategies are compared on a real-world data task to demonstrate the proposed fine-tuning approach could achieve better accuracy using small amount of training data.
- Quality: the paper is technically sound and of good quality. Most claims in the paper are well supported by experiment results.
- Clarity: the paper is well-structured and clear in experiment process and results discussion, despite that the paper lacks clarity on the clinical knowledge of TNM categories (pls refer to the suggestion #1 below).
- Originality: the paper demonstrates an interesting application of well-established techniques to a specific clinical task. Though fine tuning is not a novel idea, the application to the specific TNM staging seems to be novel from practical perspective.
- Significance: the paper mentioned potential generalization possibility of the proposed approach other clinical tasks, given the low cost and high reliability with comparison to other large language models.
Cons, suggestions or questions to the authors:
1. It'll be better to include a brief introduction of TNM staging (e.g. T stands for size of tumor, etc.) at the beginning of the paper for non-clinical audience, and also why TNM staging of triple negative breast cancers is challenging and demands the help of ML modeling and application. It will also improve the significance of the work from the clinical application perspective.
2. The paper mentioned manual labeling from subject matter experts. Would you pls provide more information on 1) how many experts are involved, 2) whether each document was labeled by multiple experts and how the final label is determined (e.g. if there's any disagreement) . More importantly, could you pls comment how to ensure the reliability and robustness of the proposed fine tuning approach towards label noise in the training data?
3. In table 1, different fine-tuning results show increasing accuracy but decreasing in confidence for "UNKNOWN" class. Does it indicate over-fitting problem?
rating: 7
confidence: 3 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | IAUrX5O4o8 | review | 1,708,638,011,022 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_JW1i"
] | title: TNM staging using LoRA finetuning
review: The authors performed LoRA finetuning on top of Mistral 7B model to do TNM staging classification tasks on breast cancer pathology reports. The authors carefully curated a dataset of anoymized reports with labels from subject matter experts. The results look very promising. Only one foundational model was used in the finetuning and evaluation, so it is unclear whether there could be significant result difference among different foundation models with different sizes. It is also unclear how well the model is generalized, e.g. whether the quality and the format of the original reports may affect the final results, though the authors mentioned it in the future work section.
rating: 8
confidence: 3 |
rLVQmYYgJP | TNM Tumor Classification from Unstructured Breast Cancer Pathology Reports using LoRA Finetuning of Mistral 7B | [
"Kyle McCleary",
"James Ghawaly",
"Lucio Miele"
] | Over the past year, large language models have seen an explosion in usage, with researchers and companies rushing to discover new applications. This explosion was kick-started by OpenAI, with their release of GPT 3.5 and GPT 4 to the general public. These foundation models have proven extraordinarily capable on a wide range of tasks, but their cost and reliability present problems for more sensitive and/or resource-limited applications. Over the same time-span, however, we have also seen a rush of development in smaller foundation models, such as Mistral's 7B model, as well as in fine-tuning those models for specific tasks.
In this paper, we explore the application of Low-Rank Adaptation (LoRA) fine-tuning of small language models for performing TNM staging on unstructured pathology reports for triple negative breast cancer cases. We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field.
We found that performing TNM staging with reliable accuracy is possible for a small foundational model through fine-tuning, allowing fast and reliable automation of critical language processing tasks within medicine. | [
"clinical foundation models",
"large language models",
"mistral",
"tumor classification",
"low rank adaptation",
"fine-tuning"
] | https://openreview.net/pdf?id=rLVQmYYgJP | 8C01mOSnnV | review | 1,708,644,502,248 | rLVQmYYgJP | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission20/Reviewer_c7Sw"
] | title: An interesting application of LoRa but with questionable data augmentation/quality and limited evaluation
review: Summary
-- the paper applies LoRa fine-tuning of a Mistral model to perform TNM phenotyping using pathology reports.
Pros
-- they perform data augmentation using 200 real world pathology reports. They synthesize new reports by stripping relevant sentences from existing reports and replacing them with example sentences that were mapped to certain label classes a priori
-- the inclusion of an UNKNOWN label class in cases where the relevant information was missing
-- the use of JSON-enforced output
-- reporting of training time as well as performance
-- the ability of the model to cite the relevant information
-- the use of LoRa here is interesting
Cons
-- While the data augmentation method is creative, the methodology is not described clearly enough and the quality of the resulting data is not examined. For example, "These sentences were then injected into the template reports at randomly selected marker locations" -- does this mean that the data is being pulled from a finite list of sentences in the JSON? If so, this is clearly not sufficiently representative of the diversity of natural clinical language.
-- unclear how the path reports were labeled -- what exactly was being labeled, and what were the qualifications of the labelers?
-- how did you strip the report of all info relevant for TNM using a script? How did you validate the accuracy of this process?
-- unclear how many data examples were generated in total. Also not clear whether or not the resulting dataset was high quality. It sounds like you replaced the parts relevant to TNM with random TNM ratings to augment the dataset. Were the resulting pathology reports realistic? It's not obvious that this procedure would result in realistic pathology reports.
-- Some irrelevant text, eg. there is no Section 5
-- This is an interesting application of LoRa, but it's just one task. A more comprehensive evaluation across several tasks would be much more compelling.
-- "We also attempt to develop a more generalized approach, so that our work can be applied to other NLP tasks within the medical field" -- it's not clear where this was done
rating: 6
confidence: 5 |
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | kQuqb9zI8x | review | 1,708,637,964,498 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_aQeg"
] | title: Solid non-traditional track submission
review: This paper evaluates linguistic variation in speech data by age and gender using standard NLP tools: PoS using the Stanford Parser and the Penn Treebank tags adjusted for Singaporean English. The author goes on to derive a variety of linguistic features from this data, on which the analysis is performed. The results largely corroborate existing results related to the effect of age on linguistic variation -- particularly as it pertains to "lexical concreteness." The methodology appears to be sound, and the use of the Stanford Parser, in my opinion, constitutes foundation model usage and thus making it a relevant contribution to the workshop.
rating: 7
confidence: 2 |
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | cMomTZYNKL | review | 1,708,406,647,645 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_Zp34"
] | title: Nice paper utilizes NLP tools and psycholinguistic metrics to analyze natural speech data
review: The paper investigates the baseline language variations in normal aging to understand cognitive changes. It utilizes NLP tools and psycholinguistic metrics to analyze natural speech data, aiming to establish a normative benchmark for aging language, which could assist in assessing cognitive aging.
Pros: The paper used NLP tools to objectively analyze natural speech data, overcoming the subjectivity in manual assessments of language abilities, besides, it provided a detailed year-by-year analysis of linguistic characteristics influenced by age and sex.
Cons: However, there are a few limitations to this study: it excludes individuals older than 80 years and those with less than 13 years of education, which could omit valuable insights from these groups. While the approach reduces subjectivity compared to manual assessments, the choice and interpretation of psycholinguistic metrics could still introduce bias.
Overall, the study presents a step forward in understanding language variation in aging.
rating: 6
confidence: 4 |
rAw5ANMNZ2 | Modelling Lexical Characteristics of the Healthy Aging Population with a Natural Speech Dataset | [
"Han Kunmei"
] | Modelling baseline language variation in normal aging is important for our understanding of healthy aging. Large-language databases and NLP tools enable us to conduct automated quantitative analysis of natural language data. In this study, we aim to demonstrate that using NLP tools and psycholinguistic metrics to process natural language datasets can help to set a normative benchmark of aging language. The benchmark can be applied to the assessment of cognitive aging. | [
"NLP tools",
"psycholinguistic metrics",
"natural language",
"cognitive aging"
] | https://openreview.net/pdf?id=rAw5ANMNZ2 | 74mAXjj5OK | review | 1,708,639,968,776 | rAw5ANMNZ2 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission11/Reviewer_xRGg"
] | title: Good study focusing on the lexical characteristics of the healthy aging population
review: This paper presents a significant study focusing on the lexical characteristics of the healthy aging population using natural speech datasets and psycholinguistic metrics. The results reveal that parts of speech distribution vary with gender, while lexical concreteness correlates with age, contributing valuable information to the understanding of language variation in aging. The following points could be considered:
1. It would be beneficial to include a comparison with existing studies on younger populations or those with cognitive impairments.
2. Given that the study is based on Singaporean English speakers, how do you anticipate the findings to generalize to other English-speaking populations or languages?
3. How were the audio recordings standardized across participants to minimize environmental and technical variations?
rating: 7
confidence: 3 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | PrK6XrDyZ8 | review | 1,708,640,221,971 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_tRbZ"
] | title: Fine-tuning strategies evaluation
review: The authors evaluated the performance of different fine-tuning strategies on medical QA tasks. A variety of medical QA datasets were used in the evaluation. Full parameter fine-tuning vs. LoRA fine-tuning were used and compared. Zero-shot performance were used to evaluate the model performance.
Only two different size Llama-2 models were used as the base model. Some studies have shown that different base models may provide different level of improvement after fine-tuning, so it would be great to see the results from some other commonly available open source models such as mistral.
The results were not new and mostly expected. Similar studies have been conducted and the results of this study were generally consistent with some previous findings. The authors did include more QA datasets for the evaluation, which made the results more comprehensive.
rating: 7
confidence: 4 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | JgMQezCtwC | review | 1,708,326,299,953 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_txoL"
] | title: Comparison of LoRA and Full-Parameter Fine-Tuning LLMs for Medical Q&A
review: ## Summary
This paper investigates the effectiveness of using LoRA fine-tuning compared to full-parameter fine-tuning LLMs for adaptation to the healthcare domain. Authors report results for both smaller 7B Llama-2 as well as larger 70B Llama-2 fine-tuned models. They also compare against close-source state-of-the-art models like GPT-4 and Med-PaLM-2. The findings and described methods are a useful reference for healthcare machine learning practitioners who want to fine-tune a general-domain LLM for health-care tasks.
## Pros
* Evaluation is comprehensive across multiple datasets
* Evaluation is carefully done with decontamination pipeline
* Compares one of the most popular parameter efficient fine-tuning technique, LoRA, against full fine-tuning and state-of-the art models (GPT-4 and Med-PaLM) in healthcare domain
* Models are publicly released and available
* Datasets are publicly released and available
* Manuscript is well written and easy to follow
## Cons
* Methods section describes that LoRA may be applied to only attention layers vs. all layers. PE-FT results in Table 1 are for LoRA applied to all layers. It would be nice to also show the performance for LoRA applied to only attention layers since authors have mentioned this is a common approach. However, this is more of a "nice to have".
rating: 8
confidence: 3 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | 7ZjPKN5eJ2 | review | 1,708,495,871,373 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_WxPy"
] | title: Official Review for Evaluating Fine-Tuning Strategies for Medical LLMs
review: **Summary:**
The paper focuses on fine-tuning the 7B and 70B Llama-2 models on a dataset compiled from several open medical datasets, evaluating the performance gap between LoRA fine-tuning and full-parameter fine-tuning. Experiments conducted on several medical benchmarks led to good performance on some of the benchmarks, outperformed only by models trained at a larger scale (GPT-4) and models pre-trained on medical corpora (MedPaLM-2). To ensure a fair evaluation, the paper also introduces a decontamination pipeline to remove potential common samples between the training and the testing splits of the benchmarks.
**Strengths:**
1. The evaluation benchmark is thorough, encompassing a wide set of medical benchmarks, thus enabling a more in-depth analysis.
2. The dataset introduced by the authors seems fairly comprehensive and suitable for the clinical domain, and the performance obtained by the Llama models trained in the paper realistically substantiates this.
3. The focus on data decontamination for a fairer analysis by the authors is appreciated and makes their results more relevant.
4. The overall work presented in this paper is very relevant to the topic of the venue.
5. The elaborate (for a short paper) description of the hyperparameters to enable reproducibility is appreciated.
**Weaknesses:**
1. The theme of the paper revolves around parameter-efficient fine-tuning vs full-parameter fine-tuning. However, the claim that parameter-efficient fine-tuning achieves results close to full-parameter fine-tuning is a well-known research artifact. The authors themselves note that the results are in line with prior work for LoRA vs full-parameter fine-tuning in other domains. I would recommend the authors adjust the paper to better describe their main contributions towards the compilation of the training dataset from open medical sources and the data decontamination pipeline, with a lesser focus on parameter-efficient fine-tuning vs full-parameter fine-tuning.
2. I recommend the authors describe the instruction tuning methodology in greater detail in the main paper, if space permits, else in the appendix.
**Other recommendations:**
There is a typo in the caption for Table 1, where GPT-3.5 is incorrectly mentioned as GPT-3.4.
rating: 6
confidence: 4 |
oulcuR8Aub | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | [
"Clement Christophe",
"Praveenkumar Kanithi",
"Prateek Munjal",
"Tathagata Raha",
"Nasir Hayat",
"Ronnie Rajan",
"Ahmed Al Mahrooqi",
"Avani Gupta",
"Muhammad Umar Salman",
"Marco AF Pimentel",
"Shadab Khan",
"Boulbaba Ben Amor"
] | This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies -- full-parameter fine-tuning and parameter-efficient tuning -- within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question answering capabilities.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
Notably, our medical LLM showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs.
Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications. | [
"LLMs",
"Clinical",
"Fine-tuning",
"Evaluation"
] | https://openreview.net/pdf?id=oulcuR8Aub | 4lnYndKQhh | review | 1,708,642,079,810 | oulcuR8Aub | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission7/Reviewer_VxwM"
] | title: Review
review: Pros:
- The authors articulate a well-defined research question and address it with clarity and efficiency;
- There's a comprehensive evaluation conducted against various state-of-the-art large language models, both open-source and proprietary;
- The presentation of results is clear and straightforward.
Suggestion for improvement:
- It would be beneficial to explicitly indicate in Table 1 which backbone corresponds to PE-FT and FP-FT; I assume it's llama70b. Additionally, including a column for llama7b would give the complete picture.
- It would be insightful to detail the computational resources required, in terms of GPU hours and memory, for both PE and FP fine-tuning. Providing this comparison could offer a clearer understanding of the differences in resource intensity between the two methods.
rating: 7
confidence: 4 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | biyMJhHprq | review | 1,708,653,960,430 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_VttH"
] | title: This study presents high-quality research in the clinical domain, addressing challenges in building Clinical Large Language Models (CLaMs). It demonstrates originality by introducing novel models for structuring patient clinical data, with clear organization and thorough evaluation methods. The significance lies in its contribution to addressing the crucial problem of conforming clinical notes to international interoperability standards, showcasing superior performance compared to existing models.
review: ># Quality
The work is of high quality, as it demonstrates a thorough understanding of the clinical domain and the challenges of building a clinical large language model (CLaM). The authors justify their choice of foundation model clearly. They also address several modeling challenges, such as long context window, medical jargon, and abbreviation expansion. Their models are evaluated using both next-token prediction and blind pairwise comparison with other popular LLMs, showing superior performance in patient clinical data structuring.
># Clarity
The work is well-written and organized, with clear problem formulation, methods, results, and discussion. The authors provide sufficient details and explanations for their data collection, model training, and evaluation methods, including several appendices with examples of their models’ outputs, ethical considerations, and reproducibility statement.
># Originality
This work is indeed original as it introduces a novel family of CLaMs designed for patient clinical data structuring, a crucial yet intricate component of clinical workflows.
># Significance
This work is significant, as it addresses a critical problem of structuring clinical notes into clinical data, according to international interoperability standards.
># Pros
* This work tackles an important area of healthcare and has a lot of potential significance.
* The work evaluates the models using both next-token prediction and blind pairwise comparison with other LLMs
rating: 7
confidence: 5 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | bdIYXPms97 | review | 1,708,655,194,151 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_Acro"
] | title: While the innovative approach in addressing healthcare challenges is commendable, potential drawbacks, such as the risk of hallucination and dependency on training data, underscore the importance of recognizing limitations for a comprehensive evaluation.
review: The strengths of the paper include its emphasis on high quality and clarity, showcasing SoftTiger's superior performance compared to established models like GPT-3.5. The clear articulation of objectives and addressing challenges in healthcare workflows adds to the paper's credibility.
The originality and significance of SoftTiger's approach in tackling critical subtasks within healthcare are commendable, contributing to its innovative standing. The acknowledgment of both the advanced capabilities and scalability of SoftTiger, with two configurations catering to different research needs, adds to its appeal.
However, the potential challenges or cons associated with SoftTiger, such as the risk of hallucination due to the statistical nature of language models and the dependency on the volume and quality of training data. Recognizing these limitations is essential for a comprehensive evaluation.
rating: 7
confidence: 3 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | MlmmvLRdWK | review | 1,708,359,130,138 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_WXeH"
] | title: Outlines the finetuning of an open source LLM for clinical data structuring. However there are large methodolgical flaws and some base assumptions are unconvincing.
review: 1. Summary and contributions: Briefly summarize the paper and its contributions
Outlines the development of an LLM called SoftTiger. This is a finetuned version of an open-source LLM named TigerBot. This is achieved using supervised finetuning tuning from a dataset of general text, a previously released clinical dataset and a novel clinical workflow dataset. The novel dataset is made up of instruction pairs for 3 tasks performed on the MIMIC-IV dataset with the outputs generated by GPT-4 and validated by 5 physicians. The results show that the finetuned models gained accuracy on automated evaluation benchmarks.
2. Strengths: Describe the strengths of the work. Typical criteria include: soundness of the claims (theoretical grounding, empirical evaluation), significance and novelty of the contribution, and relevance to the community.
- It is an early example of finetuning large (70B) open-source LLMs across multiple GPUs.
- After carefully evaluating the trade-off between clinical complexity and helpfulness, 3 clinical data structuring tasks were chosen. This gives the work a clear potential for clinical impact.
- A very good section outlines the administrative burden on physicians.
- Making IPS or FHIR structure the output is optimal for potential future integration into current e-health systems
3. Weaknesses: Explain the limitations of this work along the same axes as above.
-MIMIC data user agreement prevents the sharing of the data or derivates with 3rd parties. Therefore, the SoftTiger and dataset should not be publicly released. They could be hosted on PhysioNet though.
- Similarly, MIMIC data should not be sent to 3rd party LLM providers as seems to have occurred in Table 5 unless via Azure or Amazon (see https://physionet.org/news/post/gpt-responsible-use). Please state clearly in text or “Ethical Considerations and Reproducibility Statement” if these services were used.
- Evaluation and training data only uses MIMIC-IV, which are discharge summary notes from the ICU department of a single health centre. This should be noted as a limitation.
- GPT-4 is used to produce the clinical training and evaluation set. This is then corrected by clinical review. No mention of the performance of GPT-4 on the task is made or the inter-annotator agreement. Furthermore, justification (most likely from a data governance perspective) is given on why GPT-4 cannot be used for this task directly if it can produce the labels for the task.
4. Correctness: Are the claims and method correct? Is the empirical methodology correct?
- In the introduction, it is claimed the 2 primary challenges for LLM clinical adaptation are finding a ‘helpful clinical task’ and input length constraint. I do not believe this to be true. Numerous clinical tasks could be performed by LLMs, e.g. diagnoses, discharge summary writing, reporting of adverse drug events etc... Barriers such as effective and safe evaluation, data privacy and governance, and integration into healthcare providers’ electronic systems would seem equal if not greater to this constraint.
The second constraint of input length is notable but only for open-source models (closed-source models have context lengths >100k), a distinction that is not made. However, the trained model is only extended to 8k and claimed as a source of novelty. Current open-source models such as mistral have been trained with an 8k context window.
- Claimed that note length usually follows power law without proof or citation
- The approach is claimed to be “light-weight” but requires 64xA100s GPUs
- It is claimed that as TigerBot has a larger vocabulary size than Llama-2, it has a larger clinical vocabulary. However, as TigerBot is multilingual this claim only seems true if evaluating on multilingual data also. The claim that TigerBot has a greater English clinical vocabulary needs further explanation or proof.
- It is not obvious that the addition of the general-purpose or Asclepius datasets will improve performance on the clinical workflow tasks.
- “Dictionary of abbreviation expansion to standardize abbreviations” is known not to work due to the redundancy of terms. For example, “hr” could be expanded to hour or heart rate, depending on the context.
5. Clarity: Is the paper well written?
- “We then evaluate TigerBot and Llama-2 chat models using next-token prediction.” It is not clear to me how this evaluation works. Is this exact matching? Further explanation is required.
- Not clear how Llama-2 and TigerBot were extended from 4k to 8k inputs.
- It is claimed that “it is beneficial for worldwide adoption to build multilingual models”, which is true. But it is not clear if the fine-tuning dataset is also multilingual.
- Fig.1 shows some very helpful information, but it is not clear which task is related to which plot point due to the use of repeated colours. Furthermore, the Fibonacci scale is not explained.
- Not clear if, in normal practice, discharge summaries are the only source of information used to complete the 3 subtasks trained and evaluated in this work.
- Figure 2 is a direct screenshot from tensorboard or similar. Removal of the UI buttons, and adding full and axis titles would improve this figure. Not clear what the faint lines are in the figure
- Table 3 should be moved higher up the training data section and would be more instructive swap the size column for number of examples in each dataset.
- Figure 3’s final column is all 0% and so does not need to be included. Moreover, the information may be more helpfully presented as a table of min, median, max for input, output and total
6. Relation to prior work: Is it clearly discussed how this work differs from previous contributions?
- This is the first open-source LLM finetuning to output on FHIR IPS, FHIR Clinical Impression and FHIR Encounter from medical discharge summaries.
7. Reproducibility: Are there enough details to reproduce the major results of this work?
- The number, speciality, nationality, and seniority of clinicians surveyed to produce Fig 1 is not stated
- Would be useful to link or add in the appendices the exact FHIR structures of the 3 subtasks.
- The training framework section is limited. No training hyperparameters are given. The acronyms PP and DP are used without explanation.
- The settings, prompt, and model version used to generate the clinical workflow dataset using GPT-4 are not stated
rating: 3
confidence: 3 |
mKPbcJAb83 | SoftTiger: A Clinical Foundation Model for Healthcare Workflows | [
"Ye Chen",
"Igor Couto",
"Wei Cai",
"CONG FU",
"Bruno Dorneles"
] | We introduce SoftTiger, a clinical large language model (CLaM) designed as a foundation model for healthcare workflows. The narrative and unstructured nature of clinical notes is a major obstacle for healthcare intelligentization. We address a critical problem of structuring clinical notes into clinical data, according to international interoperability standards. We collect and annotate data for three subtasks, namely, international patient summary, clinical impression and medical encounter. We then supervised fine-tuned a state-of-the-art LLM using public and credentialed clinical data. The training is orchestrated in a way that the target model can first support basic clinical tasks such as abbreviation expansion and temporal information extraction, and then learn to perform more complex downstream clinical tasks. Moreover, we address several modeling challenges in the healthcare context, e.g., extra long context window. Our blind pairwise evaluation shows that SoftTiger outperforms other popular open-source models and GPT-3.5, comparable to Gemini-pro, with a mild gap from GPT-4. We believe that LLMs may become a step-stone towards healthcare digitalization and democratization. Therefore, we publicly release SoftTiger models at scales of 13 billion and 70 billion parameters, as well as datasets and code for our innovative scalable evaluation, hopefully, making a significant contribution to the healthcare industry. | [
"Large Language Model",
"Clinical Large Language Models",
"Clinical notes",
"International patient summary"
] | https://openreview.net/pdf?id=mKPbcJAb83 | 1hoM5YBilR | review | 1,708,688,275,493 | mKPbcJAb83 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission15/Reviewer_JAru"
] | title: SoftTiger review
review: This ambitious paper fine-tuning a large SOTA LLM for structuring clinical notes capable of handling long context windows and rigorously evaluates it to other commercial/open models using a chatbot arena + LLM-as-judge setup.
The background is well set with both a survey of clinical LLM tasks from arXiv and huggingface and an empirical investigation of context windows required for clinical notes in MIMIC-IV.
They chose a TigerBot base model as it is multilingual and demonstrated superior performance to Llama 2-70b chat with accuracy of next token prediction on 8k contexts. Whilst next token prediction isn’t a particularly useful task they justify it as suitable for rapid decision making and early exploration. The poor performance of Llama 2 70b on 8k vs 4k tokens is hypothesised due to under representativeness of clinical vocabulary leading to worsened hallucination - this seems possible, but I can’t believe that the justification that TigerBot was trained on arXiv which has 1.2% biomedical content explains the difference, I’m sure this was part of Llama training too!
The training data is constructed using clinical notes from MIMIC-IV processed using GPT-4. It’s good to see (trained!) expert evaluation of the training data. The ethical provisioning for this needs to be mentioned as PhysioNet does not permit use of OpenAI APIs! This is not mentioned despite the extensive ethics appendix.
Given the importance of training data it would be useful to have some additional details of how GPT-4 was used to restructure data for the three extraction tasks. There are some additional training datasets such as a ‘previously unseen corpus’ of general-purpose SFT data which needs clarifying. The use of ASclepius for basic clinical tasks like NER and abbreviation seems sensible but despite training for abbreviations they use a medical dictionary at both training and inference?
They structure training from general purpose to basic tasks (ner/abbreviations) to hard tasks (summarisation) but it would be nice to see some experimental results or referenced justification for why they did this. Otherwise this really follows the general LLM -> domain specific fine-tuning paradigm so I’m not sure constitutes a novel strategy in and of itself?
The technical details of training framework are clear and impressive.
Evaluation is rigorous with comparison with range of commercial and open-source (GPT3.5/4, Gemini Pro, llama2, mixtral). The use of ChatBot Arena for blind pairwise evaluation includes control groups with intentionally wrong information and swap-position executions. They use GPT-4 as a judge, supplying the evaluation prompt which is thorough, however despite using 5 domain experts to review training data I can’t see any expert review of the final evaluation data or the LLM as judge strategy? This appears to me to be the biggest weakness in an otherwise strong paper and is perhaps time related?
Overall this is a really ambitious and thorough piece of work which makes both an interesting contribution to the research literature as well as the open source community through release of trained models, datasets and evaluation code.
rating: 9
confidence: 4 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, primarily limited to laboratory settings due to it relies on specialized equipment and technical expertise, which prevents the clinicians' access to the crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | nxtpEUdFEU | review | 1,708,669,314,062 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_h7Xr"
] | title: The paper is interesting but there are no results presented in the paper and furthermore the paper does not include a foundation model or the use of one
review: The paper doesn’t fit the scope of the symposium as it is not a foundation model but rather a supervised learned model.
Clarity: The introduction of the paper is written in clear concise language. When explaining the methods of the paper, however, the language is unclear.
Significance: the work does not appear to be significant within the field of clinical foundation models as it has been trained in what appears to be a supervised manner.
Quality: the quality of the study is subpar with bad figured and poor explanation of methods.
Major points
1. Figure 1 is basically unreadable; the figure should be bigger.
2. Figures 3 and 4 are also way too small especially the text. Without a better explanation of time steps they are not very informative.
3. The sampling frequency specified, but the idea of a timestep is not.
4. The method is not clearly described.
a. Explain training objective.
b. Explain network structure.
5. Present results
Minor points
6. In section 2.1 replace 12 with twelwe.
7. Remove Average self selected walking speed equation 1 to explanation of equation below. Use m (meters) for meters an s (seconds) for time.
Pros
• New data was aquired for this study.
Cons
• The study does not include the creation of a foundation model to solve any task. Instead, a supervised model has been developed for assessing the
• RNN-LSTM seems like a somewhat outdated deep learning model architecture to implement as state of the art.
• Only 24 subjects were used to train and evaluate a deep learning model, although that is fine for a preliminary study.
• There are no results presented regarding the performance of the model.
• It appears that there was no validation dataset, leading me to question how the model was deemed to have trained for long enough.
• Figures are very bad.
rating: 3
confidence: 3 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, primarily limited to laboratory settings due to it relies on specialized equipment and technical expertise, which prevents the clinicians' access to the crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | aM5Dkg70Mj | review | 1,708,620,257,011 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_xo9v"
] | title: A Review of the Innovative Approach to Predicting Knee Moment Dynamics
review: This paper proposes an innovative approach to predicting the dynamic knee moment using wearable sensors, AI, and ML algorithms. The author provides a detailed description of the model structure, training process, and explanatory analysis facilitated by the XAI tool. Here are some review comments:
Methodology: It is strongly recommended that the author explains the physical meanings of each symbol and letter in the formulas on the right side of the second page, as well as the distinctions and connections between the second and third lines of the formulas. Additionally, is the sample size of only 24 participants a bit small?
Results and Discussion: The paper's explanatory analysis of model predictions, particularly using the XAI tool, is very interesting. It is suggested that the author delves deeper into analyzing the model's performance, limitations, and future directions. For instance, a discussion on the model's adaptability to different types of patients or varying environmental conditions would be valuable.
Figures: Figures 3, and 4, along with their respective explanations, are crucial for readers to understand the research results. However, it is advised that the author provides more detailed explanations in the captions and legends of the figures to ensure readers accurately comprehend the information presented in the charts.
Overall, this paper presents a promising study demonstrating how the integration of wearable technology and AI/ML algorithms can predict the dynamic knee moment. Through explanatory analysis and detailed discussions on model performance, the paper has the potential to further strengthen its contributions and practical applications.
rating: 7
confidence: 4 |
lWsDWnre2l | Striding into Clarity: Wearable Sensor-Driven Estimation of Knee Adduction Moment, Unveiling the Black Box with Sequence-Based Neural Networks and Explainable Artificial Intelligence | [
"Jasmine Liang"
] | Knee adduction moment during walking has been reported as a sensitive biomechanical marker for predicting the risk of knee osteoarthritis. The traditional method of estimating the knee adduction moment relies on the inverse dynamics approach, primarily limited to laboratory settings due to it relies on specialized equipment and technical expertise, which prevents the clinicians' access to the crucial data. Our study employs wearable sensor technology integrated with advanced Artificial Intelligence and Machine Learning algorithms to predict knee moment outcomes with high accuracy. By analyzing attention weight trends, we establish a significant correlation with knee moment dynamics, validating the reliability of our predictive model. This alignment underscores the biomechanical relevance of our approach, offering promising implications for personalized patient care and clinical practice. | [
"knee adduction moment",
"knee osteoarthritis",
"gait",
"recurrent neural network",
"Long Short-Term Memory",
"wearable sensor",
"motion capture system",
"Explainable AI\u0000"
] | https://openreview.net/pdf?id=lWsDWnre2l | 5BsYI2J1Rq | review | 1,708,713,501,463 | lWsDWnre2l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission46/Reviewer_ozX1"
] | title: Promising results, but needs more clarity in the significance
review: In this work, the authors present a preliminary study for estimating knee adduction moments from data acquired from accelerometers. The authors employ an LSTM-based network that includes an attention layer architecture. Over a cohort of 24 participants, 12 male and 12 female, whole-body motion was captured along with data from two IMU sensors. In addition to evaluating the prediction accuracy provided by the model, the authors analyze the attention weight and an XAI technique, LIME, to gain insights into the prediction made by the model.
This work demonstrates a solid preliminary study into the feasibility of predicting knee adduction moments using IMU data. However, there are several points that could be addressed:
1. There is no report of the overall performance of the model. It seems like the reported example is for a single subject, but there is no report of the evaluation over the entire cohort
2. There is no comparison to other simpler models that would demonstrate the performance improvement. Although this is understandable for a preliminary feasibility study, the overall statistics would be important.
3. The authors report an 80/20 split, but was this split over the participants? Details such as this should be included for a better assessment of the results.
4. Although the authors admit the limitations of the analysis of explainability, how the explainability relates to the trustworthiness of the model is less clear. Moreover, the approach does not seem particularly innovative, as it applies existing approaches, nor does it provide significant insights into the model and the broader field of biomechanics.
Overall, the results are promising but would recommend another iteration of the manuscript before resubmission.
Additional minor comments:
- ASSWS is not explained in the paper
- The manuscript is over the 2-page limit for the non-traditional track
rating: 4
confidence: 4 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Genera-tive Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foun-dation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demon-strates promising results. After pre-training on our exten-sive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform gen-erative tasks such as signal denoising effectively, with-out the need for further finetuning. This success is at-tributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | q8549HIxOl | review | 1,708,541,868,043 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_fEni"
] | title: Interesting work
review: This paper applies the Generative Pre-trained Transformer (GPT) model for photoplethysmography (PPG) signals. This is interesting and valuable. The point of using a logit-Laplace loss, instead of a MSE loss to train the model is also very insightful. My main concern about this paper is how can we make sure the converge when only using 5% of the data. We know transformers are data-hunger architectures. What is the limitation of such a model? As the paper mentioned in the future they will train the model using more data, I assume we will get the answer later. In general, this is an interesting and valuable paper for PPG-related tasks.
rating: 7
confidence: 3 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Genera-tive Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foun-dation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demon-strates promising results. After pre-training on our exten-sive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform gen-erative tasks such as signal denoising effectively, with-out the need for further finetuning. This success is at-tributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | O6qHckRvB3 | review | 1,708,448,812,426 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_gmkD"
] | title: Applying foundation model to an important domain.
review: Strength:
- This work develops PPG foundation models using a decoder-only transformer
- loss function, embedding, and linear head are specifically designed to fit PPG applications.
- Results in Table 1 show promising results
Weakness:
- Table 1 is a bit hard to read. Different performance metrics (MAE, F1, false alarm rates) are included. Please consider separating them.
- BP-SBP is not introduced. Also, why is 9.56 highlighted?
- In the conclusion section, it is claimed that the foundation model can be used for downstream tasks without further fine-tuning. I am not sure which experiments can support this claim.
rating: 7
confidence: 2 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Genera-tive Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foun-dation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demon-strates promising results. After pre-training on our exten-sive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform gen-erative tasks such as signal denoising effectively, with-out the need for further finetuning. This success is at-tributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | MOJMrdN4ix | review | 1,708,536,141,126 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_MUNY"
] | title: GPT-PPG foundation model
review: Summary of Contributions:
The work proposes a Foundation model for (as the title suggests) “assessing diverse physiological functions, using only photoplethysmography signals”. The authors encode vast numbers of 30s of PPG samples @40Hz and train a GPT like foundation model with the next sample prediction pre-training objective. Then they fine-tune this model for solving downstream tasks such as heart rate estimation, atrial fibrillation detection, blood pressure estimation, and detecting false arrhythmia alarms.
Strengths:
1. The paper is well written and easy to follow. The block diagram is representative of the method presented in the paper.
2. The proposed method is sound and it stands to reason that with the increase in the model parameters and the training set, more emergent behaviour should be witnessed.
3. The qualitative results in the appendix are quite impressive.
Weaknesses:
1. Details such as number of model parameters, training time, GPUs used, etc. are missing from the paper. They often provide some indication on how scaling up might improve the results, and also about the feasibility of using the model.
2. The fine-tuning time requirements when compared to the training time (from scratch) of SOTA specialist models are also missing in the paper.
3. Some ablation studies are missing. For instance, it was mentioned that (in the decoder) the RMSNorm was preferred over LayerNorm, then PoPE was used instead of Positional Encoding from the original transformers paper. Some indicator on how these choices would have affected the foundation model training might have been good. (Although the feasibility of these ablations will depend on the pre-training computational requirements, which again cannot be inferred unless disclosed in the paper)
rating: 7
confidence: 4 |
hfgdwxbNOW | Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach | [
"Zhaoliang Chen",
"Cheng Ding",
"Nirbhay Modhe",
"Jiaying Lu",
"Carl Yang",
"Xiao Hu"
] | This study introduces a novel application of a Genera-tive Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foun-dation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demon-strates promising results. After pre-training on our exten-sive dataset that contains more than 200 million 30s PPG samples, the model shows performance comparable to or surpassing current state-of-the-art (SOTA) methods in tasks like heart rate estimation. A standout feature of our GPT model is its inherent capability to perform gen-erative tasks such as signal denoising effectively, with-out the need for further finetuning. This success is at-tributed to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate its implication on its other downstream tasks. | [
"Photoplethysmography",
"clinical foundation model",
"Generative Pretrained Transformer"
] | https://openreview.net/pdf?id=hfgdwxbNOW | 0cyZEOH6yz | review | 1,708,124,763,306 | hfgdwxbNOW | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission34/Reviewer_uhh3"
] | title: Review
review: Summary
This paper proposes architectural variants of transformers, pretraining, and fine-tuning methods for PPG domain.
Strengths
- Experiments are extensively conducted in PPG domain.
Weakness
- Lack of comparison with other transformer architectures for timeseries data
- Writing could be improved. For example, the difference between linear prediction head and attention-based prediction head is unclear, and it’s hard to identify SOTA algorithms in experiments.
rating: 5
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | iXEQD8Jy0O | review | 1,708,489,233,630 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_wybD"
] | title: Official Review
review: This paper introduces an encephalitis query-document dataset along with an embedding model used for retrieval. The experimental results demonstrate the superiority of the embedding based model over traditional key-based search engines.
**Pros:**
(1) The dataset and the model are valuable to the community.
(2) The code is clear and easy to read.
**Cons:**
(1) The experimental results only present some cases, instead of the performance on the whole dataset. It would be better to list a series of numbers in a table. For example, for the baselines, choose keyword-based search engines, as well as OpenAI’s text-embedding APIs. The metrics can be recall, etc. Also, the authors can explore whether the cheaper model developed by users (like the one proposed in this paper) can be on par with the OpenAI text-embedding API models.
(2) The layout of this paper can be further improved. For instance, move the “Links” parts as the footnotes. For the case study paragraphs in the appendix, it would be better to wrap them with a blockquote. It would be more convenient to use latex templates to modify these things, compared with Word.
(3) The title is too long and can be shorten, such as “Enhancing Encephalitis Research Retrieval: Leveraging GPT-4 for Semantic Query-Document Alignment Beyond Keywords.”
rating: 6
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | hV0F2R7VwK | review | 1,708,012,526,881 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_Qsco"
] | title: Ineffectiveness of naive baselines on tricky datasets is widely known
review: Perhaps the more useful contribution of this paper is in leveraging GPT-4 to create synthetic data in niche domains that can be tailored for specific purposes (in this case, benchmarking models that have poor keyword overlap between the document and query).
However, the points regarding shortcoming of keyword based models is already very well known, but also in this case quite expected since the dataset is explicitly constructed to fool keyword matching. It would make more sense to not cast the retrieval model's performance as proof for a move towards embedding-similarity based retrieval (as that has been the case for many years now) but as proof that the dataset is interesting. If that is the intention, it is more beneficial to provide principles and details of the prompt used in the ChatGPT API that enabled the curation of this dataset, so that a practitioner can then apply such principles to their own use case. That would be a simple two page report detailing the challenges and innovations required and the iteration cycles that one may have to go through, along with simple strategies for checking the quality of the generated dataset.
rating: 5
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | XBgNo8DS0M | review | 1,708,767,333,208 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_WE7Z"
] | title: Review
review: This paper addresses the issue that general-purpose search engines have inaccurate retrieval results for medical field content, especially for encephalitis research. Thus, they introduce a query-document set based on PubMed data. They further use GPT-4 to generate queries query varies that conceptually aligned but with different phrases and terms. A model is trained on the introduced data with contrastive loss. Results show the retrieval capabilities are enhanced.
Pros:
* The motivation is strong and work on the task is urgently needed
* Dataset is sampled, the model is trained, and results show the model works. The entire lifecycle is demonstrated with sufficient information
Cons:
* Now info on the collected dataset is limited, it would be great to show the data distribution and provide some samples
* Evaluation result is brief, more interpretation and analysis are appreciated
* How the contrastive loss is calculated, and how the data variants are used for the loss calculation needs to be clarified
rating: 6
confidence: 4 |
hV8clUPzkn | Harnessing and Distilling ChatGPT's Ability to Bridge Semantic Variance for Precise Query-Document Alignment in Encephalitis Research: Surpassing Keyword-Based Search Engines | [
"Santosh Gupta"
] | Keyword-based search engines often fail in retrieving information that aligns with user query intent, due to variations in keywords and phrasing in scientific literature. This paper introduces an encephalitis query-document dataset, characterized by its high semantic variability. Our dataset comprises thousands of query-document pairs. To represent the diverse linguistic expressions found in encephalitis research, we leveraged the advanced language understanding capabilities of GPT-4 to generate queries that, while conceptually aligned with the information in the documents, significantly differ in phrasing and terminology. This approach addresses a critical need in scientific literature searches – retrieving pertinent information that might be overlooked due to conventional keyword-based search limitations.
To evaluate the efficacy of our dataset, we trained a specialized transformer model capable of converting these query-document pairs into embeddings. Our results demonstrate a significant improvement in retrieving relevant encephalitis research papers, especially those that are not surfaced by traditional search engines like PubMed. This enhanced retrieval performance not only underscores the potential of embedding-based retrieval in medical literature search, but also opens up new avenues for comprehensive literature exploration. The implications of our findings extend beyond encephalitis studies, suggesting broader applicability for similar methodologies in other specialized fields of research. | [
"Transformers",
"Information Retrieval",
"Embeddings",
"GPT",
"Encephalitis"
] | https://openreview.net/pdf?id=hV8clUPzkn | Vj289joIXw | review | 1,708,332,583,572 | hV8clUPzkn | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission10/Reviewer_yZdn"
] | title: Review
review: Summary: The paper introduces an approach to enhance literature retrieval from PubMed for encephalitis-related research. The approach is based on a transformer-based embedding model that, given encephalitis-related queries, retrieves relevant PubMed literature. Unlike prior keyword-based searching systems, the proposed method is able to identify articles relevant to a query even when the exact keywords are absent from the query. To build the embedding model, the authors first created a dataset of thousands of query-document pairs. Specifically, they used ChatGPT to generate queries from the document as it has the ability to understand the variance of encephalitis representations and capture various semantics of the document.
Comments and suggestions:
1. Comparing the performance to that of a keyword-based searching system seems insufficient. Would it be possible to use a pre-trained, high-quality embedding model to perform this task? I believe recent advancements in retrieval-augmented generation (RAG) would provide useful techniques for this task.
2. How do we ensure that a general-purpose model like ChatGPT understands encephalitis literature, as I suppose it belongs to long-tail distribution?
3. The link for [1] is missing.
4. It is stated that the "Biopython" library is used. However, the "Biopython" library I know of is a biological sequencing tool (https://biopython.org/), which is not related to this paper.
5. The paper would benefit from a large-scale quantitative assessment of the model's performance. This addition would provide a clearer understanding of its efficacy compared to existing methods.
6. I think adding a graphical diagram would be beneficial to better outline the pipeline.
rating: 6
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | rtTNNs0S3D | review | 1,707,958,443,875 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_Dzb1"
] | title: A good start to potentially impactful work
review: The authors train self-supervised vision foundation models on a very large set of pathology data and show that the embeddings they produce yield much better performance on downstream tasks compared to those trained on general image data, specifically ImageNet. The writing is clear, the presentation is thorough, and the results are strong. The work has potential for broad impact. I didn't read the supplementary sections in detail, but the thoroughness is appreciated. I am not familiar with pathology, so I will leave it to other reviewers to assess the selection of downstream tasks.
There are elements of the experiments and presentation that could be improved. While it should be straightforward to improve presentation, I understand that, since the experiments themselves are costly to run, additional experimental runs may not be able to be included in this submission, which is fine - it can be taken as feedback for further development of the work.
1. It is not necessary to show all the results across epochs. It makes the figures unnecessarily large and difficult to interpret, and it masks the effect of model selection, e.g. using a validation set to determine which checkpoint's model to actually use. In particular, the overlapping lines make Supplementary Figure 1 a bit hard to read. Perhaps just one figure to make the point about saturation and overfitting would be enough. The rest could be tables or bar charts or similar for the selected models.
2. It does not seem appropriate to draw conclusions about the effect of data quantity from the loss curves shown. I don't think you can reasonably disentangle the effects of training time and data quantity in these results. The best approach would be to train additional models on subsamples of the data set, but I understand this is costly.
3. It's unclear why not all models are shown in Figure 2.
4. Supplementary Figure 1 contains the main results of the study. These should be in the main paper. Just a table would be fine.
5. It's not clear what is tRes50 vs Res50.
6. The baseline model should have the same architecture (ViT) as the experimental models in order to isolate the effect of the pre-training data. It is even acknowledged by the authors that ResNet may be overfitting due to the architecture itself.
7. It is explained that DINO-ViT-large is excluded due to training cost, but why is there no MAE-ViT-small or MAE-ViT-base?
8. I would expect that the data cannot be released, but why can the pretrained models not be released? Regardless, the intention to set up an API to get embeddings is appreciated.
Minor nitpicks:
1. In the pre-training section, you mention that your data is an order of magnitude larger than any previous effort. It would be nice to cite the largest previous effort here.
2. Please define pseudo-epoch and explain why it is used instead of standard epochs (I am guessing it is to increase checkpoint frequency?).
3. Typo, first paragraph of discussion section: "SLL"
4. In the discussion section, you say you trained DINO only on ViT-small, but you report results also for ViT-base.
rating: 7
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | C7qOCcC2QY | review | 1,708,769,706,768 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_7k2t"
] | title: Interesting work but corrections are required
review: The work "Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images" is an interesting report from the conducted experiments. The reviewer would like to claim that the problem stated by the Authors is of high interest to broader public (benchmark of recent Foundation Models on pathology dataset). The important thing is that the Authors trained their models with an extensive dataset (as it was claimed in the paper there was around of 3 billion images from around 423000 digital microscopy slides). The goal of the work is clearly given. However, the reviewer would like to raise two suggestions that need to be addressed before publication:
1. The first one is related to the description of the samples. It was claimed that all of them belong to 76794 patients - but no sufficient details about the patients are given. I mean information about the sex, race, age... etc. All these details can allow reader to better understand the approach (of course, I am totally aware that they could not have any alignment with the dataset itself but may have - as some of the illnesses are more probable in later stages of life).
2. I do not understand why huge amount of information is given in the form of supplementary material. I assume that all these subchapters need to be provided directly into the paper - not in the form of supplementaty material. It will be then easier to understand the whole idea as well as to compare the outcomes with the latest results.
The reviewer would like to claim that after all these corrections, the work is ready for publication.
rating: 7
confidence: 5 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | 2gLVRBVBZb | review | 1,708,148,774,481 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_G7XQ"
] | title: The paper presents a study on self-supervised learning for computational pathology, utilizing large-scale datasets and ViT models, demonstrating superior performance on clinically relevant tasks.
review: The paper presents a comprehensive study on the application of self-supervised learning (SSL) in computational pathology, focusing on the pre-training and downstream performance evaluation of visual foundation models on large-scale pathology tasks. The study compiled a massive academic pathology dataset, consisting of over 3 billion images from 423 thousand digital microscopy slides, and compared the pre-training of visual transformer models using masked autoencoders (MAE) and self-distillation models (DINO). The downstream performance was evaluated on six clinically relevant tasks from three anatomic sites and two institutions, demonstrating the benefits of pre-training on pathology data for downstream performance compared to pre-training on natural images. The DINO algorithm achieved better generalization performance across all tasks tested, signifying a significant advancement in computational pathology research.
Pros:
The study addresses a critical gap in the application of SSL algorithms and foundation models in the medical domain, particularly in computational pathology.
The compilation of the largest academic pathology dataset to date, consisting of over 3 billion images, demonstrates a significant contribution to the field.
The comparison of pre-training methods and evaluation of downstream performance on clinically relevant tasks provides valuable insights for the development of performant models in computational pathology.
The study's findings indicate a phase change in computational pathology research, paving the way for more performant models based on large-scale, parallel pre-training at the billion-image scale.
Cons:
The study could benefit from a more detailed discussion on the limitations and challenges of SSL algorithms and foundation models in the medical domain, particularly in clinical workflows.
While the downstream performance was evaluated on clinically relevant tasks, the study could further emphasize the potential impact of these findings on real-world clinical applications.
The document lacks a detailed discussion on the ethical considerations and potential biases associated with the use of large-scale pathology datasets and SSL algorithms in healthcare.
Overall, the work demonstrates high quality, clarity, originality, and significance in advancing the application of SSL algorithms and foundation models in computational pathology. The study's comprehensive approach, large-scale dataset compilation, and valuable insights into pre-training methods and downstream performance evaluation contribute significantly to the field of computational pathology. However, further discussion on ethical considerations and potential biases, as well as the translation of findings into real-world clinical applications, would enhance the overall impact of the work.
rating: 8
confidence: 4 |
g8tF7gGzZb | Computational Pathology at Health System Scale – Self-Supervised Foundation Models from Billions of Images | [
"Gabriele Campanella",
"Chad Vanderbilt",
"Thomas Fuchs"
] | Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in healthcare, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has focused on relatively small datasets for both pre-training and performance evaluation of downstream tasks. The aim of this work is to explore foundation models at a scale that goes orders of magnitude beyond the state of the art and benchmark current self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinically relevant pathology tasks.
We compiled the largest academic pathology dataset to date, consisting of over 3 billion images from 423 thousand digital microscopy slides. We compared the pre-training of visual transformer models with focus on masked autoencoders (MAE) and self-distillation models (DINO). Downstream performance is evaluated on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction.
The results demonstrate that pre-training on pathology data is beneficial for downstream performance com-pared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented model performances signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale. | [
"computational pathology",
"MAE",
"DINO"
] | https://openreview.net/pdf?id=g8tF7gGzZb | 1x3X1rGdqd | review | 1,708,636,145,726 | g8tF7gGzZb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission13/Reviewer_cUiu"
] | title: A Large Scale Self Supervised FM for Computational Pathology
review: **Summary**:
The authors present comprehensive work towards building a Foundation Model for computational pathology. They motivate the need for an FM in pathology, providing a background of existing work in the field. They present three models based on the Visual Transformer Architecture combined with two SSL algorithms, DINO and MAE. They have compiled a significant dataset for the pretraining of their FM using SSL and perform benchmarking on multiple downstream tasks. Their approach shows higher AUC for most tasks over the baselines they have used.
**Pros**:
- The models proposed by the authors showcase a clear superiority in performance to the baselines.
- The authors have collected an impressive amount of data pre-training their FM, alluding to their collected dataset being an order of magnitude larger than any other data collected in the field.
- The authors have done a good job of investigating the training behavior of their models, and have indicated potential next steps for extending their work, all of which I agree with.
- I am glad the authors have discussed open-sourcing their model, as I view their data collection and FM as valuable contributions to the field of pathology.
**Considerations**:
- Can the authors provide some reasoning regarding the generally poor performance observed across the proposed models and baselines for Task 6: `Institution 2
lung cancer immunotherapy outcome prediction`? It appears that this dataset has the largest label imbalance across the different tasks the authors are testing for. Could that be the reason?
- Authors use validation AUC to indicate performance, but AUC as a metric captures an aggregate performance of the model across different operational thresholds. When operationalizing an FM for a clinical setting, one often faces the dilemma of considering the ideal operational threshold of classification (especially in the binary case which is the case with a lot of the downstream tasks the authors test their models on). This is a minor nitpick, and maybe something the authors can show in supplementary material. However, I would be interested in the tradeoffs their FM makes on sensitivity vs specificity at a given threshold for the various downstream tasks.
- It is interesting to note that ViT-large with MAE performs worse in most cases than the two ViT models trained with DINO (in three cases, worse than the baseline). Can the authors comment on why they think this is? Did they explore ViT-large with DINO? If not, could they comment on why?
- The authors have provided multiple examples of SSL models pre-trained on pathology data, could they comment on why they didn't use some of those methods as baselines along with ResNet50?
**Quality**:
The overall quality of the paper is good.
**Originality**:
The authors pre-train their models on a very large corpus of pathology data, which they indicate is larger than any corpus of pathology data collected before. While the authors have used some off-the-shelf methods like DINO for their SSL strategy, the scale of the data they have pre-trained their models on encourages me to believe in the novelty of their SSL approach. This is further reflected by excellent performance in downstream tasks with their proposed approach. However, I am a little concerned with the lack of variety in their chosen baselines, I would like the authors to add some more baselines that use SSL as a pre-training strategy to firmly indicate the superiority of their SSL approach.
**Significance**:
The authors' contributions to the field of pathology with their FM and their collected corpus of data could potentially be very significant for the field of pathology. The authors have correctly identified a list of follow-up questions based on their approach which could further help assert the significance of their FM if they are answered.
**Miscellaneous Comments**:
- Could the authors elaborate a little more about GMA, as this is not a method I am familiar with? My assumption was the spatial distribution of the tiles of a single slide would be necessary for the downstream prediction of the slide as a whole, since you do not have tile-level annotations. Yet. the authors state in benchmark training that GMA does not consider the spatial distribution of the tiles in its prediction. I would appreciate it if the author clarified why GMA's property of not considering the spatial distribution of tiles works here.
rating: 6
confidence: 3 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | vkHWlGXS4j | review | 1,708,736,871,195 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_eh7c"
] | title: Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning
review: The study presents a compelling approach to enhancing the diagnosis of Chronic Kidney Disease (CKD) using a data-driven method. By utilizing the MIMIC IV, a large, open-source database of electronic health records (EHRs), the research employs ML techniques to determine best approach to assess the risk of CKD.
The methodology is comprehensive, incorporating patient-specific demographic data, vital signs, and past medical history to predict CKD status accurately. The authors meticulously detail their data sources, the process of pre-processing, and the strategies employed to handle missing data, showcasing a robust methodological framework. The training and test datasets are balanced across CKD disease stages, which is crucial for the reliability of the predictive models. The authors statistical analysis is sound and supported by clear tables and figures.
They show that the Random Forest Classifier was identified to have the best performance, achieving an accuracy (0.875), an Area Under the Receiver Operating Characteristic Curve (0.927), and a F-1 score (0.765). These results supports authors conclusion that the random forest CDK classifier algorithm maybe effective in identifying patients at risk of CKD, particularly those who may be under-diagnosed due to health disparities.
While the study is not about foundation models, the authors demonstrate and remind us of that simple algorithm maybe sufficient to solve pervasive healthcare problems.
rating: 9
confidence: 5 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | eldjYNMT8W | review | 1,707,896,302,573 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_sYoX"
] | title: A valuable approach but not directly related to the conference main theme
review: The paper addresses a crucial healthcare challenge with interesting objectives. It leverages machine learning techniques to improve the identification and management of CKD, which is a significant contribution given the global burden of the disease. The methodology employed is robust, and the results presented indicate a high level of effectiveness in achieving the paper's goals.
However, despite the promising application and outcomes, this study primarily utilizes classical machine learning approaches without incorporating any pre-trained models, which diverges from the conference's emphasis on innovative model foundations and applications in clinical settings. This discrepancy could be seen as a deviation from the core subject matter expected at the conference.
Furthermore, while the paper effectively demonstrates the performance of ML techniques in reducing CKD underdiagnosis, it falls short in situating its findings within the broader context of current state-of-the-art methods. A comparative analysis, not limited to data-driven approaches, in terms of precision and cost-effectiveness, would have greatly enriched the paper. Such a comparison is essential for understanding the true value and innovation of the proposed method over existing strategies.
In conclusion, while the application of machine learning techniques to tackle CKD underdiagnosis is indeed valuable, the paper's methodological approach lacks the novelty and direct relevance to the "Clinical Foundational Model" conference theme. The absence of a comparative analysis with state-of-the-art methods further limits the paper's contribution to the field. Therefore, despite its potential impact in healthcare, the paper may not meet the innovation threshold required for acceptance into the conference.
Pros:
- Tackling a challenging problem in healthcare
- Great accuracy
Cons:
- Not related to the main conference theme
- Lack comparison with SoTA methods
- Lack a cost reduction analysis
rating: 3
confidence: 3 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | VzpOLjyv7n | review | 1,708,393,700,983 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_MKZD"
] | title: Comparison of Random Forest, Logistic Regression, and KNN models for CKD prediction on MIMIC-IV Dataset
review: ## Summary
This manuscript explores the application of traditional machine learning models (Random Forest, Logistic Regression, and kNN) to the prediction of Chronic Kidney Disease (CKD) on the MIMIC-IV dataset using CKD-related ICD codes as labels and non-CKD-related ICD codes and demographic variables as features. This is an important healthcare problem and establishes a baseline on MIMIC-IV dataset. In general, the methodology for data sampling, splits, evaluation metrics are well done. This manuscript would be stronger with the inclusion of more advanced models such as gradient boosted trees which often achieve stronger performance in many healthcare prediction tasks with tabular data.
The authors are somewhat vague in the real-world application of this model--Is the goal to create a clinical decision support/forecasting model while the patient is still in the hospital? Is the goal to create a model that can detect if we forgot to mark the presence of CKD in an encounter for billing after the ICU stay? This is not clearly stated and these are different questions that require different modeling approaches. The design of the presented models are mainly useful for the second line of questioning. However, in an ICU population where creatinine and eGFR values are very frequently measured, why not compute a clinical baseline using traditional diagnostic definitions for comparison? This is currently absent from the manuscript and inclusion would make the research much stronger. More on this below.
## Pros
* Important healthcare problem. Research establishes baseline for CKD prediction task from ICD + demographic data on MIMIC-IV dataset.
* Well designed experiments, train/test split
* Large sample size & statistically robust
* Good choice of evaluation metrics (AUROC, AUPRC) and threshold selection rationale with MCC
## Cons
* I consider Random Forest to be a simple model. The findings would be more interesting if a more advanced model such as Gradient Boosted Trees was also compared, especially since Gradient Boosted Trees often achieve state-of-the-art performance on many healthcare prediction tasks using tabular features.
* Poor word choice in second paragraph of "Predicting Undiagnosed CKD Patients" section. The authors write "The lower sensitivity is a benefit in this case...", but lower sensitivity is never a benefit because we always desire higher sensitivity and specificity. I think the authors mean that it is an optimal trade-off.
* This analysis pools all CKD classes together rather than predicting individual CKD I, II, etc. It would be more interesting/valuable if authors designed experiments to predict individual CKD classes in addition to predicting presence/absence of CKD. This is because management and healthcare cost of CKD I/II patient (often no dialysis, medication management) is very different than CKD IV/V (patients usually more ill, more comorbidities, and/or require dialysis). The utility of this prediction model would be significantly improved if models were able to predict CKD classes.
* MIMIC IV dataset is derived from real electronic health records of ICU patients where there is a temporal nature to the data. Some diagnoses & ICD codes may be present in certain days/encounters earlier than others. The authors' methods indicate that ICD codes extracted correspond to all ICD codes for a given stay--that is at time of discharge. Choosing this time point limits the utility of this predictive model as many diagnoses (ICD codes) will be accumulated during the ICU stay and may not be present on admission. The value of this model thus becomes post-encounter detection (e.g. CKD billing code is missing), not for prospective clinical decision support. A more clinically useful time point for prediction for diagnostic/clinical decision support would be earlier in the hospital stay (e.g. day 1 or day 2 of admission). The target use case of the proposed model should be more clearly stated; currently it is vague. The chosen use case for this model would then inform different choices for data selection and model development.
- Since the study population are ICU patients, I expect almost all to have multiple creatinine value or eGFR determined in their laboratory studies. The creatinine and eGFR values are how CKD is diagnosed using diagnostic criteria such as KDIGO. The authors tangentially acknowledge these traditional diagnostic criteria in "Previous Work and Study Scope" section, but do not actually compute these values as a clinical baseline. The proposed line of research would be much stronger and clinically useful if authors determined CKD from traditional diagnostic criteria (which is still widely used in clinical medicine) as a baseline for comparison, or used the computed values as ground truth instead of ICD codes. This should be possible for most patients in the MIMIC dataset because of the high availability of creatinine and eGFR data in ICU patients. Currently authors compare against presence of ICD codes that are related to CKD, but this ground truth may be inaccurate given that ICD codes are primarily used for billing purposes. In theory ICD codes should reflect the patient's clinical reality but in reality it may not necessarily be reflective since it requires billing staff or healthcare staff to denote the presence of the diagnosis in the patient's EHR. Thus relying on ICD codes as ground truth may actually lead to your model to under-diagnose CKD.
rating: 6
confidence: 4 |
g7rqyMIvQb | Minimizing Chronic Kidney Disease (CKD) Underdiagnosis Using Machine Learning | [
"Lawrence Huang",
"Sachin Shankar",
"Keyvon Rashidi",
"Dany Alkurdi",
"Felipe Giuste"
] | Chronic Kidney Disease (CKD) is a prevalent and devastating progressive disease affecting up to 14% (>35.5 million individuals) of the United States population and costing Medicare well over $64 billion annually. As many as 90% of individuals with CKD are undiagnosed, indicating the need for better tools to diagnose CKD and prevent unnoticed disease progression. However, current methods of assessing CKD have limitations regarding accessibility, practicality, and accuracy. This study seeks to address these limitations by developing a data-driven method to assess CKD risk from a large opensource database of electronic health records that has not previously been applied for CKD prediction. Machine Learning (ML) methods were used to develop a software tool to predict patient CKD status with patient-specific demographic data, vital signs, and past medical history. Of the ML models used in this study, a Random Forest Classifier had the best performance in predicting CKD diagnosis correctly with an accuracy of 0.875, an Area Under the Receiver Operating Characteristic Curve of 0.927, and an F1 score of 0.765. Our results indicate that ML-based approaches can help facilitate early screening and intervention for patients at risk of CKD. For progressive diseases like CKD that become more devastating and expensive to treat as they progress, high rates of missed diagnoses can be reduced by ML models leveraging electronic health record data. | [
"Machine Learning",
"Chronic Kidney Disease",
"CKD",
"Value-Based Care",
"MIMIC-IV",
"Diagnosis",
"Prediction",
"Screening",
"Data-driven"
] | https://openreview.net/pdf?id=g7rqyMIvQb | 1MordrGaMy | review | 1,707,883,217,955 | g7rqyMIvQb | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission5/Reviewer_42Yx"
] | title: Official Review
review: The paper discusses machine learning for chronic kidney disease prediction. Multiple baselines are tested. The strength of the paper includes the detailed background introduction and problem-driven study, with thorough data analysis and experiment discussion. The code is made available online. The weakness of the paper includes too few baselines being tested, not standardized and lacking comparison to some foundation models (that are not specifically for chronic kidney disease but can be easily tailored to do so). Figure 1 is too small, especially with too small fonts. The Discussion section lacks more tables/figures/statistics to back up the sentences.
rating: 7
confidence: 3 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | dRGWxzWw0h | review | 1,708,220,880,283 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_Ek3d"
] | title: Good proposed approach but need further polish
review: The manuscript proposes a retrieval-augmented LLM approach for text-based cardiovascular disease detection.
Overall, the proposed approach is reasonable. However, there exists a number of unclear aspects that need to be addressed before the manuscript to be published.
**Strengthese**
- cardiovascular disease detection is one of the important risk prediction scenarios for clinical foundation models.
- the proposed retrieval augmentation is a useful technical to enhance foundation models
**Weaknesses**
- Heart disease dataset
- since it is an author-collected dataset, it is better to mention the size of training and testing set
- in the real clinical setting, there can exist healthy patients. However, the dataset does not contain the label for the healthy condition.
- Technical part
- how do the authors implement the re-ranker? Now there is no explanation for that.
- how to derive ``<RAGHere>`` from ``A``? In the introduction of the case fusion layer, the authors stop after introducing their cross-attention operator. There still exists a gap between the cross-attention and ``<RAGHere>``
- is there a particular reason to use the same $W_K$ for both $K=W_K A$ and $V=W_K A$?
- Experiment setup
- what is the RAG model? there is no reference for it. If it is a custom baseline, it would be better to introduce it.
- "During training, we randomly mask $m$ cases." $m$ is first introduced here. This introduction of a new variable without prior explanation can pose challenges for understanding the method effectively.
**Questions**
- Can the authors explain why the retrieved cases are still from the training data? Assuming the foundation model is well-trained, then it does not need
- what do ``standard values`` refer to (Results section)? Also, there is a typo for ``w/o stanard`` in Tabl 2. Is the ``standard value`` similar to the concept of ``standardization of units`` mentioned in the Heart disease dataset section? I can guess it may refer to the standard value range of a vital, but it is better to explicitly introduce it.
rating: 5
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | QUR0VDQFvW | review | 1,708,570,707,878 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_6pZo"
] | title: Strong Manuscript
review: In reviewing this manuscript, it is clear the authors can intellectually articulate the focus of their study. They were able to explain the rationale of their research as well as their results.
My constructive feedback would be as follows:
1. The abstract should be more concrete in explaining the “encouraging results.” At present, there is no concrete description of any results whatsoever in the abstract.
2. In defining the attention function, the authors should define all terms in the model; at present, they do not describe the \\(d_k\\)
scaling term and the role it plays in the attention function.
3. In the section about pretraining, the authors state, "Specifically, we detect the word “Figure” in sentences and remove the corresponding sentences." Without having looked at the curated pre-training dataset, it would be imperative to know if all sentences analyzing images had the word "Figure" or words such as "Image" or other synonyms might have been used.
4. I would encourage the authors to explicitly define the "cls" subscript used in their arrays.
5. When describing the space of retrieved sets, I believe the authors may have a typesetting issue; namely, the authors state the set as \\(R^{1xh}\\), using an italic \\(x\\) as opposed to \\(\\times\\), indicating the array size of R being 1 by h. Hence, I believe the dimensionality of R should be represented as \\(R^{1 \\times h}\\).
6. Since the authors are using LaTex markups, I encourage them to express the learning rate as \\(10^{-4}\\) as opposed to 1e-4
7. The authors should define lora_alph and lora_r in the context of LoRA and display them with the appropriate LaTex markup (if applicable).
Overall, the authors have an extremely strong paper.
rating: 9
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | CE3ZV1jYBX | review | 1,708,955,099,503 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_sCWy"
] | title: A nice end-to-end pipeline showing LLMs being used to assist in clinical predictive tasks
review: Well written paper proposing a pipeline capable of predicting cardiovascular diseases. Custom models are used with a RAG layer to retrieve predictions.
The evaluation is well thought out. Overall a good contribution.
rating: 7
confidence: 4 |
fGVQgxvrzI | HEART: Heart Expert Assistant with ReTrieval-augmented | [
"Junhao Guo",
"XueFeng Shan",
"Guoming Wang",
"Dong Chen",
"Rongxing Lu",
"Siliang Tang"
] | As the incidence of cardiovascular diseases continues to rise, people are increasingly emphasizing the prevention and treatment of cardiovascular diseases. However, in economically disadvantaged areas, the scarcity of medical resources and lack of clinical experience make the early detection of cardiovascular diseases particularly challenging. For this challenge, the HEART (HEART Expert Assistant with Retrieval-augmented) model was proposed, which leverages the powerful logical reasoning capabilities of Large Language Models (LLMs) to assess whether patients have heart disease. Specifically, HEART operates on a dual-component structure, consisting of a Diagnostic Module and a Case Retrieval Module. For the Diagnostic Module, the LLM is pre-trained on a cardiac ultrasound assessment dataset to master the relevant evaluation techniques. As for the Case Retrieval Module, a text encoder transforms input cases into hidden features, which are then used to retrieve auxiliary cases. The input case and auxiliary case are merged through a Case Fusion Layer to obtain the fused case features, which are combined with prompts for inference. We have tested our model on a congenital disease dataset and achieved encouraging results. The proposed HEART model has shown tremendous potential in becoming the foundational model for predicting cardiovascular diseases. | [
"Clinical Foundation Models",
"Cardiovascular diseases prediction",
"Large language model",
"Retrieval-Augmented model"
] | https://openreview.net/pdf?id=fGVQgxvrzI | 7kL28T1nx9 | review | 1,708,187,275,805 | fGVQgxvrzI | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission2/Reviewer_LEQu"
] | title: Using ECG examination records to predict heart defects
review: The authors propose to use ECG examination records (text) to classify heart defects. This is in contrast to the more "classical" approach of using directly ECG data.
To my understanding, at inference time this method would rely on a doctor first describing the ECG to produce the ECG examination record which would then be used as input to the model. I wonder if this is something that limits its applicability. In the introduction the authors note that there is a lack of cardiologists, but this method would not alleviate that issue.
The authors first apply a classic few-shot strategy using a llama2 variant that was pre-trained on a curated version of a public dataset of ECG notes. They note a low performance with this strategy.
Then they finetune the model on their dataset (1006 cases) and observe a marked improvement. Notably, there are no descriptions of whether they split the data in training and validation or of any cross-validation strategies. Additionally, pre-training and finetuning are done for very few epochs 10 and 15 respectively.
Then the authors use RAG to include context from similar cases to aid in the prediction. They can only include 1 or 2 retrieved samples before running out of tokens with a standard RAG strategy. The RAG strategies improve the performance. But I have several issues here. i) The knowledgebase for RAG is the same training cohort. This seems a major issue. ii) If the retrieved context includes a diagnosis, the model may just use the retrieved diagnosis. It may be beneficial to investigate this potential issue.
To include more context, they propose a context fusion model based on cross-attention. Then the entire context coming from the RAG portion boils down to a single vector if I understood correctly. This seems to improve performance further.
In general I would have liked to see baselines of performance using ECG data to compare if there could be benefits of the LLM strategies based on ECG reports.
I was confused about the "standard values" I could not find a description of them. They are mentioned only once in the text but they appear in the results table.
It would have been good to see the number of cases in each type of case: single, double, triple.
rating: 4
confidence: 3 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | uRqEgFoGRA | review | 1,709,020,990,251 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_QiAY"
] | title: Benchmarking Common Factors in Psychotherapy Using AI Systems to Enhance Provider-to-Patient Dynamics to Improve Patient Outcomes
review: Authors present the use of LLM to improve psychotherapy sessions integrating common factors approach into LLM to provide feedback.
The paper is certainly original. It would enhance the clarity and understanding of the paper if authors present more detail on how the system is built, the interpretation of the metrics and how the system can improve the quality of visits. I would also appreciate a discussion on the ethical or social implications of using this type of technology in the medical setting.
rating: 6
confidence: 3 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | qHZueMH6tF | review | 1,708,674,566,347 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_AfmP"
] | title: Review for Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes
review: Summary of the paper:
The abstract describes the use of NLP to detect healthcare provider behaviors aligned with common factors theory in psychotherapy. Common factors theory emphasizes building empathy, trust and positive relationships through provider skills like reflective listening and appreciation. While the clinical importance of this is paramount, I have concerns with the content that has been presented in the abstract.
Major Comments:
This is a great problem statement. However, I have a couple of major comments:
1. There is ambiguity in the description of the exact methods, vague terms such as machine learning and natural language processing are used.
* They mention the use of “synthetic and generative technologies to expand specific labeling strategies and data curation by generating and validating rare use cases” but don’t describe this - what was the generative model that was used, how did they verify its realism to simulate rare use cases, how much synthetic data was generated relative to non-synthetic data etc. More details need to be provided since this can have significant impacts on the quality of their model.
* “Machine learning methods were used to create natural language processing models based on conversational training data”. What was the base NLP model, did the authors fine-tune a model such as LLaMA? What specific machine learning method was used - NLP fine-tuning strategy needs to be described.
2. The authors mention that they will report results of benchmarking their model on the HOPE dataset, but they don’t do so within the paper.
The overall clarity of the abstract is low due to the above concerns.
Minor Comments:
1. They have not adhered to the AAAI submission format.
2. They use the phrase “using machine learning with natural language processing”. NLP is technically a subfield of ML, and this statement needs to be revised to reflect that.
rating: 4
confidence: 4 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | gTax5c4Nkw | review | 1,708,766,809,165 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_KZ6p"
] | title: Interesting work but major corrections are required
review: The work "Benchmarking Common Factors in Psychotheraphy Using AI Systems to Enhance Provider-to-Patient Dynamics to Improve Patient Outcomes" is a really interesting paper that comprises a kind of a foundation model for neuro-symbolic AI connected with physchotherapy. However, there are several points that need to be clearly addressed before the work can be accepted for publication. To be clear enough, the general reviewer remarks are given in the form of numbered list provided below.
1. The Authors in the section "Method" wrote "(...) We built our initial models using labeled data derived from (...)". This statement is not really clear. The Authors need to address what was the structure of the data and how much of them were taken into account when it comes to the training/testing dataset. Right now, the overview of the data is missing.
2. In section "Method" one can also read that the Authors used methodologies for generation of the synthetic samples. However, once again, the description of the used algorithms and methods is missed. It is unclear what was the approach to generate these samples. The Authors must specify what kind of AI models or methodologies were consumed to generate the synthetic data. On the other hand it is also unclear how much samples were generated in that manner. This information is also needed to appropriately validate the worked-out models.
3. Section "Preliminary work" - there is information that Machine Learning models were used to create NLP algorithms. However, once again the details are missing. The Authors have to provide information about the algorithms that were used in their approach. Right now, it is impossible to understand the approach.
4. "Findings to be Reported" - one can read information about the levels of accuracy in detection of Appreciation and Toxicity/Confrontation - however, once again there is no sufficient details. How the algorithms were evaluated? What kind of approach was used for this aim? How the database was split and how much samples were consumed. All these questions need to be addressed.
To sum up, I would like to recommend the work for publication but only after major revision that will address all the statements given above.
rating: 5
confidence: 4 |
cYSthunEPN | Common Factors in Psychotherapy: Enhancing Provider-to-Patient Dynamics to Improve Patient Outcomes | [
"Alison Cerezo"
] | For this paper, we describe our approach to benchmarking Common Factors of Empathy and Collaboration on the HOPE dataset—a publicly available dataset comprising 12.8k utterances from 212 therapy sessions involving a therapist and client dyad. Malhotra et al. (2022) conducted thorough processing of the HOPE dataset to eliminate noise and transcription errors. Common Factors Theory encompasses factors from (1) the client, (2) provider and, (3) therapeutic context; we specifically focus on provider behaviors in this paper. Our central research question: Can we produce a scalable, consistent, and unbiased way to assess the occurrences of reflective listening, appreciation, and confrontation–markers of empathy and collaboration, the core features of Common Factors Theory–using natural language processing and AI methods to augment provider communications?
**Could not add my co-authors in the portal.
Our full team: Alison Cerezo, PhD,1,2, Vijaykumar Palat, MS1, Amber Jolley-Paige, PhD1, Sarah Peregrine Lord, PsyD1,3
(1 mpathic.ai; 2 University of California Santa Barbara, 3 University of Washington) | [
"clinical benchmarks",
"common factors",
"health equity",
"machine learning",
"artificial intelligence"
] | https://openreview.net/pdf?id=cYSthunEPN | NnLeW31Dqk | review | 1,708,765,487,054 | cYSthunEPN | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission25/Reviewer_9KXZ"
] | title: Review
review: This abstract presents an approach to building and benchmarking an AI system trained on Common Factors Theory using NLP techniques. Using labeled data, the work aims to use synthetic and generative technologies to generate and validate rare user cases. However, the model/method details are not clearly stated.
Pros:
* Applying common factors in AI development is a relatively novel idea
* Data in use is comprehensive and potentially sufficient
Cons:
* Method detail is not clearly stated (model architecture, how training is done etc.), and how Common Factors is integrated/reflected is not clear from current writing
* What is the difference between this work and the work supervised to detect different common factors? It would need more work to distinguish the contribution of this work and existing works.
* Vague connection with foundation models
rating: 3
confidence: 4 |
cDXtscWCKC | SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals | [
"Rahul Thapa",
"Bryan He",
"Magnus Ruud Kjaer",
"Hyatt Moore IV",
"Gauri Ganjoo",
"Emmanuel Mignot",
"James Y. Zou"
] | Sleep is a complex physiological process involving multiple modalities across the body. We curate a large dataset of simultaneous polysomnography (PSG) recordings comprising electrical brain activity (EEG), heart rhythms (ECG), and respiratory patterns from over 14,000 participants, totaling over 100,000 hours of sleep data. We develop SleepFM, the first multi-modal foundation model for sleep learned through contrastive learning on this highly heterogeneous physiological data. When evaluated on a held-out test set, SleepFM significantly improves retrieval performance over 500x over random chance. A logistic regression model trained on SleepFM's learned embeddings achieves strong performance on sleep stage classification (macro AUPRC 0.69) and apnea detection (AUPRC 0.71), outperforming an end-to-end trained CNN for sleep stage classification (AUPRC 0.579) and apnea detection (AUPRC 0.56). We find representations learned using an innovative leave-one-out approach during contrastive learning significantly improve downstream task performance compared to representations from standard pairwise contrastive learning. This work demonstrates the value of holistic multi-modal sleep modeling. | [
"Deep Learning",
"Foundation Models",
"Sleep Study",
"Apnea"
] | https://openreview.net/pdf?id=cDXtscWCKC | dU3i6TyXF7 | review | 1,708,621,686,736 | cDXtscWCKC | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission21/Reviewer_Td6r"
] | title: Review and Suggestions for Improvement on a Novel Multi-Modal Sleep Event Identification Method Using Contrastive Learning
review: Reviewer's Comments:
The paper introduces an innovative method for sleep event identification based on multi-modal contrastive learning, which is innovative in the field of sleep medicine. The SleepFM model performs well in retrieval, sleep stage, and apnea classification, contributing to sleep medicine research. Here are some suggestions:
Language: The overall language of the paper is fluent, but there are instances where expressions are not clear. Additionally, there are numerous abbreviations without providing the original full terms. Particularly, for "SleepFM," I am curious about what "FM" stands for.
Methodology: The paper describes clear methods, including dataset selection, model architecture, and evaluation metrics. However, more detailed descriptions of the implementation details of the contrastive learning method may be needed for readers to understand the model training process and data processing flow.
Results and Discussion: The Results section provides detailed experimental results, but some explanations for certain results could be more in-depth. For example, why does pairwise contrastive learning perform better in retrieval? The authors could provide more explanations about the internal mechanisms of the model.
Overall, this paper is helpful for research in the field of sleep medicine, but there are still some aspects that can be further improved and refined.
rating: 9
confidence: 4 |
cDXtscWCKC | SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals | [
"Rahul Thapa",
"Bryan He",
"Magnus Ruud Kjaer",
"Hyatt Moore IV",
"Gauri Ganjoo",
"Emmanuel Mignot",
"James Y. Zou"
] | Sleep is a complex physiological process involving multiple modalities across the body. We curate a large dataset of simultaneous polysomnography (PSG) recordings comprising electrical brain activity (EEG), heart rhythms (ECG), and respiratory patterns from over 14,000 participants, totaling over 100,000 hours of sleep data. We develop SleepFM, the first multi-modal foundation model for sleep learned through contrastive learning on this highly heterogeneous physiological data. When evaluated on a held-out test set, SleepFM significantly improves retrieval performance over 500x over random chance. A logistic regression model trained on SleepFM's learned embeddings achieves strong performance on sleep stage classification (macro AUPRC 0.69) and apnea detection (AUPRC 0.71), outperforming an end-to-end trained CNN for sleep stage classification (AUPRC 0.579) and apnea detection (AUPRC 0.56). We find representations learned using an innovative leave-one-out approach during contrastive learning significantly improve downstream task performance compared to representations from standard pairwise contrastive learning. This work demonstrates the value of holistic multi-modal sleep modeling. | [
"Deep Learning",
"Foundation Models",
"Sleep Study",
"Apnea"
] | https://openreview.net/pdf?id=cDXtscWCKC | PGMUhd4NAh | review | 1,708,497,926,353 | cDXtscWCKC | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission21/Reviewer_82eE"
] | title: Promising Idea and Approach
review: ### Summary
This paper proposes a foundational model for sleep-related tasks and shows its efficacy through two downstream tasks - (i) sleep apnea detection, and (ii) sleep stage detection. While it proposes a novel application of two interesting pre-training techniques, the experimentation does not truly evaluate the potential of the proposed foundational model. Overall, the paper is built on a promising idea and its presentation can be further improved through more rigorous experimentation.
### Strengths
- Novel application of pretraining tasks to align multiple modalities to create a sleep (time-series) foundational model
- Paper is written with clarity, explaining proposed methods and experiments well
- Promising results
### Weaknesses
- “Foundational Model” capabilities have not been evaluated: To claim that a new model is a foundational model, it must satisfy certain properties of foundational models (FM). It is the authors’ responsibility to then highlight the properties that have been tested and note limitations of the foundational model. For example, a common property found in foundational models is the ability to generalize reasonably well to an unseen dataset.
- Weak baselines: Ideally, the model should be compared with other methods which have been trained for the proposed downstream tasks. Eg: [1] is a sleep apnea detection method which can be included as a baseline.
### Other Feedback (to extend this work):
- The authors should motivate whether a foundational model for sleep is in fact useful. Identifying the shortcomings of existing methods (which do not consider all modalities, for example) and clearly delineating the research questions that you would like to answer would be a good way to approach this.
- Authors considered separate encoders for the different modalities. Some recent papers on time-series foundational modeling have shown that a single time-series model can encode time-series of different #channels and frequencies [2, 3]. Consider using one of these methods as the base model for your proposed pre-training tasks. This could help better align the modalities, especially if one of the modalities is more sparse than others.
- Experiment on the effect of leaving out one modality - could motivate the need to jointly model multiple modalities.
[1] https://ieeexplore.ieee.org/abstract/document/8571271
[2] https://arxiv.org/pdf/2302.11939.pdf
[3] https://arxiv.org/abs/2402.03885
rating: 5
confidence: 4 |
cDXtscWCKC | SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals | [
"Rahul Thapa",
"Bryan He",
"Magnus Ruud Kjaer",
"Hyatt Moore IV",
"Gauri Ganjoo",
"Emmanuel Mignot",
"James Y. Zou"
] | Sleep is a complex physiological process involving multiple modalities across the body. We curate a large dataset of simultaneous polysomnography (PSG) recordings comprising electrical brain activity (EEG), heart rhythms (ECG), and respiratory patterns from over 14,000 participants, totaling over 100,000 hours of sleep data. We develop SleepFM, the first multi-modal foundation model for sleep learned through contrastive learning on this highly heterogeneous physiological data. When evaluated on a held-out test set, SleepFM significantly improves retrieval performance over 500x over random chance. A logistic regression model trained on SleepFM's learned embeddings achieves strong performance on sleep stage classification (macro AUPRC 0.69) and apnea detection (AUPRC 0.71), outperforming an end-to-end trained CNN for sleep stage classification (AUPRC 0.579) and apnea detection (AUPRC 0.56). We find representations learned using an innovative leave-one-out approach during contrastive learning significantly improve downstream task performance compared to representations from standard pairwise contrastive learning. This work demonstrates the value of holistic multi-modal sleep modeling. | [
"Deep Learning",
"Foundation Models",
"Sleep Study",
"Apnea"
] | https://openreview.net/pdf?id=cDXtscWCKC | 8VoyYYc6CV | review | 1,708,640,482,167 | cDXtscWCKC | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission21/Reviewer_w78d"
] | title: Official Review for SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals
review: **Summary:**
The paper proposes SleepFM, a foundation model for sleep, trained using contrastive learning on a self-curated dataset consisting of EEG, ECG, and respiratory signals. The authors evaluate their model on downstream tasks like sleep stage classification and apnea event classification, showing performance superior to end-to-end trained CNN models. They also perform retrieval tests, showcasing their model’s ability to retrieve one modality’s closest embeddings from the test set based on another modality’s embeddings. The embedding layer in their model consists of three CNN encoders for each type of signal data, and the model is pretrained on a contrastive learning objective. The paper also evaluates the impact of pairwise contrastive learning vs leave-one-out contrastive learning objective, showing better performance on the downstream tasks using the latter. For classification, the model uses the embeddings from the pretrained model and uses them to train a logistic regression classifier to evaluate downstream performance. The model shows good performance compared to the baseline in the paper across a wide set of experiments.
**Strengths:**
1. The authors introduce a fairly large-scale multi-sensory dataset of simultaneous measurements of EEG, ECG, and EOG signals, focused on training a foundation model from scratch. This dataset seems well-curated, and based on the downstream results, the embeddings from the pre-trained model have a positive impact on the performance on given tasks.
2. The motivation for using a contrastive-learning-based pre-training methodology using all three types of time-series data is sufficiently articulated and well-grounded with prior work in the paper.
3. The pre-training, fine-tuning, validation, and test splits for the dataset are well-defined, mitigating the risk of any data contamination in the evaluation pipeline.
4. Owing to the availability of various types of paired time-series data in the pre-training dataset, the comparison of pairwise contrastive learning vs leave-one-out contrastive learning as the training objective was relevant and interesting.
5. The k-shot analysis of the model’s performance for classification was relevant, explicitly showcasing that contextual information learned during pretraining can improve downstream performance.
**Weaknesses:**
1. I am not sure I would call this model a “multi-modal” model, since the model is primarily trained on paired multi-variate time-series across different domains. EEG, ECG, and EOG data all lie in the time-series modality and are similarly modeled using the same type of encoders.
2. Experimental comparison with other recent statistical and deep learning methods for the given downstream tasks is necessary to have a more holistic understanding of the proposed model’s performance.
**Other recommendations:**
1. In future work, interpretability experiments to show what the model in learning would be interesting, and evaluating the model potentially in zero-shot settings would be appreciated as well.
2. Leave-one-out contrastive learning is not a new approach, and prior works from other domains [1,2] should be cited for the same.
[1] Sanchez-Fernandez, A., Rumetshofer, E., Hochreiter, S. et al. CLOOME: contrastive learning unlocks bioimaging databases for queries with chemical structures. Nat Commun 14, 7339 (2023). https://doi.org/10.1038/s41467-023-42328-w
[2] Xiao, T., Wang, X., Efros, A. A., & Darrell, T. (2021). What Should Not Be Contrastive in Contrastive Learning. International Conference on Learning Representations. https://openreview.net/forum?id=CZ8Y3NzuVzO
rating: 7
confidence: 4 |
cDXtscWCKC | SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals | [
"Rahul Thapa",
"Bryan He",
"Magnus Ruud Kjaer",
"Hyatt Moore IV",
"Gauri Ganjoo",
"Emmanuel Mignot",
"James Y. Zou"
] | Sleep is a complex physiological process involving multiple modalities across the body. We curate a large dataset of simultaneous polysomnography (PSG) recordings comprising electrical brain activity (EEG), heart rhythms (ECG), and respiratory patterns from over 14,000 participants, totaling over 100,000 hours of sleep data. We develop SleepFM, the first multi-modal foundation model for sleep learned through contrastive learning on this highly heterogeneous physiological data. When evaluated on a held-out test set, SleepFM significantly improves retrieval performance over 500x over random chance. A logistic regression model trained on SleepFM's learned embeddings achieves strong performance on sleep stage classification (macro AUPRC 0.69) and apnea detection (AUPRC 0.71), outperforming an end-to-end trained CNN for sleep stage classification (AUPRC 0.579) and apnea detection (AUPRC 0.56). We find representations learned using an innovative leave-one-out approach during contrastive learning significantly improve downstream task performance compared to representations from standard pairwise contrastive learning. This work demonstrates the value of holistic multi-modal sleep modeling. | [
"Deep Learning",
"Foundation Models",
"Sleep Study",
"Apnea"
] | https://openreview.net/pdf?id=cDXtscWCKC | 6S7vikQ40Q | review | 1,708,623,972,856 | cDXtscWCKC | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission21/Reviewer_rUdU"
] | title: Interesting work on Sleep foundation model
review: The paper presents "SleepFM," a multi-modal foundation model for sleep analysis using large-scale polysomnography (PSG) data, including EEG, ECG, and respiratory signals. The model is trained via contrastive learning and demonstrates superior performance in tasks like sleep stage classification and apnea detection compared to traditional CNN methods. The innovative aspect of using a leave-one-out approach in contrastive learning for multi-modal data integration is highlighted, showing significant improvements in model performance.
As author stated in the limitation section, more external datasets are needed to prove the work as a foundation model. Also, it would be great to see the performance of using only one kind of signal as downstream task, e.g. ECG-only for sleep staging.
rating: 8
confidence: 5 |
bDfBemF2tw | Identity information based on human magnetocardiography signals | [
"Pengju Zhang",
"Chenxi Sun",
"Jianwei Zhang",
"Hong Guo"
] | We have developed an individual identification system based on magnetocardiography (MCG) signals captured using optically pumped magnetometers (OPMs). Our system utilizes pattern recognition to analyze the signals obtained at different positions on the body, by scanning the matrices composed of MCG signals with a $2\times2$ window. In order to make use of the spatial information of MCG signals, we transform the signals from adjacent small areas into four channels of a dataset. We further transform the data into time-frequency matrices using wavelet transforms and employ a convolutional neural network (CNN) for classification. As a result, our system achieves an accuracy rate of 97.04\% in identifying individuals. This finding indicates that the MCG signal holds potential for use in individual identification systems, offering a valuable tool for personalized healthcare management. | [
"magnetocardiography",
"individual identification",
"optically pumped magnetometers"
] | https://openreview.net/pdf?id=bDfBemF2tw | nQj4D1lpVf | review | 1,708,666,943,578 | bDfBemF2tw | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission40/Reviewer_Ag5k"
] | title: The research addresses a significant issue by exploring alternative methods for identity identification using MCG. Nowadays, security and privacy concerns are inevitable, and innovative approaches to identity verification are essential. By investigating the feasibility of extracting biometric information from magnetocardiography signals, the study offers an alternative for enhancing security measures without compromising individual privacy.
review: 1. The effort to control confounding factors by selecting participants with similar characteristics in this preliminary study is understandable. However, it's important to consider the potential trade-off between controlling variables and ensuring the model's generalizability to the broader population. With similar age and sex, the model may face increased difficulty in identifying individuals due to potential underlying differences in ECG patterns among different age groups and sexes. Given the limited diversity in the sample, there may be challenges in extrapolating the findings to a more diverse population. Future research could explore strategies to balance control over confounding factors with the need for dataset diversity to enhance the model's applicability across different demographics.
2. Please further clarify the methodology employed to augment the dataset. Specifically, how were the repeated measures conducted and combined to generate such a large dataset (3276 sets) with 5 participants? Providing insights into this process would enhance the transparency of the study and offer valuable guidance to researchers interested in implementing similar data augmentation techniques in their work.
3. I recommend the inclusion of measurements from ECG devices for comparison with the innovative self-manufactured device. To strengthen the evidence of the device's validity, it would be beneficial to authors include a figure comparing the measurements obtained from the new device with those from traditional ECG. This visual comparison could provide valuable insights into the device's accuracy and reliability.
4. To ensure the robustness of the findings, it's crucial to provide details on the validation process for the model. Please elaborate on the validation dataset used to assess the model's performance.
5. I suggest the authors consider using universal anatomical terms, such as transverse, coronal, and sagittal planes, to describe the human anatomy. Additionally, providing a fixed reference point when describing magnet induction intensity would be beneficial to prevent confusion and ensure the reproducibility and interpretation of the results. These adjustments could enhance clarity and consistency in the terminology used.
6. I appreciate the authors' insightful approach to adding noise to mimic real-world environments. This strategy is particularly valuable as it reflects the common scenario where sensors encounter significant noise levels alongside the signal of interest. By incorporating noise into the study, the authors have taken a crucial step towards enhancing the realism and applicability of their findings to practical settings.
rating: 7
confidence: 4 |
bDfBemF2tw | Identity information based on human magnetocardiography signals | [
"Pengju Zhang",
"Chenxi Sun",
"Jianwei Zhang",
"Hong Guo"
] | We have developed an individual identification system based on magnetocardiography (MCG) signals captured using optically pumped magnetometers (OPMs). Our system utilizes pattern recognition to analyze the signals obtained at different positions on the body, by scanning the matrices composed of MCG signals with a $2\times2$ window. In order to make use of the spatial information of MCG signals, we transform the signals from adjacent small areas into four channels of a dataset. We further transform the data into time-frequency matrices using wavelet transforms and employ a convolutional neural network (CNN) for classification. As a result, our system achieves an accuracy rate of 97.04\% in identifying individuals. This finding indicates that the MCG signal holds potential for use in individual identification systems, offering a valuable tool for personalized healthcare management. | [
"magnetocardiography",
"individual identification",
"optically pumped magnetometers"
] | https://openreview.net/pdf?id=bDfBemF2tw | leNwPGYElw | review | 1,708,623,243,135 | bDfBemF2tw | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission40/Reviewer_bMgS"
] | title: Interesting topic but more experiments are needed
review: The work proposed an interesting work on using MCG signals to conduct individual identification. While several more steps needs to be considered as a foundation model
1. "Our system (shown in Appendices Figure ??) has two BellBloom OPMs as a gradiometer to sense the cardiac magnetic
field." There is a reference issue.
2. More related work about MCG signal can be introduced to give readers a more comprehensive view of MCG.
3. In the introduction, the authors state that ECG quality can be influenced by physical activities etc, will MCG will also be influenced by those factors?
4. The most concerning issue is the model is tested on 5 subjects, it can hardly be called Foundation model.
rating: 4
confidence: 4 |
bDfBemF2tw | Identity information based on human magnetocardiography signals | [
"Pengju Zhang",
"Chenxi Sun",
"Jianwei Zhang",
"Hong Guo"
] | We have developed an individual identification system based on magnetocardiography (MCG) signals captured using optically pumped magnetometers (OPMs). Our system utilizes pattern recognition to analyze the signals obtained at different positions on the body, by scanning the matrices composed of MCG signals with a $2\times2$ window. In order to make use of the spatial information of MCG signals, we transform the signals from adjacent small areas into four channels of a dataset. We further transform the data into time-frequency matrices using wavelet transforms and employ a convolutional neural network (CNN) for classification. As a result, our system achieves an accuracy rate of 97.04\% in identifying individuals. This finding indicates that the MCG signal holds potential for use in individual identification systems, offering a valuable tool for personalized healthcare management. | [
"magnetocardiography",
"individual identification",
"optically pumped magnetometers"
] | https://openreview.net/pdf?id=bDfBemF2tw | Bu6uyMh9UJ | review | 1,708,718,453,169 | bDfBemF2tw | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission40/Reviewer_Uorw"
] | title: Interesting approach, relevance to topics in clinical foundation models is questionable
review: In this work, the authors develop and present preliminary results regarding an identification system based on magnetocardiography. The method for using MCG is an interesting non-invasive technique and significant effort has been shown in developing the system. However, there are several comments to be made regarding the work in the context of the symposium:
- First, the limited sample subjects does question the robustness of the results. Evaluation over a larger cohort would strengthen the conclusions of the paper
- The modality of MCG is novel, but given the size of the apparatus shown, it is questionable where such a system would be practical and needed.
- In addition, although the authors claim that MCG-based identification would be more robust and reliable, it is uncertain whether this is true given a lack of comparisons to other mentioned methods, such as facial features or ECG.
- The authors utilize a CNN with features extracted from a wavelet transform for the identification, it’s unclear that such a method would be scalable to large populations that would be necessary for practical deployment.
- Moreover, there doesn’t seem to be any significant innovation related to MCG or ML for healthcare in general.
Although the method of MCG is novel and interesting, further results and evaluation to other existing approaches is recommended
rating: 3
confidence: 4 |
ZLUsZ52ibx | Review of Language Models for Survival Analysis | [
"Vincent Jeanselme",
"Nikita Agarwal",
"Chen Wang"
] | By learning statistical relations between words, Large Language Models (LLMs) have presented the capacity to capture meaningful representations for tasks beyond the ones they were trained for. LLMs' widespread accessibility and flexibility have attracted interest among medical practitioners, leading to extensive exploration of their utility in medical prognostic and diagnostic applications. Our work reviews LLMs' use for survival analysis, a statistical tool for estimating the time to an event of interest and, consequently, medical risk. We propose a classification of LLMs' modelling strategies and adaptations to survival analysis, detailing their limitations and strengths. Due to the absence of standardised guidelines in the literature, we introduce a framework to assess the efficacy of diverse LLM strategies for survival analysis. | [
"Survival analysis; Risk Estimate; Review"
] | https://openreview.net/pdf?id=ZLUsZ52ibx | lgdeXVP4Qr | review | 1,707,843,899,119 | ZLUsZ52ibx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission23/Reviewer_ZyZX"
] | title: An important effort in need of a few improvements
review: The work addresses an important problem - the current lack of a meaningful comparison of methods for application of LLMs to process unstructured data for clinical survival analysis. It is very well-written, organized, and enjoyable to read. I am not well-versed in this subfield, but it appears to be reasonably comprehensive. It provides a framework for addressing the issues brought up in the review.
There is also room for improvement. The descriptions of many works lack details and are hard to draw conclusions from. At several points recommendations are given, but they are not very clear or specific. If the point is to highlight that prior work does not support clear recommendations, it would be clearer to state this, as the current structure makes it appear that the intent is for the authors to give methodological recommendations based on the reviewed work. A large part of the contribution is a common evaluation framework, which addresses the gaps in the field highlighted by the review; however, this is largely relegated to the appendix, not described in much detail, and not justified clearly in its design choices. I cannot view the current state of the project as the link is hidden for anonymity.
Also, I would point out that the strategy used for literature search seems brittle. For example, the search includes titles with either "survival analysis" or "time-to-event" AND "medicine" or "healthcare". Ironically, this submission itself meets neither of these criteria. This suggests both the need for a more robust search strategy and possibly the need for a more descriptive title for this submission. It would also be helpful if it was clearer from the title that this is a review.
Questions and suggestions for the authors:
1. In section "Fine-tuning: Adjusting LLMs for the task - Limitations": are there any conclusions to be drawn from this? The findings seem contradictory.
2. Prompting: Querying in natural language - Strengths. Can you elaborate? The main point given is that these methods are the most novel, which is not a strength per se.
3. I found the following statement confusing: "However, multiple risk scores are rarely evaluated due to the prohibitive cost of extracting the required covariates x_i, which are often present in patients’ unstructured health records." What are the multiple risk scores being referenced here?
4. You encourage researchers to "compare LLMs strategies on private sources using our implementation." Can you clarify how this works? Is the data private, and if so, how do other researchers use and present it?
5. What is the current state of the GitHub? Can it be used yet? Can you provide any results for the methods that are implemented so far? I also understand that the intent is partially to use this conference to gather feedback for its improvement.
With some clarification on the nature of the provided evaluation framework, I think the paper meets the threshold for acceptance in its current state, but I hope the authors will take this feedback into account, as it could be a stronger paper without too much effort.
rating: 5
confidence: 4 |
ZLUsZ52ibx | Review of Language Models for Survival Analysis | [
"Vincent Jeanselme",
"Nikita Agarwal",
"Chen Wang"
] | By learning statistical relations between words, Large Language Models (LLMs) have presented the capacity to capture meaningful representations for tasks beyond the ones they were trained for. LLMs' widespread accessibility and flexibility have attracted interest among medical practitioners, leading to extensive exploration of their utility in medical prognostic and diagnostic applications. Our work reviews LLMs' use for survival analysis, a statistical tool for estimating the time to an event of interest and, consequently, medical risk. We propose a classification of LLMs' modelling strategies and adaptations to survival analysis, detailing their limitations and strengths. Due to the absence of standardised guidelines in the literature, we introduce a framework to assess the efficacy of diverse LLM strategies for survival analysis. | [
"Survival analysis; Risk Estimate; Review"
] | https://openreview.net/pdf?id=ZLUsZ52ibx | G2ORN0qToL | review | 1,708,553,397,336 | ZLUsZ52ibx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission23/Reviewer_qHUF"
] | title: Leveraging Language Models for Risk Estimation in Medical Prognostic and Diagnostic Applications
review: The paper discusses leveraging Large Language Models (LLMs) for survival analysis in medical prognostic and diagnostic applications. It explores methodologies for estimating medical risk, with a focus on survival analysis, addressing the challenges posed by censoring in medical studies. The document reviews current LLM strategies for survival analysis, detailing their limitations and strengths, and proposes an open-source implementation for comparing these strategies. It aims to develop evidence-based recommendations for the effective use of LLMs in estimating patient survival outcomes.
**Pros**
- Comprehensive Approach: The paper provides a thorough overview of using LLMs for survival analysis, covering various methodologies and their adaptations for this specific application.
- Practical Contributions: By offering an open-source implementation, the work facilitates practical experimentation and comparison of different LLM strategies for survival analysis.
- Addressing a Crucial Challenge: The paper tackles the significant challenge of censoring in survival analysis, proposing methods to improve modeling under such conditions.
**Cons**
- Generalization of Findings: The paper might benefit from a broader evaluation across different types of LLMs and datasets to ensure the findings' generalizability.
- Detailed Evaluation Framework: While it proposes an open-source implementation for comparison, the paper could provide a more standardized evaluation framework for assessing the performance of LLM strategies.
rating: 7
confidence: 3 |
ZLUsZ52ibx | Review of Language Models for Survival Analysis | [
"Vincent Jeanselme",
"Nikita Agarwal",
"Chen Wang"
] | By learning statistical relations between words, Large Language Models (LLMs) have presented the capacity to capture meaningful representations for tasks beyond the ones they were trained for. LLMs' widespread accessibility and flexibility have attracted interest among medical practitioners, leading to extensive exploration of their utility in medical prognostic and diagnostic applications. Our work reviews LLMs' use for survival analysis, a statistical tool for estimating the time to an event of interest and, consequently, medical risk. We propose a classification of LLMs' modelling strategies and adaptations to survival analysis, detailing their limitations and strengths. Due to the absence of standardised guidelines in the literature, we introduce a framework to assess the efficacy of diverse LLM strategies for survival analysis. | [
"Survival analysis; Risk Estimate; Review"
] | https://openreview.net/pdf?id=ZLUsZ52ibx | 0vnJlnruNv | review | 1,708,410,150,925 | ZLUsZ52ibx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission23/Reviewer_narw"
] | title: The graphs seems not well orgnized, could you please double theck the graphs
review: The paper emphasizes the potential of LLMs in extracting insights from unstructured medical data for survival analysis and risk estimation, proposing a comprehensive approach that incorporates embedding, fine-tuning, and prompting strategies. It also highlights the importance of cross-validation techniques in assessing the generalizability of these methods across different medical institutions.
The graphs seems not well orgnized, could you please double theck the graphs. Besides, the recommendation section seems fragmented, could it be reorganized to summarize it in the conclusion?
rating: 6
confidence: 3 |
Xpt9xzmWhg | A Reflection and Outlook on Clinical Adaption of Large Language Models | [
"Hanyin Wang",
"Chufan Gao",
"Jimeng Sun"
] | The recent advancements in large language models have brought about a significant revolution in various aspects of natural language processing. The emergence of potent open-source LLMs has paved the way for domain-specific fine-tuning within the clinical field. A recent survey comprehensively summarized the latest applications in constructing clinical LLMs, highlighting both their challenges and applications. In this study, we aim to build upon this previous work and provide further in-depth analysis into existing clinical LLMs, with a focus on their domain adaption approaches. Our objective is to stimulate meaningful discussions among participants during the AAAI workshop. We believe that by delving into these aspects, we can contribute to a better understanding of the potential and limitations of clinical LLMs. | [
"Large Language Model",
"Clinical Adaption"
] | https://openreview.net/pdf?id=Xpt9xzmWhg | ZrgKAfrkMh | review | 1,708,644,098,809 | Xpt9xzmWhg | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission19/Reviewer_EtCU"
] | title: Good study focusing a comprehensive analysis of current clinical Large Language Models (LLMs)
review: This paper provides a comprehensive analysis of the current state and future prospects of clinical Large Language Models (LLMs), focusing on domain adaptation strategies, performance comparisons, and the potential impact on healthcare. The following points should be considered:
1. How did the authors ensure the fairness of the results reported in Table 1, given the varying sizes, training data, and training methodologies employed by these models? Especially the differences in the evaluation settings on these two datasets, for example, BioGPT and GatorTronGPT employed prefix-tuning/prompt tuning to evaluate the pretrained LLMs, where the parameters of the LLMs are kept frozen, while PMC-LLaMA and BioMedGPT employed instruction tuning/fine tuning on the target dataset, where the parameters of the LLMs are trainable and may be better adapted to the downstream tasks.
2. The results in Table 1 are informative but could be enhanced, for example, the performance metric should be clarified, the column name "Approach" should be "Training approach", and the unit of parameter size should be clarified (e.g., 70B ->70 Billion). In the " Training data" column, there is some confusion about the unit (e.g., 160K data, 47B tokens). It would be beneficial to add a column to report the computational cost for each model.
3. Could the authors elaborate on the ethical and legal implications of using potentially copyrighted or sensitive patient data in training clinical LLMs?
rating: 6
confidence: 4 |
Xpt9xzmWhg | A Reflection and Outlook on Clinical Adaption of Large Language Models | [
"Hanyin Wang",
"Chufan Gao",
"Jimeng Sun"
] | The recent advancements in large language models have brought about a significant revolution in various aspects of natural language processing. The emergence of potent open-source LLMs has paved the way for domain-specific fine-tuning within the clinical field. A recent survey comprehensively summarized the latest applications in constructing clinical LLMs, highlighting both their challenges and applications. In this study, we aim to build upon this previous work and provide further in-depth analysis into existing clinical LLMs, with a focus on their domain adaption approaches. Our objective is to stimulate meaningful discussions among participants during the AAAI workshop. We believe that by delving into these aspects, we can contribute to a better understanding of the potential and limitations of clinical LLMs. | [
"Large Language Model",
"Clinical Adaption"
] | https://openreview.net/pdf?id=Xpt9xzmWhg | DxJYEAdQAX | review | 1,708,672,941,794 | Xpt9xzmWhg | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission19/Reviewer_cr9S"
] | title: Reject
review: The paper attempts to present an in-depth analysis of the current state and future directions of clinical Large Language Models (LLMs). The authors compare recent clinical LLMs, focusing on medical knowledge injection and domain-specific tuning or pretraining approaches. They provide insights into model performance across various benchmarks, including MedQA and PubMedQA, and discuss the implications of continuous pretraining versus supervised fine-tuning, model size, compute power, and the quality of training data.
**Pros**
1) **Easy to follow**: The paper is easy to follow but has some issues (pointed out in Cons section)
2) **Authors Cover various aspects of Clinical LLMs**:
2a. The training data and approach are well summarised in Table 1.
2b. The Results section provides insightful comments on continuous pretraining versus supervised fine-tuning, model size versus compute power, and the quality of training data.
**Cons**
1) **Evaluation in Table 1**: It's unclear whether the authors conduct the evaluations themselves or if the numbers are reported from respective papers. The table caption states, "For each model, only listed best performance," but it's not clear what "best performance" refers to. Additionally, the metrics used for performance (e.g., accuracy) should be clearly mentioned in the table.
2) **Some excerpts need to be refined:**
2a. The conclusion appears to be very brief.
2b. The discussion on downstream use cases is not comprehensive. Important industrial downstream tasks, such as early prevention of diseases and personalized medicine based on patient history, are not adequately covered.
4) **Novelty:** The paper seems to add little value to existing surveys [1, 2, 3]. Many of the discussed points, such as fine-tuning versus pre-training, training data, results on MedQA and PubMedQA, along with applications of LLMs in downstream use cases, are already extensively covered in the cited survey papers.
5) **Lack of Related Works**: Significant related works [2, 3] are not mentioned, which is a notable omission.
[1] Zhou, H.; Gu, B.; Zou, X.; Li, Y.; Chen, S. S.; Zhou, P.; Liu, J.; Hua, Y.; Mao, C.; Wu, X.; et al. 2023b. A survey of large language models in medicine: Progress, application, and challenge. arXiv preprint arXiv:2311.05112
[2] He, Kai, et al. "A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics." arXiv preprint arXiv:2310.05694 (2023).
[3] Singhal, Karan, et al. "Large language models encode clinical knowledge." Nature 620.7972 (2023): 172-180.
While the authors mention results and discussions insightfully, they lack novelty, and most points exist already in the survey papers pointed out above. Additionally, the missing evaluation details in Table 1 are a concern.
rating: 4
confidence: 4 |
Xpt9xzmWhg | A Reflection and Outlook on Clinical Adaption of Large Language Models | [
"Hanyin Wang",
"Chufan Gao",
"Jimeng Sun"
] | The recent advancements in large language models have brought about a significant revolution in various aspects of natural language processing. The emergence of potent open-source LLMs has paved the way for domain-specific fine-tuning within the clinical field. A recent survey comprehensively summarized the latest applications in constructing clinical LLMs, highlighting both their challenges and applications. In this study, we aim to build upon this previous work and provide further in-depth analysis into existing clinical LLMs, with a focus on their domain adaption approaches. Our objective is to stimulate meaningful discussions among participants during the AAAI workshop. We believe that by delving into these aspects, we can contribute to a better understanding of the potential and limitations of clinical LLMs. | [
"Large Language Model",
"Clinical Adaption"
] | https://openreview.net/pdf?id=Xpt9xzmWhg | 7D5hQyJZ3F | review | 1,708,408,808,170 | Xpt9xzmWhg | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission19/Reviewer_o7is"
] | title: Good review on clinical LLM reasoning benchmarks, and provide insight of training data selection and domain adaptation approaches.
review: This paper explores the use and adaptation of large-scale language modeling (LLM) to the clinical domain. It builds on previous work and provides an in-depth analysis of existing clinical LLMs, focusing on domain adaptation approaches. The study compares various models according to their medical knowledge infusion strategies, including domain-specific adaptation and pre-training from scratch using a medical corpus. The study emphasizes the effectiveness of continuous pre-training and supervised fine-tuning in improving model performance on MedQA and PubMedQA medical benchmarks. The discussion raises important points about the selection of training data, the potential for retrieval enhancement generation, and the need for careful consideration of copyright issues. The discussion also touches on the utility of clinical LLM in downstream applications and the challenges that remain in bridging the gap between clinical needs and academic research.
While there are areas for further research and clarification, the study stimulates meaningful discussion on the potential and limitations of clinical LLMs, paving the way for future advancements.
rating: 7
confidence: 4 |
WIHH0iOOUt | OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models | [
"Anurag Garikipati",
"Jenish Maharjan",
"Navan Preet Singh",
"Leo Cyrus",
"Mayank Sharma",
"Madalina Ciobanu",
"Gina Barnes",
"Qingqing Mao",
"Ritankar Das"
] | LLMs have become increasingly capable at accomplishing a range of specialized-tasks and can be utilized to expand access to medical knowledge. Many researchers have attempted to leverage LLMs for medical applications and a range of medical benchmarks have been developed to test the acuity of these models on healthcare-specific tasks. Most of the LLMs that have been developed for medical applications have involved significant amounts of fine-tuning, leveraging specialized medical data and large amounts of computational power to complete. Additionally, many of the top performing models are proprietary models with limited access to all but a few research groups. However, open-source (OS) models represent a key area of growth for medical LLMs due to significant improvements in performance and to an inherent ability to provide the transparency and compliance required in the healthcare space. We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. Through a series of robust prompt engineering techniques, we found that OpenMedLM delivers SOTA results on 3 common medical LLM benchmarks, surpassing the previous best performing OS models which leveraged extensive fine-tuning. These results demonstrate the ability of OS foundation models to offer strong performance while alleviating the challenges associated with fine-tuning. Our results highlight medical-specific emergent properties in OS LLMs which have not yet been documented elsewhere and showcase the need to understand how else prompt engineering can improve the performance of LLMs for medical applications. | [
"LLMs",
"Medical AI",
"Prompt Engineering",
"Generative AI for Medicine"
] | https://openreview.net/pdf?id=WIHH0iOOUt | gS48DDcpAm | review | 1,708,553,013,373 | WIHH0iOOUt | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission24/Reviewer_irrN"
] | title: Enhancing Medical Question-Answering with Prompt Engineering
review: OpenMedLM proposes a platform employing prompt engineering techniques to optimize the performance of open-source large language models (LLMs) on medical question-answering benchmarks, achieving state-of-the-art (SOTA) results without fine-tuning. Using the Yi 34B model, OpenMedLM surpasses previous SOTA results on MedQA, MedMCQA, PubMedQA, and MMLU medical subsets through few-shot prompting, chain-of-thought (CoT) prompting, and self-consistency strategies. The study emphasizes the potential of prompt engineering in enhancing the capabilities of generalist LLMs for specialized tasks like medical question answering.
**Strengths**
- Innovative Approach: The study introduces a novel use of prompt engineering as a viable alternative to fine-tuning, offering a cost-effective method for enhancing LLM performance in specialized domains.
Comprehensive Evaluation: It evaluates the model across multiple benchmarks, thoroughly assessing its capabilities in medical question-answering tasks.
- Clear Methodology: The methodology, including using few-shot prompting, CoT prompting, and self-consistency, is well-articulated, offering clarity on the process and potential for replication.
**Weaknesses**
- Generalization Concerns: The study's focus on a single LLM (Yi 34B) raises questions about the generalizability of the findings to other models.
- Lack of Comparative Analysis: While it compares OpenMedLM's performance with that of the Meditron model, a broader comparison with more models, especially those using different fine-tuning approaches, could have provided a more comprehensive view of its relative performance.
rating: 6
confidence: 3 |
WIHH0iOOUt | OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models | [
"Anurag Garikipati",
"Jenish Maharjan",
"Navan Preet Singh",
"Leo Cyrus",
"Mayank Sharma",
"Madalina Ciobanu",
"Gina Barnes",
"Qingqing Mao",
"Ritankar Das"
] | LLMs have become increasingly capable at accomplishing a range of specialized-tasks and can be utilized to expand access to medical knowledge. Many researchers have attempted to leverage LLMs for medical applications and a range of medical benchmarks have been developed to test the acuity of these models on healthcare-specific tasks. Most of the LLMs that have been developed for medical applications have involved significant amounts of fine-tuning, leveraging specialized medical data and large amounts of computational power to complete. Additionally, many of the top performing models are proprietary models with limited access to all but a few research groups. However, open-source (OS) models represent a key area of growth for medical LLMs due to significant improvements in performance and to an inherent ability to provide the transparency and compliance required in the healthcare space. We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. Through a series of robust prompt engineering techniques, we found that OpenMedLM delivers SOTA results on 3 common medical LLM benchmarks, surpassing the previous best performing OS models which leveraged extensive fine-tuning. These results demonstrate the ability of OS foundation models to offer strong performance while alleviating the challenges associated with fine-tuning. Our results highlight medical-specific emergent properties in OS LLMs which have not yet been documented elsewhere and showcase the need to understand how else prompt engineering can improve the performance of LLMs for medical applications. | [
"LLMs",
"Medical AI",
"Prompt Engineering",
"Generative AI for Medicine"
] | https://openreview.net/pdf?id=WIHH0iOOUt | aIKUSKhPBa | review | 1,708,198,190,930 | WIHH0iOOUt | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission24/Reviewer_bmaT"
] | title: The paper effectively demonstrates the superiority of various prompting techniques by showcasing their practical applicability to open-source models. However, improvements are needed in paper formatting, figure clarity, wider evaluation, and code release to enhance the paper's overall impact and reproducibility.
review: **Strengths:**
- The authors effectively demonstrate how various prompting techniques can lead to superior performance compared to models finetuned/pretrained on domain-specific data. This experimentation showcases the practical utility of these techniques for open-source models, benefiting the community.
- The paper provides clear and comprehensive background details, making it easy to follow.
**Weaknesses:**
- It is essential for the authors to include an abstract and adhere to the formatting guidelines specified in the AAAI-24 Author Kit.
- The authors have failed to cite papers referenced in Figure 1.
- Figure 2 suffers from clarity issues, possibly due to colour choices. Reproducing results from open-sourced models like Meditron would ensure consistency in the computing environment. The same applies to models such as Med42, Clinical Camel, and PMC-LLaMA.
- The evaluation of the authors’ claims is limited. Experimentation across different model architectures and sizes would provide stronger support for their assertions.
- The lack of code release for their OpenMedLM prompting platform needs to be addressed.
rating: 6
confidence: 5 |
WIHH0iOOUt | OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models | [
"Anurag Garikipati",
"Jenish Maharjan",
"Navan Preet Singh",
"Leo Cyrus",
"Mayank Sharma",
"Madalina Ciobanu",
"Gina Barnes",
"Qingqing Mao",
"Ritankar Das"
] | LLMs have become increasingly capable at accomplishing a range of specialized-tasks and can be utilized to expand access to medical knowledge. Many researchers have attempted to leverage LLMs for medical applications and a range of medical benchmarks have been developed to test the acuity of these models on healthcare-specific tasks. Most of the LLMs that have been developed for medical applications have involved significant amounts of fine-tuning, leveraging specialized medical data and large amounts of computational power to complete. Additionally, many of the top performing models are proprietary models with limited access to all but a few research groups. However, open-source (OS) models represent a key area of growth for medical LLMs due to significant improvements in performance and to an inherent ability to provide the transparency and compliance required in the healthcare space. We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. Through a series of robust prompt engineering techniques, we found that OpenMedLM delivers SOTA results on 3 common medical LLM benchmarks, surpassing the previous best performing OS models which leveraged extensive fine-tuning. These results demonstrate the ability of OS foundation models to offer strong performance while alleviating the challenges associated with fine-tuning. Our results highlight medical-specific emergent properties in OS LLMs which have not yet been documented elsewhere and showcase the need to understand how else prompt engineering can improve the performance of LLMs for medical applications. | [
"LLMs",
"Medical AI",
"Prompt Engineering",
"Generative AI for Medicine"
] | https://openreview.net/pdf?id=WIHH0iOOUt | PgByJbLgqJ | review | 1,708,532,330,072 | WIHH0iOOUt | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission24/Reviewer_duC5"
] | title: Weak Accept
review: The paper introduces innovative prompting techniques and demonstrates their effectiveness in improving the performance of the general language model Yi-34B over a clinically fine-tuned LLM, Meditron 70B, across three out of four medical Q&A benchmarks: MedQA (4 Options), MedMCQA, PubMedQA, and MMLU - Medical.
**Pros**
1) **Clarity of paper**: The paper is well-written and easy to follow. Figure 1. showing LLM performance with time is good.
2) **Good improvements and ablations**: The results with different prompting techniques are well demonstrated in Figure 2 and Table A1.
**Cons**
1) **Unavailability of training dataset for KNN FS CoT (minor)**: Since this requires access to train set samples, it is a costly process, and training data won't always be available for many models. I do understand the point of open source LLMs which the paper advocates. But consider medical domain datasets that are sensitive to release. Hence the authors are not able to show results of Meditron 70B on kNN FS CoT. This is to be discussed in limitations.
2) **Experiment baselines**: The exclusive focus on Meditron 70B as a baseline leaves the comparison somewhat narrow. Including at least one additional medical LLM baseline would offer a broader perspective on the presented techniques' relative performance.
3) **Missing Proper References**: The paper lacks references for the various prompting methods (CoT, KNN prompting, etc.) utilized in the study. These missing references raises following two questions:
4) **Novelty**: : It is unclear whether the use of KNN with training samples is a novel contribution of this paper or if it has been previously used.
5) **How are similar questions selected for kNN FS CoT?**
I didn't find how similar questions to test examples are chosen via KNN in the paper. It is done in which space (embedding space of model/input space). Please explain it properly.
6) **Limitations of kNN FS CoT and other prompting strategies:** The potential for prompting strategies, such as kNN FS CoT, to overemphasize reasoning patterns from similar training samples needs further discussion. Highlighting this and other limitations of the proposed methods would provide a more balanced and comprehensive view of their applicability.
7) The clarity of Figure 2. can be improved with the usage of other shades of colors than green for more clarity.
The paper presents evidence that innovative prompting strategies can enhance the performance of open LLMs compared to fine-tuned counterparts in the medical domain. However, addressing the aforementioned concerns would strengthen the paper.
rating: 6
confidence: 3 |
WIHH0iOOUt | OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models | [
"Anurag Garikipati",
"Jenish Maharjan",
"Navan Preet Singh",
"Leo Cyrus",
"Mayank Sharma",
"Madalina Ciobanu",
"Gina Barnes",
"Qingqing Mao",
"Ritankar Das"
] | LLMs have become increasingly capable at accomplishing a range of specialized-tasks and can be utilized to expand access to medical knowledge. Many researchers have attempted to leverage LLMs for medical applications and a range of medical benchmarks have been developed to test the acuity of these models on healthcare-specific tasks. Most of the LLMs that have been developed for medical applications have involved significant amounts of fine-tuning, leveraging specialized medical data and large amounts of computational power to complete. Additionally, many of the top performing models are proprietary models with limited access to all but a few research groups. However, open-source (OS) models represent a key area of growth for medical LLMs due to significant improvements in performance and to an inherent ability to provide the transparency and compliance required in the healthcare space. We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. Through a series of robust prompt engineering techniques, we found that OpenMedLM delivers SOTA results on 3 common medical LLM benchmarks, surpassing the previous best performing OS models which leveraged extensive fine-tuning. These results demonstrate the ability of OS foundation models to offer strong performance while alleviating the challenges associated with fine-tuning. Our results highlight medical-specific emergent properties in OS LLMs which have not yet been documented elsewhere and showcase the need to understand how else prompt engineering can improve the performance of LLMs for medical applications. | [
"LLMs",
"Medical AI",
"Prompt Engineering",
"Generative AI for Medicine"
] | https://openreview.net/pdf?id=WIHH0iOOUt | 97orriPsGk | review | 1,708,934,515,673 | WIHH0iOOUt | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission24/Reviewer_LwgB"
] | title: Prompt engineering for open-source large language models
review: This paper first demonstrated that prompt engineering can outperform fine-tuning in medical QA for open-source models. I really like this work and definitely think it's important as open-source models are more accessible for clinicians and have less privacy issues. Overall I think this is a good paper presenting important conclusions. My only concern is that this paper heavily relies on public benchmark and lacks the evaluation from the clinician.
rating: 7
confidence: 4 |
P3LOmrZWGR | CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation | [
"Zhihong Chen",
"Maya Varma",
"Jean-Benoit Delbrouck",
"Magdalini Paschali",
"Louis Blankemeier",
"Dave Van Veen",
"Jeya Maria Jose Valanarasu",
"Alaa Youssef",
"Joseph Paul Cohen",
"Eduardo Pontes Reis",
"Emily Tsai",
"Andrew Johnston",
"Cameron Olsen",
"Tanishq Mathew Abraham",
"Sergios Gatidis",
"Akshay S Chaudhari",
"Curtis Langlotz"
] | Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice. Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation. In this work, we present (i) \emph{CheXinstruct} - a large-scale instruction-tuning dataset curated from 28 publicly-available datasets; (ii) \emph{CheXagent} - an instruction-tuned FM capable of analyzing and summarizing CXRs; and (iii) \emph{CheXbench} - a novel benchmark designed to systematically evaluate FMs across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative evaluations and qualitative reviews with five expert radiologists demonstrate that CheXagent outperforms previously-developed general- and medical-domain FMs on CheXbench tasks by up to 97.5\%. | [
"AI in health",
"Foundation Models",
"Chest X-rays"
] | https://openreview.net/pdf?id=P3LOmrZWGR | qOGGEBzseX | review | 1,708,611,312,845 | P3LOmrZWGR | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission27/Reviewer_jwjZ"
] | title: Paper is fine, but is evaluation is lacking upper bounds in terms of expert models.
review: The paper presents a foundation model based on a custom dataset called CheXinstruct. The authors train a BLIP-2 model on the proposed dataset and evaluate on several downstream tasks. For the metrics the authors choose accuracy as stated in table 1, which seems to be ill-chosen in the chest x-ray domain since there is a massive class imbalance. Also in the proposed evaluation benchmark a comparison from the generalist BLIP-2 model based on mistral 7B to expert models specific to the individual tasks as a quasi upper bound is missing. Especially for the classification tasks this might provide some better perspective on the difficulty of the task. Similarly, a simple baseline might be beneficial to get a grasp of the lower bound of each task.
Overall there are some serious concerns regarding the evaluation protocoll of this paper which might be resolved by the use of better suited metrics and comparison models.
rating: 6
confidence: 5 |
P3LOmrZWGR | CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation | [
"Zhihong Chen",
"Maya Varma",
"Jean-Benoit Delbrouck",
"Magdalini Paschali",
"Louis Blankemeier",
"Dave Van Veen",
"Jeya Maria Jose Valanarasu",
"Alaa Youssef",
"Joseph Paul Cohen",
"Eduardo Pontes Reis",
"Emily Tsai",
"Andrew Johnston",
"Cameron Olsen",
"Tanishq Mathew Abraham",
"Sergios Gatidis",
"Akshay S Chaudhari",
"Curtis Langlotz"
] | Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice. Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation. In this work, we present (i) \emph{CheXinstruct} - a large-scale instruction-tuning dataset curated from 28 publicly-available datasets; (ii) \emph{CheXagent} - an instruction-tuned FM capable of analyzing and summarizing CXRs; and (iii) \emph{CheXbench} - a novel benchmark designed to systematically evaluate FMs across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative evaluations and qualitative reviews with five expert radiologists demonstrate that CheXagent outperforms previously-developed general- and medical-domain FMs on CheXbench tasks by up to 97.5\%. | [
"AI in health",
"Foundation Models",
"Chest X-rays"
] | https://openreview.net/pdf?id=P3LOmrZWGR | fMMuYxN1Ga | review | 1,708,574,383,865 | P3LOmrZWGR | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission27/Reviewer_kB8R"
] | title: A Foundation Model for Chest X-Ray Interpretation
review: **Summary**:
The authors present a foundation model for chest x-ray interpretation with three novel components, they present an instruction-tuning dataset, a foundation model trained on this dataset, and a benchmarking tool to evaluate FMs across different CXR datasets. The authors showcase the performance of their FM across multiple tasks, such as Image Perception, Question Answering, and Text Generation. The authors compare their FM against other General and Medical-domain specific FMs across 8 tasks and 7 CXR datasets and indicate a general superiority of the performance of their models across the different tasks.
**Pros**:
- *CheXagent's* performance looks impressive across the different tasks. They are able to demonstrate its utility as a Foundation Model by displaying generally superior performance across 7 tasks that are relevant to an FM in this domain.
- The authors comprehensively describe the steps they took to infuse the underlying LLM with medical and clinical knowledge. The overall architecture of *CheXagent* also appears convincing and matches intuition on their good performance across the different tasks.
- The authors have created a benchmark for evaluating FMs for Chest X-rays using their *CheXbench* benchmark. This is important for reproducibility and extending the evaluation across newer Chest X-rays datasets and Medical FMs.
**Cons**:
- The authors mention they provided an evaluation of the disparities of the mode's performance across demographics, and important considerations for FMs when they envision their FMs being adopted into clinical practice by radiologists.
- I would've liked the authors to provide a list of diseases they are evaluating the models on along with their distributions. It would have been interesting to provide some examples of the model's performance in the appendix.
- I would've liked the authors to make a comment or two on interpretability, and how they think they could extend their work to incorporate measures of explainability and trustworthiness, which is fundamental to the adoption of AI models into clinical practice. Frequently, we run into the issue of developing "black-box" models in the field with not a lot of effort made into explaining how the model makes its decisions.
**Quality**:
The overall quality of the paper is good.
**Originality**:
Even though it appears that some work has been done on building summarization models for Chest X-Rays using Multimodal LLMs like GPT, I believe the authors have done a good job highlighting the novelty of their work in compounding an extensive X-ray dataset, developing a novel FM specific to Chest X-ray and evaluating it across a variety of tasks.
**Significance**:
I believe this can be a significant model in the field of radiology if the authors are able to extend this work to take it out of its current "black-box" setting and integrate some form of explainability or generalization, and comprehensively evaluate how this model's performance differs across different demographics.
**Miscellaneous Comments**:
- Have the authors considered incorporating the Noisy CXR dataset into CheXinstruct? The dataset presents the unique dimension of capturing label noise, and it would be interesting to observe how CheXagent performs under such a scenario.
rating: 7
confidence: 4 |
P3LOmrZWGR | CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation | [
"Zhihong Chen",
"Maya Varma",
"Jean-Benoit Delbrouck",
"Magdalini Paschali",
"Louis Blankemeier",
"Dave Van Veen",
"Jeya Maria Jose Valanarasu",
"Alaa Youssef",
"Joseph Paul Cohen",
"Eduardo Pontes Reis",
"Emily Tsai",
"Andrew Johnston",
"Cameron Olsen",
"Tanishq Mathew Abraham",
"Sergios Gatidis",
"Akshay S Chaudhari",
"Curtis Langlotz"
] | Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice. Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation. In this work, we present (i) \emph{CheXinstruct} - a large-scale instruction-tuning dataset curated from 28 publicly-available datasets; (ii) \emph{CheXagent} - an instruction-tuned FM capable of analyzing and summarizing CXRs; and (iii) \emph{CheXbench} - a novel benchmark designed to systematically evaluate FMs across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative evaluations and qualitative reviews with five expert radiologists demonstrate that CheXagent outperforms previously-developed general- and medical-domain FMs on CheXbench tasks by up to 97.5\%. | [
"AI in health",
"Foundation Models",
"Chest X-rays"
] | https://openreview.net/pdf?id=P3LOmrZWGR | UqOtzzTr9d | review | 1,708,440,138,164 | P3LOmrZWGR | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission27/Reviewer_oVSn"
] | title: Chest X-Ray foundation model
review: In this work the authors bundle a pre-training dataset consisting of several open source cohorts, a vision-language model trained on said dataset, and a suite of benchmarking tasks to compare performance of foundation models in the space of chest X-rays interpretation.
The work is impressive and might have benefited from a longer write-up as with the 2 page constraint it is challenging to thoroughly describe all aspects of the work. I would have liked to see a less succinct explanation of the model and the datasets, in particular how the splitting between training and testing was done. Additionally, a section regarding limitations of the current work and ways into clinical application would have been appreciated.
rating: 9
confidence: 4 |
P3LOmrZWGR | CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation | [
"Zhihong Chen",
"Maya Varma",
"Jean-Benoit Delbrouck",
"Magdalini Paschali",
"Louis Blankemeier",
"Dave Van Veen",
"Jeya Maria Jose Valanarasu",
"Alaa Youssef",
"Joseph Paul Cohen",
"Eduardo Pontes Reis",
"Emily Tsai",
"Andrew Johnston",
"Cameron Olsen",
"Tanishq Mathew Abraham",
"Sergios Gatidis",
"Akshay S Chaudhari",
"Curtis Langlotz"
] | Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice. Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation. In this work, we present (i) \emph{CheXinstruct} - a large-scale instruction-tuning dataset curated from 28 publicly-available datasets; (ii) \emph{CheXagent} - an instruction-tuned FM capable of analyzing and summarizing CXRs; and (iii) \emph{CheXbench} - a novel benchmark designed to systematically evaluate FMs across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative evaluations and qualitative reviews with five expert radiologists demonstrate that CheXagent outperforms previously-developed general- and medical-domain FMs on CheXbench tasks by up to 97.5\%. | [
"AI in health",
"Foundation Models",
"Chest X-rays"
] | https://openreview.net/pdf?id=P3LOmrZWGR | QtOrZCbADE | review | 1,708,647,246,895 | P3LOmrZWGR | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission27/Reviewer_VAKt"
] | title: Foundational model for chest X-ray analysis, including a specialized dataset
review: This work lays the groundwork for the creation of a Foundation Model (FD) on the domain of Chest X-Ray (CXR) images. It proposes a set of datasets (CheXinstruct) containing 5 task categories related to CXR, a FD trained on these datasets (CheXagent) and an evaluation benchmark (CheXbench), in which the model achieves outstanding results. This work is well presented, clearly expressed and is especially relevant with regards to the topic of Foundational Models for medicine. Although it builds on the contributions of many other works (which is properly cited in the text), it is an important contribution nonetheless and will allow further research and advances in the topic. For these reasons, I consider to accept this paper for the symposium.
I would like to include some comments for further clarification or correction of the work.
- When listing the tasks in the construction of the dataset, some tasks might not be familiar for all readers and a short description would be beneficial, e.g., view classification.
- In the model section, “To infuse the model […], we utilize **five** distinct text sources for training”, but then only three are mentioned. Either a clarification or a correction is needed here.
- For training stage 2, training a vision-language bridger, a reference is needed to clarify how this process takes place.
- Regarding the evaluation table, no confidence intervals are given, which makes it harder to compare between models with similar metrics, particularly for single- and multi- disease identification.
- No results are given for any other method in the Findings Summarization task; if it is in the author’s power to do so, it would be beneficial to have results for these methods.
- Regarding the outstanding results in terms of View Classification, a comment on the reasons for this dramatic increase are missing.
rating: 9
confidence: 3 |
OZXzTwP71l | Enhancing Collaborative Medical Outcomes through Private Synthetic Hypercube Augmentation: PriSHA | [
"Shinpei Nakamura Sakai",
"Dennis Shung",
"Jasjeet S Sekhon"
] | Effective collaboration across medical institutions presents a significant challenge, primarily due to the imperative of maintaining patient privacy. Optimal machine learning models in healthcare demand access to extensive, high-quality data to achieve generality and robustness. Yet, typically, medical institutions are restricted to data within their networks, limiting the scope and diversity of information. This limitation is especially pronounced in the case of patients with rare or unique characteristics, resulting in decreased accuracy for this minority group. To address these challenges, our work introduces a framework designed to enhance existing clinical foundation models, Private Synthetic Hypercube Augmentation (PriSHA). We leverage generative models to produce synthetic data, generated from diverse sources, as a means to augment these models while adhering to strict privacy standards. This approach promises to broaden the dataset's scope and improve model performance without compromising patient confidentiality. To our knowledge, our framework is the first synthetic data augmentation framework that merges privacy-preserving tabular data and real data from multiple sources. | [
"Synthetic data",
"distribution shift",
"generative models",
"differential privacy"
] | https://openreview.net/pdf?id=OZXzTwP71l | hkz9SeVxSa | review | 1,708,448,877,836 | OZXzTwP71l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission17/Reviewer_Ar9n"
] | title: Applying generative models to encourage data sharing is very interesting
review: # Review to 17
Strength:
- This paper studies the problem of generating medical data while preserving privacy. This is an interesting and important problem.
- The idea of combining Bayesian optimization and privacy-preserving generative models to generate informative and differentially private data is novel.
Weakness:
- The optimization specification and pseudo-code is a bit hard to follow. Here are some questions/problems:
-- definition of $K$
-- I assume $\mathcal{X}_i$ is the $i$-th dimension of the input space. Is my understanding correct? If yes, please define the notation.
-- When sampling from $[\alpha_i, \alpha_i+\beta_i]$, I assume a uniform distribution is used. Is this correct? If yes, please specify.
- The empirical results are not so impressive.
-- It seems that the data augmentation, both standard and PriSHA, can only improve the AUC marginal (85.30%);
-- When DPGAN is selected, the benefit of including Bayesian optimization, which inevitably increases computation complexity, is not so significant (only 2 out of 8).
Problems
- The benchmark with 88.87% AUC is claimed to be obtained using $D_{AB'}$. Is this distribution $D_{AB}$?
- Could you provide some details of the used dataset? For instance, how many samples are included? What is the ratio of different ethnical groups?
rating: 8
confidence: 3 |
OZXzTwP71l | Enhancing Collaborative Medical Outcomes through Private Synthetic Hypercube Augmentation: PriSHA | [
"Shinpei Nakamura Sakai",
"Dennis Shung",
"Jasjeet S Sekhon"
] | Effective collaboration across medical institutions presents a significant challenge, primarily due to the imperative of maintaining patient privacy. Optimal machine learning models in healthcare demand access to extensive, high-quality data to achieve generality and robustness. Yet, typically, medical institutions are restricted to data within their networks, limiting the scope and diversity of information. This limitation is especially pronounced in the case of patients with rare or unique characteristics, resulting in decreased accuracy for this minority group. To address these challenges, our work introduces a framework designed to enhance existing clinical foundation models, Private Synthetic Hypercube Augmentation (PriSHA). We leverage generative models to produce synthetic data, generated from diverse sources, as a means to augment these models while adhering to strict privacy standards. This approach promises to broaden the dataset's scope and improve model performance without compromising patient confidentiality. To our knowledge, our framework is the first synthetic data augmentation framework that merges privacy-preserving tabular data and real data from multiple sources. | [
"Synthetic data",
"distribution shift",
"generative models",
"differential privacy"
] | https://openreview.net/pdf?id=OZXzTwP71l | XPWb31Jd2b | review | 1,708,610,537,437 | OZXzTwP71l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission17/Reviewer_qny2"
] | title: This paper proposes a data augmentation technique to broaden the dataset scopes and improve model performance without compromising patient confidentiality.
review: Advantages:
1. This paper proposes a novel method, which augments models using only the most predictive hypercube of synthetic data.
2. The motivation of the model is reasonable, and the application is important.
Disadvantages:
1. According to experimental results, the improvement from the model is relatively limited.
2. The author should introduce the concrete implementation process of the article in more detail.
rating: 6
confidence: 3 |
OZXzTwP71l | Enhancing Collaborative Medical Outcomes through Private Synthetic Hypercube Augmentation: PriSHA | [
"Shinpei Nakamura Sakai",
"Dennis Shung",
"Jasjeet S Sekhon"
] | Effective collaboration across medical institutions presents a significant challenge, primarily due to the imperative of maintaining patient privacy. Optimal machine learning models in healthcare demand access to extensive, high-quality data to achieve generality and robustness. Yet, typically, medical institutions are restricted to data within their networks, limiting the scope and diversity of information. This limitation is especially pronounced in the case of patients with rare or unique characteristics, resulting in decreased accuracy for this minority group. To address these challenges, our work introduces a framework designed to enhance existing clinical foundation models, Private Synthetic Hypercube Augmentation (PriSHA). We leverage generative models to produce synthetic data, generated from diverse sources, as a means to augment these models while adhering to strict privacy standards. This approach promises to broaden the dataset's scope and improve model performance without compromising patient confidentiality. To our knowledge, our framework is the first synthetic data augmentation framework that merges privacy-preserving tabular data and real data from multiple sources. | [
"Synthetic data",
"distribution shift",
"generative models",
"differential privacy"
] | https://openreview.net/pdf?id=OZXzTwP71l | WBhUmX7uLW | review | 1,707,878,920,360 | OZXzTwP71l | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission17/Reviewer_a2Rx"
] | title: Official Review
review: The paper proposes a privacy-preserving synthetic data generation technique PriSHA for assisting training clinical foundation models. The DPGAN and PATE-GAN are combined to make the synthetic data generation private, enhancing the privacy protection capability. The experiments show slight improvement of PriSHA over PATEGAN Standard.
Strength:
1. Privacy protection and using synthetic data is very important in healthcare.
2. The combination of existing techniques makes sense.
3. The experimental results are surprisingly good enough.
Opportunity to improve:
1. I am not sure if I buy in the concept that the proposed method can "address distribution shift". To me, it is just combining two data sources, except that one is synthetic. It is quite different from deliberate design of tackling distribution shift. It would connect more dots to federated learning.
2. I understand DPGAN provides noises to protect training data, while PATE-GAN trains student models to generate synthetic data for model training. I am puzzled why the improvement is not over standard DPGAN, but over the PATE-GAN. More importantly, I do not think synthetic data can significantly improve model performance in general, since it is impossible to generate OOD data out of the scope of the existing data distribution to provide new information. This should be not be the main focus of the study.
3. There is no evaluation and real-world cases showing whether the privacy is protected well enough. This is the main motivation of this paper, but it is poorly presented and rarely discussed.
rating: 7
confidence: 3 |
MXRy6bYBfB | EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model | [
"Yuqi Chen",
"Kan Ren",
"Kaitao Song",
"Yansen Wang",
"Yifan Wang",
"Dongsheng Li",
"Lili Qiu"
] | Self-supervised learning has emerged as a highly effective approach in the fields of natural language processing and computer vision. It is also applicable to brain signals such as electroencephalography (EEG) data, given the abundance of available unlabeled data that exist in a wide spectrum of real-world medical applications ranging from seizure detection to wave analysis. The existing works leveraging self-supervised learning on EEG modeling mainly focus on pretraining upon each individual dataset corresponding to a single downstream task, which cannot leverage the power of abundant data, and they may derive sub-optimal solutions with a lack of generalization. Moreover, these methods rely on end-to-end model learning which is not easy for humans to understand. In this paper, we present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data. The pretrained model cannot only learn universal representations on EEG signals with adaptable performance on various downstream tasks but also provide interpretable outcomes of the useful patterns within the data. To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings. Furthermore, we demonstrate how the learned model exhibits transferable anomaly detection performance and provides valuable interpretability of the acquired patterns via self-supervised learning. | [
"EEG Signal",
"Foundation Model",
"Large-Scale Pretrain"
] | https://openreview.net/pdf?id=MXRy6bYBfB | zaz1oj7ylw | review | 1,708,639,323,218 | MXRy6bYBfB | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission1/Reviewer_zQs8"
] | title: Official review of Reviewer
review: ## Summary
In this work, the authors present EEGFORMER, a foundational model for electroencephalography (EEG) data. They present a new pretraining method for EEG data, that works as follows: first, EEG signals are segmented into patches and passed into a Transformer encoder. Then, they apply a vector-quantized model to convert the patch representations into discrete indices, which as subsequently fed into a Transformer decoder, with the objective to reconstruct the input. The authors apply EEGFORMER in 5 downstream tasks, showing good performance and transfer. Moreover, they show that the representations learned can also be highly interpretable.
## Strengths
- The self-supervised approach the authors propose is promising, and has the potential to efficiently utilize the vast amounts of raw EEG data available.
- The idea to encode the EEG signals into quantized vectors can push the model towards learning interpretable representations, e.g. similar signal patches may be mapped into the same quantized encoding, that may then help us interpret predictions, as the authors show in an experiment.
- The empirical evaluations demonstrate good performance on all downstream tasks tested, and seem transferable.
## Weaknesses
- It would be good to further explore the transferability and interpretability of EEGFORMER's representations, by performing further experiments on additional cases and corpora, to verify if the authors' observations truly generalize.
## Overall Assessment
The paper proposes a novel foundational model and pretraining method for EEG data, and shows strong downstream results and promising interpretability of the representations. It has the potential to pave the way for utilizing large amounts of unlabeled EEG data for various downstream tasks.
rating: 7
confidence: 3 |
MXRy6bYBfB | EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model | [
"Yuqi Chen",
"Kan Ren",
"Kaitao Song",
"Yansen Wang",
"Yifan Wang",
"Dongsheng Li",
"Lili Qiu"
] | Self-supervised learning has emerged as a highly effective approach in the fields of natural language processing and computer vision. It is also applicable to brain signals such as electroencephalography (EEG) data, given the abundance of available unlabeled data that exist in a wide spectrum of real-world medical applications ranging from seizure detection to wave analysis. The existing works leveraging self-supervised learning on EEG modeling mainly focus on pretraining upon each individual dataset corresponding to a single downstream task, which cannot leverage the power of abundant data, and they may derive sub-optimal solutions with a lack of generalization. Moreover, these methods rely on end-to-end model learning which is not easy for humans to understand. In this paper, we present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data. The pretrained model cannot only learn universal representations on EEG signals with adaptable performance on various downstream tasks but also provide interpretable outcomes of the useful patterns within the data. To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings. Furthermore, we demonstrate how the learned model exhibits transferable anomaly detection performance and provides valuable interpretability of the acquired patterns via self-supervised learning. | [
"EEG Signal",
"Foundation Model",
"Large-Scale Pretrain"
] | https://openreview.net/pdf?id=MXRy6bYBfB | G3vvMdpO2f | review | 1,708,668,657,867 | MXRy6bYBfB | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission1/Reviewer_YWnc"
] | title: A well written paper that perform their task of making a foundation model capable of solving multiple tasks well and mostly better that state of the art.
review: Clarity:
The authors present a well written paper stating the current litterature and why their work is an advancement in the field of EEG pretrained models stating that their model achieves better performance and is able to generalize across different tasks via finetuning.
Originality: The work combine many state of the art methods in the space of foundation models and apply them to a large scale foundation model trained on EEG data. A left out dataset reserved for validation that has no connnection with the TUH training dataset.
Significance: The performance shows a clear advancement within this space compared to previous supervised and self-supervised models.
Quality: the authors sometimes miss explanations on datasplitting and other common practices that need to be present to ensure. The fundamentals should be included in the script.
Major points
1. Figure 1 should have a better explanation emphasising subfigures a and b
2. the finetuning paradigm is not clearly defined. You should consider explaining the way you constrained updating the weights during end-to-end fine-tuning if you did so to avoid misconceptions.
3. In table 2 you present the AUROC and AUPRC of only a subset of the datasets you have avaliable and you show that with the model architecture you have created there is little difference between traning self-supervised and trainning supervised. I would want you to show the performance on the Neonate dataset to demonstrate the difference between supervised or self-supervised traning.
4. There is very little disscussion of the results and interpretation of for example figure 3 showing the intrepetation of the naïve Bayes model. Here you seem to highlight the areas for seazures but you do not specify any output from the model.
5. You need to specify the datasets used for pre-training, fine-tuning, and testing to ensure that there is no lekage from training to testing
Minor points
1. In table 1, there is little reason to present all the EEGFormer variants as they have very similar performance. I would suggest that you proceed with only the EEGFormerl for simplicity and just state the other tests in the text.
Pros
• The authors find high effectiveness in their model training a codebook to represent EEG with subsequent finetuning to solve interesting tasks such as: abnormal EEG detection, classifying EEG artifacts, classifying EEG slowing events, seizure detection, and neonatal seizures detection.
cons
• Authors do not show the perfomance of finetuning styles for models on all datasets
• Auphors do not provide the way they split the training, test, and validation data giving no indication that the
• The authors presend very little discussion on their results, thus making the intrepetation of their results hard to understand out of the gate.
• The embeddings seem to be highly dependent on large amounts of fine-tuning in order to perform well.
rating: 7
confidence: 4 |
MXRy6bYBfB | EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model | [
"Yuqi Chen",
"Kan Ren",
"Kaitao Song",
"Yansen Wang",
"Yifan Wang",
"Dongsheng Li",
"Lili Qiu"
] | Self-supervised learning has emerged as a highly effective approach in the fields of natural language processing and computer vision. It is also applicable to brain signals such as electroencephalography (EEG) data, given the abundance of available unlabeled data that exist in a wide spectrum of real-world medical applications ranging from seizure detection to wave analysis. The existing works leveraging self-supervised learning on EEG modeling mainly focus on pretraining upon each individual dataset corresponding to a single downstream task, which cannot leverage the power of abundant data, and they may derive sub-optimal solutions with a lack of generalization. Moreover, these methods rely on end-to-end model learning which is not easy for humans to understand. In this paper, we present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data. The pretrained model cannot only learn universal representations on EEG signals with adaptable performance on various downstream tasks but also provide interpretable outcomes of the useful patterns within the data. To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings. Furthermore, we demonstrate how the learned model exhibits transferable anomaly detection performance and provides valuable interpretability of the acquired patterns via self-supervised learning. | [
"EEG Signal",
"Foundation Model",
"Large-Scale Pretrain"
] | https://openreview.net/pdf?id=MXRy6bYBfB | 9aWbpnEFb3 | review | 1,708,624,621,397 | MXRy6bYBfB | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission1/Reviewer_LxqK"
] | title: Interesting work on EEG foundation model
review: This work introduces a self-supervised learning model for EEG data analysis aimed at improving transferability and interpretability. While the approach is innovative, several critical aspects require scrutiny:
1. The concept of leveraging self-supervised learning for EEG data is not novel. The paper must delineate clearly how EEGFormer diverges from and improves upon existing models like BrainBERT or SeqCLR in terms of architecture, learning efficiency, or application to diverse tasks.
2. While the paper asserts that EEGFormer offers interpretable outcomes (which I think it is explainability rather than interpretability), it also falls short of providing a comprehensive framework or quantitative measures for interpretability. The paper should incorporate case studies or comparisons with expert analyses to substantiate these claims.
3. The work needs to show a few-shot learning performance in order to be a foundation model.
rating: 9
confidence: 4 |
IQU5NsX7Mj | Memorize and Rank: Enabling Large Language Models for Medical Event Prediction | [
"Mingyu Derek Ma",
"Yijia Xiao",
"Anthony Cuturrufo",
"Xiaoxuan Wang",
"Wei Wang"
] | Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with techniques like concept embedding, patient records as knowledge graphs, and external knowledge bases, leaving the knowledge obtained through the pretraining of modern Large Language Models untouched. We introduce Mera, a clinical event prediction model that bridges pertaining natural language knowledge with medical code. We apply contrastive learning on a predicted ranking list for task-specialized optimization. With concept memorization through fine-tuning, we equip the LLM with an in-depth understanding to recall the natural language definitions for medical code during inference. Experimental results on MIMIC datasets show that Mera outperforms state-of-the-art models. | [
"Generative LM",
"LLM",
"Diagnosis Prediction"
] | https://openreview.net/pdf?id=IQU5NsX7Mj | ZvIKpaYfmi | review | 1,709,106,357,087 | IQU5NsX7Mj | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission43/Reviewer_RGj5"
] | title: The work introduces a new approach to the field of clinical outcome prediction by using medical coding as input for a transformer-based LM model. The paper primarily focuses on taking a pretrained language model (LM) and fine-tuning it to memorize medical codes, initially to represent diagnoses, and then to learn both causal and temporal relations between visits as well as within visits. The work demonstrates superior performance in two types of prediction tasks across two datasets.
review: Strength:
The method outperfomrs state-of-the-art models: the method yields significantly better performance on diagnois production and heart failture prediction for two MIMIC datasets.
Weak points and suggestions:
1. Missing important related work on bridging ICD code with LLM:
(1) "DRG-LLaMA : tuning LLaMA model to predict diagnosis-related group for hospitalized patients"
(2) "CPLLM: Clinical Prediction with Large Language Models"
2. Unclear writing:
(1) This sentence is confusing to read: “But there is still a significant gap between the primary language, i.e., natural language, with the model’s hidden represntation”
(2) In the 'Performance of Memorization' section, what is the sample size? How many ICD-9 codes were assessed?
3. Presentation error:
(1) Missing citation: "The events are normally presented in medical code format, such as ICD-9 disease codes, with a large candidate space to choose from (13,000 disease candidates in ICD-9) (?)"
(2) Figure 1 is difficult to read and could benefit from additional labeling.
rating: 4
confidence: 3 |
IQU5NsX7Mj | Memorize and Rank: Enabling Large Language Models for Medical Event Prediction | [
"Mingyu Derek Ma",
"Yijia Xiao",
"Anthony Cuturrufo",
"Xiaoxuan Wang",
"Wei Wang"
] | Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with techniques like concept embedding, patient records as knowledge graphs, and external knowledge bases, leaving the knowledge obtained through the pretraining of modern Large Language Models untouched. We introduce Mera, a clinical event prediction model that bridges pertaining natural language knowledge with medical code. We apply contrastive learning on a predicted ranking list for task-specialized optimization. With concept memorization through fine-tuning, we equip the LLM with an in-depth understanding to recall the natural language definitions for medical code during inference. Experimental results on MIMIC datasets show that Mera outperforms state-of-the-art models. | [
"Generative LM",
"LLM",
"Diagnosis Prediction"
] | https://openreview.net/pdf?id=IQU5NsX7Mj | RsZqJ1stnk | review | 1,708,525,291,267 | IQU5NsX7Mj | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission43/Reviewer_xarc"
] | title: Foundation model for medical code prediction by code - definition training.
review: **Summary**
The paper proposes a large language model-based foundation model for medical code prediction tasks. The proposed model is based on LLaMA-2-7B, with further fine-tuning on (1) the medical code and text definition pretext task; (2) the medical code prediction task. The model is tested on MIMIC-III and MIMIC-IV, with a comparison to RNN/CNN-based models and graph-based models.
**Strengths**
- Clear motivation for the "concept memorization" training and "hierarchical contrastive learning".
- I appreciate that the authors include a task formulation section, for readers to better understand the task and data format.
- This part can be further improved if authors can clarify whether ``diagnosis prediction`` is a multi-label classification
- The technical part is easy to follow
**Weaknesses**
- The input sequence perturbation makes sense to me. However, authors do not explicitly mention whether the within-visit order of target visit matters or not.
- During training, will the sequence perturbation also apply to the target visit?
- During evaluation, how the accuracy/precision/recall are calculated? (If target code order is ``996.74, 428.0, 414.1``, and model outputs ``414.1, 428.0, 996.74``)
- Authors mention that transformer-based LMs are widely used in the same task (Paragraph#3 in the Introduction Section). Is there a reason transformers are not included as compared baselines?
**Misc.**
- Missing reference in Paragraph#1 in the Introduction Section.
> 13,000 disease candidates in ICD-9 ``(?)``
Overall, the quality of this paper is great. I would be very interested in seeing authors' responses regarding my concerns.
rating: 6
confidence: 4 |
IQU5NsX7Mj | Memorize and Rank: Enabling Large Language Models for Medical Event Prediction | [
"Mingyu Derek Ma",
"Yijia Xiao",
"Anthony Cuturrufo",
"Xiaoxuan Wang",
"Wei Wang"
] | Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with techniques like concept embedding, patient records as knowledge graphs, and external knowledge bases, leaving the knowledge obtained through the pretraining of modern Large Language Models untouched. We introduce Mera, a clinical event prediction model that bridges pertaining natural language knowledge with medical code. We apply contrastive learning on a predicted ranking list for task-specialized optimization. With concept memorization through fine-tuning, we equip the LLM with an in-depth understanding to recall the natural language definitions for medical code during inference. Experimental results on MIMIC datasets show that Mera outperforms state-of-the-art models. | [
"Generative LM",
"LLM",
"Diagnosis Prediction"
] | https://openreview.net/pdf?id=IQU5NsX7Mj | MvO2gEaRcm | review | 1,708,406,763,963 | IQU5NsX7Mj | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission43/Reviewer_Ls2v"
] | title: Good Adaptation of LLMs for Medical Event Prediction with SOTA Performance
review: ### Summary
This paper proposes a new strategy to solve “medical event prediction”. They motivate the problem, describe its main challenges and propose a solution that addresses these challenges. The proposed method consists of two main steps - (i) medical concept memorization - to adapt the vocabulary of the LLM, and (ii) contrastive learning to capture inter and intra-visit relations in medical events. This method achieves SOTA performance on MIMIC-III and MIMIC-IV datasets.
### Strengths:
- Provides a clear motivation for the problem and outlines the main challenges in solving the problem.
- Novel and effective LLM pre-training tasks are proposed to directly address the above challenges
- A diverse set of baselines are considered is evaluating the proposed method
### Weaknesses:
- No ablation study in experiments to understand which of the pretraining tasks leads to performance gains.
- Does not consider any Clinical LMs as a base model in experimentation. Clinical LMs like BioGPT, MedPaLM may already understand medical vocabulary. This may remove the need for the “memorize” phase in the proposed method.
- Poor readability in certain sections. Eg: explanation of proposed method can be improved by correcting grammatical errors and adding equations/figures.
### Other Feedback:
- Other baselines also evaluate on MIMIC-III and MIMIC-IV. Consider releasing your train/test split and characteristics of the dataset in your next iteration of this work. This could help standardize research in the “medical event prediction” field.
- Since your method is built on top of a generative LM, it could be inherently better than other methods at quickly (with limited data) understanding new medical events. It would be interesting to see experiments on predicting medical events in a few-shot/zero-shot setting.
- The authors noted that some other methods modeled external information as knowledge graphs. There are some advantages to these methods (eg: updating information) and those should be further explored in future work. For example, authors could add pre-training tasks using the knowledge graphs, similar to the way the ontology of medical codes is currently used.
rating: 7
confidence: 4 |
IQU5NsX7Mj | Memorize and Rank: Enabling Large Language Models for Medical Event Prediction | [
"Mingyu Derek Ma",
"Yijia Xiao",
"Anthony Cuturrufo",
"Xiaoxuan Wang",
"Wei Wang"
] | Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with techniques like concept embedding, patient records as knowledge graphs, and external knowledge bases, leaving the knowledge obtained through the pretraining of modern Large Language Models untouched. We introduce Mera, a clinical event prediction model that bridges pertaining natural language knowledge with medical code. We apply contrastive learning on a predicted ranking list for task-specialized optimization. With concept memorization through fine-tuning, we equip the LLM with an in-depth understanding to recall the natural language definitions for medical code during inference. Experimental results on MIMIC datasets show that Mera outperforms state-of-the-art models. | [
"Generative LM",
"LLM",
"Diagnosis Prediction"
] | https://openreview.net/pdf?id=IQU5NsX7Mj | IIBrB2xucg | review | 1,709,020,485,490 | IQU5NsX7Mj | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission43/Reviewer_kvo1"
] | title: Memorize and Rank: Enabling Large Language Models for Medical Event Prediction
review: Authors present an approach to use language model to predict the diagnosis of the next visit of patients given as input a sequence of visits and their associated code-diagnose. They benchmark the new approach with existing approaches and standard datasets form medical settings.
The paper is original in the use of language model technology for next visit prediction, however the reviewer and reader would appreciate more detail in how the system is built and how it learns, specific parameters. What type of model do authors use, what is the architecture, is it an encoder-decoder, only decoder architecture? How is the fine tuning performed?
I would like to suggest that the clarity of the writing could be improved. There are some sections where the language appears a bit unclear or could benefit from a revision for smoother readability. I recommend a thorough review of the English language throughout the manuscript to enhance its overall clarity. This will undoubtedly contribute to a better understanding of the valuable research.
rating: 6
confidence: 4 |
IQU5NsX7Mj | Memorize and Rank: Enabling Large Language Models for Medical Event Prediction | [
"Mingyu Derek Ma",
"Yijia Xiao",
"Anthony Cuturrufo",
"Xiaoxuan Wang",
"Wei Wang"
] | Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with techniques like concept embedding, patient records as knowledge graphs, and external knowledge bases, leaving the knowledge obtained through the pretraining of modern Large Language Models untouched. We introduce Mera, a clinical event prediction model that bridges pertaining natural language knowledge with medical code. We apply contrastive learning on a predicted ranking list for task-specialized optimization. With concept memorization through fine-tuning, we equip the LLM with an in-depth understanding to recall the natural language definitions for medical code during inference. Experimental results on MIMIC datasets show that Mera outperforms state-of-the-art models. | [
"Generative LM",
"LLM",
"Diagnosis Prediction"
] | https://openreview.net/pdf?id=IQU5NsX7Mj | 4oMPCxfySH | review | 1,708,953,832,759 | IQU5NsX7Mj | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission43/Reviewer_Kwhx"
] | title: good paper solving important problem of 'next' medical disease prediction
review: The authors present an approach to predicting the next most likely disease /outcome for a patient given a previous sequence of outcomes. They focus on using the icd10 ontology. The model is an LLM that is further fine tuned on intermediate predictive tasks such as recalling definitions of codes. The authors trial their approach on open source EHR data and show good performance. The paper is a good contribution to an important area of research.
In general I have two criticism to offer. My criticism is on the practicality of using approach in the real world. Based on my reading it assumes codes are accurately recorded by the clinician/clinical care team following a given patient episode. In reality clinicians are not very good at recording this level of data. At what stage of the clinical workflow would this 'model' would this be useful?
Secondly I am not sure how much is lost by discarding the actual clinical text as part of the input data. This approach seems to model the problem as a sequence of codes (or sequence of bag-of-codes). LLMs are designed to interpret free text so why not take advantage of this?
Overall I think the paper presents an interesting and somewhat original solution to the problem. I think the clarity of writing can be improved and ideally some justification for design choices needed to be provided.
rating: 6
confidence: 3 |
H4YTWMehKx | GatorTron and GatorTronGPT: Large Language Models for Clinical Narratives | [
"Cheng Peng",
"Xi Yang",
"Mengxian Lyu",
"Kaleb E Smith",
"Anthony Costa",
"Mona G Flores",
"Jiang Bian",
"Yonghui Wu"
] | Large language models (LLMs) have become the foundational technology for natural language processing (NLP). We introduce clinical LLMs including GatorTron and GatorTronGPT, summarize their applications, highlight the impact on clinical NLP and artificial intelligence (AI) applications, and provide insights in using LLMs for medical AI applications. | [
"Foundation model",
"Electronic health records",
"Natural Language Processing"
] | https://openreview.net/pdf?id=H4YTWMehKx | uyy4E9kBnE | review | 1,708,122,206,255 | H4YTWMehKx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission12/Reviewer_eN2v"
] | title: Review
review: Quality and clarity:
- The writing is generally clear and easy to understand
Originality and significance:
- Gatertron Foundational Model trained on clinical data, highly relevant for this workshop
Weaknesses
- Performance ranking makes it more difficult to visualize the actual performance of the model, more fine-grained evaluation metrics would be appreciated, like accuracy for QA, precision and recall for the extraction tasks, etc
- Where are the GatertronGPT evaluation results?
rating: 7
confidence: 4 |
H4YTWMehKx | GatorTron and GatorTronGPT: Large Language Models for Clinical Narratives | [
"Cheng Peng",
"Xi Yang",
"Mengxian Lyu",
"Kaleb E Smith",
"Anthony Costa",
"Mona G Flores",
"Jiang Bian",
"Yonghui Wu"
] | Large language models (LLMs) have become the foundational technology for natural language processing (NLP). We introduce clinical LLMs including GatorTron and GatorTronGPT, summarize their applications, highlight the impact on clinical NLP and artificial intelligence (AI) applications, and provide insights in using LLMs for medical AI applications. | [
"Foundation model",
"Electronic health records",
"Natural Language Processing"
] | https://openreview.net/pdf?id=H4YTWMehKx | nVlvwTNQEz | review | 1,708,359,355,090 | H4YTWMehKx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission12/Reviewer_1meb"
] | title: Summary paper of previous excellent work in the development and use of the GatorTron family of models for clinical NLP tasks.
review: 1. Summary and contributions: Briefly summarize the paper and its contributions
- The paper outlines the development of billion parameter GatorTron, an encoder only, and GatorTronGPT, a decoder only, transformers.
- They are trained from scratch on billions of text tokens, both from general text corpora as well as de-identified clinical notes. The paper then goes on to summarise the previously published performance of the models on commonly used clinical NLP benchmark tasks.
2. Strengths: Describe the strengths of the work. Typical criteria include: soundness of the claims (theoretical grounding, empirical evaluation), significance and novelty of the contribution, and relevance.
- It is a huge (maybe unique?) technical achievement for an academic institution to be able to train billion parameter LLMs on billions of tokens.
- The performance on various benchmarks is very strong.
- Highly relevant to this symposium
3. Weaknesses: Explain the limitations of this work along the same axes as above.
- The hypothesis of this work is that explicitly including clinical notes in an LLM’s training data aids performance on downstream clinical tasks. Therefore, a direct comparison to state-of-the-art general LLMs (e.g. GPT/Gemini/Llama/Mistral) in Table 1 would strengthen this argument. Especially since general LLMs have been through RLHF, unlike GatorTronGPT
- Not clear why the models were trained from scratch instead of finetuning a generally pre-trained model. Although the training corpus size is large, it is still dramatically smaller than that used in open-source models (such as Llama 2 trillion tokens)
- Does GatorTronGPT exhibit zero/few-shot learning abilities, or must it be finetuned?
- No section on ethical considerations or the ethics board approval. This is highly relevant given the large-scale use of (de-id) clinical notes and the subsequent open-sourcing of GatorTron. This is a great initiative, so it would be helpful to be explicit on how this was achieved so others could do the same in the future.
4. Correctness: Are the claims and method correct? Is the empirical methodology correct?
- “GatorTronGPT provides a solution to solve many diverse information extraction and classification tasks using unified text-to-text learning” seems like a strong statement.
- The methodology, although only briefly outlined due to page limit, is sound.
5. Clarity: Is the paper well written?
- The paper is well written. However, the last paragraph of the conclusion is oddly formatted and does not seem to follow from the previous one.
- The repetition of Peng et al b and c on seemingly the same datasets and tasks but with differing performance rankings in Table 1 is a little confusing.
- Would be useful to note the context window of the models.
6. Relation to prior work: Is it clearly discussed how this work differs from previous contributions?
- Yes, a clear explanation of general-purpose LLMs and their evaluation in biomedical/clinical NLP space. This then goes on to outline the limited specialised training of LLMs in the clinical domain. Although the motivation for this is implicit rather than explicit (training on clinical data may lead to better results on clinical tasks)
7. Reproducibility: Are there enough details to reproduce the major results of this work?
- No, but that is out of scope, given the page limit. The author could add the compute required to train the models.
- Links to the open source model weights or training code would be helpful.
rating: 8
confidence: 3 |
H4YTWMehKx | GatorTron and GatorTronGPT: Large Language Models for Clinical Narratives | [
"Cheng Peng",
"Xi Yang",
"Mengxian Lyu",
"Kaleb E Smith",
"Anthony Costa",
"Mona G Flores",
"Jiang Bian",
"Yonghui Wu"
] | Large language models (LLMs) have become the foundational technology for natural language processing (NLP). We introduce clinical LLMs including GatorTron and GatorTronGPT, summarize their applications, highlight the impact on clinical NLP and artificial intelligence (AI) applications, and provide insights in using LLMs for medical AI applications. | [
"Foundation model",
"Electronic health records",
"Natural Language Processing"
] | https://openreview.net/pdf?id=H4YTWMehKx | lSb2bz0i7i | review | 1,708,496,204,894 | H4YTWMehKx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission12/Reviewer_c1UJ"
] | title: This is a clear review of two significant clinical LLMs though lack of originality as a review.
review: The abstract introduces two clinical LLMs, GatorTron and GatorTronGPT, and their applications in a clinical context. To this end, the authors review the training dataset of the two LLMs and summarize their performances in various evaluation tasks within a medical context. The abstract highlights the necessity of evaluating LLMs specialized for clinical purposes. The detailed description of the applications of the two LLMs provided a comprehensive picture for audience. Some concerns arose when I was reading the abstract:
1. The abstract tries to provide resources to facilitate the use of GatorTron, but I could hardly find lines addressing this claim throughout the abstract.
2. GatorTron and GatorTronGPT were distinguished from general-purpose LLMs in that they were trained with medical corpora. Hence, I’m curious about the performance contrast of a general-purpose model like ChatGPT (as a benchmark), and a specialized medical LLM like GatorTron. A direct comparison such as accuracy scores in medical NLP tasks (if it is possible to quantify their performance) between the two kinds of model will do.
3. The “Conclusion and Discussion” section gives a nice summary about the applications of the two LLMs in clinical contexts. From my personal conjecture, readers may want to know more about the state-of-art news of GatorTron and GatorTronGPT, such as the technical challenges we are dealing with. Put such questions in a bigger picture, audience may be interested in where we are going to, and what we will be able to do with GatorTron and GatorTronGPT in the future. I believe these topics fit well in the discussion session and can contribute to the discussion in the symposium.
In sum, the abstract provides a clear review of GatorTron and GatorTronGPT, a significant work in clinical NLP. As a review of existing studies, this abstract is inevitably short of originality. An additional discussion on the state-of-art and limitations of the present models will make the abstract more beneficial to the conference. Recommended.
rating: 6
confidence: 3 |
H4YTWMehKx | GatorTron and GatorTronGPT: Large Language Models for Clinical Narratives | [
"Cheng Peng",
"Xi Yang",
"Mengxian Lyu",
"Kaleb E Smith",
"Anthony Costa",
"Mona G Flores",
"Jiang Bian",
"Yonghui Wu"
] | Large language models (LLMs) have become the foundational technology for natural language processing (NLP). We introduce clinical LLMs including GatorTron and GatorTronGPT, summarize their applications, highlight the impact on clinical NLP and artificial intelligence (AI) applications, and provide insights in using LLMs for medical AI applications. | [
"Foundation model",
"Electronic health records",
"Natural Language Processing"
] | https://openreview.net/pdf?id=H4YTWMehKx | VdR1WJeCUd | review | 1,708,184,598,935 | H4YTWMehKx | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission12/Reviewer_NU4V"
] | title: GatorTron
review: This paper provides an overview of GatorTron's use in existing literature. While the authors raise concerns about existing LLMs for medical applications, the paper does not delineate the methodological novelty and performance improvement of the presented model. A crucial ambiguity lies in the selection process of the cited studies, leaving readers uncertain whether it constitutes a systematic review of evidence. Overall, the paper offers some use cases of GatorTron within the literature.
rating: 5
confidence: 4 |
Fxi7pRmnYJ | Adapting Segment Anything Models to Medical Imaging via Fine-Tuning without Domain Pretraining | [
"Kevin Li",
"Pranav Rajpurkar"
] | Medical image segmentation is an important task in the context of medical care, with applications in diagnostic and treatment processes. Segment Anything (SAM), a generalist foundation model trained on a corpus of 11 million natural images, demonstrates limited adaptability to the medical domain in a zero-shot prompting context, but shows promise under parameter-efficient fine-tuning. MedSAM is a foundation model which adapts SAM to the medical domain via training on a diverse medical corpus consisting of different modalities (one million images of modality CT, MRI, CXR, etc). In this work, we evaluate the advantage of MedSAM over SAM for medical task-specific adaptation achieved via parameter-efficient fine-tuning. Our results demonstrate that MedSAM does not yield a consistent advantage over SAM in this setting. We also introduce a novel parameter-efficient approach, LoRaMedNet, which combines elements of previous fine-tuning methods to achieve greater flexibility of adaptation for SAM, and find that LoRaMedNet-adapted SAM attains the best performance. The implication of this finding is that generalist models like SAM can achieve superior adaptation to specific medical tasks even when compared to models with medical pre-training. | [
"Segment Anything",
"Medical Pre-training",
"Domain Adaptation"
] | https://openreview.net/pdf?id=Fxi7pRmnYJ | YY7DHdiiiV | review | 1,708,536,310,584 | Fxi7pRmnYJ | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission22/Reviewer_3GuJ"
] | title: Parameter efficient fine-tuning SAM for medical image segmentation
review: Summary of Contributions:
The work proposes a new way to efficiently fine-tune SAM for medical image segmentation, i.e. a custom lightweight ConvNet head after the SAM encoder. The SAM encoder undergoes parameter efficient fine-tuning by using Low Rank Adaptation. The authors claim and demonstrate that fine-tuning SAM is better than fine-tuning MedSAM, i.e. for successful adaptation, medical pretraining is not necessary.
Strengths:
1. The authors challenge the popular belief that foundation models should be pre-trained with data from the domain where they are to fine-tuned, and successfully demonstrate that (in their own words) “generalist models like SAM need not be abandoned in favor of models with medical pre-training”
2. The work experiments with various decoder networks for fine-tuning SAM and MedSAM.
3. The paper is easy to follow and the Block Diagrams are representative of the methods provided in the work.
4. Preprocessing steps, training hyper-paramters, along with memory requirements have been provided.
Weaknesses:
1. The work does not provide quantitative comparisons with specialist models (i.e. models trained from scratch on the given dataset, say UNet, which inspired the Convent decoder)
2. The work only provides fine-tuning results for a single dataset. Ideally two more medical image segmentation dataset (such as BraTS, VerSE, etc.) should be included.
Questions, Suggestions, Comments:
1. What are the input and output dimensions for the images?
2. It is not very clear that how the decoder from UNet is used, more specifically: the UNet decoder uses a prior from the encoder after each upscaling step (in the form of residual connections). What priors (if any) are being used in the proposed decoder ConvNet?
3. Addressing the weakness will definitely improve the quality of the work, and make the author’s claim more substantiated.
rating: 6
confidence: 4 |
Fxi7pRmnYJ | Adapting Segment Anything Models to Medical Imaging via Fine-Tuning without Domain Pretraining | [
"Kevin Li",
"Pranav Rajpurkar"
] | Medical image segmentation is an important task in the context of medical care, with applications in diagnostic and treatment processes. Segment Anything (SAM), a generalist foundation model trained on a corpus of 11 million natural images, demonstrates limited adaptability to the medical domain in a zero-shot prompting context, but shows promise under parameter-efficient fine-tuning. MedSAM is a foundation model which adapts SAM to the medical domain via training on a diverse medical corpus consisting of different modalities (one million images of modality CT, MRI, CXR, etc). In this work, we evaluate the advantage of MedSAM over SAM for medical task-specific adaptation achieved via parameter-efficient fine-tuning. Our results demonstrate that MedSAM does not yield a consistent advantage over SAM in this setting. We also introduce a novel parameter-efficient approach, LoRaMedNet, which combines elements of previous fine-tuning methods to achieve greater flexibility of adaptation for SAM, and find that LoRaMedNet-adapted SAM attains the best performance. The implication of this finding is that generalist models like SAM can achieve superior adaptation to specific medical tasks even when compared to models with medical pre-training. | [
"Segment Anything",
"Medical Pre-training",
"Domain Adaptation"
] | https://openreview.net/pdf?id=Fxi7pRmnYJ | SxfOcbzbHM | review | 1,708,638,710,537 | Fxi7pRmnYJ | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission22/Reviewer_dpBW"
] | title: Review
review: The paper addresses the challenge of adapting generalist foundation models like SAM to medical image segmentation tasks, crucial for diagnostic and treatment processes. Despite initial promise, the specific adaptation model, MedSAM, fails to consistently outperform SAM in medical task-specific contexts. However, the introduction of LoRaMedNet, a novel parameter-efficient approach, demonstrates the potential for enhancing SAM's adaptability and achieving superior performance in medical tasks. The findings underscore the significance of exploring adaptation techniques for generalist models, suggesting that they can excel in specific medical applications even without dedicated medical pre-training.
rating: 9
confidence: 4 |
Fxi7pRmnYJ | Adapting Segment Anything Models to Medical Imaging via Fine-Tuning without Domain Pretraining | [
"Kevin Li",
"Pranav Rajpurkar"
] | Medical image segmentation is an important task in the context of medical care, with applications in diagnostic and treatment processes. Segment Anything (SAM), a generalist foundation model trained on a corpus of 11 million natural images, demonstrates limited adaptability to the medical domain in a zero-shot prompting context, but shows promise under parameter-efficient fine-tuning. MedSAM is a foundation model which adapts SAM to the medical domain via training on a diverse medical corpus consisting of different modalities (one million images of modality CT, MRI, CXR, etc). In this work, we evaluate the advantage of MedSAM over SAM for medical task-specific adaptation achieved via parameter-efficient fine-tuning. Our results demonstrate that MedSAM does not yield a consistent advantage over SAM in this setting. We also introduce a novel parameter-efficient approach, LoRaMedNet, which combines elements of previous fine-tuning methods to achieve greater flexibility of adaptation for SAM, and find that LoRaMedNet-adapted SAM attains the best performance. The implication of this finding is that generalist models like SAM can achieve superior adaptation to specific medical tasks even when compared to models with medical pre-training. | [
"Segment Anything",
"Medical Pre-training",
"Domain Adaptation"
] | https://openreview.net/pdf?id=Fxi7pRmnYJ | DApNmqlTDT | review | 1,708,640,277,460 | Fxi7pRmnYJ | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission22/Reviewer_rHvS"
] | title: Relevant contribution showcasing the advantage of generalist foundation models
review: This paper evaluates the usage of in-domain pretraining for medical segmentation tasks as opposed to the use of a generalist foundation model for segmentation (SAM), and proposes a new PEFT technique: LoRaMedNet. The authors show that the in-domain pretrained model, MedSAM, does not consistently outperform SAM when fine-tuning on medical segmentation data, which brings the usefulness of MedSAM into question. Additionally, their proposed LoRaMedNet outperforms standard PEFT techniques for fine-tuning on medical data. The technique combines standard LoRA with an additional convolutional head. The set of comparisons also seem solid as a workshop contribution, as they compare to a variety of existing fine-tuning baselines on their target dataset -- the Automated Cardiac Diagnosis Challenge (ACDC). This paper seems quite relevant for the workshop and is well-written.
rating: 8
confidence: 2 |
Fxi7pRmnYJ | Adapting Segment Anything Models to Medical Imaging via Fine-Tuning without Domain Pretraining | [
"Kevin Li",
"Pranav Rajpurkar"
] | Medical image segmentation is an important task in the context of medical care, with applications in diagnostic and treatment processes. Segment Anything (SAM), a generalist foundation model trained on a corpus of 11 million natural images, demonstrates limited adaptability to the medical domain in a zero-shot prompting context, but shows promise under parameter-efficient fine-tuning. MedSAM is a foundation model which adapts SAM to the medical domain via training on a diverse medical corpus consisting of different modalities (one million images of modality CT, MRI, CXR, etc). In this work, we evaluate the advantage of MedSAM over SAM for medical task-specific adaptation achieved via parameter-efficient fine-tuning. Our results demonstrate that MedSAM does not yield a consistent advantage over SAM in this setting. We also introduce a novel parameter-efficient approach, LoRaMedNet, which combines elements of previous fine-tuning methods to achieve greater flexibility of adaptation for SAM, and find that LoRaMedNet-adapted SAM attains the best performance. The implication of this finding is that generalist models like SAM can achieve superior adaptation to specific medical tasks even when compared to models with medical pre-training. | [
"Segment Anything",
"Medical Pre-training",
"Domain Adaptation"
] | https://openreview.net/pdf?id=Fxi7pRmnYJ | 8ZJmEvNJVW | review | 1,708,124,346,651 | Fxi7pRmnYJ | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission22/Reviewer_mT6h"
] | title: Review
review: This paper proposes a variant of low rank adaptation method for SAM and MedSAM models to medical imaging tasks.
Strengths
* The effectiveness of the adaptation on SAM is noteworthy, demonstrating significant improvements even without extensive pretraining or finetuning in the medical domain
* The clarity and comprehensiveness of the writing make the methodology, experiments, and results easily understandable
* The comparative analysis is included, benchmarking against current baselines and showcasing the proposed method's advantages clearly.
Weaknesses
* Considering LoRA is already a popular approach as a parameter efficient fine-tuning method, the novelty of method is limited.
rating: 7
confidence: 4 |
EycZiXnkS0 | Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools | [
"Namjun Park",
"Donggeun Ko",
"Dongjun Lee",
"San Kim",
"Jaekwang KIM"
] | Predicting Myocardial Injury after Noncardiac Surgery (MINS) is crucial for enhancing patient outcomes, as these injuries significantly affect health and survival rates. This study presents a novel approach for MINS prediction by transforming and converting collected comprehensive pre-operative and intra-operative medical data into a textual description format compatible with Language Models (LM). We employ a Retrieval Based Disease Prediction (RBD) framework, leveraging advanced natural language processing (NLP) techniques to interpret complex patient information. Our results demonstrate that this LM-based approach outperforms traditional machine learning methods. Furthermore, our findings indicate that leveraging LMs with medical data improves predictive performance and potentially enhances patient care and postoperative outcomes. Moreover, the versatility of the RBD framework in adapting to various medical data types highlights its potential as a transformative tool and a stepping stone in healthcare analytics and predictive diagnostics. | [
"Myocardial Injury after Noncardiac Surgery",
"Language Model",
"Disease Prediction",
"Natural Language Processing"
] | https://openreview.net/pdf?id=EycZiXnkS0 | jNTOQMWCL2 | review | 1,707,853,288,888 | EycZiXnkS0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission39/Reviewer_RopL"
] | title: Official Review
review: This paper explains an approach to convert tabular data into textual descriptions that can be parsed by a language model for comparison against a specific medical definition.
Strengths:
1. The paper presents an interesting idea to supplement LM models with clinical definitions to inform model predictions.
2. Multiple experiments were performed. Results were averaged and standard deviations were included.
Weaknesses:
1. Paper contributions are misleading. Paper says, "approach transforms multifaceted pre-operative and intra-operative tabular data into coherent text-based descriptions, enabling the use of advanced Language Models (LM) for data interpretation." This seems to imply that a model is used to output text descriptions from static data but really these descriptions are developed using medical experts. Also the conclusions say, "The key contribution of our study is to increase the predictive performance of medical data with heavy class im- balances in disease prediction". Class imbalance is not discussed as an objective in the introduction and it unclear how the method was designed for this purpose in mind.
2. More explanation of the RBF framework is needed. What is it? How does your method differ/expand on this approach?
3. Paper contributions as explained do not seem to present a novel methodology. The text and definition are developed by medical experts and then just parsed through an existing LM where the output embeddings are compared using cosine similarity. It seems that the main contribution is the approach to group sentence inputs to a downstream classifier for prediction rather than directly using an LM model fine-tuned for binary classification. Perhaps the embedding approach for grouped sentences is novel? More explanation and a perhaps refocused message is needed.
4. Based on confidence intervals the results do significantly improve over baselines in terms of precision and recall.
5. Explanation of baseline LM training is vague.
6. How was the relevance threshold selected? This should be a hyperparameter that is tuned based on a validation set and not manually selected based on model performance. More explanation is needed here.
Questions:
1. Is the definition of MINS provided to the other baseline LMs? If not, it would be difficult to compare how this approach compares to other baseline LMs and assess whether the downstream classifier is necessary.
2. What is the purpose of the different inputs channels? Is it to create embeddings over a set of grouped sentences rather than individual sentences? Did you experiment with just inputing the embedding over the entire patient description? This would be similar to using an LM binary classifier baseline.
Other:
1. I would recommend emphasizing the value of this approach to adapt to new/changing definitions easy without needing to relabel the data and retrain a model. That is a very valuable approach in healthcare.
rating: 4
confidence: 5 |
EycZiXnkS0 | Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools | [
"Namjun Park",
"Donggeun Ko",
"Dongjun Lee",
"San Kim",
"Jaekwang KIM"
] | Predicting Myocardial Injury after Noncardiac Surgery (MINS) is crucial for enhancing patient outcomes, as these injuries significantly affect health and survival rates. This study presents a novel approach for MINS prediction by transforming and converting collected comprehensive pre-operative and intra-operative medical data into a textual description format compatible with Language Models (LM). We employ a Retrieval Based Disease Prediction (RBD) framework, leveraging advanced natural language processing (NLP) techniques to interpret complex patient information. Our results demonstrate that this LM-based approach outperforms traditional machine learning methods. Furthermore, our findings indicate that leveraging LMs with medical data improves predictive performance and potentially enhances patient care and postoperative outcomes. Moreover, the versatility of the RBD framework in adapting to various medical data types highlights its potential as a transformative tool and a stepping stone in healthcare analytics and predictive diagnostics. | [
"Myocardial Injury after Noncardiac Surgery",
"Language Model",
"Disease Prediction",
"Natural Language Processing"
] | https://openreview.net/pdf?id=EycZiXnkS0 | gGPd1HpiT3 | review | 1,708,732,604,508 | EycZiXnkS0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission39/Reviewer_9CR7"
] | title: Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools
review: Brief Overview of the Paper
The paper presents a novel approach to predicting Myocardial Injury after Non-cardiac Surgery (MINS) using a Retrieval Based Disease (RBD) prediction framework. This method innovatively addresses the challenge of analyzing complex, unstructured clinical data by transforming it into a coherent text description for improved decision-making accuracy. The authors claim their Language Model (LM)-based model outperforms traditional Machine Learning (ML) models, offering promising results that suggest further validation is needed in larger datasets or new clinical scenarios with different disease prevalence.
Quality
Technical Soundness and Description
The methods and accompanying figures presented by the authors are technically sound and well-described, providing a clear overview of the proposed RBD framework and its advantages over traditional ML approaches. The incorporation of various types of clinical data into a unified text-description model showcases a significant advancement in handling and interpreting complex data for disease prediction.
Addressing Data Imbalance:
While the authors acknowledge the challenge of working with an imbalanced dataset due to low disease prevalence, the paper would benefit from a more detailed discussion on how to address this issue. Suggested improvements include implementing resampling strategies, such as oversampling the under-represented disease class, to mitigate the imbalance's impact. Additionally, incorporating Receiver Operating Characteristic (ROC)-Area Under Curve (AUC) comparisons among the models could provide deeper insights into their performance, especially in managing false positives and negatives in an imbalanced dataset context.
Clarity and Justification
The statistical methods used in the paper are clearly explained and justified. The authors' approach to analyzing the data and the results presented are sound, offering a solid foundation for their conclusions.
Recommendation & Significance
This paper introduces a compelling and innovative approach to disease prediction using LM-based models, addressing significant challenges in analyzing unstructured clinical data. While the results are promising, addressing the highlighted areas for improvement, particularly around data imbalance and statistical analysis depth, could significantly enhance the paper's impact and validity. Further validation of the proposed model in broader clinical settings is recommended to substantiate its effectiveness and applicability in real-world scenarios.
Additional Comments
Strengths:
- The novel RBD framework represents a significant innovation in leveraging LM-based models for disease prediction, especially in dealing with complex, unstructured clinical data.
- The initial findings suggesting superior performance of the RBD framework over traditional ML models are promising and warrant further investigation.
Weaknesses:
- The paper lacks a comprehensive strategy to address the issue of data imbalance, which is crucial for validating the model's effectiveness across different clinical scenarios.
- The statistical analysis would benefit from more detailed comparisons between models and a clearer justification for methodological choices, such as the selection of k-values.
Recommendations for Improvement:
To strengthen the paper's statistical analysis, it would be beneficial to include a direct statistical comparison of the models' recall and F-1 scores. This comparison could highlight the proposed model's effectiveness more clearly against traditional ML approaches. Additionally, the rationale behind selecting a k-value of 5 for certain analyses requires further explanation. Expanding on this decision could enhance the reader's understanding of the methodology and its implications for the study's findings.
rating: 8
confidence: 3 |
EycZiXnkS0 | Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools | [
"Namjun Park",
"Donggeun Ko",
"Dongjun Lee",
"San Kim",
"Jaekwang KIM"
] | Predicting Myocardial Injury after Noncardiac Surgery (MINS) is crucial for enhancing patient outcomes, as these injuries significantly affect health and survival rates. This study presents a novel approach for MINS prediction by transforming and converting collected comprehensive pre-operative and intra-operative medical data into a textual description format compatible with Language Models (LM). We employ a Retrieval Based Disease Prediction (RBD) framework, leveraging advanced natural language processing (NLP) techniques to interpret complex patient information. Our results demonstrate that this LM-based approach outperforms traditional machine learning methods. Furthermore, our findings indicate that leveraging LMs with medical data improves predictive performance and potentially enhances patient care and postoperative outcomes. Moreover, the versatility of the RBD framework in adapting to various medical data types highlights its potential as a transformative tool and a stepping stone in healthcare analytics and predictive diagnostics. | [
"Myocardial Injury after Noncardiac Surgery",
"Language Model",
"Disease Prediction",
"Natural Language Processing"
] | https://openreview.net/pdf?id=EycZiXnkS0 | a5Vgb6ETve | review | 1,708,647,226,289 | EycZiXnkS0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission39/Reviewer_YG1g"
] | title: Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools
review: - Myocardial injury after non-cardiac surgery (MINS) is a postoperative complication presented in diverse profile patients and marked by varied symptoms. Thus, its early prediction is a challenge and ML approaches, even heavily utilized, present some difficulties in processing unstructured and imbalanced data.
- Authors propose a framework based on the one presented in https://arxiv.org/pdf/2306.02052.pdf (RBF), called Retrieval Based Disease (RBD) Prediction framework. Their approach transforms multifaceted pre-operative and intra-operative tabular data into coherent text-based descriptions, enabling the use of Language Models (LM) for data interpretation.
- The authors face an imbalanced dataset of MINS, with only 7.9% of the total patients experiencing MINS. They compare their framework with a RandomForest, a XGBoost, a BERT-base and a SciBERT, ClinicalBERT and BioBERT fine-tuned for binary classification. They show that their RBD framework outperformed the ML approaches and the LM ones. Even with an ablation study, RBD-t showed better performance than the others (but the original RBD).
The presented framework is a variation of a pre-existing one (RBF) and combines it with other Languange Model for Biomedical domain (PubMedBERT). However, the methods seem to be properly adapted and a good choice to the problem to be tackled, with the corresponding citation. The framework is correctly presented in Figure 2 and the results of the few experiments presented clearly in Tables 1 and 2.
The presented results show a good improvement compared with the other baselines, which seem to be diverse and relevant enough; first, two widely used ML techniques and then 4 domain specific language models. Even the ablated version of the framework (RBD-t) overperforms the other baselines, despite performing worse than the not ablated RBD.
Even though it uses pre-existing methods, the need of adaption of it to imbalanced datasets is essential and the significance of having early detection techniques without the need of post-operative data is valuable.
Regarding to format and clarity, the work is well presented and clearly structured, having not found major errors in the writing. The trivial next steps are presented as future work, with the aim of making the framework more general by extending it to multi-label datasets.
COMMENT:
- If I am not wrong, the only innovation here is the adaptation of RBF to medical data by using a specific BERT model for biomedical NLP tasks, named PubMedBERT. Right?
- What is the explanation of K=3 showing the best results?
rating: 8
confidence: 3 |
EycZiXnkS0 | Retrieval-Based Disease Prediction for Myocardial Injury after Noncardiac Surgery: Leveraging Language Models as Diagnostic Tools | [
"Namjun Park",
"Donggeun Ko",
"Dongjun Lee",
"San Kim",
"Jaekwang KIM"
] | Predicting Myocardial Injury after Noncardiac Surgery (MINS) is crucial for enhancing patient outcomes, as these injuries significantly affect health and survival rates. This study presents a novel approach for MINS prediction by transforming and converting collected comprehensive pre-operative and intra-operative medical data into a textual description format compatible with Language Models (LM). We employ a Retrieval Based Disease Prediction (RBD) framework, leveraging advanced natural language processing (NLP) techniques to interpret complex patient information. Our results demonstrate that this LM-based approach outperforms traditional machine learning methods. Furthermore, our findings indicate that leveraging LMs with medical data improves predictive performance and potentially enhances patient care and postoperative outcomes. Moreover, the versatility of the RBD framework in adapting to various medical data types highlights its potential as a transformative tool and a stepping stone in healthcare analytics and predictive diagnostics. | [
"Myocardial Injury after Noncardiac Surgery",
"Language Model",
"Disease Prediction",
"Natural Language Processing"
] | https://openreview.net/pdf?id=EycZiXnkS0 | 6ev0EC3xWg | review | 1,708,571,166,856 | EycZiXnkS0 | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission39/Reviewer_PdWD"
] | title: Strong manuscript
review: In reviewing this manuscript, it is clear the authors can intellectually articulate the focus of their study. They were able to explain the rationale of their research as well as their results.
I would encourage the authors to discuss the results of their RBD model in the abstract.” At present, the abstract states their "LM-based approach outperforms traditional machine learning methods" but does not state any results.
Overall, I believe this is a strong work from the authors.
rating: 9
confidence: 4 |
DDImhCbXpo | CycleTrans: a transformer-based clinical foundation model for safer prescription | [
"Yuhan Zheng",
"Xiaotao Lin",
"Kexuan Chen",
"Shengxin Zhu"
] | Deep learning techniques are extensively utilized in prescribing drug combinations, drawing on extensive electronic medical records (EMRs). A prescription assistant may be able to provide immediate guidance on drug combinations for some urgent clinical situations. A well-controlled drug-drug interaction (DDI) rate and high recommendation precision are of great importance for a safe prescription. A lower DDI often implies the set of drug combinations should be as small as possible, which is challenging because EMR prescriptions for certain symptom(s) are often highly noised due to the diversity side symptoms of individuals. We propose a model comprised of cycle transformers (CycleTrans) to handle these challenges. CycleTrans employs cross-attention and transformers, integrates patients' longitudinal EMRs, enhances knowledge representations through the so-called cycle-embedding module, and thus predicts safer and better essential drug combinations for new-coming cases. The new model achieves the state-of-the-art in three dimensions: high precision (89%), low DDI rate (0.34%), and small drug set size (3.02) on the MIMIC-III benchmark dataset, surpassing previous bests of 73%, 5%, and 17 in each dimension, respectively. Such a significant advancement makes a much safer clinic prescription possible. The idea of the cycle transformer we proposed has considerable potential for other domains besides clinics, such as set recommendations, translation, and unsupervised representation learning in knowledge graphs. | [
"Cycle Transformer; drug set recommendation; EMR; clinic foundation models"
] | https://openreview.net/pdf?id=DDImhCbXpo | zJ4v6ygbPe | review | 1,707,876,648,445 | DDImhCbXpo | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission37/Reviewer_Sbmp"
] | title: Official Review
review: The paper proposes a deep learning framework that integrates a cycle transformer inspired by CycleGAN and a DDI loss inspired by 4SDrug to predict the drug combination prescription. The experiment carries out with MIMIC-III dataset, indicating that the proposed model outperforms previous models by large in terms of Precision, DDI rate, and Average number of drugs in the set.
### Strength:
1. The experimental results are promising.
2. The model design seems reasonable.
3. The problem of predicting drug combination is important.
### Opportunity to improve:
1. The reference number of Baseline in Table 1 does not align with other citation styles.
2. The definition of Precision needs clarification. If multiple drugs suit the patient, does hit with any of them count as 1? If it is not the case, it would be very unusual to see the Precision outperforming baselines by this far while maintaining such a small Avg #. It is extremely crucial to clarify this.
3. How is the DDI adjacent matrix defined? Without clarification, it can be very confusing. There may raise concerns if the definition of DDI is actually making sense in terms of medicine and for clinicians.
4. I do not see why L_{EMD} can be jointly added to the loss function, even though WGAN uses it. What are set A and set B, and why is there a need to transfer A to B?
5. Hyperparameters tuning will be very important since there are multiple loss functions. I assume adding appendices to explain how to reproduce the results and/or code would be helpful.
6. I do not think it is a good idea to call it a “foundation model”, since it has so few downstream tasks. It is not necessary to name it a foundation model just to fit in the symposium topic.
rating: 6
confidence: 4 |
DDImhCbXpo | CycleTrans: a transformer-based clinical foundation model for safer prescription | [
"Yuhan Zheng",
"Xiaotao Lin",
"Kexuan Chen",
"Shengxin Zhu"
] | Deep learning techniques are extensively utilized in prescribing drug combinations, drawing on extensive electronic medical records (EMRs). A prescription assistant may be able to provide immediate guidance on drug combinations for some urgent clinical situations. A well-controlled drug-drug interaction (DDI) rate and high recommendation precision are of great importance for a safe prescription. A lower DDI often implies the set of drug combinations should be as small as possible, which is challenging because EMR prescriptions for certain symptom(s) are often highly noised due to the diversity side symptoms of individuals. We propose a model comprised of cycle transformers (CycleTrans) to handle these challenges. CycleTrans employs cross-attention and transformers, integrates patients' longitudinal EMRs, enhances knowledge representations through the so-called cycle-embedding module, and thus predicts safer and better essential drug combinations for new-coming cases. The new model achieves the state-of-the-art in three dimensions: high precision (89%), low DDI rate (0.34%), and small drug set size (3.02) on the MIMIC-III benchmark dataset, surpassing previous bests of 73%, 5%, and 17 in each dimension, respectively. Such a significant advancement makes a much safer clinic prescription possible. The idea of the cycle transformer we proposed has considerable potential for other domains besides clinics, such as set recommendations, translation, and unsupervised representation learning in knowledge graphs. | [
"Cycle Transformer; drug set recommendation; EMR; clinic foundation models"
] | https://openreview.net/pdf?id=DDImhCbXpo | x9XiRMAVSv | review | 1,708,148,896,556 | DDImhCbXpo | [
"everyone"
] | [
"AAAI.org/2024/Spring_Symposium_Series/Clinical_FMs/Submission37/Reviewer_23UG"
] | title: The document introduces CycleTrans, a transformer-based model for safer prescription recommendations, addressing challenges in clinical reasoning and medication prediction, achieving high precision and low drug-drug interaction rates.
review: The paper presents a study on the development of the CycleTrans model for predicting specific medications for patients based on their disease diagnoses. The model incorporates a cycle-embedding module to enhance symptom and drug embeddings, utilizes cross-attention and transformers to integrate patients' longitudinal data, and achieves high clinical precision and low drug-drug interaction (DDI) rate. The study also discusses the need for additional data, ethical concerns, and the unresolved issue of AI explainability in the medical field.
Pros:
The study introduces a novel model, CycleTrans, which addresses the need for precise medication recommendations based on patient diagnoses.
The incorporation of a cycle-embedding module and the use of cross-attention and transformers demonstrate a comprehensive approach to addressing the complexities of medication recommendation in clinical settings.
The model achieves high clinical precision and low DDI rate, indicating its potential for improving patient safety and treatment efficacy.
Cons:
The study acknowledges the need for additional data, particularly recent clinical domain data, to substantiate and validate the findings, indicating potential limitations in the current model's training and evaluation.
Ethical and moral concerns about AI-generated conclusions and the lack of clear AI explainability in the medical field remain unresolved, raising questions about the practical application of the model in real-world clinical settings.
The study does not provide a detailed discussion of potential biases or limitations in the model's predictions, which could impact its practical significance and real-world applicability.
Overall, the study presents a novel approach to medication recommendation in clinical settings, but it also highlights the need for further validation, consideration of ethical concerns, and addressing potential biases in the model's predictions. The work's significance lies in its potential to improve patient care and treatment outcomes, but its practical application may be limited by unresolved ethical and explainability concerns.
rating: 9
confidence: 4 |
Subsets and Splits