huangzhii committed
Commit f5b39a9
1 Parent(s): 34ca5d9

Add demo twitter

Files changed (1)
  1. details.py +16 -12
details.py CHANGED
@@ -22,18 +22,22 @@ def app():
     #st.markdown(intro_markdown, unsafe_allow_html=True)
     st.markdown("# Leveraging medical Twitter to build a visual-language foundation model for pathology")
 
-    st.markdown("The lack of annotated publicly available medical images is a major barrier to innovation. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. Here we harness these crowd platforms to curate OpenPath, a large dataset of <b>208,414</b> pathology images paired with natural language descriptions. This is the largest public dataset of pathology images annotated with natural text. We demonstrate the value of this resource by developing PLIP, a multimodal AI with both image and text understanding, which is trained on OpenPath. PLIP achieves state-of-the-art zero-shot and few-shot performance for classifying new pathology images across diverse tasks. Moreover, PLIP enables users to retrieve similar cases by either image or natural language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical data is a tremendous opportunity that can be harnessed to advance biomedical AI.", unsafe_allow_html=True)
-
-    render_svg("resources/SVG/Asset 49.svg")
-    st.caption('An example of a tweet')
-    components.html('''
-    <blockquote class="twitter-tweet">
-    <a href="https://twitter.com/xxx/status/1580753362059788288"></a>
-    </blockquote>
-    <script async src="https://platform.twitter.com/widgets.js" charset="utf-8">
-    </script>
-    ''',
-    height=500)
+
+    col1, col2 = st.columns([4, 1])
+
+    with col1:
+        st.markdown("The lack of annotated publicly available medical images is a major barrier to innovation. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. Here we harness these crowd platforms to curate OpenPath, a large dataset of <b>208,414</b> pathology images paired with natural language descriptions. This is the largest public dataset of pathology images annotated with natural text. We demonstrate the value of this resource by developing PLIP, a multimodal AI with both image and text understanding, which is trained on OpenPath. PLIP achieves state-of-the-art zero-shot and few-shot performance for classifying new pathology images across diverse tasks. Moreover, PLIP enables users to retrieve similar cases by either image or natural language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical data is a tremendous opportunity that can be harnessed to advance biomedical AI.", unsafe_allow_html=True)
+        render_svg("resources/SVG/Asset 49.svg")
+    with col2:
+        st.markdown('<b>Watch our successful image-to-image retrieval via PLIP:</b>', unsafe_allow_html=True)
+        components.html('''
+        <blockquote class="twitter-tweet">
+        <a href="https://twitter.com/ZhiHuangPhD/status/1641899092195565569"></a>
+        </blockquote>
+        <script async src="https://platform.twitter.com/widgets.js" charset="utf-8">
+        </script>
+        ''',
+        height=900)
 
 
     st.markdown("#### PLIP is trained on the largest public vision–language pathology dataset: OpenPath")