Blaise-g committed
Commit ac2835f • Parent: e154b2a

Update app.py

Files changed (1): app.py (+7, -7)
app.py CHANGED
@@ -153,19 +153,19 @@ if __name__ == "__main__":
 
     with demo:
 
-        gr.Markdown("#Automatic summarization of biomedical research papers with neural abstractive methods into a long and comprehensive synopsis or extreme TLDR summary version")
+        gr.Markdown("# Automatic summarization of biomedical research papers with neural abstractive methods into a long and comprehensive synopsis or extreme TLDR summary version")
         gr.Markdown(
             "A rather simple demo using an ad-hoc fine-tuned LongT5 or LED model to summarize long biomedical articles (or any scientific text related to the biomedical domain) into a detailed or extreme TLDR version."
         )
         with gr.Column():
 
-            gr.Markdown("## Load Inputs & Select Parameters")
+            gr.Markdown("## Load Text Inputs & Select Generation Parameters")
             gr.Markdown(
-                "Enter text below in the text area. The text will be summarized [using the selected parameters](https://huggingface.co/blog/how-to-generate). Optionally load an example below or upload a file."
+                "Enter text below in the text area. The text will be summarized [using the selected text generation parameters](https://huggingface.co/blog/how-to-generate). Optionally load an available example below or upload a file."
             )
             with gr.Row():
                 model_size = gr.Radio(
-                    choices=["tldr", "sumpubmed"], label="Model Variant", value="large"
+                    choices=["tldr", "sumpubmed"], label="Model Variant", value="sumpubmed"
                 )
                 num_beams = gr.Radio(
                     choices=[2, 3, 4],
@@ -229,7 +229,7 @@ if __name__ == "__main__":
         with gr.Column():
             gr.Markdown("## Generate Summary")
             gr.Markdown(
-                "Summary generation should take approximately 1-2 minutes for most settings."
+                "Summary generation should take approximately less than 2 minutes for most settings."
             )
             summarize_button = gr.Button(
                 "Summarize!",
@@ -251,9 +251,9 @@ if __name__ == "__main__":
         gr.Markdown("---")
 
         with gr.Column():
-            gr.Markdown("## About the Model")
+            gr.Markdown("## About the Models")
             gr.Markdown(
-                "- [This model](https://huggingface.co/pszemraj/led-large-book-summary) is a fine-tuned checkpoint of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the [BookSum dataset](https://arxiv.org/abs/2105.08209). The goal was to create a model that can generalize well and is useful in summarizing lots of text in academic and daily usage."
+                "- [Blaise-g/longt5_tglobal_large_sumpubmed](https://huggingface.co/Blaise-g/longt5_tglobal_large_sumpubmed) is a fine-tuned checkpoint of [Stancld/longt5-tglobal-large-16384-pubmed-3k_steps](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) on the [SumPubMed dataset](https://aclanthology.org/2021.acl-srw.30/). [Blaise-g/longt5_tglobal_large_scitldr](https://huggingface.co/Blaise-g/longt5_tglobal_large_scitldr) is a fine-tuned checkpoint of [Blaise-g/longt5_tglobal_large_sumpubmed](https://huggingface.co/Blaise-g/longt5_tglobal_large_sumpubmed) on the [Scitldr dataset](https://arxiv.org/abs/2004.15011). The goal was to create two models capable of handling the complex information contained in long biomedical documents and subsequently producing scientific summaries according to one of the two possible levels of conciseness: 1) A long explanatory synopsis that retains the majority of domain-specific language used in the original source text. 2) A one sentence long, TLDR style summary."
             )
             gr.Markdown(
                 "- The two most important parameters-empirically-are the `num_beams` and `token_batch_length`. However, increasing these will also increase the amount of time it takes to generate a summary. The `length_penalty` and `repetition_penalty` parameters are also important for the model to generate good summaries."
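The final note in the diff names the demo's key generation parameters (`num_beams`, `token_batch_length`, `length_penalty`, `repetition_penalty`). As a rough sketch of how such settings are typically assembled for a Hugging Face `model.generate()` call and how a long document might be split into token batches, the helper names and default values below are assumptions for illustration, not the app's actual code:

```python
# Hypothetical sketch, not taken from the commit: illustrates how the UI
# parameters could map onto transformers-style generation kwargs and how
# `token_batch_length` chunking of a long input might work.

def chunk_tokens(token_ids, token_batch_length):
    """Split a long token-id sequence into fixed-size batches so each
    batch fits within the model's attention window."""
    return [
        token_ids[i : i + token_batch_length]
        for i in range(0, len(token_ids), token_batch_length)
    ]

def build_generation_kwargs(num_beams=2, length_penalty=1.0, repetition_penalty=3.5):
    # These keys correspond directly to arguments of transformers'
    # model.generate(); the default values here are illustrative only.
    return {
        "num_beams": num_beams,
        "length_penalty": length_penalty,
        "repetition_penalty": repetition_penalty,
        "early_stopping": True,
    }

# Example: a 10-token document split with token_batch_length=4
batches = chunk_tokens(list(range(10)), 4)
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch would then be summarized independently with the chosen kwargs, which is why raising `num_beams` or `token_batch_length` increases generation time: more beams per batch, or more tokens attended to per batch.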