Gabriel committed on
Commit 62df880
1 Parent(s): 7944963

Update text.py

Files changed (1): text.py (+6 −6)
text.py CHANGED
@@ -3,11 +3,11 @@ sum_app_text_tab_1= """
 
 The goal of text summarization is to condense long documents into summaries while maintaining the key information found in the original document. This is one of the most challenging NLP tasks, as it requires a range of abilities, such as understanding long passages and generating coherent text that captures the main topics of a document. When done well, however, text summarization is a powerful tool that can speed up various business processes by relieving domain experts of the burden of reading long documents in detail.
 
-Text summarization methods can either be used as an extractive or abstractive model. An Extractive method does what it sounds like, it concatenates different important sentences or paragraphs without understanding the meaning of those parts. Extractive summarization does not create any new word phrases. For instance, if you presented a page of text to an extractive model, it would just act as a text “highlighter”. However, Abstractive summarization generates text in a fashion that tries to guess the meaning in a summarised way of the page of text it is presented. It would put words together in a meaningful way and add the most important fact found in the text.
+Text summarization methods fall into two categories: extractive and abstractive. An extractive method does what it sounds like: it selects and concatenates the most important sentences or paragraphs without interpreting their meaning, and it creates no new phrases. If you presented a page of text to an extractive model, it would simply act as a text “highlighter”, see Figure 1. Abstractive summarization, by contrast, generates new text that tries to capture the meaning of the page it is given, putting words together in a coherent way that conveys the most important facts found in the text.
 
 <figure>
 <img src="file/EX_VS_ABS.png" alt="EX_VS_ABS" style="width:100%">
-<figcaption style="text-align: center;">Fig.1 - The two different approaches to text summarization: Extractive and Abstractive.</figcaption>
+<figcaption style="text-align: center;">Figure 1 - The two different approaches to text summarization: extractive and abstractive.</figcaption>
 </figure>
 
 <h3><center> Abstractive Model </center></h3>
@@ -23,11 +23,11 @@ To see more in depth regarding the training go to model card: [Gabriel/bart-base
 sum_app_text_tab_2= """
 <h2><center> 🤗 </center></h2>
 
-The figure below illustrates how the skill level of the model increases at each step:
+Figure 2 below illustrates how the skill level of the model increases at each step:
 
 <figure>
 <img src="file/BART_SEQ.png" alt="BART_SEQ" style="width:100%">
-<figcaption style="text-align: center;">Fig.1 - A model progression between on 3 aspects during sequential adoption: domain language, task and language..</figcaption>
+<figcaption style="text-align: center;">Figure 2 - Model progression on three aspects during sequential adaptation: domain language, task, and language.</figcaption>
 </figure>
 
 The main benefits of transfer learning include saving resources and improving efficiency when training new models, so feel free to adapt this model to your own problem!
@@ -38,10 +38,10 @@ The extractive models for this app are using sentence-transformer models, which
 
 <figure>
 <img src="file/Lex_rank.png" alt="Lex_rank" style="width:100%">
-<figcaption style="text-align: center;">Fig.3 - The right similarity graphs connection have been filtered with a corresponding threshold.</figcaption>
+<figcaption style="text-align: center;">Figure 3 - In the right-hand similarity graph, the connections have been filtered with a corresponding threshold.</figcaption>
 </figure>
 
-The figure above showcase how LexRank formats similarity graphs based on all possible sentence combinations sentence similarity from the vector embeddings. Notice that the most "recommended" sentences that are extracted (the right graph) are derived from a threshold value which filters "weaker" connections in the similarity graph.
+Figure 3 above shows how LexRank builds a similarity graph from the pairwise sentence similarities computed on the vector embeddings. Notice that the most "recommended" sentences, the ones extracted (the right-hand graph), are determined by a threshold value that filters out "weaker" connections in the similarity graph.
 For more information on this topic read: [LexRank](https://www.aaai.org/Papers/JAIR/Vol22/JAIR-2214.pdf)
 """
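
For context on the abstractive tab, this is a minimal sketch of how such a model is typically invoked through the Hugging Face `pipeline` API. The checkpoint name `facebook/bart-large-cnn` is a public stand-in, not the app's own fine-tuned model (for that, see the model card linked in the text):

```python
# Minimal abstractive summarization sketch. Hedged: the app loads its own
# fine-tuned BART checkpoint; "facebook/bart-large-cnn" is a public stand-in.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "The goal of text summarization is to condense long documents into "
    "summaries while preserving the key information found in the original "
    "text. When done well, it can speed up business processes by relieving "
    "domain experts of the burden of reading long documents in detail."
)

# max_length/min_length bound the generated summary in tokens;
# do_sample=False keeps generation deterministic.
result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```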
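The sequential-adaptation idea in Figure 2 amounts to repeatedly fine-tuning the previous step's checkpoint on the next objective. The sketch below shows one such step with `Seq2SeqTrainer`; the base checkpoint, toy dataset, and hyperparameters are illustrative assumptions, not the training setup from the model card:

```python
# One sequential-adaptation step (hedged sketch): load the previous step's
# checkpoint and fine-tune it on the next task, here summarization.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/bart-base"  # stand-in for the previous step's model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy document/summary pairs; a real run would use a full summarization corpus.
data = Dataset.from_dict({
    "document": ["A long article about NLP ...", "Another long article ..."],
    "summary": ["Short NLP summary.", "Another short summary."],
})

def preprocess(batch):
    # Documents become encoder inputs, summaries become decoder labels.
    inputs = tokenizer(batch["document"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = data.map(preprocess, batched=True, remove_columns=["document", "summary"])

args = Seq2SeqTrainingArguments(output_dir="bart-summarizer-step", num_train_epochs=1)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # the resulting checkpoint feeds the next step in the chain
```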
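On the extractive side, the LexRank description (sentence embeddings, a thresholded similarity graph, ranking by connectivity) can be sketched in a few lines. This is a simplified degree-centrality variant; full LexRank runs power iteration on the normalized graph. The model name and threshold are illustrative, not the app's exact configuration:

```python
# Simplified LexRank-style extraction sketch: embed sentences, build a
# thresholded similarity graph, rank sentences by degree centrality.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

sentences = [
    "Text summarization condenses long documents into short summaries.",
    "Extractive methods select the most important sentences verbatim.",
    "Abstractive methods generate new text that captures the main ideas.",
    "My cat enjoys sleeping in the sun.",
]

# Cosine similarity between all sentence pairs (embeddings are L2-normalized).
emb = model.encode(sentences, normalize_embeddings=True)
sim = emb @ emb.T

# Keep only "strong" edges, as in the right-hand graph of Figure 3.
threshold = 0.3  # illustrative value
adj = (sim >= threshold).astype(float)
np.fill_diagonal(adj, 0.0)

# Sentences with more above-threshold connections rank higher.
scores = adj.sum(axis=1)
top = np.argsort(-scores)[:2]
summary = [sentences[i] for i in sorted(top)]
print(" ".join(summary))
```

Sentences with many above-threshold neighbours sit at the centre of the graph in Figure 3, and those are the ones extracted into the summary.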