Gabriel committed on
Commit: a0a4ba9
1 Parent(s): e729f1b

Update text.py

Files changed (1): text.py (+8, -4)
text.py CHANGED
@@ -1,12 +1,13 @@
  sum_app_text_tab_1= """
- <h3><center> The Summarization Task </center></h3>
+ <h2><center> The Summarization Task </center></h2>

  The goal of text summarization is to condense long documents into summaries while maintaining the key information found in the original document. This is one of the most challenging NLP tasks, as it requires a range of abilities, such as understanding long passages and generating coherent text that captures the main topics of a document. However, when done well, text summarization is a powerful tool that can speed up various business processes by relieving domain experts of the burden of reading long documents in detail.

  Text summarization methods are either extractive or abstractive. An extractive method does what it sounds like: it concatenates important sentences or paragraphs without understanding the meaning of those parts, and it does not create any new phrases. For instance, if you presented a page of text to an extractive model, it would simply act as a text “highlighter”. Abstractive summarization, in contrast, generates new text that tries to capture the meaning of the page it is presented with: it puts words together in a meaningful way and includes the most important facts found in the text.
  ![alt text](file/EX_VS_ABS.png)

- ## Abstractive vs Extractive
+ <h3><center> Abstractive Model </center></h3>
+

  The underlying engine for the abstractive part is the transformer-based model BART, a sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The BART model was pre-trained as KBLab/bart-base-swedish-cased (link) to learn general knowledge about language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:

@@ -17,13 +18,16 @@ To see more in depth regarding the training go to model card: [Gabriel/bart-base
  """

  sum_app_text_tab_2= """
- ## 🤗
+ <h3><center> 🤗 </center></h3>
+
  The core idea behind the training procedure is sequential adaptation through transfer learning, i.e. multiple phases of fine-tuning a pretrained model on different datasets. The figure below illustrates how the skill level of the model increases at each step:
  ![alt text2](file/BART_SEQ.png)

  The main benefits of transfer learning in general include saving resources and improving efficiency when training new models, so feel free to adopt this model for your type of problem!

- The extractive models for this app are using sentence-transformer models, which basically is using a bi-encoder that determines how similar two sentences are. This type of models convert texts into vectors (embedding) that capture semantic information. Additionally, LexRank, an unsupervised graph-based algorithm, is used to determine centrality scores as a post-process step to summarise. The main idea is that sentences "recommend" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it will likely be a sentence of great importance. The importance of this sentence also stems from the importance of the sentences "recommending" it. Thus, to get ranked highly and placed in a summary, a sentence must be similar to many sentences that are in turn also similar to many other sentences.
+ <h3><center> Extractive Model </center></h3>
+
+ The extractive models in this app use sentence-transformer models, which are essentially bi-encoders that determine how similar two sentences are. This type of model converts texts into vectors (embeddings) that capture semantic information. Additionally, LexRank, an unsupervised graph-based algorithm, is used to compute centrality scores as a post-processing step for the summary. The main idea is that sentences "recommend" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it is likely a sentence of great importance. The importance of a sentence also stems from the importance of the sentences "recommending" it. Thus, to be ranked highly and placed in a summary, a sentence must be similar to many sentences that are, in turn, also similar to many other sentences.
  ![alt text3](file/Lex_rank.png)
  """
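For reference, the abstractive engine described in the updated tab text can be exercised with the 🤗 transformers summarization pipeline. This is only a minimal sketch: the checkpoint below is the pre-trained base model named in the text, used as a stand-in because the fine-tuned model card reference is truncated above, and the generation parameters are illustrative.

```python
# Minimal sketch: abstractive summarization with a BART seq2seq model via the
# transformers pipeline. Swap in the fine-tuned checkpoint from the model card;
# the base checkpoint below (mentioned in the text) is only a stand-in.
from transformers import pipeline

checkpoint = "KBLab/bart-base-swedish-cased"  # stand-in, not the fine-tuned summarizer
summarizer = pipeline("summarization", model=checkpoint)

document = "En lång svensk text som ska sammanfattas ..."  # the long input document
result = summarizer(document, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```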
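The sequential, multi-phase fine-tuning that the second tab describes could look roughly like the sketch below, assuming the transformers and datasets training APIs. The dataset names, column names, and hyperparameters are placeholders, not the ones actually used for this model.

```python
# Rough sketch of sequential (multi-phase) fine-tuning: each phase continues
# from the weights produced by the previous phase. Dataset names, column names
# and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

base_checkpoint = "KBLab/bart-base-swedish-cased"  # pre-trained starting point
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

def preprocess(batch):
    # Tokenize source documents and target summaries (column names are assumed).
    inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

# Phase 1 and phase 2 use different labelled datasets, in sequence.
for phase, dataset_name in enumerate(["labelled-dataset-1", "labelled-dataset-2"], start=1):
    train_ds = load_dataset(dataset_name, split="train").map(preprocess, batched=True)
    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(
            output_dir=f"bart-sum-phase-{phase}",
            num_train_epochs=3,
            per_device_train_batch_size=8,
        ),
        train_dataset=train_ds,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    model = trainer.model  # carry the fine-tuned weights into the next phase
```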
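The extractive side, sentence-transformer embeddings plus LexRank-style centrality, can be sketched as follows. The embedding model name is an assumption (a Swedish SBERT checkpoint from KBLab), sentence splitting is deliberately naive, and the score is the simple degree-centrality approximation of LexRank rather than the full power-iteration algorithm.

```python
# Rough sketch of the extractive approach: a bi-encoder turns sentences into
# embeddings, and a LexRank-style centrality score over the sentence-similarity
# graph picks the most "recommended" sentences. The model name is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer, util

def extractive_summary(text: str, num_sentences: int = 3) -> str:
    # Naive sentence splitting, purely for illustration.
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")  # assumed checkpoint
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Cosine similarities form the edge weights of the sentence graph.
    similarity = util.cos_sim(embeddings, embeddings).cpu().numpy()

    # Degree-centrality approximation of LexRank: a sentence that is similar to
    # many other sentences receives a high score.
    centrality = similarity.sum(axis=1)
    top = sorted(np.argsort(-centrality)[:num_sentences])
    return " ".join(sentences[i] for i in top)

print(extractive_summary("Första meningen. Andra meningen. Tredje meningen. Fjärde meningen."))
```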