Gabriel committed
Commit 42905cc
1 Parent(s): 44c0ccc

Update text.py

Files changed (1):
  text.py +7 -9
text.py CHANGED
@@ -1,14 +1,9 @@
 sum_app_text_tab_1= """
-## The Summarization Task
-
-The goal of text summarization is to extract or generate a concise and accurate summary of a given document while preserving the key information found in the original text. Summarization methods are either extractive or abstractive. An extractive method does what it sounds like: it concatenates the most important sentences or paragraphs without interpreting their meaning, and it never creates new phrases. If you presented a page of text to an extractive model, it would simply act as a text “highlighter”. Abstractive summarization, by contrast, generates new text that tries to capture the meaning of the page in a condensed form, putting words together in a meaningful way and including the most important facts found in the text.
-
-![alt text](file/EX_VS_ABS.png)
+## The Summarization Task
+The goal of text summarization is to extract or generate a concise and accurate summary of a given document while preserving the key information found in the original text. Summarization methods are either extractive or abstractive. An extractive method does what it sounds like: it concatenates the most important sentences or paragraphs without interpreting their meaning, and it never creates new phrases. If you presented a page of text to an extractive model, it would simply act as a text “highlighter”. Abstractive summarization, by contrast, generates new text that tries to capture the meaning of the page in a condensed form, putting words together in a meaningful way and including the most important facts found in the text.
+![alt text](file/EX_VS_ABS.png)
 
-
-"""
-
-sum_app_text_tab_2= """ ## Abstractive vs Extractive
+## Abstractive vs Extractive
 
 The underlying engine for the abstractive part is BART, a transformer-based sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The starting point is the pre-trained checkpoint KBLab/bart-base-swedish-cased (link), which provides general knowledge about the language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:
 
@@ -16,6 +11,9 @@ The underlying engine for the abstractive part is BART, a transformer-based sequence-to-sequence model
 - [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe)
 
 For more detail on the training, see the model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe).
+"""
+
+sum_app_text_tab_2= """
 
 The core idea behind the training procedure is sequential adaptation through transfer learning, i.e. multiple phases of fine-tuning a pretrained model on different datasets. The figure below illustrates how the skill level of the model increases at each step:
 ![alt text2](file/BART_SEQ.png)
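
To make the extractive/abstractive distinction in the updated text concrete, here is a toy sketch (not part of this commit) of the “highlighter” behaviour of extractive summarization: sentences are scored by simple word frequency and the top-scoring ones are copied verbatim, so no new phrasing is ever produced. The function name and scoring scheme are purely illustrative.

```python
# Toy illustration of extractive summarization (illustrative only, not from this repo):
# score sentences by word frequency and copy the top-scoring ones verbatim.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    word_freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(word_freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    chosen = set(scored[:n_sentences])
    # Emit the selected sentences in their original order; nothing is rewritten.
    return " ".join(s for s in sentences if s in chosen)
```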
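
The abstractive model linked above, Gabriel/bart-base-cnn-xsum-swe, should be loadable with the standard `transformers` summarization pipeline. A minimal sketch, assuming the checkpoint is available on the Hub and that default generation settings are acceptable:

```python
# Minimal sketch: abstractive summarization with the fine-tuned Swedish BART model
# named in the text, via the transformers summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-xsum-swe")

article = "..."  # a Swedish news article or other long text
result = summarizer(article, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```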
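
The sequential-adaptation procedure described in the last paragraph could be outlined roughly as below. This is a hedged sketch, not the actual training script: the column names, hyperparameters, and phase list are assumptions (only Gabriel/cnn_daily_swe is visible in this diff; the second dataset bullet falls outside the hunk).

```python
# Hedged sketch of phase-by-phase fine-tuning (not the actual training code):
# the same model object is fine-tuned on one dataset after another, so each phase
# starts from the weights produced by the previous one.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "KBLab/bart-base-swedish-cased"  # pre-trained starting point named in the text
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    # Column names "document" and "summary" are assumptions for illustration.
    inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

# Only Gabriel/cnn_daily_swe is visible in this diff; any further phases would be appended here.
phases = ["Gabriel/cnn_daily_swe"]

for step, name in enumerate(phases, start=1):
    train_set = load_dataset(name, split="train").map(preprocess, batched=True)
    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(output_dir=f"phase_{step}", num_train_epochs=1),
        train_dataset=train_set,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
```

Because the same `model` object is reused across phases, each phase continues from the previous phase's weights, which is the stepwise increase in skill that the BART_SEQ figure illustrates.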