Gabriel committed on
Commit 66508d3
1 Parent(s): 7b9fa2e

Update text.py

Files changed (1)
text.py +7 -7
text.py CHANGED
@@ -8,14 +8,14 @@ sum_app_text_tab_1= """
 
 """
 
-sum_app_text_tab_2= """
-## Abstractive vs Extractive
-
-The underlying engines for the Abstractive part are transformer based model BART, a sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The BART-model was pre-trained by KBLab/bart-base-swedish-cased (link) to learn general knowledge about language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:
-- Gabriel/cnn_daily_swe (link)
-- Gabriel/xsum_swe (link)
+sum_app_text_tab_2= """ ## Abstractive vs Extractive
+
+The underlying engine for the Abstractive part is BART, a transformer-based sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The BART model was pre-trained as KBLab/bart-base-swedish-cased (link) to learn general knowledge about language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:
+
+- [Gabriel/xsum_swe](https://huggingface.co/datasets/Gabriel/xsum_swe)
+- [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe)
 
-To see more in depth regarding the training go to link.
+For more depth on the training, see the model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe).
 
 The core idea behind the training procedure is sequential adaptation through transfer learning, i.e. multiple phases of fine-tuning a pretrained model on different datasets. The figure below illustrates how the skill level of the model increases at each step:
 
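
For context, a minimal sketch of how the abstractive model described in the text above could be loaded and used with the Transformers summarization pipeline. The model id is the one linked in the model card; the sample text and generation settings are placeholders, not the app's actual configuration:

```python
# Minimal sketch (not the app's actual code): load the fine-tuned Swedish BART
# model referenced above and produce an abstractive summary of one article.
from transformers import pipeline

# Model id taken from the model card linked in sum_app_text_tab_2.
summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-xsum-swe")

article = "..."  # placeholder: a Swedish news article to summarize

# max_length/min_length are illustrative values, not the app's settings.
result = summarizer(article, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```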
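
And a rough sketch of the "sequential adaptation" idea from the last paragraph: each phase starts from the checkpoint produced by the previous one. The dataset column names, hyperparameters, and training settings here are assumptions for illustration only; the linked model card describes the actual procedure:

```python
# Rough sketch of sequential adaptation through transfer learning: the model
# trained in each phase becomes the starting checkpoint for the next phase.
# Column names and hyperparameters are assumptions, not the real recipe.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "KBLab/bart-base-swedish-cased"  # phase 0: pretrained Swedish BART
phases = [
    ("Gabriel/cnn_daily_swe", "article", "highlights"),  # assumed column names
    ("Gabriel/xsum_swe", "document", "summary"),         # assumed column names
]
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

for dataset_id, text_col, summary_col in phases:
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    raw = load_dataset(dataset_id)

    def preprocess(batch):
        # Tokenize the source documents and the target summaries.
        inputs = tokenizer(batch[text_col], max_length=512, truncation=True)
        labels = tokenizer(text_target=batch[summary_col], max_length=128, truncation=True)
        inputs["labels"] = labels["input_ids"]
        return inputs

    tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)
    output_dir = f"bart-{dataset_id.split('/')[-1]}"
    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(output_dir=output_dir, num_train_epochs=1),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    trainer.save_model(output_dir)
    checkpoint = output_dir  # next phase resumes from this phase's weights
```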