Runtime error
Update text.py
text.py
CHANGED
@@ -13,7 +13,7 @@ The underlying engines for the Abstractive part are transformer based model BART
 - [Gabriel/xsum_swe](https://huggingface.co/datasets/Gabriel/xsum_swe)
 - [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe)
 
-To see more in depth regarding the training go to model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe). The core idea behind the training procedure is sequential adoption through transfer learning, i.e. multiple phases for fine-tuning a pre-trained model on different datasets. For more information on this topic read: [Sequential Adoption](https://arxiv.org/pdf/1811.01088v2.pdf)
+To see more in depth regarding the training go to model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe). The core idea behind the training procedure is sequential adoption through transfer learning, i.e. multiple phases for fine-tuning a pre-trained model on different datasets. It should be noted that the MT datasets will not teach the model Swedish perfectly, but they give a more ideal basis to further fine-tune on a more domain-specific use case. For more information on this topic read: [Sequential Adoption](https://arxiv.org/pdf/1811.01088v2.pdf)
 """
 
 sum_app_text_tab_2= """
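The sequential adaptation idea in the changed paragraph — carrying the same model's weights through multiple fine-tuning phases, one dataset per phase — can be sketched as a small driver loop. This is a minimal sketch, not the repo's actual training code: the `fit` method and `StubModel` are hypothetical stand-ins (a real run would wrap BART in a Hugging Face `Trainer` and use the `Gabriel/cnn_daily_swe` and `Gabriel/xsum_swe` datasets).

```python
# Schematic of sequential adaptation: fine-tune one model through
# several phases in order, so each phase starts from the weights
# left by the previous one.

def sequential_finetune(model, phases):
    """Run fine-tuning phases sequentially; returns the phase order."""
    history = []
    for name, dataset in phases:
        model.fit(dataset)  # stand-in for one full fine-tuning run
        history.append(name)
    return history

# Hypothetical stub illustrating only the control flow.
class StubModel:
    def __init__(self):
        self.steps = 0
    def fit(self, dataset):
        self.steps += len(dataset)

model = StubModel()
order = sequential_finetune(model, [
    ("Gabriel/cnn_daily_swe", ["doc1", "doc2"]),  # phase 1: news summaries
    ("Gabriel/xsum_swe", ["doc3"]),               # phase 2: extreme summaries
])
```

The point of the loop is that the phases are not independent runs: the intermediate (machine-translated) phase leaves the model in a better starting state for the final, more domain-specific phase.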