Update README.md
README.md
CHANGED
@@ -1,58 +1,48 @@
 ---
-license: apache-2.0
 tags:
 - generated_from_keras_callback
 model-index:
-- name: assamim/
 results: []
 ---
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->
- … (a run of removed lines whose content is not preserved in this view)
-The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
-- training_precision: float32
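For reference, the removed hyperparameter block above corresponds roughly to the following Keras setup. This is a minimal sketch, assuming the card was produced by a TensorFlow fine-tuning script using `transformers`' `AdamWeightDecay` and the `t5-small` starting checkpoint named in the updated card below; the actual training script is not part of this commit.

```python
# Minimal sketch (not part of the original card): re-creating the optimizer recorded
# in the removed hyperparameter block above, using transformers' TF utilities.
from transformers import TFAutoModelForSeq2SeqLM, AdamWeightDecay

# Assumption: fine-tuning starts from the t5-small checkpoint referenced in the new card.
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")

optimizer = AdamWeightDecay(
    learning_rate=2e-5,      # 'learning_rate': 2e-05 with 'decay': 0.0, i.e. a constant rate
    beta_1=0.9,              # 'beta_1'
    beta_2=0.999,            # 'beta_2'
    epsilon=1e-7,            # 'epsilon'
    amsgrad=False,           # 'amsgrad'
    weight_decay_rate=0.01,  # 'weight_decay_rate'
)

# 'training_precision: float32' is Keras' default, so no mixed-precision policy is set.
# Compiling without a loss lets the model fall back to its internal loss computation.
model.compile(optimizer=optimizer)
```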
-### Training results
-| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
-|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
-| 2.7206     | 2.4015          | 29.5818      | 7.8964       | 22.6985      | 22.7069         | 18.84         | 0     |
 ### Framework versions

-- Transformers 4.19.
 - TensorFlow 2.8.2
 - Datasets 2.2.2
-- Tokenizers 0.12.1
 ---
 tags:
 - generated_from_keras_callback
+- Summarization
+- T5-Small
+datasets:
+- Xsum
 model-index:
+- name: assamim/mt5-pukulenam-summarization
 results: []
 ---
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->
+# assamim/mt5-pukulenam-summarization
+This model is a fine-tuned version of [T5-Small](https://huggingface.co/t5-small) on the [XSUM](https://huggingface.co/datasets/xsum) dataset.
+## Using this model in `transformers` (tested on 4.19.2)
+```python
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+import re
+
+news = """
+Anggota Unit Perlindungan Rakyat Kurdi di kota Rabia, pada perbatasan Irak-Suriah. Pasukan Kurdi Irak dilaporkan sudah menguasai kembali kota Rabia meskipun banyak korban jatuh. Pejabat senior Kurdi Irak mengatakan pasukan Kurdi Peshmerga mencatat kemajuan lewat serangan dini hari di Rabia. Sementara itu, milisi ISIS berusaha memukul mundur pasukan Kurdi Suriah di bagian lain perbatasan. Hal ini terjadi saat koalisi pimpinan Amerika terus melanjutkan serangan udara terhadap sasaran ISIS di Suriah dan Irak. Hari Selasa (30 September) dilaporkan juga terjadi serangkaian serangan bom di ibu kota Irak, Baghdad dan kota suci Syiah, Karbala. Dalam perkembangan terpisah, sejumlah tank Turki berada di bukit di sepanjang perbatasan dekat kota Kobane, Suriah setelah sejumlah bom mengenai wilayah Turki saat terjadi bentrokan dengan milisi ISIS dan pejuang Kurdi. Pemerintah Turki diperkirakan akan menyampaikan mosi ke parlemen, agar menyetujui aksi militer terhadap ISIS di Irak dan Suriah.
+"""
+
+tokenizer = AutoTokenizer.from_pretrained("assamim/t5-small-english")
+model = AutoModelForSeq2SeqLM.from_pretrained("assamim/t5-small-english", from_tf=True)
+
+# Collapse newlines and repeated whitespace before tokenizing.
+WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
+
+input_ids = tokenizer.encode(WHITESPACE_HANDLER(news), return_tensors='pt')
+summary_ids = model.generate(input_ids,
+                             min_length=20,
+                             max_length=200,
+                             num_beams=7,
+                             repetition_penalty=2.5,
+                             length_penalty=1.0,
+                             early_stopping=True,
+                             no_repeat_ngram_size=2,
+                             use_cache=True,
+                             do_sample=True,
+                             temperature=0.8,
+                             top_k=50,
+                             top_p=0.95)
+summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
+print(summary_text)
+```
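The snippet above drives `generate()` directly. As a follow-up, the same checkpoint can also be called through the high-level `pipeline` API; the sketch below is not part of the original card and reuses the checkpoint id and the `news` string defined in the snippet above.

```python
# Sketch: equivalent call through the summarization pipeline, reusing `news` and the
# checkpoint id from the snippet above. Not part of the original card.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="assamim/t5-small-english",
    tokenizer="assamim/t5-small-english",
    framework="tf",  # the repository ships TensorFlow weights (hence from_tf=True above)
)

result = summarizer(news, min_length=20, max_length=200, num_beams=7, no_repeat_ngram_size=2)
print(result[0]["summary_text"])
```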
 ### Framework versions

+- Transformers 4.19.2
 - TensorFlow 2.8.2
 - Datasets 2.2.2
+- Tokenizers 0.12.1