Commit 59f1c5c by polieste
1 parent: e17a2e2

Update README.md

Files changed (1)
  1. README.md +6 -8
README.md CHANGED
@@ -1,25 +1,23 @@
 ---
 language: vi
 datasets:
-- cc100
+- Yuhthe/vietnews
 tags:
 - summarization
-
 license: mit
-
 widget:
-- text: "Input text."
+- text: Input text.
 ---
 
-# ViT5-large Finetuned on `vietnews` Abstractive Summarization
+# fastAbs-large Finetuned on `vietnews` Abstractive Summarization
 
 
 
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 
-tokenizer = AutoTokenizer.from_pretrained("VietAI/vit5-large-vietnews-summarization")
-model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/vit5-large-vietnews-summarization")
+tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
+model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large")
 model.cuda()
 
 sentence = "Input text"
@@ -34,4 +32,4 @@ outputs = model.generate(
 for output in outputs:
     line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
     print(line)
-```
+```
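For reference, the updated snippet pieced together end to end looks roughly like the sketch below. The diff omits the middle of the README (the tokenization step and the arguments passed to `model.generate`), so the `return_tensors="pt"` call and the `max_length=256` setting here are illustrative assumptions, not the README's exact values.

```python
# Minimal end-to-end sketch of the usage the updated README documents.
# The tokenization step and the generate() arguments are assumptions,
# since that part of the file is elided in the diff above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("polieste/fastAbs_large")
model = AutoModelForSeq2SeqLM.from_pretrained("polieste/fastAbs_large")
model.cuda()

sentence = "Input text"

# Tokenize and move the input tensors to the same GPU as the model.
encoding = tokenizer(sentence, return_tensors="pt").to("cuda")

# Placeholder generation settings; the README's actual call is not shown in the diff.
outputs = model.generate(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    max_length=256,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```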