Arashasg committed on
Commit cb93ec6
1 Parent(s): 9eacb85

Update README.md

Files changed (1)
  1. README.md +33 -24
README.md CHANGED
@@ -1,3 +1,29 @@
+ ---
+ language:
+ - fa
+ tags:
+ - Wikipedia
+ - Summarizer
+ - bert2bert
+ task_categories:
+ - summarization
+ - text generation
+ task_ids:
+ - news-articles-summarization
+ license:
+ - apache-2.0
+ multilinguality:
+ - monolingual
+ datasets:
+ - pn-summary
+ - XL-Sum
+ metrics:
+ - rouge-1
+ - rouge-2
+ - rouge-l
+ ---
+
+
  # WikiBert2WikiBert
  BERT language models can be employed for summarization tasks. WikiBert2WikiBert is an encoder-decoder transformer model initialized with the weights of the Persian WikiBERT model, a BERT language model fine-tuned on Persian Wikipedia. After initialization, the model is trained for five epochs on the PN-Summary and Persian BBC datasets.

@@ -33,27 +59,10 @@ input = 'your input comes here'
  summary = generate_summary(input)
  ```

- ---
- language:
- - fa
- tags:
- - Wikipedia
- - Summarizer
- - bert2bert
- task_categories:
- - summarization
- - text generation
- task_ids:
- - news-articles-summarization
- license:
- - apache-2.0
- multilinguality:
- - monolingual
- datasets:
- - pn-summary
- - XL-Sum
- metrics:
- - rouge-1
- - rouge-2
- - rouge-l
- ---
+ ## Evaluation
+ I set aside 5 percent of PN-Summary for evaluating the model. The ROUGE scores of the model are as follows:
+
+ | ROUGE-1 | ROUGE-2 | ROUGE-L |
+ | ------- | ------- | ------- |
+ | 38.97%  | 18.42%  | 34.50%  |
+
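
The card describes WikiBert2WikiBert as a BERT-to-BERT encoder-decoder warm-started from Persian WikiBERT weights. Below is a minimal sketch of that initialization step using the Hugging Face Transformers `EncoderDecoderModel` API; the checkpoint name is an illustrative assumption, since the exact WikiBERT checkpoint the author used is not stated in the changed lines.

```python
# Sketch: warm-start a bert2bert encoder-decoder from a single BERT checkpoint.
# "TurkuNLP/wikibert-base-fa-cased" is an assumed, illustrative checkpoint id.
from transformers import BertTokenizerFast, EncoderDecoderModel

wikibert = "TurkuNLP/wikibert-base-fa-cased"  # assumption, not taken from the card
tokenizer = BertTokenizerFast.from_pretrained(wikibert)

# Both encoder and decoder start from the same BERT weights; the decoder gains
# cross-attention layers and a language-modeling head on top.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(wikibert, wikibert)

# BERT has no native seq2seq special tokens, so set them before training/generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```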
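The usage snippet in the diff ends with a call to `generate_summary`, whose definition lies outside the changed lines. A minimal sketch of such a helper, assuming the model is published under a hub id like `Arashasg/WikiBert2WikiBert` (hypothetical here) and follows the standard `EncoderDecoderModel` layout:

```python
# Sketch of a generate_summary helper; model id and generation settings are assumptions.
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "Arashasg/WikiBert2WikiBert"  # assumed hub id, may differ in practice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

def generate_summary(text: str) -> str:
    # Tokenize the article, truncating to the encoder's maximum input length.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # Beam search is a common choice for abstractive summarization.
    output_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=128,
        num_beams=4,
        early_stopping=True,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

summary = generate_summary("your input comes here")
```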
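The new Evaluation section reports ROUGE-1/2/L on a 5 percent held-out slice of PN-Summary, but does not say which scorer produced the numbers. As a rough illustration only, the `evaluate` library can compute the same metrics:

```python
# Illustration: score generated summaries with ROUGE via the `evaluate` library.
# The scorer and tokenization the author actually used are not specified in the card.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["model-generated summary ..."],
    references=["reference summary ..."],
)
print(scores)  # keys include rouge1, rouge2, rougeL
```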