Commit 69f98a9 (parent: 604d506) by pszemraj

Update README.md

Files changed (1): README.md (+7, -18)
README.md CHANGED

@@ -3,21 +3,22 @@ license: bsd-3-clause
 base_model: pszemraj/pegasus-x-large-book-summary
 tags:
 - generated_from_trainer
+- synthsumm
 metrics:
 - rouge
-model-index:
-- name: pegasus-x-large-book-summary-synthsumm-16384-v2
-  results: []
 datasets:
 - pszemraj/synthsumm
 pipeline_tag: summarization
+language:
+- en
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # pegasus-x-large-book-summary-synthsumm-16384-v2
 
+This was fine-tuned on a synthetic dataset of curated long-context text paired with `GPT-3.5-turbo-1106` summaries, spanning several domains and including "random" long-context examples from RedPajama, The Pile, etc. Try it in the [gradio demo](https://huggingface.co/spaces/pszemraj/document-summarization).
+
+## Model description
+
 This model is a fine-tuned version of [pszemraj/pegasus-x-large-book-summary](https://huggingface.co/pszemraj/pegasus-x-large-book-summary) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.5481
@@ -27,18 +28,6 @@ It achieves the following results on the evaluation set:
 - Rougelsum: 42.1211
 - Gen Len: 73.9846
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 ## Training procedure
 
 ### Training hyperparameters
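The model name advertises a 16384-token context, so documents beyond that length must be split before summarization. A minimal character-based chunking sketch under that assumption — the `max_chars` and `overlap` values here are illustrative stand-ins for a real tokenizer-based budget, not figures from the card:

```python
def chunk_text(text: str, max_chars: int = 4096, overlap: int = 256) -> list[str]:
    """Split text into overlapping chunks so each piece fits a model's context.

    Character counts only approximate token counts; max_chars and overlap
    are illustrative assumptions, not values taken from the model card.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    step = max_chars - overlap  # advance by less than max_chars to overlap chunks
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += step
    return chunks
```

Each chunk would then be summarized independently (e.g. via a summarization pipeline) and the partial summaries concatenated or summarized again; the overlap keeps sentences that straddle a chunk boundary visible to at least one chunk.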