---
tags:
- generated_from_trainer
datasets:
- allenai/mslr2022
model-index:
- name: baseline
  results: []
---

# Overview

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [Cochrane](https://github.com/allenai/mslr-shared-task#cochrane-dataset) dataset. The model receives as input the titles and abstracts of up to 25 included studies per example, concatenated with the `"</s>"` token. Global attention is applied to the special start token `"<s>"` and to each document separator token `"</s>"`. The model performs comparably to the results reported in the original paper: [MS2: Multi-Document Summarization of Medical Studies](https://arxiv.org/abs/2104.06486).
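
As a rough illustration (not the authors' exact code), here is a minimal sketch of preparing inputs in this format and generating a summary with 🤗 Transformers; the model identifier, the study texts, and the generation length are placeholders:

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

model_name = "path/to/this/model"  # placeholder: replace with this repo's identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LEDForConditionalGeneration.from_pretrained(model_name)

# Titles and abstracts of the included studies (toy examples),
# concatenated with the document separator token "</s>".
doc_sep_token = "</s>"
studies = [
    "Title A. Abstract of the first included study...",
    "Title B. Abstract of the second included study...",
]
inputs = tokenizer(
    doc_sep_token.join(studies),
    return_tensors="pt",
    truncation=True,
    max_length=16384,
)

# Global attention on the start token "<s>" (position 0) and on
# every document separator token "</s>".
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1
doc_sep_id = tokenizer.convert_tokens_to_ids(doc_sep_token)
global_attention_mask[inputs.input_ids == doc_sep_id] = 1

summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    global_attention_mask=global_attention_mask,
    max_length=256,  # placeholder generation length
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```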

It achieves the following results on the `validation` set:

- Loss: 4.0216
- Rouge1 Fmeasure Mean: 26.3026
- Rouge2 Fmeasure Mean: 6.0324
- Rougel Fmeasure Mean: 18.1513
- Rougelsum Fmeasure Mean: 22.5031
- Bertscore Hashcode: microsoft/deberta-xlarge-mnli_L40_no-idf_version=0.3.11(hug_trans=4.22.0.dev0)-rescaled_fast-tokenizer
- Bertscore F1 Mean: 20.5937 (see the scoring sketch after this list)
- Seed: 42
- Model Name Or Path: allenai/led-base-16384
- Doc Sep Token: `</s>`

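The BERTScore hashcode above encodes the scoring configuration. A minimal sketch of an equivalent setup, assuming the [`bert-score`](https://github.com/Tiiiger/bert_score) package and placeholder candidate/reference summaries:

```python
# A sketch matching the hashcode above: deberta-xlarge-mnli, no IDF,
# rescaled against the baseline. cands/refs are placeholder lists of
# generated and reference summaries.
from bert_score import score

cands = ["Generated summary..."]
refs = ["Reference summary..."]

P, R, F1 = score(
    cands,
    refs,
    model_type="microsoft/deberta-xlarge-mnli",
    lang="en",
    rescale_with_baseline=True,
)
print(f"BERTScore F1 (mean): {F1.mean().item() * 100:.4f}")
```
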
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
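
For reference, a minimal sketch of how these settings map onto 🤗 Transformers `Seq2SeqTrainingArguments`; the output directory is a placeholder, and Adam with the betas and epsilon above is the `Trainer` default:

```python
from transformers import Seq2SeqTrainingArguments

# A minimal sketch of the hyperparameters above; "output_dir" is a
# placeholder. The default optimizer already uses betas=(0.9, 0.999)
# and epsilon=1e-08, so it is not set explicitly here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./baseline",           # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,     # total train batch size: 4 x 4 = 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    fp16=True,                         # mixed precision (native AMP)
    label_smoothing_factor=0.1,
)
```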

### Framework versions

- Transformers 4.22.0.dev0
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1