d0r1h committed
Commit 6a62f6e
1 Parent(s): 1536934

Update README.md

Files changed (1): README.md +40 -0
README.md CHANGED
---
license: apache-2.0
datasets: ilc
tags:
- summarization
---

# Longformer Encoder-Decoder (LED) fine-tuned on ILC

This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [ILC](https://huggingface.co/datasets/d0r1h/ILC) dataset.

As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, and Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base), since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.
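
The copying trick is easy to see with plain tensors. Below is a toy sketch, with a random matrix standing in for the real *bart-base* weights (the shapes match bart-base: 1024 positions, hidden size 768); it illustrates the idea, not the exact conversion script:

```python
import torch

# stand-in for bart-base's learned position embeddings: 1024 positions x 768 dims
bart_pos_emb = torch.randn(1024, 768)

# "copied 16 times": tile along the position axis to cover 16K positions
led_pos_emb = bart_pos_emb.repeat(16, 1)
print(led_pos_emb.shape)  # torch.Size([16384, 768])

# every 1K block of positions starts from the same learned embeddings;
# fine-tuning then adapts them to the longer context
assert torch.equal(led_pos_emb[:1024], led_pos_emb[1024:2048])
```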

To summarize a document with this checkpoint:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# run on GPU if one is available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

checkpoint = "d0r1h/led-base-ilc"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
                                              return_dict_in_generate=True).to(device)

case = "......."  # the (long) legal document to summarize

input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device)

# LED expects a global attention mask; global attention on the first
# token is the usual choice for summarization
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1

sequences = model.generate(input_ids,
                           global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences,
                                 skip_special_tokens=True)
```

## Evaluation results

When the model is used for summarizing ILC documents (10 samples), it achieves the following results:

| Model        | ROUGE-1 F | ROUGE-1 P | ROUGE-2 F | ROUGE-2 P | ROUGE-L F | ROUGE-L P |
|:------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| led-base-ilc |  **42**   |  **47**   |  **22**   |  **24**   |  **39**   |  **44**   |
| led-base     |    3      |    39     |    1      |    21     |    3      |    37     |
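
The F and P columns report ROUGE f-measure and precision, scaled to 0-100. A minimal sketch of how per-sample scores like these can be computed with the `rouge_score` package (the texts below are placeholders, not ILC data):

```python
from rouge_score import rouge_scorer

# placeholder strings; in practice these are the gold summary and the model output
reference  = "gold summary of the case ..."
prediction = "summary generated by the model ..."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)

for name, s in scores.items():
    # F and P in the table above correspond to fmeasure and precision
    print(f"{name}: f={100 * s.fmeasure:.0f} p={100 * s.precision:.0f}")
```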

[This notebook](https://colab.research.google.com/github/d0r1h/Notebooks/blob/main/NLP/Summarization/led_base_ilc_summarization.ipynb) shows how LED can be used effectively for downstream tasks such as summarization.