---

# German BERT2BERT fine-tuned on MLSUM DE for summarization
## Model

[bert-base-german-cased](https://huggingface.co/bert-base-german-cased) (BERT checkpoint)

## Dataset

**MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five languages: French, **German**, Spanish, Russian, and Turkish. Together with English articles from the popular CNN/Daily Mail dataset, the collected data form a large-scale multilingual corpus that enables new research directions for the text summarization community. The dataset authors report cross-lingual comparative analyses based on state-of-the-art systems, which highlight existing biases and motivate the use of a multilingual dataset.

[MLSUM de](https://huggingface.co/datasets/viewer/?dataset=mlsum)
## Results

| Set  | Metric                   | Score |
|------|--------------------------|-------|
| Test | Rouge2 - mid - precision | ****  |
| Test | Rouge2 - mid - recall    | ****  |
| Test | Rouge2 - mid - fmeasure  | ****  |
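ROUGE-2 measures bigram overlap between a generated summary and its reference; the `mid` values reported above are point estimates from the standard scorer's bootstrap aggregation. As a rough illustration of what precision, recall, and F-measure mean here, a minimal pure-Python sketch (whitespace tokenization only, not the official scorer):

```python
from collections import Counter

def rouge2(reference: str, prediction: str):
    """Toy ROUGE-2: precision/recall/F1 over overlapping bigram counts.

    Whitespace tokenization only; the official scorer also normalizes
    and optionally stems tokens before counting bigrams.
    """
    def bigrams(text):
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))

    ref, pred = bigrams(reference), bigrams(prediction)
    if not ref or not pred:
        return 0.0, 0.0, 0.0
    # Clipped overlap: each bigram counts at most as often as it appears in both
    overlap = sum((ref & pred).values())
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge2("der hund schläft im garten", "der hund schläft")
print(p, r, f)  # 1.0 0.5 0.666...
```

Here every bigram of the short prediction occurs in the reference (precision 1.0), but only half of the reference's bigrams are covered (recall 0.5).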
## Usage

```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bert2bert_shared-german-finetuned-summarization'
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)

def generate_summary(text):
    # Tokenize, truncating the input to the encoder's 512-token limit
    inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    # Generate summary token ids and decode them back to text
    output = model.generate(input_ids, attention_mask=attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)

text = "Your text here..."
generate_summary(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain