---
datasets:
- reddit
metrics:
- rouge
---
This model is [Distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) finetuned on the [reddit dataset](https://huggingface.co/datasets/reddit).
Example usage:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Load the finetuned model and tokenizer
tokenizer = BartTokenizer.from_pretrained("NielsV/distilbart-cnn-6-6-reddit")
model = BartForConditionalGeneration.from_pretrained("NielsV/distilbart-cnn-6-6-reddit")

input_text = "..."  # The text you want summarized

# Tokenize the text, generate a summary and decode the result
inputs = tokenizer(input_text, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, max_length=60)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# The string summary now contains the TL;DR
```
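Alternatively, the model can be loaded through the Transformers `pipeline` API. This is a minimal sketch; the generation parameters shown are illustrative, not tuned values from this repository:
```python
from transformers import pipeline

# Load the finetuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="NielsV/distilbart-cnn-6-6-reddit")

input_text = "..."  # The text you want summarized
# Generation parameters here mirror the example above; adjust as needed
result = summarizer(input_text, num_beams=2, min_length=0, max_length=60, truncation=True)
print(result[0]["summary_text"])
```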
For more information, check out [this repository](https://github.com/VerleysenNiels/arxiv-summarizer).