---
language:
- en
tags:
- summarization
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
model-index:
- name: facebook/bart-large-cnn
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: cnn_dailymail
      type: cnn_dailymail
      config: 3.0.0
      split: train
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 42.9486
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 20.8149
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 30.6186
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 40.0376
      verified: true
    - name: loss
      type: loss
      value: 2.529000997543335
      verified: true
    - name: gen_len
      type: gen_len
      value: 78.5866
      verified: true
---
# BART (large-sized model), fine-tuned on CNN Daily Mail 

BART model pre-trained on English, and fine-tuned on [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
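
To see this encoder-decoder layout concretely, you can inspect the checkpoint's published configuration with the Transformers library. This is a minimal sketch; the printed values simply reflect whatever the hosted config file contains:

```python
from transformers import BartConfig

# Download and parse the configuration for this checkpoint (a small JSON file).
config = BartConfig.from_pretrained("facebook/bart-large-cnn")

# The config exposes the seq2seq layout: separate encoder and decoder stacks.
print("encoder layers:", config.encoder_layers)
print("decoder layers:", config.decoder_layers)
print("hidden size:", config.d_model)
```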

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
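
If you want to look at the kind of text-summary pairs this checkpoint was fine-tuned on, the `cnn_dailymail` dataset listed in the metadata above (config `3.0.0`) can be loaded with the `datasets` library. A minimal sketch, assuming `datasets` is installed:

```python
from datasets import load_dataset

# Load the CNN/Daily Mail summarization dataset, config 3.0.0 (as listed in the model-index metadata).
dataset = load_dataset("cnn_dailymail", "3.0.0", split="validation")

# Each example pairs a news article with its reference highlights (the summary).
example = dataset[0]
print(example["article"][:500])
print("---")
print(example["highlights"])
```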

## Intended uses & limitations

You can use this model for text summarization. 

### How to use

Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

ARTICLE = """It’s generally prohibitive for IoT devices with restricted computation, memory, radio
bandwidth, and battery resource to execute computational-intensive and latency-sensitive security
tasks especially under heavy data streams [7]. However, most existing security solutions generate
heavy computation and communication load for IoT devices, and outdoor IoT devices such as cheap
sensors with light-weight security protections are usually more vulnerable to attacks than
computer systems.
"""
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
# Prints a list with a single dict holding the generated summary, e.g.:
# [{'summary_text': '...'}]
```
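
You can also load the checkpoint directly with a tokenizer and `BartForConditionalGeneration` and call `generate` yourself. This is a minimal sketch rather than a prescribed recipe; the beam size and length limits are illustrative choices mirroring the pipeline call above:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# Tokenize the article; inputs longer than the model's 1024-token limit are truncated.
inputs = tokenizer(ARTICLE, max_length=1024, truncation=True, return_tensors="pt")

# Beam-search decoding with length limits comparable to the pipeline call above.
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    min_length=30,
    max_length=130,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```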

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and
               Yinhan Liu and
               Naman Goyal and
               Marjan Ghazvininejad and
               Abdelrahman Mohamed and
               Omer Levy and
               Veselin Stoyanov and
               Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
               Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```