Commit 51f9983 (parent: c5eee23) by nielsr (HF staff): Add model card

Files changed (1): README.md (+62, -0)
---
license: apache-2.0
language: en
---

# BART (base-sized model)

BART model pre-trained on English. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
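
To make the pre-training objective concrete, here is a toy illustration of the "text infilling" corruption described in the paper, where a span of tokens is replaced by a single mask token and the model learns to reconstruct the original sequence. This is only a sketch, not the actual fairseq noising code:

```python
# Illustrative sketch only: a toy span-corruption function, not BART's real
# noising implementation (which samples span lengths from a Poisson distribution).
import random

def corrupt(tokens, mask_token="<mask>", span_len=2):
    # Pick a random span and collapse it into a single mask token.
    start = random.randrange(0, max(1, len(tokens) - span_len))
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

original = "the quick brown fox jumps over the lazy dog".split()
corrupted = corrupt(original)
print("encoder input :", " ".join(corrupted))  # e.g. "the quick <mask> jumps over the lazy dog"
print("decoder target:", " ".join(original))   # the model is trained to reconstruct this
```
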
BART is particularly effective when fine-tuned for text generation tasks (e.g. summarization, translation), but it also works well for comprehension tasks (e.g. text classification, question answering).

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly intended to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model in PyTorch to extract features from text:

```python
from transformers import BartTokenizer, BartModel

# Load the pre-trained base-sized checkpoint.
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base')

# Tokenize the input text and run it through the model.
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the decoder's last layer, one vector per token.
last_hidden_states = outputs.last_hidden_state
```
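
Since the raw model can be used for text infilling (see "Intended uses" above), here is a minimal sketch of that use case. It is an illustration rather than part of the original card, and it assumes `BartForConditionalGeneration` together with the tokenizer's `<mask>` token:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Assumption: the base-sized checkpoint; other BART checkpoints work the same way.
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')

# Replace a span of the input with the mask token and let the model fill it in.
text = "My friends are <mask> but they eat too many carbs."
inputs = tokenizer(text, return_tensors="pt")

generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
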
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author     = {Mike Lewis and
                Yinhan Liu and
                Naman Goyal and
                Marjan Ghazvininejad and
                Abdelrahman Mohamed and
                Omer Levy and
                Veselin Stoyanov and
                Luke Zettlemoyer},
  title      = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
                Generation, Translation, and Comprehension},
  journal    = {CoRR},
  volume     = {abs/1910.13461},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint     = {1910.13461},
  timestamp  = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```