patrickvonplaten committed
Commit a5ad8b5
1 Parent(s): 1e6e6eb

Create README.md

Files changed (1):
  1. README.md +64 -0

README.md ADDED:

---
license: apache-2.0
language: en
---

**NOTE: This is the FP32 version of [Facebook's official bart-large](https://huggingface.co/facebook/bart-large).**

# BART (large-sized model)

BART is a model pre-trained on the English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
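
As an illustration of the text-generation use case, the minimal sketch below runs summarization with [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn), a BART checkpoint fine-tuned on CNN/DailyMail (a separate checkpoint, not this raw model; the example text is only a placeholder):

```python
# Minimal sketch: summarization with a *fine-tuned* BART checkpoint
# (facebook/bart-large-cnn), not with this raw pre-trained model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "BART is a denoising autoencoder for pretraining sequence-to-sequence models. "
    "It is trained by corrupting text with an arbitrary noising function and "
    "learning a model to reconstruct the original text."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```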

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
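
For text infilling with the raw model, a minimal sketch along the lines of the `transformers` documentation looks like this (the input sentence is just a placeholder):

```python
# Minimal sketch of text infilling: BART fills in the span covered by its
# <mask> token when asked to generate from the corrupted input.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

text = "UN Chief says there is no <mask> in Syria"
inputs = tokenizer(text, return_tensors="pt")

generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```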

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import BartTokenizer, BartModel

# Load the tokenizer and the base (headless) BART model
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')

# Encode a sentence and run a forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Final-layer decoder hidden states, shape (batch, sequence, hidden)
last_hidden_states = outputs.last_hidden_state
```
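
Since this checkpoint stores the weights in FP32 (see the note above), `from_pretrained` loads it in full precision by default. If memory is a concern, one option is to cast at load time via the standard `torch_dtype` argument (a general `transformers` feature in recent versions, not something specific to this model):

```python
import torch
from transformers import BartModel

# Optional: load the FP32 weights but cast them to half precision at load time.
model_fp16 = BartModel.from_pretrained('facebook/bart-large', torch_dtype=torch.float16)
```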

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author     = {Mike Lewis and
                Yinhan Liu and
                Naman Goyal and
                Marjan Ghazvininejad and
                Abdelrahman Mohamed and
                Omer Levy and
                Veselin Stoyanov and
                Luke Zettlemoyer},
  title      = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
                Generation, Translation, and Comprehension},
  journal    = {CoRR},
  volume     = {abs/1910.13461},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint     = {1910.13461},
  timestamp  = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```