patrickvonplaten dleve123 committed on
Commit
8435883
1 Parent(s): d0af988

typo: encoder-encoder -> encoder-decoder (#1)


- typo: encoder-encoder -> encoder-decoder (697b60936996ac65b2b0e739ba3f02b12115f319)


Co-authored-by: Daniel Levenson <dleve123@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ Disclaimer: The team releasing BART did not write a model card for this model so

 ## Model description

-BART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
+BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

 BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
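
Since the edited paragraph notes that BART is most effective when fine-tuned for generation tasks such as summarization, a minimal usage sketch follows. The checkpoint name `facebook/bart-large-cnn` is an assumption (a publicly available summarization fine-tune of BART); the model card this commit edits does not name a specific checkpoint.

```python
from transformers import pipeline

# Assumed checkpoint: facebook/bart-large-cnn (a BART model fine-tuned for
# summarization); swap in whichever BART checkpoint this card describes.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "BART is a transformer encoder-decoder (seq2seq) model with a bidirectional "
    "(BERT-like) encoder and an autoregressive (GPT-like) decoder. It is pre-trained "
    "by corrupting text with an arbitrary noising function and learning to "
    "reconstruct the original text."
)

# Returns a list of dicts, each with a 'summary_text' key.
print(summarizer(text, max_length=40, min_length=10, do_sample=False))
```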