---
tags:
- summarization
- bart
language:
- ro
inference: false
---

This is a **BART large** model (**400M** parameters) pretrained from scratch.

Training was performed on a clean **50GB Romanian** text corpus for **3M** steps using these [scripts](https://github.com/cosmoquester/transformers-bart-pretrain), with a maximum sequence length of **512**.
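
As a minimal sketch of how the 512-token context applies when preparing inputs (the repo id below is a placeholder, not the published checkpoint name):

```python
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("your-org/bart-large-romanian")

# Truncate inputs to the 512-token pretraining context.
batch = tokenizer(
    ["Un exemplu de text în limba română."],
    max_length=512,
    truncation=True,
    return_tensors="pt",
)
```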

**!! IMPORTANT !!** This model was pretrained on the text corruption (denoising) task, meaning it is **not usable** for any downstream task **without finetuning** first!
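
For illustration, a single fine-tuning step on a summarization pair might look like the sketch below. The repo id, the (document, summary) pair, and the learning rate are placeholders, not values from the original training:

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "your-org/bart-large-romanian"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One illustrative fine-tuning step on a (document, summary) pair.
inputs = tokenizer(
    "Un articol lung în limba română ...",
    max_length=512, truncation=True, return_tensors="pt",
)
labels = tokenizer(
    text_target="Un rezumat scurt.", return_tensors="pt"
).input_ids

loss = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    labels=labels,
).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Only after fine-tuning like this should the model be used for generation, e.g. via `model.generate(...)`.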