---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---

# ROBERTA-TO-ROBERTA EncoderDecoder with Shared Weights

This model was introduced in [this paper](https://arxiv.org/pdf/1907.12461.pdf).

## Model description

The model was trained on private English-Bulgarian parallel data.

## Intended uses & limitations

You can use the raw model for translation from English to Bulgarian.

### How to use

Here is how to use this model in PyTorch:

```python
>>> from transformers import EncoderDecoderModel, XLMRobertaTokenizer
>>>
>>> model_id = "rmihaylov/roberta2roberta-shared-nmt-bg"
>>> model = EncoderDecoderModel.from_pretrained(model_id)
>>> model.encoder.pooler = None
>>> tokenizer = XLMRobertaTokenizer.from_pretrained(model_id)
>>>
>>> text = """
Others were photographed ransacking the building, smiling while posing with
congressional items such as House Speaker Nancy Pelosi's lectern or at her
staffer's desk, or publicly bragged about the crowd's violent and destructive
joyride.
"""
>>>
>>> inputs = tokenizer.encode_plus(text, max_length=100, return_tensors='pt', truncation=True)
>>>
>>> translation = model.generate(
...     **inputs,
...     max_length=100,
...     num_beams=4,
...     do_sample=True,
...     num_return_sequences=1,
...     top_p=0.95,
...     decoder_start_token_id=tokenizer.bos_token_id)
>>>
>>> print([tokenizer.decode(g.tolist(), skip_special_tokens=True) for g in translation])
['Други бяха заснети да бягат из сградата, усмихвайки се, докато се представят с конгресни предмети, като например лекцията на председателя на парламента Нанси Пелози или на бюрото на нейния служител, или публично се хвалят за насилието и разрушителната радост на тълпата.']
```