---
license: mit
datasets:
  - SaranaAbidueva/buryat-russian_parallel_corpus
language:
  - ru
  - bxr
metrics:
  - bleu
---

This is NLLB-200 fine-tuned on Buryat-Russian sentence pairs. It translates from Buryat to Russian and vice versa.

BLEU scores:

| Direction | BLEU |
| --- | --- |
| bxr → ru | 20 |
| ru → bxr | 13 |
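A minimal sketch of how such scores could be reproduced with sacrebleu, using the `translate` helper defined below. The exact test split and generation settings behind the reported numbers are not documented here, so `src_sents` and `ref_sents` are placeholders, and sacrebleu is assumed to be installed:

```python
# Hypothetical evaluation sketch; src_sents / ref_sents stand in for a
# held-out bxr-ru test split, which this card does not specify.
import sacrebleu

hyps = translate(src_sents, src_lang='bxr_Cyrl', tgt_lang='rus_Cyrl')
bleu = sacrebleu.corpus_bleu(hyps, [ref_sents])
print(round(bleu.score, 1))
```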

Thanks to the tutorial accompanying https://huggingface.co/slone/nllb-rus-tyv-v1.

Install the dependencies first. The `transformers==4.33` pin matters: `fix_tokenizer` below relies on `NllbTokenizer` internals (`lang_code_to_id`, `fairseq_tokens_to_ids`) that changed in later releases.

```bash
pip install sentencepiece transformers==4.33
```
```python
from transformers import NllbTokenizer, AutoModelForSeq2SeqLM

def fix_tokenizer(tokenizer, new_lang='bxr_Cyrl'):
    """Add a new language token to the tokenizer vocabulary
    (this should be done each time after the tokenizer is initialized)."""
    old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
    tokenizer.lang_code_to_id[new_lang] = old_len - 1
    tokenizer.id_to_lang_code[old_len - 1] = new_lang
    # always move "mask" to the last position
    tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset

    tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
    tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
    if new_lang not in tokenizer._additional_special_tokens:
        tokenizer._additional_special_tokens.append(new_lang)
    # clear the added-token encoder; otherwise a new token may end up there by mistake
    tokenizer.added_tokens_encoder = {}
    tokenizer.added_tokens_decoder = {}

MODEL_URL = "SaranaAbidueva/nllb-200-bxr-ru"
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_URL)
tokenizer = NllbTokenizer.from_pretrained(MODEL_URL, force_download=True)
fix_tokenizer(tokenizer)
```
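An optional sanity check, assuming the standard tokenizer API (this is not part of the original recipe): after `fix_tokenizer`, the new language code should resolve to its own token id rather than `<unk>`.

```python
# The patched tokenizer should map the new language code to a real id
assert tokenizer.convert_tokens_to_ids('bxr_Cyrl') != tokenizer.unk_token_id
```

The helper below wraps `model.generate`; `max_new_tokens = a + b * input_length` bounds the output length relative to the input.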

```python
def translate(text, src_lang='rus_Cyrl', tgt_lang='bxr_Cyrl',
              a=32, b=3, max_input_length=1024, num_beams=4, **kwargs):
    """Translate a string (or list of strings) between Russian and Buryat."""
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    inputs = tokenizer(text, return_tensors='pt', padding=True,
                       truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        # cap the output at a + b * input_length tokens
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)

translate("красная птица", src_lang='rus_Cyrl', tgt_lang='bxr_Cyrl')  # "red bird", ru → bxr
```