
# Northern Frisian translation model

This is an NLLB-200-600M model fine-tuned for translating between German and Mooring, a dialect of Northern Frisian, following this great blog post.

## Data

The fine-tuning dataset consists of 5,597 sentence pairs in the Ååstermooring dialect of North Frisian with German translations, of which 500 random pairs were held out for validation. The data comes from three sources:

- Roughly 4,200 examples were taken directly from "Rüm Hart", published by the Nordfriisk Instituut. The texts were split into sentences with the Python sentence-splitting library; the splitting was not perfect, especially around direct speech, so even after manual re-alignment many of these pairs in fact contain multiple sentences.
- A further roughly 1,200 examples were taken from the Frasch Uurdebök (Friesisches Wörterbuch, Neumünster 1988).
- Finally, a little over 100 very simple self-written examples were added.
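For illustration, the splitting step might have looked roughly like this. This is a sketch assuming the `sentence-splitter` package; the card does not show the exact invocation, and the German rules shown here are only a stand-in for whatever configuration was actually used:

```python
# Illustrative sketch (not the original preprocessing code): rule-based
# sentence splitting with the sentence-splitter package, using German rules.
from sentence_splitter import SentenceSplitter

splitter = SentenceSplitter(language='de')
sentences = splitter.split('Das ist ein Satz. Hier ist noch einer.')
# -> ['Das ist ein Satz.', 'Hier ist noch einer.']
```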

## Usage

How to use the model:

```bash
pip install transformers==4.33
```

The version pin matters: the snippet below relies on `NllbTokenizer` internals (such as `lang_code_to_id`) that were reworked in later `transformers` releases, so it may not run unchanged on newer versions.

```python
from transformers import AutoModelForSeq2SeqLM, NllbTokenizer

def create_tokenizer_with_new_lang(model_id, new_lang):
    """
    Load the NLLB tokenizer and register `new_lang` as an additional
    language code. This patching has to be redone each time the
    tokenizer is re-initialized.
    """
    tokenizer = NllbTokenizer.from_pretrained(model_id)
    old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
    tokenizer.lang_code_to_id[new_lang] = old_len - 1
    tokenizer.id_to_lang_code[old_len - 1] = new_lang
    # always move "mask" to the last position
    tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset

    tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
    tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
    if new_lang not in tokenizer._additional_special_tokens:
        tokenizer._additional_special_tokens.append(new_lang)
    # clear the added token encoder; otherwise a new token may end up there by mistake
    tokenizer.added_tokens_encoder = {}

    return tokenizer
```
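The new language code takes over the vocabulary slot directly before the (relocated) `<mask>` token, matching how the model's embedding matrix was resized during fine-tuning. As a quick sanity check (our addition, not part of the original card), you can confirm that the patched tokenizer round-trips the new code:

```python
# Illustrative sanity check: the new language code should map to a single
# token id, and that id should decode back to the same code.
tokenizer = create_tokenizer_with_new_lang("CmdCody/nllb-deu-moo", "frr_Latn")
frr_id = tokenizer.convert_tokens_to_ids("frr_Latn")
assert tokenizer.convert_ids_to_tokens(frr_id) == "frr_Latn"
```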

```python
def translate(
    text,
    tokenizer,
    model,
    src_lang='frr_Latn',
    tgt_lang='deu_Latn',
    a=32,
    b=3,
    max_input_length=1024,
    num_beams=4,
    **kwargs
):
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        # force the decoder to start with the target language token
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        # length budget: up to `a` tokens plus `b` tokens per input token
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)
```
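With the defaults `a=32` and `b=3`, an input of, say, 10 tokens gets a generation budget of `32 + 3 * 10 = 62` new tokens, so even short inputs leave room for translations that come out longer than the source.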

path = "CmdCody/nllb-deu-moo"
tokenizer = create_tokenizer_with_new_lang(path, 'frr_Latn')
model = AutoModelForSeq2SeqLM.from_pretrained(path)

translate("Momme booget önj Naibel", tokenizer=tokenizer, model=model)