
Model Summary

dataequity-kde4-en-de-qlora is a Transformer-based English-to-German translation model fine-tuned on the KDE4 dataset. The base model is Helsinki-NLP/opus-mt-en-de.

This model has not been fine-tuned through reinforcement learning from human feedback. It is released as an open, unrestricted small model so the research community can explore vital safety challenges, such as reducing toxicity, understanding societal biases, and enhancing controllability.

eng-deu

  • source group: English

  • target group: German

  • model: transformer

  • source language(s): en

  • target language(s): de

Inference Code:

from transformers import MarianMTModel, MarianTokenizer

hub_repo_name = 'dataequity/dataequity-kde4-en-de-qlora'

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = MarianTokenizer.from_pretrained(hub_repo_name)
finetuned_model = MarianMTModel.from_pretrained(hub_repo_name)

questions = [
    "How are the first days of each season chosen?",
    "Why are laws requiring identification for voting scrutinized by the media?",
    "Why aren't there many new operating systems being created?"
]

# Tokenize with padding so the sentences can be generated as one batch
translated = finetuned_model.generate(**tokenizer(questions, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
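For a shorter path, the same checkpoint can also be driven through the high-level `pipeline` API, which wraps tokenization, generation, and decoding in one call. This is a minimal sketch, assuming the repository name above resolves on the Hub and loads with the standard translation pipeline:

```python
from transformers import pipeline

# Repo name taken from the card above (assumed to be available on the Hub)
hub_repo_name = "dataequity/dataequity-kde4-en-de-qlora"

# "translation" wires up the Marian model, tokenizer, and decoding together
translator = pipeline("translation", model=hub_repo_name)

questions = [
    "How are the first days of each season chosen?",
    "Why aren't there many new operating systems being created?",
]

# Each result is a dict with a "translation_text" field
results = translator(questions)
for r in results:
    print(r["translation_text"])
```

The pipeline returns one dict per input sentence, so batching works the same way as the lower-level `generate` call shown above.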
Model size: 73.9M params · Tensor type: F32 (Safetensors)

Dataset used to train dataequity/dataequity-kde4-en-de-qlora: KDE4