
yoruba-diacritics-quantized

This model is a fine-tuned version of Davlan/mT5_base_yoruba_adr on a version of the Niger-Volta-LTI Yoruba diacritics dataset, provided by bumie-e on Hugging Face.

Model description

The fine-tuning was performed with the PEFT LoRA technique, with the aim of improving the model's performance on tasks such as diacritic restoration and generation of correctly diacritized Yoruba text (see the configuration sketch after the feature list below).

Key Features:

  • Base model: Davlan/mT5_base_yoruba_adr, pre-trained on Yoruba text
  • Fine-tuning dataset: Yoruba diacritics dataset from bumie-e/Yoruba-diacritics-vs-non-diacritics
  • Fine-tuning technique: PEFT LoRA
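
The snippet below is a minimal sketch of how a LoRA adapter could be attached to the base model with peft before training. The LoRA hyperparameters (r, lora_alpha, lora_dropout, target_modules) are illustrative assumptions and are not reported on this card.

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# Load the base Yoruba mT5 model.
base_model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")

# LoRA configuration; the values below are assumptions for illustration only.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                          # assumed LoRA rank
    lora_alpha=32,                 # assumed scaling factor
    lora_dropout=0.05,             # assumed dropout on the LoRA layers
    target_modules=["q", "v"],     # assumed attention projections targeted in mT5
)

# Wrap the base model so that only the small LoRA matrices are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()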

Potential Applications:

  • Diacritic restoration in Yoruba text
  • Generation of Yoruba text with correct diacritics
  • Natural language processing tasks for Yoruba language

Code for Testing:

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the adapter config, the base model it points to, and its tokenizer,
# then attach the LoRA adapter weights.
config = PeftConfig.from_pretrained("Professor/yoruba-diacritics-quantized")
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, "Professor/yoruba-diacritics-quantized")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

device = "cpu"  # switch to "cuda" if a GPU is available
model.to(device)
model.eval()

# Undiacritized Yoruba input whose diacritics should be restored.
inputs = tokenizer(
    "Mo ti so fun bobo yen sha, aaro la wa bayi",
    return_tensors="pt",
)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=100)
    print(tokenizer.batch_decode(outputs.cpu().numpy(), skip_special_tokens=True))

Intended uses & limitations

More information coming

Training and evaluation data

More information coming

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching training-arguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
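
As a reference, here is a minimal sketch of transformers Seq2SeqTrainingArguments that mirrors the values above. The use of Seq2SeqTrainingArguments and the output_dir name are assumptions; only the numeric values are taken from this card.

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="yoruba-diacritics-quantized",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 16 * 2 = 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                       # native AMP mixed-precision training
)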

Training results

Coming soon.

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.38.0.dev0
  • PyTorch 2.0.0
  • Datasets 2.16.1
  • Tokenizers 0.15.0