
XLM-RoBERTa Large trained on Dravidian Language QA

Overview

Language model: xlm-roberta-large
Language: Multilingual, focused on Tamil & Hindi
Downstream task: Extractive QA
Eval data: k-fold cross-validation on the training data

Hyperparameters

batch_size = 4
base_LM_model = "xlm-roberta-large"
learning_rate = 1e-5

optimizer = AdamW
weight_decay = 1e-2
epsilon = 1e-8
max_grad_norm = 1.0

lr_schedule = LinearWarmup
warmup_proportion = 0.2

max_seq_len = 256
doc_stride = 128
max_query_length = 64
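
These settings map directly onto the standard transformers Trainer API. The sketch below is a minimal reconstruction under that assumption, not the original training script; train_dataset stands in for an already-tokenized, SQuAD-style Tamil dataset.

from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# max_seq_len, doc_stride, and max_query_length apply when tokenizing the
# question-context pairs, not in the training arguments themselves.
args = TrainingArguments(
    output_dir="xlmr-large-tamil-qa",   # hypothetical output path
    per_device_train_batch_size=4,      # batch_size
    learning_rate=1e-5,
    weight_decay=1e-2,
    adam_epsilon=1e-8,                  # AdamW is the Trainer default optimizer
    max_grad_norm=1.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,                   # linear warmup over 20% of steps
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,        # placeholder: your tokenized QA data
    tokenizer=tokenizer,
)
trainer.train()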

Performance

Evaluated on our human-annotated dataset of 1000 Tamil question-context pairs [link]

  "em": 77.536,
  "f1": 85.665

Usage

In Transformers

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "Srini99/FYP_TamilQA"

model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Reuse the objects loaded above instead of downloading the model twice.
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
QA_input = {
    # "By whom is Pongal grandly celebrated?"
    'question': 'யாரால் பொங்கல் சிறப்பாகக் கொண்டாடப்படுகிறது?',
    # "Pongal is a harvest festival celebrated grandly by Tamils."
    'context': 'பொங்கல் என்பது தமிழர்களால் சிறப்பாகக் கொண்டாடப்படும் ஓர் அறுவடைப் பண்டிகை ஆகும்.'
}
res = nlp(QA_input)
print(res['answer'])  # expected span: தமிழர்களால் ("by Tamils")
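
The pipeline handles tokenization and span extraction internally. For reference, a minimal sketch of the equivalent manual forward pass, reusing the model, tokenizer, and QA_input from above:

import torch

inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)  # expected: தமிழர்களால் ("by Tamils")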