
# mBERT-hi-be-MLM-SQuAD-TyDi-MLQA Model Card

Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="hapandya/mBERT-hi-be-MLM-SQuAD-TyDi-MLQA")
```

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("hapandya/mBERT-hi-be-MLM-SQuAD-TyDi-MLQA")
model = AutoModelForQuestionAnswering.from_pretrained("hapandya/mBERT-hi-be-MLM-SQuAD-TyDi-MLQA")
```
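For illustration, here is a minimal sketch of how a question-answering head's start/end logits are turned into an answer span. The tokens and logit values below are invented stand-ins for a real forward pass of the model above, so the pipeline can be understood without downloading weights:

```python
import numpy as np

# Dummy tokens and logits standing in for a real model forward pass
# (a real run would come from model(**tokenizer(question, context, return_tensors="pt"))).
tokens = ["[CLS]", "what", "is", "bert", "[SEP]", "bert", "is", "a", "transformer", "[SEP]"]
start_logits = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 3.2, 0.2, 0.1, 0.5, 0.0])
end_logits   = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.3, 0.2, 0.1, 3.5, 0.0])

# Pick the most likely start and end positions; a real pipeline additionally
# enforces start <= end and restricts the span to the context segment.
start = int(np.argmax(start_logits))
end = int(np.argmax(end_logits))
answer = " ".join(tokens[start:end + 1])
print(answer)  # bert is a transformer
```

The `question-answering` pipeline performs this span selection (plus detokenization) internally and returns the answer string with a confidence score.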

Format: Safetensors
Model size: 177M params
Tensor types: I64, F32

## Datasets used to train hapandya/mBERT-hi-bn-MLM-SQuAD-TyDi-MLQA