
Contributed by: surajp (Suraj Parmar)
How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("surajp/SanBERTa")
model = AutoModelWithLMHead.from_pretrained("surajp/SanBERTa")

RoBERTa trained on Sanskrit (SanBERTa)

Model size (after training): 340MB

Dataset:

  • Wikipedia articles (as used in iNLTK); the evaluation set is drawn from these.
  • Sanskrit text scraped from CLTK

Configuration

Parameter            Value
num_attention_heads  12
num_hidden_layers    6
hidden_size          768
vocab_size           29407
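A back-of-the-envelope check of the configuration above (not from the card: intermediate_size=3072 and max_position_embeddings=514 are assumed, the RoBERTa defaults for hidden_size=768; the 340MB on-disk checkpoint can be larger than the raw fp32 weights):

```python
# Rough parameter count for a 6-layer RoBERTa with the table's values.
hidden, layers, vocab, interm, max_pos = 768, 6, 29407, 3072, 514

embeddings = vocab * hidden + max_pos * hidden  # token + position embeddings
per_layer = (
    4 * (hidden * hidden + hidden)              # Q, K, V, output projections
    + (hidden * interm + interm)                # feed-forward up-projection
    + (interm * hidden + hidden)                # feed-forward down-projection
    + 4 * hidden                                # two LayerNorms (weight + bias)
)
total = embeddings + layers * per_layer
print(f"~{total / 1e6:.0f}M parameters (~{total * 4 / 2**20:.0f} MiB in fp32)")
```

This lands in the mid-60-millions of parameters, dominated by the vocabulary embedding and the six transformer layers.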

Training:

  • On TPU
  • For language modelling
  • Iteratively increasing --block_size from 128 to 256 over epochs
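The card does not give the exact training command. Assuming the transformers `run_language_modeling.py` example script (which exposes `--mlm` and `--block_size`), the schedule above might look like this — all paths and names are placeholders:

```shell
# Hypothetical invocation; the card only says --block_size was raised
# from 128 to 256 over the course of training.
python run_language_modeling.py \
  --model_type roberta \
  --output_dir ./SanBERTa \
  --do_train --train_data_file sanskrit_train.txt \
  --do_eval --eval_data_file sanskrit_eval.txt \
  --mlm \
  --block_size 128   # later runs continue with --block_size 256
```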

Evaluation

Metric                       Value
Perplexity (block_size=256)  4.04
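Perplexity is the exponential of the mean masked-LM cross-entropy, so the reported 4.04 corresponds to an evaluation loss of about 1.396 nats per token (the loss value is derived here, not stated on the card):

```python
import math

# perplexity = exp(eval_loss), so eval_loss = ln(perplexity)
eval_loss = math.log(4.04)        # ≈ 1.396 nats per masked token
perplexity = math.exp(eval_loss)  # recovers 4.04
print(round(eval_loss, 3), round(perplexity, 2))
```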

Example of usage:

For Embeddings


from transformers import AutoTokenizer, RobertaModel

tokenizer = AutoTokenizer.from_pretrained("surajp/SanBERTa")
model = RobertaModel.from_pretrained("surajp/SanBERTa")

# "This language is considered to be the oldest language not only of India
# but of the whole world."
op = tokenizer.encode("इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।", return_tensors="pt")
ps = model(op)
ps[0].shape
'''
Output:
--------
torch.Size([1, 47, 768])
'''
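The `(1, 47, 768)` tensor above is one 768-dimensional vector per token. The card stops there; a common (not card-specified) way to get a single sentence embedding is to mean-pool over the token axis, sketched here on a dummy array of the same shape so it runs without downloading the model:

```python
import numpy as np

# Stand-in for the model's last hidden states: (batch, seq_len, hidden)
hidden_states = np.random.rand(1, 47, 768).astype(np.float32)

# Average over the token axis, then drop the batch dimension
sentence_embedding = hidden_states.mean(axis=1).squeeze(0)
print(sentence_embedding.shape)  # (768,)
```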

For <mask> Prediction

from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="surajp/SanBERTa",
    tokenizer="surajp/SanBERTa"
)

## इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।
fill_mask("इयं भाषा न केवल<mask> भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।")

'''
Output:
--------
[{'score': 0.7516744136810303,
  'sequence': '<s> इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>',
  'token': 280,
  'token_str': 'à¤Ĥ'},
 {'score': 0.06230105459690094,
  'sequence': '<s> इयं भाषा न केवली भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>',
  'token': 289,
  'token_str': 'à¥Ģ'},
 {'score': 0.055410224944353104,
  'sequence': '<s> इयं भाषा न केवला भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते।</s>',
  'token': 265,
  'token_str': 'ा'},
  ...]
'''

It works!! 🎉 🎉 🎉

Created by Suraj Parmar/@parmarsuraj99 | LinkedIn

Made with ❤️ in India