# Model Card for repleeka/eng-taw-nmt
Digaro Mishmi, also known as Tawra, Taoran, Taraon, or Darang, is a member of the Digarish language family, spoken by the Mishmi people in northeastern Arunachal Pradesh, India, and in parts of Zayü County, Tibet, China. The language has several autonyms, including tɑ31 rɑŋ53 or da31 raŋ53 in Arunachal Pradesh and tɯŋ53 in China, where it is spoken by the Deng (登) people. The language holds an essential place in the Anjaw district of Arunachal Pradesh, where it is spoken in the Hayuliang, Changlagam, and Goiliang circles, as well as in the Dibang Valley district and parts of Assam. Although Ethnologue estimated around 35,000 native speakers based on the 2001 census, Digaro Mishmi remains critically under-resourced in terms of computational linguistics and digital preservation.
- source: Wikipedia
## Model Details

### Model Description
- Developed by: Tungon Dugi and Rushanti Kri
- Dataset by: Miss Rushanti Kri
- Affiliation: National Institute of Technology Arunachal Pradesh, India
- Email: tungondugi@gmail.com or tungon.phd24@nitap.ac.in
- Model type: Translation
- Language(s) (NLP): English (en) and Tawra (taw)
- Finetuned from model: repleeka/eng-tagin-nmt
## Direct Use
This model can be used for translation and text-to-text generation.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("repleeka/eng-taw-nmt")
model = AutoModelForSeq2SeqLM.from_pretrained("repleeka/eng-taw-nmt")
```
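Once the tokenizer and model are loaded, translation follows the standard Hugging Face seq2seq pattern of `generate` followed by `decode`. A minimal sketch (the input sentence and generation settings are illustrative, and downloading the checkpoint requires network access to the Hugging Face Hub):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("repleeka/eng-taw-nmt")
model = AutoModelForSeq2SeqLM.from_pretrained("repleeka/eng-taw-nmt")

# Tokenize an English sentence and generate its Tawra translation.
inputs = tokenizer("The sun is rising.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```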
## Training Details

### Training Data
The model was trained on the English-Tawra Corpus compiled by Rushanti Kri.
## Evaluation
The model achieved the following metrics after 10 training epochs:
| Metric             | Value           |
|--------------------|-----------------|
| BLEU Score         | 0.25157         |
| Evaluation Runtime | 644.278 seconds |
The model's BLEU score suggests promising early results on the English-Tawra Corpus, particularly given how little data exists for this language pair. This model represents a meaningful step forward for Tawra language resources, enabling English-to-Tawra translation in NLP applications.
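For context, BLEU measures n-gram overlap between a model's output and a reference translation. The sketch below is a simplified, single-reference illustration of clipped n-gram precision with a brevity penalty, written in plain Python; it is not the exact BLEU implementation used to produce the score above:

```python
from collections import Counter
import math

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: candidate n-gram counts, capped at reference counts."""
    cand_ngrams = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref_ngrams = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(count, ref_ngrams[gram]) for gram, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0

def simple_bleu(candidate, reference, max_n=2):
    """Geometric mean of 1..max_n clipped precisions, scaled by a brevity penalty."""
    precisions = [modified_precision(candidate, reference, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Penalize candidates shorter than the reference.
    if len(candidate) >= len(reference):
        brevity_penalty = 1.0
    else:
        brevity_penalty = math.exp(1 - len(reference) / max(len(candidate), 1))
    return brevity_penalty * math.exp(log_avg)

candidate = "the cat sat on the mat".split()
reference = "the cat sat on the mat".split()
print(simple_bleu(candidate, reference))  # identical sentences score 1.0
```

A score near 0.25 on this 0-to-1 scale, as reported above, reflects partial n-gram overlap with the references rather than exact matches, which is typical for an early-stage low-resource translation model.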
### Summary
The `eng_taw_nmt` model is currently in an early phase of development. To enhance its performance, it requires a more substantial dataset and improved training resources, which would facilitate better generalization and accuracy in translating between English and Tawra and help address the challenges faced by this extremely low-resource language. As the model evolves, ongoing efforts will be necessary to refine its capabilities further.