This is a MicroBERT model for Uyghur.
- The -m suffix indicates that it was pretrained using supervision from masked language modeling.
- The unlabeled Uyghur data was taken from a February 2022 dump of Uyghur Wikipedia, totaling 2,401,445 tokens.
- The UD treebank UD_Uyghur-UDT, v2.9, totaling 40,236 tokens, was used for labeled data.
Please see the repository and the paper for more details.
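Since the model was pretrained with masked language modeling, a quick way to try it is mask filling. Below is a minimal sketch using the `transformers` library; the Hub ID `lgessler/microbert-uyghur-m` and the example sentence are assumptions, so check the repository for the exact model ID and substitute your own Uyghur text.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed Hub ID -- verify against the repository before use.
model_id = "lgessler/microbert-uyghur-m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Replace one word in an Uyghur sentence with the mask token.
text = f"Your Uyghur sentence with a {tokenizer.mask_token} here."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Find the masked position and take the highest-scoring prediction.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = outputs.logits[0, mask_index].argmax(-1)
print(tokenizer.decode([predicted_id.item()]))
```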