This model is based on MBART and translates Buddhist Chinese to English. It is optimized for a sequence length of 300, so Buddhist Chinese input sequences shouldn't exceed 150 characters. In addition to the usual Chinese punctuation, the model uses "#" with a space before and after as a delimiter between sentences. Input should be converted to simplified Chinese before running. The model also handles short sequences poorly; for best results, supply input sequences between 100 and 150 characters in length.
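A minimal sketch of the preprocessing and inference steps described above, using the Hugging Face `transformers` library. The model ID is a hypothetical placeholder, and the use of OpenCC for traditional-to-simplified conversion is an assumption, not part of this card:

```python
from opencc import OpenCC  # assumed choice for traditional -> simplified conversion
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "org/model-name"  # hypothetical placeholder; use this model's actual hub ID

cc = OpenCC("t2s")  # traditional -> simplified Chinese

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# Join sentences with " # " (hash with surrounding spaces) and keep the
# whole input between 100 and 150 characters, as recommended above.
raw = "爾時世尊告諸比丘。 # 汝等諦聽。善思念之。"
text = cc.convert(raw)

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=300)
outputs = model.generate(**inputs, max_length=300, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Beam search (`num_beams=5`) is shown only as a common default for NMT decoding; the card does not specify generation settings.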
The model shows good performance on Sūtra texts and performs reasonably well on Abhidharma and Yogācāra material. However, it has the usual problems that NMT systems have with named entities (names of persons and places). It also shows a tendency to hallucinate at times, i.e. to generate a translation that has no direct relationship with the input.
