---
license: mit
language:
- en
- zh
library_name: transformers
tags:
- translation
- fine_tune
widget:
- text: >-
    I {i}should{/i} say that I feel a little relieved to find out that
    {i}this{/i} is why you’ve been hanging out with Kaori lately, though. She’s
    really pretty and I got jealous and...I’m sorry.
---
|
|
|
# Normal1919/mbart-large-50-one-to-many-lil-fine-tune |
|
|
|
* base model: mbart-large-50 |
|
* pretrained_ckpt: facebook/mbart-large-50-one-to-many-mmt |
|
* This model was fine-tuned for [rpy dl translate](https://github.com/O5-7/rpy_dl_translate)
|
|
|
## Model description |
|
* source group: English |
|
* target group: Chinese |
|
* model: transformer |
|
* source language(s): eng |
|
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant |
|
* fine-tune: starting from the facebook/mbart-large-50-one-to-many-mmt checkpoint, the model is trained to translate English source text containing Ren'Py markup (including but not limited to {i}text{/i} tags) into Chinese that preserves the same tags, and to keep English character names untranslated for LIL
|
|
|
## How to use |
|
```python
>>> from transformers import MBartForConditionalGeneration, MBart50TokenizerFast, pipeline

>>> model_name = 'Normal1919/mbart-large-50-one-to-many-lil-fine-tune'
>>> model = MBartForConditionalGeneration.from_pretrained(model_name)
>>> tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX", tgt_lang="zh_CN")
>>> # The first argument of pipeline() is the task name, here "translation"
>>> translation = pipeline("translation", model=model, tokenizer=tokenizer, src_lang="en_XX", tgt_lang="zh_CN")
>>> translation('I {i}should{/i} say that I feel a little relieved to find out that {i}this{/i} is why you’ve been hanging out with Kaori lately, though. She’s really pretty and I got jealous and...I’m sorry.', max_length=400)
[{'translation_text': '我{i}应该{/i}说发现{i}这{/i}是你最近和Kaori出去的原因,我有点松了一口气。她很漂亮,我嫉妒,而且......我很抱歉。'}]
```
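You can also call `generate` directly instead of using the pipeline. The sketch below follows the standard mBART-50 one-to-many pattern (forcing the target-language token as the first generated token, as in the upstream facebook/mbart-large-50-one-to-many-mmt card) and is not specific to this fine-tune:

```python
>>> # Encode with the source language already set on the tokenizer (en_XX above),
>>> # then force zh_CN as the first generated token, per the mBART-50 convention.
>>> encoded = tokenizer('I {i}should{/i} say that I feel a little relieved.', return_tensors="pt")
>>> generated = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"], max_length=400)
>>> tokenizer.batch_decode(generated, skip_special_tokens=True)
```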
|
|
|
## Contact |
|
|
|
517205163@qq.com or a4564563@gmail.com