---
license: mit
datasets:
- Helsinki-NLP/tatoeba_mt
- sappho192/Tatoeba-Challenge-jpn-kor
language:
- ja
- ko
pipeline_tag: translation
tags:
- python
- transformer
- pytorch
inference: false
---
# Japanese to Korean translator for FFXIV

**FINAL FANTASY is a registered trademark of Square Enix Holdings Co., Ltd.**

This project is detailed in the [GitHub repo](https://github.com/sappho192/ffxiv-ja-ko-translator).

# Demo
[![demo.gif](demo.gif)](https://huggingface.co/spaces/sappho192/ffxiv-ja-ko-translator-demo)
[Click to try the demo](https://huggingface.co/spaces/sappho192/ffxiv-ja-ko-translator-demo)
[![demo2.gif](demo2.gif)](https://github.com/sappho192/ffxiv-ja-ko-translator/tree/main/onnx_project/dotnet)
[Try the Windows app demo built on the ONNX model](https://github.com/sappho192/ffxiv-ja-ko-translator/tree/main/onnx_project/dotnet)

# Usage

## Inference (PyTorch)

```Python
from transformers import (
    EncoderDecoderModel,
    PreTrainedTokenizerFast,
    BertJapaneseTokenizer,
)

import torch

encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "skt/kogpt2-base-v2"

src_tokenizer = BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = PreTrainedTokenizerFast.from_pretrained(decoder_model_name)

# Change "./best_model" below to the path of the directory containing the model files
model = EncoderDecoderModel.from_pretrained("./best_model")

text = "ギルガメッシュ討伐戦"
# text = "ギルガメッシュ討伐戦に行ってきます。一緒に行きましょうか?"

def translate(text_src):
    # Tokenize the Japanese source text into input IDs
    embeddings = src_tokenizer(text_src, return_attention_mask=False, return_token_type_ids=False, return_tensors='pt')
    # Convert the BatchEncoding into a plain dict of tensors
    embeddings = {k: v for k, v in embeddings.items()}
    # Generate, then strip the leading BOS and trailing EOS tokens
    output = model.generate(**embeddings, max_length=500)[0, 1:-1]
    text_trg = trg_tokenizer.decode(output.cpu())
    return text_trg

print(translate(text))
```
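
The example above runs on the CPU. If a CUDA GPU is available, the model and inputs can be moved to it; the following is a minimal sketch of that variant (the `translate_gpu` helper is hypothetical, not part of the original example):

```Python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def translate_gpu(text_src):
    embeddings = src_tokenizer(text_src, return_attention_mask=False,
                               return_token_type_ids=False, return_tensors='pt')
    # Move the input tensors to the same device as the model
    embeddings = {k: v.to(device) for k, v in embeddings.items()}
    output = model.generate(**embeddings, max_length=500)[0, 1:-1]
    return trg_tokenizer.decode(output.cpu())

print(translate_gpu(text))
```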

## Inference (Optimum.OnnxRuntime)
Note that the current Optimum.OnnxRuntime still requires PyTorch as its backend. [[Issue](https://github.com/huggingface/optimum/issues/526)]
You can use either the [[ONNX](https://huggingface.co/sappho192/ffxiv-ja-ko-translator/tree/main/onnx)] or the [[quantized ONNX](https://huggingface.co/sappho192/ffxiv-ja-ko-translator/tree/main/onnxq)] model.

```Python
from transformers import BertJapaneseTokenizer, PreTrainedTokenizerFast
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from onnxruntime import SessionOptions
import torch

encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "skt/kogpt2-base-v2"

src_tokenizer = BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = PreTrainedTokenizerFast.from_pretrained(decoder_model_name)

sess_options = SessionOptions()
sess_options.log_severity_level = 3 # mute warnings including CleanUnusedInitializersAndNodeArgs
# change subfolder to "onnxq" if you want to use the quantized model
model = ORTModelForSeq2SeqLM.from_pretrained(
    "sappho192/ffxiv-ja-ko-translator",
    sess_options=sess_options, subfolder="onnx")

texts = [
    "逃げろ!",  # Should be "도망쳐!"
    "初めまして.",  # "반가워요"
    "よろしくお願いします.",  # "잘 부탁드립니다."
    "ギルガメッシュ討伐戦",  # "길가메쉬 토벌전"
    "ギルガメッシュ討伐戦に行ってきます。一緒に行きましょうか?",  # "길가메쉬 토벌전에 갑니다. 같이 가실래요?"
    "夜になりました",  # "밤이 되었습니다"
    "ご飯を食べましょう."  # "음, 이제 식사도 해볼까요"
]


def translate(text_src):
    # Tokenize the Japanese source text into input IDs
    embeddings = src_tokenizer(text_src, return_attention_mask=False, return_token_type_ids=False, return_tensors='pt')
    print(f'Src tokens: {embeddings.data["input_ids"]}')
    # Convert the BatchEncoding into a plain dict of tensors
    embeddings = {k: v for k, v in embeddings.items()}

    # Generate, then strip the leading BOS and trailing EOS tokens
    output = model.generate(**embeddings, max_length=500)[0, 1:-1]
    print(f'Trg tokens: {output}')
    text_trg = trg_tokenizer.decode(output.cpu())
    return text_trg


for text in texts:
    print(translate(text))
    print()
```
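
The ONNX and quantized ONNX weights above are prebuilt, but Optimum can also export a PyTorch checkpoint to ONNX. Below is a minimal sketch, assuming the checkpoint lives in the placeholder directory `./best_model` from the PyTorch example; whether the export succeeds for this particular BERT-to-GPT-2 encoder-decoder depends on your Optimum version, so treat it as a starting point rather than the project's actual export script.

```Python
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the PyTorch checkpoint and export it to ONNX in one step
# ("./best_model" is a placeholder path, as in the PyTorch example)
onnx_model = ORTModelForSeq2SeqLM.from_pretrained("./best_model", export=True)

# Persist the exported ONNX files so they can be loaded as shown above
onnx_model.save_pretrained("./onnx")
```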

## Training

See [training.ipynb](https://huggingface.co/sappho192/ffxiv-ja-ko-translator/blob/main/training.ipynb) for the full training code.
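
As the inference examples suggest, the model ties a Japanese BERT encoder (`cl-tohoku/bert-base-japanese-v2`) to a Korean GPT-2 decoder (`skt/kogpt2-base-v2`). As a rough illustration of how such a pair can be assembled into a trainable seq2seq model with `transformers` (the special-token wiring below is an assumption for this sketch, not code copied from the notebook):

```Python
from transformers import (
    EncoderDecoderModel,
    PreTrainedTokenizerFast,
    BertJapaneseTokenizer,
)

encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "skt/kogpt2-base-v2"

src_tokenizer = BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = PreTrainedTokenizerFast.from_pretrained(decoder_model_name)

# Tie the pretrained Japanese encoder to the pretrained Korean decoder
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    encoder_model_name, decoder_model_name
)

# Generation settings; these token choices are illustrative assumptions
model.config.decoder_start_token_id = trg_tokenizer.bos_token_id
model.config.pad_token_id = trg_tokenizer.pad_token_id
model.config.eos_token_id = trg_tokenizer.eos_token_id
```

From there, the Tatoeba ja-ko pairs listed in the model card metadata can be used for standard sequence-to-sequence fine-tuning, e.g. with `Seq2SeqTrainer`.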