sappho192 committed
Commit 21e09ac
1 Parent(s): 3cbd622

Write model description

Files changed (1):
  README.md +82 -0
README.md CHANGED
---
license: mit
language:
- ja
- en
pipeline_tag: translation
---

# Japanese to English translator

A Japanese-to-English translation model based on an [EncoderDecoderModel](https://huggingface.co/docs/transformers/model_doc/encoder-decoder) that combines [bert-japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) as the encoder and [GPT2](https://huggingface.co/openai-community/gpt2) as the decoder.

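For orientation, this kind of encoder-decoder pairing can be assembled with the `EncoderDecoderModel` API. The snippet below is only a minimal sketch of how such a model is typically initialized before fine-tuning; it is not the training script used for this checkpoint, and the token-id settings are assumptions.

```Python
import transformers

# Sketch only: pair the Japanese BERT encoder with a GPT-2 decoder,
# mirroring the architecture described above.
model = transformers.EncoderDecoderModel.from_encoder_decoder_pretrained(
    "cl-tohoku/bert-base-japanese-v2",  # encoder checkpoint (same as in the Inference example below)
    "openai-community/gpt2",            # decoder checkpoint
)

# Generation needs a decoder start token and a pad token (assumed settings).
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.pad_token_id = model.config.decoder.eos_token_id
```
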
# Usage

## Demo

Please visit https://huggingface.co/spaces/sappho192/jesc-ja-en-translator-demo

## Dependencies (PyPI)

- torch
- transformers
- fugashi
- unidic-lite

## Inference

```Python
import transformers
import torch

encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "openai-community/gpt2"
src_tokenizer = transformers.BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = transformers.PreTrainedTokenizerFast.from_pretrained(decoder_model_name)
model = transformers.EncoderDecoderModel.from_pretrained("sappho192/jesc-ja-en-translator")


def translate(text_src):
    # Tokenize the Japanese source text; attention mask and token type ids
    # are not needed for generation here.
    embeddings = src_tokenizer(text_src, return_attention_mask=False, return_token_type_ids=False, return_tensors='pt')
    embeddings = {k: v for k, v in embeddings.items()}
    # Generate the target sequence and drop the leading/trailing special tokens.
    output = model.generate(**embeddings, max_length=512)[0, 1:-1]
    text_trg = trg_tokenizer.decode(output.cpu())
    return text_trg


texts = [
    "逃げろ!",  # Should be "run!"
    "初めまして.",  # "nice to meet you."
    "よろしくお願いします.",  # "thank you."
    "ギルガメッシュ討伐戦",  # "the battle for gilgamesh's domain"
    "ギルガメッシュ討伐戦に行ってきます。一緒に行きましょうか?",  # "I'm going to the battle for gilgamesh's domain. shall we go together?"
    "夜になりました",  # "and then it got dark."
    "ご飯を食べましょう."  # "let's eat."
]

for text in texts:
    print(translate(text))
    print()
```
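
The example above runs on CPU. If a CUDA GPU is available, the model and the tokenized inputs can be moved onto it before generation. A minimal sketch, assuming the model and tokenizers from the example above are already loaded:

```Python
# Optional sketch: run inference on GPU when available
# (assumes model, src_tokenizer and trg_tokenizer from the example above).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

embeddings = src_tokenizer("逃げろ!", return_attention_mask=False, return_token_type_ids=False, return_tensors='pt')
embeddings = {k: v.to(device) for k, v in embeddings.items()}
output = model.generate(**embeddings, max_length=512)[0, 1:-1]
print(trg_tokenizer.decode(output.cpu()))
```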

# Dataset

The dataset used to train the model is JESC (Japanese-English Subtitle Corpus).
Its license is [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/).
All data information can be accessed through the following links:

- Dataset link: https://nlp.stanford.edu/projects/jesc/
- Paper link: https://arxiv.org/abs/1710.10639
- GitHub link: https://github.com/rpryzant/JESC
- Bibtex:

```bibtex
@ARTICLE{pryzant_jesc_2017,
  author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.},
  title = "{JESC: Japanese-English Subtitle Corpus}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1710.10639},
  keywords = {Computer Science - Computation and Language},
  year = 2017,
  month = oct,
}
```