docs: language: ko

#1
by Bingsu - opened
Files changed (1)
  1. README.md +49 -48
README.md CHANGED
@@ -1,48 +1,49 @@
- ---
- datasets:
- - mc4
- license: apache-2.0
- ---
-
- # ByT5-Korean - small
-
- ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).
-
- A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they function like the individual letters of an alphabet.
- While ByT5's UTF-8 encoding handles multiple languages generically, it is unnatural for Korean because the byte boundaries cut through the bit representation of each Jamo.
-
- ByT5-Korean extends ByT5's UTF-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
- ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
-
- ## Encoding Scheme
- ```text
- id: token
- 0: <pad>
- 1: <eos>
- 2: <unk>
- 3~258: UTF-8 bytes
- 259~277: beginning consonants (μ΄ˆμ„±), 19 tokens (γ„±γ„²γ„΄γ„·γ„Έγ„Ήγ…γ…‚γ…ƒγ……γ…†γ…‡γ…ˆγ…‰γ…Šγ…‹γ…Œγ…γ…Ž)
- 278~298: middle vowels (쀑성), 21 tokens (γ…γ…γ…‘γ…’γ…“γ…”γ…•γ…–γ…—γ…˜γ…™γ…šγ…›γ…œγ…γ…žγ…Ÿγ… γ…‘γ…’γ…£)
- 299~326: final consonants (μ’…μ„±), no final consonant + 27 tokens (γ„±γ„²γ„³γ„΄γ„΅γ„Άγ„·γ„Ήγ„Ίγ„»γ„Όγ„½γ„Ύγ„Ώγ…€γ…γ…‚γ…„γ……γ…†γ…‡γ…ˆγ…Šγ…‹γ…Œγ…γ…Ž)
- 327~384: from <extra_id_0> to <extra_id_57>
- ```
-
- ## Example Inference
-
- ```python
- import torch
- from tokenizer import ByT5KoreanTokenizer  # https://huggingface.co/everdoubling/byt5-Korean-small/blob/main/tokenizer.py
- from transformers import T5ForConditionalGeneration
-
- tokenizer_jamo = ByT5KoreanTokenizer()
- model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-small')
-
- input_sentence = 'ν•œκ΅­μ–΄ μœ„ν‚€λ°±κ³Ό(μ˜μ–΄: Korean Wikipedia)λŠ” ν•œκ΅­μ–΄λ‘œ μš΄μ˜λ˜λŠ” μœ„ν‚€λ°±κ³Όμ˜ λ‹€μ–Έμ–΄νŒ κ°€μš΄λ° ν•˜λ‚˜λ‘œμ„œ, 2002λ…„ 10μ›” 11일에 <extra_id_0>. λ˜ν•œ ν˜„μž¬ ν•œκ΅­μ–΄ μœ„ν‚€λ°±κ³Όμ—λŠ” λ„˜κ²¨μ£ΌκΈ°, ν† λ‘ , κ·Έλ¦Ό λ“± νŽ˜μ΄μ§€λ‘œ λΆˆλ¦¬λŠ” λͺ¨λ“  λ¬Έμ„œλ₯Ό ν¬ν•¨ν•˜λ©΄ 총 2,629,860κ°œκ°€ <extra_id_1>λ˜μ–΄ 있으며, λ„˜κ²¨μ£ΌκΈ°λ₯Ό ν¬ν•¨ν•œ 일반 λ¬Έμ„œ μˆ˜λŠ” 1,278,560개,[1] 그쀑 λ„˜κ²¨μ£ΌκΈ°, 막닀λ₯Έ λ¬Έμ„œλ₯Ό μ œμ™Έν•œ 일반 λ¬Έμ„œ μˆ˜λŠ” 573,149κ°œμ΄λ‹€.'
-
- input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
- outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
- print(tokenizer_jamo.decode(outputs_jamo[0]))
- # <pad><extra_id_0>μ„€λ¦½λ˜μ—ˆλ‹€<extra_id_1>Δ‘Δ›
- ```
-
- Additional information coming soon...
 
 
+ ---
+ language: ko
+ datasets:
+ - mc4
+ license: apache-2.0
+ ---
+
+ # ByT5-Korean - small
+
+ ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).
+
+ A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they function like the individual letters of an alphabet.
+ While ByT5's UTF-8 encoding handles multiple languages generically, it is unnatural for Korean because the byte boundaries cut through the bit representation of each Jamo.
+
+ ByT5-Korean extends ByT5's UTF-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
+ ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
+
+ ## Encoding Scheme
+ ```text
+ id: token
+ 0: <pad>
+ 1: <eos>
+ 2: <unk>
+ 3~258: UTF-8 bytes
+ 259~277: beginning consonants (μ΄ˆμ„±), 19 tokens (γ„±γ„²γ„΄γ„·γ„Έγ„Ήγ…γ…‚γ…ƒγ……γ…†γ…‡γ…ˆγ…‰γ…Šγ…‹γ…Œγ…γ…Ž)
+ 278~298: middle vowels (쀑성), 21 tokens (γ…γ…γ…‘γ…’γ…“γ…”γ…•γ…–γ…—γ…˜γ…™γ…šγ…›γ…œγ…γ…žγ…Ÿγ… γ…‘γ…’γ…£)
+ 299~326: final consonants (μ’…μ„±), no final consonant + 27 tokens (γ„±γ„²γ„³γ„΄γ„΅γ„Άγ„·γ„Ήγ„Ίγ„»γ„Όγ„½γ„Ύγ„Ώγ…€γ…γ…‚γ…„γ……γ…†γ…‡γ…ˆγ…Šγ…‹γ…Œγ…γ…Ž)
+ 327~384: from <extra_id_0> to <extra_id_57>
+ ```
+
+ ## Example Inference
+
+ ```python
+ import torch
+ from tokenizer import ByT5KoreanTokenizer  # https://huggingface.co/everdoubling/byt5-Korean-small/blob/main/tokenizer.py
+ from transformers import T5ForConditionalGeneration
+
+ tokenizer_jamo = ByT5KoreanTokenizer()
+ model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-small')
+
+ input_sentence = 'ν•œκ΅­μ–΄ μœ„ν‚€λ°±κ³Ό(μ˜μ–΄: Korean Wikipedia)λŠ” ν•œκ΅­μ–΄λ‘œ μš΄μ˜λ˜λŠ” μœ„ν‚€λ°±κ³Όμ˜ λ‹€μ–Έμ–΄νŒ κ°€μš΄λ° ν•˜λ‚˜λ‘œμ„œ, 2002λ…„ 10μ›” 11일에 <extra_id_0>. λ˜ν•œ ν˜„μž¬ ν•œκ΅­μ–΄ μœ„ν‚€λ°±κ³Όμ—λŠ” λ„˜κ²¨μ£ΌκΈ°, ν† λ‘ , κ·Έλ¦Ό λ“± νŽ˜μ΄μ§€λ‘œ λΆˆλ¦¬λŠ” λͺ¨λ“  λ¬Έμ„œλ₯Ό ν¬ν•¨ν•˜λ©΄ 총 2,629,860κ°œκ°€ <extra_id_1>λ˜μ–΄ 있으며, λ„˜κ²¨μ£ΌκΈ°λ₯Ό ν¬ν•¨ν•œ 일반 λ¬Έμ„œ μˆ˜λŠ” 1,278,560개,[1] 그쀑 λ„˜κ²¨μ£ΌκΈ°, 막닀λ₯Έ λ¬Έμ„œλ₯Ό μ œμ™Έν•œ 일반 λ¬Έμ„œ μˆ˜λŠ” 573,149κ°œμ΄λ‹€.'
+
+ input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
+ outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
+ print(tokenizer_jamo.decode(outputs_jamo[0]))
+ # <pad><extra_id_0>μ„€λ¦½λ˜μ—ˆλ‹€<extra_id_1>Δ‘Δ›
+ ```
+
+ Additional information coming soon...