cbdb committed
Commit 26a1722
1 Parent(s): ccaf183

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ license: cc-by-nc-sa-4.0
 ---
 
 # <font color="IndianRed"> Kraft (Korean Romanization From Transformer) </font>
-[![Open In Colab](https://colab.research.google.com/drive/1CyyBvXZYNjaidOZUNGSCtVbmBganRUCn?usp=sharing)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CyyBvXZYNjaidOZUNGSCtVbmBganRUCn?usp=sharing)
 
 The Kraft (Korean Romanization From Transformer) model translates the characters (Hangul) of a Korean person name into the Roman alphabet ([McCune–Reischauer system](https://en.wikipedia.org/wiki/McCune%E2%80%93Reischauer)). Kraft uses the Transformer architecture, which is a type of neural network architecture that was introduced in the 2017 paper "Attention Is All You Need" by Google researchers. It is designed for sequence-to-sequence tasks, such as machine translation, language modeling, and summarization.
 
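To make the task in the README concrete, here is a small rule-based sketch of what Hangul-to-McCune–Reischauer romanization involves. This is not Kraft's code or its inference API — just an illustration, using the standard Unicode decomposition of precomposed Hangul syllables and partial, approximated jamo tables (word-initial consonant values only; the context-dependent voicing and assimilation rules of McCune–Reischauer, which are what make a learned seq2seq model attractive, are deliberately omitted).

```python
# Rule-based sketch of the mapping Kraft learns end to end: decompose a
# precomposed Hangul syllable (U+AC00..U+D7A3) into initial/vowel/final
# jamo via the Unicode formula, then look each jamo up in a
# McCune-Reischauer table. NOT the model's code -- an illustration only.

INITIALS = ["k", "kk", "n", "t", "tt", "r", "m", "p", "pp", "s", "ss",
            "", "ch", "tch", "ch'", "k'", "t'", "p'", "h"]
VOWELS = ["a", "ae", "ya", "yae", "ŏ", "e", "yŏ", "ye", "o", "wa", "wae",
          "oe", "yo", "u", "wŏ", "we", "wi", "yu", "ŭ", "ŭi", "i"]
# Syllable-final consonants, approximated (consonant clusters reduced).
FINALS = ["", "k", "k", "k", "n", "n", "n", "t", "l", "k", "m", "p",
          "l", "l", "p", "l", "m", "p", "p", "t", "t", "ng", "t", "t",
          "k", "t", "p", "t"]

def romanize_syllable(ch: str) -> str:
    """Romanize one Hangul syllable in isolation (no cross-syllable rules)."""
    offset = ord(ch) - 0xAC00
    if not 0 <= offset < 19 * 21 * 28:
        raise ValueError(f"not a precomposed Hangul syllable: {ch!r}")
    initial, rest = divmod(offset, 21 * 28)   # 21 vowels x 28 finals
    vowel, final = divmod(rest, 28)
    return INITIALS[initial] + VOWELS[vowel] + FINALS[final]

# Common surname syllables:
print(romanize_syllable("김"))  # -> kim
print(romanize_syllable("박"))  # -> pak
```

Even this toy version shows why the problem is nontrivial: correct McCune–Reischauer output depends on a consonant's position and its neighbors across syllable boundaries, which a character-level Transformer can learn directly from name pairs.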