---
language: ko
tags:
- text2text-generation
---
# Model Card for a BERT base model for Korean
# Model Details
## Model Description
More information needed.
- **Developed by:** Kiyoung Kim
- **Shared by [Optional]:** Kiyoung Kim
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** Korean
- **License:** More information needed
- **Parent Model:** bert-base-multilingual-uncased
- **Resources for more information:**
- [GitHub Repo](https://github.com/kiyoungkim1/LM-kor)
# Uses
## Direct Use
This model can be used for the task of text2text generation.
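As a quick illustration, the snippet below runs generation with the shared BERT encoder-decoder. It is a minimal sketch, not a recipe from the model authors: the input sentence and the generation settings (beam count, maximum length, and using `[CLS]` as the decoder start token) are illustrative assumptions.

```python
# Minimal text2text generation sketch (PyTorch); settings are illustrative only.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")

inputs = tokenizer("์•ˆ๋…•ํ•˜์„ธ์š”. ์˜ค๋Š˜ ๋‚ ์”จ๊ฐ€ ์ฐธ ์ข‹๋„ค์š”.", return_tensors="pt")
output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    decoder_start_token_id=tokenizer.cls_token_id,  # BERT has no dedicated BOS token
    max_length=64,
    num_beams=4,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that a freshly loaded `EncoderDecoderModel` generally needs fine-tuning on a downstream sequence-to-sequence task (e.g., summarization) before its generations are useful.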
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
* A 70GB Korean text dataset and 42,000 lower-cased subwords were used.
The model authors note in the [GitHub Repo](https://github.com/kiyoungkim1/LM-kor) (translated from Korean):
> The data used for training is as follows:
> 1. 100 million reviews from major Korean e-commerce sites + 20 million blog-style web pages (75GB)
> 2. Modu Corpus (๋ชจ๋‘์˜ ๋ง๋ญ‰์น˜) (18GB)
> 3. Wikipedia and Namuwiki (6GB)
>
> Unnecessary or overly short sentences and duplicate sentences were excluded, so that 70GB (about 12.7 billion tokens) of the 100GB of text data was ultimately used for training.
> The data is grouped into categories such as cosmetics (8GB), food (6GB), electronics (13GB), and pets (2GB), and was also used to train domain-specific language models.
## Training Procedure
### Preprocessing
The model authors also note in the [GitHub Repo](https://github.com/kiyoungkim1/LM-kor) (translated from Korean):
> Whole-word masking was applied to the BERT model.
> Characters other than Hangul, English letters, digits, and some special characters were removed, since they were judged to hinder training (e.g., Chinese characters, emoji).
> 40,000 subwords were generated with the WordPiece model from [Huggingface tokenizers](https://github.com/huggingface/tokenizers).
> An additional 2,000 unused tokens were included in training; the unused tokens are reserved for domain-specific terms.
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
* See the [GitHub Repo](https://github.com/kiyoungkim1/LM-kor) for the performance of this model and of other Korean language models.
| | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :-----------------------------------: |
| kcbert-base | 89.87 | 85.00 | 67.40 | 75.57 | 75.94 | 93.93 | **68.78** |
| **OURS** | | | | | | | |
| **bert-kor-base** | 90.87 | 87.27 | 82.80 | 82.32 | 84.31 | 95.25 | 68.45 |
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@misc{kim2020lmkor,
  author = {Kiyoung Kim},
  title = {Pretrained Language Models For Korean},
  year = {2020},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/kiyoungkim1/LMkor}}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
* Cloud TPUs were provided by the [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc/) program.
* Also, the [๋ชจ๋‘์˜ ๋ง๋ญ‰์น˜ (Modu Corpus)](https://corpus.korean.go.kr/) was used as pretraining data.
# Model Card Authors [optional]
Kiyoung Kim in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>

```python
# Load the checkpoint as a shared BERT encoder-decoder (PyTorch only in transformers)
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")
```

</details>