---
license: mit
language:
- ko
metrics:
- f1
library_name: transformers
tags:
- bert
- ruber
- open-domain
- chit-chat
- evaluation
---
# Model Card for BERT-RUBER (Korean)
This model is a fine-tuned version of KLUE BERT (https://huggingface.co/klue/bert-base) for open-domain dialogue evaluation, based on the original BERT-RUBER (https://arxiv.org/pdf/1904.10635) architecture.
## Model Details
The model consists of a BERT encoder for contextualized embeddings and an additional multi-layer classifier. For pooling, the model uses the embedding of the [CLS] token.
Further details can be found in the original paper: https://arxiv.org/pdf/1904.10635
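The classifier head described above can be sketched as follows. This is a minimal illustration, not the exact configuration of this checkpoint: the hidden size, MLP width, and the concatenation of query/reply [CLS] embeddings are assumptions based on the general BERT-RUBER design, and `RuberHead` is a hypothetical name.

```python
import torch
import torch.nn as nn


class RuberHead(nn.Module):
    """Illustrative MLP scorer on top of BERT [CLS] embeddings.

    Assumes the query and reply are each encoded by BERT, pooled via the
    [CLS] token, concatenated, and mapped to a scalar score in (0, 1).
    Layer sizes are illustrative, not this checkpoint's actual config.
    """

    def __init__(self, hidden_size: int = 768, mlp_size: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size * 2, mlp_size),
            nn.ReLU(),
            nn.Linear(mlp_size, 1),
            nn.Sigmoid(),  # relevance score in (0, 1)
        )

    def forward(self, query_cls: torch.Tensor, reply_cls: torch.Tensor) -> torch.Tensor:
        # Concatenate the two pooled embeddings, then score the pair.
        return self.mlp(torch.cat([query_cls, reply_cls], dim=-1))


# Stand-ins for encoder outputs (batch of 2, BERT-base hidden size 768).
query_cls = torch.randn(2, 768)
reply_cls = torch.randn(2, 768)
scores = RuberHead()(query_cls, reply_cls)
print(scores.shape)  # torch.Size([2, 1])
```

In practice, the pooled embeddings would come from the `klue/bert-base` encoder loaded via `transformers`, with the head trained to distinguish ground-truth replies from randomly sampled ones, as in the RUBER unreferenced metric.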
### Model Description
- **Developed by:** devjwsong
- **Model type:** BertModel + MLP
- **Language(s) (NLP):** Korean
- **License:** MIT
- **Finetuned from model:** klue/bert-base (https://huggingface.co/klue/bert-base)
### Model Sources
- **Repository:** https://github.com/devjwsong/bert-ruber-kor-pytorch
- **Paper:** https://arxiv.org/pdf/1904.10635
## Citation
- Ghazarian, S., Wei, J. T. Z., Galstyan, A., & Peng, N. (2019). Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. arXiv preprint arXiv:1904.10635.
- Park, S., Moon, J., Kim, S., Cho, W. I., Han, J., Park, J., ... & Cho, K. (2021). KLUE: Korean language understanding evaluation. arXiv preprint arXiv:2105.09680.
## Model Card Authors
Jaewoo (Kyle) Song (devjwsong)
## Model Card Contact
- devjwsong@gmail.com
- https://github.com/devjwsong
- https://www.linkedin.com/in/jaewoo-song-13b375196