---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
license: apache-2.0
---
# komt : korean multi task instruction tuning model
Following the success of ChatGPT, numerous large language models have emerged in an attempt to match its capabilities. When it comes to Korean, however, many of these models still struggle to provide accurate answers or to generate fluent Korean text. This study addresses these challenges by introducing a multi-task instruction tuning technique that leverages supervised datasets from a variety of tasks to build training data for large language models (LLMs).
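As a rough illustration of the idea (not the project's actual preprocessing code), the sketch below wraps labeled examples from several supervised Korean tasks into a single instruction-tuning format; the task names, prompt templates, and field names are illustrative assumptions.

```python
# Illustrative sketch only: converting supervised task datasets into a
# unified instruction-tuning format. Task names, templates, and fields
# are assumptions, not the project's actual preprocessing code.

TASK_TEMPLATES = {
    "translation_ko_en": "Translate the following Korean sentence into English:\n{input}",
    "summarization_ko": "Summarize the following Korean document:\n{input}",
    "qa_ko": "Answer the question based on the passage:\n{input}",
}

def to_instruction_example(task: str, source: str, target: str) -> dict:
    """Wrap one supervised (input, label) pair in an instruction prompt."""
    instruction = TASK_TEMPLATES[task].format(input=source)
    return {"instruction": instruction, "output": target}

# Merge examples from every task into one instruction-tuning corpus.
raw_pairs = [
    ("translation_ko_en", "안녕하세요.", "Hello."),
    ("summarization_ko", "긴 문서 ...", "요약문 ..."),
]
train_data = [to_instruction_example(t, s, o) for t, s, o in raw_pairs]
```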
## Model Details
* **Model Developers** : davidkim (changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Quantization methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0 (see the usage sketch below)
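These are llama.cpp (GGML/GGUF) quantization types, so the quantized checkpoints can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the model file name and the exact prompt format are assumptions, so check the repository for the released file names and the prompt template the model was trained with.

```python
# Minimal sketch using llama-cpp-python; the model path below is a
# hypothetical placeholder for whichever quantized checkpoint you download.
from llama_cpp import Llama

llm = Llama(
    model_path="./komt-llama-2-7b.q4_0.gguf",  # hypothetical local path
    n_ctx=2048,                                # context window size
)

# Ask a question in Korean ("Tell me about Seoul.") and print the completion.
# The "### instruction / ### Response" framing is an assumed prompt format.
result = llm(
    "### instruction: 서울에 대해 알려주세요.\n\n### Response:",
    max_tokens=256,
    stop=["###"],
)
print(result["choices"][0]["text"])
```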