
komt : korean multi task instruction tuning model

(Figure: multi task instruction tuning overview)

Following the success of ChatGPT, numerous large language models have emerged in an attempt to match its capabilities. When it comes to Korean, however, many of these models still struggle to answer accurately or to generate fluent Korean text. This work addresses those challenges by introducing a multi-task instruction tuning technique that converts supervised datasets from a variety of tasks into instruction-format training data for large language models (LLMs).
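The core idea can be illustrated with a small sketch: supervised examples from heterogeneous tasks (e.g. translation, summarization, question answering) are rewritten into a shared instruction/response format and then concatenated into one training set. The task names, prompts, and field names below are hypothetical and are not taken from the actual komt data pipeline.

```python
# Illustrative sketch of multi-task instruction conversion (not the authors' code).
# Each task-specific supervised example is wrapped as an instruction/response pair.

def to_instruction_record(task: str, example: dict) -> dict:
    """Convert one supervised example into a generic instruction-tuning record."""
    if task == "translation":
        instruction = f"다음 문장을 한국어로 번역하세요.\n{example['source']}"
        response = example["target"]
    elif task == "summarization":
        instruction = f"다음 글을 요약하세요.\n{example['document']}"
        response = example["summary"]
    else:  # e.g. question answering
        instruction = f"{example['context']}\n질문: {example['question']}"
        response = example["answer"]
    return {"instruction": instruction, "output": response}

# The multi-task training set is simply the concatenation of all converted records
# from every source task.
```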

Model Details

  • Model Developers : davidkim (changyeon kim)
  • Repository : https://github.com/davidkim205/komt (will be updated soon)
  • base model : Edentns/DataVortexS-10.7B-dpo-v1.11
  • dataset : comp-341k (will be updated soon)
  • Model size : 10.9B params (Safetensors)
  • Tensor type : BF16
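The model can be loaded with the standard Hugging Face transformers API, as in the minimal sketch below. The prompt template shown is an assumption and is not confirmed for this release; check the repository for the exact format used during training.

```python
# Minimal inference sketch for davidkim205/komt-solar-10.7b-sft-v5.
# The "### instruction / ### Response" template is an assumption, not a documented fact.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/komt-solar-10.7b-sft-v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the BF16 tensor type listed above
    device_map="auto",
)

prompt = "### instruction: 한국의 수도는 어디인가요?\n\n### Response:\n"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```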
