
Model Details

์ƒ์„ฑํ•œ ํ•œ๊ตญ์–ด ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ axolotl์„ ์ด์šฉํ•˜์—ฌ ํŒŒ์ธํŠœ๋‹ํ•˜์˜€์Šต๋‹ˆ๋‹ค.

LogicKor์—์„œ 2.1B์˜ ํŒŒ๋ผ๋ฉ”ํ„ฐ๋กœ default ๊ธฐ์ค€ 4.21์ ์„ ๊ธฐ๋กํ•˜์˜€์Šต๋‹ˆ๋‹ค.

์•„์ง ์‹คํ—˜์ค‘์ธ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

Model Description

Qwen/Qwen2-1.5B-Instruct ๋ชจ๋ธ์„ ์ด์šฉํ•˜์—ฌ ์ƒ์„ฑํ•˜์˜€์Šต๋‹ˆ๋‹ค.

LogicKor

default

| Category | Single turn | Multi turn |
|---|---|---|
| Understanding | 6.14 | 5.43 |
| Grammar | 5.00 | 2.43 |
| Math | 1.86 | 1.86 |
| Reasoning | 5.57 | 2.14 |
| Coding | 3.57 | 3.71 |
| Writing | 6.00 | 6.86 |

| Category | Score |
|---|---|
| Single turn | 4.69 |
| Multi turn | 3.74 |
| Overall | 4.21 |

1-shot

| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning | 4.14 | 1.43 |
| Math | 2.86 | 1.00 |
| Writing | 5.00 | 4.57 |
| Coding | 3.14 | 3.43 |
| Understanding | 4.29 | 3.71 |
| Grammar | 2.71 | 1.43 |

| Category | Score |
|---|---|
| Single turn | 3.69 |
| Multi turn | 2.60 |
| Overall | 3.14 |

cot-1-shot

| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning | 3.00 | 2.86 |
| Math | 1.57 | 1.00 |
| Writing | 5.86 | 6.00 |
| Coding | 4.29 | 4.14 |
| Understanding | 3.43 | 3.43 |
| Grammar | 3.00 | 1.14 |

| Category | Score |
|---|---|
| Single turn | 3.52 |
| Multi turn | 3.10 |
| Overall | 3.31 |

Applications

This fine-tuned model is particularly suited for Korean-language applications such as chatbots and question-answering systems, where the fine-tuning helps it produce more accurate and contextually appropriate responses.
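
As a hedged illustration of the chatbot use case, the sketch below keeps a running message history so that follow-up questions can rely on earlier turns; it reuses the (assumed) model and tokenizer loaded in the example above.

```python
# Illustrative multi-turn chat helper. Assumes `model` and `tokenizer` were loaded
# as in the inference sketch above; the helper itself is hypothetical, not part of the model card.
def chat(model, tokenizer, history, user_message, max_new_tokens=256):
    """Append a user turn, generate the assistant reply, and record it in the history."""
    history.append({"role": "user", "content": user_message})
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful Korean assistant."}]
print(chat(model, tokenizer, history, "한국의 수도는 어디야?"))        # "What is the capital of Korea?"
print(chat(model, tokenizer, history, "그 도시의 인구는 몇 명이야?"))  # Follow-up that depends on the previous turn.
```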

Limitations and Considerations

While our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements.

Model Card

@article{Carrot-Ko-2.1B-Instruct,
  title={CarrotAI/Carrot-Ko-2.1B-Instruct Card},
  author={CarrotAI (L, GEUN)},
  year={2024},
  url = {https://huggingface.co/CarrotAI/Carrot-2.1B-Instruct}
}