Model Card for Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B

AI ์™€ ๋น…๋ฐ์ดํ„ฐ ๋ถ„์„ ์ „๋ฌธ ๊ธฐ์—…์ธ Linkbricks์˜ ๋ฐ์ดํ„ฐ์‚ฌ์ด์–ธํ‹ฐ์ŠคํŠธ์ธ ์ง€์œค์„ฑ(Saxo) ์ด์‚ฌ๊ฐ€ NousResearch/Meta-Llama-3.1-70B-Instruct ๋ฒ ์ด์Šค๋ชจ๋ธ์„ KT-CLOUD์ƒ์˜ H100-80G 4๊ฐœ๋ฅผ ํ†ตํ•ด SFT->DPO ํŒŒ์ธ ํŠœ๋‹์„ ํ•œ ํ•œ๊ธ€ ์–ธ์–ด ๋ชจ๋ธ๋กœ ํ•œ๊ตญ์–ด-์ค‘๊ตญ์–ด-์˜์–ด-์ผ๋ณธ์–ด ๊ต์ฐจ ํ•™์Šต ๋ฐ์ดํ„ฐ์™€ ๋กœ์ง€์ปฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•˜์—ฌ ํ•œ์ค‘์ผ์˜ ์–ธ์–ด ๊ต์ฐจ ์ฆ๊ฐ• ์ฒ˜๋ฆฌ์™€ ๋ณต์žกํ•œ ํ•œ๊ธ€ ๋…ผ๋ฆฌ ๋ฌธ์ œ ์—ญ์‹œ ๋Œ€์‘ ๊ฐ€๋Šฅํ•˜๋„๋ก ํ›ˆ๋ จํ•œ ๋ชจ๋ธ์ด๋ฉฐ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹จ์–ด ํ™•์žฅ ์—†์ด ๋ฒ ์ด์Šค ๋ชจ๋ธ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉ. ํŠนํžˆ ๊ณ ๊ฐ ๋ฆฌ๋ทฐ๋‚˜ ์†Œ์…œ ํฌ์ŠคํŒ… ๊ณ ์ฐจ์› ๋ถ„์„ ๋ฐ ์ฝ”๋”ฉ๋“ฑ์ด ๊ฐ•ํ™”๋œ ๋ชจ๋ธ, 128k-Context Window, Tool Calling ์ง€์›, ์• ๊ตญ๊ฐ€ ๊ฐ€์‚ฌ๋ฅผ ์ •ํ™•ํžˆ ์•„๋Š” ๋ชจ๋ธ ์ž…๋‹ˆ๋‹ค ^^ Deepspeed Stage=3, rslora ๋ฅผ ์‚ฌ์šฉ
ollama run benedict/linkbricks-llama3.1-korean:70b

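Because the tokenizer is the unmodified Llama 3.1 base tokenizer, prompts for this model follow the standard Llama 3.1 chat format. A minimal sketch of rendering a conversation into that format by hand (the helper name and example messages are illustrative, not part of this model card; in practice the tokenizer's built-in chat template does this for you):

```python
# Sketch: render messages into the published Llama 3.1 chat prompt format.
# The helper name and example messages are illustrative assumptions.

def build_llama31_prompt(messages):
    """Render a list of {'role', 'content'} dicts into a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful Korean-language assistant."},
    {"role": "user", "content": "Summarize this customer review in one sentence."},
])
print(prompt)
```

The same roles and special tokens apply whether the model is served through ollama, vLLM, or transformers; only the serving layer differs.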

www.linkbricks.com, www.linkbricks.vc

Model size: 70.6B params · Tensor type: BF16 · Format: Safetensors

Model tree for Saxo/Linkbricks-Horizon-AI-Korean-llama3.1-sft-dpo-70B

Quantized: 3 models
Merges: 2 models
