---
library_name: transformers
license: apache-2.0
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- Saxo/total_ko_train_set_1_without_wiki_with_orca
language:
- ko
- en
- ja
- zh
pipeline_tag: text-generation
---
# Model Card for Model ID
A Korean-language MoE (Mixture of Experts) model composed of 8 LLAMA3-8B experts, SFT-DPO trained by Dr. Yunsung Ji (Saxo), director and data scientist at Linkbricks, a company specializing in AI and big data analytics, on 8x H100-80G GPUs on GCP with meta-llama/Meta-Llama-3-8B as the base model. The tokenizer is identical to Llama 3's; this version does not extend the Korean vocabulary. It integrates LLMs specialized in general Q&A (chat), medical, military, Korean-Chinese-Japanese-English translation, and coding.
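A minimal inference sketch with the transformers `pipeline` follows, assuming a recent transformers release that accepts chat-formatted inputs. The repository id is a placeholder (this card does not state it), so substitute the actual model id when loading.

```python
from transformers import pipeline

# Placeholder: replace with this repository's actual model id.
model_id = "Saxo/<this-model-repo>"

# text-generation pipeline, matching the pipeline_tag in the metadata above.
generator = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",   # spread the MoE weights across available GPUs
    torch_dtype="auto",  # use the checkpoint's native precision
)

# The tokenizer is the stock Llama 3 tokenizer, so the Llama 3 chat template applies.
messages = [
    {"role": "user", "content": "한국의 수도는 어디인가요?"},  # "What is the capital of Korea?"
]
out = generator(messages, max_new_tokens=128)
print(out[0]["generated_text"])
```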
Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, trained the meta-llama/Meta-Llama-3-8B base model on 8x H100-80G GPUs on GCP for 4 hours of instruction training (8000-token sequences). The Accelerate and DeepSpeed ZeRO-3 libraries were used.
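Only the libraries are named above; the following is an illustrative sketch of the SFT stage wired up with the Hugging Face Trainer and an inline DeepSpeed ZeRO-3 config, not the authors' actual training script. The dataset's `text` column and the batch-size settings are assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Dataset named in the metadata block above; the "text" column is an assumption.
ds = load_dataset("Saxo/total_ko_train_set_1_without_wiki_with_orca", split="train")
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=8000),
    batched=True,
    remove_columns=ds.column_names,
)

# Inline DeepSpeed ZeRO-3 config; "auto" values defer to TrainingArguments.
ds_zero3 = {
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": "auto"},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(
    output_dir="sft-out",
    per_device_train_batch_size=1,  # assumption; the card does not state batch size
    gradient_accumulation_steps=8,  # assumption
    bf16=True,
    deepspeed=ds_zero3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # launch with `accelerate launch train.py` across the 8 GPUs
```

The subsequent DPO stage would typically use a preference-optimization trainer such as trl's `DPOTrainer`; it is omitted here because the card gives no DPO hyperparameters.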