|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
base_model: meta-llama/Meta-Llama-3-8B-Instruct |
|
datasets: |
|
- Saxo/total_ko_train_set_1_without_wiki_with_orca |
|
language: |
|
- ko |
|
- en |
|
- ja |
|
- zh |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Model Card
|
|
|
<div align="center"> |
|
<img src="https://www.linkbricks.com/wp-content/uploads/2022/03/%E1%84%85%E1%85%B5%E1%86%BC%E1%84%8F%E1%85%B3%E1%84%87%E1%85%B3%E1%84%85%E1%85%B5%E1%86%A8%E1%84%89%E1%85%B3%E1%84%85%E1%85%A9%E1%84%80%E1%85%A9-2-1024x804.png" /> |
|
</div> |
|
|
|
|
|
A Korean-language MoE (Mixture of Experts) model composed of eight LLAMA3-8B experts, trained with SFT-DPO on 8x H100-80G GPUs on GCP by Dr. Yunsung Ji (Saxo), director and data scientist at Linkbricks, a company specializing in AI and big data analytics, using meta-llama/Meta-Llama-3-8B as the base model.
|
The tokenizer is identical to Llama 3's; this version does not extend the Korean vocabulary.
|
It integrates LLMs specialized in general Q&A (chat), medical, military, Korean-Chinese-Japanese translation, and coding.
|
|
|
Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, trained the meta-llama/Meta-Llama-3-8B base model on 8x H100-80G GPUs on GCP for 4 hours of instruction tuning (8,000-token sequences).
|
The Accelerate and DeepSpeed ZeRO-3 libraries were used for training.
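Since this is a `transformers` text-generation model, inference follows the usual Llama 3 chat workflow. The sketch below is a minimal example, not taken from this card: it assumes the fine-tuned weights are published on the Hugging Face Hub and use the standard Llama 3 chat template, and `MODEL_ID` points at the base model as a stand-in for the actual repo id.

```python
def build_messages(user_prompt: str) -> list:
    """Build a Llama-3-style chat message list for apply_chat_template."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Heavy imports kept here so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # stand-in: replace with the fine-tuned repo id

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")

    # Render the chat messages into model-ready input ids.
    inputs = tokenizer.apply_chat_template(
        build_messages("Introduce Linkbricks in one sentence."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Keeping the prompt-construction helper separate from the model calls makes it easy to swap in the multilingual prompts (ko/en/ja/zh) this model targets.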
|
|
|
www.linkbricks.com, www.linkbricks.vc |