Update!
- [2024.06.18] Updated to the Bllossom ELO model, whose pretraining data was scaled up to 250GB. Note that this update does not include vocabulary expansion; if you would like to use the earlier vocabulary-expanded long-context model, please contact us directly!
- [2024.06.18] The Bllossom ELO model is newly trained with our in-house ELO pretraining method. On the LogicKor benchmark it achieved the SOTA score among existing Korean models under 10B parameters.
LogicKor benchmark results:
| Model | Math | Reasoning | Writing | Coding | Understanding | Grammar | Single ALL | Multi ALL | Overall |
|---|---|---|---|---|---|---|---|---|---|
| gpt-3.5-turbo-0125 | 7.14 | 7.71 | 8.28 | 5.85 | 9.71 | 6.28 | 7.50 | 7.95 | 7.72 |
| gemini-1.5-pro-preview-0215 | 8.00 | 7.85 | 8.14 | 7.71 | 8.42 | 7.28 | 7.90 | 6.26 | 7.08 |
| llama-3-Korean-Bllossom-8B | 5.43 | 8.29 | 9.00 | 4.43 | 7.57 | 6.86 | 6.93 | 6.93 | 6.93 |
Bllossom | Demo | Homepage | Github
- This is a quantized model that runs on CPU; for faster inference it can run on a GPU with 8GB of memory! A Colab example is available.
Our Bllossom team has released Bllossom, a Korean-English bilingual language model!
With the support of the Seoultech Supercomputing Center, the entire model was fully fine-tuned on over 100GB of Korean data, making it a Korean-enhanced bilingual model.
Were you looking for a model that handles Korean well?
- A first for Korean: vocabulary expansion of more than 30,000 Korean tokens
- Handles Korean context roughly 25% longer than Llama3
- Korean-English knowledge linking through a Korean-English parallel corpus (pretraining)
- Fine-tuning on data crafted by linguists with Korean culture and language in mind
- Reinforcement learning
All of this is applied in a single, commercially usable model. Use Bllossom to build a model of your own!
This is a quantized model that runs on CPU; for faster inference it can run on a GPU with 6GB of memory!
1. Bllossom-8B is a practicality-driven language model built in collaboration with linguists from Seoultech, Teddysum, and the Yonsei University language resources lab! We will keep maintaining it with continuous updates, so please make good use of it 🙂
2. We also have the even more powerful Advanced-Bllossom 8B and 70B models, as well as a vision-language model! (Contact us individually if you are interested!!)
3. Bllossom has been accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep releasing updated language models!! Anyone interested in joint research (especially papers) to strengthen Korean support is always welcome!!
In particular, teams that can lend even a small amount of GPU time are welcome to contact us anytime; we are happy to help you build what you want.
The Bllossom language model is a Korean-English bilingual language model based on the open-source Llama3. It enhances the connection of knowledge between Korean and English, and has the following features:
- Knowledge Linking: Linking Korean and English knowledge through additional training
- Vocabulary Expansion: Expansion of Korean vocabulary to enhance Korean expressiveness (see the tokenizer sketch after this list)
- Instruction Tuning: Tuning using custom-made instruction following data specialized for Korean language and Korean culture
- Human Feedback: DPO has been applied
- Vision-Language Alignment: Aligning the vision transformer with this language model
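The vocabulary expansion above can be checked empirically: the same Korean sentence should tokenize into noticeably fewer tokens with the expanded tokenizer than with the base Llama3 tokenizer, which is also where the roughly 25% longer effective Korean context comes from. Below is a minimal sketch, assuming access to MLP-KTLim/llama-3-Korean-Bllossom-8B and the gated meta-llama/Meta-Llama-3-8B repositories; note that the 2024.06.18 ELO update does not include vocabulary expansion, so the gap only appears with the earlier vocabulary-expanded checkpoints.

from transformers import AutoTokenizer

# Compare token counts for the same Korean sentence (illustrative only)
base_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
bllossom_tok = AutoTokenizer.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B")

text = "한국어와 영어를 모두 잘하는 이중 언어 모델을 찾고 있습니다."

print("base Llama3 tokens:", len(base_tok(text)["input_ids"]))
print("Bllossom tokens   :", len(bllossom_tok(text)["input_ids"]))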
This model was developed by MLPLab at Seoultech, Teddysum, and Yonsei University.
This model was converted to GGUF format from MLP-KTLim/llama-3-Korean-Bllossom-8B using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
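For reference, if you want to reproduce such a conversion locally instead of going through the GGUF-my-repo space, a typical llama.cpp workflow looks roughly like the commands below. This is a sketch only: the script and binary names (convert_hf_to_gguf.py, llama-quantize) and their options vary across llama.cpp versions, and this is not necessarily the exact recipe used for this repository.

# Download the full-precision weights, convert to GGUF, then quantize to Q4_K_M
!huggingface-cli download MLP-KTLim/llama-3-Korean-Bllossom-8B --local-dir bllossom-8b-hf
!git clone https://github.com/ggerganov/llama.cpp
!pip install -r llama.cpp/requirements.txt
!python llama.cpp/convert_hf_to_gguf.py bllossom-8b-hf --outfile bllossom-8b-f16.gguf --outtype f16
!cd llama.cpp && make llama-quantize
!./llama.cpp/llama-quantize bllossom-8b-f16.gguf llama-3-Korean-Bllossom-8B-Q4_K_M.gguf Q4_K_M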
Demo Video
NEWS
- [2024.05.08] Vocab Expansion Model Update
- [2024.04.25] We released Bllossom v2.0, based on llama-3.
- [2023.12] We released Bllossom-Vision v1.0, based on Bllossom.
- [2023.08] We released Bllossom v1.0, based on llama-2.
- [2023.07] We released Bllossom v0.7, based on polyglot-ko.
Example code
# Install llama-cpp-python with CUDA support (drop CMAKE_ARGS for a CPU-only build)
!CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
# Download the quantized GGUF weights from the Hugging Face Hub
!huggingface-cli download MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M --local-dir='YOUR-LOCAL-FOLDER-PATH'
from llama_cpp import Llama
from transformers import AutoTokenizer
model_id = 'MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama(
    model_path='YOUR-LOCAL-FOLDER-PATH/llama-3-Korean-Bllossom-8B-Q4_K_M.gguf',
    n_ctx=512,          # context window size in tokens
    n_gpu_layers=-1     # offload all model layers to the GPU (use 0 for CPU-only)
)
PROMPT = \
'''๋น์ ์ ์ ์ฉํ AI ์ด์์คํดํธ์
๋๋ค. ์ฌ์ฉ์์ ์ง์์ ๋ํด ์น์ ํ๊ณ ์ ํํ๊ฒ ๋ต๋ณํด์ผ ํฉ๋๋ค.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''
instruction = 'Your Instruction'
messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
generation_kwargs = {
    "max_tokens": 512,
    "stop": ["<|eot_id|>"],
    "top_p": 0.9,
    "temperature": 0.6,
    "echo": True  # Echo the prompt in the output
}
response_msg = model(prompt, **generation_kwargs)
print(response_msg['choices'][0]['text'][len(prompt):])
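Alternatively, llama-cpp-python can apply the chat template itself, so the transformers tokenizer is not strictly required. The following minimal sketch assumes a llama-cpp-python version that ships the built-in llama-3 chat format; the user prompt is just an illustrative example.

from llama_cpp import Llama

# Let llama-cpp-python handle the Llama-3 chat template instead of transformers
model = Llama(
    model_path='YOUR-LOCAL-FOLDER-PATH/llama-3-Korean-Bllossom-8B-Q4_K_M.gguf',
    n_ctx=512,
    n_gpu_layers=-1,        # set to 0 for CPU-only inference
    chat_format="llama-3"   # assumes a build that includes this built-in template
)

response = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant. Answer the user's questions kindly and accurately."},
        {"role": "user", "content": "서울의 유명한 관광 코스를 만들어줄래?"}
    ],
    max_tokens=512,
    temperature=0.6,
    top_p=0.9,
    stop=["<|eot_id|>"]
)
print(response["choices"][0]["message"]["content"])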
Citation
Language Model
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
Vision-Language Model
@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
Contact
- KyungTae Lim, Professor at Seoultech.
ktlim@seoultech.ac.kr
- Younggyun Hahm, CEO of Teddysum.
hahmyg@teddysum.ai
- Hansaem Kim, Professor at Yonsei.
khss@yonsei.ac.kr
Contributor
- Chansu Choi, choics2623@seoultech.ac.kr
- Sangmin Kim, sangmin9708@naver.com
- Inho Won, wih1226@seoultech.ac.kr
- Minjun Kim, mjkmain@seoultech.ac.kr
- Seungwoo Song, sswoo@seoultech.ac.kr
- Dongjae Shin, dylan1998@seoultech.ac.kr
- Hyeonseok Lim, gustjrantk@seoultech.ac.kr
- Jeonghun Yuk, usually670@gmail.com
- Hangyeol Yoo, 21102372@seoultech.ac.kr
- Seohyun Song, alexalex225225@gmail.com