---
language:
- en
- ko
license: llama3
library_name: transformers
base_model:
- meta-llama/Meta-Llama-3-8B
---
<a href="https://taemin6697.github.io/">
<img src="https://github.com/taemin6697/taemin6697/assets/96530685/46a29020-e640-4e74-9d77-f12e466fc706" width="40%" height="50%">
</a>
# Hansung Bllossom | [Demo]() | [Developer Taemin Kim](https://taemin6697.github.io/) | [Github](https://github.com/taemin6697/HansungGPT/tree/main) |
```
We release Hansung-Bllossom-8B, a model fine-tuned on Hansung University QA data.
It was trained on top of MLP-KTLim/llama-3-Korean-Bllossom-8B.
```
The Bllossom language model is a Korean-English bilingual language model based on the open-source Llama 3. It strengthens the connection between Korean and English knowledge and has the following features:
* **Knowledge Linking**: Linking Korean and English knowledge through additional training.
* **Vocabulary Expansion**: Expanding the Korean vocabulary to enhance Korean expressiveness.
* **Instruction Tuning**: Tuning with custom-made instruction-following data specialized for the Korean language and Korean culture (a rendered example follows this list).
* **Human Feedback**: DPO has been applied.
* **Vision-Language Alignment**: Aligning the vision transformer with this language model.
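
As a rough illustration of the instruction-following format, the sketch below renders a single hypothetical training example with the Llama 3 chat template. The question and answer are made up for illustration only; the actual Bllossom training data is not published here.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kfkas/Hansung-Bllossom-8B")

# Hypothetical instruction-tuning record (illustrative, not from the real dataset).
example = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "한성대학교는 어디에 있나요?"},        # "Where is Hansung University located?"
    {"role": "assistant", "content": "서울특별시 성북구에 있습니다."},  # "It is in Seongbuk-gu, Seoul."
]

# Render the record as a single Llama 3 chat-formatted training string.
print(tokenizer.apply_chat_template(example, tokenize=False))
```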
## Example code
### Install Dependencies
```bash
pip install torch transformers==4.40.0 accelerate
```
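
Before running the examples below, it may help to confirm that the pinned transformers version is active and a GPU is visible. This is an optional check, not part of the original instructions:

```python
import torch
import transformers

# Optional sanity check for the environment used by the examples below.
print("transformers:", transformers.__version__)      # expected: 4.40.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```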
### Python code with Pipeline
```python
import transformers
import torch

model_id = "kfkas/Hansung-Bllossom-8B"

# Load the model into a text-generation pipeline in bfloat16.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline.model.eval()

PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''
instruction = "한성대학교에서는 어떤 축제나 행사가 열리나요?"  # "What festivals or events are held at Hansung University?"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

# Render the chat messages with the Llama 3 chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Print only the newly generated text, stripping the prompt prefix.
print(outputs[0]["generated_text"][len(prompt):])
```
### Python code with AutoModel
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'kfkas/Hansung-Bllossom-8B'

# Load the tokenizer and the model in bfloat16, placing weights automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''
instruction = "한성대학교는 언제 설립되었나요?"  # "When was Hansung University founded?"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

# Tokenize the chat messages with the Llama 3 chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
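### Python code with 4-bit quantization (optional)

For GPUs with limited memory, the same checkpoint can likely be loaded in 4-bit precision through bitsandbytes. This is a minimal sketch, not an officially documented configuration for this model; it assumes `bitsandbytes` is installed alongside the dependencies above, and generation then proceeds exactly as in the AutoModel example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "kfkas/Hansung-Bllossom-8B"

# NF4 4-bit quantization with bfloat16 compute (requires `pip install bitsandbytes`).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
model.eval()
```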
## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
```
**Vision-Language Model**
```text
@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
```
## Contact
- Taemin Kim (김태민), Intelligent System. `taemin6697@gmail.com`
## Contributor
- Taemin Kim (김태민), Intelligent System. `taemin6697@gmail.com`