---
tags:
  - text-generation
license: cc-by-nc-sa-4.0
language:
  - ko
base_model: hyeogi/SOLAR-10.7B-dpo-v0.1
pipeline_tag: text-generation
datasets:
  - beomi/KoAlpaca-v1.1a
  - Edentns/Worktronics-FAQ
---
# DataVortexS-10.7B-v0.2
## Our Team

| Research & Engineering | Product Management |
|---|---|
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## Model Details

### Base Model

[hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1)

### Trained On

- OS: Ubuntu 20.04
- GPU: H100 80GB × 1
- transformers: v4.36.2
### Dataset

- [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
- Edentns/Worktronics-FAQ (private)
### Instruction format

It follows the Alpaca format. For example:
text = """\
λΉμ μ μ¬λλ€μ΄ μ 보λ₯Ό μ°Ύμ μ μλλ‘ λμμ£Όλ μΈκ³΅μ§λ₯ λΉμμ
λλ€.
### Instruction:
λνλ―Όκ΅μ μλλ μ΄λμΌ?
### Response:
λνλ―Όκ΅μ μλλ μμΈμ
λλ€.
### Instruction:
μμΈ μΈκ΅¬λ μ΄ λͺ λͺ
μ΄μΌ?
"""
## Model Benchmark

### Ko LM Eval Harness

| Task | 0-shot | 5-shot | 10-shot | 50-shot |
|---|---|---|---|---|
| kobest_boolq | 0.501449 | 0.668845 | 0.652565 | 0.655491 |
| kobest_copa | 0.635474 | 0.685637 | 0.708601 | 0.725683 |
| kobest_hellaswag | 0.417966 | 0.442942 | 0.428077 | 0.425199 |
| kobest_sentineg | 0.681941 | 0.880517 | 0.921754 | 0.939528 |
| **Average** | 0.5592075 | 0.66948525 | 0.67774925 | 0.68647525 |
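These KoBEST scores can in principle be re-run with EleutherAI's lm-evaluation-harness. The sketch below assumes a 0.4.x harness where `simple_evaluate` and the `kobest_*` tasks are available; the exact harness version and settings used for the reported numbers are not stated in this card, so results may differ.

```python
# Sketch of re-running the KoBEST evaluation with lm-evaluation-harness
# (pip install lm-eval). Assumes lm-eval >= 0.4; the reported scores may have
# been produced with a different harness version or configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Edentns/DataVortexS-10.7B-v0.2",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,   # repeat with 0, 10, 50 for the other columns
    device="cuda:0",
)
print(results["results"])
```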
### Ko-LLM-Leaderboard

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|
| 43.6 | 38.74 | 50.74 | 38.98 | 44.7 | 44.86 |
## Implementation Code

This model's tokenizer includes a chat_template for the instruction format. You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.2")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.2")

messages = [
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},  # "You are an AI assistant that helps people find information."
    {"role": "user", "content": "대한민국의 수도는 어디야?"},  # "What is the capital of South Korea?"
    {"role": "assistant", "content": "대한민국의 수도는 서울입니다."},  # "The capital of South Korea is Seoul."
    {"role": "user", "content": "서울 인구는 총 몇 명이야?"}  # "What is the total population of Seoul?"
]

# Render the conversation with the tokenizer's chat template and tokenize it.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
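On a single GPU, loading a 10.7B model in full precision may not fit in memory. A common variation, not specific to this model, is to load the weights in half precision and let `device_map` place them (this assumes `accelerate` is installed):

```python
# Optional variation: load in float16 with automatic device placement,
# instead of calling model.to(device) afterwards. Requires `accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Edentns/DataVortexS-10.7B-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.2")
```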
## License
The model is licensed under the cc-by-nc-sa-4.0 license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.