---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Synatra-7B-Instruct-v0.2
**Made by StableFluffy**

## Contact
- Discord: is.maywell
- Telegram: AlzarTakkarsen

## License
This model is strictly for non-commercial use only (cc-by-nc-4.0), which takes priority over the LLAMA 2 COMMUNITY LICENSE AGREEMENT. The "Model" (i.e. the base model, derivatives, merges/mixes) is completely free to use for non-commercial purposes, as long as the cc-by-nc-4.0 license included in any parent repository and the non-commercial-use clause remain in place, regardless of other models' licenses. The license may change once a new model is released. If you want to use this model for commercial purposes, contact me.
## Model Details

### Base Model
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

### Trained On
A6000 48GB * 8
### TODO
- Build an RP-based fine-tuned model
- Refine the dataset
- Improve language comprehension
- Supplement common-sense knowledge
- Change the tokenizer
## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with the begin-of-sentence token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence token id.

E.g.
```python
text = "<s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]"  # "Tell me about Isaac Newton's achievements."
```
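For multi-turn conversations the same pattern simply repeats: the begin-of-sentence token appears only once, at the very start, and each assistant reply ends with the end-of-sentence token. A minimal sketch of a two-turn prompt (the conversation content is illustrative):

```python
# BOS (<s>) appears only once, at the start of the conversation;
# each assistant reply is closed with EOS (</s>);
# follow-up instructions get [INST] ... [/INST] but no new BOS.
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "I'm quite partial to a good squeeze of fresh lemon juice.</s>"
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
```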
## Model Benchmark

### Ko-LLM-Leaderboard

| Model | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| kyujinpy/KoT-platypus2-13B (No.1 at 10-12) | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | 49.55 |
| Synatra-V0.1-7B-Instruct | 39.32 | 41.72 | 49.28 | 43.27 | 43.75 | 43.47 |
| Synatra-7B-Instruct-v0.2 | 41.81 | 49.35 | 43.99 | 45.77 | 42.96 | 44.78 |
The model is ahead on Ko-MMLU but shows a clear weakness on Ko-CommonGen V2.
## Implementation Code

Since the `chat_template` already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# apply_chat_template wraps the message in the [INST] ... [/INST] format shown above.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
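Note that `generated_ids` includes the echoed prompt, so `decoded[0]` contains the full transcript. To print only the assistant's reply, you can slice off the prompt tokens before decoding; a minimal sketch building on the variables above:

```python
# The generated sequence starts with the prompt tokens; skip them when decoding.
prompt_len = model_inputs.shape[-1]
reply = tokenizer.decode(generated_ids[0][prompt_len:], skip_special_tokens=True)
print(reply)
```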
If you run it in oobabooga, your prompt would look like this ("Tell me about Lincoln."):

```
[INST] 링컨에 대해서 알려줘. [/INST]
```
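To verify that the chat template reproduces this raw format, you can render it as a plain string instead of token ids; a quick check using the tokenizer loaded above:

```python
# tokenize=False returns the rendered prompt string rather than token ids.
messages = [{"role": "user", "content": "링컨에 대해서 알려줘."}]
rendered = tokenizer.apply_chat_template(messages, tokenize=False)
print(rendered)  # expected to resemble: <s>[INST] 링컨에 대해서 알려줘. [/INST]
```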
Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)