This is a VERY early model, still in development!
This model is a very early version of Synatra-Zephyr-7B.
Synatra-Zephyr-7B-v0.01
Support Me
Synatra is a personal project, developed with the resources of a single person. If you like the model, how about supporting a bit of research funding?
Want to be a sponsor? Contact me on Telegram: AlzarTakkarsen
License
This model is strictly for non-commercial use only (cc-by-nc-4.0). The model (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the cc-by-nc-4.0 license included in any parent repository and the non-commercial use clause remain in place, regardless of other models' licenses. The license may change after a new model is released. If you want to use this model for commercial purposes, contact me.
Model Details
Base Model
mistralai/Mistral-7B-Instruct-v0.1
Trained On
A100 80G * 4
Model Benchmark
Ko-LLM-Leaderboard
Benchmarking in progress...
Implementation Code
Since the chat_template already contains the instruction format, you can use the code below.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Zephyr-7B-v0.01")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Zephyr-7B-v0.01")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the instruction format and tokenizes in one step
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
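Since the base model is mistralai/Mistral-7B-Instruct-v0.1, the chat template presumably renders Mistral's [INST] instruction format. The sketch below is an assumption about that format (not the repository's exact template, which you can inspect yourself via `tokenizer.apply_chat_template(messages, tokenize=False)`): user turns are wrapped in `[INST] ... [/INST]`, and assistant turns are appended and closed with `</s>`.

```python
# Hypothetical sketch of the Mistral-style instruction format that
# apply_chat_template is assumed to render for Mistral-Instruct derivatives.
def render_mistral_prompt(messages):
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # user turns are wrapped in [INST] ... [/INST]
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # assistant turns are appended verbatim and closed with </s>
            prompt += f"{msg['content']}</s>"
    return prompt

print(render_mistral_prompt([{"role": "user", "content": "Hello"}]))
# -> <s>[INST] Hello [/INST]
```

The model then generates the assistant turn after the final `[/INST]`, which is why `batch_decode` above returns the prompt plus the new completion.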