---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Synatra-V0.2-7B

Made by StableFluffy

Visit my website! - Currently under construction..
## License

This model is strictly for non-commercial (cc-by-nc-4.0) use only, which takes priority over the LLAMA 2 COMMUNITY LICENSE AGREEMENT. The model is completely free (i.e. the base model, derivatives, and merges/mixes) to use for non-commercial purposes, as long as the included cc-by-nc-4.0 license is retained in any parent repository and the non-commercial-use clause remains in effect, regardless of the licenses of other models involved. The license may change once a new model is released.
## Model Details

**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
8 × A6000 48GB
## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with the begin-of-sentence token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence token id.
E.g.
```python
text = "<s>[INST] 아이작 뉴턴에 대해서 알려줘. [/INST]"  # "Tell me about Isaac Newton."
```
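For a multi-turn conversation, only the first instruction carries the `<s>` token, and each completed assistant answer is closed with `</s>` before the next `[INST]` block. Below is a minimal sketch of assembling such a prompt by hand, assuming the standard Mistral-style template; the `build_prompt` helper and the sample turns are illustrative, not part of this repository.

```python
# Minimal sketch: hand-building a multi-turn prompt in the [INST] format.
# build_prompt is a hypothetical helper; only the first turn gets the <s> token.
def build_prompt(turns):
    """turns: list of (user_message, assistant_answer or None) pairs."""
    prompt = "<s>"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg} [/INST]"
        if assistant_msg is not None:
            # Completed assistant turns end with the end-of-sentence token.
            prompt += f" {assistant_msg}</s>"
    return prompt

print(build_prompt([
    # "Tell me about Isaac Newton." / "Isaac Newton was an English physicist."
    ("아이작 뉴턴에 대해서 알려줘.", "아이작 뉴턴은 영국의 물리학자입니다."),
    # "What are his most famous achievements?" (open turn for the model to answer)
    ("그의 대표적인 업적은 뭐야?", None),
]))
```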
## Model Benchmark
Preparing...
## Implementation Code

Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-V0.1-7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# apply_chat_template wraps the messages in the [INST] ... [/INST] format shown above.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
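If you want to verify what the template actually produces before generating, `apply_chat_template` can also render the prompt as a plain string. A small sketch reusing the `tokenizer` and `messages` from above:

```python
# Render the chat template as text instead of token ids to inspect the prompt.
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt_text)  # should show the [INST] ... [/INST] wrapping
```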
If you run it on oobabooga, your prompt would look like this:
```
[INST] 링컨에 대해서 알려줘. [/INST]
```
("Tell me about Lincoln.")
Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)