---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Synatra-V0.1-7B**

Made by StableFluffy

"STRICT" cc-by-nc-4.0 license.

## Model Details

**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
A6000 48GB * 8

## Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. Assistant generation is terminated by the end-of-sentence token id. It is also strongly recommended to add a space at the end of the prompt. (A runnable multi-turn sketch appears at the end of this card.)

E.g.
```
text = "[INST] 아이작 뉴턴의 업적을 알려줘. [/INST] "
```

# **Model Benchmark**

Evaluation used the dataset and prompts provided in the KULLM v2 (구름 v2) repo. Because the GPT-4 available at evaluation time is not exactly the same as the current GPT-4, results reproduced today may differ slightly.

![img](./kullm_eval.png)

| Model | Understandability | Naturalness | Coherence | Interest | Instruction Adherence | Overall Quality |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 0.980 | 2.806 | 2.849 | 2.056 | 0.917 | 3.905 |
| GPT-4 | 0.984 | 2.897 | 2.944 | 2.143 | 0.968 | 4.083 |
| KoAlpaca v1.1 | 0.651 | 1.909 | 1.901 | 1.583 | 0.385 | 2.575 |
| koVicuna | 0.460 | 1.583 | 1.726 | 1.528 | 0.409 | 2.440 |
| KULLM v2 | 0.742 | 2.083 | 2.107 | 1.794 | 0.548 | 3.036 |
| **Synatra-V0.1-7B** | **0.960** | **2.821** | **2.755** | **2.356** | **0.934** | **4.065** |

# Implementation Code

Since the `chat_template` already encodes the instruction format above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-V0.1-7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# apply_chat_template wraps the conversation in the [INST] ... [/INST] format
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

---
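## Appendix: manual prompt construction

For reference, the prompt string that `apply_chat_template` produces can also be built by hand. The sketch below is a minimal illustration of the multi-turn format described in the Instruction format section, assuming a standard Mistral-style template; the message contents are placeholders, and the template shipped with the tokenizer remains authoritative, so a cross-check print is included.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B")

# Illustrative multi-turn conversation (contents are placeholders).
messages = [
    {"role": "user", "content": "아이작 뉴턴의 업적을 알려줘."},
    {"role": "assistant", "content": "아이작 뉴턴은 만유인력의 법칙을 발견했습니다."},
    {"role": "user", "content": "그 법칙을 더 자세히 설명해줘."},
]

# Hand-built sketch of the Mistral-style instruction format (an assumption,
# not the shipped template):
# - the beginning-of-sentence token appears once, before the first instruction
# - every instruction is wrapped in [INST] ... [/INST] with a trailing space
# - every assistant reply is closed with the end-of-sentence token
prompt = tokenizer.bos_token
for m in messages:
    if m["role"] == "user":
        prompt += f"[INST] {m['content']} [/INST] "
    else:
        prompt += m["content"] + tokenizer.eos_token

# Cross-check against the tokenizer's own chat template, which is authoritative.
templated = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
print(templated)
```

If the two strings differ (e.g. in spacing around `[/INST]`), prefer `apply_chat_template`; the manual loop is only meant to make the token layout explicit.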