---
license: cc-by-sa-4.0
---

## Exl2 version of [maywell/Synatra-7B-v0.3-dpo](https://huggingface.co/maywell/Synatra-7B-v0.3-dpo)

## branch

[main](https://huggingface.co/IHaBiS/Synatra-7B-v0.3-dpo-exl2/tree/main) : 8bpw h8

[b6h8](https://huggingface.co/IHaBiS/Synatra-7B-v0.3-dpo-exl2/tree/b3.75h8) : 6bpw h8

[b4h8](https://huggingface.co/IHaBiS/Synatra-7B-v0.3-dpo-exl2/tree/b4h6) : 4bpw h8

(A sketch for downloading a specific branch is at the end of this card.)

### below this line is the original readme

# **Synatra-7B-v0.3-dpo🐧**

![Synatra-7B-v0.3-dpo](./Synatra.png)

## Support Me

Synatra is a personal project, developed with the resources of a single person. If you like the model, how about chipping in a little toward research costs?

[Buy me a Coffee](https://www.buymeacoffee.com/mwell)

Want to be a sponsor? (Please) Contact me on Telegram: **AlzarTakkarsen**

# **License**

This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-sa/4.0/) (**cc-by-sa-4.0**) use under **5K MAU**. The "Model" (i.e. the base model, derivatives, and merges/mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-sa-4.0** license and the non-commercial use clause remain in any parent repository, regardless of other models' licenses. If your service has over **5K MAU**, contact me for license approval.

# **Model Details**

**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
A100 80GB * 1

**Instruction format**

It follows the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format and the **Alpaca (No-Input)** format.

# **Model Benchmark**

## KOBEST_BOOLQ, SENTINEG, WIC - ZERO_SHOT

Measured with [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) (BoolQ, SentiNeg, WiC).

| Model | COPA | HellaSwag | BoolQ | SentiNeg |
| --- | --- | --- | --- | --- |
| EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.5954 | 0.4818 | 0.9117 |
| Synatra-7B-v0.3-base | 0.6344 | 0.5140 | 0.5226 | NaN |
| **Synatra-7B-v0.3-dpo** | **0.6380** | **0.4780** | **0.8058** | **0.8942** |

## Ko-LLM-Leaderboard

On Benchmarking...

# **Implementation Code**

Since the chat_template already contains the instruction format above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-dpo")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-dpo")

messages = [
    {"role": "user", "content": "λ°”λ‚˜λ‚˜λŠ” μ›λž˜ ν•˜μ–€μƒ‰μ΄μ•Ό?"},  # "Are bananas originally white?"
]

# Render the chat with the model's chat template and tokenize it.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
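If you want to see the exact prompt the chat template produces, you can render it as text instead of token IDs. This is a minimal sketch; the rendered string is defined by the tokenizer's chat_template, so it may differ from the ChatML shape shown in the comments:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-dpo")

messages = [
    {"role": "user", "content": "λ°”λ‚˜λ‚˜λŠ” μ›λž˜ ν•˜μ–€μƒ‰μ΄μ•Ό?"},
]

# tokenize=False returns the rendered prompt string; add_generation_prompt=True
# appends the assistant header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# For a ChatML-style template this typically looks like:
# <|im_start|>user
# λ°”λ‚˜λ‚˜λŠ” μ›λž˜ ν•˜μ–€μƒ‰μ΄μ•Ό?<|im_end|>
# <|im_start|>assistant
```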
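To fetch one of the exl2 quantizations listed under **branch** at the top of this card, each branch can be downloaded as a separate Git revision. A minimal sketch using huggingface_hub's snapshot_download; the revision names are taken from the branch links above and should be checked against the repository before use:

```python
from huggingface_hub import snapshot_download

# Download a single quantization branch (Git revision) of the exl2 repo.
# "main" is the 8bpw h8 branch according to the list above; swap in another
# branch name for a different bit width.
local_path = snapshot_download(
    repo_id="IHaBiS/Synatra-7B-v0.3-dpo-exl2",
    revision="main",
    local_dir="Synatra-7B-v0.3-dpo-exl2-8bpw",
)
print(local_path)
```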