---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **Synatra-7B-Instruct-v0.2**
Made by StableFluffy
**Contact (please do not contact me about personal matters)**
Discord : is.maywell
Telegram : AlzarTakkarsen
## License
This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only.
The model (i.e. the base model, derivatives, and merges/mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license and the non-commercial use clause remain in any parent repository, regardless of the licenses of other models involved.
The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.
## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Trained On**
A6000 48GB * 8
## TODO
- Build an RP (roleplay) fine-tuned model
- Refine the dataset
- Improve language comprehension
- Strengthen common-sense knowledge
- Replace the tokenizer
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with the begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation will be terminated by the end-of-sentence (EOS) token id.
E.g. (the Korean prompt asks for an overview of Isaac Newton's achievements):
```
text = "<s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]"
```
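For multi-turn conversations the same pattern repeats: each assistant reply is closed with the EOS token, and only the very first instruction follows the BOS token. A minimal sketch of the expected layout (the placeholder text in braces is illustrative, not part of the format):
```
text = "<s>[INST] {first_user_message} [/INST] {assistant_reply}</s>[INST] {next_user_message} [/INST]"
```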
# **Model Benchmark**
## Ko-LLM-Leaderboard
| Model | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| kyujinpy/KoT-platypus2-13B (No.1 as of 2023/10/12) | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | 49.55 |
| Synatra-V0.1-7B-Instruct | 41.72 | 49.28 | 43.27 | 43.75 | 39.32 | 43.47 |
| **Synatra-7B-Instruct-v0.2** | **41.81** | **49.35** | **43.99** | **45.77** | **42.96** | **44.78** |
The model leads on Ko-MMLU but is markedly weaker on Ko-CommonGen V2.
# **Implementation Code**
Since the `chat_template` already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# The chat template wraps the messages in the [INST] ... [/INST] format shown above.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
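As a quick sanity check, you can print the prompt string the chat template actually produces (a hedged sketch, reusing the `tokenizer` and `messages` from above; `tokenize=False` returns the raw string instead of token ids):
```python
# Inspect the prompt string produced by the chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # expected to contain: [INST] What is your favourite condiment? [/INST]
```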
If you run it in oobabooga (text-generation-webui), your prompt would look like this (the Korean prompt asks about Lincoln):
```
[INST] 링컨에 대해서 알려줘. [/INST]
```
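Outside of the chat template you can also feed the formatted string to the model yourself; a minimal sketch, assuming the `model`, `tokenizer`, and `device` from the implementation code above (`add_special_tokens=False` because the `<s>` BOS token is already written into the string):
```python
# Build the [INST] prompt by hand and generate from the raw string.
text = "<s>[INST] 링컨에 대해서 알려줘. [/INST]"
model_inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```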
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---