perlthoughts committed
Commit 2d9410b • 1 Parent(s): 18d9372

Create README.md

Files changed (1): README.md (+96, -0)
---
language:
- ko

library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Synatra-11B-v0.3-RP🐧**

# Original Model Card

![Synatra-7B-v0.3-RP](./Synatra.png)

## Support Me
Synatra is a personal project, developed with the resources of a single person. If you like the model, how about chipping in a small amount toward the research costs?
[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)

Want to be a sponsor? Contact me on Telegram **AlzarTakkarsen**

# **License**

This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only.
The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license and the non-commercial-use clause are preserved in any parent repository, regardless of the licenses of other models involved.
The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.

# **Model Details**
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
A6000 48GB * 8

**Instruction format**

It follows the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format.
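
For reference, here is a minimal sketch of what a ChatML-style prompt looks like when rendered as plain text. The exact special tokens come from this repository's `chat_template`, so the commented output is illustrative rather than authoritative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-RP")

# Render the chat template to a string instead of token IDs to inspect the prompt layout.
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Typical ChatML-style shape (illustrative only):
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
```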

**TODO**

- ~~``Build an RP-tuned model``~~ ✅
- ~~``Clean up the dataset``~~ ✅
- Improve language comprehension
- ~~``Supplement common-sense knowledge``~~ ✅
- Change the tokenizer

# **Model Benchmark**

## Ko-LLM-Leaderboard

Benchmarking in progress...

# **Implementation Code**

Since the chat_template already contains the instruction format shown above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-RP")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-RP")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# Apply the ChatML chat template and return the prompt as a tensor of token IDs.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens and decode the full sequence (prompt included).
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
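
As written, `batch_decode` returns the whole sequence, prompt included. If you only want the model's reply, one optional variant (continuing from the example above, not part of the original card) is to slice off the prompt tokens before decoding:

```python
# Optional: decode only the newly generated tokens, i.e. everything after the prompt.
reply_ids = generated_ids[:, model_inputs.shape[-1]:]
reply = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
print(reply)
```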

# Why is its benchmark score lower than the preview version's?

**Apparently**, the preview model uses an Alpaca-style prompt, which has no prefix, while ChatML does; the two layouts are contrasted in the sketch below.
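
A rough, hypothetical side-by-side of the two prompt styles (the exact Alpaca template used by the preview model is not documented here, so treat this as a sketch only):

```python
# Illustrative only: a generic Alpaca-style prompt carries no special prefix tokens ...
alpaca_style = """### Instruction:
바나나는 원래 하얀색이야?

### Response:
"""

# ... whereas ChatML wraps every turn in <|im_start|>/<|im_end|> markers.
chatml_style = """<|im_start|>user
바나나는 원래 하얀색이야?<|im_end|>
<|im_start|>assistant
"""
```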

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 57.38 |
| ARC (25-shot) | 62.2 |
| HellaSwag (10-shot) | 82.29 |
| MMLU (5-shot) | 60.8 |
| TruthfulQA (0-shot) | 52.64 |
| Winogrande (5-shot) | 76.48 |
| GSM8K (5-shot) | 21.15 |
| DROP (3-shot) | 46.06 |