raincandy-u committed
Commit 110095b
1 Parent(s): af68319

Update README.md

Files changed (1):
  1. README.md +32 -16
README.md CHANGED
@@ -12,7 +12,6 @@ tags:
 - CoT
 ---
 <style>
-
 @font-face {
 font-family: Zpix;
 src: url(https://zpix.now.sh/zpix.woff2?v2021-03-21);
@@ -20,28 +19,41 @@ tags:
 * {
 font-family:Zpix;
 }
-
- *:hover {
- color: red;
- font-size: 1000px;
+ #main-ame-back {
+ font-family:Zpix;
+ color: #fd96fd !important;
+
+ padding: 15px;
+ }
+ a {
+ color:#fd87c2 !important
+ }
+ #main-ame-back h1{
+ color:#8e45f5 !important;
 }
 </style>
 <img src="https://pbs.twimg.com/media/GKJ6VOdbIAAo2yr?format=png&name=900x900"></img>
 
- <h1 style="font-size:48px;margin-bottom:30px;" >ジェルばんは~</h1>
+ <div id="main-ame-back">
+
+ <div style="font-size:40px;color: #ebb4dd;font-weight:bolder;">ジェルばんは~</div>
 
- # 🧬Rain-7B-v0.1
+ <br>
+ <h1>🧬Rain-7B-v0.1</h1>
 
- Rain-7B-v0.1 is a experimental model finetuned on [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) with thousands of **chain of thought** conversations.
+ Rain-7B-v0.1 is an experimental model finetuned on <a href="https://huggingface.co/Qwen/Qwen1.5-7B-Chat">Qwen1.5-7B-Chat</a> with thousands of <b>chain of thought</b> conversations.
 
- # 🧬Evaluation
+ <h1>🧬Evaluation</h1>
 
- |Model Name|MMLU|
+ |Model name|Score|
 |---|---|
- |Qwen1.5-7B-Chat||
- |**Rain-7B-v0.1**||
+ |Qwen1.5-7B-Chat|55.8|
+ |Rain-7B-v0.1|58.1|
 
- # 🧬Usage
+ <h1>🧬Usage</h1>
 
 ```python
 !pip install -qU transformers accelerate
@@ -50,8 +62,8 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
- model = "raincandy-u/Rain-7B-v0.1"
- messages = [{"role": "user", "content": "Who is Cho-Tan chan?"}]
+ model = "mlabonne/AlphaMonarch-7B"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
@@ -64,4 +76,8 @@ pipeline = transformers.pipeline(
 
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
 ```
+
+ </div>
+
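For reference, the `apply_chat_template` call in the usage snippet renders the message list into a ChatML-style prompt string before generation. The sketch below builds such a string by hand so the format is visible without downloading the model; it is illustrative only (an assumption based on Qwen1.5-7B-Chat using ChatML), and the tokenizer's own chat template remains authoritative — among other things, it also prepends a default system message.

```python
# Illustrative sketch (not the tokenizer's actual template code): manually
# build a ChatML-style prompt like the one apply_chat_template(...,
# add_generation_prompt=True) produces for a ChatML model such as
# Qwen1.5-7B-Chat. The real template may also insert a default system turn.
def build_chatml_prompt(messages):
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # add_generation_prompt=True opens an assistant turn for the model to fill.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [{"role": "user", "content": "What is a large language model?"}]
print(build_chatml_prompt(messages))
```

Passing a string built this way to the pipeline is equivalent in spirit to the snippet above, but using the tokenizer's template avoids format drift if the model's chat markup ever changes.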