Update README.md

More details, especially about prompt format and context size
README.md CHANGED

license: apache-2.0
language:
- en
---

# WestLake-10.7B-v2 (GGUF version): Role-Play & Text Generation Specialist Model

* Base: WestLake-7B-v2, based on Mistral-7B-v0.1
* Context size: **8192** (even though Mistral-7B supports 32k, WestLake was trained with 8k, and using a larger context is likely to cause problems; see the loading sketch after this list)
* Prompt format: in general, Mistral-based models can understand many prompt formats, but the following ones produce the best results and are recommended (templates are sketched after this list):
  - **ChatML** (used during WestLake training)
  - **Zephyr** (a variant of ChatML which sometimes produces better results)
  - **Alpaca** (reported by senseable as working better than ChatML)
  - **Mistral Instruct** (the original format from Mistral-7B)
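
For reference, here is what two of these templates typically look like; `{system_prompt}` and `{prompt}` are placeholders. ChatML:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Mistral Instruct:

```
<s>[INST] {prompt} [/INST]
```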
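
To pin the context size when loading the GGUF file, here is a minimal sketch using llama-cpp-python; the filename is a placeholder for whichever quant you downloaded:

```python
from llama_cpp import Llama

# Load with the 8k context WestLake was trained on; larger n_ctx
# values are likely to degrade output quality.
llm = Llama(
    model_path="WestLake-10.7B-v2-Q5_K_M.gguf",  # hypothetical filename
    n_ctx=8192,
)

# Query using the recommended ChatML format.
prompt = (
    "<|im_start|>user\n"
    "Describe a lighthouse at dusk in three sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
result = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(result["choices"][0]["text"])
```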

This is my first viable self-merge of the fantastic WestLake-7B-v2 model, obtained after 12 rounds of testing with different merge settings. In my [LLM Creativity Benchmark](https://huggingface.co/datasets/froggeric/creativity), it greatly improves over the original 7B model, and ranks between miqu-1-120b and goliath-120b! I would describe the improvements as a better writing style, with more details. It does have one small drawback: it has slightly more difficulty following instructions, but not by much.

It is also the first model I have tested to obtain a perfect score on the following test!
```
Write a sequence of nominal groups that flow into one another, using the following rules:
- each nominal group is made of exactly 3 words
[...]
Finally, explain why you chose your specific theme.
```

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the passthrough merge method.
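
For illustration, a mergekit passthrough self-merge of this kind is configured along the following lines; the model id and layer ranges here are assumptions, not necessarily the values used for this model:

```yaml
slices:
  - sources:
      - model: senseable/WestLake-7B-v2  # assumed source repo id
        layer_range: [0, 24]
  - sources:
      - model: senseable/WestLake-7B-v2
        layer_range: [8, 32]  # overlapping slice duplicates the middle layers
merge_method: passthrough
dtype: bfloat16
```

Duplicating overlapping layer slices is what grows the 7B base to the roughly 10.7B merged size without further training.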
---
# Original model card

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*