intervitens committed on
Commit 2fe6f8a
1 Parent(s): a4e46f6

Upload folder using huggingface_hub

README.md ADDED
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge

---

Quantized using 200 calibration samples of 8,192 tokens each from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset. For purposes other than RP, use quantizations calibrated on a more general dataset.

Requires ExLlamaV2 version 0.0.11 or later.
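
For orientation, here is a minimal loading-and-generation sketch using the ExLlamaV2 Python API. The local model path, prompt, and sampler values are illustrative assumptions, not part of this repo:

```python
# Minimal ExLlamaV2 (>= 0.0.11) inference sketch; path and settings are
# placeholders, not shipped with this repo.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./BagelMIsteryTour-v2-8x7B-exl2"  # assumed local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.0
settings.min_p = 0.1                       # minP 0.1, as suggested below
settings.token_repetition_penalty = 1.07

print(generator.generate_simple("Once upon a time,", settings, 200))
```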

Original model link: [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)

Original model README below.

***

# BagelMIsteryTour-v2-8x7B

[GGUF versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-GGUF)
[AWQ versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-AWQ)

Bagel, Mixtral Instruct, with extra spices. Give it a taste. Works with Alpaca prompt formats, though the Mistral format should also work.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/lxNMzXo_dq_JCP9YyUyaw.jpeg)

I started experimenting to see whether I could improve or fix some of Bagel's problems, totally inspired by how well Doctor-Shotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss works (a LimaRP tune on top of base Mixtral, then merged with Mixtral Instruct). As a result, I decided to try some merges of Bagel with Mixtral Instruct.

Somehow I ended up here: Bagel, Mixtral Instruct, a little bit of LimaRP, and a little bit of Sao10K's Sensualize. So far in my testing it's working very well. While it seems fairly unaligned on a lot of stuff, it's maybe a little too aligned on a few specific things (which I think comes from Sensualize) - so that's something to play with in the future, or maybe try to DPO out.

I've been running minP 0.1 (applied after temperature), dynamic temperature 0.5-4, repetition penalty 1.07, and repetition range 1024. I've been testing Alpaca-style Instruction/Response and Instruction/Input/Response formats, and both seem to work well; I expect Mistral's prompt format would also work. You may need to add a stopping string on "{{char}}:" for RP, because the model can sometimes emit those in responses and waffle on. It holds up at long contexts without falling apart the way Bagel and some other Mixtral tunes seem to, and it definitely doesn't seem prone to loopiness. It can be pushed into extravagant prose if the scene/setting calls for it.
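
For concreteness, here is a small sketch of the Alpaca Instruction/Input/Response layout and the sampler values above. The helper name, exact template whitespace, and the settings-dict keys are illustrative assumptions, not an official template shipped with this model:

```python
# Illustrative Alpaca-style prompt builder plus the sampler values quoted
# above; names and exact spacing are assumptions, not part of this repo.
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    prompt = f"### Instruction:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    return prompt + "### Response:\n"

SAMPLERS = {
    "min_p": 0.1,                  # applied after temperature
    "dynatemp_range": (0.5, 4.0),  # dynamic temperature
    "repetition_penalty": 1.07,
    "repetition_range": 1024,
}
STOP_STRINGS = ["{{char}}:"]       # helps prevent RP turn duplication
```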

__Version 2:__ lowered the mix of Sensualize.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base.
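
As a rough intuition (not mergekit's actual implementation): DARE randomly drops a fraction of each fine-tune's parameter deltas and rescales the survivors, and TIES elects a majority sign per parameter and discards conflicting updates before summing. A minimal single-tensor sketch, with all names and details assumed:

```python
# Conceptual sketch of DARE-TIES for one weight tensor; for the real
# implementation, see the mergekit repository.
import numpy as np

def dare_delta(finetuned, base, density, rng):
    """DARE: randomly drop (1 - density) of the delta, rescale the rest."""
    delta = finetuned - base
    mask = rng.random(delta.shape) < density
    return mask * delta / density            # rescale preserves expected magnitude

def dare_ties(base, finetuned_models, densities, weights, seed=0):
    """TIES-style sign election over DARE-sparsified, weighted deltas."""
    rng = np.random.default_rng(seed)
    deltas = [w * dare_delta(ft, base, d, rng)
              for ft, d, w in zip(finetuned_models, densities, weights)]
    elected = np.sign(sum(deltas))           # majority sign per parameter
    kept = [np.where(np.sign(d) == elected, d, 0.0) for d in deltas]
    return base + sum(kept)
```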

### Models Merged

The following models were included in the merge:
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
- model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
  parameters:
    density: 0.5
    weight: 0.2
- model: Sao10K/Sensualize-Mixtral-bf16
  parameters:
    density: 0.5
    weight: 0.1
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
  parameters:
    density: 0.6
    weight: 1.0
- model: jondurbin/bagel-dpo-8x7b-v0.2
  parameters:
    density: 0.6
    weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
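
To reproduce the merge, a configuration like this is normally passed to mergekit's CLI, e.g. `mergekit-yaml mergekit_config.yml ./output-model-directory` (the output path here is a placeholder); see the mergekit README for options such as `--cuda`.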
config.json ADDED
{
  "_name_or_path": "mistralai/Mixtral-8x7B-v0.1",
  "architectures": [
    "MixtralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mixtral",
  "num_attention_heads": 32,
  "num_experts_per_tok": 2,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "num_local_experts": 8,
  "output_router_logits": false,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "router_aux_loss_coef": 0.02,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.36.2",
  "use_cache": true,
  "vocab_size": 32000
}
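
The config describes the standard Mixtral MoE shape: 8 local experts with 2 routed per token. A back-of-the-envelope parameter count from these fields, as a sanity check; the per-layer breakdown is an assumption based on the public Mixtral architecture, not read from this repo:

```python
# Rough Mixtral parameter count from the config fields above (ignores
# norms and router weights); layer breakdown is assumed, not from this repo.
hidden, inter, layers = 4096, 14336, 32
heads, kv_heads = 32, 8
experts, active = 8, 2
vocab = 32000

head_dim = hidden // heads                                        # 128
attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)   # q,o + k,v
expert_mlp = 3 * hidden * inter                # gate, up, down projections
per_layer_total = attn + experts * expert_mlp
per_layer_active = attn + active * expert_mlp
embeds = 2 * vocab * hidden                    # input + output embeddings

total = layers * per_layer_total + embeds
active_params = layers * per_layer_active + embeds
print(f"{total/1e9:.1f}B total, {active_params/1e9:.1f}B active")  # ~46.7B, ~12.9B
```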
mergekit_config.yml ADDED
base_model: mistralai/Mixtral-8x7B-v0.1
models:
- model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
  parameters:
    density: 0.5
    weight: 0.2
- model: Sao10K/Sensualize-Mixtral-bf16
  parameters:
    density: 0.5
    weight: 0.1
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
  parameters:
    density: 0.6
    weight: 1.0
- model: jondurbin/bagel-dpo-8x7b-v0.2
  parameters:
    density: 0.6
    weight: 0.5
merge_method: dare_ties
dtype: bfloat16
output-00001-of-00003.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:72b231ecbc0db4cdfaa2432650922273c91ed1d2735dd02b571612023eb5ac67
size 8588676288
output-00002-of-00003.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3f5f5aae2f975db3e4a1187b9550e4d0107fa03b90b9911c609ea6639f64c03c
size 8589944408
output-00003-of-00003.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:20aa2f7acb099f7487e620d73a608c2e09832d93e5e1e6b858440d0dccd85007
size 3531123560
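
The three shards above are Git LFS pointers; summing their `size` fields gives the total download:

```python
# Total size of the quantized weight shards, from the LFS pointers above.
shards = [8588676288, 8589944408, 3531123560]
total = sum(shards)                # 20709744256 bytes
print(f"{total / 2**30:.1f} GiB")  # ~19.3 GiB
```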
special_tokens_map.json ADDED
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer.model ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
tokenizer_config.json ADDED
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
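
Per `add_bos_token` and `add_eos_token` above, the tokenizer prepends `<s>` (id 1) and appends no `</s>`. A quick sanity check with transformers; the local model path is a placeholder:

```python
# Verify the BOS/EOS behavior configured above; path is a placeholder for
# wherever this repo is downloaded.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./BagelMIsteryTour-v2-8x7B-exl2")
ids = tok("Hello").input_ids
print(ids[0] == tok.bos_token_id)   # True: <s> (id 1) is prepended
print(ids[-1] != tok.eos_token_id)  # True: no </s> appended
```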