grimjim committed
Commit
6b49067
1 Parent(s): 0fe138a

Initial release

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +63 -0
  3. fireblossom-32K-7B.Q8_0.gguf +3 -0
.gitattributes CHANGED
@@ -4,6 +4,7 @@
  *.bz2 filter=lfs diff=lfs merge=lfs -text
  *.ckpt filter=lfs diff=lfs merge=lfs -text
  *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
  *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,66 @@
  ---
+ base_model:
+ - HuggingFaceH4/zephyr-7b-beta
+ - cgato/TheSpice-7b-v0.1.1
+ - SanjiWatsuki/Kunoichi-DPO-v2-7B
+ - SanjiWatsuki/Kunoichi-7B
+ - mistralai/Mistral-7B-v0.1
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
  license: cc-by-nc-4.0
  ---
+ # Fireblossom-32K-7B
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ For this merge, I went back to Mistral 7B v0.1 as the literal base model for task arithmetic merging. The base model can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the original (true) base model, the merged-in models should be mathematically equivalent to LoRA adapters. I kept the 32K context length originally claimed for Mistral 7B v0.1.
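+
+ As a rough illustration (mine, not part of the original card): with `transformers`, that rope theta adjustment can be supplied as a config override at load time, assuming a version whose `from_pretrained` forwards unused kwargs to `MistralConfig`:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+
+ # Sketch: raise rope theta from the Mistral default of 10000 to 100000
+ # to stretch usable context, per the values in the paragraph above.
+ model = AutoModelForCausalLM.from_pretrained(
+     "grimjim/fireblossom-32K-7B",
+     rope_theta=100000.0,
+     torch_dtype="auto",
+ )
+ ```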
+
+ The goal was a merge model with more varied outputs, which inherently trades accuracy for creativity. To that end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that were good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following card instructions, perhaps to a fault.
+
+ Sampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.
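+
+ A minimal sketch of those settings with `llama-cpp-python` (my example; it assumes a recent build that exposes `min_p`):
+
+ ```python
+ from llama_cpp import Llama
+
+ # Load the Q8_0 GGUF with an extended context window.
+ llm = Llama(model_path="fireblossom-32K-7B.Q8_0.gguf", n_ctx=16384)
+
+ # Lightly tested settings from the card; raise temperature for more creativity.
+ out = llm("Write an opening scene.", max_tokens=256, temperature=0.7, min_p=0.01)
+ print(out["choices"][0]["text"])
+ ```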
+
+ Download options:
+ * [full weights](https://huggingface.co/grimjim/fireblossom-32K-7B)
+ * [Q8_0 GGUF](https://huggingface.co/grimjim/fireblossom-32K-7B-GGUF)
+ * [8.0bpw h8 exl2](https://huggingface.co/grimjim/fireblossom-32K-7B-8.0bpw_h8_exl2)
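+
+ For example, the GGUF above can be fetched programmatically with `huggingface_hub` (a sketch, using the repo and filename from this commit):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Downloads the Q8_0 quant into the local HF cache and returns its path.
+ path = hf_hub_download(
+     repo_id="grimjim/fireblossom-32K-7B-GGUF",
+     filename="fireblossom-32K-7B.Q8_0.gguf",
+ )
+ print(path)
+ ```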
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base.
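+
+ As a minimal sketch (not mergekit's actual implementation), task arithmetic forms a task vector for each fine-tune, its parameter delta from the shared base, and adds a weighted sum of those vectors onto the base weights:
+
+ ```python
+ import torch
+
+ def task_arithmetic(base_sd, tuned_sds, weights):
+     """Toy state-dict merge: merged = base + sum_i w_i * (tuned_i - base)."""
+     merged = {}
+     for name, base_w in base_sd.items():
+         delta = torch.zeros_like(base_w)
+         for sd, w in zip(tuned_sds, weights):
+             delta += w * (sd[name] - base_w)
+         merged[name] = base_w + delta
+     return merged
+ ```
+
+ This additive form is also why, with the true base model, each merged-in fine-tune behaves like a LoRA-style delta applied to the base, as noted above.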
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
+ * [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
+ * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
+ * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: mistralai/Mistral-7B-v0.1
+     # no parameters necessary for base model
+   - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
+     parameters:
+       weight: 0.45
+   - model: cgato/TheSpice-7b-v0.1.1
+     parameters:
+       weight: 0.05
+   - model: HuggingFaceH4/zephyr-7b-beta
+     parameters:
+       weight: 0.05
+   - model: SanjiWatsuki/Kunoichi-7B
+     parameters:
+       weight: 0.45
+ merge_method: task_arithmetic
+ base_model: mistralai/Mistral-7B-v0.1
+ dtype: float16
+ ```
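+
+ To reproduce the merge, this configuration can presumably be fed to mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yml ./output-directory`); consult the mergekit README for current usage.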
fireblossom-32K-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d84ab59afeb63d2eeaf06180032d8ceafb10610f2d5499723ddbbe4f876b16a
+ size 7695857376