Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-3.2-3B-Prodigy - AWQ
- Model creator: https://huggingface.co/bunnycore/
- Original model: https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy/
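AWQ (Activation-aware Weight Quantization) stores weights as 4-bit integers plus a floating-point scale per small group of weights. As a rough illustration of the group-wise quantize/dequantize step only (the real AWQ additionally rescales salient channels using activation statistics), here is a toy NumPy sketch:

```python
import numpy as np

def quantize_group(w, bits=4):
    """Symmetric round-to-nearest quantization of one weight group."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scale = np.abs(w).max() / qmax                  # one scale per group
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_group(q, scale):
    """Recover approximate float weights from the stored ints and scale."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.07], dtype=np.float32)
q, s = quantize_group(w)
w_hat = dequantize_group(q, s)
# round-to-nearest keeps each weight within half a quantization step
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

Only the int8-packed `q` and one scale per group need to be stored, which is where the memory savings over float16 weights come from.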
Original model description:
---
base_model:
- bunnycore/Llama-3.2-3B-Pure-RP
- bunnycore/Llama-3.2-3B-Mix-Skill
- bunnycore/Llama-3.2-3B-Sci-Think
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as the base model.

### Models Merged

The following models were included in the merge:
* [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
* [bunnycore/Llama-3.2-3B-Mix-Skill](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix-Skill)
* [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
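TIES resolves interference between the fine-tuned models in three steps: trim each model's delta from the base down to its largest-magnitude `density` fraction, elect a per-parameter sign from the remaining delta mass, then average only the deltas that agree with that sign. A toy NumPy sketch of these steps on flat weight vectors (illustrative only; mergekit's actual implementation operates tensor-by-tensor on real checkpoints):

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Toy TIES merge: trim, elect sign, disjoint mean (illustrative only)."""
    deltas = [m - base for m in finetuned]
    # 1) Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # 2) Elect sign: per-parameter sign of the total remaining delta mass.
    elected = np.sign(sum(trimmed))
    # 3) Disjoint merge: average only deltas that agree with the elected sign.
    agree = [np.where(np.sign(t) == elected, t, 0.0) for t in trimmed]
    counts = sum((np.sign(t) == elected) & (t != 0) for t in trimmed)
    return base + sum(agree) / np.maximum(counts, 1)

base = np.zeros(4)
m1 = np.array([1.0, -2.0, 0.1, 0.0])
m2 = np.array([1.0, 2.0, 0.0, 3.0])
merged = ties_merge(base, [m1, m2], density=0.5)
# the opposed +-2.0 updates cancel; agreeing or unopposed updates survive
assert np.allclose(merged, [1.0, 0.0, 0.0, 3.0])
```

The `density: 0.5` values in the config below correspond to the trim step: each model keeps only the top half of its parameter deltas before sign election.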
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: bunnycore/Llama-3.2-3B-Sci-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Mix-Skill
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Pure-RP
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
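Before running a merge with the mergekit CLI (e.g. `mergekit-yaml config.yaml ./output-model`), a configuration like the one above can be sanity-checked by parsing it. A small sketch using PyYAML (assumed installed; the inline config mirrors the one above):

```python
import yaml  # PyYAML, assumed installed: pip install pyyaml

config_text = """
models:
  - model: bunnycore/Llama-3.2-3B-Sci-Think
    parameters: {density: 0.5, weight: 0.5}
  - model: bunnycore/Llama-3.2-3B-Mix-Skill
    parameters: {density: 0.5, weight: 0.5}
  - model: bunnycore/Llama-3.2-3B-Pure-RP
    parameters: {density: 0.5, weight: 0.5}
merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
"""

config = yaml.safe_load(config_text)
# all three merged models contribute with equal density and weight
assert config["merge_method"] == "ties"
assert len(config["models"]) == 3
assert all(m["parameters"]["weight"] == 0.5 for m in config["models"])
```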