mlabonne committed on
Commit 46a3041
1 Parent(s): 24665cb

Upload folder using huggingface_hub

Files changed (1):
  README.md +1 -7
README.md CHANGED
@@ -11,7 +11,6 @@ base_model:
 - Locutusque/llama-3-neural-chat-v1-8b
 - cloudyu/Meta-Llama-3-8B-Instruct-DPO
 - vicgalle/Configurable-Llama-3-8B-v0.3
-- dreamgen/opus-v1.2-llama-3-8b
 ---
 
 # ChimeraLlama-3-8B-v2
@@ -23,7 +22,6 @@ ChimeraLlama-3-8B-v2 is a merge of the following models using [LazyMergekit](htt
 * [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b)
 * [cloudyu/Meta-Llama-3-8B-Instruct-DPO](https://huggingface.co/cloudyu/Meta-Llama-3-8B-Instruct-DPO)
 * [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
-* [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
 
 ## 🧩 Configuration
 
@@ -50,15 +48,11 @@ models:
   - model: cloudyu/Meta-Llama-3-8B-Instruct-DPO
     parameters:
       density: 0.55
-      weight: 0.1
+      weight: 0.15
   - model: vicgalle/Configurable-Llama-3-8B-v0.3
     parameters:
       density: 0.55
       weight: 0.1
-  - model: dreamgen/opus-v1.2-llama-3-8b
-    parameters:
-      density: 0.55
-      weight: 0.05
 merge_method: dare_ties
 base_model: NousResearch/Meta-Llama-3-8B
 parameters:
 
11
  - Locutusque/llama-3-neural-chat-v1-8b
12
  - cloudyu/Meta-Llama-3-8B-Instruct-DPO
13
  - vicgalle/Configurable-Llama-3-8B-v0.3
 
14
  ---
15
 
16
  # ChimeraLlama-3-8B-v2
 
22
  * [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b)
23
  * [cloudyu/Meta-Llama-3-8B-Instruct-DPO](https://huggingface.co/cloudyu/Meta-Llama-3-8B-Instruct-DPO)
24
  * [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
 
25
 
26
  ## 🧩 Configuration
27
 
 
48
  - model: cloudyu/Meta-Llama-3-8B-Instruct-DPO
49
  parameters:
50
  density: 0.55
51
+ weight: 0.15
52
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
53
  parameters:
54
  density: 0.55
55
  weight: 0.1
 
 
 
 
56
  merge_method: dare_ties
57
  base_model: NousResearch/Meta-Llama-3-8B
58
  parameters:
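The hunk above rebalances per-model `weight` values in a `dare_ties` merge. `dare_ties` combines DARE's random drop-and-rescale of each model's parameter delta with TIES-style sign election. A toy NumPy sketch of just the DARE step follows (sign election omitted; plain arrays stand in for model weights, and all names here are illustrative, not mergekit's actual API):

```python
import numpy as np

def dare_merge(base, finetuned, densities, weights, seed=0):
    """Toy DARE merge: for each fine-tuned model, take its delta from the
    base, randomly keep only a `density` fraction of the delta entries,
    rescale survivors by 1/density, and add the weighted delta to the base."""
    rng = np.random.default_rng(seed)
    merged = base.astype(float).copy()
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base
        mask = rng.random(delta.shape) < density      # keep ~density fraction
        rescaled = np.where(mask, delta / density, 0.0)  # rescale survivors
        merged += weight * rescaled
    return merged
```

With `density: 0.55` as in the config above, roughly 55% of each delta's entries survive and are scaled by 1/0.55, so the merged delta matches the full delta in expectation; the `weight` values then set each model's contribution.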