qingy2024 committed
Commit b6b3859 · verified · 1 Parent(s): 2486e1a

Update README.md

Files changed (1)
  1. README.md +2 -40
README.md CHANGED
@@ -7,44 +7,6 @@ tags:
 - merge
 
 ---
-# merge
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the passthrough merge method.
-
-### Models Merged
-
-The following models were included in the merge:
-* [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)
-
-### Configuration
-
-The following YAML configuration was used to produce this model:
-
-```yaml
-slices:
-- sources:
-  - layer_range: [0, 6]
-    model: Qwen/Qwen2.5-3B
-- sources:
-  - layer_range: [3, 12]
-    model: Qwen/Qwen2.5-3B
-- sources:
-  - layer_range: [9, 18]
-    model: Qwen/Qwen2.5-3B
-- sources:
-  - layer_range: [14, 24]
-    model: Qwen/Qwen2.5-3B
-- sources:
-  - layer_range: [20, 30]
-    model: Qwen/Qwen2.5-3B
-- sources:
-  - layer_range: [26, 36]
-    model: Qwen/Qwen2.5-3B
-merge_method: passthrough
-dtype: bfloat16
-```
+# Qwark 4B
+
+> any of a number of subatomic language models carrying 4 billion parameters, postulated as building blocks of small AI Agents. Qwarks have not been directly observed (until now) but theoretical predictions based on their existence have been confirmed experimentally.
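As a back-of-the-envelope check on the "4B" in the new name, the removed YAML stacks six overlapping slices of the 36-layer Qwen/Qwen2.5-3B. A minimal sketch, assuming mergekit's half-open `layer_range` convention (`[a, b]` copies layers `a` through `b-1`); the `slices` list is copied from the removed config, the rest is illustrative:

```python
# Layer ranges from the removed passthrough-merge config (half-open [a, b)).
slices = [(0, 6), (3, 12), (9, 18), (14, 24), (20, 30), (26, 36)]

# The passthrough method concatenates the slices, duplicating the
# overlapping layers, so merged depth is the sum of slice widths.
merged_layers = sum(b - a for a, b in slices)
print(merged_layers)  # 54 layers, up from 36 in the base model
```

Growing the stack from 36 to 54 layers is what pushes the ~3B base model toward the 4B-parameter scale the README's new title claims.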