flemmingmiguel committed
Commit 9ff390a
Parent(s): 53e5b63
Update README.md
README.md CHANGED
@@ -6,6 +6,8 @@ tags:
 - lazymergekit
 - mlabonne/NeuralBeagle14-7B
 - mlabonne/NeuralDaredevil-7B
+datasets:
+- argilla/distilabel-intel-orca-dpo-pairs
 ---
 
 # DareBeagle-7B
@@ -16,7 +18,7 @@ DareBeagle-7B is a merge of the following models using [LazyMergekit](https://co
 
 As an experiment to find the best base merge for further fine-tuning, expect a lot of experiments named after parts of the component models until a clear winner emerges in the benchmarks.
 
-In this case
+In this case, the DPO versions of two merge models with different characteristics are merged to measure which capabilities remain or improve.
 
 ## 🧩 Configuration
 
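The commit only touches the model card metadata (adding the DPO dataset tag) and the description; the actual merge recipe under "## 🧩 Configuration" is not shown in this diff. As a point of reference, below is a minimal usage sketch for the resulting merge. The repository id `flemmingmiguel/DareBeagle-7B` is an assumption inferred from the commit author and model name, and the prompt is purely illustrative.

```python
# Minimal usage sketch (repo id assumed from commit author + model name; not confirmed by this diff).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flemmingmiguel/DareBeagle-7B"  # assumed Hugging Face repo id

# Load the tokenizer and the merged model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick smoke test: generate a short completion from the merged model.
prompt = "Explain what a model merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```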