Commit 2399c28
Parent(s): 5b606f6
Update README.md

README.md CHANGED
@@ -6,9 +6,11 @@ license: apache-2.0
 
 <p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra.jpeg" width=320></p>
 
-
+## Purpose
 
-
+Merging the "thickest" model weights from Mistral models using training methods such as direct preference optimization (DPO) and reinforcement learning.
+
+I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers,
 and optimized code until I achieved the best possible results.
 
 It has not been without challenges. There were skeptics who doubted my abilities and questioned my approach. An approach can be changed, but a closed mind cannot.
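The "Purpose" text added in this commit mentions merging model weights. As a minimal sketch only (not the author's actual recipe — real merges operate on PyTorch `state_dict`s, and the DPO/RL fine-tuning mentioned above happens separately), uniform weight averaging of same-shaped parameter dicts looks like this:

```python
# Hedged illustration of uniform weight averaging ("model soup"-style merging).
# Plain floats stand in for tensors; the function names here are hypothetical.

def merge_state_dicts(state_dicts, weights=None):
    """Average several same-keyed parameter dicts, optionally weighted."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# Toy example with scalar "parameters":
a = {"layer.weight": 1.0, "layer.bias": 0.0}
b = {"layer.weight": 3.0, "layer.bias": 2.0}
print(merge_state_dicts([a, b]))  # {'layer.weight': 2.0, 'layer.bias': 1.0}
```

A non-uniform `weights` list (e.g. `[0.7, 0.3]`) favors one parent model over the other, which is a common knob when one checkpoint is stronger on the target task.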