perlthoughts committed 4a8ef30 (1 parent: 06c4b86): Update README.md

README.md CHANGED
@@ -4,7 +4,8 @@ license: apache-2.0
 
 # Chupacabra 7B
 
-<
+<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra.jpeg" width=320></p>
+
 
 As a very young (mad)lad, I knew my purpose in life was to merge the thickest model weights using the most amazing training methods, like direct preference optimization (DPO) and reinforcement learning.
 
 It has been a daunting task catching up. I spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, and fine-tuned hyperparameters and optimizers.