perlthoughts committed on
Commit
42d3e42
1 Parent(s): d0ac0a7

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -24,7 +24,7 @@ List of all models and merging path is coming soon.
 
 ## Purpose
 
-Merging the "thick"est model weights from mistral models using amazing training methods like deep probabilistic optimization (dpo) and reinforced learning.
+Merging the "thick"est model weights from mistral models using amazing training methods like direct preference optimization (dpo) and reinforced learning.
 
 I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms, tactics, fine-tuned hyperparameters, optimizers,
 and optimized code until i achieved the best possible results.
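The one-word change above corrects the expansion of "dpo": it stands for direct preference optimization, a loss that trains a policy model on chosen/rejected response pairs relative to a frozen reference model. As context for the fix, here is a minimal sketch of the per-pair DPO loss; the function name, inputs, and the `beta=0.1` default are illustrative assumptions, not anything taken from this repository.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct preference optimization (DPO) loss for one preference pair.

    Each argument is the summed log-probability of the chosen or rejected
    response under the trained policy or the frozen reference model.
    Loss = -log(sigmoid(beta * (chosen_margin - rejected_margin))).
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # Numerically stable -log(sigmoid(logits)) == softplus(-logits)
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))

# With no margin difference the loss is exactly log(2); it drops below
# log(2) as the policy favors the chosen response more than the reference.
print(dpo_loss(-10.0, -12.0, -11.0, -11.5))
```

The loss pushes the policy to widen its log-probability gap between chosen and rejected responses beyond the reference model's gap, with `beta` controlling how hard it is pushed.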