Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ language:
 ---
 GPT-J-Pyg_PPO-6B [GPT-J Pygmalion Dev V8p4 + GPT-J PPO_HH]
 
-GPT-J-Pyg_PPO-6B is an experimental model containing a parameter-wise
+GPT-J-Pyg_PPO-6B is an experimental model containing a parameter-wise 40/60 blend (weighted average PPO_HH:Pygmalion) of the weights of ppo_hh_gpt-j and Pygmalion-6b Dev V8p4.
 
 -Intended Merge Value-
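The added sentence describes a parameter-wise weighted average of two checkpoints that share an architecture. A minimal sketch of that kind of merge, using plain Python floats in place of real model tensors (the function and dictionary names here are illustrative assumptions, not code from the repository):

```python
def blend_state_dicts(sd_a, sd_b, alpha=0.4):
    """Parameter-wise weighted average: alpha * A + (1 - alpha) * B.

    A 40/60 PPO_HH:Pygmalion blend corresponds to alpha=0.4 with
    sd_a holding the PPO_HH weights and sd_b the Pygmalion weights.
    """
    # Both models must expose the same parameter names/shapes to merge.
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {name: alpha * sd_a[name] + (1 - alpha) * sd_b[name]
            for name in sd_a}

# Toy example with scalar "parameters" standing in for weight tensors.
merged = blend_state_dicts({"w": 1.0}, {"w": 2.0}, alpha=0.4)
```

In an actual merge the values would be `torch` tensors loaded from each checkpoint's state dict, but the per-parameter arithmetic is the same.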