TehVenom committed on
Commit 4767495
1 Parent(s): ba8e8e1

Update README.md

Files changed (1)
  1. README.md +70 -1
README.md CHANGED
@@ -1,3 +1,72 @@
---
- license: creativeml-openrail-m
+ license: bigscience-openrail-m
+ language:
+ - en
---

GPT-J-Pyg_PPO-6B [GPT-J Pygmalion + GPT-J PPO_HH]

GPT-J-Pyg_PPO-6B is an experimental model containing a parameter-wise 50/50 blend (weighted average) of the weights of ppo_hh_gpt-j and Pygmalion-6b.

-Intended Merge Value-

As with fine-tuning, merging weights does not add information but transforms it, so it is important to consider the trade-offs.
Pyg_PPO combines ppo_hh_gpt-j and Pygmalion-6b, blending the two with the intent of elevating the strengths of both. The datasets of both models are linked below to assist exploratory speculation about which datasets, in what quantity and configuration, have the largest impact on a model's usefulness, without the expense of fine-tuning. The blend was done in FP32 and the output saved in FP16.
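For illustration, a minimal sketch of the kind of parameter-wise 50/50 merge described above, using PyTorch and Hugging Face transformers. This is not the original merge script (credited to Concedo below); the output directory is a placeholder, and loading both parents in FP32 assumes tens of gigabytes of free CPU RAM.

```python
# Illustrative sketch, not the original merge script: load both parent
# models in FP32, average every matching parameter tensor 50/50, and
# save the result in FP16, as described above.
import torch
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained(
    "reciprocate/ppo_hh_gpt-j", torch_dtype=torch.float32)
b = AutoModelForCausalLM.from_pretrained(
    "PygmalionAI/pygmalion-6b", torch_dtype=torch.float32)

merged = a.state_dict()
for name, tensor in b.state_dict().items():
    # Parameter-wise weighted average; both parents share the GPT-J
    # architecture, so their state dict keys line up one-to-one.
    merged[name] = (merged[name] + tensor) / 2

a.load_state_dict(merged)
a.half()  # output in FP16, as stated above
a.save_pretrained("GPT-J-Pyg_PPO-6B")  # placeholder output directory
```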

-Intended Use-

Research purposes only, intended for responsible use.
Express a conversation in natural language, and Pyg_PPO will carry it on.
Try starting with a two-line prompt such as:

Bot: Hello, how are you?
You: I am doing just fine, thank you.

or any other topic, and the model will continue in this back-and-forth format (see the generation sketch after the sampler settings below).

It can also be used as a base to merge with other creative, technical, or adventure-themed models of the same class (GPT-J & 6b NeoX) and parameter size (6b), to experiment with the morphology of model weights based on the value added by instruct tuning.

Merge tested using KoboldAI with Nucleus Sampling Top-P set to 0.9, Temperature at 0.6, and Repetition Penalty at 1.1; extra samplers disabled.
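As a usage sketch, here is the same two-line prompt format and the sampler settings above expressed through Hugging Face transformers rather than KoboldAI; the repo id is assumed from this model card, and the token budget is arbitrary.

```python
# Minimal sketch: prompt the merged model in the back-and-forth format
# suggested above, with the tested sampler settings (via transformers,
# not KoboldAI).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TehVenom/GPT-J-Pyg_PPO-6B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Two-line conversational prompt, as suggested above.
prompt = "Bot: Hello, how are you?\nYou: I am doing just fine, thank you.\n"
inputs = tokenizer(prompt, return_tensors="pt")

out = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,                # Nucleus Sampling Top-P
    temperature=0.6,
    repetition_penalty=1.1,   # extra samplers left disabled
    max_new_tokens=64,        # arbitrary budget for the reply
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```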

-Credits To-

Core Model:
https://huggingface.co/EleutherAI/gpt-j-6B
Author:
https://www.eleuther.ai/

Model 1; 50% ppo_hh_gpt-j:
https://huggingface.co/reciprocate/ppo_hh_gpt-j

Author Repo:
https://huggingface.co/reciprocate

Related; CarperAI:
https://huggingface.co/CarperAI

The dataset is a variant of the Helpful and Harmless assistant-themed dataset, trained with Proximal Policy Optimization; the specific datasets used are unknown. The listed repo datasets include:
https://huggingface.co/datasets/reciprocate/summarize_eval_ilql
https://huggingface.co/datasets/reciprocate/hh_eval_ilql

PPO explained:
https://paperswithcode.com/method/ppo
Potential HH-type datasets utilized:
https://huggingface.co/HuggingFaceH4
https://huggingface.co/datasets/Anthropic/hh-rlhf

Model 2; 50% Pygmalion-6b:
https://huggingface.co/PygmalionAI/pygmalion-6b

Author Repo:
https://huggingface.co/PygmalionAI

Weight merge script credit to Concedo:
https://huggingface.co/concedo