Update README.md
README.md CHANGED

@@ -30,7 +30,7 @@ The model was trained almost entirely on synthetic GPT-4 outputs. Curating high
 This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below

 ## Collaborators

-The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art
+The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art and Redmond AI.

 Special mention goes to @winglian for assisting in some of the training issues.