ericpolewski committed
Commit: ce63731
Parent(s): d143b22
Update README.md

README.md CHANGED
@@ -1,3 +1,14 @@
 ---
 license: mit
+datasets:
+- Open-Orca/OpenOrca
+- ise-uiuc/Magicoder-Evol-Instruct-110K
+- tatsu-lab/alpaca
+- garage-bAInd/Open-Platypus
 ---
+
+This is Mistral-v0.1 fine-tuned with the AIRIC dataset sprinkled into the other datasets listed above. Trained for 3 epochs on the q, k, v, and o projection layers at rank 128 until the loss reached about 1.37. I noticed some "it's important to remember"s in there that I may try to scrub out, but otherwise the model wasn't intentionally censored.
+
+This was the original post: https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/
+
+This is how I did the data extraction: https://www.linkedin.com/pulse/how-i-trained-ai-my-text-messages-make-robot-talks-like-eric-polewski-9nu1c/
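
The card describes the fine-tune only in prose. As a rough, non-authoritative sketch, the rank-128 adaptation of the q/k/v/o projections for 3 epochs could be expressed with a PEFT LoRA configuration like the one below. Only the rank, the target modules, and the epoch count come from the card; the base checkpoint name, alpha, dropout, batch size, and learning rate are illustrative assumptions.

```python
# Hedged sketch, not the author's actual training script: rank-128 LoRA adapters
# on the q/k/v/o attention projections of a Mistral-v0.1 base, run for 3 epochs.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_model = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint; the card only says "Mistral-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Adapters on the four attention projections the card names ("q-v-k-o layers"),
# at the stated rank of 128.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,          # assumption; the card only gives the rank
    lora_dropout=0.05,       # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# The card states 3 epochs, stopping once training loss was around 1.37.
training_args = TrainingArguments(
    output_dir="airic-mistral-lora",
    num_train_epochs=3,
    per_device_train_batch_size=4,   # assumption
    learning_rate=2e-4,              # assumption
    logging_steps=10,
)
```

In the Hugging Face implementation of Mistral, the attention projections are named `q_proj`, `k_proj`, `v_proj`, and `o_proj`, which is why those appear as `target_modules` above.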