---
datasets:
- ewof/koishi-instruct-metharme
- PygmalionAI/PIPPA
---
just testing for now; this is a qlora merge, and several things differ between this and the 7b

https://rentry.org/v43eo (rentry updated slightly)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f38f3a89203d118da8b477/HkWCy5CQwlBGEpmmLAwuO.png)
NousResearch/Llama-2-13b-hf, tuned sequentially, all in metharme format:

1. koishi data (without code subsets) for 1 epoch
2. pippa for 1 epoch
3. gpt4 rp data from the whocars proxy for 1 epoch
4. limarp (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
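
since everything was trained in metharme format, prompting should follow the same template. a minimal sketch of building such a prompt, assuming the standard metharme role tokens `<|system|>`, `<|user|>`, and `<|model|>` (the function name and arguments here are illustrative, not from this repo):

```python
# sketch: assemble a metharme-format prompt string; the trailing <|model|>
# token cues the model to generate the character's next reply
def build_metharme_prompt(system: str,
                          history: list[tuple[str, str]],
                          user_msg: str) -> str:
    prompt = f"<|system|>{system}"
    for user_turn, model_turn in history:
        prompt += f"<|user|>{user_turn}<|model|>{model_turn}"
    return prompt + f"<|user|>{user_msg}<|model|>"
```

the resulting string is what you feed the model verbatim; there are no newlines between the role tokens in standard metharme.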