---
datasets:
- ewof/koishi-instruct-metharme
- PygmalionAI/PIPPA
- ludis/geepeetee4
---

just testing for now; this is a QLoRA merge, with several differences between it and the 7b

## training

NousResearch/Llama-2-13b-hf, tuned in the following order, all in metharme format:

1. koishi data (without the code subsets) for 1 epoch
2. pippa for 1 epoch
3. ludis/geepeetee4 (commit 56e194b) for 1 epoch
4. limarp (without the ponyville, lolicit, all the fallen, and eka's portal subsets), version 2023-09-02, for 2 epochs

## prompting

https://rentry.org/tsukasa13b - recommended prompts and gen settings

The current model version has been trained on prompts using three different roles, denoted by the following tokens: `<|system|>`, `<|user|>`, and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
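
As a rough illustration, here is a minimal sketch of building a metharme-format prompt and generating from it with Hugging Face Transformers. The model ID and the system/user text below are placeholders, not part of this card; swap in the actual repo name and your preferred sampling settings from the rentry above.

```python
# Minimal sketch of metharme-style prompting with Transformers.
# "your-org/your-model" is a placeholder; substitute the real repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chain the role tokens to form a conversation history;
# <|model|> at the end signals that the model should respond.
prompt = (
    "<|system|>Enter roleplay mode. You are a friendly assistant."
    "<|user|>Hi! Can you introduce yourself?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```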