---
datasets:
  - PygmalionAI/PIPPA
  - lemonilia/LimaRP
---

## Gen Settings & Prompting

See https://rentry.org/tsukasamodel for recommended generation settings and prompt format.
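
The training runs described below use the metharme completion format. As a rough, non-authoritative sketch (defer to the rentry page above for the exact settings), a metharme-style prompt interleaves `<|system|>`, `<|user|>`, and `<|model|>` tags and ends with an open `<|model|>` tag:

```python
# Hypothetical helper illustrating the metharme prompt layout; the tag names
# follow the common metharme convention, and the example text is made up.
def build_metharme_prompt(system: str, history: list[tuple[str, str]], user_msg: str) -> str:
    prompt = f"<|system|>{system}"
    for past_user, past_model in history:
        prompt += f"<|user|>{past_user}<|model|>{past_model}"
    # Leave the final <|model|> tag open so the model continues in character.
    return prompt + f"<|user|>{user_msg}<|model|>"

print(build_metharme_prompt("Enter RP mode. You are a weary tavern keeper.", [], "Hi there!"))
```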

## GGUF

The GGUF files are little-endian.
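
For reference, a minimal sketch of loading one of the GGUF quants with llama-cpp-python; the file name and parameter values below are placeholders, not recommendations:

```python
# Minimal llama-cpp-python loading sketch; model_path is a placeholder for
# whichever quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./tsukasa-mixtral.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

out = llm("<|system|>Enter RP mode.<|user|>Hi there!<|model|>", max_tokens=128)
print(out["choices"][0]["text"])
```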

## Training

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
on a 4x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by [lloorree](https://huggingface.co/lloorree).

This is a rank-16 QLoRA (all modules) tune.

The base model, mistralai/Mixtral-8x7B-v0.1, was first tuned on koishi commit 6e675d1 for one epoch,

then tuned on PIPPA commit 6412b0c for one epoch (metharme completion format),

then tuned on LimaRP version 2023-10-19 for 2 epochs in metharme completion format, with limit_data_length set to 32768 in dataprepare-templates.py.
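
For readers who want to approximate a comparable setup outside axolotl, here is a rough PEFT/bitsandbytes sketch of a rank-16 QLoRA over all linear modules on the same base model; everything other than `r=16` and the base model name is an illustrative assumption, not a value from this run:

```python
# Illustrative rank-16 QLoRA setup; hyperparameters other than r=16 and the
# base model name are assumptions, not taken from the actual training run.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                         # rank 16, as stated above
    lora_alpha=32,                # assumed value
    lora_dropout=0.05,            # assumed value
    target_modules="all-linear",  # "all modules"; needs a recent peft release
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```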