ewof committed on
Commit ee2492a
1 Parent(s): caada0c

Create README.md

Files changed (1)
  1. README.md +28 -0
README.md ADDED
@@ -0,0 +1,28 @@
---
datasets:
- ewof/koishi-instruct-metharme
---

## GPTQ

Quantized with a sequence length of 2048, using the VMware/open-instruct dataset as calibration data.

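The quantization script itself is not part of this card; the sketch below shows what a run consistent with the settings above could look like with AutoGPTQ. Only the 2048 sequence length and the VMware/open-instruct calibration set come from this card; the bit width, group size, sample count, dataset column names, and model paths are assumptions.

```python
# Hedged sketch of the GPTQ step. Anything marked "assumed" or "placeholder" is not from the card.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

merged_model = "path/to/merged-koishi-model"  # placeholder path to the merged tune

tokenizer = AutoTokenizer.from_pretrained(merged_model)

# Build calibration samples, each truncated to the 2048-token window.
rows = load_dataset("VMware/open-instruct", split="train").select(range(128))  # 128 samples assumed
examples = [
    tokenizer(
        row["instruction"] + "\n" + row["response"],  # column names assumed
        truncation=True,
        max_length=2048,
    )
    for row in rows
]

model = AutoGPTQForCausalLM.from_pretrained(
    merged_model,
    BaseQuantizeConfig(bits=4, group_size=128, desc_act=False),  # assumed
)
model.quantize(examples)
model.save_quantized("koishi-gptq")
```
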
## Training

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by [lloorree](https://huggingface.co/lloorree).

Trained on koishi commit 6e675d1 for one epoch.

## Base Model

A rank 16 QLoRA tune of mistralai/Mixtral-8x7B-v0.1, targeting all modules, merged back into the base model.

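The exact training configuration is not included here; as a rough illustration, a rank-16, all-modules QLoRA setup over Mixtral-8x7B could be expressed with peft (which axolotl builds on) roughly as below. The alpha, dropout, and merge details are assumptions, not the settings actually used.

```python
# Illustrative QLoRA setup; values marked "assumed" are not from the card.
import torch
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base_id = "mistralai/Mixtral-8x7B-v0.1"

# Load the base model in 4-bit (the "Q" in QLoRA).
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Rank-16 adapters on every linear module ("all modules").
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,        # assumed
        lora_dropout=0.05,    # assumed
        target_modules="all-linear",
        task_type="CAUSAL_LM",
    ),
)
model.print_trainable_parameters()

# After training, the adapter is merged back into a full-precision copy of the base:
# merged = PeftModel.from_pretrained(
#     AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16),
#     "path/to/adapter",  # hypothetical adapter checkpoint
# ).merge_and_unload()
```
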
## Prompting

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can occur multiple times and can be chained to form a conversation history.
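
As a minimal usage sketch, a single-turn prompt could be assembled and run with transformers as below. The repo id is a placeholder and the lack of whitespace between role tokens is an assumption; only the three role tokens themselves come from this card.

```python
# Minimal generation sketch; the repo id is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ewof/koishi-mixtral-8x7b"  # placeholder, substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# <|system|> injects out-of-channel instructions, <|user|> carries the user's
# input, and the trailing <|model|> asks the model to generate its reply.
prompt = (
    "<|system|>Enter assistant mode. Answer concisely and accurately."
    "<|user|>What does GPTQ quantization do?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

A multi-turn conversation simply repeats `<|user|>`/`<|model|>` pairs (optionally with further `<|system|>` injections) before the final `<|model|>` token.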