---
language:
- en
---

### Description:
This is a multipurpose chat / chat-instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format, and it has been trained on a wide array of one-shot instructions, multi-round instructions, and role-playing scenarios.

### Prompt format: 
Metharme

The prompt should end with `<|model|>`, with no trailing space, so that generation begins on the same line directly after it. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
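
For programmatic use, here is a minimal Python sketch that assembles a Metharme-format prompt from a system message and alternating turns. The `build_prompt` helper is our own illustration, not part of the model or its tooling:

```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a Metharme-format prompt.

    `turns` is a list of (role, text) pairs where role is "user" or
    "model". The result always ends with "<|model|>" so generation
    starts directly after it, with no trailing space.
    """
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"


# Single-round instruction:
print(build_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("user", "Why is the sky blue?")],
))
```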

Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
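
Below is a minimal sketch of running one of these prompts with the Transformers library; the repository id is a placeholder, so substitute the merged model you are actually using. Since the model may continue past its own turn, the sketch truncates the output at the next `<|user|>` tag:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-repo/your-model"  # placeholder, not a real repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = ("<|system|>The following is a transcript between a helpful "
          "assistant and a user.<|user|>Why is the sky blue?<|model|>")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256,
                        do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, then keep only the model's
# reply in case it runs on into a fabricated next user turn.
text = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                        skip_special_tokens=True)
print(text.split("<|user|>")[0].strip())
```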

### Perplexity Benchmarks:
- TBA

### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4-bit LoRA
- 7 epochs
- LoRA rank 64 / alpha 32 (see the sketch after this list)
- 2048-token sequence cutoff
- 18 hours on 4x RTX 4090s
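
For reference, the rank / alpha values above correspond to a PEFT `LoraConfig` like the sketch below. This is illustrative only; the actual run was configured through Axolotl:

```python
from peft import LoraConfig

# Mirrors the hyperparameters listed above; other fields (target
# modules, dropout) are not documented here and are left at defaults.
lora_config = LoraConfig(
    r=64,            # LoRA rank
    lora_alpha=32,   # LoRA alpha
    task_type="CAUSAL_LM",
)
```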

### Data used in training:
- TBA

### Models used: 
For training:
https://huggingface.co/PocketDoc/llama-13b-gptq-4bit-128g

For merging:
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-13b-LoRA
and
https://huggingface.co/huggyllama/llama-13b
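
A minimal sketch of how such a merge can be reproduced with PEFT, assuming the LoRA adapter applies cleanly to the full-precision base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b",
                                            torch_dtype="auto")
model = PeftModel.from_pretrained(
    base, "PocketDoc/Dans-PersonalityEngine-13b-LoRA")

# Fold the LoRA weights into the base model and drop the adapter wrappers.
model = model.merge_and_unload()
model.save_pretrained("llama-13b-personalityengine-merged")
```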


### Disclaimer:
This model has not been aligned, and no warranty is given for the quality or safety of its outputs.