---
license: apache-2.0
---

# LimaRP-Llama2-7B-v3 (Alpaca, experimental, 4-bit LoRA adapter)

This is an experimental version of LimaRP for Llama2, using a somewhat updated dataset (1800 training samples)
and a two-pass training procedure. The first pass consists of unsupervised tuning on 2800 stories of up to
4k tokens in length; the second pass is LimaRP itself.

For more details about LimaRP, see the model page for the [previously released version](https://huggingface.co/lemonilia/limarp-llama2-v2).
Most details written there apply to this version as well.

## Prompt format
Same as before. It uses the [extended Alpaca format](https://github.com/tatsu-lab/stanford_alpaca),
with `### Input:` immediately preceding user inputs and `### Response:` immediately preceding
model outputs. While Alpaca wasn't originally intended for multi-turn responses, in practice this
is not a problem; the format follows a pattern already used by other models.

```
### Instruction:
Character's Persona: {bot character description}

User's Persona: {user character description}

Scenario: {what happens in the story}

Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.

### Input:
User: {utterance}

### Response:
Character: {utterance}

### Input:
User: {utterance}

### Response:
Character: {utterance}

(etc.)
```

### Other notes
- Replace all the text in curly braces (curly braces included) with your own text.
- `User` and `Character` should be replaced with appropriate names.
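For programmatic use, the prompt can be assembled with a small helper. Below is a minimal sketch; the function and variable names (`build_prompt`, `turns`, and so on) are illustrative, not part of any released code:

```python
# Minimal sketch of a prompt builder for the extended Alpaca format above.
# All names here (build_prompt, turns, ...) are illustrative only.

def build_prompt(bot_name, bot_persona, user_name, user_persona,
                 scenario, turns, length="medium"):
    """Assemble an extended-Alpaca prompt from personas, scenario, and chat turns.

    `turns` is a list of (speaker, utterance) tuples in chronological order,
    where speaker is either user_name or bot_name.
    """
    prompt = (
        "### Instruction:\n"
        f"{bot_name}'s Persona: {bot_persona}\n\n"
        f"{user_name}'s Persona: {user_persona}\n\n"
        f"Scenario: {scenario}\n\n"
        f"Play the role of {bot_name}. You must engage in a roleplaying chat "
        f"with {user_name} below this line. Do not write dialogues and "
        f"narration for {user_name}. {bot_name} should respond with "
        f"messages of {length} length.\n"
    )
    for speaker, utterance in turns:
        tag = "### Input:" if speaker == user_name else "### Response:"
        prompt += f"\n{tag}\n{speaker}: {utterance}\n"
    # End with the response header so the model continues as the bot character.
    prompt += f"\n### Response:\n{bot_name}:"
    return prompt
```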


## Training procedure
[Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training.
The model has been trained as a 4-bit LoRA adapter. The adapter file is unusually large because
a LoRA rank of 256 was used. It is suggested to merge it into the base Llama2-7B model, as
sketched below.
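As a rough sketch, merging can be done with the 🤗 PEFT library along these lines. The adapter path and output directory are placeholders, not confirmed identifiers:

```python
# Sketch of merging the LoRA adapter into the base Llama2-7B model with PEFT.
# "path/to/limarp-adapter" is a placeholder; point it at the downloaded adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/limarp-adapter")
merged = model.merge_and_unload()  # fold the LoRA weights into the base weights

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
merged.save_pretrained("./limarp-llama2-7b-v3-merged")
tokenizer.save_pretrained("./limarp-llama2-7b-v3-merged")
```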

### Training hyperparameters
For both passes these settings were used; a sketch of the corresponding Axolotl config follows the list:

- learning_rate: 0.0002
- lr_scheduler_type: constant
- lora_r: 256
- lora_alpha: 16
- lora_dropout: 0.1
- lora_target_linear: True
- num_epochs: 1
- bf16: True
- tf32: True
- load_in_4bit: True
- adapter: qlora
- micro_batch_size: 2
- gradient_accumulation_steps: 1
- optimizer: adamw_torch
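In Axolotl's YAML configuration these map roughly to the following. This is a partial sketch: `base_model` is assumed, dataset and output settings are omitted, and `lr_scheduler` is the Axolotl key corresponding to `lr_scheduler_type` above:

```yaml
# Partial Axolotl config sketch covering only the hyperparameters listed above.
base_model: meta-llama/Llama-2-7b-hf  # assumed base model
load_in_4bit: true
adapter: qlora
lora_r: 256
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
learning_rate: 0.0002
lr_scheduler: constant
num_epochs: 1
micro_batch_size: 2
gradient_accumulation_steps: 1
optimizer: adamw_torch
bf16: true
tf32: true
```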

In the second pass, the `lora_model_dir` option was used to load the adapter previously trained
on the stories dataset and continue training it on LimaRP.
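In config terms, this amounts to adding one line to the second-pass config; the path is a placeholder:

```yaml
# Second pass: resume from the first-pass (stories) adapter. Placeholder path.
lora_model_dir: ./first-pass-stories-adapter
```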