---
license: apache-2.0
---
# LimaRP-Llama2-7B-v3 (Alpaca, experimental, 4-bit LoRA adapter)
This is an experimental version of LimaRP for Llama2, using a somewhat updated dataset (1800 training samples) and a two-pass training procedure. The first pass consists of unsupervised finetuning on 2800 stories of up to 4k tokens in length, and the second pass is LimaRP with changes introducing more effective control over bot response length.

For more details about LimaRP, see the model page for the previously released version. Most of the details written there apply to this version as well.
## Prompt format
Same as before: the model uses the extended Alpaca format, with `### Input:` immediately preceding user inputs and `### Response:` immediately preceding model outputs. While Alpaca wasn't originally intended for multi-turn responses, in practice this is not a problem; the format follows a pattern already used by other models.
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.

### Input:
User: {utterance}

### Response:
Character: {utterance}

### Input:
User: {utterance}

### Response:
Character: {utterance}

(etc.)
```
You should:

- Replace all the text in curly braces (curly braces included) with your own text.
- Replace `User` and `Character` with appropriate names.
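For programmatic use, the template can be assembled with a small helper. The sketch below is illustrative only: `build_prompt` and its arguments are hypothetical names, not part of any library, and the optional `length` argument anticipates the length modifier described in the next section.

```python
# Illustrative sketch of assembling the extended Alpaca prompt shown above.
# build_prompt and its arguments are hypothetical, not from any library.

def build_prompt(
    char: str,
    user: str,
    char_persona: str,
    user_persona: str,
    scenario: str,
    history: list[tuple[str, str]],  # (speaker, utterance) pairs, oldest first
    length: str | None = None,       # optional modifier, see the next section
) -> str:
    parts = [
        "### Instruction:",
        f"{char}'s Persona: {char_persona}",
        f"{user}'s Persona: {user_persona}",
        f"Scenario: {scenario}",
        f"Play the role of {char}. You must engage in a roleplaying chat with "
        f"{user} below this line. Do not write dialogues and narration for {user}.",
    ]
    for speaker, utterance in history:
        header = "### Input:" if speaker == user else "### Response:"
        parts += [f"\n{header}", f"{speaker}: {utterance}"]
    # End with the response header so the model continues as the character.
    response_header = "### Response:"
    if length is not None:
        response_header += f" (length = {length})"
    parts += [f"\n{response_header}", f"{char}:"]
    return "\n".join(parts)
```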
### Message length control
Inspired by the previously named "Roleplay" preset in SillyTavern, starting from this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input:
User: {utterance}

### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `tiny`, `short`, `medium`, `long`, `huge`, `humongous`, `extreme`, `unlimited`. The recommended starting length is `medium` or `long`. Keep in mind that the AI may ramble, and that impersonation can occur with very long messages.
The length control effect is reproducible, but the messages will not necessarily follow the requested length precisely; rather, they fall within certain ranges on average, as observed in tests made with one reply at the beginning of the conversation. Response length control also appears to work well deep into the conversation.
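As a rough sketch, here is how the modifier would be passed through the hypothetical `build_prompt` helper from the previous section (all brace-enclosed values are placeholders from the template):

```python
# Hypothetical usage of the build_prompt sketch above, requesting a
# medium-length reply. Values in braces are placeholders from the template.
prompt = build_prompt(
    char="Character",
    user="User",
    char_persona="{bot character description}",
    user_persona="{user character description}",
    scenario="{what happens in the story}",
    history=[("User", "Hello!")],
    length="medium",
)
# The prompt now ends with "### Response: (length = medium)" followed by
# "Character:", cueing the model to write a medium-length reply.
```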
## Suggested settings
You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length.
## Training procedure
[Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training. The model has been trained as a 4-bit LoRA adapter. The adapter file is large because a LoRA rank of 256 was used. It is suggested to merge it into the base Llama2-7B model.
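Merging can be done with the PEFT library. The following is a minimal sketch, assuming the adapter has been downloaded locally; the adapter and output paths are placeholders to adjust for your setup.

```python
# Minimal sketch of merging the LoRA adapter into the base model with PEFT.
# The adapter and output paths are placeholders; adjust them to your setup.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/limarp-lora-adapter")
merged = model.merge_and_unload()  # folds the LoRA weights into the base weights
merged.save_pretrained("limarp-llama2-7b-merged")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("limarp-llama2-7b-merged")
```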
### Training hyperparameters
For the first pass these settings were used:
- learning_rate: 0.0002
- lr_scheduler_type: constant
- lora_r: 256
- lora_alpha: 16
- lora_dropout: 0.1
- lora_target_linear: True
- num_epochs: 1
- bf16: True
- tf32: True
- load_in_4bit: True
- adapter: qlora
- micro_batch_size: 2
- gradient_accumulation_steps: 1
- optimizer: adamw_torch
In the second pass, the `lora_model_dir` option was used to load and train the adapter previously trained on a stories dataset. These settings were also changed:
- lora_dropout: 0.0
- gradient_accumulation_steps: 8
- learning_rate: 0.0006
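Note that raising `gradient_accumulation_steps` from 1 to 8 also changes the effective batch size: `micro_batch_size` × `gradient_accumulation_steps` = 2 × 8 = 16 sequences per optimizer step in the second pass, compared to 2 × 1 = 2 in the first.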