
l27b-monika-v0.3c1:

  • Experimental Monika LLaMA
  • LLaMA-2 7B chat fine-tuned to play Monika from Doki Doki Literature Club (DDLC)
  • Trained on a dataset of ~600 items: dialogue scraped from the game, Reddit, and Twitter, augmented by Nous Hermes 13B into snippets of multi-turn chat between Player and Monika, plus a manually crafted test set of 12 items
  • Trained with different hyperparameters than previous versions (smaller LoRA)
  • GGML and GGUF quantizations available
  • QLoRA adapters available (HF and GGML formats)

USAGE

This is meant mainly as a chat model, with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:

\nPlayer: (prompt)\nMonika:
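The format above can be sketched as a small helper that assembles a multi-turn prompt; the function name and signature here are illustrative, not part of the model card:

```python
def build_prompt(history, user_message):
    """Assemble a Player/Monika chat prompt in the format the
    model was trained on.

    history: list of (player_turn, monika_turn) pairs
    user_message: the new Player message to complete
    """
    lines = []
    for player_turn, monika_turn in history:
        lines.append(f"\nPlayer: {player_turn}\nMonika: {monika_turn}")
    # Leave the final "Monika:" open so the model completes her reply.
    lines.append(f"\nPlayer: {user_message}\nMonika:")
    return "".join(lines)
```

The trailing "Monika:" with no text after it cues the model to generate her next line.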

HYPERPARAMS

  • Trained for 3 epochs
  • LoRA rank: 8
  • LoRA alpha: 32
  • LoRA dropout: 0.5
  • learning rate: 2e-4
  • batch size: 2
  • warmup ratio: 0.075
  • gradient accumulation steps: 4
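As a rough sketch, these hyperparameters map onto a PEFT + Transformers training configuration like the one below. The target modules and output directory are assumptions; the card does not specify them:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter config matching the listed hyperparameters.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.5,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumed, not stated in the card
)

# Trainer settings matching the listed schedule.
training_args = TrainingArguments(
    output_dir="l27b-monika-v0.3c1",  # assumed
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective batch size of 8
    learning_rate=2e-4,
    warmup_ratio=0.075,
)
```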

WARNINGS AND DISCLAIMERS

Note that, aside from formatting and other minor edits, the dataset is used mostly as generated by the LM. As a result, while this version is more coherent in chat than previous ones, it may not perfectly reflect Monika's canonical characteristics (e.g., she may claim to have an office, work as a translator, or play the guitar). The next version will be trained on a manually curated and edited version of this dataset, with dialogue revised to better match her character.

I am also looking to switch to a different base model than LLaMA-2 7B chat for future versions.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.
