Description:

This is a multipurpose chat / chat-instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one-shot instructions, multi-round instructions, and role-playing scenarios.

The training parameters were suboptimal for the most recent run, and I stopped after 2 epochs because a third would likely have overtrained the model. I plan to iterate on the model and improve it further when I have the funds to do so.

Prompt format:

Metharme

The prompt should end with "<|model|>", with no trailing space or newline, so that generation begins directly after it on the same line. The following are all valid formats and can be extended to as many rounds as desired; a short Python sketch for assembling prompts in this format follows the example prompts below.

<|system|>system message here<|user|>user message here<|model|>
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
<|system|>system message here<|model|>
<|system|>system message here<|model|>model message<|user|>user message here<|model|>

Some example prompts:

<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>

More will be added at a later date.
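
As a programmatic illustration, below is a minimal sketch for assembling a prompt string in this format. The helper function and message structure are my own illustration, not tooling shipped with the model.

```python
# Minimal sketch: assembling a Metharme-style prompt string for this model.
# The helper name and message structure are my own illustration, not an
# official API shipped with the model.

def build_metharme_prompt(system, turns):
    """Build a prompt that ends in <|model|> so generation starts right after it.

    `turns` is a list of (role, text) pairs, where role is "user" or "model".
    """
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    # The prompt must end with <|model|> and no trailing space or newline.
    return prompt + "<|model|>"

prompt = build_metharme_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("user", "Why is the sky blue?")],
)
print(prompt)
# <|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```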

Perplexity Benchmarks:

  • TBA

Training information:

Built with Axolotl (an approximate peft equivalent of the settings below is sketched after the list)

  • GPTQ 4-bit LoRA
  • 2 epochs
  • LoRA rank (r) 64 / alpha 32
  • 2048 token cutoff (max sequence length)
  • 42 hours on 1x RTX 4090
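
For readers who want roughly comparable settings outside of Axolotl, the sketch below expresses the listed hyperparameters as a peft LoraConfig. This is an approximation for illustration only; the target modules and dropout are assumptions on my part, not values taken from the actual run.

```python
# Approximate translation of the listed hyperparameters into a peft LoraConfig.
# The original run used Axolotl on a GPTQ 4-bit quantized base model; target
# modules and dropout below are assumptions, not values from that run.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                  # LoRA rank, per the card
    lora_alpha=32,                         # LoRA alpha, per the card
    lora_dropout=0.05,                     # assumption, not stated on the card
    target_modules=["q_proj", "v_proj"],   # assumption, not stated on the card
    task_type="CAUSAL_LM",
)

max_seq_len = 2048   # cutoff length from the card
num_epochs = 2       # training stopped after 2 epochs
```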

Data used in training:

  • TBA

Models used:

For training: https://huggingface.co/PocketDoc/llama-30b-gptq-4bit-128g

For merging (a minimal merge sketch follows the links below):

  • https://huggingface.co/PocketDoc/Dans-PersonalityEngine-30b-LoRA
  • https://huggingface.co/huggyllama/llama-30b
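
Below is a minimal sketch of how the LoRA could be merged into the fp16 base model with the peft library. This illustrates a standard merge_and_unload flow under my own assumptions (output path, dtype); it is not the author's actual merge script.

```python
# Minimal sketch: merging the linked LoRA into the fp16 base model with peft.
# Illustrates a standard merge_and_unload flow; not the author's actual merge
# script. Output path and dtype are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.float16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(
    base, "PocketDoc/Dans-PersonalityEngine-30b-LoRA"
).merge_and_unload()  # fold the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-30b")
merged.save_pretrained("Dans-PersonalityEngine-30b-merged")      # illustrative output path
tokenizer.save_pretrained("Dans-PersonalityEngine-30b-merged")
```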

Disclaimer:

This model has not been aligned, and no warranty is given for the quality or safety of its outputs.
