---
license: llama2
language:
- en
---
GGUF Quants.
For fp16 Repo, visit: https://huggingface.co/Sao10K/Hesperus-v1-13B-L2-fp16
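If you are running one of the GGUF quants with llama-cpp-python, loading looks roughly like the sketch below. This is illustrative only: the quant filename is a hypothetical example, and the settings simply mirror the repetition-penalty advice given further down this card.

```python
# Minimal sketch using llama-cpp-python; the GGUF filename below is a
# hypothetical example -- substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="hesperus-v1-13b.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,        # Llama 2 context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm(
    "User: Hello there!\nGPT:",  # ShareGPT-style prompt (see Prompt Format below)
    max_tokens=256,
    repeat_penalty=1.0,  # keep Rep Pen at 1.0, per the recommendation below
)
print(out["choices"][0]["text"])
```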
For Adapter, visit: https://huggingface.co/Sao10K/Hesperus-v1-LoRA

Hesperus-v1 - A trained 8-bit LoRA for RP & General Purposes. Trained on the base 13B Llama 2 model.
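To apply the LoRA adapter directly on top of the base model instead of using a merged checkpoint, something like the following should work. A sketch assuming the `transformers` and `peft` libraries; the load settings are illustrative, not prescribed by this card.

```python
# Sketch: applying the Hesperus-v1 LoRA onto base Llama 2 13B with PEFT.
# Assumes the adapter repo is in standard PEFT format; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach the adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "Sao10K/Hesperus-v1-LoRA")
```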
Dataset Entry Rows:
RP: 8.95K
MED: 10.5K
General: 8.7K
Total: 28.15K

This is after heavy filtering of ~500K rows and entries from randomly selected scraped sites and datasets. v1 is simply an experimental release; v2 will be the main product.
Goals:
- Reduce 28.15K to <10K entries.
- Adjust the RP / Med / General ratios again.
- Fix formatting and Markdown in each entry.
- Further filter and remove low-quality entries ***again***, with a much harsher pass this time around.
- Do a spellcheck and fix for entries.
- Commit to one prompt format for the dataset. Either ShareGPT or Alpaca, not both.

I recommend keeping Repetition Penalty below 1.1, preferably at 1, as Hesperus begins breaking down at 1.2 Rep Pen and may produce nonsense outputs.

![Format](https://i.gyazo.com/b22ba269e509c8a62276cbd5bde5acef.png)

Prompt Format:

```
- sharegpt (recommended!)
User:
GPT:
```

```
- alpaca (less recommended)
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <char>'s next reply in a chat between <char> and <user>. Write a single reply only.
### Response:
```

v1 is trained on a 50/50 mix of these two formats.
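As an illustration of the recommended ShareGPT-style format combined with the repetition-penalty advice above, here is a minimal generation sketch. The helper function and chat history are hypothetical, and it reuses the `model` and `tokenizer` from the PEFT loading sketch earlier on this card.

```python
# Hypothetical helper: flatten a chat history into the recommended
# ShareGPT-style "User:/GPT:" prompt described above.
def build_sharegpt_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    lines = []
    for user_turn, model_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"GPT: {model_turn}")
    lines.append(f"User: {user_message}")
    lines.append("GPT:")  # leave open for the model to complete
    return "\n".join(lines)

prompt = build_sharegpt_prompt(
    history=[("Hi, who are you?", "I'm Hesperus, nice to meet you.")],
    user_message="Tell me a short story.",
)

# Generation settings mirroring the card's advice: Rep Pen at 1.0,
# since the model reportedly breaks down around 1.2.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    repetition_penalty=1.0,
    do_sample=True,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```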
I am working on converting the dataset to a single one of these formats for v2. Once v2 is completed, I will also train a 70B variant of this.

EXAMPLE OUTPUTS:

![Alexandra](https://i.gyazo.com/a93a1a9d1a134f1f0d6163b54645cc20.png)

![LewdTV](https://i.gyazo.com/7016a1928d449c4fdff24f83a0707dcb.png)

![Beryl](https://i.gyazo.com/74e6c52f182e0934190ad5249df39534.png)