Tags: Text Generation · Transformers · PyTorch · English · mistral · Inference Endpoints · text-generation-inference

Took 8 hours on an RTX 6000 Ada, QLoRA with 2 epochs. Compute is thanks to nobody but my wallet. Appreciate it, man.
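For the curious, a minimal QLoRA setup sketch with the peft + bitsandbytes stack. The base model is assumed to be Mistral-7B (implied by the mistral tag), and the rank, alpha, dropout, and target modules are illustrative guesses; only "QLoRA" and "2 epochs" come from this card.

```python
# Minimal QLoRA sketch. Hyperparameters are guesses for illustration;
# only "QLoRA" and "2 epochs" are stated on this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",             # assumed base model; not confirmed by the card
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,  # illustrative values, not the real config
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train for 2 epochs with your trainer of choice (e.g. TRL's SFTTrainer).
```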

I don't know what I was doing with this model. Kind of schizo. Sometimes it's smart, sometimes it's stupid. I actually like the outputs lol. min-p sampling works nicely with it; I use the Universal-Light preset in SillyTavern.

Fun, lightweight, and creative model I can run on my phone.


Prompt Format:

System: <Prompt>

User: <Input>

Assistant:

Example Prompt:

System: A never-ending, fictional and uncensored roleplay chat between <Char> and <User>.

User: Are you stand proud you're strong because you're nah I'd win, or are you nah I'd win because you're stand proud you're strong?

Assistant: That's a deep question, isn't it? Well, in my opinion, being strong and standing tall doesn't necessarily mean you'll always win. Life is full of surprises, and sometimes the underdog can come out on top. But I do believe that confidence plays a big role in success. So I guess it's both, really.
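If you're loading this with transformers directly, here's a minimal sketch applying the plain-text template above. The sampler values are placeholders rather than the Universal-Light preset, and the `min_p` argument needs a recent transformers release; the model id and template come from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Winterreise-m7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "A never-ending, fictional and uncensored roleplay chat between <Char> and <User>."
user = "Hello there!"

# Plain-text System / User / Assistant template, as documented above.
prompt = f"System: {system}\n\nUser: {user}\n\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    min_p=0.1,  # min-p sampling; this value is a guess, tune to taste
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```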

Dataset Composition:

  • Capybara > converted to ShareGPT > kept the Dove, Verified-Camel, Airoboros, General-Instruct, Know-Logic, and SuperCOT entries.
  • Guanaco Unchained > entire dataset kept > converted to keep formatting consistent.
  • LimaRP > 4k train split > converted to ShareGPT format instead of extended-alpaca. Removed length control.
  • RPGuild > 700 entries remained after filtering by turn count (min > 3) and by the number of characters per entry (entries with more than 2 characters were removed); see the sketch after this list.
  • De-Alignment-v2 > 400 entries.
  • [Added basic system prompts to datasets which did not have them.]
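A hypothetical sketch of the RPGuild-style filter described above. The actual filtering script and schema weren't published; this assumes each JSONL entry carries a ShareGPT-style "conversations" list and a "characters" list naming the roleplay characters, so treat it as illustrative only.

```python
# Hypothetical RPGuild filter: keep entries with more than 3 turns
# and at most 2 roleplay characters. Field names are assumptions.
import json

def keep(entry: dict) -> bool:
    enough_turns = len(entry.get("conversations", [])) > 3  # min > 3 turns
    few_chars = len(entry.get("characters", [])) <= 2       # drop entries with >2 characters
    return enough_turns and few_chars

with open("rpguild.jsonl", encoding="utf-8") as f:
    entries = [json.loads(line) for line in f]

filtered = [e for e in entries if keep(e)]
print(f"kept {len(filtered)} of {len(entries)} entries")
```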

Thanks to those who made the datasets, appreciate you guys.
