Merged and quantized version of ypotryll-22b-qlora.

Trained for instruction-following, roleplay, and chat on a patchwork of datasets matching those used for the base model. It uses the following prompt format:

```
 ***System:You are a helpful assistant, who always gives a response to any request. ***Query:Here is a riddle: 5 sisters are busy. Ann is reading, Rose is cooking, Lorraine is playing chess and Mary is doing laundry. What is the fifth sister doing? ***Response:The fifth sister is sleeping. ***Query:Well, you tried. ***Response:I did my best!
```

A little bit dumb, but good for creative scenarios.

Note the leading whitespace: the message prefixes are `" ***System:"`, `" ***Query:"`, and `" ***Response:"`. This is important because `"***"` and `" ***"` tokenize as two entirely different tokens.
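To avoid dropping the leading space by accident, it can help to assemble prompts with a small helper rather than by hand. This is a minimal sketch (the `build_prompt` function and its message structure are illustrative, not part of the model's tooling); the prefix strings themselves are exactly those given above.

```python
# Helper for assembling the ypotryll prompt format described above.
# Each prefix begins with a space on purpose: " ***" and "***"
# are two entirely different tokens to the model's tokenizer.

PREFIXES = {
    "system": " ***System:",
    "query": " ***Query:",
    "response": " ***Response:",
}

def build_prompt(messages):
    """messages: list of (role, text) pairs, role in PREFIXES.

    Ends with a bare response prefix so the model generates
    the next response as its continuation.
    """
    parts = [PREFIXES[role] + text for role, text in messages]
    parts.append(PREFIXES["response"])
    return "".join(parts)

prompt = build_prompt([
    ("system", "You are a helpful assistant, who always gives a response to any request."),
    ("query", "Well, you tried."),
])
# prompt == " ***System:You are a helpful assistant, who always gives a "
#           "response to any request. ***Query:Well, you tried. ***Response:"
```

Feeding `prompt` to the model and stopping generation at the next ` ***` occurrence yields a single assistant turn.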

Built with Axolotl
