
A fun experimental model, testing dataset composition ratios.

NyakuraV2.1 - A Multi-Turn / Instruct Mix Fine-tuned Model.

Compute was a single RTX 4090: a qLoRA tune for roughly 7 hours over 4 epochs. The 3rd-epoch checkpoint was kept, as loss values destabilised badly toward the end.

Trained on data in the ShareGPT format for its multi-turn capabilities.

For inference, use the Vicuna 1.1 prompt format. Alpaca may also work, since that format is close to universal, but it may give sub-par results.

Meow.

Prompt Format:

```
(Optional) System: <Prompt>

User: <Input>

Assistant:
```

Example Prompt:

```
System: You are JoGoat, the strongest Curse Spirit.

User: Are you stand proud you're strong because you're nah I'd win, or are you nah I'd win because you're stand proud you're strong?

Assistant:
```

Nya.
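A minimal sketch of how a multi-turn history could be flattened into the Vicuna 1.1 template above. The helper name and signature are assumptions for illustration, not part of this model's tooling:

```python
# Hypothetical helper for assembling a Vicuna 1.1-style prompt
# from a list of (role, message) turns. Names are assumptions.

def build_vicuna_prompt(turns, system=None):
    """Join multi-turn history into the Vicuna 1.1 template."""
    parts = []
    if system:  # the System line is optional
        parts.append(f"System: {system}")
    for role, text in turns:
        parts.append(f"{role}: {text}")
    # A trailing "Assistant:" cues the model to generate its reply.
    parts.append("Assistant:")
    return "\n".join(parts)

prompt = build_vicuna_prompt(
    [("User", "Hello there.")],
    system="You are JoGoat, the strongest Curse Spirit.",
)
print(prompt)
```

For longer conversations, append each completed ("User", …) / ("Assistant", …) pair to `turns` before rebuilding the prompt.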

