---
pipeline_tag: text-generation
license: other
license_name: microsoft-research-license
language:
- en
---
q8_0, q6_k, q5_k_m, q4_k_m, and q3_k_m GGUF quants of athirdpath/Orca-2-13b-Alpaca-Uncensored.
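
If you want a quick way to run one of these quants, here is a minimal sketch using llama-cpp-python; the filename, context size, and sampling settings are assumptions, not part of this release.

```python
# Minimal sketch: load a quant with llama-cpp-python and run an Alpaca-style prompt.
# The GGUF filename below is hypothetical -- point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="orca-2-13b-alpaca-uncensored.q4_k_m.gguf",  # hypothetical filename
    n_ctx=4096,       # context window (assumption)
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm(
    "### Instruction:\nSummarize the plot of Hamlet in two sentences.\n\n### Response:\n",
    max_tokens=256,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```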

This model is a fine-tuned version of microsoft/Orca-2-13b on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, adjusted to demonstrate the relationship between instruction and input, with some particularly spicy prompts added to reduce the risk of rejections.
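
For context, the Alpaca format distinguishes a standalone instruction from an instruction paired with an input block. Below is a sketch of the standard template; the exact preamble wording used during training is an assumption.

```python
# Standard Alpaca prompt template (assumed); shows how instruction and input relate.
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```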

Only the q_proj and k_proj modules were targeted, and a low rank (8) was used, in hopes of containing the adjustments to the prompt format and alignment. This looks promising on paper: per-step training loss averaged below 0.9 over the last third of the run.
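
For anyone reproducing the setup, a hedged sketch of the corresponding PEFT LoRA config is below; only the rank and target modules come from this card, the other hyperparameters are assumptions.

```python
# LoRA config matching the description above: q_proj/k_proj only, rank 8.
# lora_alpha and lora_dropout are assumptions -- they are not stated in the card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # low rank, per the card
    target_modules=["q_proj", "k_proj"],  # attention query/key projections only
    lora_alpha=16,                        # assumption
    lora_dropout=0.05,                    # assumption
    bias="none",
    task_type="CAUSAL_LM",
)
```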

Reasoning stayed solid (for a 13b model), and I consider this a success. Performance is slightly worse than OG Orca-2 in Ooba's chat mode, and comparable in Alpaca chat-instruct mode to the OG in ChatML chat-instruct mode.

The model may still reject some shocking prompts, but this can easily be overcome with an author's note or character card.