
Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's abliterated version of arcee's SuperNova-lite. The hope is that abliteration removes some of the inherent refusals and censorship of the original model; however, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.

Quants!

full / exl2 / gguf
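
If you want to run the GGUF quants locally, here's a minimal sketch using llama-cpp-python. The filename, context size, and prompt are my own assumptions, not settings shipped with the model; point model_path at whichever quant you downloaded.

```python
# Minimal sketch: running a NovaSpark GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="NovaSpark-Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=8192,                          # assumed context window
    chat_format="llama-3",               # matches the instruct template described below
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a two-line poem about sparks."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```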

Prompting

This model is trained on the Llama instruct template; the prompting structure goes a little something like this:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
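
If you're building prompts by hand instead of relying on a chat template, here's a minimal sketch of that structure (build_prompt is my own helper, not part of any library):

```python
# Minimal sketch of the Llama 3 instruct structure shown above.
# Note the two newlines after each <|end_header_id|> -- the template expects them.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```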

Context and Instruct

This model is trained on llama-instruct; please use that Context and Instruct template.

Current Top Sampler Settings

Smooth Creativity: Credit to Juelsman for researching this one!
Variant Chimera: Credit to Numbra!
Spicy_Temp
Violet_Twilight-Nitral-Special
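
The presets above are sampler setting files; their exact values aren't reproduced here, so load the actual presets to use them as intended. As a generic sketch of how sampler settings get wired into a llama-cpp-python call (reusing the llm object from the earlier sketch; all values below are placeholders, not the presets):

```python
# Generic sketch: passing sampler settings to llama-cpp-python.
# These values are placeholders, NOT the presets linked above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=1.0,  # placeholder value
    top_p=0.95,       # placeholder value
    min_p=0.05,       # placeholder value
    max_tokens=256,
)
```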

Format: GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
