
A 14B frankenmerge built from two of my own Solar fine-tunes.

Ignore the 16B in the name, lmao; I totally miscalculated. I was experimenting with both 64- and 72-layer configurations, hence the wrong number. This is a 64-layer frankenmerge.

Experimental.

This is completely untrained, just a merge. I'm going to fine-tune on top of the better model Soon™.
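
For anyone curious how a merge like this is put together: frankenmerges of this kind are typically made with mergekit's passthrough method, which stacks layer slices from the source models rather than averaging weights. Below is a minimal sketch; the model names and layer ranges are hypothetical placeholders, since the actual fine-tunes and slice boundaries aren't published here.

```python
import subprocess
from pathlib import Path

# Sketch of a 64-layer passthrough frankenmerge with mergekit.
# Model names and layer ranges are hypothetical placeholders; the real
# source fine-tunes and slice boundaries are not published in this card.
CONFIG = """\
slices:
  - sources:
      - model: my-solar-finetune-a   # hypothetical source model
        layer_range: [0, 40]         # illustrative: first 40 of 48 layers
  - sources:
      - model: my-solar-finetune-b   # hypothetical source model
        layer_range: [24, 48]        # illustrative: last 24 of 48 layers
merge_method: passthrough            # stack layers; no weight averaging
dtype: float16
"""

def main() -> None:
    config_path = Path("frankenmerge.yml")
    config_path.write_text(CONFIG)
    # mergekit-yaml is mergekit's CLI entry point: config in, merged model out.
    subprocess.run(
        ["mergekit-yaml", str(config_path), "./merged-14b"],
        check=True,
    )

if __name__ == "__main__":
    main()
```

The two slices total 64 layers (40 + 24), which matches the ~14.2B parameter count you'd expect from extending a 48-layer Solar base.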

Model size: 14.2B params
Architecture: llama
Format: GGUF
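
Since the weights are distributed as GGUF, the merge can be run locally without a training framework. A minimal sketch using llama-cpp-python, assuming a hypothetical quant filename:

```python
from llama_cpp import Llama

# Hypothetical GGUF filename; substitute the actual quant you downloaded.
llm = Llama(model_path="./merged-14b.Q4_K_M.gguf", n_ctx=4096)

# Plain completion call; adjust the prompt template to taste.
out = llm("Write a haiku about model merging.\n", max_tokens=128)
print(out["choices"][0]["text"])
```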