
These are the original FP16 weights of the model, created using chargoddard's frankenllama script, published so that others interested in experimenting further with the results can do so.

WARNING: this model is very unpredictable.

This model is an experiment using the frankenstein script from https://huggingface.co/chargoddard/llama2-22b, except that I used it with two models that have already been extensively finetuned: https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16 as the base model and https://huggingface.co/Aeala/Enterredaas-33b as the donor model.
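
If you want to experiment with the raw FP16 weights yourself, a minimal loading sketch with transformers might look like the following. The `torch_dtype` and `device_map` arguments are assumptions about a typical GPU setup, not requirements from this card:

```python
# Minimal loading sketch (assumes the transformers, torch, and accelerate packages
# and enough VRAM for ~22B parameters in FP16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Envoid/MindFlay-22B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # load the original FP16 weights
    device_map="auto",          # spread layers across available devices
)
```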

The resulting model is surprisingly coherent, still responds well to the llama2chat prompt format ([INST]<<SYS>><</SYS>>[/INST]), and retains most of llama2chat's bubbly, giddy personality, though it is now grittier and more visceral. It makes occasional "typos" along with some other quirks, so it did not come through the frankensteining process completely unscathed. I plan to massage it with a LoRA in the near future to bring it into more harmony, but in the meantime it is available now for your enjoyment.
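
For reference, here is a sketch of how the Llama-2 chat wrapping mentioned above might be assembled. The system and user strings are placeholders; only the [INST]/<<SYS>> structure comes from this card:

```python
# Placeholder example strings; only the tag structure follows the Llama-2 chat format.
system_prompt = "You are a helpful assistant."
user_message = "Write a short, gritty scene set in an abandoned lighthouse."

prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

# Feed `prompt` to the model loaded above, e.g.:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```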

Use cases: Chat/RP; not much else.
