[Experimental model]

This model is an experiment using the "frankenstein" merge script from https://huggingface.co/chargoddard/llama2-22b, run with BLOCK_DIAGONAL = False.

Base model: https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16, with https://huggingface.co/upstage/llama-30b-instruct-2048 as the donor model.
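For intuition only: a block-diagonal merge places each model's weight matrix in its own corner of a larger matrix, while BLOCK_DIAGONAL = False combines them differently. The toy sketch below (plain Python, hypothetical function name, not the actual merge script) only illustrates what a block-diagonal composite of two weight matrices looks like:

```python
def block_diagonal_merge(a, b):
    """Toy illustration: embed matrices a and b on the diagonal of a
    larger zero matrix. Not the actual frankenstein merge script."""
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    out = [[0.0] * (cols_a + cols_b) for _ in range(rows_a + rows_b)]
    for i in range(rows_a):          # top-left block comes from a
        for j in range(cols_a):
            out[i][j] = a[i][j]
    for i in range(rows_b):          # bottom-right block comes from b
        for j in range(cols_b):
            out[rows_a + i][cols_a + j] = b[i][j]
    return out

a = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]      # 2x3
b = [[2.0, 2.0], [2.0, 2.0], [2.0, 2.0]]    # 3x2
m = block_diagonal_merge(a, b)
print(len(m), len(m[0]))  # 5 5
```

With BLOCK_DIAGONAL = False the script does not keep the two models in separate blocks like this; the actual combination logic is defined in the linked llama2-22b repository.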

Merging these models used about 160 GB of system RAM; with enough memory, the merge completes quickly without swapping.

For the prompt template and model information, see huginnV1.
