Model Details

$$
W_{mistral} + LoRA_{hermes} = W_{hermes}
$$
$$
W_{hermes} - LoRA_{hermes} = W_{mistral}
$$
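The merge/unmerge identity above can be checked numerically with plain arrays; the shapes and scaling below are illustrative assumptions, not this adapter's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 4                           # hypothetical hidden size and LoRA rank
W_mistral = rng.normal(size=(d, d))    # stand-in for a base weight matrix
A = rng.normal(size=(r, d))            # LoRA down-projection
B = rng.normal(size=(d, r))            # LoRA up-projection
alpha = 8                              # hypothetical LoRA scaling factor

lora_hermes = (alpha / r) * (B @ A)    # the low-rank delta, LoRA_hermes

W_hermes = W_mistral + lora_hermes     # merge: base + adapter = fine-tune
W_recovered = W_hermes - lora_hermes   # unmerge: fine-tune - adapter = base

assert np.allclose(W_recovered, W_mistral)
```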

Why Though?

Unfortunately, this is not as simple as typeof/zephyr-7b-beta-lora, because of the way OpenHermes-2.5-Mistral-7B was trained: it adds new tokens, so its weights do not correspond 1-to-1 with mistralai/Mistral-7B-v0.1 the way typeof/zephyr-7b-beta-lora's do... nevertheless, if you have found yourself here, I'm sure you can figure out how to use it... if not, open an issue!
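The mismatch shows up concretely in the embedding matrix: the added tokens make the fine-tune's vocabulary larger than the base model's, so the base embedding must be resized before the adapter's delta can line up. A minimal sketch of that resize (the hidden size, vocab counts, and mean-initialization heuristic here are assumptions for illustration):

```python
import numpy as np

base_vocab, hermes_vocab, d = 32000, 32002, 8   # toy sizes; d is illustrative

emb_mistral = np.random.default_rng(1).normal(size=(base_vocab, d))

# Resize the base embedding before applying any adapter delta:
# rows for the added tokens are initialized to the mean embedding,
# a common heuristic when extending a vocabulary.
mean_row = emb_mistral.mean(axis=0, keepdims=True)
extra = np.repeat(mean_row, hermes_vocab - base_vocab, axis=0)
emb_resized = np.vstack([emb_mistral, extra])

assert emb_resized.shape == (hermes_vocab, d)
```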

[Image: photo courtesy @teknium]

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
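In the absence of official instructions, a plausible loading sketch with `transformers` + `peft` (untested; the tokenizer is taken from teknium/OpenHermes-2.5-Mistral-7B on the assumption that it defines the added tokens, and the base embedding is resized to match before the adapter is attached):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "mistralai/Mistral-7B-v0.1"
adapter = "typeof/openhermes-2.5-mistral-lora"

# The tokenizer should come from the fine-tune, since it defines the added tokens.
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

model = AutoModelForCausalLM.from_pretrained(base)
model.resize_token_embeddings(len(tokenizer))  # account for the added tokens
model = PeftModel.from_pretrained(model, adapter)
```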

Summary

A fine-tuned version of mistralai/Mistral-7B-v0.1

- LoRA
- QLoRA
