---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
language:
- en
tags:
- Δ
- LoRA
---

## Model Details

$$
W_{mistral} + LoRA_{hermes} = W_{hermes} \\
W_{hermes} - LoRA_{hermes} = W_{mistral}
$$

### Why Though?

Unfortunately, this is not as simple as [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora), because of the way [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) was trained: tokens were added during fine-tuning, so the correspondence with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) is not 1-to-1, as it is with [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora)... nevertheless, if you have found yourself here, I'm sure you can figure out how to use it... if not, open up an issue!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ox7zGoygsJQFFV3rLT4v9.png)

photo courtesy @teknium

#### Summary

A LoRA adapter representing the delta (Δ) between [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), and the base model itself.

[LoRA](https://arxiv.org/abs/2106.09685)
[QLoRA](https://arxiv.org/abs/2305.14314)

## How to Get Started with the Model

Use the code below to get started with the model.
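Because tokens were added during fine-tuning, the base model's embedding matrix has to be resized to the fine-tuned tokenizer's vocabulary before the adapter lines up. The following is a minimal, untested sketch assuming the standard `transformers`/`peft` APIs; the `adapter_id` below is a hypothetical placeholder for this repository's actual id.

```python
# Minimal sketch (untested); adapter_id is a placeholder, not a real repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<this-repo-id>"  # hypothetical placeholder: replace with this repo's id

# The OpenHermes tokenizer carries the added tokens, so take the
# vocabulary from the fine-tuned repo rather than from the base model.
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Tokens were added during fine-tuning, so the embedding / lm_head rows
# must be resized before the adapter can be applied. Caveat: unless the
# adapter was saved with the embeddings in `modules_to_save`, the rows
# for the newly added tokens remain freshly initialized after this call.
model.resize_token_embeddings(len(tokenizer))

# Apply the extracted delta: W_mistral + LoRA_hermes = W_hermes
model = PeftModel.from_pretrained(model, adapter_id)

# Optionally materialize W_hermes as a plain transformers model:
# model = model.merge_and_unload()
```

Merging with `merge_and_unload()` folds the low-rank delta back into the dense weights, which is the first identity above; leaving the adapter unmerged keeps the delta small and swappable.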