Error loading model

#3
by jasonjschwartz - opened

OSError: Incorrect path_or_model_id: '/home/anon/AI-Models/LLM/Mistral-7B-v0.1/'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Using the latest version of all dependencies.
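That OSError is raised by `from_pretrained` when the path either doesn't exist or doesn't contain a loadable HuggingFace model (no `config.json` / weight shards). As a quick sanity check before loading, something like this stdlib-only sketch can tell the two cases apart (the function name and heuristic are mine, not part of any library):

```python
import os

def looks_like_hf_model_dir(path):
    """Heuristic: a loadable HuggingFace model folder needs a config.json
    plus at least one weight file (.safetensors or .bin)."""
    if not os.path.isdir(path):
        return False
    files = os.listdir(path)
    if "config.json" not in files:
        return False
    return any(f.endswith((".safetensors", ".bin")) for f in files)

# A GGUF-only repo layout like this one would fail the check, since it
# ships no config.json or FP16 weight shards for transformers to load.
```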

I'm not sure what you're trying to achieve, but the repository does not contain the full 16-bit HuggingFace weights, only GGUF quantizations and a LoRA adapter that you can merge with the base Mistral-7B-v0.1 model (not included).

Ah, so do both the GGUF and the LoRA adapter get merged? Or just the adapter?

Sorry, I have experience with PEFT, but I haven't merged models before.

The GGUF models are already merged. They can be used as-is with Koboldcpp, text-generation-webui, etc.
The adapter needs the full FP16 Mistral-7B weights (not included here).
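If you do want to merge the adapter yourself, a minimal PEFT sketch looks like the following. This assumes the adapter here is a standard PEFT LoRA adapter and that you've downloaded the full FP16 `mistralai/Mistral-7B-v0.1` weights separately; the adapter path and output directory are placeholders:

```python
# Sketch: fold a LoRA adapter into the FP16 base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",   # full FP16 base weights (not in this repo)
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

merged = model.merge_and_unload()   # bakes the LoRA deltas into the base weights
merged.save_pretrained("merged-model")  # loadable afterwards without peft

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.save_pretrained("merged-model")
```

After `merge_and_unload()`, the result is a plain transformers model; the GGUF files in this repo are a separate, already-merged artifact and play no part in this step.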
