leo-mistral-hessianai-7b-chat for privateGPT

#8
by Dodo124 - opened

Hello,

I wonder if I can somehow use this model in my privateGPT setup at home.

Right now privateGPT uses mistral-7b-instruct-v0.2.Q4_K_M.gguf as a model.

Is it possible to use this version instead?

Kind regards, Dom

Dodo124 changed discussion title from leo-mistral-hessianai-7b-chat for prvateGPT to leo-mistral-hessianai-7b-chat for privateGPT
LAION LeoLM org

Yeah, that should be possible! You can find a quantized GGUF version here: https://huggingface.co/TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF
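For what it's worth, in privateGPT the local model is usually selected in settings.yaml (or a settings-local.yaml profile). A minimal sketch of what that might look like, with the caveat that the exact key names vary between privateGPT versions and the model filename here is just an illustration, so check them against your own settings.yaml and the files listed in TheBloke's repo:

```yaml
# Hypothetical settings-local.yaml fragment -- key names and the model
# filename are assumptions; verify both against your privateGPT version.
llamacpp:
  llm_hf_repo_id: TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF
  llm_hf_model_file: leo-mistral-hessianai-7b-chat.Q4_K_M.gguf
```

After changing the settings you'd restart privateGPT with whatever run command your guide uses, so it picks up the new profile.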

Hi Björn,

So I got the model and put it in the right folder in my Ubuntu WSL environment on Windows via VS Code.

Right now I am struggling to figure out how to tell privateGPT to rerun poetry so it uses the new version.
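As an aside, before wiring a downloaded file into privateGPT it can be worth checking that it really is a GGUF model and not a truncated download or an HTML error page. A minimal, privateGPT-independent sketch: GGUF files start with the 4-byte magic `b"GGUF"`, so reading the first bytes is a quick sanity check (the example path is hypothetical):

```python
def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes.

    GGUF model files begin with the ASCII bytes "GGUF"; anything else
    (an HTML error page, a truncated transfer) fails this check.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example usage (hypothetical path -- point it at your own download):
# looks_like_gguf("models/leo-mistral-hessianai-7b-chat.Q4_K_M.gguf")
```

This only validates the header, not the whole file, but it catches the most common download failures cheaply.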

I'm using this guide "https://medium.com/@docteur_rs/installing-privategpt-on-wsl-with-gpu-support-5798d763aa31" and it pulls the repo directly from Hugging Face.

How did you learn this kind of AI programming? Any suggestions on where to start?

Thanks, Dom

LAION LeoLM org

I learned with lots of trial and error :)

I know nothing about the privateGPT project, but if you're having issues with installation, just post an issue on their GitHub and I'm sure someone there can help you out.
