Yarn-Llama-2-7B-128K-GGUF not supported by LM Studio 0.2.3 and koboldcpp at 100k context

#1 by Ekolawole - opened

Yarn-Llama-2-7B-128K-GGUF is not supported by LM Studio 0.2.3 (Setup installer) or koboldcpp when the context is set to 100k.

Failed to load model 'TheBloke • yarn llama 2 128k 13B q4_0 gguf'
Error: LLM process load timed out after 180 seconds
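For reference, a minimal sketch of loading the GGUF directly with llama-cpp-python, to check whether the model itself loads at an extended context outside LM Studio / koboldcpp. The file path and context size below are assumptions, not the exact setup from this report; a 100k context also needs a very large KV cache, so running out of memory or a slow allocation may be what trips the 180-second load timeout.

```python
# Minimal load test with llama-cpp-python (pip install llama-cpp-python).
# The model path and n_ctx value are illustrative; adjust to the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./yarn-llama-2-7b-128k.Q4_0.gguf",  # hypothetical local file name
    n_ctx=32768,      # start well below 100k; raise it once this loads cleanly
    n_gpu_layers=0,   # CPU-only; set >0 to offload layers if you have VRAM
)

out = llm("Hello, world.", max_tokens=16)
print(out["choices"][0]["text"])
```

If a smaller context loads but 100k does not, the limit is likely memory or the frontend's load timeout rather than the GGUF file itself.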
