Missing config.json

#2
by Cayleb - opened

I keep getting this error for all the Noromaid models. I'm a newbie, so there's probably something I'm doing wrong. The error always finishes with:

raise EnvironmentError(
OSError: models\NeverSleep_Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF does not appear to have a file named config.json. Checkout 'https://huggingface.co/models\NeverSleep_Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF/None' for available files.

[screenshot: Error.png]

Looks like you are using the wrong kind of model loader. Can you show your settings?

NeverSleep org

GGUF files have to be loaded with llama.cpp; you're trying to load the GGUF with Transformers.
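
For reference, a minimal sketch of loading a GGUF with llama-cpp-python, which is what the webui's llama.cpp loader wraps (the filename and parameter values here are illustrative, not the exact repo contents):

```python
from llama_cpp import Llama

# Transformers expects a config.json in the model folder; GGUF repos don't
# ship one, which is why the loader raised the OSError above. llama.cpp
# reads everything it needs from the .gguf file itself.
llm = Llama(
    model_path="models/noromaid-v0.4-mixtral-8x7b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window
    n_gpu_layers=32,  # layers to offload to VRAM (0 = CPU only)
)

out = llm("Hello,", max_tokens=16)
print(out["choices"][0]["text"])
```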

Llama.cpp gives this error:
[screenshot: image.png]

Here are my settings:

[screenshot: image.png]

NeverSleep org

Load some layers onto the GPU, I assume.

I loaded 100 layers and it still failed. How much memory does this one require? I have 70 GB in VRAM alone.

NeverSleep org
edited Mar 27

You need to load fewer than 32 layers.
The model has 32 layers, so try below that.
I mean, with 70 GB of VRAM you should be able to load even Q8_0 at 8k ctx; I'm surprised you OOM.
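
If you want to sanity-check the offload outside the webui, here's a rough sketch with llama-cpp-python (the path is hypothetical; the same idea applies to the webui's n-gpu-layers slider):

```python
from llama_cpp import Llama

# n_gpu_layers controls how many of the model's 32 layers go to VRAM.
# llama.cpp clamps values above the real layer count, so "100" effectively
# means "all of them"; if that still OOMs, context size is often the culprit.
llm = Llama(
    model_path="models/noromaid-v0.4-mixtral-8x7b.Q8_0.gguf",  # hypothetical path
    n_gpu_layers=31,  # one below the full 32, as suggested above
    n_ctx=8192,
)
```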

It seems to be a different issue; it's not memory, from what I can tell.

15:05:09-629186 ERROR    Failed to load the model.
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\text-generation-webui\modules\ui_model_menu.py", line 245, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Tom_N\Desktop\text-generation-webui\modules\models.py", line 87, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Tom_N\Desktop\text-generation-webui\modules\models.py", line 250, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Tom_N\Desktop\text-generation-webui\modules\llamacpp_model.py", line 102, in from_pretrained
    result.model = Llama(**params)
                   ^^^^^^^^^^^^^^^
  File "C:\Users\Tom_N\Desktop\text-generation-webui\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 325, in __init__
    self._ctx = _LlamaContext(
                ^^^^^^^^^^^^^^
  File "C:\Users\Tom_N\Desktop\text-generation-webui\installer_files\env\Lib\site-packages\llama_cpp_cuda\_internals.py", line 265, in __init__
    raise ValueError("Failed to create llama_context")
ValueError: Failed to create llama_context

Exception ignored in: <function LlamaCppModel.__del__ at 0x000001E68FF55300>
Traceback (most recent call last):
  File "C:\Users\Tom_N\Desktop\text-generation-webui\modules\llamacpp_model.py", line 58, in __del__
    del self.model
        ^^^^^^^^^^
AttributeError: 'LlamaCppModel' object has no attribute 'model'

Never mind, solved: I had set the context too high. Going to mess with this some more.
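
For anyone who hits the same "Failed to create llama_context" error: n_ctx determines how much memory llama.cpp reserves for the KV cache up front, so an oversized context can fail allocation even when the weights themselves fit. A back-of-the-envelope sketch (figures assume Mixtral 8x7B's 32 layers and 8 KV heads of dimension 128, with an fp16 cache):

```python
def kv_cache_bytes(n_ctx, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Rough fp16 KV-cache size: one key and one value vector per layer per token."""
    return n_ctx * n_layers * 2 * n_kv_heads * head_dim * bytes_per_elem

for ctx in (8_192, 32_768, 131_072):
    print(f"n_ctx={ctx:>7}: ~{kv_cache_bytes(ctx) / 2**30:.1f} GiB")
# n_ctx=   8192: ~1.0 GiB
# n_ctx=  32768: ~4.0 GiB
# n_ctx= 131072: ~16.0 GiB
```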
