Doesn't load in Oobabooga

#1
by NoidoDev - opened

This is with the llama.cpp loader, but it doesn't work with any of the other loaders either.

```
  File "PyEnv/text-generation-webui/modules/ui_model_menu.py", line 201, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/.../PyEnv/text-generation-webui/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "/.../PyEnv/text-generation-webui/modules/models.py", line 222, in llamacpp_loader
    model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*.gguf'))[0]
IndexError: list index out of range
```
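
The IndexError comes from the last frame: the llama.cpp loader globs the model folder for `*.gguf` files and takes the first match, so a folder with no GGUF file yields an empty list. A minimal sketch of that lookup (the folder name below is a placeholder, not this repo's actual directory):

```python
from pathlib import Path

# Minimal sketch of what the failing loader line does; the folder name is a
# placeholder, not this repo's actual directory.
model_dir = Path("models/example-nanogpt-checkpoint")

gguf_files = list(model_dir.glob("*.gguf"))
if not gguf_files:
    # An empty list means gguf_files[0] would raise
    # "IndexError: list index out of range", as in the traceback above.
    print(f"No .gguf file in {model_dir}: this checkpoint is not in GGUF format.")
else:
    model_file = gguf_files[0]
    print(f"Found GGUF file: {model_file}")
```

With no .gguf file present in the model folder, the glob comes back empty, which is exactly what the traceback shows.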

Yep, this is based off NanoGPT/GPT-2, so it doesn't really have any support for quants or Ooba.

VatsaDev changed discussion status to closed

Thanks, I didn't know Ooba only works with quants. However, the config file is also missing. KoboldAI is complaining about this:
```
  File ".../PyEnv/koboldai-client/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/utils/hub.py", line 380, in cached_file
    raise EnvironmentError(
OSError: models/ does not appear to have a file named config.json. Checkout 'https://huggingface.co/models//None' for available files.
```
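
The KoboldAI error comes from transformers, which refuses to load a model directory that has no config.json describing the architecture. If, and only if, the weights were already exported in Hugging Face GPT-2 format, a matching config could be written with GPT2Config. This is a sketch of that step using the stock GPT-2 defaults, not something that will make this repo's NanoGPT-style checkpoint loadable on its own:

```python
# Hedged sketch: this only helps if the weights are already in Hugging Face
# GPT-2 format, which the discussion above says is not the case here.
from transformers import GPT2Config

# GPT2Config() uses the stock GPT-2 hyperparameters (12 layers, 12 heads,
# 768-dim embeddings, 50257-token vocab); a real config must match however
# the checkpoint was actually trained.
config = GPT2Config()
config.save_pretrained("models/example-gpt2-style-model")  # writes config.json
```

Even with a config.json in place, transformers also needs the weights stored under Hugging Face parameter names (pytorch_model.bin or model.safetensors), which a raw NanoGPT checkpoint does not provide.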

NoidoDev changed discussion status to open

Never mind, I'm looking into running a checkpoint based on GPT-2.

This checkpoint is more research-based, trying to replicate the success of GPT-2 with Phi-like data using NanoGPT; it's not really plug and play with other platforms.
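
For anyone who still wants to run it, the usual route for a NanoGPT-style checkpoint is NanoGPT's own model code rather than transformers or GGUF loaders. A sketch, assuming the file follows NanoGPT's standard ckpt.pt layout (a dict holding 'model_args' and a 'model' state dict) and that model.py from the karpathy/nanoGPT repo is importable; the filename is a placeholder:

```python
# Sketch assuming NanoGPT's standard ckpt.pt layout; needs model.py from the
# karpathy/nanoGPT repo on the import path, and a checkpoint saved by its
# training script. "ckpt.pt" is a placeholder filename.
import torch
from model import GPT, GPTConfig  # nanoGPT's model definition

checkpoint = torch.load("ckpt.pt", map_location="cpu")
gptconf = GPTConfig(**checkpoint["model_args"])  # rebuild the architecture
model = GPT(gptconf)

state_dict = checkpoint["model"]
# Checkpoints saved from a torch.compile()'d model prefix keys with
# '_orig_mod.'; strip that so the keys match the plain module.
for k in list(state_dict.keys()):
    if k.startswith("_orig_mod."):
        state_dict[k[len("_orig_mod."):]] = state_dict.pop(k)

model.load_state_dict(state_dict)
model.eval()
```

From there, sampling works the way nanoGPT's sample.py does it, using the repo's own encoding rather than a Hugging Face tokenizer.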

VatsaDev changed discussion status to closed
