Gibberish output

#4
by Coves - opened

Is it normal that the model produces almost gibberish text?

The first sentence is my prompt, the rest is the generated text:

The studio was filled with the rich odour of roses, and when the light summer wind stirred amidst the trees of the garden, there came through the open door the heavy scent of the lilac, or the more delicate perfume of the pink-flowering thorn.

CHAPTER III

Theodore had been in his office all day; he'd spent that morning and early afternoon at a desk for ten hours—and it seems to me I'm not going to be able�

Theo's absence from her apartment has left Juliana feeling guilty about what she should have done last night: if you're ever going to get married, then why did we spend so much time together? Did they see us talking? Is my hair still wet?"

But now this evening is also an opportunity: as soonsextendednesses, though perhaps less than usual since most of them are never seen again because their lives stop abruptly after being rejected by the girl who loves him. And yet, even before the war ended, people would ask where one might find Mr. Theodore Hebert –'s wife! It isn't true that every woman wants a husband," said Mrs. George Hyman on the occasion of having lunch with Mme. Ethel Lucey, but in fact some men do marry women like Jules heretically, especially those ones, which is often quoted as evidence that he can make it clear how good a man may be born, but believed in Londonderry Street appeared to think otherwise. The truth behind such conversations is always that although many marriages were made precisely because of the marriage bureau, nevertheless there will remain other reasons too, whether love or passion makes no difference, excepting that some girls donatafternoon,' says the author of "A Day With You" (Londonerysmith)

He knew everything about her life, including the things I've told you and others say, saying nothing else remains silent, nor does he want to talk about anything besides a desire for knowledge."

However, however, this evening was very different: How could someone come into your eyesight, sir?' asked Marilynne Kensington-Lacey? Well, yes, but three times I'm afraid," answered Burleigh Nugent, while a few years ago, just before the end of World War Two, Lorraine Cattalonis had been put down, despite the initial success of his career, there seemed to accept the idea that there was something wrong with the way she looked, yet another example of the 'I'll try to forget you!"

I'm running it on an RTX 3060 12GB with the KoboldAI fork, with init_device set to cpu (which makes initialization take forever). When I set it to cuda I run out of VRAM, and I can't use meta because that needs triton, I guess, and I'm on Windows.
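For reference, the trade-off between the init_device values can be sketched at the plain-transformers level. This is a sketch only: the model path is illustrative, the heavy from_pretrained call is commented out, and whether KoboldAI forwards these kwargs this way is an assumption.

```python
# Sketch: how the init_device choices map onto a plain transformers load.
# The path is illustrative; the from_pretrained call is commented out
# because it would actually load a 7B model.
load_kwargs = dict(
    trust_remote_code=True,  # MPT ships custom modeling code in its repo
    init_device="cpu",       # slow init, but no VRAM spike
    # init_device="cuda",    # fast init, but the full model must fit in VRAM
    # init_device="meta",    # deferred allocation; needs the triton path
)

# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     r"D:\KoboldAI\models\OccamRazor_mpt-7b-storywriter-4bit-128g",
#     **load_kwargs,
# )
```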

I'm seeing similar results. So far nothing I have tried has given anything even close to a usable response. I am using cuda on a 4090 and it's relatively fast, but regardless of the prompts I enter (or random prompts) or the settings I adjust, everything comes out nonsensical, just like in the previous post. I have tried short prompts as well as many paragraphs at once (including some very long and coherent story sessions from ChatGPT), but that doesn't seem to matter.

As I mentioned in another post, maybe we need to adjust some settings in KoboldAI or we are using the model incorrectly?

How do you load this in KoboldAI? I am getting a lot of errors, and it refuses to load.
It comes down to either an unrecognized model or a missing config.json, even though config.json is in the folder.
This is the KoboldAI fork that I am using:
https://github.com/0cc4m/koboldAI
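Since the errors mention both an unrecognized model and a missing config.json, a quick script can rule out folder problems before blaming the loader. A minimal sketch; the exact file names checked are assumptions based on this thread:

```python
import json
from pathlib import Path

def check_model_dir(model_dir):
    """Return a list of problems a loader like KoboldAI could trip over."""
    problems = []
    cfg = Path(model_dir) / "config.json"
    if not cfg.is_file():
        problems.append("config.json missing")
    else:
        config = json.loads(cfg.read_text())
        if "model_type" not in config:
            problems.append("config.json has no model_type field")
    if not list(Path(model_dir).glob("*.safetensors")):
        problems.append("no .safetensors weights found")
    return problems

# Example (path from later in this thread):
# check_model_dir(r"D:\KoboldAI\models\OccamRazor_mpt-7b-storywriter-4bit-128g")
```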

ValueError: Loading D:\KoboldAI\models\OccamRazor_mpt-7b-storywriter-4bit-128g requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
WARNING | __main__:load_model:2259 - No model type detected, assuming Neo (If this is a GPT2 model use the other menu option or --model GPT2Custom)
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | Transformers
WARNING | __main__:device_config:840 - --breakmodel_gpulayers is malformatted. Please use the --help option to see correct usage of --breakmodel_gpulayers. Defaulting to all layers on device 0.
INIT | Info | Final device configuration:
DEVICE ID | LAYERS | DEVICE NAME
Exception in thread Thread-13:
Traceback (most recent call last):
  File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "B:\python\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 828, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 469, in g
    return f(*a, **k)
  File "aiserver.py", line 3918, in get_message
    load_model(use_gpu=msg['use_gpu'], gpu_layers=msg['gpu_layers'], disk_layers=msg['disk_layers'], online_model=msg['online_model'])
  File "aiserver.py", line 2526, in load_model
    device_config(model_config)
  File "aiserver.py", line 907, in device_config
    device_list(n_layers, primary=breakmodel.primary_device)
  File "aiserver.py", line 805, in device_list
    print(f"{row_color}{colors.YELLOW + '->' + row_color if i == selected else ' '} {'(primary)' if i == primary else ' '*9} {i:3} {sep_color}|{row_color} {gpu_blocks[i]:3} {sep_color}|{row_color} {name}{colors.END}")
TypeError: unsupported format string passed to NoneType.__format__
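The final TypeError is worth decoding: device_list formats gpu_blocks[i] with a width spec ({...:3}), and formatting None with a width spec raises exactly this error, so the device table most likely received a None entry. A minimal reproduction, not KoboldAI's actual data:

```python
# Minimal reproduction of "unsupported format string passed to NoneType.__format__":
# a width spec like {:3} calls None.__format__("3"), which raises TypeError,
# while a bare {} falls back to str(None) and works.
value = None

print("{}".format(value))  # fine: prints "None"

try:
    print("{:3}".format(value))  # what device_list effectively does
except TypeError as err:
    print("TypeError:", err)
```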

Here is how I got it installed and the model loaded in Windows 11 (as best I can remember):

  1. Installed KoboldAI using the link mentioned (https://github.com/0cc4m/koboldAI) (I went with the B: mount)
  2. Once installed, I created a folder inside KoboldAI\models and named it "OccamRazor_mpt-7b-storywriter-4bit-128g"
  3. Downloaded all of the files from this repo and placed them in this new folder
  4. Renamed the model file from "model.safetensors" to "4bit-128g.safetensors"
  5. Started up KoboldAI by running the "Play.bat" file
  6. Clicked the AI button in the top corner and loaded the model by using "Load a model from its directory" and then selected the aforementioned folder containing the model


I tried giving it the same parameters that Mosaic uses in their demo but that didn't seem to make much difference.
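For anyone else experimenting, this is the kind of sampling configuration being talked about. The values below are placeholders to illustrate the knobs, not Mosaic's actual demo settings:

```python
# Illustrative sampling settings to experiment with (placeholder values,
# NOT Mosaic's actual demo configuration):
gen_kwargs = dict(
    do_sample=True,
    max_new_tokens=256,
    temperature=0.8,
    top_p=0.95,
    repetition_penalty=1.1,
)
# Hypothetical usage with a loaded transformers model:
# output = model.generate(**inputs, **gen_kwargs)
```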


Still does not work; I'm getting this error:

(The same ValueError about trust_remote_code, followed by the same TypeError traceback as above; only the thread number differs: Thread-18 instead of Thread-13.)

How do you set trust_remote_code=True in KoboldAI to remove this error?
