Oh my gosh StoryTelling indeed

#2
by remowylliams - opened

Hi There,
Thanks for putting this model out. I gave it a plot for a story that I've only had middling success with across 9 other models. This one... wow. It really brought some nice details, kept track of the characters, and described the environment and the protagonist's experience in vivid detail. This rocked.
So many thanks.

It just barely fits in 24GB of VRAM, and it kept up a decent rate too, between 16 and 20 tokens/s on an RTX 3090. Your instructions on the model card were perfect.
Just nothing but accolades.

Bravo.

Remo

Yes, thanks so much for doing this conversion! I was originally going to try my hand at conversion but experienced some random issue so you saved the day once again!

Great, glad it's useful for you guys!

Hey, I'm quite new to this and, to be honest, enjoying the exploring more than the chat.

I've got an RTX 3090 too, and this model crashes on loading.
Running the latest text-generation-webui with GPTQ to use the GPU.
Not sure if that's enough detail to get a hand or not?
The error is basically just "Press any key to continue":

"INFO:The AutoGPTQ params are: {'model_basename': 'WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order', 'device': 'cuda:0', 'use_triton': False, 'use_safetensors': True, 'trust_remote_code': True, 'max_memory': {0: '23GiB', 'cpu': '99GiB'}, 'quantize_config': None}
WARNING:The safetensors archive passed at models\TheBloke_WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ\WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors does not contain metadata. Make sure to save your model with the save_pretrained method. Defaulting to 'pt' metadata.
Press any key to continue . . ."

Window closes.

Also, if 'TheBloke' happens to read this: I know it's my incompetence in learning, but I very much appreciate the work you do. It's been great learning with it.

OK this is a common problem on Windows. You need to increase your pagefile size. As this is a 30B model, increase it to about 90GB.

Or just set it to Auto, and make sure you have enough free disk space on C: (or whatever drive holds the pagefile) for it to grow that large.

For some reason, on Windows it needs a massive pagefile size to load the model into RAM before it can move it to VRAM.

Here's a guide on adjusting the pagefile if you're not familiar with doing that: https://www.thewindowsclub.com/increase-page-file-size-virtual-memory-windows
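Before resizing, it's worth confirming the drive actually has that much headroom. Here's a quick sketch in Python for checking free space (the `C:\` drive letter and the 90GB threshold are just the figures from this thread; adjust them for your setup):

```python
import shutil

def has_headroom(drive: str, required_gb: float = 90.0) -> bool:
    """Return True if `drive` has at least `required_gb` of free space.

    90 GB is the pagefile size suggested above for this 30B model;
    adjust for other model sizes.
    """
    free_gb = shutil.disk_usage(drive).free / 1e9
    return free_gb >= required_gb

# On Windows, point it at whichever drive holds the pagefile, e.g.:
# print(has_headroom("C:\\"))
```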

And you're welcome!

Legend, appreciate the help!
Honestly, I thought a 50GB pagefile was enough with 32GB RAM... wow!!

Yeah, it really should be. Windows does something weird here. Even people with 128GB RAM still need that pagefile. It seems that Windows always maps the model into the pagefile while loading, regardless of how much RAM is free.

So far, this is the most capable model I've run locally for role play. It's better (still not perfect) at staying in character and doesn't lose the plot nearly as quickly as other models, or devolve into repeating variations on the same message over and over.

Great to hear!

This is the biggest problem I have found too. I haven't had time to put this 30B through the wringer yet, though this week I will hit it hard. I've had interesting success with digitous/13B-HyperMantis over the last few days. For some reason it finds a unique creativity while remaining on topic... until it inevitably falls off the wagon, so to speak.
Keep the models coming. I've tried a few of yours now and am enjoying the learning process. Once I understand how they work, I'm looking forward to creating my own works or tweaks.

Thanks for your work!

Hmm. I have a 120GB pagefile and still get this error :/ Win 10.

I have never been able to load this model successfully in Windows or WSL. 4090 (23GB VRAM) + 64GB RAM.

I'm no expert yet, but after reading the above I set my SSD pagefile to 100GB and it works brilliantly. I am on Windows 11, and 'TheBloke' talks about anomalies in Windows pagefile usage, so it's quite possible Windows 10 handles the pagefile differently to Win 11; perhaps try allowing more room?
