Runtime error

Space not ready. Reason: Error, exitCode: 1, message: None

Container logs:

/usr/local/lib/python3.9/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading pygmalion-6b_b8344bb4eb76a437797ad3b19420a13922aaabe1...

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
Traceback (most recent call last):
  File "/content/text-generation-webui/server.py", line 246, in <module>
    model, tokenizer = load_model(model_name)
  File "/content/text-generation-webui/server.py", line 110, in load_model
    model = eval(command)
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2326, in from_pretrained
    raise ValueError(
ValueError:
    Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
    the quantized model. If you have set a value for `max_memory` you should increase that. To have
    an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.
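
For reference, here is a minimal sketch of what the ValueError is pointing at: loading the model with an explicit `max_memory` budget and then printing `model.hf_device_map` to see which modules were placed on the GPU, CPU, or disk. This is not the Space's actual server.py; the Hub id `PygmalionAI/pygmalion-6b` and the memory figures are assumptions for illustration. Note that 8-bit loading also requires a GPU-enabled bitsandbytes build, which the warning at the top of the log says is missing.

```python
# Minimal sketch, not the Space's actual code: load with an explicit
# max_memory budget and inspect the resulting device map.
# Assumptions: the Hub id and the "14GiB"/"30GiB" figures are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PygmalionAI/pygmalion-6b"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",                        # let accelerate place the layers
    load_in_8bit=True,                        # needs a GPU-enabled bitsandbytes build
    max_memory={0: "14GiB", "cpu": "30GiB"},  # raise the GPU entry if layers spill to CPU
)

# As the ValueError suggests: shows which modules ended up on GPU, CPU, or disk.
print(model.hf_device_map)
```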