Unable to run this locally with oobabooga text-generation-webui

#38 opened by Xeronid

When I try to load the model, it fails with the following error from transformers:

Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
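
In case it helps anyone hitting the same message: below is a minimal sketch of what the error is asking for, i.e. loading in 8-bit with fp32 CPU offload enabled and a custom `device_map` passed to `from_pretrained`. It assumes you load the model through transformers directly (with `accelerate` and `bitsandbytes` installed) rather than through the webui; in recent transformers the flag named in the error is exposed as `llm_int8_enable_fp32_cpu_offload` on `BitsAndBytesConfig`, and the model path below is a placeholder.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "path/to/llama-checkpoint"  # placeholder: the checkpoint you are loading

# Load in 8-bit, but keep any modules dispatched to the CPU in fp32,
# which is exactly what the error message asks for.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Custom device_map using the LLaMA module names from transformers:
# embeddings and decoder layers on GPU 0, the rest offloaded to CPU.
# Adjust the split to whatever actually fits in your VRAM.
device_map = {
    "model.embed_tokens": 0,
    "model.layers": 0,
    "model.norm": "cpu",
    "lm_head": "cpu",
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=bnb_config,
)
```

Passing `device_map="auto"` with the same `quantization_config` also works and lets accelerate choose the split automatically; the quantization docs linked in the error walk through both options. Whether oobabooga's text-generation-webui exposes these settings depends on the version you run.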
