No response from Alpaca when using GPU version

#31
by ZeroH3art - opened

Hello, I am getting the error below when launching the GPU version of the Oobabooga WebUI:

CUDA SETUP: Loading binary D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll...
D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\gradio\routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1108, in process_api
result = await self.call_function(
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 929, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\New folder\oobabooga-windows\installer_files\env\lib\site-packages\gradio\utils.py", line 490, in async_iteration
return next(iterator)
File "D:\New folder\oobabooga-windows\text-generation-webui\modules\chat.py", line 218, in cai_chatbot_wrapper
for history in chatbot_wrapper(text, state):
File "D:\New folder\oobabooga-windows\text-generation-webui\modules\chat.py", line 155, in chatbot_wrapper
for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", state, eos_token=eos_token, stopping_strings=stopping_strings):
File "D:\New folder\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 175, in generate_reply
input_ids = encode(question, add_bos_token=state['add_bos_token'], truncation_length=get_max_prompt_length(state))
File "D:\New folder\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 31, in encode
input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens)
AttributeError: 'NoneType' object has no attribute 'encode'
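
A side note on the warning at the top of the log: bitsandbytes loading libbitsandbytes_cpu.dll means the GPU build was not picked up in that environment, so 8-bit features will be unavailable even once generation works. A minimal check of whether PyTorch in that same environment can see a CUDA device (run it with the webui's Python from installer_files\env; this is only a diagnostic sketch, not part of the webui itself):

```python
# Sanity check: does the webui's Python environment see a CUDA GPU?
# This does not confirm the bitsandbytes GPU binary is installed, but if
# it prints False, the CPU fallback warning above is expected.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version PyTorch was built with:", torch.version.cuda)
    print("Detected GPU:", torch.cuda.get_device_name(0))
```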

Same issue here.

OK, going to the "Model" tab and selecting a model fixes the problem ^^u
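
That matches the traceback: in modules/text_generation.py, encode() calls shared.tokenizer.encode(...), and shared.tokenizer is only populated after a model has been loaded, so with no model selected it is still None and the AttributeError fires. A minimal sketch of the failure mode (encode_safe is a hypothetical helper for illustration, not the webui's actual code; only the shared.tokenizer.encode call is taken from the traceback):

```python
# Hypothetical guard illustrating the error seen above: shared.tokenizer
# stays None until a model is loaded, so calling .encode() on it raises
# "'NoneType' object has no attribute 'encode'".
from modules import shared  # module layout as shown in the traceback


def encode_safe(prompt, add_special_tokens=True):
    if shared.tokenizer is None:
        # No model loaded yet: pick one in the "Model" tab, or start the
        # webui with a model selected on the command line.
        raise RuntimeError("No model/tokenizer is loaded; select a model first.")
    return shared.tokenizer.encode(
        str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens
    )
```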
