gpt4-alpaca-lora-30B-GPTQ-4bit-128g only generates the same Russian words

#5 by Tonight223 - opened

This is the content of my start-webui-vicuna-gpu.bat file:



@echo off

@echo Starting the web UI...

cd /D "%~dp0"

set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env

if not exist "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" (
  call "%MAMBA_ROOT_PREFIX%\micromamba.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui

call python server.py --auto-devices --chat --model gpt4-alpaca-lora-30B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama

:end
pause

It only replies with the same Russian word repeated again and again. Can anyone help me fix it?

I didn't get around to producing a GPTQ file that doesn't use --act-order, so you can't use this file without updating to a more recent version of the GPTQ-for-LLaMa code, and that newer code won't run on Windows unless you use WSL2.
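For anyone unsure what that update involves, here is a minimal sketch of the usual procedure (run inside a WSL2 shell; the repositories folder layout and the choice of repository are assumptions based on a standard text-generation-webui setup, not something specific to this model):

# From the root of the text-generation-webui folder, inside WSL2
cd repositories

# Replace the bundled GPTQ-for-LLaMa checkout with a newer one that understands --act-order models
rm -rf GPTQ-for-LLaMa
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa

# Install its dependencies into the same Python environment the web UI uses
pip install -r requirements.txt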

Try using this file instead, from MetaIX's repository: https://huggingface.co/MetaIX/GPT4-X-Alpaca-30B-Int4/blob/main/gpt4-x-alpaca-30b-128g-4bit.safetensors
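For clarity, a rough sketch of how the existing .bat would point at that file (the folder name below is only an example; --model just has to match whatever folder under models\ the .safetensors and its accompanying config/tokenizer files end up in):

rem Example only: assumes the MetaIX files were downloaded into
rem   text-generation-webui\models\gpt4-x-alpaca-30b-128g-4bit\
call python server.py --auto-devices --chat --model gpt4-x-alpaca-30b-128g-4bit --wbits 4 --groupsize 128 --model_type Llama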

This worked, thanks!
