Why are the output texts all garbled?

#4
by pbjacob - opened


Hardware & Software

CPU: Intel Core i7-12700KF
GPU: RTX 3090 Ti 24GB
OS: Win10 Pro 22H2
UI: text-generation-webui
CUDA: 12.1
PyTorch: Nightly
AutoGPTQ: v0.4.2+cu121

Model

Loader: AutoGPTQ (no_inject_fused_attention, disable_exllama)
Parameters Preset: LLaMA-Precise
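
As far as I understand, those two loader checkboxes roughly correspond to the following AutoGPTQ call under the hood (a sketch based on AutoGPTQ ~0.4.x; kwarg names may differ between versions, and the repo name here is only an example):

from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/Phind-CodeLlama-34B-Python-v1-GPTQ",
    device="cuda:0",
    use_safetensors=True,
    inject_fused_attention=False,  # "no_inject_fused_attention" in the UI
    disable_exllama=True,          # "disable_exllama" in the UI
)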

Issue

Input:

Write a lambda function in Python that takes a list of strings as input and returns the element in the list whose first letter is 'J' or 'j'.
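
For reference, a correct answer to this prompt would look something like the following (my own sketch for comparison, not model output):

find_j = lambda strings: next((s for s in strings if s and s[0] in ('J', 'j')), None)
print(find_j(["Alice", "john", "Bob"]))  # -> "john"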

Output:

itionalziaition, import

importiveiveiveive own?itional Cob Uniti Censo_ daughiamsitionalisticingly

determin ##### <? Zür помеmundation millimeter Institutitionalitionalitionalitionalitional

itional Kontrolaifact for packageistiveoug importitionalive Censoitionalitionalitionalclipitionalitionalitionals

itionalvia listade

import ?rsitional importardiitionalitionalalitionalitionalitionalitionalitionalitional

ineitionalitionalсясяitional dependentitional {- circumst?itional import? {: importAAAA importdependent importces importSERTitional?

CHAPTER importive importitional

Anal <? <? SchiffdditionalConstraintsitionalitionalitionalwe import #!/itionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalivity <?itionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitional

Additional tests

I have also tested the same prompt with TheBloke/Phind-CodeLlama-34B-Python-v1-GPTQ, receiving the following output.

ziaitionalive importitional\istic divis variveitionalitionaliveiveitionaliveiveitionaliveiveitionaliveiveitionaliveiveitionaliveiveiveitionaliveiveitionaliveiveiveitionaliveiveiveitionaliveiveiveitionaliveiveiveiveitionaliveiveiveiveitionaliveiveiveitionaliveiveiveiveitionaliveiveiveiveitionaliveiveiveiveitionaliveiveiveitionaliveiveiveitionaliveiveitionaliveitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitionalitional

With TheBloke/CodeLlama-34B-Python-GPTQ, I got the following:

__ ______________________ =_____

Again, why are the outputs garbled? Are any of my settings wrong? I followed the instructions from your model card, and there don't seem to be any model-specific settings required.
By the way, text-generation-webui itself is installed correctly; the official Llama-2-7b-chat-hf works fine with it.

Thank you so much!

I've figured out the reason now.
This model is not a chat model; it should be used in instruct mode with the Alpaca template.
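
For anyone hitting the same problem: in text-generation-webui, switch to instruct mode and select the Alpaca instruction template. The format looks roughly like this (quoted from memory as a sketch; take the exact preamble from the model card):

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your prompt here}

### Response: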

I hope this information can be added to the instructions in the model card.

pbjacob changed discussion status to closed
