Model returns empty prompt.

#24
by shmalex - opened

Hi everyone,
I'm using Ubuntu 18.04 and I followed the instructions.

my md5sums:

b85026155b964b6f3a883c9a8b62dfe3  ./added_tokens.json
cc9dbf56b68b68a585cc7367696e06a7  ./config.json
2917a1cafb895cf57e746cfd7696bfe5  ./generation_config.json
ff6e4cf43ddf02fb5d3960f850af1220  ./pytorch_model-00001-of-00007.bin
ae48c4c68e4e171d502dd0896aa19a84  ./pytorch_model-00002-of-00007.bin
659fcb7598dcd22e7d008189ecb2bb42  ./pytorch_model-00003-of-00007.bin
f7aefb4c63be2ac512fd905b45295235  ./pytorch_model-00004-of-00007.bin
740c324ae65b1ec25976643cda79e479  ./pytorch_model-00005-of-00007.bin
369df2f0e38bda0d9629a12a77c10dfc  ./pytorch_model-00006-of-00007.bin
970e99665d66ba3fad6fdf9b4910acc5  ./pytorch_model-00007-of-00007.bin
76d47e4f51a8df1d703c6f594981fcab  ./pytorch_model.bin.index.json
785905630a0fe583122a8446a5abe287  ./special_tokens_map.json
eeec4125e9c7560836b4873b6f8e3025  ./tokenizer.model
fd9452959d711be29ccf04a97598e8d1  ./tokenizer_config.json

The problem I had is that the tokenizer did not like the added_tokens.json file. After xor_decoding, that file is not valid JSON, just some binary data. All other files have the correct md5sums. What should I do about it?
What I did: I replaced it with the corresponding file from the LLaMA weights.
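
For reference, a quick way to check whether the decoded added_tokens.json is actually valid JSON (just a minimal sketch; the path is a placeholder):

import json, os

# placeholder path -- point it at the xor-decoded files
path = os.path.expanduser('~/data/oasst-sft-6-llama-30b/added_tokens.json')
try:
    with open(path) as f:
        print(json.load(f))  # a healthy file prints a small dict of added tokens
except (json.JSONDecodeError, UnicodeDecodeError) as e:
    print('added_tokens.json is not valid JSON:', e)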

I use the following code from the modeling_llama.py file:

from transformers import AutoTokenizer, LlamaForCausalLM
PATH_TO_CONVERTED_WEIGHTS=  '~/data/oasst-sft-6-llama-30b/'
PATH_TO_CONVERTED_TOKENIZER='~/data/oasst-sft-6-llama-30b/'
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
# print inputs with special tokens
tokenizer.batch_decode(inputs['input_ids'], skip_special_tokens=False, clean_up_tokenization_spaces=False) 

['<s> Hey how are you?']

generate_ids = model.generate(inputs=inputs.input_ids, max_length=30)
# print outputs with special tokens
tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)

['<s> Hey how are you?</s>']

As you can see, the model just adds the termination token </s> and that's it.

How can I overcome this problem? How do I make it talk?

I'd also like to note that the model does not use the GPU at all.
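
(Side note on the GPU: from_pretrained loads everything on the CPU by default, so I assume something like the following is needed to move it over; this is just a sketch, assuming a single CUDA device with enough memory and that fp16 is acceptable:)

import os
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

path = os.path.expanduser('~/data/oasst-sft-6-llama-30b/')  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(path)
# load in fp16 and move the weights to the GPU explicitly
model = LlamaForCausalLM.from_pretrained(path, torch_dtype=torch.float16).to('cuda')
# inputs must be on the same device as the model before calling generate()
inputs = tokenizer("Hey how are you?", return_tensors="pt").to(model.device)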

My environment:
accelerate==0.18.0
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
attrs==23.1.0
backcall==0.2.0
beautifulsoup4==4.12.2
bleach==6.0.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==3.1.0
comm==0.1.3
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
executing==1.2.0
fairscale==0.4.13
fastjsonschema==2.16.3
filelock==3.12.0
fire==0.5.0
fqdn==1.5.1
fsspec==2023.4.0
huggingface-hub==0.14.1
idna==3.4
ipykernel==6.22.0
ipython==8.12.0
ipython-genutils==0.2.0
ipywidgets==8.0.6
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
jsonpointer==2.3
jsonschema==4.17.3
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.6.3
jupyter_client==8.2.0
jupyter_core==5.3.0
jupyter_server==2.5.0
jupyter_server_terminals==0.4.4
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.7
MarkupSafe==2.1.2
matplotlib-inline==0.1.6
mistune==2.0.5
nbclassic==0.5.6
nbclient==0.7.4
nbconvert==7.3.1
nbformat==5.8.0
nest-asyncio==1.5.6
notebook==6.5.4
notebook_shim==0.2.3
numpy==1.24.3
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
packaging==23.1
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.5.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
protobuf==3.20.1
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pycparser==2.21
Pygments==2.15.1
pyrsistent==0.19.3
python-dateutil==2.8.2
python-json-logger==2.0.7
PyYAML==6.0
pyzmq==25.0.2
qtconsole==5.4.2
QtPy==2.3.1
regex==2023.3.23
requests==2.28.2
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
Send2Trash==1.8.2
sentencepiece==0.1.98
six==1.16.0
sniffio==1.3.0
soupsieve==2.4.1
stack-data==0.6.2
termcolor==2.3.0
terminado==0.17.1
tinycss2==1.2.1
tokenizers==0.13.3
torch==1.13.1
tornado==6.3.1
tqdm==4.65.0
traitlets==5.9.0
transformers @ file:///mnt/data1t/projects/hf/transformers
typing_extensions==4.5.0
uri-template==1.2.0
urllib3==1.26.15
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.5.1
widgetsnbextension==4.0.7

OpenAssistant org

See the dialog/prompt format description at https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b .. the only difference here is that for llama, </s> (the tokenizer's native eos token) is used instead of <|endoftext|>.

@andreaskoepf thank you for giving me the right direction!

I solved my issue by using the right special tokens and not allowing the tokenizer to add special tokens.

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|assistant|>"
inputs = tokenizer(prompt,
                   return_tensors="pt", 
                   add_special_tokens=False # << THIS ONE
                   )
inputs
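
With that, generation works for me; roughly (the max_new_tokens value is arbitrary):

generate_ids = model.generate(inputs.input_ids, max_new_tokens=256)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])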

Mine still responds with just "</s>", even with add_special_tokens=False. Has anyone else still had this issue?

What do you have in the added_tokens.json file?

OpenAssistant org

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|assistant|>"

The correct dialogue prompting format would be prompt = "<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>".
For a second round of QA it would look like: <|prompter|>Q1</s><|assistant|>A1</s><|prompter|>Q2</s><|assistant|>
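
For example, a second-round prompt could be assembled like this (a sketch; the question/answer strings are placeholders):

# each turn ends with llama's native eos token </s>
q1 = "What is a meme, and what's the history behind this word?"
a1 = "A meme is ..."  # the model's first answer
q2 = "Where does the word come from?"
prompt = (
    "<|prompter|>" + q1 + "</s>"
    + "<|assistant|>" + a1 + "</s>"
    + "<|prompter|>" + q2 + "</s>"
    + "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)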
