The model just prints <unk> tokens

#1
by MrBananaHuman - opened

I tried to generate a sentence using your sample code, but I got only <unk> tokens.

So I added 'bad_words_ids = [[tokenizer.unk_token_id]]', and the result is:

'Beijing is the capital of China. Translate this sentence from English to Chinese. [LEN0] [LEN1] [LEN2] [LEN3] [LEN4] [LEN5] [LEN6] [LEN7] [LEN8] [LEN9] [LEN10] [LEN11] [LEN12] [LEN13] [LEN14] [LEN15] [LEN16] [LEN17]'

What is wrong?
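For context on what that workaround does: `bad_words_ids=[[tokenizer.unk_token_id]]` tells `generate()` to mask the banned id's logit at every decoding step, so it can never be chosen. Here is a minimal pure-Python sketch of that mechanism (a toy stand-in, not the actual transformers implementation):

```python
import math

def mask_bad_words(logits, bad_words_ids):
    """Mimic what a bad-words logits processor does for single-token bans:
    set the banned ids' logits to -inf so they can never be sampled."""
    out = list(logits)
    for word in bad_words_ids:
        if len(word) == 1:  # single-token ban, e.g. [[unk_token_id]]
            out[word[0]] = -math.inf
    return out

# toy vocabulary of 3 tokens: id 0 plays the role of <unk>
logits = [5.0, 1.0, 3.0]
masked = mask_bad_words(logits, [[0]])
best = max(range(len(masked)), key=masked.__getitem__)
print(best)  # 2 -- the <unk> id is no longer the argmax
```

This also explains the output you saw: with <unk> banned, the model falls back to its next-most-likely tokens, which here were the [LEN*] special tokens rather than a real translation.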

Machine Translation Team at Alibaba DAMO Academy org

I was unable to replicate the problem.

[attached screenshot of the generation output]

However, I have optimized the sample code, so please try again.

Here is my Colab notebook:

https://colab.research.google.com/drive/108YvdvdxzDN62TX9M0d6DsqztXSeLla4?usp=sharing

(I added the 'torch_dtype=torch.float16' option due to Colab VRAM limits.)

Machine Translation Team at Alibaba DAMO Academy org

We use the bfloat16 numerical format for PolyLM; fp16 is likely to be problematic.
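The reason fp16 breaks a bfloat16-trained model is range, not precision: fp16 has a 5-bit exponent (finite max ~65504), while bf16 keeps fp32's 8-bit exponent, so large intermediate values that are fine in bf16 overflow to inf/NaN in fp16 and the model degenerates to <unk>. A small stdlib-only sketch of the difference (round-tripping floats through each 16-bit format, with bf16 approximated as fp32 truncated to its top 16 bits):

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip a float through bfloat16: keep the sign, the full 8-bit
    fp32 exponent, and only the top 7 mantissa bits."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", (bits32 >> 16) << 16))[0]

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE half precision ('e' format);
    struct raises OverflowError for finite values above ~65504."""
    return struct.unpack(">e", struct.pack(">e", x))[0]

activation = 1.0e20  # a large intermediate value, well within bf16's range
print(to_bf16(activation))  # finite, close to 1e20 (bf16 only loses mantissa bits)

try:
    to_fp16(activation)
except OverflowError:
    print("fp16 overflows: its max finite value is ~65504")
```

So loading a bf16 checkpoint with `torch_dtype=torch.bfloat16` (or full fp32) is the safe choice here; fp16 is only a drop-in replacement for models trained with fp16-compatible activation ranges.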

Oh, I see :) I will test without that option.
Thank you!

This time I loaded the 1.7B model, but the result is as follows.

"Beijing is the capital of China.\nTranslate this sentence from English to Chinese.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"

Please check the same colab link.

I am having the same problem with the 13B model: it only generates <unk> tokens. It does not happen with the 1.7B model. Could you help us, @pemywei?
Thanks!
