I get this message for the test you provide
#2 opened by JeisonJimenez
I encountered this error as well, and the following change worked for me. Try replacing

`output_text = tokenizer.decode(outputs, skip_special_tokens=True)`

with

`output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)`

Note that `batch_decode` returns a list of strings, one per generated sequence, rather than a single string.
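The reason this change works: `model.generate` returns a *batch* of token-id sequences (one per input), while `decode` expects a single flat sequence. Here is a toy sketch of that shape mismatch; the `vocab`, `decode`, and `batch_decode` names below are stand-ins for illustration, not the real transformers API.

```python
# Toy stand-in for a tokenizer, illustrating the shape mismatch.
vocab = {0: "<pad>", 1: "hello", 2: "world"}

def decode(ids):
    # Mimics tokenizer.decode: expects ONE flat sequence of token ids.
    return " ".join(vocab[i] for i in ids)

def batch_decode(batch):
    # Mimics tokenizer.batch_decode: expects a LIST of sequences
    # and returns one string per sequence.
    return [decode(seq) for seq in batch]

# model.generate returns a batch: a nested list/tensor, one row per input.
outputs = [[1, 2]]

# decode(outputs) would fail here, because each element of the batch is
# itself a sequence, not a token id -- the same kind of error as above.
print(batch_decode(outputs))  # ['hello world']
```

If you only generated for a single prompt, you can also keep `decode` and pass the first row explicitly, e.g. `tokenizer.decode(outputs[0], skip_special_tokens=True)`.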