Unable to run

#1
by XYHHY - opened

Warning:
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/transformers/generation/utils.py:1273: UserWarning: Neither max_length nor max_new_tokens has been set, max_length will default to 256 (generation_config.max_length). Controlling max_length via the config is deprecated and max_length will be removed from the config in v5 of Transformers -- we recommend using max_new_tokens to control the maximum length of the generation.
warnings.warn(

Code:
from transformers import PegasusForConditionalGeneration

You need to download tokenizers_pegasus.py and the other required Python scripts from the Fengshenbang-LM GitHub repo in advance,

or you can download tokenizers_pegasus.py and data_utils.py from https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main

It is strongly recommended to clone the Fengshenbang-LM repo:

1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM

2. cd Fengshenbang-LM/fengshen/examples/pegasus/

There you will find tokenizers_pegasus.py and data_utils.py, which the Pegasus model needs.
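If you cloned the repo rather than copying the two files next to your script, one way to make the import below work is to put the pegasus example directory on sys.path first. A minimal sketch; the clone location used here is just an example, adjust it to wherever you ran git clone:

```python
import os
import sys

# Hypothetical clone location; change this to your actual path.
repo_root = os.path.expanduser("~/Fengshenbang-LM")
pegasus_dir = os.path.join(repo_root, "fengshen", "examples", "pegasus")

# Make tokenizers_pegasus.py and data_utils.py importable.
if pegasus_dir not in sys.path:
    sys.path.insert(0, pegasus_dir)
```

After this, `from tokenizers_pegasus import PegasusTokenizer` resolves against the cloned files.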

from tokenizers_pegasus import PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")

text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")

Generate Summary

summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

I tried adding truncation=True to the tokenizer call, but it didn't help.
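Note that the UserWarning is not a failure: generation still runs, it just falls back to max_length=256 from the model config. The warning goes away if you cap the output length explicitly via max_new_tokens, and the generated ids must be decoded (not encoded) back into text. A sketch of the corrected tail of the script, assuming the same model, tokenizer, and inputs objects as above (the value 64 here is an arbitrary example, not something from the model card):

```python
# Cap generation length explicitly instead of relying on the deprecated
# config-level max_length; this silences the UserWarning.
summary_ids = model.generate(inputs["input_ids"], max_new_tokens=64)

# Decode the generated ids back into a summary string.
summary = tokenizer.batch_decode(
    summary_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)[0]
print(summary)
```

truncation=True in the tokenizer call only shortens the *input*; it has no effect on the warning, which is about the *output* length.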
