Generation is very slow after loading the model with text-generation-webui.

#7 opened by xxxzsgxxx

```text
2023-07-28 18:13:19 INFO:Loading LinkSoul_Chinese-Llama-2-7b...
Loading checkpoint shards: 100%|██████████| 3/3 [00:25<00:00, 8.38s/it]
2023-07-28 18:13:45 WARNING:models/LinkSoul_Chinese-Llama-2-7b/tokenizer_config.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-28 18:13:45 WARNING:models/LinkSoul_Chinese-Llama-2-7b/special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-28 18:13:45 INFO:Loaded the model in 25.91 seconds.

/Users/zsg/AI/oobabooga_macos/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py:1270: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
  warnings.warn(
```
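
The UserWarning above is unrelated to the slowness; it only flags that generation parameters were set on the model config instead of in a GenerationConfig, as newer transformers versions expect. A minimal sketch of the suggested fix (the parameter values and the local path are placeholders, not settings from this thread):

```python
# Sketch only: move generation settings into a GenerationConfig instead of the
# model config, as the transformers warning suggests. Values are placeholders.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Either attach it to an already loaded model ...
# model.generation_config = gen_config

# ... or save it as generation_config.json next to the weights so later loads
# pick it up automatically (path assumes the text-generation-webui layout).
gen_config.save_pretrained("models/LinkSoul_Chinese-Llama-2-7b")
```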

Since we did not modify the tokenizer, input/output efficiency is lower than versions that extended the tokenizer (with added Chinese tokens).
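
To see what this means in practice, one can compare token counts per character: with the original 32k LLaMA vocabulary, most Chinese characters fall back to several byte-level tokens, so the same text costs more tokens and therefore more generation steps. A rough, illustrative check (the repo id is the model discussed here; the sample sentence is arbitrary):

```python
# Rough illustration: count how many tokens the unmodified LLaMA tokenizer
# spends per Chinese character.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LinkSoul/Chinese-Llama-2-7b")

text = "今天天气很好,我们一起去公园散步吧。"
ids = tokenizer(text, add_special_tokens=False)["input_ids"]

print("characters:", len(text))
print("tokens:    ", len(ids))
print(f"tokens per character: {len(ids) / len(text):.2f}")

# A tokenizer whose vocabulary was extended with Chinese tokens would encode
# the same text in far fewer tokens, so it yields more visible characters per
# generation step even at the same tokens-per-second speed.
```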

shiyemin2 changed discussion status to closed
