
Deploying the model with the openai_api.py script provided in the official Qwen repository: streaming output raises an error

#3
by peacemaker - opened

Warning: please make sure that you are using the latest codes and checkpoints, especially if you used Qwen-7B before 09.25.2023. Please use the latest model and code; in particular, if you started using Qwen-7B before September 25, be careful not to use the wrong code or model.
/home/user/.cache/huggingface/modules/transformers_modules/Qwen-14b-chat-yarn-32k/modeling_qwen_yarn.py:738: UserWarning: Using YarnRotaryEmbedding; forcing config.use_logn_attn = False and config.use_dynamic_ntk = True
warnings.warn("Using YarnRotaryEmbedding; forcing config.use_logn_attn = False and config.use_dynamic_ntk = True")

Loading checkpoint shards: 0%| | 0/6 [00:00<?, ?it/s]
Loading checkpoint shards: 17%|█▋ | 1/6 [00:12<01:01, 12.35s/it]
Loading checkpoint shards: 33%|███▎ | 2/6 [00:21<00:42, 10.69s/it]
Loading checkpoint shards: 50%|█████ | 3/6 [00:30<00:28, 9.59s/it]
Loading checkpoint shards: 67%|██████▋ | 4/6 [00:38<00:17, 8.93s/it]
Loading checkpoint shards: 83%|████████▎ | 5/6 [00:47<00:09, 9.22s/it]
Loading checkpoint shards: 100%|██████████| 6/6 [00:53<00:00, 7.97s/it]
INFO: Started server process [2764225]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:18001 (Press CTRL+C to quit)
INFO: 34.124.237.188:44718 - "POST /v1/chat/completions HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__
await super().__call__(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
async with anyio.create_task_group() as task_group:
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
async for data in self.body_iterator:
File "/data/tangsipeng/mycode/Qwen/openai_api_14b_yarn_32k.py", line 449, in predict
for new_response in response_generator:
File "/home/user/.cache/huggingface/modules/transformers_modules/Qwen-14b-chat-yarn-32k/modeling_qwen_yarn.py", line 1291, in stream_generator
for token in self.generate_stream(
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 56, in generator_context
response = gen.send(request)
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/qwen/lib/python3.11/site-packages/transformers_stream_generator/main.py", line 969, in sample_stream
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: probability tensor contains either inf, nan or element < 0
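The crash comes from `torch.multinomial` being handed a probability tensor that contains `inf`, `nan`, or negative entries. As a defensive workaround (an illustrative sketch only — the `safe_sample` helper is hypothetical and not part of the Qwen or transformers_stream_generator code), one can sanitize the distribution before sampling; NumPy is used here so the idea stands alone:

```python
import numpy as np

def safe_sample(probs, rng=None):
    """Sample one index from `probs`, guarding against inf/nan/negatives.

    torch.multinomial raises "probability tensor contains either inf, nan
    or element < 0" for such inputs; here bad entries are zeroed out and
    the distribution renormalized, with a uniform fallback if no valid
    probability mass remains.
    """
    rng = rng or np.random.default_rng(0)
    probs = np.nan_to_num(np.asarray(probs, dtype=np.float64),
                          nan=0.0, posinf=0.0, neginf=0.0)
    probs = np.clip(probs, 0.0, None)          # drop negative mass
    total = probs.sum()
    if total == 0.0:
        probs = np.full(len(probs), 1.0 / len(probs))  # uniform fallback
    else:
        probs = probs / total
    return int(rng.choice(len(probs), p=probs))

# A tensor like the one that crashed the stream: inf and nan entries.
# Only index 2 carries valid mass, so it is always chosen.
print(safe_sample([np.nan, np.inf, 1.0]))
```

Zeroing out the bad entries changes the sampled distribution, so this masks the symptom rather than fixing the underlying numerical issue; treating it as a last-resort guard around the sampling call is safer than shipping it silently.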

After looking into it, raising the temperature to 0.5 avoids this error.
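One plausible mechanism for why a very low temperature triggers this (an illustrative sketch under assumed conditions, not a trace of Qwen's actual internals): dividing the logits by a small temperature inflates them, and a naive half-precision softmax then overflows to `inf`, yielding `nan` probabilities — exactly the input `torch.multinomial` rejects. A larger temperature keeps the scaled logits inside float16 range:

```python
import numpy as np

def naive_softmax_fp16(logits, temperature):
    # Naive softmax in float16 without the max-subtraction trick,
    # mimicking what half-precision sampling code can do internally.
    x = (np.asarray(logits) / temperature).astype(np.float16)
    with np.errstate(over="ignore", invalid="ignore"):
        e = np.exp(x)       # exp overflows float16 for inputs above ~11.09
        return e / e.sum()  # inf / inf -> nan

logits = [5.0, 4.0, 2.0]
low  = naive_softmax_fp16(logits, 0.2)  # 5/0.2 = 25 -> exp overflows -> nan
high = naive_softmax_fp16(logits, 0.5)  # 5/0.5 = 10 -> exp(10) ~ 22026, finite

print(np.isnan(low).any(), np.isfinite(high).all())  # True True
```

In practice frameworks subtract the max logit before exponentiating, so the real failure usually originates further upstream (e.g. fp16 overflow inside the model's forward pass), but the temperature dependence follows the same pattern.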


yuyijiong changed discussion status to closed
