Runtime error

#4
by Nick088 - opened

Hey, I just checked and your Hugging Face Space has a runtime error, but with no error message or logs shown. You may want to restart the Space; that's what I did when one of my HF Spaces randomly did this too, and then it worked fine again.

Owner

There is some problem with the Hugging Face servers that causes Spaces to develop errors after running perfectly fine.

I have restarted this Space a number of times, but it falls back into an error state after a few hours or days, so I'm just going to leave it like this.

Feel free to duplicate and run it.

Oh okay, I didn't know that. I'll just duplicate it then, thanks.

Hey, I duplicated it and Hugging Face finally started working more or less fine again, but now I get this issue for some reason:
Bad request:
Error in parameters.max_new_tokens: ensure this value is less than or equal to 250
Error in stream: stream is not supported for this model
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/google/gemma-2b-it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 233, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1608, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1188, in call_function
prediction = await utils.async_iteration(iterator)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 513, in async_iteration
return await iterator.__anext__()
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 506, in __anext__
return await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 489, in run_sync_iterator_async
return next(iterator)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 672, in gen_wrapper
response = next(iterator)
File "/home/user/app/app.py", line 75, in chat_inf
stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=True)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_client.py", line 1841, in text_generation
raise_text_generation_error(e)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_common.py", line 470, in raise_text_generation_error
raise http_error
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_client.py", line 1817, in text_generation
bytes_output = self.post(json=payload, model=model, task="text-generation", stream=stream) # type: ignore
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/inference/_client.py", line 267, in post
hf_raise_for_status(response)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: j4nMwkcGzTnpPGGFI6UwD)

Bad request:
Error in parameters.max_new_tokens: ensure this value is less than or equal to 250
Error in stream: stream is not supported for this model
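Both messages in the Bad request response point at the request parameters rather than the Space itself: the serverless Inference API rejects max_new_tokens above 250 for google/gemma-2b-it and does not accept streaming for it. Below is a minimal sketch of one way the text_generation call shown in the traceback (app.py, chat_inf) could be adjusted; the helper name and surrounding parameters here are illustrative assumptions, only the clamping and the non-streaming fallback matter.

```python
from huggingface_hub import InferenceClient
from huggingface_hub.utils import BadRequestError

client = InferenceClient("google/gemma-2b-it")

def generate(formatted_prompt, max_new_tokens=250, **generate_kwargs):
    # The API rejected max_new_tokens > 250 for this model, so clamp it
    # before sending the request (the 250 cap comes from the error above).
    generate_kwargs["max_new_tokens"] = min(max_new_tokens, 250)
    try:
        # Try streaming first and yield tokens as they arrive.
        for chunk in client.text_generation(
            formatted_prompt, stream=True, details=True, **generate_kwargs
        ):
            yield chunk.token.text
    except BadRequestError:
        # "stream is not supported for this model": fall back to a single
        # non-streaming request and yield the full generated text once.
        yield client.text_generation(
            formatted_prompt, stream=False, **generate_kwargs
        )
```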
