RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
#4 by fffiloni · opened
@pharma, we sometimes get this error in the logs. What do you think we can do to fix it?
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/gradio/routes.py", line 292, in run_predict
    output = await app.blocks.process_api(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/gradio/blocks.py", line 1007, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/gradio/blocks.py", line 848, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "app.py", line 49, in inference
    prompt_result = ci.interrogate(image, max_flavors=int(best_max_flavors))
  File "clip-interrogator/clip_interrogator/clip_interrogator.py", line 150, in interrogate
    caption = self.generate_caption(image)
  File "clip-interrogator/clip_interrogator/clip_interrogator.py", line 107, in generate_caption
    caption = self.blip_model.generate(
  File "src/blip/models/blip.py", line 156, in generate
    outputs = self.text_decoder.generate(input_ids=input_ids,
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/generation_utils.py", line 1170, in generate
    return self.beam_search(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/generation_utils.py", line 1907, in beam_search
    outputs = self(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "src/blip/models/med.py", line 904, in forward
    prediction_scores = self.cls(sequence_output)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "src/blip/models/med.py", line 544, in forward
    prediction_scores = self.predictions(sequence_output)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "src/blip/models/med.py", line 534, in forward
    hidden_states = self.decoder(hidden_states)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_addmm)
Got it fixed by setting config.blip_offload to False.
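For anyone hitting the same traceback: the likely mechanism is that with BLIP offloading enabled, the model's weights get moved to cpu between captions to free VRAM, so a later generate call ends up multiplying a cuda:0 input against cpu weights inside F.linear, which is exactly the error above. Here is a minimal, torch-free sketch of that failure mode (all class and function names here are hypothetical stand-ins, not clip-interrogator's actual internals):

```python
class FakeTensor:
    """Stand-in for a torch tensor that only records which device holds it."""
    def __init__(self, device):
        self.device = device

def matmul(a, b):
    # Mirrors torch's device check in F.linear / addmm:
    # both operands must live on the same device.
    if a.device != b.device:
        raise RuntimeError(
            "Expected all tensors to be on the same device, "
            f"but found at least two devices, {a.device} and {b.device}!"
        )
    return FakeTensor(a.device)

class BlipLike:
    """Toy captioning model whose weights can be offloaded to CPU after use."""
    def __init__(self, offload):
        self.offload = offload
        self.weight = FakeTensor("cuda:0")

    def generate(self, image):
        out = matmul(image, self.weight)
        if self.offload:
            # Offload weights to CPU to save VRAM between calls.
            self.weight = FakeTensor("cpu")
        return out

# With offloading on, the second call fails just like the traceback:
model = BlipLike(offload=True)
model.generate(FakeTensor("cuda:0"))           # first call succeeds
try:
    model.generate(FakeTensor("cuda:0"))       # weights now on cpu -> error
except RuntimeError as e:
    print(type(e).__name__)                    # prints "RuntimeError"

# With offloading off (the config.blip_offload = False fix), both calls work:
model = BlipLike(offload=False)
model.generate(FakeTensor("cuda:0"))
model.generate(FakeTensor("cuda:0"))
```

So disabling the offload keeps the weights pinned on the GPU for the whole session, trading a bit of VRAM for correctness under concurrent or repeated requests.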
fffiloni changed discussion status to closed