SentenceTransformer GPU device

#1
opened by cbensimon (HF staff)

Any reason for setting `mps` as the device for the GPU model?

Logs currently show this error:

File "/home/user/app/backend/semantic_search.py", line 45, in embed_func
    return st_model_gpu.encode(query)
  File "/home/user/.local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 153, in encode
    self.to(device)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: PyTorch is not linked with support for mps devices
WARNING:backend.semantic_search:Using CPU

I did not test it with CUDA, so there's a slight risk that it breaks the Space.
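One way to avoid hard-coding a device is a small fallback helper that probes what the installed PyTorch build actually supports. This is a sketch, not the Space's actual code; `pick_device` and the model name in the usage comment are illustrative:

```python
import torch


def pick_device() -> str:
    """Return the best available torch device string, falling back to CPU.

    Hard-coding "mps" raises `RuntimeError: PyTorch is not linked with
    support for mps devices` on builds compiled without Metal support,
    which is exactly the error in the logs above.
    """
    if torch.cuda.is_available():
        return "cuda"
    # The mps backend only exists in torch >= 1.12; is_available() is
    # False when PyTorch was built without Metal support.
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"
    return "cpu"


# Usage sketch (model name is an example, not taken from the Space):
# from sentence_transformers import SentenceTransformer
# st_model_gpu = SentenceTransformer("all-MiniLM-L6-v2", device=pick_device())
```

With this, the same code runs on a CUDA Space, an Apple Silicon laptop, or a CPU-only build without edits.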

Ah, I was using it locally 🤦🏾‍♂️
GPU-Poor problems lol

derek-thomas changed pull request status to merged
