Run model locally on M1 MacBook
#10
by fhashim - opened
I am trying to run this model locally. I've tried loading the model on an M1 MacBook, but it fails to utilize the GPU. I get the following error:
RuntimeError: MPS backend out of memory (MPS allocated: 36.26 GB, other allocations: 384.00 KB, max allowed: 36.27 GB). Tried to allocate 64.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Any sample code to run this locally utilizing the GPU would be of great help. Thanks!
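For reference, this is a minimal sketch of the kind of setup I'm after, assuming a standard transformers checkpoint; the model id below is a placeholder, and loading in float16 is just an attempt to fit within the MPS memory limit:

```python
# Minimal sketch, not a verified recipe: load a causal LM on Apple Silicon (MPS)
# in half precision to reduce memory pressure. "org/model-name" is a placeholder
# for the actual checkpoint id of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder checkpoint id
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # roughly halves memory vs. float32
    low_cpu_mem_usage=True,
).to(device)

# The error message also suggests PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to lift
# the MPS allocation cap; that would be set as an environment variable before
# launching Python, and may destabilize the system, so it seems like a last resort.

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```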