Memory problems

#1
by Whatever76474758585 - opened

I tried to summarise ~30k tokens on my M1 Ultra 128GB using the code from the model card. It eats up all the memory, and I did not have the patience to wait for it to finish; I don't think it would ever get the job done. I can summarise the same text with Mixtral 8x22B Q4_K_M way faster and still have some spare memory. What am I doing wrong here? Has anyone had success with this model and long context?
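
For context, a minimal sketch of the kind of model-card snippet being described, using the standard mlx_lm API; the exact repo name, input file, and prompt wording are assumptions, since the post does not show the code that was run:

```python
# Minimal sketch of a typical mlx-community model-card snippet via mlx_lm.
# The repo name, input file, and prompt wording are assumptions for illustration.
from mlx_lm import load, generate

# Load the 4-bit MLX weights (hypothetical repo name).
model, tokenizer = load("mlx-community/Phi-3-mini-128k-instruct-4bit")

# Feed a long (~30k token) document as the prompt, as described in the post.
with open("document.txt") as f:
    long_text = f.read()
prompt = f"Summarise the following text:\n\n{long_text}"

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```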

MLX Community org

Hi,

There is an active discussion about this in the mlx GH:

https://github.com/ml-explore/mlx-examples/issues/660

Perhaps you can share this message there; it may help shed light on the issue.

Sorry, I don't think it is related. Phi-3-mini is a very small model, yet the 4-bit version does not fit in my 128GB, while the people in that thread did not complain about VRAM even though Command-R-Plus is much bigger.

MLX Community org

OK, could you open a new issue there?
