M1 8 GB RAM MacBook Air runs at 0.01 tokens/second, something is wrong, right?

#6
by DanielCL - opened

According to the demo video at https://huggingface.co/blog/swift-coreml-llm, the speed of this model was 600 tokens/second (pretty cool).

Screenshot 2023-08-14 at 16.52.09.png

So, I cloned https://github.com/huggingface/swift-chat, successfully downloaded this model from coreml-projects/Llama-2-7b-chat-coreml, and it runs without error.
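For context, this is roughly the kind of loading code involved; just a sketch on my side, not the actual swift-chat code (the path and compute-unit choice are placeholders):

```swift
import CoreML
import Foundation

// Sketch only, not swift-chat's actual code. The compute units selected at
// load time can matter a lot on memory-constrained machines.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndGPU  // alternatives: .all, .cpuAndNeuralEngine

// Placeholder path: point this at the compiled .mlmodelc from the
// coreml-projects/Llama-2-7b-chat-coreml download.
let modelURL = URL(fileURLWithPath: "/path/to/Llama-2-7b-chat.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print("Model loaded: \(model.modelDescription)")
} catch {
    print("Failed to load model: \(error)")
}
```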

I don't know which chip was used in the demo (M2, M1, or something even more powerful), but my M1 MacBook Air with 8 GB of RAM only reaching 0.01 tokens/s seems very wrong.

Screenshot 2023-08-14 at 16.33.39.png

What am I doing wrong?

Thank you so much!

Core ML Projects org
edited Aug 14, 2023

The demo video you refer to indicates a generation speed of 6.42 tokens per second (as opposed to 600), but 0.01 tokens per second definitely seems wrong.

It might simply be a timing/measurement issue: if it were really 0.01 tokens/s, that would be 100 seconds per token, and the text shown in your screenshot would have taken ~30 minutes to generate, which I doubt is the case. Can you take a recording, or maybe just confirm whether it is actually that slow?
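For reference, here is a minimal way to time it independently of the app's own counter (illustrative Swift; the generation closure is a stand-in for whatever prediction loop swift-chat actually runs):

```swift
import Foundation

/// Times a token-producing closure and returns tokens per second.
/// `generate` stands in for the real prediction loop and should return
/// the number of tokens it produced.
func measureTokensPerSecond(_ generate: () -> Int) -> Double {
    let start = Date()
    let tokenCount = generate()
    let elapsed = Date().timeIntervalSince(start)
    return elapsed > 0 ? Double(tokenCount) / elapsed : 0
}

// Example with a fake loop that "produces" 10 tokens.
let speed = measureTokensPerSecond {
    (0..<10).forEach { _ in Thread.sleep(forTimeInterval: 0.05) }
    return 10
}
print(String(format: "%.2f tokens/s", speed))
```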

Xenova changed discussion status to closed
Xenova changed discussion status to open

Thanks @Xenova for your reply. It really is that slow: about 20 minutes or more per answer, for every question and on every run.

Not sure if a "me too" is useful, but it's the same on my 2022 M2 Air with 16 GB: I get ~0.02 tokens/s. I've tried release builds as well as debug. Happy to provide more details or run tests.

I opened an issue at https://github.com/huggingface/swift-chat/issues/3 and get 0.02 tokens/s too.

Core ML Projects org

My computer is an M1 Max with 64 GB. I believe the main factor that affects performance in this case is the amount of RAM installed. Because the model is so large, the system will swap if there's not enough memory to hold everything in RAM, and that will kill performance.
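As a quick sanity check, you can compare the installed RAM against a back-of-the-envelope estimate of the float16 weights of a 7B-parameter model (roughly 14 GB for the weights alone; the exact footprint depends on the Core ML conversion):

```swift
import Foundation

// Rough sanity check: installed RAM vs. an approximate float16 weight size.
// 7B parameters × 2 bytes ≈ 14 GB, before activations and the rest of the app.
let installedGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
let approxWeightsGB = 7.0 * 2.0

print(String(format: "Installed RAM: %.1f GB, approx. fp16 weights: %.0f GB",
             installedGB, approxWeightsGB))
if installedGB < approxWeightsGB {
    print("Weights alone exceed physical RAM; expect heavy swapping.")
}
```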

I'll run some tests and reply in the GitHub issue!
