How to make Gemma run fast locally?

#28
by dantebytes - opened

I use this code to run Gemma on my local machine. The model is already downloaded, but generating with max_length=200 took about 3 minutes. Do I really need a GPU, or would upgrading to, say, an i7 with 32 GB of RAM be enough? I'd like advice because I plan to serve Gemma through my own self-hosted API (Flask) and I need fast responses.

-------------------- CODE Snippet --------------------------------------
from transformers import AutoTokenizer, AutoModelForCausalLM
import datetime

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
# device_map="auto" needs the accelerate package; with no GPU available,
# the model lands on CPU in float32.
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")

input_text = "Write me a poem about Machine Learning."
# The tokenizer returns a dict of tensors (input_ids, attention_mask).
inputs = tokenizer(input_text, return_tensors="pt")

print(f'generate start: {datetime.datetime.now()}')
# Note: max_length counts the prompt tokens too, so fewer than 200 new
# tokens are actually generated.
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
print(f'END: {datetime.datetime.now()}')

-------------------- Timestamp --------------------------------------
generate start: 2024-03-01 17:45:48.176671
END: 2024-03-01 17:47:33.717118

--------------------- local machine ---------------------------------
Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz
16.0 GB RAM
Windows 10, 64-bit
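
For reference, here is a sketch of the same script with a few CPU-side tweaks that often help (the thread count is an assumption for this machine, and the speedup is not guaranteed):

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Use the physical core count; the i5-8400 has 6 cores.
torch.set_num_threads(6)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model.eval()

inputs = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt")

# inference_mode() skips autograd bookkeeping, and max_new_tokens bounds only
# the generated tokens, so the prompt does not eat into the budget.
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Even with these tweaks, a 2B-parameter model in float32 is heavy for a 6-core desktop CPU, so a quantized runtime (see below) is usually the bigger win.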

If you don't have a GPU, you can try gemma.cpp or llama.cpp; a sketch using llama-cpp-python follows.
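
As a concrete illustration, here is a minimal sketch using llama-cpp-python with a quantized GGUF build of Gemma 2B (the file name, context size, and thread count are assumptions; any 4-bit Gemma 2B instruct GGUF should behave similarly):

from llama_cpp import Llama

# Hypothetical local path to a 4-bit quantized Gemma 2B instruct GGUF file.
llm = Llama(model_path="gemma-2b-it.Q4_K_M.gguf", n_ctx=2048, n_threads=6)

out = llm("Write me a poem about Machine Learning.", max_tokens=200)
print(out["choices"][0]["text"])

A 4-bit 2B model needs roughly 2 GB of RAM and typically generates far faster on CPU than the full-precision transformers path.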

Google org

+1 to gemma.cpp, it should work great on CPU!
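
For the self-hosted Flask API mentioned in the question, a minimal sketch along the same lines (the route, model path, and settings are hypothetical; the key point is to load the model once at startup rather than per request):

from flask import Flask, jsonify, request
from llama_cpp import Llama

app = Flask(__name__)

# Load once at startup; reloading the model per request would dominate latency.
llm = Llama(model_path="gemma-2b-it.Q4_K_M.gguf", n_ctx=2048, n_threads=6)

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.json["prompt"]
    out = llm(prompt, max_tokens=200)
    return jsonify(text=out["choices"][0]["text"])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)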

dantebytes changed discussion status to closed
