Can't use it with vLLM, although gemma-2b from Google is supported
1
#8 opened about 1 month ago by yaswanth-iitkgp
Can't generate decent text out of it
5
#7 opened about 1 month ago by useless-ai
Compare with original gemma 2b?
#6 opened about 1 month ago by supercharge19
Tests & Eval
#5 opened about 1 month ago by segmond
Performance on long context benchmarks?
#4 opened about 1 month ago by odusseys
OOM on A100
#3 opened about 1 month ago by chuyi777
Is there any data that shows inference-time performance?
#2 opened about 1 month ago by CMCai0104
Context window is only 8k???
1
#1 opened about 1 month ago by rombodawg