Can't use it with vLLM, although gemma-2B from Google is supported (1 reply)
#8 opened 6 months ago by yaswanth-iitkgp
Can't generate decent text out of it (6 replies)
#7 opened 7 months ago by useless-ai
Compare with original gemma 2b?
#6 opened 7 months ago by supercharge19
Tests & Eval
#5 opened 7 months ago by segmond
Performance on long context benchmarks?
#4 opened 7 months ago by odusseys
OOM on A100
#3 opened 7 months ago by chuyi777
Is there any data showing inference-time performance?
#2 opened 7 months ago by CMCai0104
Context window is only 8k? (1 reply)
#1 opened 7 months ago by rombodawg