Lavanya KV (lkv)
AI & ML interests: None yet
Recent Activity
New activity about 17 hours ago on google/gemma-3-27b-it: can not run with vllm
New activity 1 day ago on google/gemma-3-12b-it-qat-q4_0-gguf: ollama run error
New activity 7 days ago on google/gemma-3-27b-it: Specify maximum context length in config
Organizations
lkv's activity
can not run with vllm · 6 · #23 opened about 1 month ago by tankpigg
ollama run error · 1 · #9 opened 3 days ago by chenbridge
Specify maximum context length in config · 1 · 1 · #40 opened 29 days ago by saattrupdan
REQUEST DOI · 1 · #59 opened 9 days ago by Natwar
Ollama error · 3 · #2 opened 20 days ago by Colegero
Please open-source Gemini 1.5 Flash · 3 · #42 opened 14 days ago by drguolai
Please release the weights of Gemini 1.5 Flash · 2 · #43 opened 27 days ago by ZeroWw
Please release also Gemini Flash 1.5 weights. · 2 · #31 opened 27 days ago by ZeroWw
Issue with vLLM Deployment of gemma-3-4b-it on Tesla T4 - No Output · 2 · #33 opened 27 days ago by twodaix
Did anyone else get the impression that the 12b model gives better textual responses than the 27b model? · 1 · #13 opened about 1 month ago by Lucena
Request: DOI · 1 · #59 opened about 1 month ago by mohamedzaki
cannot import name 'Gemma3ForConditionalGeneration' from 'transformers' · 4 · #22 opened about 1 month ago by ansh10dave
The checkpoint you are trying to load has model type `gemma3` but Transformers does not recognize this architecture. · 4 · #19 opened about 1 month ago by bedoonraj
Request: DOI · 1 · #58 opened about 1 month ago by mamatmusayev
Checking multiple policy rules · 6 · #11 opened 9 months ago by AmenRa
Evaluation Result · 1 · #15 opened 8 months ago by tanliboy
Pretraining Time Cost? · 1 · #30 opened about 1 year ago by fov223
SLERP merge example code? · 3 · #20 opened 9 months ago by grimjim
