Alexander Gusev
alexgusevski
AI & ML interests
Open Source, running stuff on my M4. Check out anemll-server: https://github.com/alexgusevski/anemll-server
Recent Activity
Updated a model 3 days ago: alexgusevski/OpenThinker2-32B-mlx-fp16
Published a model 3 days ago: alexgusevski/OpenThinker2-32B-mlx-fp16
Updated a model 3 days ago: alexgusevski/OpenThinker2-32B-mlx-8Bit
Organizations
None yet
alexgusevski's activity
Update to latest mlx_lm version? (1)
#46 opened 21 days ago by alexgusevski

Add support for converting GGUF models to MLX (2)
#43 opened about 1 month ago by Fmuaddib

Gemma 3 support? And can't access gated models.
#44 opened 28 days ago by alexgusevski

Space for converting models with vlm?
#45 opened 28 days ago by alexgusevski

segfault when trying to run the model using chat_full.py (10)
#1 opened about 2 months ago by AliNT99
Please fix naming scheme (4)
#37 opened about 1 month ago by depasquale
