
Mitko Vasilev

mitkox

AI & ML interests

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.

Organizations

ZeroGPU Explorers · MLX Community · Social Post Explorers · open/acc

mitkox's activity

posted an update 7 days ago
llama.cpp is 26.8% faster than ollama.
I upgraded both and, using the same settings, ran the same DeepSeek R1 Distill 1.5B on the same hardware: an apples-to-apples comparison.

Total duration:
llama.cpp 6.85 sec <- 26.8% faster
ollama 8.69 sec

Breakdown by phase:
Model loading
llama.cpp 241 ms <- 2x faster
ollama 553 ms

Prompt processing
llama.cpp 416.04 tokens/s with an eval time of 45.67 ms <- 10x faster
ollama 42.17 tokens/s with an eval time of 498 ms

Token generation
llama.cpp 137.79 tokens/s with an eval time of 6.62 sec <- 13% faster
ollama 122.07 tokens/s with an eval time 7.64 sec

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing.
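Reproducing this is straightforward since both runtimes expose local HTTP APIs. Below is a minimal Python sketch, assuming llama-server on its default port 8080, ollama on its default 11434, and ollama's deepseek-r1:1.5b tag for the distill; ollama's non-streaming response also carries the per-phase breakdown (in nanoseconds) used above.

```python
# Minimal sketch: time one identical request against a local llama.cpp
# server and a local ollama server. Ports and the model tag are assumptions.
import time
import requests

PROMPT = "Explain speculative decoding in one paragraph."
N_TOKENS = 256

def bench(name, url, payload):
    t0 = time.perf_counter()
    r = requests.post(url, json=payload, timeout=600)
    r.raise_for_status()
    dt = time.perf_counter() - t0
    print(f"{name}: {dt:.2f} s total")
    return r.json(), dt

# llama.cpp server: POST /completion
_, llama_dt = bench(
    "llama.cpp",
    "http://localhost:8080/completion",
    {"prompt": PROMPT, "n_predict": N_TOKENS},
)

# ollama: POST /api/generate (non-streaming, so r.json() is one object)
ollama_json, ollama_dt = bench(
    "ollama",
    "http://localhost:11434/api/generate",
    {"model": "deepseek-r1:1.5b", "prompt": PROMPT,
     "options": {"num_predict": N_TOKENS}, "stream": False},
)

# ollama reports its per-phase breakdown in nanoseconds
for key in ("load_duration", "prompt_eval_duration", "eval_duration"):
    if key in ollama_json:
        print(f"ollama {key}: {ollama_json[key] / 1e9:.3f} s")

print(f"llama.cpp is {ollama_dt / llama_dt - 1:.1%} faster")
```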

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
posted an update 9 days ago
Stargate to the west of me
DeepSeek to the east
Here I am
Stuck in the middle with the EU

It will likely be only a matter of time before export controls cover frontier research and models on both sides, leaving us in a vacuum.

Decentralized training infrastructure and on-device inference are the future.
posted an update 10 days ago
On-device AI reasoning (ODA-R) using speculative decoding: DeepSeek-R1-Distill-Qwen-1.5B is the draft model, DeepSeek-R1-Distill-Qwen-32B the target, and a DSPy compiler handles the reasoning prompts for math, engineering, code...
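For the curious, this is roughly the loop speculative decoding runs. A toy sketch with greedy verification; draft_next and target_forward are hypothetical stand-ins for the 1.5B draft and 32B target (real implementations verify with rejection sampling over the full distributions and batch everything on GPU).

```python
# Toy speculative decoding loop, greedy-verification variant.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],           # draft: greedy next token
    target_forward: Callable[[List[int]], List[int]], # target: greedy next token after every prefix
    k: int = 4,
    max_new: int = 64,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) the cheap draft proposes k tokens autoregressively
        ctx = list(tokens)
        proposal = []
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) ONE target pass scores the whole block; we need its
        #    next-token choices after the last k+1 prefixes of ctx
        target_choices = target_forward(ctx)[-(k + 1):]
        # 3) accept the longest prefix where draft and target agree
        n_accept = 0
        for i, t in enumerate(proposal):
            if target_choices[i] != t:
                break
            n_accept += 1
        tokens.extend(proposal[:n_accept])
        # 4) one guaranteed token from the target: the correction, or a
        #    bonus token if the whole proposal was accepted
        tokens.append(target_choices[n_accept])
    return tokens[: len(prompt) + max_new]
```

The win: each iteration costs one target forward pass but can emit up to k+1 tokens, so a good small draft speeds up the big model without changing its greedy output.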
posted an update 15 days ago
Training a model to reason in the continuous latent space, based on Meta's Coconut.
If it all works, I will apply it to the MiniCPM-o SVD-LR.
The endgame is a multimodal, adaptive, and efficient foundational on-device AI model.
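At inference time the Coconut trick is small: for a few "thought" steps, feed the last hidden state straight back in as the next input embedding instead of decoding a token. A minimal PyTorch sketch, assuming a Hugging Face-style causal LM; the training curriculum that gradually swaps chain-of-thought tokens for latent steps is where the actual work is, and is omitted here.

```python
# Coconut-style continuous latent reasoning, inference-time sketch.
# No KV cache for brevity, so each step re-runs the full sequence.
import torch

@torch.no_grad()
def latent_then_decode(model, tokenizer, prompt, n_thoughts=4, max_new=32):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(ids)           # (1, T, d)

    # continuous "thoughts": no tokens are emitted in these steps
    for _ in range(n_thoughts):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_h = out.hidden_states[-1][:, -1:, :]        # (1, 1, d)
        embeds = torch.cat([embeds, last_h], dim=1)      # hidden state -> next input

    # then switch back to ordinary greedy token decoding
    new_ids = []
    for _ in range(max_new):
        out = model(inputs_embeds=embeds)
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        new_ids.append(next_id)
        embeds = torch.cat([embeds, model.get_input_embeddings()(next_id)], dim=1)
    return tokenizer.decode(torch.cat(new_ids, dim=1)[0])
```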
replied to their post 23 days ago
posted an update 23 days ago
'Can it run DeepSeek V3 671B?' is the new 'Can it run Doom?'.

How minimal can I go with on-device AI and behemoth models? Here I'm running the DeepSeek V3 MoE on a single A6000 GPU.

Not great, not terrible for such a minimalistic setup. I love Mixture of Experts architectures: only a small fraction of the 671B parameters is active per token, which is what makes this feasible at all. Typically I run my core LLM distributed over 4 GPUs.
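Top-k routing is the mechanism behind that: per token, the router picks a few experts and the rest of the weights stay idle. A toy sketch of the idea; the sizes, expert count, and k below are illustrative, not DeepSeek V3's real configuration (which adds shared experts and a more elaborate router).

```python
# Toy top-k Mixture of Experts layer: each token is routed to k of
# n_experts feed-forward networks; all other experts stay idle.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=64, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):               # only selected experts compute
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                w = weights[mask, slot].unsqueeze(1)
                out[mask] += w * self.experts[int(e)](x[mask])
        return out

moe = TopKMoE()
y = moe(torch.randn(10, 512))   # 10 tokens, each touching 2 of 64 experts
```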

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
liked a Space about 1 month ago
New activity in open-acc/README 2 months ago

Bye Apple and hi NVIDIA

#6 opened 2 months ago by mitkox