
Jaward Sesay

Jaward

AI & ML interests

I like to train large deep neural nets too 🧠🤖💥 | Role Model Karpathy


Posts (11)

After giving GPU Programming a hands-on try, I have come to appreciate the level of complexity in AI compute:

- Existing/leading frameworks (CUDA, OpenCL, DSLs, even Triton) still leave you at the mercy of low-level compute details that demand deep understanding and experience.
- Ambiguous optimization methods that will literally drive you mad 🤯
- Triton is cool but not cool enough (high-level abstractions that fall back to low-level compute issues as you build more specialized kernels - see the sketch after this list)
- As for CUDA, optimization requires considering all major components of the GPU (DRAM, SRAM, ALUs) 🤕
- Models today require expertly hand-written GPU kernels to reduce storage and compute cost.
- GPTQ was a big save 👍🏼
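
For a sense of the Triton trade-off, here is a minimal vector-add kernel - an illustrative sketch, not code from the post; names like `add_kernel` and the `BLOCK_SIZE=1024` launch config are my own choices. The block-level Python abstraction is genuinely pleasant, until your kernels get specialized and you are back to reasoning about tiling, masking, and memory access patterns:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # which program instance am I?
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)                    # 1D launch grid
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Even in this toy case you are already choosing block sizes and masking out-of-bounds lanes by hand; anything fancier (fusion, shared-memory tiling) drags you right back down the stack.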

@karpathy is right: expertise in this area is scarce, and the reason is quite obvious - uncertainty. We are still struggling to get peak performance out of multi-GPU systems while maintaining precision and reducing cost.

May the Scaling Laws favor us lol.
This is the closest I’ve seen to a scalable AI/LLM Operating System - it has all the major ingredients of a feasible AI OS architecture:

- Extends classical OS functionalities with an LLM Kernel.
- A multi-agent-centric approach.
- An optimized resource allocation system that lets LLM-based tasks and classical OS tasks coexist.
- An Agent Scheduler that can apply classical OS scheduling policies (FIFO, Round Robin) - sketched after this list.
- A Context Manager to improve alignment.
- A Lazy Memory Manager for agents (ensures data is stored and accessible only while the agent is active).
- An enhanced security module for the AI-driven environment.
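
To make the scheduling point concrete, here is a minimal sketch of what an agent scheduler with FIFO and Round-Robin policies could look like. All names (`Agent`, `Scheduler`, `time_slice`) are hypothetical illustrations, not the actual AIOS API:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    remaining_steps: int  # LLM calls / tool steps left for this agent

class Scheduler:
    def __init__(self, policy: str = "FIFO", time_slice: int = 1):
        self.policy = policy          # "FIFO" or "RR" (hypothetical flag)
        self.time_slice = time_slice  # steps per turn under Round Robin
        self.queue: deque[Agent] = deque()

    def submit(self, agent: Agent) -> None:
        self.queue.append(agent)

    def run(self) -> None:
        while self.queue:
            agent = self.queue.popleft()
            if self.policy == "FIFO":
                steps = agent.remaining_steps               # run to completion
            else:
                steps = min(self.time_slice, agent.remaining_steps)
            agent.remaining_steps -= steps
            print(f"{agent.name}: executed {steps} step(s)")
            if agent.remaining_steps > 0:
                self.queue.append(agent)                    # RR: back of the line
```

Under FIFO each agent runs to completion in arrival order; under Round Robin agents take turns in fixed slices, so one long-running agent can’t starve the rest.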

It does hit all the checkpoints, doesn’t it? A scaled-up version of @karpathy ’s.

Code: https://github.com/agiresearch/AIOS

Datasets

None public yet