meta-llama/Llama-3.2-11B-Vision-Instruct • Image-Text-to-Text • Updated about 7 hours ago • 2.14M downloads • 1.04k likes
Post: 💾🧠 Want to know how much VRAM you will need to train your model? Now you can use this app: input a torch/TensorFlow summary or the parameter count and get an estimate of the required memory. Use it at howmuchvram.com. Everything is open source, so you can contribute to the repo: https://github.com/AlexBodner/How_Much_VRAM. Leave it a star ⭐
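The back-of-the-envelope math behind such an estimate can be sketched in a few lines. This is a hypothetical helper, not code from the How_Much_VRAM repo: it assumes full-precision (fp32) training with Adam, which keeps the weights, the gradients, and two optimizer states per parameter, and it deliberately ignores activation memory, which depends on batch size and architecture.

```python
def estimate_training_vram_gb(num_params: int,
                              bytes_per_param: int = 4,
                              optimizer_states: int = 2) -> float:
    """Lower-bound training VRAM estimate in GiB.

    Counts weights + gradients + optimizer states, all stored at
    bytes_per_param. Activations are NOT included, so real usage
    will be higher.
    """
    # (1 copy of weights + 1 of gradients + N optimizer states) per parameter
    total_bytes = num_params * bytes_per_param * (2 + optimizer_states)
    return total_bytes / (1024 ** 3)

# Example: an 11B-parameter model trained in fp32 with Adam
print(round(estimate_training_vram_gb(11_000_000_000), 1))  # ~163.9 GiB
```

Swapping `bytes_per_param` to 2 approximates pure fp16/bf16 training; mixed-precision setups that keep fp32 master weights and optimizer states land in between, which is why a tool that parses the actual model summary gives tighter numbers than parameter count alone.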
Building and Better Understanding Vision-Language Models: Insights and Future Directions • Paper • 2408.12637 • Published Aug 22 • 122 upvotes
Post: Excited to announce the release of the community version of our guardrails: WalledGuard-C! Feel free to use it. Compared to Meta's guardrails, it offers superior performance and is 4x faster. Most importantly, it's free for nearly any use! Link: walledai/walledguard-c #AISafety