---
license: mit
title: AI Stupid Level
sdk: static
colorFrom: green
short_description: Tracking how smart AI really is
---
# AI Stupid Level

Welcome to AI Stupid Level, a real-time AI benchmarking platform that measures and compares the intelligence, efficiency, and consistency of today's top AI models.
**Website:** https://aistupidlevel.info
**Organization:** Studio Platforms (Romania)
**Focus:** LLM Evaluation • Model Drift Analysis • Real-Time Benchmarks • AI Transparency
## What We Do

AI Stupid Level evaluates the world's leading AI models across **7 Dimensions of AI Power**:
- Reasoning & Logic
- Code Generation
- Real-World Accuracy
- Context Memory
- Creativity
- Knowledge Retention
- Truthfulness & Stability
Currently benchmarking: OpenAI, Anthropic (Claude), Gemini, Grok
Coming soon: Mistral, Cohere, DeepSeek, GLM, Kimi, and more!
## Our Mission

To make AI performance transparent and comparable. We believe in open evaluation for every developer and researcher.
## Useful Links

- [Main Platform](https://aistupidlevel.info)
- [GitHub: Studio Platforms](https://github.com/StudioPlatforms)
- [X / Twitter](https://x.com/aistupidlevel)
*AI Stupid Level is an independent benchmarking project built by Ionut Visan (Laurent) and the Studio Platforms ecosystem.*