SumBuddy

Insanelycool

AI & ML interests

None yet

Recent Activity

reacted to m-ric's post with 👍 10 days ago
π’πœπšπ₯𝐒𝐧𝐠 π₯𝐚𝐰𝐬 𝐚𝐫𝐞 𝐧𝐨𝐭 𝐝𝐞𝐚𝐝 𝐲𝐞𝐭! New blog post suggests Anthropic might have an extremely strong Opus-3.5 already available, but is not releasing it to keep their edge over the competition. 🧐 ❓Since the release of Opus-3.5 has been delayed indefinitely, there have been lots of rumors and articles about LLMs plateauing. Scaling laws, the main powering factor of the LLM competence increase, could have stopped, according to these rumors, being the cause of this stalling of progress. These rumors were quickly denied by many people at the leading LLM labs, including OpenAI and Anthropic. But these people would be expected to hype the future of LLMs even if scaling laws really plateaued, so the jury is still out. πŸ—žοΈ This new article by Semianalysis (generally a good source, specifically on hardware) provides a counter-rumor that I find more convincing: ➑️ Maybe scaling laws still work, Opus-3.5 is ready and as good as planned, but they just don't release it because the synthetic data it helps provide can bring cheaper/smaller models Claude and Haiku up in performance, without risking to leak this precious high-quality synthetic data to competitors. Time will tell! I feel like we'll know more soon. Read the article: https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-infrastructure-orion-and-claude-3-5-opus-failures/
replied to m-ric's post 10 days ago
π’πœπšπ₯𝐒𝐧𝐠 π₯𝐚𝐰𝐬 𝐚𝐫𝐞 𝐧𝐨𝐭 𝐝𝐞𝐚𝐝 𝐲𝐞𝐭! New blog post suggests Anthropic might have an extremely strong Opus-3.5 already available, but is not releasing it to keep their edge over the competition. 🧐 ❓Since the release of Opus-3.5 has been delayed indefinitely, there have been lots of rumors and articles about LLMs plateauing. Scaling laws, the main powering factor of the LLM competence increase, could have stopped, according to these rumors, being the cause of this stalling of progress. These rumors were quickly denied by many people at the leading LLM labs, including OpenAI and Anthropic. But these people would be expected to hype the future of LLMs even if scaling laws really plateaued, so the jury is still out. πŸ—žοΈ This new article by Semianalysis (generally a good source, specifically on hardware) provides a counter-rumor that I find more convincing: ➑️ Maybe scaling laws still work, Opus-3.5 is ready and as good as planned, but they just don't release it because the synthetic data it helps provide can bring cheaper/smaller models Claude and Haiku up in performance, without risking to leak this precious high-quality synthetic data to competitors. Time will tell! I feel like we'll know more soon. Read the article: https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-infrastructure-orion-and-claude-3-5-opus-failures/

Organizations

None yet