SumBuddy
Insanelycool
4 followers · 1 following
AI & ML interests
None yet
Recent Activity
reacted to m-ric's post 10 days ago
Scaling laws are not dead yet! A new blog post suggests Anthropic might have an extremely strong Opus-3.5 already available, but is not releasing it, to keep their edge over the competition.

Since the release of Opus-3.5 has been delayed indefinitely, there have been lots of rumors and articles about LLMs plateauing. According to these rumors, scaling laws, the main driver of the increase in LLM competence, may have stopped working, causing this stall in progress. These rumors were quickly denied by many people at the leading LLM labs, including OpenAI and Anthropic. But these people would be expected to hype the future of LLMs even if scaling laws really had plateaued, so the jury is still out.

This new article by Semianalysis (generally a good source, especially on hardware) provides a counter-rumor that I find more convincing: maybe scaling laws still work and Opus-3.5 is ready and as good as planned, but Anthropic just doesn't release it, because the synthetic data it helps generate can bring the cheaper/smaller Claude and Haiku models up in performance, without risking leaking this precious high-quality synthetic data to competitors.

Time will tell! I feel like we'll know more soon. Read the article: https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-infrastructure-orion-and-claude-3-5-opus-failures/
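The strategy the post describes amounts to distillation through synthetic data: a strong model generates training text, and a cheaper model is fine-tuned on it. As a rough illustration only, here is a minimal sketch of that loop using small open models standing in for the teacher and student; the model names, prompts, and single-pass training loop are placeholder assumptions for this sketch, not anything the article or Anthropic confirms.

```python
# Hypothetical sketch of "strong model generates synthetic data, smaller model
# trains on it". Model names and prompts are placeholders, not Anthropic's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER = "Qwen/Qwen2.5-1.5B-Instruct"  # stands in for the strong, unreleased model
STUDENT = "Qwen/Qwen2.5-0.5B-Instruct"  # stands in for the cheaper/smaller model

teacher_tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER)
teacher.eval()

prompts = [
    "Explain what an LLM scaling law is in one paragraph.",
    "Summarize why a lab might delay releasing a frontier model.",
]

# 1) The teacher generates synthetic completions.
synthetic_texts = []
with torch.no_grad():
    for p in prompts:
        ids = teacher_tok(p, return_tensors="pt")
        out = teacher.generate(**ids, max_new_tokens=128, do_sample=True, temperature=0.7)
        synthetic_texts.append(teacher_tok.decode(out[0], skip_special_tokens=True))

# 2) The student is fine-tuned on those outputs with plain next-token loss.
student_tok = AutoTokenizer.from_pretrained(STUDENT)
student = AutoModelForCausalLM.from_pretrained(STUDENT)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

student.train()
for text in synthetic_texts:
    batch = student_tok(text, return_tensors="pt", truncation=True, max_length=512)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the setup, on the article's reading, is that the synthetic text never has to leave the lab: only the improved smaller-model weights are shipped.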
replied to m-ric's post 10 days ago
updated a model 23 days ago: Insanelycool/QWQ-Rombos-SLERP-TEST2-Q8_0-GGUF
Organizations
None yet
Insanelycool's activity
liked 2 Spaces 6 months ago:
Whisper Web (Running, 951 likes)
Real-time Whisper WebGPU (Running, 358 likes)
liked a Space 7 months ago:
GGUF My Repo (Running on A10G, 1.04k likes)