clem posted an update 3 days ago
Before 2020, most of the AI field was open and collaborative. For me, that was the key factor that accelerated scientific progress and made the impossible possible—just look at the “T” in ChatGPT, which comes from the Transformer architecture openly shared by Google.

Then came the myth that AI was too dangerous to share, and companies started optimizing for short-term revenue. That led many major AI labs and researchers to stop sharing and collaborating.

With OAI and sama now saying they're willing to share open weights again, we have a real chance to return to a golden age of AI progress and democratization—powered by openness and collaboration, in the US and around the world.

This is incredibly exciting. Let’s go, open science and open-source AI!

Why was AI considered too dangerous to share?


I'm not convinced they aren't just about to give us their scraps. GPT-4.5 was a tire fire, and nobody wanted it even if they could afford it.

If the new OpenAI model is good, that'd be awesome, but my hopes are not terribly high.


That's largely been the move by Big Tech when they open-source or open-weight their models, I think. What they actually release is just a watered-down version of the product they're capitalizing on. Even DeepSeek and other open-source-first teams keep their monetized models private.

Maybe the same people who sold your data out the back door, while championing your "privacy rights" by letting you block people on their platform, have suddenly had a massive change of heart. But something tells me the play is to widen the gap between Big Tech and emerging players: water down the market so much that only companies with compute-maximalist infrastructure can train meaningful models.

Maybe I'm just a cynic though.