If AI can help the bad guys, it can also help the good guys protect against the bad guys.
738
If AI actually gets good at finding security vulnerabilities, software will quickly become much more secure.
The field of AI safety is in dire need of reliable data. The UK AI Safety Institute is poised to conduct studies that will hopefully bring hard data to a field that is currently rife with wild speculations and methodologically dubious studies.
Perhaps an LLM can save you a bit of time, over searching for bioweapon building instructions on a search engine. But then, do you know how to do the hard lab work that's required?
392
@katieelink
Yes, that's one scenario I was thinking of.
136
@katieelink
An excellent piece by my dear NYU colleague @togelius pointing out that regulating AI R&D (as opposed to products) would lead to unacceptable levels of policing and restrictions on the "freedom to compute"
296
I think regulating AI on a technical level (as opposed to an application level) is a terrible, terrible idea and a threat to our digital freedoms. This is a text I've had half-finished for a while, but recent developments made me finish it up.
An article about my vociferous support of open source AI platforms. Demis Hassabis, Dario Amodei, and Sam Altman (among others) have scared governments about what they claim are risks of AI-fueled catastrophes. I know that Demis, at least, is sincere in his claims, but I think…
Openness, transparency, and broad access make software platforms safer and more secure. This open letter from the Mozilla Foundation, which I signed, makes the case for open AI platforms and systems.
871
Open source platforms *increase* safety and security. This is as true for AI as it is for operating systems and Internet infrastructure software.
602
@arthurmensch
Important point from @arthurmensch. In fact, AI is often the best countermeasure to things like disinformation campaigns and cyber attacks.
157
@arthurmensch
Exactly.
455
It may soon be a crime to compress public domain human knowledge into public domain matrices. We need to regulate the usage of AI in applications, not gradient descent
Good question.
192
I have a family member studying virology. Everyone in the course is learning this forbidden knowledge that LLMs are not allowed to pass on. So why do we allow the teaching of virology, but do not allow an understanding of virology to be more accessible? twitter.com/DrNikkiTeran/s…
Like when there was an export control on computers above 1 GFLOPS and when the Sony PlayStation-2 came out in 2000, it was above the limit https://theregister.com/2000/04/17/playstation_2_exports/…
852
One day we will have the equivalent of the gpu compute Azure has in an iPhone and this regulation will seem comical to our children.
Exactly.
235
The gun control lobby can bring statistics to bear and say “Perhaps it is worth giving up some liberty in an attempt to limit these manifest harms.” You can disagree, but it is a reality based argument. In contrast, the AI control lobby holds up imaginary harms.
A nice piece by @AndrewYNg arguing that irrational fears about AI should not cause governments to regulate open source AI models out of existence.
1K
My greatest fear for the future of AI is that overhyped risks (such as human extinction) will let tech lobbyists get stifling regulations enacted that suppress open source and crush innovation. Read more in our Halloween special issue of The Batch:
A few interesting points highlighted by @kchonyc in the President's Executive Order on AI.
35
a number of weird definitions and weirdly specific points, but overall, worth reading to see which areas are considered priorities by the WH. in this
I have a terrible confession to make: I giggled
1K
The psychology of AI alarmists: Elon Musk: Savior complex. Needs something to save the world from. Geoff Hinton: Ultra-leftist, world-class eccentric. Yoshua Bengio: Hopelessly naive idealist. Stuart Russell: His only impactful application ever was to nuclear test monitoring.…
The groundswell of interest for Llama-1 (and for all our previous open source AI packages, Like PyTorch, DINO, SAM, NLLB, wav2vec...) is what convinced the Meta leadership that the benefits of an open release of Llama-2 would overwhelmingly outweigh the risks and transform the…
1.5K
Realizing how important it was for
Yup. I've said that for years too.
323
Yup. As I've said for years: Regulate applications of AI, not general/foundational model research. twitter.com/ClementDelangu…
Well, at least *one* Big Tech company is open sourcing AI models and not lying about AI existential risk
308
Yes. Regulate end product deployments. Don't regulate R&D with an arbitrary compute or model size threshold.
676
IMO compute or model size thresholds for AI building would be like counting the lines of code for software building. Regulation based on this will most likely be easily fooled, create hurdles/worries for companies to compete on bigger models (so concentration of power) and slow…
I agree with @rao2z on all of his points.
91
The debate on AI risks and the need for legislation is a complex one, and my own position is not exactly identical to anything any of the key players have already publicized. I will however list some points of concurrence. (Not that anyone asked...)
Another comment to a tweet from @tegmark asking me (again) why I think AI won't kill us all, and claiming that the question of existential risk is disconnected from the question of open source AI.
194
@tegmark
Excellent #SundayHarangue from Rao: Auto-Regressive LLMs are "powerful cognitive crutches" but are nowhere near human intelligence. They can do things that humans suck at. But they really suck at planning, reasoning, logic, understanding the physical world, etc. All things that…
265
Why we should view LLMs as powerful Cognitive Orthotics rather than alternatives for human intelligence
Since many AI doom scenarios sound like science fiction, let me ask this: Could the SkyNet take-over in Terminator have happened if SkyNet had been open source?
A defense of open R&D in AI, posted as a comment to a question by @tegmark.
742
@tegmark
Awesome study led by @AlisonGopnik on the capabilities of LLMs from the psychologists' standpoint. Quote: "Large language models such as ChatGPT are valuable cultural technologies. They can imitate millions of human writers, summarize long texts, translate between languages,…
466
Our latest paper, in Perspectives on Psychological Science, with Eunice Yiu and Eliza Kosoy, articulating the idea of Large AI Models as cultural technologies at more length and comparing and contrasting with human children
Open source AI platforms aren't just cheaper. They are more efficient and more customizable.
655
New: OpenAI customers are eyeing other options to save AI compute costs. Companies like Salesforce and Wix say they're testing open source to replace OpenAI where feasible. Azure OpenAI Service is also winning some business away from pure-play OpenAI.
One thing we know is that if future AI systems are built on the same blueprint as current Auto-Regressive LLMs, they may become highly knowledgeable but they will still be dumb. They will still hallucinate, they will still be difficult to control, and they will still merely…
1.7K
New paper:
How to get started with Llama-2? Here is a comprehensive tutorial.
1.7K
Petition "freedom for kidnapped children" Signed by laureates of the Fields Medal, Abel Prize, Nevanlinna Prize, Breakthrough Prize, ACM Turing Award, and ACM Prize in Computing.
3.6K
Compute is all you need. For a given amount of compute, ViT and ConvNets perform the same. Quote from this DeepMind article: "Although the success of ViTs in computer vision is extremely impressive, in our view there is no strong evidence to suggest that pre-trained ViTs…
Another great piece by @Jake_Browning00 on the limitations of current AI systems. They are boring
342
Now that both Altman and Gates are acknowledging scaling up won't help these models improve, I think it is safe to pronounce a verdict on the current approach to generative AI: it's boring.
On This Day 6 years ago: the first general meeting of the Partnership on AI took place in Berlin. PAI funds studies and publishes guidelines on questions of AI ethics and safety. It just published a set of guidelines for the safe deployment of foundation models:…
147
The Partnership on AI is publishing guidance for safe foundation model deployment. There is a request for comments on the current version
121
I made that point during the Munk Debate in response to Max Tegmark (who is a physics professor at MIT): "Why aren't you worried that some students in your nuclear physics class could be baby Hitlers?"
153
I agree with
An interview of me in the Financial Times in which I explain the reasons for supporting open research in AI and open source AI platforms. I also explain why the widely-publicized prophecies of doom-by-AI are misguided and, in any case, highly premature.
981
A quick review of my talk in Munich late last month.
255
Professor Yann LeCun recently gave a talk titled "From Machine Learning to Autonomous Intelligence" The presentation covers an important topic: AI systems that can learn, remember, reason, plan, have common sense, yet are steerable and safe. Here is a short summary:
Simple truths.
390
Slowing down AI innovation with regulation is a very bad idea and will only have negative consequences - Traditionally, regulation helps large monopolies and big companies and cripples start-ups - US-centric regulation will lead to the US falling behind
Incidentally: - I am not anti-regulation. - regulating the deployment of *product* is useful to guarantee safety. - but I'm very much against regulating research and development. - and I'm very much against regulations that would make open source AI illegal or difficult.
I am extremely honored and grateful to be named the inaugural chair of the Courant Institute's Jacob T. Schwartz Chaired Professorship in Computer Science. Jack Schwartz co-founded the NYU computer science department in the late 1960s, as a spin-off of the Courant Institute of…
Anyone who thinks Auto-Regressive LLMs are getting close to human-level AI, or merely need to be scaled up to get there, *must* read this. AR-LLMs have very limited reasoning and planning abilities. This will not be fixed by making them bigger and training them on more data.
1.8K
So my
Which is why open research communities win.
313
@eladgil
Major fallacy here: "closed frontier labs" can't possibly stay simultaneously "closed" and "frontier" for very long. They must either open up or fall behind. In any case, they will lose whatever advantage they think they have. Why? Because people move around and ideas get around.
294
OS and open labs can scream & shout all they want into the void about their inventions & contributions but there is one objective fact: they have no absolute way to know if something already existed N years ago behind the wall of closed frontier labs.
Excellent piece by @Noahpinion on techno-optimism. So many good points. Like Noah, I'm a humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism. Quote: "Techno-optimism is thus much more than an argument about the institutions of today or the…
610
The enemy of techno-optimism isn’t sustainability; it’s short-termism. Humanity should not build new things to pump up quarterly earnings; we should build them so that our descendants, in whatever form they come, will own the worlds and the stars.
A reminder that people can disagree about important things but still be good friends.
9K
Chatting with @timoreilly today he said: "books are a user interface to knowledge." That's what AI assistants are poised to become: "AI assistants will be a better user interface to knowledge." But the same way books carry culture, AI will also become vehicles for culture.…
Habitat 3.0 is released! Virtual environment with humanoid sims under human control, robots, realistic indoor environments, physics, ... Paper, open-source code, and datasets.
443
Announcing Habitat 3.0, simulating humanoid avatars and robots collaborating! - Humanoid sim: diverse skinned avatars - Human-in-the-loop control: mouse/keyboard or VR - Tasks: social navigation and rearrangement Over 1,000 steps per second on 1 GPU for large-scale learning!
Yes, important to keep in mind that the very reason for the accelerated progress in AI is the fast and open exchange of ideas, publications, datasets, and code. Whoever embraces this open exchange can be part of the leading pack. Whoever cuts themselves off from it inevitably falls…
849
Important to keep in mind that everyone in AI (including OAI) uses and benefits from open-source and wouldn’t be here without it. It’s the tide that lifts all boats! twitter.com/Mascobot/statu…
Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures. Who wants to build the platform for this?
2.3K
Open LLMs need to get organized and co-ordinated about sharing human feedback. It's the weakest link with Open LLMs right now. They don't have 100m+ people giving feedback like in the case of OpenAI/Anthropic/Bard. They can always progress with a Terms-of-Service arbitrage, but… twitter.com/bindureddy/sta…
Collama: a quick hack from @varun_mathur https://collama.ai/varun/ethereum
40
Collama: a quick implementation of the idea of Wikipedia-style crowd-sourced fine-tuning for LLMs.
577
Wikipedia-style Community LLMs tl;dr: simple experiment showcasing why millions of Wikipedia-style community LLMs will one day provide a better user experience than the biggest closed AI startups for most use cases. it is hopeless to compete against "us"..
A good counterpoint to the libertarian aspects of Marc @pmarca Andreessen's Techno-Optimist Manifesto. One can believe in the intrinsic value of technological progress and economic growth, while not believing in libertarianism and knowing that markets need to be regulated to be… https://twitter.com/Jake_Browning00/status/1714758250829078885…
576
Jake rewrote his post. Here is the new one:
42
I hope that the good social technologies we've developed--and that Andreessen largely lumps into the "enemies"--will help limit some of the dangers of Techno-Optimism. (I don't know what happened to the first version of this post.)
A new paper by @Jake_Browning00 and me that just appeared in Artificial Intelligence. It discusses the (in)validity of the "propositional picture of semantic knowledge" according to which all knowledge is expressible in language. "Cognitive scientists and AI researchers now…
470
Why did we think the Winograd Schema Challenge would be a definitive test of common sense? And can there be a definitive test of common sense in language? A piece
Repeat after me: AI is not a weapon. With or without open source AI, China is not far behind. Like many other countries, they want control over AI technology and must develop their own homegrown stack.
1.3K
Stanford's AI model openness ranking likely in reverse order of the competency of the model! Naive to ask private companies to disclose their secrets or investment will decline and we will help China. Would we disclose all details of the Manhattan Project?…
Congratulations @julienmairal!
26
Discover the laureates of the
Great work from FAIR-Paris in collaboration with ENS/PSL on reconstructing visual and speech inputs from magnetoencephalography signals.
249
Today we're sharing new research that brings us one step closer to real-time decoding of image perception from brain activity. Using MEG, this AI system can decode the unfolding of visual representations in the brain with an unprecedented temporal resolution. More details
"Democratized access [to open-source AI models] is a feature, not a bug."
349
I got to participate in a group discussion with leading policy thinkers/representatives/gov representatives on AI and open source regulation in the UK leading up to the summit today. Was a great experience and want to thank the people who put it together/invited me. The…
Knee-jerk panics about the deployment of new technology are nothing new. Even from scientists calling for bans.
506
The EU thought GMOs were an existential risk. Putting a moratorium on them, then massively over-regulating. They’re only now walking some of it back.
Good piece. One doesn't have to agree with all the details to subscribe to the main tenets.
456
The Techno-Optimist Manifesto -- please read and Ask Me Anything! Post questions as replies to this xeet.
Yay! Congrats @LerrelPinto
27
I’m honored to receive the 2023 Packard Fellowship! With this funding, our lab
Moral panic.
440
Congress discussing in 1993 how a pixelated Street Fighter game would cause violence and should be banned/regulated. This was a big regulation topic at the time: twitter.com/PessimistsArc/…
Exactly.
181
The AI models = nuclear weapons analogy is terrible for a lot of reasons, but most importantly it heavily misleads policy-makers. Nuke-inspired regulation won't prevent people from building powerful AIs, but it will protect tech companies from competition
So am I. Nuclear weapons are designed to wipe out entire cities. AI is designed to amplify human intelligence. One destroys complexity. The other increases it. Two things could not be more different.
1.3K
I'm *really* tired of the AI models = nuclear weapons analogy
As I pointed out before: AI alignment is going to be an iterative process of refinement. There will be huge pressures to get rid of unaligned AI systems from customers and regulators.
423
Dog is highly aligned and human-interpretable intelligence. This was achieved through selection pressure (unaligned dogs were put down). Same will happen via market selective pressures on AIs. They can and will be domesticated by the market. twitter.com/simongerman600…
Discussions of AI safety need to be rational and based on science.
290
Things we need to get past in the AI safety discussion to make progress: - Circular arguments/tautologies: AGI definitionally being the feared end goal is a substance-free position. - Bad/incomplete inductive arguments: I've yet to find an inductive step that has any rigor…
Periodic reminder.
2.5K
Language is an imperfect, incomplete, and low-bandwidth serialization protocol for the internal data structures we call thoughts.
A more developed argument in this piece by @Jake_Browning00 and me.
106
Open source AI models will soon become unbeatable. Period.
3.4K
The pace of open-source LLM innovation and research is breath-taking I suspect that open-source will soon become unbeatable for anyone except maybe OpenAI Here's why - Open-source community is
The future will consist of - a small number of open source inference code, - free pre-trained base models, and - crowd-sourced fine-tuned models, on top of which customized (possibly closed source) products will be built.
To those who say: "but closed source products have billions of investments behind them" I reply: In the mid-90s, Microsoft and Sun Microsystems battled to provide the software infrastructure of the internet, MS with WinNT+IIS+ASP+IE, Sun with Solaris+httpd+Java+Netscape. They…
The original version of this slide is from January 2016.
393
@infoxiao
Do LLMs perform reasoning or approximate retrieval? There is a continuum between the two, and Auto-Regressive LLMs are largely on the retrieval side.
731
People won't be so easily taken by "LLM ingenuity" if they stop thinking that they have a clue of what really is on the web..
Thanks, @alex_peys
63
lots of us are too busy working on this stuff to get in these debates on twitter, but yann is completely right here (just like he was with the cake thing, and the neural net thing, and and and) twitter.com/ylecun/status/…
The cake thing.
44
Cool.
214
Introducing Universal Simulator (UniSim), an interactive simulator of the real world. Interactive website:
Giving a keynote at MICCAI (26th International Conference on Medical Image Computing and Computer Assisted Intervention) in about 30 minutes. https://conferences.miccai.org/2023/en/KEYNOTES.html…
176
The heretofore silent majority of AI scientists and engineers who - do not believe in AI extinction scenarios or - believe we have agency in making AI powerful, reliable, and safe and - think the best way to do so is through open source AI platforms NEED TO SPEAK UP !
2.1K
@sriramk
Even if you are not an Open Source Absolutist, it is hard to overestimate how much value OSS has added to the world. This goes for AI as for everything else.
864
Be an Open Source Absolutist! It is hard to overstate how much value Open Source Software has added to the world, and how broadly empowering it is. Operating systems, development tools, core libraries, and critical applications – a great many of the software tools used by the…
Glad to be an advisor to this new initiative in AI for Science.
539
I'm super excited to share a new initiative I am a part of! Announcing: Polymathic AI
The public in North America and the EU (not the rest of the world) is already scared enough about AI, even without mentioning the specter of existential risk. As you know, the opinion of the *vast* majority of AI scientists and engineers (me included) is that the whole debate…
684
I think a big problem w getting the public to care about AI risk is that it’s just a *huge* emotional ask — for someone to really consider that there’s a solid chance that the whole world’s about to end. People will instinctively resist it tooth-and-nail.
A review of a new book by Scottish-born Princeton economist and Nobel Laureate Angus Deaton: "Economics in America: An Immigrant Economist Explores the Land of Inequality" Quote: "The [US] political system is more responsive to the needs of those who finance it than to its…
It's not a controversial opinion. It's just wrong. People who say a particular kind of music or painting is "formless and meaningless" simply do not understand its structure. They simply cannot fathom how other people can understand and appreciate the structure they can't grok.
1.4K
Controversial opinion: Jazz is simply failed music. It sounds like a child scribbling all over the page. Formless, meaningless, and unpleasant. Just a bunch of random noise that people trick themselves into liking, while they become numb to true musical quality in the process.…
In case you're wondering: this is tenor saxophonist Dexter Gordon.
Zing!
209
Yes. And it’s called saving lives. twitter.com/elonmusk/statu…
France, obviously. Despite what many French residents falsely believe, inequalities have not substantially increased in France in the last 30 years. And inequalities decreased dramatically before that. In the US, Reaganomics in 1980, with lower taxes on high incomes, caused…
491
Now, what country is this (red color)? It is a big and important country. (Answer in the next tweet.)
There are about 5000 gods and divinities that humans have invented over millennia and are (still) worshiping. Monotheists believe in one of them and do not believe in the remaining 4999. The difference between the number of gods monotheists and atheists do not believe in is a…
If you value intelligence above all qualities, you're gonna have a *great* time.
871
if you value intelligence above all other human qualities, you’re gonna have a bad time
Llama impact grants!
110
Applications for Llama Impact Grants are now open! Today until Nov 15, you can submit a proposal for using Llama 2 to address challenges across education, environment & open innovation for a chance to be awarded a $500K grant. Details + application
Mostly good advice.
63
Real rules to survive in New York: • Wear comfortable walking shoes. • Walk quickly, otherwise we'll despise you. • Walking 3 or 4 abreast is a hate crime. • Wear dark colors. You and the stains will blend in. • Pedestrian "Don't Walk" signs are only for decoration. •…
Good point.
62
Magnetic fields carry angular momentum. Consider charges sitting along a circular wire. Apply a voltage and they start moving counter-clockwise. There's now angular momentum. But momentum is conserved, and started off zero - where's the equal but opposite amount? In the field.
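A sketch of where the "missing" momentum lives, using standard classical electromagnetism (not part of the original thread): the electromagnetic field carries a momentum density, and integrating its moment over all space gives the field angular momentum that balances the mechanical angular momentum of the charges.

```latex
% Momentum density stored in the electromagnetic field
\mathbf{g} = \epsilon_0 \, (\mathbf{E} \times \mathbf{B})

% Angular momentum of the field (total mechanical + field angular momentum is conserved)
\mathbf{L}_{\mathrm{field}} = \epsilon_0 \int \mathbf{r} \times (\mathbf{E} \times \mathbf{B}) \; d^3 r
```

As the charges spin up, the fields rearrange so that \(\mathbf{L}_{\mathrm{field}}\) is equal and opposite to the mechanical angular momentum, keeping the total at zero.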
Kaiser Permanente @aboutKP at the forefront of generative AI deployment in healthcare with Nabla's Copilot.
60
"nerds are more often than not largely responsible for winning wars." Largely true, even if almost every war movie gives way more importance to heroism, sacrifice, and duty than to technology.
503
@JoeyMannarinoUS
All physical laws are time reversible (under CPT symmetry). Yet it doesn't look like the universe, or any sufficiently complex system, follows a reversible evolution.
828
What's the most mind-blowing fact you know about the universe?
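A sketch of the standard statistical-mechanics resolution of this tension (my gloss, not from the thread): the microscopic laws are reversible, but macroscopic irreversibility emerges statistically from coarse-graining over microstates, starting from a low-entropy initial condition.

```latex
% Boltzmann entropy: log of the number of microstates compatible with a macrostate
S = k_B \ln \Omega

% Second law for an isolated system: statistical, overwhelmingly likely, not exact
\frac{dS}{dt} \ge 0
```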
Some folks: "Open source AI must be outlawed." Open source AI startup community in Paris:
1.5K
If this is 7:40pm what happens at 10?
Speech-from-MEG training code and data are available.
47
Our work on decoding is now published in Nature Machine Intelligence! We release the code to reproduce our results (and improve on them) based on public datasets (175 subjects, 160+ hours of brain recordings, EEG and MEG)
Decoding speech from magnetoencephalography signals (MEG). From FAIR-Paris.
210
`Decoding speech perception from non-invasive brain recordings`, led by the one and only
Given the prevalence of simultaneously clueless and misleading comments about various serious topics on X/Twitter (e.g. AI, vaccines,..), I feel the need for a new hashtag: #YHNIWYATA : You Have No Idea What You Are Talking About. Or perhaps the less polite #YHNIWTFYATA.
Check out this thought-provoking piece by Léon Bottou and @bschoelkopf entitled "Borges and AI".
71
Léon and I have been hatching this out for over a year. Re-visiting Borges in this context profoundly impressed me. twitter.com/KyleCranmer/st…