Not a bad set of tenets.
112
Okay I’ve had enough extremism: I’m founding an AI Centrist Party. Tenets: * exponentially improving AI isn’t right around the corner * LLMs are a massive step in AI capability for any good definition of that word * worrying about AI risk is reasonable * retweeting Yud is not
How Effective Altruism fell down a kind of purity spiral.
370
Effective altruist now seems synonymous with AI doomer, but it wasn’t always that way. My own experience & why I think a lot of it is bullshit AI doomerism now: I was an active member - went to multiple of their global summits and still mostly donate to causes that they… twitter.com/sapinker/statu…
Yuandong is one of several folks who have been working on planning at FAIR. He explains the difference in applicability between A* (search for shortest path in a graph) and MCTS (search in an exponentially growing tree).
668
How likely is the hypothesis that Q* = Q-learning + A*? From my past experience on OpenGo (reproduction of AlphaZero), A* can be regarded as a deterministic version of MCTS with value (i.e., heuristic) function Q only. This should be suitable for tasks in which the state is easy…
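A minimal sketch of the relationship the tweet describes (graph, names, and heuristic are illustrative, not from any lab's code): A* is deterministic best-first search that always expands the node minimizing f = g + h, with the heuristic h playing the role of the value function Q, where MCTS would instead sample stochastic rollouts through an exponentially growing tree.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search expanding the node with minimal f = g + h.

    neighbors(n) yields (next_node, step_cost); h(n) is the heuristic,
    i.e. the 'value function' in the tweet's analogy with MCTS.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy problem: shortest path on a 3x3 grid, moving only right or down.
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x, y + 1)):
        if q[0] <= 2 and q[1] <= 2:
            yield q, 1

manhattan = lambda p: (2 - p[0]) + (2 - p[1])  # admissible heuristic
cost, path = a_star((0, 0), (2, 2), neighbors, manhattan)
print(cost)  # shortest path length: 4
```

Because the expansion order is fully determined by f, the search is deterministic; replacing the pop with sampled rollouts and backed-up value estimates is what turns this skeleton into MCTS.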
Exactly. Yuandong has been working on various approaches to planning at FAIR.
147
I partly disagree that AGI can be just solved by scaling up with synthetic data. The reason why search is powerful, is that for properly designed environment, it will create infinitely new patterns for the model to learn & adapt. However, whether learning such new patterns… twitter.com/DrJimFan/statu…
Built with MusicGen and Demucs, both open source packages from Meta-FAIR. https://huggingface.co/spaces/facebook/MusicGen… https://github.com/facebookresearch/demucs…
329
Introducing Remix - a tool made with Nendo that generates remixes of any song in any style. Upload a song or YT video & have fun! (For research purposes only. Usage is at your own risk) Colab:
Video of my public interview with Brian Greene at the World Science Festival in NYC a few weeks back. It is followed by a debate about "AGI" with @SebastienBubeck It ends with a debate about AI safety with Sébastien and Tristan Harris. I find it strange that Tristan uses the…
The video of our AGI debate premieres 75 minutes from now.
242
Join us on YouTube at 1pm PT/4pm ET today for the premiere of our "debate" with
Please ignore the deluge of complete nonsense about Q*. One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published…
Or even just a working cat bot.
813
We need a moratorium on talking about looming AGI until we have at least a working housebot.
The dystopian fantasy of machines taking over humanity is so old that it's a cliché.
1.1K
'Lowly Machines to Overtake Man, Rule Universe' (1948)
Current LLMs are trained on text data that would take 20,000 years for a human to read. And still, they haven't learned that if A is the same as B, then B is the same as A. Humans get a lot smarter than that with comparatively little training data. Even corvids, parrots, dogs,…
8K
@DrJimFan
GAIA: A benchmark for general AI assistants, by a team from Meta-FAIR, Meta-GenAI, HuggingFace, and AutoGPT. Current Auto-Regressive LLMs don't do very well.
1.2K
GAIA: a benchmark for General AI Assistants paper page:
Interesting list.
144
The top 15 most-liked organizations on
Periodic reminder.
274
@HistedLab
One of the most important reasons why AI infrastructure models must be open source, and their training/fine-tuning must be crowd-sourced.
1.3K
An interesting aspect of this discussion is the fact that LLMs will soon start affecting our thoughts, beliefs, mental & linguistic habits, and culture. The idea that we could select a handful of "trustworthy" institutions with the "correct" set of values and beliefs to shape LLM… twitter.com/karpathy/statu…
System 2 Attention. Making LLM reason. From @jaseweston and @tesatory at FAIR.
1K
New paper!
One might run into this poster at various airports around the world.
782
Pioneer of artificial intelligence. Turing Award laureate.
To everyone who lives in a civilized country and thinks their health insurance system sucks: Be thankful you don't live in the United States.
1.8K
A thread about our depraved healthcare system, and a plea. In 2021, my friend Carole was diagnosed with terminal stage four cancer. A single parent with two kids, she had just turned 43. In 2017, she had watched her brother Chris die of the same cancer. He too was in his 40s.
Research thrives on stability. When your horizon is 3, 5, or 10 years, you need stability. There are very, very few stable and sustainable models for research. I'm talking about research, not product development.
There is at least one industry research lab where the leadership believes that (super)human-level AI: - is attainable - is a scientific research question, not just a question of more compute and more data. - is not "just around the corner". It will take a while. - is not an…
Another open source LLM from France. They keep piling up.
782
Big news from
Still lots of conceptual progress to be made in AI. This is a scientific question. Not merely a question of more compute and more data.
1.1K
“Machine learning sucks!” Keynote by
Governments should be careful to listen to people who know what they are talking about, and be skeptical of people who don't.
646
Politico reports Tristan Harris helped inspire Biden’s executive order on AI. (
Science is not a zero-sum game. With more scientists, research accelerates. But no one has a monopoly on good ideas. So, with more and faster communication, research accelerates further.
519
Science is not a zero-sum game. With more knowledge, we can do more things, with less. Science is not a zero-sum game. With more funding, more labs, we’ll train more students, and grow the pie for all. Science is not a zero-sum game. Publish, I can build on your breakthrough.
Pretty much the most important question in the debate about short-term risks of LLMs. No clear evidence so far.
341
Is there any known case of anyone accessing “harmful capabilities” of an LLM that didn’t consist of knowledge already freely available and clearly described in documents on the open web? Is the fear that we are basically just getting what we would already have if Google / Bing…
OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O.
221
Kyutai: a new non-profit AI research lab based in Paris dedicated to open science. The founding members are top notch.
763
Announcing Kyutai: a non-profit AI lab dedicated to open science. Thanks to Xavier Niel (
Translation: Auto-Regressive LLMs scaling is giving diminishing returns. As I've said repeatedly, a new architecture will emerge for the next leap, perhaps along the lines of the Objective-Driven AI I've been proposing. But deep learning will still be the foundation. No wall.
1.2K
Translation: “deep learning is hitting a wall” twitter.com/burny_tech/sta…
Cool image editing with Emu-Edit from Meta.
255
Look at the fine image editing control. EmuEdit is pretty cool!
Weird doomer argument: "society should not deploy <technology-X> because *I* don't fully understand it, and I believe no one else does, either."
Exactly,
33
@BertuzLuca
Emu video & Emu edit: video generation and image animation.
290
Today we’re sharing two new advances in our generative AI research: Emu Video & Emu Edit. Details
Mistral's official stance on the EU AI Act. TL;DR: regulate products, don't regulate technology. Promote open source foundational models. At the very least, don't regulate them in ways that favor incumbents.
489
We have heard many extrapolations of Mistral AI’s position on the AI Act, so I’ll clarify. In its early form, the AI Act was a text about product safety. Product safety laws are beneficial to consumers. Poorly designed use of automated decision-making systems can cause…
That's pretty insane!
1.2K
Hack of the day: Llama on a microcontroller
Truth hurts. But not as much as bullets.
762
The House GOP just voted to prevent the CDC from studying gun deaths and injuries. Hmm...I wonder why?
There isn't a shred of evidence that AI poses a paradigmatic shift in safety. It's all fantasies fueled by popular science fiction culture suggesting some sort of imaginary but terrible, terrible catastrophic risk.
893
There isn't a shred of evidence that AI poses a paradigmatic shift in safety that requires new regulation. And we should not draft charters or policies that suggest differently.
Llama as a service on Azure. Mistral as a service on Azure. Thrilled to see that Microsoft and @satyanadella are supporting open source AI platforms for their customers.
815
Copilot will be the new UI for both the world's knowledge and your organization's knowledge, but most importantly, it will be your agent that helps you act on that knowledge. Here are highlights from my keynote today at
Lots more open source AI goodies here. Including FAISS, DINOv2, Detectron2, Segment Anything, CodeLlama, Nougat, PyTorch-BigGraph, fastText, ... https://github.com/facebookresearch…
796
Zuck has so far open sourced and given the world: ◆ React ◆ React Native ◆ PyTorch ◆ Llama ◆ GraphQL ◆ Flow ◆ Jest ◆ Relay ◆ HHVM / Hack ◆ Yoga ◆ Hermes ◆ RocksDB ◆ Zstandard Goated.
"Is France the Next Open-Source AI Capital?" Mais oui !
912
Let's do it
Want to use AI for good? How about using AI to help discover new chemical compounds that would solve the energy storage problem? If energy storage were solved, we could cover a small desert with solar panels and power the world.
531
Announcing the Open Catalyst Intro video series! Want to learn more about how we can help mitigate climate change through advances in AI and chemistry? We’re creating a video series to help AI researchers get up to speed in this exciting research area!
American exceptionalism: Dying from preventable causes and spending a lot for it.
229
This chart shows a comparison of health systems performances and resources. The most effective are the closest to the origin, with lower mortality rate from avoidable causes and lower expenditure per capita. [
The US economy is doing amazingly well. But half the population has been brainwashed into thinking that things are bad.
1.6K
Labor productivity up 4.7% last quarter. Real wages at record highs. Unemployment near record lows. Now headline inflation coming in at 0% this month. The US economy is absolutely CRUSHING it. twitter.com/jasonfurman/st…
The whole story of Galactica, told by its first author @rosstaylor90 . You know the open source mantra "release early, release often"? When it comes to AI, one should add "yes, but be prepared to ignore ridiculous prophecies of doom from Twitter mobs."
297
I am the first author of the Galactica paper and have been quiet about it for a year. Maybe I will write a blog post talking about what actually happened, but if you want the TLDR: 1. Galactica was a base model trained on scientific literature and modalities. 2. We approached… twitter.com/sharongoldman/…
This is *not* "Big Tech" versus The People or whatever. This is open source AI versus closed and proprietary AI. On the one hand, you have Mistral, Aleph, HuggingFace, Meta, IBM, and the entire startup ecosystem arguing for open source AI foundation models. On the other hand,…
1.5K
This last-second attempt by big tech to exempt the future of AI (LLMs) would make the EU AI Act the laughing-stock of the world, not worth the paper it’s printed on. After years of hard work, the EU has the opportunity to lead a world waking up to the need to regulate these… twitter.com/BertuzLuca/sta…
Galactica, the LLM for scientists from Meta, was released a couple of weeks before ChatGPT but was taken down after 3 days. It was murdered by a ravenous Twitter mob. The mob claimed that what we now call LLM hallucinations was going to destroy the scientific publication system.…
2.9K
One year ago — 2 weeks before
A piece by MBZUAI president @ericxing in WEF Agenda explaining why worries about AI existential risks are baseless.
84
The dystopian AI predictions owe more to sensationalism than scientific substance. Read my latest article on World Economic Forum's Agenda where I explain why AIXrisk is baseless, and AI is a catalyst for human advancement rather than a harbinger of doom.
for cultural_phenomenon in ["novels", "waltz", "cinema", "jazz", "radio", "TV", "comics", "video games", "D&D", "internet", "social networks", "chatbots"]: print(cultural_phenomenon, "is a waste of time that destroys the mind of the youth.")
229
1849 article on the vice of novel reading: - "The natural affections of the soul become perverted." - "Vice is often represented as virtue; wisdom and discretion as folly." - "Time is wasted, the taste is vitiated and corrupted; the improvement of the mind is prevented"
Giving a talk at the AI New Horizon Symposium in Hong Kong Saturday Nov 18. Organized by @pascalefung , @harryshum , and Nancy Ip from HKUST. https://ai-newhorizons2023.com
106
Exactly.
268
The most effective risk mitigators are determinate optimists. They believe in solutions. They’re often drowned out by the least effective risk mitigators: the indeterminate pessimists. Why? Because they’re too busy building solutions to write op-eds and do Ted talks.
The scientists and engineers who have made turbojets as safe and reliable as they are today have been determined optimists.
Beware of testing on the training set.
1.4K
The famous "Chihuahua or Muffin" problem in computer vision is considered solved by GPT-4V on social media. But really? The answer is NO. GPT-4V cannot reason well about the same images in the original "Chihuahua or Muffin" grid when they are in a different layout. I…
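The pitfall behind "beware of testing on the training set" is easy to reproduce: a 1-nearest-neighbor "model" memorizes its training data, so evaluating on those same points reports perfect accuracy while held-out accuracy is far lower. A toy illustration on synthetic data (all names and the noise rate are made up for the example):

```python
import random

random.seed(0)

def one_nn_predict(train, x):
    """1-nearest-neighbor: copy the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Synthetic 1-D data: label = 1 iff x > 0, flipped 30% of the time.
def sample(n):
    pts = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = int(x > 0)
        if random.random() < 0.3:
            y = 1 - y
        pts.append((x, y))
    return pts

train, test = sample(200), sample(200)

acc = lambda data: sum(one_nn_predict(train, x) == y for x, y in data) / len(data)
print(f"train accuracy: {acc(train):.2f}")  # 1.00 -- each point's nearest neighbor is itself
print(f"test  accuracy: {acc(test):.2f}")   # much lower on held-out points
```

The train accuracy of 1.00 is a measurement artifact, not evidence of generalization; the same logic applies when a benchmark's images (or a rearranged version of them) have leaked into a model's training corpus.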
LLMs as role-playing engines.
326
My new
An X/Twitter fake account named "yylecun" is impersonating me. It DMs people to peddle cryptocrap. I flagged it but it's still around.
Nick Bostrom attempting to keep the AI doomer movement he jumpstarted from falling into a purity spiral and extremist oblivion. AI doomerism is doomed.
721
Here is Nick Bostrom admitting that AI panic feels out of control right now, "like a wrecking ball" that could "destroy the future"
The original AI doomers are leaving the sinking ship of AI doomerism.
692
On Bostrom: Yudkowsky must feel lonely now.
It was great to meet science fiction writer Qiufan "Stanley" Chen before our panel on AI at the Paris Peace Forum.
308
A panel on AI at the Paris Peace Forum today. Starting at 42'00, I plot the future of AI: all of our interactions with the digital world will be mediated by AI assistants that will eventually become smarter than us. Because they will become a common infrastructure containing all…
Very strange anti-openness argument from Microsoft President Brad Smith in response to my open source AI platform advocacy. I thought Microsoft had long abandoned its anti open source stance. Doesn't Azure run on Linux?
1.1K
"It all depends on your definition of open" -- Brad Smith, Microsoft
A curious response from Microsoft President Brad Smith, in response to my advocacy for open source AI platforms at the Paris Peace Forum today. He claimed Llama isn't open because Meta is controlled by one person while OpenAI is trustworthy because it is owned by a non-profit.
764
Outrageous!! Massive misrepresentation by Brad Smith (MSFT) claiming ChatGPT is as open as Llama (among other non-truths). It's sad to see MSFT become the enemy of open source again. They did it during the OS/browser wars. And they're doing it again.
Interesting development.
168
There is nothing better than some Friday drama to close such a hectic week. A technical meeting on the
The full story of the open-access Deep Learning course that Alfredo Canziani and I have been teaching together and refining over the last several years.
1.1K
The fears of AI-fueled existential risks are based on flawed ideas. A WSJ piece by Princeton's @random_walker and @sayashk .
54
@sayashk
An article about the bubbling AI startup ecosystem in Paris and the important role FAIR-Paris played in seeding it.
313
Big AI industry event at Station F in Paris, Friday Nov 17. Keynote Speakers: Eric Schmidt (Schmidt Futures), Jensen Huang (CEO Nvidia), Xavier Niel (Iliad/Scaleway). Speakers include: Thomas Scialom (Meta, of Llama fame), Arthur Mensch (CEO Mistral, of Mistral-7B fame), Jason…
NYU is clearly where AI professors want to be. Open faculty positions in AI at NYU.
223
We have open-rank faculty positions in AI at
Knowledge is not understanding. Reminds me of the folks who have tried wearing prism glasses that invert the world. They get used to it after a few weeks of practice. Then they get all confused when they remove the glasses. But it only takes a short time to recover.
328
@ylecun
Open source AI is the way to go! Proud to see @huggingface , @scaleway , & @meta joining to launch an AI startup accelerator at Station F. This will help concretize our common vision of an open and collaborative AI ecosystem. More from TechCrunch:
1.4K
Open source AI is the way to go! I am proud to see @huggingface , @scaleway , and @meta joining their expertise to launch a startup accelerator program at Station F, thereby materializing the vision of an open and collaborative AI ecosystem!…
LLMs don't even know how to climb stairs.
791
@boazbaraktcs
The phrase "whole career" is particularly accurate. 1. Pick a difficult technology area. Anything: Flying cars, quantum computing, interstellar travel, nuclear fusion, hypersonic flight, AI, whatever. 2. Complain that "current approaches are hitting a wall and will never work."…
1.9K
It's amazing that one can make a whole career of only spouting pessimistic interpretations of a field that is clearly advancing to amazing capabilities.
My wife has an MNIST-like suit (here with @pascalefung and @frossi_t ).
208
High-performance open-source LLMs popping up around the world. Does anyone believe regulating AI R&D can be effective? (Assuming it's useful. Which it is not).
727
http://01.AI
Well, technically, my first paper on Joint Embedding Architectures (AKA Siamese nets) is from NIPS 1994. But that paper and many subsequent works use sample-contrastive learning and no predictor. The JEPA idea is trained with non-sample-contrastive losses (Barlow Twins, VICReg, MCR2) or…
195
In 2022 Yann LeCun submitted a paper on joint embedding predictive architectures for learning representations. How many years will it take for people to finally warm up to his ideas?
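As a rough sketch of what "non-sample-contrastive" means here: VICReg-style criteria regularize the statistics of the embedding batch (variance and covariance terms) rather than contrasting each sample against negative samples. A numpy sketch of the three terms (coefficients, shapes, and data are illustrative, not the paper's exact recipe):

```python
import numpy as np

def vicreg_terms(za, zb, eps=1e-4):
    """VICReg-style loss terms for two batches of embeddings, shape (n, d).

    invariance: mean squared distance between the two views' embeddings;
    variance:   hinge pushing each embedding dimension's std above 1;
    covariance: penalty on off-diagonal covariance (decorrelates dimensions).
    No negative samples are needed -- the regularizers act on batch statistics.
    """
    n, d = za.shape
    invariance = np.mean((za - zb) ** 2)

    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))

    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    variance = var_term(za) + var_term(zb)
    covariance = cov_term(za) + cov_term(zb)
    return invariance, variance, covariance

rng = np.random.default_rng(0)
za = rng.normal(size=(256, 32))
zb = za + 0.1 * rng.normal(size=(256, 32))  # a slightly perturbed second view
inv, var, cov = vicreg_terms(za, zb)
```

The variance and covariance terms are what prevent the collapse that negatives prevent in sample-contrastive methods, which is why such losses scale independently of batch-pairing tricks.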
Don't confuse the approximate retrieval abilities of LLMs for actual reasoning abilities.
717
LLM's seem to fake both "solving" and "self-critiquing" solutions to reasoning problems by approximate retrieval. The two faking abilities just depend on different parts of the training data (..and disappear when such data is not present in the training corpus..) Our recent… twitter.com/rao2z/status/1…
In 1983, when the free world was starting to play with personal computers, the Ceaucescu regime in Romania required a license to own a typewriter. Obscurantism isn't just preventing people from accessing knowledge. It's also preventing people from exchanging knowledge.
922
https://newsletter.pessimistsarchive.org/p/remember-typewriter-licenses…
So much for the idea that self-driving cars are hitting walls. They are not hitting obstacles nearly as much as human drivers. Yes, Waymo cars only drive in some areas that are fully mapped, they use all kinds of sensors, and they have required a decade of data collection,…
1K
Waymo self-driving cars already appear to be much safer than human drivers: 1/4 the accident rate of the average person insured by SwissRe. And as
Any takers? https://t.co/4TobcpwIRR
493
An interview with @craigss on Eye on AI where I talk about world models, JEPA, the future of AI, and the necessity of open source AI platforms.
97
This is one of the most enlightening conversations I've had. Please listen. I think Yann is far ahead of the rest of the AI research community and I think world models will soon surpass LLMs in exhibiting true intelligence.
An interview in Computer Vision News: Self-Supervised Learning, JEPA, and the future of AI.
222
Hot off the press - Computer Vision News is out!
Hahahaha *pant pant* hahahaha
216
Collective amnesia about technophobia is ENDEMIC. ‘It is different this time’ is NOT an excuse to ignore history. Encryption panic threatened privacy. GMO panic killed millions through malnutrition. Nuclear panic delayed a critical carbon-free future.
AI regulatory capture recipe.
374
Regulatory capture in a nutshell: 1) Get the uninformed general public terrified of vague threats 2) Frame it as "for the greater good" 3) Employ true believers/well-intentioned extremists 4) Co-opt ^^ useful idiots 5) Gov eliminates competition for you!
Excellent thread by Stanford's @percyliang with a list of reasons why open source AI platforms are inherently *safer* than closed source ones.
395
Myth: open foundation models are antithetical to AI safety. Fact: open foundation models are critical for AI safety. Here are three reasons why:
YES! One can believe that LLMs can do amazing things and are useful, *without* believing they are anywhere close to human-level intelligence (even if they are superior to humans in a few tasks). One can believe that LLMs will give new tools to people with bad intention *without*…
793
You can be amazed at Generative AI (and LLMs), while still recognizing their limitations. You can be concerned about Generative AI (and LLMs) opening up new attack surfaces, while still not stressing about fake threats. You can resist both hype and doom. Imagine!
VentureBeat writes about the earth-shattering effect of the release of Llama and Llama-2 on the LLM landscape.
318
Obscurantism recedes with education.
203
Incredible progress over several centuries in providing some measure of formal education
Some people are conflating the risks associated with pandemics with the risks associated with AI progress. There is a fundamental difference. Pandemics are natural phenomena. We can take preventive measures before they happen and plan for corrective measures in case they do.…
The idea that knowledge is too dangerous for people to have access to it has a name: Obscurantism.
That's because finding cures for cancer with the help of AI will involve thousands of the best biomedical and computer scientists in the world, with lots of funding, lots of computing resources, lots of open information exchange, and lots of clinical trials. On the contrary,…
1.2K
@BlancheMinerva
I signed this open letter to President Biden to argue that Executive Orders and regulations of AI should promote open source AI platforms, and certainly not hinder them. Signed by execs from Meta, HF, Mistral, Shopify... as well as a number of prominent execs from the VC world.
1.8K
1/ We’ve submitted a letter to President Biden regarding the AI Executive Order and its potential for restricting open source AI. We believe strongly that open source is the only way to keep software safe and free from monopoly. Please help amplify.
I'm glad @arthurmensch and @nickclegg were present for the second day of the AI Safety Summit to defend the idea of open-source AI research, code, and models. (I was there the 1st day but wasn't invited to the 2nd day)
190
Leaving the AI Safety Summit after some constructive discussions today and yesterday. I voiced how open-source was today the safest way to develop AI, putting this transformative technology under the highest level of scrutiny. With many others, we recalled the enormous…
Very interesting thread from a computer security expert. He says: 1. Building something that works in the real world is harder than most armchair AI safety folks think. 2. There is a natural tendency to exaggerate the potential risks of your own work, because it makes you…
1.7K
I wish I had more time to chime into the AI doom debate, but here a very quick thread: 1) The one thing all AI doomers seem to assume is that almost all engineering problems can be solved by thinking, vs. experimentation. 2) Humanity has seen multiple individuals of ...
Hahaha
1K
Sigh
The acceleration of scientific progress with the help of AI is super exciting.
260
What do you think foundation models should do for your science topic?
Excellent interview with @bgurley in which he forcefully defends the idea of open source AI platforms.
197
I interviewed Bill Gurley
Good news everyone: The UK Deputy Prime Minister understands the benefits of open source AI platforms.
380
@vmanancourt
How long before regulators realize that search engines still produce more accurate information than LLMs? They both use the same public data. Search engines index it. LLMs summarize it approximately.
1.3K
I am mystified by the "Oh my god, bad hombres can find how to make weapons/viruses by querying LLMs" angst. These hombres didn't have access to Google until ChatGPT came along? After all, every bit of ChatGPT training data is also indexed by Google--no? I mean, I haven't been… twitter.com/rao2z/status/1…
Funny how my spell checker automatically changes "LLM" into "Llama" behind my back.
An extensive comparison of various architectures, various (pre-) training procedures, on various vision tasks. TL;DR: use ConvNext.
236
Excited to announce a large-scale comparison of pretrained vision backbones including SSL, vision-language models, and CNNs vs ViTs across diverse downstream tasks ranging from classification to detection to OOD generalization and more! NeurIPS 2023