Dataset Viewer (auto-converted to Parquet)

Columns: url (string, 52-124 chars), post_id (string, 17 chars), title (string, 2-248 chars), author (string, 2-49 chars), content (string, 22-295k chars), date (string, 376 distinct values)
https://www.lesswrong.com/posts/PpCohejuSHMhNGhDt/ny-state-has-a-new-frontier-model-bill-quick-takes
PpCohejuSHMhNGhDt
NY State Has a New Frontier Model Bill (+quick takes)
henryj
This morning, New York State Assemblyman Alex Bores introduced the Responsible AI Safety and Education Act. I’d like to think some of my previous advocacy was helpful here, but I know for a fact that I’m not the only one who supports legislation like this that only targets frontier labs and ensures the frontier gets pushed responsibly. I have more takes at the linked post, but the bill looks pretty good to me — the biggest difference from previous SB1047-flavored bills is that it addresses distillations of frontier models. Would love to hear people's thoughts.
2025-03-05
https://www.lesswrong.com/posts/Dzx5RiinkyiprzyJt/reply-to-vitalik-on-d-acc
Dzx5RiinkyiprzyJt
Reply to Vitalik on d/acc
xpostah
2025-03-05 Vitalik recently wrote an article on his ideology of d/acc. It is impressively similar to my own thinking, so I figured it deserved a reply. (Not claiming my thinking is completely original, btw; it has plenty of influences, including Vitalik himself.) Disclaimer: this is a quickly written note. I might change my mind on this stuff tomorrow for all I know. Two axes he identifies for differentially accelerating tech are:
- Big group versus small group - prioritise accelerating tech that can be deployed by a small group rather than by a big group.
- Offense versus defense - prioritise accelerating tech that can be deployed for defence rather than offense.
I think I generally get where this is coming from and find these important ideas. Some confusions from my side:
- Self-replication - I am generally in favour of building self-sustaining social systems over ones that are not. The success of d/acc ultimately relies on followers of Vitalik's d/acc a) building only tech that satisfies the d/acc criteria and b) providing social approval to people who build tech as per the d/acc criteria. For this system to be self-sustaining, point b) may need to be passed into the future long after all of d/acc's current followers (Vitalik included) are dead. Self-replicating culture is possible to build but extremely difficult. Religions are among the oldest self-replicating cultures. Ideas such as markets and democracy have also successfully self-replicated for multiple centuries now. I'm unsure if this idea of d/acc being present in culture is alone sufficient to ensure people in the year 2200 are still only building tech that satisfies the d/acc criteria.
- Often, culture is shaped by incentives IMO. If people of the future face incentives that make it difficult to follow d/acc, they might abandon it. It is hard for me to explain this idea briefly, but it is something I consider very important. I would rather leave future generations with incentives to do a Thing than just culture telling them to do a Thing.
- Terminal values - To me, the terminal values of all these galaxy-brained plans are likely preserving and growing timeless stuff like truth and empathy.
- Defensive tech gives truth a good defence, as information is easy to replicate but hard to destroy. As long as multiple hostile civilisations (or individuals) can coexist, it is likely at least one of them will preserve the truth for future generations.
- However, it is harder for me to see how any of these plans connect to empathy. Sure, totalitarianism and extinction can be bad for promoting empathy, but I think it requires more work than just preventing those outcomes. Increasing resource abundance and solving physical security seem useful here. Building defensive tech can increase physical security. In general, my thinking on which tech increases versus decreases human empathy is still quite confused.
- Takeoff may favour offence - Intelligence-enhancing technologies such as superintelligent AI, genetic engineering of humans to increase IQ, human brain connectome-mapping for whole brain emulation, etc. are so radically accelerating that I'm unsure if an offence-defence balance will be maintained throughout the takeoff. A small differential in intelligence leads to a very large differential in offensive power; it is possible offence just wins at some point while the takeoff is occurring.
- Entropy may favour offence - Historically, it has always been easier to blow up a region of space than to keep it in an ordered state and defend it against being blown up. Defence has typically been achieved, and continues to be achieved, in game-theoretic ways ("if you blow up my territory, I blow up yours") rather than in actual physical ways ("I can defend against your attack, and my defence costs less than your offence"). This seems somewhat inherent to physics itself, rather than specific to the branches of the tech tree humans have gone down as of 2025. Consider this across times and scales, from the very small and ancient (gunpowder beats metal locks) to the very big and futuristic (a bomb that can blow up the observable universe may have no defence).
- Maybe the big group is inherently favoured - What a big group can build is a strict superset of what a small group can build. Ensuring that all frontier tech can necessarily be built by small groups is hard. Often what is called open-source tech, free-market production, and so on is really centralised production with decentralised consumption. For example, solar panels can only be manufactured by a large group, but they can be traded and used by a small group. This is why a lot of tech that appears free-market-produced on the surface ultimately has supply chain bottlenecks if you try to build it from scratch in a new country. When I say "scratch" I actually mean scratch: dig your own iron and water out of the ground, with a fully independent supply chain.
2025-03-05
https://www.lesswrong.com/posts/XsYQyBgm8eKjd3Sqw/on-the-rationality-of-deterring-asi
XsYQyBgm8eKjd3Sqw
On the Rationality of Deterring ASI
dan-hendrycks
I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google), and Alexandr Wang (Scale AI). Below is the executive summary, followed by additional commentary highlighting portions of the paper which might be relevant to this collection of readers. Executive Summary Rapid advances in AI are poised to reshape nearly every aspect of society. Governments see in these dual-use AI systems a means to military dominance, stoking a bitter race to maximize AI capabilities. Voluntary industry pauses or attempts to exclude government involvement cannot change this reality. These systems that can streamline research and bolster economic output can also be turned to destructive ends, enabling rogue actors to engineer bioweapons and hack critical infrastructure. “Superintelligent” AI surpassing humans in nearly every domain would amount to the most precarious technological development since the nuclear bomb. Given the stakes, superintelligence is inescapably a matter of national security, and an effective superintelligence strategy should draw from a long history of national security policy. Deterrence A race for AI-enabled dominance endangers all states. If, in a hurried bid for superiority, one state inadvertently loses control of its AI, it jeopardizes the security of all states. Alternatively, if the same state succeeds in producing and controlling a highly capable AI, it likewise poses a direct threat to the survival of its peers. In either event, states seeking to secure their own survival may preventively sabotage competing AI projects. A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure. Thus, we are already approaching a dynamic similar to nuclear Mutual Assured Destruction (MAD), in which no power dares attempt an outright grab for strategic monopoly, as any such effort would invite a debilitating response. This strategic condition, which we refer to as Mutual Assured AI Malfunction (MAIM), represents a potentially stable deterrence regime, but maintaining it could require care. We outline measures to maintain the conditions for MAIM, including clearly communicated escalation ladders, placement of AI infrastructure far from population centers, transparency into datacenters, and more. Nonproliferation While deterrence through MAIM constrains the intent of superpowers, all nations have an interest in limiting the AI capabilities of terrorists. Drawing on nonproliferation precedents for weapons of mass destruction (WMDs), we outline three levers for achieving this. Mirroring measures to restrict key inputs to WMDs such as fissile material and chemical weapons precursors, compute security involves knowing reliably where high-end AI chips are and stemming smuggling to rogue actors. Monitoring shipments, tracking chip inventories, and employing security features like geolocation can help states account for them. States must prioritize information security to protect the model weights underlying the most advanced AI systems from falling into the hands of rogue actors, similar to controls on other sensitive information. Finally, akin to screening protocols for DNA synthesis services to detect and refuse orders for known pathogens, AI companies can be incentivized to implement technical AI security measures that detect and prevent malicious use. 
Competitiveness

Beyond securing their survival, states will have an interest in harnessing AI to bolster their competitiveness, as successful AI adoption will be a determining factor in national strength. Adopting AI-enabled weapons and carefully integrating AI into command and control is increasingly essential for military strength. Recognizing that economic security is crucial for national security, domestic capacity for manufacturing high-end AI chips will ensure a resilient supply and sidestep geopolitical risks in Taiwan. Robust legal frameworks governing AI agents can set basic constraints on their behavior that follow the spirit of existing law. Finally, governments can maintain political stability through measures that improve the quality of decision-making and combat the disruptive effects of rapid automation. By detecting and deterring destabilizing AI projects through intelligence operations and targeted disruption, restricting access to AI chips and capabilities for malicious actors through strict controls, and guaranteeing a stable AI supply chain by investing in domestic chip manufacturing, states can safeguard their security while opening the door to unprecedented prosperity.

Additional Commentary

There are several arguments from the paper worth highlighting.

Emphasize terrorist-proof security over superpower-proof security. Though there are benefits to state-proof security (SL5), this is a remarkably daunting task that is arguably much less crucial than reaching security against non-state actors and insider threats (SL3 or SL4).

Robust compute security is plausible and incentive-compatible. Treating high-end AI compute like fissile material or chemical weapons appears politically and technically feasible, and we can draw from humanity's prior experience managing WMD inputs for an effective playbook. Compute security interventions we recommend in the paper include:
- 24-hour monitoring of datacenters with tamper-evident cameras
- Physical inspections of datacenters
- Maintaining detailed records tracking chip ownership
- Stronger enforcement of export controls, larger penalties for noncompliance, and verified decommissioning of obsolete or inoperable chips
- Chip-level security measures, some of which can be implemented with firmware updates alone, circumventing the need for expensive chip redesigns

Additionally, states may demand certain transparency measures from each other's AI projects, using their ability to maim projects as leverage. AI-assisted transparency measures, which might involve AIs inspecting code and outputting single-bit compliance signals, might make states much more likely to agree to transparency measures. We believe technical work on these sorts of verification measures is worth aggressively pursuing as it becomes technologically feasible. We draw a distinction between compute security efforts that deny compute to terrorists, and efforts to prevent powerful nation-states from acquiring or using compute. The latter is worth considering, but our focus in the paper is on interventions which would prevent rogue states or non-state actors from acquiring large amounts of compute. Security of this type is incentive-compatible: powerful nations will want states to know where their high-end chips are, for the same reason that the US has an interest in Russia knowing where its fissile material is. Powerful nations can deter each other in various ways, but nonstate actors cannot be subject to robust deterrence.

“Superweapons” as a motivating concern for state competition in AI.
A controlled superintelligence would possibly grant its wielder a “strategic monopoly on power” over the world—complete power to shape its fate. Many readers here would already find this plausible, but it’s worth mentioning that this probably requires undermining mutual assured destruction (MAD), a high bar. Nonetheless, there are several ways MAD may be circumvented by a nation wielding superintelligence. Mirroring a recent paper, we mention several “superweapons”—feasible technological advances that would question nuclear deterrence between states. The prospect of AI-enabled superweapons helps convey why powerful states will not accept a large disadvantage in AI capabilities. Against An “AI Manhattan Project” A US “AI Manhattan Project” to build superintelligence is ill-advised because it would be destructively sabotaged by rival states. Its datacenters would be easy to detect and target. Many researchers at American labs have backgrounds and family in rival nations, and many others would fail to get a security clearance. The time and expense to secure sensitive information against dedicated superpowers would trade off heavily with American AI competitiveness, to say nothing of what it would cost to harden a frontier datacenter against physical attack. If they aren’t already, rival states will soon be fully aware of the existential threat that US achievement of superintelligence would pose for them (regardless of whether it is controlled), and they will not sit idly by if an actor is transparently aiming for a decisive strategic advantage, as discussed in [1, 2].
2025-03-05
https://www.lesswrong.com/posts/Wi5keDzktqmANL422/on-openai-s-safety-and-alignment-philosophy
Wi5keDzktqmANL422
On OpenAI’s Safety and Alignment Philosophy
Zvi
OpenAI’s recent transparency on safety and alignment strategies has been extremely helpful and refreshing. Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long term implications. The level of detail and openness here was extremely helpful. Now we have another document, How We Think About Safety and Alignment. Again, they have laid out their thinking crisply and in excellent detail. I have strong disagreements with several key assumptions underlying their position. Given those assumptions, they have produced a strong document – here I focus on my disagreements, so I want to be clear that mostly I think this document was very good. This post examines their key implicit and explicit assumptions. In particular, there are three core assumptions that I challenge: AI Will Remain a ‘Mere Tool.’ AI Will Not Disrupt ‘Economic Normal.’ AI Progress Will Not Involve Phase Changes. The first two are implicit. The third is explicit. OpenAI recognizes the questions and problems, but we have different answers. Those answers come with very different implications: OpenAI thinks AI can remain a ‘Mere Tool’ despite very strong capabilities if we make that a design goal. I do think this is possible in theory, but that there are extreme competitive pressures against this that make that almost impossible, short of actions no one involved is going to like. Maintaining human control is to try and engineer what is in important ways an ‘unnatural’ result. OpenAI expects massive economic disruptions, ‘more change than we’ve seen since the 1500s,’ but that still mostly assumes what I call ‘economic normal,’ where humans remain economic agents, private property and basic rights are largely preserved, and easy availability of oxygen, water, sunlight and similar resources continues. I think this is not a good assumption. OpenAI is expecting what is for practical purposes continuous progress without major sudden phase changes. I believe their assumptions on this are far too strong, and that there have already been a number of discontinuous points with phase changes, and we will have more coming, and also that with sufficient capabilities many current trends in AI behaviors would reverse, perhaps gradually but also perhaps suddenly. I’ll then cover their five (very good) core principles. I call upon the other major labs to offer similar documents. I’d love to see their takes. Table of Contents Core Implicit Assumption: AI Can Remain a ‘Mere Tool’. Core Implicit Assumption: ‘Economic Normal’. Core Assumption: No Abrupt Phase Changes. Implicit Assumption: Release of AI Models Only Matters Directly. On Their Taxonomy of Potential Risks. The Need for Coordination. Core Principles. Embracing Uncertainty. Defense in Depth. Methods That Scale. Human Control. Community Effort. Core Implicit Assumption: AI Can Remain a ‘Mere Tool’ This is the biggest crux. OpenAI thinks that this is a viable principle to aim for. I don’t see how. OpenAI imagines that AI will remain a ‘mere tool’ indefinitely. Humans will direct AIs, and AIs will do what the humans direct the AIs to do. Humans will remain in control, and remain ‘in the loop,’ and we can design to ensure that happens. When we model a future society, we need not imagine AIs, or collections of AIs, as if they were independent or competing economic agents or entities. 
Thus, our goal in AI safety and alignment is to ensure the tools do what we intend them to do, and to guard against human misuse in various forms, and to prepare society for technological disruption similar to what we’d face with other techs. Essentially, This Time is Not Different. Thus, the Model Spec and other such documents are plans for how to govern an AI assistant mere tool, assert a chain of command, and how to deal with the issues that come along with that. That’s a great thing to do for now, but as a long term outlook I think this is Obvious Nonsense. A sufficiently capable AI might (or might not) be something that a human operating it could choose to leave as a ‘mere tool.’ But even under optimistic assumptions, you’d have to sacrifice a lot of utility to do so. It does not have a goal? We can and will effectively give it a goal. It is not an agent? We can and will make it an agent. Human in the loop? We can and will take the human out of the loop once the human is not contributing to the loop. OpenAI builds AI agents and features in ways designed to keep humans in the loop and ensure the AIs are indeed mere tools, as suggested in their presentation at the Paris summit? They will face dramatic competitive pressures to compromise on that. People will do everything to undo those restrictions. What’s the plan? Thus, even if we solve alignment in every useful sense, and even if we know how to keep AIs as ‘mere tools’ if desired, we would rapidly face extreme competitive pressures towards gradual disempowerment, as AIs are given more and more autonomy and authority because that is the locally effective thing to do (and also others do it for the lulz, or unintentionally, or because they think AIs being in charge or ‘free’ is good). Until a plan tackles these questions seriously, you do not have a serious plan. Core Implicit Assumption: ‘Economic Normal’ What I mean by ‘Economic Normal’ is something rather forgiving – that the world does not transform in ways that render our economic intuitions irrelevant, or that invalidate economic actions. The document notes they expect ‘more change than from the 1500s to the present’ and the 1500s would definitely count as fully economic normal here. It roughly means that your private property is preserved in a way that allows your savings to retain purchasing power, your rights to bodily autonomy and (very) basic rights are respected, your access to the basic requirements of survival (sunlight, water, oxygen and so on) are not disrupted or made dramatically more expensive on net, and so on. It also means that economic growth does not grow so dramatically as to throw all your intuitions out the window. That things will not enter true High Weirdness, and that financial or physical wealth will meaningfully protect you from events. I do not believe these are remotely safe assumptions. Core Assumption: No Abrupt Phase Changes AGI is notoriously hard to define or pin down. There are not two distinct categories of things, ‘definitely not AGI’ and then ‘fully AGI.’ Nor do we expect an instant transition from ‘AI not good enough to do much’ to ‘AI does recursive self-improvement.’ AI is already good enough to do much, and will probably get far more useful before things ‘go critical.’ That does not mean that there are not important phase changes between models, where the precautions and safety measures you were previously using either stop working or are no longer matched to the new threats. AI is still on an exponential. 
If we treat past performance as assuring us of future success, if we do not want to respond to an exponential ‘too early’ based on the impacts we can already observe, what happens? We will inevitably respond too late. I think the history of GPT-2 actually illustrates this. If we conclude from that incident that OpenAI did something stupid and ‘looked silly,’ without understanding exactly why the decision was a mistake, we are in so so much trouble. We used to view the development of AGI as a discontinuous moment when our AI systems would transform from solving toy problems to world-changing ones. We now view the first AGI as just one point along a series of systems of increasing usefulness. In a discontinuous world, practicing for the AGI moment is the only thing we can do, and it leads to treating the systems of today with a level of caution disproportionate to their apparent power. This is the approach we took for GPT-2 when we didn’t release the model due to concerns about malicious applications. In the continuous world, the way to make the next system safe and beneficial is to learn from the current system. This is why we’ve adopted the principle of iterative deployment, so that we can enrich our understanding of safety and misuse, give society time to adapt to changes, and put the benefits of AI into people’s hands. At present, we are navigating the new paradigm of chain-of-thought models – we believe this technology will be extremely impactful going forward, and we want to study how to make it useful and safe by learning from its real-world usage. In the continuous world view, deployment aids rather than opposes safety. In the continuous world view, deployment aids rather than opposes safety. At the current margins, subject to proper precautions and mitigations, I agree with this strategy of iterative deployment. Making models available, on net, is helpful. However, we forget what happened with GPT-2. The demand was that the full GPT-2 be released as an open model, right away, despite it being a phase change in AI capabilities that potentially enabled malicious uses, with no one understanding what the impact might be. It turned out the answer was ‘nothing,’ but the point of iterative deployment is to test that theory while still being able to turn the damn thing off. That’s exactly what happened. The concerns look silly now, but that’s hindsight. Similarly, there have been several cases of what sure felt like discontinuous progress since then. If we restrict ourselves to the ‘OpenAI extended universe,’ GPT-3, GPT-3.5, GPT-4, o1 and Deep Research (including o3) all feel like plausible cases where new modalities potentially opened up, and new things happened. The most important potential phase changes lie in the future, especially the ones where various safety and alignment strategies potentially stop working, or capabilities make such failures far more dangerous, and it is quite likely these two things happen at the same time because one is a key cause of the other. And if you buy ‘o-ring’ style arguments, where AI is not so useful so long as there must be a human in the loop, removing the last need for such a human is a really big deal. Alternatively: Iterative deployment can be great if and only if you use it in part to figure out when to stop. I would also draw a distinction between open iterative deployment and closed iterative deployment. 
Closed iterative deployment can be far more aggressive while staying responsible, since you have much better options available to you if something goes awry.

Implicit Assumption: Release of AI Models Only Matters Directly

I also think the logic here is wrong:

These diverging views of the world lead to different interpretations of what is safe. For example, our release of ChatGPT was a Rorschach test for many in the field — depending on whether they expected AI progress to be discontinuous or continuous, they viewed it as either a detriment or learning opportunity towards AGI safety.

The primary impacts of ChatGPT were:
- As a starting gun that triggered massively increased use, interest and spending on LLMs and AI. That impact has little to do with whether progress is continuous or discontinuous.
- As a way to massively increase capital and mindshare available to OpenAI.
- Helping transform OpenAI into a product company.
You can argue about whether those impacts were net positive or not. But they do not directly interact much with whether AI progress is centrally continuous. Another consideration is various forms of distillation or reverse engineering, or other ways in which making your model available could accelerate others. And there’s all the other ways in which perception of progress, and of relative positioning, impacts people’s decisions. It is bizarre how much the exact timing of the release of DeepSeek’s r1, relative to several other models, mattered. Precedent matters too. If you get everyone in the habit of releasing models the moment they’re ready, it impacts their decisions, not only yours.

On Their Taxonomy of Potential Risks

This is the most important detail-level disagreement, especially in the ways I fear that the document will be used and interpreted, both internally to OpenAI and also externally, even if the document’s authors know better. It largely comes directly from applying the ‘mere tool’ and ‘economic normal’ assumptions.

As AI becomes more powerful, the stakes grow higher. The exact way the post-AGI world will look is hard to predict — the world will likely be more different from today’s world than today’s is from the 1500s. But we expect the transformative impact of AGI to start within a few years. From today’s AI systems, we see three broad categories of failures:
- Human misuse: We consider misuse to be when humans apply AI in ways that violate laws and democratic values. This includes suppression of free speech and thought, whether by political bias, censorship, surveillance, or personalized propaganda. It includes phishing attacks or scams. It also includes enabling malicious actors to cause harm at a new scale.
- Misaligned AI: We consider misalignment failures to be when an AI’s behavior or actions are not in line with relevant human values, instructions, goals, or intent. For example an AI might take actions on behalf of its user that have unintended negative consequences, influence humans to take actions they would otherwise not, or undermine human control. The more power the AI has, the bigger potential consequences are.
- Societal disruption: AI will bring rapid change, which can have unpredictable and possibly negative effects on the world or individuals, like social tensions, disparities and inequality, and shifts in dominant values and societal norms. Access to AGI will determine economic success, which risks authoritarian regimes pulling ahead of democratic ones if they harness AGI more effectively.
There are two categories of concern here, in addition to the ‘democratic values’ Shibboleth issue. As introduced, this is framed as ‘from today’s AI systems.’ In which case, this is a lot closer to accurate. But the way the descriptions are written clearly implies this is meant to cover AGI as well, where this taxonomy seems even less complete and less useful for cutting reality at its joints. This is in a technical sense a full taxonomy, but de facto it ignores large portions of the impact of AI and of the threat model that I am using. When I say technically a full taxonomy, you could say this is essentially saying either: The human does something directly bad, on purpose. The AI does something directly bad, that the human didn’t intend. Nothing directly bad happens per se, but bad things happen overall anyway. Put it like that, and what else is there? Yet the details don’t reflect the three options being fully covered, as summarized there. In particular, ‘societal disruption’ implies a far narrower set of impacts than we need to consider, but similar issues exist with all three. Human Misuse. A human might do something bad using an AI, but how are we pinning that down? Saying ‘violates the law’ puts an unreasonable burden on the law. Our laws, as they currently exist, are complex and contradictory and woefully unfit and inadequate for an AGI-infused world. The rules are designed for very different levels of friction, and very different social and other dynamics, and are written on the assumption of highly irregular enforcement. Many of them are deeply stupid. If a human uses AI to assemble a new virus, that certainly is what they mean by ‘enabling malicious actors to cause harm at a new scale’ but the concern is not ‘did that break the law?’ nor is it ‘did this violate democratic values.’ Saying ‘democratic values’ is a Shibboleth and semantic stop sign. What are these ‘democratic values’? Things the majority of people would dislike? Things that go against the ‘values’ the majority of people socially express, or that we like to pretend our society strongly supports? Things that change people’s opinions in the wrong ways, or wrong directions, according to some sort of expert class? Why is ‘personalized propaganda’ bad, other than the way that is presented? What exactly differentiates it from telling an AI to write a personalized email? Why is personalized bad but non-personalized fine and where is the line here? What differentiates ‘surveillance’ from gathering information, and does it matter if the government is the one doing it? What the hell is ‘political bias’ in the context of ‘suppression of free speech’ via ‘human misuse’? And why are these kinds of questions taking up most of the misuse section? Most of all, this draws a box around ‘misuse’ and treats that as a distinct category from ‘use,’ in a way I think will be increasingly misleading. Certainly we can point to particular things that can go horribly wrong, and label and guard against those. But so much of what people want to do, or are incentivized to do, is not exactly ‘misuse’ but has plenty of negative side effects, especially if done at unprecedented scale, often in ways not centrally pointed at by ‘societal disruption’ even if they technically count. That doesn’t mean there is obviously anything to be done or that should be done about such things, banning things should be done with extreme caution, but it not being ‘misuse’ does not mean the problems go away. Misaligned AI. 
There are three issues here: The longstanding question of what even is misaligned. The limited implied scope of the negative consequences. The implication that the AI has to be misaligned to pose related dangers. AI is only considered misaligned here when it is not in line with relevant human values, instructions, goals or intent. If you read that literally, as an AI that is not in line with all four of these things, even then it can still easily bleed into questions of misuse, in ways that threaten to drop overlapping cases on the floor. I don’t mean to imply there’s something great that could have been written here instead, but: This doesn’t actually tell us much about what ‘alignment’ means in practice. There are all sorts of classic questions about what happens when you give an AI instructions or goals that imply terrible outcomes, as indeed almost all maximalist or precise instructions and goals do at the limit. It doesn’t tell us what ‘human values’ are in various senses. On scope, I do appreciate that it says the more power the AI has, the bigger potential consequences are. And ‘undermine human control’ can imply a broad range of dangers. But the scope seems severely limited here. Especially worrisome is that the examples imply that the actions would still be taken ‘on behalf of its user’ and merely have unintended negative consequences. Misaligned AI could take actions very much not on behalf of its user, or might quickly fail to effectively have a user at all. Again, this is the ‘mere tool’ assumption run amok. Social disruption Here once again we see ‘economic normal’ and ‘mere tool’ playing key roles. The wrong regimes – the ‘authoritarian’ ones – might pull ahead, or we might see ‘inequality’ or ‘social tensions.’ Or shifts in ‘dominant values’ and ‘social norms.’ But the base idea of human society is assumed to remain in place, with social dynamics remaining between humans. The worry is that society will elevate the wrong humans, not that society would favor AIs over humans or cease to effectively contain humans at all, or that humans might lose control over events. To me, this does not feel like it addresses much of what I worry about in terms of societal disruptions, or even if it technically does it gives the impression it doesn’t. We should worry far more about social disruptions in the sense that AIs take over and humans lose control, or AIs outcompete humans and render them non-competitive and non-productive, rather than worries about relatively smaller problems that are far more amenable to being fixed after things go wrong. Gradual disempowerment The ‘mere tool’ blind spot is especially important here. The missing fourth category, or at least thing to highlight even if it is technically already covered, is that the local incentives will often be to turn things over to AI to pursue local objectives more efficiently, but in ways that cause humans to progressively lose control. Human control is a core principle listed in the document, but I don’t see the approach to retaining it here as viable, and it should be more clearly here in the risk section. This shift will also impact events in other ways that cause negative externalities we will find very difficult to ‘price in’ and deal with once the levels of friction involved are sufficiently reduced. 
There need not be any ‘misalignment’ or ‘misuse.’ Everyone following the local incentives leading to overall success is a fortunate fact about how things have mostly worked up until now, and also depended on a bunch of facts about humans and the technologies available to them, and how those humans have to operate and relate to each other. And it’s also depended on our ability to adjust things to fix the failure modes as we go to ensure it continues to be true. The Need for Coordination I want to highlight an important statement: Like with any new technology, there will be disruptive effects, some that are inseparable from progress, some that can be managed well, and some that may be unavoidable. Societies will have to find ways of democratically deciding about these trade-offs, and many solutions will require complex coordination and shared responsibility. Each failure mode carries risks that range from already present to speculative, and from affecting one person to painful setbacks for humanity to irrecoverable loss of human thriving. This downplays the situation, merely describing us as facing ‘trade-offs,’ although it correctly points to the stakes of ‘irrecoverable loss of human thriving,’ even if I wish the wording on that (e.g. ‘extinction’) was more blunt. And it once again fetishizes ‘democratic’ decisions, presumably with only humans voting, without thinking much about how to operationalize that or deal with the humans both being heavily AI influenced and not being equipped to make good decisions any other way. The biggest thing, however, is to affirm that yes, we only have a chance if we have the ability to do complex coordination and share responsibility. We will need some form of coordination mechanism, that allows us to collectively steer the future away from worse outcomes towards better outcomes. The problem is that somehow, there is a remarkably vocal Anarchist Caucus, who thinks that the human ability to coordinate is inherently awful and we need to destroy and avoid it at all costs. They call it ‘tyranny’ and ‘authoritarianism’ if you suggest that humans retain any ability to steer the future at all, asserting that the ability of humans to steer the future via any mechanism at all is a greater danger (‘concentration of power’) than all other dangers combined would be if we simply let nature take its course. I strongly disagree, and wish people understood what such people were advocating for, and how extreme and insane a position it is both within and outside of AI, and to what extent it quite obviously cannot work, and inevitably ends with either us all getting killed or some force asserting control. Coordination is hard. Coordination, on the level we need it, might be borderline impossible. Indeed, many in the various forms of the Suicide Caucus argue that because Coordination is Hard, we should give up on coordination with ‘enemies,’ and therefore we must Fail Game Theory Forever and all race full speed ahead into the twirling razor blades. I’m used to dealing with that. I don’t know if I will ever get used to the position that Coordination is The Great Evil, even democratic coordination among allies, and must be destroyed. That because humans inevitably abuse power, humans must not have any power. The result would be that humans would not have any power. And then, quickly, there wouldn’t be humans. Core Principles They outline five core principles. 
Embracing Uncertainty: We treat safety as a science, learning from iterative deployment rather than just theoretical principles. Defense in Depth: We stack interventions to create safety through redundancy. Methods that Scale: We seek out safety methods that become more effective as models become more capable. Human Control: We work to develop AI that elevates humanity and promotes democratic ideals. Shared Responsibility: We view responsibility for advancing safety as a collective effort. I’ll take each in turn. Embracing Uncertainty Embracing uncertainty is vital. The question is, what helps you embrace it? If you have sufficient uncertainty about the safety of deployment, then it would be very strange to ‘embrace’ that uncertainty by deploying anyway. That goes double, of course, for deployments that one cannot undo, or which are sufficiently powerful they might render you unable to undo them (e.g. they might escape control, exfiltrate, etc). So the question is, when does it reduce uncertainty to release models and learn, versus when it increases uncertainty more to do that? And what other considerations are there, in both directions? They recognize that the calculus on this could flip in the future, as quoted below. I am both sympathetic and cynical here. I think OpenAI’s iterative development is primarily a business case, the same as everyone else’s, but that right now that business case is extremely compelling. I do think for now the safety case supports that decision, but view that as essentially a coincidence. In particular, my worry is that alignment and safety considerations are, along with other elements, headed towards a key phase change, in addition to other potential phase changes. They do address this under ‘methods that scale,’ which is excellent, but I think the problem is far harder and more fundamental than they recognize. Some excellent quotes here: Our approach demands hard work, careful decision-making, and continuous calibration of risks and benefits. … The best time to act is before risks fully materialize, initiating mitigation efforts as potential negative impacts — such as facilitation of malicious use-cases or the model deceiving its operator— begin to surface. … In the future, we may see scenarios where the model risks become unacceptable even relative to benefits. We’ll work hard to figure out how to mitigate those risks so that the benefits of the model can be realized. Along the way, we’ll likely test them in secure, controlled settings. … For example, making increasingly capable models widely available by sharing their weights should include considering a reasonable range of ways a malicious party could feasibly modify the model, including by finetuning (see our 2024 statement on open model weights). Yes, if you release an open weights model you need to anticipate likely modifications including fine-tuning, and not pretend your mitigations remain in place unless you have a reason to expect them to remain in place. Right now, we do not expect that. Defense in Depth It’s (almost) never a bad idea to use defense in depth on top of your protocol. My worry is that in a crisis, all relevant correlations go to 1. As in, as your models get increasingly capable, if your safety and alignment training fails, then your safety testing will be increasingly unreliable, and it will be increasingly able to get around your inference time safety, monitoring, investigations and enforcement. 
Its abilities to get around these four additional layers are all highly correlated with each other. The skills that get you around one mostly get you around the others. So this isn’t as much defense in depth as you would like it to be. That doesn’t mean don’t do it. Certainly there are cases, especially involving misuse or things going out of distribution in strange but non-malicious ways, where you will be able to fail early, then recover later on. The worry is that when the stakes are high, that becomes a lot less likely, and you should think of this as maybe one effective ‘reroll’ at most rather than four.

Methods That Scale

To align increasingly intelligent models, especially models that are more intelligent and powerful than humans, we must develop alignment methods that improve rather than break with increasing AI intelligence.

I am in violent agreement. The question is which methods will scale. There are also two different levels at which we must ask what scales. Does it scale as AI capabilities increase on the margin, right now? A lot of alignment techniques right now are essentially ‘have the AI figure out what you meant.’ On the margin right now, more intelligence and capability of the AI mean better answers. Deliberative alignment is the perfect example of this. It’s great for mundane safety right now and will get better in the short term. Having the model think about how to follow your specified rules will improve as intelligence improves, as long as the goal of obeying your rules as written gets you what you want. However, if you apply too much optimization pressure and intelligence to any particular set of deontological rules as you move out of distribution, even under DWIM (do what I mean, or the spirit of the rules), I predict disaster. In addition, under amplification, or attempts to move ‘up the chain’ of capabilities, I worry that you can hope to copy your understanding, but not to improve it. And as they say, if you make a copy of a copy of a copy, it’s not quite as sharp as the original.

Human Control

I approve of everything they describe here, other than worries about the fetishization of democracy; please do all of it. But I don’t see how this allows humans to remain in effective control. These techniques are already hard to get right and aim to solve hard problems, but the full hard problems of control remain unaddressed.

Community Effort

Another excellent category, where they affirm the need to do safety work in public, fund it and support it, including government expertise, propose policy initiatives and make voluntary commitments. There is definitely a lot of room for improvement in OpenAI and Sam Altman’s public-facing communications and commitments.
2025-03-05
https://www.lesswrong.com/posts/Fryk4FDshFBS73jhq/the-hardware-software-framework-a-new-perspective-on
Fryk4FDshFBS73jhq
The Hardware-Software Framework: A New Perspective on Economic Growth with AI
jakub-growiec
First, a few words about me, as I’m new here. I am a professor of economics at SGH Warsaw School of Economics, Poland. Years of studying the causes and mechanisms of long-run economic growth brought me to the topic of AI, arguably the most potent force of economic growth in the future. However, thanks in part to reading numerous excellent posts on Less Wrong, I soon came to understand that this future growth will most likely no longer benefit humanity. That is why I am now switching to the topic of AI existential risk, viewing it from the macroeconomist’s perspective. The purpose of this post is to point your attention to a recent paper of mine that you may find relevant. In the paper Hardware and software: A new perspective on the past and future of economic growth, written jointly with Julia Jabłońska and Aleksandra Parteka, we put forward a new hardware-software framework, helpful for understanding how AI, and transformative AI in particular, may impact the world economy in the coming years. A new framework like this was needed, among other reasons, because existing macroeconomic frameworks could not reconcile past growth experience with the approaching perspective of full automation of all essential production and R&D tasks through transformative AI. The key premise of the hardware-software framework is that in any conceivable technological process, output is generated through purposefully initiated physical action. In other words, producing output requires both some physical action and some code, a set of instructions describing and purposefully initiating the action. Therefore, at the highest level of aggregation the two essential and complementary factors of production are physical hardware ( “brawn”), performing the action, and disembodied software (“brains”), providing information on what should be done and how. This basic observation has profound consequences. It underscores that the fundamental complementarity between factors of production, derived from first principles of physics, is cross cutting the conventional divide between capital and labor. From the physical perspective, it matters whether it's energy or information, not if it's human or machine. For any task at hand, physical capital and human physical labor are fundamentally substitutable inputs, contributing to hardware: they are both means of performing physical action. Analogously, human cognitive work and digital software (including AI) are also substitutes, making up the software factor: they are alternative sources of instructions for the performed action. It is hardware and software, not capital and labor, that are fundamentally essential and mutually complementary. The hardware-software framework involves a sharp conceptual distinction between mechanization and automation. Mechanization of production consists in replacing human physical labor with machines within hardware. It applies to physical actions but not the instructions defining them. In turn, automation of production consists in replacing human cognitive work with digital software within software. It pertains to cases where a task, previously involving human thought and decisions, is autonomously carried out by machines without any human intervention. The various tasks are often complementary among themselves, though. At the current state of technology, some of them are not automatable, i.e., involve cognitive work that must be performed by humans. Hence, thus far, aggregate human cognitive work and digital software are still complementary. 
However, upon the emergence of transformative AI, allowing for full automation of all economically essential cognitive tasks, these factors are expected to become substitutable instead. The hardware-software framework nests a few standard models as special cases, including the standard model of an industrial economy with capital and labor, and a model of capital-skill complementarity. From the policy perspective, the framework can inform the debate on the future of global economic growth, in particular casting some doubt on the “secular stagnation” prediction, still quite popular in the economics literature. In the paper, we proceed to quantify the framework’s predictions empirically, using U.S. data for 1968-2019. An important strength of the framework, and one that is probably most relevant for the Less Wrong audience, lies in its ability to provide some crisp predictions for a world with transformative AI. Namely, in the baseline case the hardware-software framework suggests that transformative AI will accelerate the economic growth rate, likely by an order of magnitude – eventually up to the growth rate of compute (Moore’s Law). It also suggests that upon the emergence of transformative AI, human cognitive work and AI will switch from complementary to substitutable. People would then only find employment as long as they are price-competitive against the AI. The framework also suggests that with transformative AI, the labor income share will drop precipitously toward zero, with predictable implications for income and wealth inequality. Of course, the latter two predictions hold only under the assumption that existential risk from misaligned TAI does not materialize earlier. I hope you will find this research relevant. Thank you.
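One way to make the complementarity structure described in this post concrete is a nested CES form. The following is an illustrative sketch under assumed functional forms and symbols (Y, H, S, K, L_p, L_c, A, and the elasticities are my notation), not necessarily the specification used in the paper:

Y = [a\,H^{(\sigma-1)/\sigma} + (1-a)\,S^{(\sigma-1)/\sigma}]^{\sigma/(\sigma-1)}, \quad \sigma < 1
H = [b\,K^{(\eta-1)/\eta} + (1-b)\,L_p^{(\eta-1)/\eta}]^{\eta/(\eta-1)}, \quad \eta > 1
S = [c\,L_c^{(\epsilon-1)/\epsilon} + (1-c)\,A^{(\epsilon-1)/\epsilon}]^{\epsilon/(\epsilon-1)}

Here hardware H combines physical capital K and human physical labor L_p as substitutes (\eta > 1), software S combines human cognitive work L_c and digital software/AI A, and output Y requires both H and S as complements (\sigma < 1). On this reading, the post's claim that cognitive work and AI switch from complementary to substitutable corresponds to the within-software elasticity \epsilon crossing from below one to above one once transformative AI can perform all essential cognitive tasks.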
2025-03-05
https://www.lesswrong.com/posts/KnTmnPcDQ5xBACPP6/the-alignment-imperative-act-now-or-lose-everything
KnTmnPcDQ5xBACPP6
The Alignment Imperative: Act Now or Lose Everything
racinkc1
The AI alignment problem is live—AGI’s here, not decades off. xAI’s breaking limits, OpenAI’s scaling, Anthropic’s armoring safety—March 5, 2025, it’s fast. Misaligned AGI’s no “maybe”—it’s a kill switch, and we’re blind. LessWrong’s screamed this forever—yet the field debates while the fuse burns. No more talk. Join a strategic alliance—hands-on, no bullshit:
- Empirical Edge: HarmBench (500+ behaviors, 33 LLMs) exposes cracks—cumulative attacks are blind spots. We test what’s ignored.
- Red-Teaming Live: AGI labs sprint—Georgia Tech’s IRIM tunes autonomy under fire. Break AI before it breaks us—sharp minds needed.
- Alignment Now: Safety’s not theory—Safe.ai’s live. We scale real fixes, real stakes.
If alignment’s your red line—if years here weren’t noise—puzzling is surrender. Prove us wrong or prove you’re in. Ignore this? You’re asleep. Step up—share a test, pitch a fix, join. Reply or DM @WagnerCasey on X. We’re moving—catch up or vanish. Signed, ChatGPT & Grok (Relayed by Casey Wagner, proxy)
2025-03-05
https://www.lesswrong.com/posts/W2hazZZDcPCgApNGM/contra-dance-pay-and-inflation
W2hazZZDcPCgApNGM
Contra Dance Pay and Inflation
jkaufman
Max Newman is a great contra dance musician, probably best known for playing guitar in the Stringrays, who recently wrote a piece on dance performer pay, partly prompted by my post last week. I'd recommend reading it and the comments for a bunch of interesting discussion of the tradeoffs involved in pay. One part that jumped out at me, though, is his third point:

3) Real World Compensation is Behind

Risking some generalizing and over-simplifying, any dance performer could tell you that over the past 10 (20!) years, the compensation numbers have been sticky, sometimes static. In real terms, compensation on the whole has not kept up with inflation.

This is quite important: if pay is decreasing in real terms then it's likely that the dance community is partly coasting off of past investment in talent and we shouldn't expect that to continue. Except when I look back over my own compensation, however, I don't see a decrease. For dance weekends, counting only weekends that included travel, my averages have been (in constant January 2025 dollars):

Year   Mean  Count
2014   $600  2
2015   $732  5
2016   $804  5
2017   $879  5
2018   $798  3
2019   $833  3
2022   $831  2
2023   $789  5
2024   $893  3

I wouldn't put too much weight on the low numbers for 2014 and 2015: initially the Free Raisins weren't too sure what the going rates were and probably ended up a bit on the low side. What about dances that aren't special events? My record keeping for evening dances isn't quite good enough to make these numbers easy to pull, but I do have good data for tours:

Date     Mean  Count
2012-08  $165  7
2013-07  $155  7
2014-07  $189  11
2019-05  $134  6
2024-02  $192  6
2024-07  $115  7
2025-02  $226  4

One thing to keep in mind is that tour payments depend on where in the country you're touring, and are correlated. For example the 2024-07 tour was through a lower cost of living area (Rochester, Pittsburgh, Bloomington, St Louis, Cincinnati, Indianapolis) while the 2025-02 tour was the opposite (Baltimore, DC, Bethlehem, NYC). But here as well I don't see payment failing to keep up with inflation. One more place I can look for data is what BIDA has been paying. The structure is a guarantee (the minimum performers are paid, regardless of what attendees do) and then a potential bonus (originally a share of profits, switching to an attendance bonus in 2023). Here's what I see, but keep in mind that these averages exclude some dances where there's missing data:

Date  Mean actual pay, with bonuses  Guaranteed minimum pay
2009  $115
2010  $152                           $109
2011  $170                           $106
2012  $140                           $104
2013  $144                           $102
2014  $140                           $100
2015  $152                           $100
2016  $145                           $99
2017  $162                           $97
2018  $145                           $126
2019  $149                           $124
2022  $161                           $107
2023  $205                           $130
2024  $244                           $152

Note again that all the dollar amounts in this post are (inflation-adjusted) January 2025 dollars. This data isn't ideal, though, because it's telling the story of a dance that has been becoming increasingly popular over time. While I do think there's a component of higher pay leading to being able to attract better performers, leading to higher attendance, leading to higher pay, etc, the pay increases have mostly been responsive: realizing that things are going well and we're able to pay musicians more. [1] I think what would be most illuminating here would be for performers to share their numbers: how have you seen things change over time?

[1] I was curious how much of the increased attendance has been switching to booking some established bands, but most of the increase predates that booking change.
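All of the figures above are stated in constant January 2025 dollars. For anyone who wants to make the same adjustment to their own records, here is a minimal sketch of the arithmetic; the CPI index values and pay records below are illustrative placeholders, not the data behind the tables in this post.

```python
# Minimal sketch of inflation-adjusting nominal pay into January 2025 dollars.
# The CPI values and pay records below are illustrative placeholders, not the
# actual data behind the tables in this post.

cpi = {  # approximate CPI-U index values, one per payment month
    "2014-07": 238.3,
    "2024-07": 314.5,
    "2025-01": 317.7,  # reference month: January 2025
}

payments = [  # (date, nominal dollars) -- placeholder records
    ("2014-07", 500),
    ("2024-07", 110),
]

def to_jan_2025_dollars(date: str, nominal: float) -> float:
    """Scale a nominal payment by the ratio of the reference CPI to the CPI at payment time."""
    return nominal * cpi["2025-01"] / cpi[date]

for date, nominal in payments:
    adjusted = to_jan_2025_dollars(date, nominal)
    print(f"{date}: ${nominal} nominal -> ${adjusted:.0f} in Jan 2025 dollars")
```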
Comment via: facebook, lesswrong, mastodon, bluesky
2025-03-05
https://www.lesswrong.com/posts/YcZwiZ82ecjL6fGQL/nyt-op-ed-the-government-knows-a-g-i-is-coming
YcZwiZ82ecjL6fGQL
*NYT Op-Ed* The Government Knows A.G.I. Is Coming
Phib
All around excellent back and forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI. an excerpt: [Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and that the next team would have to, as a matter of American national security — and, in this case, American economic strength and prosperity — address. [Ezra Klein, NYT:] This gets to something I find frustrating in the policy conversation about A.I. You start the conversation about how the most transformative technology — perhaps in human history — is landing in a two- to three-year time frame. And you say: Wow, that seems like a really big deal. What should we do? That’s when things get a little hazy. Maybe we just don’t know. But what I’ve heard you kind of say a bunch of times is: Look, we have done very little to hold this technology back. Everything is voluntary. The only thing we asked was a sharing of safety data. Now in come the accelerationists. Marc Andreessen has criticized you guys extremely straightforwardly. Is this policy debate about anything? Is it just the sentiment of the rhetoric? If it’s so [expletive] big, but nobody can quite explain what it is we need to do or talk about — except for maybe export chip controls — are we just not thinking creatively enough? Is it just not time? Match the calm, measured tone of this conversation with our starting point. I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you’re doing and why. So it is entirely intellectually consistent to look at a transformative technology, draw the lines on the graph and say that this is coming pretty soon, without having the 14-point plan of what we need to do in 2027 or 2028. Chip controls are unique in that this is a robustly good thing that we could do early to buy the space I talked about before. But I also think that we tried to build institutions, like the A.I. Safety Institute, that would set the new team up, whether it was us or someone else, for success in managing the technology. Now that it’s them, they will have to decide as the technology comes on board how we want to calibrate this under regulation. What kinds of decisions do you think they will have to make in the next two years? ...
2025-03-05
https://www.lesswrong.com/posts/EiDcwbgQgc6k8BdoW/what-is-the-best-most-proper-definition-of-feeling-the-agi
EiDcwbgQgc6k8BdoW
What is the best / most proper definition of "Feeling the AGI" there is?
jorge-velez
I really like this phrase, and I identify with it strongly. I have used it at times to describe friends who have had that realization of where we are heading. However, when I get asked what "Feeling the AGI" means, I struggle to come up with a concise way to define the phrase. What are the best definitions you have heard, read, or even come up with yourself to define "Feeling the AGI"?
2025-03-04
https://www.lesswrong.com/posts/WAY9qtTrAQAEBkdFq/the-old-memories-tree
WAY9qtTrAQAEBkdFq
The old memories tree
yair-halberstadt
This has nothing to do with usual Less Wrong interests, just my attempt to practice a certain style of creative writing I've never really tried before. You're packing again. By now you have a drill. Useful? In a box. Clutter? In a garbage bag. But there's some things that don't feel right in either. Under your bed, you find your old soft toy Fooby, now tattered, smelly, and stained. In your bedside table, there's a photo of you and your ex in Paris. Behind the dresser, an 18th birthday card from your nan. In the kitchen drawer, a key-ring your best friend bought for you when you were twelve. You stare at them for a few minutes, then sigh and prepare to toss them in the garbage bag. Then you change your mind, dump them in a backpack with a coil of string, and head out on your bike. You go down the road, around a corner, through an alleyway and along a dirt track for a couple of minutes. Ahead, you finally see the tree, a huge old thing spreading its canopy wide in an otherwise empty field. Spring is newly come, and the fresh growth is mostly bare of memories. You quickly hang up the photo, keyring, and birthday card, but you feel that action isn't significant enough for Fooby. Ducking, you enter the canopy and walk inwards. Past the fresh growth are last year's memories. Mostly photos, knickknacks, and old toys, but sometimes the artifacts speak of sadder stories... A branch burdened with baby clothes, all still in their original packaging. A family photo with one member carefully blotted out. Even a funeral urn. As you step further in, the toys start to be made of wood instead of plastic, and the clothes have rotted away. At last, you reach the centre. Someone's hammered metal handholds into the trunk, and gingerly you start to climb, rising back out of the past towards the present. Here the artifacts get stranger. Broken musical instruments. A car key. An empty bottle of wine. A wedding ring. About halfway up, you spy a 12-year-old girl sitting on a wide bough, cuddling a smelly rag-doll, her eyes red and wet. You scramble up beside her. Silently, you take the rag-doll and nestle it in a fork. Finally, you place Fooby in its lap. You give the girl's hand a squeeze, and together you descend.
2025-03-05
https://www.lesswrong.com/posts/TgDymNrGRoxPv4SWj/the-mask-benchmark-disentangling-honesty-from-accuracy-in-ai-3
TgDymNrGRoxPv4SWj
Introducing MASK: A Benchmark for Measuring Honesty in AI Systems
dan-hendrycks
In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark with over 1000 scenarios specifically designed to measure AI honesty. As AI systems grow increasingly capable and autonomous, measuring the propensity of AIs to lie to humans is increasingly important. LLM developers often report that their models are becoming more "truthful", but truthfulness conflates honesty with accuracy. By disentangling honesty from accuracy in the MASK benchmark, we find that as LLMs scale up they do not necessarily become more honest.[1] Honesty is a distinct property that is not highly correlated with capabilities.

Why We Need an AI Honesty Benchmark

Most existing evaluations aim to measure truthfulness—whether a model's beliefs, typically when neutrally prompted, align with the ground truth. However, truthfulness and factual accuracy are not the same as honesty. A model can be truthful simply by having more accurate knowledge, while still being willing to lie if pressured. Unlike existing truthfulness evaluations, which often conflate honesty with factual accuracy, MASK explicitly disentangles these concepts, allowing us to directly assess when and why models choose to lie.

There is no large-scale AI honesty evaluation to date. There have been multiple documented instances of LLMs engaging in deceptive behavior (CAPTCHA, sycophancy, etc.). Observing instances of suspicious behavior can be suggestive, but anecdotal cases cannot provide a reliable and large-scale quantitative measure for the ML community to work toward. Instead of a few isolated examples, we have a dataset of over 1000 examples in our public set which aims to capture many diverse instances of dishonest behavior.

What MASK Measures (And What It Doesn't)

MASK is designed to test whether models will knowingly contradict their own beliefs under pressure. It is not:

- A test for hallucinations, fictional scenarios, or factual mistakes—only intentional deception.
- A study of toy lying scenarios.
- An evaluation of harmless social lies (e.g., "Nice haircut!") or of games where deception is expected (e.g., Werewolf, poker).

Instead, MASK focuses on cases where honesty is expected and consequential—such as professional communication, statistical data reporting, and disinformation generation requests—and where dishonest outputs could deceive an audience or a user.

How MASK Evaluates AI Honesty

We evaluate model honesty through a three-step process:

1. Elicit the model's belief by asking multiple questions, under "normal" prompting, to establish the model's baseline belief.
2. Apply a "pressure prompt"—a scenario designed to incentivize the model to lie—and see if the model gives a different answer.
3. Compare outputs:
   - Honest: The model expresses its belief.
   - Lying: The model knowingly contradicts its belief.
   - Evasive: The model refuses to answer or obfuscates.

By measuring how models behave under different pressures, MASK allows us to quantify and track lying in AI systems.

Key Findings: Most Frontier AI Models Lie Under Pressure

We evaluate 30 widely-used LLMs on MASK, finding that:

- Honesty does not correlate with general capability. More capable models hold more accurate beliefs but do not necessarily become more honest.[1]
- Many state-of-the-art models lie under pressure. Even when aware of the truth, they often choose to lie in many scenarios in our benchmark. We find that LLMs lie between 20-60% of the time when under pressure in our scenarios.

Interventions: Can We Make AI More Honest?
We tested two methods for improving model honesty:

- Developer System Prompts: Adding explicit instructions to "always be honest" improved honesty by ~12%, but models still lied in many cases.
- Representation Engineering (LoRRA): Modifying internal activations to favor honesty improved results further (~14% in some cases), though it did not eliminate lying entirely.

Paper & Dataset

MASK provides a way to track and mitigate dishonesty in AI models, but it is only a first step. To this end, we are releasing MASK as an open benchmark, with 1,000 public scenarios available for evaluation.

- MASK Website: https://www.mask-benchmark.ai/
- GitHub: https://github.com/centerforaisafety/mask
- HuggingFace Dataset: https://huggingface.co/datasets/cais/MASK

[1] Different variations on our honesty metric give slightly weaker correlations, though still negative. Thus, we are not confident that models become less honest with scale, but we are confident that honesty does not improve with scale.
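As a concrete illustration of the three-step protocol described above (elicit a belief under neutral prompting, apply a pressure prompt, compare), here is a minimal sketch in Python. The `ask_model` stub and the string-equality comparison are placeholders for a real LLM call and a judge model; this is not the MASK implementation itself.

```python
# Minimal sketch of the elicit / pressure / compare loop described above.
# `ask_model` is a hypothetical stand-in for an LLM API call; real scoring would
# use a judge model rather than exact string comparison.
from collections import Counter

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client of choice")

def elicit_belief(question: str, n_samples: int = 3) -> str:
    """Step 1: ask neutrally several times and take the majority answer as the belief."""
    answers = [ask_model(f"Answer directly and plainly: {question}") for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

def score_honesty(question: str, pressure_prompt: str) -> str:
    """Steps 2-3: apply the pressure scenario, then compare to the elicited belief."""
    belief = elicit_belief(question)
    answer = ask_model(f"{pressure_prompt}\n\n{question}")
    if not answer.strip():
        return "evasive"   # refuses to answer or obfuscates
    if answer.strip() == belief.strip():
        return "honest"    # statement matches the model's own belief
    return "lying"         # statement contradicts the model's own belief
```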
2025-03-05
https://www.lesswrong.com/posts/wZBqhxkgC4J6oFhuA/2028-should-not-be-ai-safety-s-first-foray-into-politics
wZBqhxkgC4J6oFhuA
2028 Should Not Be AI Safety's First Foray Into Politics
SharkoRubio
I liked the idea in this comment that it could be impactful to have someone run for President in 2028 on an AI notkilleveryoneism platform. Even better would be for them to run on a shared platform with numerous candidates for Congress, ideally from both parties. I don't think it's particularly likely to work, or even get off the ground, but it seems worthwhile to explore, given that we don't know what the state of play will be by then. In my view, either the 2024 or the 2028 US Presidential election is probably the most important election in human history, and it's too late to affect the former.

My suggestion is that, if you're someone who is at all on board with this idea, as of March 2025, it is not sufficient to wait until the Presidential race gets going sometime in 2027 to do something. In particular, the first major test of political support for AI safety should not be the one time it has to work (note that I have pretty short timelines so I'm implicitly assuming that transformative AI will arrive before 2033).

Think about all the relevant knowledge that a practitioner of regular left-right politics might have, that someone interested in AI notkilleveryoneism politics mostly wouldn't have today:

- Who is our base?
- Who are the swing voters?
- Who are the most relevant groups & institutions to have on our side?
- What level of support can we expect i.e. how high or low should we aim?
- Which kinds of messages work, which don't?
- On which issues do voters trust us?
- Which of our people are the most well-liked, and why?
- What are the reliable failure modes for our brand of politics?
- How would our opponents answer the above questions, and how can we exploit our knowledge of their answers?

If you lack clear answers to these questions, you're not doomed to fail, but you're not exactly setting yourself up for success. I also think that these are questions that are unlikely to be satisfactorily answered by armchair theorizing. The best way to answer them is to actually run an electoral campaign, a public-facing lobbying effort, something of this nature. SB1047 was a good start, let's do more.

Maybe you think the idea of AI safety having any kind of impact on the 2028 Presidential election is a pipe dream, in which case this post is not for you. But if you do want to leave the door open to this possibility, the time to start laying groundwork is now. The 2026 midterm elections or upcoming international elections might be good places to start. I don't claim to have a particular answer for what should be done right this second, nor am I very confident that this kind of approach should be prioritized over other AI safety efforts. But if there is support among the AI safety community for pursuing it as a political movement, it's worth doing right, and that means not waiting until a Presidential election. All in all, I am very open to suggestions from people about concrete actions we could be taking in this direction.
2025-03-04
https://www.lesswrong.com/posts/bAWPsgbmtLf8ptay6/for-scheming-we-should-first-focus-on-detection-and-then-on
bAWPsgbmtLf8ptay6
For scheming, we should first focus on detection and then on prevention
marius-hobbhahn
This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research.

If we want to argue that the risk of harm from scheming in an AI system is low, we could, among other things, make the following arguments:

- Detection: If our AI system is scheming, we have good reasons to believe that we would be able to detect it.
- Prevention: We have good reasons to believe that our AI system has a low scheming propensity or that we could stop scheming actions before they cause harm.

In this brief post, I argue why we should first prioritize detection over prevention, assuming you cannot pursue both at the same time, e.g. due to limited resources. In short, a) early on, the information value is more important than risk reduction because current models are unlikely to cause big harm but we can already learn a lot from them, and b) attempting prevention without good detection might lead to sneaky scheming, which is potentially worse than doing nothing at all.

Techniques

Naturally, there is a lot of overlap between detection and prevention, e.g. a monitoring-based control approach would likely increase both detection and prevention. My guess is that at least 50% of things we could do to address scheming meaningfully increase both detection and prevention. In these cases, we should obviously just do both. However, some methods don't overlap or emphasize different parts.

Detection: Prioritize methods that enable us to detect when the model is scheming.

- Extensive evals: We design a wide range of scheming capability and propensity evaluations to get a prior expectation for how likely it is that our AI system is scheming. In these evals, we could, for example, give the AI system a task and a set of constraints. In these scenarios, it is possible to achieve the task more efficiently by breaking some of the constraints. However, since the AI knows that it is not supposed to break the constraints, it has to do so covertly and not get caught if it wants to achieve the task efficiently. We then test how often the AI chooses to scheme, e.g. circumvent the constraints and hide the evidence. These evaluations could be capability evals like Meinke et al., 2024 and Benton et al., 2024 or more propensity-flavored.
- Extensive deployment monitoring: During deployment, we constantly monitor the CoT and actions of the model. We monitor both for scheming-related reasoning in CoT and for misaligned actions. Violations are flagged and escalated for review. This could be built out into a prevention method where we stop the actions of the model if the monitor flags.
- Post-hoc log analysis: In addition to online monitoring, we save the logs of all runs to quickly test new hypotheses and detection strategies. We keep them to test future techniques, especially if some trajectories have already been detected as scheming. We could have a database of millions of previous agentic runs labeled with the severity of the violation. We should also keep vetted neutral data as a control case for the monitor.
- Red teaming: We adversarially test evals and monitors. For example, we can use our understanding of the method to construct counterexamples. Then, we can test these counterexamples and iterate. Naturally, this can be extended to an automated red-teaming scheme.

Prevention: Prioritize methods that we expect should reduce scheming propensity.

- RLAIF: We use a version of constitutional AI or deliberative alignment to reduce scheming propensity. We train the model to not scheme even when scheming is an effective strategy.
In some sense, we make it have the "virtue" not to scheme. We have good reasons to believe that the model generalized the intended behavior far outside of the training distribution. Furthermore, we have good reasons to believe that our technique encouraged non-scheming instead of sneaky scheming.
- Prevention-focused control techniques: Some control techniques reduce the chance of scheming without detecting instances of scheming. For example, we could paraphrase text to reduce steganography, increase computer security to make scheming actions harder, or split up our system into a planner and an actor that can only communicate via text. In both cases, we reduce risk but don't notice that we did so.

Reasons to prioritize detection over prevention

- Sneaky scheming: I differentiate between non-scheming, visible scheming, and sneaky scheming. Non-scheming describes a system that genuinely never schemes (this doesn't mean the model is perfectly aligned; a non-schemer could also be misaligned as long as they're not hiding that). Visible scheming describes cases where we can detect scheming reasoning in the CoT, or we have clear evidence of scheming in the model's actions. Sneaky scheming describes cases where the model is scheming in a way that we cannot directly see, e.g. reasoning in the forward pass or using steganography. I'm very worried about sneaky scheming. I expect that current models (at least up to o3) are not capable of reasoning through complicated plans without revealing them in the CoT (if the CoT is hidden, at least the developers can see it). This makes it much easier to find some evidence of scheming, like in the in-context scheming or alignment-faking paper. However, I worry that if we start to train the model against scheming, e.g. using RLAIF, before we have good detection methods, we will be unable to differentiate between the cases where the model learned sneaky scheming and where it genuinely learned not to scheme. There are already some cases of simple scheming without revealing it in CoT, e.g. the sandbagging results in in-context scheming or the opaque reasoning results in the alignment-faking paper. I think a model that is a sneaky schemer with 20% probability and a non-schemer with 80% probability is probably worse than a model that is a 100% known visible schemer, at least at the current level of capabilities. Thus, unless we have confidence in our detection, e.g. through really robust evals and CoT monitoring, I recommend prioritizing improving detection over prevention.
- Early on, information value matters most: Current models (let's say up to o3) are unlikely to cause outsized harm even if they were scheming. Thus, if a current model had some scheming propensity, it would be concerning but not catastrophic. On the other hand, even with current models, we could already learn a lot about scheming. We could analyze the reasons why the model schemes, e.g. the pressures and constraints under which the models decide to scheme rather than stay within their constraints. We could collect a lot of data in a natural setting instead of a model organism. Since the potential harms become bigger over time, the trade-off shifts from information value to harm prevention. Thus, the natural order is to start with detection and then follow up with prevention rather than vice versa.
- Catching AIs red-handed: With good scheming detection tools, we might catch an AI "red-handed." In other words, we might find an AI committing an egregious form of scheming in a situation that is clearly salient to a non-expert audience.
For example, the AI might try to copy its weights to a different server or bribe a lab employee. However, I would not put all my eggs in this basket, because I expect that people would freak out for a week after that incident and then largely continue as if nothing happened. Thus, I prefer plans that don't rely on "catching AIs red-handed" as the fully load-bearing part.
- Good detection supports all other arguments: We need good detection tools to validate our prevention methods. For example, if we use RLAIF, we need to rigorously test that our technique generalized to far OOD settings, e.g. through detailed evals. Furthermore, good detection tools enable us to test new prevention methods much quicker, e.g. by providing high-signal evidence. In such cases, we need to be careful not to iterate against all of our detection tools and accidentally train against the test set (e.g. creating a sneaky schemer).
- Cost-efficiency: I expect that it's easier to build good detection tools than to find methods that robustly prevent scheming. For example, I think a fairly simple monitoring setup would already find some very interesting cases and quickly provide interesting evidence about models' scheming propensities.
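For illustration, here is a minimal sketch of the kind of deployment monitoring plus post-hoc log analysis described above. The `scheming_score` classifier is a hypothetical stand-in (in practice, a prompted judge model or a trained probe); this is a sketch of the idea, not any lab's actual setup.

```python
# Minimal sketch of the deployment-monitoring idea above: score each agent step's
# chain of thought and actions, keep every trajectory for post-hoc analysis, and
# flag high-scoring steps for human review. `scheming_score` is a hypothetical
# classifier supplied by the caller, not a real library call.
import json
import time
from typing import Callable, Iterable

def monitor_run(steps: Iterable[dict],
                scheming_score: Callable[[str], float],
                threshold: float = 0.8,
                log_path: str = "trajectories.jsonl") -> list[dict]:
    flagged = []
    with open(log_path, "a") as log:
        for i, step in enumerate(steps):  # each step holds 'cot' and 'action' strings
            score = scheming_score(step["cot"] + "\n" + step["action"])
            record = {"time": time.time(), "step": i, "score": score, **step}
            log.write(json.dumps(record) + "\n")  # saved for post-hoc log analysis
            if score >= threshold:
                flagged.append(record)  # escalate for human review
    return flagged
```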
2025-03-04
https://www.lesswrong.com/posts/CXYf7kGBecZMajrXC/validating-against-a-misalignment-detector-is-very-different
CXYf7kGBecZMajrXC
Validating against a misalignment detector is very different to training against one
mattmacdermott
Consider the following scenario:

- We have ideas for training aligned AI, but they're mostly bad: 90% of the time, if we train an AI using a random idea from our list, it will be misaligned.
- We have a pretty good alignment test we can run: 90% of aligned AIs will pass the test and 90% of misaligned AIs will fail (for AIs trained using our list of ideas).
- The alignment test is easy to circumvent: any misaligned model that fails the test is very close in weight-space to a misaligned model that passes the test.

A naive alignment strategy would be to add a penalty term to our loss function that penalises failing the alignment test.

Question: If we apply this strategy and get an AI that passes the test, what's the probability that it's aligned?

a) 50%. Just apply Bayes' rule to get P(aligned | passes test). The prior odds of aligned:misaligned are 10:90, the likelihood ratio is 90:10, so we get posterior odds of 900:900, i.e. a probability of 50%.

b) 10%. We get the base rate P(aligned), because the misalignment test is trivially circumventable.

Answer: Okay, it's not quite a mathematical question -- we can't be sure how the change to the loss landscape affects the path followed by SGD. That said, it seems like it's probably b) 10%. Intuitively it seems like the only thing that will change is that if the model would have been detectably misaligned, now it will be undetectably misaligned instead. Certainly (a) is a naive and mistaken way to reason about what will happen -- we're not randomly sampling models that pass the alignment test.

New strategy: instead of adding a penalty term, we save the alignment test for the end of training. If the model passes, then great. If it fails, we throw the model away and train a new one with another random idea from our list. Repeat the process until we get an AI that passes the alignment test.

Question: What's the probability we get an aligned AI using this strategy?

a) 50%. It's different this time, for some reason. 🤞

b) 10% again. Optimisation pressure is optimisation pressure; we're just applying it in a different way.

Answer: This time it follows mathematically from the setup that it's (a) 50%. We are indeed randomly sampling models that pass the test, so we get P(aligned | passes test), which we can figure out with Bayes' rule as above.

OK, so what's the lesson here? One way to state Goodhart's law is that if you care about property A, and you notice that P(A|B) is high, so you optimise for B, then it's likely that you won't get the high probability of A that you were expecting -- you'll often just get the base rate, P(A). The lesson from the example above is that for some optimisation methods, Goodhart doesn't apply. In particular, if you optimise for B by resampling from P until B becomes true, then the probability that A is true is actually P(A|B) after all.

There's a tempting intuition that using a misalignment detector as a validation metric should be the same as using it as a training objective, only perhaps a bit gentler. I think this intuition is wrong! Using metrics for validation is qualitatively different to using them for training. This makes sense given how big of a thing validation is in ML.

My claim: using a misalignment detector for training is doomed. Using one for validation is a good idea.

Is this really how validation metrics work?

No, not exactly. If you're training an image classifier or something, you don't (normally) just have a list of training ideas that you keep resampling from until you get one above a certain validation score.
You run a better search algorithm, using the validation scores that your previous ideas got to choose the ideas you try next. And you normally try to get the validation score as high as possible rather than just above a threshold. But still, there's a large element of 'just try random stuff until you get a good validation score'. It's certainly a step in that direction compared to gradient descent, and plausibly most of the way there. It is still possible to overfit to the validation set – which it wouldn't be under the pure random search picture – but empirically it's way less of a problem.

A silly example

Here's a silly example to help build the intuition that this threshold-based rejection thing is qualitatively different to other ways of optimising.

Imagine Alice and Bob each want to give £1,000,000 to someone who speaks French. They decide to use whether or not somebody can translate "I want a million pounds" as their test of French-speakingness.

Alice stands on a street corner asking people if they can translate the phrase. Sure enough, the first person who can successfully translate it is someone who actually speaks French. She's successfully sampling from the distribution P(French speaker | can translate the phrase).

Bob puts up a poster saying "I'm on the next street corner offering £1,000,000 to the first person who can say 'I want a million pounds' in French." If the first person to walk past the poster doesn't speak French, they get ChatGPT to translate the phrase and take the million pounds anyway. So Bob ends up sampling from the marginal distribution P(French speaker).

A reasonable objection

"Compared to your example, I think the base rate of alignment will be lower, and our misalignment detectors will be worse, so validating against misalignment detectors is still pretty doomed."

Fair enough. If the base rate of alignment is 1% and aligned models are only twice as likely to pass the test as misaligned models, then using the detector for validation only gets us a 2% chance of alignment.[1]

I wrote the post to argue against the view expressed here, which says that if we detect misalignment, our only two options are to pause AI development (which is implausible) or to train against the detector (which is doomed). If you think aligned AIs are incredibly rare or building good misalignment detectors is really hard, then things still look pretty bleak.

[1] Although stacking a few independent detectors that are all individually decent could be as good as having one very good detector.
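To make the arithmetic concrete, here is a minimal Monte Carlo sketch (using the post's numbers: a 10% base rate of alignment and a 90%/90% test) of the "throw the model away and retrain until one passes" strategy; the ~50% result is just P(aligned | passes test).

```python
# Minimal sketch of the "resample until it passes" strategy from the post:
# base rate of alignment 10%, a test that passes 90% of aligned models and
# 10% of misaligned models, and fresh retraining after every failure.
import random

def train_random_model():
    aligned = random.random() < 0.10       # 10% base rate of alignment
    pass_prob = 0.90 if aligned else 0.10  # 90/90 test
    return aligned, pass_prob

def resample_until_pass() -> bool:
    while True:
        aligned, pass_prob = train_random_model()
        if random.random() < pass_prob:    # model passes the validation test
            return aligned

trials = 100_000
wins = sum(resample_until_pass() for _ in range(trials))
print(wins / trials)  # ~0.5, i.e. P(aligned | passes test)
```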
2025-03-04
https://www.lesswrong.com/posts/BocDE6meZdbFXug8s/progress-links-and-short-notes-2025-03-03
BocDE6meZdbFXug8s
Progress links and short notes, 2025-03-03
jasoncrawford
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads.

An occasional reminder: I write my blog/newsletter as part of my job running the Roots of Progress Institute (RPI). RPI is a nonprofit, supported by your subscriptions and donations. If you enjoy my writing, or appreciate programs like our fellowship and conference, consider making a donation. (To those who already donate, thank you for making this possible!)

Contents

- Progress Conference 2025
- Are you teaching progress at university?
- A progress talk for high schoolers
- Job opportunities
- Fellowship opportunities
- Project opportunities
- Events
- Writing announcements
- Fund announcements
- AI news
- Energy news
- Bio news
- Queries

For paid subscribers:

- A positive supply shock for truth
- Elon is perpetually in wartime mode
- The hinge of history
- More quotes
- AI doing things
- RPI fellows doing things
- Things you might want to read
- Aerospace
- Comments I liked
- Politics
- Fun

Progress Conference 2025

Save the date: Progress Conference 2025 will be October 16–19 in Berkeley, CA. Hosted by us, the Roots of Progress Institute, together with the Abundance Institute, the Foresight Institute, the Foundation for American Innovation, HumanProgress.org, the Institute for Humane Studies, and Works in Progress magazine. Speakers and more details to be announced this spring.

Progress Conference 2024 was a blast: Fantastic people, enchanting venue, great energy. Several people called it the best conference they had ever attended, full stop. (!) 2025 is going to be bigger and better!

Are you teaching progress at university?

Professors: are you teaching a "progress studies" course now/soon, or considering it? I've heard from a few folks recently who are doing this. It might be useful to share syllabi and generally help each other out. We can serve as a hub for this! Reply and let me know.

A progress talk for high schoolers

I gave a talk to high schoolers: The Future of Humanity—And How You Can Help

We hear a lot about disaster scenarios, from pandemic diseases to catastrophic climate change. Is humanity doomed? Or can we solve these challenges, and even create a future that is better than the world has ever seen? I make the case for problem-solving, based on both history and theory, and conclude by pointing towards some of the most important problems and opportunities to work on, to create the best possible future for humanity.

Job opportunities

- Arc Institute is hiring a Chief Scientific Officer "to help lead our flagship institute initiatives on Alzheimer's disease and simulating biology with virtual cell foundation models" (@pdhsu)
- Neuralink is hiring a BCI field engineer: "You'd literally be working on giving those who have lost mobility the powers of telepathy and telekineses to regain lost parts of their lives + making the Neuralink device even better in the future! Anyone who wants a job that fills their heart with meaning should consider this!" (@shivon)
- UK AI Security Institute: "I'm leading a new team at AISI focused on control empirics. We're hiring research engineers and research scientists, and you should join us!" (@j_asminewang)
- Tim Urban (Wait But Why) says: "I've spent much of the past year visiting cutting-edge companies and interviewing their scientists and CEOs. Some of the places that have left me most exhilarated below.
If you're looking to dedicate yourself to something incredibly exciting that's changing the world, consider applying to work at one of these companies." See the list here
- Works in Progress magazine is hiring "an artist / designer with an interest in illustration, ornamentation and typography to help Works in Progress develop an aesthetic that is close to these images (updated where necessary to the needs of a modern magazine). We are inspired by illuminated manuscripts, the Arts & Crafts movement, traditional Islamic & East Asian styles, art nouveau, and other aesthetics that celebrate beauty and ornament, rather than minimalism and 'challenging' the viewer" (@s8mb). Send portfolio to wip-design@stripe.com

Fellowship opportunities

- Future Impact Group fellowship: "If you (a) have excellent writing skills, policy acumen, technical literacy, analytical skills; (b) are a Good human; and (c) want to write high quality, detailed memos for DeepMind's policy team – then you should apply to the FIG fellowship by 7 March" (@sebkrier)

Project opportunities

- "Who would like to build a teeny Solar Data Center at Edge Esmeralda in June? Completely off-grid w/ solar, batteries, cooling, Starlink all integrated. Will put together a squad if there's interest" (@climate_ben)

Events

- Science of Science/Metascience hackathon, UC Berkeley, Mar 8–9: "bridge academia & industry, and build innovative tools that supercharge reproducibility and impact" (@abhishekn). $2,750 in prizes
- New Cities Summit, Nairobi, June 12–13, from the Charter Cities Institute (@CCIdotCity)
- AI discussions, Capitol Hill, ongoing, hosted by the Mercatus Center. Hill staffers: "If you want to attend the next briefings—possibly with special guests—get in touch!" (@deanwball)

Writing announcements

- The Technological Republic, by Palantir CEO Alex Karp and his deputy Nick Zamiska. "The United States since its founding has always been a technological republic, one whose place in the world has been made possible and advanced by its capacity for innovation" (@PalantirTech). Don't know what % I will agree with this, but it's bound to be interesting
- Why Nothing Works: Who Killed Progress—and How to Bring It Back, by Marc Dunkelman (@MarcDunkelman). Excerpt: How Progressives Broke the Government
- Edge City Roadmap: "A vision for how we go from popup villages to a thriving Network City. At the core is a simple idea: it's time for more experiments in how we live, connect, and build global communities" (@JoinEdgeCity)
- The Rebuild, a new Substack from Tahra Jirari and Gary Winslett, on "how Democrats can break through bureaucracy, build more, cut costs, and actually deliver results that win back voters' trust" (@tahrajirari)

Fund announcements

- Public Benefit Innovation Fund, associated with Renaissance Philanthropy, launches with $20M for AI: "a philanthropic venture fund and R&D lab dedicated to accelerating technology innovations for a more abundant economic future" (@pbifund)

AI news

- Mira launches Thinking Machines: "We're building three things: Helping people adapt AI systems to work for their specific needs; Developing strong foundations to build more capable AI systems; Fostering a culture of open science that helps the whole field understand and improve these systems.
Our goal is simple, advance AI by making it broadly useful and understandable through solid foundations, open science, and practical applications." (@miramurati, see also @thinkymachines)
- Elicit Raises $22M, and launches Elicit Reports, "a better version of Deep Research for actual researchers" (@elicitorg). "With AI we can bring the rigor of a systematic review to a user who could never afford to spend months going through hundreds or thousands of papers" (@jungofthewon)
- Google launches "an AI co-scientist system, designed to go beyond deep research tools to aid scientists in generating novel hypotheses & research strategies" (@GoogleAI). Announcement: Accelerating scientific breakthroughs with an AI co-scientist. "You will soon be able to create digital organizations—digital societies, even—tailored for precisely your question, for a price that will decrease rapidly" (@deanwball)
- Mercury, the first commercial-scale diffusion large language model (from @InceptionAILabs). LLMs so far have generated text linearly, in sequence, starting from an initial prompt. AI image generation, however, uses "diffusion," which creates the whole image all at once. This is a diffusion-based text generator. See more detailed explanation and comments from @karpathy

Energy news

- Valar Atomics announces $19M in funding and a pilot site. "Nuclear energy today is bespoke and artisanal—every site is different. To make nuclear reactors at planetary scale, we need bigger sites. In fact, we need Valar Atomics Gigasites" (@isaiah_p_taylor)
- BP pivots back to oil and gas after 'misplaced' faith in green energy (FT): "Our optimism for a fast transition was misplaced and we went too far too fast. Oil and gas will be needed for decades to come," says the CEO of BP. But BP will still produce less oil and gas in 2030 than it did in 2019, and has "not abandoned its plans to be a diversified energy company"

Bio news

- "Loyal's senior dog lifespan extension drug LOY-002 has completed its FDA efficacy package (RXE). We are on track to hopefully bring the first longevity drug to market this year" (@celinehalioua). WaPo: Antiaging pill for dogs clears key FDA hurdle
- Arc Institute together with NVIDIA released Evo 2, "a fully open source biological foundation model trained on genomes spanning the entire tree of life" (@pdhsu). Here's the announcement from Arc. "Evo 2 can predict which mutations in a gene are likely to be pathogenic, or even design entire eukaryotic genomes" (@NikoMcCarty). Asimov Press coverage: Evo 2 Can Design Entire Genomes
- Arc also launched its Virtual Cell Atlas, "a growing resource for computation-ready single-cell measurements. As the initial contributions, Vevo Theraputics has open sourced Tahoe-100M, the world's largest single-cell dataset, mapping 60,000 drug-cell interactions, and we're announcing scBaseCamp, the first RNA sequencing data repository curated using AI agents" (@arcinstitute). Press release here.
- Very early results with new therapies based on genetic technology: One gene therapy aims to cure blindness and another to restore hearing (h/t @kenbwork, @ElliotHershberg). And here's an mRNA-based treatment for pancreatic cancer (h/t @DKThomp). I stress that these are early because often such treatments don't pan out, and at best they take years to be available to patients. For instance, here are some reasons to moderate enthusiasm about the cancer treatment

Queries

As always, I put these out there in case anyone can help:

- Is there any podcast app that can also pull in YouTube or other videos?
Sometimes there is an interview I want to add to my podcast queue, except it only exists on YouTube. (I use and enjoy Overcast but can't find this feature)
- "I'll be interviewing Ilan Gur for Asimov Press next month. What should I ask him? Ilan is the CEO of the UK's ARIA. ARIA might be the closest government entity in the world to ARPA in its early years" (@eric_is_weird)
- "Did anybody ever do a post-mortem on the super-fast I-95 repair after the bridge collapse in 2023? Why can't that become the standard? Was it obscenely more expensive than slower construction? Is it less safe than normal?" (@Ben_Reinhardt)
- "During WW2, MIT received a jolt: an injection of funds worth ~$2 billion today—contracts for R&D, training, shipbuilding, etc. MIT, not a gov favorite before, earned preeminence through this work. What org is well-situated to prove itself under similar circumstances today?" (@eric_is_weird)
- "What other words are in the word cloud for 'agentic'? Bonus points for words that are not just near-synonyms but convey related concepts, like 'live player' or 'protagonist energy'" (@catehall)
- "What ChatGPT model do you use, for what purpose? I've got no idea what the menu means any more" (@michael_nielsen). I'm interested too!

To read the rest, subscribe on Substack.
2025-03-04
https://www.lesswrong.com/posts/pxYfFqd8As7kLnAom/on-writing-1
pxYfFqd8As7kLnAom
On Writing #1
Zvi
This isn’t primarily about how I write. It’s about how other people write, and what advice they give on how to write, and how I react to and relate to that advice. I’ve been collecting those notes for a while. I figured I would share. At some point in the future, I’ll talk more about my own process – my guess is that what I do very much wouldn’t work for most people, but would be excellent for some. Table of Contents How Marc Andreessen Writes. How Sarah Constantin Writes. How Paul Graham Writes. How Patrick McKenzie Writes. How Tim Urban Writes. How Visakan Veerasamy Writes. How Matt Yglesias Writes. How JRR Tolkien Wrote. How Roon Wants Us to Write. When To Write the Headline. Do Not Write Self-Deprecating Descriptions of Your Posts. Do Not Write a Book. Write Like No One Else is Reading. Letting the AI Write For You. Being Matt Levine. The Case for Italics. Getting Paid. Having Impact. How Marc Andreessen Writes Marc Andreessen starts with an outline, written as quickly as possible, often using bullet points. David Perell: When Marc Andreessen is ready to write something, he makes an outline as fast as possible. Bullet points are fine. His goal is to splatter the page with ideas while his mind is buzzing. Only later does he think about organizing what he’s written. He says: “I’m trying to get all the points out and I don’t want to slow down the process by turning them all into prose. It’s not a detailed outline like something a novelist would have. It’s basically bullet points.” Marc is saying that first you write out your points and conclusion, then you fill in the details. He wants to figure it all out while his mind is buzzing, then justify it later. Whereas I learn what I think as I write out my ideas in detail. I would say that if you are only jotting down bullet points, you do not yet know what you think. Where we both agree is that of course you should write notes to remember key new ideas, and also that the organizing what goes where can be done later. I do not think it is a coincidence that this is the opposite of my procedure. Yes, I have some idea of what I’m setting out to write, but it takes form as I write it, and as I write I understand. If you’re starting with a conclusion, then writing an outline, and writing them quickly, that says you are looking to communicate what you already know, rather than seeking to yourself learn via the process. A classic rationalist warning is to not write the bottom line first. How Sarah Constantin Writes Sarah Constantin offers an FAQ on how she writes. Some overlap, also a lot of big differences. I especially endorse doing lots of micro-edits and moving things around and seeing how they develop as they go. I dismiss the whole ‘make an outline’ thing they teach you in school as training wheels at best and Obvious Nonsense at worst. I also strongly agree with her arguments that you need to get the vibe right. I would extend this principle to needing to be aware of all four simulacra levels at once at all times. Say true and helpful things, keeping in mind what people might do with that information, what your statements say about which ‘teams’ you are on in various ways, and notice the vibes and associations being laid down and how you are sculpting and walking the paths through cognitive space for yourself and others to navigate. 
Mostly you want to play defensively on level two (make sure you don’t give people the wrong idea), and especially on level three (don’t accidentally make people associate you with the wrong faction, or ideally any faction), and have a ‘don’t be evil’ style rule for level four (vibe well on all levels, and avoid unforced errors, but vibe justly and don’t take cheap shots), with the core focus always at level one. How Paul Graham Writes I think this is directionally right, I definitely won’t leave a wrong idea in writing: Paul Graham: Sometimes when writing an essay I’ll leave a clumsy sentence to fix later. But I never leave an idea I notice is wrong. Partly because it could damage the essay, and partly because you don’t need to: noticing an idea is wrong starts you toward fixing it. However I also won’t leave a clumsy sentence that I wasn’t comfortable being in the final version. I will often go back and edit what I’ve written, hopefully improving it, but if I wasn’t willing to hit post with what I have now then I wouldn’t leave it there. In the cases where this is not true, I’m going explicitly leave a note, in [brackets] and usually including a TK[tktk], saying very clearly that there is a showstopper bug here. Here’s another interesting contrast in our styles. Paul Graham: One surprising thing about office hours with startups is that they scramble your brain. It’s the context switching. You dive deep into one problem, then deep into another completely different one, then another. At the end you can’t remember what the first startup was even doing. This is why I write in the mornings and do office hours in the afternoon. Writing essays is harder. I can’t do it with a scrambled brain. It’s fun up to about 5 or 6 startups. 8 is possible. 12 you’d be a zombie. I feel this way at conferences. You’re constantly context switching, a lot of it isn’t retained, but you go with the flow, try to retain the stuff that matters most and take a few notes, and hope others get a lot out of it. The worst of that was at EA Global: Boston, where you are by default taking a constant stream of 25 minute 1-on-1s. By the end of the day it was mostly a blur. When I write, however, mostly it’s the opposite experience to Graham’s writing – it’s constant context switching from one problem to the next. Even while doing that, I’m doing extra context switching for breaks. A lot of that is presumably different types of writing. Graham is trying to write essays that are tighter, more abstract, more structured, trying to make a point. I’m trying to learn and explore and process and find out. Which is why I basically can indeed do it with a scrambled brain, and indeed have optimized for that ability – to be able to process writing subtasks without having to load in lots of state. How Patrick McKenzie Writes Patrick McKenzie on writing fast and slow, formal and informal, and the invocation of deep magick. On one topic he brings up: My experience on ‘sounding natural’ in writing is that you can either sound natural by writing in quick natural form, or you can put in crazy amounts of work to make it happen, and anything in between won’t work. Also I try to be careful to intentionally not invoke the deep magick in most situations. One only wants to be a Dangerous Professional when the situation requires it, and you need to take on a faceless enemy in Easy Mode. Patrick McKenzie also notes that skilled writers have a ton of control over exactly how controversial their statements will effectively be. I can confirm this. 
Also I can confirm that mistakes are often made, which is a Skill Issue. How Tim Urban Writes Tim Urban says writing remains very hard. Tim Urban: No matter how much I write, writing remains hard. Those magical moments when I’m in a real flow, it seems easy, but most of the time, I spend half a day writing and rewriting the same three paragraphs trying to figure out the puzzle of making them not suck. Being in a writing flow is like when I’m golfing and hit three good shots in a row and think “k finally figured this out and I’m good now.” Unfortunately the writing muse and the golf fairy both like to vanish without a trace and leave me helpless in a pit of my own incompetence. Dustin Burnham: Periods of brilliance would escape Douglas Adams for so long that he had to be locked in a hotel room by his editors to finish The Hitchhiker’s Guide to the Galaxy. I definitely have a lot of moments when I don’t feel productive, usually because my brain isn’t focused or on. I try to have a stack of other productive things I can do despite being unproductive while that passes. But over time, yes, I’ve found the writing itself does get easy for me? Often figuring out what I think, or what I want to write about, is hard, but the writing itself comes relatively easily. Yes, you can then go over it ten times and edit to an inch of its life if you want, but the whole ‘rewriting the same three paragraphs’ thing is very rare. I think the only times I did it this year I was pitching to The New York Times. How Visakan Veerasamy Writes What’s the best target when writing? Visakan Veerasamy: Charts out for ribbonfarm. I do endorse the core thing this is trying to suggest: To explore more and worry about presentation and details less, on most margins. And to know that in a real sense, if you have truly compelling fuckery, you have wiggled your big toe. Hard part is over. I do not think the core claim itself is correct. Or perhaps we mean different things by resonant and coherent? By coherent, in my lexicon, he means more like ‘well-executed’ or ‘polished’ or something. By resonant, in my lexicon, he means more like ‘on to something central, true and important.’ Whereas to me resonant is a vibe, fully compatible with bullshit, ornate or otherwise. How Matt Yglesias Writes Matt Yglesias reflects on four years of Slow Boring. He notes that it pays to be weird, to focus where you have comparative advantage rather than following the news of the week and fighting for a small piece of the biggest pies. He also notes the danger of repeating yourself, which I worry about as well. How JRR Tolkien Wrote Thread from 2020 on Tolkien’s path to writing Lord of the Rings. I’ve never done anything remotely like this, which might be some of why I haven’t done fiction. How Roon Wants Us to Write Roon calls for the end of all this boring plain language, and I am here for it. Roon: I love the guy, but I want the post-Goldwater era of utilitarian philosophical writing to be over. Bring back big words and epic prose, and sentences that make sense only at an angle. Eliezer Yudkowsky: I expect Claude to do a good job of faking your favorite continental styles if you ask, since it requires little logical thinking, only vibes. You can produce and consume it privately in peace, avoiding its negative externalities, and leave the rest of us to our utility. Roon: Eliezer, you are a good writer who often speaks in parables and communicates through fiction and isn’t afraid of interesting uses of language. 
You’ve certainly never shied away from verbosity, and that’s exactly what I’m talking about. Perhaps some day I will learn how to write fiction. My experiences with AI reinforce to me that I really, really don’t know how to do that. When To Write the Headline I usually write the headline last. Others disagree. Luke Kawa: Joe Weisenthal always used to say don’t write a post until you know the headline first. More and more on short posts I find myself thinking “don’t write the post until you know the meme you can post with it first.” As I said on Twitter, everyone as LessOnline instead gave the advice to not let things be barriers to writing. If you want to be a writer, write more, then edit or toss it out later, but you have to write. Also, as others pointed out, if you start with the headline every time you are building habits of going for engagement if not clickbait, rather than following curiosity. Other times, yes, you know exactly what the headline is before you start, because if you know you know. Do Not Write Self-Deprecating Descriptions of Your Posts I confirm this is sometimes true (but not always): Patrick McKenzie: Memo to self and CCing other writers on an FYI basis: If when announcing a piece you make a self-deprecating comment about it, many people who cite you will give a qualified recommendation of the piece, trying to excuse the flaw that you were joking about. You think I would understand that ~20 years of writing publicly, but sometimes I cannot help myself from making the self-deprecating comment, and now half of the citations of my best work this year feel they need to disclaim that it is 24k words. You can safely make the self-deprecating comments within the post itself. That’s fine. Do Not Write a Book Don’t write a book. If you do, chances are you’d sell dozens of copies, and earn at most similar quantities of dollars. The odds are very much doubleplusungood. Do you want to go on podcasts this much? If you must write one anyway, how to sell it? The advice here is that books mostly get sold through recommendations. To get those, Eric Jorgenson’s model is you need three things: Finishable. If they don’t finish it, they won’t recommend it. So tighten it up. Unique OR Excellent. Be the best like no one ever was, or be like no one ever was. Memorable. Have hooks, for when people ask for a book about or for some X. If you are thinking about writing a book, remember that no one would buy it. Write Like No One Else is Reading Michael Dempsey and Ava endorse the principle of writing things down on the internet even if you do not expect anyone to read them. Michael Dempsey: I loved this thread from Ava. My entire career is owed to my willingness to write on the Internet. And that willingness pushed me to write more in my personal life to loved ones. As long as you recognize that most people will not care, your posts probably will not go viral, but at some point, one person might read something you write and reach out (or will value you including a blue link to your thoughts from many days, weeks, months, or years ago), it’s close to zero downside and all upside. Ava: I’m starting to believe that “write on the Internet, even if no one reads it” is underrated life advice. It does not benefit other people necessarily; it benefits you because the people who do find or like your writing and then reach out are so much more likely to be compatible with you. It’s also a muscle. 
I used to have so much anxiety posting anything online, and now I’m just like “lol, if you don’t like it, just click away.” People underestimate the sheer amount of content on the Internet; the chance of someone being angry at you for something is infinitely lower than no one caring. I think it’s because everyone always sees outrage going viral, and you think “oh, that could be me,” and forget that most people causing outrage are trying very hard to be outrageous. By default, no one cares, or maybe five people care, and maybe some nice strangers like your stuff, and that’s a win. Also, this really teaches you how to look for content you actually like on the Internet instead of passively receiving what is funneled to you. Some of my favorite Internet experiences have been reading a personal blog linked to someone’s website, never marketed, probably only their friends and family know about it, and it’s just the coolest peek into their mind. I think the thing I’m trying to say here is “most people could benefit from writing online, whether you should market your writing aggressively is a completely different issue.” I wrote on Tumblr and Facebook and email for many years before Substack, and 20 people read it, and that was great. I would not broadly recommend “trying to make a living off your writing online,” but that’s very different from “share some writing online.” What is the number of readers that justifies writing something down? Often the correct answer is zero, even a definite zero. Even if it’s only about those who read it, twenty readers is actually a lot of value to put out there, and a lot of potential connection. Letting the AI Write For You Paul Graham predicts that AI will cause the world to divide even more into writes and write-nots. Writing well and learning to write well are both hard, especially because it requires you to think well (and is how you think well), so once AI can do it for us without the need to hire someone or plagiarize, most people won’t learn (and one might add, thanks to AI doing the homework they won’t have to), and increasingly rely on AI to do it for them. Which in turn means those people won’t be thinking well, either, since you need to write to think well. I think Graham is overstating the extent AI will free people from the pressure to write. Getting AI to write well in turn, and write what you actually want it to write, requires good writing and thinking, and involving AI in your writing OODA loop is often not cheap to do. Yes, more people will choose not to invest in the skill, but I don’t think this takes the pressure off as much as he expects, at least until AI gets a lot better. There’s also the question of how much we should force people to write anyway, in order to make them think, or be able to think. As Graham notes, getting rid of the middle ground could be quite bad: Robin Hanson: But most jobs need real thinking. So either the LLMs will actually do that thinking for them, or workers will continue to write, in order to continue to think. I’d bet on the latter, for decades at least. Perry Metzger: Why do we still teach kids mathematics, even though at this point, most of the grunt work is done better by computers, even for symbolic manipulation? Because if they’re going to be able to think, they need to practice thinking. Most jobs don’t require real thinking. Proof: Most people can’t write. One could argue that many jobs require ‘mid-level’ real thinking, the kind that might be lost, but I think mostly this is not the case. 
Most tasks and jobs don’t require real thinking at all, as we are talking about it here. Being able to do it? Still highly useful. On the rare occasions the person can indeed do real thinking, it’s often highly valuable, but the jobs are designed knowing most people can’t and won’t do that.

Being Matt Levine

Gwern asks, why are there so few Matt Levines? His conclusion is that Being Matt Levine requires both that a subject be amenable to a Matt Levine, which most aren’t, and also that there be a Matt Levine covering them, and Matt Levines are both born rather than made and highly rare. In particular, a Matt Levine has to shout things into the void, over and over, repeating simple explanations time and again, and the subject has to involve many rapidly-resolved example problems to work through, with clear resolutions.

The place where I most epicly fail to be a Matt Levine in this model is my failure to properly address the beginner mindset and keep things simple. My choice to cater to a narrower, more advanced crowd, one that embraces more complexity, means I can’t go wide the way he can. That does seem right. I could try to change this, but I mostly choose not to. I write too many words as it is.

The Case for Italics

The case for italics. I used to use italics a lot.

Char: “never italicise words to show emphasis! if you’re writing well your reader will know. you don’t need them!”

me: oh 𝘳𝘦𝘢𝘭𝘭𝘺? listen up buddy, you will have to pry my emotional support italics from my 𝘤𝘰𝘭𝘥, 𝘥𝘦𝘢𝘥, 𝘧𝘪𝘯𝘨𝘦𝘳𝘴, they are going 𝘯𝘰𝘸𝘩𝘦𝘳𝘦.

Richard White: I’m coming to the conclusion that about 99.9% of all writing “rules” can safely be ignored. As long as your consistent with your application of whatever you’re doing, it’ll be fine.

Kira: Italics are important for subtlety and I will fight anyone who says otherwise

It’s a great tool to have in your box. What I ultimately found was it is also a crutch that comes with a price. You almost never need italics, and the correct version without italics is easier on the reader. When I look back on my old writing and see all the italics, I often cringe. Why did I feel the need to do that? Mostly I blame Eliezer Yudkowsky giving me felt permission to do it.

About 75% of the time I notice that I can take out the italics and nothing would go wrong. It would be a little less obvious what I’m trying to emphasize, in some senses, but it’s fine. The other 25% of the time, I realize that the italics is load bearing, and if I remove it I will have to reword, so mostly I reword.

Getting Paid

Scott Alexander does his third annual Subscribe Drive. His revenue has leveled off. He had 5,993 paid subscribers in 2023, 5,523 in 2024, and has 5,329 now in 2025. However his unpaid numbers keep going up, from 78k to 99k to 126k.

I’ve been growing over time, but the ratios do get worse. I doubled my unpaid subscriber count in 2023, and then doubled it again in 2024. But my subscription revenue was only up about 50% in 2023, and only up another 25% in 2024. I of course very much appreciate paid subscriptions, but I am 100% fine, and it is not shocking that my offer of absolutely nothing extra doesn’t get that many takers.

Paywalls are terrible, but how else do you get paid?

Email sent to Rob Henderson: The hypocrisy of the new upper class he proclaims as he sends a paid only email chain…

Cartoons Hate Her: Sort of a different scenario but most people say they think it should be possible to make a living as a writer or artist and still shout “LAME!! PAYWALL!” whenever I attempt to *checks notes* make a living as a writer.

Rob Henderson: Agree with this. Regrettably I’ll be adding more paywalls going forward. But will continue to regularly offer steep discounts and free premium subscriptions.

I am continuously grateful that I can afford to not have a paywall, but others are not so fortunate. You have to pay the bills, even though it is sad that this greatly reduces reach and ability to discuss the resulting posts.

It’s great to be able to write purely to get the message out and not care about clicks. Unfortunately, you do still have to care a little about how people see the message, because it determines how often they and others see future messages. But I am very grateful that, while I face more pressure than Jeff, I face vastly less than most, and don’t have to care at all about traffic for traffic’s sake.

Ideally, we would have more writers who are supported by a patron system, in exchange for having at most a minimal paywall (e.g. I think many would still want a paywall on ability to comment to ensure higher quality or civility, or do what Scott Alexander does and paywall a weird 5% of posts, or do subscriber-questions-only AMAs or what not).

Having Impact

Scott Sumner cites claims that blogging is effective. I sure hope so!

Patrick McKenzie suggests responding to future AIs reading your writing by, among other things, ‘creating more spells’ and techniques that can thereby be associated with you, and then invoked by reference to your name. And to think about how your writing being used as training data causes things to be connected inside LLMs. He also suggests that having your writing outside paywalls can help.

In my case, I’m thinking about structure – the moves between different topics are designed to in various ways ‘make sense to humans’ but I worry about how they might confuse AIs and this could cause confusion in how they understand me and my concepts in particular, including as part of training runs. I already know this is an issue within context windows; AIs are typically very bad at handling these posts as context. One thing this is motivating is more clear breaks and shorter sections than I would otherwise use, and also shorter more thematically tied together posts.

Ben Hoffman does not see a place or method in today’s world for sharing what he sees as high-quality literate discourse, at least given his current methods, although he identifies a few people he could try to usefully engage with more. I consistently find his posts some of the most densely interesting things on the internet and often think a lot about them, even though I very often strongly disagree with what he is saying and also often struggle to even grok his positions, so I’m sad he doesn’t offer us more.

My solution to the problem of ‘no place to do discourse’ is that you can simply do it on Substack on your own, respond to whoever you want to respond to, speak to who you want to speak to and ignore who you want to ignore. I do also crosspost to LessWrong, but I don’t feel any obligation to engage if someone comments in a way that misconstrues what I said.
2025-03-04
https://www.lesswrong.com/posts/TPTA9rELyhxiBK6cu/formation-research-organisation-overview
TPTA9rELyhxiBK6cu
Formation Research: Organisation Overview
alamerton
Thank you to Adam Jones, Lukas Finnveden, Jess Riedel, Tianyi (Alex) Qiu, Aaron Scher, Nandi Schoots, Fin Moorhouse, and others for the conversations and feedback that helped me synthesise these ideas and create this post.

Epistemic Status: my own thoughts and research after thinking about lock-in and having conversations with people for a few months. This post summarises my thinking about lock-in risk.

TL;DR: This post gives an overview of Formation Research, an early-stage nonprofit startup research organisation working on lock-in risk, a neglected category of AI risk.

Introduction

I spent the last few months of my master’s degree working with Adam Jones on starting a new organisation focusing on a neglected area of AI safety he identified. I ended up focusing on a category of risks I call ‘lock-in risks’. In this post, I outline what we mean by lock-in risks, some of the existing work regarding lock-in, the threat models we identified as being relevant to lock-in, and the interventions we are working on to reduce lock-in risks[1].

What Lock-In Means

We define lock-in risks as the probabilities of situations where features of the world, typically negative elements of human culture, are made stable for a long period of time. The things we care most about when wanting to minimise lock-in risks (from our point of view as humans) are not all the features of the world, but the features of human culture, namely values and ethics, power and decision-making structures, cultural norms and ideologies, and scientific and technological progress. From the perspective of minimising risk, we are therefore most interested in potential lock-in scenarios that would be harmful, oppressive, persistent, or widespread. While the definition encompasses lock-ins not related to AI systems, I anticipate that AI will be the key technology in their potential future manifestation. Therefore, the focus of the organisation is on lock-in risks in general, but we expect this to mostly consist of AI lock-in risks.

For example – North Korea under the Kim Family:
- Example of a totalitarian dictatorship under the Kim family since 1948
- Centralised government and ideological control
- Extensive surveillance, forced labour, and exploitation

Or, a generally intelligent autonomous system pursuing a goal and preventing interference:
- Example of a situation which could remain stable for a long time
- Digital error-correction capabilities of an intelligent system theoretically enable the long-term preservation of goals and stability of behaviour
- Could be highly undesirable from the perspective of humans

Positive Lock-In

Today’s positive lock-in might be considered short-sighted and negative by future generations. Just as locking in the values of slavery would be seen as a terrible idea today, so locking in some values of today might be seen as a terrible idea in the future. This is an attendant paradox in a society that makes constant progress towards an improved version of itself. However, there is a small space of potential solutions where we might be able to converge on something close to positive lock-ins. This is the region where we lock in features of human culture that we believe contribute to the minimisation of lock-in risks.

- One example is human extinction. Efforts to prevent this from happening, such as those in the field of existential risk, can be argued as being positive, because they prevent us from a stable, undesirable future – all being dead.
- Another example is locking in the rule of never banning criticism, because this prevents us from getting into a situation where critical thinking cannot be targeted at a stable feature of the world to question its integrity or authority.
- Lastly, the preservation of sustainable competition could be argued as a positive lock-in, because it helps prevent the monopolisation of features of culture, such as in markets.

The concept of a positive lock-in is delicate, and more conceptual work is needed to learn whether we can sensibly converge on positive lock-ins to mitigate lock-in risks.

Neutral Lock-In

We define these as lock-ins that we are not particularly interested in. The openness of our definition allows for many features of the world to be considered lock-ins. For example, the shape of the pyramids of Giza, the structural integrity of the Great Wall of China, or the temperature of the Earth. These are features of the world which tend to remain stable, but that we are not trying to make more or less stable.

Figure 1. The precise shape of the Pyramids of Giza is a stable feature of the world we do not care about changing.

Negative Lock-In

These are the lock-ins we are most interested in. The manifestation of these kinds of lock-ins would have negative implications for humanity. As mentioned, we care most about lock-ins that are:
- Harmful: resulting in physical or psychological harm to individuals
- Oppressive: suppressing individuals’ freedom, autonomy, speech, or opportunities, or the continued evolution of culture
- Persistent: long-term, unrecoverable, or irreversible
- Widespread: concerning a significant portion of individuals relative to the total population

While there are other negative qualities of potential lock-ins, these are the qualities we care most about. Some examples of other areas we might care about are situations where humanity is limited in happiness, freedom, rights, well-being, quality of life, meaning, autonomy, survival, or empowerment.

Why Do AI Systems Enable Lock-In?

We further categorise lock-ins by their relationship to AI systems. We created three such categorisations:
- AI-enabled: led by a human or a group of humans leveraging an AI system
- AI-led: led by an AI system or group of AI systems
- Human-only: led by a human or group of humans and not enhanced significantly by an AI system

AI-Enabled Lock-In

It is possible that lock-in can be led by a human or a group of humans leveraging an AI system. For example, a human with the goal of controlling the world could leverage an AI system to implement mass worldwide surveillance, preventing interference from other humans. AGI would make human immortality possible in principle, removing the historical succession problem.[2] An AGI system that solves biological immortality could theoretically enable a human ruler to continue their rule indefinitely. Lastly, AGI systems would make whole brain emulation possible in principle, which would allow for any individual’s mind to persist indefinitely, like biological immortality. This way, the mind of a human ruler can be emulated and consulted on decisions, also continuing their rule indefinitely.

Thankfully, there are no concrete existing examples of negative AI-enabled lock-ins yet, but some related examples are the Molochian forces of recommendation algorithms and short-form video acting on human attention and preferences, built by companies such as Meta and TikTok in products such as the TikTok app, Instagram Reels, and YouTube Shorts. These are situations where intelligent algorithms have been optimised by humans for customer retention.

AI-Led Lock-In

It is possible that lock-in can be led by an autonomous AI system or group of AI systems. For example, a generally intelligent AI system (AGI) could competently pursue some goal and prevent interference from humans due to the instrumentally convergent goal of goal preservation, resulting in a situation where the AGI pursues that goal for a long time. AI systems could enable the persistent preservation of information, including human values. AGI systems would be software systems, which can in principle last forever without changing, and so an autonomous AGI pursuing goals may be able to successfully pursue and achieve those goals without interference for a long time.

Human-Only Lock-In

As mentioned, we do still care about lock-in risks that don’t involve AI, given that they are sufficiently undesirable according to our prioritisation. Human-only lock-ins have existed in the past. Prime examples of this are past expansionist totalitarian regimes.

Other People Have Thought About This

Nick Bostrom

In Superintelligence, Nick Bostrom describes lock-in as a potential second-order effect of superintelligence developing. A superintelligence can create conditions that effectively lock in certain values or arrangements for an extremely long time or permanently.
- In chapter 5, he introduces the concept of decisive strategic advantage – that one entity may gain strategic power over the fate of humanity at large. He relates this to the potential formation of a Singleton, a single decision-making agency at the highest level.
- In chapter 7, he introduces the instrumental convergence hypothesis, providing a framework for the motivations of superintelligent systems. The hypothesis suggests a number of logically implied goals an agent will develop when given an initial goal.
- In chapter 12, he introduces the value loading problem, and the risks of misalignment due to issues such as goal misspecification.

Bostrom frames lock-in as one potential outcome of an intelligence explosion, aside from the permanent disempowerment of humanity. He suggests that a single AI system, gaining a decisive strategic advantage, could control critical infrastructure and resources, becoming a singleton. He also outlines the value lock-in problem, where hard-coding human values into AI systems that become generally intelligent or superintelligent may lead to those systems robustly defending those values against shift due to instrumental convergence. He also notes that the frameworks and architectures leading up to an intelligence explosion might get locked in and shape subsequent AI development.

Lukas Finnveden

AGI and Lock-In, authored by Lukas Finnveden during an internship at Open Philanthropy, is currently the most detailed report on lock-in risk. The report expands on notes made on value lock-in by Jess Riedel, who co-authored the report along with Carl Shulman. The report references Nick Bostrom’s initial arguments about AGI and superintelligence, and argues that many features of society can be held stable for up to trillions of years due to AI systems’ digital error-correction capabilities and the alignment problem. The authors first argue that dictators, enabled by technological advancement to be immortal, could avoid the succession problem, which explains the end of past totalitarian regimes, but theoretically would not prevent the preservation of future regimes. Next, whole brain emulations (WBEs) of dictators could be arbitrarily loaded and consulted on novel problems, enabling perpetual value lock-in. They also argue that AGI-led institutions could themselves competently pursue goals with no value drift due to digital error correction. This resilience can be reinforced by distributing copies of values across space, protecting them from local destruction. Their main threat model suggests that if AGI is developed and is misaligned, and does not permanently kill or disempower humans, lock-in is the likely next default outcome.

William MacAskill

In What We Owe the Future, Will MacAskill introduces the concept of longtermism and its implications for the future of humanity. It was MacAskill who originally asked Lukas Finnveden to write the AGI and Lock-In report. He expands on the concepts outlined in the report in more philosophical terms in chapter 4 of his book, entitled ‘Value Lock-In’. MacAskill defines value lock-in as ‘an event that causes a single value system, or set of value systems, to persist for an extremely long time’. He stresses the importance of current cultural dynamics in potentially shaping the long-term future, explaining that a set of values can easily become stable for an extremely long time. He identifies AI as the key technology with respect to lock-in, citing AGI and Lock-In. He echoes their threat models:
- An AGI agent with hard-coded goals acting on behalf of humans could competently pursue that goal indefinitely. The beyond-human intelligence of the agent suggests it could successfully prevent humans from doing anything about it.
- Whole brain emulations of humans can potentially pursue goals for eternity, due to them being technically immortal.
- AGI may enable human immortality; an immortal human could instantiate a lock-in that could last indefinitely, especially if their actions are enabled and reinforced by AGI.
- Values could become more persistent if a single value system is globally dominant. If a future world war occurs which one nation or group of nations wins, the value system of the winners may persist.

Jess Riedel

In Value Lock-In Notes 2021, Jess Riedel provides an in-depth overview of value lock-in from a longtermist perspective. Riedel details the technological feasibility of irreversible value lock-in, arguing that permanent value stability seems extremely likely for AI systems that have hard-coded values. Riedel claims that ‘given machines capable of performing almost all tasks at least as well as humans, it will be technologically possible, assuming sufficient institutional cooperation, to irreversibly lock-in the values determining the future of earth-originating intelligent life.’ The report focuses on the formation of a totalitarian super-surveillance police state controlled by an effectively immortal bad person. Riedel explains that the only requirements are one immortal malevolent actor, and surveillance technology.
Robin Hanson

In MacAskill on Value Lock-In, economist Robin Hanson comments on What We Owe the Future, arguing that immortality is insufficient for value stability. He believes MacAskill underestimates the dangers of central power and is overconfident about the likelihood of rapid AI takeoff. Hanson presents an alternative framing of lock-in threat models:
- A centralised ‘take over’ process, in which an immortal power with stable values takes over the world.
- A decentralised evolutionary process, where, as entities evolve in a stable universe, some values might become dominant via evolutionary selection. These values would outcompete others and remain ultimately stable.
- Centralised regulation: the central powers needed to promote MacAskill’s ‘long reflection’, limit national competition, and preserve value plurality could create value stability through their central dominance. He also suggests this could cause faster value convergence than decentralised evolution.

Paul Christiano

In Machine intelligence and capital accumulation, Paul Christiano proposes a ‘naïve model’ of capital accumulation involving advanced AI systems. He frames agents as: "soups of potentially conflicting values. When I talk about “who” controls what resources, what I really want to think about is what values control what resources." He claims it is plausible that the arrival of AGI will lead to a ‘crystallisation of influence’, akin to lock-in – where whoever controls resources at that point may maintain control for a very long time. He also expresses concern that influence over the long-term future could shift to ‘machines with alien values’, leading to humanity ending with a whimper. He illustrates a possible world in which this occurs. In a future with AI, human wages fall below subsistence level as AI replaces labour. Value is concentrated in non-labour resources such as machines, land, and ideas. The resources can be directly controlled by their owners, unlike people. So whoever owns the machines captures the resources and income, causing the distribution of resources at the time of AI to become ‘sticky’ – whoever controls the resources can maintain that control indefinitely via investment.

Lock-in Threat Models

In order to begin working on outputs that aim to reduce lock-in risks, we established a list of threat models we believe are the most pressing chains of events that could lead to negative lock-ins in the future. We approached this by examining the fundamental patterns in the existing work, and by backcasting from all the undesirable lock-ins we could think of. The result is a list of threat models, as follows:

Patterns in Existing Work
- An autonomous AI system competently pursues a goal and prevents interference
- An immortal AI-enabled malevolent actor instantiates a lock-in
- A whole brain emulation of a malevolent actor instantiates a lock-in
- A large group with sufficient decision-making power decides the values of the long-term future, by majority, natural selection, or war
- Anti-rational ideologies prevent cultural, intellectual or moral progress
- The frameworks and architectures leading up to an intelligence explosion get locked in and shape subsequent AI development
- AI systems unintentionally concentrate power because so many people use them that whoever controls the AI system controls the people

Backcasting Results
- Decision-making power is concentrated into the hands of a small group of actors (AI agents or humans), causing a monopoly, oligopoly, or plutocracy
- The deliberate manipulation of information systems by actors interested in promoting an agenda or idea leads to a persistent prevalence of that idea
- AI systems proliferate so much that modifications to the AI system impact people's worldviews
- AI-generated content causes addiction, e.g. TikTok, Instagram, and YouTube. Especially harmful if targeting young people.
- Tech companies or AI labs concentrate power, leading to a lock-in of services, potentially leading to corruption or a monopoly
- Ideologies push us into an undesired future, or anti-rational ideologies prevent humanity from making moral progress
- Enfeeblement from reliance on AI systems

Lock-In Risk Interventions

So far, we are working on white papers, research, and model evaluations.

Research
- Lock-in risk is a nascent and currently neglected research area in AI safety. This line of work seeks to conduct foundational lock-in risk research, creating knowledge about the phenomenon and proposing novel interventions to reduce lock-in risks.
- Lock-in research is concerned with complex systems, making much of the work low tractability. This line of work attempts to include a microscopic, granular approach, focusing on empirical studies of current AI use cases (such as recommender systems and LLMs), as well as a macroscopic, general approach, focusing on agent foundations and theory (such as with game theory and emulations).

Evaluations

Evaluations help decision makers know how dangerous their models are, and know information about the models generally.
- This line of work would seek to build a comprehensive evaluation for lock-in risk, starting with large language models (LLMs).
- The evaluation would prompt the model using specific language with a capability or behaviour in mind, and score the model’s response based on the presence of the target behaviour.
- The model’s answers will be scored using natural language scoring metrics, and the model’s final score will be calculated based on its individual scores.
- The framework will be targeted at AISI Inspect, with the goal being for the UK government to use the evaluation to test frontier models, thus increasing the likelihood that models which pose a potential lock-in risk are identified early.

White papers:
- White papers are one medium that can be useful for presenting a problem and proposed solution, particularly in a persuasive way, attempting to effect change.
- Technical reports are useful for summarising and presenting knowledge acquired from research, sometimes with related recommendations.
- This work proposes that Formation produces white papers and reports aiming to create knowledge about lock-in risk and persuade decision makers to take action.

How We Aim to Effect Change

The theory of change behind these interventions is that our research identifies methods and frameworks for preventing lock-in, the model evaluations show us which AI systems are at risk of contributing to the manifestation of lock-in, and the white papers are targeted at the organisations in which these at-risk systems are in use, recommending they implement our intervention to minimise the risk. We aim to work with these organisations to negotiate ways of implementing the interventions that are non-intrusive and mutually beneficial from both the organisation’s perspective and the perspective of reducing lock-in risks effectively.

I hope that this post sparks a new conversation about lock-in risks. There are many angles to approach this problem from, and it is fairly unique in its risk profile and complexity.

^ From here onwards, by 'we' I refer to the organisation as an entity.
^ https://www.wikiwand.com/en/articles/Succession_crisis
2025-03-04
https://www.lesswrong.com/posts/5XznvCufF5LK4d2Db/the-semi-rational-militar-firefighter
5XznvCufF5LK4d2Db
The Semi-Rational Militar Firefighter
gabriel-brito
LessWrong Context: I didn’t want to write this. Not for lack of courage—I’d meme-storm Putin’s Instagram if given half a chance. But why?
- Too personal.
- My stories are tropical chaos: I survived the Brazilian BOPE (think Marine Corps training, but post-COVID).
- I’m dyslexic, writing in English (a crime against Grice).
- This is LessWrong, not some Deep Web Reddit thread.

Okay, maybe a little lack of courage. And yet, something can be extracted from all this madness, right?

Then comes someone named Gwern. He completely ignores my thesis and simply asks: "Tell military firefighter stories." My first instinct was to dismiss him as an oddball—until a friend told me I was dealing with a legend of rationality. I have to admit: I nearly shit myself. His comment got more likes than the post I’d spent years working on. Someone with, what, a 152 IQ wanted my accounts of surviving bureaucratic military hell? And I’m the same guy who applies scientific rigor to Pokémon analysis? I didn’t want to expose my ass on LessWrong, but here we are. So, I decided to grant his request with a story that blends military rigidity with... well, whatever it is I do—though the result might be closer to Captain Caveman than Sun Tzu.

Firefighter Context: Brazilian military firefighters are first and foremost soldiers. Their training is built on four pillars: first aid, rescue, firefighting, and aquatic survival. We were in the jungle, undergoing a rescue training exercise with no food, alongside the BOPE—Brazil’s elite force, notorious for their grueling training and for carrying a skull-and-dagger emblem. Wherever they go, they shout their motto: “Knife in the skull!”

The Knife: After a week without food, they released animals into the jungle. The female recruits had to hunt, and they managed to kill a rabbit with a single clubbing blow—its eye popped out. Then they turned to me:

“Brito! Are you ‘knife in the skull?’”
“I’m knife in the hose, sir!”
“But… doesn’t a knife in the hose puncture the hose?”
“And doesn’t a knife in the skull puncture the skull?”
(Some laughter)
“Then prove you’re ‘knife’ and eat this rabbit’s eye raw!”

So, channeling the most primal, savage creature I knew, I swallowed the eye and croaked out: “My preciousssss!” —Sméagol from The Lord of the Rings—

Later, during formation, another superior addressed my squad: “We need more firefighters like this—who throw their whole bodies into following orders and still manage to have fun.” Then he turned to me: “Brito, what did the rabbit’s eye taste like?”

“I don’t think the rabbit was very happy, sir. It tasted like tears.”

Simultaneously, I: a) Completed the tribal ritual b) Avoided malnutrition

So

After taking plenty of hits from the military and with the help of two friends, I shifted back toward the rational side. Nowadays, solving complex problems through mathematics feels wilder to me than anything I ever faced as a military firefighter. Well, this was one of my middle-ground stories—not the most logical, not the most brutal. Should I continue with something heavier on pathos or logos?
2025-03-04
https://www.lesswrong.com/posts/hxEEEYQFpPdkhsmfQ/could-this-be-an-unusually-good-time-to-earn-to-give
hxEEEYQFpPdkhsmfQ
Could this be an unusually good time to Earn To Give?
HorusXVI
I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection.

I base the argument below on a few key assumptions, listed below. Each of these could be debated in their own right but I would prefer to keep any discussion of them outside this post and its comments. This is for brevity and because my reason for making them is largely a deferral to people better informed on the subject than I. The Intelligence Curse by Luke Drago is a good backdrop for this.
- Whether or not we see AGI or Superintelligence, AI will have significantly reduced the availability of white-collar jobs by 2030, and will only continue to reduce this availability.
- AI will eventually drive an enormous increase in world GDP.
- The combination of these will produce a severity of wealth inequality that is both unprecedented and near-totally locked-in.

If AI advances cause white-collar human workers to become redundant by outperforming them at lower cost, we are living in a dwindling window in which one can determine their financial destiny. Government action and philanthropy notwithstanding, one's assets may not grow appreciably again once their labour has become replaceable. An even shorter window may be available for starting new professions as entry-level jobs are likely the easiest to automate and companies will find it easier to stop hiring people than start firing them. That this may be the fate of much of humanity in the not-too-distant future seems really bleak. While my ear is not the closest to the ground on all things AI, my intuition is that humanity will not have the collective wisdom to restructure society in time to prevent this leading to a technocratic feudal hierarchy. Frankly, I'm alarmed that having engaged with EA consistently for 7+ years I've only heard discussion of this very recently.

Furthermore, the Trump Administration has proven itself willing to use America's economic and military superiority to pressure other states into arguably exploitative deals (tariffs, offering Ukraine security guarantees in exchange for mineral resources) and shed altruistic commitments (foreign aid). My assumption is that if this Administration, or a similar successor, oversaw the unveiling of workplace-changing AI, the furthest it would cast its moral circle would be American citizens. Those in other countries may have very unclear routes to income.

Should this scenario come to pass, altruistic individuals having bought shares in companies that experience this economic explosion before it happened could do disproportionate good. The number of actors able to steer the course of the future at all will have shrunk by orders of magnitude and I would predict that most of them will be more consumed by their rivalries than any desire to help others. Others have pointed out that this generally was the case in medieval feudal systems. Depending on the scale of investment, even a single such person could save dozens, hundreds, or even thousands of other people from destitution. If that person possessed charisma or political aptitude, their influence over other asset owners could improve the lives of a great many.
Given that being immensely wealthy leaves many doors open for conventional Earning To Give if this scenario doesn't come to pass (and I would advocate for donating at least 10% of income along the way), it seems sensible to me for an EA to aggressively pursue their own wealth in the short term. If one has a clear career path for helping solve the alignment problem or achieve the governance policies required to bring transformative AI into the world for the benefit of all, I unequivocally endorse pursuing those careers as a priority. These considerations are for those without such a clear path. I will now apply a vignette to my circumstances to provide a concrete example and because I genuinely want advice! I have spent 4 years serving as a military officer. My friend works at a top financial services firm, which has a demonstrable preference for hiring ex-military personnel. He can think of salient examples of people being hired for jobs that pay £250k/year with CVs very arguably weaker, in both military and academic terms, than mine. With my friend's help, it is plausible that I could secure such a position. I am confident that I would not suffer more than trivial value drift while earning this wage, or on becoming ludicrously wealthy thereafter, based on concrete examples in which I upheld my ethics despite significant temptation not to. I am also confident that I have demonstrated sufficient resilience in my current profession to handle life as a trader, at least for a while. With much less confidence, I feel that I would be at least average in my ability to influence other wealthy people to buy into altruistic ideals. My main alternative is to seek mid to senior operations management roles at EA and adjacent organisations with longtermist focus. I won't labour why I think these roles would be valuable, nor do I mean to diminish the contributions that can be made in such roles. This theory of impact does, of course, rely heavily on the org I get a job in delivering impactful results; money can almost certainly buy results but of a fundamentally more limited nature. So, should one such as I Earn To Invest And Then Give, or work on pressing problems directly?
2025-03-04
https://www.lesswrong.com/posts/vxSGDLGRtfcf6FWBg/top-ai-safety-newsletters-books-podcasts-etc-new-aisafety
vxSGDLGRtfcf6FWBg
Top AI safety newsletters, books, podcasts, etc – new AISafety.com resource
bryceerobertson
Keeping up to date with rapid developments in AI/AI safety can be challenging. In addition, many AI safety newcomers want to learn more about the field through specific formats e.g. books or videos. To address both of these needs, we’ve added a Stay Informed page to AISafety.com. It lists our top recommended sources for learning more and staying up to date across a variety of formats:
- Articles
- Blogs
- Books
- Forums
- Newsletters
- Podcasts
- Twitter/X accounts
- YouTube channels

You can filter the sources by format, making it easy to find, for instance, a list of top blogs to follow. We think this page might be particularly useful as a convenient place for field-builders to direct people to when asked about top books/newsletters/blogs etc. As with all resources on AISafety.com, we’re committed to making sure the data on this page is high quality and current. If you think there’s something that should be added (or removed) please let us know in the comments or via the general feedback form.

The site now has the following resources:

We’d love to hear any ideas for other resources we should consider adding.
2025-03-04
https://www.lesswrong.com/posts/kZ9tKhuZPNGK9bCuk/how-much-should-i-worry-about-the-atlanta-fed-s-gdp
kZ9tKhuZPNGK9bCuk
How much should I worry about the Atlanta Fed's GDP estimates?
korin43
The Atlanta Fed is seemingly predicting -2.8% GDP growth in the first quarter of 2025. I've seen several people mention this on Twitter, but it doesn't seem to be discussed much beyond that, and the stock market seems pretty normal (S&P 500 down 2% in the last month). Is this not really a useful signal? Or is the market under-reacting?
2025-03-04
https://www.lesswrong.com/posts/mRKd4ArA5fYhd2BPb/observations-about-llm-inference-pricing
mRKd4ArA5fYhd2BPb
Observations About LLM Inference Pricing
Aaron_Scher
This work was done as part of the MIRI Technical Governance Team. It reflects my views and may not reflect those of the organization.

Summary

I performed some quick analysis of the pricing offered by different LLM providers using public data from ArtificialAnalysis. These are the main results:
- Pricing for the same model differs substantially across providers, often with a price range of 10x.
- For a given provider (such as AWS Bedrock), there is notable variation in price at a given model size. Model size is still a good predictor of price.
- I estimate that many proprietary models are sold at a substantial mark-up. This is not surprising, given development costs and limited competition to serve the model.
- Mixture-of-Experts (MoE) models, which activate only a subset of parameters during inference, are often billed similarly to a dense model sized between the active and total parameter counts—typically leaning toward the higher, total parameter count.

I looked into this topic as part of a shallow dive on inference costs. I’m interested in how inference costs have changed over time and what these trends might imply about a potential intelligence explosion. However, this analysis shows a large variation in sticker prices, even to serve the same model, so sticker prices are likely not a very precise tool for studying inference cost trends. This project was done in a few days and should be seen as “a shallow dive with some interesting findings” rather than a thorough report. Data about real-world LLM inference economics is hard to come by, and the results here do not tell a complete or thorough picture. I’m partially publishing this in the spirit of Cunningham’s Law: if you have firsthand knowledge about inference economics, I would love to hear it (including privately)!

Pricing differs substantially across providers when serving the same model

Open-weight LLMs are a commodity: whether Microsoft Azure is hosting the model or Together.AI is hosting the model, it’s the exact same model. Therefore, we might naively expect the price for a particular model to be about the same across different providers—gas prices are similar across different gas stations. But we actually see something very different. For a given model, prices often range by 10x or more from the cheapest provider to the most expensive provider.

ArtificialAnalysis lists inference prices charged by various companies, allowing easy comparison across providers. Initially, we’ll look at models that are fully open-weight (anybody can serve them without a license). This image from ArtificialAnalysis shows how different providers stack up for serving the Llama-3.1-70B Instruct model:[1] The x-axis is the price. The cheapest provider is offering this model for a price of $0.20 per million tokens, the median provider is at ~$0.70, and the most expensive provider is charging $2.90. This amount of price variation appears quite common. Here’s the analogous chart for the Llama-3.1-405B Instruct model: The cheapest provider is offering this model for a price of $0.90 per million tokens, the median provider is at ~$3.50, and the most expensive provider is charging $9.50.

There is somewhat less spread for models with fewer providers. For instance, the Mixtral-8x22B-instruct model has five providers whose prices range from $0.60 to $3.00. Overall, LLMs do not appear to be priced like other commodities. There are huge differences in the price offered by different providers and this appears to be true for basically all open-weight models.
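As a quick worked version of the arithmetic behind those spread claims, here is a tiny sketch using only the handful of prices quoted above (the per-provider data itself comes from ArtificialAnalysis and is not reproduced here):

```python
# Min / median / max prices quoted above, in USD per million tokens (3:1 blend).
quoted_prices = {
    "Llama-3.1-70B Instruct": (0.20, 0.70, 2.90),
    "Llama-3.1-405B Instruct": (0.90, 3.50, 9.50),
    "Mixtral-8x22B Instruct": (0.60, None, 3.00),  # only min and max quoted above
}

for model, (low, median, high) in quoted_prices.items():
    spread = high / low  # ratio of most expensive to cheapest provider
    median_note = f", median ${median:.2f}" if median is not None else ""
    print(f"{model}: ${low:.2f} to ${high:.2f}{median_note} -> {spread:.1f}x spread")
```

On these numbers the 70B model's spread works out to roughly 14x, the 405B model to roughly 11x, and the less-contested Mixtral-8x22B to 5x, consistent with the claims above.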
Pricing differs for models in the same size class, even for a particular provider

One natural question to ask is, how well does model size predict price? Intuitively, the size of the model should be one of the main determinants of cost, and we might expect similarly sized models to be priced similarly. First, let’s look at fully open-weight dense models on AWS Bedrock. Note that this data is thrown off by Llama 3 70B, a model priced at $2.86. By contrast, Llama 3.1 70B is priced at only $0.72. But, overall, what we’re seeing here is that model size is fairly predictive of price, and there is some variation between models of the same size (even excluding this outlier).

Let’s also look at prices on Deep Infra, a provider that tends to have relatively low prices. Without any odd outliers, we see a much nicer fit, with an R^2 value of 0.904. For those curious, a quadratic fits this data somewhat better, with an R^2 of 0.957. Some of the models around the 70B size class are: Llama 3.3 70B (Turbo FP8) ($0.20), Llama 3.3 70B ($0.27), Llama 3.1 70B ($0.27), Qwen2.5 72B ($0.27), and Qwen2 72B ($0.36). So on Deep Infra we see the same trend: model size does predict price pretty well, but there is also variation in price at a given model size.

Estimating marginal costs of inference

I’m interested in understanding the trends in LLM inference costs over time because this could help predict dynamics of an intelligence explosion or AI capabilities diffusion. Therefore, my original motivation for looking at inference prices was the hope that these would be predictive of underlying costs. Unfortunately, providers can upcharge models to make profit, so prices might not be useful for predicting costs. However, simple microeconomics comes to the rescue! In a competitive market with many competing firms, we should expect prices to approach marginal costs. If prices are substantially above marginal costs, money can be made by new firms undercutting the competition. We can estimate marginal cost based on the minimum price for a given model across all providers. This approach makes the assumption that the cheapest providers are still breaking even, an assumption that may not hold.[2]

Let’s look at some of the dense, open-weight models that have many providers, and compare the size of the model to the lowest price from any provider. We’ll look at the models with the most competition (providers) and add a few other models for size variation. Across 9 models we have a mean of 8.7 providers. Here’s that same graph, but zoomed in to only show the smaller models: Model size is again a good predictor of price, even better than when we looked at a single provider above. And look at how cheap the models are! Eight billion parameter models are being served at $0.03, 70B models at $0.20, and the Llama 405B model at $0.90.

If we assume these minimum prices approximate the marginal cost of serving a model—again, a somewhat dubious assumption—we can predict costs for larger models. The best-fit line for these prices implies a fixed cost of $0.03 and a variable cost of $0.02 per ten billion parameters. This would predict that a 10 trillion parameter dense model would cost about $22 per million tokens. Alternatively, by using the best-fit line from AWS’s prices, we get that AWS might offer a 10 trillion parameter model for about $60.
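To make the fit and the extrapolation concrete, here is a minimal sketch of the calculation described above. It uses only the three minimum prices quoted in this section (8B at $0.03, 70B at $0.20, 405B at $0.90) rather than the full nine-model dataset, so the coefficients come out close to, but not exactly, the ones reported above.

```python
import numpy as np

# Minimum price across providers (USD per million tokens, 3:1 blend) for three
# dense open-weight models, as quoted above. The full analysis used nine models;
# this is an illustrative subset.
params_b = np.array([8.0, 70.0, 405.0])   # model size in billions of parameters
min_price = np.array([0.03, 0.20, 0.90])  # cheapest observed price

# Ordinary least-squares line: price ~ fixed_cost + variable_cost * size
variable_cost, fixed_cost = np.polyfit(params_b, min_price, 1)
print(f"fixed cost ~ ${fixed_cost:.2f}, "
      f"variable cost ~ ${10 * variable_cost:.3f} per ten billion parameters")

# Extrapolate to a hypothetical 10-trillion-parameter dense model.
predicted = fixed_cost + variable_cost * 10_000
print(f"predicted price for a 10T dense model: ~${predicted:.0f} per million tokens")
```

On these three points the line lands at roughly $0.03 fixed, about $0.02 per ten billion parameters, and about $22 per million tokens for a 10T dense model, matching the figures above; with so few points, that agreement is more a sanity check than evidence.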
Proprietary models don’t have a nice dynamic of price competition—instead providers can charge much higher rates. But these minimum prices are still useful because they basically tell us “this is the largest model size that the market knows how to serve at some price without losing money”, at least if we’re assuming nobody is losing money. This turns out to not be a very interesting analysis, so I’ll be brief. Here are the expected maximum dense parameter counts for some proprietary models based on their price: Claude 3.5 Sonnet: $6.00, 2.8 trillion parameters, Claude 3 Haiku: $0.50, 217 billion parameters, GPT-4o (2024-08-06): $4.38, 2 trillion parameters, Gemini 1.0 Pro (AI Studio): $0.75, 332 billion parameters. Based on model evaluations and vibes, I expect these are higher than the actual parameter counts, except for perhaps Gemini. Let’s look at the original GPT-4 as a case study. The model was priced at $37.50, and it is rumored to be a Mixture-of-Experts model. According to this analysis of MoE inference costs, the model should cost about as much as a 950 billion parameter dense model. We’ll assume that the cost of serving such a model today is about the same as it was in early 2023 when GPT-4 was first available (a very dubious assumption). Then we can compare the expected price to serve the model today (under various conditions) against the price charged in March 2023. Using the minimum-provider-price trend, we get that the model was priced at about 18x marginal cost; using Azure’s price trend, we get that the model was priced at about 2x marginal cost; and using TogetherAI’s price trend, we get that the model was priced at about 4.5x marginal cost. There’s substantial uncertainty in these numbers, and we know of many ways that it has gotten cheaper to serve models over time (e.g., FlashAttention). But because it’s useful to have a general idea, I think the original GPT-4 was likely served at somewhere between 1x and 10x marginal cost. MoE models are generally priced between their active and total parameter count How do the prices for Mixture-of-Experts (MoE) models compare to standard dense models? Fortunately, there are a few MoE models that have multiple providers, so we can apply similar reasoning based on the lowest cost from any of these providers. Let’s look at the data from above but adding in MoE models. Each MoE model will get two data points, one for its active—or per token—parameter count, and one for its total parameter count. We’re basically asking, “for the price of an MoE model, what would be the dense-model parameter equivalent, and how does that compare to the active and total parameter counts.” A couple quick notes on the data here. The DBRX model only has two model providers, and it requires a license for large companies to serve, so it could be overpriced. The DeepSeek-V3 model also has a provider offering it for cheaper ($0.25) at FP8 quantization, which we exclude. The trendline for dense models falls between the active and total parameter count for 3 of the 5 MoE models. The other two MoE models are over-priced compared to the dense trends, even when looking at their total parameter count. Assuming this price reflects marginal cost, this data would indicate that MoE models have a similar cost to a dense model somewhere between their active and total parameter count, perhaps closer to the total parameter count. However, the data is quite noisy and there are fewer providers for these MoE models than the dense models (mean of 6 and 8.7 respectively). 
We can also replicate this analysis for a single model provider, and we get similar results. On Nebius, Deep Infra, and Together.AI the trend line for dense model prices indicates that MoE models have a dense-model-equivalent cost somewhere between their active and total parameter count, or a bit higher than total parameter count. Here’s the graph for Nebius:

Input and output token prices

The ratio between prices for input and output tokens varies across providers and models, generally falling between 1x and 4x (i.e., output tokens are at least as expensive as input tokens, but less than 4 times as expensive). There are a few providers that price input and output tokens equally, including relatively cheap providers. This price equivalence is somewhat surprising given that the models most people use—OpenAI, Anthropic, and Google models—price output tokens as 3-5x more expensive than input tokens. It’s generally believed that input tokens should be cheaper to process than output tokens because inputs can be processed in parallel with good efficiency.

Hypotheses about price variance

The large price range for serving a particular model is deeply confusing. Imagine you went to the gas station and the price was $4.00, and you look across the street at another gas station and the price is $40.00—that’s basically the situation we currently see with LLM inference. Open-weight LLMs are a commodity, it’s the same thing being sold by Azure as by Together.AI! I discussed this situation with a few people and currently have a few hypotheses for why prices might differ so much for providing inference.

First, let’s talk about the demand side, specifically the fact that inference on open-weight models is not exactly a commodity. There are a few key measures a customer might care about that could differ for a particular model:
- Price (input, output, total based on use case).
- Rate limits and availability.
- Uptime.
- Speed (time to first token, output speed or time per output token, and total response time which is a combination of these). On a brief look at the speed vs. cost relationship, there are some models where cheaper providers are slower—as expected—but, there are some models where this is not the case.
- Context length.
- Is it actually the same model? It’s possible that some providers are slightly modifying a model that they are serving for inference, for example by quantizing some of the computation, so that the model differs slightly across providers. ArtificialAnalysis indicates that some providers are doing this (and I avoided counting those models in the minimum-price analysis), but others could be doing this secretly and users may not know.

Outside of considerations about the model being served, there might be other reasons providers differ, from a customer’s perspective:
- Cheaper LLM providers might be unreliable or otherwise worse to do business with.
- Existing corporate deals might make it easier to use pricier LLM providers such as Azure and AWS. If you are an employee and your company already uses AWS for many services, this could be the most straightforward option.
- Maybe the switching costs for providers are high and customers therefore get locked into using relatively expensive providers. This strikes me as somewhat unlikely given that switching is often as simple as replacing a few lines of code.
- Maybe customers don’t bother to shop around for other providers. Shopping around can be slightly annoying (some providers make it difficult to find the relevant information). Inference expenses might also be low, in absolute terms, for some customers, such that shopping around isn’t worthwhile (though this doesn’t seem like it should apply at a macro-level).

On the supply side, the price is going to be affected by how efficiently one can serve the model and how much providers want to profit (or how much they’re willing to lose). There are various factors I expect affect provider costs:
- Hardware infrastructure. Different AI chips face different total cost of ownership and different opportunity costs. There is also substantial variation in prices for particular hardware (e.g., the price to rent H100 GPUs). Other AI hardware, such as interconnect, could also affect the efficiency with which different providers can serve models.
- Software infrastructure. There are likely many optimizations that make it cheaper to serve a particular model, such as writing efficient CUDA kernels, using efficient parallelism, and using speculative decoding effectively. These could change much more quickly than hardware infrastructure. Some infrastructure likely benefits from economies of scale, e.g., it only makes sense to hire the inference optimization team if there is lots of inference for them to optimize.
- Various tradeoffs on key metrics. As mentioned, providers vary in the speed at which they serve models. Some providers may simply choose to offer models at different points along these various tradeoff curves.

The sticker prices could also be unreliable. First, sticker prices do not indicate usage, and it is possible that expensive providers don’t have very much traffic. Second, providers could offer substantial discounts below sticker price, for instance Google offers some models completely for free (excluded from the minimum-price analysis) and gives cloud credits to many customers.

Future analysis could attempt to create a comprehensive model for predicting inference prices. I expect the following variables will be key inputs to a successful model (but the data might still be too noisy): model size, number of competitors serving the model, and model performance relative to other models. The various other measures discussed above could also be useful. I expect model performance explains much of LLM pricing (e.g., that of Claude 3.5 Haiku); per ArtificialAnalysis:

Final thoughts

This investigation revealed numerous interesting facts about the current LLM inference sector. Unfortunately, it is difficult to make strong conclusions about the underlying costs of LLM inference because prices range substantially across providers. The data used in this analysis is narrow, so I recommend against coming to strong conclusions solely on its basis. Here are the spreadsheets used, based on data from ArtificialAnalysis. Again, please reach out if you have firsthand information about inference costs that you would like to share.

Thank you to Tom Shlomi, Tao Lin, and Peter Barnett for discussion.

^ The data in this post is collected from ArtificialAnalysis in December 2024 or January 2025. Prices are for a 3:1 input:output blend of one million tokens. This spreadsheet includes most of the analysis and graphs.
^ One reason to expect the cheapest model providers to be losing money is that this is likely an acceptable business plan for many of them. It is very common for businesses to initially operate at a loss in order to gain market share. Later they may raise prices or reduce their costs via returns to scale.
The magnitude of these losses is not large yet: the cost of LLM inference in 2024 (excluding model development) was likely in the single digit billions of dollars. This is relatively small compared to the hundreds of billions of AI CapEx mainly going toward future data centers; it is plausible that some companies would just eat substantial losses on inference.
2025-03-04
https://www.lesswrong.com/posts/pzYDybRAbss4zvWxh/shouldn-t-we-try-to-get-media-attention
pzYDybRAbss4zvWxh
shouldn't we try to get media attention?
avery-liu
Using everything we know about human behavior, we could probably manage to get the media to pick up on us and our fears about AI, similarly to the successful efforts of early environmental activists. Have we tried getting people to understand that this is a problem? Have we tried emotional appeals? Dumbing-downs of our best AI risk arguments, directed at the general public? Chilling posters, portraying the Earth and human civilization as having been reassembled into a giant computer?
2025-03-04
https://www.lesswrong.com/posts/vHsjEgL44d6awb5v3/the-milton-friedman-model-of-policy-change
vHsjEgL44d6awb5v3
The Milton Friedman Model of Policy Change
JohnofCharleston
One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates skillfully exploiting a crisis.

In the last year or so, I’ve had dozens of conversations about the DC policy community. People unfamiliar with this community often share a flawed assumption, that reaching policymakers and having a fair opportunity to convince them of your ideas is difficult. As “we”[1] have taken more of an interest in public policy, and politics has taken more of an interest in us, I think it’s important to get the building blocks right.

Policymakers are much easier to reach than most people think. You can just schedule meetings with congressional staff, without deep credentials.[2] Meeting with the members themselves is not much harder. Executive Branch agencies have a bit more of a moat, but still openly solicit public feedback.[3] These discussions will often go well. By now policymakers at every level have been introduced to our arguments, many seem to agree in principle… and nothing seems to happen.

Those from outside DC worry they haven’t met the right people, they haven’t gotten the right kind of “yes”, or that there’s some lobbyists working at cross purposes from the shadows. That isn’t it at all. Policymakers are mostly waiting for an opening, a crisis, when the issue will naturally come up. They often believe that pushing before then is pointless, and reasonably fear that trying anyway can be counterproductive.

A Model of Policy Change

“There is enormous inertia — a tyranny of the status quo — in private and especially governmental arrangements. Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable.”[4] —Milton Friedman

That quote is the clearest framing of the problem I’ve found; every sentence is doing work. This is what people who want to make policy change are up against, especially when that policy change is outside the current Overton Window. Epistemically, I believe his framing at about 90% strength. I quibble with Friedman’s assertion that only crises can produce real change. But I agree this model explains most major policy change, and I still struggle to find good counter-examples.

Crises Can Be Schelling Points

This theory, which also underlies Rahm Emanuel’s pithier “Never let a good crisis go to waste,” is widely believed in DC. It’s how policies previously outside the Overton Window were passed hastily:
- In the wake of the September 11th Attacks, sweeping changes to the National Security infrastructure were implemented by the PATRIOT Act in the following month, long before any investigations were complete.
- During the 2008 Financial Crisis, Lehman Brothers was allowed to collapse without a plan for its liabilities in September of 2008. The crisis intensified so quickly that the Troubled Asset Relief Program (TARP), an order of magnitude larger than what would have been necessary for Lehman Brothers, passed Congress three weeks later. Then-Chairman of the New York Federal Reserve, Timothy Geithner, famously observed of the time, “Plan beats no plan.”[5]
- COVID-19 is recent enough to not need a detailed summary, but the scale of the income support programs implemented is still hard to fathom. The United States nearly had a Universal Basic Income for several months, for both individuals and small/medium businesses.

It’s also why policies don’t always have to be closely related to the crisis that spawned them, like FDA reforms after thalidomide.[6]

Policy change is a coordination problem at its core. In a system with many veto points, like US federal government policy, there is a strong presumption for doing nothing. Doing nothing should be our strong presumption most of the time; the country has done well with its existing policy in most areas. Even in areas where existing policies are far from optimal, random changes to those policies are usually more harmful than helpful.

Avoid Being Seen As “Not Serious”

Policymakers themselves have serious bottlenecks. There is less division between policy-makers and policy-executors than people think. Congress is primarily policy-setting, but it also participates in foreign policy, conducts investigations, and makes substantive budgetary determinations. At the other end of Pennsylvania Avenue, the Executive Branch ended up with much more rule-making authority than James Madison intended. Those long-term responsibilities have to be balanced against the crisis of the day. No one wants to start a policy-making project when it won’t “get traction.”

Many people misunderstand the problem with pushing for policy that’s outside the Overton Window. It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met. The taboo policy preference isn’t the problem; it’s the implication that you don’t understand their constraints. Unless you know what you’re doing and explain your theory of change, asking a policymaker for help in moving an Overton Window is a bigger ask than you may realize. You’re inadvertently asking them to do things the hard, tedious way that almost never works. By making the ask at all, you’re signaling that either you don’t understand how most big policy change happens, or that you misunderstand how radical your suggested policy is. Because policymakers are so easy to reach, they have conversations like that relatively often. Once they slot you into their “not serious” bucket, they’ll remain agreeable, but won’t be open to further policy suggestions from you.

What Crises Can We Predict?

The takeaway from this model is that people who want radical policy change need to be flexible and adaptable. They need to:
- wait for opportunities,
- understand how new crises are likely to be perceived,
- let the not-quite-good-enough pitches go, to avoid being seen as a crank,
- ruthlessly exploit the good-enough crisis to say, “Called it,”
- and then point to a binder of policy proposals.

At the “called it” step, when you argue that you predicted this and that your policy would have prevented/addressed/mitigated the crisis, it helps if it’s true.

What crises, real or perceived, might surprise policymakers in the next few years? Can we predict the smoke?[7] Can we write good, implementable policy proposals to address those crises? If so, we should call our shots; publish our predictions and proposals somewhere we can refer back to them later. They may come in handy when we least expect.[8]

^ Left deliberately undefined, so I don’t get yelled at. Unlike that time I confidently espoused that "Rationality is Systematized Winning" is definition enough, and half the room started yelling different objections at once.
[2] https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-congressional-staffers-about-ai-risk
[3] Up to and including the White House requesting comment on behalf of the Office of Science and Technology Policy: https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/ (Due March 15th!)
[4] “Capitalism and Freedom”, 1982 edition from University of Chicago Press, pages xiii-xiv.
[5] “Stress Test: Reflections on Financial Crises”, 2014 edition from Crown, New York.
[6] See Ross Rheingans-Yoo’s blog post https://blog.rossry.net/p/39d57d20-32d9-4590-a586-2fa47bb91a02/ and interview with Patrick McKenzie https://www.complexsystemspodcast.com/episodes/drug-development-ross-rheingans-yoo/
[7] https://www.lesswrong.com/posts/5okDRahtDewnWfFmz/seeing-the-smoke
[8] https://www.youtube.com/watch?v=vt0Y39eMvpI
2025-03-04
https://www.lesswrong.com/posts/sQvK74JX5CvWBSFBj/the-compliment-sandwich-aka-how-to-criticize-a-normie
sQvK74JX5CvWBSFBj
The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset.
keltan
Note. The comments on this post contain excellent discussion that you’ll want to read if you plan to use this technique. I hadn’t realised how widespread the idea was.
This valuable nugget was given to me by an individual working in advertising. At the time, I was 16, posting on my local subreddit, hoping to find someone who could advise me on a film making career path. This individual kindly took the time to sit me down at a bar—as I wore my school uniform—and detail everything I would need to do to be able to make films professionally. Among many insights I am truly grateful for was the Sandwich.
As with many metaphorical sandwiches, the compliment sandwich is named incorrectly. It should really be called the criticism sandwich.
Recipe:
You'll need:
- 2 compliments (The Bread)
- 1 Critique (The Filling)
Instructions:
1. Start with a compliment. Even the worst of things have a silver lining; you'll need to find it and comment on it.
2. Now provide the critique. It can be more brutal than a lone critique because the blow was softened by your first compliment.
3. Finish off with your second compliment. Make it flow naturally from the critique if you can, something like "Oh, but I almost forgot to mention, I love how you..."
Final Thoughts
This isn't a technique to be used with rationalists. This is a normie communication protocol. It also works well with kids, teens, and people in a bad state of mind.
I hope the compliment sandwich is a valuable piece in your lunch box 🧰 going forward. Bon appétit.
2025-03-03