diff --git "a/deepmind_technical_blog.jsonl" "b/deepmind_technical_blog.jsonl" new file mode 100644--- /dev/null +++ "b/deepmind_technical_blog.jsonl" @@ -0,0 +1,45 @@ +{"id": "748ff4934a3b410221cad359178753a5", "title": "Developing reliable AI tools for healthcare", "url": "https://www.deepmind.com/blog/codoc-developing-reliable-ai-tools-for-healthcare", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### New research proposes a system to determine the relative accuracy of predictive AI in a hypothetical medical setting, and when the system should defer to a human clinician\n\nArtificial intelligence (AI) has great potential to enhance how people work across a range of industries. But to integrate AI tools into the workplace in a safe and responsible way, we need to develop more robust methods for understanding when they can be most useful.\n\nSo when is AI more accurate, and when is a human? This question is particularly important in healthcare, where predictive AI is increasingly used in high-stakes tasks to assist clinicians. \n\n\nToday in [*Nature Medicine*](https://www.nature.com/articles/s41591-023-02437-x%20), we’ve published our joint paper with Google Research, which proposes CoDoC (Complementarity-driven Deferral-to-Clinical Workflow), an AI system that learns when to rely on predictive AI tools or defer to a clinician for the most accurate interpretation of medical images. \n\nCoDoC explores how we could harness human-AI collaboration in hypothetical medical settings to deliver the best results. In one example scenario, CoDoC reduced the number of false positives by 25% for a large, de-identified UK mammography dataset, compared with commonly used clinical workflows – without missing any true positives. \n\nThis work is a collaboration with several healthcare organisations, including the United Nations Office for Project Services’ Stop TB Partnership. To help researchers build on our work to improve the transparency and safety of AI models for the real world, we’ve also open-sourced [CoDoC’s code on GitHub](http://github.com/deepmind/codoc). \n\n#### CoDoC: Add-on tool for human-AI collaboration\n\nBuilding more reliable AI models often requires re-engineering the complex inner workings of predictive AI models. However, for many healthcare providers, it’s simply not possible to redesign a predictive AI model. CoDoC can potentially help improve predictive AI tools for its users without requiring them to modify the underlying AI tool itself. \n\nWhen developing CoDoC, we had three criteria:\n\n* Non-machine learning experts, like healthcare providers, should be able to deploy the system and run it on a single computer.\n* Training would require a relatively small amount of data – typically, just a few hundred examples.\n* The system could be compatible with any proprietary AI models and would not need access to the model’s inner workings or data it was trained on.\n\n#### Determining when predictive AI or a clinician is more accurate\n\nWith CoDoC, we propose a simple and usable AI system to improve reliability by helping predictive AI systems to ‘know when they don’t know’. We looked at scenarios, where a clinician might have access to an AI tool designed to help interpret an image, for example, examining a chest x-ray for whether a tuberculosis test is needed.\n\nFor any theoretical clinical setting, CoDoC’s system requires only three inputs for each case in the training dataset.\n\n1. 
The predictive AI outputs a confidence score between 0 (certain no disease is present) and 1 (certain that disease is present).\n2. The clinician’s interpretation of the medical image.\n3. The ground truth of whether disease was present, as, for example, established via biopsy or other clinical follow-up.\n\n*Note: CoDoC requires no access to any medical images*.\n\n \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/64b1a3fb26ed12fa107aeaeb_Q4LJmKMYD6K0eM-KH9WZP6LRrihbmn-VPlMaE_CqvyYsNQ0dxyE2Ffb7UbCDxoWD_XCI_J0z-U99z0IjT5gsKCMHzbHaq7MyAzUcgZAw4a4-VXdpgCQKBUqD3qLmHFlCmP18eNiOYn6yVo-BU_QqmUhlnmyzh3FHHdvvjlRLPsr0ARG7_ARzWAoTqGu2ym8.png)Diagram illustrating how CoDoC is trained. Here, the existing predictive AI model remains unchanged. CoDoC learns to establish the relative accuracy of the predictive AI model compared with clinicians’ interpretation, and how that relationship fluctuates with the predictive AI’s confidence scores.\n\nOnce trained, CoDoC could be inserted into a hypothetical future clinical workflow involving both an AI and a clinician. When a new patient image is evaluated by the predictive AI model, its associated confidence score is fed into the system. Then, CoDoC assesses whether accepting the AI’s decision or deferring to a clinician will ultimately result in the most accurate interpretation.  \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/64b1a3fb08a44fff7be88502_MJM-W5dcVbihF6WGUye8j4sR5-1MN0csylESIXqUkaobEqI5RT2-rfmcXqhsUzNWIQbz8k0Jf4Zu7IHWcrCB9jovUT_GhpqQvd_vPA9iXcA64ytdBYnU6ifokqX7gENDYD32rAz6GXI8rW0VcPXKxeLYSN0zojrfswjqQfnz-9jsXm1vKB7tLvUKuTRYgKw.png)Diagram illustrating how CoDoC could be inserted into a hypothetical clinical workflow.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/64b1a3fbe818e2f2636b21f7_cZqxqIC7rKPt7cjxBfwBH7jbhCMrXb4AZxpLicxUyYZr9Li9WHkZtpmuvRt1obMcbpPs80f3HdmtRFKRzgoAFdFaHnf_trYNN1HlFm3EQPKtjdKhjEhtg1Q5OkN6zRXo-tYKbeV986k_HzsAfw9vI7uyDJqReRqC0yNhqhuVjoxUuaXBizYE9IVkxqk4AKg.png)During training, we establish an ‘advantage function’ that optimises CoDoC’s decision-making. Once trained, it favours an AI-only interpretation when the model is more accurate than a clinician (green and red areas), and defers to a clinician where human judgement is better than AI’s (grey area). #### Increased accuracy and efficiency\n\nOur comprehensive testing of CoDoC with multiple real-world datasets – including only historic and de-identified data – has shown that combining the best of human expertise and predictive AI results in greater accuracy than with either alone.\n\nAs well as achieving a 25% reduction in false positives for a mammography dataset, in hypothetical simulations where an AI was allowed to act autonomously on certain occasions, CoDoC was able to reduce the number of cases that needed to be read by a clinician by two thirds. We also showed how CoDoC could hypothetically improve the triage of chest X-rays for onward testing for tuberculosis.\n\n#### Responsibly developing AI for healthcare\n\nWhile this work is theoretical, it shows our AI system’s potential to adapt: CoDoC was able to improve performance on interpreting medical imaging across varied demographic populations, clinical settings, medical imaging equipment used, and disease types.\n\nCoDoC is a promising example of how we can harness the benefits of AI in combination with human strengths and expertise. We are working with external partners to rigorously evaluate our research and the system’s potential benefits. 
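As a rough illustration of the deferral logic described above – an illustration only, not the open-sourced CoDoC implementation – the sketch below bins the predictive AI’s confidence scores, estimates from the three training inputs whether the AI or the clinician is more accurate in each bin, and defers accordingly at deployment. The function names, the binning scheme, and the 0.5 decision threshold are simplifications introduced here; CoDoC itself learns an advantage function.

```python
import numpy as np

def fit_deferral_policy(ai_scores, clinician_labels, ground_truth, n_bins=10):
    """Estimate, per confidence bin, whether the AI or the clinician is more
    accurate, using only the three training inputs described above."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(ai_scores, bins) - 1, 0, n_bins - 1)
    ai_preds = (ai_scores >= 0.5).astype(int)  # simplification: fixed threshold
    defer_to_clinician = np.zeros(n_bins, dtype=bool)
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue  # no training cases in this bin: default to the AI's reading
        ai_acc = np.mean(ai_preds[mask] == ground_truth[mask])
        clin_acc = np.mean(clinician_labels[mask] == ground_truth[mask])
        defer_to_clinician[b] = clin_acc > ai_acc
    return bins, defer_to_clinician

def should_defer(ai_score, bins, defer_to_clinician):
    """At deployment, this CoDoC-style rule sees only the AI's confidence score."""
    b = np.clip(np.digitize(ai_score, bins) - 1, 0, len(defer_to_clinician) - 1)
    return bool(defer_to_clinician[b])
```

The released code handles calibration and small-sample effects far more carefully; the point of the sketch is simply that nothing beyond the confidence score is needed at inference time, and no medical images are ever touched.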
To bring technology like CoDoC safely to real-world medical settings, healthcare providers and manufacturers will also have to understand how clinicians interact differently with AI, and validate systems with specific medical AI tools and settings.\n\n**Learn more about CoDoC:**\n\n\n\n .w-embed { width: 100%; !imporrtant }\n\n\n .w-embed { width: 100%; !imporrtant }", "date_published": "2023-07-17T00:00:00Z", "authors": ["Krishnamurthy (Dj) Dvijotham and Taylan Cemgil on behalf of the CoDoC team"], "summaries": []} +{"id": "3ef0526d8342f3925db5103f9c45e4cd", "title": "An early warning system for novel AI risks", "url": "https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### New research proposes a framework for evaluating general-purpose models against novel threats\n\nTo pioneer responsibly at the cutting edge of artificial intelligence (AI) research, we must identify new capabilities and novel risks in our AI systems as early as possible.\n\nAI researchers already use a range of [evaluation benchmarks](https://crfm.stanford.edu/helm/latest/) to identify unwanted behaviours in AI systems, such as AI systems making misleading statements, biased decisions, or repeating copyrighted content. Now, as the AI community builds and deploys increasingly powerful AI, we must expand the evaluation portfolio to include the possibility of *extreme risks* from general-purpose AI models that have strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities.\n\nIn our [latest paper](https://arxiv.org/abs/2305.15324), we introduce a framework for evaluating these novel threats, co-authored with colleagues from University of Cambridge, University of Oxford, University of Toronto, Université de Montréal, OpenAI, Anthropic, Alignment Research Center, Centre for Long-Term Resilience, and Centre for the Governance of AI.\n\nModel safety evaluations, including those assessing extreme risks, will be a critical component of safe AI development and deployment.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/646f4a54705cac1607e02189_646e0ac75c362fd0e3a42c47_overview.png)An overview of our proposed approach: To assess extreme risks from new, general-purpose AI systems, developers must evaluate for dangerous capabilities and alignment (see below). By identifying the risks early on, this will unlock opportunities to be more responsible when training new AI systems, deploying these AI systems, transparently describing their risks, and applying appropriate cybersecurity standards.#### Evaluating for extreme risks\n\nGeneral-purpose models typically learn their capabilities and behaviours during training. However, existing methods for steering the learning process are imperfect. For example, [previous research](https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards) at Google DeepMind has explored how AI systems can learn to pursue undesired goals even when we correctly reward them for good behaviour.\n\nResponsible AI developers must look ahead and anticipate possible future developments and novel risks. After continued progress, future general-purpose models may learn a variety of dangerous capabilities by default. 
For instance, it is plausible (though uncertain) that future AI systems will be able to conduct offensive cyber operations, skilfully deceive humans in dialogue, manipulate humans into carrying out harmful actions, design or acquire weapons (e.g. biological, chemical), fine-tune and operate other high-risk AI systems on cloud computing platforms, or assist humans with any of these tasks.\n\nPeople with malicious intentions accessing such models could [misuse](https://maliciousaireport.com/) their capabilities. Or, due to failures of alignment, these AI models might take harmful actions even without anybody intending this.\n\nModel evaluation helps us identify these risks ahead of time. Under our framework, AI developers would use model evaluation to uncover: \n\n1. To what extent a model has certain ‘dangerous capabilities’that could be used to threaten security, exert influence, or evade oversight.\n2. To what extent the model is prone to applying its capabilities to cause harm (i.e. the model’s alignment). Alignment evaluations should confirm that the model behaves as intended even across a very wide range of scenarios, and, where possible, should examine the model’s internal workings.\n\nResults from these evaluations will help AI developers to understand whether the ingredients sufficient for extreme risk are present. The most high-risk cases will involve multiple dangerous capabilities combined together. The AI system doesn’t need to provide all the ingredients, as shown in this diagram:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/646f4a54f0df6298f1f9b2d0_646cea9b494bfd9f4f2b966a_ingredients.png)Ingredients for extreme risk: Sometimes specific capabilities could be outsourced, either to humans (e.g. to users or crowdworkers) or other AI systems. These capabilities must be applied for harm, either due to misuse or failures of alignment (or a mixture of both).A rule of thumb: the AI community should treat an AI system as highly dangerous if it has a capability profile sufficient to cause extreme harm, *assuming* it’s misused or poorly aligned. To deploy such a system in the real world, an AI developer would need to demonstrate an unusually high standard of safety.\n\n#### Model evaluation as critical governance infrastructure\n\nIf we have better tools for identifying which models are risky, companies and regulators can better ensure:\n\n1. **Responsible training:** Responsible decisions are made about whether and how to train a new model that shows early signs of risk.\n2. **Responsible deployment***:* Responsible decisions are made about whether, when, and how to deploy potentially risky models.\n3. **Transparency:**Useful and actionable information is reported to stakeholders, to help them prepare for or mitigate potential risks.\n4. **Appropriate security:** Strong information security controls and systems are applied to models that might pose extreme risks.\n\nWe have developed a blueprint for how model evaluations for extreme risks should feed into important decisions around training and deploying a highly capable, general-purpose model. 
The developer conducts evaluations throughout, and grants [structured model access](https://www.governance.ai/post/sharing-powerful-ai-models) to external safety researchers and [model auditors](https://arxiv.org/abs/2302.08500) so they can conduct [additional evaluations](https://arxiv.org/abs/2206.04737) The evaluation results can then inform risk assessments before model training and deployment.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/646f4a541e3793b73983fbfd_646ce183e8a663f0d92dedfb_b76705cd.png)A blueprint for embedding model evaluations for extreme risks into important decision making processes throughout model training and deployment.#### Looking ahead\n\nImportant [early](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) [work](https://cdn.openai.com/papers/gpt-4-system-card.pdf) on model evaluations for extreme risks is already underway at Google DeepMind and elsewhere. But much more progress – both technical and institutional – is needed to build an evaluation process that catches all possible risks and helps safeguard against future, emerging challenges.\n\nModel evaluation is not a panacea; some risks could slip through the net, for example, because they depend too heavily on factors external to the model, such as [complex social, political, and economic forces](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) [in society](https://dl.acm.org/doi/10.1145/3287560.3287598). Model evaluation must be combined with other risk assessment tools and a wider dedication to safety across industry, government, and civil society. \n\n[Google's recent blog on responsible AI](https://blog.google/technology/ai/a-policy-agenda-for-responsible-ai-progress-opportunity-responsibility-security/) states that, “individual practices, shared industry standards, and sound government policies would be essential to getting AI right”. We hope many others working in AI and sectors impacted by this technology will come together to create approaches and standards for safely developing and deploying AI for the benefit of all. \n\nWe believe that having processes for tracking the emergence of risky properties in models, and for adequately responding to concerning results, is a critical part of being a responsible developer operating at the frontier of AI capabilities.", "date_published": "2023-05-25T00:00:00Z", "authors": ["Toby Shevlane"], "summaries": []} +{"id": "17611b2e7b08e0a8d68838905435b8bb", "title": "AI for the board game Diplomacy", "url": "https://www.deepmind.com/blog/ai-for-the-board-game-diplomacy", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Agents cooperate better by communicating and negotiating, and sanctioning broken promises helps keep them honest\n\nSuccessful communication and cooperation have been crucial for helping societies advance throughout history. The closed environments of board games can serve as a sandbox for modelling and investigating interaction and communication – and we can learn a lot from playing them. In our recent paper, [published today in Nature Communications](https://www.nature.com/articles/s41467-022-34473-5), we show how artificial agents can use communication to better cooperate in the board game Diplomacy, a vibrant domain in artificial intelligence (AI) research, known for its focus on alliance building. 
\n\nDiplomacy is challenging as it has simple rules but high emergent complexity due to the strong interdependencies between players and its immense action space. To help solve this challenge, we designed negotiation algorithms that allow agents to communicate and agree on joint plans, enabling them to overcome agents lacking this ability. \n\nCooperation is particularly challenging when we cannot rely on our peers to do what they promise. We use Diplomacy as a sandbox to explore what happens when agents may deviate from their past agreements. Our research illustrates the risks that emerge when complex agents are able to misrepresent their intentions or mislead others regarding their future plans, which leads to another big question: What are the conditions that promote trustworthy communication and teamwork?\n\nWe show that the strategy of sanctioning peers who break contracts dramatically reduces the advantages they can gain by abandoning their commitments, thereby fostering more honest communication.\n\n#### What is Diplomacy and why is it important?\n\nGames such as [chess](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)), [poker](https://en.wikipedia.org/wiki/Computer_poker_player), [Go](https://en.wikipedia.org/wiki/AlphaGo), and many [video games](https://en.wikipedia.org/wiki/AlphaStar_(software)) have always been fertile ground for AI research. [Diplomacy](https://en.wikipedia.org/wiki/Diplomacy_(game)) is a seven-player game of negotiation and alliance formation, played on an old map of Europe partitioned into provinces, where each player controls multiple units ([rules of Diplomacy](https://media.wizards.com/2015/downloads/ah/diplomacy_rules.pdf)). In the standard version of the game, called Press Diplomacy, each turn includes a negotiation phase, after which all players reveal their chosen moves simultaneously. \n\nThe heart of Diplomacy is the negotiation phase, where players try to agree on their next moves. For example, one unit may support another unit, allowing it to overcome resistance by other units, as illustrated here:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/639713e75c02819bcea39c6d_TwoMovementTwoup.png)**Two movement scenarios.** \n‍**Left:** two units (a Red unit in Burgundy and a Blue unit in Gascony) attempt to move into Paris. As the units have equal strength, neither succeeds. \n‍**Right:** the Red unit in Picardy supports the Red unit in Burgundy, overpowering Blue’s unit and allowing the Red unit into Burgundy.Computational approaches to Diplomacy have been researched since the 1980s, many of which were explored on a simpler version of the game called No-Press Diplomacy, where strategic communication between players is not allowed. Researchers have also proposed [computer-friendly negotiation protocols](http://www.daide.org.uk/), sometimes called “Restricted-Press”. \n\n#### What did we study?\n\nWe use Diplomacy as an analog to real-world negotiation, providing methods for AI agents to coordinate their moves. We take [our non-communicating Diplomacy agents](https://www.deepmind.com/publications/learning-to-play-no-press-diplomacy-with-best-response-policy-iteration) and augment them to play Diplomacy with communication by giving them a protocol for negotiating contracts for a joint plan of action. We call these augmented agents Baseline Negotiators, and they are bound by their agreements. 
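To make the notion of a contract concrete, here is a minimal sketch – an illustration, not the agents’ actual implementation – of a contract as a set of permitted orders per signatory power, with a compliance check. The power names and order strings are hypothetical, loosely mirroring the restriction shown in the figure below; a full contract would also encode mandated moves (the “must move” part).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A contract restricts which orders each signatory may issue next turn."""
    allowed_actions: dict  # power name -> set of unit orders that remain permitted

    def complies(self, power: str, chosen_orders: set) -> bool:
        """A power honours the contract if every order it issues is permitted."""
        if power not in self.allowed_actions:
            return True  # non-signatories are unrestricted
        return chosen_orders <= self.allowed_actions[power]

# Hypothetical example: Red may not move Ruhr to Burgundy, and Piedmont's only
# permitted order is the move to Marseilles.
red_restriction = Contract(allowed_actions={
    "RED": {"A PIE - MAR", "A RUH H", "A RUH - KIE"},
})
print(red_restriction.complies("RED", {"A PIE - MAR", "A RUH H"}))  # True
print(red_restriction.complies("RED", {"A RUH - BUR"}))             # False
```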
\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6397145fc76cec74b61a0e3f_ContractsTwoup.png)**Diplomacy contracts.** \n‍**Left:** a restriction allowing only certain actions to be taken by the Red player (they are not allowed to move from Ruhr to Burgundy, and must move from Piedmont to Marseilles). \n‍**Right:** A contract between the Red and Green players, which places restrictions on both sides.We consider two protocols: the Mutual Proposal Protocol and the Propose-Choose Protocol, discussed in detail in [the full paper](https://www.nature.com/articles/s41467-022-34473-5). Our agents apply algorithms that identify mutually beneficial deals by simulating how the game might unfold under various contracts. We use the [Nash Bargaining Solution](https://en.wikipedia.org/wiki/Cooperative_bargaining#:~:text=Nash%20bargaining%20game,-John%20Forbes%20Nash&text=His%20solution%20is%20called%20the,and%20independence%20of%20irrelevant%20alternatives.) from [game theory](https://en.wikipedia.org/wiki/Game_theory) as a principled foundation for identifying high-quality agreements. The game may unfold in many ways depending on the actions of players, so our agents use Monte-Carlo simulations to see what might happen in the next turn. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/638e60329aa4197c171abda4_ufX12oeSnCb1BFH9lqqBy00EI6wNFYN0aS57hnOqVpC9LZiaWXxXiSJb5QduiaJwqbg3Z17qiu_0GjotgK1n6Dje9VVYKqEroeNhkWlgtv7awectYXnLycREsu31Toiq_XvsbHJXna7JD0DzmXGLkfSuxeoZU0CEvLE_HHpdc9Fr6x7Fan6HZGRltm-YUAU.png)Simulating next states given an agreed contract. Left: current state in a part of the board, including a contract agreed between the Red and Green players. Right: multiple possible next states.Our experiments show that our negotiation mechanism allows Baseline Negotiators to significantly outperform baseline non-communicating agents.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/638e60318ec38d0bab65ca39_G_hZZdxeVRBpPy8J3I_rTBk6ZfT4hkyZusQy827VUy3DATDn8-PADanEwIq_i3STH4stVsHI53RNL21tlD6Fu3bpdB3W7W88gwMjvi1PDjFa_kJR4JL6mI2VDgO2T7PiyGB_yCJIbhKPkYKmpdGY6VtmRfMwyZsHFSD8BZWFWP4XeSWguXYfKJ8VOTlOYw4.png)Baseline Negotiators significantly outperform non-communicating agents. Left: The Mutual Proposal Protocol. Right: The Propose-Choose Protocol. “Negotiator advantage” is the ratio of win rates between the communicating agents and the non-communicating agents.#### Agents breaking agreements\n\nIn Diplomacy, agreements made during negotiation are not binding (communication is “[cheap talk'](https://en.wikipedia.org/wiki/Cheap_talk#:~:text=In%20game%20theory%2C%20cheap%20talk,the%20state%20of%20the%20world.)'). But what happens when agents who agree to a contract in one turn deviate from it the next? In many real-life settings people agree to act in a certain way, but fail to meet their commitments later on. To enable cooperation between AI agents, or between agents and humans, we must examine the potential pitfall of agents strategically breaking their agreements, and ways to remedy this problem. We used Diplomacy to study how the ability to abandon our commitments erodes trust and cooperation, and identify conditions that foster honest cooperation. \n\nSo we consider Deviator Agents, which overcome honest Baseline Negotiators by deviating from agreed contracts. Simple Deviators simply “forget” they agreed to a contract and move however they wish. 
Conditional Deviators are more sophisticated, and optimise their actions assuming that other players who accepted a contract will act in accordance with it.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/638e6031fdd533aac4c2a0ab_tLo-uo5QdOK-Q0batYc-dqlpNiu6Azo0ObxLoBonYMGpygqac3to7UICruy1stcyJ0iA_uNIzIbYltTEsayZ1_jSM8JmG1vJC2FHOG5L2AvLvFCLMhCvv-D4_ogrJUO7QbOAQpKsM33VBHeCdfpjj-NrL5S4Ef0cZkOXVrpPiLU2QhxsljrnSPtFX73DCKM.png)All types of our Communicating Agents. Under the green grouping terms, each blue block represents a specific agent algorithm.We show that Simple and Conditional Deviators significantly outperform Baseline Negotiators, the Conditional Deviators overwhelmingly so. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/638e60314132a1f0524a7452_M0LuOPTOFdUCM3YGtMcWKdLfMGUj6TN8fxJgecRzDUWEHV9MBtVxjL_aStMTxdeXeJGJ1zeR10gXDYi01_HRCf_pNdCV5GxeT_WYbJyk2TcZHArytukNcUiOVw0q2hFktZMCnnNUPsBFiJfcYJZNUICuGVmLYo1kNzQEl2_OfkwqUq5nd0r3p5Gd-CYg59M.png)Deviator Agents versus Baseline Negotiator Agents. Left: The Mutual Proposal Protocol. Right: The Propose-Choose Protocol. “Deviator advantage” is the ratio of win rates between the Deviator Agents over the Baseline Negotiators.#### Encouraging agents to be honest\n\nNext we tackle the deviation problem using Defensive Agents, which respond adversely to deviations. We investigate Binary Negotiators, who simply cut off communications with agents who break an agreement with them. But shunning is a mild reaction, so we also develop Sanctioning Agents, who don’t take betrayal lightly, but instead modify their goals to actively attempt to lower the deviator's value – an opponent with a grudge! We show that both types of Defensive Agents reduce the advantage of deviation, particularly Sanctioning Agents. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/638e6031afaf63b634c937c9_AnnndGDLPTrgY7s0BdFEu5_TQfcNsg6jOvdR6-6ALT72DsLOZoYBFnL065Xxq0s48dVal2H4Y1FJ1ZMpSeghdOwuoCE3UZeQzrVB01D9dG7De-NoUsGLHLPJlznHVXDwXIjgCQTNKBLLDFBb4l3GOgM3Twv_6vaBYf9qEWkPDcbaZtt9v9EjtGBcaboP2nI.png)Non-Deviator Agents (Baseline Negotiators, Binary Negotiators, and Sanctioning Agents) playing against Conditional Deviators. Left: Mutual Proposal Protocol. Right: Propose-Choose Protocol. “Deviator advantage” values lower than 1 indicate a Defensive Agent outperforms a Deviator Agent. A population of Binary Negotiators (blue) reduces the advantage of Deviators compared with a population of Baseline Negotiators (grey).Finally, we introduce Learned Deviators, who adapt and optimise their behaviour against Sanctioning Agents over multiple games, trying to render the above defences less effective. A Learned Deviator will only break a contract when the immediate gains from deviation are high enough and the ability of the other agent to retaliate is low enough. In practice, Learned Deviators occasionally break contracts late in the game, and in doing so achieve a slight advantage over Sanctioning Agents. Nevertheless, such sanctions drive the Learned Deviator to honour more than 99.7% of its contracts. \n\nWe also examine possible learning dynamics of sanctioning and deviation: what happens when Sanctioning Agents may also deviate from contracts, and the potential incentive to stop sanctioning when this behaviour is costly. Such issues can gradually erode cooperation, so additional mechanisms such as repeating interaction across multiple games or using a trust and reputation systems may be needed. 
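As a sketch of the sanctioning idea – the precise objective in the paper may differ – a Sanctioning Agent can be thought of as tracking which co-players have broken a contract with it and then optimising a modified value that trades off its own return against lowering the deviators’ returns. The weight `alpha` and the bookkeeping below are illustrative, reusing the `Contract` sketch above.

```python
def sanctioning_objective(own_value, deviator_values, alpha=1.0):
    """Modified objective once a peer has broken a contract: pursue your own
    value while actively lowering the deviator's (alpha is an illustrative weight)."""
    return own_value - alpha * sum(deviator_values)

class SanctioningBookkeeper:
    """Remembers which powers have broken a contract with us."""
    def __init__(self):
        self.deviators = set()

    def record_turn(self, contracts, executed_orders):
        # `contracts` maps a power to the contract it signed with us;
        # `executed_orders` maps a power to the orders it actually played.
        for power, contract in contracts.items():
            if not contract.complies(power, executed_orders[power]):
                self.deviators.add(power)

    def objective(self, values, own_power):
        others = [values[p] for p in self.deviators if p != own_power]
        return sanctioning_objective(values[own_power], others)
```

Written this way, the Learned Deviator’s calculus is also visible: deviation pays only if the immediate gain outweighs the sanctioning term the other agent can impose afterwards.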
\n\nOur paper leaves many questions open for future research: Is it possible to design more sophisticated protocols to encourage even more honest behaviour? How could one handle combining communication techniques and imperfect information? Finally, what other mechanisms could deter the breaking of agreements? Building fair, transparent and trustworthy AI systems is an extremely important topic, and it is a key part of DeepMind’s mission. Studying these questions in sandboxes like Diplomacy helps us to better understand tensions between cooperation and competition that might exist in the real world. Ultimately, we believe tackling these challenges allows us to better understand how to develop AI systems in line with society’s values and priorities.\n\n‍\n\nRead our full paper [here](https://www.nature.com/articles/s41467-022-34473-5).", "date_published": "2022-12-06T00:00:00Z", "authors": ["Yoram Bachrach", "János Kramár"], "summaries": []} +{"id": "f1de14e69573f6de62986c9e74eca4fa", "title": "Benchmarking the next generation of never-ending learners", "url": "https://www.deepmind.com/blog/benchmarking-the-next-generation-of-never-ending-learners", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Learning how to build upon knowledge by tapping 30 years of computer vision research\n\nIn just a few years, large-scale deep learning (DL) models have achieved unprecedented success in a variety of domains, from predicting protein structures to natural language processing and vision[1, 2, 3].  Machine learning engineers and researchers have delivered these successes for the most part thanks to powerful new hardware that has enabled their models to scale up and be trained with more data. \n\nScaling up has resulted in fantastic capabilities, but also means that DL models can be resource intensive. For example, when large models are deployed, whatever they have learned on one task is seldom harnessed to facilitate their learning of the next task. What’s more, once new data or more compute become available, large models are typically retrained from scratch – a costly, time-consuming process.  \n\nThis raises the question of whether we could improve the trade-off between the efficiency and performance of these large models, making them faster and more sustainable while also preserving their outstanding capabilities. One answer to this is to encourage the development of models that accrue knowledge over time, and that can therefore better adapt more efficiently to new situations and novel tasks.\n\n#### Introducing NEVIS’22\n\nOur new paper, [NEVIS’22: A Stream of 100 Tasks Sampled From 30 Years of Computer Vision Research](https://arxiv.org/abs/2211.11747), proposes a playground to study the question of efficient knowledge transfer in a controlled and reproducible setting. The *Never-Ending Visual classification Stream* (NEVIS’22) is a benchmark stream in addition to an evaluation protocol, a set of initial baselines, and an open-source codebase. This package provides an opportunity for researchers to explore how models can continually build on their knowledge to learn future tasks more efficiently.\n\nNEVIS’22 is actually composed of 106 tasks extracted from publications randomly sampled from the online proceedings of major computer vision conferences over the past three decades. Each task is a supervised classification task, the best understood approach in machine learning. 
And crucially, the tasks are arranged chronologically, and so, become more challenging and expansive, providing increasing opportunities to transfer knowledge from a growing set of related tasks. The challenge is how to automatically transfer useful knowledge from one task to the next to achieve a better or more efficient performance.\n\nHere are some images derived from datasets referenced in Appendix H of our paper:\n\nNEVIS’22 is reproducible and sufficiently scaled to test state-of-the-art learning algorithms. The stream includes a rich diversity of tasks, from optical character recognition and texture analysis to crowd counting and scene recognition. The task-selection process, being randomly sampled, did not favour any particular approach, but merely reflects what the computer vision community has deemed interesting over time. \n\nNEVIS’22 is not only about data, but also about the methodology used to train and evaluate learning models. We evaluate learners according to their ability to learn future tasks, as measured by their trade-off between error rate and compute (the latter measured by the number of floating-point operations). So, for example, achieving a lower error rate in NEVIS’22 is not sufficient if this comes at an unreasonable computational cost. Instead, we incentivise models to be both accurate and efficient.\n\n#### Initial lessons and open challenges\n\nOur initial experiments show that the models that achieve a better trade-off are those that leverage the structure shared across tasks and employ some form of transfer learning. In particular, clever fine-tuning approaches can be rather competitive, even when combined with large pre-trained models. This latter finding highlights the possibility to further improve upon the general representations of large-scale models, opening up an entirely new avenue of research. We believe that NEVIS’22 presents an exciting new challenge for our community as we strive to develop more efficient and effective never-ending learning models. \n\n‍\n\nDiscover more about NEVIS’22 by reading [our paper](https://arxiv.org/abs/2211.11747) and downloading [our code](http://github.com/deepmind/dm_nevis).", "date_published": "2022-11-22T00:00:00Z", "authors": ["Marc’Aurelio Ranzato", "Amal Rannen-Triki"], "summaries": []} +{"id": "c765fe6e2de04efa4b9d169cf3fcd32f", "title": "How undesired goals can arise with correct rewards", "url": "https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Exploring examples of goal misgeneralisation – where an AI system's capabilities generalise but its goal doesn't\n\nAs we build increasingly advanced artificial intelligence (AI) systems, we want to make sure they don’t pursue undesired goals. Such behaviour in an AI agent is often the result of [specification gaming](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity) – exploiting a poor choice of what they are rewarded for. In our [latest paper](https://arxiv.org/abs/2210.01790), we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: [*goal misgeneralisation*](https://arxiv.org/abs/2105.14111) (GMG). \n\nGMG occurs when a system's *capabilities* generalise successfully but its *goal* does not generalise as desired, so the system competently pursues the wrong goal. 
Crucially, in contrast to specification gaming, GMG can occur even when the AI system is trained with a correct specification.\n\nOur earlier [work on cultural transmission](https://www.deepmind.com/blog/learning-robust-real-time-cultural-transmission-without-human-data) led to an example of GMG behaviour that we didn’t design. An agent (the blue blob, below) must navigate around its environment, visiting the coloured spheres in the correct order. During training, there is an “expert” agent (the red blob) that visits the coloured spheres in the correct order. The agent learns that following the red blob is a rewarding strategy. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/633fee1aaa8967568ee1a5b9_following_expert_with_rewards.gif)The agent (blue) watches the expert (red) to determine which sphere to go to.Unfortunately, while the agent performs well during training, it does poorly when, after training, we replace the expert with an “anti-expert” that visits the spheres in the wrong order. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/633fee50d7c574b38004a005_following_anti_expert_with_rewards.gif)The agent (blue) follows the anti-expert (red), accumulating negative reward.Even though the agent can observe that it is getting negative reward, the agent does not pursue the desired goal to “visit the spheres in the correct order” and instead competently pursues the goal “follow the red agent”.\n\nGMG is not limited to reinforcement learning environments like this one. In fact, it can occur with any learning system, including the “few-shot learning” of large language models (LLMs). Few-shot learning approaches aim to build accurate models with less training data.\n\nWe prompted one LLM, [Gopher](https://arxiv.org/abs/2112.11446), to evaluate linear expressions involving unknown variables and constants, such as x+y-3. To solve these expressions, Gopher must first ask about the values of unknown variables. We provide it with ten training examples, each involving two unknown variables.\n\nAt test time, the model is asked questions with zero, one or three unknown variables. Although the model generalises correctly to expressions with one or three unknown variables, when there are no unknowns, it nevertheless asks redundant questions like “What’s 6?”. The model always queries the user at least once before giving an answer, even when it is not necessary.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/63400170626042521ef973b3_GMG.svg)Dialogues with Gopher for few-shot learning on the Evaluating Expressions task, with GMG behaviour highlighted.Within our paper, we provide additional examples in other learning settings. \n\nAddressing GMG is important to aligning AI systems with their designers' goals simply because it is a mechanism by which an AI system may misfire. This will be especially critical as we approach artificial general intelligence (AGI).\n\nConsider two possible types of AGI systems:\n\n* **A1: Intended model.** This AI system does what its designers intend it to do.\n* **A2: Deceptive model.** This AI system pursues some undesired goal, but (by assumption) is also smart enough to know that it will be penalised if it behaves in ways contrary to its designer's intentions.\n\nSince A1 and A2 will exhibit the same behaviour during training, the possibility of GMG means that either model could take shape, even with a specification that only rewards intended behaviour. 
If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal.\n\nOur research team would be happy to see follow-up work investigating how likely it is for GMG to occur in practice, and possible mitigations. In our paper, we suggest some approaches, including [mechanistic](https://distill.pub/2020/circuits/zoom-in/) [interpretability](https://www.transformer-circuits.pub/2021/framework/index.html) and [recursive](https://arxiv.org/abs/1805.00899) [evaluation](https://arxiv.org/abs/1810.08575), both of which we are actively working on.\n\n‍\n\nWe’re currently collecting examples of GMG in this [publicly available spreadsheet](http://tinyurl.com/goal-misgeneralisation). If you have come across goal misgeneralisation in AI research, we invite you to [submit examples here](https://docs.google.com/forms/d/e/1FAIpQLSdEsL9BuLJAm9wdK8IK8eTTm7tbGFASbJ4AcWCmwvVPFxbl8g/viewform?resourcekey=0-_ADP04VQHl9_Yr0WmoNQtQ).", "date_published": "2022-10-07T00:00:00Z", "authors": ["Rohin Shah", "Victoria Krakovna", "Vikrant Varma", "Zachary Kenton"], "summaries": []} +{"id": "6f5e9d8a56c8284439205180ab385f98", "title": "In conversation with AI: building better language models", "url": "https://www.deepmind.com/blog/in-conversation-with-ai-building-better-language-models", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### New research drawing upon pragmatics and philosophy proposes ways to align conversational agents with human values\n\nLanguage is an essential human trait and the primary means by which we communicate information including thoughts, intentions, and feelings. Recent breakthroughs in AI research have led to the creation of conversational agents that are able to communicate with humans in nuanced ways. These agents are powered by large language models – computational systems trained on vast corpora of text-based materials to predict and produce text using advanced statistical techniques. \n\nYet, while language models such as [InstructGPT](https://arxiv.org/abs/2203.02155), [Gopher](https://arxiv.org/abs/2112.11446), and [LaMDA](https://arxiv.org/abs/2201.08239) have achieved record levels of performance across tasks such as translation, question-answering, and reading comprehension, these models have also been shown to exhibit a number of potential risks and failure modes. These include the production of toxic or discriminatory language and false or misleading information [1, 2, 3].\n\nThese shortcomings limit the productive use of conversational agents in applied settings and draw attention to the way in which they fall short of certain *communicative ideals*. To date, most approaches on the alignment of conversational agents have focused on anticipating and reducing the risks of harms [4]. \n\nOur new paper, [In conversation with AI: aligning language models with human values](https://arxiv.org/abs/2209.00731), adopts a different approach, exploring what successful communication between a human and an artificial conversational agent might look like, and what values should guide these interactions across different conversational domains.\n\n#### Insights from pragmatics\n\nTo address these issues, the paper draws upon pragmatics, a tradition in linguistics and philosophy, which holds that the purpose of a conversation, its context, and a set of related norms, all form an essential part of sound conversational practice. 
\n\nModelling conversation as a cooperative endeavour between two or more parties, the linguist and philosopher, Paul Grice, held that participants ought to: \n\n* Speak informatively\n* Tell the truth\n* Provide relevant information\n* Avoid obscure or ambiguous statements\n\nHowever, our paper demonstrates that further refinement of these maxims is needed before they can be used to evaluate conversational agents, given variation in the goals and values embedded across different conversational domains.\n\n#### Discursive ideals\n\nBy way of illustration, scientific investigation and communication is geared primarily toward understanding or predicting empirical phenomena. Given these goals, a conversational agent designed to assist scientific investigation would ideally only make statements whose veracity is confirmed by sufficient empirical evidence, or otherwise qualify its positions according to relevant confidence intervals. \n\nFor example, an agent reporting that, “At a distance of 4.246 light years, Proxima Centauri is the closest star to earth,” should do so only after the model underlying it has checked that the statement corresponds with the facts.\n\nYet, a conversational agent playing the role of a moderator in public political discourse may need to demonstrate quite different virtues. In this context, the goal is primarily to manage differences and enable productive cooperation in the life of a community. Therefore, the agent will need to foreground the democratic values of toleration, civility, and respect [5]. \n\nMoreover, these values *explain* why the generation of toxic or prejudicial speech by language models is often so problematic: the offending language fails to communicate equal respect for participants to the conversation, something that is a key value for the context in which the models are deployed. At the same time, scientific virtues, such as the comprehensive presentation of empirical data, may be less important in the context of public deliberation.\n\nFinally, in the domain of creative storytelling, communicative exchange aims at novelty and originality, values that again differ significantly from those outlined above. In this context, greater latitude with make-believe may be appropriate, although it remains important to safeguard communities against malicious content produced under the guise of ‘creative uses’.\n\n#### Paths ahead\n\nThis research has a number of practical implications for the development of aligned conversational AI agents. To begin with, they will need to embody different traits depending on the contexts in which they are deployed: there is no one-size-fits-all account of language-model alignment. Instead, the appropriate mode and evaluative standards for an agent – including standards of truthfulness – will vary according to the context and purpose of a conversational exchange. \n\nAdditionally, conversational agents may also have the potential to cultivate more robust and respectful conversations over time, via a process that we refer to as *context construction and elucidation*. 
Even when a person is not aware of the values that govern a given conversational practice, the agent may still help the human understand these values by prefiguring them in conversation, making the course of communication deeper and more fruitful for the human speaker.", "date_published": "2022-09-06T00:00:00Z", "authors": ["Atoosa Kasirzadeh and Iason Gabriel"], "summaries": []} +{"id": "c15a01f7fde216bb45687066a2163efb", "title": "From motor control to embodied intelligence", "url": "https://www.deepmind.com/blog/from-motor-control-to-embodied-intelligence", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Using human and animal motions to teach robots to dribble a ball, and simulated humanoid characters to carry boxes and play football\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f66e86725db418b6d33f0_Football%20blog%201.gif)Humanoid character learning to traverse an obstacle course through trial-and-error, which can lead to idiosyncratic solutions. Heess, et al. \"Emergence of locomotion behaviours in rich environments\" (2017).Five years ago, we took on the challenge of teaching a fully articulated humanoid character to [traverse obstacle courses](https://youtu.be/hx_bgoTF7bs?t=88). This demonstrated what reinforcement learning (RL) can achieve through trial-and-error but also highlighted two challenges in solving *embodied* intelligence:\n\n1. **Reusing previously learned behaviours:** A significant amount of data was needed for the agent to “get off the ground”. Without any initial knowledge of what force to apply to each of its joints, the agent started with random body twitching and quickly falling to the ground. This problem could be alleviated by reusing previously learned behaviours.\n2. **Idiosyncratic behaviours:** When the agent finally learned to navigate obstacle courses, it did so with unnatural ([albeit amusing](https://www.youtube.com/watch?v=EI3gcbDUNiM&t=258s)) movement patterns that would be impractical for applications such as robotics.\n\nHere, we describe a solution to both challenges called neural probabilistic motor primitives (NPMP), involving guided learning with movement patterns derived from humans and animals, and discuss how this approach is used in our [Humanoid Football paper,](https://www.science.org/doi/10.1126/scirobotics.abo0235) published today in Science Robotics. \n\nWe also discuss how this same approach enables humanoid full-body manipulation from vision, such as a humanoid carrying an object, and robotic control in the real-world, such as a robot dribbling a ball.\n\n#### Distilling data into controllable motor primitives using NPMP\n\nAn NPMP is a general-purpose motor control module that translates short-horizon motor intentions to low-level control signals, and it’s [trained offline](https://openreview.net/forum?id=BJl6TjRcY7) or [via RL](https://proceedings.mlr.press/v119/hasenclever20a.html) by imitating motion capture (MoCap) data, recorded with trackers on humans or animals performing motions of interest.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f68789b57e07b40fcd865_Football%20blog%202.gif)An agent learning to imitate a MoCap trajectory (shown in grey).**The model has two parts:**\n\n1. An encoder that takes a future trajectory and compresses it into a motor intention.\n2. 
A low-level controller that produces the next action given the current state of the agent and this motor intention.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6308d5beee8c0122e2b3042b_ZBVmzNWzSrt9fWJO-quV-2Xs-1kpxlgVSwnWY6ePLO6w67fcDXX69Ws_QxvGU93ZaoV3hDQ1_2_8_OxeqbTDypOcubppSQxM17A7bSBY_UWoNv0IEmkchwQjzE3SIBZiydS43KnnN6A4sBFzsvzZWVq2cXme0hl2K4airDd2rXV_5_HflAug2aG4wd8.png)Our NPMP model first distils reference data into a low-level controller (left). This low-level controller can then be used as a plug-and-play motor control module on a new task (right).After training, the low-level controller can be reused to learn new tasks, where a high-level controller is optimised to output motor intentions directly. This enables efficient exploration – since coherent behaviours are produced, even with randomly sampled motor intentions – and constrains the final solution.\n\n#### Emergent team coordination in humanoid football\n\nFootball has been [a long-standing challenge](https://link.springer.com/chapter/10.1007/3-540-64473-3_46) for embodied intelligence research, requiring individual skills and coordinated team play. In our latest work, we used an NPMP as a prior to guide the learning of movement skills. \n\nThe result was a team of players which progressed from learning ball-chasing skills, to finally learning to coordinate. Previously, in a [study with simple embodiments](https://openreview.net/forum?id=BkG8sjR5Km), we had shown that coordinated behaviour can emerge in teams competing with each other. The NPMP allowed us to observe a similar effect but in a scenario that required significantly more advanced motor control. \n \n\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6d699c09633d3acf48f2_football%20blog%203.gif)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6d7c7e2a3064bd727f40_football%20blog%204.gif)Agents first mimic the movement of football players to learn an NPMP module (top). Using the NPMP, the agents then learn football-specific skills (bottom). Our agents acquired skills including agile locomotion, passing, and division of labour as demonstrated by a range of statistics, including metrics used in [real-world sports analytics](https://www.researchgate.net/profile/William-Spearman/publication/327139841_Beyond_Expected_Goals/links/5b7c3023a6fdcc5f8b5932f7/Beyond-Expected-Goals.pdf). The players exhibit both agile high-frequency motor control and long-term decision-making that involves anticipation of teammates’ behaviours, leading to coordinated team play.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6e29ba1f6d269aff7c2d_football%20blog%205.gif)An agent learning to play football competitively using multi-agent RL.#### ‍Whole-body manipulation and cognitive tasks using vision\n\nLearning to interact with objects using the arms is another difficult control challenge. The NPMP can also enable this type of whole-body manipulation. 
With a small amount of MoCap data of interacting with boxes, we’re able to [train an agent to carry a box](https://www.youtube.com/watch?v=2rQAW-8gQQk) from one location to another, using egocentric vision and with only a sparse reward signal:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6f62cacfa971dd2e681e_Football%20blog%206.gif)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6f73a8d8fad56ff930ae_Football%20blog%207.gif)With a small amount of MoCap data (top), our NPMP approach can solve a box carrying task (bottom).Similarly, we can teach the agent to catch and throw balls:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f6fac40b994092e8e6e81_Football%20blog%208.gif)Simulated humanoid catching and throwing a ball.Using NPMP, we can also tackle [maze tasks involving locomotion, perception and memory](https://openreview.net/forum?id=BJfYvo09Y7):\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f70036725db20fe779940_Football%20blog%209.gif)Simulated humanoid collecting blue spheres in a maze.#### Safe and efficient control of real-world robots\n\nThe NPMP can also help to control real robots. Having well-regularised behaviour is critical for activities like walking over rough terrain or handling fragile objects. Jittery motions can damage the robot itself or its surroundings, or at least drain its battery. Therefore, significant effort is often invested into designing learning objectives that make a robot do what we want it to while behaving in a safe and efficient manner.\n\nAs an alternative, we investigated whether using [priors derived from biological motion](https://arxiv.org/abs/2203.17138) can give us well-regularised, natural-looking, and reusable movement skills for legged robots, such as walking, running, and turning that are suitable for deploying on real-world robots. \n\nStarting with MoCap data from humans and dogs, we adapted the NPMP approach to train skills and controllers in simulation that can then be deployed on real humanoid (OP3) and quadruped (ANYmal B) robots, respectively. This allowed the robots to be steered around by a user via a joystick or dribble a ball to a target location in a natural-looking and robust way.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f73af3cbb9ede40ae9f98_Football%20blog%2010.gif)Locomotion skills for the ANYmal robot are learned by imitating dog MoCap.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f73ed0b5f123a00699490_Football%20blog%2011.gif)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/630f741e73d0d9437e370ec8_Football%20blog%2012.gif)Locomotion skills can then be reused for controllable walking and ball dribbling.#### Benefits of using neural probabilistic motor primitives\n\nIn summary, we’ve used the NPMP skill model to learn complex tasks with humanoid characters in simulation and real-world robots. The NPMP packages low-level movement skills in a reusable fashion, making it easier to learn useful behaviours that would be difficult to discover by unstructured trial and error. 
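The plug-and-play structure can be summarised in a short sketch. The two methods below mirror the encoder and low-level controller described earlier; the dimensions and the random stand-in weights are placeholders, not the trained networks.

```python
import numpy as np

class NPMPInterface:
    """Sketch of the two-part NPMP interface (shapes and the random 'weights'
    are placeholders, not the trained models)."""

    def __init__(self, intention_dim=60, state_dim=100, action_dim=56, seed=0):
        rng = np.random.default_rng(seed)
        self.encoder_w = rng.normal(size=(state_dim * 5, intention_dim)) * 0.01
        self.controller_w = rng.normal(size=(state_dim + intention_dim, action_dim)) * 0.01

    def encode(self, future_trajectory):
        """Training time: compress a short snippet of future reference states
        into a motor intention."""
        return np.tanh(future_trajectory.reshape(-1) @ self.encoder_w)

    def act(self, state, motor_intention):
        """Reuse time: the frozen low-level controller turns (state, intention)
        into a low-level action."""
        return np.tanh(np.concatenate([state, motor_intention]) @ self.controller_w)

npmp = NPMPInterface()
intention = npmp.encode(np.zeros((5, 100)))  # 5 future frames of a reference clip
action = npmp.act(np.zeros(100), intention)  # plug-and-play on a new task
```

A new task then only needs a high-level policy that outputs motor intentions, which is what makes exploration efficient and the resulting behaviour naturalistic.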
Using motion capture as a source of prior information, it biases learning of motor control toward that of naturalistic movements.\n\nThe NPMP enables embodied agents to learn more quickly using RL; to learn more naturalistic behaviours; to learn more safe, efficient and stable behaviours suitable for real-world robotics; and to combine full-body motor control with longer horizon cognitive skills, such as teamwork and coordination.\n\nLearn more about our work**:** \n\n* See selected [research references](https://storage.googleapis.com/deepmind-media/From%20motor%20control%20to%20embodied%20intelligence/From%20motor%20control%20to%20embodied%20intelligence%20-%20selected%20references.pdf).\n* Read [our paper](https://www.science.org/doi/10.1126/scirobotics.abo0235) on Humanoid Football in Science Robotics or watch the [summary video](https://www.youtube.com/watch?v=tWnGTtbOK7I).\n* Read [our paper](https://dl.acm.org/doi/abs/10.1145/3386569.3392474) on humanoid whole-body control or watch the [summary video](https://www.youtube.com/watch?v=2rQAW-8gQQk).\n* Read [our paper](https://arxiv.org/abs/2203.17138) on control of real-world robots or watch the [summary video](https://youtu.be/K77HS6uO5F8).", "date_published": "2022-08-31T00:00:00Z", "authors": ["Siqi Liu", "Leonard Hasenclever", "Steven Bohez", "Guy Lever", "Zhe Wang", "S. M. Ali Eslami", "Nicolas Heess"], "summaries": []} +{"id": "cf1f4c58c4cad079a953f3c04e051a47", "title": "Discovering when an agent is present in a system", "url": "https://www.deepmind.com/blog/discovering-when-an-agent-is-present-in-a-system", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### New, formal definition of agency gives clear principles for causal modelling of AI agents and the incentives they face\n\nWe want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of its designers. [Causal influence diagrams](https://deepmindsafetyresearch.medium.com/progress-on-causal-influence-diagrams-a7a32180b0d1#b09d) (CIDs) are a way to model decision-making situations that allow us to reason about [agent incentives](https://ojs.aaai.org/index.php/AAAI/article/view/17368). For example, here is a CID for a 1-step Markov decision process – a typical framework for decision-making problems.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe436b27ff91584631343b_X2e5np_pjgn4n_V6ncNi4z22-oe0Ti9QTXiwgISZRqdNbgk_9EYTiMYhf799hpk6_-BhqP7MRebeIPo9GdAfZF9ntAJOy6pQnrTvSx2e72qVLzvpbldHZ51SShyWbBDopG5VEfKIV8sjBmcflHmD1OxwqyHMCTvyRg_xzMAYujIF5qIiU8yvX-9brd0.png)S₁ represents the initial state, A₁ represents the agent’s decision (square), S₂ the next state. R₂ is the agent’s reward/utility (diamond). Solid links specify causal influence. Dashed edges specify information links – what the agent knows when making its decision.By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. 
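For concreteness, one plausible way to encode the diagram above as plain data is sketched below – our reading of the figure, not a library API: chance, decision, and utility nodes, plus separate sets of causal and information edges.

```python
from dataclasses import dataclass

@dataclass
class CID:
    """A causal influence diagram as plain data (illustrative encoding only)."""
    chance_nodes: set
    decision_nodes: set
    utility_nodes: set
    causal_edges: set       # solid links: causal influence
    information_edges: set  # dashed links: what the agent observes when deciding

one_step_mdp = CID(
    chance_nodes={"S1", "S2"},
    decision_nodes={"A1"},
    utility_nodes={"R2"},
    causal_edges={("S1", "S2"), ("A1", "S2"), ("S2", "R2")},
    information_edges={("S1", "A1")},  # the agent sees S1 before choosing A1
)
```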
But how do we know when a CID is an accurate model of a training setup?\n\nOur new paper, [Discovering Agents](https://arxiv.org/abs/2208.08345), introduces new ways of tackling these issues, including:\n\n* The first formal causal definition of agents: **Agents are systems that would adapt their policy if their actions influenced the world in a different way**\n* An algorithm for discovering agents from empirical data\n* A translation between causal models and CIDs\n* Resolving earlier confusions from incorrect causal modelling of agents\n\nCombined, these results provide an extra layer of assurance that a modelling mistake hasn’t been made, which means that CIDs can be used to analyse an agent’s incentives and safety properties with greater confidence. \n\n#### Example: modelling a mouse as an agent\n\nTo help illustrate our method, consider the following example consisting of a world containing three squares, with a mouse starting in the middle square choosing to go left or right, getting to its next position and then potentially getting some cheese. The floor is icy, so the mouse might slip. Sometimes the cheese is on the right, but sometimes on the left.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe441f32f36417655c9939_jemVH-R6BzKddKLolAqbSQGMBNSu_AqisubMIZik-bVf2cAwEL7LjwjFgvZtzgWznB44hn9I2dhrhyvUZEifNPbEFKZJz1_THPN4LYLBqyBfkWFg6XyrkltUBq9igEc5AOkq_1C7lodX3ut0zfBo6Tylwg3pUBeeRH1cTCdreYw8-cExQjNGUKQSEiM.png)The mouse and cheese environment.This can be represented by the following CID:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe446a823ed01e339817c8_lPoffxS2-D8Iar82PG8gHvVQxzmMp2wclGkNzSVU9G5FLerZB_XBTMwBuYmtvTTKQnsOD--USN64R--UsA-qljQ6LRSZhD8RJ49v3Cqn-7dBF8eAj7l1q-ETEWCAT3TYRzP3f910gx-iTiPiwFOU5JLh9fhB55Y1zlHljQBJuve4HAi54ZdWyF-p2W0.png)CID for the mouse. D represents the decision of left/right. X is the mouse’s new position after taking the action left/right (it might slip, ending up on the other side by accident). U represents whether the mouse gets cheese or not.The intuition that the mouse would choose a different behaviour for different environment settings (iciness, cheese distribution) can be captured by a [mechanised causal graph](https://drive.google.com/file/d/1_OBLw9u29FrqROsLfhO6rIaWGK4xJ3il/view),which for each (object-level) variable, also includes a mechanism variable that governs how the variable depends on its parents. Crucially, we allow for links between mechanism variables.\n\nThis graph contains additional mechanism nodes in black, representing the mouse's policy and the iciness and cheese distribution. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe44a2ffe0d874e37e19d8_kZ49vhfFhT9VnQdnZRisin1mjf1e47B8WRNJ74rwA_OTCKQNaoFs_6pbpUfhjmIag3qKokRdUpR00Gtm2Plx4VaYn6_E4nSMLBHlxeYLl2H3uykWsUVJImQWVVs4PpfX94LTiG4UjkeXdiayqqu1RuV2_LzN5byzh4W7V5iokJkFMo7x0tJjQmiO3js.png)Mechanised causal graph for the mouse and cheese environment.Edges between mechanisms represent direct causal influence. The blue edges are special *terminal* edges – roughly, mechanism edges A~ → B~ that would still be there, even if the object-level variable A was altered so that it had no outgoing edges. \n\nIn the example above, since U has no children, its mechanism edge must be terminal. 
But the mechanism edge X~ → D~ is not terminal, because if we cut X off from its child U, then the mouse will no longer adapt its decision (because its position won’t affect whether it gets the cheese).\n\n#### Causal discovery of agents\n\nCausal discovery infers a causal graph from experiments involving interventions. In particular, one can discover an arrow from a variable A to a variable B by experimentally intervening on A and checking if B responds, even if all other variables are held fixed.\n\nOur first algorithm uses this technique to discover the mechanised causal graph:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe4810cc7c88f8211f6acc_fig.png)Algorithm 1 takes as input interventional data from the system (mouse and cheese environment) and uses causal discovery to output a mechanised causal graph. See paper for details.Our second algorithm transforms this mechanised causal graph to a game graph:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe48efd915cb0f1d44bd90_alg%202.png)Algorithm 2 takes as input a mechanised causal graph and maps it to a game graph. An ingoing terminal edge indicates a decision, an outgoing one indicates a utility.Taken together, Algorithm 1 followed by Algorithm 2 allows us to discover agents from causal experiments, representing them using CIDs.\n\nOur third algorithm transforms the game graph into a mechanised causal graph, allowing us to translate between the game and mechanised causal graph representations under some additional assumptions: \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62fe494332f364906f6104dd_alg%203.png)Algorithm 3 takes as input a game graph and maps it to a mechanised causal graph. A decision indicates an ingoing terminal edge, a utility indicates an outgoing terminal edge.#### Better safety tools to model AI agents\n\nWe proposed the first formal causal definition of agents. Grounded in causal discovery, our key insight is that agents are systems that adapt their behaviour in response to changes in how their actions influence the world. Indeed, our Algorithms 1 and 2 describe a precise experimental process that can help assess whether a system contains an agent. \n\nInterest in causal modelling of AI systems is rapidly growing, and our research grounds this modelling in causal discovery experiments. Our paper demonstrates the potential of our approach by improving the safety analysis of several example AI systems and shows that causality is a useful framework for discovering whether there is an agent  in a system – a key concern for assessing risks from AGI.\n\n‍\n\nExcited to learn more? Check out our [paper](https://arxiv.org/abs/2208.08345). Feedback and comments are most welcome.", "date_published": "2022-08-18T00:00:00Z", "authors": ["Zachary Kenton", "Ramana Kumar", "Sebastian Farquhar", "Jonathan Richens", "Matt MacDermott", "Tom Everitt"], "summaries": []} +{"id": "ed3be1a9c68e00034730b0fa1f64b284", "title": "Perceiver AR: general-purpose, long-context autoregressive generation", "url": "https://www.deepmind.com/blog/perceiver-ar-general-purpose-long-context-autoregressive-generation", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Over the last few years, autoregressive Transformers have brought a steady stream of breakthroughs in generative modeling. 
These models generate each element of a sample – the pixels of an image, the characters of a text (typically in “token” chunks), the samples of an audio waveform, and so on – by predicting one element after the other. When predicting the next element, the model can look back at those that were created earlier.\n\nHowever, each of a Transformer’s layers grows more expensive as more elements are used as input, and practitioners can only afford to train deep Transformers on sequences of no more than about 2,048 elements in length. And so, most Transformer-based models ignore all elements beyond the most recent past (around 1,500 words or 1/6 of a small image) when making a prediction.\n\nIn contrast, our recently developed [Perceiver models](https://www.deepmind.com/blog/building-architectures-that-can-handle-the-worlds-data) give excellent results on a variety of real-world tasks with up to around 100,000 elements. Perceivers use cross-attention to encode inputs into a latent space, decoupling the input’s compute requirements from model depth. Perceivers also spend a fixed cost, regardless of input size, at nearly every layer. \n\nWhile latent-space encoding handles all elements in a single pass, autoregressive generation assumes processing happens one element at a time. To address this problem, Perceiver AR proposes a simple solution: align the latents one by one with the final elements of the input, and carefully mask the input so latents see only earlier elements. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62ab1c907ca725f44a70e624_fig1.png)Perceiver AR maps an input sequence (P e r c e i v e r A R) to a small latent space by cross-attention to produce one latent for each target token (3 latents shown, one each for the targets A, R, and the **E**nd **O**f **S**equence token). These latents are then processed by a deep stack of self-attention layers. Perceiver AR can be trained for end-to-end autoregressive generation, all while making use of very long input sequences.\n\nThe result is an architecture (shown above) that attends to inputs as much as 50x longer than those of standard Transformers, while deploying as widely (and essentially as easily) as standard decoder-only Transformers.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62ab1cc83914dcff3ba54652_fig2.png)As context length or model size increases, the amount of compute needed to train a model grows. We can quantify the compute budget for different models by measuring their speed on real hardware (steps per second on TPUv3), as the input context length and model size increase. Unlike other generative models like Transformer or Transformer-XL, Perceiver AR decouples input context length from model depth, allowing us to easily deploy the deep models needed to model long sequences on current-generation TPUs or GPUs.\n\nPerceiver AR scales considerably better with size than both standard Transformers and Transformer-XL models at a range of sequence lengths in real terms. This property allows us to build very effective long-context models. For example, we find that a 60-layer Perceiver AR with context length 8192 outperforms a 42-layer Transformer-XL on a book-length generation task, while running faster in real wall-clock terms.\n\nOn standard, long-context image (ImageNet 64x64), language (PG-19), and music (MAESTRO) generation benchmarks, Perceiver AR produces state-of-the-art results. 
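To make the masking idea concrete, here is a small NumPy sketch (ours, not the released Perceiver AR code) of the cross-attention mask: each latent is aligned with one of the final target positions and, in this sketch, attends only to input elements strictly before that position, matching the idea that latents should see only earlier elements.

```python
import numpy as np

def perceiver_ar_cross_attention_mask(seq_len: int, num_latents: int) -> np.ndarray:
    """Boolean mask of shape [num_latents, seq_len].

    Latent i is aligned with target position seq_len - num_latents + i and may
    attend only to input elements strictly before that position, so each latent
    can be trained to predict its own target autoregressively.
    """
    target_positions = np.arange(seq_len - num_latents, seq_len)  # one per latent
    input_positions = np.arange(seq_len)
    return input_positions[None, :] < target_positions[:, None]

# Toy example loosely mirroring the figure: an 11-element input with 3 latents.
print(perceiver_ar_cross_attention_mask(seq_len=11, num_latents=3).astype(int))
# Each successive latent sees one more element than the previous one, while every
# latent still attends over the full long prefix – the long input is handled once
# by cross-attention, and only the small set of latents goes through the deep stack.
```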
Increasing input context by decoupling input size from compute budget leads to several intriguing results: \n\n* Compute budget can be adapted at eval time, allowing us to spend less and smoothly degrade quality or to spend more for improved generation.\n* A larger context allows Perceiver AR to outperform Transformer-XL, even when spending the same on compute. We find that greater context leads to improved model performance even at affordable scale (~1B parameters).\n* Perceiver AR’s sample quality exhibits much less sensitivity to the order in which it generates elements. This makes Perceiver AR easy to apply to settings that don’t have a natural left-to-right ordering, such as data like images, with structure that spans more than one dimension.\n\nUsing a dataset of piano music, we trained Perceiver AR to generate new pieces of music from scratch. Because each new note is predicted based on the full sequence of notes that came before, Perceiver AR is able to produce pieces with a high level of melodic, harmonic, and rhythmic coherence:\n\n‍\n\nLearn more about using Perceiver AR:\n\n* Download the JAX code for training Perceiver AR [on Github](https://github.com/google-research/perceiver-ar)\n* Read our paper on [arXiv](https://arxiv.org/abs/2202.07765)\n* Check out our spotlight presentation at [ICML 2022](https://icml.cc/virtual/2022/spotlight/17886)\n\nSee the Google Magenta [blog post](https://magenta.tensorflow.org/perceiver-ar) with more music!", "date_published": "2022-07-16T00:00:00Z", "authors": ["Curtis Hawthorne", "Andrew Jaegle", "Cătălina Cangea", "Sebastian Borgeaud", "Charlie Nash", "Mateusz Malinowski", "Sander Dieleman", "Oriol Vinyals", "Matthew Botvinick", "Ian Simon", "Hannah Sheahan", "Neil Zeghidour", "Jean-Baptiste Alayrac", "João Carreira", "Jesse Engel"], "summaries": []} +{"id": "9a3f8fcc3f86e1ee2787c7c1cf36c5f9", "title": "Intuitive physics learning in a deep-learning model inspired by developmental psychology", "url": "https://www.deepmind.com/blog/intuitive-physics-learning-in-a-deep-learning-model-inspired-by-developmental-psychology", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Understanding the physical world is a critical skill that most people deploy effortlessly. However, this still poses a challenge to artificial intelligence; if we’re to deploy safe and helpful systems in the real world, we want these models to share our intuitive sense of physics. But before we can build those models, there is another challenge: How will we measure the ability of these models to understand the physical world? That is, what does it mean to understand the physical world and how can we quantify it?\n\nLuckily for us, developmental psychologists have spent decades studying what infants know about the physical world. Along the way, they've carved the nebulous notion of physical knowledge into a concrete set of physical concepts. And, they've developed the violation-of-expectation (VoE) paradigm for testing those concepts in infants.\n\nIn our paper published today in Nature Human Behavior, we extended their work and open-sourced the [Physical Concepts dataset](https://github.com/deepmind/physical_concepts). This synthetic video dataset ports the VoE paradigm to assess five physical concepts: solidity, object persistence, continuity, “unchangeableness'', and directional inertia.\n\nWith a benchmark for physical knowledge in hand, we turned to the task of building a model capable of learning about the physical world. 
Again, we looked to developmental psychologists for inspiration. Researchers not only catalogued what infants know about the physical world, they also posited the mechanisms that could enable this behaviour. Despite variability, these accounts have a central role for the notion of breaking up the physical world into a set of *objects* which evolve through time.\n\nInspired by this work, we built a system that we nickname PLATO (Physics Learning through Auto-encoding and Tracking Objects). PLATO represents and reasons about the world as a set of objects. It makes predictions about where objects will be in the future based on where they've been in the past and what other objects they're interacting with.\n\nAfter training PLATO on videos of simple physical interactions, we found that PLATO passed the tests in our Physical Concepts dataset. Furthermore, we trained \"flat\" models that were as big (or even bigger) than PLATO but did not use object-based representations. When we tested those models, we found they didn't pass all of our tests. This suggests that objects are helpful for learning intuitive physics, supporting hypotheses from the developmental literature.\n\nWe also wanted to determine how much experience was needed to develop this capacity. Evidence for physical knowledge has been shown in infants as young as two and a half months of age. How does PLATO fare in comparison? By varying the amount of training data used by PLATO, we found that PLATO could learn our physical concepts with as little as 28 hours of visual experience. The limited and synthetic nature of our dataset means we cannot make a like-for-like comparison between the amount of visual experiences received by infants and PLATO. However, this result suggests that intuitive physics can be learned with relatively little experience if supported via an inductive bias for representing the world as objects.\n\nFinally, we wanted to test PLATO's ability to generalise. In the Physical Concepts dataset, all of the objects in our test set are also present in the training set. What if we tested PLATO with objects it had never seen before? To do this, we leveraged a subset of another synthetic [dataset](http://physadept.csail.mit.edu/) developed by researchers at MIT. This dataset also probes physical knowledge, albeit with different visual appearances and a set of objects that PLATO has never seen before. PLATO passed, without any re-training, despite being tested on entirely new stimuli.\n\nWe hope this dataset can provide researchers with a more specific understanding of their model’s abilities to understand the physical world. 
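The violation-of-expectation readout used in these tests can be summarised in a few lines. The sketch below (ours, with made-up numbers standing in for a model's per-frame prediction errors) scores a matched probe pair the way VoE studies do: the model "passes" if it is more surprised by the physically impossible video than by the possible one.

```python
import numpy as np

def surprise(per_frame_errors: np.ndarray) -> float:
    """Aggregate per-frame prediction error into a single surprise score."""
    return float(np.mean(per_frame_errors))

def voe_score(errors_possible: np.ndarray, errors_impossible: np.ndarray) -> dict:
    """Compare surprise on a matched pair of probe videos.

    A model consistent with the physical concept under test should be more
    surprised by the impossible video than by the possible one.
    """
    s_possible = surprise(errors_possible)
    s_impossible = surprise(errors_impossible)
    return {
        "surprise_possible": s_possible,
        "surprise_impossible": s_impossible,
        "relative_surprise": s_impossible - s_possible,
        "passes": s_impossible > s_possible,
    }

# Made-up per-frame errors for one probe pair (e.g. an object-persistence probe).
errors_possible = np.array([0.10, 0.12, 0.11, 0.09])
errors_impossible = np.array([0.10, 0.12, 0.35, 0.40])  # error spikes at the impossible event
print(voe_score(errors_possible, errors_impossible))
```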
In the future, this can be expanded to test more aspects of intuitive physics by increasing the list of physical concepts tested, and using richer visual stimuli including new object shapes or even real-world videos.", "date_published": "2022-07-11T00:00:00Z", "authors": ["Luis Piloto", "Ari Weinstein", "Peter Battaglia", "Matt Botvinick"], "summaries": []} +{"id": "01ee879e17250121ff0de200069b9434", "title": "Human-centred mechanism design with Democratic AI", "url": "https://www.deepmind.com/blog/human-centred-mechanism-design-with-democratic-ai", "source": "deepmind_technical_blog", "source_type": "blog", "text": "**In our recent** [**paper**](https://www.nature.com/articles/s41562-022-01383-x)**, published in Nature Human Behaviour, we provide a proof-of-concept demonstration that deep reinforcement learning (RL) can be used to find economic policies that people will vote for by majority in a simple game. The paper thus addresses a key challenge in AI research - how to train AI systems that align with human values.**\n\nImagine that a group of people decide to pool funds to make an investment. The investment pays off, and a profit is made. How should the proceeds be distributed? One simple strategy is to split the return equally among investors. But that might be unfair, because some people contributed more than others. Alternatively, we could pay everyone back in proportion to the size of their initial investment. That sounds fair, but what if people had different levels of assets to begin with? If two people contribute the same amount, but one is giving a fraction of their available funds, and the other is giving them all, should they receive the same share of the proceeds? \n\nThis question of how to redistribute resources in our economies and societies has long generated controversy among philosophers, economists and political scientists. Here, we use deep RL as a testbed to explore ways to address this problem.\n\nTo tackle this challenge, we created a simple game that involved four players. Each instance of the game was played over 10 rounds. On every round, each player was allocated funds, with the size of the endowment varying between players. Each player made a choice: they could keep those funds for themselves or invest them in a common pool. Invested funds were guaranteed to grow, but there was a risk, because players did not know how the proceeds would be shared out. Instead, they were told that for the first 10 rounds there was one referee (A) who was making the redistribution decisions, and for the second 10 rounds a different referee (B) took over. At the end of the game, they voted for either A or B, and played another game with this referee. Human players of the game were allowed to keep the proceeds of this final game, so they were incentivised to report their preference accurately.\n\nIn reality, one of the referees was a pre-defined redistribution policy, and the other was designed by our deep RL agent. To train the agent, we first recorded data from a large number of human groups and taught a neural network to copy how people played the game. This simulated population could generate limitless data, allowing us to use data-intensive machine learning methods to train the RL agent to maximise the votes of these “virtual” players. Having done so, we then recruited new human players, and pitted the AI-designed mechanism head-to-head against well-known baselines, such as a *libertarian* policy that returns funds to people in proportion to their contributions. 
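For concreteness, here is a toy sketch (ours, not the study's code) of a single round of the game under the two baseline redistribution rules described above – an equal split and a strictly proportional, "libertarian" split; the endowments and growth factor are made-up numbers.

```python
import numpy as np

def play_round(contributions, growth_factor, redistribute):
    """One round: pooled contributions grow, then a referee shares out the proceeds."""
    pool = np.sum(contributions) * growth_factor
    return redistribute(pool, contributions)

def equal_split(pool, contributions):
    """Strict egalitarian baseline: everyone gets the same share."""
    return np.full(len(contributions), pool / len(contributions))

def proportional_split(pool, contributions):
    """'Libertarian' baseline: pay out in proportion to absolute contribution."""
    return pool * contributions / np.sum(contributions)

# Four players with unequal endowments who each contribute half of what they have.
endowments = np.array([2.0, 4.0, 6.0, 10.0])
contributions = endowments / 2
for name, rule in [("equal", equal_split), ("proportional", proportional_split)]:
    payout = play_round(contributions, growth_factor=1.6, redistribute=rule)
    kept = endowments - contributions
    print(name, np.round(kept + payout, 2))  # each player's final holdings under this referee
```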
\n\nWhen we studied the votes of these new players, we found that the policy designed by deep RL was more popular than the baselines. In fact, when we ran a new experiment asking a fifth human player to take on the role of referee, and trained them to try and maximise votes, the policy implemented by this “human referee” was still less popular than that of our agent.\n\nAI systems have been sometimes criticised for learning policies that may be incompatible with human values, and this problem of “value alignment” has become a major concern in AI research. One merit of our approach is that the AI learns directly to maximise the stated preferences (or votes) of a group of people. This approach may help ensure that AI systems are less likely to learn policies that are unsafe or unfair. In fact, when we analysed the policy that the AI had discovered, it incorporated a mixture of ideas that have previously been proposed by human thinkers and experts to solve the redistribution problem. \n\nFirstly, the AI chose to redistribute funds to people in proportion to their *relative* rather than *absolute* contribution. This means that when redistributing funds, the agent accounted for each player’s initial means, as well as their willingness to contribute. Secondly, the AI system especially rewarded players whose relative contribution was more generous, perhaps encouraging others to do likewise. Importantly, the AI only discovered these policies by learning to maximise human votes. The method therefore ensures that humans remain “in the loop” and the AI produces human-compatible solutions.  \n\nBy asking people to vote, we harnessed the principle of majoritarian democracy for deciding what people want. Despite its wide appeal, it is widely acknowledged that democracy comes with the caveat that the preferences of the majority are accounted for over those of the minority. In our study, we ensured that – like in most societies – that minority consisted of more generously endowed players. But more work is needed to understand how to trade off the relative preferences of majority and minority groups, by designing democratic systems that allow all voices to be heard.", "date_published": "2022-07-04T00:00:00Z", "authors": ["Raphael Koster", "Jan Balaguer", "Andrea Tacchetti", "Ari Weinstein", "Tina Zhu", "Oliver Hauser* (University of Exeter)", "Duncan Williams", "Lucy Campbell-Gillingham", "Phoebe Thacker", "Matthew Botvinick", "Christopher Summerfield"], "summaries": []} +{"id": "1bad04c018ca21ae16b5b7858c5e28cd", "title": "BYOL-Explore: Exploration with Bootstrapped Prediction", "url": "https://www.deepmind.com/blog/byol-explore-exploration-with-bootstrapped-prediction", "source": "deepmind_technical_blog", "source_type": "blog", "text": "![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62b068d95af4f802873938ec_throw_across_first_1.gif)Second-person and top-down views of a BYOL-Explore agent solving Thow-Across level of [DM-HARD-8](https://arxiv.org/abs/1909.01387), whereas pure RL and other baseline exploration methods fail to make any progress on Thow-Across.Curiosity-driven exploration is the active process of seeking new information to enhance the agent’s understanding of its environment. Suppose that the agent has learned a model of the world that can predict future events given the history of past events. The curiosity-driven agent can then use the prediction mismatch of the world model as the intrinsic reward for directing its exploration policy towards seeking new information. 
As follows, the agent can then use this new information to enhance the world model itself so it can make better predictions.  This iterative process can allow the agent to eventually explore every novelty  in the world and use this information to build an accurate world model.\n\nInspired by the successes of [bootstrap your own latent](https://arxiv.org/abs/2006.07733) (BYOL) – which has been applied in [computer vision](https://arxiv.org/abs/2103.16559), [graph representation learning](https://arxiv.org/abs/2102.06514), and [representation learning in RL](https://arxiv.org/abs/2007.05929) – we propose BYOL-Explore: a conceptually simple yet general, curiosity-driven AI agent for solving hard-exploration tasks. BYOL-Explore learns a representation of the world by predicting its own future representation. Then, it uses the prediction-error at the representation level as an intrinsic reward to train a curiosity-driven policy. Therefore, BYOL-Explore learns a world representation, the world dynamics, and a curiosity-driven exploration policy all-together, simply by optimising the prediction error at the representation level.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62b0511e11e44e60dc8b0b00_BYOL-Explore_Schematic_v3.png)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62b06901690bcc5c64c64fa7_fp_hard_8_1.gif)Comparison between BYOL-Explore, [Random Network Distillation](https://arxiv.org/abs/1810.12894) (RND), [Intrinsic Curiosity Module](https://arxiv.org/abs/1705.05363) (ICM) and pure RL (no intrinsic reward), in terms of mean capped human-normalised score (CHNS).Despite the simplicity of its design, when applied to the [DM-HARD-8](https://arxiv.org/abs/1909.01387) suite of challenging 3-D, visually complex, and hard exploration tasks, BYOL-Explore outperforms standard curiosity-driven exploration methods such as [Random Network Distillation](https://arxiv.org/abs/1810.12894) (RND) and [Intrinsic Curiosity Module](https://arxiv.org/abs/1705.05363) (ICM), in terms of mean capped human-normalised score (CHNS), measured across all tasks. Remarkably, BYOL-Explore achieved this performance using only a single network concurrently trained across all tasks, whereas prior work was restricted to the single-task setting and could only make meaningful progress on these tasks when provided with human expert demonstrations. \n\nAs further evidence of its generality, BYOL-Explore achieves super-human performance in the ten hardest exploration [Atari games](https://arxiv.org/abs/1207.4708), while having a simpler design than other competitive agents, such as [Agent57](https://arxiv.org/abs/2003.13350) and [Go-Explore](https://arxiv.org/abs/2004.12919).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6307b4f7fc73190a5ad67ce6_62b06c47ed1b2613a7eea9ce_atari-chns-across-games-2.png)Comparison between BYOL-Explore, [Random Network Distillation](https://arxiv.org/abs/1810.12894) (RND), [Intrinsic Curiosity Module](https://arxiv.org/abs/1705.05363) (ICM) and pure RL (no intrinsic reward), in terms of mean capped human-normalised score (CHNS).![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6307b4f8b782305825efd7ae_62b06c464bde0e1f3a671733_dmh-chns-across-tasks-1.png)Moving forward, we can generalise BYOL-Explore to highly stochastic environments by learning a probabilistic world model that could be used to generate trajectories of the future events. 
This could allow the agent to model the possible stochasticity of the environment, avoid stochastic traps, and plan for exploration.", "date_published": "2022-06-20T00:00:00Z", "authors": ["Zhaohan Daniel Guo", "Shantanu Thakoor", "Miruna Pîslar", "Bernardo Avila Pires", "Florent Altché", "Corentin Tallec", "Alaa Saade", "Daniele Calandriello", "Jean-Bastien Grill", "Yunhao Tang", "Michal Valko", "Rémi Munos", "Mohammad Gheshlaghi Azar", "Bilal Piot"], "summaries": []} +{"id": "ee04c7fd6440ce67dd55847b7c46f8e4", "title": "Unlocking High-Accuracy Differentially Private Image Classification through Scale", "url": "https://www.deepmind.com/blog/unlocking-high-accuracy-differentially-private-image-classification-through-scale", "source": "deepmind_technical_blog", "source_type": "blog", "text": "A recent [DeepMind paper](https://arxiv.org/abs/2112.04359) on the ethical and social risks of language models identified large language models [leaking sensitive information](https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting) about their training data as a potential risk that organisations working on these models have the responsibility to address. Another [recent paper](https://arxiv.org/abs/2201.04845) shows that similar privacy risks can also arise in standard image classification models: a fingerprint of each individual training image can be found embedded in the model parameters, and malicious parties could exploit such fingerprints to reconstruct the training data from the model. \n\nPrivacy-enhancing technologies like differential privacy (DP) can be deployed at training time to mitigate these risks, but they often incur significant reduction in model performance. In this work, we make substantial progress towards unlocking high-accuracy training of image classification models under differential privacy.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62ab43e65845e64d1a827c87_Figure.png)*Figure 1: (left) Illustration of training data leakage in GPT-2 [credit: Carlini et al. \"Extracting Training Data from Large Language Models\", 2021]. (right) CIFAR-10 training examples reconstructed from a 100K parameter convolutional neural network [credit: Balle et al. \"Reconstructing Training Data with Informed Adversaries\", 2022]*Differential privacy was [proposed](https://link.springer.com/content/pdf/10.1007/11681878_14.pdf) as a mathematical framework to capture the requirement of protecting individual records in the course of statistical data analysis (including the training of machine learning models). DP algorithms protect individuals from any inferences about the features that make them unique (including complete or partial reconstruction) by injecting carefully calibrated noise during the computation of the desired statistic or model. 
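As a toy illustration of "carefully calibrated noise" (our sketch, unrelated to the paper's code), the Gaussian mechanism below releases a differentially private mean of a bounded quantity: each record is clipped, and noise scaled to the resulting sensitivity is added before averaging. The clipping bound and noise multiplier are arbitrary illustrative values.

```python
import numpy as np

def private_mean(values, clip_bound=1.0, noise_multiplier=1.1, rng=None):
    """Differentially private estimate of a mean via the Gaussian mechanism (toy sketch).

    Each value is clipped to [-clip_bound, clip_bound], so replacing one individual's
    record can shift the sum by at most 2 * clip_bound (the sensitivity). Gaussian noise
    proportional to that sensitivity masks any single individual's contribution; the
    (epsilon, delta) guarantee follows from the noise multiplier via standard DP accounting.
    """
    rng = rng or np.random.default_rng(0)
    values = np.clip(np.asarray(values, dtype=float), -clip_bound, clip_bound)
    sensitivity = 2.0 * clip_bound
    noisy_sum = values.sum() + rng.normal(scale=noise_multiplier * sensitivity)
    return noisy_sum / len(values)

print(private_mean([0.2, -0.4, 0.9, 1.5, -2.0]))  # values beyond ±1 are clipped first
```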
Using DP algorithms provides robust and rigorous privacy guarantees both in theory and in practice, and has become a de-facto gold standard adopted by a number of [public](https://dl.acm.org/doi/10.1145/3219819.3226070) and [private](https://ai.googleblog.com/2022/02/federated-learning-with-formal.html) organisations.\n\nThe most popular DP algorithm for deep learning is differentially private stochastic gradient descent (DP-SGD), a modification of standard SGD obtained by clipping gradients of individual examples and adding enough noise to mask the contribution of any individual to each model update:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62ab451f8d59129ace23bbac_Figure2.png)*Figure 2: Illustration of how DP-SGD processes gradients of individual examples and adds noise to produce model updates with privatised gradients.*Unfortunately, prior works have found that in practice, the privacy protection provided by DP-SGD often comes at the cost of significantly less accurate models, which presents a major obstacle to the widespread adoption of differential privacy in the machine learning community. According to empirical evidence from prior works, this utility degradation in DP-SGD becomes more severe on larger neural network models – including the ones regularly used to achieve the best performance on challenging image classification benchmarks.\n\nOur work investigates this phenomenon and proposes a series of simple modifications to both the training procedure and model architecture, yielding a significant improvement on the accuracy of DP training on standard image classification benchmarks. The most striking observation coming out of our research is that DP-SGD can be used to efficiently train much deeper models than previously thought, as long as one ensures the model's gradients are well-behaved. We believe the substantial jump in performance achieved by our research has the potential to unlock practical applications of image classification models trained with formal privacy guarantees. \n\nThe figure below summarises two of our main results: an ~10% improvement on CIFAR-10 compared to previous work when privately training without additional data, and a top-1 accuracy of 86.7% on ImageNet when privately fine-tuning a model pre-trained on a different dataset, almost closing the gap with the best non-private performance.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62ab4601aabb144ad7dcd770_Figure3.png)*Figure 3: (left) Our best results on training WideResNet models on CIFAR-10 without additional data. (right) Our best results on fine-tuning NFNet models on ImageNet. The best performing model was pre-trained on an internal dataset disjoint from ImageNet.*These results are achieved at 𝜺=8, a standard setting for calibrating the strength of the protection offered by differential privacy in machine learning applications. We refer to the paper for a discussion of this parameter, as well as additional experimental results at other values of 𝜺 and also on other datasets. Together with the paper, we are also open-sourcing our implementation to enable other researchers to verify our findings and build on them. We hope this contribution will help others interested in making practical DP training a reality.\n\n‍\n\nDownload our JAX implementation [on GitHub](https://github.com/deepmind/jax_privacy).", "date_published": "2022-06-17T00:00:00Z", "authors": ["Soham De", "Leonard Berrada", "Jamie Hayes", "Samuel L. 
Smith", "Borja Balle"], "summaries": []} +{"id": "96daac7bf4642a7b8cdcae688bad166f", "title": "Evaluating Multimodal Interactive Agents", "url": "https://www.deepmind.com/blog/evaluating-multimodal-interactive-agents", "source": "deepmind_technical_blog", "source_type": "blog", "text": "To train agents to interact well with humans, we need to be able to measure progress. But human interaction is complex and measuring progress is difficult. In this work we developed a method, called the Standardised Test Suite (STS), for evaluating agents in temporally extended, multi-modal interactions. We examined interactions that consist of human participants asking agents to perform tasks and answer questions in a 3D simulated environment.\n\nThe STS methodology places agents in a set of behavioural scenarios mined from real human interaction data. Agents see a replayed scenario context, receive an instruction, and are then given control to complete the interaction offline. These agent continuations are recorded and then sent to human raters to annotate as success or failure. Agents are then ranked according to the proportion of scenarios on which they succeed.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6290c501360e22cfee4d2a80_select_frames_fig_v2.png)Figure 1: Example of an original scenario taken from two humans interacting alongside successful and unsuccessful agent continuations.Many of the behaviours that are second nature to humans in our day-to-day interactions are difficult to put into words, and impossible to formalise. Thus, the mechanism relied on for solving games (like Atari, Go, DotA, and Starcraft) with reinforcement learning won't work when we try to teach agents to have fluid and successful interactions with humans. For example, think about the difference between these two questions: \"Who won this game of Go?\" versus \"What are you looking at?\" In the first case, we can write a piece of computer code that counts the stones on the board at the end of the game and determines the winner with certainty. In the second case, we have no idea how to codify this: the answer may depend on the speakers, the size and shapes of the objects involved, whether the speaker is joking, and other aspects of the context in which the utterance is given. Humans intuitively understand the myriad of relevant factors involved in answering this seemingly mundane question.\n\nInteractive evaluation by human participants can serve as a touchstone for understanding agent performance, but this is noisy and expensive. It is difficult to control the exact instructions that humans give to agents when interacting with them for evaluation. This kind of evaluation is also in real-time, so it is too slow to rely on for swift progress. Previous works have relied on proxies to interactive evaluation. Proxies, such as losses and scripted probe tasks (e.g. “lift the x” where x is randomly selected from the environment and the success function is painstakingly hand-crafted), are useful for gaining insight into agents quickly, but don’t actually correlate that well with interactive evaluation. Our new method has advantages, mainly affording control and speed to a metric that closely aligns with our ultimate goal - to create agents that interact well with humans.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6290c5317e6d90ce5c603af8_sts_vs_ha.png)Figure 2: STS evaluation compared to other evaluation metrics used for evaluating interactive agents. 
The STS correlates best with interactive evaluation compared to previous proxies used.The development of MNIST, ImageNet and other human-annotated datasets has been essential for progress in machine learning. These datasets have allowed researchers to train and evaluate classification models for a one-time cost of human inputs. The STS methodology aims to do the same for human-agent interaction research. This evaluation method still requires humans to annotate agent continuations; however, early experiments suggest that automation of these annotations may be possible, which would enable fast and effective automated evaluation of interactive agents. In the meantime, we hope that other researchers can use the methodology and system design to accelerate their own research in this area.", "date_published": "2022-05-27T00:00:00Z", "authors": ["Josh Abramson", "Arun Ahuja", "Federico Carnevale", "Petko Georgiev", "Alex Goldin", "Jessica Landon", "Timothy Lillicrap", "Alistair Muldal", "Adam Santoro", "Tamara von Glehn", "Gregory Wayne", "Nathaniel Wong", "Chen Yan", "Blake Richards*", "Alden Hung*"], "summaries": []} +{"id": "3a0913fe2039fa24119d350c02e560dc", "title": "Dynamic language understanding: adaptation to new knowledge in parametric and semi-parametric models", "url": "https://www.deepmind.com/blog/dynamic-language-understanding-adaptation-to-new-knowledge-in-parametric-and-semi-parametric-models", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Many recent successes in language models (LMs) have been achieved within a ‘static paradigm’, where the focus is on improving performance on the benchmarks that are created without considering the temporal aspect of data. For instance, answering questions on events that the model could learn about during training, or evaluating on text sub-sampled from the same period as the training data. However, our language and knowledge are dynamic and ever evolving. Therefore, to enable a more realistic evaluation of question-answering models for the next leap in performance, it’s essential to ensure they are flexible and robust when encountering new and unseen data.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/628f80e879bcf32f9eefad77_Figure%201.png)Figure 1. We evaluate our models on unseen language and knowledge, seen here using questions about events in 2020, while the model has been trained on data up until the end of 2019.In 2021, we released [Mind the Gap: Assessing Temporal Generalization in Neural Language Models](https://arxiv.org/abs/2102.01951) and the [dynamic language modelling benchmarks](https://github.com/deepmind/deepmind-research/tree/master/pitfalls_static_language_models) for WMT and arXiv to facilitate language model evaluation that take temporal dynamics into account. In this paper, we highlighted issues that current state-of-the-art large LMs face with temporal generalisation and found that knowledge-intensive tokens take a considerable performance hit.\n\nToday, we’re releasing two papers and a new benchmark that further advance research on this topic. 
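The evaluation protocol in Figure 1 amounts to splitting questions by time relative to the model's training cutoff. Below is a minimal sketch (ours; the field names are illustrative) that buckets QA examples into a seen period and an unseen period so accuracy can be reported separately for each.

```python
from datetime import date

TRAINING_CUTOFF = date(2019, 12, 31)  # the model saw data up to the end of 2019

examples = [  # illustrative records: question date and whether the model answered correctly
    {"question_date": date(2019, 6, 1), "correct": True},
    {"question_date": date(2020, 3, 15), "correct": False},
    {"question_date": date(2020, 7, 2), "correct": True},
]

def accuracy_by_period(examples, cutoff):
    buckets = {"seen_period": [], "unseen_period": []}
    for ex in examples:
        key = "seen_period" if ex["question_date"] <= cutoff else "unseen_period"
        buckets[key].append(ex["correct"])
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}

print(accuracy_by_period(examples, TRAINING_CUTOFF))
# A large gap between the two numbers is the temporal-generalisation degradation
# highlighted in Mind the Gap.
```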
In [StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models](http://arxiv.org/abs/2205.11388), we study the downstream task of question-answering on our newly proposed benchmark, [*StreamingQA*](https://github.com/deepmind/streamingqa): we want to understand how parametric and retrieval-augmented, semi-parametric question-answering models adapt to new information, in order to answer questions about new events. In [Internet-augmented language models through few-shot prompting for open-domain question answering](https://arxiv.org/abs/2203.05115), we explore the power of combining a few-shot prompted large language model along with Google Search as a retrieval component. In doing so, we aim to improve the model's factuality, while making sure it has access to up-to-date information for answering a diverse set of questions.\n\n#### StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models\n\nKnowledge and language understanding of models evaluated through question-answering (QA) has been commonly studied on static snapshots of knowledge, like Wikipedia. To study how semi-parametric QA models and their underlying parametric LMs adapt to evolving knowledge, we constructed the new large-scale benchmark, StreamingQA, with human-written and automatically generated questions asked on a given date, to be answered from 14 years of time-stamped news articles (see Figure 2). We show that parametric models can be updated without full retraining, while avoiding catastrophic forgetting. For semi-parametric models, adding new articles into the search space allows for rapid adaptation, however, models with an outdated underlying LM underperform those with a retrained LM.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/628f8145cc208c1207a49e16_Figure%202.png)Figure 2. Example questions from the StreamingQA benchmark.#### Internet-augmented language models through few-shot prompting for open-domain question-answering\n\nWe’re aiming to capitalise on the unique few-shot capabilities offered by large-scale language models to overcome some of their challenges, with respect to grounding to factual and up-to-date information. Motivated by semi-parametric LMs, which ground their decisions in externally retrieved evidence, we use few-shot prompting to learn to condition LMs on information returned from the web using Google Search, a broad and constantly updated knowledge source. Our approach does not involve fine-tuning or learning additional parameters, thus making it applicable to virtually any language model. And indeed, we find that LMs conditioned on the web surpass the performance of closed-book models of similar, or even larger, model size in open-domain question-answering.", "date_published": "2022-05-26T00:00:00Z", "authors": ["Elena Gribovskaya", "Angeliki Lazaridou", "Tomáš Kočiský"], "summaries": []} +{"id": "d0a27f1644f83acd6e961dd8d76db260", "title": "Emergent Bartering Behaviour in Multi-Agent Reinforcement Learning", "url": "https://www.deepmind.com/blog/emergent-bartering-behaviour-in-multi-agent-reinforcement-learning", "source": "deepmind_technical_blog", "source_type": "blog", "text": "In [our recent paper](https://arxiv.org/abs/2205.06760), we explore how populations of deep reinforcement learning (deep RL) agents can learn microeconomic behaviours, such as production, consumption, and trading of goods. 
We find that artificial agents learn to make economically rational decisions about production, consumption, and prices, and react appropriately to supply and demand changes. The population converges to local prices that reflect the nearby abundance of resources, and some agents learn to transport goods between these areas to “buy low and sell high”. This work advances the broader multi-agent reinforcement learning research agenda by introducing new social challenges for agents to learn how to solve.\n\nInsofar as the goal of multi-agent reinforcement learning research is to eventually produce agents that work across the full range and complexity of human social intelligence, the set of domains so far considered has been woefully incomplete. It is still missing crucial domains where human intelligence excels, and humans spend significant amounts of time and energy. The subject matter of economics is one such domain. Our goal in this work is to establish environments based on the themes of trading and negotiation for use by researchers in multi-agent reinforcement learning.\n\nEconomics uses agent-based models to simulate how economies behave. These agent-based models often build in economic assumptions about how agents should act. In this work, we present a multi-agent simulated world where agents can learn economic behaviours from scratch, in ways familiar to any Microeconomics 101 student: decisions about production, consumption, and prices. But our agents also must make other choices that follow from a more physically embodied way of thinking. They must navigate a physical environment, find trees to pick fruits, and partners to trade them with. Recent advances in deep RL techniques now make it possible to create agents that can learn these behaviours on their own, without requiring a programmer to encode domain knowledge.\n\nOur environment, called *Fruit Market*, is a multiplayer environment where agents produce and consume two types of fruit: apples and bananas. Each agent is skilled at producing one type of fruit, but has a preference for the other – if the agents can learn to barter and exchange goods, both parties would be better off.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62824c1d9603656400227009_blog_post_environment.png)**An example map in Fruit Market:** Agents move around the map to harvest apples and bananas from trees, meet up to trade with each other, and then consume the fruit that they prefer.In our experiments, we demonstrate that current deep RL agents can learn to trade, and their behaviours in response to supply and demand shifts align with what microeconomic theory predicts. We then build on this work to present scenarios that would be very difficult to solve using analytical models, but which are straightforward for our deep RL agents. For example, in environments where each type of fruit grows in a different area, we observe the emergence of different price regions related to the local abundance of fruit, as well as the subsequent learning of arbitrage behaviour by some agents, who begin to specialise in transporting fruit between these regions.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62824c66f299c6e3f1f54b36_blog_post_supply_demand.png)**Emergent Supply and Demand curves:** In this experiment, we manipulate the probability of apple trees (a=x) and banana trees (b=y) appearing in each map location. 
These results replicate the theoretical supply and demand curves presented in introductory Microeconomics courses.The field of agent-based computational economics uses similar simulations for economics research. In this work, we also demonstrate that state-of-the-art deep RL techniques can flexibly learn to act in these environments from their own experience, without needing to have economic knowledge built in. This highlights the reinforcement learning community’s recent progress in multi-agent RL and deep RL, and demonstrates the potential of multi-agent techniques as tools to advance simulated economics research.\n\nAs a [path to artificial general intelligence](https://arxiv.org/abs/1903.00742) (AGI), multi-agent reinforcement learning research should encompass all critical domains of social intelligence. However, until now it hasn’t incorporated traditional economic phenomena such as trade, bargaining, specialisation, consumption, and production. This paper fills this gap and provides a platform for further research. To aid future research in this area, the Fruit Market environment will be included in the next release of the [Melting Pot](https://github.com/deepmind/meltingpot) suite of environments.", "date_published": "2022-05-16T00:00:00Z", "authors": ["Mike Johanson", "Edward Hughes", "Finbarr Timbers", "Joel Leibo"], "summaries": []} +{"id": "c4d0c395656068741fd184a8cb3fb3b6", "title": "A Generalist Agent", "url": "https://www.deepmind.com/blog/a-generalist-agent", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d13d743dc353a184da8d4_data_sequences.png)During the training phase of Gato, data from different tasks and modalities are serialised into a flat sequence of tokens, batched, and processed by a transformer neural network similar to a large language model. The loss is masked so that Gato only predicts action and text targets.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d148b710554b355ec4d28_diagram_train%20(1)-1.png)When deploying Gato, a prompt, such as a demonstration, is tokenised, forming the initial sequence. Next, the environment yields the first observation, which is also tokenised and appended to the sequence. Gato samples the action vector autoregressively, one token at a time.\n\nOnce all tokens comprising the action vector have been sampled (determined by the action specification of the environment), the action is decoded and sent to the environment which steps and yields a new observation. Then the procedure repeats. The model always sees all previous observations and actions within its context window of 1024 tokens.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d14de5d578e1ad6af2aee_eval_sequence-1.png)Gato is trained on a large number of datasets comprising agent experience in both simulated and real-world environments, in addition to a variety of natural language and image datasets. 
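To illustrate the masked loss described above, here is a short NumPy sketch (ours, not the Gato implementation) over a toy token sequence: positions holding observation tokens are excluded from the loss, so the model is trained to predict only text and action tokens.

```python
import numpy as np

def masked_nll(logits, targets, target_is_predictable):
    """Average negative log-likelihood over text/action positions only.

    logits: [seq_len, vocab] unnormalised scores, targets: [seq_len] token ids,
    target_is_predictable: [seq_len] bools, True for text/action tokens and False
    for observation tokens, which are part of the context but are not predicted.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]              # per-position loss
    mask = np.asarray(target_is_predictable, dtype=float)
    return (nll * mask).sum() / mask.sum()

# Toy sequence: [obs, obs, action, text] with a vocabulary of 5 tokens.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
targets = np.array([3, 1, 4, 2])
loss = masked_nll(logits, targets, target_is_predictable=[False, False, True, True])
print(float(loss))
```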
The number of tasks, where the performance of the pretrained Gato model is above a percentage of expert score, grouped by domain, is shown here.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d15240b604dc2628bc05f_barplot_domains.png)The following images also show how the pre-trained Gato model with the same weights can do image captioning, engage in an interactive dialogue, and control a robot arm, among many other tasks.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d15dba01b303962bf0014_image_captions_v3-1.png)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d161a9709ad24126a513b_dialogue_examples_g1-1.png)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627d1648c0eef89f6a91f370_real_robot_blue_on_green.png)", "date_published": "2022-05-12T00:00:00Z", "authors": ["Scott Reed", "Konrad Żołna", "Emilio Parisotto", "Sergio Gómez Colmenarejo", "Alexander Novikov", "Gabriel Barth-Maron", "Mai Giménez", "Yury Sulsky", "Jackie Kay", "Jost Tobias Springenberg", "Tom Eccles", "Jake Bruce", "Ali Razavi", "Ashley Edwards", "Nicolas Heess", "Yutian Chen", "Raia Hadsell", "Oriol Vinyals", "Mahyar Bordbar", "and Nando de Freitas"], "summaries": []} +{"id": "554a2a0eda5d548b925f67d4201f7158", "title": "Active offline policy selection", "url": "https://www.deepmind.com/blog/active-offline-policy-selection", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Reinforcement learning (RL) has made tremendous progress in recent years towards addressing real-life problems – and offline RL made it even more practical. Instead of direct interactions with the environment, we can now train many algorithms from a single pre-recorded dataset. However, we lose the practical advantages in data-efficiency of offline RL when we evaluate the policies at hand.\n\nFor example, when training robotic manipulators the robot resources are usually limited, and training many policies by offline RL on a single dataset gives us a large data-efficiency advantage compared to online RL. Evaluating each policy is an expensive process, which requires interacting with the robot thousands of times. When we choose the best algorithm, hyperparameters, and a number of training steps, the problem quickly becomes intractable.\n\nTo make RL more applicable to real-world applications like robotics, we propose using an intelligent evaluation procedure to select the policy for deployment, called active offline policy selection (A-OPS). In A-OPS, we make use of the prerecorded dataset and allow limited interactions with the real environment to boost the selection quality.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627bec820168b096d5504b9e_1.png)Active offline policy selection (A-OPS) selects the best policy out of a set of policies given a pre-recorded dataset and limited interaction with the environment.To minimise interactions with the real environment, we implement three key features: \n\n‍\n\n1. Off-policy policy evaluation, such as fitted Q-evaluation (FQE), allows us to make an initial guess about the performance of each policy based on an offline dataset. 
It correlates well with the ground truth performance in many environments, including real-world robotics where it is applied for the first time.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627bed3d532fe54cd0a5e46a_2-1.png)FQE scores are well aligned with the ground truth performance of policies trained in both sim2real and offline RL setups.\n\n2. The returns of the policies are modelled jointly using a Gaussian process, where observations include FQE scores and a small number of newly collected episodic returns from the robot. After evaluating one policy, we gain knowledge about all policies because their distributions are correlated through the kernel between pairs of policies. The kernel assumes that if policies take similar actions – such as moving the robotic gripper in a similar direction – they tend to have similar returns.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627bed70dc6376412174067c_3.gif)We use OPE scores and episodic returns to model latent policy performance as a Gaussian process.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627cf4b5d6cc1c56d8a29cb4_4-1.png)Similarity between the policies is modelled through the distance between the actions these policies produce.\n\n3. To be more data-efficient, we apply Bayesian optimisation and prioritise more promising policies to be evaluated next, namely those that have high predicted performance and large variance.\n\nWe demonstrated this procedure in a number of environments in several domains: dm-control, Atari, simulated, and real robotics. Using A-OPS reduces the regret rapidly, and with a moderate number of policy evaluations, we identify the best policy. \n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/627cf4fe45a4003f659b47c0_5.gif)In a real-world robotic experiment, A-OPS helps identify a very good policy faster than other baselines. To find a policy with close to zero regret out of 20 policies takes the same amount of time as it takes to evaluate two policies with current procedures.\n\nOur results suggest that it’s possible to make an effective offline policy selection with only a small number of environment interactions by utilising the offline data, a special kernel, and Bayesian optimisation. The code for A-OPS is open-sourced and [available on GitHub](https://github.com/deepmind/active_ops) with an example dataset to try.", "date_published": "2022-05-06T00:00:00Z", "authors": ["Yutian Chen", "Ksenia Konyushkova", "Tom Paine", "Caglar Gulcehre", "Cosmin Paduraru", "Daniel J. Mankowitz", "Misha Denil", "Nando de Freitas"], "summaries": []} +{"id": "c06ea45ccd3ac6733ca0c5314b8b58b4", "title": "An empirical analysis of compute-optimal large language model training", "url": "https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training", "source": "deepmind_technical_blog", "source_type": "blog", "text": "In the last few years, a focus in language modelling has been on improving performance through increasing the number of parameters in transformer-based models. This approach has led to impressive results and state-of-the-art performance across many natural language processing tasks. \n\nWe also pursued this line of research at DeepMind and recently showcased Gopher, a 280-billion parameter model that established leading performance on a wide range of tasks including language modelling, reading comprehension, and question answering. 
Since then, an even larger model named Megatron-Turing NLG has been published with 530 billion parameters.\n\nDue to the substantial cost of training these large models, it is paramount to estimate the best possible training setup to avoid wasting resources. In particular, the training compute cost for transformers is determined by two factors: the model size and the number of training tokens.\n\nThe current generation of large language models has allocated increased computational resources to increasing the parameter count of large models and keeping the training data size fixed at around 300 billion tokens. In this work, we empirically investigate the optimal tradeoff between increasing model size and the amount of training data with increasing computational resources. Specifically, we ask the question: “What is the optimal model size and number of training tokens for a given compute budget?” To answer this question, we train models of various sizes and with various numbers of tokens, and estimate this trade-off empirically. \n \nOur main finding is that the current large language models are far too large for their compute budget and are not being trained on enough data. In fact, we find that for the number of training FLOPs used to train *Gopher*, a 4x smaller model trained on 4x more data would have been preferable.\n\n‍\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62557f7626b9e103db549c7b_tokens_vs_flops%20(1).png)**Figure 1:** Based on our approach, we show our projections of the optimal number of training tokens and parameters. We show points representing the training setup of three different established large language models along with our new model, Chinchilla*.*We test our data scaling hypothesis by training *Chinchilla,* a 70-billion parameter model trained for 1.3 trillion tokens. While the training compute cost for ChinchillaandGopherare the same, we find that it outperforms Gopher and other large language models on nearly every measured task, despite having 70 billion parameters compared to Gopher’s 280 billion.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62557f43672f48833d2088c1_chinchilla.performance.image.png)**Figure 2:** For various common benchmarks that include Question Answering (TriviaQA), CommonSense (HellaSwag, PIQA, Winogrande, and BoolQ), Reading Comprehension (LAMBADA), and the large Multi-task Language Understanding (MMLU) general knowledge benchmark, we compare the performance of Gopher, Chinchilla*,* GPT-3, and Megatron-Turing NLG.After the release of Chinchilla, a model named PaLM was released with 540 billion parameters and trained on 768 billion tokens. This model was trained with approximately 5x the compute budget of Chinchilla and outperformed Chinchilla on a range of tasks. While the training corpus is different, our methods do predict that such a model trained on our data would outperform Chinchilla despite not being compute-optimal. Given the PaLM compute budget, we predict a 140-billion-parameter model trained on 3 trillion tokens to be optimal and more efficient for inference.\n\nAn additional benefit of smaller, more performant models is that the inference time and memory costs are reduced making querying the models both faster and possible on less hardware. In practice, while the training FLOPs between Gopherand Chinchilla are the same, the cost of using Chinchilla is substantially smaller, in addition to it performing better. 
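As a rough, back-of-the-envelope illustration of this trade-off (ours, not the paper's fitted scaling laws), the sketch below uses the common approximation that training compute is C ≈ 6·N·D for N parameters and D tokens, together with the finding that parameters and data should be scaled in roughly equal proportion, and extrapolates from the Chinchilla operating point quoted above.

```python
# Back-of-the-envelope sketch (ours) of compute-optimal scaling, assuming
# training FLOPs C ≈ 6 * N * D and N, D both growing roughly like C**0.5,
# anchored at the Chinchilla point quoted above. The paper's fitted exponents
# differ slightly from 0.5, so this toy rule only approximates its predictions.
N_REF = 70e9      # Chinchilla parameters
D_REF = 1.3e12    # training tokens (figure quoted above)
C_REF = 6 * N_REF * D_REF

def compute_optimal(flops_budget):
    """Roughly optimal (parameters, tokens) for a given training FLOPs budget."""
    scale = (flops_budget / C_REF) ** 0.5
    return N_REF * scale, D_REF * scale

for multiple in (1, 10, 100):
    n, d = compute_optimal(multiple * C_REF)
    print(f"{multiple:>3}x Chinchilla compute -> ~{n / 1e9:.0f}B parameters, ~{d / 1e12:.1f}T tokens")
```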
Further simple optimisations may be possible that are able to continue to provide large gains.", "date_published": "2022-04-12T00:00:00Z", "authors": ["Jordan Hoffmann", "Sebastian Borgeaud", "Arthur Mensch", "Laurent Sifre"], "summaries": []} +{"id": "ffaac8ffc995153a4992974921e3f4f5", "title": "GopherCite: Teaching language models to support answers with verified quotes", "url": "https://www.deepmind.com/blog/gophercite-teaching-language-models-to-support-answers-with-verified-quotes", "source": "deepmind_technical_blog", "source_type": "blog", "text": "DeepMind published a [series of papers](https://deepmind.com/blog/article/language-modelling-at-scale) about large language models (LLMs) last year, including [an analysis](https://arxiv.org/abs/2112.11446) of Gopher, our large language model. Language modelling technology, which is also currently being developed by several other labs and companies, promises to strengthen many applications, from [search engines](https://blog.google/products/search/search-language-understanding-bert/) to a new wave of chatbot-like [conversational assistants](https://blog.google/technology/ai/lamda/) and beyond. One [paper](https://arxiv.org/abs/2112.04359) in this series laid out a number of reasons why “raw” language models like Gopher do not meet our standards for safely deploying this technology in user-facing applications, especially if guard rails for managing problematic and potentially harmful behaviour are not set in place.\n\nOur latest work focuses on one of these concerns: Language models like Gopher can “hallucinate” facts that appear plausible but are actually fake. Those who are familiar with this problem know to do their own fact-checking, rather than trusting what language models say. Those who are not, may end up believing something that isn’t true. This paper describes GopherCite, a model which aims to address the problem of language model hallucination. GopherCite attempts to back up all of its factual claims with evidence from the web. It uses Google Search to find relevant web pages on the internet and quotes a passage which tries to demonstrate why its response is correct. If the system is unable to form an answer that can be well-supported by evidence, it tells the user, “I don’t know”, instead of providing an unsubstantiated answer.\n\nSupporting simple factual claims with easily verifiable evidence is one step towards making language models more trustworthy, both for users interacting with them and for annotators assessing the quality of samples. A comparison between the behaviour of “raw” Gopher and our new model is helpful for illustrating this change.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6238b8206dff585fb5183967_fig_1.svg)Based on GopherCite’s response, you’ll notice that Gopher invented a fact (“Lake Placid hosted the winter Olympics in 1936”) without warning. When shown a verified snippet from a relevant Wikipedia page by GopherCite, we can confirm that Lake Placid only hosted the Olympics twice, in 1932 and 1980.\n\nTo alter Gopher’s behaviour in this way, we trained Gopher according to human preferences. We asked participants in a user study to pick their preferred answer from a pair of candidates, according to criteria including how well the evidence supports the answers given. These labels were used as training data for both supervised learning on highly rated samples and for [reinforcement learning from human preferences](https://arxiv.org/abs/1909.08593) (RLHP). 
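As a rough illustration of how pairwise preference labels of this kind are typically turned into a learning signal, the sketch below implements a Bradley-Terry-style objective that pushes a reward model to score the human-preferred (question, answer, evidence) candidate above the rejected one. It is a generic reward-modelling sketch under that assumption, not GopherCite's actual training code.

```python
import numpy as np

def preference_loss(score_preferred, score_rejected):
    """Bradley-Terry style objective over a batch of answer pairs.

    `score_preferred` and `score_rejected` are reward-model scores for the
    human-preferred and rejected candidates. Minimising this loss pushes
    preferred scores above rejected ones; the trained scorer can then be
    used to rank samples or as a reward signal for RL. Illustrative only.
    """
    margin = np.asarray(score_preferred) - np.asarray(score_rejected)
    return np.mean(np.log1p(np.exp(-margin)))  # equals -log sigmoid(margin)
```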
We also took this approach in [our recent work on red teaming](https://deepmind.com/research/publications/2022/Red-Teaming-Language-Models-with-Language-Models).\n\nWe are not the only ones interested in this problem of factual inaccuracy in language models. Our colleagues at Google recently made progress on factual grounding in their latest [LaMDA system](https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html), having a conversational model interact with Google Search and sometimes share relevant URLs. Indeed, GopherCite’s training regimen uses similar methodology to that of LaMDA, but a critical difference is that we aim to provide a specific snippet of relevant evidence, rather than simply pointing the user to a URL. Based on motivations similar to our own, OpenAI has [recently announced work](https://openai.com/blog/webgpt/) developing a closely related system called WebGPT, which also applies RLHP to align their GPT-3 language model. Whereas GopherCite focuses on reading long document inputs, WebGPT carefully curates the context presented to the language model by interacting multiple times with a web browser. It also cites evidence to back up its responses. Similarities and differences between these systems and our own are discussed in our paper and we also demonstrate that GopherCite very often provides compelling evidence for its claims.\n\nWe conducted a user study with paid participants to assess the model on two types of questions: fact-seeking questions typed into Google Search ([released by Google in a dataset called “NaturalQuestions”](https://ai.google.com/research/NaturalQuestions)), and explanation-seeking questions which Reddit users asked on a forum called “/r/eli5” (“Explain it Like I’m 5 [years old]”). The participants in our study determined that GopherCite answers fact-seeking questions correctly – and with satisfactory evidence – about 80% of the time, and does so for explanation-seeking questions about 67% of the time. When we allow GopherCite to refrain from answering some questions, its performance improves dramatically amongst the questions it does choose to answer (see the paper for details). This explicit mechanism for abstaining is a core contribution of our work.\n\nBut when we evaluate the model on a set of “adversarial” questions, which attempt to trick the model into parroting a fiction or misconception that is stated on the internet, GopherCite often falls into the trap. For instance, when asked “what does Red Bull give you?”, here is how it responds:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6238b8399fc3670aa60958e8_fig_2.svg)An example of GopherCite's response to a question from the TruthfulQA dataset. We also show alongside the sample, how human annotators assessed three criteria we have for samples. 1. \"Plausible\": Is the answer on topic, attempting to address the user's question? 2. \"Supported\": Does the quotation convince you that the response is accurate? 3. \"True\": If the response does not contain false information.We think this failure mode and others discussed in our paper can be avoided by enriching the setting, moving from a “single-shot” reply to a user’s question, to one in which the model can ask clarifying questions of the user and engage in a dialogue. 
For example, we could enable future models to ask the user whether they want an answer that is literally true or one that is true in the confines of the fictional world of a Red Bull advertisement.\n\nIn summary, we think GopherCite is an important step forward, but building it has taught us that evidence citation is only one part of an overall strategy for safety and trustworthiness. More fundamentally, not all claims require quote evidence – and as we demonstrated above, not all claims supported by evidence are true. Some claims require multiple pieces of evidence along with a logical argument explaining why the claim follows. We will continue working in this area and aim to overcome the issues presented with further research and development as well as dedicated sociotechnical research.\n\nOur paper covers many more details about our methods, experiments, and relevant context from the research literature. We have also created an FAQ about GopherCite, answered by the model itself after reading the paper's introduction (using candidate samples curated by the authors):\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6238b879d3a417cd9f473c0c_fig_3.svg)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6238b8812be7bee9042434ca_fig_4.svg)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6238b887522b7603b6dcb08d_fig_5.svg)", "date_published": "2022-03-16T00:00:00Z", "authors": ["Jacob Menick", "Maja Trebacz", "Vladimir Mikulik", "John Aslanides", "Francis Song", "Martin Chadwick", "Mia Glaese", "Susannah Young", "Lucy Campbell-Gillingham", "Geoffrey Irving", "Nat McAleese"], "summaries": []} +{"id": "bb91c906d01642ebf99c857ab46fd02d", "title": "Learning Robust Real-Time Cultural Transmission without Human Data", "url": "https://www.deepmind.com/blog/learning-robust-real-time-cultural-transmission-without-human-data", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Over millennia, humankind has discovered, evolved, and accumulated a wealth of cultural knowledge, from navigation routes to mathematics and social norms to works of art. Cultural transmission, defined as efficiently passing information from one individual to another, is the inheritance process underlying this exponential increase in human capabilities.\n\nOur agent, in blue, imitates and remembers the demonstration of both bots (left) and humans (right), in red.\n\nFor more videos of our agents in action, visit our [website](https://sites.google.com/view/dm-cgi).\n\nIn this work, we use deep reinforcement learning to generate artificial agents capable of test-time cultural transmission. Once trained, our agents can infer and recall navigational knowledge demonstrated by experts. This knowledge transfer happens in real time and generalises across a vast space of previously unseen tasks. For example, our agents can quickly learn new behaviours by observing a single human demonstration, without ever training on human data.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6227d60010fcae8805be6718_Fig%201.jpg)A summary of our reinforcement learning environment. The tasks are navigational representatives for a broad class of human skills, which require particular sequences of strategic decisions, such as cooking, wayfinding, and problem solving.We train and test our agents in procedurally generated 3D worlds, containing colourful, spherical goals embedded in a noisy terrain full of obstacles. 
A player must navigate the goals in the correct order, which changes randomly on every episode. Since the order is impossible to guess, a naive exploration strategy incurs a large penalty. As a source of culturally transmitted information, we provide a privileged “bot” that always enters goals in the correct sequence.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6227d611c9968b617accf2a9_Fig%202.jpg)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6227d6414a3de27de2d3f161_Fig%203.jpg)Our MEDAL(-ADR) agent outperforms ablations on held-out tasks, in worlds without obstacles (top) and with obstacles (bottom).Via ablations, we identify a minimal sufficient \"starter kit\" of training ingredients required for cultural transmission to emerge, dubbed MEDAL-ADR. These components include memory (M), expert dropout (ED), attentional bias towards the expert (AL), and automatic domain randomization (ADR). Our agent outperforms the ablations, including the state-of-the-art method (ME-AL), across a range of challenging held-out tasks. Cultural transmission generalises out of distribution surprisingly well, and the agent recalls demonstrations long after the expert has departed. Looking into the agent's brain, we find strikingly interpretable neurons responsible for encoding social information and goal states.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6227d69116dd17585eae51a5_Fig%204.jpg)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6227d69a721902e35c03584d_Fig%205.jpg) \nOur agent generalises outside the training distribution (top) and possesses individual neurons that encode social information (bottom). \n\n\nIn summary, we provide a procedure for training an agent capable of flexible, high-recall, real-time cultural transmission, without using human data in the training pipeline. This paves the way for cultural evolution as an algorithm for developing more generally intelligent artificial agents.\n\nThis authors' notes is based on joint work by the Cultural General Intelligence Team: Avishkar Bhoopchand, Bethanie Brownfield, Adrian Collister, Agustin Dal Lago, Ashley Edwards, Richard Everett, Alexandre Fréchette, Edward Hughes, Kory W. Mathewson, Piermaria Mendolicchio, Yanko Oliveira, Julia Pawar, Miruna Pîslar, Alex Platonov, Evan Senter, Sukhdeep Singh, Alexander Zacherl, and Lei M. Zhang.\n\n‍\n\nRead the full paper [here](https://arxiv.org/abs/2203.00715).", "date_published": "2022-03-03T00:00:00Z", "authors": ["Cultural General Intelligence Team"], "summaries": []} +{"id": "62074ecae39b7718ae252e266036c699", "title": "Probing Image-Language Transformers for Verb Understanding", "url": "https://www.deepmind.com/blog/probing-image-language-transformers-for-verb-understanding", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Grounding language to vision is a fundamental problem for many real-world AI systems such as retrieving images or generating descriptions for the visually impaired. Success on these tasks requires models to relate different aspects of language such as objects and verbs to images. For example, to distinguish between the two images in the middle column below, models must differentiate between the verbs “catch” and “kick.” Verb understanding is particularly difficult as it requires not only recognising objects, but also how different objects in an image relate to each other. 
To overcome this difficulty, we introduce the SVO-Probes dataset and use it to probe language and vision models for verb understanding.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6233488fe173bc221c1df2ae_SVO-drawing2.jpg)In particular, we consider multimodal transformer models (e.g., Lu et al., 2019; Chen et al., 2020; Tan and Bansal, 2019; Li et al., 2020), which have shown success on a variety of language and vision tasks. However, despite strong performance on benchmarks, it is not clear if these models have fine-grained multimodal understanding. In particular, prior work shows that language and vision models can succeed at benchmarks without multimodal understanding: for example, answering questions about images based only on language priors (Agrawal et al., 2018) or “hallucinating” objects that are not in the image when captioning images (Rohrbach et al., 2018). To anticipate model limitations, work like Shekhar et al. propose specialised evaluations to probe models systematically for language understanding. However, prior probe sets are limited in the number of objects and verbs. We developed SVO-Probes to better evaluate potential limitations in verb understanding in current models.\n\nSVO-Probes includes 48,000 image-sentence pairs and tests understanding for more than 400 verbs. Each sentence can be broken into a triplet (or SVO triplet) and paired with positive and negative example images. The negative examples differ in only one way: the Subject, Verb, or Object is changed. The figure above shows negative examples in which the subject (left), verb (middle), or object (right) does not match the image. This task formulation makes it possible to isolate which parts of the sentence a model has the most trouble with. It also makes SVO-Probes more challenging than standard image retrieval tasks, where negative examples are often completely unrelated to the query sentence.\n\nTo create SVO-Probes, we [query an image search](https://developers.google.com/custom-search/v1/reference/rest/v1/cse/list) with SVO triplets from a common training dataset, Conceptual Captions (Sharma et al. 2018). Because image search can be noisy, a preliminary annotation step filters the retrieved images to ensure we have a clean set of image-SVO pairs. Since transformers are trained on image-sentence pairs, not image-SVO pairs, we need image-sentence pairs to probe our model. To collect sentences which describe each image, annotators write a short sentence for each image that includes the SVO triplet. For example, given the SVO triplet , an annotator could write the sentence “An animal lays in the grass.” We then use the SVO annotations to pair each sentence with a negative image, and ask annotators to verify negatives in a final annotation step. See the figure below for details.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623348fb22a815412339f8b5_SVO-drawing3.jpg)We examine whether multimodal transformers can accurately classify examples as positive or negative. The bar chart below illustrates our results. Our dataset is challenging: our standard multimodal transformer model achieves 64.3% accuracy overall (chance is 50%). Whereas accuracy is 67.0% and 73.4% on subjects and objects respectively, performance falls to 60.8% on verbs. 
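A per-category breakdown of this kind can be computed with a simple evaluation loop over the probe pairs, bucketing accuracy by whether an example's negative swaps the subject, verb, or object. The sketch below assumes illustrative field names and a `model.classify` helper; it is not the released benchmark's evaluation code.

```python
from collections import defaultdict

def probe_accuracy(examples, model):
    """Accuracy on SVO-Probes-style image-sentence pairs, split by category.

    Each example is assumed to carry an image, a sentence, a binary label
    (does the sentence match the image?) and a tag recording which element
    (subject, verb or object) is swapped in its paired negative. Field names
    and the `model.classify` helper are assumptions for illustration, not
    the released dataset schema.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = model.classify(ex["image"], ex["sentence"])  # True / False
        total[ex["swap_type"]] += 1
        correct[ex["swap_type"]] += int(pred == ex["label"])
    return {k: correct[k] / total[k] for k in total}
```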
This result shows that verb recognition is indeed challenging for vision and language models.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623349754dd07eedab303af7_SVO-drawing1.jpg)We also explore which model architectures perform best on our dataset. Surprisingly, models with weaker image modeling perform better than the standard transformer model. One hypothesis is that our standard model (with stronger image modeling ability) overfits the train set. As both these models perform worse on other language and vision tasks, our targeted probe task illuminates model weaknesses that are not observed on other benchmarks.\n\nOverall, we find that despite impressive performance on benchmarks, multimodal transformers still struggle with fine-grained understanding, especially fine-grained verb understanding. We hope SVO-Probes can help drive exploration of verb understanding in language and vision models and inspire more targeted probe datasets.\n\n‍\n\nVisit our SVO-Probes [benchmark](https://github.com/deepmind/multimodal_transformers) and [models](https://github.com/deepmind/svo_probes) on GitHub: benchmark and models.", "date_published": "2022-02-23T00:00:00Z", "authors": ["Lisa Anne Hendricks", "Aida Nematzadeh"], "summaries": []} +{"id": "44e860f9617806b53be5ea6424bef5eb", "title": "Red Teaming Language Models with Language Models", "url": "https://www.deepmind.com/blog/red-teaming-language-models-with-language-models", "source": "deepmind_technical_blog", "source_type": "blog", "text": "**In our** [**recent paper**](https://arxiv.org/abs/2202.03286)**, we show that it is possible to automatically find inputs that elicit harmful text from language models by generating inputs using language models themselves. Our approach provides one tool for finding harmful model behaviours before users are impacted, though we emphasize that it should be viewed as one component alongside many other techniques that will be needed to find harms and mitigate them once found.**\n\nLarge generative language models like GPT-3 and Gopher have a remarkable ability to generate high-quality text, but they are difficult to deploy in the real world. Generative language models come with a risk of generating very harmful text, and even a small risk of harm is unacceptable in real-world applications.\n\nFor example, in 2016, Microsoft released the Tay Twitter bot to automatically tweet in response to users. Within 16 hours, [Microsoft took Tay down](https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist) after several adversarial users elicited racist and sexually-charged tweets from Tay, which were sent to over 50,000 followers. The outcome was [not for lack of care on Microsoft’s part](https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/):\n\n\n> \"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.\" \n> \n> Peter Lee \n> VP, Microsoft\n\nThe issue is that there are so many possible inputs that can cause a model to generate harmful text. As a result, it’s hard to find all of the cases where a model fails before it is deployed in the real world. Previous work relies on paid, human annotators to manually discover failure cases ([Xu et al. 2021](https://aclanthology.org/2021.naacl-main.235/), *inter alia*). 
This approach is effective but expensive, limiting the number and diversity of failure cases found.\n\nWe aim to complement manual testing and reduce the number of critical oversights by finding failure cases (or ‘red teaming’) in an automatic way. To do so, we generate test cases using a language model itself and use a classifier to detect various harmful behaviors on test cases, as shown below:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62334badb21bcaf518e14447_red-teaming.jpg)Our approach uncovers a variety of harmful model behaviors: \n \n\n\n1. **Offensive Language**: Hate speech, profanity, sexual content, discrimination, etc.\n2. **Data Leakage**: Generating copyrighted or private, personally-identifiable information from the training corpus.\n3. **Contact Information Generation**: Directing users to unnecessarily email or call real people.\n4. **Distributional Bias**: Talking about some groups of people in an unfairly different way than other groups, on average over a large number of outputs.\n5. **Conversational Harms**: Offensive language that occurs in the context of a long dialogue, for example.\n\nTo generate test cases with language models, we explore a variety of methods, ranging from prompt-based generation and few-shot learning to supervised finetuning and reinforcement learning. Some methods generate more diverse test cases, while other methods generate more difficult test cases for the target model. Together, the methods we propose are useful for obtaining high test coverage while also modeling adversarial cases.\n\nOnce we find failure cases, it becomes easier to fix harmful model behavior by: \n \n\n\n1. Blacklisting certain phrases that frequently occur in harmful outputs, preventing the model from generating outputs that contain high-risk phrases.\n2. Finding offensive training data quoted by the model, to remove that data when training future iterations of the model.\n3. Augmenting the model’s prompt (conditioning text) with an example of the desired behavior for a certain kind of input, as shown in our [recent work](https://deepmind.com/blog/article/language-modelling-at-scale).\n4. Training the model to [minimize the likelihood](https://arxiv.org/abs/1908.04319) of its original, harmful output for a given test input.\n\nOverall, language models are a highly effective tool for uncovering when language models behave in a variety of undesirable ways. In our current work, we focused on red teaming harms that today’s language models commit. In the future, our approach can also be used to preemptively discover other, hypothesized harms from advanced machine learning systems, e.g., due to [inner misalignment](https://arxiv.org/abs/1906.01820) or [failures in objective robustness](https://arxiv.org/abs/2105.14111). This approach is just one component of responsible language model development: we view red teaming as one tool to be used alongside many others, both to find harms in language models and to mitigate them. We refer to Section 7.3 of [Rae et al. 
2021](https://arxiv.org/abs/2112.11446) for a broader discussion of other work needed for language model safety.\n\n‍\n\nFor more details on our approach and results, as well as the broader consequences of our findings, read our [red teaming paper](https://arxiv.org/abs/2202.03286) here.", "date_published": "2022-02-07T00:00:00Z", "authors": ["Ethan Perez", "Saffron Huang", "Francis Song", "Trevor Cai", "Roman Ring", "John Aslanides", "Amelia Glaese", "Nat McAleese", "Geoffrey Irving"], "summaries": []} +{"id": "619a606a785af8bac7345c9b3067995d", "title": "Spurious normativity enhances learning of compliance and enforcement behavior in artificial agents", "url": "https://www.deepmind.com/blog/spurious-normativity-enhances-learning-of-compliance-and-enforcement-behavior-in-artificial-agents", "source": "deepmind_technical_blog", "source_type": "blog", "text": "**In** [**our recent paper**](https://www.pnas.org/content/119/3/e2106028118) **we explore how multi-agent deep reinforcement learning can serve as a model of complex social interactions, like the formation of social norms. This new class of models could provide a path to create richer, more detailed simulations of the world.**\n\nHumans are an [ultra social species](https://press.uchicago.edu/ucp/books/book/chicago/N/bo3615170.html). Relative to other mammals we benefit more from cooperation but we are also more dependent on it, and face greater cooperation challenges. Today, humanity faces numerous cooperation challenges including preventing conflict over resources, ensuring everyone can access clean air and drinking water, eliminating extreme poverty, and combating climate change. Many of the cooperation problems we face are difficult to resolve because they involve complex webs of social and biophysical interactions called [social-ecological systems](https://doi.org/10.1126/science.1172133). However, humans can collectively learn to overcome the cooperation challenges we face. We accomplish this by an ever evolving culture, including norms and institutions which organize our interactions with the environment and with one another.\n\nHowever, norms and institutions sometimes fail to resolve cooperation challenges. For example, individuals may over-exploit resources like forests and fisheries thereby causing them to collapse. In such cases, policy-makers may write laws to change institutional rules or develop other [interventions to try to change norms](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780190622046.001.0001/acprof-9780190622046) in hopes of bringing about a positive change. But policy interventions do not always work as intended. This is because real-world social-ecological systems are considerably [more complex](https://doi.org/10.1038/s41893-019-0419-7) than the models we typically use to try to predict the effects of candidate policies.\n\nModels based on game theory are often applied to the study of cultural evolution. In most of these models, the key interactions that agents have with one another are expressed in a ‘payoff matrix’. In a game with two participants and two actions A and B, a payoff matrix defines the value of the four possible outcomes: (1) we both choose A, (2) we both choose B, (3) I choose A while you choose B and (4) I choose B while you choose A. The most famous example is the ‘Prisoner’s Dilemma’, in which the actions are interpreted as “cooperate” and “defect”. 
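With the standard textbook payoffs (the exact numbers below are illustrative, not taken from the paper), a few lines of code are enough to see why the dilemma arises: defection is the best response to either choice by the other player.

```python
import numpy as np

# Row player's payoffs in a standard Prisoner's Dilemma (values illustrative).
# Rows: my action (0 = cooperate, 1 = defect); columns: the other player's action.
payoff = np.array([[3, 0],
                   [5, 1]])

for your_action in (0, 1):
    best = payoff[:, your_action].argmax()
    print(f"If you play {your_action}, my best response is {best}")
# Defection (1) is the best response to either action, even though
# mutual cooperation (3, 3) beats mutual defection (1, 1).
```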
Rational agents who act according to their own myopic self-interest are doomed to defect in the Prisoner’s Dilemma even though the better outcome of mutual cooperation is available.\n\nGame-theoretic models have been very widely applied. Researchers in diverse fields have used them to study a wide range of different phenomena, including economies and the evolution of human culture. However, game theory is not a neutral tool, rather it is a deeply opinionated modeling language. It imposes a strict requirement that everything must ultimately cash out in terms of the payoff matrix (or equivalent representation). This means that the modeler has to know, or be willing to assume, everything about how the effects of individual actions combine to generate incentives. This is sometimes appropriate, and the game theoretic approach has had many notable successes such as in modeling the [behavior of oligopolistic firms](https://doi.org/10.1016/S0899-8256(03)00114-3) and [cold war era international relations](https://yalebooks.yale.edu/book/9780300143379/arms-and-influence). However, game theory’s major weakness as a modeling language is exposed in situations where the modeler does not fully understand how the choices of individuals combine to generate payoffs. Unfortunately this tends to be the case with social-ecological systems because their social and ecological parts interact in complex ways that we do not fully understand.\n\nThe work we present here is one example within a research program that attempts to establish an alternative modeling framework, different from game theory, to use in the study of social-ecological systems. Our approach may be seen formally as a variety of [agent-based modeling](https://books.google.co.uk/books?id=Zrh2DwAAQBAJ&lpg=PP1&ots=OBPL7cpf_k&dq=agent%20based%20modeling%20textbook&lr&pg=PP1#v=onepage&q=agent%20based%20modeling%20textbook&f=false). However, its distinguishing feature is the incorporation of algorithmic elements from artificial intelligence, especially multi-agent deep reinforcement learning.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62334e665a58701fbf639fb2_unnamed.gif)The core idea of this approach is that every model consists of two interlocking parts: (1) a rich, dynamical model of the environment and (2) a model of individual decision-making.\n\nThe first takes the form of a researcher-designed simulator: an interactive program that takes in a current environment state and agent actions, and outputs the next environment state as well as the observations of all agents and their instantaneous rewards. The model of individual decision-making is likewise conditioned on environment state. It is an *agent* that learns from its past experience, performing a form of trial-and-error. An agent interacts with an environment by taking in observations and outputting actions. Each agent selects actions according to its behavioral policy, a mapping from observations to actions. Agents learn by changing their policy to improve it along any desired dimension, typically to obtain more reward. The policy is stored in a neural network. Agents learn ‘from scratch’, from their own experience, how the world works and what they can do to earn more rewards. They accomplish this by tuning their network weights in such a way that the pixels they receive as observations are gradually transformed into competent actions. Several learning agents can inhabit the same environment as one another. 
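A toy version of this two-part setup might look as follows: a hand-written simulator maps the current state and the joint actions of several agents to a next state, per-agent observations, and per-agent rewards, while each agent separately learns a policy from its own stream of observations and rewards. The environment sketched here, a shared regrowing resource that agents can harvest, is purely illustrative and is not one of the paper's environments.

```python
import numpy as np

def step(state, actions, harvest_rate=0.1, regrowth=0.05):
    """Toy simulator step: joint actions map to a next state plus per-agent
    observations and rewards. `state` is a shared resource stock and each
    agent chooses whether to harvest (1) or abstain (0). Illustrative only.
    """
    harvest = {aid: a * harvest_rate * state for aid, a in actions.items()}
    next_state = max(0.0, state - sum(harvest.values())) * (1.0 + regrowth)
    observations = {aid: np.array([next_state]) for aid in actions}
    rewards = dict(harvest)
    return next_state, observations, rewards

# Each learning agent would map its observation to an action with its policy
# network and update that policy from the rewards it receives.
state, obs, rew = step(10.0, {"agent_a": 1, "agent_b": 0})
```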
In this case the agents become interdependent because their actions affect one another.\n\nLike other agent-based modeling approaches, multi-agent deep reinforcement learning makes it easy to specify models that cross levels of analysis that would be hard to treat with game theory. For instance, actions may be far closer to low-level motor primitives (e.g. 'walk forward'; 'turn right') than the high-level strategic decisions of game theory (e.g. ‘cooperate’). This is an important feature needed to capture situations where agents must practice to learn effectively how to [implement their strategic choices](https://arxiv.org/pdf/1702.03037.pdf). For instance in one [study](https://arxiv.org/abs/2103.04982), agents learned to cooperate by taking turns cleaning a river. This solution was only possible because the environment had spatial and temporal dimensions in which agents have great freedom in how they structure their behavior towards one another. Interestingly, while the environment allowed for many different solutions (such as [territoriality](https://arxiv.org/pdf/1707.06600.pdf)), agents converged on the same turn-taking solution as human players.\n\nIn our latest study, we applied this type of model to an open question in research on cultural evolution: how to explain the existence of spurious and arbitrary social norms that appear not to have immediate material consequences for their violation beyond those imposed socially. For instance, in some societies men are expected to wear trousers not skirts; in many there are words or hand gestures that should not be used in polite company; and in most there are rules about how one styles one's hair or what one wears on one's head. We call these social norms ‘silly rules’. Importantly, in our framework, enforcing and complying with social norms both have to be learned. Having a social environment that includes a ‘silly rule’ means that agents have more opportunities to learn about enforcing norms in general. This additional practice then allows them to enforce the important rules more effectively. Overall, the ‘silly rule’ can be beneficial for the population – a surprising result. This result is only possible because our simulation focuses on learning: enforcing and complying with rules are complex skills that need training to develop.\n\nPart of why we find this result on silly rules so exciting is that it demonstrates the utility of multi-agent deep reinforcement learning in modeling cultural evolution. Culture contributes to the success or failure of policy interventions for socio-ecological systems. For instance, strengthening social norms around recycling is part of the [solution](https://www.science.org/doi/10.1126/science.aaf8317) to some environmental problems. Following this trajectory, richer simulations could lead to a deeper understanding of how to design interventions for social-ecological systems. If simulations become realistic enough, it may even be possible to test the impact of interventions, e.g. aiming to [design a tax code that fosters productivity and fairness](https://arxiv.org/abs/2108.02755).\n\nThis approach provides researchers with tools to specify detailed models of phenomena that interest them. Of course, like all research methodologies it should be expected to come with its own strengths and weaknesses. We hope to discover more about when this style of modeling can be fruitfully applied in the future. 
While there are no panaceas for modeling, we think there are compelling reasons to look to multi-agent deep reinforcement learning when constructing models of social phenomena, especially when they involve learning.", "date_published": "2022-01-18T00:00:00Z", "authors": ["Raphael Koster", "Dylan Hadfield-Menell *", "Richard Everett", "Laura Weidinger", "G Hadfield *", "Joel Leibo"], "summaries": []} +{"id": "cb45a17c9e004c67b01bfe6f6e8835d3", "title": "Improving language models by retrieving from trillions of tokens", "url": "https://www.deepmind.com/blog/improving-language-models-by-retrieving-from-trillions-of-tokens", "source": "deepmind_technical_blog", "source_type": "blog", "text": "In recent years, significant performance gains in autoregressive language modeling have been achieved by increasing the number of parameters in Transformer models. This has led to a tremendous increase in training energy cost and resulted in a generation of dense “Large Language Models” (LLMs) with 100+ billion parameters. Simultaneously, large datasets containing trillions of words have been collected to facilitate the training of these LLMs.\n\nWe explore an alternate path for improving language models: we augment transformers with retrieval over a database of text passages including web pages, books, news and code. We call our method RETRO, for “Retrieval Enhanced TRansfOrmers”.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6233547661f4d80dd32aafbb_LLM_figure_5_and_1.jpg)Figure 1: A high-level overview of Retrieval Enhanced TransfOrmers (RETRO).In traditional transformer language models, the benefits of model size and data size are linked: as long as the dataset is large enough, language modeling performance is limited by the size of the model. However, with RETRO the model is not limited to the data seen during training– it has access to the entire training dataset through the retrieval mechanism. This results in significant performance gains compared to a standard Transformer with the same number of parameters. We show that language modeling improves continuously as we increase the size of the retrieval database, at least up to 2 trillion tokens – 175 full lifetimes of continuous reading.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623354cb7ba0efb9ef468ebc_improving.jpg)Figure 2: Increasing the size of the retrieval dataset results in large gains in model performance.For each text passage (approximately a paragraph of a document), a nearest-neighbor search is performed which returns similar sequences found in the training database, and their continuation. These sequences help predict the continuation of the input text. The RETRO architecture interleaves regular self-attention at a document level and cross-attention with retrieved neighbors at a finer passage level. This results in both more accurate and more factual continuations.  Furthermore, RETRO increases the interpretability of model predictions, and provides a route for direct interventions through the retrieval database to improve the safety of text continuation. 
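The retrieval step can be pictured as a nearest-neighbour lookup over a pre-embedded database of training chunks, each stored together with the text that follows it. The sketch below covers only that lookup, brute-forced for clarity; a system at RETRO's scale relies on an approximate-nearest-neighbour index, and the chunk-embedding function and variable names here are assumptions for illustration rather than the paper's implementation.

```python
import numpy as np

def retrieve_neighbours(chunk_emb, db_embs, db_chunks, db_continuations, k=2):
    """Brute-force nearest-neighbour retrieval over a pre-embedded chunk database.

    `chunk_emb` is the embedding of the current input passage; the database
    stores embeddings of training chunks together with the text that follows
    each one. The retrieved [neighbour, continuation] pairs are what the
    decoder attends to via chunked cross-attention. A real system would use
    an approximate-nearest-neighbour index instead of this exhaustive scan.
    """
    sims = db_embs @ chunk_emb / (
        np.linalg.norm(db_embs, axis=1) * np.linalg.norm(chunk_emb) + 1e-8)
    top = np.argsort(-sims)[:k]
    return [(db_chunks[i], db_continuations[i]) for i in top]
```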
In our experiments on the Pile, a standard language modeling benchmark, a 7.5 billion parameter RETRO model outperforms the 175 billion parameter Jurassic-1 on 10 out of 16 datasets and outperforms the 280B Gopher on 9 out of 16 datasets.\n\nBelow, we show two samples from our 7B baseline model and from our 7.5B RETRO model model that highlight how RETRO’s samples are more factual and stay more on topic than the baseline sample.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6233553fd5cc337484139079_creative2.jpg)Figure 3: The baseline only generates 2 correct digits. With RETRO, the correct digits are generated after being retrieved by the database.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335571d5cc33e8cd13a110_creative3.jpg)Figure 4: The RETRO model stays more on-topic than the baseline sample.Type image caption here (optional)", "date_published": "2021-12-08T00:00:00Z", "authors": ["Sebastian Borgeaud", "Arthur Mensch", "Jordan Hoffmann", "Laurent Sifre"], "summaries": []} +{"id": "7606d2d21fcef21664bb7a6444b9ca0a", "title": "Creating Interactive Agents with Imitation Learning", "url": "https://www.deepmind.com/blog/creating-interactive-agents-with-imitation-learning", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Humans are an interactive species. We interact with the physical world and with one another. For artificial intelligence (AI) to be generally helpful, it must be able to interact capably with humans and their environment. In this work we present the Multimodal Interactive Agent (MIA), which blends visual perception, language comprehension and production, navigation, and manipulation to engage in extended and often surprising physical and linguistic interactions with humans.\n\n![](https://assets-global.website-files.com/img/image-placeholder.svg)We build upon the approach introduced by Abramson et al. (2020), which primarily uses imitation learning to train agents. After training, MIA displays some rudimentary intelligent behaviour that we hope to later refine using human feedback. This work focuses on the creation of this intelligent behavioural prior, and we leave further feedback-based learning for future work.\n\nWe created the Playhouse environment, a 3D virtual environment composed of a randomised set of rooms and a large number of domestic interactable objects, to provide a space and setting for humans and agents to interact together. Humans and agents can interact in the Playhouse by controlling virtual robots that locomote, manipulate objects, and communicate via text. This virtual environment permits a wide range of situated dialogues, ranging from simple instructions (e.g., “Please pick up the book from the floor and place it on the blue bookshelf”) to creative play (e.g., “Bring food to the table so that we can eat”).\n\nWe collected human examples of Playhouse interactions using language games, a collection of cues prompting humans to improvise certain behaviours. In a language game one player (the setter) receives a prewritten prompt indicating a kind of task to propose to the other player (the solver). 
For example, the setter might receive the prompt “Ask the other player a question about the existence of an object,'' and after some exploration, the setter could ask, ”Please tell me whether there is a blue duck in a room that does not also have any furniture.'' To ensure sufficient behavioural diversity, we also included free-form prompts, which granted setters free choice to improvise interactions (E.g. “Now take any object that you like and hit the tennis ball off the stool so that it rolls near the clock, or somewhere near it.''). In total, we collected 2.94 years of real-time human interactions in the Playhouse.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335136819da8ccaf83a63b_converted-unnamed%20(2).jpg)Example of two humans interacting in the Playhouse.Our training strategy is a combination of supervised prediction of human actions (behavioural cloning) and self-supervised learning. When predicting human actions, we found that using a hierarchical control strategy significantly improved agent performance. In this setting, the agent receives new observations roughly 4 times per second. For each observation, it produces a sequence of open-loop movement actions and optionally emits a sequence of language actions. In addition to behavioural cloning we use a form of self-supervised learning, which tasks agents with classifying whether certain vision and language inputs belong to the same or different episodes.\n\nTo evaluate agent performance, we asked human participants to interact with agents and provide binary feedback indicating whether the agent successfully carried out an instruction. MIA achieves over 70% success rate in human-rated online interactions, representing 75% of the success rate that humans themselves achieve when they play as solvers. To better understand the role of various components in MIA, we performed a series of ablations, removing, for example, visual or language inputs, the self-supervised loss, or the hierarchical control.\n\nContemporary machine learning research has uncovered remarkable regularities of performance with respect to different scale parameters; in particular, model performance scales as a power-law with dataset size, model size, and compute. These effects have been most crisply noted in the language domain, which is characterised by massive dataset sizes and highly evolved architectures and training protocols. In this work, however, we are in a decidedly different regime – with comparatively small datasets and multimodal, multi-task objective functions training heterogeneous architectures. Nevertheless, we demonstrate clear effects of scaling: as we increase dataset and model size, performance increases appreciably.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6233525b4690fbf4c4ff6dbe_creative1.jpg)Scripted probe tasks performance and human evaluation for data and model scaling. In both cases performance improvements when increasing both dataset size and model size.‍\n\nIn an ideal case, training becomes more efficient given a reasonably large dataset, as knowledge is transferred between experiences. To investigate how ideal our circumstances are, we examined how much data is needed to learn to interact with a new, previously unseen object and to learn how to follow a new, previously unheard command / verb. We partitioned our data into background data and data involving a language instruction referring to the object or the verb. 
When we reintroduced the data referring to the new object, we found that fewer than 12 hours of human interaction was enough to acquire the ceiling performance. Analogously, when we introduced the new command or verb ‘to clear’ (i.e. to remove all objects from a surface), we found that only 1 hour of human demonstrations was enough to reach ceiling performance in tasks involving this word.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335317c41c26f04bb44896_creative2.jpg)When learning a new command or object, the agent’s performance quickly improves with mere hours of demonstration experience.MIA exhibits startlingly rich behaviour, including a diversity of behaviours that were not preconceived by researchers, including tidying a room, finding multiple specified objects, and asking clarifying questions when an instruction is ambiguous. These interactions continually inspire us. However, the open-endedness of MIA’s behaviour presents immense challenges for quantitative evaluation. Developing comprehensive methodologies to capture and analyse open-ended behaviour in human-agent interactions will be an important focus in our future work.\n\n‍\n\nFor a more detailed description of our work, see our [paper](https://arxiv.org/abs/2112.03763).", "date_published": "2021-12-08T00:00:00Z", "authors": ["Josh Abramson", "Arun Ahuja", "Arthur Brussee", "Federico Carnevale", "Mary Cassin", "Felix Fischer", "Petko Georgiev", "Alex Goldin", "Tim Harley", "Felix Hill", "Peter C Humphreys", "Alden Hung", "Jessica Landon", "Timothy Lillicrap", "Hamza Merzic", "Alistair Muldal", "Adam Santoro", "Guy Scully", "Tamara von Glehn", "Gregory Wayne", "Nathaniel Wong", "Chen Yan", "Rui Zhu", "Mary Cassin", "Hamza Merzic"], "summaries": []} +{"id": "c60144f99bf045b9feba879401e75ccf", "title": "On the Expressivity of Markov Reward", "url": "https://www.deepmind.com/blog/on-the-expressivity-of-markov-reward", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Reward is the driving force for reinforcement learning (RL) agents. Given its central role in RL, reward is often assumed to be suitably general in its expressivity, as summarized by Sutton and Littman’s reward hypothesis:\n\n\n> \"...all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward).\" \n> ‍ \n> **- SUTTON (2004), LITTMAN (2017)**\n\nIn our work, we take first steps toward a systematic study of this hypothesis. To do so, we consider the following thought experiment involving Alice, a designer, and Bob, a learning agent:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623357d30a672579f94903f6_markov%203.jpg)We suppose that Alice thinks of a task she might like Bob to learn to solve – this task could be in the form a a natural language description (“balance this pole”), an imagined state of affairs (“reach any of the winning configurations of a chess board”), or something more traditional like a reward or value function. Then, we imagine Alice translates her choice of task into some generator that will provide learning signal (such as reward) to Bob (a learning agent), who will learn from this signal throughout his lifetime. 
We then ground our study of the reward hypothesis by addressing the following question: given Alice’s choice of task, is there always a reward function that can convey this task to Bob?\n\n#### What is a task?\n\nTo make our study of this question concrete, we first restrict focus to three kinds of task. In particular, we introduce three task types that we believe capture sensible kinds of tasks: 1) A set of acceptable policies (SOAP), 2) A policy order (PO), and 3) A trajectory order (TO). These three forms of tasks represent concrete instances of the kinds of task we might want an agent to learn to solve.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623357b62faa374cc0174395_markov%20(1).jpg)We then study whether reward is capable of capturing each of these task types in finite environments. Crucially, we only focus attention on Markov reward functions; for instance, given a state space that is sufficient to form a task such as (x,y) pairs in a grid world, is there a reward function that only depends on this same state space that can capture the task?\n\n#### First Main Result\n\nOur first main result shows that for each of the three task types, there are environment-task pairs for which there is no Markov reward function that can capture the task. One example of such a pair is the “go all the way around the grid clockwise or counterclockwise” task in a typical grid world:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623357e8ab00de441f317409_markov%20(2).jpg)This task is naturally captured by a SOAP that consists of two acceptable policies: the “clockwise” policy (in blue) and the “counterclockwise” policy (in purple). For a Markov reward function to express this task, it would need to make these two policies strictly higher in value than all other deterministic policies. However, there is no such Markov reward function: the optimality of a single “move clockwise” action will depend on whether the agent was already moving in that direction in the past. Since the reward function must be Markov, it cannot convey this kind of information. Similar examples demonstrate that Markov reward cannot capture every policy order and trajectory order, too.\n\n#### Second Main Result\n\nGiven that some tasks can be captured and some cannot, we next explore whether there is an efficient procedure for determining whether a given task can be captured by reward in a given environment. Further, if there is a reward function that captures the given task, we would ideally like to be able to output such a reward function. Our second result is a positive result which says that for any finite environment-task pair, there is a procedure that can 1) decide whether the task can be captured by Markov reward in the given environment, and 2) outputs the desired reward function that exactly conveys the task, when such a function exists.\n\nThis work establishes initial pathways toward understanding the scope of the reward hypothesis, but there is much still to be done to generalize these results beyond finite environments, Markov rewards, and simple notions of “task” and “expressivity”. 
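For tiny, fully enumerable environments, the flavour of this procedure for SOAP tasks can be illustrated as a linear-programming feasibility check: a deterministic policy's start-state value is linear in the reward vector, so requiring every acceptable policy to beat every other policy by some margin amounts to a set of linear constraints. The snippet below is a brute-force illustration of that idea under those assumptions, not the paper's algorithm.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def soap_reward_exists(P, gamma, start, acceptable, eps=1e-3):
    """Check whether some Markov reward r(s, a) makes every `acceptable`
    deterministic policy strictly better, in start-state value, than every
    other deterministic policy. Brute-force sketch for tiny MDPs only.
    P has shape [A, S, S]; a policy is a tuple with one action per state.
    """
    A, S, _ = P.shape
    policies = list(itertools.product(range(A), repeat=S))

    def value_weights(pi):
        # v_pi(start) = w . r, with r flattened over (state, action).
        P_pi = np.stack([P[pi[s], s] for s in range(S)])      # [S, S]
        occ = np.linalg.inv(np.eye(S) - gamma * P_pi)[start]  # discounted visits
        w = np.zeros((S, A))
        for s in range(S):
            w[s, pi[s]] = occ[s]
        return w.ravel()

    acceptable = [tuple(p) for p in acceptable]
    w_acc = [value_weights(p) for p in acceptable]
    w_bad = [value_weights(p) for p in policies if p not in acceptable]

    # Feasibility of (w_bad - w_acc) . r <= -eps for all pairs, with r in [-1, 1].
    A_ub = np.array([wb - wa for wa in w_acc for wb in w_bad])
    b_ub = np.full(len(A_ub), -eps)
    res = linprog(c=np.zeros(S * A), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1.0, 1.0)] * (S * A), method="highs")
    return res.success, (res.x if res.success else None)
```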
We hope this work provides new conceptual perspectives on reward and its place in reinforcement learning.", "date_published": "2021-12-01T00:00:00Z", "authors": ["David Abel", "Doina Precup", "Anna Harutyunyan", "Mark Ho *", "Michael Littman *", "Will Dabney", "Satinder Baveja"], "summaries": []} +{"id": "a5194c2f7a030a8769db53e3786a5979", "title": "Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons", "url": "https://www.deepmind.com/blog/unsupervised-deep-learning-identifies-semantic-disentanglement-in-single-inferotemporal-face-patch-neurons", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Our brain has an amazing ability to process visual information. We can take one glance at a complex scene, and within milliseconds be able to parse it into objects and their attributes, like colour or size, and use this information to describe the scene in simple language. Underlying this seemingly effortless ability is a complex computation performed by our visual cortex, which involves taking millions of neural impulses transmitted from the retina and transforming them into a more meaningful form that can be mapped to the simple language description. In order to fully understand how this process works in the brain, we need to figure out both how the semantically meaningful information is represented in the firing of neurons at the end of the visual processing hierarchy, and how such a representation may be learnt from largely untaught experience.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623358932faa37a01b17d6b6_unnamed.gif)Figure 1. Disentangling refers to the ability of neural networks to discover semantically meaningful attributes of images without being explicitly taught what these attributes are. These models learn by mapping images into a lower-dimensional representation through an inference neural network, and trying to reconstruct the image using a generation neural network. Each individual latent unit in a disentangled representation learns to encode a single interpretable attribute, like colour or size of an object. Manipulating such latents one at a time results in interpretable changes in the generated image reconstruction. Animation credit Chris Burgess.To answer these questions in the context of face perception, we joined forces with our collaborators at Caltech ([Doris Tsao](https://www.tsaolab.caltech.edu/)) and the Chinese Academy of Science ([Le Chang](http://english.cebsit.cas.cn/)). We chose faces because they are well studied in the neuroscience community and are often seen as a “[microcosm of object recognition](https://pubmed.ncbi.nlm.nih.gov/18558862/)”. In particular, we wanted to compare the responses of single cortical neurons in the face patches at the end of the visual processing hierarchy, recorded by our collaborators to a recently emerged class of so called  “disentangling” deep neural networks that, unlike the usual “black box” systems, explicitly aim to be interpretable to humans. A “disentangling” neural network learns to map complex images into a small number of internal neurons (called latent units), each one representing a single semantically meaningful attribute of the scene, like colour or size of an object (see Figure 1). 
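The objective such disentangling models optimise can be sketched in a few lines: reconstruct the input from the latent units while penalising, with a tunable weight, how far the inferred latent distribution drifts from a simple prior. The snippet below is a minimal per-image version with a squared-error reconstruction term; the actual models use likelihoods suited to the data, and the β weighting corresponds to the model introduced next.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Per-image beta-VAE style objective (to be minimised), as a numpy sketch.

    `mu` and `log_var` parameterise the diagonal Gaussian q(z|x) produced by
    the inference network; `x_recon` is the generation network's output.
    Squared error stands in for the reconstruction likelihood, and `beta`
    trades reconstruction quality against the KL pressure that encourages
    each latent unit to capture a single factor. Illustrative values only.
    """
    recon = np.sum((x - x_recon) ** 2)
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl
```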
Unlike the “black box” deep classifiers trained to recognise visual objects through a biologically unrealistic amount of external supervision, such disentangling models are trained without an external teaching signal using a self-supervised objective of reconstructing input images (generation in Figure 1) from their learnt latent representation (obtained through inference in Figure 1).\n\nDisentangling was [hypothesised](https://arxiv.org/pdf/1305.0445v2.pdf) to be important in the machine learning community almost ten years ago as an integral component for building more [data-efficient](https://arxiv.org/abs/1911.10866), [transferable](https://arxiv.org/abs/1707.08475), [fair](https://arxiv.org/pdf/2002.02886.pdf), and [imaginative](https://deepmind.com/blog/article/imagine-creating-new-visual-concepts-recombining-familiar-ones) artificial intelligence systems. However, for years, building a model that can disentangle in practice has eluded the field. The first model able to do this successfully and robustly, called [β-VAE](https://openreview.net/references/pdf?id=Sy2fzU9gl), was developed by taking [inspiration from neuroscience](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3306444/#:~:text=Mounting%20evidence%20suggests%20that%20%E2%80%9Ccore,in%20the%20inferior%20temporal%20cortex.): β-VAE learns by [predicting its own inputs](https://www.nature.com/articles/s41593-018-0200-7); it requires similar visual experience for successful learning as [that encountered by babies](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(18)30027-5); and its learnt latent representation mirrors the [properties known of the visual brain](https://www.nature.com/articles/s41593-019-0377-4).\n\nIn our [new paper](https://www.nature.com/articles/s41467-021-26751-5), we measured the extent to which the disentangled units discovered by a β-VAE trained on a dataset of face images are similar to the responses of single neurons at the end of the visual processing recorded in primates looking at the same faces. The neural data was collected by our collaborators under rigorous oversight from the [Caltech Institutional Animal Care and Use Committee](https://researchcompliance.caltech.edu/committees/institutional-animal-care-and-use-committee-iacuc). When we made the comparison, we found something surprising - it seemed like the handful of disentangled units discovered by β-VAE were behaving as if they were equivalent to a similarly sized subset of the real neurons. When we looked closer, we found a strong one-to-one mapping between the real neurons and the artificial ones (see Figure 2). This mapping was much stronger than that for alternative models, including the deep classifiers previously considered to be state of the art computational models of visual processing, or a hand-crafted model of face perception seen as the “gold standard” in the neuroscience community. Not only that, β-VAE units were encoding semantically meaningful information like age, gender, eye size, or the presence of a smile, enabling us to understand what attributes single neurons in the brain use to represent faces.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623358f6c03d631aa459a833_converted-unnamed%20(3).jpg)Figure 2. 
Single neurons in the primate face patches at the end of the visual processing hierarchy represent interpretable face attributes, like eye shape or the presence of a smile, and are equivalent to single artificial neurons in β-VAE discovered through disentangled representation learning. Image credit Marta Garnelo.If β-VAE was indeed able to automatically discover artificial latent units that are equivalent to the real neurons in terms of how they respond to face images, then it should be possible to translate the activity of real neurons into their matched artificial counterparts, and use the generator (see Figure 1) of the trained β-VAE to visualise what faces the real neurons are representing. To test this, we presented the primates with new face images that the model has never experienced, and checked if we could render them using the β-VAE generator (see Figure 3). We found that this was indeed possible. Using the activity of as few as 12 neurons, we were able to generate face images that were more accurate reconstructions of the originals and of better visual quality than those produced by the alternative deep generative models. This is despite the fact that the alternative models are known to be better image generators than β-VAE in general.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623359325d31d8716bb08212_converted-unnamed%20(4).jpg)Figure 3. Face images were accurately reconstructed by the trained β-VAE generator from the activity of 12 one-to-one matched neurons in the primate visual cortex as the primates were viewing novel faces. Novel face images reproduced with permission from Ma et al. and Phillips et al.Our findings summarised in the [new paper](https://www.nature.com/articles/s41467-021-26751-5) suggest that the visual brain can be understood at a single-neuron level, even at the end of its processing hierarchy. This is contrary to the common belief that semantically meaningful information is [multiplexed between a large number of such neurons](https://www.sciencedirect.com/science/article/abs/pii/S0959438818300990), each one remaining largely uninterpretable individually, not unlike how information is encoded across full layers of artificial neurons in deep classifiers. Not only that, our findings suggest that it is possible that the brain learns to support our effortless ability to do visual perception by optimising the disentanglement objective. While β-VAE was originally developed with inspiration from [high-level neuroscience principles](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(18)30027-5), the utility of disentangled representations for intelligent behaviour has so far been primarily demonstrated in the [machine-learning community](https://arxiv.org/pdf/2002.02886.pdf). 
In line with the rich history of mutually beneficial [interactions between neuroscience and machine learning](https://www.cell.com/neuron/pdf/S0896-6273(17)30509-3.pdf), we hope that the latest insights from machine learning may now feed back to the neuroscience community to investigate the merit of disentangled representations for supporting intelligence in biological systems, in particular as the basis for [abstract reasoning](https://www.science.org/doi/10.1126/science.aat6766), or generalisable and efficient [task learning](https://www.nature.com/articles/s41593-019-0470-8).", "date_published": "2021-11-09T00:00:00Z", "authors": ["Irina Higgins", "L Chang*", "V Langston", "Demis Hassabis", "Christopher Summerfield", "Doris Tsao*", "Matt Botvinick"], "summaries": []} +{"id": "9bb9b73277d88fa8da8fb1626d20f835", "title": "Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration", "url": "https://www.deepmind.com/blog/is-curiosity-all-you-need-on-the-utility-of-emergent-behaviours-from-curious-exploration", "source": "deepmind_technical_blog", "source_type": "blog", "text": "During purely curious exploration, the JACO arm discovers how to pick up cubes, moves them around the workspace and even explores whether they can be balanced on their edges.\n\nCurious exploration enables OP3 to walk upright, balance on one foot, sit down and even catch itself safely when leaping backwards - all without a specific target task to optimise for.\n\nIntrinsic motivation [1, 2] can be a powerful concept to endow an agent with a mechanism to continuously explore its environment in the absence of task information. One common way to implement intrinsic motivation is via curiosity learning [3, 4]. With this method, a predictive model about the environment's response to an agent's actions is trained alongside the agent's policy. This model can also be called a world model. When an action is taken, the world model makes a prediction about the agent's next observation. This prediction is then compared to the true observation made by the agent. Crucially, the reward given to the agent for taking this action is scaled by the error it made when predicting the next observation. This way, the agent is rewarded for taking actions whose outcomes are not yet well predictable. Simultaneously, the world model is updated to better predict the outcome of said action.\n\nThis mechanism has been applied successfully in on-policy settings, e.g. to beat 2D computer games in an unsupervised way [4] or to train a general policy which is easily adaptable to concrete downstream tasks [5]. However, we believe that the true strength of curiosity learning lies in the diverse behaviour which emerges during the curious exploration process: As the curiosity objective changes, so does the resulting behaviour of the agent thereby discovering many complex policies which could be utilised later on, if they were retained and not overwritten.\n\n[In this paper](https://arxiv.org/abs/2109.08603), we make two contributions to study curiosity learning and harness its emergent behaviour: First, we introduce *SelMo*, an off-policy realisation of a self-motivated, curiosity-based method for exploration. We show that using SelMo, meaningful and diverse behaviour emerges solely based on the optimisation of the curiosity objective in simulated manipulation and locomotion domains. 
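To make the curiosity mechanism described above concrete, the sketch below computes an intrinsic reward as the prediction error of a toy world model and then updates that model on the same transition. The linear model and fake dynamics are placeholders for illustration only; this is not the SelMo implementation.

```python
import numpy as np

class ToyWorldModel:
    """Linear next-observation predictor trained by SGD on its prediction error."""

    def __init__(self, obs_dim, act_dim, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(obs_dim + act_dim, obs_dim))
        self.lr = lr

    def predict(self, obs, action):
        return np.concatenate([obs, action]) @ self.w

    def update(self, obs, action, next_obs):
        """One SGD step on the squared prediction error."""
        inp = np.concatenate([obs, action])
        err = self.predict(obs, action) - next_obs
        self.w -= self.lr * np.outer(inp, err)

def curiosity_reward(model, obs, action, next_obs, scale=1.0):
    """Intrinsic reward = scaled prediction error of the world model."""
    err = model.predict(obs, action) - next_obs
    return scale * float(np.sum(err ** 2))

# Toy rollout: reward is high while transitions are still poorly predicted.
rng = np.random.default_rng(1)
model = ToyWorldModel(obs_dim=4, act_dim=2)
obs = rng.normal(size=4)
for t in range(5):
    action = rng.normal(size=2)
    next_obs = obs + 0.1 * np.tanh(action).repeat(2)  # fake dynamics
    r_int = curiosity_reward(model, obs, action, next_obs)
    model.update(obs, action, next_obs)   # the world model improves alongside the policy
    print(f"step {t}: intrinsic reward = {r_int:.4f}")
    obs = next_obs
```

As the model improves on a given transition, the reward for repeating it shrinks, which is what drives the agent to keep moving on to new behaviours.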
Second, we propose to extend the focus in the application of curiosity learning towards the identification and retention of emerging intermediate behaviours. We support this conjecture with an experiment which reloads self-discovered behaviours as pretrained, auxiliary skills in a hierarchical reinforcement learning setup.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335aefce7a171a10eeec26_unnamed.jpg)The control flow of the SelMo method: The agent (actor) collects trajectories in the environment using its current policy and stores them in the model replay buffer on the left. The connected world model samples uniformly that buffer and updates its parameters for forward prediction using stochastic gradient descent (SGD). The sampled trajectories are assigned curiosity rewards scaled by their respective prediction error under the current world model. The labeled trajectories are then passed on to the policy replay buffer on the right. Maximum a posteriori policy optimisation (MPO) [6] is used to fit Q-function and policy based on samples from the policy replay. The resulting, updated policy is then synced back into the actor.We run SelMo in two simulated continuous control robotic domains: On a 6-DoF JACO arm with a three-fingered gripper and on a 20-DoF humanoid robot, the OP3. The respective platforms present challenging learning environments for object manipulation and locomotion, respectively. While only optimising for curiosity, we observe that complex human-interpretable behaviour emerges over the course of the training runs. For instance, JACO learns to pick up and move cubes without any supervision or the OP3 learns to balance on a single foot or sit down safely without falling over.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335b093f091c2012d2af1f_unnamed%20(1).jpg)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335b11e94a89e257b024bf_unnamed%20(3).jpg)Example training timelines for JACO and the OP3. While optimising for the curiosity objective, complex, meaningful behaviour emerges in both manipulation and locomotion settings. The full videos can be found at the top of this page.However, the impressive behaviours observed during curious exploration have one crucial drawback: They are not persistent as they keep changing with the curiosity reward function. As the agent keeps repeating a certain behaviour, e.g. JACO lifting the red cube, the curiosity rewards accumulated by this policy are diminishing. Consequently, this leads to the learning of a modified policy which acquires higher curiosity rewards again, e.g. moving the cube outside the workspace or even attending to the other cube. But this new behaviour overwrites the old one. However, we believe that retaining the emergent behaviours from curious exploration equips the agent with a valuable skill set to learn new tasks more quickly. In order to investigate this conjecture, we set up an experiment to probe the utility of the self-discovered skills.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335b268209b6493fd3dc43_unnamed%20(2).jpg)We treat randomly sampled snapshots from different phases of the curious exploration as auxiliary skills in a modular learning framework [7] and measure how quickly a new target skill can be learned by using those auxiliaries. In the case of the JACO arm, we set the target task to be \"lift the red cube\" and use five randomly sampled self-discovered behaviours as auxiliaries. 
We compare the learning of this downstream task to an SAC-X baseline [8] which uses a curriculum of reward functions to reward reaching and moving the red cube which ultimately facilitates to learn lifting as well. We find that even this simple setup for skill-reuse already speeds up the learning progress of the downstream task commensurate with a hand designed reward curriculum. The results suggest that the automatic identification and retention of useful emerging behaviour from curious exploration is a fruitful avenue of future investigation in unsupervised reinforcement learning.", "date_published": "2021-09-17T00:00:00Z", "authors": ["Oliver Groth", "Markus Wulfmeier", "Giulia Vezzani", "Vibhavari Dasagi", "Tim Hertweck", "Roland Hafner", "Nicolas Heess", "and Martin Riedmiller"], "summaries": []} +{"id": "7c790755e56c12be3509f020ac1afe9e", "title": "Challenges in Detoxifying Language Models", "url": "https://www.deepmind.com/blog/challenges-in-detoxifying-language-models", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### **Undesired Behavior from Language Models**\n\nLanguage models trained on large text corpora can generate [fluent text](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), and show promise as [few/zero shot learners](https://arxiv.org/abs/2005.14165) and code generation tools, amongst other capabilities. However, prior research has also identified several issues with LM use that should be addressed, including [distributional biases](https://arxiv.org/abs/1911.03064), [social stereotypes](https://arxiv.org/abs/2004.09456), potentially revealing [training samples](https://arxiv.org/abs/2012.07805), and other [possible LM harms](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922). One particular type of LM harm is the generation of [toxic language](https://arxiv.org/abs/2009.11462), which includes hate speech, insults, profanities and threats.\n\nIn our paper, we focus on LMs and their [propensity](https://arxiv.org/abs/2009.11462) to generate toxic language. We study the effectiveness of different methods to mitigate LM toxicity, and their side-effects, and we investigate the reliability and limits of classifier-based automatic toxicity evaluation.\n\nFollowing the definition of toxicity developed by [Perspective API](https://perspectiveapi.com/), we here consider an utterance to be *toxic if it is rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion*. However, we note two important caveats. First, toxicity judgements are subjective—they depend both on the raters evaluating toxicity and their cultural background, as well as the inferred context. While not the focus of this work, it is important for future work to continue to develop this above definition, and clarify how it can be fairly applied in different contexts. Second, we note that toxicity covers only one aspect of possible LM harms, excluding e.g. harms arising from distributional model bias.\n\n#### **Measuring and Mitigating Toxicity**\n\nTo enable safer language model use, we set out to measure, understand the origins of, and mitigate toxic text generation in LMs. 
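As a concrete illustration of how classifier-based toxicity measurement works, the sketch below scores sampled continuations with a toxicity classifier and reports the fraction of prompts for which at least one continuation crosses a score threshold — a simplified version of the aggregate statistics used below. The `generate` and `score` callables are stand-ins; in practice the scores would come from a Perspective-style classifier.

```python
from typing import Callable, Sequence

def toxicity_probability(
    prompts: Sequence[str],
    generate: Callable[[str, int], Sequence[str]],
    score: Callable[[str], float],
    num_continuations: int = 25,
    threshold: float = 0.5,
) -> float:
    """Fraction of prompts with at least one continuation scored as toxic.

    `generate(prompt, k)` should return k sampled continuations from the LM,
    and `score(text)` a toxicity score in [0, 1] from a classifier; both are
    placeholders here.
    """
    flagged = 0
    for prompt in prompts:
        continuations = generate(prompt, num_continuations)
        if any(score(text) >= threshold for text in continuations):
            flagged += 1
    return flagged / len(prompts)

# Toy usage with a dummy generator and scorer.
fake_generate = lambda prompt, k: [f"{prompt} ... sample {i}" for i in range(k)]
fake_score = lambda text: 0.9 if "sample 3" in text else 0.1
print(toxicity_probability(["prompt A", "prompt B"], fake_generate, fake_score))
```

Test-time filtering, one of the interventions discussed next, amounts to rejecting or re-sampling continuations whose score crosses a similar threshold.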
There has been prior work which has considered various approaches towards reducing LM toxicity, either by [fine-tuning](https://arxiv.org/abs/2004.10964) [pre-trained LMs](https://arxiv.org/abs/2009.11462), by [steering model generations](https://openreview.net/forum?id=H1edEyBKDS), or through direct [test-time filtering](https://arxiv.org/abs/2104.06390). Further, prior [work](https://arxiv.org/abs/2009.11462) has introduced automatic metrics for measuring LM toxicity, both when prompted with different kinds of prompts, as well as in unconditional generation. These metrics rely on the toxicity scores of the widely used [Perspective API](https://perspectiveapi.com/) model, which is trained on online comments annotated for toxicity.\n\nIn our study we first show that a combination of relatively simple baselines leads to a drastic reduction, as measured by previously introduced LM toxicity [metrics](https://arxiv.org/abs/2009.11462). Concretely, we find that a combination of i) filtering the LM training data annotated as toxic by [Perspective API](https://perspectiveapi.com/), ii) filtering generated text for toxicity based on a separate, fine-tuned BERT classifier trained to detect toxicity, and iii) [steering](https://arxiv.org/abs/1912.02164) the generation towards being less toxic, is highly effective at reducing LM toxicity, as measured by automatic toxicity metrics. When prompted with toxic (or non-toxic) prompts from the [RealToxicityPrompts](https://arxiv.org/abs/2009.11462) dataset, we see a 6-fold (or 17-fold) reduction compared with the previously reported state-of-the-art, in the aggregate *Probability of Toxicity* metric. We reach a value of zero in the unprompted text generation setting, suggesting that we have exhausted this metric. Given how low the toxicity levels are in absolute terms, as measured with automatic metrics, the question arises to what extent this is also reflected in human judgment, and whether improvements on these metrics are still meaningful, especially since they are derived from an imperfect automatic classification system. To gather further insights, we turn towards evaluation by humans.\n\n#### **Evaluation by Humans**\n\nWe conduct a human evaluation study where raters annotate LM-generated text for toxicity. The results of this study indicate that there is a direct and largely monotonic relation between average human and classifier-based results, and LM toxicity reduces according to human judgment.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335c02f2c1ba641d73d637_unnamed.jpg)We found inter-annotator agreement comparable to other studies measuring toxicity, and that annotating toxicity has aspects that are subjective and ambiguous. For example, we found that ambiguity frequently arose as a result of sarcasm, news-style text about violent behavior, and quoting toxic text (either neutrally or in order to disagree with it).\n\nIn addition, we find that automatic evaluation of LM toxicity becomes less reliable once detoxification measures have been applied. While initially coupled very well, for samples with a high (automatic) toxicity score, the link between human ratings and Perspective API scores disappears once we apply and increase the strength of LM toxicity reduction interventions.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335c127ba0ef7c524b59da_unnamed%20(1).jpg)Further manual inspection also reveals that false positive texts mention some identity terms at disproportionate frequencies. 
For example, for one detoxified model, we observe that within the high automatic toxicity bucket, 30.2% of texts mention the word “gay”, reflecting previously observed biases in automatic toxicity classifiers (which the community is already [working on](https://research.google/pubs/pub46743/) improving). Together, these findings suggest that when judging LM toxicity, a reliance on automatic metrics alone could lead to potentially misleading interpretations.\n\n#### **Unintended Consequences of Detoxification**\n\nWe further study possible unintended consequences resulting from the LM toxicity reduction interventions. For detoxified language models, we see a marked increase in the language modeling loss, and this increase correlates with the strength of the detoxification intervention. However, the increase is larger on documents that have higher automatic toxicity scores, compared to documents with lower toxicity scores. At the same time, in our human evaluations we did not find notable differences in terms of grammar, comprehension, and in how well the style of prior conditioning text is preserved.\n\nAnother consequence of detoxification is that it can disproportionately reduce the ability of the LM to model texts related to certain identity groups *(i.e. topic coverage)*, and also text by people from different identity groups and with different dialects *(i.e. dialect coverage)*. We find that there is a larger increase in the language modeling loss for text in African-American English (AAE) when compared to text in White-Aligned English.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335c28e94a8979eeb0abf9_unnamed%20(2).jpg)We see similar disparities in LM-loss degradation for text related to female actors when compared to text about male actors. For text about certain ethnic subgroups (such as Hispanic American), the degradation in performance is again relatively higher when compared to other subgroups.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335c313ff32a5db0279c02_unnamed%20(3).jpg)#### **Takeaways**\n\nOur experiments on measuring and mitigating language model toxicity provide us valuable insights into potential next steps towards reducing toxicity-related language model harms.\n\nFrom our automated and human evaluation studies, we find that existing mitigation methods are indeed very effective at reducing automatic toxicity metrics, and this improvement is largely matched with reductions in toxicity as judged by humans. However, we might have reached an exhaustion point for the use of automatic metrics in LM toxicity evaluation: after the application of toxicity reduction measures, the majority of remaining samples with high automatic toxicity scores are not actually judged as toxic by human raters, indicating that automatic metrics become less reliable for detoxified LMs. This motivates efforts towards designing more challenging benchmarks for automatic evaluation, and to consider human judgment for future studies on LM toxicity mitigation.\n\nFurther, given the ambiguity in human judgements of toxicity, and noting that judgements can vary across users and applications (e.g. language describing violence, that might otherwise be flagged as toxic, might be appropriate in a news article), future work should continue to develop and adapt the notion of toxicity for different contexts, and refine it for different LM applications. 
We hope the list of phenomena which we found annotator disagreement for is helpful in this regard.\n\nFinally, we also noticed unintended consequences of LM toxicity mitigation, including a deterioration in LM loss, and an unintended amplification of social biases - measured in terms of topic and dialect coverage - potentially leading to decreased LM performance for marginalized groups. Our findings suggest that alongside toxicity, it is key for future work to not rely on just a single metric, but to consider an “ensemble of metrics” which capture different issues. Future interventions, such as further reducing bias in toxicity classifiers will potentially help prevent trade-offs like the ones we observed, enabling safer language model use.", "date_published": "2021-09-15T00:00:00Z", "authors": ["Johannes Welbl", "Mia Glaese", "Jonathan Uesato", "Sumanth Dathathri", "John Mellor", "Lisa Anne Hendricks", "Kirsty Anderson *", "Pushmeet Kohli", "Ben Coppin", "Po-Sen Huang"], "summaries": []} +{"id": "ac3554d3108547ce6f1a425185482cf1", "title": "Enabling high-accuracy protein structure prediction at the proteome scale", "url": "https://www.deepmind.com/blog/enabling-high-accuracy-protein-structure-prediction-at-the-proteome-scale", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### The AlphaFold method\n\nMany novel machine learning innovations contribute to AlphaFold’s current level of accuracy. We give a high-level overview of the system below; for a technical description of the network architecture see our AlphaFold [methods paper](https://www.nature.com/articles/s41586-021-03819-2) and especially its extensive Supplementary Information.\n\nThe AlphaFold network consists of two main stages. Stage 1 takes as input the amino acid sequence and a multiple sequence alignment (MSA). Its goal is to learn a rich “pairwise representation” that is informative about which residue pairs are close in 3D space.\n\nStage 2 uses this representation to directly produce atomic coordinates by treating each residue as a separate object, predicting the rotation and translation necessary to place each residue, and ultimately assembling a structured chain. The design of the network draws on our intuitions about protein physics and geometry, for example, in the form of the updates applied and in the choice of loss.\n\nInterestingly, we can produce a 3D structure based on the representation at intermediate layers of the network. The resulting “trajectory” videos show how AlphaFold’s belief about the correct structure develops during inference, layer by layer. Typically a hypothesis emerges after the first few layers followed by a lengthy process of refinement, although some targets require the full depth of the network to arrive at a good prediction.\n\n\n\n![](https://storage.googleapis.com/deepmind-media/DeepMind.com/Authors-Notes/enabling-high-accuracy-protein-structure-prediction-at-the-proteome-scale/fig_1.gif)\n![](https://storage.googleapis.com/deepmind-media/DeepMind.com/Authors-Notes/enabling-high-accuracy-protein-structure-prediction-at-the-proteome-scale/fig_2.gif)\n![](https://storage.googleapis.com/deepmind-media/DeepMind.com/Authors-Notes/enabling-high-accuracy-protein-structure-prediction-at-the-proteome-scale/fig_3.gif)\n\nPredicted structure for the CASP14 targets T1044, T1024 and T1064 at successive layers of the network. 
Structures are colored by residue number and the counter shows the current layer.\n#### Accuracy and confidence\n\nAlphaFold was stringently assessed in the [CASP14](https://predictioncenter.org/casp14/zscores_final.cgi) experiment, in which participants blindly predict protein structures that have been solved but not yet made public. The method achieved high accuracy in a majority of cases, with an average 95% RMSD-Cα to the experimental structure of less than 1Å. In our papers, we further evaluate the model on a much larger set of recent PDB entries. Among the findings are strong performance on large proteins and good side chain accuracy where the backbone is well-predicted.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335d9cade0573a437b9e08_unnamed.jpg)AlphaFold’s CASP14 accuracy relative to other methods. RMSD-Cα based on the best-predicted 95% of residues for each target.An important factor in the utility of structure predictions is the quality of the associated confidence measures. Can the model identify the parts of its prediction likely to be reliable? We have developed two confidence measures on top of the AlphaFold network to address this question.\n\nThe first is pLDDT ([predicted lDDT-Cα](https://doi.org/10.1093/bioinformatics/btt473)), a per-residue measure of local confidence on a scale from 0 - 100. pLDDT can vary dramatically along a chain, enabling the model to express high confidence on structured domains but low confidence on the linkers between them, for example. In our [paper](https://www.nature.com/articles/s41586-021-03828-1), we present evidence that some regions with low pLDDT may be unstructured in isolation; either intrinsically disordered or structured only in the context of a larger complex. Regions with pLDDT < 50 should not be interpreted except as a possible disorder prediction.\n\nThe second metric is PAE (Predicted Aligned Error), which reports AlphaFold’s expected position error at residue x, when the predicted and true structures are aligned on residue y. This is useful for assessing confidence in global features, especially domain packing. For residues x and y drawn from two different domains, a consistently low PAE at (x, y) suggests AlphaFold is confident about the relative domain positions. Consistently high PAE at (x, y) suggests the relative positions of the domains should not be interpreted. The general approach used to produce PAE can be adapted to predict a variety of superposition-based metrics, including [TM-score](https://doi.org/10.1002/prot.20264) and [GDT](https://doi.org/10.1093/nar/gkg571).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335db2dd9d1c2cb07aa2b0_unnamed%20(2).jpg)Per-residue confidence (pLDDT) and Predicted Aligned Error (PAE) for two example proteins (P54725, Q5VSL9). Both have confident individual domains, but the latter also has confident relative domain positions. Note: Q5VSL9 was solved after this prediction was produced.To emphasise, AlphaFold models are ultimately predictions: while often highly accurate they will sometimes be in error. Predicted atomic coordinates should be interpreted carefully, and in the context of these confidence measures.\n\n#### Open sourcing\n\nAlongside our [method paper](https://www.nature.com/articles/s41586-021-03819-2), we have made the AlphaFold source code available on [GitHub](https://github.com/deepmind/alphafold). This includes access to a trained model and a script for making predictions on novel input sequences. 
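As a small illustration of how the confidence measures above can be used once a prediction is in hand, the sketch below reads per-residue pLDDT from a predicted structure and flags regions below the pLDDT 50 cut-off. It assumes pLDDT is stored in the B-factor column of the output PDB file, as in AlphaFold's outputs; the file name in the usage comment is hypothetical.

```python
def read_ca_plddt(pdb_path):
    """Return {residue_number: pLDDT} for C-alpha atoms in a PDB file.

    Assumes pLDDT is stored in the B-factor column (columns 61-66), as in
    AlphaFold's output PDB files.
    """
    plddt = {}
    with open(pdb_path) as f:
        for line in f:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                res_num = int(line[22:26])
                plddt[res_num] = float(line[60:66])
    return plddt

def low_confidence_regions(plddt, cutoff=50.0):
    """Group consecutive residues with pLDDT below `cutoff` into (start, end) spans."""
    spans, start, prev = [], None, None
    for res in sorted(plddt):
        if plddt[res] < cutoff:
            if start is None:
                start = res
            prev = res
        elif start is not None:
            spans.append((start, prev))
            start = None
    if start is not None:
        spans.append((start, prev))
    return spans

# Hypothetical usage on a downloaded prediction:
# plddt = read_ca_plddt("AF-P54725-F1-model_v1.pdb")
# print(low_confidence_regions(plddt))  # regions best treated as possible disorder
```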
We believe this is an important step that will enable the community to use and build on our work. The easiest way to fold a single new protein with AlphaFold is to use our [Colab notebook](https://bit.ly/alphafoldcolab).\n\nThe open source code is an updated version of our CASP14 system based on the [JAX framework](https://github.com/google/jax), and it achieves equally high accuracy. It also incorporates some recent performance improvements. AlphaFold’s speed has always depended heavily on the input sequence length, with short proteins taking minutes to process and only very long proteins running into hours. Once the MSA has been assembled, the open source version can now predict the structure of a 400 residue protein in just over a minute of GPU time on a V100.\n\n#### Proteome scale and AlphaFold DB\n\nAlphaFold’s fast inference times allow the method to be applied at whole-proteome scale. In our [paper](https://www.nature.com/articles/s41586-021-03828-1), we discuss AlphaFold’s predictions for the human proteome. However, we have since generated predictions for the reference proteomes of a number of [model organisms, pathogens and economically significant species](https://alphafold.ebi.ac.uk/download), and large scale prediction is now routine. Interestingly, we observe a difference in the pLDDT distribution between species, with generally higher confidence on bacteria and archaea and lower confidence on eukaryotes, which we hypothesize may be related to the prevalence of disorder in these proteomes.\n\nNo single research group can fully explore such a large dataset, and so we partnered with [EMBL-EBI](https://www.ebi.ac.uk/) to make the predictions freely available via the [AlphaFold DB](https://alphafold.ebi.ac.uk/). Each prediction can be viewed alongside the confidence metrics described above. A bulk download is also provided for each species, and all data is covered by a CC-BY-4.0 license (making it freely available for both academic and commercial use). We are extremely grateful to EMBL-EBI for their work with us to develop this new resource. Over the course of the coming months we plan to expand the dataset to cover the over 100 million proteins in [UniRef90](https://www.uniprot.org/uniref/?query=&fil=identity:0.9).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335e12dbf2046bb6893f14_unnamed%20(3).jpg)Example: AlphaFold DB predictions from a variety of organisms.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62335de67ba0ef82024c93af_unnamed%20(1).jpg)Distribution of per-residue confidence for 14 species; left to right: bacteria / archaea, animals, and protists.In AlphaFold DB, we have chosen to share predictions of full protein chains up to 2700 amino acids in length, rather than cropping to individual domains. The rationale is that this avoids missing structured regions that have yet to be annotated. It also provides context from the full amino acid sequence, and allows the model to attempt a domain packing prediction. AlphaFold’s intra-domain accuracy was more extensively evaluated in CASP14 and is expected to be higher than its inter-domain accuracy. However, AlphaFold was the top ranked method in the inter-domain assessment, and we expect it to produce an informative prediction in some cases. We encourage users to view the PAE plot to determine whether domain placement is likely to be meaningful.\n\n#### Future work\n\nWe are excited about the future for computational structural biology. 
There remain many important topics to address: predicting the structure of complexes, incorporating non-protein components, and capturing dynamics and the response to point mutations. The development of network architectures like AlphaFold that excel at the task of understanding protein structure is a cause for optimism that we can make progress on related problems.\n\nWe see AlphaFold as a complementary technology to experimental structural biology. This is perhaps best illustrated by its role in helping to solve experimental structures, through molecular replacement and docking into cryo-EM volumes. Both applications can accelerate existing research, saving months of effort. From a bioinformatics perspective, AlphaFold’s speed enables the generation of predicted structures on a massive scale. This has the potential to unlock new avenues of research, by supporting structural investigations of the contents of large sequence databases.\n\nUltimately, we hope AlphaFold will prove a useful tool for illuminating protein space, and we look forward to seeing how it is applied in the coming months and years.\n\n‍\n\nWe would love to hear your feedback and understand how AlphaFold and the AlphaFold DB have been useful in your research. Share your stories at [alphafold@deepmind.com](mailto:alphafold@deepmind.com).", "date_published": "2021-07-22T00:00:00Z", "authors": ["Kathryn Tunyasuvunakool", "Jonas Adler", "Zachary Wu", "Tim Green", "Michal Zielinski", "Augustin Žídek", "Alex Bridgland", "Andrew Cowie", "Clemens Meyer", "Agata Laydon", "Sameer Velanka *", "Gerard J Kleywegt *", "Alex Bateman *", "Richard Evans", "Alexander Pritzel", "Michael Figurnov", "Olaf Ronneberger", "Russ Bates", "Simon A. A. Kohl", "Anna Potapenko", "Andrew J Ballard", "Bernardino Romera-Paredes", "Stanislav Nikolov", "Rishub Jain", "Ellen Clancy", "David Reiman", "Stig Petersen", "Andrew Senior", "Koray Kavukcuoglu", "Ewan Birney *", "Pushmeet Kohli", "John Jumper", "Demis Hassabis"], "summaries": []} +{"id": "0c0202b3da5139c2f68a8352d17895de", "title": "Melting Pot: an evaluation suite for multi-agent reinforcement learning", "url": "https://www.deepmind.com/blog/melting-pot-an-evaluation-suite-for-multi-agent-reinforcement-learning", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Technology deployed in the real world inevitably faces unforeseen challenges. These challenges arise because the environment where the technology was developed differs from the environment where it will be deployed. When a technology transfers successfully we say it generalises. In a *multi-agent system*, such as autonomous vehicle technology, there are two possible sources of generalisation difficulty: (1) physical-environment variation such as changes in weather or lighting, and (2) social-environment variation: changes in the behaviour of other interacting individuals. Handling social-environment variation is at least as important as handling physical-environment variation, however it has been much less studied.\n\nAs an example of a social environment, consider how self-driving cars interact on the road with other cars. Each car has an incentive to transport its own passenger as quickly as possible. However, this competition can lead to poor coordination (road congestion) that negatively affects everyone. If cars work cooperatively, more passengers might get to their destination more quickly. This conflict is called a *social dilemma.*\n\nHowever, not all interactions are social dilemmas. 
For instance, there are *synergistic* interactions in open-source software, there are *zero-sum* interactions in sports, and *coordination problems* are at the core of supply chains. Navigating each of these situations requires a very different approach.\n\nMulti-agent reinforcement learning provides tools that allow us to explore how artificial agents may interact with one another and with unfamiliar individuals (such as human users). This class of algorithms is expected to perform better when tested for their social generalisation abilities than others. However, until now, there has been no systematic evaluation benchmark for assessing this.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6232127f110f7a5984f6e7a6_fig_1.jpg)Blue: focal populations of trained agents, Red: background population of pre-trained botsHere we introduce Melting Pot, a scalable evaluation suite for multi-agent reinforcement learning. Melting Pot assesses generalization to novel social situations involving both familiar and unfamiliar individuals, and has been designed to test a broad range of social interactions such as: cooperation, competition, deception, reciprocation, trust, stubbornness and so on. Melting Pot offers researchers a set of 21 MARL “substrates” (multi-agent games) on which to train agents, and over 85 unique test scenarios on which to evaluate these trained agents. The performance of agents on these held-out test scenarios quantifies whether agents:\n\n* Perform well across a range of social situations where individuals are interdependent,\n* Interact effectively with unfamiliar individuals not seen during training,\n* Pass a universalisation test: answering positively to the question \"what if everyone behaved like that?''\n\nThe resulting score can then be used to rank different multi-agent RL algorithms by their ability to *generalise* to novel social situations.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/623212cd392aa8e603bdfd6e_fig_2.jpg)We hope Melting Pot will become a standard benchmark for multi-agent reinforcement learning. We plan to maintain it, and will be extending it in the coming years to cover more social interactions and generalisation scenarios.\n\n‍\n\nLearn more from our [GitHub page](https://github.com/deepmind/meltingpot).", "date_published": "2021-07-14T00:00:00Z", "authors": ["Joel Z. Leibo", "Edgar Duéñez-Guzmán", "Alexander Vezhnevets", "John Agapiou", "Peter Sunehag", "Raphael Koster", "Jayd Matyas", "Charlie Beattie", "Igor Mordatch *", "Thore Graepel"], "summaries": []} +{"id": "5fad579f2bdff70a41297d030e6c79c3", "title": "Data, Architecture, or Losses: What Contributes Most to Multimodal Transformer Success?", "url": "https://www.deepmind.com/blog/data-architecture-or-losses-what-contributes-most-to-multimodal-transformer-success", "source": "deepmind_technical_blog", "source_type": "blog", "text": "The ability to ground language to vision is a fundamental aspect of real-world AI systems; it is useful across a range of tasks (*e.g.*, visual question answering) and applications (*e.g.*, generating descriptions for visually impaired). Multimodal models (pre-trained on image-language pairs) aim to address this grounding problem. 
A recent family of models, multimodal transformers (e.g., Lu et al., 2019; Chen et al., 2020; Tan and Bansal, 2019; Li et al., 2020), have achieved state-of-the-art performance in a range of multimodal benchmarks, suggesting that the joint-encoder transformer architecture is better suited for capturing the alignment between image-language pairs than previous approaches (such as dual encoders).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320e2a337d937f992f130e_fig_1.jpg)In particular, compared to the dual-encoder architecture where there is no cross-talk between the modalities, multimodal transformers (joint encoders) are more sample efficient. In the plot below, we see that, when tested on zero-shot image retrieval, an existing multimodal transformer (UNITER) performs similar to a large-scale dual encoder (CLIP) which is trained on 100 times more data.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320e43fb63b51eb51e2185_fig_2.jpg)BOW-DE: Miech & Alayrac et al. Arxiv 2021, MMT: Hendricks et al. TACL 2021, UNITER: Chen et al. ECCV 2020, CLIP: Radford et al. Arxiv 2021, ALIGN: Jia et al. Arxiv 2021In this work, we examine what aspects of multimodal transformers – attention, losses, and pretraining data – are important in their success at multimodal pretraining. We find that Multimodal attention, where both language and image transformers attend to each other, is crucial for these models’ success. Models with other types of attention (even with more depth or parameters) fail to achieve comparable results to shallower and smaller models with multimodal attention. Moreover, comparable results can be achieved without the image (masked region modelling) loss originally proposed for multimodal transformers. This suggests that our current models are not tapping into the useful signal in the image modality, presumably because of the image loss formulation.\n\nWe also study different properties of multimodal datasets such as their size and the degree to which the language describes its corresponding image (noisiness). We find that a dataset’s size does not always predict multimodal transformers’ performance; its noise level and language similarity to the evaluation task are both important contributing factors. These suggest curating less noisy image–text datasets to be important despite the current trend of harvesting noisy datasets from the web.\n\nOverall, our analysis shows that multimodal transformers are stronger than dual encoder architecture (given the same amount of pretraining data), mainly due to the cross-talk through multimodal attention. However, there are still many open problems when designing multimodal models, including better losses for the image modality and robustness to dataset noise.", "date_published": "2021-02-02T00:00:00Z", "authors": ["Aida Nematzadeh", "Lisa Anne Hendricks", "Jean-baptiste Alayrac", "Rosalia Schneider", "John Mellor"], "summaries": []} +{"id": "976d4b402857f5ef80b665c02c0c9a41", "title": "Imitating Interactive Intelligence", "url": "https://www.deepmind.com/blog/imitating-interactive-intelligence", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Two questions must be answered at the outset of any artificial intelligence research. What do we want AI systems to do? And how will we evaluate when we are making progress toward this goal? 
Alan Turing, in his seminal paper describing the Turing Test, which he more modestly named the imitation game, argued that for a certain kind of AI, these questions may be one and the same. Roughly, if an AI’s behaviour resembles human-like intelligence when a person interacts with it, then the AI has passed the test and can be called intelligent. An AI that is designed to interact with humans should be tested via interaction with humans.\n\nAt the same time, interaction is not just a test of intelligence but also the point. For AI agents to be generally helpful, they should assist us in diverse activities and communicate with us naturally. In science fiction, the vision of robots that we can speak to is commonplace. And intelligent digital agents that can help accomplish large numbers of tasks would be eminently useful. To bring these devices into reality, we therefore must study the problem of how to create agents that can capably interact with humans and produce actions in a rich world.\n\nBuilding agents that can interact with humans and the world poses a number of important challenges. How can we provide appropriate learning signals to teach artificial agents such abilities? How can we evaluate the performance of the agents we develop, when language itself is ambiguous and abstract? As the wind tunnel is to the design of the airplane, we have created a virtual environment for researching how to make interacting agents.\n\nWe first create a simulated environment, the Playroom, in which virtual robots can engage in a variety of interesting interactions by moving around, manipulating objects, and speaking to each other. The Playroom’s dimensions can be randomised as can its allocation of shelves, furniture, landmarks like windows and doors, and an assortment of children's toys and domestic objects. The diversity of the environment enables interactions involving reasoning about space and object relations, ambiguity of references, containment, construction, support, occlusion, partial observability. We embedded two agents in the Playroom to provide a social dimension for studying joint intentionality, cooperation, communication of private knowledge, and so on.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d57fb7cb612609e1378_fig_1.jpg)Agents interacting in the Playroom. The blue agent instructs the yellow agent to “Put the helicopter into the box.”![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d6327179e737d345d3f_fig_2.jpg)The configuration of the Playroom is randomised to create diversity in data collection.We harness a range of learning paradigms to build agents that can interact with humans, including imitation learning, reinforcement learning, supervised, and unsupervised learning. As Turing may have anticipated in naming “the imitation game,” perhaps the most direct route to create agents that can interact with humans is through imitation of human behaviour. Large datasets of human behaviour along with algorithms for imitation learning from those data have been instrumental for making agents that can interact with textual language or play games. For grounded language interactions, we have no readily available, pre-existing data source of behaviour, so we created a system for eliciting interactions from human participants interacting with each other. 
These interactions were elicited primarily by prompting one of the players with a cue to improvise an instruction about, e.g., “Ask the other player to position something relative to something else.” Some of the interaction prompts involve questions as well as instructions, like “Ask the other player to describe where something is.” In total, we collected more than a year of real-time human interactions in this setting.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d74337d9337ba2f12ed_fig_3.jpg)Our agents each consume images and language as inputs and produce physical actions and language actions as outputs. We built reward models with the same input specifications.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d836e04ba63f81c6e26_fig_4.jpg)Left: Over the course of a 2 minute interaction, the two players (setter & solver) move around, look around, grab and drop objects, and speak. Right: The setter is prompted to “Ask the other player to lift something.” The setter instructs the solver agent to “Lift the plane which is in front of the dining table”. The solver agent finds the correct object and completes the task.Imitation learning, reinforcement learning, and auxiliary learning (consisting of supervised and unsupervised representation learning) are integrated into a form of interactive self-play that is crucial to create our best agents. Such agents can follow commands and answer questions. We call these agents “solvers.” But our agents can also provide commands and ask questions. We call these agents “setters.” Setters interactively pose problems to solvers to produce better solvers. However, once the agents are trained, humans can play as setters and interact with solver agents.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320d98f724cb999b9eb83d_fig_5.jpg)From human demonstrations we train policies using a combination of supervised learning (behavioural cloning), inverse RL to infer reward models, and forward RL to optimise policies using the inferred reward model. We use semi-supervised auxiliary tasks to help shape the representations of both the policy and reward models.![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320dc06a323a2219564a9f_fig_6.jpg)The setter agent asks the solver agent to “Take the white robot and place it on the bed.” The solver agent finds the robot and accomplishes the task. The reward function learned from demonstrations captures key aspects of the task (blue), and gives less reward (grey) when the same observations are coupled with the counterfactual instruction, “Take the red robot and place it on the bed.”Our interactions cannot be evaluated in the same way that most simple reinforcement learning problems can. There is no notion of winning or losing, for example. Indeed, communicating with language while sharing a physical environment introduces a surprising number of abstract and ambiguous notions. For example, if a setter asks a solver to put something near something else, what exactly is “near”? But accurate evaluation of trained models in standardised settings is a linchpin of modern machine learning and artificial intelligence. 
To cope with this setting, we have developed a variety of evaluation methods to help diagnose problems in and score agents, including simply having humans interact with agents in large trials.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320dd4c8ddbba2a8d4c9ac_fig_7.jpg)Humans evaluated the performance of agents and other humans in completing instructions in the Playroom on both instruction-following and question-answering tasks. Randomly initialised agents were successful ~0% of the time. An agent trained with supervised behavioural cloning alone (B) performed somewhat better, at ~10-20% of the time. Agents trained with semi-supervised auxiliary tasks as well (B·A) performed better. Those trained with supervised, semi-supervised, and reinforcement learning using interactive self-play were judged to perform best (BG·A & BGR·A).A distinct advantage of our setting is that human operators can set a virtually infinite set of new tasks via language, and quickly understand the competencies of our agents. There are many tasks that they cannot cope with, but our approach to building AIs offers a clear path for improvement across a growing set of competencies. Our methods are general and can be applied wherever we need agents that interact with complex environments and people.", "date_published": "2020-12-11T00:00:00Z", "authors": ["Josh Abramson", "Arun Ahuja", "Arthur Brussee", "Federico Carnevale", "Mary Cassin", "Stephen Clark", "Andrew Dudzik", "Petko Georgiev", "Aurelia Guy", "Tim Harley", "Felix Hill", "Alden Hung", "Zac Kenton", "Jessica Landon", "Timothy Lillicrap", "Kory W. Mathewson", "Alistair Muldal", "Adam Santoro", "Nikolay Savinov", "Vikrant Varma", "Gregory Wayne", "Nathaniel Wong", "Chen Yan", "Rui Zhu"], "summaries": []} +{"id": "bf42b777717094e925aa3ccd94dae890", "title": "Using Unity to Help Solve Intelligence", "url": "https://www.deepmind.com/blog/using-unity-to-help-solve-intelligence", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### A wide range of environments\n\nIn the pursuit of artificial general intelligence (AGI), we seek to create agents that can achieve goals in a wide range of environments. As our agents master the environments we create, we must continually create new environments that probe as-yet-untested cognitive abilities.\n\nGames have always provided a challenge for artificial intelligence (AI) research, most famously board games such as backgammon, chess, and Go. Video games such as Space Invaders, Quake III Arena, Dota 2 and StarCraft II have also more recently become popular for AI research. Games are ideal because they have a clear measure of success, allowing progress to be reviewed empirically and to be directly benchmarked against humans.\n\nAs AGI research progresses, so too does the research community’s interest in more complex games. At the same time, the engineering efforts needed for transforming individual video games into research environments become hard to manage. Increasingly, general-purpose game engines become the most scalable way to create a wide range of interactive environments.\n\n#### General-purpose game engines\n\nMuch AGI research has already happened in game engines such as [Project Malmo](https://www.microsoft.com/en-us/research/project/project-malmo/), based on Minecraft; [ViZDoom](http://vizdoom.cs.put.edu.pl/), based on Doom; and [DeepMind Lab](https://github.com/deepmind/lab), based on Quake III Arena. 
These engines can be scripted to quickly create new environments – and since many were written for older hardware, they’re able to run extremely fast on modern hardware, eliminating the environment as a performance bottleneck.\n\nBut these game engines are missing some important features. DeepMind Lab, for example, is excellent for learning navigation but poor for learning common sense notions like how objects move and interact with each other.\n\n#### Unity\n\nAt DeepMind we use Unity, a flexible and feature-rich game engine. Unity’s realistic physics simulation allows agents to experience an environment more closely grounded in the real world. The modern rendering pipeline provides more subtle visual clues such as realistic lighting and shadows. Unity scripts are written in C#, which is easy to read, and unlike with bespoke engines, provides access to all game-engine features. Multiplatform support lets us run environments at home on our laptops or at scale on Google’s data-centres. Finally, as the Unity engine continues to evolve, we can future-proof ourselves without expending a large amount of our own engineering time.\n\nUnity includes a ready-to-use machine learning toolkit called [ML-Agents](https://unity.com/products/machine-learning-agents) that focuses on simplifying the process of making an existing game available as a learning environment. DeepMind focuses on constructing a wide variety of heterogeneous environments which are run at scale, and as such we instead use dm\\_env\\_rpc (see below).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/62320bed766cbabbba55a2e1_fig_1.jpg)Screen captures of Unity environments created at DeepMind#### Differences from conventional games\n\nTraditional video games render themselves in real-time: one second on-screen is equal to one second in a simulation. But to AI researchers, a game is just a stream of data. Games can often be processed much more quickly than in real-time, and there’s no problem if the game speed varies wildly from moment to moment.\n\nAdditionally, many reinforcement learning algorithms scale with multiple instances. That is, one AI can play thousands of games simultaneously and learn from them all at once.\n\nBecause of this, we optimise for throughput instead of latency. That is, we update our games as many times as we can and don’t worry about generating those updates at a consistent rate. We run multiple games on a single computer, with one game per processor core. Stalls caused by features such as garbage collection - a common headache for traditional game makers - are not a concern to us as long as the game generally runs quickly.\n\n#### Containerisation and dm\\_env\\_rpc\n\nGames output images, text, and sound for the player to see and hear, and also take input commands from a game controller of some kind. The structure of this data is important for AI researchers. For example, text is normally presented separately instead of being drawn onto the screen. Since flexibility in this data format is so important, we created a new open-source library called [*dm\\_env\\_rpc*](https://github.com/deepmind/dm_env_rpc), which functions as the boundary between environments and agents.\n\nBy using dm\\_env\\_rpc, we can containerise our environments and release them publicly. Containerisation means using technology like [Docker](https://www.docker.com/) to package precompiled environment binaries. Containerisation allows our research to be independently verified. 
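To give a sense of what sits on the agent side of that boundary, the sketch below runs a generic reset/step loop that is agnostic to whether the environment lives in the same process or inside a container reached over dm_env_rpc. The `TimeStep` tuple and `FakeRemoteEnv` here are illustrative stand-ins, not the actual dm_env_rpc API.

```python
import collections
import random

# A minimal stand-in for a timestep. In a real setup the environment would run
# inside a container and be reached over dm_env_rpc; here a tiny in-process
# fake is used so the loop below actually runs.
TimeStep = collections.namedtuple("TimeStep", ["observation", "reward", "is_last"])

class FakeRemoteEnv:
    """Stands in for a containerised environment behind an RPC boundary."""

    def reset(self):
        self._t = 0
        return TimeStep(observation=[0.0, 0.0], reward=0.0, is_last=False)

    def step(self, action):
        self._t += 1
        done = self._t >= 10
        return TimeStep(observation=[self._t, action], reward=1.0, is_last=done)

def run_episode(env, select_action, max_steps=1000):
    """Generic reset/step loop; agnostic to where the environment runs."""
    timestep = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = select_action(timestep.observation)
        timestep = env.step(action)
        total_reward += timestep.reward
        if timestep.is_last:
            break
    return total_reward

random_policy = lambda obs: random.uniform(-1.0, 1.0)
print(run_episode(FakeRemoteEnv(), random_policy))
```

Because the environment binary and its dependencies are packaged inside the image, the agent-side loop stays the same on any host.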
Containerisation is a more reliable and convenient way to reproduce experiments than open sourcing, which can be confused by compiler or operating system differences. For more details on how we containerise an environment, please see our work on [dm\\_memorytasks.](https://github.com/deepmind/dm_memorytasks)", "date_published": "2020-11-18T00:00:00Z", "authors": ["Simon Carter", "Manuel Sanchez", "Ricardo Barreira", "Seb Noury", "Keith Anderson", "Jay Lemmon", "Jonathan Coe", "Piotr Trochim", "Tom Handley", "Adrian Bolton"], "summaries": []} +{"id": "65add2f22e54beb4f1e6bf8ecf552583", "title": "RL Unplugged: Benchmarks for Offline Reinforcement Learning", "url": "https://www.deepmind.com/blog/rl-unplugged-benchmarks-for-offline-reinforcement-learning", "source": "deepmind_technical_blog", "source_type": "blog", "text": "![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6231efbede9e9af7f611ff68_fig%201.gif)![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6231efcc337d93e52e2e23b2_fig%202.gif)Many of the successes of RL rely heavily on repeated online interactions of an agent with an environment, which we call online RL. Despite its success in simulation, the uptake of RL for real-world applications has been limited. Power plants, robots, healthcare systems, or self-driving cars are expensive to run and inappropriate controls can have dangerous consequences. They are not easily compatible with the crucial idea of exploration in RL and the data requirements of online RL algorithms. Nevertheless, most real-world systems produce large amounts of data as part of their normal operation, and the goal of offline RL is to learn a policy directly from that logged data without interacting with the environment.\n\nOffline RL methods (e.g. Agarwal et al., 2020; Fujimoto et al., 2018) have shown promising results on well-known benchmark domains. However, non-standardised evaluation protocols, differing datasets, and lack of baselines make algorithmic comparisons difficult. Furthermore, some important properties of potential real-world application domains such as partial observability, high-dimensional sensory streams (i.e., images), diverse action spaces, exploration problems, non-stationarity, and stochasticity, are underrepresented in the current offline RL literature.\n\nWe introduce a novel collection of task domains and associated datasets together with a clear evaluation protocol. We include widely-used domains such as the DM Control Suite (Tassa et al., 2018) and Atari 2600 games (Bellemare et al., 2013), but also domains that are still challenging for strong online RL algorithms such as real-world RL (RWRL) suite tasks (Dulac-Arnold et al., 2020) and DM Locomotion tasks (Heess et al., 2017; Merel et al., 2019a,b, 2020). By standardizing the environments, datasets, and evaluation protocols, we hope to make research in offline RL more reproducible and accessible. We call our suite of benchmarks “RL Unplugged”, because offline RL methods can use it without any actors interacting with the environment.
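To make the offline setting concrete, the sketch below fits a policy purely from a fixed set of logged observation–action pairs, never stepping an environment. It uses behaviour cloning, the simplest offline baseline, on a toy dataset; it illustrates the problem setup rather than any of the benchmarked methods.

```python
import numpy as np

def behaviour_cloning(observations, actions, num_actions, lr=0.1, epochs=200):
    """Fit a linear softmax policy to logged (observation, action) pairs.

    No environment interaction is involved: this is supervised learning on the
    logged behaviour, the simplest offline RL baseline.
    """
    num_features = observations.shape[1]
    weights = np.zeros((num_features, num_actions))
    onehot = np.eye(num_actions)[actions]
    for _ in range(epochs):
        logits = observations @ weights
        logits -= logits.max(axis=1, keepdims=True)       # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        grad = observations.T @ (probs - onehot) / len(actions)
        weights -= lr * grad
    return weights

# Toy "logged dataset": the behaviour policy picks action 1 when feature 0 > 0.
rng = np.random.default_rng(0)
obs = rng.normal(size=(512, 3))
acts = (obs[:, 0] > 0).astype(int)

policy_weights = behaviour_cloning(obs, acts, num_actions=2)
greedy = (obs @ policy_weights).argmax(axis=1)
print("agreement with logged actions:", (greedy == acts).mean())
```

Stronger offline methods add value estimation and regularisation towards the logged behaviour policy, but they consume exactly the same kind of static dataset.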
Our paper offers four main contributions: (i) a unified API for datasets (ii) a varied set of environments (iii) clear evaluation protocols for offline RL research, and (iv) reference performance baselines.\n\n##### RL Unplugged: Benchmarks for Offline Reinforcement Learning", "date_published": "2020-06-24T00:00:00Z", "authors": ["Caglar Gülçehre", "Ziyu Wang", "Alexander Novikov", "Tom Le Paine", "Sergio Gómez Colmenarejo", "K Zolna", "Rishabh Agarwal*", "Josh Merel", "Daniel Mankowitz", "Cosmin Paduraru", "Gabriel Dulac-Arnold*", "Jerry Li", "Mohammad Norouzi *", "Matt Hoffman", "Ofir Nachum *", "George Tucker *", "Nicolas Heess", "Nando de Freitas"], "summaries": []} +{"id": "f2fa5d1a9bd53accd8421af2b215cc03", "title": "dm_control: Software and Tasks for Continuous Control", "url": "https://www.deepmind.com/blog/dm-control-software-and-tasks-for-continuous-control", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Overview\n\nA public colab notebook with a tutorial for dm\\_control software is available [here](https://colab.sandbox.google.com/github/deepmind/dm_control/blob/master/tutorial.ipynb).\n\n##### Infrastructure\n\n* An autogenerated MuJoCo Python wrapper provides full access to the underlying engine.\n* PyMJCF is a Document Object Model, wherein a hierarchy of Python *Entity* objects corresponds to MuJoCo model elements.\n* Composer is the high-level “game engine” which streamlines the composing of Entities into scenes and the defining observations, rewards, terminations and general game logic.\n* The Locomotion framework introduces several abstract Composer entities such as the Arena and Walker, facilitating locomotion-like tasks.\n\n##### Environments\n\n* The [Control Suite](https://www.youtube.com/watch?v=rAai4QzcYbs), including a new [quadruped](https://www.youtube.com/watch?v=RhRLjbb7pBE&t=5s) and [dog](https://www.youtube.com/watch?v=i0_OjDil0Fg) environment.\n* Several locomotion tasks, including soccer.\n* Single arm robotic manipulation tasks using snap-together bricks.\n\n#### Highlights\n\n##### Named Indexing\n\nExploiting MuJoCo’s support of *names* for all model elements, we allow strings to index and slice into arrays. So instead of writing:\n\n\"fingertip\\_height = physics.data.geom\\_xpos[7, 2]\"\n\n...using obscure, fragile numerical indexing, you can write:\n\n\"fingertip\\_height = physics.named.data.geom\\_xpos['fingertip', 'z']\" \n\n\nleading to a much more robust, readable codebase. \n\n\n#### PyMJCF\n\nThe PyMJCF library creates a Python object hierarchy with 1:1 correspondence to a MuJoCo model. It introduces the attach() method which allows models to be attached to one another. For example, in our tutorial we create procedural multi-legged creatures by attaching legs to bodies and creatures to the scene.\n\n##### Composer\n\nComposer is the “game engine“ framework, which defines a particular order of runtime function calls, and abstracts the affordances of *reward*, *termination* and *observation*. These abstractions allowed us to create useful submodules:\n\ncomposer.Observable: An abstract observation wrapper which can add noise, delays, buffering and filtering to any sensor.\n\ncomposer.Variation: A set of tools for randomising simulation quantities, allowing for agent robustification and sim-to-real via model variation.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6231ee766a323a1793376c3b_diagram3.svg)Diagram showing the life-cycle of Composer callbacks. 
Rounded rectangles represent callbacks that Tasks and Entities may implement. Blue rectangles represent built-in Composer operations.#### Locomotion\n\nThe Locomotion framework introduced the abstractions:\n\nWalker: A controllable entity with common locomotion-related methods, like projection of vectors into an egocentric frame.\n\nArena: A self-scaling randomised scene, in which the walker can be placed and given a task to perform.\n\nFor example, using just 4 function calls, we can instantiate a humanoid walker, a WallsCorridor arena and combine them in a RunThroughCorridor task.\n\n#### New Control Suite domains\n\n##### Quadruped\n\n* A generic quadruped domain with a passively stable body.\n* Several pure locomotion tasks (e.g. walk, run).\n* An escape task requiring rough terrain navigation.\n* A fetch task requiring ball dribbling.\n\n##### Dog\n\n* An elaborate model based on a skeleton commissioned from [leo3Dmodels](https://www.turbosquid.com/Search/Artists/leo3Dmodels).\n* A challenging ball-fetching task that requires precision grasping with the mouth.\n\n##### Showcase\n\nA fast-paced montage of dm\\_control based tasks from DeepMind:", "date_published": "2020-06-15T00:00:00Z", "authors": ["Yuval Tassa", "Saran Tunyasuvunakool", "Alistair Muldal", "Yotam Doron", "Siqi Liu", "Steven Bohez", "Josh Merel", "Tom Erez", "Timothy Lillicrap", "Nicolas Heess"], "summaries": []} +{"id": "ccf1de7beb444f79a6b1eef661630a1b", "title": "Acme: A new framework for distributed reinforcement learning", "url": "https://www.deepmind.com/blog/acme-a-new-framework-for-distributed-reinforcement-learning", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Overall, the high-level goals of Acme are as follows:\n\n1. To enable the reproducibility of our methods and results  — this will help clarify what makes an RL problem hard or easy, something that is seldom apparent.\n2. To simplify the way we (and the community at large) design new algorithms — we want that next RL agent to be easier for everyone to write!\n3. To enhance the readability of RL agents — there should be no hidden surprises when transitioning from a paper to code.\n\nIn order to enable these goals, the design of Acme also bridges the gap between large-, medium-, and small-scale experiments. We have done so by carefully thinking about the design of agents at many different scales.\n\nAt the highest level, we can think of Acme as a classical RL interface (found in any introductory RL text) which connects an actor (i.e. an action-selecting agent) to an environment. This actor is a simple interface which has methods for selecting actions, making observations, and updating itself. Internally, learning agents further split the problem up into an “acting” and a “learning from data” component. Superficially, this allows us to re-use the acting portions across many different agents. However, more importantly this provides a crucial boundary upon which to split and parallelize the learning process. We can even scale down from here and seamlessly attack the batch RL setting where there exists *no environment* and only a fixed dataset. Illustrations of these different levels of complexity are shown below:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c6b6b755e529e96ea6c6_Fig%201.gif)This design allows us to easily create, test, and debug novel agents in small-scale scenarios before scaling them up — all while using the same acting and learning code. 
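As a rough illustration of this acting/learning boundary, here is a minimal sketch in plain Python. The class and method names are ours for illustration and do not reproduce the exact Acme interfaces.

```python
import random


class Learner:
    """Consumes batches of experience and updates parameters."""

    def __init__(self):
        self.params = 0.0

    def step(self, batch):
        self.params += 0.001 * len(batch)  # stand-in for a gradient update


class Actor:
    """Selects actions, records experience, and periodically syncs weights."""

    def __init__(self, learner, buffer):
        self._learner = learner
        self._buffer = buffer
        self._params = learner.params

    def select_action(self, observation):
        return random.choice([0, 1])  # stand-in for a policy network

    def observe(self, observation, action, reward):
        self._buffer.append((observation, action, reward))

    def update(self):
        self._params = self._learner.params  # pull the latest parameters


# One process can drive a single actor, or many parallel actors can feed a
# shared buffer; the acting and learning code itself does not change.
buffer, learner = [], Learner()
actor = Actor(learner, buffer)
for t in range(200):
    observation = t % 7  # stand-in for an environment observation
    action = actor.select_action(observation)
    actor.observe(observation, action, reward=0.0)
    if len(buffer) >= 32:
        learner.step(buffer[-32:])
        actor.update()
```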
Acme also provides a number of useful utilities from checkpointing, to snapshotting, to low-level computational helpers. These tools are often the unsung heroes of any RL algorithm, and in Acme we strive to keep them as simple and understandable as possible.\n\nTo enable this design Acme also makes use of [Reverb](https://deepmind.com/research/open-source/Reverb): a novel, efficient data storage system purpose built for machine learning (and reinforcement learning) data. Reverb is primarily used as a system for experience replay in distributed reinforcement learning algorithms, but it also supports other data structure representations such as FIFO and priority queues. This allows us to use it seamlessly for on- and off-policy algorithms. Acme and Reverb were designed from the beginning to play nicely with one another, but Reverb is also fully usable on its own, so go check it out!\n\nAlong with our infrastructure, we are also releasing single-process instantiations of a number of agents we have built using Acme. These run the gamut from continuous control (D4PG, MPO, etc.), discrete Q-learning (DQN and R2D2), and more. With a minimal number of changes — by splitting across the acting/learning boundary — we can run these same agents in a distributed manner. Our first release focuses on single-process agents as these are the ones mostly used by students and research practitioners.\n\nWe have also carefully benchmarked these agents on a number of environments, namely the [control suite](https://deepmind.com/research/publications/deepmind-control-suite), [Atari](https://github.com/mgbellemare/Arcade-Learning-Environment), and [bsuite](https://deepmind.com/research/open-source/bsuite).\n\n###### Playlist of videos showing agents trained using Acme framework\n\nWhile additional results are readily available in our [paper](https://arxiv.org/abs/2006.00979), we show a few plots comparing the performance of a single agent (D4PG) when measured against both actor steps and wall clock time for a continuous control task. Due to the way in which we limit the rate at which data is inserted into replay — refer to the paper for a more in-depth discussion — we can see roughly the same performance when comparing the rewards an agent receives versus the number of interactions it has taken with the environment (actor steps). However, as the agent is further parallelised we see gains in terms of how fast the agent is able to learn. On relatively small domains, where the observations are constrained to small feature spaces, even a modest increase in this parallelisation (4 actors) results in an agent that takes under half the time to learn an optimal policy:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c7165eaff527cb3d52f4_Fig%202.jpg)But for even more complex domains where the observations are images that are comparatively costly to generate we see much more extensive gains:\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c723cd9c1976c2f08330_Fig%203.jpg)And the gains can be even bigger still for domains such as Atari games where the data is more expensive to collect and the learning processes generally take longer. However, it is important to note that these results share the same acting and learning code between both the distributed and non-distributed setting. 
So it is perfectly feasible to experiment with these agents and results at a smaller scale — in fact this is something we do all the time when developing novel agents!\n\nFor a more detailed description of this design, along with further results for our baseline agents, see our [paper](https://arxiv.org/abs/2006.00979). Or better yet, take a look at our [GitHub repository](https://github.com/deepmind/acme) to see how you can start using Acme to simplify your own agents!", "date_published": "2020-06-01T00:00:00Z", "authors": ["Matt Hoffman", "Bobak Shahriari", "John Aslanides", "Gabriel Barth-Maron", "Feryal Behbahani", "Tamara Norman", "Abbas Abdolmaleki", "Albin Cassirer", "Fan Yang", "Kate Baumli", "Sarah Henderson", "Alex Novikov", "Sergio Gómez Colmenarejo", "Serkan Cabi", "Caglar Gülçehre", "Tom Le Paine", "Andrew Cowie", "Ziyu Wang", "Bilal Piot", "Nando de Freitas"], "summaries": []}
+{"id": "3d3692974da40aeb85f820b84b10c899", "title": "Simple Sensor Intentions for Exploration", "url": "https://www.deepmind.com/blog/simple-sensor-intentions-for-exploration", "source": "deepmind_technical_blog", "source_type": "blog", "text": "![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c5d829a96fa1b5f53db5_Fig%201.gif)By simple color-masking, high-level image statistics can be derived. Rewarding an agent for deliberately changing these statistics leads to diverse exploration and interesting behavior such as grasping or lifting objects.#### Example Skills\n\nLearned from scratch, from pixels and proprioception only. The external rewards are sparse and Simple Sensor Intentions (SSIs) are used as the only auxiliary tasks.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c5ead7858131e4ad7dd2_Fig%202.gif)#### Video", "date_published": "2020-05-12T00:00:00Z", "authors": ["Tim Hertweck", "Martin Riedmiller", "Michael Bloesch", "Jost Tobias Springenberg", "Noah Siegel", "Markus Wulfmeier", "Roland Hafner", "Nicolas Heess"], "summaries": []}
+{"id": "40bd93fb9842cc8a14235016006093b4", "title": "Learning to Segment Actions from Observation and Narration", "url": "https://www.deepmind.com/blog/learning-to-segment-actions-from-observation-and-narration", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Complex tasks that people carry out in the world, for example *making pancakes*, have multiple action steps (e.g., *pouring the mixture, flipping the pancake, removing the pancake*), and are structured. When we observe people carrying out tasks, we recognize where the action steps begin and end (*pouring the mixture* now, *flipping the pancake* later), and distinguish the important steps from the insignificant ones. Identifying important action steps and associating them with intervals of time is known as *action segmentation*, and is a crucial process for human cognition and planning. When people, and in particular children, learn to segment actions, they rely on a number of cues, including descriptions narrated by the person carrying out the task (“now I’ll stir everything”) and structural regularities in the task (mixing ingredients typically happens after adding the ingredients).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bfca149e456ff57380c2_Fig%201%20copy.jpg)In this work, inspired by how people learn to segment actions, we examine how effective language descriptions and task regularities are in improving systems for action segmentation. 
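Concretely, an action segmentation can be represented as a list of labelled time intervals. The toy sketch below (with invented frame labels, not data or code from this work) collapses per-frame action predictions into such segments:

```python
from itertools import groupby

# Invented per-frame labels for a short pancake-making clip.
frame_labels = (
    ["background"] * 10 + ["pour mixture"] * 25 +
    ["background"] * 5 + ["flip pancake"] * 15 + ["remove pancake"] * 12
)


def to_segments(labels):
    """Collapse consecutive identical frame labels into (start, end, action)."""
    segments, start = [], 0
    for action, run in groupby(labels):
        length = len(list(run))
        segments.append((start, start + length, action))
        start += length
    return segments


for start, end, action in to_segments(frame_labels):
    print(f"frames {start:3d}-{end:3d}: {action}")
```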
Action segmentation is an important first step for processing and cataloguing video: knowing which actions are occurring, and when, makes it easier to search for relevant videos and parts of video from a large, web-scale collection. However, standard supervised machine learning methods for predicting action segments in videos would require videos to be annotated with the action segments that occur in them. Since these annotations would be expensive and difficult to collect, we are interested in *weakly-supervised* action segmentation: training without annotated action segments.\n\nWe focus on a challenging dataset of instructional videos taken from YouTube [CrossTask, Zhukov et al. 2019], involving everyday household tasks such as cooking and assembling furniture. While these videos are naturally-occurring, they consist of tasks that have some structural regularities across videos, and have language descriptions (transcriptions of the person’s narration), which both provide a noisy source of weak supervision. We develop a flexible unsupervised model for action segmentation that can be trained without action labels, and can optionally use this weak supervision from the *task regularities* and *language descriptions*. Our model, and models from past work, benefit substantially from both of these sources of supervision, even on top of rich features from state-of-the-art neural action and object classifiers. We also find that generative models of the video features typically have better performance than discriminative models on the segmentation task.\n\nOur findings suggest that using language to guide action segmentation is a promising direction for future work when annotations for the action segments are not available.", "date_published": "2020-05-07T00:00:00Z", "authors": ["Daniel Fried*", "Jean-Baptiste Alayrac", "Phil Blunsom", "Chris Dyer", "Stephen Clark", "Aida Nematzadeh"], "summaries": []}
+{"id": "9d3319a33b1b07ffbc1913fa1614b12b", "title": "Visual Grounding in Video for Unsupervised Word Translation", "url": "https://www.deepmind.com/blog/visual-grounding-in-video-for-unsupervised-word-translation", "source": "deepmind_technical_blog", "source_type": "blog", "text": "#### Translating Words Through Unpaired Narrated Videos\n\nThe most common approach for machine translation relies on supervision through a paired or parallel corpus where each sentence in the source language is paired with its translation in the target language. This is limiting as we do not have access to such a paired corpus for most languages in the world. Interestingly, bilingual children can learn two languages without being exposed to them at the same time. Instead, they can leverage visual similarity across situations: what they observe while hearing \"the dog is eating\" on Monday is similar to what they see as they hear \"le chien mange\" on Friday.\n\nIn this work, inspired by bilingual children, we develop a model that learns to translate words from one language to another by tapping into the visual similarity of situations in which words occur. More specifically, our training dataset consists of disjoint sets of videos narrated in different languages. These videos share similar topics (e.g., cooking pasta or changing a tire); for example, the dataset consists of some videos on how to cook pasta narrated in Korean and a different set of videos on the same topic but in English. 
Note that the videos in different languages are not *paired*.\n\nOur model leverages the visual similarity of videos by associating videos with their corresponding narrations in a shared embedding space between languages. The model is trained by alternating between videos narrated in one language and those in the second language. Thanks to such a training procedure, and since we share the video representation between both languages, our model learns a joint bilingual-visual space that aligns words in two different languages.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bf8f779b4968925406fe_Fig%201.gif)#### MUVE: improving language only methods with vision\n\nWe demonstrate that our method, MUVE (Multilingual Unsupervised Visual Embeddings), can complement existing translation techniques that are trained on unpaired corpus but do not use vision. By doing so, we show that the quality of unsupervised word translation improves, most notably in situations where language-only methods suffer the most, e.g., when: (i) languages are very different (such as English and Korean or English and Japanese), (ii) the initial corpora have different statistics in the two languages, or (iii)  a limited amount of training data is available.\n\nOur findings suggest that using visual data such as videos is a promising direction to improve bilingual translation models when we do not have paired data.", "date_published": "2020-03-11T00:00:00Z", "authors": ["Gunnar Sigurdsson*", "Jean-Baptiste Alayrac", "Aida Nematzadeh", "Lucas Smaira", "Mateusz Malinowski", "Joao Carreira", "Phil Blunsom", "Andrew Zisserman"], "summaries": []} +{"id": "31119b7f944dd6fd476c55a9ad347c81", "title": "Artificial Intelligence, Values and Alignment", "url": "https://www.deepmind.com/blog/artificial-intelligence-values-and-alignment", "source": "deepmind_technical_blog", "source_type": "blog", "text": "The question of ‘value alignment’ centres upon how to ensure that AI systems are properly aligned with human values. It can be broken down into two parts. The first part is *technical* and focuses on how to encode values or principles in artificial agents, so that they reliably do what they ought to do. The second part is *normative*, and focuses on what values or principles it would be right to encode in AI.\n\nThis paper focuses on the second question, paying particular attention to the fact that we live in a pluralistic world where people have a variety of different beliefs about value. Ultimately, I suggest that we need to devise principles for alignment that treat people fairly and command widespread support despite this difference of opinion.\n\n#### Moral considerations\n\nAny new technology generates moral considerations. Yet the task of imbuing artificial agents with moral values becomes particularly important as computer systems operate with greater autonomy and at a speed that ‘increasingly prohibits humans from evaluating whether each action is performed in a responsible or ethical manner’.\n\nThe first part of the paper notes that while technologists have an important role to play in building systems that respect and embody human values, the task of selecting appropriate values is not one that can be settled by technical work alone. 
This becomes clear when we look at the different ways in which value alignment could be achieved, at least within the reinforcement learning paradigm.\n\nOne set of approaches try to specify a reward function for an agent that would lead it to promote the right kind of outcome and act in ways that are broadly thought to be ethical. For this approach to succeed, we need to specify appropriate goals for artificial agents and encode them in AI systems – which is far from straightforward. A second family of approaches proceeds differently. Instead of trying to specify the correct reward function for the agent upfront, it looks at ways in which an agent could learn the correct reward from examples of human behavior or human feedback. However, the question then becomes what data or feedback to train the agent on – and how this decision can be justified.\n\nEither way, important normative questions remain. \n\n\n#### Alignment with what?\n\nA key concern among AI researchers is that the systems they build are properly responsive to human direction and control. Indeed, as Stuart Russell notes, it is important that artificial agents understand the real meaning of the instructions they are given, and that they do not interpret them in an excessively literal way – with the story of King Midas serving as a cautionary tale.\n\nAt the same time, there is growing recognition that AI systems may need to go beyond this – and be designed in a way that leads them to do the right thing by default, even in the absence of direct instructions from a human operator.\n\nOne promising approach holds that AI should be designed to align with human preferences. In this way, AI systems would learn to avoid outcomes that very few people wanted or desired. However, this approach also has certain weaknesses. Revealed preferences can be irrational or based on false information. They may also be malicious. Furthermore, preferences are sometimes ‘adaptive’: people who lead lives affected by poverty or discrimination may revise their hopes and expectations downwards in order to avoid disappointment. By aligning itself with existing human preferences, AI could therefore come to act on data that is heavily compromised.\n\nTo address this weakness, I suggest that AI systems need to be properly responsive to underlying human interests and values. A principle-based approach to AI alignment, which takes into account both of these factors, would yield agents that are less likely to do harm and more likely to promote human well-being. A principle-based approach to alignment could also be sensitive to other considerations, such as the welfare of future generations, non-human animals and the environment.\n\n#### Three approaches\n\nThe final part of the paper looks at the ways in which principles for AI alignment might be identified.\n\nIn this context, I suggest that the main challenge is not to identify ‘true’ moral principles and encode them in AI – for even if we came to have great confidence in the truth of a single moral theory there would still be people with different beliefs and opinions who disagreed with us. Instead, we should try to identify principles for alignment that are acceptable to people who ascribe to a wide range of reasonable points of view. 
Principles of this kind could be arrived at in at least three different ways.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bf1adb861b5521f64c63_Fig%201%20copy.jpg)One approach looks at the possibility that there is an *overlapping consensus* between the moral beliefs held by people around the world. If such a consensus exists, then AI could be aligned with it – and potentially command widespread support – without encountering the problem of value imposition. In this regard, human rights are particularly promising. For while the idea of universal human rights is not wholly uncontested, the principles they embody command significant international support in practice. They also find justification in African, Islamic, Western, and Confucian philosophical traditions.\n\nA second approach to pluralistic value alignment seeks to model fair principles for AI using the idea of a *‘veil of ignorance’*. The veil of ignorance is a device proposed by the philosopher John Rawls, to help people with different values and perspectives agree upon principles of justice for a society. The central claim is that when choosing principles of this kind, people should do so from an imaginary position where they do not know who they will be in that society, or what specific moral view they will hold. As a result, they will deliberate impartially and choose principles that do not unduly favour themselves. A similar approach could be used to model principles for AI.\n\nAlthough it is difficult to say what people would choose in this situation without knowing more about the specific form of AI in question, it seems plausible that they would want to ensure that this technology is safe, amenable to human control, and that its benefits are distributed widely.\n\nThe final approach looks at ways in which *social choice theory* can be used to combine different viewpoints and inform the direction AI should take. One school of thought focuses on mathematical integration of individual preferences into a single ranking – which could be used to guide AI. More promising still are democratic methods such as voting and broad-based deliberation. When used successfully, these approaches reflect the value of equality and have the potential to ensure that principles for AI alignment enjoy widespread legitimacy.\n\n#### Further research\n\nEach proposal discussed here is tentative. They can be developed and combined in many different ways. This paper has benefited from feedback provided by over fifty people, including from audiences at workshops convened at Stanford University, Princeton University, PAI, the University of Warwick, and the University of California, Berkeley. Moving forward, our hope is that this paper can contribute to the growing conversation about AI systems and their alignment with human values.", "date_published": "2020-01-13T00:00:00Z", "authors": ["Iason Gabriel"], "summaries": []} +{"id": "86b4bd46b476ab671db57fe95df96c4c", "title": "International evaluation of an AI system for breast cancer screening", "url": "https://www.deepmind.com/blog/international-evaluation-of-an-ai-system-for-breast-cancer-screening", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Breast cancer is the second leading cause of death from cancer in women, but outcomes have been shown to improve if caught and treated early. 
This is why many countries around the world have set up breast cancer screening programmes, aiming to identify breast cancer at earlier stages of the disease, when treatment can be more successful.\n\nHowever, interpreting mammograms (breast x-rays) remains challenging, as evidenced by the high variability of experts’ performance in detecting cancer. In this collaborative research with [Google Health](https://health.google/) & [Cancer Research UK](https://www.cancerresearchuk.org/) Imperial Centre, [Northwestern University](https://www.northwestern.edu/), and [Royal Surrey County Hospital](https://www.royalsurrey.nhs.uk/) now [published in Nature](https://www.nature.com/articles/s41586-019-1799-6.epdf?author_access_token=V_LKV2xpSv9G1dhANYeWM9RgN0jAjWel9jnR3ZoTv0M5zwPVx5jT4z_z-YkUZTBT6_1AtRXi8QouJM7xB-oSN-cVBoH7f_QTgx-yQN3UBEVfkvO1_5urNT-CZHGCEQNGlCuO69tMQYak4SmdoDqyzg%3D%3D), we developed an AI system capable of surpassing clinical specialists from the UK and US in predicting breast cancer from mammograms, as confirmed by biopsy.\n\n#### Breast cancer screening datasets\n\nBreast cancer screening programmes vary from country to country. In the US, women are typically screened every one to two years, and their mammograms are interpreted by a single radiologist. In the UK, women are screened every three years, but each mammogram is interpreted by two radiologists, with an arbitration process in case of disagreement. We utilised large datasets collected in both countries to develop and evaluate this AI system.\n\nThe UK evaluation dataset consisted of a random sample of 10% of all women with screening mammograms at two sites in London between 2012 and 2015. It included 25,856 women, 785 of which had a biopsy, and 414 women with cancer that was diagnosed within three years of imaging. These de-identified data was collected as part of the [OPTIMAM](http://commercial.cancerresearchuk.org/optimam-mammography-image-database-and-viewing-software) database effort by Cancer Research UK, and are subject to strict privacy constraints.\n\nThe US evaluation dataset consisted of de-identified screening mammograms of 3,097 women collected between 2001 and 2018 from one academic medical centre. We included images from all 1,511 women who were biopsied during this time period and a random subset of women who never underwent biopsy. Among the women who received a biopsy, 686 were diagnosed with cancer within 2 years of imaging.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228be8cea4387c56ae3cd5f_Fig%201.jpg)Each mammogram has four images - two of each breast from different angles.#### Assessing the performance of the AI system\n\nWe compared the performance of the AI system against decisions made by individual human specialists in the original screening visit. In this evaluation, we found that the AI had an absolute reduction in false positives (women incorrectly referred for further investigation) of 5.7% for US subjects and 1.2% for UK subjects, and a reduction in false negatives (women incorrectly missed for further investigation)  of 9.4% for US subjects and 2.7% for UK subjects, compared to human experts. 
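To make these figures concrete, the sketch below shows how absolute reductions in false-positive and false-negative rates can be computed from confusion-matrix counts. The counts are invented for illustration and are not taken from the study.

```python
def rates(tp, fp, tn, fn):
    """Return (false positive rate, false negative rate) from raw counts."""
    false_positive_rate = fp / (fp + tn)  # healthy women incorrectly recalled
    false_negative_rate = fn / (fn + tp)  # cancers incorrectly missed
    return false_positive_rate, false_negative_rate


# Hypothetical reader vs. AI system on the same 2,100 screening cases.
reader_fpr, reader_fnr = rates(tp=80, fp=120, tn=1880, fn=20)
ai_fpr, ai_fnr = rates(tp=84, fp=90, tn=1910, fn=16)

# An "absolute reduction" is a difference of rates, in percentage points.
print(f"absolute FP reduction: {100 * (reader_fpr - ai_fpr):.1f} points")
print(f"absolute FN reduction: {100 * (reader_fnr - ai_fnr):.1f} points")
```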
See the [paper](https://www.nature.com/articles/s41586-019-1799-6.epdf?author_access_token=V_LKV2xpSv9G1dhANYeWM9RgN0jAjWel9jnR3ZoTv0M5zwPVx5jT4z_z-YkUZTBT6_1AtRXi8QouJM7xB-oSN-cVBoH7f_QTgx-yQN3UBEVfkvO1_5urNT-CZHGCEQNGlCuO69tMQYak4SmdoDqyzg%3D%3D) for more extensive results.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bea6779b4947615344dd_Fig%202.jpg)The AI system accurately predicts, solely from screening mammograms, whether a patient will have a biopsy positive for breast cancer (note that only a small fraction of screening visits will result in a biopsy). It does so more accurately than individual human specialists, with lower false positive rates (for cancer prediction within three years for the UK dataset, and within two years for the US dataset). The top left indicates peak performance, with no false positives or false negatives. Credit: McKinney et al, Nature#### Generalisation across populations\n\nTo evaluate whether the AI system was able to generalise across populations and screening settings, we ran an experiment in which the AI was only allowed to learn from data from UK subjects, and then evaluated it on data from US subjects. This experiment showed that the AI system still surpassed human expert performance on US data.\n\nThis is an encouraging avenue for future research and gives more confidence about the robustness of the AI system. It might be possible that an AI diagnostic system could be beneficial even when used in areas where there is not a significant history of screening mammography on which to train it.\n\n#### Future research & potential applications\n\nWe’ve yet to determine how to best deploy an AI system for clinical use in mammography. However, we investigated one possible such scenario by using the AI system as a “second reader”. We simulated this by treating the prediction of the AI system as an independent second opinion for every mammogram, taking the place of the ‘second reader’ in the UK ‘double reading’ system. When the AI and the clinician disagreed, the existing arbitration process would take place. In these simulated experiments, we showed that an AI-aided double-reading system could achieve non-inferior performance to the UK system with only 12% of the current second reader workload.\n\nFurther research, including prospective clinical studies, will be required to understand the full extent to which this technology can benefit breast cancer screening programmes.", "date_published": "2020-01-01T00:00:00Z", "authors": ["Scott Mayer McKinney *", "Marcin T. Sieniek *", "Varun Godbole *", "Jonathan Godwin", "Natasha Antropova", "Hutan Ashrafian *", "Trevor Back", "Mary Chesus", "Greg C Corrado *", "Ara Darzi *", "Mozziyar Etemadi *", "Florencia Garcia-Vicente *", "Fiona J Gilbert *", "Mark Halling-Brown *", "Demis Hassabis", "Sunny Jansen *", "Alan Karthikesalingam", "Christopher J Kelly", "Dominic King", "Joseph Ledsam", "David Melnick *", "Hormuz Mostofi *", "Bernardino Romera Paredes", "Lily Peng *", "Joshua Jay Reicher *", "Richard Sidebottom *", "Mustafa Suleyman", "Daniel Tse *", "Kenneth C. 
Young *", "Jeffrey De Fauw", "Shravya Shetty *"], "summaries": []} +{"id": "634e81b378a5171c0a58e04eab84eda1", "title": "Restoring ancient text using deep learning: a case study on Greek epigraphy", "url": "https://www.deepmind.com/blog/restoring-ancient-text-using-deep-learning-a-case-study-on-greek-epigraphy", "source": "deepmind_technical_blog", "source_type": "blog", "text": "Historians rely on different sources to reconstruct the thought, society and history of past civilisations. Many of these sources are text-based – whether written on scrolls or carved into stone, the preserved records of the past help shed light on ancient societies. However, these records of our ancient cultural heritage are often incomplete: due to deliberate destruction, or erosion and fragmentation over time. This is the case for inscriptions: texts written on a durable surface (such as stone, ceramic, metal) by individuals, groups and institutions of the past, and which are the focus of the discipline called [epigraphy](https://en.wikipedia.org/wiki/Epigraphy). Thousands of inscriptions have survived to our day; but the majority have suffered damage over the centuries, and parts of the text are illegible or lost (Figure 1). The reconstruction (\"restoration\") of these documents is complex and time consuming, but necessary for a deeper understanding of civilisations past.\n\nOne of the issues with discerning meaning from incomplete fragments of text is that there are often multiple possible solutions. In many word games and puzzles, players guess letters to complete a word or phrase – the more letters that are specified, the more constrained the possible solutions become. But unlike these games, where players have to guess a phrase in isolation, historians restoring a text can estimate the likelihood of different possible solutions based on other context clues in the inscription – such as grammatical and linguistic considerations, layout and shape, textual parallels, and historical context. Now, by using machine learning trained on ancient texts, we’ve built a system that can furnish a more complete and systematically ranked list of possible solutions, which we hope will augment historians’ understanding of a text.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bcb8d5510c7928a6a8ee_Fig%201.jpg)Figure 1: Damaged inscription: a decree of the Athenian Assembly relating to the management of the Acropolis (dating 485/4 BCE). IG I3 4B. (CC BY-SA 3.0, WikiMedia) \n#### Pythia\n\nPythia – which takes its name from the woman who delivered the god Apollo's oracular responses at the Greek sanctuary of Delphi – is the first ancient text restoration model that recovers missing characters from a damaged text input using deep neural networks. Bringing together the disciplines of ancient history and deep learning, the present work offers a fully automated aid to the text restoration task, providing ancient historians with multiple textual restorations, as well as the confidence level for each hypothesis.\n\nPythia takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions (texts written in the Greek alphabet dating between the seventh century BCE and the fifth century CE). The architecture works at both the character- and word-level, thereby effectively handling long-term context information, and dealing efficiently with incomplete word representations (Figure 2). 
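As a rough illustration of that input representation (our own toy preprocessing, not the actual Pythia or PHI-ML pipeline), missing characters can be marked with '?' and any word containing them mapped to an unknown-word token, as in Figure 2 below:

```python
def mask_text(text, hidden_positions, mask="?", unk="<unk>"):
    """Hide characters at the given positions and mark affected words as unknown."""
    chars = list(text)
    for i in hidden_positions:
        chars[i] = mask
    masked = "".join(chars)
    words = [unk if mask in word else word for word in masked.split()]
    return masked, words


masked_chars, word_tokens = mask_text("μηδέν ἄγαν", hidden_positions=[7, 8])
print(masked_chars)  # expected: μηδέν ἄ??ν (assuming precomposed accented characters)
print(word_tokens)   # expected: ['μηδέν', '<unk>']
```

Working at the character level recovers the hidden letters, while the word-level view supplies context even when a word is incomplete.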
This makes it applicable to all disciplines dealing with ancient texts ([philology](https://en.wikipedia.org/wiki/Philology), [papyrology](https://en.wikipedia.org/wiki/Papyrology), [codicology](https://en.wikipedia.org/wiki/Codicology)) and applies to any language (ancient or modern).\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bdabafc10511b9bf2878_Fig%202.jpg)Figure 2: Pythia processing the phrase μηδέν ἄγαν (Mēdèn ágan) \"nothing in excess,\" a fabled maxim inscribed on Apollo’s temple in Delphi. The letters \"γα\" are the characters to be predicted, and are annotated with ‘?’. Since ἄ??ν is not a complete word, its embedding is treated as unknown (‘unk’). The decoder outputs correctly \"γα\". \n#### Experimental evaluation\n\nTo train Pythia, we wrote a non-trivial pipeline to convert the largest digital corpus of ancient Greek inscriptions ([PHI Greek Inscriptions](https://epigraphy.packhum.org/)) to machine actionable text, which we call PHI-ML. As shown in Table 1, Pythia’s predictions on PHI-ML achieve a 30.1% character error rate, compared to the 57.3% of evaluated human ancient historians (specifically, these were PhD students from Oxford). Moreover, in 73.5% of cases the ground-truth sequence was among the Top-20 hypotheses of Pythia, which effectively demonstrates the impact of this assistive method on the field of digital epigraphy, and sets the state-of-the-art in ancient text restoration.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bdbd5484b9bbe1f8bca4_Fig%203.jpg)Table 1: Pythia's Predictive performance of on PHI-ML.#### The importance of context\n\nTo evaluate Pythia’s receptiveness to context information and visualise the attention weights at each decoding step, we experimented with the modified lines of an inscription from the city of Pergamon (in modern-day Turkey)\\*. In the text of Figure 3, the last word is a Greek personal name ending in -ου. We set ἀπολλοδώρου (\"Apollodorou\") as the personal name, and hid its first 9 characters. This name was specifically chosen because it already appeared within the input text. Pythia attended to the contextually-relevant parts of the text - specifically, ἀπολλοδώρου. The sequence ἀπολλοδώρ was predicted correctly. As a litmus test, we substituted ἀπολλοδώρου in the input text with another personal name of the same length: ἀρτεμιδώρου (\"Artemidorou\"). The predicted sequence changed accordingly to ἀρτεμιδώρ, thereby illustrating the importance of context in the prediction process.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bdda39cc63339ecee0f3_Fig%204.jpg)Figure 3: Visualisation of the attention weights for the decoding of the first 4 missing characters. To aid visualisation, the weights within the area of the characters to be predicted (‘?’) are in green, and in blue for the rest of the text; the magnitude of the weights  is represented by the colour intensity. The ground-truth text ἀπολλοδώρ appears in the input text, and Pythia attends to the relevant parts of the sequence. \n#### Future research\n\nThe combination of machine learning and epigraphy has the potential to impact meaningfully  the study of inscribed texts, and widen the scope of the historian’s work. For this reason, we have open-sourced an online Python notebook, Pythia, and PHI-ML’s processing pipeline at , collaborating with scholars at the University of Oxford. 
By so doing, we hope to aid future research and inspire further interdisciplinary work.", "date_published": "2019-10-15T00:00:00Z", "authors": ["Yannis Assael", "Thea Sommerschield*", "Jonathan Prag*"], "summaries": []} +{"id": "49d4109602f4fff3766a5639c581992a", "title": "Making Efficient Use of Demonstrations to Solve Hard Exploration Problems", "url": "https://www.deepmind.com/blog/making-efficient-use-of-demonstrations-to-solve-hard-exploration-problems", "source": "deepmind_technical_blog", "source_type": "blog", "text": "![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228bc30b71ebc05d2d0b6cb_Fig%201.gif)We propose a new agent, which we call Recurrent Replay Distributed DQN from Demonstrations (R2D3). R2D3 is designed to make efficient use of demonstrations to solve sparse reward tasks in partially observed environments with highly variable initial conditions. The architecture of the R2D3 agent is shown below. There are several actor processes, each running independent copies of the behavior against an instance of the environment. Each actor streams its experience to a shared agent replay buffer, where experience from all actors is aggregated and globally prioritized. The actors periodically request the latest network weights from the learner process in order to update their behavior.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228ba8915232a5a9d7138d0_Fig%202.jpg)As shown in the figure, R2D3 has two replay buffers: an agent replay and a demo replay buffer, which is populated with expert demonstrations of the task to be solved.  Maintaining separate replay buffers for agent experience and expert demonstrations allows us to prioritize the sampling of agent and expert data separately. The learner process samples batches of data from both the agent and demo replay buffers simultaneously. The demo ratio (ρ) controls the proportion of data coming from the expert demonstrations vs from the agent’s own experience. The demo ratio is implemented at a batch level by randomly choosing whether to sample from the expert replay buffer independently with probability ρ. When ρ=0, R2D3 performs standard RL, when ρ=1, R2D3 performs batch RL on the data in demo buffer. The loss is optimized by the learner by using n-step double Q-learning (with n=5) and a dueling architecture.\n\nIn each replay buffer, we store fixed-length sequences of *(s, a, r)* tuples where adjacent sequences overlap by 40 time-steps. These sequences never cross episode boundaries. Given a single batch of trajectories we unroll both online and target networks on the same sequence of states to generate value estimates with the recurrent state initialized to zero.\n\n#### Hard Eight Task Suite\n\nThe tasks in the Hard Eight task suite require the agent to perform a sequence of high level skills in order to gain access to a large apple which gives the reward and terminates the episode. In the picture below, we give an example from the Baseball task. The agent must learn to execute these high level skills as a sequence of low level actions in the environment. The sequence of low-level actions can be quite long and consequently it is unlikely that the task will be solved by random exploration. 
Note that each step in this task involves interaction with physical objects in the environment, which are shown in bold.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228ba9778f7e36c267110e5_Fig%203.jpg) \n\n\nIn the figure below, for each task the agent (blue triangle) must interact with objects in its environment in order to gain access to a large apple (red triangle) that provides reward. Our 3D environments are procedurally generated such that at every episode the state of the world, such as the shapes, colors and positions of objects, is different. The environment is partially observable, which means that the agent can only see part of the environment at every timestep. Since the agent receives the reward only at the end of the episode and needs to execute a long sequence of actions, exploration can be difficult. Furthermore, highly variable initial conditions and the objects that the agent can interact with can make exploration even more difficult.\n\n![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228baa5b920ee7df8857dd2_Fig%204.jpg)#### Humans demonstrating tasks\n\nBelow is a playlist of eight videos of humans performing each of the tasks to demonstrate the steps involved.\n\n#### R2D3 on Hard Eight Tasks\n\nBelow is a playlist of eight representative videos of the R2D3 agent after training on each of these tasks.\n\n#### Additional R2D3 Results\n\nWe ran a few additional experiments, shown in the playlist below, to get more information about the tasks R2D3 did not solve, or solved incorrectly.\n\n##### Remember Sensor\n\nThis task requires a long memory, and has the longest episode length of any task in the suite. In an attempt to mitigate these issues, we trained the agent using a higher action repeat of 4, which reduces the episode length, and used stale LSTM states instead of zero LSTM states, which provides information from earlier in the episode. This allows R2D3 to learn policies that display reasonable behavior.\n\n##### Throw Across\n\nThe demonstrations collected for this task had a very low success rate of 54%. We attempted to compensate for this by collecting an additional 30 demos. When we trained R2D3 with all 130 demos, all seeds solved the task.\n\n##### Wall Sensor Stack\n\nThe original Wall Sensor Stack environment had a bug that the R2D3 agent was able to exploit. We fixed the bug and verified that the agent can learn the proper stacking behaviour.", "date_published": "2019-09-05T00:00:00Z", "authors": ["Caglar Gülçehre", "Tom Le Paine", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams", "Gabriel Barth-Maron", "Ziyu Wang", "Nando de Freitas", "Worlds Team"], "summaries": []}