AI Policy @🤗: Response to the U.S. National Telecommunications and Information Administration’s (NTIA) Request for Comment on AI Accountability

Published June 20, 2023

On June 12th, Hugging Face submitted a response to the U.S. Department of Commerce's NTIA request for comment on AI accountability policy. In our response, we stressed the role of documentation and transparency norms in driving AI accountability processes. We also stressed the necessity of drawing on the full range of expertise, perspectives, and skills of the technology's many stakeholders to address the daunting prospects of a technology whose unprecedented growth poses more questions than any single entity can answer.

Hugging Face's mission is to “democratize good machine learning”. We understand “democratization” in this context to mean making Machine Learning systems not only easier to develop and deploy, but also easier for their many stakeholders to understand, interrogate, and critique. To that end, we have worked to foster transparency and inclusion through our education efforts, our focus on documentation, our community guidelines and approach to responsible openness, and by developing no- and low-code tools that allow people of all technical backgrounds to analyze ML datasets and models. We believe this helps everyone interested better understand the limitations of ML systems and how they can safely be leveraged to best serve users and those affected by these systems.

These approaches have already proven their utility in promoting accountability, especially in the larger multidisciplinary research endeavors we've helped organize, including BigScience (see our blog series on the social stakes of the project) and the more recent BigCode project (whose governance is described in more detail here).
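As one concrete illustration of the documentation norms mentioned above, the minimal sketch below uses the `huggingface_hub` library to programmatically retrieve a model card from the Hub. The model ID shown is only an example; any public model with a card would work.

```python
from huggingface_hub import ModelCard

# Load the model card (the repository's README.md) for a public model on the Hub.
# "bert-base-uncased" is just an example ID.
card = ModelCard.load("bert-base-uncased")

# Structured metadata from the card's YAML header: license, language, tags, etc.
# This machine-readable layer is what makes documentation auditable at scale.
print(card.data.to_dict())

# The human-readable documentation: intended uses, limitations, training data, ...
print(card.text[:500])
```

Because the same card format carries both machine-readable metadata and free-form documentation, it can serve developers filling out internal requirements and external reviewers verifying them.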

Concretely, we make the following recommendations for accountability mechanisms:

  • Accountability mechanisms should focus on all stages of the ML development process. The societal impact of an AI-enabled system depends on choices made at every stage of its development in ways that are impossible to fully predict, and assessments that focus only on the deployment stage risk incentivizing surface-level compliance that fails to address deeper issues until they have caused significant harm.
  • Accountability mechanisms should combine internal requirements with external access and transparency. Internal requirements such as good documentation practices shape more responsible development and clarify developers' responsibility for enabling safer and more reliable technology. External access to internal processes and development choices is still necessary to verify claims and documentation, and to empower the technology's many stakeholders who reside outside of its development chain to meaningfully shape its evolution and promote their interests.
  • Accountability mechanisms should invite participation from the broadest possible set of contributors, including developers working directly on the technology, multidisciplinary research communities, advocacy organizations, policymakers, and journalists. Understanding the transformative impact of the rapid growth in adoption of ML technology is beyond the capacity of any single entity, and will require leveraging the full range of skills and expertise of our broad research community, of the technology's direct users, and of affected populations.

We believe that prioritizing transparency in both the ML artifacts themselves and the outcomes of their assessment will be integral to meeting these goals. You can find our more detailed response addressing these points here.