Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings

Published September 29, 2023

One of the most important things to know about “ethics” in AI is that it has to do with values. Ethics doesn’t tell you what’s right or wrong; rather, it provides a vocabulary of values – transparency, safety, justice – and frameworks for prioritizing among them. This summer, we were able to take our understanding of values in AI to legislators in the E.U., U.K., and U.S., to help shape the future of AI regulation. This is where ethics shines: helping carve out a path forward when laws are not yet in place.

In keeping with Hugging Face’s core values of openness and accountability, we are sharing a collection of what we’ve said and done here. This includes our CEO Clem’s testimony to U.S. Congress and statements at the U.S. Senate AI Insight Forum; our advice on the E.U. AI Act; our comments to the NTIA on AI Accountability; and our Chief Ethics Scientist Meg’s comments to the Democratic Caucus. Common to many of these discussions were questions about why openness in AI can be beneficial, and we share a collection of our answers to this question here.

In keeping with our core value of democratization, we have also spent a lot of time speaking publicly, and have been privileged to speak with journalists to help explain what’s happening in the world of AI right now.

Some of our talks released this summer include Giada’s TED presentation on whether “ethical” generative AI is possible (the automatic English translation subtitles are great!); Yacine’s presentations on Ethics in Tech at the Markkula Center for Applied Ethics and Responsible Openness at the Workshop on Responsible and Open Foundation Models; Katie’s chat about generative AI in health; and Meg’s presentation for London Data Week on Building Better AI in the Open.

Of course, we have also made progress on our regular work (our “work work”). The fundamental value of approachability has emerged across our work, as we've focused on how to shape AI in a way that’s informed by society and human values, where everyone feels welcome. This includes a new course on AI audio from Maria and others; a resource from Katie on Open Access clinical language models; a tutorial from Nazneen and others on Responsible Generative AI; our FAccT papers on The Gradient of Generative AI Release (video) and Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML (video); as well as workshops on Mapping the Risk Surface of Text-to-Image AI with a participatory, cross-disciplinary approach and Assessing the Impacts of Generative AI Systems Across Modalities and Society (video).

We have also moved forward with our goals of fairness and justice with bias and harm testing, recently applied to the new Hugging Face multimodal model IDEFICS. We've worked on how to operationalize transparency responsibly, including updating our Content Policy (spearheaded by Giada). We've advanced our support of language diversity on the Hub by using machine learning to improve metadata (spearheaded by Daniel), and our support of rigour in AI by adding more descriptive statistics to datasets (spearheaded by Polina) to foster a better understanding of what AI learns and how it can be evaluated.

Drawing from our experiences this past season, we now provide a collection of Hugging Face resources that are particularly useful in current AI ethics discourse, available here: https://huggingface.co/society-ethics.

Finally, we have been surprised and delighted by public recognition for many of the society & ethics regulars, including both Irene and Sasha being selected for MIT’s 35 Innovators Under 35 (Hugging Face makes up ¼ of the AI 35 Under 35!); Meg being included in lists of influential AI innovators (WIRED, Fortune); and Meg and Clem’s selection in TIME’s 100 Most Influential People in AI. We are also very sad to say goodbye to our colleague Nathan, who has been instrumental in our work connecting ethics to reinforcement learning for AI systems. As his parting gift, he has provided further details on the challenges of operationalizing ethical AI in RLHF.

Thank you for reading!

-- Meg, on behalf of the Ethics & Society regulars at Hugging Face