AI has a problem with objectifying women

Community Article Published May 24, 2024

Last week, OpenAI did a much-publicized demo of its new chatbot, GPT-4o, now endowed with a speech interface. One of the voices used during the demo, nicknamed Sky, instantly attracted widespread attention, as much for the servile attitude it adopted - commenting on what one of the presenters was wearing - as for its eerie resemblance to the voice in “Her”, Spike Jonze’s 2013 movie about a man who falls in love with his operating system, voiced by Scarlett Johansson. Sam Altman, the CEO of OpenAI, had even tweeted a single word, “her”, the same day, cementing the connection in people’s minds.

But a week after the GPT-4o demo, Scarlett Johansson herself issued a statement saying that OpenAI had mimicked her voice despite her refusing, on multiple occasions, to collaborate with the company. OpenAI had already disabled the voice over the weekend, but the question remains: if a Hollywood actress feels used and abused by the way an AI tool misrepresents her voice and likeness, what hope is there for the rest of us?

I’ve been working in AI for over a decade, in a field where women make up less than 12% of researchers. I’m often the only woman speaking on a panel or sitting at the table, and - I must admit - working to make sure that 50% of the world’s population is represented in technology that has the potential to change humanity is truly exhausting. And yet, the small choices we make when creating AI systems can have wide-ranging repercussions, contributing to entrenched perceptions and persistent stereotypes.

Take Lena Forsén, whose image - the famous “Lenna” test image - was used as the industry standard for testing compression algorithms and machine learning models alike, despite her repeated requests for it to stop being used. As far as I know, all of the official copies have since been taken down (before that, you could load the image automatically in libraries like OpenCV and SciPy), but people still use it in academic papers and model demos, since it has acquired a cult status in the AI community.

Beyond depictions of individual women, representations of the entire gender are often incredibly objectifying and demeaning: for years in computer vision, a perfectly acceptable task for testing the performance of AI models was applying makeup to images of women, or swapping out their clothing from jeans to miniskirts and back.

Image source: Zhang et al. (2019)

These systems were motivated by their obvious advertising applications, and the question of consent and representation was sorely lacking; each time I spoke up, I faced pushback - it was just a “benchmark”, after all.

Image source: Mo et al. (2019)

With the advent of increasingly realistic image generation models, the objectification of women has only gotten worse. Much like commercials that use women in bikinis eating burgers to sell pickup trucks, AI-generated images of women - celebrities, but also anonymous people from the internet - are often used to demonstrate how good image generation models have gotten. Because what better way to show off an image generation model’s progress than a half-naked woman in a fried chicken bikini?

Image source: Reddit

This is despite image generation models having documented problematic behaviors, from the spontaneous generation of nude images of women to the gender and racial biases baked into them. AI undoubtedly has a problem with the objectification of women, and it comes with consequences for Hollywood celebrities and mere mortals like myself alike.

But not all is lost - there are many actions that can be taken to shift the status quo. For decision makers, being more cognizant of issues of gender and power can translate into prioritizing diversity on boards and in executive roles.

For AI developers, this means avoiding the implicit objectification of women in system demos, not reaching for the usual “traditionally attractive young nubile woman” images as the go-to example, and obtaining explicit consent from the people whose images you choose to illustrate your system.

For the community at large, we can do better at supporting organizations like Women in Machine Learning (whose board I serve on!) that amplify the voices of women in our field and empower future generations.

Because the situation with Scarlett isn’t the first instance of this kind of treatment of women and minorities by people in positions of power in AI, and it will be far from the last. But pushing back on this treatment - and demanding respect, consent, and a seat at the table - can help turn the tide on AI’s longstanding tradition of objectifying women.