Black Nazis, Asian Vikings & the white paranoia that haunts generative AI

How did Google’s latest AI generator, Gemini, inadvertently fuel an ongoing culture war against diversity, equity and inclusion in Big Tech? Dr Thao Phan, who delivered the 2023 Hancock Lecture, unpacks how paranoia has become a defining feature of digital culture.

James Damore’s memo, ‘Google’s Ideological Echo Chamber’, commonly called the ‘Google memo’, alleged that Google engaged in ‘reverse discrimination’ in its diversity programs and that male–female disparities within tech can partly be explained by biological differences.

Only three weeks after Gemini’s launch, Google suspended the image generation features of its newest generative AI model over accusations that it contained an ‘anti-white bias’.

The move followed a series of viral posts by X (formerly Twitter) users who were outraged that prompts asking for images of America’s founding fathers, Vikings, the Pope, and 1943 German soldiers (intended to produce images of Nazis) returned images almost exclusively of Black, Asian, First Nations, and other racially diverse people. Conservative commentators seized on these technical inaccuracies as evidence that Google had been infiltrated by an insidious ‘woke’ ideological agenda that was rewriting the historical record and discriminating against white people.

These accusations are part of a larger culture war against diversity, equity and inclusion (DEI) in Big Tech. Many will remember the ten-page anti-diversity memo authored by then-Google employee James Damore, which argued that biology (and not discrimination) caused the underrepresentation of women in the tech industry.

He described women as having personality traits that made them less suitable for leadership and high-stress jobs, in contrast to men, who were better suited because of their innate drives for status. Elon Musk (owner of X) has likewise declared that ‘DEI must die’ and has taken the crusade into his own hands, launching an anti-woke chatbot that, much to the disappointment of him and his followers, turned out to be no less woke than its competitors.

What is Gemini and how did this happen?

Gemini is Google’s answer to ChatGPT. ChatGPT, owned by rival company OpenAI, is a large language model that generates responses to prompts written by users. These prompts are designed to be conversational: you ask it a question, it gives you a response, you provide more details, and it responds again, taking those new details into account. But Gemini was designed to go beyond the capabilities of many standard large language models. It is multi-modal, meaning it can generate not just text but many kinds of output, including code, audio, images and video. Gemini has replaced Google’s previous chatbot, Bard, in a rebrand that signals the company’s intention to push generative AI into new commercial areas.
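As a rough illustration of that conversational loop (a hypothetical sketch, not Google’s or OpenAI’s actual API), each new prompt is sent to the model together with the accumulated history of the exchange, which is how earlier details shape later answers:

```python
# Minimal sketch of a multi-turn chat loop. `generate_reply` is a hypothetical
# placeholder standing in for a call to a hosted model such as Gemini or ChatGPT.

def generate_reply(history: list[dict]) -> str:
    # Placeholder: a real system would send `history` to a model API here.
    return f"(model reply, given {len(history)} earlier messages)"

history = []
for user_turn in ["What is Gemini?", "How is it different from Bard?"]:
    history.append({"role": "user", "content": user_turn})   # add the new question
    reply = generate_reply(history)                           # model sees the whole exchange
    history.append({"role": "model", "content": reply})       # keep the answer for context
    print(user_turn, "->", reply)
```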

Google is painfully aware of the many biases embedded in AI models, from gender bias in hiring algorithms to racial bias in its own search tool. As a result of these scandals, Google placed extra emphasis on safety evaluations in Gemini to guard against sexist and racist outputs. But fixing gender and racial bias in models is more complex than it seems.

For instance, when OpenAI first released its own AI image generation tool, DALL-E 2, it was immediately criticised for reproducing racial stereotypes. One study found that ‘models tended to produce images of people that look white and male’, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated white men 97% of the time when given prompts like ‘CEO’ or ‘director’. To mitigate this, OpenAI introduced a technique now known as ‘prompt transformation’: automatically modifying user prompts to intentionally adjust the outputs. In this case, it added words such as ‘African’, ‘Asian’ or ‘Latin’ to ensure outputs included a range of racially diverse faces.
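In code, prompt transformation amounts to a thin layer that sits between the user and the image model. The sketch below is purely illustrative: the term list, the keyword check and the function names are hypothetical, not OpenAI’s or Google’s actual implementation.

```python
import random

# Terms cited above; a real system would use a far more sophisticated rewriter.
DIVERSITY_TERMS = ["African", "Asian", "Latin"]

def mentions_person(prompt: str) -> bool:
    # Crude keyword check standing in for a real classifier.
    keywords = {"ceo", "director", "doctor", "person", "king", "pope", "soldier"}
    return any(word in prompt.lower() for word in keywords)

def transform_prompt(user_prompt: str) -> str:
    """Silently append a demographic descriptor before the prompt reaches the image model."""
    if mentions_person(user_prompt):
        return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"
    return user_prompt

# The user sees only their original prompt; the model receives the modified one.
print(transform_prompt("a portrait of a CEO"))
# e.g. "a portrait of a CEO, Asian"
```

The brittleness the article goes on to describe follows directly from this design: the rewrite is applied indiscriminately, regardless of whether the prompt concerns a contemporary CEO or a historically specific figure.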

Brittle models, brittle solutions

An X user asked Gemini to generate images of the ‘King of England’; the results included a middle-aged white man, an illustration of King Charles III, a young Black woman and a young Asian man.

What these bias mitigation techniques show us is how brittle these models can be. ‘Fixing’ a problem with the model is not as easy as finding new training data. Google learnt this the hard way when it controversially tried to improve its facial recognition dataset by giving homeless people with dark skin gift cards in exchange for their biometric data. Indeed, AI companies are already under fierce criticism over the unethical collection and use of copyrighted materials to train their generative models.

How do we ethically source a dataset of racially diverse faces large enough to counter the entrenched biases of Western visual culture? While there have been experiments with synthetic techniques to try to fake realistic datasets, many companies have already invested time and resources into training their models, and re-training them is often impractical.

These solutions are extremely brittle, but they are often all that we have. While we don’t know for certain whether Gemini uses prompt transformation as part of its AI safety system, the endless parade of Indian Popes, Asian Vikings and even a Black woman as the King of England gives us some pretty strong hints that the technique is being used.

Paranoid whiteness

The choice by commentators to attribute Gemini’s technical flaws to a woke conspiracy rather than to brittle solutions is indicative of a greater paranoia that has become a defining feature of digital culture. Here, the paranoia isn’t caused by bad-faith actors spreading misinformation or deepfakes that are indiscernible from real images.

Google’s AI generator, Gemini, provides a disclaimer that it may display inaccurate information and no longer generates images of people.

Instead, it is technical errors and automated procedures that produce the flawed and fake images that affirm the beliefs of conspiracy-minded people. In this case, they surface latent (or perhaps explicit) fears of white replacement that were previously incorporated into anti-immigration rhetoric and now circulate again in anti-DEI rhetoric.

In both instances, the claim is that white bodies (the national body or the corporate body) are under threat from a racialised Other. This paranoid whiteness, which seeks evidence of its victimhood even in a transparently faked record, provides insight into the tensions and power struggles currently plaguing Silicon Valley and the cultures of Big Tech.

The very real attacks on, and erasures of, the labour of racialised and marginalised people in AI companies take a back seat to the imagined erasures of white people in AI-generated images. It has led people like Musk to claim that the programming that produced these AI images of Black and Brown people is ‘anti-civilisational’ because it does not reproduce the whiteness he is looking for.

While they may not be accurate, these images are nevertheless instructive, showing us what and whose anxieties are shaping the conversations around AI and its potential harms.

Acknowledgement
Many of the ideas in this piece came about through conversations with friends and colleagues. I’d particularly like to acknowledge Fabian Offert and Jathan Sadowski who helped to tease out these key points.

Liked this?

Stream Dr Thao Phan’s Hancock Lecture, Artificial figures: gender-in-the-making in algorithmic culture, presented at the 2023 Annual Academy Symposium.

About the author

Dr Thao Phan is a feminist technoscience researcher who specialises in the study of gender and race in algorithmic culture. In 2023, Thao presented the Hancock Lecture, Artificial figures: gender-in-the-making in algorithmic culture.

She has researched and published on topics including: the aesthetics of digital voice assistants like Siri, Amazon Echo, and Google Home; ideologies of ‘post-race’ in algorithmic culture; and AI in popular culture. Her research takes an interdisciplinary and intersectional approach, drawing on theory and methods from feminist science and technology studies, media and cultural studies, queer and gender studies, critical race studies, and critical algorithm studies.

As a Research Fellow at the Monash node of the ARC Centre of Excellence on Automated Decision-Making and Society, Thao works in the institute’s People Program developing creative methods to study the future of automated mobilities.

Acknowledgement of Country

The Australian Academy of the Humanities recognises Australia’s First Nations Peoples as the traditional owners and custodians of this land, and their continuous connection to country, community and culture.