Race Row: AI Fails to Generate Interracial Couple Photos

Meta, the multinational tech conglomerate, is facing allegations of significant racial bias in its developing AI. Specifically, its image generator appears unable to produce photos of interracial couples.

Meta’s Mega Problem

Image Credit: Shutterstock /Gorodenkoff

The problem is not limited to romantic partners; it also appears when prompting for groups of friends or colleagues. When asked to create images of interracial couples, the tool consistently failed and often fell back on ethnic stereotypes.

Interracial Inconsistencies

Image Credit: Shutterstock / Nate Hovee

A CNN inquiry revealed that the tool falls short even across several prompts specifying different ethnic pairings. By default, the AI conjures an image in which both individuals are of the same race, even when diversity is specified.

An Ongoing Problem

Image Credit: Shutterstock / fizkes

While Meta’s AI has only been available since the end of last year, it has already been the subject of several scandals over insensitive output, leading many to question what sources the tool learns from and how it categorizes diverse populations.

Just a “Glitch”?

Image Credit: Shutterstock / Ollyy

Some may dismiss this as irrelevant or a “harmless glitch”, but the inability to generate these images could point to deeper issues. Interracial couples make up a substantial share of the American public, and research shows many of these relationships are long-lasting.

AI vs Reality

Image Credit: Shutterstock / Ground Picture

Yet the AI fails to reflect this reality. How, with such a large pool of representation to learn from, can the technology fail to account for it? Meta’s public response emphasizes eliminating bias and invites feedback to help address prejudices found within its systems.

More Than One

Image Credit: Shutterstock / rafapress

Meta is not alone in facing public disappointment. Other programs, such as ChatGPT and Google’s Gemini, have faced similar scandals and appear to be working toward solving the problem.

How AI Works

Image Credit: Shutterstock / SomYuZu

AI programs like ChatGPT and Meta AI are known as generative AI tools. They learn from extensive datasets, pulling from whatever resources are made available to them. These resources can include almost anything found on the internet, and if not vetted properly, they can harbor undetected racial biases.
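To make the idea concrete, here is a minimal, hypothetical Python sketch. The caption list is invented for illustration; real generative models train on billions of scraped image-caption pairs, but the principle is the same: if a concept is rare or absent in the training data, the model has little to generate from.

```python
from collections import Counter

# Invented example captions standing in for web-scraped training data;
# real generative models train on billions of image-caption pairs.
captions = [
    "white couple at the beach",
    "white couple cooking dinner",
    "asian couple hiking",
    "white couple at a cafe",
    "black couple dancing",
]

# A generative model can only reproduce patterns present in its data.
counts = Counter(caption.split(" couple")[0] for caption in captions)
print(counts)                   # Counter({'white': 3, 'asian': 1, 'black': 1})
print("interracial" in counts)  # False: the concept never appears in the data
```

In this toy dataset, no caption ever describes an interracial couple, so a model trained on it would have no examples of the concept to draw on, no matter how the prompt is worded.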

Techno-Racism

Image Credit: Shutterstock / Marko Aliaksandr

Efforts to resolve “techno-racism” are underway, but there are serious indications that AI is not yet capable of navigating complicated social problems such as racism and colorism. In fact, most development effort goes toward language acquisition rather than cultural awareness.

A Different Kind of Racist?

Image Credit: Shutterstock / ShotPrime Studio

Interracial couples are just the tip of the iceberg, however. While some propose that these AI bots need more diverse datasets, others believe the tools are engaging in white erasure.

The Other Side

Image Credit: Shutterstock / Maria Sbytova

Conservatives are backing these claims with a Fox News article released in late February. The Fox News Digital team conducted an unofficial experiment, hypothesizing that white people were unfairly underrepresented by the technology.

4 Programs Put to the Test

Image Credit: Shutterstock / Ascannio

The team’s informal research used four well-known AI programs: ChatGPT, Gemini, Meta AI, and Microsoft’s Copilot. Using various prompts, the team assessed “potential shortcomings”.

Fox News Confirms Suspicions 

Image Credit: Shutterstock / rafapress

The article concludes that while the chatbots readily answered prompts about representation for Black, Hispanic, and Asian demographics, they generated a different response for the white demographic.

Gemini And Meta “Fall Short”

Image Credit: Shutterstock / Primakov

Gemini and Meta AI were consistently described as “falling short” in representing white people, often asking the prompter to consider historic systems of oppression and warning against marginalizing other ethnic groups.

Victory for All?

Image Credit: Shutterstock / DavideAngelini

Those who subscribe to the idea of “white erasure” would count this study as a victory, confirming what they already believed. Others argue that some of these programs are performing a civic duty by checking forms of prejudice that users may not even be aware of.

Meet the Maker

Image Credit: Shutterstock / Gorodenkoff

A third, less obvious possibility lurks in the shadows: the AI’s bias reflects the awareness of its programmers. If the AI experts working on these projects build in safeguards against generating hate speech, then they are the ones who determine how the AI navigates it.
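For illustration only, here is a hypothetical Python sketch of what such a human-authored safeguard might look like. Every name in it is invented; production systems rely on trained moderation classifiers rather than a simple blocklist, but the point stands: the developers’ choices define what gets refused and how.

```python
# Hypothetical sketch of a developer-written guardrail. The blocked terms,
# refusal message, and generate_image stub are all invented; real systems
# use trained classifiers, but the principle holds: humans decide what
# the model refuses.
BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # chosen by developers

def generate_image(prompt: str) -> str:
    """Stand-in for the real image generator."""
    return f"<generated image for: {prompt}>"

def moderated_generate(prompt: str) -> str:
    """Refuse prompts that trip the developers' blocklist; otherwise generate."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "This request may violate the content policy."
    return generate_image(prompt)

print(moderated_generate("a happy interracial couple at dinner"))
```

Which terms go on the list, and what the refusal message says, are entirely human judgment calls, which is exactly why the programmers’ own awareness shapes the AI’s behavior.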

AI and Humanity

Image Credit: Shutterstock / Gorodenkoff

This, of course, is highly individual. Each computer scientist has their own background and belief system. Other factors could include corporate policy, the time allotted for AI training, and differing definitions of what counts as racist or prejudiced.

AI Is Here to Stay

Image Credit: Shutterstock / TierneyMJ

Artificial intelligence is a rapidly evolving industry, and many people already rely on it for everyday tasks. While it will continue to grow, answers to how it should handle cultural issues may only come when humans can answer those questions themselves.


Featured Image Credit: Pexels / Anna Shvets.

For transparency, this content was partly developed with AI assistance and carefully curated by an experienced editor to be informative and ensure accuracy.
