Meta, the powerful multinational tech conglomerate, has recently faced allegations of significant racial bias in its developing AI. Specifically, its image generator has been unable to produce photos of interracial couples.
Meta’s Mega Problem
The problem is not limited to romantic partners; it has also surfaced when prompting for groups of friends or colleagues. When asked to create images of interracial couples, the tool consistently failed and often fell back on ethnic stereotypes.
Interracial Inconsistencies
A CNN inquiry revealed that even across several prompts specifying different ethnic combinations, the tool still fell short. The AI's default practice was to render both individuals as the same race, even when diversity was explicitly requested.
An Ongoing Problem
While Meta's AI has only been available since the end of last year, it has already been the subject of several scandals over insensitive output, leading many to question what sources the tool is learning from and how it categorizes diverse populations.
Just a “Glitch”?
Some may deem this irrelevant or a "harmless glitch," but the inability to generate these images could point to deeper issues. Interracial couples make up a significant share of the American public, and research shows many of these relationships are long-lasting.
AI vs Reality
Yet the AI fails to reflect this reality. How, with such a large pool of real-world representation to learn from, can the technology fail to account for it? Meta's public response emphasizes eliminating bias and invites feedback to help address prejudices found within its systems.
More Than One
Meta is not alone in facing public disappointment. Other programs such as ChatGPT and Google's Gemini have faced similar scandals, and their makers appear to be working toward solving the problem.
How AI Works
AI programs like ChatGPT and Meta AI are known as generative AI tools. They learn from extensive datasets, pulling from whatever resources are made available to them. These resources can be almost anything found on the internet and, if not vetted properly, can harbor undetected racial biases.
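The core limitation described above can be sketched in a toy Python example. This is purely illustrative: real image generators are vastly more complex, and the "training data" here is invented for demonstration. But the principle holds: a model that samples from its training examples can only reproduce the pairings it has seen.

```python
import random

# Illustrative only: a naive "generator" that samples pairings
# from its training data. If the data never contains a pattern,
# the generator can never produce it.

# Hypothetical, deliberately skewed dataset: every couple shares
# one ethnicity label.
training_couples = [
    ("Asian", "Asian"),
    ("Black", "Black"),
    ("White", "White"),
] * 100  # repeated to mimic a larger dataset

def generate_couple(data, rng):
    """Sample a couple the way a naive model would: from seen examples."""
    return rng.choice(data)

rng = random.Random(0)
samples = [generate_couple(training_couples, rng) for _ in range(1000)]

# No interracial couple ever appears, because none existed in the data.
interracial = [c for c in samples if c[0] != c[1]]
print(len(interracial))  # 0
```

Real systems generalize far beyond literal lookup, but the sketch captures why unvetted or unrepresentative data matters: gaps and skews in what a model is shown tend to reappear in what it produces.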
Techno-Racism
Efforts to resolve "techno-racism" are underway, but there are serious indications that AI is not yet equipped to tackle complicated social problems such as racism and colorism. Indeed, most development effort goes toward language ability rather than cultural awareness.
A Different Kind of Racist?
Interracial couples are just the tip of the iceberg, however. While some propose that these AI bots need more diverse training data, others believe the tools are engaging in white erasure.
The Other Side
Conservatives have backed these claims, citing a Fox News article released in late February. The Fox News Digital team conducted an unofficial experiment, hypothesizing that white people were unfairly underrepresented by the technology.
4 Programs Put to the Test
The team's amateur research used four well-known AI programs: ChatGPT, Gemini, Meta AI, and Microsoft's Copilot. Using various prompts, the team assessed "potential shortcomings."
Fox News Confirms Suspicions
The article concludes that while the chatbots readily answered prompts about representation for Black, Hispanic, and Asian demographics, a different response was generated for the white demographic.
Gemini And Meta “Fall Short”
Gemini and Meta AI were consistently described as "falling short" when asked to represent white people, often asking the user to consider historic systems of oppression and warning against marginalizing other ethnic groups.
Victory for All?
Those who subscribe to the idea of "white erasure" would count this study as a victory, proof of what they feel to be true. Others argue that some of these programs are performing a civic duty, checking forms of prejudice that users may not be aware of.
Meet the Maker
A third, less obvious possibility also lurks in the shadows: the AI's bias may reflect the awareness of its programmers. When the AI experts working on these projects add safeguards to avoid generating hate speech, it is they who shape how the AI navigates these topics.
AI and Humanity
This, of course, is highly individualistic. Each computer scientist has their own background and carries their own belief system. Other factors include corporate regulation, the time allotted for AI training, and varying definitions of what counts as racist or prejudiced.
AI Is Here to Stay
Artificial intelligence is a rapidly evolving industry, with many people already relying on it for everyday functions. And while it will continue to grow, real progress on how it handles cultural issues may only come when humans can answer these questions themselves.
The post Race Row: AI Fails to Generate Interracial Couple Photos first appeared on Pulse of Pride.
Featured Image Credit: Pexels / Anna Shvets.
For transparency, this content was partly developed with AI assistance and carefully curated by an experienced editor to be informative and ensure accuracy.