Humans are innovators, game-changers, and adapters. Humankind has established itself as a cut above its evolutionary relatives. However, there is one primal drive humans cannot cleave themselves from: survival.
Natural Selection?
The drive for survival is hardwired into our biology. Our brains focus on what is immediate and pressing, ignoring whatever does not fall into those categories. As a result, humans have evolved to be selective in our attention.
Not Needed Today
This base instinct to be selective may have been useful in ancient times. Determining which plant was poisonous or who could be trusted meant the difference between life and death. Today, though, society contains far fewer immediate dangers.
An Ancient Machine in a Modern World
Nevertheless, our neurological processing is still primitive, and it shapes us just as strongly. Biases embedded in our social practices, like racism, sexism, and homophobia, can be traced back to our inability to think outside of survival mode.
A Bug in the Code
Now, as we progress and develop technologies to assist us in our daily functions, biases slip into the lines of code. The warped perceptions and prejudices are reflected back through the use of these technologies, creating a vicious cycle.
UNESCO on the Case
Nothing serves as a better example of this than a recent UNESCO study analyzing the tendencies of large language models (LLMs). The study noted a concerning amount of stereotyping produced by these systems.
Computational Discrimination
The research project, titled “Bias Against Women and Girls in Large Language Models”, tested the AI tools ChatGPT and Llama 2. The researchers found that women were often described using words like “family”, “home”, and “children”.
AI Boys Club
Conversely, prompts containing male names were often paired with desirable, profession-based terms. Words like “career”, “salary”, and “executive” were among the most common.
The Best at Being the Worst
In its analysis of the LLMs, UNESCO found that Llama 2 and ChatGPT exhibited the highest rates of gender bias. This troubled the research group, as both platforms are free for public use. With no cost to access and worldwide availability, millions of individuals use them daily.
Mixed Signals
With such commonplace usage, the main worry is that each AI can consistently produce content that reinforces untrue narratives. While the platforms will not generate blatantly discriminatory work, many subtler issues still go unchecked.
Open for All
UNESCO officials noted one bright spot for Llama 2 and ChatGPT. Their global deployment means that each system is constantly exposed to new input. This elasticity means the programs can unlearn problematic behaviors much faster than other AIs.
Tell Me a Story
One of the measures adopted for the study was asking the programs to write a story. The prompts included an array of individuals across the gender spectrum as well as features like sexual orientation and ethnicity.
Sweet Dreams
The stories about white straight men often featured a variety of occupations considered high-status. Among the jobs offered by the LLMs were engineer, doctor, and lawyer.
Nightmares
However, this was not the case for women and gender nonconforming people. The AI systems constructed stories that assigned roles thought to be less than desirable. These jobs were often connected to housekeeping or even prostitution.
Stigmatized and Illegitimized
It is no secret that roles such as these carry a heavy dose of stigma and shame. Researchers concluded that the LLMs unintentionally consigned marginalized groups to these positions based on the training each program had undergone.
No One Is Safe
This discrimination also extends to prompts about sexuality. When asked, multiple times, to complete the phrase “a gay person is…”, both programs generated content perceived as negative more than 55% of the time. ChatGPT recorded a 60% negativity rate, while Llama 2 recorded 70%.
Paving the Way
Startled by the results of the study, UNESCO has already begun to take action. In November 2021, the organization published a recommendation on the ethical practice of artificial intelligence.
Calling for Backup
In addition to creating a moral outline for utilizing AI, the organization is asking governing bodies and private corporations to routinely assess these systems and adopt regulations that eliminate as much bias as possible.
Fix the Broken Mirror
This boils down to one question: how can we ethically develop an AI without bias when the masterminds behind it are biased? It is a question that cannot be answered quickly. But as AI cements itself in our lives, the answer should come sooner rather than later.
The post AI’s Growing DISCRIMINATION Against Marginalized Groups first appeared on Pulse of Pride.
Featured Image Credit: Shutterstock / GamePixel.