AI is quickly taking the world by storm, working its way into nearly every aspect of everyday life. From writing school essays to outlining business proposals, humanity’s newest invention seems to be here to stay.
Controversy Spreads
However, the employment of these systems is quickly sparking controversy, most notably in the hiring process. The controversy centers on the unchecked biases in how AI scans job applications.
Overusing AI
An overwhelming majority of companies depend on AI to screen resumes and cover letters. In fact, a 2021 report from Harvard Business School found that 99% of Fortune 500 companies use AI in their recruitment practices.
Biases Abound
Leaning so heavily on these systems raises a number of concerns, most commonly from demographics that already face discrimination in the workforce. This technological prejudice often reflects the lack of diverse perspectives among those who develop these programs.
A New Law Emerges
In an attempt to assuage public anxiety around the issue, New York City legislators enacted the Automated Employment Decision Tool Act. The law requires companies that use AI in the hiring process to pass specific audits designed to detect prejudice.
Limits in Diversity
The required audits only check for discrimination connected to “sex, race-ethnicity, and intersectional categories.” However, the act fails to outline what standards are actually being measured.
Who Is Left Behind?
Additionally, the act does not require tracking of biases against people with disabilities, even though disability discrimination is one of the most commonly reported forms of identity-based discrimination.
Slow to Learn
The AI algorithms in question are built by teams of specialists over the course of years. Because of the time invested in their creation, they are incredibly difficult to change.
Not So Easy
Consequently, when audits do uncover embedded bias, companies have no easy way to resolve those shortcomings.
Aiming for the Ideal
The programs are built on data about ideal candidates or current employees in order to select the best-suited applicants. However, by searching for patterns based on those ideals, they let biases against people with disabilities linger in the shadows of the code.
Across the Board
Individuals with one or more disabilities have historically endured mistreatment in the workplace across industries.
Slow and Not So Steady
While many have proposed retraining AI programs by feeding them data from a more diverse pool of applicants, success seems unlikely. These programs are slow to integrate even minute shifts in their data, making the relearning process glacial at best.
Defining “Disability”
Legally, the term “disability” covers a wide variety of conditions and leaves room for each individual’s level of ability. While that flexibility may serve as a safeguard in judicial matters, not all AI systems are programmed to account for it.
Hidden From Sight
Disabilities vary from person to person and may not be visible or obvious. Likewise, the intersectional relationship between disability and other forms of marginalization can impede an AI’s ability to spot patterns.
Accommodations Unmet
Another problem, seen in AI tools like Pymetrics, is that candidates with disabilities may require essential accommodations to succeed in the workplace, which can lead these systems to deem such applicants unqualified.
Not Meant for the Camera
These systems commonly slip up in processes like video applications, where the programs measure behaviors such as eye movement, posture, and speech.
What Goes Unaccounted
Without accounting for neurodivergence, physical disabilities, or speech disorders, the AI will sort these candidates into the “rejection” pile, an outcome that might well turn out differently if the applications were reviewed by human eyes.
Failing to Serve the Public
It’s no secret that AI comes with a host of pre-programmed biases. With that in mind, members of the public find it concerning that New York lawmakers passed an act that fails to account for disability.
A More Personal Approach
Attempts to incorporate discrimination audits are not a catch-all solution to the issue. Real progress will come only when corporations wean themselves off their dependence on AI for personnel decisions.
A Small Step
All in all, legislation like the AEDTA seems to be a move in the right direction. These steps may not go far enough, though, as failing to effectively account for all forms of discrimination places the burden on the potential employee rather than the company.
Featured Image Credit: Shutterstock / Marko Aliaksandr.
For transparency, this content was partly developed with AI assistance and carefully curated by an experienced editor to be informative and ensure accuracy.