

More companies are turning to AI hiring tools to pinpoint premium candidates, but potential biases in these technologies have raised ethical concerns.


In recent years, a growing number of organizations have adopted artificial intelligence (AI) to revolutionize traditional workflows. These systems are implemented to improve cost-efficiency, reduce employee burnout, and even identify premium talent. Many organizations now use AI tools to expedite the arduous hiring process. These algorithms have been viewed as objective tools capable of eliminating human subjectivity from employment screening. Paradoxically, many of these models are riddled with the very biases they are intended to remove. The idea of AI agents acting as the preliminary filter for candidates raises a host of diversity and ethical concerns. That said, is it possible to leverage these systems equitably to build a more inclusive workforce?

Companies turn to AI hiring tools to manage the volume of applicants and remove human bias

In the age of telecommuters and video conferences, more companies have warmed to the prospect of bringing on remote employees. Rather than limiting the talent pool to employees in a given area, qualified candidates can now readily apply from around the globe. Analyzing the sheer volume of applicants is often a daunting task for hiring managers.

With the potential to automate the screening process and eliminate human bias, these AI tools have been viewed as a win-win for efficiency and inclusivity. Needless to say, good intent does not always generate positive outcomes.

“I feel so bad for companies because the reason that they bought the tools is because they want to become more objective. It’s never because they want to do the wrong thing, but the tool itself is just not the right instrument for equity, it’s an instrument for efficiency, but it’s not an instrument for equity,” said Mutale Nkonde, CEO of AI For the People with current fellowships at Harvard and Stanford.

Recent evidence and use cases illustrate myriad ways these platforms hinder inclusivity efforts. After all, these programs are only as equitable as the data they are fed. If these algorithms are built on biased data, these systems will only perpetuate prejudice and compound a lack of diversity in-house.

“Humans are inherently biased—and machine learning capabilities can end up perpetuating that bias because that data itself might be biased,” said Amy Hodler, director of Graph Analytics and AI Programs at Neo4j. “At the end of the day, the ML models are written by a human. AI today is effective for specific, well-defined tasks but struggles with ambiguity, which can lead to subpar or even disastrous results.”

SEE: Hiring Kit: Computer Research Scientist (TechRepublic Premium)

Flaws in AI optimization metrics can lead to unintended, biased outcomes

These AI systems can be trained to look for keywords associated with high performance and other metrics the algorithms connect with top talent. The algorithms can then promote or penalize candidates based on these criteria. Major players in the tech industry have looked to reap the benefits of AI-enhanced hiring systems.

A few years ago, Amazon assembled a team of engineers to develop models capable of pinpointing premium talent and assigning candidates a tiered rating. During development, the system exhibited biased preferences and was eventually discontinued.

“Their recruitment algorithm was effectively pushing out resumes that had the words girl or girls in there, or woman or women. That meant if you put on your resume that you went to a women’s college, Spelman being a Black women’s college in Atlanta, that resume wasn’t even going to get promoted for further consideration,” Nkonde said.
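
To make the failure mode concrete, here is a minimal Python sketch of how a keyword-weighted resume scorer can quietly demote an otherwise identical candidate. The terms, weights, and candidates are invented for illustration; this is not a reconstruction of Amazon's actual model.

# Hypothetical sketch of keyword-weighted resume scoring. The terms and
# weights below are invented for illustration, not taken from any real system.

# Weights a model might learn from biased historical data.
KEYWORD_WEIGHTS = {
    "executed": 0.8,   # rewarded term
    "captured": 0.6,   # rewarded term
    "women's": -1.5,   # penalized term correlated with a protected group
}


def score_resume(text: str) -> float:
    """Sum the weights of all keywords found in the resume text."""
    lowered = text.lower()
    return sum(w for term, w in KEYWORD_WEIGHTS.items() if term in lowered)


resumes = {
    "candidate_a": "Executed launch plan; captured 30% market share.",
    "candidate_b": "Executed launch plan; captured 30% market share; "
                   "captain of the women's chess club.",
}

# Identical professional content, but candidate_b is demoted by one extra phrase.
for name, text in resumes.items():
    print(name, score_resume(text))

In the case Nkonde describes, that kind of negative weight attached itself to gendered words, which is how a resume mentioning a women's college could drop out of consideration before any human review.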

When calibrating a new hiring algorithm, a company may want to use its own metrics of in-house performance to identify top talent. This algorithm can be fed historical data related to its current top-performing employees as part of its talent search filtering process. However, if a company already has a history of internal hiring bias and lack of diversity, a hiring algorithm trained on this dataset will only perpetuate this lack of inclusivity.

As Nkonde pointed out, if a company has never hired a Howard University alum, and the algorithm has been optimized on traditional metrics and patterns associated with in-house talent, the Howard graduate’s resume will be filtered out of the candidate pool and never make it to the next round.
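
A minimal sketch of that feedback loop, assuming a hypothetical company trains a screening model on its own past hiring decisions: the data, feature names, and model choice (scikit-learn logistic regression) are all assumptions for illustration, not a description of any vendor's product.

# Minimal sketch: a screening model trained on biased historical hires
# reproduces that bias. All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

years_experience = rng.integers(0, 11, size=n)
unfamiliar_school = rng.integers(0, 2, size=n)  # 1 = school with no in-house alumni

# Historical labels: past recruiters hired on experience but, because of bias,
# almost never from the unfamiliar school, regardless of qualifications.
hired = (years_experience >= 5) & (unfamiliar_school == 0)

X = np.column_stack([years_experience, unfamiliar_school])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two equally experienced candidates; only the school flag differs.
candidates = np.array([[8, 0], [8, 1]])
print(model.predict_proba(candidates)[:, 1])  # second probability collapses toward zero

The model never needs to see a school name or a demographic attribute directly; it simply learns that "people like our past hires" get hired, which is exactly how a graduate of a never-before-hired school is screened out before a recruiter ever reads the resume.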

Flawed emotion recognition technology may thwart inclusivity efforts

These sophisticated tools are used for far more than sifting through resumes to screen candidates. A number of organizations are utilizing emotion recognition during the interview process to analyze candidates.

“They’ll look at your facial expressions and then try and infer through facial expressions whether you were telling the truth, whether you were comfortable, whether you were interested, whether you were engaged and then that will have some type of score that relates to your employability,” Nkonde said.

Facial recognition technologies have come under intense scrutiny in recent weeks due to inherent flaws. These systems are prone to false-positive identification errors, particularly among minorities. Considered alongside bias in the initial application screening, this presents a compounded threat to organizational inclusivity efforts.
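
One way teams try to surface this kind of problem before it reaches candidates is a disparity audit of error rates. The sketch below compares false-positive match rates across demographic groups on a labeled evaluation set; the column names and numbers are hypothetical, and the approach is a generic audit pattern rather than any specific vendor's tool.

# Hypothetical audit sketch: compare false-positive face-match rates by group.
import pandas as pd

eval_records = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_match": [0,    0,   1,   1,   0,   0,   0,   1],
    "predicted":  [0,    0,   1,   1,   1,   0,   1,   1],
})

# False-positive rate: fraction of true non-matches the system flagged as matches.
non_matches = eval_records[eval_records["true_match"] == 0]
fpr_by_group = non_matches.groupby("group")["predicted"].mean()
print(fpr_by_group)  # a large gap between groups signals disparate error rates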

Digital transformation and AI adoption

From crafting financial briefings to identifying potential new uses for existing pharmaceutical medications, AI systems have been leveraged for a host of applications across industries. While these models are impressive at ingesting massive datasets and pinpointing patterns, they remain woefully inadequate at human reasoning and discerning context. These latest findings reiterate the need for human oversight within the hiring and recruiting process.

“At its best, what these algorithms [are going to] do is return a very, very particular type of person, who has the same exact qualities of whoever’s considered a model of success. The reason that that’s actually even worse than human decision making is that in human decision making there’s always the option that we will think outside of the box, there’s always the option that we will take a chance. There’s always the option that we will do things differently and that’s really the story of human progress from the beginning,” Nkonde said.

While these advanced tools can certainly expedite the hiring and recruiting process, there are long-term consequences for organizations to consider. In recent weeks, organizations have taken a closer look at their standard operating procedures with diversity and inclusivity in mind. At times, this might require organizations to scrutinize more than their basic hiring practices.

“Improving the rigor in the talent process is the first step in improving diversity and inclusion. Technology cannot solve a broken culture or a broken process. Leaders and HR professionals need to make a commitment to a fair talent lifecycle, understand the value that brings and continuously monitor for improvement opportunities,” said Dr. Joel Philo, principal behavioral scientist at Infor.

SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)

Can AI be used equitably in the hiring process?

With diversity and equity at the forefront of public debate, some organizations are taking a closer look at the impact of their technological choices and standard procedures. Whether AI hiring tools can be implemented in a fair and equitable way remains largely up for debate.

“Should we use these implements when hiring people, or should we think about new ways of screening nontraditional applicants and bringing them in? Because unless we can employ some aspirational and revolutionary and equity-based thinking, we’re going to fall back on these magic machines that are here to solve our problems, when the real problem isn’t who should I hire? The real problem in my view is how do I create a labor force that gives all people opportunity?” Nkonde said.
