4 tips to responsibly and ethically implement AI for hiring

This article was contributed by Sanjoe Jose, CEO of Talview.

Artificial intelligence (AI) has transformed talent acquisition, solving some of the sector’s biggest challenges, such as handling high application volumes, and dramatically leveling the playing field for candidates. The results speak for themselves: some 40% of U.S. companies now screen and assess candidates using AI, and that share has likely only grown as remote hiring took over during the pandemic.

However, AI-supported hiring systems come with challenges and limitations of their own. Every technology is prone to occasional errors, but when an error repeatedly and adversely impacts a particular group of candidates, that is a likely sign that bias has emerged in the system.

So, how can companies ensure that their AI technology improves the ethics and equity of their hiring process rather than undermining them? Let’s explore.

1. Robust testing and equity reporting

If a company is working alongside or partnering with an HR tech provider, it must ensure that the provider regularly publishes equity reports on its AI recommendations, at least every six months, based on rigorous testing.

Trustworthy providers will often conduct cross-group comparisons using statistical tests such as the t-test. Two sample groups from contrasting subpopulations in the candidate pool, such as different ethnicities, age groups, or genders, are analyzed to ascertain the probability that the differences in AI recommendations between the two groups are coincidental.

If that probability is low, the differences are unlikely to be chance, which indicates a bias in the system. The provider then needs to go back into the model to locate and eliminate whatever contributed to that bias. T-tests are often accompanied by Cohen’s d, an effect-size measure that establishes how large the difference between the two groups is and whether it is big enough to warrant attention.
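To make the math concrete, here is a minimal sketch of such a cross-group comparison in Python. The recommendation scores, the 0.05 significance level, and the 0.2 effect-size cutoff are illustrative assumptions, not any provider’s actual methodology.

```python
# A minimal sketch of a cross-group equity check: Welch's t-test asks
# whether the score gap between two subgroups could be coincidence,
# and Cohen's d measures how large that gap is.
import numpy as np
from scipy import stats

def equity_check(scores_a, scores_b, alpha=0.05, min_effect=0.2):
    """Compare AI recommendation scores for two candidate subgroups."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)

    # Welch's t-test: the p-value is the probability of observing a gap
    # this large by chance if both groups were treated the same.
    _, p_value = stats.ttest_ind(a, b, equal_var=False)

    # Cohen's d: difference of means scaled by the pooled standard deviation.
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    cohens_d = (a.mean() - b.mean()) / pooled_sd

    # Flag only when the gap is both unlikely to be chance and non-trivial.
    flagged = p_value < alpha and abs(cohens_d) >= min_effect
    return p_value, cohens_d, flagged
```

Checking effect size alongside the p-value matters because, with a large enough candidate pool, even a trivially small gap can be statistically significant.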

2. Separation of in-production and in-training environments

An ethical AI model is one that is never pushed into production without rigorous testing demonstrating that it does not introduce hiring bias. Well-documented examples of bias in the hiring technology of major companies such as Amazon and Google serve as cautionary tales of what happens when proper safeguards aren’t put in place from the beginning.

The ideal scenario is to maintain two versions of the same model: one that has already passed substantial equity testing (like that described above) and gone into production, and another that is never used in production and whose primary function is to continually learn from new data.

This learning model should be continuously evaluated, and a new version pushed to production only after any factors contributing to equity issues have been eliminated. That often requires examining the training data collected over a period of time and retraining the model on a newly scrubbed dataset until the issue is resolved.
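As a sketch of how that promotion gate might look, the snippet below only lets the learning model replace the production model once every group comparison passes the equity check. `EquityReport`, the thresholds, and the `deploy` callback are hypothetical names for illustration, not a real deployment API.

```python
from dataclasses import dataclass

@dataclass
class EquityReport:
    """Outcome of one cross-group test (e.g., the t-test/Cohen's d above)."""
    groups: str        # e.g., "age: under 40 vs. 40 and over"
    p_value: float
    effect_size: float

def safe_to_promote(reports, alpha=0.05, max_effect=0.2):
    # Promote only if no comparison shows a statistically significant
    # difference with a non-trivial effect size.
    return all(r.p_value >= alpha or abs(r.effect_size) < max_effect
               for r in reports)

def promote_if_clean(learning_model, reports, deploy):
    if safe_to_promote(reports):
        deploy(learning_model)  # the learning model becomes the production model
        return True
    # Otherwise keep serving the current production model; scrub the
    # training data, retrain, and re-test before trying again.
    return False
```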

3. Data-driven but human-centric

Recruiters often cannot review a significant percentage of applications, and AI can help them discover best-fit candidates they would otherwise never have come across. AI is a game changer for streamlining and improving hiring processes, but it should never completely replace human decision-making. For example, AI-based recommendations can be used to prioritize candidates for outreach, but not to eliminate individuals’ applications altogether.
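In code, the difference between prioritizing and eliminating is stark. The sketch below assumes a hypothetical `ai_score` function; the key property is that the output contains every application in the input.

```python
def prioritize(applications, ai_score):
    """Order applications for recruiter outreach by AI score, highest first.

    Deliberately does NOT filter: the AI only changes the review order,
    and a human makes every accept/reject decision.
    """
    return sorted(applications, key=ai_score, reverse=True)
```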

Human judgment and intuition will always be vital in the hiring process, because algorithms can pass over atypical candidates who would be great for the job. Some candidates have incomplete resumes or a smaller digital footprint around their professional accomplishments, and AI-based models might miss them because the keywords the algorithms screen for never show up.

The solution is to interview every candidate for the same length of time, with the same questions and the same opportunity to respond, in what is known as a constrained environment. This helps level the playing field and mitigates bias along the way.
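In practice, a constrained environment can be enforced by software with a fixed interview definition. The structure below is a hypothetical illustration of the idea: identical questions, order, and time allowance for every candidate.

```python
# Hypothetical structured-interview definition: every candidate gets the
# same questions, in the same order, with the same time to respond.
CONSTRAINED_INTERVIEW = {
    "questions": [
        "Walk us through a project you are proud of.",
        "Describe a time you changed your mind based on new evidence.",
        "How would you approach a problem outside your expertise?",
    ],
    "order": "fixed",             # no ad hoc follow-ups that vary by candidate
    "seconds_per_question": 180,  # identical time allowance for everyone
}
```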

4. Potential trumps experience

It is becoming increasingly easy for candidates to game hiring processes, weaving keywords into their resumes to trick the algorithm into ranking them as the most suitable for the role. Likewise, popular psychometric tests are becoming easier to outsmart. This is why it is always better to deploy a range of different assessments and home in on potential instead of hard skills and experience alone.

AI- and NLP-driven behavioral assessments carried out during interviews, or on video and audio recordings, can gauge a candidate’s fitness for the role by using linguistics-based techniques to understand behavioral competencies such as emotional and social intelligence, leadership, and openness to new ideas. This creates more equity in the hiring process, especially when it comes to factors like age and economic background.
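Production assessments rely on far richer models than anything that fits here, but a toy sketch conveys the general shape: extract linguistic features from an interview transcript and treat them as weak signals of behavioral competencies. Every word list and mapping below is a simplified assumption, not a real scoring rubric.

```python
import re

# Toy lexicons; real systems use validated psycholinguistic resources
# and trained models rather than hand-picked word lists.
SOCIAL_WORDS = {"we", "team", "together", "helped", "listened"}
OPENNESS_WORDS = {"learn", "curious", "explore", "new", "adapt"}

def transcript_signals(transcript: str) -> dict:
    """Extract crude linguistic signals from an interview transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    total = max(len(words), 1)
    return {
        "social_orientation": sum(w in SOCIAL_WORDS for w in words) / total,
        "openness": sum(w in OPENNESS_WORDS for w in words) / total,
        "lexical_diversity": len(set(words)) / total,  # vocabulary richness
    }
```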

At their best, AI-driven hiring tools are invaluable for recruiters and HR departments looking to make their processes more efficient and equitable. However, this technology must be continually tested and safeguarded at all points of its development, production and deployment to avoid creating more biases instead of eliminating them. 

The best and most ethical hiring systems are supported by AI, but ultimately, it comes down to human decision-making to uncover human potential. 

Sanjoe Jose is CEO of Talview.

