
AI vs. Human Blind Spots in the Interview Process: Finding the Right Balance

As AI technology rapidly advances, companies are eagerly adopting these innovations to streamline their hiring practices. Notable firms such as Hilton and Unilever use AI-driven tools, including chatbots for initial screenings and one-way video interview platforms, to improve efficiency and manage large volumes of applicants. These tools automate routine tasks, allowing talent acquisition professionals to focus on more strategic aspects of recruitment.

The swift pace at which AI is being adopted in hiring raises significant questions: Are we moving toward an era where human interviews become redundant? Will AI eventually surpass human judgment and render it irrelevant? These questions are pivotal as businesses increasingly integrate AI into their hiring processes, prompting a reevaluation of the role human interaction plays in recruitment.

The goal of this article is to delineate the relative strengths and weaknesses of both AI and human assessments to help talent acquisition (TA) professionals strike the optimal balance in their recruitment strategies.

Human Blind Spots

Bias: Human interviewers inherently bring biases into the interview process, whether a preference for candidates who share similar backgrounds (“similar-to-me” bias) or an overreliance on a single positive trait (halo effect). These biases, and the many others that creep into interviews, can skew assessments and undermine both the accuracy and fairness of hiring decisions.

Inconsistency: Human judgment can be highly inconsistent. Factors as trivial as an interviewer’s mood or physical state (like hunger) can influence decision-making. A well-cited study of parole board decisions found that judges issued harsher rulings just before lunch, showing how physiological states can affect outcomes. Similarly, a 1960s study by Lewis Goldberg examined the consistency of radiologists’ judgments when interpreting the same X-rays at different times. The findings revealed significant inconsistencies in their diagnoses, suggesting that external factors such as fatigue and time of day influenced their decision-making. This research highlights the variability and subjectivity of professional judgment, even among highly trained experts.

Misalignment: Many interview processes lack clear definitions of what “good” looks like in terms of skills, values, and motivations. This ambiguity leaves substantial room for interpretation, allowing biases to creep in and creating misalignments between different interviewers. Such discrepancies often result in the need for multiple interview rounds.

AI Blind Spots

Bias in, bias out: Many AI algorithms are trained on historical human decisions that contain inherent biases, leading these systems to perpetuate rather than correct them. A notable example is Amazon’s CV-screening algorithm, which was discontinued after it was found to systematically disadvantage women. The bias arose because the algorithm was trained on resumes submitted to the company over a 10-year period, which predominantly came from men, reflecting the existing male dominance of the tech industry. The case illustrates how AI, if not carefully monitored and adjusted, can reinforce existing workplace inequalities rather than eliminate them.
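
To make the “bias in, bias out” mechanism concrete, here is a minimal Python sketch using synthetic data. Everything in it is an illustrative assumption (the features, the proxy variable, the model choice) and does not reflect Amazon’s actual system; it simply shows how a model trained on historically skewed hiring labels learns to reward a feature that merely proxies for group membership:

```python
# Minimal sketch: a screener trained on biased historical decisions
# reproduces that bias. All data and features are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(0, 1, n)      # a genuine skill signal
proxy = rng.integers(0, 2, n)    # 1 = e.g. "male-coded" resume language

# Historical labels: past screeners favored the proxy group regardless of skill.
past_hired = (skill + 1.5 * proxy + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), past_hired)

# Two equally skilled candidates who differ only on the proxy feature:
p_proxy1, p_proxy0 = model.predict_proba([[1.0, 1.0], [1.0, 0.0]])[:, 1]
print(f"score with proxy=1: {p_proxy1:.2f}, with proxy=0: {p_proxy0:.2f}")
# The proxy=1 candidate scores markedly higher despite identical skill.
```

Auditing counterfactual pairs like this (identical on legitimate features, different on a suspected proxy) is one simple way a hiring team can check a screening model before trusting its rankings.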

Recognizing Outliers: AI and ML systems excel at identifying patterns but struggle with outliers. Candidates who do not fit the typical mold may still be highly valuable to the organization, and such systems might overlook a potentially excellent candidate simply because they do not match the conventional criteria derived from previous data.
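
The sketch below illustrates this failure mode with hypothetical numbers: a screener that scores candidates by similarity to past hires will rank an unusual but potentially exceptional profile at the bottom. The features and values are invented purely for illustration:

```python
# Minimal sketch: similarity-to-past-hires scoring penalizes outliers.
# Features and numbers are hypothetical, chosen only to show the mechanism.
import numpy as np

# Rows: [years_experience, num_certifications] for previous hires.
past_hires = np.array([[5, 2], [6, 3], [4, 2], [5, 3], [6, 2]], dtype=float)
centroid = past_hires.mean(axis=0)

def typicality_score(candidate):
    """Higher (closer to zero) = more similar to the historical hiring pattern."""
    return -np.linalg.norm(np.asarray(candidate, dtype=float) - centroid)

print(typicality_score([5, 2]))   # fits the mold: ranked near the top
print(typicality_score([12, 0]))  # unconventional path: ranked near the
                                  # bottom, regardless of actual ability
```

Nothing in this score reflects how well either candidate would actually perform; it only measures conformity to the past, which is exactly the blind spot described above.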

Candidate Experience: While AI can streamline the recruitment process, it often removes the human element crucial for candidates assessing a company’s culture and values. An entirely automated interview process can feel impersonal and deter candidates seeking a more human-centric workplace.

Integrating AI and Human Judgment

To maximize the benefits of both AI and human assessments in interviews, companies should aim for a hybrid approach. AI can be used at the initial screening stage to handle standard information gathering, automate tedious tasks, and provide continuous status updates throughout the process. It can also transcribe and summarize interviews, freeing recruiters to engage in more meaningful interactions with candidates. When AI is used to rank-order candidates, the hiring team should have full visibility into the criteria used for ranking and the ability to adjust those criteria as needed, as the sketch below illustrates.
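
Here is a minimal sketch of what such transparency could look like in practice. The criteria, weights, and ratings are illustrative assumptions, not a prescribed rubric; the point is that every input to the ranking is explicit and editable by the hiring team:

```python
# Minimal sketch: a transparent, adjustable rank-ordering step.
# Criteria, weights, and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    scores: dict  # criterion -> 0-5 rating from the screening stage

# Weights are visible to the hiring team and can be edited per role.
weights = {"relevant_skills": 0.5, "communication": 0.3, "domain_knowledge": 0.2}

def rank(candidates, weights):
    def total(c):
        return sum(w * c.scores.get(criterion, 0) for criterion, w in weights.items())
    # Sorting by an explicit weighted sum keeps the ranking auditable.
    return sorted(candidates, key=total, reverse=True)

pool = [
    Candidate("A", {"relevant_skills": 4, "communication": 3, "domain_knowledge": 5}),
    Candidate("B", {"relevant_skills": 5, "communication": 2, "domain_knowledge": 2}),
]
for c in rank(pool, weights):
    print(c.name)  # A first; shift weight toward relevant_skills and B overtakes
```

Because the weighted sum is explicit, the team can explain any ranking to a candidate or an auditor and re-run it under different priorities, which a black-box score does not allow.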

Human interviews should then focus on gaining a more nuanced understanding of the candidate’s skills, values, and motivations, while reducing human bias through more standardized, data-driven interviewing practices (see the sketch below).
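
One common way to standardize the human side is a shared, anchored scorecard: every interviewer rates the same dimensions on the same scale and must cite evidence, which makes judgments comparable across interviewers and rounds. The rubric below is a hypothetical illustration of the idea, not a recommended set of criteria:

```python
# Minimal sketch: a structured interview scorecard with anchored ratings.
# Criteria and anchors are hypothetical examples of standardization.
CRITERIA = ["problem_solving", "collaboration", "role_motivation"]
ANCHORS = {1: "no evidence", 3: "meets the bar", 5: "clearly exceeds the bar"}

def record_rating(scorecard, criterion, rating, evidence):
    """Accept only rubric criteria, a 1-5 rating, and a note citing evidence."""
    if criterion not in CRITERIA or rating not in range(1, 6) or not evidence:
        raise ValueError("ratings must use the shared rubric and cite evidence")
    scorecard[criterion] = {"rating": rating, "evidence": evidence}

card = {}
record_rating(card, "problem_solving", 4,
              "Broke the case question into sub-problems unprompted.")
print(card)
```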

This balanced approach not only improves the efficiency and accuracy of hiring decisions but also preserves the candidate experience, fostering a positive perception of the company and attracting top talent. By thoughtfully integrating AI with human judgment, forward-thinking companies can enhance their recruitment strategies, ensuring they are both fair and effective in the increasingly competitive search for talent.

Shiran Danoch, Ph.D., is the CEO and founder of Informed Decisions, an interview intelligence platform focused on tracking and disrupting bias for better, more equitable hiring decisions. She is an I-O psychologist with expertise in employee selection and holds a Ph.D. in people analytics. Shiran is highly passionate about creating work environments where people decisions are made based on data.

Shiran has consulted for multinational companies and built assessment processes for the Israeli government and army.
