First and foremost, I love data and its ability to help us make informed decisions. Statistics and machine learning can help us identify and reduce human bias, and they can surface real trends we can act on. There are, however, very few areas where we actually let AI technologies take a fully prescriptive and automated approach. Prescriptive analytics provides specific recommendations for actions to achieve desired outcomes, focusing on the “what should be done” aspect, and occasionally veering into unchecked actions taken by a computer. That is a big leap. Full self-driving vehicles are one example: the technology is good, better than most human drivers, yet there is a serious problem of who is at fault when the computer makes a mistake. Liability is one of the big roadblocks to fully automated tasks. When a human lets software replace their decisions, are they now at fault? Or is it the corporation that designed and sold the solution? Furthermore, what happens when a technology disproportionately affects a segment of the population? These are perplexing questions, and they are largely the reason we are still in the age of predictive analytics for the vast majority of important jobs: we use past trends and behaviors to help us question our bias and make sound decisions.
It would be fair to assume that when it comes to something as important as your next job, a computer wasn’t responsible for culling the majority of applicants. Unfortunately, this task has been handed to computers at many businesses, largely unchecked, and in some cases quite a while ago. To anyone in human resources this isn’t a shock, but to the uninitiated it may come as a huge disappointment. Remember the last time you brushed off your resume and made sure to list Excel, Word, and PowerPoint? Well, if the job description labelled it as Microsoft Office, you could be out of luck. Entire apps have been created to help savvy applicants game their cover letters and resumes to speak a hidden language to the computers scanning for keywords and phrases. One such scheme used to involve copying and pasting the whole job description into your document, then turning the text white and tiny, allowing applicants to clear every hurdle regardless of their actual experience. For the most part this loophole has been closed, but we are miles from an Applicant Tracking System (ATS) that gives people a fair shake. Consider that people who are even remotely aware of such nuances have a higher likelihood of getting the job than those who aren’t. Are we looking for the best person for the job, or the craftiest person looking for a job? Before you cheekily remark that you want the crafty, resourceful one, remember: they are actively deceiving you before day one on the job.
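To make the keyword problem concrete, here is a minimal, hypothetical sketch of the kind of exact-phrase matching a naive screener might perform. Nothing here reflects any real ATS product; the required keywords, resume snippets, and scoring rule are all invented for illustration.

```python
# Hypothetical illustration of naive keyword screening, not any real ATS.
REQUIRED_KEYWORDS = {"microsoft office", "project management", "budgeting"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found verbatim in the resume."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

# A strong candidate who lists the individual applications is penalized
# because the exact phrase "microsoft office" never appears.
experienced = "10 years of budgeting and project management using Excel, Word, and PowerPoint."
keyword_aware = "Expert in Microsoft Office, project management, and budgeting."

print(keyword_score(experienced))    # about 0.67 - misses "microsoft office"
print(keyword_score(keyword_aware))  # 1.0 - sails through the filter
```

The point is simply that an exact-phrase filter rewards whoever mirrors the job posting’s wording, not whoever has the relevant experience.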
The adoption of video interviewing software equipped with facial recognition technology has added another layer of complexity and controversy. These systems analyze candidates’ facial expressions, tone of voice, and even their choice of words to assess fit and potential. While proponents argue that such technology can add objectivity to the hiring process, the reality is far more nuanced. Facial recognition algorithms can inadvertently perpetuate biases, as they often rely on flawed datasets that may not represent the diversity of the broader population. Moreover, the interpretation of facial expressions and body language is highly subjective and culturally dependent, raising questions about the fairness and inclusivity of such tools. The technology’s opaque nature means candidates are often left in the dark about how their non-verbal cues are being assessed, leading to a hiring process that feels more like a black box than an equitable evaluation of one’s capabilities and experience. Think about it this way: unless the job is to talk to a computer or record content alone, the evaluation method holds very little relevance. Whether you are a candidate looking for a job or a manager looking to fill a position, do you really want the evaluation to be as prone to error as forcing someone to leave a voicemail in one take? We all have cringeworthy memories of nightmare voice messages; now imagine a computer judging your potential for a job on that type of interaction. It is not an effective means of evaluating talent.
The problem becomes much larger when systems start to recognize characteristics of past successful applicants. Algorithms themselves do not possess built-in discrimination, and it is uncommon for engineers to deliberately embed bias within them. Nonetheless, algorithmic bias can emerge in recruitment from the core aspects of AI and machine learning (ML) technology: how datasets are built, the objectives set by engineers, and the selection of features at various stages of the machine learning process (Chen, 2023). As a hypothetical example, an entire cohort of hires over a multi-year stretch may have been selected by a department manager who subconsciously favoured golfers. The dataset is now skewed to prefer the traits of people who play golf and to overlook other demographics. This compounds when you consider the impact on marginalized communities who don’t fit the profile of past employees, whether on the surface or by way of the clues a computer mistakenly interprets. These are exactly the biases HR professionals work hard to reduce in the hiring process, and they should be active participants in tuning these systems, in a predictive rather than reactive manner. The reality is that these systems were adopted in an effort to replace HR professionals, so there is far less oversight than is acceptable in such an error-prone and high-stakes situation. People’s livelihoods, and any company’s ability to recruit top talent, are at stake. It is a massive strategic hole in any organization that considers itself a market differentiator: you will miss A-plus candidates, end up with a biased talent pool, and watch them land with your competition.
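To see how that skew propagates, here is a rough sketch of the mechanism, assuming a simple logistic regression and an entirely invented dataset; the feature names (including plays_golf), the historical hiring rule, and all the numbers are hypothetical.

```python
# Hypothetical illustration of how a skewed hiring history teaches a model
# to favour an irrelevant trait; the features and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Candidate features: years of experience, relevant certification, plays golf.
years = rng.integers(0, 15, n)
cert = rng.integers(0, 2, n)
golf = rng.integers(0, 2, n)

# Historical labels: the (fictional) manager hired mostly on qualifications,
# but gave golfers an extra, unjustified boost.
hired = ((years > 5).astype(int) | cert) & ((golf == 1) | (rng.random(n) < 0.4))

X = np.column_stack([years, cert, golf])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical qualifications, differing only in golf.
golfer     = [[8, 1, 1]]
non_golfer = [[8, 1, 0]]
print(model.predict_proba(golfer)[0, 1])      # noticeably higher predicted score
print(model.predict_proba(non_golfer)[0, 1])  # penalized for no job-related reason
```

Nothing in that code discriminates on purpose; the weight on golf is learned entirely from the biased historical labels, which is precisely why auditing the training data matters as much as auditing the model.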
From a liability standpoint, in many jurisdictions you would need to explain exactly how your ATS screens candidates if you were presented with a human rights challenge. That is a huge risk most small to medium-sized businesses cannot afford to take. Unless someone on your team can fully explain how the ‘black box’ works, and what evidence-based decisions led to the screening criteria, my professional opinion is to be very cautious. Human oversight should be in place and continually monitoring outcomes. This is not to say the technology won’t be sound in the future, but we are still in the infancy of its development. Any hiring selection criteria need to be grounded in evidence of positive employment outcomes.
If you are a company that prides itself on product or service, then you are what is considered a differentiator in your market. Your success relies on being able to find, recruit and retain the best talent. If you don’t — then your competition likely will. Take ownership of your recruitment efforts and use data to help YOU make the right people decisions. If you’re not sure where to start, we can help. Reach out for a strategic assessment and see if our process is right for your organization.
References
Lytton, C. (2024). AI hiring tools may be filtering out the best job applicants. BBC Worklife. https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10, 567. https://doi.org/10.1057/s41599-023-02079-x