Employers have struggled with hiring ‘good staff’ probably since Eve opened her first apple pie store. It’s a tricky business, and many practices to improve it have been developed over the years – none of which have been particularly effective.
When job applications moved online, it didn’t take long for employers to start using primitive automation for first-pass screening of résumés; tools were sold to look for a keyword here and another there. Somehow the presence of select words was deemed indicative of an application’s quality. It was then up to the humans to sort through the pre-filtered list.
Two can play this game
Advice for applicants on how to game the system quickly followed: put in a keyword here, maybe casually mention a prestigious institution like Oxford or Stanford (in white font on a white background, so the machines will see it but humans won’t, should you not have actually attended one), and voilà – you pass the filter and an actual human gets to see your résumé.
Or so the story went.
In any case, the seeds of an HR arms race were now planted: recruiters (human and AI) would look for a feature X in a résumé, and candidates were coached to mention feature X. This happened irrespective of whether the feature had any proven predictive power for job performance or any other important trait.
As time went on, this filtering grew increasingly sophisticated.
Let’s automate a whole interview
It’s now reached the point where entire job interviews are automated. There are several providers of AI-powered software that conduct video interviews and assessments in an entirely automated manner. Particularly for mass-market roles that have multiple openings and can attract thousands of applications, these tools have become almost standard.
The attraction is clear; for employers, it’s viewed as a huge time-saver. Instead of an army of biased people doing interviews and assessments, they can have an ostensibly fair and unbiased machine do it for them and pop out a nicely whittled-down list of only good candidates.
Or so the story goes.
When the interview side is automated and you must give answers over video, what happens to trying to game the system? Is it game over when we have to be live and present during a video interview?
Of course not, it just levels up.
Applicants are now offered “helpful” guidance on how to behave in an automated interview. Some of these tips come from the companies building the AI tools, such as making sure the lighting is good, and encouraging applicants to “get excited” and “share your energy with the camera”.
But there’s scope for applicants to do more. Much more.
Two can play this game, too
A couple of years ago, I wondered what would happen if the AI interviewed another AI instead of a human. Would it even recognise it wasn’t dealing with a human, but a fellow AI?
So, together with some colleagues at a large telecommunications company, I ran an experiment: we rigged up a leading AI video interview platform to run interviews with a leading digital human.
Instead of a human, the hiring system would be looking at a video feed of a digital human; instead of hearing a human voice respond, they would hear the AI give answers.
Not only did the hiring system fail to realise it wasn’t interviewing a human, but the final scores it gave our AI candidates were in line with those of the human candidates.
It is pretty much guaranteed that this capability – an AI agent essentially sitting an interview for you – will eventually be productised into a package that requires little, if any, technical expertise to use.
Plug in GPT-3 or another large language model and chances are your agent will give semi-intelligent answers even to unscripted AI interview questions, especially since we know the language-analysis capabilities of these systems are… somewhat imperfect, to put it kindly. (Some of them will give an English proficiency score of ‘good’ when you answer in German.)
Train the language model with some of your text, and it will ‘sound’ like you, too.
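As a rough illustration of how simple such an agent could be, here is a minimal sketch of the loop it would run. All names are hypothetical, and the language-model call is stubbed with a canned template so the sketch runs without any external service; a real agent would send each question, along with the candidate’s writing samples, to a completion API and pipe the result through text-to-speech and a digital-human video feed.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    """Hypothetical container for the data the agent impersonates."""
    name: str
    style_samples: list = field(default_factory=list)  # candidate's own text, to mimic tone

def generate_answer(question: str, profile: CandidateProfile) -> str:
    """Stand-in for a large-language-model call.

    A real implementation would build a prompt from the question plus
    the candidate's style samples; here we return a templated answer
    so the sketch is self-contained.
    """
    return (f"That's a great question about {question.rstrip('?').lower()}. "
            f"In my experience, preparation and curiosity matter most.")

def run_interview(questions: list, profile: CandidateProfile) -> list:
    """Answer each interview question in turn, as the video platform
    would hear the responses via text-to-speech."""
    return [generate_answer(q, profile) for q in questions]

answers = run_interview(
    ["Why do you want this role?", "Describe a challenge you overcame?"],
    CandidateProfile(name="A. Candidate"),
)
for a in answers:
    print(a)
```

The point of the sketch is how little machinery is involved: the only hard part is the model call itself, which is exactly the part that is already a commodity.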
You’re unlikely to get caught doing this; while the AI interviews are recorded, the software vendors of the hiring systems specifically discourage hiring managers from looking at the videos. Fair enough – what’s the point of automating something that you end up manually checking anyway?
What could possibly go wrong?
This is problematic on many levels. First, it will quite naturally fuel a pointless arms race: hiring systems will add features that try to detect whether the person being interviewed is real, which will in turn prompt more sophisticated AI agents on the other side, and so on.
It’s progress, but is it useful progress?
It also makes the systems easy to game; some of these hiring tools analyse facial expressions, disposition, eye contact and similar mannerisms, and draw highly questionable conclusions from them. Once the applicant side is automated, it can be optimised to smile just the right amount, laugh just the right amount, make just the right eye contact, say just the right thing, and so on – increasing the chances the applicant will progress through the hiring process or actually land the job.
These are, of course, all valuable capabilities from a research point of view. The AI hiring systems in use today are largely black boxes, partially even to their creators, more so to their customers, and certainly to the candidates. We should use tools like digital humans to probe the systems to see what features their algorithms actually rely on when scoring applicants, and with what weights. AI emotion detection, in particular, is a bullshit industry, and if any hiring tools still use emotion detection that impacts scoring, they deserve to be called out for that.
Using AI agents to take job interviews for you also introduces a whole new level of inequality; at first, it will discriminate against people who are not technically savvy. Later on, it will discriminate against the poor, given that state-of-the-art AI agents are likely to be commercial products.
It’ll also create a massive headache for employers when a single candidate can ‘apply’ for dozens, hundreds, or thousands of jobs per day, given it’s an AI agent doing the actual work.
Considering our experiences and the rapidly developing capabilities, it seems inevitable that AI interviews will be probed and the systems gamed – and the interviewing side may not even know when that’s happening.
If your organisation is using such systems, are you sure you know what you’re getting into?