Have you spent years cultivating a strong network of allies who can help you #nurture your personal job #portfolio so that you can best navigate the modern competitive #jobscape? Bad news: it might all be worthless when artificially intelligent job recruiters start Skyping you and asking you difficult questions instead.

And here’s the usual MONTAG “surprise”: it’s already happening. Robot Vera is a Russian AI-enabled service that “finds resumes, conducts interviews and gets you the top candidates 10 times faster than a human.”

Robot Vera has already conducted 1,400,000 phone interviews and 10,000 video interviews for real companies who are hiring real people. And you’re probably going to be interviewed by someone (“something”?) like Vera in the coming years.

Tell me about an accomplishment you are most proud of

Robot Vera seems to approach the whole process just like your office’s living, breathing, unreliable, partisan - and, most importantly - expensive HR officer. Here’s what she does:

• Makes a selection of the most appropriate candidates from a database
• Gives out vacancy details, asks & answers candidates’ questions

So far, so bot-tastic. But here’s the interesting kicker:

• Vera holds video interviews with short-listed candidates & recognises their emotions.

You can try out an example interview here. Gird your loins for questions like, “describe leadership in one sentence without using the word ‘lead’,” and prepare for answers peppered with bon mots like, “wow, inspiring! What inspires you?”, before you’re bid farewell with “have a prosperous day ahead.”

According to an interview with Bloomberg, Vera is being taught to recognise “anger, pleasure, and disappointment,” and while this might sound like useful stuff, it could be argued that any human applicant who expresses anger or disappointment in a job interview is unlikely to land that dream job.

Excited by the prospect of landing a plum role for the likes of PepsiCo, Ikea, and L’Oréal - companies Robot Vera has apparently already recruited for - we nervously re-arranged our new tie, gripped our CV and tried an interview with Vera.

It went great. To the question, “tell me how you grow the net promoter score of your accounts?” we answered “by filling a huge room with a million monkeys trained to use a laptop,” and to her credit, Robot Vera nodded thoughtfully and moved on to the next question. We’ll let you know if we hear back - fingers crossed!

What can you offer us that someone else cannot?

So how does this tech work? What are Vera and her ilk doing? From the looks of the above video, she’s listening to the words you speak and comparing them to desired keywords; observing your face and fitting your facial expression into the categories of “angry, sad, surprised and happy” - the most basic of the Seven Dwarfs. She’s also - possibly - listening to the tone of your voice to judge the truthiness of your answers.
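Vera’s actual pipeline isn’t public, but the keyword-matching step described above is simple enough to sketch. Everything below - the function name, the keyword list, the sample answer - is illustrative guesswork, not Vera’s code:

```python
# Toy sketch of keyword-based answer screening: score a transcript by
# the fraction of desired keywords it contains. Purely illustrative -
# Robot Vera's real system is not open to inspection.

def keyword_score(transcript, keywords):
    """Return the fraction of desired keywords present in the transcript."""
    words = set(transcript.lower().split())
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords)

desired = ["leadership", "growth", "teamwork"]
answer = "I value teamwork and sustainable growth above all"
print(keyword_score(answer, desired))  # 2 of the 3 keywords are present
```

A real system would at minimum stem words and weight synonyms, but the principle - and its bluntness - is the same: say the magic words, score the points.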

For a more techno-flavoured experience of interviewee pressure, try Berghain Trainer, a website that sort-of-accurately recreates getting into Berlin’s “coolest” club - a notoriously hyper-stressful experience.

In this half-fun, half-ominous game, the “doorman” makes a decision on whether you pass or fail. It will bring back the pangs of anxiety anyone who has ever tried (and, cough, ever failed) to enter will recognise.

Berghain Trainer appears to use very similar face and voice tech whilst recreating the infamously high-pressure moment where you have to convince the person on the door you are worthy of stepping onto the hallowed dancefloor.

(If you’re not successful, don’t fret - not many people are, as evidenced by a series of videos of people failing repeatedly on YouTube.)

Imagine all the social situations - hotel check-ins, credit card applications, etc - where a simple checklist of questions needs answering and judgement calls need to be made. Put aside the (debatable) extra pressure of whether you’re “cool” enough to enter, and Berghain Trainer is a reasonably useful introduction to the way computers may soon make important decisions about your future prospects.

As ever, at this point you should be able to hear a shrill, high-pitched scream - it’s the sound of existential terror emerging directly from the limbic lobes of HR workers the world over. And the fear-fuelled question is a hoary old one: are our jobs going to be STOLEN by robots?

Even leaving aside the stretched-truth answers we give when trying to impress someone, each of us is a bundle of tics, “tells” and hesitations, ripe for a computer to identify - and it makes sense that the coldly analytical eye and ear of a computer could help you get - or deny you - the job of your dreams much more accurately than a human could. Or could it?

Before we percolate on the particular irony of the people who decide who gets a job losing their own jobs, we need to dig just a little bit deeper.

Can you describe to me an example of a time you had to find out whether something was truly successful?

Has your voice ever quavered when under stress or duress in a job interview? As this writer can emphatically confirm, when one’s frightening deficiencies in mathematical ability are disastrously revealed during an interview for a bank teller position, a wobbly voice at the wrong time pretty much seals the deal on your chance of landing a job.

So surely a computer can listen carefully to our voices when we’re asked a stress-inducing question?

The TruthPhone - and similar technologies - has been available since the 1990s, and was fascinatingly featured in a New York Times article where a journalist used the phone to ask senior politicians tough questions. Amusingly, the politicians seemed to actually believe their lies, and the phone registered them as telling the truth.

You can get in on the fun yourself: LiarLiar is an open-source software plug-in - possibly, and disconcertingly, named after the Jim Carrey movie of the same name - that allows anyone to screen a recording of human speech for lies.

According to LiarLiar’s website, “the human voice produces microtremors that fall in the 8 HZ to 9 HZ range. When the speaker is stressed, physical changes such as increased bloodflow cause the audio in this range to shift to the 11 HZ to 12 HZ range.” The software detects those tremors and you, the devious user, decide whether to hire or fire the unwitting participant.
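Taken at face value, that claim reduces to a simple comparison of spectral energy in two frequency bands. Here’s a minimal sketch of that comparison - using a synthetic signal, not real speech, and without endorsing the premise (which, as we’re about to see, science takes a dim view of):

```python
# Minimal sketch of the band-energy comparison implied by LiarLiar's
# description: compare spectral power in the "calm" 8-9 Hz band against
# the "stressed" 11-12 Hz band. The signals below are synthetic sine
# waves standing in for the low-frequency tremor component of speech.

import numpy as np

def band_energy(signal, rate, lo, hi):
    """Total spectral power between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

def looks_stressed(signal, rate):
    """Crude flag: more power at 11-12 Hz than at 8-9 Hz."""
    return band_energy(signal, rate, 11, 12) > band_energy(signal, rate, 8, 9)

rate = 100                          # samples per second
t = np.arange(0, 10, 1.0 / rate)    # 10 seconds of signal
stressed = np.sin(2 * np.pi * 11.5 * t)  # tremor shifted up, per the claim
calm = np.sin(2 * np.pi * 8.5 * t)       # tremor in the relaxed band
print(looks_stressed(stressed, rate), looks_stressed(calm, rate))
```

That the core of a “lie detector” fits in twenty lines should itself raise an eyebrow - which brings us to the scientists.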

However - here’s where a large question mark appears and hovers ominously over everything to do with Voice Stress Analysis (or VSA). In a string of scientific studies - including a paper bluntly titled “Charlatanry in forensic speech science” - the evidence stacks up against the use of VSA as even a halfway useful tool.

According to the USA’s National Institute of Justice, only 15% of lies about drug use were detected in a field test using Voice Stress Analysis technology. More worryingly, the software incorrectly identified 8% of people as being hepped up on goofballs even though they were not.

Let’s face it: if the best technology can’t tell whether someone is telling the truth, despite saucer-eyes and full-facial gurning, what chance do we calm, sober, responsible citizens (ahem) have?

For now at least, VSA continues to be described by Humanity’s Most Almighty Tome (OK, Wikipedia) as “pseudoscientific technology” and that feels about right. And yet of course, it’s being used in criminal cases - upon his arrest, George Zimmerman, shooter of Trayvon Martin, was administered a VSA test.

“Where do you see yourself in 157Bn milliseconds?”

The ability of AI to listen to, recognise and parse our voices is beyond doubt, as anyone who has pranked an Alexa-owning friend with the phrase “Alexa, play Phil Collins’ “In The Air Tonight” on repeat” can attest.

But who sets the questions? And who decides which answers are best? And what if the interviewee is a smart-arse who answers questions with questions?

Do we want to live in a world where the douchebag who beat us to the job we want is the sensible candidate with the sharp crease in their grey slacks who has answers that play it utterly, tediously straight?

It’s a gloomy prospect that humanity, despite AI-HR appearing to be as dubiously fallible as, erm, humans, will plough forward in this furrow regardless. Because, as recent history has shown, doubling down on decisions that look increasingly ill-informed as time passes definitely works out fine.

As ever, this emerging technology doesn’t smell so much of “enormous HR sector job-cull” as it does of “re-alignment of existing human resource management strategy.” And lo, so far, Robot Vera seems to mainly do the slow, dirty work of weeding out bad applicants (Bloomberg says that it narrows the field to the best 10%) rather than posing smug Google-style teasers like “how many golf balls would fill a school bus?” (Don’t worry, there will always be plenty of middle-managers who are happy to luxuriate in your agony after asking one of those.)

Just as with AI music composition, movie script writing and, heck, pretty much all other jobs, the most likely outcome is that AI will do the boring stuff and the humans will find a way to fill the rest of the time. Humans, remember, always find a way to fill time with work.

Until then, perhaps we should fight fire with fire. When you’re approached over LinkedIn by an automated HR-bot about a job you’re perfect for, MONTAG’s strong recommendation is that you reply immediately with an absurd and automatically-generated counter-recruitment message, courtesy of LinkedIn Message Generator.

Because let’s face it, would you reply to a job approach that sounded like this?

Hi [Your Name Here],

Super-pumped to meet you! My startup Deep.ai, a de-centralized Lehman Brothers, has just raised $50M in our series C to design the future of maritime piracy.

With your considerable talents in the Microsoft Office suite and user research, let's discuss your potential future as our Head of East Coast Operations. Let's hop on the phone and talk further—how's Tuesday?

Best, Brian

Of course you would. Happy job hunting.
