The Hype of Candidate Engagement
Engagement is probably good but tying it to shoddy statistics is not
It seems that 2016 was the year when recruiters began to fixate on candidate and employee engagement. Since then, there have been hundreds of articles and reports on the value of candidate engagement, awards for it, and numerous conferences devoted entirely to discussing it. Engagement is seen as a way to entice better candidates to apply and to improve candidate quality. But it is doubtful that engagement does either of these.
There is nothing wrong with making the candidate experience friendlier or making it easier to apply for a job. I know all too well how frustrating it can be for a candidate to navigate the typical career site, learn about and apply for a suitable position, and then find out that all their work went into a black hole. Most career sites and recruiting processes are designed for administrative ease and are neither seamless, simple, nor engaging. But does engaging candidates cause better-quality ones to apply, or does it just make them feel better?
First of all, before we make any assumptions or statements, wouldn't it be nice to start with a clear definition of engagement? And wouldn’t it also be useful to know if a positive experience results in any tangible improvement in the number of hires made per candidate or in their performance or loyalty?
If recruiters and our profession ever hope to be taken seriously, we must become more scientific and data-disciplined. We need measurable definitions, larger data samples, useful metrics, and longitudinal studies to support our claims. Today we have anecdotes and some questionable qualitative data from a few very large organizations. We have little to no data from smaller firms.
Many Definitions – None Useful
To start, what does it mean when we say a candidate is engaged?
Is it that they have clicked on more links, viewed more videos, or watched them longer, or is it the amount of time a candidate stays on the career site? Maybe it is how fast they get a response to their application or whether they can talk with a recruiter? We don’t know which, if any, of these are important. We have numerous adjectives to describe the characteristics we think are associated with engagement – words that define attitudes, behaviors, culture, and the interplay between these.
But none are definitions, and none can be measured. I know many recruiters say this doesn’t matter; it is more a case of “I know it when I see it.” And many recruiters say that when they speak with candidates, they learn that response time, good videos about the positions, and so on made a difference. But do we know objectively whether they did? Maybe the candidate said that because we asked a leading question, or maybe the candidate didn’t know themselves. How many potential candidates were turned off by the experience? And would they have been as good as or better than the ones who went on to apply?
No Objective Measures
There is no evidence that the most common measures of engagement have predictive validity – that is, that they actually predict future job performance.
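To make the idea concrete: predictive validity is usually reported as the correlation between a selection-stage measure and a later outcome. The sketch below, with an invented engagement score and invented first-year performance ratings, shows the calculation a recruiting team would need to run – names and numbers are purely hypothetical.

```python
# Hypothetical illustration: predictive validity as the Pearson correlation
# between an engagement score captured at application time and a performance
# rating one year after hire. All data below is invented for illustration.

from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented data: engagement scores at application, ratings a year later.
engagement = [72, 85, 60, 90, 78, 65, 88, 70]
performance = [3.1, 3.4, 3.0, 3.2, 3.5, 3.3, 3.1, 3.4]

r = pearson_r(engagement, performance)
print(f"predictive validity (r) = {r:.2f}")
```

An r near zero would mean the engagement measure tells us nothing about who performs well – which is exactly the evidence no one has produced.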
The two most common ways engagement/experience are justified are:
Expert opinion and anecdotes
Stories abound about how a specific approach led to more candidates or better candidates, but no one defines “more” or “better” in any meaningful way. Some people swear that videos result in more qualified candidates. Others swear by interactive chatbots. The real question is: does increasing the appeal of a job or making it easier to apply improve the performance or loyalty of those who get hired?
These and other efforts to increase engagement are never correlated with actual performance or turnover data. To validate these methods, we would need a control group of people hired without the engagement methods, so we could see whether their performance or loyalty differed significantly.
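The control-group test described above can be sketched in a few lines. This is a minimal, hypothetical example – the two groups and their ratings are invented – using a permutation test to ask whether an observed difference in mean performance ratings between engagement-process hires and control hires could plausibly be chance.

```python
# Hypothetical sketch of the control-group comparison: performance ratings of
# hires from an "engaging" process versus a control group hired without it.
# A permutation test estimates how often a difference at least as large as the
# observed one arises by chance. All data is invented for illustration.

import random
from statistics import mean

def permutation_p_value(group_a, group_b, trials=10_000, seed=42):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

engaged_hires = [3.4, 3.1, 3.6, 3.2, 3.5, 3.3]   # hired via the new process
control_hires = [3.3, 3.2, 3.4, 3.1, 3.5, 3.2]   # hired without it

p = permutation_p_value(engaged_hires, control_hires)
print(f"p-value for the difference in mean ratings: {p:.3f}")
```

A large p-value here would mean the “better hires” claim is indistinguishable from noise – which is the test no engagement vendor has published.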
There were once mounds of anecdotes and case studies “proving” that sanitariums and sunshine cured tuberculosis. Anecdotes are not evidence; statisticians consider anecdotal evidence the least reliable and of the lowest quality.
Candidate surveys
Surveys of candidates are common. Awards are based on the results of surveys that ask recruiters a set of questions about their process and ask candidates about their experience in getting hired: what attracted them, what made them more interested, or what excited them. This is used as evidence that several factors are important – speed of response, quality of information, and so on. Surveys such as these are biased, ask leading questions, are rarely compiled by qualified statisticians, and are completed by a narrow set of respondents. All they do is cloak the stories and anecdotes with a veneer of objectivity. They are of no real scientific value.
If engagement or experience were to be objectively measured, it would require a measurable definition and then either objective third-party research with a control group or a longitudinal study involving a large sample compiled over time to see what makes a difference. I have seen none of these done, and it would be tough to do, which is most likely why it has not been done.
Does having a good experience or being engaged result in more people being hired from a given set of candidates? Where is the evidence that this is the case? Any single recruiter can make that claim, but is it verifiable? Does it apply year after year? Does it apply to more than one recruiter/firm? Can we generalize it?
Does having a good experience or being engaged result in higher quality candidates? How do we define quality? Again, there is no accepted, measurable definition of candidate quality that I have seen.
I am deeply in favor of simplifying the hiring process and of making it user-friendly. We should provide more information for candidates about each position, and we should provide prompt and useful feedback. But these have nothing to do with performance or quality. Using shoddy data to justify redesign or process improvement does our profession a disservice.
Need for Data
Recruiting needs to up its game and apply data science and analytics to its processes. It needs to challenge its own assumptions and test them with data. And there is a huge need for more precise and agreed-on definitions for candidate quality and engagement.
We are a long way from showing that candidates who had a positive experience or were engaged turn out any better than those who were not particularly engaged or than those who had a bad experience. Spending time and money on unproven theories is not only wasteful; it diverts us from doing things that make a bigger difference.