How Smart Are Recruiting Tools?

What are the limits of A.I.-based tools?

I, along with many others, have championed the use of automated recruitment tools based on artificial intelligence, for two reasons: they are less biased than humans, and they can make the recruitment process more efficient and engaging.

The available tools fall into two general categories: those that screen and assess candidates and those that automate routine tasks. All of these tools are based on Specialized Intelligence, which can solve well-defined problems or perform routine tasks in a given environment. However, they cannot do anything outside of those narrow tasks.

But are they really “intelligent”? What does that mean? Clearly, they are not intelligent in the sense we would consider a person intelligent. As soon as you ask Siri or Alexa a question outside their narrowly defined areas of expertise, they give a nonsensical answer, refer us to a human, or suggest some other action. For example, I just asked Alexa if I needed any milk. She said, “Are you shopping for milk?” This seems somewhat intelligent, but “she” was probably programmed that way so that Amazon’s Whole Foods might make a milk sale.

A recent book, Atlas of AI, by AI scholar Kate Crawford, argues that “…we’ve bought this idea of [A.I.] intelligence when in fact, we’re just looking at forms of statistical analysis at scale that have as many problems as the data that it’s given.”

All the various tools vendors offer to augment our recruitment activities rely on large amounts of data. This data feeds the algorithms that drive their decisions. Therefore, access to large amounts of data is vital, and the quality and variety of that data are crucial to the tools’ accuracy and objectivity. Unfortunately, we have witnessed many cases in which the data used was too narrow, already biased, or drawn from a sample too small to produce unbiased results. Amazon and Microsoft have both fallen into this trap.
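To make the sample-size point concrete, here is a toy Python sketch. The numbers are invented and no vendor’s tool is this simple; it just shows how a small sample can suggest a pattern that is not really there:

```python
# A toy illustration of the sample-size problem described above.
# With few data points, an apparent "pattern" is often just noise.
import random
random.seed(42)

def observed_success_rate(n, true_rate=0.5):
    """Simulate n past hires whose true success rate is 50%."""
    return sum(random.random() < true_rate for _ in range(n)) / n

for n in (10, 100, 10_000):
    print(n, observed_success_rate(n))

# Small samples can land far from the true 0.5, so an algorithm trained
# on them may "learn" differences that do not actually exist.
```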

The rise of automated tools using artificial intelligence is the result of the explosion in computing power. Large amounts of storage, fast processors, and affordable access have given us the ability to crunch vast amounts of data. Using this data, we can find patterns that were impossible to see before. For example, we can look at performance data, education, experience, and much more for everyone we have hired and determine the traits that have led to success or failure. In addition, we can analyze the answers candidates have given in interviews and correlate them with performance and retention. This has made assessment tools more powerful and has enabled us to make predictions about people based on their past. Whether this is ethical, fair, or even legal has yet to be worked out in some cases. The technology has exceeded our ability to understand it and to develop a framework for its fair and ethical use.
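As a rough illustration of that kind of correlation work, here is a minimal sketch. The attributes, column names, and numbers are all hypothetical; real tools are far more elaborate, but the underlying mechanics are similar: a statistical model fit to past outcomes.

```python
# A minimal sketch of pattern-finding in hiring data: fitting a simple
# model that relates hypothetical attributes to an outcome label.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical hiring records (in practice, thousands of rows).
df = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 8, 4, 6, 2],
    "interview_score":  [62, 75, 88, 70, 91, 80, 85, 65],
    "retained_2_years": [0, 1, 1, 0, 1, 1, 1, 0],  # outcome label
})

X = df[["years_experience", "interview_score"]]
y = df["retained_2_years"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "intelligence" here is statistics: a model fit to past outcomes.
model = LogisticRegression()
model.fit(X_train, y_train)

# The tool outputs probabilities, not judgments.
print(model.predict_proba(X_test))
```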

Even though we are awed by the seeming power of A.I. tools, none of the technology or thinking is new. Most of the underlying techniques have been around since at least the 1960s, and many of our current tools had their genesis in that era. The reason A.I. did not become as significant as it is today was the lack of computing power: analyzing a large amount of data, or even having access to it, requires a level of computing power that did not exist until a few years ago. A famous example is Eliza, the first chatbot. In the 1960s, Joseph Weizenbaum at MIT created Eliza to act as a psychotherapist. To users she appeared to be a real doctor who asked questions and offered useful advice, although she followed a very simple script. Her limitations were processor speed, memory, cost, and user access.

Even today, all chatbots follow scripts. Someone has to write the questions that will be asked and anticipate the answers. The chatbot itself is really little more than a decision tree: when a candidate answers a question, the algorithm follows the script to respond appropriately. With the powerful computers available today, coupled with natural language processing, chatbots can mimic humans, responding in realistic and accurate ways and “understanding,” to a degree, even an unexpected answer.
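To picture what “little more than a decision tree” means, here is a minimal Python sketch of a scripted screening bot. The questions, answers, and branching are hypothetical:

```python
# A minimal sketch of a scripted chatbot as a decision tree.
# Every question and branch must be written in advance.
screening_script = {
    "question": "Are you authorized to work in this country?",
    "answers": {
        "yes": {
            "question": "Do you have 3+ years of relevant experience?",
            "answers": {
                "yes": {"result": "Forward to recruiter"},
                "no":  {"result": "Does not meet requirements"},
            },
        },
        "no": {"result": "Does not meet requirements"},
    },
}

def run_chatbot(node):
    """Walk the script one question at a time until a result is reached."""
    while "result" not in node:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        if answer not in node["answers"]:
            # Off-script input: the bot cannot improvise, only re-ask.
            print("Sorry, I can only accept 'yes' or 'no'.")
            continue
        node = node["answers"][answer]
    print(node["result"])

run_chatbot(screening_script)
```

Production chatbots layer natural language processing on top so that “Sure, I can start Monday” maps onto “yes,” but the tree underneath remains fixed.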

I am an advocate for chatbots because they offer candidates around-the-clock access to answers and information. They save time, improve the candidate experience, and ease the workload of recruiters. If properly programmed, they can screen candidates and eliminate those who do not meet the requirements of the job they are seeking. But no matter how smart they appear to be, it is important to remember that they are not intelligent in the sense that a human is. They cannot perceive nuances in communication or understand emotions, and they cannot answer anything asked outside of their script.

Automated assessment tests use the data from thousands of other test takers to feed the algorithms that score how well someone can write code or solve problems, for example. Algorithms can access candidate data to predict their personality or measure their IQ. They can search the Internet to find people who match predetermined characteristics or have specific skills or education. All of these capabilities enlarge the recruiter’s reach and scope and make their work more effective. But it is a mistake to believe that these tools are intelligent in the way a human recruiter is.
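To see how mechanical that scoring can be, consider this sketch of norm-referenced scoring against a pool of past test takers. All the numbers are invented:

```python
# A minimal sketch of norm-referenced scoring: ranking one candidate's
# result against a pool of past test takers.
import statistics

past_scores = [54, 61, 67, 70, 72, 75, 78, 81, 84, 90]  # prior test takers
candidate_score = 79

# Percentile: fraction of past takers the candidate outscored.
percentile = 100 * sum(s < candidate_score for s in past_scores) / len(past_scores)

# z-score: standard deviations above the pool's mean.
mean = statistics.mean(past_scores)
stdev = statistics.stdev(past_scores)
z = (candidate_score - mean) / stdev

print(f"Percentile: {percentile:.0f}, z-score: {z:+.2f}")
```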

Newer A.I. technologies such as neural networks, along with even more computing power, will improve the ability of these tools to provide good advice and make more accurate predictions. But every tool is based on data that has to exist somewhere, and finding and accessing this data is a growing concern. Who owns it? Who can grant access to it? Is it being used in an unbiased manner? These are legitimate concerns many of us have.

One of the many things that differentiate humans from computers is our ability to use emotion, unconscious behavior, and even irrational action to make decisions. Of course, with these come bias, prejudice, and many other undesirable traits. But along with them also comes our unique ability to take a chance on someone, to offer an opportunity even though the data says not to, or to be swayed by our emotions. Computers cannot do this and most likely never will.

None of the tools can replace human judgment or decision-making. Instead, they augment our decisions by analyzing data better than we can and not deviating from rules. They offer us probabilities and point out things we might never have seen.

But it is doubtful that they will ever be human enough to jump to a conclusion, make a decision without data, feel that something is right, laugh, cry, empathize, or take a chance on some maverick candidate who might invent the next killer product.


Your upgrade to a paid subscription for a tiny $2.50 per month / $30 per year will give you access to various special reports, white papers, videos, and more. For example, we have recently published a white paper on the future of RPO, a series of links to key readings on A.I., and an exclusive interview with Felix Wetzel, Director of Career Acquisition at Cera. Much more coming soon.

Related Links

Atlas of AI by Kate Crawford

Highly recommended analysis and critical commentary on the entire A.I. establishment

Stop Talking About A.I. Ethics

Overcoming AI’s Challenge In Hiring: Avoid Human Bias