The Algorithm Would Have Rejected Patton
Where AI stumbles.
There is a famous photograph of General George S. Patton taken somewhere in France in the summer of 1944. He is standing in a field, jaw set, holding a whip, looking at something just beyond the frame. He looks like a problem. He looks like the kind of man who would fail every behavioral competency screen ever devised, flunk the culture-fit interview, and get auto-rejected before a human recruiter ever laid eyes on his resume.
He also saved the Allied campaign in Europe.
The contrast between Patton and Field Marshal Bernard Montgomery is one of the more instructive case studies in organizational psychology, yet military historians rarely frame it that way. Montgomery was everything a modern hiring algorithm would love. Meticulous. Process-oriented. A careful planner who believed in moving only when conditions were optimal, when supply lines were secured, when every contingency had been mapped. He was, by almost every traditional measure, a model commander.
He was also agonizingly slow, repeatedly cautious at moments that demanded boldness, and was widely credited with allowing German forces to escape at critical junctures that a more aggressive commander would have sealed. His campaigns in North Africa and Italy were exercises in methodical competence that cost more time, more lives, and more opportunity than necessary.
Patton, by contrast, was a disaster of a human being in almost every interpersonal dimension. He was volatile, profane, insubordinate, and capable of catastrophic judgment in public — most famously slapping a shell-shocked soldier in a hospital and nearly ending his own career. He was the kind of leader who made HR departments in every era break into a cold sweat. He was also perhaps the most brilliant and instinctive operational commander the United States produced in the twentieth century. When Eisenhower unleashed him across France with the Third Army, Patton moved so fast that his supply lines couldn’t keep up with him. He didn’t wait for perfect conditions. He didn’t wait for certainty. He found the gap and drove through it.
The question this raises for modern recruiting is uncomfortable: how many Pattons are we systematically filtering out, and how many Montgomerys are we optimizing toward?
Hiring has always involved performance. Candidates have always tried to present their best selves, shape narratives, emphasize strengths, and bury weaknesses. This is human and inevitable. But something has changed in both the architecture and the psychology of the modern hiring process that goes beyond normal professional self-presentation.
Today’s hiring environment is increasingly defined by what might be called optimization theater, a process designed to reduce uncertainty so aggressively that it has begun to eliminate the very qualities it claims to seek. AI screening tools parse resumes for keyword density. Structured behavioral interviews score candidates against competency rubrics. Predictive analytics models score the likelihood of success against patterns derived from existing high performers. Culture-fit assessments measure the alignment of values with a defined organizational profile. Patton would have failed miserably.
Every one of these tools, taken individually, is defensible. Together, they have created a system that is extraordinarily effective at identifying people who resemble the people you already have, and extraordinarily poor at identifying people who might change what you're capable of becoming.
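That resemblance trap can be made concrete with a deliberately simplified sketch. No real screening vendor works exactly this way, and every keyword and candidate here is invented, but the core mechanic of similarity scoring against a profile of existing performers looks roughly like this:

```python
# Toy illustration (hypothetical, not any vendor's actual system):
# score candidates by keyword overlap with resumes of current high performers.

def keyword_overlap(candidate_keywords: set, reference_keywords: set) -> float:
    """Fraction of the reference profile's keywords found in a candidate's resume."""
    if not reference_keywords:
        return 0.0
    return len(candidate_keywords & reference_keywords) / len(reference_keywords)

# Profile derived from existing top performers: by construction, the system
# can only reward resemblance to what the organization already has.
high_performer_profile = {"agile", "stakeholder", "cross-functional", "kpis", "roadmap"}

candidates = {
    "conventional": {"agile", "stakeholder", "kpis", "roadmap", "scrum"},
    "unconventional": {"cavalry", "improvisation", "speed", "initiative"},
}

scores = {
    name: keyword_overlap(keywords, high_performer_profile)
    for name, keywords in candidates.items()
}
# The candidate who mirrors the existing profile scores 0.8; the outlier
# scores 0.0 regardless of actual potential.
```

The point of the sketch is structural: a similarity metric has no channel through which an unfamiliar strength can register. The unconventional candidate is not penalized for weakness; they are invisible by definition.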
This is not a small distinction. Research on team composition consistently shows that cognitive diversity (genuine differences in how people process information, frame problems, and reach conclusions) is a stronger predictor of collective intelligence than demographic diversity alone. The teams most capable of navigating complex, novel challenges are rarely those assembled from people who share the same frameworks, risk tolerance, and professional arc. They are the teams with friction, with members who don't entirely fit, who push back from unexpected angles.
You cannot build that kind of team through a process designed to minimize surprise. And minimizing surprise is precisely what algorithmic screening is built to do.
There is a concept in complexity theory called the "edge of chaos": the productive zone between rigid order and pure randomness, where the most interesting adaptive behavior tends to emerge. Healthy ecosystems operate here. So do the most innovative organizations. So did Patton's Third Army.
Montgomery never found the edge of chaos because he was constitutionally committed to eliminating it. Every operation had to be a set-piece, with predictable inputs and controlled outputs. The problem is that war, like markets, like talent development, like organizational transformation, is not a controlled environment. It is a complex adaptive system that, by almost any definition, defeats perfect planning. The people best equipped to operate in it are not necessarily the most methodical. They are the most adaptive, the most capable of improvising under uncertainty, the most willing to act before all the information is in.
The willingness to hire that kind of person requires accepting a different kind of risk. Not the reckless risk of hiring without judgment, but the deliberate risk of leaving room for serendipity: the unplanned conversation that changes a project's direction, the hire who came from left field and saw what everyone else had normalized, the candidate whose resume made no obvious sense but whose instincts turned out to be irreplaceable.
Organizations that have done this consistently produce disproportionate results. The history of innovation is largely a history of accidental fits — people who landed somewhere unexpected, who weren’t the obvious choice, who succeeded precisely because they hadn’t been shaped entirely by the environment they were entering.
The deeper cost of the current moment is what it does to candidates before they even arrive at the process. When every signal in the hiring environment communicates that there is a correct version of yourself to present — the right keywords, the right competency stories, the right values language — it trains people, especially younger ones early in their careers, to suppress whatever is genuinely strange or hard to categorize about themselves. They learn to translate themselves into the language the algorithm speaks. And in doing so, they lose access to whatever made them worth hiring in the first place.
The recruiters and hiring managers who understood this intuitively, who hired the person who seemed off, who took the meeting that didn't make sense on paper, who overrode the score because something in the room told them to, were not being sloppy. They were doing something the algorithm cannot do. They were making a bet on human potential under conditions of genuine uncertainty.
That bet doesn’t always pay off. Neither does the careful, optimized, risk-minimized alternative. The difference is that when the serendipitous hire works, it tends to produce something that no process could have predicted or engineered. Montgomery took Caen eventually. Patton took everything else.
The algorithm would have hired Montgomery every time. We should think very hard about what that means. I love the potential of AI, but we have to remember that deviant behavior is often just what we need.
I need your help. This newsletter is funded by my subscribers. I can’t keep this up every week without them. If you value this at all, please become one of them by joining as a paying subscriber. Thanks.