Human in the Loop

AI produces outputs. Humans produce outcomes. The gap between them is judgment and intent. The human in the loop exists because the work requires someone who understands what the desired outcome is. That someone is you.

[Image: A 1950s craftsman stands at a workbench holding a hand tool, surrounded by humming machines, his calm focus contrasting with the automated activity behind him.]
The machines can run, but only the craftsman knows what he's building.

This article started with a conversation AI could never have had.

Two people. A table. Coffee and cigars. Jonathan Richter and I get together regularly for the kind of conversation that's getting harder to find: no agenda, no devices, just ideas. We think alike in a lot of ways, and last week we got onto the topic of AI, design, and what it actually means to use a powerful tool well. One thing led to another the way it does when two people are genuinely curious about the same things, and somewhere in that exchange this piece started taking shape.

No prompt could have produced that moment. No model could have sat at that table. This is not a story about AI. This is a story about what AI cannot replace.

AI is the most powerful tool most of us will ever hold. And like every powerful tool in history, it is already teaching people how to misuse it.

The tool doesn't know what it's building

A hammer cannot frame a house. It can drive a nail wherever you put it, including through something that should never have a nail in it. A mixing console cannot mix a show. It can recall every setting you saved and reproduce every parameter you specified, including the wrong ones. The tool executes. The craftsman decides.

AI generates. It does not intend. It produces language, images, plans, and proposals with a confidence that can easily be mistaken for understanding. But the model has no idea what you are actually trying to build. It doesn't know your client, your history, your values, or what is actually at stake. It knows patterns. You know the point.

This is not a limitation that will be engineered away. It is the nature of the thing. A chisel will never care about the sculpture. That's not the chisel's job.

The quiet abdication

Nobody decides one morning to stop thinking. It happens incrementally, and it happens because AI makes the incremental version feel like efficiency.

You ask AI to write the email instead of telling it what you actually want to say. You accept the first output because it sounds polished and confident. You stop asking whether the answer is right and start asking whether it sounds right. You let the tool frame the problem because framing it yourself takes longer.

That last one is where founders lose the most ground. The ability to define the real problem, before anyone starts solving it, is one of the most valuable things a leader does. The moment you outsource that framing, you are no longer leading. You are editing.

Creative professionals do a version of this too. They hand the first draft to AI and then spend their energy reacting to what came back instead of originating. The voice that makes their work theirs gets quieter every time they let something else speak first. They don't lose the skill overnight. They just stop exercising it until one day it isn't there.

Outputs versus outcomes

AI produces outputs. Humans produce outcomes.

An output is a thing that exists. A document. A strategy deck. A campaign brief. Ten thousand lines of code. An outcome is a thing that matters. A decision that moved the business forward. A piece of work that changed how someone thought about their problem. A relationship that deepened because someone said exactly the right thing at exactly the right time.

The distance between those two things is judgment. It is context. It is intent. It is the ability to ask not just whether something is technically correct but whether it is actually true, actually useful, and actually right for this person in this moment.

None of that is in the model. All of it is in you.

Not a safety check

There is a phrase used in technology and policy circles: human in the loop. It usually means a person is positioned somewhere in an automated process to catch errors, apply oversight, and approve decisions before they become irreversible.

That framing is too small.

The human in the loop is not a quality control step. The human is the entire point. The loop doesn't exist to contain AI. It exists because the work requires someone who understands what it is for.

Think about what the craftsman actually does. They don't supervise the chisel. They don't audit the saw. They bring to every tool a set of things the tool will never have: a vision of what the finished thing should be, a sense of when something has gone wrong before it becomes a problem, and a willingness to take responsibility for the outcome. The tool extends their capability. It does not replace their intelligence.

When you position yourself as a reviewer of AI output rather than the author of AI-assisted work, you have already made a mistake. Reviewers react. Authors decide. The work needs an author.

What using AI well actually requires

Using AI well is not complicated. It is just harder than it looks, because it requires something most productivity culture trains out of people: knowing what you want before you ask for it.

The prompt is not the work. The prompt is the translation of your thinking into a form the tool can act on. If your thinking is vague, the prompt will be vague, and the output will be confident-sounding vagueness dressed up as a deliverable. The model cannot fix a fuzzy brief. It will just produce something that looks like an answer.

Before you open the interface, be able to answer three questions. What am I actually trying to accomplish? What does a good outcome look like? What would disqualify an answer, even a polished one? If you can answer those, you are ready to use the tool. If you can't, no tool will save you.

Then interrogate the output. Not hostilely. Professionally. Ask whether it reflects what you actually know about the situation. Ask what it assumed that you didn't tell it. Ask what a person who knew the full context would push back on. The goal is not to find fault. The goal is to finish the work that the tool cannot finish, which is the part that requires judgment.

AI is a force multiplier. Force multipliers amplify what is already there. If you bring clarity, experience, and craft to the tool, it will extend your reach in ways that are genuinely remarkable. If you bring vagueness and passivity, it will multiply those too, very quickly.

The fear aimed at the wrong target

Creative professionals and founders feel the threat of AI differently, but it tends to land in the same place: the fear of being replaced.

That fear is real. It is also aimed at the wrong target.

People were afraid of the automobile. They were afraid of electricity. Every transformative tool in history arrived with a version of the same panic: that it would upend the order of things, eliminate livelihoods, and leave people behind. Some of those fears were partially right. None of them were right about who actually got left behind. It was never the capable person who learned to use the new tool. It was always the person who refused to.

AI does not replace people. It amplifies them. Like every transformative tool before it, it extends what a capable person can do and widens the gap between people who know what they are doing and people who do not. The printing press did not replace writers. It made the ones with something to say more powerful and made the ones without something to say more visible. AI is doing the same thing, faster.

Think about what amplification actually means. A great microphone does not make a poor singer sound like a great one. It makes a great singer sound like themselves, only bigger. A poor singer handed the same microphone just becomes more audible. The tool does not change what is there. It reveals it.

This is why the fear of being replaced is aimed at the wrong target. The question AI is actually asking you is not "are you replaceable?" It is "what do you actually bring?" Your taste. Your judgment. Your understanding of what the client actually needs, not what they said they needed. Your ability to walk into a room and read what is happening under the surface. Your thirty years of knowing what works and, more importantly, what doesn't.

The model has never been in the room. It has never lost a client, salvaged a project at midnight, or made a call that turned out to be wrong and lived with the consequences. You have. That experience is not a credential. It is the actual intelligence the tool is waiting to amplify.

The craftsman who understands their tools does not fear a new one. They ask what it can do, learn how to use it well, and stay in the relationship with their own judgment that makes the work worth doing in the first place. The craftsman who fears the tool is usually afraid of something else, and that something else is worth examining.

Your taste, your experience, your values, your way of seeing: the model doesn't have any of it. All of that is what makes the work yours.

The loop doesn't close itself

I have watched engineers try to automate their way out of the hard parts of live production. The room-reading. The split-second calls. The knowing when to trust your ears over your meters. They build beautiful systems and the systems do exactly what they were told to do and sometimes that is enough. And sometimes the show turns and the system has no idea, and the engineer who stayed in the loop is the one who catches it.

That is not a metaphor. That is what happened, more than once, in rooms I have stood in.

Think of the loop the way you think of an electrical circuit. The power can be present, the system can hum, everything can be connected and ready, but until someone throws the switch, nothing lights up. The human is the switch. Without them, the circuit produces no meaningful energy. It just waits.

Every tool you use, including AI, is waiting for you to bring something it cannot generate on its own. The intention. The accountability. The understanding of what this is actually for and whether it is actually working.

The loop doesn't close itself. You close it. Stay in it.

Written by John N. Wilson, founder of Arkira Partners, where he consults with luxury hospitality, entertainment, and lifestyle brands, and Viation, where he designs integrated audiovisual systems that make spaces feel natural and inspiring.