How /goal Works Across Claude Code, Codex CLI, and Hermes Agent
Here’s something I noticed over the past few weeks.
OpenAI added a /goal command to Codex CLI. Anthropic added one to Claude Code. Hermes Agent from Nous Research already had it built in.
Three different ecosystems landed on almost the same idea at the same time. That usually means something important is happening. And I think /goal changes how we work with agents more than most people realise.
One Topic: A Guide to the /goal Command for AI Agents
What /goal Actually Is
Most people look at it and think: another prompt or command. It’s not.
A regular prompt says: “Do this next thing.” You read the output, decide what to do next, and push the agent forward manually. You’re driving every turn.
The /goal command flips that entirely.
You write down what done looks like once, and the agent works toward it until it gets there.
Here’s a real example:
/goal Resolve all TypeScript errors in /src. Done means tsc --noEmit passes with zero errors and no existing tests are broken.
One instruction. The agent figures out every step in between. It keeps going automatically until those conditions are met. This is the shift from prompting (you driving) to assigning (the agent driving toward a target you defined).
Three Tools – One Shared Primitive
Codex CLI (OpenAI) – strong at building. Give it a spec, set a /goal, walk away. This shipped in Codex v0.128.0.
Claude Code (Anthropic) – strong at the opposite: finding what’s wrong with code that looks right. Security gaps, edge cases, spec compliance. Claude Code v2.1.139 added /goal with a built-in evaluator that runs after every turn.
Hermes Agent (Nous Research) – not a coding worker, but an orchestrator. It doesn’t code. It coordinates. You send Hermes a message, it creates goal cards on a Kanban board, picks the right tool for each task, and manages every handoff — including sending you a summary at the end.
What makes this significant: all three accept the same format. So now they compose.
One message to Hermes → Codex builds → Claude Code reviews → Hermes verifies → you get a result. You never touched a terminal. I wrote about Hermes in an earlier JustDraft edition. This is the next layer on top of that.
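Because all three tools accept the same goal format, the handoff can be modeled as a chain of goal assignments. Here is a minimal sketch of that idea; the `send_goal` helper and the stage names are illustrative assumptions, not real APIs of any of these tools:

```python
# Hypothetical sketch of the build -> review -> verify handoff.
# `send_goal` stands in for whatever dispatch mechanism the
# orchestrator actually uses; it is not a real Hermes/Codex API.
def send_goal(agent: str, goal: str) -> str:
    # In practice this would shell out to Codex CLI, Claude Code, etc.
    return f"{agent} completed: {goal}"

def pipeline(spec: str) -> list[str]:
    """Run the same spec through three goal-driven stages in order."""
    results = []
    for agent, goal in [
        ("codex",  f"Build: {spec}"),
        ("claude", f"Review the build of: {spec}"),
        ("hermes", f"Verify the goal is met for: {spec}"),
    ]:
        results.append(send_goal(agent, goal))
    return results
```

The point of the sketch is the shape, not the plumbing: each stage receives a goal in the same format, so adding a fourth tool means adding one more entry to the list.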
The Part Most People Miss: The Judge
/goal doesn’t just run a task in a loop. After every turn, a lightweight evaluator model checks one thing: Has the goal been achieved?
If yes – done. Control hands back to you.
If no – the agent continues automatically.
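The loop above can be sketched in a few lines. This is a toy model under assumed names (`step`, `done`, `run_goal`); none of the tools expose these internals, but the control flow is the same:

```python
from typing import Callable

def run_goal(step: Callable[[], None],
             done: Callable[[], bool],
             max_turns: int = 50) -> bool:
    """Toy model of the /goal loop: act, then let an evaluator
    check the completion condition after every turn."""
    for _ in range(max_turns):
        step()        # the agent takes one step toward the goal
        if done():    # evaluator: has the goal been achieved?
            return True   # yes -> done, control hands back to you
    return False          # turn budget exhausted without success

# Toy example: the "goal" is to reach a count of 3.
counter = {"n": 0}
achieved = run_goal(step=lambda: counter.update(n=counter["n"] + 1),
                    done=lambda: counter["n"] >= 3)
```

The key design choice is that `done` is a check the evaluator runs against evidence (like the output of `tsc --noEmit`), not a claim the agent makes about itself.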
This matters more than it sounds.
Coding agents are confident. They’ll tell you the build passed when it was never run. They’ll say tests passed when the tests never actually executed. I’ve seen it happen.
The evaluator closes that gap. Without it, the /goal command is just a prompt with ambition. With it, it becomes a contract between you and the agent.
What This Changes for You
Your job as the human is different now.
You stop saying “keep going.” You start defining what done means – precisely. That means writing a clear completion condition, naming the validation command, and setting boundaries on what the agent can touch.
Vague goals produce confident agents and broken results. Precise conditions produce precise outcomes. The agent handles the turns. You handle the definition of done.
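To make that concrete, compare a vague goal with a precise one. The completion condition and paths below are illustrative, not a recommended template:

```
# Vague: the agent decides for itself what "fixed" means.
/goal Fix the login bug.

# Precise: completion is checked against a named command, with
# boundaries on what the agent can touch.
/goal Fix the login redirect bug in /src/auth. Done means
`npm test -- auth` passes and no files outside /src/auth are modified.
```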
Why This Is Bigger Than It Looks
HTTP is a primitive. JSON is a primitive.
The /goal command is becoming one for coding agents.
If the next tool adopts it, it joins this pipeline without anyone changing anything. The workers change. The primitive stays.
That’s what good standards do: they make composition possible across tools that were never designed to work together.
You’re watching one form in real time.

Interested in travel or photography? Read last week’s LensLetter newsletter about the impact of overtourism.
Read last week’s JustDraft about Hermes Agent Use Cases.
Two Quotes to Inspire
The next productivity leap will not come from faster answers. It will come from systems that remember the objective.
Most people prompt their tools. The best builders assign them outcomes and trust the system to fill the gap.
One Passage Summary From My Bookshelf
In the opening pages, Doerr recalls how Andy Grove of Intel built a disciplined goal-setting system around two deceptively simple questions: What do I want to accomplish? And how will I know when I’ve gotten there? Grove’s insight was that an objective without a measurable key result is just a wish. The goal has to be concrete enough that at the end of any given period, you can look at it without any ambiguity and say: did I achieve it or not? Yes or no. No negotiation, no interpretation. That binary clarity was what separated Intel’s planning from the vague aspiration-setting that plagued most organisations.
Doerr builds on this with a principle that runs through the entire book: transparency and accountability only work when goals are written down, visible, and tied to a measurable outcome. Teams that can see what “done” looks like and track their progress toward it in real time consistently outperform those operating on assumptions and informal check-ins. The goal isn’t a motivational poster. It’s a contract the team makes with itself, one that requires honest reckoning at the end of every cycle.
📚 From Measure What Matters by John Doerr


