Build less, ship more: The three pillars of product command
Improve your one-shot agentic engineering performance by clearly explaining yourself.
Product command is a coordination model for centralizing intent (the why, success conditions, and constraints), decentralizing execution (letting coworkers decide the less crucial details), and aligning both through explicit verification steps.
This contrasts with a command-and-control coordination model, in which a commander is responsible for decision-making, and subordinates are responsible for execution.
The end result is products that reach higher quality with less effort.
This post focuses on the three pillars of product command.
Coding with AI feels like a slot machine. Every once in a while, you hit the jackpot, but most of the time, you’re holding onto hope. Instead of building, you’re babysitting... watching over the shoulder of an AI bound to make mistakes, and it’s your job as the developer to make sure it doesn’t.
That sounds terrible.
The good news: If you can just explain your intent, AI can fill in the gaps. Then you’re creating, designing systems, and building... not babysitting.
Centralize intent
Your goal is to explain what you want and what you don’t want, with as few words as possible. Based on mission command, I’ve adopted these headings:
Purpose
End State
Constraints
Tradeoffs
Risk tolerance
Escalation conditions
Verification Steps
Activation / Revalidation
Appendix
Here’s my copy-pasteable template.
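To make the headings concrete, here's a minimal sketch of what a filled-in INTENT.md might look like. The feature and every detail below are invented for illustration; the actual template just provides the headings:

```markdown
# INTENT: Add CSV export to the reports page  <!-- hypothetical example -->

## Purpose
Users keep copy-pasting report tables into spreadsheets; an export removes that friction.

## End State
A "Download CSV" button on every report; the file matches the on-screen table exactly.

## Constraints
No new dependencies; reuse the existing report-query layer.

## Tradeoffs
Prefer a simple synchronous export over streaming; reports are small.

## Risk tolerance
Low for data correctness; moderate for UI polish.

## Escalation conditions
Stop and ask if any report takes more than 5 seconds to export.

## Verification Steps
1. Export each report type and diff the file against the on-screen values.
2. Run the existing report test suite.

## Activation / Revalidation
Re-check this doc if the report schema changes.

## Appendix
Links to the report-query module and any applicable runbooks.
```

Notice that nothing above says *how* to implement the export; it only pins down the why, the done condition, and the boundaries.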
These headings don’t tell the AI what to do implementation-wise; they tell it the strategy you’re going after. You don’t have to tell it how to do things, you have to give it the why.
I work with the LLM to come up with the doc... usually by asking it to gather context on the subject first, then build the doc, and 9 times out of 10 it gets the majority of it right.
I then review and ensure I agree with every single word in the document.
Distribute execution
Once I’ve built a good-enough INTENT.md, I start a new context window and paste in a link to it, along with this text:
let's implement this. Use your best judgement. Make sure to run through the verification steps thoroughly. I'm not going to be around, so prioritize using your best judgement and making frequent commits, but don't submit them; keep them local. Try to find any runbooks or policies that apply to your work, and make sure you follow them. You can do this! Good luck!
Usually, the AI takes 5-20 minutes to build something, and then at the end, I say, “Let’s build an AAR.”
Improve through structured feedback
An AAR is military for “After Action Report.” It too has a template:
Context
Intent
What actually happened (facts only)
Delta analysis (why it was different)
Initiative Assessment (When the AI made its own decisions)
Weaknesses in intent (Parts where the intent wasn’t clear enough)
What we will sustain
What we will improve
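As with the intent doc, here's a sketch of how those headings might be filled in after a session. The specifics are invented for illustration and continue the hypothetical CSV-export feature above:

```markdown
# AAR: CSV export session  <!-- hypothetical example -->

## Context
One-shot implementation run against INTENT.md, ~15 minutes, no human in the loop.

## Intent
Ship a CSV export matching the on-screen report tables.

## What actually happened (facts only)
Export works for 3 of 4 report types; the summary report times out.

## Delta analysis (why it was different)
The summary report aggregates at render time; the intent assumed all data
came from one query layer.

## Initiative Assessment
The AI added a loading spinner unprompted; worth keeping.

## Weaknesses in intent
"Matches the on-screen table" didn't define behavior for live-aggregated reports.

## What we will sustain
Verification step 1 (diff against on-screen values) caught the timeout.

## What we will improve
The intent should name every report type explicitly.
```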
I run this after each session; it captures the AI’s analysis of the process it just ran, while that context is still loaded.
Next, I manually review / test the changes. Usually, this means I’m using the product.
If the product is 80% of what I expected, I will ship. If there are a few changes (minor placement issues in the UI, for instance), I fix them with the LLM and update the AAR.
If the product is < 80% what I expected, I will explain what happened, identify true weaknesses in the intent, then trash the current work, go back to the intent, and have it run the same process.
I’ve only ever had to run this process 3 times to get what I want; normally I just run it once.
Try it; it will work, I promise.
Policy: Long-term memory
One quick side note: I save all the things I want my AI to learn in a folder called “policy” next to the INTENT.md. I use the AAR to keep it updated, but like in a real organization, updating policy can have knock-on effects, so I do it sparingly.
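Concretely, the resulting layout might look something like this. The file names inside `policy/` are illustrative, not prescribed:

```
project/
├── INTENT.md
├── AAR.md
└── policy/
    ├── commits.md        # e.g. "commit locally, never push"
    ├── verification.md   # e.g. "always run the full test suite"
    └── ui-conventions.md # e.g. spacing and naming rules
```

Because the AI is told to look for applicable policies before working, anything saved here shapes every future session without being repeated in each INTENT.md.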
