The Prompt File Is a Contract
When we started putting agent prompts in version control, we thought of it as a convenience. The prompts were getting long. Passing them through chat windows was tedious. Putting the text in a file and referencing the file was easier. That was it.
It turned out to be the most important architectural decision we made about AI.
What a prompt file becomes
A prompt file in the repository, alongside the code it authorizes changes to, stops being prose and becomes a contract. It says, in specific terms, what the agent is authorized to modify, what it is forbidden to touch, which branch it operates on, what constitutes done, and who signs off.
Its filename ties it to a ticket. It lives on the branch that will carry the change. It is reviewed as part of the work — any disagreement about scope is resolved in the prompt file, before the agent runs, not after the PR lands. When the work completes, the prompt file is still there, in the history, demonstrating exactly what was asked of the agent at the moment the session executed.
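To make the contract concrete, a prompt file of this kind might look like the sketch below. The ticket ID, paths, section names, and layout are all hypothetical illustrations, not a format the text prescribes:

```
# prompts/PAY-142.md — agent prompt for ticket PAY-142

Branch: feature/PAY-142-invoice-rounding

Authorized to modify:
  - src/billing/
  - tests/billing/

Forbidden:
  - src/auth/
  - any schema migration

Done means:
  - invoice totals use the agreed rounding rule
  - all tests under tests/billing/ pass

Sign-off: billing team lead
```

Everything a reviewer, an auditor, or a restarted session needs is in one committed file.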
An auditor asking "what was this AI authorized to do" gets a file. Not a recollection. Not a reconstruction. A file, with a commit hash, written by a human, that the agent was bound to.
The downstream effects
Treating the prompt as a contract cascades through the rest of the workflow.
Review becomes precise. A reviewer evaluating an agent's pull request is not asking "is this a reasonable change" in the abstract. They are asking "does this change match what the prompt file authorized." Out-of-scope modifications are not judgment calls — they are contract violations, and the contract is right there in the diff.
Scope creep becomes visible. An agent that wanders outside its lane produces a PR that no longer matches its prompt. The mismatch is obvious on review. The fix is to reject the PR and restart the session with a corrected prompt, not to re-review under a new implicit scope.
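The scope check itself is mechanical once the authorized paths live in the prompt file. A minimal sketch, assuming the prompt file lists allowed path globs (the function name, globs, and file paths here are hypothetical):

```python
from fnmatch import fnmatch

def out_of_scope(changed_paths, allowed_globs):
    """Return the changed paths not covered by any allowed glob."""
    return [p for p in changed_paths
            if not any(fnmatch(p, g) for g in allowed_globs)]

# Hypothetical scope taken from a prompt file: billing module and its tests only.
allowed = ["src/billing/*", "tests/billing/*"]

# Hypothetical list of files touched by the agent's PR.
changed = ["src/billing/invoice.py", "src/auth/session.py"]

violations = out_of_scope(changed, allowed)
# src/auth/session.py falls outside the contract: a violation, not a judgment call.
```

A non-empty `violations` list is grounds to reject the PR and restart with a corrected prompt, exactly as described above.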
Post-mortems become tractable. When something goes wrong, the first question is always "what was the agent told to do." With a prompt file, that question has a one-file answer. Without one, it has a Slack-thread answer, a screenshot answer, or no answer at all.
Handoffs become reliable. Arch writes a prompt file. Code executes against it. PM reviews the resulting work against the same file. Every actor in the chain is looking at the same authoritative source, not inferring from context.
Why this works
The prompt file works because it externalizes the agent's authorization into a form that predates the execution and outlives it. Before the file exists, the agent has no authority. After the work completes, the file remains as evidence of what was authorized.
Compare this to the default pattern in most AI-assisted workflows: the prompt is a message typed into a chat window, sent to the model, and retained only in whatever transient log the vendor maintains. Authority is ephemeral. Evidence is ephemeral. Review is ephemeral. The entire governance question collapses into "trust the engineer who used the tool."
That is not a workable model for regulated environments. It is barely a workable model for unregulated ones once the volume of AI-driven change starts to grow.
The small discipline
Putting prompts in version control is a small discipline. It takes a few minutes per task. It does not require new tooling. It does not require buy-in from a platform team. It is something an individual engineer can adopt unilaterally, tomorrow, without permission.
The payoff is disproportionate because the prompt file is the anchor every other governance artifact can attach to. The ticket links to it. The branch is named for it. The PR references it. The session log cites it. Remove the file and the chain breaks. Keep it, and the chain holds.
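The chain described above can even be checked mechanically. A sketch, assuming a JIRA-style ticket ID appears in the prompt filename and branch name, and that the PR body cites the prompt path (the function, the ID convention, and all example values are assumptions for illustration):

```python
import re

def chain_holds(prompt_path, branch, pr_body):
    """Check that prompt file, branch, and PR all anchor to the same ticket."""
    m = re.search(r"([A-Z]+-\d+)", prompt_path)  # e.g. prompts/PAY-142.md
    if not m:
        return False  # a prompt file with no ticket ID breaks the chain
    ticket = m.group(1)
    return ticket in branch and prompt_path in pr_body

# Hypothetical chain: ticket ID in the filename, the branch, and the PR body.
linked = chain_holds("prompts/PAY-142.md",
                     "feature/PAY-142-invoice-rounding",
                     "Executes prompts/PAY-142.md as authorized.")
```

If any link is missing — no ticket in the filename, a mismatched branch, a PR that never cites the prompt — the check fails, which is the point: remove the file and the chain breaks.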
The prompt file is a contract. Treat it that way, and a lot of the hard questions about AI governance become much easier to answer.