Originally published on LinkedIn.
As soon as an org adopts coding agents (Cursor / Claude Code / Windsurf / …), a new problem appears: rules sprawl.
Org-level policies, team conventions, project quirks, and developers' personal preferences end up scattered across repos and tools — then drift. Agents become inconsistent (or unsafe) because nobody knows what's canonical.
I open-sourced Org Agentic Toolkit (OAT) to manage this like real configuration:
- One authoritative org baseline ("constitution")
- Explicit inheritance per project
- Deterministic compilation + validation
- Optional personal preferences overlay (without breaking org rules)
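To make the layering concrete, here's a minimal sketch of how this kind of compilation could work. This is illustrative only — the function, config shape, and `locked` mechanism are my assumptions, not OAT's actual format or merge semantics:

```python
# Hypothetical sketch of layered rule compilation (not OAT's real API).
# Layers merge in order: org baseline -> project -> personal overlay.
# Rules the org marks as "locked" cannot be overridden downstream.

def compile_rules(org, project, personal):
    locked = set(org.get("locked", []))
    merged = dict(org.get("rules", {}))
    for layer in (project, personal):
        for key, value in layer.get("rules", {}).items():
            if key in locked:
                raise ValueError(f"rule '{key}' is locked by the org baseline")
            merged[key] = value
    # Deterministic output: sort keys so compilation is reproducible.
    return dict(sorted(merged.items()))

org = {"locked": ["secrets_policy"],
       "rules": {"secrets_policy": "never-commit", "style": "pep8"}}
project = {"rules": {"style": "black"}}        # project override allowed
personal = {"rules": {"editor": "vim"}}        # personal addition allowed

print(compile_rules(org, project, personal))
# {'editor': 'vim', 'secrets_policy': 'never-commit', 'style': 'black'}
```

The key design point: personal and project layers can extend or override unlocked rules, but the org baseline stays authoritative — exactly the "without breaking org rules" guarantee above.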
This is a small but important piece of the operational puzzle. The same principle that makes infrastructure-as-code work — a single source of truth, version-controlled, auditable — needs to apply to the rules that govern AI agents inside organizations.
Contributions welcome (agent targets, templates, validation/tooling).