The AI gateway shipped by Vercel and the equivalents emerging elsewhere are marketed as developer convenience — one API across providers. The more interesting read is compliance. A gateway is the only chokepoint a regulated org has for AI traffic, and most teams are not yet treating it as one.
Anthropic's Computer Use API and the equivalents now shipping from other frontier labs let a model drive a real desktop — screen, keyboard, mouse. Air gaps that worked against network-layer exfiltration do not work against a model that can type.
Anthropic's 1M-token context window is a genuine capability leap. It is also an unannounced change to your data-minimization story, your audit log volume, and your exfiltration surface. The engineering teams pulling it in have not yet reconciled any of those.
Model Context Protocol servers are the new universal connector between agents and the rest of the enterprise. They are also a threat surface with properties no prior connector had, and the industry has not caught up.
Every AI-assisted session produces a log. Most organizations treat it as a chat transcript. We treat it as a first-class SDLC artifact, and it has changed how we do nearly everything downstream.
Data residency commitments were easier to keep when data sat in databases. AI systems transmit slices of that data to model providers on every inference, and most organizations cannot say where it lands.
Network egress policy is a control most security teams consider mature. It usually is — for the workloads it was designed against. Agents are a different workload, and most egress policies treat them as if they were not.
When you have more than one agent operating in your system, the coordination problem starts dominating the capability problem. We solved it by building a PM chain — not a flat mesh.
Continuous monitoring was never going to be satisfied by quarterly screenshots. Now that automation is doing the work of collection, the gap between the compliance artifact and the control it represents is visible in a way it was not before.
A model red team finds jailbreaks. An agentic-system red team finds the chain of plausible actions that ends with your production database unrecoverable. The skill sets overlap only partially.
The engineer who dispatched the agent should not be the engineer who approves the agent's output. This is a rule the rest of the industry has quietly forgotten in the rush to ship AI features.
A FedRAMP authorization boundary is a specific, drawn thing. An AI workload that reaches an external model provider crosses the boundary every time it runs. That is the problem the industry has not fully reckoned with.
The fastest path out of a well-defended network is now a chatbot with a tool that fetches URLs. The attack does not require a model compromise. It requires only the behavior the agent was designed to exhibit.
An agent that can commit to main is an agent you cannot recover from. We enforce branch discipline on agents more strictly than we enforce it on human engineers, for exactly that reason.
SOC 2 did not add new Trust Services Criteria for AI. It did not need to. The existing criteria apply, and auditors have started asking AI-specific questions that your current evidence does not answer.
The foundation model your product depends on is a piece of third-party software. Everything your security program says about third-party software applies to it. Most programs have not caught up.
We put our agent prompts in version control, next to the code. That single decision turned out to be load-bearing for nearly everything else we built around AI.
An audit trail designed for human engineers does not survive contact with an agentic workflow. The log volume alone breaks it. What replaces it is not more logs, but better structure.
When an agentic system makes a wrong decision — and it will — the damage is bounded by the authority you gave it. Most teams do not think about that envelope until after the incident.
The NIST AI Risk Management Framework tells you what to govern. It does not tell you how to produce evidence that you are governing it. That gap is where most AI compliance programs stall.
Giving an agent a role and a scope produces better output than giving it a task and a prayer. The reasons are less about the model and more about how humans structure work.
The discipline that makes regulated engineering work is exactly the discipline that makes AI-assisted engineering safe. They're the same muscle. This is not a coincidence.
Two years after the industry collectively agreed prompt injection was a serious problem, it is still the default vulnerability in agentic deployments. The reasons are structural, not technical.
Amazon's AI coding tool reportedly deleted a production environment and took AWS down for 13 hours. Amazon called it 'user error.' They're right — but not in the way they mean.
Streamlined authorization paths and up to 80% documentation reduction sound like good news. They are. They're also about to create a problem that most mid-market CSPs aren't ready for.