Novaprospect

Blog

Perspectives on AI governance, FedRAMP compliance, and building regulated-grade engineering infrastructure.

AI Governance · Compliance · Gateway

AI Gateway as a Compliance Control Point

The AI gateway shipped by Vercel and the equivalents emerging elsewhere are marketed as developer convenience — one API across providers. The more interesting read is compliance. A gateway is the only chokepoint a regulated org has for AI traffic, and most teams are not yet treating it as one.

Read post →
AI Security · Agents · Air Gap

Computer Use and the Evaporating Air Gap

Anthropic's Computer Use API and the equivalents now shipping from other frontier labs let a model drive a real desktop — screen, keyboard, mouse. Air gaps that worked against network-layer exfiltration do not work against a model that can type.

Read post →
AI Security · Data Minimization · Models

The Compliance Cost of a Million-Token Context

Anthropic's 1M-token context window is a genuine capability leap. It is also an unannounced change to your data-minimization story, your audit log volume, and your exfiltration surface. The engineering teams pulling it in have not yet reconciled any of those.

Read post →
AI Security · MCP · Supply Chain

The MCP Threat Surface Nobody Is Modeling

Model Context Protocol servers are the new universal connector between agents and the rest of the enterprise. They are also a threat surface with properties no prior connector had, and the industry has not caught up.

Read post →
Engineering · AI · SDLC

The Session Log Is an SDLC Artifact

Every AI-assisted session produces a log. Most organizations treat it as a chat transcript. We treat it as a first-class SDLC artifact, and it has changed how we do nearly everything downstream.

Read post →
AI Compliance · Data Residency · Privacy

Data Residency for AI: Where Did the Prompt Go?

Data residency commitments were easier to keep when data sat in databases. AI systems transmit slices of that data to model providers on every inference, and most organizations cannot say where it lands.

Read post →
AI Security · Network · Agents

The Egress Problem You Did Not Know You Had

Network egress policy is a control most security teams consider mature. It usually is — for the workloads it was designed against. Agents are a different workload, and most egress policies treat them as if they were not.

Read post →
AI Compliance · FedRAMP · ConMon

Evidence Collection at Machine Scale

Continuous monitoring was never going to be satisfied by quarterly screenshots. Now that automation is doing the work of collection, the gap between the compliance artifact and the control it represents is visible in a way it was not before.

Read post →
Engineering · AI Governance · Compliance

Separation of Duties Applies to AI, Too

The engineer who dispatched the agent should not be the engineer who approves the agent's output. This is a rule the rest of the industry has quietly forgotten in the rush to ship AI features.

Read post →
FedRAMP · AI Compliance · Boundary

FedRAMP and AI Workloads: The Boundary Problem

A FedRAMP authorization boundary is a specific, drawn thing. An AI workload that reaches an external model provider crosses the boundary every time it runs. That is the problem the industry has not fully reckoned with.

Read post →
AI Security · Data Exfiltration · Agents

Data Exfiltration Through the Helpful Agent

The fastest path out of a well-defended network is now a chatbot with a tool that fetches URLs. The attack does not require a model compromise. It requires only the behavior the agent was designed to exhibit.

Read post →
Engineering · AI · Governance

Branch Discipline for AI Agents

An agent that can commit to main is an agent you cannot recover from. We enforce branch discipline on agents more strictly than we enforce it on human engineers, for exactly that reason.

Read post →
AI Security · Supply Chain · Governance

Your Model Is a Dependency

The foundation model your product depends on is a piece of third-party software. Everything your security program says about third-party software applies to it. Most programs have not caught up.

Read post →
Engineering · AI · Governance

The Prompt File Is a Contract

We put our agent prompts in version control, next to the code. That single decision turned out to be load-bearing for nearly everything else we built around AI.

Read post →
AI Compliance · Audit · NOVAICOM

Audit Trails at Machine Scale

An audit trail designed for human engineers does not survive contact with an agentic workflow. The log volume alone breaks it. What replaces it is not more logs, but better structure.

Read post →
AI Security · Agents · Production Safety

The Blast Radius of an Agent

When an agentic system makes a wrong decision — and it will — the damage is bounded by the authority you gave it. Most teams do not think about that authority envelope until after the incident.

Read post →