Your Model Is a Dependency
The foundation model underneath your AI product is a dependency. It ships from a vendor. It updates on a schedule you do not control. It occasionally changes behavior in ways the vendor considers minor and your users consider substantial. It is reachable only over an API that the vendor can revoke. Its training data is opaque to you.
Your security program almost certainly has a policy about third-party software dependencies. That policy was probably written with npm packages, container base images, and SaaS integrations in mind. Very little of it has been extended to cover the foundation model — even though the model is, by any reasonable definition, a dependency that sits closer to the core of the product than any of those other things.
The questions your policy already asks
Walk through the questions your third-party risk process asks about a new SaaS vendor.
- Where does our data go when we send it to this vendor?
- Who at the vendor can access it?
- How long is it retained?
- What happens if the vendor is breached?
- What is the termination path if we need to exit?
- Is the vendor's compliance posture compatible with our own?
Now ask those questions about your model provider. For most organizations, the answers range from incomplete to unknown. Data retention policies for API traffic are often vague. Access controls at the provider are usually not described in customer-facing terms. Breach notification timelines exist but are seldom tested. Exit paths — to a different provider or to self-hosted models — are usually not rehearsed.
This is not a criticism of the providers. The top-tier foundation model vendors have invested heavily in enterprise-grade controls and produce the documentation to match. The criticism is of the customer-side risk process, which in many organizations has not asked the questions at all.
The update problem
Third-party dependencies change. Your dependency management process knows how to handle that for traditional software — pinned versions, change review, regression testing in a staging environment, controlled rollout.
Model updates break this pattern. A minor version change at the provider can materially alter the behavior of a deployed system, and you may not be notified in a form that your change management process recognizes. The system's outputs drift. Users complain. Your incident response turns up "the model was updated" as the root cause, after you have spent hours looking for a regression in code that did not change.
Addressing this means treating model version as explicit configuration, tracking it in your deployment manifest, pinning it where the provider allows pinning, and regression-testing against the version change the same way you regression-test a library upgrade. It also means maintaining evaluation suites that can detect behavioral drift, because unlike a library, a model can drift even at a pinned version due to infrastructure changes at the provider.
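As a minimal sketch of that discipline, the snippet below treats the model version as explicit, pinned configuration and runs a small drift check against a stored baseline. All names here (`ModelConfig`, `drift_check`, the version string) are illustrative assumptions, not any particular provider's API:

```python
from dataclasses import dataclass

# Hypothetical deployment config: the model version is explicit and
# pinned, not an implicit "latest" resolved at request time.
@dataclass(frozen=True)
class ModelConfig:
    provider: str
    model: str          # pinned version string, e.g. a dated snapshot
    temperature: float

PINNED = ModelConfig(
    provider="example-vendor",
    model="example-model-2025-01-15",  # assumed version name
    temperature=0.0,
)

def drift_check(eval_cases, call_model, baseline, threshold=0.95):
    """Compare current outputs against a stored baseline.

    eval_cases: list of prompts; call_model: fn(prompt) -> str;
    baseline: dict mapping prompt -> expected output.
    Returns (pass_rate, ok). A drop below `threshold` signals
    behavioral drift -- worth checking even at a pinned version,
    since provider-side infrastructure changes can still move outputs.
    """
    matches = sum(1 for p in eval_cases if call_model(p) == baseline[p])
    rate = matches / len(eval_cases)
    return rate, rate >= threshold
```

Exact-match comparison is the simplest possible scorer; real evaluation suites typically use fuzzier similarity metrics, but the operational shape is the same: a versioned baseline, a scheduled check, and an alert when the pass rate drops.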
Very few teams have this discipline in place. The teams still operating AI products in regulated environments two years from now will have had to build it.
The fallback question
The final question to ask of any model dependency is the one your disaster-recovery planning would ask of any other critical vendor: if this provider is unavailable tomorrow, what happens to my product?
"We stop working" is an acceptable answer only if the product is not critical. For anything that matters, the answer needs to include a fallback path — a second provider integrated and tested, a self-hosted model qualified and warm, or a degraded mode the product can run in while the dependency is restored.
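The fallback path described above can be sketched as a simple routing function: try the primary provider, fall through to a qualified secondary, and drop to a degraded mode only when both fail. The function names and error handling here are illustrative assumptions, not a production-ready client:

```python
def answer_with_fallback(prompt, primary, secondary, degraded):
    """Route a request through a fallback chain.

    primary, secondary: callables (prompt -> str) wrapping two
    independently integrated providers; degraded: a last-resort
    mode (e.g. a cached answer or an honest "reduced service"
    response) that keeps the product up while the dependency
    is restored.
    """
    for call in (primary, secondary):
        try:
            return call(prompt)
        except Exception:
            # Assumed simplification: treat any provider error as an
            # outage. Real code would distinguish retryable errors,
            # rate limits, and hard failures.
            continue
    return degraded(prompt)
```

The point of the sketch is not the five lines of control flow; it is that the secondary path exists, is wired in, and is exercised regularly. A fallback that has never served production traffic is a fallback in name only.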
None of this is glamorous work. It is the same supply-chain discipline that has been standard for decades in enterprise software, applied to a new dependency class. The teams that treat the model as a first-class dependency will handle the next provider outage as an operational event. The teams that treat it as infrastructure magic will handle it as an existential one.