AI as Supply Chain Risk: What SOC 2 Auditors Need to Know Now


In our SOC 2 engagements, we’re seeing AI surface as a supply chain risk before the profession has issued clear guidance on how to handle it. With no formal direction yet from the AICPA, organizations are navigating this largely on their own, and auditors are being asked to evaluate risks that don’t fit neatly into existing frameworks. What that looks like in practice became clear after recent events.

In March 2026, the Department of Defense designated Anthropic, the company behind the AI model Claude, as a supply chain risk. According to CBS News, this was the first time the federal government applied this designation to an American company. The practical impact was immediate: government contractors using Claude in their operations had to evaluate whether they could continue doing so.

Now imagine your organization uses Claude to support supply chain processes, and you deliver services to the federal government. Overnight, a tool embedded in your daily operations becomes a compliance problem. Not because the tool broke. Not because it had a security vulnerability. Because a regulatory decision changed the rules. That is what AI supply chain risk looks like in practice.

What Is AI Supply Chain Risk?

AI supply chain risk refers to the operational and compliance risks your organization takes on when it depends on AI models or platforms to deliver its services. This is distinct from using AI to manage supply chain risk. In this context, AI is not the solution; it is the dependency being assessed.

 


A Quick Primer on SOC for Supply Chain

Before diving into AI, it helps to understand what makes a SOC for Supply Chain engagement different from a standard SOC 2 audit. A standard SOC 2 focuses on the Trust Services Criteria (security, availability, confidentiality, processing integrity, and privacy) for a specific service, like a SaaS platform or a managed service.

SOC for Supply Chain is broader. It looks at the IT systems, processes, and controls involved in the production and distribution of goods. Think manufacturing execution systems, logistics platforms, inventory management, and quality control workflows. As AI in supply chain risk management becomes more common, the audit implications grow more complex because AI introduces a type of dependency that does not behave like traditional software.

Why AI Is a Supply Chain Risk, Not Just a Technology Risk

Most organizations group AI tools with their other software applications under technology risk. That misses the point. AI is not just another piece of software sitting in your tech stack. When AI is embedded directly into supply chain operations, it makes decisions that affect inventory, cash flow, revenue, and customer commitments. That is operational risk, not technology risk.

Consider a manufacturer using AI to analyze inventory levels and automatically place purchase orders. If the model under-orders, the company loses revenue. If it over-orders, cash gets locked up in excess inventory. The AI is making decisions that directly affect the business.

Now layer on AI-specific risks. That model can be retrained, deprecated, or pulled from the market entirely, all without a change management process visible to you. The NIST AI Risk Management Framework (AI RMF 1.0) calls out model drift, training data provenance, and cascading upstream changes as key risk areas. If the model drifts, the financial impact compounds before anyone notices.

OpenAI shutting down Sora in March 2026, just 15 months after launch, is the extreme version of this risk. The tool did not get more expensive; it ceased to exist.

 


How AI Supply Chain Risk Maps to SOC 2 Trust Services Criteria

The good news is that the Trust Services Criteria (TSCs) already provide a framework for addressing AI supply chain risk. The criteria do not need to change. They just need to be applied with AI in mind. Here are the criteria that matter most.

Vendor Dependencies

CC9.2 (Risk Mitigation for Vendor Dependencies) requires organizations to assess and manage risks from vendor relationships. When an AI model provider powers part of your service delivery, that provider may qualify as a subservice organization rather than a simple vendor. The distinction matters: subservice organizations affect the services you deliver to your customers, and their controls become part of your SOC 2 story. We have seen organizations list their cloud provider and payment processor as subservice organizations but completely miss the AI model provider running their core analytics.

Changes in the External Environment

CC3.2 (Risk Assessment for External Changes) requires organizations to factor in changes happening outside their walls. AI model updates, new training data, and shifts in model behavior are all external changes that should flow into your risk assessment. The COSO principles underlying SOC 2 require you to identify and assess significant changes. An AI model retrained on new data counts. A federal supply chain risk designation certainly counts.

Access & Availability

CC6.1 (Logical Access Controls) applies to API connections with AI services. How is the AI model’s API authenticated? Are API keys rotated? Is data encrypted in transit? These are the same questions auditors ask about any third-party integration, applied to AI.
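The questions above are testable in practice. As a minimal sketch, the fragment below shows one way an application might load an AI provider's API key from the environment rather than source code, and refuse a key that is overdue for rotation. The environment variable names (`AI_API_KEY`, `AI_API_KEY_ISSUED`) and the 90-day rotation window are illustrative assumptions, not a specific provider's requirements.

```python
import os
from datetime import date

def auth_headers(max_key_age_days: int = 90) -> dict:
    """Build Bearer auth headers for an AI provider API, refusing keys overdue for rotation."""
    # AI_API_KEY / AI_API_KEY_ISSUED are hypothetical names for this sketch
    key = os.environ.get("AI_API_KEY", "")
    issued = date.fromisoformat(os.environ.get("AI_API_KEY_ISSUED", "1970-01-01"))
    if not key:
        raise RuntimeError("AI_API_KEY not set; store keys in a secrets manager, not in code")
    if (date.today() - issued).days > max_key_age_days:
        raise RuntimeError("API key is past its rotation window")
    return {"Authorization": f"Bearer {key}"}

# Illustration only: a freshly issued key passes the rotation check
os.environ["AI_API_KEY"] = "example-key"
os.environ["AI_API_KEY_ISSUED"] = date.today().isoformat()
print(auth_headers()["Authorization"])  # prints "Bearer example-key"
```

A control like this gives the auditor something concrete: evidence that keys live outside the codebase and that rotation is enforced, not just documented.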

A1.2 (Availability) matters when AI services are critical to your operations. If your application depends on an AI inference API, what happens during an outage? Is there a fallback? Is the AI provider’s uptime commitment in your system description?
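A fallback for a critical AI dependency can be as simple as a bounded retry followed by a deterministic degraded path. The sketch below simulates an inference API outage; the function names and the rule-based fallback are hypothetical stand-ins, not any vendor's API.

```python
import time

def call_inference_api(prompt: str) -> str:
    """Stand-in for a real AI inference call; raises to simulate an outage."""
    raise TimeoutError("inference API unavailable")

def rule_based_fallback(prompt: str) -> str:
    """Deterministic degraded path used when the AI dependency is down."""
    return "FALLBACK: queued for manual review"

def classify_with_fallback(prompt: str, retries: int = 2, backoff: float = 0.1) -> str:
    """Try the AI service a bounded number of times, then degrade gracefully."""
    for attempt in range(retries):
        try:
            return call_inference_api(prompt)
        except TimeoutError:
            time.sleep(backoff * (attempt + 1))  # simple linear backoff between attempts
    return rule_based_fallback(prompt)

# The simulated API is down, so the call degrades to the rule-based path
print(classify_with_fallback("categorize this purchase order"))
```

The point for the audit is not the code itself but the evidence it generates: a documented fallback path that has actually been exercised, not just described in a policy.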

 


The Black Box Problem & What Can Actually Be Audited

Here is the challenge: traditional SOC for Supply Chain audits examined controls in environments where decision-making logic could be traced and reproduced. When AI makes supply chain decisions, the logic is often a black box.

There is a useful historical parallel. After Enron, Sarbanes-Oxley established the principle that you can no longer audit around the black box. You have to audit the black box itself. With AI, we may have no choice but to audit around it again, because the reasoning process is genuinely unknowable in its complexity. As we have explored in our post on black box testing AI in SOC audits, processing integrity has to be assessed at every phase, and each phase carries its own risks.

So if the AI itself cannot be fully audited, what can be? In practice, the AI vendor typically gets carved out of scope. That shifts the conversation to what your organization controls and, increasingly, to the supply chain risk management technology your organization uses to monitor those dependencies. There are five areas where you can build real, testable controls.

  • AI Vendor Inventory: Document every AI tool in use: who owns it, what depends on it, and how critical it is. This is the foundation. You cannot manage risk you have not identified.
  • Dependency Mapping: For each AI tool, answer one question: What breaks if this tool disappears tomorrow? The Anthropic designation and the Sora shutdown are both real case studies. Organizations that had mapped their dependencies knew what was affected. Organizations that had not were guessing.
  • Contingency Testing: Has your organization documented and tested a fallback plan for each critical AI dependency? Not just documented it. Tested it. A plan that has never been run is a compliance artifact, not a control. Risk evaluation and mitigation processes should include AI-specific scenarios.
  • Outcome Monitoring and Outlier Response: Set baselines and alert thresholds for AI-driven processes. Effective AI supply chain risk detection depends less on understanding the model’s internal logic and more on whether your monitoring caught problems and your team responded. Show the auditor the deviation, the alert, and the response. AI tools for risk monitoring in supply chains, whether purpose-built platforms or internal dashboards, can support this process, but auditors will want to see evidence of human review, not just automated alerts. This moves the audit toward validating your oversight process, which is entirely auditable.
  • AI Platform Change Monitoring: Apply the same thinking you use for threat intelligence feeds to AI platforms: monitor for model updates, discontinued features, pricing changes, and policy shifts. A growing category of AI tools for risk monitoring in supply chains is being built with exactly this kind of vendor alerting in mind. The documentation this produces is exactly the evidence auditors look for.
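The outcome-monitoring control above can be sketched as a simple baseline-and-threshold check over an AI-driven metric. The metric, sample values, and z-score threshold here are illustrative assumptions; what matters is that deviations produce an alert a human can review.

```python
from statistics import mean, stdev

def check_outlier(history: list, new_value: float, z_threshold: float = 3.0) -> bool:
    """Flag a new AI-driven metric value that deviates from the historical baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return new_value != baseline  # no variation in history: any change is an outlier
    z = abs(new_value - baseline) / spread
    return z > z_threshold

# Daily purchase-order totals suggested by an AI reordering model (illustrative data)
history = [102.0, 98.0, 101.0, 99.0, 100.0]
print(check_outlier(history, 100.5))  # within baseline -> False
print(check_outlier(history, 250.0))  # large deviation -> True, alert for human review
```

A dashboard or purpose-built platform would do this at scale, but even a check this simple produces the deviation-alert-response trail the audit needs.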

These are not new control categories. They are extensions of existing SOC 2 controls and vendor management practices applied to a new type of dependency. If your organization already has a mature monitoring program for subservice organizations, you are not starting from scratch.

 


One More Complication: Shadow AI

As we discussed in our post on shadow AI and SOC 2, unauthorized AI adoption is already creating audit gaps. Shadow AI makes the supply chain risk problem worse because it introduces dependencies that do not appear in any vendor inventory, risk assessment, or system description.

In practice, this shows up mid-audit. We’ve seen situations where a risk assessment lists a company’s approved AI tools, but testing reveals employees have been routing confidential client data through consumer AI products not covered by any data processing agreement. The data has already left the organization. The vendor has no contractual obligation to protect it. And in many cases, the company’s own acceptable use policy explicitly prohibits it. The compliance exposure from a single undiscovered shadow AI dependency can be significant, especially where privacy criteria are in scope.

One practical detection signal: token usage. If your documented AI usage should consume a certain volume of tokens per month, but your billing shows three times that, something undocumented is running. That gap is your shadow AI indicator, and it is built into cost monitoring that most organizations already do.
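That token-gap check is easy to automate. A minimal sketch, assuming you already know your documented monthly token volume and can read billed usage from your provider's invoice or usage API; the tolerance value is an illustrative assumption:

```python
def shadow_ai_gap(expected_monthly_tokens: int, billed_tokens: int, tolerance: float = 0.25) -> bool:
    """Return True when billed token volume exceeds documented usage beyond a tolerance band."""
    allowed = expected_monthly_tokens * (1 + tolerance)
    return billed_tokens > allowed

# Documented usage: ~2M tokens/month; billing shows 6M -> investigate undocumented AI use
print(shadow_ai_gap(2_000_000, 6_000_000))  # True
print(shadow_ai_gap(2_000_000, 2_100_000))  # False, within tolerance
```

Run monthly against billing exports, this becomes a recurring control with its own evidence trail rather than a one-off observation.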

When employees adopt AI tools on their own, they are onboarding vendors outside the vendor management process. No data processing agreement. No security review. No contractual protections. Every shadow AI tool is an unassessed supply chain dependency.

AI Supply Chain Risk Frequently Asked Questions

These are some of the most common questions we get from clients about AI supply chain risk and SOC 2 engagements.

Is AI Considered a Supply Chain Risk?

Yes. When your organization depends on AI models, those models carry supply chain characteristics. Model updates, vendor pivots, regulatory designations, and API availability all create risk vectors that parallel traditional supply chain concerns.

What Is Fourth-Party Risk in the Context of AI?

Fourth-party risk arises when your vendor depends on another vendor’s services. In AI, this is common: your SaaS vendor may run on a foundation model from a separate provider. Changes at the model provider level cascade to your organization through a layer that is often invisible in standard vendor management.

Do SOC 2 Auditors Evaluate AI Vendors Directly?

No. SOC 2 auditors evaluate the controls your organization has in place to manage vendor relationships, including AI vendors. The audit looks at whether you have performed a proper risk assessment, due diligence, and ongoing monitoring of your AI dependencies.

Can You Audit an AI Black Box?

Not fully. That is why the audit focus shifts to what surrounds the black box: the vendor due diligence you performed, the monitoring controls tracking output quality, the contingency plans for disruption, and the evidence that your team catches and responds to anomalies. See our post on black box testing AI in SOC audits for a deeper look.

What AI Supply Chain Risk Means for Your Next SOC 2 Assessment

AI supply chain risk is here now. The Anthropic designation and the Sora shutdown are early signals of a pattern that will accelerate. The SOC 2 controls and risk assessment processes you already have can handle this. They just need to be pointed at a new category of dependency.

If you have questions about how AI in supply chain risk management applies to your next SOC 2 assessment, contact us. Our team has direct experience evaluating AI governance and supply chain risk in SOC 2 engagements.