To ensure that our most advanced systems do not become our Achilles' heel, securing artificial intelligence supply chains should be a focus for both users and policymakers.
Software supply chain incidents show how a single compromise can escalate into widespread disruption across critical systems.
In 2025, the 'Shai-Hulud' compromise – named after the giant sandworms that travel beneath the desert of Arrakis in the novel Dune – hit a public registry where developers publish and download reusable code components. By compromising just a handful of widely used packages, the attackers gained access to thousands of downstream projects, using routine updates to spread the attack at scale. It is akin to compromising a reputable supplier's stock of standard bolts: if the bolts are swapped for defective ones, every product built with them inherits the defect.
To date, up to 25,000 projects have been compromised as a result of this attack. This is what makes supply chain compromise a strategically attractive attack vector: it offers access and scale by targeting implicitly trusted dependencies rather than the protected system itself. As a result, the damage can be extensive before defenders even know where to look.
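To make that scaling effect concrete, here is a minimal sketch in Python using an entirely hypothetical dependency graph (none of the package names refer to the real incident): compromising one low-level package reaches everything that depends on it, directly or transitively.

```python
# Illustrative sketch only: a toy "who depends on whom" graph with hypothetical
# package names, showing why compromising a handful of widely used packages
# reaches many downstream projects through routine dependency resolution.
from collections import deque

# Hypothetical edges: package -> packages/projects that use it.
DEPENDENTS = {
    "tiny-utils":    ["build-tool", "web-framework"],
    "build-tool":    ["app-alpha", "app-beta", "ci-pipeline"],
    "web-framework": ["app-beta", "app-gamma"],
    "ci-pipeline":   ["app-delta"],
}

def blast_radius(compromised: set[str]) -> set[str]:
    """Return everything reachable downstream of the compromised packages."""
    affected, queue = set(), deque(compromised)
    while queue:
        pkg = queue.popleft()
        for dependent in DEPENDENTS.get(pkg, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

if __name__ == "__main__":
    # One compromised low-level package reaches most of the ecosystem above it.
    print(sorted(blast_radius({"tiny-utils"})))
    # ['app-alpha', 'app-beta', 'app-delta', 'app-gamma', 'build-tool', 'ci-pipeline', 'web-framework']
```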
When are the risks highest?
AI is built and deployed according to the same dependency-heavy paradigm as software, only with higher stakes. When an organization adopts AI to improve its efficiency, for example, it typically purchases an AI-enabled product with limited or no visibility into what lies underneath (for example, pre-training, datasets, or model weights) or into the services used to update and host the models. As AI components become easier to reuse and harder to inspect, the gap between trust and security will continue to widen.
This means that any vulnerabilities introduced through a new artificial intelligence tool can filter into deployed systems and shape their behavior, security, or reliability long after the initial launch.
For users, this could translate into data exposure, service disruption, or unacceptable failures of critical systems. For national security, the implications are more acute: as AI is integrated into defense support functions, critical national infrastructure, and decision-making processes, compromising the supply chain becomes a potential avenue for systemic disruption and espionage. Security must be built in from the start.
Hardware, software and everything in between
AI has driven innovation across many industries. It has helped design products faster, personalize services, and make more informed decisions at scale. According to McKinsey, 88% of organizations surveyed have adopted AI in some way, while 79% report using Generative AI.
Beneath this surge in AI-driven progress lies a distributed foundation. AI supply chains add further dimensions of complexity beyond traditional software, including high-performance AI chips, cloud infrastructure, open-source libraries, proprietary datasets, and pre-trained models. AI agent systems that can dynamically plan, act, and integrate with external tools deepen these dependencies and make them less visible. A static pipeline becomes a living ecosystem, introducing new complexity and vulnerabilities that are only just beginning to surface in discussions about AI security, governance, and deployment.
Artificial intelligence capability currently relies on supply chains that are both centralized and strategically exposed. At the hardware layer, access to advanced computing depends on a small number of manufacturing and packaging centers that can become bottlenecks. The issue is not so much whether a single supplier is “reliable,” but that a lack of supplier diversity turns any disruption, whether a geopolitical crisis or a supply shock, into a systemic risk that can send entire economies into a tailspin. The automotive industry experienced exactly this in November 2025, when export controls were placed on Nexperia chips from China following the Dutch government's move to take control of the company's European facility.
Software and model layers create parallel exposure. AI development relies on shared libraries, tools, and third-party model artifacts distributed through large-scale public repositories. As of February 2026, the Hugging Face platform alone hosts over 2.5 million public models, illustrating the scale of today’s supply chain and its reliance on third-party artifacts. This accelerates innovation, but it also means that compromised components can be reused and propagated with limited visibility to users.
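One basic control that follows from this reliance on third-party artifacts, sketched below with hypothetical file names and a placeholder digest, is to pin and verify a model artifact against a known-good hash before loading it, rather than trusting whatever a repository happens to serve at download time.

```python
# Illustrative sketch only: verifying a downloaded third-party model artifact
# against a pinned digest before use. The file path and expected hash are
# hypothetical placeholders, not real published values.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact does not match the pinned digest."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )

if __name__ == "__main__":
    # Hypothetical example: a model file obtained from a public repository,
    # checked against a digest recorded when the model was originally vetted.
    verify_artifact(
        Path("models/example-model.safetensors"),
        expected_sha256="0123456789abcdef" * 4,  # placeholder digest
    )
```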
The rise of agent-based AI compounds the effect, with 62% of organizations reporting that they are experimenting with AI agents.
Gray areas – where responsibility is hazy
Much of the artificial intelligence supply chain is located in gray areas where responsibility is unclear and oversight is challenging.
Artificial intelligence systems assembled from open source tools, cloud services, and third-party integrations can span multiple jurisdictions. This creates a practical barrier to action.
End users may lack contractual influence or technical visibility upstream, and suppliers reasonably argue that they are just a link in a longer chain.
Regulators, for their part, face the problem of setting enforceable requirements across borders and business models. As a result, trust is often assumed rather than verified, because the governance and incentives needed to verify that trust remain weak.
Policy moves forward, but practice lags behind
AI resilience will not come from better incentives or more robust models alone. Instead, policy must confront the fragility of the supply chains that support AI systems.
The good news is that policymakers are not starting from scratch. Securing software supply chains offers several key lessons for developing policy levers. Following significant disruptions to the software supply chain, governments have pushed for greater visibility into the software organizations sell, buy, and use. The Software Bill of Materials (SBOM) is an inventory of all the components used in a piece of software, and its importance was recognized by the US government in 2021. It is built on the simple premise that if buyers can see the components within a product, they can identify exposures more quickly and manage vulnerabilities earlier. In the UK, the closest equivalent approach is guidance-led: the NCSC’s Supply Chain Security Principles set out how organisations should establish oversight and control over suppliers, while the UK government’s Software Security Code of Practice aims to raise baseline expectations for vendors and their customers.
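The premise can be illustrated with a deliberately simplified sketch: given a component inventory and a feed of known-compromised versions (both hypothetical here, and far simpler than real SBOM formats such as SPDX or CycloneDX), exposure checking becomes a straightforward lookup, which is exactly the visibility an SBOM is meant to provide.

```python
# Illustrative sketch only: checking a minimal, SBOM-like component inventory
# against known-compromised package versions. Names, versions, and advisories
# below are hypothetical.

# A toy inventory of what a product contains (name, version).
SBOM_COMPONENTS = [
    {"name": "left-padding-lib", "version": "1.4.2"},
    {"name": "fast-json-utils", "version": "3.0.1"},
    {"name": "color-helpers", "version": "2.2.0"},
]

# A toy advisory feed of known-compromised versions.
KNOWN_COMPROMISED = {
    ("fast-json-utils", "3.0.1"),
    ("color-helpers", "9.9.9"),
}

def flag_exposures(components, advisories):
    """Return the components whose exact name/version appears in the advisory set."""
    return [c for c in components if (c["name"], c["version"]) in advisories]

if __name__ == "__main__":
    for component in flag_exposures(SBOM_COMPONENTS, KNOWN_COMPROMISED):
        print(f"Exposed component: {component['name']} {component['version']}")
    # Exposed component: fast-json-utils 3.0.1
```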
AI complicates this picture by stretching the available tools to their limits. Dependencies that cannot be inspected and integrations that are updated continuously sit beyond any single developer's control, making such security mechanisms difficult to apply in full.
The choice, therefore, is not a binary one between more regulation and more voluntary standards. The priority is to align requirements with the risks at stake. In lower-risk use cases, voluntary standards can improve basic cyber hygiene. In defense and parts of critical national infrastructure, by contrast, procurement and regulation already impose baseline cyber requirements.
The problem is that AI supply chains often extend beyond these boundaries into managed services, third-party models and upstream data choices that existing requirements only partially capture. As AI is adopted in these environments, these mechanisms will need to extend further and further upstream. Otherwise, the UK will rely on AI components that lie outside existing accountability boundaries, leaving trust to be assumed rather than demonstrated.
This concern is not merely theoretical, as demonstrated by OWASP, an industry benchmark for application security, which highlights AI supply chains as a primary risk: both its “Top 10 for LLM” and “Top 10 for Agent Applications” lists include supply chain vulnerabilities.
It remains to be seen what this means for a highly interconnected network of providers and users. The goal is not to reinvent supply chain security for AI, but to sharpen the tools we already have against today's threats, especially where the risks are highest.
Melina Beykou, PhD
The Geopost
