CISA and G7 Publish Minimum Elements for AI Software Bill of Materials

CISA and G7 partners release minimum elements for AI SBOMs, setting the first multilateral standard for AI supply-chain transparency in critical infrastructure and OT environments.

CISA and its G7 partners have released joint guidance defining minimum elements for a Software Bill of Materials (SBOM) applied to artificial intelligence systems, marking the first multilateral consensus framework for AI supply-chain transparency in operational technology (OT) and critical infrastructure environments.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA), alongside its G7 partners (Canada, France, Germany, Italy, Japan, the United Kingdom, and the European Union), published new joint guidance aimed at strengthening transparency and cybersecurity across AI supply chains. Titled "Software Bill of Materials for AI: Minimum Elements," the document was developed by the G7 Cybersecurity Working Group to help public- and private-sector organizations identify the core information an AI-focused SBOM should contain. While neither exhaustive nor mandatory, the supplemental minimum elements reflect G7 expert consensus and will expand over time to keep pace with rapid AI advancement, CISA stated.

Background

The publication arrives amid accelerating AI deployment across industrial control systems (ICS) and OT networks. For critical infrastructure owners and operators, AI can increase efficiency and productivity, enhance decision-making, and reduce costs. However, integrating AI into OT environments that manage essential public services introduces significant risks, such as OT process models drifting over time or safety-process bypasses, that must be carefully managed to ensure availability and reliability.

On December 3, 2025, CISA, the NSA, the FBI, and several international cyber authorities released Principles for the Secure Integration of Artificial Intelligence in Operational Technology, a joint framework aimed at helping critical infrastructure operators deploy AI safely. That earlier document was co-authored by CISA and Australia's ASD ACSC, with contributions from the NSA's AI Security Center, the FBI, the Canadian Centre for Cyber Security, Germany's Federal Office for Information Security (BSI), the Netherlands' NCSC-NL, New Zealand's NCSC-NZ, and the UK's NCSC-UK. The newly released SBOM guidance is a direct follow-on, translating those principles into structured data requirements for AI components.

The current document builds on the shared vision for SBOM for AI introduced by the G7 Cybersecurity Working Group in June 2025 and aims to improve transparency and cybersecurity along the AI supply chain.

Details

The guidance extends traditional SBOM concepts into AI by calling for documentation of models, datasets, software components, providers, licenses, and other dependencies. The G7's SBOM for AI clusters framework is organized into seven core clusters: Metadata, Models, Dataset Properties, System Level Properties, Key Performance Indicators, Security Properties, and Infrastructure. Where applicable, the framework also includes a link to a Hardware Bill of Materials (HBOM) to account for specialized AI hardware.
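To make the seven-cluster structure concrete, the sketch below shows what an AI SBOM record organized along those clusters might look like. This is a minimal illustration only: the cluster names follow the G7 framework, but every field inside each cluster is a hypothetical placeholder, not an official minimum element, and the format tag is invented.

```python
import json

# Hypothetical AI SBOM record keyed by the G7's seven clusters.
# All inner field names and values are illustrative assumptions.
ai_sbom = {
    "metadata": {
        "sbom_author": "Example Corp Security Team",
        "timestamp": "2025-12-10T00:00:00Z",
        "sbom_format": "example-ai-sbom/0.1",  # placeholder format tag
    },
    "models": [{
        "name": "anomaly-detector",
        "version": "2.3.1",
        "supplier": "Example AI Vendor",
        "license": "proprietary",
    }],
    "dataset_properties": [{
        "name": "plant-telemetry-2024",
        "source": "internal historian export",
        "license": "internal-use-only",
    }],
    "system_level_properties": {
        "deployment": "on-premises, segmented OT network",
        "update_mechanism": "manual, operator-approved",
    },
    "key_performance_indicators": {
        "false_positive_rate": 0.02,
    },
    "security_properties": {
        "known_vulnerabilities": [],
        "model_signing": True,
    },
    "infrastructure": {
        "hbom_link": None,  # optional link to a Hardware Bill of Materials
    },
}

# Serialize for exchange with downstream consumers.
sbom_json = json.dumps(ai_sbom, indent=2)
```

In practice, existing SBOM formats such as SPDX and CycloneDX are extending their schemas to carry model and dataset fields like these, so a real implementation would map the clusters onto one of those standards rather than a bespoke JSON layout.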

The guidance carries specific implications for OT and ICS environments. Operators should limit AI connectivity through push-based or one-way architectures, proper network segmentation, careful cloud use, and strict control over vendor capabilities, including AI-specific SBOMs that document model dependencies, data sources, and update mechanisms, as well as the ability to disable risky features. The guidance encourages owners and operators to demand secure-by-design AI systems, integrate vendor oversight into procurement, and include AI model details within SBOMs, while setting clear contractual expectations around model transparency, data use, and update notifications, mirroring standard OT vendor risk management extended to AI.

Experts, however, flag implementation gaps. Allan Friedman, sometimes called the "godfather of SBOMs" and a senior technical adviser at the Institute for Security and Technology, described the guidance as a good document that is probably mislabeled, since it explicitly states that the elements it identifies are not mandatory. Industry analysts note that this voluntary nature may slow uptake: adoption could prove slow and fragmented, much like the early days of software SBOMs, and without a mandate, many AI vendors may ignore the framework until customer pressure mounts.

Tooling readiness also presents a practical barrier. Generating an AI SBOM for a typical machine learning pipeline is more complex than running a software composition analysis tool, requiring instrumentation of data pipelines, capture of training environment snapshots, and hashing of multi-gigabyte weight files-all without a clear industry standard. Security analysts also note that an AI SBOM may show what a vendor says is inside an AI system but does not prove whether the system can be trusted for its intended enterprise use.
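One of those mechanics, hashing multi-gigabyte weight files for inclusion in an SBOM, can at least be done without loading the file into memory. The sketch below is a generic chunked-hashing helper using Python's standard `hashlib`; the function name, chunk size, and digest prefix are this author's choices, not part of any SBOM standard.

```python
import hashlib

def hash_weight_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return a 'sha256:<hex>' digest of a model weight file.

    Reads the file in 1 MiB chunks so that multi-gigabyte weight
    files can be hashed with constant memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel yields chunks until read() returns b"".
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest()
```

A digest like this lets a consumer verify that the weights they deployed match the artifact the vendor documented, though, as the analysts quoted above note, it says nothing about whether the model itself is trustworthy.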

Legacy OT systems compound the challenge. AI can introduce new complexities and vulnerabilities in industrial environments, including latency constraints, cloud-based SCADA risks, and compatibility issues with older systems.

Outlook

According to Friedman, next steps could include mapping the guidance to current implementations and aligning it with European Union and G7 government policies to minimize conflicts. The guidance carries significant regulatory and contractual implications for critical infrastructure operators. As AI systems become more deeply embedded in OT environments, the regulatory landscape will increasingly hinge on how operational data is governed, how vendor responsibilities are structured, and how liability is allocated across complex technical ecosystems. Supply chain accountability, explicit data usage policies, and documentation of model dependencies are likely to become regulatory requirements, holding both operators and vendors responsible for AI security and functional safety.