| Version | Date | Authors | Contributors |
|---|---|---|---|
| 1.0 | 2026-01-27 | Nick Hummel | Eric Eilertson, B Keen, Rob Wood |
These requirements are written to be practical and understandable, rather than to be definitive specifications. The expectation is that the vendor works with these during the design of the product and does their best to comply with their spirit, rather than treating this as a checkbox exercise after the fact. These security requirements are to be regarded as inherent to the product and vendor processes; compliance with them is not an add-on feature.
Not all requirements apply to all products. Check the product types page to see which requirements apply in a specific case.
Debug features must fulfill the following:
- Offer no way to extract or leverage cryptographic or otherwise security-sensitive assets that persist across resets, such as UDSs (Unique Device Secrets) or confidential OTP bits.
- Be disabled by default and only possible to enable on reset.
- Have their enablement measured as part of firmware measurement.
This only applies to debug interfaces that can potentially be used to access confidential data or perform privileged actions. For example, JTAG has to fulfill these points, whereas a pure logging facility specifically made to not show any data of the workloads running on the device or other confidential data would not have to fulfill these points.
Functionality must be implemented to allow isolating workloads/tenants from each other during the normal production use of the platform, more specifically:
- Shared resources, such as NICs, GPUs and other accelerators, hardware performance counters and memory, must be configurable to not leak data between concurrent but distinct users.
- Shared resources must also be configurable to avoid giving one user the power to make them unusable by the other users, e.g. by taking up all CPU time or simply switching them off.
The product must offer a way to easily and completely erase all user data between uses. This is to ensure no data leaks between workloads/tenants and attackers cannot store malicious data persistently.
The product must offer a way to securely decommission the device easily (i.e. without having to physically grind it into dust). This must include erasing all user data and all other confidential data, such as OTP secrets, private keys and UDSs. Sanitization must adhere to NIST SP 800-88.
The design and source code of the product must be reviewed by an approved third party lab under OCP S.A.F.E. The short-form report must be published. Products do not have to be vulnerability-free to be used. Vulnerabilities are reviewed, and whether a product is acceptable to use is decided case by case, based on the impact a vulnerability has in a given use-case and how it can be mitigated in the wider infrastructure. However, it is strongly recommended to get reviewers involved as early as possible, as this makes mitigating issues much easier.
Documented business processes must be implemented that cover:
- Finding out about public vulnerabilities and vendor-known vulnerabilities of third-party components included in the product.
- Communicating all vulnerabilities and security incidents known to affect the product to the customer according to a pre-agreed timeline.
- Remediating vulnerabilities and security incidents.
Files that are confidential or need integrity protection must be transmitted in a suitably encrypted way, rather than, for example, by plaintext email or unprotected FTP. A simple way to accomplish this is to upload to a secure cloud drive of the respective customer. This applies to communication between the vendor and the customer, as well as between the vendor and their suppliers.
There must be hardware-enforced limits on temperature, clock, power, and any other relevant physical parameters that protect the physical integrity of the hardware. If there were only software-enforced limits, an attacker that successfully gained access to the required software privilege level could physically destroy the platform, thereby leaving no recovery path.
Physical interfaces that are not necessary for the production operation of the platform must be removed for production builds. All remaining interfaces, for debug, management, manufacturing, or other purposes, must be protected against unauthorized use.
These requirements apply to all software, including firmware.
All software must have the following automated release blocking tests:
- A thorough set of unit and integration tests
- A real hands-on test of a normal use-case on real hardware, without simulation, emulation or virtualization
SBOMs (Software Bills of Materials) must be delivered with all production releases of software. These are needed to monitor for security vulnerabilities and to quickly identify affected products when a new vulnerability becomes publicly known.
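For illustration, a minimal SBOM in the CycloneDX JSON format might look like the following (the component shown is a made-up example, not a required dependency):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "mbedtls",
      "version": "3.5.2"
    }
  ]
}
```

SPDX is an equally valid format; what matters is that every third-party component and its exact version is machine-readably listed for each release.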
All software must be configured to enable the following where applicable:
- Address Space Layout Randomization (ASLR)
- Stack overflow protection (e.g. Canaries)
- Kernel Address Space Layout Randomization (KASLR)
- Kernel heap overflow protection
- Non-executable memory enforcement (aka NX, W^X, XD, XI, XN bit) via MMU, MPU, or IOMMU as appropriate
Appendix 1 lists build options. These are only recommendations, as what is useful and reasonable can vary widely from product to product.
All software must be updatable without physically accessing the product. This ensures that vulnerabilities can be patched at scale. This includes firmware, except the first immutable stage.
All external open source or third party dependencies must be kept up to date for every build for a production release. It is not sufficient to only update when a vulnerability becomes known in the version that is in use, because there are many fewer eyes on older versions looking for vulnerabilities.
Functionality that provides privileged access, for example server remote management web interfaces, must be properly access controlled and connection to them must be encrypted, e.g. using TLS. Access control should avoid password-based authentication. All asymmetric cryptography must use PQC algorithms.
These requirements apply to firmware in addition to the software requirements. Firmware is all software that is not intended to be replaced and managed by the end user of the product.
The vendor must cryptographically sign all production firmware releases with a PQC signature scheme. The first, immutable stage need not be signed. The signatures must be verified before the firmware is executed and before a firmware update is applied.
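The verify-before-execute/update requirement can be sketched as follows. This is a minimal illustration of the control flow only: the `verify` callable stands in for a real PQC signature verifier (e.g. ML-DSA), and `flash_write` is a hypothetical flash routine; neither name is prescribed by this document.

```python
import hashlib
from typing import Callable

def apply_update(image: bytes, signature: bytes,
                 verify: Callable[[bytes, bytes], bool],
                 flash_write: Callable[[bytes], None]) -> bool:
    """Flash the image only if its signature verifies first."""
    digest = hashlib.sha384(image).digest()
    if not verify(digest, signature):
        return False  # reject: never flash an unverified image
    flash_write(image)
    return True

# Demonstration with a stand-in verifier that accepts one known digest.
good_image = b"firmware-v2"
good_digest = hashlib.sha384(good_image).digest()
flashed = []
accepted = apply_update(good_image, b"sig",
                        lambda d, s: d == good_digest, flashed.append)
rejected = apply_update(b"tampered", b"sig",
                        lambda d, s: d == good_digest, flashed.append)
```

The key property is that the write path is unreachable unless verification succeeds; the same gate must sit in front of execution at boot.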
The OCP Hardware Secure Boot document provides further details on how firmware signature verification on boot should work. ROM patching and other secure boot bypass mechanisms must be permanently disabled for production systems.
It is preferable that dual signing is supported, so that both the vendor and the customer can sign the firmware and both signatures are verified before executing/updating.
It would also be preferable if the vendor provides a signing transparency log.
Firmware must be measured by a RoT. Measurement must include everything that affects the security of the product, such as configuration, mutable code and enablement of debug/recovery modes. Measurement must be redone when firmware is reloaded; otherwise, malicious code loaded after a partial reset might stay undetected.
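A measurement register is typically maintained PCR-style: each measured item is hashed into the running value, so the final value commits to every component and its order. A minimal sketch (the event names are illustrative, not prescribed):

```python
import hashlib

def extend(register: bytes, event: bytes) -> bytes:
    """PCR-style extend: new = H(old || H(event)).

    Re-running all extends after a firmware reload reproduces the same
    value only if every measured component is unchanged.
    """
    return hashlib.sha384(register + hashlib.sha384(event).digest()).digest()

# Measure boot stages plus security-relevant state such as debug enablement.
reg = bytes(48)  # register starts zeroed
for event in [b"stage1-code", b"config-blob", b"debug=disabled"]:
    reg = extend(reg, event)
```

Because debug enablement is one of the measured events, enabling debug changes the final measurement and is therefore visible to a verifier at attestation time.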
Measurements must be provided on request via SPDM. At least SPDM 1.5 (required for PQC support) must be supported, including the following commands:
- Get Version
- Negotiate Algorithms
- Get Capabilities
- Get Digests
- Get Certificate
- Challenge
- Get Measurements (Respond If Ready) for attestation
- Get CSR
- Set Certificate
The OCP Attestation of System Components document provides further details on how this should be implemented.
A mechanism must be implemented that ensures that an older version of a firmware cannot be written over a newer version and successfully loaded and executed by the device. Otherwise a malicious actor could execute a downgrade attack, in which the actor flashes an old firmware version with a known vulnerability and thereby exploits a vulnerability that had already been fixed.
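One common way to meet this requirement is a security version number (SVN) checked against a monotonic minimum held in fuses or other tamper-resistant storage. The sketch below only models the ratcheting logic; the class and method names are illustrative:

```python
class AntiRollback:
    """Sketch of anti-rollback via a security version number (SVN).

    `min_svn` models a monotonically increasing value held in fuses or
    other tamper-resistant storage; real hardware enforces that it can
    only ever grow.
    """
    def __init__(self, min_svn: int = 0):
        self.min_svn = min_svn

    def accept(self, image_svn: int) -> bool:
        if image_svn < self.min_svn:
            return False          # downgrade attempt: refuse to boot
        self.min_svn = image_svn  # ratchet forward on accept
        return True

rot = AntiRollback()
booted_v3 = rot.accept(3)   # current release boots
booted_v2 = rot.accept(2)   # older, vulnerable release
booted_v5 = rot.accept(5)   # newer release ratchets the counter up
```

Note that the SVN need not equal the marketing version: vendors typically bump it only for security-relevant fixes, so benign downgrades within the same SVN remain possible.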
Firmware must be updatable only prior to the completion of the boot process and must not be writable afterwards. This ensures that an attacker cannot establish a permanent foothold by embedding malicious code in firmware.
For each piece of firmware there must be a method for it to be recovered online (i.e. without physical access) when it is corrupted. Firmware that cannot be recovered or can only be recovered offline opens up the risk of attackers bricking entire fleets at scale, with no fast way to recover.
Since flash memory degrades over time, devices should provide a recovery path if the mutable storage is completely corrupted. If that is not feasible it is acceptable to instead guarantee that the flash memory data retention is at least 6 months without power.
These requirements apply to Operating Systems, as well as firmware.
Operating systems and firmware must remove or persistently disable all APIs, background/system services, kernel modules and other interfaces that are not needed for product functions.
This includes removing unnecessary SMM (System Management Mode) functions.
There must not be any factory default passwords that are identical across multiple devices. If such passwords are necessary, they must be generated in a cryptographically safe manner for each individual device.
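A minimal sketch of cryptographically safe per-device password generation, using Python's `secrets` CSPRNG (the alphabet and length are example choices, not requirements):

```python
import secrets
import string

def per_device_password(length: int = 16) -> str:
    """Generate an independent factory password for one device.

    Uses a CSPRNG (`secrets`) so that no device's password can be
    predicted from another's; each device gets its own value at
    provisioning time.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw_a = per_device_password()
pw_b = per_device_password()
```

The essential properties are that the source of randomness is cryptographic (not, e.g., a hash of the serial number, which an attacker could reproduce) and that generation happens per device.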
It must be possible to disable all configuration menus, like boot and recovery menus, for the production use of the platform. This is to prevent attackers with physical access from making changes.
Proprietary cryptographic algorithms, or algorithms that have not been approved by a national or international standards body, are not considered to provide any security or confidentiality protections to the devices' owner or user. Furthermore, any proprietary implementation of a cryptographic algorithm must be validated by an OCP S.A.F.E. review provider. The provenance (whether third party IP, open source, or in-house development) of any cryptographic software, firmware, or hardware in the product must be transparent.
FIPS 140-3 validation is a complex topic, and it is out of scope of these requirements to provide an answer on whether it will be required for a particular product. An expert needs to assess this case by case.
Entropy sources must comply with NIST SP 800-90B to ensure they produce sufficiently random numbers.
Wherever asymmetric cryptography is required, the cryptographic algorithms and protocols built on them must meet CNSA 2.0 requirements to ensure post-quantum security.
All known speculative execution vulnerabilities must be mitigated. Everything described in BP001 also applies; this requirement is just intended to specifically highlight speculative execution vulnerabilities.
It must be possible for the customer to update CPU microcode in collaboration with only the CPU vendor, but without the involvement of any third-party, such as the integrator or mainboard vendor. This is because additional parties add latency and might even go out of business, which would leave customers with no way to patch security vulnerabilities.
If a CPU has non-volatile memory that is exposed via CPU pins, e.g. Intel Xeon's PIROM, then those CPU pins must remain disconnected. If it is not feasible to leave them disconnected, the writable part of the data must be cleared on every boot.
These requirements apply to system memory like DDR.
Modern memory DIMMs contain an SPD (Serial Presence Detect) chip, which holds information the platform needs to use the DIMM, such as timings used for communication. The chip's entire user-writable memory must either be cleared on boot or write-protected. If a malicious actor were to overwrite this information, they could make the DIMM unusable and thereby prevent the platform from booting.
Depending on the DDR version, the SPD memory is split into a different number of blocks that are used for different purposes. DDR5 has 16 blocks, which can all be write-protected.
Relying on vendors to deliver locked DIMMs has been unreliable in the past, so it is preferable to implement enabling of write-protection into a platform's boot process, rather than trusting that DIMMs are already locked.
System memory must be encrypted to protect data from being exfiltrated by a physical attacker. For Intel CPUs this is called Total Memory Encryption (TME), for AMD CPUs it is called Transparent Secure Memory Encryption (TSME).
PCIe devices can directly access the system memory of a platform. If a device is compromised, this could lead to a compromise of the entire platform. To prevent this, all devices must be connected via an IOMMU, and the IOMMU must be set to enabled mode (as opposed to passthrough mode). This ensures that devices can only access the memory that they are supposed to access.
PCIe links must be encrypted and integrity protected, if the platform is deployed to a third-party data center or wants to support confidential compute. This guards against an interposer gaining access to confidential data by intercepting the connection. The technologies used to accomplish this are IDE (Integrity and Data Encryption) and TDISP (Trusted Execution Environment Device Interface Security Protocol).
PCIe devices must sanitize themselves on FLRs (Function Level Resets). Sanitization means all data, apart from persistent configuration, must be erased. This ensures that the platform can be sanitized between workloads.
RoTs must implement protections against side-channel analysis as well as fault injection. RoTs are the most security-critical component of a platform; they contain secret data on which the security of the remaining platform is built. Side-channel analysis and fault injection could be used to extract this data or otherwise bypass the RoT's security guarantees.
These requirements apply to entire platforms as a whole, rather than specific components. A whole server or network switch are examples of platforms.
Platforms must have dedicated RoTs acting as the ultimate trust anchor for secure boot, measured boot and firmware updates. RoTs are especially hardened for security, so using such a device as the ultimate trust anchor is more secure than adding this functionality to a more complex component, such as a BMC.
It is preferable for this RoT to be Caliptra.
It must be reasonably difficult (require special equipment and a considerable amount of time) to exploit the platform (exfiltrate confidential data, compel the platform to perform privileged actions or execute arbitrary code) from the parts of the platform that are physically accessible during its normal operation. For rack servers that is the front panel. Otherwise it would be too simple for an attacker inside the datacenter to cause considerable damage.
Storage drives must be TCG Opal compliant in order to provide standardized encryption-at-rest and sanitization functionality.
Storage drives must sanitize in accordance with NIST SP 800-88 and the OCP S.A.F.E. storage sanitization requirements.
Networking devices, such as NICs, switches and routers need to provide functionality to enable networks to protect the confidentiality and integrity of network traffic. This is usually IPsec or PSP.
This also applies to RDMA (Nvidia: GPUDirect) traffic, especially RoCE (RDMA over Converged Ethernet). Newer standards for RDMA protection include Ultra Ethernet for scale out networking and Ultra Accelerator Link for scale up networking.
GNU Compiler Collection (GCC)
- Stack Protection (-fstack-protector-strong)
- CET Control Flow Protection (-fcf-protection=full)
- Fortify Source (-D_FORTIFY_SOURCE=2) - requires -O2 or higher
- Non-Executable Stack (-z noexecstack)
- Address Space Layout Randomization (-fpie -Wl,-pie for executables, -fpic -shared for shared libraries)
- GOT Protection - BIND_NOW (-Wl,-z,relro -Wl,-z,now) for most distributions. For RHEL 6, also use -Wl,-z,defs to catch underlinking.
- Format String Warnings (-Wformat -Wformat-security -Werror=format-security)
- GCC 8 or later: Stack Clash Protection (-fstack-clash-protection)
LLVM / Clang
- SafeStack (-fsanitize=safe-stack)
- Stack Protection (-fstack-protector-strong or -fstack-protector-all)
- Control Flow Integrity (-flto -fsanitize=cfi)
- CET Control Flow Protection (-fcf-protection=full)
- Address Space Layout Randomization (-fPIE -pie for executables, -fPIC for shared libraries)
- GOT Protection (-Wl,-z,relro -Wl,-z,now)
- Format String Warnings (-Wformat -Wformat-security -Werror=format-security)
- Speculative Load Hardening (-mspeculative-load-hardening)
Microsoft Visual C++ (MSVC)
- Buffer Security Check (/GS) - Also known as "stack cookies".
- Control Stack Checking Calls (/Gs)
- Control Flow Guard (/guard:cf)
- CET Shadow Stack Compatible (/CETCOMPAT)
- Address Space Layout Randomization (ASLR) (/DYNAMICBASE)
- High-Entropy Virtual Addresses (/HIGHENTROPYVA) (64-bit only)
- Handle Large Addresses (/LARGEADDRESSAWARE) (64-bit only)
- Additional Security Checks (/sdl)
- Spectre Mitigations using the /Qspectre compiler flag
- All RPC and DCOM code must be compiled using the /ROBUST option when using the MIDL compiler. The minimum target level is NT61 (/TARGET NT61).
- Avoid suppressing specific warnings with /wdnnnn or using pragmas in code.
- Enable all warnings using the /Wall flag, and treat warnings as errors with /WX
In addition, for 32-bit Windows binaries:
- Data Execution Prevention (DEP) (/NXCOMPAT)
- Safe Exception Handlers (/SAFESEH)
In addition, for Windows kernel-mode driver components:
- Enforce deprecation of insecure CRT functions for drivers (/D_CRT_SECURE_FORCE_DEPRECATE)