Every PQC migration programme has a first step. Most organisations get it wrong.
The natural instinct is to start where the problem is visible: external-facing TLS, the public-key infrastructure supporting web services, VPN gateways, or whatever a recent security review flagged. Migrate the visible surface, demonstrate progress, move on. The problem is that RSA and elliptic-curve cryptography (ECC) are not confined to the visible surface. They are embedded in certificate chains, application signing workflows, database encryption layers, SSH authentication, JSON Web Token signatures, and legacy authentication systems that have not been touched in years. Without a complete cryptographic inventory, there is no way to know what you are migrating, in what order, or when you are done.
NIST's NCCoE PQC Migration Project, documented across the SP 1800-38 series, identifies the cryptographic inventory as the first required step in any PQC migration programme. Not algorithm selection, not hybrid scheme planning, not tooling procurement. The inventory. NIST IR 8547 (initial public draft, November 2024) reinforces this: migration without prior inventory produces rework, missed systems, and false assurance about completion.
What a Cryptographic Inventory Actually Is
A cryptographic inventory is a structured record of every cryptographic algorithm, key, certificate, protocol, and library in use across an organisation's estate, together with the systems and data assets that depend on them. It is not a penetration test. Penetration testing finds exploitable vulnerabilities in systems functioning today against current adversaries. A cryptographic inventory identifies systems that are operating correctly right now, and will continue to operate correctly until a cryptographically relevant quantum computer (CRQC) exists, but are vulnerable to that future capability. The threat model is different. The tooling is different. Treating them as the same activity produces gaps in both.
It is analogous to an ICT asset register, but purpose-built for cryptographic dependencies. If your organisation holds an asset register for hardware and software, a cryptographic inventory is the register for the cryptographic mechanisms those assets depend on.
Why Migration Without Inventory Fails
Walk through what happens in the absence of an inventory. An organisation identifies TLS on external services as the migration starting point: it is the most visible dependency, the easiest to demonstrate progress on. The TLS migration is completed. A subsequent internal audit finds that the organisation's VPN infrastructure is still running RSA-2048 key exchange. The code-signing chain for internal software distribution uses ECDSA. Three legacy authentication systems built a decade ago depend on RSA certificates that no one had included in the migration scope. A database encryption layer uses RSA for key wrapping. None of these were in scope because none were in the inventory.
Each discovery requires a separate migration workstream. Each workstream requires resourcing, testing, and change management. The project scope expands. I call this Crypto Paralysis: the point at which a PQC migration programme stalls under the weight of hidden RSA dependencies surfacing faster than workstreams can be opened to address them. It is not a failure of algorithm selection or vendor choice. It is a failure of sequence. Without the inventory, there is no sequence.
NIST's guidance distinguishes three broad migration phases explicitly: discovery and inventory; risk assessment and prioritisation; migration execution. The guidance is clear that attempting to skip to phase three without completing phases one and two produces unplanned re-migration, compliance gaps, and elevated costs. This is not theoretical caution. It is the documented outcome of organisations that did not start here.
What the Inventory Must Cover
NIST SP 1800-38A (NCCoE PQC Migration Project, 2024) identifies four primary dependency categories that a cryptographic inventory should address:
Network protocols. TLS/HTTPS sessions, VPN tunnels, SSH authentication, and any other network-layer protocol using RSA or ECC for key establishment. This is typically the most visible layer and the easiest starting point for automated discovery.
PKI and certificate management. The full certificate hierarchy, not just the 398-day leaf TLS certificates that your certificate management tool tracks. The root CAs, intermediate CAs, and code-signing certificates carrying validity periods of five to fifteen years. CA/Browser Forum Ballot SC31 (2020) capped TLS leaf certificates at 398 days, which is why those are renewed frequently enough to be visible in lifecycle management tools. The long-lived certificates above them are not managed with the same frequency and are often missed entirely.
Code signing and software supply chain integrity. Signing certificates used for internal software distribution, firmware updates, and application packaging. Their migration complexity is distinct from that of the confidentiality layer. ML-DSA (FIPS 204, August 2024) and SLH-DSA (FIPS 205, August 2024) are the NIST-standardised migration targets for RSA and ECDSA in signing contexts. Inventory must capture every signing workflow and its certificate dependencies before a signing migration can be planned.
Application-layer cryptography. Database encryption key management, JWT signing, S/MIME, PGP, and any application that implements its own cryptographic operations rather than relying on a platform or infrastructure layer. These dependencies are the hardest to discover with automated tooling because they may not be visible at the network layer at all.
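The four categories above give the inventory its reporting structure. A hedged sketch of how raw discovery output might be bucketed against them; the category labels paraphrase SP 1800-38A and the findings themselves are invented:

```python
from collections import Counter

# The four primary dependency categories, paraphrased from NIST SP 1800-38A
CATEGORIES = (
    "network protocols",
    "PKI and certificate management",
    "code signing and supply chain",
    "application-layer cryptography",
)

# Invented discovery output: (finding, category) pairs
findings = [
    ("TLS 1.2 RSA key exchange on dmz-lb-01", "network protocols"),
    ("SSH RSA host keys across build farm", "network protocols"),
    ("RSA-4096 intermediate CA, expires 2030", "PKI and certificate management"),
    ("ECDSA firmware-signing certificate", "code signing and supply chain"),
    ("RSA key wrapping in database encryption layer", "application-layer cryptography"),
    ("JWT RS256 signing in auth service", "application-layer cryptography"),
]

# Count findings per category to show coverage (or gaps) at a glance
summary = Counter(category for _, category in findings)
for category in CATEGORIES:
    print(f"{category}: {summary[category]}")
```

A zero count against any category in this kind of summary is itself a finding: it usually means discovery has not reached that layer, not that the layer is clean.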
OT environments require a separate workstream. Industrial control systems and SCADA systems often run cryptographic protocols that are not visible to standard IT discovery tools, carry maintenance windows that make live scanning operationally risky, and depend on vendor-specific update mechanisms. NIST SP 800-82 Rev. 3 (Guide to Operational Technology Security, 2023) addresses OT-specific security assessment methodology. For organisations with significant OT estates, treating OT as an extension of the IT cryptographic inventory will produce incomplete results.
The Certificate Lifetime Problem Is Larger Than It Looks
Certificate management tooling has improved significantly over the past five years. Most organisations now have reasonable visibility over their TLS leaf certificate estate. The 398-day cap means those certificates turn over frequently; the tooling works with that cadence.
The long-lived certificates are a different problem. An intermediate CA certificate issued in 2020 with a ten-year validity period expires in 2030. If it is RSA-based, it will still be in use (signing everything below it in the chain) when credible CRQC timelines begin to converge. Code-signing certificates routinely carry five-year validity periods. A root CA certificate may be valid until 2035 or beyond. These are not visible to certificate renewal tooling because they do not trigger renewal alerts. They sit in the hierarchy, functioning correctly, signing RSA-based leaf certificates that your migration has already replaced.
The inventory must traverse the full chain, from leaf to root, and catalogue the algorithm at every level. ETSI TS 119 312 (V1.4.1, 2023) addresses certificate and PKI lifecycle management in the quantum transition, including hybrid classical/PQC certificate schemes as an interim approach during the migration period. ETSI's guidance is relevant here because the migration path for long-lived certificates is not simple revocation and replacement. It requires planning for backward compatibility, certificate hierarchy restructuring, and the hybrid transition period.
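A minimal sketch of that traversal, assuming the chain has already been exported as (level, algorithm, expiry year) tuples rather than parsed live from certificate files. The example hierarchy, the horizon year, and the function name are all illustrative:

```python
# Algorithm families broken by a CRQC running Shor's algorithm
QUANTUM_VULNERABLE = ("RSA", "ECDSA", "ECDH", "DSA", "Ed25519")

def audit_chain(chain, horizon_year=2035):
    """Flag quantum-vulnerable certificates at every level of the chain.

    `chain` is a list of (level, algorithm, expiry_year) tuples, leaf to root.
    Returns (level, algorithm, expiry_year, valid_past_horizon) for each
    vulnerable certificate found.
    """
    flags = []
    for level, algorithm, expiry_year in chain:
        vulnerable = any(algorithm.startswith(fam) for fam in QUANTUM_VULNERABLE)
        if vulnerable:
            flags.append((level, algorithm, expiry_year, expiry_year >= horizon_year))
    return flags

# Invented hierarchy: a PQC leaf already migrated, classical CAs still above it
chain = [
    ("leaf", "ML-DSA-65", 2026),
    ("intermediate CA", "RSA-4096", 2030),
    ("root CA", "RSA-4096", 2036),
]
for level, algorithm, year, past_horizon in audit_chain(chain):
    marker = " (still valid at horizon)" if past_horizon else ""
    print(f"{level}: {algorithm}, expires {year}{marker}")
```

The example deliberately shows the failure mode described above: the leaf has been migrated, but both CAs above it remain RSA-based, and the root is still valid past the horizon year.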
The Mosca Inequality Tells You How Urgent Your Inventory Is
Mosca's inequality, published in IEEE Security & Privacy (Vol. 16, No. 5, 2018), provides a framework for assessing whether a specific data asset's confidentiality window is already at risk: if X + Y > Z, where X is the migration time, Y is the data's confidentiality requirement in years, and Z is the time until a CRQC is available, the data is exposed.
For an organisation holding financial records or health data with a twenty-year confidentiality requirement, the calculation is direct: if a credible CRQC timeline runs from 2030 to 2040, and enterprise cryptographic inventory projects typically take six to twelve months with migration programmes taking longer still, a twenty-year confidentiality window produces a Mosca flag even under the optimistic end of that timeline. The urgency of beginning the inventory is a function of your data's longevity, not just the probability of CRQC arrival.
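The arithmetic is simple enough to sketch directly. The migration figure below is assumed for illustration; the confidentiality requirement and CRQC horizons are the ones used in the paragraph above:

```python
def mosca_flag(migration_years, confidentiality_years, crqc_horizon_years):
    """Mosca's inequality: data is already at risk when X + Y > Z."""
    return migration_years + confidentiality_years > crqc_horizon_years

migration = 3         # X: inventory plus migration execution (assumed figure)
confidentiality = 20  # Y: years the data must stay confidential
# Z: CRQC horizons of 5 and 15 years (2030 and 2040 from a 2025 vantage point)
for horizon in (5, 15):
    print(f"Z = {horizon} years: at risk = {mosca_flag(migration, confidentiality, horizon)}")
```

With Y at twenty years, the flag raises at both ends of the horizon range: the inequality is decided by the data's longevity before the migration estimate even matters.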
The Harvest Now, Decrypt Later threat, documented in the CISA/NSA/NIST joint advisory of August 2023, means the exposure window is already open for long-lived sensitive data. Data encrypted over RSA-based TLS last year may already be archived by an adversary awaiting the hardware. An inventory that identifies your long-lived sensitive data stores first allows you to prioritise those workstreams in the migration sequence rather than discovering them late and finding they are on the critical path.
Regulatory Direction Is Towards Formalised Cryptographic Inventories
DORA (the Digital Operational Resilience Act, Regulation EU 2022/2554, effective January 2025) requires financial entities to maintain ICT asset registers that include cryptographic dependencies. NIS2 (Directive 2022/2555, entered into force January 2023, transposition deadline October 2024) requires in-scope organisations to implement appropriate technical measures for network and information system protection; ENISA's implementation guidance links this to cryptographic agility and inventory capability. The UK NCSC's 2024 guidance on quantum-safe migration explicitly recommends cryptographic asset discovery as part of operational resilience planning.
A Cryptographic Bill of Materials (CBOM) is the emerging formal representation of this requirement, modelled on the Software Bill of Materials (SBOM) concept established under US Executive Order 14028. IBM has published CBOM tooling based on the CycloneDX schema extension. NIST IR 8547 references CBOM as a supply chain cryptographic transparency component. CBOM is not yet mandated in any major jurisdiction as of mid-2025, but it is the direction the frameworks are moving. An organisation that builds a cryptographic inventory now has the underlying data the CBOM requirement will eventually formalise.
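As a sketch of what a CBOM entry looks like in practice, the fragment below is modelled on the cryptographic-asset component type introduced in CycloneDX 1.6. The field names follow my reading of that schema and should be verified against the published CycloneDX specification before use:

```python
import json

# Minimal CBOM document describing one cryptographic asset (an RSA-2048
# public-key encryption algorithm in use somewhere in the estate).
# Structure modelled on CycloneDX 1.6; verify against the published schema.
cbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {
            "type": "cryptographic-asset",
            "name": "RSA-2048",
            "cryptoProperties": {
                "assetType": "algorithm",
                "algorithmProperties": {
                    "primitive": "pke",
                    "parameterSetIdentifier": "2048",
                },
            },
        }
    ],
}
print(json.dumps(cbom, indent=2))
```

An inventory captured in a structured register maps onto this format with little friction, which is the practical sense in which building the inventory now pre-pays the eventual CBOM requirement.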
Where to Begin
The practical starting point for most organisations is a triage assessment that identifies which dimensions of cryptographic exposure are most material before committing to a full discovery programme. That assessment shapes where to focus the initial inventory workstreams: whether the long-lived data stores are the first priority (a Mosca-driven decision), the PKI and signing infrastructure (a compliance-driven decision), or the legacy infrastructure with the longest migration lead times (a critical-path-driven decision).
QSECDEF's Quantum Threat Exposure Assessment scores an organisation's quantum cryptographic exposure across seven weighted factors: sector, data longevity, sensitivity, trust infrastructure dependence, regulatory obligations, legacy complexity, and vendor dependency. It is not a substitute for a cryptographic inventory. It does not scan an estate or enumerate certificates. It is a directional scoring instrument that produces a triage result: which dimensions of your profile are driving quantum risk, and whether a cryptographic inventory is a high-priority immediate action or a planned medium-term one. Seven questions. Runs in your browser. No account required.
The inventory itself is the next step after the triage score. QSECDEF's Cluster 9 articles cover the discovery methodology in detail, from building a cryptographic asset register through to automating discovery across different infrastructure layers. Individual membership includes access to the practitioner methodology documentation and the inventory templates our teams use in the field. Company membership covers up to ten seats.