Workshop Description
For red team leads and AI security architects. Covers quantum-accelerated attack vectors against defence AI systems, adversarial machine learning under quantum threat, and defensive hardening frameworks for ML inference pipelines.
Defence AI systems face a dual threat. Classical adversarial attacks (model poisoning, evasion, extraction) are already well-documented in the MITRE ATLAS framework. Quantum computing introduces a second dimension: Grover's algorithm provides a quadratic speedup for brute-force attacks on model parameters and API keys, the HHL algorithm enables new approaches to adversarial example generation, and quantum sampling techniques could accelerate black-box model extraction. The practical question is which of these quantum-enhanced attacks reach operational relevance first, and what that means for the red team playbook. This workshop maps the intersection, separates genuine near-term threats from speculative ones, and provides a defensive framework grounded in PQC standards (FIPS 203/204/205) for hardening AI infrastructure.
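The quadratic speedup claim above has a simple rule of thumb: Grover search over a space of 2^n keys needs on the order of 2^(n/2) quantum queries, so the effective brute-force strength of an n-bit secret is halved. A minimal back-of-the-envelope sketch (illustrative only; real quantum cost models add large constant and error-correction overheads):

```python
# Rule of thumb for Grover's algorithm: searching 2^n candidates takes
# ~2^(n/2) quantum queries, i.e. the quadratic speedup halves effective
# bit strength. Illustrative numbers only; ignores circuit-depth and
# error-correction overheads that dominate real-world cost estimates.

def effective_bits(key_bits: int, grover: bool = False) -> int:
    """Effective brute-force security in bits, with or without Grover."""
    return key_bits // 2 if grover else key_bits

for n in (128, 256):
    print(f"{n}-bit secret: classical {effective_bits(n)} bits, "
          f"under Grover ~{effective_bits(n, grover=True)} bits")
```

This is why 256-bit symmetric keys and API secrets are generally considered quantum-safe (~128 bits post-Grover), while the asymmetric primitives broken by Shor's algorithm require replacement with the PQC standards named above.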
What the workshop covers
- Quantum-accelerated attack taxonomy: which classical pen testing techniques gain meaningful speedup from quantum algorithms and which do not
- Adversarial ML under quantum threat: how Grover search, quantum sampling, and HHL affect poisoning, evasion, and extraction attacks
- Side-channel vulnerabilities: quantum-enhanced timing and power analysis attacks against ML inference hardware
- Defensive hardening: PQC integration for model serving (TLS 1.3 with ML-KEM, ML-DSA model signing, quantum-resistant API authentication)
- MITRE ATLAS quantum extensions: mapping quantum-specific threats onto the existing adversarial ML taxonomy
- Red team planning: how to incorporate quantum-era scenarios into adversary emulation exercises today
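On the quantum-resistant API authentication point above: symmetric-key message authentication is one of the few primitives that survives the quantum transition largely intact, since Grover only halves effective key strength. A minimal sketch of HMAC-SHA-256 request signing for a model-serving endpoint, using only the Python standard library (the endpoint path and message layout are illustrative assumptions, not from the workshop material):

```python
# Sketch: HMAC-SHA-256 request authentication for a model-serving API.
# Symmetric primitives are quantum-resistant in practice: Grover halves
# effective key strength, so a 256-bit HMAC key retains ~128-bit security.
# The "/v1/infer" path and message layout are illustrative assumptions.

import hmac
import hashlib
import secrets

API_KEY = secrets.token_bytes(32)  # 256-bit shared key -> ~128 bits post-Grover

def sign_request(method: str, path: str, body: bytes, key: bytes) -> str:
    """Tag a request so the server can verify integrity and origin."""
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes,
                   tag: str, key: bytes) -> bool:
    """Recompute the tag server-side and compare in constant time."""
    expected = sign_request(method, path, body, key)
    return hmac.compare_digest(expected, tag)

tag = sign_request("POST", "/v1/infer", b'{"input": [1, 2, 3]}', API_KEY)
assert verify_request("POST", "/v1/infer", b'{"input": [1, 2, 3]}', tag, API_KEY)
```

The asymmetric side of the same hardening story (ML-KEM key establishment in TLS 1.3, ML-DSA signatures over model artifacts) requires PQC library support and is covered in the defensive-hardening module.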