NIST AI Security Paper: Defensive AI & Offensive AI Are Distant Cousins
Part 1: AI attack vectors are multiplying faster than the tools, systems, and standards meant to stop them. NIST new SP 800-53 control overlays for AI system security highlight the growing delta between AI offense and defense.

Share this post

🚨 NIST's AI Security Paper: A Reminder That Defensive AI & Offensive AI Are Distant Cousins
AI attack vectors are multiplying faster than the tools, systems, and standards meant to stop them. I have been stating this into the void for 2 years while most commentators are pondering whether dumb as a door post transformer models are showing signs of sentience.
Introduction
Not that I require external validation on my long-standing thesis, but on 14 August 2025, NIST released a concept paper proposing new SP 800-53 control overlays for AI system security. The paper was accompanied by a public action plan and a community engagement channel, marking the beginning of a longer process to define controls for AI.
While this may look like progress, the reality is that the advancement of AI attack surfaces is far outpacing the creation of effective countermeasures, mitigation strategies, and remediation tools. But not at my firm Open Code Mission. But this is not an adversarial for OCM. It is deeply personal to me as I spend and have spent over 30 years in the security space.
What NIST Is Proposing
The overlays are built on NIST's SP 800-53 security and privacy controls, with extensions that pull in related AI-focused guidelines:
- SP 800-218A – Secure development practices for generative and foundation models
- AI 100-2025 – Adversarial machine learning taxonomy
- Draft AI 800-1 – AI risk and governance considerations
The newest NIST framework targets the confidentiality, integrity, and availability of AI models and their underlying infrastructure.
The Delta Between AI Offence and Defence
Despite its structure, the paper illustrates a severe deficit in practical countermeasures. AI threats are evolving in real time:
- Prompt injection
- Model exfiltration
- Adversarial retraining, and of course
- Shadow AI exploitation
They are already omnipresent and highly active in the wild. Yet, the tools and services to detect, mitigate, and remediate these vectors remain immature, fragmented, or non-existent.
This growing delta between AI attack vectors and AI security overlay leaves enterprises exposed. It also raises the question: Can policy and standards keep pace with a technology moving at exponential speed?
My view is no, not in the context of how we fundamentally treat and handle data acquisition, tagging, storage, retrieval, processing, and post-processing recommittal. Enter the dePenros Tabula Rasa Theory for Data in the Sensory, Emotive, and Geospatial domains. But again, this is not about a theory I first authored in 2018.
Facing the Current Reality
Generative AI and predictive models are being adopted at the enterprise scale with little regard for the security blind spots they introduce.
Multi-agent systems are being piloted with limited governance. AI developers themselves are under immense pressure to ship products, often at the cost of embedding security controls from the start.
And then there are the open source warriors, especially here on X, who think expertise means a GitHub full of MCP tooling with as much security and strategic direction as a bucket of paint.
NIST's work is a step in the right direction but measured against the velocity of AI's offensive landscape it reads more like a slow crawl than a sprint.
Up Next
This is Part One of my analysis on the new NIST concept paper. Over the course of today, I will share more updates, unpacking its content and exposing the miserable pace of advancement in AI countermeasures compared with the breakneck speed of AI technology itself.
Stay tuned. The delta is widening, and it should concern every security leader, policymaker, and enterprise investing in AI, but it does not.