NIST Streamlines AI Security Guidance, Avoids Reinvention
The National Institute of Standards and Technology (NIST) is guiding the cybersecurity community through the security implications of artificial intelligence while deliberately avoiding a flood of entirely new directives. Rather than “reinventing the wheel,” the agency is folding AI security considerations into its established practices and existing guidelines.
The rapid proliferation of AI technologies presents a dual challenge: AI can significantly enhance cyber defenses, but it also introduces novel attack vectors that traditional security measures struggle to address. With surveys indicating that over 90% of cybersecurity professionals are concerned about AI-enabled threats, the need for clear, actionable guidance is acute. NIST is responding by building on its foundational work to bolster the security and trustworthiness of AI systems.
A cornerstone of NIST’s strategy is the AI Risk Management Framework (AI RMF), released in January 2023. This voluntary framework offers a structured approach to managing AI-associated risks, emphasizing trustworthiness, accountability, transparency, and ethical behavior throughout an AI system’s lifecycle. The AI RMF is designed to be flexible and adaptable, building on existing frameworks such as the NIST Cybersecurity Framework (CSF) and the NIST Privacy Framework. In a further refinement, NIST also released a Generative AI Profile for the AI RMF in July 2024, specifically addressing the unique risks posed by these advanced models.
Beyond the AI RMF, NIST is actively developing a “Cyber AI Profile” under the umbrella of its Cybersecurity Framework (CSF) 2.0. This profile aims to help organizations prepare for and manage AI-related cyber risks, including attackers leveraging AI tools for enhanced attacks such as deepfakes and advanced phishing. A draft of the profile is expected within nine months to a year, and NIST is soliciting public feedback through workshops and requests for information to ensure its practical utility. A virtual session to gather input on the profile in early August 2025 was hampered by technical difficulties, a reminder that these collaborative efforts remain a work in progress.
NIST also plans to issue a new control overlay for its Special Publication 800-53 series, targeting risks unique to AI systems, expected within the next six to twelve months. In parallel, the agency is integrating AI into the NICE Workforce Framework for Cybersecurity, introducing an AI Security Competency Area and updating work roles to reflect AI’s evolving impact on the cybersecurity workforce. The goal is to secure not only the technologies themselves but also to equip the people defending them with the necessary knowledge and skills.
NIST’s commitment to avoiding redundant effort is also evident in its “zero drafts” initiative, which accelerates AI standards-setting and broadens participation by soliciting early feedback on guidance for AI testing, evaluation, verification, and validation. By adapting proven secure development practices from agencies such as CISA to the AI context, NIST aims to deliver guidance that is effective and fits readily into existing organizational processes, letting cybersecurity professionals manage AI’s multifaceted impact on their work without being overwhelmed.