The National Security Agency (NSA) is releasing “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems,” a Cybersecurity Information Sheet (CSI), today. The CSI is aimed at owners of National Security Systems and Defense Industrial Base enterprises that will implement and operate AI systems designed and developed by an external entity.
“AI offers unprecedented opportunity, but it also raises the possibility of malicious activity,” said NSA Cybersecurity Director Dave Luber. “NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis.”
The CSI is the first product of the NSA’s Artificial Intelligence Security Center (AISC), created in collaboration with the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).
Although the guidance is aimed at national security purposes, it can be applied by anyone integrating AI capabilities into a regulated environment, particularly in high-risk, high-value settings. The CSI builds on the previously published Guidelines for Secure AI System Development and Engaging with Artificial Intelligence.
With the release of its first set of guidelines, the Artificial Intelligence Security Center (AISC) is positioned to support one of its main objectives: improving the confidentiality, integrity, and availability of AI systems.
The NSA established the AISC in September 2023 as part of the Cybersecurity Collaboration Center (CCC). The AISC was created to detect and counter AI vulnerabilities, develop and promote AI security best practices, ensure the NSA stays ahead of adversaries’ tactics and techniques, and foster partnerships with experts from U.S. industry, national labs, academia, the Intelligence Community, the Department of Defense, and select foreign partners. As the field of AI security develops, the AISC plans to work with global partners to create guidance on subjects including data security, content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery.