On April 15, 2024, several international security agencies released a guide of best practices to help organizations defend their artificial intelligence (AI) systems against cyberattacks and improve the confidentiality, integrity and availability of those systems.

The National Security Agency's Artificial Intelligence Security Center (NSA AISC) produced Deploying AI Systems Securely in collaboration with the Canadian Centre for Cyber Security (CCCS), the Cybersecurity & Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate's Australian Cyber Security Centre (ASD ACSC), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom's National Cyber Security Centre (NCSC-UK).

While AI can boost productivity for organizations, the technology is also vulnerable to attack. For this reason, Deploying AI Systems Securely focuses on technical best practices for protecting an organization's AI system from malicious actors who could steal sensitive information and intellectual property.

Key recommendations include:

  • Promote collaboration among all parties involved – particularly the data science, infrastructure and cybersecurity teams – so that risks and concerns can be raised early.
  • Identify, catalog and protect all proprietary data sources that will be used in your organization's AI model training or fine-tuning.
  • Secure sensitive AI information, including AI model weights, outputs and logs, by encrypting the data at rest and storing the encryption keys in a hardware security module (HSM) for on-demand decryption (see the encryption sketch after this list).
  • Use cryptographic methods, digital signatures and checksums to confirm each artifact's origin and integrity, thereby protecting sensitive information from unauthorized modification during AI processes (see the signing sketch after this list).
  • Store all forms of code, including source and executable code, as well as artifacts (models, parameters, configurations, data and tests), in a version control system with proper access controls so that only validated code is used and all changes are tracked (see the manifest sketch after this list).
  • Thoroughly test the AI model with techniques such as adversarial testing to evaluate its resilience against compromise attempts (see the adversarial-testing sketch after this list).
  • Perform continuous scans of AI models and their hosting IT environments to identify possible tampering (see the scanning sketch after this list).
  • Educate users, administrators and developers about security best practices, such as strong password management, phishing prevention and secure data handling.
  • Promote a security-aware culture to minimize the risk of human error. Where possible, use a credential management system to limit, manage and monitor credential use, reducing risk further (see the credential sketch after this list).
  • Engage external security experts to conduct audits and penetration testing on ready-to-deploy AI systems. This helps identify vulnerabilities and weaknesses that may have been overlooked internally.
  • At the completion of any process in which data and models are exposed or accessible, perform autonomous and irretrievable deletion of components, such as training and validation models or cryptographic keys, leaving no retention or remnants (see the deletion sketch after this list).
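
A minimal illustration of the encryption-at-rest recommendation, sketched in Python with the open-source cryptography package. The key is generated locally purely for demonstration; in a deployment following the guide it would be created and held by the HSM, and the weights shown here are stand-in bytes.

```python
# Minimal sketch of encrypting sensitive AI data (e.g., model weights) at rest.
# Uses the third-party "cryptography" package (pip install cryptography).
# NOTE: in production the key would be generated inside and guarded by an HSM;
# holding it in a local variable here is purely for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # stand-in for an HSM-resident key
fernet = Fernet(key)

weights = b"\x00\x01..."              # stand-in for serialized model weights
ciphertext = fernet.encrypt(weights)  # this is what actually lands on disk

# Later, on-demand decryption after retrieving the key from the HSM:
assert fernet.decrypt(ciphertext) == weights
```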
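
The signing sketch below illustrates the checksum-and-signature recommendation with the same cryptography package, using Ed25519 signatures. The artifact bytes and key handling are simplified for illustration; a real pipeline would distribute the public key and the published checksum through a trusted channel.

```python
# Sketch of confirming an artifact's integrity (checksum) and origin (digital
# signature). Uses the "cryptography" package; the artifact bytes are dummies.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

artifact = b"model bytes"                    # stand-in for a model file

# Integrity: compare the SHA-256 digest against the publisher's checksum.
print("sha256:", hashlib.sha256(artifact).hexdigest())

# Origin: the publisher signs; the consumer verifies with the public key.
private_key = Ed25519PrivateKey.generate()   # publisher side
signature = private_key.sign(artifact)

public_key = private_key.public_key()        # distributed to consumers
public_key.verify(signature, artifact)       # raises InvalidSignature if tampered
```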
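
The manifest sketch below is one way to tie artifacts to version control: a JSON manifest of SHA-256 digests committed alongside the code and checked before any artifact is loaded. The file names are illustrative, and the dummy files exist only so the sketch runs.

```python
# Sketch: record artifact digests in a manifest that is committed to version
# control next to the code, then refuse to load anything whose hash drifts.
import hashlib, json, pathlib

for name in ("model.onnx", "config.yaml"):   # create dummies so this runs
    pathlib.Path(name).write_bytes(b"dummy " + name.encode())

def sha256(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

# Maintainer side: write the manifest and commit it with the code.
manifest = {p: sha256(p) for p in ("model.onnx", "config.yaml")}
pathlib.Path("artifact_manifest.json").write_text(json.dumps(manifest, indent=2))

# Consumer side: validate every artifact before use.
recorded = json.loads(pathlib.Path("artifact_manifest.json").read_text())
for path, expected in recorded.items():
    if sha256(path) != expected:
        raise RuntimeError(f"unvalidated artifact: {path}")
```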
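
The adversarial-testing sketch below demonstrates one common technique, the fast gradient sign method (FGSM), against a toy PyTorch classifier. The model, input and label are dummies; real adversarial testing would use the production model, representative data and a broader suite of attacks.

```python
# Sketch of one adversarial-testing technique (FGSM) against a toy classifier.
# Assumes PyTorch; the model, input and label are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # dummy model
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image
y = torch.tensor([3])                             # its "true" label

# Compute the gradient of the loss with respect to the input ...
loss = loss_fn(model(x), y)
loss.backward()

# ... and perturb the input in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# A robust model should keep its prediction; a fragile one flips it.
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```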
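
The scanning sketch below shows a simple form of continuous tampering detection: rehashing deployed files on a schedule and alerting on any drift from a trusted baseline. The watched paths, interval and alerting channel are all illustrative; a production deployment would typically use a dedicated file-integrity monitoring tool.

```python
# Sketch of a continuous integrity scan: rehash deployed model files on a
# schedule and alert on drift from a trusted baseline.
import hashlib, pathlib, time

WATCHED = ["model.onnx", "serving_config.yaml"]
for name in WATCHED:                              # dummies so this runs
    pathlib.Path(name).write_bytes(b"deployed " + name.encode())

def snapshot() -> dict:
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in WATCHED}

baseline = snapshot()  # captured at deployment time, stored out of band

while True:
    for path, digest in snapshot().items():
        if digest != baseline[path]:
            print(f"ALERT: possible tampering detected in {path}")
    time.sleep(300)    # rescan every five minutes
```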
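
The credential sketch below assumes a common pattern in which a credential management system injects short-lived secrets through environment variables; the variable name and service are hypothetical. The point is that code consumes managed credentials rather than hardcoded ones, and that each use is logged for monitoring without ever logging the secret itself.

```python
# Sketch of consuming a managed credential instead of hardcoding it. Assumes
# the credential manager injects short-lived secrets via environment variables;
# MODEL_REGISTRY_TOKEN is a hypothetical name.
import logging, os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("credential-audit")

def get_api_token() -> str:
    token = os.environ.get("MODEL_REGISTRY_TOKEN")  # injected, never committed
    if not token:
        raise RuntimeError("credential not provisioned; check the manager")
    log.info("MODEL_REGISTRY_TOKEN accessed")  # monitor use; never log the value
    return token
```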
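
The deletion sketch below is a best-effort illustration of irretrievable deletion: overwriting a file with random bytes before unlinking it. On SSDs and copy-on-write filesystems, wear leveling can leave remnants, so cryptographic erasure (destroying the key that encrypted the data) is often the more dependable route; the checkpoint name is hypothetical.

```python
# Sketch of best-effort irretrievable deletion: overwrite a file with random
# bytes before unlinking it. Wear leveling on SSDs can defeat overwriting, so
# key destruction is often preferred for encrypted data.
import os, pathlib

def shred(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # force the write to physical storage
    os.remove(path)

pathlib.Path("validation_checkpoint.ckpt").write_bytes(b"temporary artifact")
shred("validation_checkpoint.ckpt")
```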

CISA offers further joint guidance in Guidelines for secure AI system development and Engaging with Artificial Intelligence. Together, these documents offer a wealth of technical best practices.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.