Google Launches Secure AI Framework to Help Organizations Use AI Responsibly
Google has launched a new Secure AI Framework (SAIF, PDF) to help organizations use artificial intelligence (AI) responsibly. The framework sets out six principles designed to ensure that AI systems are built and used safely and securely.
Google’s framework is in line with the best practices other big tech companies have adopted for responsible AI. Microsoft, for example, released a Responsible AI Standard (PDF) in June 2022 that outlines how it uses AI internally. Google’s framework, however, is aimed at helping other organizations use AI responsibly.
Secure Foundation
Google’s framework begins with a secure foundation. Organizations that create their own AI models or use publicly available large language models (LLMs) must have a secure IT infrastructure in place. A zero-trust policy should be used to prevent malicious code injections and to protect models from being stolen. Organizations should also be aware of the risks of training models on data that contains sensitive information. A bank using AI to detect fraud, for example, must consider what could happen if customers’ financial records leak into a model’s training set and, ultimately, its outputs.
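To make that last point concrete, here is a minimal Python sketch of scrubbing sensitive values from records before they enter a training corpus. The patterns and the redact helper are illustrative assumptions, not part of SAIF; a real deployment would rely on a vetted PII-detection library and policies tuned to its own data.

```python
import re

# Hypothetical patterns for illustration only; production systems should
# use a dedicated PII-detection library, not ad hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(record: str) -> str:
    """Replace sensitive substrings with typed placeholders before the
    record is added to a training corpus."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

if __name__ == "__main__":
    raw = "Customer jane@example.com disputed a charge on account 4111111111111111."
    print(redact(raw))
    # -> Customer [EMAIL] disputed a charge on account [ACCOUNT_NUMBER].
```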
Extended Detection & Response
Google also emphasizes the importance of extended detection and response. Threat intelligence should be used to spot attacks before they cause damage, and the inputs and outputs of generative AI systems should be monitored. The rise of AI only reinforces the need for organizations to be prepared for cyberattacks.
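One way such input/output monitoring can look in practice is a thin logging wrapper around a generation call. The sketch below is an assumption about how this could be wired up, not a SAIF-specified mechanism; the marker list and the monitored_generate helper are hypothetical stand-ins for real classifier- and threat-intelligence-driven detection.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-monitor")

# Hypothetical indicators of prompt-injection attempts; real pipelines
# combine classifiers, allow-lists, and threat intelligence feeds.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "system prompt", "exfiltrate")

def monitored_generate(model_fn, prompt: str) -> str:
    """Wrap a text-generation callable so every input and output is logged
    and obviously suspicious prompts are flagged for review."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if any(marker in prompt.lower() for marker in SUSPICIOUS_MARKERS):
        log.warning("%s flagged prompt: %r", timestamp, prompt)
    response = model_fn(prompt)
    log.info("%s prompt=%r response=%r", timestamp, prompt, response)
    return response

if __name__ == "__main__":
    # Stand-in model function for demonstration purposes.
    echo_model = lambda p: f"(model output for: {p})"
    monitored_generate(echo_model, "Summarize this quarterly report.")
```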
Google also advocates for using AI to protect AI systems. As AI becomes more accessible, malicious actors will start to use it as well. To protect against this, organizations must have a dynamic defense structure that can respond to changing threats.
Google also emphasizes the importance of consistent security management and the use of techniques such as reinforcement learning from human feedback (RLHF) to keep AI models up to date. Finally, organizations should conduct end-to-end risk assessments to verify data origins and validate AI models.
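Verifying data origins can be as simple as recording a cryptographic hash for each dataset file when it is sourced and re-checking it before training. The manifest format and verify_manifest helper below are assumptions for illustration; SAIF does not prescribe a specific mechanism.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> bool:
    """Compare every dataset file against the hash recorded when it was
    sourced; a mismatch means the data changed after provenance was set."""
    # Assumed manifest layout: {"train.csv": "<sha256>", ...}
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for filename, expected in manifest.items():
        if sha256_of(manifest_path.parent / filename) != expected:
            print(f"MISMATCH: {filename}")
            ok = False
    return ok
```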
Google is already using its own framework to ensure the responsible use of AI.