There’s a new trend emerging in the general AI space — Generalized AI for cybersecurity — and Google is among those looking to get involved on the ground floor.
At RSA Conference 2023 today, Google announced Cloud Security AI Workbench, a cybersecurity software suite powered by a dedicated “security” AI language model called Sec-PaLM. A branch of Google Palm model, Sec-PaLM is “refined for security use cases,” Google says — incorporating security intelligence such as research into software vulnerabilities, malware, threat indicators threats and threat actor profiles.
Cloud Security AI Workbench extends a range of new AI-powered tools, such as Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize, and act on security threats. (Remember that Google bought Mandiant in 2022 for $5.4 billion.) VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and interpret the behavior of malicious scripts.
Elsewhere, Sec-PaLM will assist customers of Chronicle, Google’s cloud cybersecurity service, in finding security events and interacting “with caution” with the results. Meanwhile, AI users of Google’s Security Command Center will receive “human-readable” explanations of the vulnerability to a Sec-PaLM attack, including affected assets, recommended mitigations and risk summary for security, compliance, and privacy findings.
Google wrote in a blog post this morning: “While general AI has captured the imagination lately, Sec-PaLM draws on years of fundamental AI research by Google and DeepMind as well as deep expertise. of our security teams. “We are just beginning to realize the power of general AI adoption in security, and we look forward to continuing to leverage this expertise for our customers and drive advancements in the community.” security coin.”
Those are pretty bold ambitions, especially considering that VirusTotal Code Insight, the first tool in the Cloud Security AI Workbench, is currently only available in a limited preview. (Google says that it plans to roll out the rest of the service to “trusted testers” in the coming months.) It’s not clear how well Sec-PaLM works — or not, frankly. works — in practice. Sure, “recommended risk mitigations and summaries” sound helpful, but are those recommendations much better or more accurate since an AI model generates them?
After all, AI language models – no matter how advanced – make mistakes. And they are vulnerable to attacks like inject nowthis can cause them to behave in ways that their creators did not intend.
Of course, that doesn’t stop the tech giants. In March, Microsoft show Security Copilot, a new tool that aims to “summarize” and “understood” threat intelligence using generalized AI models from OpenAI including GPT-4. In press documents, Microsoft – similar to Google – claims that generalized AI will better equip security professionals to fight new threats.
The jury is very interested in that. In fact, innovative AI for cybersecurity may be more of an exaggeration than anything — the lack of studies on its effectiveness. We’ll see the results soon with any luck, but in the meantime, disregard the claims of Google and Microsoft.