Google is among the companies competing for the emerging trend in the generative AI space.

Google launched Cloud Security AI Workbench a cybersecurity suite driven by Sec-PaLM, a specialised “security” AI language model. Sec-PaLM is an outgrowth of Google’s PaLM paradigm that has been “fine-tuned for security use cases, It includes security information such as research on software vulnerabilities, malware, threat indicators, and behavioural threat actor profiles.

Cloud Security AI Workbench includes a number of new AI-powered products, such as Mandiant’s Threat Intelligence AI. It will use Sec-PaLM to detect, summarise, and respond to security risks. Google Workspace company paid $5.4 billion for Mandiant in 2022. Another Google business, VirusTotal, will employ Sec-PaLM to assist subscribers in analysing and explaining the behaviour of harmful scripts.

 

Sec-PaLM on Google’s cloud cybersecurity

Sec-PaLM will allow clients of Chronicle, Google’s cloud cybersecurity service, to look for security events and engage with the findings “conservatively.” Meanwhile, Sec-PaLM will provide “human-readable” explanations of attack exposure to users of Google’s Security Command Centre AI. Furthermore, it includes impacted assets, proposed mitigations, and risk summaries for security, compliance, and privacy results.

It is worth noting that Google and DeepMind have worked on Sec-PaLM for years, as well as our security teams who have deep expertise and skill in this area.  As we begin to apply generative AI to security, we are just beginning to realize its potential. Moreover, we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community.”

 

Those are lofty goals, especially given that the first product is in the Cloud Security AI Workbench. VirusTotal Code Insight is now only available in a restricted preview. Workspace Google company says it will make the remaining options available to “trusted testers” in the coming months. To be honest, it’s unclear how effectively Sec-PaLM works — or doesn’t function — in practice. The recommended mitigations and risk summaries seem useful. However, are the recommendations any better or more exact because they were generated by an AI model?

 

After all, even the most cutting-edge AI language models make mistakes. They’re also vulnerable to assaults like prompt injection, which can force them to act in unexpected ways.

AI vs Tech Titans

Of course, this does not deter the tech titans. Microsoft Office 365 company released Security Copilot in March, a new tool that seeks to “summarise” and “make sense” of threat intelligence using OpenAI generative AI models, including GPT-4. Microsoft, like Google, has claimed that generative AI will help experts deal with emerging threats more effectively.

The verdict is still out on that. Generative AI for cybersecurity may be more hype than anything else, given the scarcity of research on its usefulness. We’ll hopefully see the findings shortly, but in the meanwhile, take Google and Microsoft’s assertions with a grain of salt.

0 0 votes
Article Rating
Subscribe
Notify of
guest
1 Comment
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
Oracle Fusion HCM Training

It is an interesting article. Unogeeks is the top Oracle Fusion HCM Training Institute, which provides the best Oracle Fusion HCM Training