AI-Automated Cybersecurity: What to Automate?

Let’s face it: While some IT pros may have a knee-jerk reaction against AI because of the current hype, it is only a matter of time before AI becomes embedded in many daily business processes, including cybersecurity controls. For now, though, while the technology is still young, it can be difficult to grasp the real implications and challenges of AI automation.

This article debunks a couple of common myths about how AI can enhance cybersecurity and gives IT and cybersecurity leaders guidance on making informed decisions about what to automate.

Don’t buy into the myth that AI is going to replace all your employees. Even if that were possible, we as a society are not ready for that leap. Imagine boarding a jet and noticing that no human pilot ever enters the cockpit prior to departure. No doubt there would be mutiny on board, with passengers demanding that a pilot be present for the flight. As effective as the autopilot function is, it has its limitations, so people still want a human in charge. 

Indeed, human workers were not purged when the Industrial Revolution took hold. While machinery took over elements of manual labor, it did not replace the humans themselves; rather, it brought greater efficiency, predictability and consistency to manufacturing. In fact, new jobs and even new industries requiring new skills and greater diversity were born. Similarly, AI will bring new levels of efficiency, scalability and accuracy to business processes while creating new opportunities and transforming the labor market. In other words, you will still need cybersecurity personnel, but they will be upskilled by AI assistance.

Another important misconception is that AI automation will inevitably reduce costs. This may sound familiar: the same was said about the cloud not long ago. Organizations that migrated their data centers to the cloud found that while the OPEX cost structure of the cloud has advantages over traditional CAPEX spending, the total cost for large environments ends up similar, in part because more sophisticated systems require more skilled (and expensive!) talent. Likewise, automation will change the distribution of costs, but not the overall costs.

Finally, a fully automated AI-driven security solution is sometimes seen as a desirable goal. In reality, it is a pie-in-the-sky dream that raises serious questions of trust and auditability. What if that automation malfunctions or becomes compromised? How do you verify that its outcomes are still aligned with business objectives? The truth is that we are in the early stages of this new AI-automated paradigm, and no one truly understands how AI automation might one day be exploited from a security perspective. AI and automation are not silver bullets (nothing is).

Certain processes are better suited for automation than others. Here is a three-point assessment that can help you decide whether a security process is a good candidate for automation (a minimal scoring sketch follows the list):

  • The process is repetitive and time-consuming when performed manually.
  • The process is sufficiently well defined that it can be turned into an algorithm.
  • The results of the process are verifiable, so a human can determine when something is wrong.
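
To make this checklist concrete, here is a minimal Python sketch of the assessment. The process names, field names and all-three-criteria rule are illustrative assumptions for this article, not an established scoring method.

```python
from dataclasses import dataclass

@dataclass
class SecurityProcess:
    # Hypothetical descriptor for a candidate process; the fields mirror
    # the three criteria above and are assumptions made for this example.
    name: str
    is_repetitive: bool    # repetitive and time-consuming when done by hand
    is_well_defined: bool  # clear enough to be turned into an algorithm
    is_verifiable: bool    # a human can tell when the output is wrong

def automation_score(p: SecurityProcess) -> int:
    """Count how many of the three criteria the process meets."""
    return sum([p.is_repetitive, p.is_well_defined, p.is_verifiable])

candidates = [
    SecurityProcess("log triage", True, True, True),
    SecurityProcess("incident post-mortem narrative", False, False, True),
]

for p in candidates:
    score = automation_score(p)
    verdict = "good automation candidate" if score == 3 else "keep human-led"
    print(f"{p.name}: {score}/3 -> {verdict}")
```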

You don’t want your expensive security talent poring over security logs, correcting routine misconfigurations or triaging predefined metric alerts. By equipping them with AI-driven security tools, you can increase their visibility, deepen their understanding of different threats and speed their response to attacks.
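
As a toy illustration of the kind of chore worth offloading, the sketch below counts failed logins per source IP so an analyst sees only the outliers. The log lines, regular expression and threshold are assumptions made up for this example.

```python
import re
from collections import Counter

# Toy auth-log excerpt; in practice these lines would stream in from
# syslog, an endpoint agent or a SIEM export.
LOG_LINES = [
    "Jan 10 09:01:12 host sshd[311]: Failed password for admin from 203.0.113.7",
    "Jan 10 09:01:15 host sshd[311]: Failed password for admin from 203.0.113.7",
    "Jan 10 09:01:19 host sshd[311]: Failed password for root from 203.0.113.7",
    "Jan 10 09:02:02 host sshd[312]: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")
THRESHOLD = 3  # tune to your environment's normal failure rate

# Count failed logins per source IP and surface only the outliers.
failures = Counter(m.group(1) for line in LOG_LINES if (m := FAILED.search(line)))

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip} - escalate for human review")
```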

More broadly, consider how professional sports teams invest in technology to improve the performance of their athletes. Similarly, you need to give your security teams the automated tools they need to up their game. For example, insider threat is a significant risk, but it is practically impossible to watch over every user in the company, and rogue employees often become evident only after they have already done some damage. AI-based solutions can reduce this risk far more efficiently: a user and entity behavior analytics (UEBA) solution can spot subtle changes in a user’s data access patterns, as well as deviations of their behavior from that of their peers, both of which signal a potential risk that warrants prompt review.
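
The following is a drastically simplified sketch of the idea behind such detection. It uses a plain z-score over invented daily access counts; a real UEBA product would rely on far richer features and models.

```python
from statistics import mean, stdev

# Toy baseline: files accessed per day for each user over the past week.
access_history = {
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob":   [40, 38, 42, 41, 39, 40, 310],  # final day is a sharp spike
}

Z_THRESHOLD = 3.0  # flag anything more than 3 standard deviations from normal

for user, counts in access_history.items():
    baseline, today = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (today - mu) / sigma if sigma else 0.0
    if abs(z) > Z_THRESHOLD:
        print(f"REVIEW: {user} accessed {today} files today (z-score {z:.1f})")
```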

Another area where AI can take your team’s capabilities to a whole new level is threat hunting. Automated solutions can more accurately identify traces of attacks that your protection mechanisms may have thwarted and correlate them with your threat intelligence. Such traces may be signs of a larger attack in progress, giving you the chance to prepare for it.
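
Here is a toy sketch of that correlation step, with a hypothetical indicator feed and observed artifacts standing in for a real threat-intelligence integration.

```python
# Hypothetical indicators of compromise (IOCs); a real feed (STIX/TAXII,
# MISP exports, etc.) would be far richer than this simple mapping.
THREAT_INTEL = {
    "203.0.113.7":  "brute-force botnet node, campaign X",
    "evil.example": "command-and-control domain, campaign X",
}

# Artifacts recovered from blocked connections and quarantined files.
observed = ["198.51.100.4", "203.0.113.7", "evil.example"]

# Cross-reference: several hits tied to one campaign suggest a larger attack.
leads = {a: THREAT_INTEL[a] for a in observed if a in THREAT_INTEL}

for artifact, context in leads.items():
    print(f"HUNT LEAD: {artifact} -> {context}")
```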

ChatGPT, Bard and thousands of other amazing new apps give executives the opportunity to experience AI in action. Working with their security teams, they can explore potential applications for the technology. But instead of blindly charging forward, it is vital to thoroughly assess which processes make sense to automate. This due diligence will help IT leaders ensure that the risks of a proposed technology do not exceed its benefits.

Ilia Sotnikov is Security Strategist & Vice President of User Experience at Netwrix. He is responsible for technical enablement, UX design, and product vision and strategy.


