Chapter 9. Conclusion: Why AI and ML Are Pivotal to the Future of Enterprise Security

There is no doubt that AI- and ML-enabled technologies are already a critical part of many security teams’ arsenals. Despite the hesitation many in security have around AI and ML, especially as buzzwords, the fact is that many security tools already use AI and ML behind the scenes. There are simply too many new and evolving threats for even the largest security team to track effectively. AI and ML allow security vendors and security teams to focus on their core mission while the AI and ML do the bulk of the grunt work of building better security solutions.

Here are some steps that organizations can follow when adopting AI and ML:

  • Embrace AI and ML approaches

  • Agree that this is where security is going

  • Develop a team to investigate the feasibility of using what is available now

  • Stay abreast of what is coming

  • Document and track all your research and findings

AI and ML might become very important because of regulations like the General Data Protection Regulation (GDPR) in the European Union, the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada, the California Consumer Privacy Act of 2018, and other regulations that are likely coming soon. An organization must do everything possible to protect the consumer data it maintains. Organizations that fail to do so will face substantial fines.

However, what most organizations don’t realize is that these regulations do not prescribe what types of technology are needed to protect the data of their customers and employees. The regulations broadly state that it is the responsibility of the organization to do everything possible to keep applications and data secure. Reading through the specific language used in these regulations, you will often find terms such as “reasonable security procedures,” “appropriate practices,” or “nature of the information to protect.” What this means is that an organization that has been breached will need to demonstrate, most likely in a court of law, that everything possible was done to protect the personally identifiable information and other data it stores. This points directly to the concept of due care, which is defined as the effort made by an ordinarily prudent or reasonable party to avoid harm to another.

One can easily envision the courtrooms of the future, in which the defendants will be CISOs, CIOs, and CEOs of major corporations standing before a jury of their peers, or worse, before a panel of government legislators, trying to explain why they did not exercise due care comparable to that of their peers. It doesn’t take much imagination to picture this; just look at the congressional testimony of Mark Zuckerberg (CEO of Facebook) or Richard Smith (former CEO of Equifax).

Moving forward, as AI and ML become embedded in the tools in use today, or arrive already baked into the new tools making their way to market, highly skilled human operators will still be needed to use those tools to their fullest. Just as pilots understand their aircraft in extraordinary depth, security professionals will need to understand the AI- and ML-enabled tools at their disposal. Or, to put it another way, no one shows up to a modern-day battlefield carrying a spear.

The future of AI-enabled security is quite promising. Organizations are already learning to operate their human-computer, AI- and ML-enabled defenses much as pilots operate their fighter jets.

In that spirit, attackers beware. Modern-day cyberpilots are getting better equipped and becoming much smarter at defeating your attacks.
