Ed. note: This is the second article in a two-part series about artificial intelligence, its potential impact on how organizations approach security, and the accompanying considerations around implementation, efficacy, and compliance. Read the first article here.
When you think of artificial intelligence (AI), you probably think of smart home devices turning off lights and setting thermostats, or sending notifications when a new season of your favorite Netflix show launches. In fact, the most common use of AI is in detecting and deterring security intrusions, according to research from the Consumer Technology Association. Nearly half of organizations (44 percent) report using AI-enhanced tools to protect data, well ahead of pursuits such as financial trading (17 percent) or anticipating consumer buying preferences (19 percent).
The use of AI to solve security challenges through predictive machine learning is already underway. One salient proof point is DARPA’s recent investment of more than $2 billion in new and existing programs to push beyond second-wave machine learning techniques toward contextual reasoning capabilities. Experts assess that AI will bring greater speed and accuracy in detecting and responding to breaches, analyzing user behavior, predicting new threats, and uncovering malware installed on the network. Yet this technology is by no means a panacea, and as with any new technology, the risks arrive in lockstep with the rewards.
Create a Solid Foundation
Before integrating AI into your security protocols, first make sure you have a strong technical security foundation in place, says Will Pearce, Senior Operator with Silent Break Security, a cybersecurity consulting firm. “Don’t look to machine learning to make up for deficiencies in technical controls or policy.” He also advises security teams to apply AI to a problem the organization already understands well. “If you don’t understand Windows events before machine learning, you’re not going to understand them after.”
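To Pearce’s point, basic fluency can start with simply counting the events you already collect. Below is a minimal Python sketch, assuming a hypothetical CSV export of Windows Security events (the file name and the EventID and TargetUserName columns are illustrative), that tallies failed logons, Event ID 4625, per account before any machine learning enters the picture.

```python
# Minimal sketch: summarize failed logons (Windows Security Event ID 4625)
# per account from a hypothetical CSV export of event logs.
import csv
from collections import Counter

def failed_logons_by_account(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == "4625":  # 4625 = an account failed to log on
                counts[row.get("TargetUserName", "unknown")] += 1
    return counts

if __name__ == "__main__":
    # "security_events.csv" is a stand-in for however you export events.
    for account, n in failed_logons_by_account("security_events.csv").most_common(10):
        print(f"{account}: {n} failed logons")
```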
As with any major technology change, a poorly planned AI strategy is a waste of time and money, so assessing the environment for AI is a critical first step. Document the overall state of network security, including the completeness of the data your security tools collect, says Peter Clay, COO of Dark Cubed, a cybersecurity software platform that detects cyber threats. More specifically, the head of IT security should know what “normal” looks like for the network and/or area of defense, so that this baseline can expose irregularities as they appear.
It’s vital that companies have a firm grasp on the network architecture, its scope, and assets before proceeding with AI. Without that knowledge, AI can change from a benefit to a liability. “What do you get when you automate stupid?” Clay asks. “Faster stupid.”
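To make “knowing what normal looks like” concrete, here is a hedged sketch of one common approach (an illustration, not a method Clay prescribes): fit an unsupervised anomaly detector, scikit-learn’s IsolationForest, to per-host traffic features, then score new observations against that learned baseline. The features and numbers are synthetic stand-ins for what a real pipeline would derive from flow or proxy logs.

```python
# Baseline "normal" per-host behavior with an unsupervised model, then
# flag hours that deviate from it. Features: bytes out, connection count,
# distinct destination ports per host-hour (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic stand-in for a month of observed, presumed-benign activity.
baseline = rng.normal(loc=[50_000, 120, 8], scale=[5_000, 15, 2], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new observations: 1 = consistent with baseline, -1 = anomalous.
new_hours = np.array([
    [52_000, 118, 9],     # looks like normal traffic
    [900_000, 40, 55],    # exfiltration-like outlier
])
print(model.predict(new_hours))  # e.g., [ 1 -1]
```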
Next, before selecting tools, conduct a thorough assessment of security risks and mitigation strategies. Answer three simple questions, Clay advises:
- Where does my valuable/protected data live?
- What is protecting my data today?
- What do I need to have to protect my data tomorrow?
AI excels at automating tasks with speed and accuracy. In that light, Clay suggests starting with an AI tool for endpoint detection and response (EDR) on laptops and mobile devices. EDR is endpoint security software designed to help organizations identify, stop, and react to threats that have bypassed other defenses. “There are a number of products on the market that allow you to scale one administrator for up to 65,000 users because of the automation and capabilities,” Clay says.
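Commercial EDR internals are proprietary, but a toy sketch can convey the flavor of the detect-and-respond automation Clay describes: a rule that flags a suspicious parent-child process pair (an Office application spawning a shell, a common initial-access pattern) and emits a response action. The event fields, host names, and response are invented for illustration.

```python
# Toy illustration of EDR-style triage logic (not a real product's rules):
# flag Office applications spawning command shells and propose a response.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def triage(event: dict) -> str | None:
    parent = event.get("parent_image", "").lower()
    child = event.get("image", "").lower()
    if parent in SUSPICIOUS_PARENTS and child in SHELLS:
        return f"isolate host {event['host']}: {parent} spawned {child}"
    return None

events = [  # hypothetical process-creation telemetry
    {"host": "wks-041", "parent_image": "WINWORD.EXE", "image": "powershell.exe"},
    {"host": "wks-007", "parent_image": "explorer.exe", "image": "cmd.exe"},
]
for e in events:
    if (action := triage(e)):
        print(action)
```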
Manage the Risks
As with any new technology, implementation is key. Taking time to understand the business’s security needs and the limits of AI can set the stage for a successful deployment. Those limits, says Pearce, stem from the immaturity of the technology and of deployment practices. “Machine learning systems still suffer from the classic issues of false positives, poor software development practices, misconfigurations, and a lack of network logging.”
Returning to the all-important ingredient of comprehensive, accurate data to feed the machine learning engine, Pearce advises that security organizations invest in network logging and alerting products and allot sufficient time to learning them. Top considerations include budgeting for advanced logging systems or services and accounting for the privacy obligations that accompany collecting and responsibly managing data sets. Feeding an AI tool also means integrating data from different systems, which compounds the challenge of protecting the personally identifiable information of employees and customers. IT will need to collaborate closely with legal teams to evaluate the legal risk of using even anonymized customer data in these machine learning engines.
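One way to reduce that exposure, sketched below under assumptions of our own (the key handling and field names are hypothetical), is keyed pseudonymization: replace identifiers with HMAC digests before logs reach the machine learning engine, so records still correlate across systems while raw names and addresses stay out of the data set. Pseudonymization alone does not settle the legal questions, which is exactly why the legal review matters.

```python
# Pseudonymize identifying fields with a keyed HMAC before ML ingestion.
# The same input always maps to the same token, so correlation survives,
# but reversing it requires the secret key (stored outside the data set).
import hmac
import hashlib

SECRET_KEY = b"example-key-store-in-a-vault-and-rotate"  # hypothetical

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user": "jsmith", "src_ip": "10.1.2.3", "event": "login_failure"}
safe = {k: pseudonymize(v) if k in ("user", "src_ip") else v
        for k, v in record.items()}
print(safe)  # identifying fields replaced by stable tokens
```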
Beyond data accuracy and privacy, according to Clay, IT stakeholders should determine whether it’s possible to “tune” the algorithms without highly specialized AI expertise. “Is it akin to black magic or can I understand and adjust the logic to meet my needs?” And although difficult to calculate, don’t move forward with AI unless it has a lower total cost of ownership than the current systems and human analysis in place for security management, Clay advises.
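As a concrete reading of Clay’s “tune the logic” test, consider a tool that exposes raw anomaly scores: the alert threshold then becomes ordinary logic an analyst can inspect and adjust, trading detections against the false positives Pearce warned about. The scores and analyst labels below are invented for illustration.

```python
# Sweep an alert threshold over (invented) anomaly scores to see the
# trade-off between threats caught and false positives raised.
scores =    [0.91, 0.86, 0.72, 0.65, 0.40, 0.35, 0.20]
is_threat = [True, True, False, True, False, False, False]  # analyst labels

def alert_stats(threshold: float) -> tuple[int, int]:
    alerts = [t for s, t in zip(scores, is_threat) if s >= threshold]
    caught = sum(alerts)                  # true detections
    return caught, len(alerts) - caught  # false positives

for thr in (0.3, 0.6, 0.9):
    caught, fps = alert_stats(thr)
    print(f"threshold {thr}: {caught} threats caught, {fps} false positives")
```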
Tread Lightly into the Great Unknown
It’s easy to get caught up in the appeal of artificial intelligence, but a 360-degree view of AI that weighs the benefits against the risks will pay the biggest dividends. Remember, too, that a talented attacker could feasibly turn any AI tool on your network against your organization. So start slowly and measure the results of early efforts frequently.
In a 2018 Harvard Business Review survey, 75 percent of the 250 executives familiar with their companies’ use of cognitive technology said they believe AI will substantially transform their companies within three years. Notably, the accompanying study of 152 projects found less confidence in highly ambitious AI “moon shots” than in less ambitious projects to enhance business processes.
The study drives home the adage about not putting all your eggs in one basket, a sentiment echoed by industry experts like Pearce: “Arguably the confusion or skepticism surrounding machine-learning solutions comes from applying machine learning arbitrarily to every problem. Machine learning is just a tool, one of many tools organizations can choose from to protect and defend systems and data from breach.”
Jennifer DeTrani is General Counsel and EVP of Nisos, a technology-enabled cybersecurity firm. She co-founded a secure messaging platform, Wickr, where she served as General Counsel for five years. You can connect with Jennifer on Wickr (dtrain), LinkedIn or by email at dtrain@nisos.com.