Feature Story

AI and Security: the Arms Race

AI promises to be a key weapon in cyberwarfare. But human security teams won’t be replaced anytime soon.

Cyberattacks. They’re relentless, smarter than ever, and expensive: in more than half of incidents, the toll runs to at least $500,000, Cisco reports.

Increasingly, those threats are also highly automated, with sophisticated attacks that probe, adapt, hide and replicate on their own. That leaves IT and security teams struggling with a complexity and sheer volume of data that can be unmanageable with conventional defensive strategies.

“We’ve seen a number of attacks over the last few years where the information was there,” said Ryan Berg, chief scientist of Barkly Endpoint Security, “if only someone was just looking at it. There’s really too much information that people can’t process. That is causing alert fatigue, so we’re losing the needle in the haystack.”

Moving forward, the best defense against machines will be other machines.

“We foresee machines fighting machines,” said Steve Durbin, managing director of the Information Security Forum. “AI malware coming up against defensive AI. I do think that it’s going to change very quickly. What always takes us by surprise is the speed at which things change.”

But that doesn’t mean that human security teams are going away anytime soon.

Business leaders will need the right technology, strategies, and talent for a machine-versus-machine arms race. It’s a war that’s only going to intensify as nascent technologies like machine learning, and eventually more general artificial intelligence, play a growing role for attackers and defenders alike.

A Swarm of Shapeshifters

“Machine learning allows us to get down to what is essential for detection content,” said Luci Lagrimas of Cisco’s threat intelligence group Talos, which blocks 19.7 billion total threats daily.

Malware and botnets, for example, are able to “shapeshift,” meaning that security teams face enemies that replicate and change, sometimes from moment to moment.

“Attackers generate a hundred thousand iterations of a piece of malware every single day,” Matt Watchinsky of Cisco Talos explained. “So, each individual person may receive a unique piece of malware. It’s the same malware, but it’s been modified in some way so that it’s unique.”
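The mechanics of that iteration are easy to see in miniature. Here is a minimal Python sketch (harmless bytes only; the payload and the hash-based signature database are invented stand-ins, not any vendor’s actual detection logic) of why exact-match signatures miss a variant that differs by a single byte:

```python
import hashlib

# Toy stand-in for a malicious payload: inert bytes plus a padding region.
payload = b"SAMPLE_CORE_LOGIC" + b"\x00" * 16

# A naive signature database keyed on the full-file SHA-256 hash.
signatures = {hashlib.sha256(payload).hexdigest()}

def hash_match(sample: bytes) -> bool:
    """Signature check: flags only exact-hash matches."""
    return hashlib.sha256(sample).hexdigest() in signatures

# The attacker "shapeshifts" the sample by flipping one padding byte.
# The behavior would be unchanged, but the hash is not.
variant = payload[:-1] + b"\x01"

print(hash_match(payload))   # True: the original is caught
print(hash_match(variant))   # False: the one-byte variant slips through
```

One byte of churn per copy is enough to defeat exact-hash matching, which is why defenders have moved toward behavioral and machine-learned detection instead.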

But the good guys, too, are getting smarter.

“It’s an ever-changing world, and the threats are exponentially increasing,” said Erwin Kim of Radware. “Along with the capabilities of the [defensive] systems to self-learn. We’re talking less than ten seconds to handle a zero-day variant, or new attack, if it fits a profile for us to block it. And no human has to be involved.”
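A hedged sketch of what such profile-based auto-blocking could look like: the behavior names, the profile, and the threshold below are invented for illustration and are not Radware’s actual method. The idea is that a never-before-seen sample can be blocked with no human involved if enough of its observed behaviors match a known attack profile.

```python
# Invented attack "profile": behaviors seen in a known malware family.
RANSOMWARE_PROFILE = {
    "encrypts_user_files",
    "deletes_shadow_copies",
    "drops_ransom_note",
}

def fits_profile(behaviors: set, profile: set, threshold: float = 0.66) -> bool:
    """Block automatically if enough profile behaviors are observed."""
    overlap = len(behaviors & profile) / len(profile)
    return overlap >= threshold

# A zero-day variant: new hash, new filename, but familiar behavior.
variant_behaviors = {"encrypts_user_files", "drops_ransom_note", "spawns_cmd"}

if fits_profile(variant_behaviors, RANSOMWARE_PROFILE):
    print("blocked automatically")  # 2 of 3 profile behaviors match
```

Because the decision keys on behavior rather than on a file hash, a brand-new variant can be stopped in seconds, which is the kind of self-learning response Kim describes.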

Still, it’s important to note that these technologies are only beginning to make an impact. The real changes in security are down the road.

“I really don’t think that AI or machine learning is anywhere close to delivering on its full promise,” said Durbin. “That being said, it does have really exciting potential for things like incident detection, identifying and communicating risks, and of course that old chestnut of situational awareness.”

Turning that potential into reality is ever more important, given the sheer number of threats a typical organization faces.

“We parse hundreds of thousands of millions of things a day that we have to turn into protections,” Watchinski said. “My organization is 300 people. Nobody is looking at 90 to 95 percent of what we actually [determine] is bad. It’s all auto-classified.”
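One way to picture that kind of auto-classification is as confidence-based triage: samples the model scores decisively are handled automatically, and only the ambiguous remainder ever reaches an analyst. This is a toy Python sketch with invented scores and thresholds, not a description of Talos’s actual pipeline.

```python
def triage(samples: dict, bad_cut: float = 0.9, good_cut: float = 0.1):
    """Route each sample by model score; only ambiguous ones go to humans."""
    auto_bad, auto_good, needs_human = [], [], []
    for name, score in samples.items():
        if score >= bad_cut:
            auto_bad.append(name)       # confidently malicious: auto-blocked
        elif score <= good_cut:
            auto_good.append(name)      # confidently benign: auto-cleared
        else:
            needs_human.append(name)    # only this slice reaches analysts
    return auto_bad, auto_good, needs_human

scores = {"a.exe": 0.99, "b.dll": 0.02, "c.doc": 0.97, "d.js": 0.55}
bad, good, human = triage(scores)
print(len(human) / len(scores))  # 0.25 — most samples never reach a person
```

At real-world volumes, tuning those cutoffs is what determines whether the 5 to 10 percent that humans do see is a manageable queue or an impossible one.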

The Human Element

With machines taking over so much in security, what is the role of humans?

In short, those 300 experts on Watchinski’s team are in no danger of being replaced. And the same goes for security teams elsewhere.

“The computer is still relatively dumb,” stressed Berg. “It requires the human to introduce the context into the machine for the machine to have any understanding of what it’s doing in the first place. That piece is not going away for a long time.”

Sven Krasser, chief scientist at CrowdStrike, calls AI and machine learning “force multipliers.”

“It’s a tool that lets us lift bigger and heavier data sets,” he said. “And do things that go beyond human cognition.”

But he’s quick to add, “There’s somewhat this misunderstanding of AI as an independent agent that reasons. That’s not really what it is.”

Cybercriminals of the human variety, however, do reason.

“In the security space,” said Watchinski, “you’re up against a human being who can change the entire landscape on a dime. It’s still difficult for machines to adapt to that.”

They also know the potential of increasingly powerful machines to support their nefarious deeds.

“What is available for the good guys is also available for the bad guys,” said Durbin.

The best strategy, then, is making the right investments in human talent, augmented with technology. That could include upskilling in-house staff, adding machine learning, or relying on outside vendors. In most cases, a blend of all three will be the winning formula.

“It’s a great example of where we can use technology in conjunction with human interface,” said Durbin, “combining those two elements rather than one replacing the other. And with the kinds of skills shortages we are going to continue to see, that can only be for the better.”

A Talent Gap for the AI Age

While machines are in no rush to replace humans, they are forcing a redefinition of the skill sets needed to remain secure and competitive.

And given the talent crunch in IT, especially around emerging technologies like machine learning, it will be essential for leaders to ensure that their teams continue to grow their skills. Not just to expand their capabilities but to draw talent in the first place.

In a Cisco survey of 600 IT and business decision-makers across multiple industries and continents, 61 percent favored retraining their existing teams for tech skills. (However, the top tech skill for which they were willing to hire was AI/machine learning, at 65 percent.)

“You have to continually train, you have to continually up-skill,” Durbin said. “It’s also about transparency and this open engagement contract where both parties understand what they are looking to achieve. If we provide you with some of this exposure to machine learning, up-skilling you technically, what are you going to be able to provide back to us?”

“You have to have a plan to build that muscle memory within the organization,” Berg added.

“And if you want to have that capability, you are going to have to be enabled to develop it in house, because there is a genuine talent shortage. It’s not something that’s going to happen overnight, but it needs to be a process that’s part of your company’s DNA.”

Moreover, security can’t just be seen in terms of defense. It needs to be integrated into any company’s growth strategy, with CIOs and CISOs given a voice at the highest levels of the C-suite and the board. After all, innovation can’t thrive under a cloud of uncertainty from security threats.

“[Security] is a potential competitive advantage,” said Durbin. “Because security of course is becoming integral into business strategy and the way that we go to market, the way we protect our assets. And that can generate competitive advantage.” (Also read Durbin's column Data and Dollars: The CFO's Role in Cybersecurity.)

Bad Actors and Low-Hanging Fruit

The good news is that many cybercriminals have yet to leverage AI. But that’s partly because there’s still so much low-hanging fruit to exploit: organizations with substandard security practices, for example. Why should hackers invest in advanced technologies when many employees still click on suspicious attachments without thinking?

That, of course, could change — fast. Especially as state-sponsored cybercriminals continue to elevate their skills.

“I think at the moment,” said Durbin, “this is probably one area where the good guys could actually steal a little bit of a march mainly because the degree of sophistication that we’re talking about is really outside the scope of the majority of cyber criminals. Now if you happened to be a nation-state or you have a particularly large cybercrime gang with significant resources available then of course you’re going to go down this route.”

In any case, smart leaders will continue to prepare for the worst.

“It’s easy to be complacent,” warned Krasser, “but you need to move on, you need to adapt your technologies. Machine learning has never been as successful as it is right now. It’s very easy to get started, but it requires an investment and that investment needs to be more than just getting a couple of data scientists on the team. It needs to be part of your strategy.”

Preparing for the worst means looking past today’s technologies to the next wave, including blockchain, the evolution from machine learning to true AI, and quantum computing, which Watchinski calls “a crypto nightmare.”

“Quantum is a long way off,” adds Durbin. “But a long way off these days is probably no more than five to six years. But it’s potentially a game changer because quantum computing will break encryptions, there’s no doubt about that.”

Already, high-target industries like banking are looking toward quantum defenses for quantum attacks.

“We’re probably not going to know when somebody has come up with quantum computing,” added Durbin, “because it’s likely to be a nation-state going around cracking encryption, gathering information.”

In the meantime, security and business leaders face a perilous threat landscape, one that only promises to get worse. They must ensure that their teams have the right talent and technology while continuing to look down the road, preparing for tomorrow’s threats even as they fend off today’s emergencies.

“It’s the innovator’s dilemma,” concluded Krasser. “If you build the best tube television, that doesn’t help you when people want an LED. That’s the same there, you need to see where the ball is going to be, not where it is right now.”