In the fast-growing cybersecurity insurance market, underwriters face a uniquely complex problem: measuring or estimating the risk their policy-holders face from cybercrooks, online spies and other hackers.
The insurance industry “doesn’t have … a set of baseline tools or metrics … to quantify their customers’ risks,” Anand Paturi, vice president of security research and engineering at RiskSense, told CyberScoop. In life insurance, for instance, depending on the value of the policy, risk might be measured by reference to actuarial tables which predict life expectancy, or by a medical examination measuring a wide range of health risk factors.
“That [risk] data is how you set the price of the policy,” he explained. But the question is much more complicated in cybersecurity, Paturi argued Monday in a presentation to the National Association of Insurance Commissioners 2017 National Meeting. He gave as an example the potentially massive losses from the WannaCry and Petya outbreaks earlier this year. In both cases, the malware relied for its spread on vulnerabilities in the Windows operating system for which patches had been made available two or three months earlier.
The victims of these attacks “exposed themselves [to the attack] … by failing to patch in a timely manner,” Paturi told CyberScoop. “If any of those companies were insured, how much could they claim? How much should they be able to claim?”
A driver who is ticketed for speeding may see their rates go up, Paturi noted. “Insurers keep track of your driving habits, but how can they track your cyber hygiene?” he asked.
The solution offered by RiskSense, a company spun out of a research effort at the New Mexico Institute of Mining and Technology and incorporated in 2015, is a FICO-style score — a single numerical representation of a company’s cyber risk.
Part of what makes this a hard problem is that in enterprises with thousands of endpoints and dozens or even hundreds of software applications, the number of known vulnerabilities can rapidly climb into the thousands or even tens of thousands. Large enterprises are also likely to integrate operating systems into their own custom-built applications, meaning there’s a risk that installing a manufacturer’s patch might break something; the patch has to be tested before it can be deployed.
The end result, according to a Gartner analysis last year, is that organizations “are challenged to align the sheer volume of vulnerabilities that they identify with available remediation resources.”
Although vulnerability scans will reveal which flaws are high-severity, those generally account for at least 20 percent of the total, meaning that in a large system there could still be thousands of them. And the scans tell organizations nothing about whether an exploit has been written to take advantage of a particular vulnerability, or whether that exploit is being used “in the wild” by real hackers or cybercriminals. They also don’t account for how critical to the organization the affected asset is.
As a consequence, Gartner says, organizations may adopt approaches “focused on mitigating and patching a percentage of vulnerabilities in a given time frame, for example, ‘Remediate 90 percent of high severity vulnerabilities within 2 weeks of discovery.'”
Such approaches “only reduce risk on paper, but not in reality … [and do] little to actually prevent breaches and successful exploitation by threat actors — hackers do not care about the 90 percent of vulnerabilities that have been remediated, they focus on the 10 percent that remain,” states the Gartner study.
The authors compare it to “giving up smoking [to improve your life expectancy] just before driving off of a cliff.”
Instead, the researchers recommend a strategic approach, in which vulnerability data is supplemented with information about active threats in the wild and an understanding of how critical different assets are to the business.
A strategic approach
“Different assets represent different risks,” even when beset by the same vulnerability, explained Jimmy Graham, director of product management for vulnerability management at Qualys, Inc. The company’s ThreatProtection product sits atop its vulnerability scanner, cross-referencing the data from the scan with a series of “Realtime Threat Indicators,” or RTIs, he told CyberScoop. RTIs are true/false statements including things like “Is this vulnerability being exploited in the wild?” or “Does this vulnerability cause denial of service?”
As a customer, “Based on what’s important to you, you can pick and choose the indicators you want to prioritize,” he said. ThreatProtection also allows asset tags to be used as a filter, meaning you can exclude, for instance, inward-facing assets from a list of DoS vulnerabilities that need remediation. The end result is a list of high-severity vulnerabilities that are, for example, capable of allowing remote attacks, present on critical assets or systems and being currently exploited in the wild.
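In rough terms, that kind of filtering can be sketched in a few lines of code. The field names, severity scale and tags below are illustrative assumptions, not Qualys’s actual data model or API:

```python
from dataclasses import dataclass, field

# Hypothetical vulnerability record; fields are invented for illustration.
@dataclass
class Vuln:
    qid: str
    severity: int                 # 1 (low) .. 5 (critical)
    exploited_in_wild: bool       # RTI-style true/false indicator
    denial_of_service: bool       # another RTI-style indicator
    asset_tags: set = field(default_factory=set)

def prioritize(vulns, required_rtis, excluded_tags):
    """Keep high-severity vulns matching all chosen indicators,
    dropping any found only on excluded (e.g. inward-facing) assets."""
    result = []
    for v in vulns:
        if v.severity < 4:                                   # high severity only
            continue
        if not all(getattr(v, rti) for rti in required_rtis):
            continue
        if v.asset_tags and v.asset_tags <= excluded_tags:   # only on excluded assets
            continue
        result.append(v)
    return result

vulns = [
    Vuln("QID-1", 5, True, False, {"external"}),
    Vuln("QID-2", 5, False, True, {"internal"}),
    Vuln("QID-3", 3, True, False, {"external"}),
]
hot = prioritize(vulns, ["exploited_in_wild"], {"internal"})
print([v.qid for v in hot])  # -> ['QID-1']
```

The point of the exercise is the one Graham makes: each indicator and tag filter prunes the list, so what remains is small enough to act on.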
The effect, said Graham, is a “smaller, more focused” list.
But Qualys doesn’t rank the vulnerabilities, Graham noted, saying such rankings were illusory. “Ranked lists [of vulnerabilities] may give the user a sense of control, but in reality, the difference [in terms of the risk they represent] between number two on the list and number five on the list is probably too small to be measured and certainly too small to justify ranking them like that.”
ThreatProtection, which the company rolled out just over a year ago, is “automating the things that a mature vulnerability management program in a large enterprise was probably doing already, but manually,” said Graham. “Instead, we provide it as an automated service.”
RiskSense’s platform provides a different service, fusing information about a vulnerability’s severity with data about the business criticality of the asset where it’s found and the prevalence of exploits in the wild. The RS3 score “is essentially a prediction of the level of peril the organization is subject to, based on its vulnerabilities and all that contextual data,” Paturi explained. He compared it to a FICO score, which represents the risk that a customer will default on a loan or other form of credit, “and we use the same scale, too … from 350 to 800.”
For insurers, RiskSense provides a “multi-client dashboard” that gives a real-time view of each client’s score and enables them to see when a policy might need to be reviewed or rewritten because of a change in risk.
“We are giving you a single number not a long list of factors. … Insurers can use that,” he said.