New research from Harvard suggests that newly discovered software flaws, known as zero-day vulnerabilities, are independently rediscovered much faster than previously thought.
The rediscovery rate has big implications for U.S. cybersecurity policy because it changes the calculus officials use when deciding whether to reveal zero days discovered by U.S. agencies so they can be fixed, or keep them secret so they can be used to spy on foreign adversaries and in other cyber-operations.
“If the rediscovery rate is this high, the number of vulnerabilities [secretly retained] for operational use should be lower or subject to more aggressive scrutiny,” said Trey Herr, a post-doctoral fellow at the Belfer Center at Harvard’s Kennedy School.
Herr, along with security guru Bruce Schneier and Christopher Morris, a research assistant at Harvard's School of Engineering and Applied Sciences, published their findings this week after a lengthy peer-review process and will present them at the Black Hat USA conference in Las Vegas next week.
Because of the nature of the global software market — people and companies all over the world use the same programs — a high rediscovery rate means a greater likelihood that a vulnerability discovered and kept secret by the U.S. will be independently rediscovered by a foreign intelligence service or cybercriminals — and used against Americans before manufacturers can fix it.
The figures suggest that “up to a third of zero days found in the wild” being used by hackers might have been secretly known to U.S. agencies, Herr said — meaning they could have been fixed before they were weaponized by cyber-spies or online criminals.
Some other researchers have questioned whether those conclusions can legitimately be drawn from the data they present.
The controversy comes as White House cybersecurity czar Rob Joyce says he is reviewing the Vulnerability Equities Process — the policy structure that decides whether zero days found by U.S. agencies should be disclosed to the manufacturer or kept secret to be used in cyber-operations.
The White House did not respond to a request for comment about the new data.
The figures in the Harvard paper are several times higher than those in previous, smaller studies, and its authors hope the findings will force a rethink of the VEP.
The authors analyzed vulnerability reports from four bug-tracker databases, which record flaws found by researchers in some of the most widely used open-source software on the planet:
- Mozilla's Firefox browser, 2012-16.
- The OpenSSL code library, 2014-16.
- The Android operating system, 2015-16.
- Google's Chromium browser (the open-source version of Chrome, with which it shares most source code), 2009-16.
This last database provides more than three-quarters of the 8,696 total vulnerabilities.
The authors narrowed the data down, looking only at the most serious vulnerabilities — the 4,307 rated “high” or “critical.”
Herr told CyberScoop that was because the record-keeping tended to be better for the more severe bugs, and because it more accurately reflected the kind of vulnerabilities intelligence agencies would be looking for.
“It was amazing to us how accessible this data is,” Herr said, noting most of it is public. But, he added, “It was very messy, it took a great deal of cleaning up,” which entailed “a lot of legwork.”
Bottom line: the report says the rediscovery rate varies between 15 and 20 percent annually over the seven years studied, but climbs steadily and reached almost 20 percent last year.
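The article does not spell out how the paper measures rediscovery, but the basic idea can be sketched: count the share of bugs that were reported independently by more than one party. The records, field names and matching rule below are all hypothetical, chosen only to illustrate that idea.

```python
from datetime import date

# Hypothetical bug-tracker records: (bug_id, reporter, report_date).
# The paper's actual dataset and duplicate-matching rules are not
# described in this article; this is an illustration only.
reports = [
    ("BUG-A", "alice", date(2016, 1, 10)),
    ("BUG-A", "bob",   date(2016, 4, 2)),   # independent rediscovery
    ("BUG-B", "carol", date(2016, 2, 5)),
    ("BUG-C", "dave",  date(2016, 3, 1)),
    ("BUG-C", "erin",  date(2016, 9, 30)),  # independent rediscovery
    ("BUG-D", "frank", date(2016, 6, 15)),
]

def rediscovery_rate(reports):
    """Fraction of bugs reported independently by more than one party."""
    reporters_per_bug = {}
    for bug_id, reporter, _ in reports:
        reporters_per_bug.setdefault(bug_id, set()).add(reporter)
    rediscovered = sum(1 for r in reporters_per_bug.values() if len(r) > 1)
    return rediscovered / len(reporters_per_bug)

print(f"{rediscovery_rate(reports):.0%}")  # 2 of 4 bugs -> 50%
```

In practice, as Herr notes below, the hard part is the data cleaning: deciding which reports in a messy public tracker actually describe the same underlying flaw.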
A RAND Corp. study in March put it at under 6 percent for its sample of about 200 zero days. A 2005 study by Andy Ozment, who later became a senior cybersecurity official in the Obama administration, found an 8 percent rediscovery rate in bugs reported to Microsoft between 2002 and 2004. Other studies have similarly found the rediscovery rate to be under 10 percent.
Herr said that these studies were based on smaller, less diverse samples and often weren’t directly addressing the rediscovery issue.
“It’s a great paper,” he said of the RAND study, “But the question they ask is different. It’s ‘How long will your zero days be secret for?’ … Some biases may skew that data.”
Controversially, the Harvard paper uses its data about rediscovery to make a calculation about the proportion of zero days found in the wild that might have been secretly known to U.S. agencies.
Based on an earlier estimate by Jason Healey at Columbia University of the number of zero days maintained by the NSA, combined with the Symantec figure that there were 54 zero days found in the wild in 2015, “the rediscovery of vulnerabilities kept secret by the U.S. government for operational use could contribute anywhere from 7.5 to 33 percent of this 2015 zero day population,” the paper states.
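The arithmetic behind that range is simple to sketch: multiply an assumed stockpile size by a rediscovery rate, then divide by the 54 in-the-wild zero days. The stockpile sizes below are hypothetical inputs picked only so the output lands in the paper's quoted range; the article does not state the figures the authors actually used.

```python
# Back-of-envelope version of the paper's calculation. Only the 54
# in-the-wild zero days (Symantec, 2015) comes from the article; the
# stockpile sizes and rates below are illustrative assumptions.
IN_THE_WILD_2015 = 54

def share_of_wild(stockpile, rediscovery_rate, wild=IN_THE_WILD_2015):
    """Expected fraction of in-the-wild zero days that overlap a secret stockpile."""
    return (stockpile * rediscovery_rate) / wild

# Hypothetical low and high scenarios:
low = share_of_wild(stockpile=27, rediscovery_rate=0.15)   # ~4 rediscovered
high = share_of_wild(stockpile=90, rediscovery_rate=0.20)  # 18 rediscovered
print(f"{low:.1%} to {high:.1%}")  # 7.5% to 33.3%
```

The point of the exercise is sensitivity, not precision: the estimated overlap scales linearly with both the stockpile size and the rediscovery rate, which is why the new, higher rate moves the result so much.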
Those figures have been challenged by other researchers.
“I don’t think their data shows nearly as broad a conclusion as they would like it to,” Dave Aitel, CEO of cybersecurity firm Immunity, told CyberScoop in an email, “which is a worrying sign from an academic paper.”
In a later blog post, he said many of the vulnerabilities in the data set appeared to have been found by fuzzing — a relatively basic hacker technique.
“Analyzing this kind of data is hard,” he told CyberScoop, adding “it requires both extreme understanding of vulnerabilities, and the desire to do a lot of boring data analysis.”
On Twitter, Aitel said the study conflated bugs that were found by bounty hunters and white hat researchers with a very different dataset — “bugs [that] can be found by your adversary.”
But he acknowledged that a “perfect” answer to the real question — what zero days the U.S. has that are also in the possession of foreign intelligence services or other actors — would be basically impossible.
“Measuring what the U.S. has versus what the Chinese and Russians have would require a clearance from all three countries,” he said in his email.
Herr argued that, whatever the exact figures, the scale of the problem needed to be addressed.
“Ten more zero days a year is a serious security issue,” he said.