Written by Shaun Waterman
With Black Hat USA in full swing, Las Vegas is buzzing with questions about the government’s process for disclosing newly discovered software vulnerabilities, even as the government works to change that process.
At issue: What can fresh data on zero days tell the public about whether the U.S. government secretly retains a new software vulnerability or reveals it to the manufacturer so it can be fixed? Retained vulnerabilities can be used to spy on U.S. adversaries, but — if rediscovered by foreign spies, cybercriminals or other hackers — they could also be used to wreak havoc on systems both inside and outside the U.S.
“I’m gonna light it up,” cybersecurity researcher Katie Moussouris told CyberScoop about a planned debate on the subject.
Because of the nature of the global software market — people and companies all over the world use the same programs — a vulnerability discovered and kept secret by the U.S. may be independently rediscovered and used against Americans before manufacturers can fix it. The higher the rediscovery rate, the greater that risk.
Two recent studies have reached very different conclusions about how high the rediscovery rate is — and some believe that the data is not informative.
A bill that advanced out of the House Homeland Security Committee on Wednesday would require a report from the secretary of homeland security on the government’s use of its policy process for deciding whether zero days should be retained or reported to the manufacturer.
The report, due eight months after the Cyber Vulnerability Disclosure Reporting Act passes, would also include a description of all the disclosures made during the prior year and, if needed, a classified annex.
The controversy comes as White House cybersecurity czar Rob Joyce is also examining the Vulnerabilities Equities Process.
Proponents say the bill will help provide a definitive answer to questions about how helpful the Vulnerabilities Equities Process is — and conversely, how damaging retention might be.
The debate has swirled through the corridors of the Mandalay Bay Convention Center and is the subject of several sessions here.
RAND researcher Lily Ablon told a session Wednesday that her study and statistical analysis of a rare collection of more than 200 zero days found the rediscovery rate was under 6 percent a year.
Harvard researchers Trey Herr and Bruce Schneier say their research shows the rediscovery rate ranges from 15 to 20 percent annually and has climbed steadily over the past seven years.
Moussouris says neither study is compelling.
She says the RAND study dataset of 207 vulnerabilities is “incredibly small.”
“What broad conclusions can you draw from such a tiny sample?” she said. “It’s a case study, not a basis for policy.”
Moussouris also says the Harvard study conflates data from different types of software — and that the authors misunderstand the process of vulnerability detection and discovery.
“How many vulnerabilities are rediscovered from a single version of Chrome, which doesn’t even tell you what [the rediscovery rate] will be in a different version of Chrome … So much of the code is changed,” she said.
Ablon, Herr, Schneier and Moussouris will debate the issue Thursday during a panel at the conference.