Why reforming the Vulnerability Equities Process would be a disaster

Op-Ed: The leak of NSA exploits is not the worst that could happen — and trying to reform the policy process the U.S. government uses to decide which vulnerabilities to reveal and patch will not make things better.

When the authors of WannaCry turbo-charged their ransomware with NSA exploits leaked by the Shadow Brokers, people thought it was the Vulnerability Equities Process’ worst-case scenario. It’s really not.

The VEP is the policy process the U.S. government undertakes when one of its agencies finds a new software vulnerability. It’s how the government decides whether to tell the manufacturer about the bug, so it can patch the flaw and keep all of its customers safe, or to keep the bug secret and stealthily employ it to spy on foreign adversaries who use that software.

In the wake of the Shadow Brokers dumping several sets of highly advanced NSA hacking tools online — many using previously unknown vulnerabilities — there have been rising demands for reform of the VEP. Lawmakers have gotten in on the act, pledging to legislate the process with the Protecting Our Ability to Counter Hacking Act of 2017, or PATCH Act.

But the sudden viral spread of sloppily written ransomware — and the promise of more to come — actually isn’t the worst-case scenario for the VEP.

Remember Stuxnet? In real life, cyberweapons are composed of multiple elements often contributed by different organizations and sometimes — as in the Stuxnet case — even different countries.

(Stuxnet also illustrates the fact that exploits do get caught in the wild sometimes, but I’m not going to get into that here.)

The real worst-case VEP scenario would be if an exploit leaks that is composed of GCHQ parts, with some NSA add-ons, some CIA add-ons, and a piece that you bought from a third-party vendor under a special license. That could cost you the trust of everyone in your exploit-building ecosystem.

Most of the proposals for reform of the VEP assume you can simply browbeat those people — your third-party vendors, and U.S.-allied intelligence services like GCHQ, GCSB, etc. — into accepting that, on your whim, you can send their vulnerabilities to a vendor for patching.

This is simply not true — any more than it’s true that you could license out the Windows source code if you felt like it.

The thing is this: The exploit vendors also get a vote on these matters. And if you kill their bugs or exploit techniques, or simply have poor operational security and get caught a lot, they tend to vote by simply not selling you the good vulnerabilities.

I cannot overstate how much we need our foreign second-party partners in this space, and even more than that, how much we need our supply chain.

Not only is the signals intelligence enabled by active network attacks inescapably necessary for the safety of the country, but we are also trying to build up U.S. Cyber Command, enable law enforcement, and recover from the damage that the Snowden leaks did to our credibility.

In simple terms, yes, exploits save lives. They are not weapons, but they can be powerful tools. I have, and I cannot be more literal than this, seen it with my own eyes. You don’t have to believe me.

Ironically, in order to determine which vulnerabilities present the most risk to us, and to combat threats in cyberspace more generally, we need to hack into foreign services, which is going to require that we have even more capability in this space.

To sum up:

  • If a law forces us to send non-public vulnerabilities to vendors, we will lose our best people from the NSA, and they will go work for private industry.
  • If we cannot protect our second-party partners’ technology, they will stop giving it to us.
  • If we give bought bugs to vendors, they will stop selling them to us. And not just that one exploit vendor: once the U.S. government has a reputation for operating this way, word will get out and the entire pipeline will dry up, causing massive harm to our operational capability.
  • We need that technology because we do need to recover our capability in this space for strategic reasons.

Of course, the general rule of thumb in intelligence operations is to protect your sources and methods at all costs. And that includes your exploit vendors.

I know there are those who argue that you can do network intrusion operations entirely without zero-days — but the truth is much more complex. The operational capacity we get from zero-days cannot simply be replaced by only using exploits which have patches available.

Nor is it tenable to use exploits “just for a little while” and then give them to a vendor. This simply creates an information path from our most sensitive operations to our adversaries, giving away attribution, potentially destroying relationships, and allowing our targets to find and remove our sensitive implants.

But there are better proposals available than reforming the VEP. One idea is simply to fund a bug bounty out of the Commerce Department for things we find strategic — that is, not just for software vulnerabilities (which is something vendors like Microsoft and Apple should fund) but explicitly for the exploits and toolkits other countries are using against us.

Likewise, U.S. agencies should be more open when they know their exploits have been caught. Rather than surrendering them on the front end, we should be better prepared to clean up on the back end.

For instance: having custom-built mitigation expertise available ahead of time for big corporations can limit the damage of a leak or of an exploit getting caught — albeit at the cost of attribution, of letting everyone know it was us. This expertise should probably include writing and distributing third-party patches, IDS signatures, and implant removal tools.
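
To make the clean-up idea concrete, here is a minimal sketch of one piece of that tooling: a hash-based scanner in the spirit of an implant removal tool. Everything in it (the hash list, the scan root, the removal behavior) is a hypothetical placeholder rather than a real indicator set or an actual government tool.

```python
# Hypothetical sketch of a hash-based "implant removal" scanner. The hash list,
# paths, and behavior are illustrative placeholders only; a real cleanup tool
# would ship vetted indicators and far more careful removal logic.
import hashlib
from pathlib import Path

# Placeholder indicator set: SHA-256 hashes of leaked tool components.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # stand-in value; a real advisory would publish actual hashes
}

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path, remove: bool = False) -> list[Path]:
    """Walk a directory tree, report files whose hash matches the indicator
    set, and optionally delete them."""
    hits: list[Path] = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha256_of(path) in KNOWN_BAD_SHA256:
                hits.append(path)
                if remove:
                    path.unlink()
        except OSError:
            continue  # unreadable files are skipped rather than aborting the scan
    return hits

if __name__ == "__main__":
    for hit in scan(Path("/tmp/triage"), remove=False):
        print(f"possible implant component: {hit}")
```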

Taking a look at every individual vulnerability can’t help but overwhelm any process we build. A smarter process would develop large categories of vulnerabilities and engage the teams to think about them at that level. This avoids having detailed information on any particular vulnerability sent to a large number of people — which is just asking for another Hal Martin or Snowden event. For example, we know the NSA is going to use “browser client-sides” over and over, and needs basically as many as it can find or buy; so what policy follows from that? Those are the real strategic policies we need to put in place, and they have nothing to do with individual bugs.
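
To illustrate what category-level bookkeeping might look like, here is a minimal sketch that records standing decisions per class of vulnerability rather than per bug. The class names, fields, and example decisions are invented for illustration; they are not an actual VEP schema or real policy.

```python
# Illustrative sketch of category-level equities records: standing decisions
# are made per class of vulnerability, so details of individual bugs never
# need to circulate. All names and values are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class VulnCategory(Enum):
    BROWSER_CLIENT_SIDE = auto()
    FILE_FORMAT_PARSER = auto()
    NETWORK_SERVICE_REMOTE = auto()
    PRIVILEGE_ESCALATION = auto()

@dataclass(frozen=True)
class CategoryPolicy:
    category: VulnCategory
    retain_by_default: bool    # standing decision for the whole class
    review_interval_days: int  # how often the standing decision is revisited
    rationale: str             # one-line justification; no per-bug detail required

POLICIES = [
    CategoryPolicy(VulnCategory.BROWSER_CLIENT_SIDE, True, 180,
                   "High, recurring operational demand for this class."),
    CategoryPolicy(VulnCategory.NETWORK_SERVICE_REMOTE, False, 90,
                   "Broad exposure of domestic infrastructure to the same class."),
]

def standing_decision(category: VulnCategory) -> Optional[bool]:
    """Return the class-level retain/disclose default, or None if the class
    has no standing policy and needs a one-off review."""
    for policy in POLICIES:
        if policy.category == category:
            return policy.retain_by_default
    return None

if __name__ == "__main__":
    print(standing_decision(VulnCategory.BROWSER_CLIENT_SIDE))   # True
    print(standing_decision(VulnCategory.PRIVILEGE_ESCALATION))  # None
```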

And having sensors on as many networks as possible can help discover which of your cyberweapons might have been caught or stolen.

One important, unintended consequence of closing off our exploit pipeline is that we may instead be forced into wholesale outsourcing of the operations themselves — something I think we should be careful about.

Finally, before we codify the VEP into any sort of law, we should look for similar efforts from Russia and China to materialize out of the norms process, something we have not seen even a glimmer of yet.

Dave Aitel is the CEO of Immunity Inc., and the organizer of Infiltrate — an annual security conference focused solely on offense. A former NSA ‘security scientist’ and a past contractor on DARPA’s Cyber Fast Track program, he is also a member of the U.S. Department of Commerce’s Information Systems Technical Advisory Committee.
