New code-validation project tries to spot the next industrial supply chain attack

A new DHS-funded project traces the provenance of software code.

A few years ago, Eric Byres, a veteran cybersecurity executive, was studying the aftermath of a clever attack on the supply chain.

A Russian hacking group known as Dragonfly had in 2013 and 2014 breached the websites of three vendors of software that supported industrial control systems (ICS). The attackers slipped malicious software into legitimate updates hosted on those websites.

The planted malware did not disrupt companies’ critical operations, but Byres was troubled by the notion that outsiders could pull this off at all. The attack made it clear to him that many of the companies he had worked with lacked an effective way of verifying whether the software they were running was legitimate and worthy of their trust.

The problem is that just comparing digital hashes isn’t necessarily enough to mark software as trusted. A hash, as Byres put it, is “a binary answer to a non-binary problem.” A hash either passes or fails, but the task of validating critical software can be more complex.
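To illustrate the point, here is a minimal sketch (not from the article) of a plain hash check; the file name and published digest below are placeholders. The comparison can only return pass or fail, which is exactly the limitation Byres describes.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file on disk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in value for illustration; a real check would use the digest the
# vendor publishes alongside the download.
PUBLISHED_DIGEST = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

# The comparison says only "matches" or "does not match" -- nothing about
# where the file came from, what it bundles, or whether the build was
# already compromised before the hash was published.
print("pass" if sha256_of("installer.exe") == PUBLISHED_DIGEST else "fail")
```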

Two years after the Dragonfly campaign, Byres came out of retirement to take another stab at securing the software supply chain. He has assembled a sprawling database for dissecting and validating code. To spot a compromise in the supply chain, the rationale goes, one needs to examine digital bits at a granular level.

The goal of his new project is to help companies defend themselves from hackers who burrow into a partner’s network and then use that access as a foothold to infect other targets.

“We’re not trying to find zero-days; we’re not trying to do vulnerability analysis,” Byres told CyberScoop. “All we’re trying to do is build the biggest spider web of information on any given binary and its components,” he said, his voice growing fervent.

The effort also will assess the fidelity of firmware – embedded software in often-critical systems – an issue that industry insiders say needs addressing.

With an $800,000 grant from the Department of Homeland Security, and a $500,000 angel investment round, Byres’ Dallas-based company, aDolus, has the database up and running. It uses a white-listing tool made by WhiteScope, and receives a feed from OSIsoft, whose software-building process provides a trove of data. Incident responders can plug into the database to check suspicious digital files they find in the field.

A vendor runs the aDolus program against their software, sending files to the cloud for analysis. Each software component is checked against a list of known vulnerabilities. The software ultimately receives a confidence score measuring its dependability, ranging from zero (utterly untrustworthy) to 10 (highly trusted).
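The aDolus scoring model itself is not public; the sketch below is only a hypothetical illustration of the workflow described above, with invented component and vulnerability data and an arbitrary penalty per flagged CVE.

```python
# Hypothetical sketch: inventory the components in a package, check each
# against a list of known CVEs, and roll the results up into a 0-10 score.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    vendor: str
    name: str
    version: str

# Invented vulnerability data; a real feed would come from a source such
# as the National Vulnerability Database.
KNOWN_VULNS = {
    Component("ExampleCorp", "netlib", "1.2"): ["CVE-2018-0001", "CVE-2018-0002"],
}

def confidence_score(components: list[Component]) -> float:
    """Start at 10 and subtract an arbitrary penalty for every flagged CVE."""
    score = 10.0
    for component in components:
        score -= 2.0 * len(KNOWN_VULNS.get(component, []))
    return max(0.0, round(score, 1))

package = [
    Component("BigVendor", "hmi-runtime", "4.0"),
    Component("ExampleCorp", "netlib", "1.2"),  # bundled third-party library
]
print(confidence_score(package))  # 6.0 -- dragged down by the bundled library
```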

It’s up to the owner of the software or firmware to decide how much risk they want to accept at their organization.

“If it’s a safety system, it probably should be a nine,” Byres said. “If it’s some minor lab system, maybe five is good enough.”

When Byres demonstrated the program on his laptop for CyberScoop, he pulled up a software package released by industrial giant ABB. It got a 0.7, a “rotten” score, as Byres put it.

He then scrolled to the vulnerabilities that were flagged in the package, and pulled up information from the Common Vulnerabilities and Exposures database on each. But the name “ABB” was nowhere in those CVE alerts because the vulnerabilities were in a subcomponent of the software made by a different vendor. An asset owner checking the software for known ABB vulnerabilities wouldn’t have found these flaws.
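As a hypothetical illustration of that gap (the vendor names, products and CVE identifiers below are invented), a lookup keyed only on the package’s top-level vendor comes back empty, while a per-component lookup surfaces the inherited flaws.

```python
# Invented advisory data for illustration only.
ADVISORIES = [
    {"cve": "CVE-2017-1234", "vendor": "ThirdPartyCo", "product": "commlib"},
    {"cve": "CVE-2017-5678", "vendor": "ThirdPartyCo", "product": "commlib"},
]

PACKAGE = {
    "vendor": "ExampleVendor",
    "components": [
        {"vendor": "ThirdPartyCo", "product": "commlib"},  # bundled subcomponent
    ],
}

# Naive check: search advisories by the package vendor's name only.
by_vendor = [a for a in ADVISORIES if a["vendor"] == PACKAGE["vendor"]]
print(by_vendor)  # [] -- nothing found under the top-level vendor

# Checking each bundled component instead reveals the inherited CVEs.
by_component = [
    advisory
    for component in PACKAGE["components"]
    for advisory in ADVISORIES
    if advisory["vendor"] == component["vendor"]
    and advisory["product"] == component["product"]
]
print([a["cve"] for a in by_component])  # ['CVE-2017-1234', 'CVE-2017-5678']
```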

“This component has basically contaminated a perfectly good package,” Byres said, almost ruefully.

Expanding and strengthening the circle of trust

One reason that verifying the provenance of critical software and firmware is top of mind for cybersecurity professionals is the insidious nature of a few high-profile attacks on bundles of code.

Before the Dragonfly campaign, there was Stuxnet, the advanced computer worm that disrupted a uranium enrichment facility in Iran in 2009. The hackers behind Stuxnet stole digital certificates used to sign computer drivers. Those authentic certificates tricked Windows operating systems, allowing the malicious code to load.

Nearly a decade later, that attack vector is by no means outdated. Last October, researchers revealed that suspected Russian hackers targeting energy companies in Poland and Ukraine had stolen a digital certificate from a manufacturer of industrial automation equipment.

If the keys used to sign software are stolen, it can be very difficult to determine whether a given piece of signed software is legitimate.

“If you don’t have the infrastructure to protect your keys, maybe you should try to seek out someone who has that protective infrastructure,” WhiteScope founder Billy Rios said Wednesday at the S4 Conference in Miami Beach.

In a way, the aDolus project is trying to be that digital gatekeeper, enlisting as many vendors, asset owners and security consultants as feasible. Byres said he is working on partnerships with two big suppliers of industrial equipment, along with a major oil and gas company.

Byres also wants to determine if the digital chain of custody of a certificate is authentic. “If I can’t trust people to manage their keys, I’ll manage them for them,” he said.

The ambitious scope of the project has others from the industrial cybersecurity community coming forward to help.

“A lot of people in this field are professional engineers,” Bryan Owen, OSIsoft’s principal cybersecurity manager, said, describing why he got involved in the aDolus project.

Like doctors who take the Hippocratic Oath, engineers are “here to make society a better place,” Owen said. “And the only way we’re going to do a lot of what we need to do to [secure] critical infrastructure, is if we find a way to work together.”

Written by Sean Lyngaas

Sean Lyngaas is CyberScoop’s Senior Reporter covering the Department of Homeland Security and Congress. He was previously a freelance journalist in West Africa, where he covered everything from a presidential election in Ghana to military mutinies in Ivory Coast for The New York Times. Lyngaas’ reporting also has appeared in The Washington Post, The Economist and the BBC, among other outlets. His investigation of cybersecurity issues in the nuclear sector, backed by a grant from the Pulitzer Center on Crisis Reporting, won plaudits from industrial security experts. He was previously a reporter with Federal Computer Week and, before that, with Smart Grid Today. Sean earned a B.A. in public policy from Duke University and an M.A. in International Relations from The Fletcher School of Law and Diplomacy at Tufts University.