The National Institute of Standards and Technology’s canonical systems security engineering guide, SP 800-160, provides a catalog of systems and procedures that developers can use to build secure IT networks from the ground up.
The guide’s second volume, published in a draft version Wednesday, shows developers how to use those procedures to shore up the security of legacy IT systems in order to limit the access hackers have if they do manage to break in.
Ron Ross, NIST fellow and one of the agency’s cybersecurity experts, told CyberScoop it’s a needed corrective.
“We’ve been too focused on penetration resistance, hardening the systems, trying to keep the bad guys out,” he said. “The problem is, with the incredibly complex IT systems we have today, there will always be an [effectively] unlimited supply of vulnerabilities that we can’t know about.”
Nation-state hackers are sophisticated and persistent, Ross said: “The empirical data shows that you can’t always stop them getting in.”
Volume two focuses on cyber resilience engineering, which it defines as having the following four characteristics:
Focus on the mission: “Maximiz[ing] the ability of organizations to complete critical or essential missions or business functions despite an adversary presence in their systems and infrastructure.”
Focus on the adversary: “These guys are high end and well resourced,” said Ross. “You have to understand how they operate.”
Assume compromise: “A fundamental assumption of cyber resiliency.” No matter “the quality of the system design, the functional effectiveness of the security components, and the trustworthiness of the selected components,” a determined and skilled adversary will get in.
Assume persistence: “The stealthy nature of the APT makes it difficult for an organization to be certain that the threat has been eradicated.”
Volume two also includes elements that “can be employed at any stage of the system life cycle,” not just when the system is being built.
“We had to address the question of what can we do today to secure the legacy systems we have,” said Ross.
The guide is an informational publication, part of a growing library of best practices that NIST’s computer scientists provide for the public and private sector.
Three-dimensional chess
Ross says the key to resiliency is that it “allows you to think [about securing IT systems] in multiple dimensions. The first dimension is that penetration resistance. We need to keep hardening our perimeter.”
But “the second dimension is what happens after they get in,” said Ross. “How do we limit their access, limit the damage they can do or the data they can steal, limit their time on target.”
One way to do that in a modern, virtualized IT environment, he explained, is to spin up new virtual machines on a regular basis. Because the VMs are generated from a secure, hardened disk image, any malware that’s been planted on the machines will be “flushed out of the system.”
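The refresh technique Ross describes can be sketched in a few lines. This is a purely illustrative toy model, not code from the NIST guide: the `Hypervisor` class, the image name, and the VM names are all invented stand-ins for a real virtualization API such as libvirt or vSphere.

```python
# Illustrative sketch of "non-persistence": VMs are periodically destroyed
# and re-created from a hardened golden image, so any malware implanted at
# runtime is discarded along with the old VM. All names here are hypothetical.

GOLDEN_IMAGE = "hardened-base.qcow2"  # assumed trusted, signed disk image

class Hypervisor:
    """Toy stand-in for a real virtualization API."""
    def __init__(self):
        self.vms = {}

    def create_vm(self, name, image):
        # A new VM's state comes only from the trusted image.
        self.vms[name] = {"image": image, "runtime_state": []}

    def destroy_vm(self, name):
        # Runtime state -- including any implanted malware -- is discarded.
        self.vms.pop(name, None)

def refresh(hv, name):
    """Replace a running VM with a pristine copy built from the golden image."""
    hv.destroy_vm(name)
    hv.create_vm(name, GOLDEN_IMAGE)

hv = Hypervisor()
hv.create_vm("web-01", GOLDEN_IMAGE)
hv.vms["web-01"]["runtime_state"].append("attacker-implant")  # simulated compromise

refresh(hv, "web-01")
print(hv.vms["web-01"]["runtime_state"])  # the implant is gone: []
```

In a real deployment the refresh would be driven by a scheduler, and the golden image itself would be patched and re-hardened on its own cadence.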
Another element of the second dimension is “limiting [the adversary’s] ability to move within the system,” escalate their privileges and gain access to the network’s secure areas.
“There are many architectural and design decisions that can be used to protect your crown jewels,” said Ross, citing measures such as domain separation and network segmentation.
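The segmentation idea Ross cites can be modeled as a default-deny policy matrix between network segments. The segment names, ports, and rules below are invented for illustration and are not drawn from the guide:

```python
# Illustrative model of network segmentation: a flow between segments is
# permitted only if it appears in an explicit whitelist (default deny).
# Segment names and port numbers are hypothetical examples.

ALLOWED_FLOWS = {
    ("dmz", "app"): {443},   # web tier may reach the app tier over HTTPS
    ("app", "db"): {5432},   # app tier may reach the database
    # note: no rule lets the DMZ reach the database directly
}

def is_allowed(src_segment, dst_segment, port):
    """Default deny: a flow is permitted only if explicitly whitelisted."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

print(is_allowed("app", "db", 5432))   # True: a sanctioned flow
print(is_allowed("dmz", "db", 5432))   # False: direct lateral movement blocked
```

The point of the design is the one Ross makes: an adversary who compromises the outer segment still cannot reach the “crown jewels” without crossing additional, separately enforced boundaries.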
The third dimension is survivability. “Can your system work, can your organization carry out its mission, even with the adversary inside your networks? I call it ‘limping over the goal line,’” said Ross.
Security in context
The 158-page document is essentially a catalog of concepts, techniques and design principles that can be used to improve cyber resiliency, but the guide notes that it’s “not appropriate for every organization, application, or system.”
“Not every method will be right for every organization and for every system. [The guide] is designed to help you pick the options that help you the most,” Ross said.
He said the update is the first of three planned additional guides complementing the original SP 800-160.
Volume one “was going to be a standalone publication,” he explained, “but there was too much material, so we decided to break out companion volumes.”
The next two volumes will be “deep dives” on hardware and software assurance. “There are unique elements to each that need to be covered,” he said.
The new guide is aimed at engineers, software developers and others involved in the design and development of IT systems, but volume two also has a second audience in mind, said Ross.
“That audience is what I call ‘enterprise practitioners,’” he explained, adding that these were the CIOs and other professionals responsible for running and managing the installed base of already-functioning IT systems in an organization — as opposed to those building new ones.
He said the guide’s risk management principles could be interpreted differently. “To an engineer, risk means ‘project risk’ — what might go wrong with the development, will it be ready on time, will it be able to do what it needs to do … For an enterprise professional, risk is about whether the organization can function. It’s mission risk.”
The plan to address that second audience garnered pushback in certain quarters, he said: “There was some controversy over that … people said, ‘Oh that’s different, we can’t cover that.’”
But, “You have to have that second part,” he said, “the installed base [of software] is there, the legacy systems are there, they’re not going anywhere. Even the systems being built now will become operational at some point. After all the testing and development, they’ll be put to work.”
At that point, he explained, “However good a job [the developers] have done, the risk is transferred over to the operational side. It has to be managed there as well.”