‘Good, fast, cheap... Pick two’: Software quality dilemma forces risky decisions

One of the prevailing proverbs of application development is the so-called iron triangle — that when developing software you've got three options: good, fast, and cheap. But you can only pick two. Good can have varying definitions, but for most it's a solid stand-in for "quality," of which software security is an increasingly important subset.

Resilient Cyber’s Chris Hughes recently wrote a sweeping think piece on the security implications of the iron triangle. Hughes covers a lot of ground, but the most consistent theme is the frustration many in the security industry face in standardizing how to measure what it means when an organization chooses secure quality as one of its two choices.

“What I was ranting about in this paper is that we all want to say security is a subset of quality and we want it to be considered in the broader discussion of having a quality product. The problem is, well, then what metrics do we use to define something as secure? Everyone has a different answer.”
—Chris Hughes

Further complicating the matter is that even when an organization does decide which security metrics and benchmarks define "good" for it, it often has no way of holding that yardstick up to software it does not control, such as commercial packages.

One big question Hughes didn’t touch on in his latest piece, however, gets to the bottom of the black box that is commercial software. Specifically, how does an organization know which two priorities its software suppliers have picked? And in the era of continuous integration/continuous delivery (CI/CD), which two priorities are those suppliers picking day in and day out?

Here's what your team needs to know about the reality of software quality decisions in the iron triangle, and how that affects your organization's risk profile.

[ See Special Report: How to Manage Commercial & Third-Party Software Risk ]

Commercial software and the reality of software quality decisions

This question is fundamental to so many application security (AppSec), asset management, vulnerability management, and exposure management issues today. Whether that supplier is a commercial software vendor, an open source project from which crucial components are being used, a SaaS provider, or a cloud platform provider — its ongoing priorities for good, fast, or cheap ultimately determine your organization’s risk levels, said Josh Knox, senior cybersecurity technologist for ReversingLabs.

“I think that’s one of the big things — a lot of that prioritization is very hidden from you. Managers and developers come and go from your suppliers. You may have no idea. You’ll have no idea that that amazing software manager that brought the company from startup to where it's at today has left and now nobody knows how to do X, or that all the other developers are slowly leaving one by one.”
Josh Knox

As a software user, your organization will never get suppliers to spill that kind of tea about the business happenings that affect their quality. You can’t check up on your suppliers’ iron triangle priorities just by asking — though some companies try with self-assessment surveys and requests for SBOMs. The proof, however, is in the pudding, and that takes vigilant, comprehensive software supply chain security (SSCS) tools and full transparency.
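To see why an SBOM request alone doesn't settle the question, here is a minimal sketch, in Python, of the kind of first-pass review a vendor-supplied SBOM enables. It assumes a CycloneDX-format JSON file; the file name and watch list below are hypothetical placeholders, not part of any vendor's actual deliverable:

```python
import json

# Hypothetical file name; any CycloneDX 1.x JSON SBOM supplied by a vendor works the same way.
SBOM_PATH = "vendor-product.cdx.json"

# Illustrative internal watch list: package names your organization has flagged for review.
WATCHED_PACKAGES = {"log4j-core", "openssl", "xz"}

def list_components(sbom_path: str) -> None:
    """Print every component declared in the SBOM and flag watched packages."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)

    for component in sbom.get("components", []):
        name = component.get("name", "<unnamed>")
        version = component.get("version", "<no version>")
        flag = "  <-- on watch list" if name in WATCHED_PACKAGES else ""
        print(f"{name} {version}{flag}")

if __name__ == "__main__":
    list_components(SBOM_PATH)
```

A declared inventory like this only reflects what the supplier chose to report, which is why independent validation of the delivered software itself still matters.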

RL's Knox said that with software security, you need to treat trust as if you were hiring a restaurant to regularly cater for your business. You wouldn't just trust reviews from a year ago that say the food and service are good, he said.

“It’s incumbent upon you to do your own vetting of your software regularly to make sure that it is all right. Bottom line, that's all there is to it. You can't just read the review. You're going to have to go check it out. Taste the food and try the service.”
—Josh Knox

You’re also going to keep checking that restaurant’s level of quality along the way if the catering gig is regular, Knox said. This continuous, or at the very least rigorously regular, validation of software risk is an important piece here.

“I think that that's one of the things that people have to consider: That yes, your software may be rock solid today or may have been a year ago, but you really don't know what internally is happening at that company recently."
—Josh Knox 

Why what you test, and how, matters

Regular SSCS testing and validation are important, but the next logical question is: "What and how are we testing?"

That's the topic that Resilient Cyber's Hughes was digging into in his post. The industry is already starting to realize that a full slate of tests, including things like binary analysis, is needed to truly understand what software looks like as it’s deployed. And it is acknowledging that risk factors important to one company or industry may not matter to others. This is why testing needs to be contextualized within the framework of an organization’s risk tolerance and business priorities, as well as the threat landscape, Hughes said.

“Understanding the binaries and the dependencies of what’s there, the vulnerabilities, and so on are all important. But then we should also know things like: Is it likely to be exploited? Is it reachable in the product? What is the business criticality of the asset, what compensating controls do you have in place, the sensitivity of data on the asset or data the asset touches, and so on.”
—Chris Hughes
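To make that contextualization concrete, here is a minimal Python sketch of a risk score that scales a raw severity rating by the kinds of factors Hughes lists. The field names, weights, and formula are illustrative assumptions, not Hughes's methodology or any standard scoring system:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability finding plus the business context around it."""
    cvss_base: float              # 0-10 severity from the scanner
    likely_exploited: bool        # e.g., known exploitation in the wild or public exploit code
    reachable: bool               # the vulnerable code path is reachable in the deployed product
    asset_criticality: float      # 0-1, how important the asset is to the business
    data_sensitivity: float       # 0-1, sensitivity of data the asset touches
    compensating_controls: float  # 0-1, how much existing controls reduce exposure

def contextual_risk(f: Finding) -> float:
    """Scale raw severity by exploitability, reachability, and business context."""
    score = f.cvss_base / 10.0
    score *= 1.5 if f.likely_exploited else 1.0     # boost findings likely to be exploited
    score *= 1.0 if f.reachable else 0.3            # discount unreachable code paths
    score *= 0.5 + 0.5 * f.asset_criticality        # weight by business criticality
    score *= 0.5 + 0.5 * f.data_sensitivity         # weight by data sensitivity
    score *= 1.0 - 0.5 * f.compensating_controls    # credit compensating controls
    return round(min(score, 1.0) * 100, 1)          # 0-100 prioritization score

# Example: a high-severity flaw that is not reachable and sits behind strong controls
example = Finding(cvss_base=9.8, likely_exploited=True, reachable=False,
                  asset_criticality=0.9, data_sensitivity=0.7,
                  compensating_controls=0.8)
print(contextual_risk(example))
```

In this illustration, the 9.8-severity finding drops to a modest prioritization score because the vulnerable path isn't reachable and compensating controls are strong, which is exactly the kind of context a raw severity number by itself can't convey.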

Picking and choosing the factors to track is not an exact science. No set of benchmarks is going to get it exactly right, and in some cases the measures may even be a proxy for the general security posture of software suppliers. This is much of the basis for the third-party risk management (TPRM) space, which often measures a range of vendors whose digital infrastructure a company has no access to or solid visibility into. When it comes to software supply chain security, available tooling gets deeper than that, even if the picture is not perfect.

The reality check: Commercial software quality is not transparent

There’s never going to be a perfect standard for determining what "good" software quality looks like. But the more measures and risk factors an organization uses, the more easily it can put together a picture of software risk that can be used to hold vendors' feet to the fire, and to inform the prioritization of mitigations when the organization simply doesn’t have the leverage to move a vendor to action.

When there is leverage, security testing results from modern SSCS tools can and should be used in the vendor management process, RL's Knox said.

“You do have to hold them over a barrel sometimes and be like, ‘You want me to renew? You better tell me what you're doing about X, Y, or Z.’”
—Josh Knox 

The other thing you have to do is mitigate. Sometimes you are simply stuck with a piece of software. In that case, at the very least, you should be able to examine it and then mitigate what you find.

"So maybe that software does have a flaw in it, maybe it does have a vulnerability, but if I know what it is and how it gets exploited, then I can create mitigating controls around it to minimize the chance of it actually happening.”
