Today marks the first post in a four-part series exploring the Tao of Data Security.
In 1996, The New Yorker published an article by Malcolm Gladwell entitled "Blowup."
The premise? “In the technological age, there is a ritual to disaster. When planes crash or chemical plants explode, each piece of physical evidence - of twisted metal or fractured concrete - becomes a kind of fetish object, painstakingly located, mapped, tagged, and analyzed, with findings submitted to boards of inquiry that then probe and interview and soberly draw conclusions.”
We see the same ritual in cyber security today, with new vulnerabilities disclosed daily and researchers constantly posting new potential problems.
Consider a recent post by RSnake, a well-known hacker and pen tester who wrote the definitive book on XSS:
The moral of the story is that about a year ago I reached an inflection point in my career of 15 years in security. I realized that with every major innovation the security community comes up with, the general public and vendors alike figure out a way to abuse that innovation or work around it to do what they originally wanted to do again (think firewalls and tunneling over port 80). It feels like we’ve been battling to protect people, but the people don’t want to be protected if it means changing. They’re happy with the status quo. Of course, there’s always fear of the unknown, and fear of insecurity is a key driver of spending (think anti-virus). One thing’s for sure though, you can’t change the nature of the hill-billies, so why are we trying? Our only path to success is empowering people to do what they want, without getting in the way. The words “No” and “Can’t” have to leave our vocabulary when it comes to what consumers and developers and companies want to do. Now, the trick is: how do we build security that no one notices is there?
The Gladwell article also goes on to highlight the concept of “risk homeostasis,” which explains RSnake’s sentiments:
…under certain circumstances, changes that appear to make a system or an organization safer in fact don't. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another. Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (A.B.S.), the recent technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups - which were otherwise perfectly matched - were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers A.B.S. made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn't merge as well, and they were involved in more near-misses. In other words, the A.B.S. systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they "consumed" the risk reduction, they didn't save it.
Perhaps RSnake, and the cyber security community at large, are experiencing the effects of risk homeostasis rather than hillbilly stubbornness. As an industry, we need to focus on cyber security approaches that protect people “without getting in the way.” To RSnake's point, we need to develop technology that "buys" risk reduction without changing consumer, developer, or enterprise behavior.
There may be an interesting example of this in the appsec world. Consider the choice posed by PCI: go with code review or a web application firewall (WAF). A common argument is that code reviews and secure SDLCs are a must—take the risk out before it gets in. But, to RSnake’s point, code scanning forces developers to change their code and the way they work, dramatically increasing the cost of deployment. By contrast, a WAF "saves" the risk reduction by applying a control that is unseen by application users (i.e., consumers or employees) and avoids the hurdle of developer adoption.
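To make the contrast concrete, here is a minimal, purely illustrative sketch (not any real WAF product) of the architectural idea: a filtering layer wrapped around an unmodified application. The `ToyWAFMiddleware` class, the signature patterns, and `hello_app` are all invented for illustration; the point is that the application code below the filter never changes, so neither developers nor end users see the control.

```python
import re

# Toy attack signatures, invented for illustration only.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"<script\b", r"union\s+select")
]

class ToyWAFMiddleware:
    """Wraps any WSGI app and rejects requests whose query string
    matches a known signature; everything else passes through untouched."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(p.search(query) for p in BLOCKED_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)

# The application itself is unmodified -- the filtering layer is
# invisible to it, which is the "no developer adoption" property.
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]

guarded = ToyWAFMiddleware(hello_app)
```

A benign request reaches `hello_app` unchanged, while a request carrying a matching signature is blocked before the application ever sees it. Real WAFs are far more sophisticated, but the deployment shape is the same: risk reduction added in front of, not inside, the code.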