10 posts from August 2010
August 31, 2010
 Risk Homeostasis and RSnake

Today marks the first post in a series of four discussions exploring the Tao of Data Security.

In 1996, The New Yorker published an article by Malcolm Gladwell entitled Blowup.

The premise?  “In the technological age, there is a ritual to disaster. When planes crash or chemical plants explode, each piece of physical evidence - of twisted metal or fractured concrete - becomes a kind of fetish object, painstakingly located, mapped, tagged, and analyzed, with findings submitted to boards of inquiry that then probe and interview and soberly draw conclusions.”

We see the same ritual in cyber security today, with new vulnerabilities disclosed daily and researchers constantly posting new potential problems.

Consider a recent post by RSnake, a well-known hacker and pen tester who wrote the definitive book on XSS:

The moral of the story is that about a year ago I reached an inflection point in my career of 15 years in security. I realized that with every major innovation the security community comes up with, the general public and vendors alike figure out a way to abuse that innovation or work around it to do what they originally wanted to do again (think firewalls and tunneling over port 80). It feels like we’ve been battling to protect people, but the people don’t want to be protected if it means changing. They’re happy with the status quo. Of course, there’s always fear of the unknown, and fear of insecurity is a key driver of spending (think anti-virus). One thing’s for sure though, you can’t change the nature of the hill-billies, so why are we trying? Our only path to success is empowering people to do what they want, without getting in the way. The words “No” and “Can’t” have to leave our vocabulary when it comes to what consumers and developers and companies want to do. Now, the trick is: how do we build security that no one notices is there?

The Gladwell article also goes on to highlight the concept of “risk homeostasis,” which explains RSnake’s sentiments:

…under certain circumstances, changes that appear to make a system or an organization safer in fact don't. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.  Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (A.B.S.), the recent technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups - which were otherwise perfectly matched - were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers A.B.S. made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn't merge as well, and they were involved in more near-misses. In other words, the A.B.S. systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they "consumed" the risk reduction, they didn't save it.

Perhaps RSnake, and the cyber security community, are experiencing the effects of risk homeostasis rather than battling hillbillies. As an industry, we need to focus on cyber security approaches that protect people “without getting in the way.”  To RSnake's point, we need to develop technology that "buys" risk reduction without changing consumer, developer or enterprise behavior.

There may be an interesting example of this in the appsec world.  There’s a choice posed by PCI: go with code review or a web application firewall (WAF).  A common argument is that code reviews and SDLCs are a must: take the risk out before it gets in.  But, to RSnake’s point, code scanning forces developers to change their code and the way they work, dramatically increasing the cost of deployment.  By contrast, a WAF "saves" the risk reduction by applying a control that is unseen by application users (i.e., consumers or employees) and avoids the hurdle of developer adoption.
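To make the WAF idea concrete, here is a minimal sketch (not any vendor's actual product) written as Python WSGI middleware: requests matching a naive signature are rejected before they ever reach the application, with no change to application code. The two regex signatures are purely illustrative; real WAFs use large, tuned rule sets.

```python
import re

# Naive, illustrative attack signatures -- stand-ins for a real rule set.
SIGNATURES = [
    re.compile(r"<script", re.IGNORECASE),                   # crude XSS probe
    re.compile(r"\bunion\b.+\bselect\b", re.IGNORECASE),     # crude SQLi probe
]

class NaiveWAF:
    """WSGI middleware that inspects the query string and blocks
    suspicious requests. The wrapped application is untouched,
    which is the whole point of the WAF deployment model."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(sig.search(query) for sig in SIGNATURES):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)
```

The developer's code never changes; the filter wraps it from the outside, which is exactly the "unseen by application users" property described above.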


August 27, 2010
 Hackers accidentally give Microsoft their code

Best security article this week, maybe from the past several months.  I can't figure out who is more complacent: developers or hackers.  Anyway, highlights below.

When hackers crash their systems while developing viruses, the code is often sent directly to Microsoft, according to one of its senior security architects, Rocky Heckman. When a hacker's system crashes in Windows, as with all typical Windows crashes, Heckman said the user would be prompted to send the error details — including the malicious code — to Microsoft. The funny thing is that many say yes, according to Heckman.

"People have sent us their virus code when they're trying to develop their virus and they keep crashing their systems," Heckman said. "It's amazing how much stuff we get."

At a Microsoft Tech.Ed 2010 conference session on hacking today, Heckman detailed to the delegates the top five hacking methods and the best methods for developers to avoid falling victim to them. Heckman explained how to create malicious code that could be used in cross-site scripting or SQL injection attacks and, although he said it "wasn't anything you couldn't pick up on the internet", he suggested delegates use the code responsibly to aid in their protection efforts.

According to Heckman, based on the number of attacks on Microsoft's website, the company was only too familiar with what types of attacks were most popular.

"The first thing [script kiddies] do is fire off all these attacks at," he said. "On average we get attacked between 7000 and 9000 times per second at," said the senior security architect.

"I think overall we've done pretty good, even when MafiaBoy took down half the internet, you know, Amazon and eBay and that, we didn't go down, we were still up."

Heckman said there were two reasons why the top hacking methods of cross-site scripting and SQL injection had not changed in the past six years.

"One, it tells me that the bad guys go with what they know, and two, it says the developers aren't listening," he said.

Heckman said that developers should consider all data input by a user as harmful until proven otherwise.
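Heckman's advice, treating all user input as harmful until proven otherwise, maps directly onto the standard defenses for the two attacks he names. A minimal sketch in Python using the stdlib `sqlite3` and `html` modules (the table and data are illustrative):

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # SQL injection defense: a parameterized query binds user input
    # as data; it is never concatenated into the SQL string.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

def render_greeting(name: str) -> str:
    # XSS defense: escape user input before embedding it in HTML output.
    return f"<p>Hello, {html.escape(name)}</p>"

# The classic injection string is treated as a literal name and matches nothing:
print(find_user("' OR '1'='1"))   # []
print(render_greeting("<script>alert(1)</script>"))
```

Both defenses follow the same principle: the input is never trusted to be benign; it is neutralized at the boundary where it meets the interpreter (SQL engine or browser).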


August 26, 2010
 Web app vulns now 50% of flaws

New IBM X-Force report is out

One interesting statement:  “For the first time in the report’s history, web application vulnerabilities have reached 50 per cent of all code flaws reported.”

This reminded me of a "back of the envelope" analysis Jeremiah did back in June, which is worth revisiting today, with 200 million plus websites:

...Even if we just focus on the 1.3 million websites serving up SSL certificates, the scale is still unbelievably massive. Whatever the metric, experienced industry experts and aggregated statistics reports agree, the vast majority of these websites are riddled with vulnerabilities. The exploitation of thousands of websites that is fueling headlines serves as a further proof point. To quantify vulnerabilities, let’s assume an average of six serious vulnerabilities per website, WhiteHat Security’s published figures based on our own Sentinel assessments. This totals 7.8 million custom Web application vulnerabilities in circulation. We just don’t know exactly where they are.

The next and equally important problem, fixing the code, is a seemingly insurmountable obstacle. Imagine an extremely limited number of application security pros to convince 17 million developers (some unknown portion being Web developers) to add to their workload, learn about defensive programming, and remediate all the vulnerable code. And by the way, this will be accomplished in small increments. Vendor-supplied patches have no place here. According to Gary McGraw’s (Cigital, CTO) BSIMM studies, observations from large-scale software security initiatives, a software security group (SSG) ideally should be 1% of the size of the development team. Given that baseline, we’d need 170,000 software security experts when I doubt if more than 5,000 currently exist.
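The arithmetic in Jeremiah's estimate is easy to verify. A quick sketch using the figures from the quote above:

```python
ssl_sites = 1_300_000     # websites serving up SSL certificates
vulns_per_site = 6        # WhiteHat Sentinel average, serious vulns per site
developers = 17_000_000   # worldwide developer estimate
ssg_ratio = 0.01          # BSIMM: software security group ~1% of dev team size

total_vulns = ssl_sites * vulns_per_site
experts_needed = int(developers * ssg_ratio)

print(total_vulns)      # 7800000 -> "7.8 million custom Web application vulnerabilities"
print(experts_needed)   # 170000  -> "we'd need 170,000 software security experts"
```

Against a supply of perhaps 5,000 existing experts, the 170,000 figure is what makes the remediation problem look insurmountable.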

The problem just got worse.


August 24, 2010
 Responsible Disclosure: RIP?

While Google and Microsoft are bashing each other over whether we should have a responsible/coordinated/full disclosure policy, it seems we might be heading toward 'no disclosure.'

RSnake (Robert Hansen) predicts that since the industry offers little encouragement and is often hostile toward the disclosing party, security researchers will probably pursue more profitable options (such as selling vulnerabilities to black hats) or abandon this field of research entirely, neither of which is a desirable outcome from a security perspective.

We can already see that attitude from ACROS, a security firm, in its response to the recent code-execution bug in Windows apps:

“I don’t know if you saw the draft of our new commercial disclosure policy, but we essentially gave up on alerting vendors for free. We’ve been providing free research to them for over 10 years and it hasn’t paid out well. What you’re seeing on Bugtraq now are the “remains of the old days,” so to speak :-) We’ve found better markets for this kind of information. To answer your specific question: no, we have not reported any issues in the products you mentioned – and have no intention to, should we come across one."  [Emphasis ours].


August 19, 2010
 Religion and Log Files

A customer sent an email with the following signature:

"In God I trust. For everyone else I keep log files."


 Intel + McAfee: what does it mean?

Oracle buys Secerno.  IBM buys Guardium.  Now, Intel buys McAfee.

The lesson is simple:  security has become a business process.  In the past, the objective of security was all about keeping the bad guys out while letting the good guys in.  However, with the rise of insider threats and as external hacking’s focus shifted to data theft, the objective of security professionals changed dramatically.  Data, and the transactions that move data, meant security teams had to deploy security as part of supply chains, online transactions and online collaboration among customers, employees, partners and social networks.

Today, cyber security can't be separated from business operations. For this reason, how security teams view and approach their roles has changed dramatically.  For example, in the past a CIO’s role was laptop distribution.  Today, CIOs build supply chains.  In the past, CISOs distributed anti-virus and set up firewalls.  Today, they must know where data resides, where it moves and how to protect it, which requires a serious, comprehensive data security practice.  This means security teams need to become business process experts to keep the bad guys disarmed while keeping the good guys productive.

With this is in mind, Intel's purchase makes a lot of sense.  Welcome to the party.


August 18, 2010
 HP picks up Fortify for ~$250M

HP buys Fortify and the rumor mill says it’s for ~$250M.  Fortify’s bookings in 2008 were $43M and not much more in 2009, so they got around a 5x multiple.  Fortify raised $40M in venture capital across five rounds of funding.

The main players in code analysis for security testing, and their 2009 bookings, are:

  • Fortify/HP:  $44M
  • IBM/Ounce:  $10M (est.)
  • Veracode:  $10M (est.)
  • Checkmarx:  revenues unknown
  • Armorize:  revenues unknown

Total market size:  about $100M.

The mistake Gartner and others have made is grouping in companies like Coverity and Klocwork, which account for almost $80M in revenue.  These companies aren’t true security companies: their business is finding software quality issues in C/C++ code for embedded systems.  Very little of their business is based on finding vulnerabilities in Java or .NET, the programming languages that comprise most of the applications hackers attack.  (Sure, they find buffer overflows, but those are far from the preponderance of security headaches these days.)

HP and Fortify have been negotiating for more than a year, starting just after Ounce Labs was bought by IBM.  In the case of Ounce Labs, it was a well-known “secret” that Ounce was dying and IBM picked the company up in a fire sale.  Fortify, although not profitable, had some cash on hand and wasn’t in a hurry, so the process got drawn out.  But Fortify had a business model problem:  to succeed, the python needed to swallow the pig.  In other words, Fortify needed a large deal or two every quarter to meet financial goals, such as the (rare) $7M US Air Force deal in 2008.  But finding the pig often proved as elusive as the truffle, which meant Fortify’s cash position could get tenuous.  It’s possible, though I don’t know, that Fortify missed a quarter or the current quarter looked bleak, accelerating the acquisition.  It’s also possible they landed a big pig that suddenly made them look more attractive.

When a big company buys a smaller one, many call it validation.  That is certainly true in this case, but the challenge of making code analysis a broadly accepted technology remains.  If you study code analysis for security a little more deeply, one thing becomes clear:  it’s a rich man’s sport.  Just look at the BSIMM project; all the companies used as prototypes are quite wealthy and can afford to deploy it across many development teams.  Not a single small or medium business makes the list.  Why?  Code analysis is hard to do and hard to deploy.  The challenge HP and IBM now face is building a market that appeals to a spectrum of companies large and small so that code analysis becomes embraced by the mainstream, much as network firewalls are used by everyone.


August 17, 2010
 LIVE WEBINAR: Five Signs Your File Data is at Risk

TOPIC: Five Signs Your File Data is at Risk
SPEAKER: Raphael Reich, Director File Security Solutions, Imperva
DATE: Wednesday, August 25, 2010
TIME: 11:00 AM (PDT) / 2:00 PM (EDT)
REGISTER: Click Here

Webinar: File Security

With unstructured data accounting for approximately 80% of business data, file security is essential.* Growing insider threats and regulatory compliance mandates make protecting unstructured data on file servers and Network Attached Storage devices a "must do" for every organization.

The first step in evaluating your company's file security is being able to answer the following five questions:

  • Who owns our file data?
  • Who is actually using our files?
  • Who has the potential to access our files?
  • Whose access rights should be revoked?
  • How do we know when access rights or activity violate corporate policy?

If your organization is unable to answer these five questions, your file data may be at risk. Join Raphael Reich, Director File Security Solutions, Imperva for further insight into each question, as well as tips for overcoming conventional file security limitations.

REGISTER: Click Here


August 11, 2010
 Spanish Password Security

In January of this year, Imperva’s ADC published a report on the most commonly used passwords.  Of the 32 million, nearly 2 million were in Spanish.  Agua Marketing—a firm that specializes in marketing to Spanish speakers—helped us analyze the list.  They identified the most common Spanish passwords and patterns.  If you read Spanish, you can access our report here.

Of the 32 million passwords, a significant portion, 1,830,196, were identified as Spanish.  This included passwords that could be bilingual, such as ‘chocolate’, which is spelled the same in English and Spanish, as well as universal sequences like ‘abc123’.  Purely Spanish words totaled 1,001,662, including all Spanish words, proper names and intentionally misspelled expressions.

The tricky part:  we have no way of knowing who was a native Spanish speaker.  Usernames would have helped, but they were unavailable.  However, to our knowledge, no one has ever had such a large pool of Spanish passwords to analyze.  The frequency of use of the passwords was very relevant: it gave us insight into the types of passwords selected by Spanish-speaking users.

Key findings?  Spanish speakers devised passwords based on:

  • Names of persons
  • Keyboard sequences
  • Favorite things such as movie characters, food, etc…
  • Terms of endearment
  • Computer terms
  • Religious terms
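A categorization like the one above can be sketched as a simple lookup against per-category word lists. The lists and sample passwords below are hypothetical placeholders, not data from the report:

```python
from collections import Counter

# Hypothetical category word lists -- stand-ins for the real analysis data.
CATEGORIES = {
    "names of persons": {"alejandra", "alberto", "daniel"},
    "keyboard sequences": {"abc123", "qwerty", "123456"},
    "terms of endearment": {"tequiero", "amorcito"},
    "religious terms": {"jesucristo", "angel"},
}

def classify(password: str) -> str:
    """Return the first category whose word list contains the password."""
    for category, words in CATEGORIES.items():
        if password.lower() in words:
            return category
    return "other"

def tally(passwords):
    """Count how many passwords fall into each category."""
    return Counter(classify(p) for p in passwords)

sample = ["alejandra", "abc123", "tequiero", "xk9#qz"]
print(tally(sample))
```

In practice the frequency of each password matters as much as its category, which is why the repeated use of the same passwords gave the analysis its signal.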




August 09, 2010
 New Report: WAF Market Expected to Grow 24% CAGR through 2013

New report from Technavio (sorry, you need to pay) highlights the growth in the WAF market.  

Summary: "As more applications are being added to the Web, the need for better security management arises. Hence, the Web Application Firewall (WAF) market is growing at a rapid rate."

Key highlights:

  • The market for Web Application Firewalls is forecast to reach $437.6 million in 2013, up from $181.8 million in 2009, growing at a CAGR of 24.6 percent over the period.
  • North America and Europe will see 24 percent CAGR through 2013, while APAC will grow at a 27 percent rate.
  • Market size in 2010: $234M worldwide.
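The headline figures are internally consistent; 2009 to 2013 spans four years of compounding. A quick check:

```python
start, end, years = 181.8, 437.6, 4   # $M, 2009 -> 2013

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                  # 24.6%, matching the report

# Implied 2010 market size at a flat CAGR:
print(f"{start * (1 + cagr):.0f}")    # ~226, in the ballpark of the reported $234M
```

The reported 2010 figure runs slightly ahead of a flat-CAGR projection, which is consistent with region-specific growth rates rather than a single uniform rate.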


